\section{PSPACE-hardness} \label{sec:pspace-hard}
Recall that a language is \emph{definite\xspace} if it is a finite Boolean combination of languages of the form $w\Sigma^*$, for $w \in \Sigma^*$.
In this section we prove the following result which, in particular, implies the lower bound of Theorem~\ref{thm:pspace-comp}:
\begin{theorem}\label{thm:pspacehard}
For every class $\mathcal{F}$ containing all definite\xspace languages,
the $\mathcal{F}${ }separability problem for languages of OCN\xspace is \textsc{PSpace}\xspace-hard.
\end{theorem}
A convenient \textsc{PSpace}\xspace-hard problem, to be reduced to $\mathcal{F}$ separability of OCN\xspace, can be extracted from~\cite{FJ15}.
Given an OCA\xspace $\mathcal{A}$ and $b\in \mathbb{N}$, the \emph{bounded} non-emptiness problem asks whether $\mathcal{A}$ accepts some word
by a \emph{$b$-bounded} run; a run is $b$-bounded if counter values along the run are at most $b$.
\begin{theorem}[\cite{FJ15}]
The bounded non-emptiness problem is \textsc{PSpace}\xspace-complete, for $\mathcal{A}$ and $b$ represented in binary.
\end{theorem}
A detailed analysis of the proof reveals that the problem remains \textsc{PSpace}\xspace-hard even if the input OCA\xspace
$\mathcal{A} = (Q, \alpha_0, \alpha_f, T, T_{=0})$ is assumed to be \emph{acyclic},
in the sense that there is no reachable configuration $\alpha$
with a non-empty
run
$\alpha \trans{} \alpha$. Observe that an acyclic OCA\xspace has no $b$-bounded run longer than $b |Q|$, a property which
will be crucial for the correctness of our reduction.
\begin{proposition} \label{prop:acyclic}
The bounded non-emptiness problem is \textsc{PSpace}\xspace-complete, for acyclic $\mathcal{A}$ and $b$ represented in binary.
\end{proposition}
We are now ready to prove Theorem~\ref{thm:pspacehard}, by reduction from bounded non-emptiness of acyclic OCA\xspace.
Given an acyclic OCA\xspace $\mathcal{A}= (Q, (q_0, 0), (q_f, 0), T, T_{=0})$ and $b\in\mathbb{N}$, we construct in polynomial time two OCN\xspace
$\mathcal{B}$ and $\mathcal{B}'$, with the following properties:
\begin{enumerate}
\item[(a)] if $\mathcal{A}$ has a $b$-bounded accepting run then $L(\mathcal{B}) \cap L(\mathcal{B}')\neq \emptyset$
(and thus $L(\mathcal{B})$ and $L(\mathcal{B}')$ are not $\mathcal{F}$ separable);
\item[(b)] if $\mathcal{A}$ has no $b$-bounded accepting run then $L(\mathcal{B})$ and $L(\mathcal{B}')$ are $\mathcal{F}$ separable.
\end{enumerate}
The two OCN\xspace $\mathcal{B}$ and $\mathcal{B}'$ will jointly simulate a $b$-bounded run of $\mathcal{A}$, obeying an invariant that
the counter value $v$ of $\mathcal{B}$ is the same as the counter value of $\mathcal{A}$, while the counter value of $\mathcal{B}'$ is $b-v$.
The actual input alphabet of $\mathcal{A}$ is irrelevant; as the input alphabet of $\mathcal{B}$ and $\mathcal{B}'$ we take $\Sigma = T \cup T_{=0}$.
The OCN\xspace $\mathcal{B}$ behaves essentially as $\mathcal{A}$, except that it always allows for a zero test.
Formally, $\mathcal{B}= (Q, (q_0, 0), (q_f, 0), U)$, where the transitions $U$ are defined as follows.
For every transition $t = (q, a, q', z) \in T$, there is a corresponding transition
$
(q, t, q', z) \in U.
$
Moreover, for every zero test $t = (q, a, q') \in T_{=0}$, there is a transition
$
(q, t, q', 0) \in U.
$
On the other hand, the OCN\xspace $\mathcal{B}'$ starts in the configuration $(q_0, b)$, ends in $(q_f, b)$,
and simulates the transitions of $\mathcal{A}$ but with the opposite effect.
Formally, $\mathcal{B}' = (Q \cup X, (q_0, b), (q_f, b), U')$, for $X$ a set of auxiliary states.
For every transition $t = (q, a, q', z) \in T$, there is a corresponding transition
$
(q, t, q', -z) \in U'
$
with the effect $-z$ opposite to the effect of $t$.
Moreover, for every zero test $t = (q, a, q') \in T_{=0}$, we include into $U'$ the following three transitions
\[
(q, \varepsilon, p, -b) \qquad
(p, \varepsilon, p', +b) \qquad
(p', t, q', 0),
\]
for some auxiliary states $p, p'$.
The aim of the first two transitions is to allow the last one only if the counter value is
at least $b$ (and thus exactly $b$, assuming there is also a run of $\mathcal{B}$ on the same input).
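The construction is purely syntactic, so it can be sketched in a few lines. The following Python fragment is our own illustration (not part of the paper; all function names are ours): it builds the transition sets $U$ and $U'$ from the transitions $T$, the zero tests $T_{=0}$, and the bound $b$.

```python
# Our own illustration of the reduction; all names are ours.  An OCA
# transition is a tuple (q, a, q2, z); a zero test is (q, a, q2).  The
# constructed OCN read the transition t itself as the input letter.

def build_B_transitions(T, T_zero):
    """Transitions U of B: behave as A, but zero tests are always allowed."""
    U = []
    for t in T:
        q, a, q2, z = t
        U.append((q, t, q2, z))         # same counter effect z
    for t in T_zero:
        q, a, q2 = t
        U.append((q, t, q2, 0))         # zero test becomes a no-op
    return U

def build_Bprime_transitions(T, T_zero, b):
    """Transitions U' of B': opposite counter effects; each zero test of A
    becomes the three-transition gadget -b, +b, 0 via fresh auxiliary states."""
    Uprime = []
    for t in T:
        q, a, q2, z = t
        Uprime.append((q, t, q2, -z))   # opposite counter effect
    for i, t in enumerate(T_zero):
        q, a, q2 = t
        p, p2 = ('p', i), ('pp', i)     # fresh auxiliary states
        Uprime.append((q, None, p, -b))    # None stands for epsilon
        Uprime.append((p, None, p2, +b))
        Uprime.append((p2, t, q2, 0))
    return Uprime
```

The gadget produced for a zero test is enabled only when the counter of $\mathcal{B}'$ is at least $b$, matching the construction above.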
We need to argue that the implications~(a) and~(b) hold.
The first one is immediate: every $b$-bounded accepting run of $\mathcal{A}$ is faithfully simulated by $\mathcal{B}$ and $\mathcal{B}'$, and thus
the languages $L(\mathcal{B})$ and $L(\mathcal{B}')$ have non-empty intersection.
For the implication~(b), suppose $\mathcal{A}$ has no $b$-bounded accepting run.
The first step is to notice that the languages $L(\mathcal{B})$ and $L(\mathcal{B}')$ are necessarily disjoint.
Indeed, any word $w \in L(\mathcal{B}) \cap L(\mathcal{B}')$ would describe a $b$-bounded accepting run of $\mathcal{A}$:
$\mathcal{B}$ ensures that the counter remains non-negative, while $\mathcal{B}'$ ensures that the counter does not increase beyond $b$
and that the zero tests are performed correctly.
Let $L$ contain all prefixes of words from $L(\mathcal{B})$, and likewise $L'$ for $L(\mathcal{B}')$.
Let $n = b |Q|$. Recall that due to acyclicity, $\mathcal{A}$ has no $b$-bounded run of length $n$ (in the sense of the number of transitions)
or longer.
Thus, for the same reason as above, the intersection $L \cap L'$ contains no word of length $n$ or longer.
In plain words, we are going to show that for a word of length $n$ or longer, it suffices to inspect its prefix of length $n$ in order
to decide whether the word may belong to $L(\mathcal{B})$ or to $L(\mathcal{B}')$.
We define a language $K\in\mathcal{F}$ as follows:
\[
K \ := \ \big(L(\mathcal{B}) \cap \Sigma^{< n}\big) \ \cup \ \bigcup_{w\in L, |w| = n} w \Sigma^*,
\]
where $\Sigma^{<n}$ stands for the set of all words over $\Sigma$ of length strictly smaller than $n$, and
$|w|$ denotes the length of $w$.
The language $K$ is definite\xspace, and hence belongs to $\mathcal{F}$: it is a finite union of languages of the form $w\Sigma^*$
and of singletons $\{w\}$, and every singleton $\{w\}$ is itself definite\xspace, due to
\[
\{w\} = w\Sigma^* - \bigcup_{a \in \Sigma} wa\Sigma^*.
\]
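As a quick finite sanity check (our own illustration, with names of our choosing), the identity above can be verified exhaustively on all short words over a small alphabet:

```python
# A finite check (our own illustration) of the identity above,
# over the alphabet Sigma = {a, b} and the word w = ab.
from itertools import product

Sigma = ['a', 'b']
w = 'ab'

def in_rhs(x):
    # x is in w Sigma^* minus the union of the wa Sigma^* over a in Sigma
    return x.startswith(w) and not any(x.startswith(w + a) for a in Sigma)

# on all words of length < 5, the right-hand side is exactly {w}
words = [''.join(p) for k in range(5) for p in product(Sigma, repeat=k)]
assert [x for x in words if in_rhs(x)] == [w]
```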
It remains to argue that $K$ separates $L(\mathcal{B})$ and $L(\mathcal{B}')$. By the very definition $L(\mathcal{B}) \subseteq K$,
as $K$ contains all words from $L(\mathcal{B})$ of length strictly smaller than $n$,
and all words starting with a prefix, of length $n$, of a word from $L(\mathcal{B})$.
For disjointness of $K$ and $L(\mathcal{B}')$, observe that the languages $L(\mathcal{B}) \cap \Sigma^{< n}$ and $L(\mathcal{B}')$ are disjoint, as already
$L(\mathcal{B})$ and $L(\mathcal{B}')$ are.
Moreover, for every $w\in L$ of length $|w| = n$, the languages $w\Sigma^*$ and $L(\mathcal{B}')$ are disjoint, as already
the intersection $L\cap L'$ contains no word of length $n$ or longer.
\begin{remark} \rm
The OCN\xspace $\mathcal{B}$ and $\mathcal{B}'$ used in the reduction can be easily made deterministic.
On the other hand, by a general result of~\cite{CCLP16} we learn that regular separability of nondeterministic OCN\xspace polynomially reduces to
regular separability of \emph{deterministic} OCN\xspace, making the latter \textsc{PSpace}\xspace-complete too.
\end{remark}
\section{Introduction}
We mainly focus on separability problems for languages of finite words.
We say that a language $K$ is \emph{separated from} another language $L$ by a language $S$,
if $K \subseteq S$ and $L \cap S = \emptyset$.
For two families of languages $\mathcal{F}$ and $\mathcal{G}$, the \emph{$\mathcal{F}${ }separability problem for $\mathcal{G}$}
asks, for two given languages $K, L \in \mathcal{G}$ over the same alphabet,
whether $K$ is separated from $L$ by some language from $\mathcal{F}$.
In this paper we mainly consider the separator class $\mathcal{F}$ of regular languages (thus using the term \emph{regular separability}).
As regular languages are closed under complement, $K$ is separated from $L$ by a regular language
if, and only if $L$ is separated from $K$ by a regular language. Therefore we shortly say that $K$ \emph{and} $L$ are \emph{regular separable}.
As the class $\mathcal{G}$ we consider the languages of \emph{one counter automata} (NFA extended with a non-negative counter that can be
incremented, decremented and tested for zero), or its subclass -- the languages of \emph{one counter nets} (one counter automata without zero tests).
\myparagraph{Motivation and context}
Separability is a classical problem in formal languages.
It was investigated most extensively for $\mathcal{G}$ the class of regular languages, and for $\mathcal{F}$ a suitable subclass thereof.
Since regular languages are effectively closed under complement, the $\mathcal{F}${ }separability problem
is in that case a generalization of the \emph{$\mathcal{F}${ }characterization problem},
which asks whether a given language belongs to $\mathcal{F}$:
indeed, $L \in \mathcal{F}$ if and only if $L$ is separated from its complement by some language from $\mathcal{F}$.
Separability problems for regular languages have been investigated for a long time, using
a generic connection between profinite semigroup theory and separability, established by Almeida~\cite{Almeida-pmd99}.
Recently the topic has attracted a lot of attention also outside the algebraic community,
which resulted in establishing the decidability of $\mathcal{F}${ }separability for, among others, the following families $\mathcal{F}$ of separators:
\begin{itemize}
\item the piecewise testable languages~\cite{DBLP:conf/icalp/CzerwinskiMM13,DBLP:conf/mfcs/PlaceRZ13},
\item the locally and locally threshold testable languages~\cite{DBLP:conf/fsttcs/PlaceRZ13},
\item the languages definable in first order logic~\cite{DBLP:journals/corr/PlaceZ14},
\item the languages of certain higher levels of the first order hierarchy~\cite{DBLP:conf/icalp/PlaceZ14}.
\end{itemize}
The first of these results has recently been generalized to finite ranked trees~\cite{DBLP:conf/icalp/Goubault-Larrecq16}.
Separability of non-regular languages has attracted little attention so far.
The reason for this may be twofold. First, for regular languages one can use standard algebraic tools, like syntactic monoids,
and indeed most of the results have been obtained using algebraic techniques.
Second, the few known negative results on separability of non-regular languages are strongly discouraging.
Indeed, strong intractability results have been known since the 1970s, when Szymanski and Williams proved
that regular{ }separability of context-free languages is undecidable~\cite{DBLP:journals/siamcomp/SzymanskiW76}.
Later Hunt~\cite{DBLP:journals/jacm/Hunt82a} strengthened this result: he showed that $\mathcal{F}${ }separability of context-free languages
is undecidable for every class $\mathcal{F}$ containing all \emph{definite\xspace} languages,
i.e., finite Boolean combinations of languages of the form $w\Sigma^*$ for $w \in \Sigma^*$.
This is a very weak condition, hence Hunt's result suggested
that, with respect to separability problems, nothing nontrivial can be achieved beyond regular languages.
Furthermore, Kopczy\'{n}ski has recently shown that regular{ }separability is undecidable
even for languages of visibly pushdown automata~\cite{Kopczynski16},
thus strengthening the result by Szymanski and Williams once more.
On the positive side, piecewise testable{ }separability has been shown decidable
for context-free languages, languages of vector addition systems with states (\vass languages\xspace),
and some other classes of languages~\cite{DBLP:conf/fct/CzerwinskiMRZ15}.
This inspired us to start a quest for decidable cases beyond regular languages.
Once beyond regular languages, the regular{ }separability problem seems to be the most intriguing.
\vass languages\xspace form a well-known class for which the decidability status of the regular{ }separability problem is unknown.
A few positive results related to this problem have however been obtained recently.
First, decidability of unary (and modular) separability of reachability sets\footnote{Note that these are sets of vectors, not words.}
of VASS\xspace was shown in~\cite{CCLP17};
the problem is actually equivalent to regular separability of commutative closures of \vass languages\xspace.
Second, decidability of regular{ }separability of languages of Parikh automata was shown recently in~\cite{CCLP16}.
Parikh automata recognize exactly the same languages as \emph{integer-VASS\xspace} (a variant of VASS\xspace where one allows
negative counter values~\cite{KR03,CFM11}), and therefore are a subclass of \vass languages\xspace.
The open decidability status of regular separability of \vass languages\xspace is our main motivation in this paper.
A more general goal is to understand for which classes of languages the regular{ }separability problem is decidable.
\myparagraph{Our contribution}
We consider the regular{ }separability problem for languages of one counter automata (with zero test)
and its subclass, namely one counter nets (without zero test); the latter model is exactly VASS\xspace in dimension 1.
We call the two models OCA\xspace and OCN\xspace for short, respectively.
Our main result is decidability of the regular{ }separability problem for languages of one counter nets.
Moreover, we determine the exact complexity of the problem, namely \textsc{PSpace}\xspace-completeness.
For complexity estimations we assume a standard encoding of OCA\xspace (or OCN\xspace) and their configurations;
in particular we assume binary encoding of integers appearing in the input.
\begin{theorem}\label{thm:pspace-comp}
Regular{ }separability of languages of OCN\xspace is \textsc{PSpace}\xspace-complete.
\end{theorem}
Our approach to prove decidability is by \emph{regular over-approximation}: for every OCN\xspace language $L$
there is a decreasing sequence of (computable) regular languages over-approximating $L$,
such that two OCN\xspace languages are regular separable if, and only if some pair of their approximants is disjoint.
Furthermore, the latter condition can be reduced to a kind of reachability property of the cross-product
of two OCN\xspace, and effectively checked in \textsc{PSpace}\xspace by exploiting effective semi-linearity of the reachability set of
the cross-product.
Our \textsc{PSpace}\xspace lower bound builds on \textsc{PSpace}\xspace-hardness of bounded non-emptiness of OCA\xspace~\cite{FJ15}.
It is interesting to compare the regular{ }separability problem with the regularity problem,
which asks whether a given language is regular.
For every class $\mathcal{G}$ effectively closed under complement, regular{ }separability is a generalization of regularity,
as $L$ is regular if, and only if $L$ and its complement $\bar{L}$ are regular{ }separable.
It turns out however that regularity of OCN\xspace languages cannot be reduced to regular separability:
while we prove regular{ }separability decidable, the regularity problem is undecidable
for OCN\xspace languages~\cite{ValkVidal81,BensUndecidability}.
As our second main contribution, we show that adding zero tests leads to undecidability,
for any separator language class containing all definite\xspace languages.
In particular, the regular languages are an example of such a class.
\begin{theorem}\label{thm:undecidability}
For every language class $\mathcal{F}$ containing all definite\xspace languages,
the $\mathcal{F}${ }separability problem for languages of OCA\xspace is undecidable.
\end{theorem}
Our argument is inspired by the undecidability proof by Hunt~\cite{DBLP:journals/jacm/Hunt82a}: we show, roughly speaking, that
every decidable problem reduces in polynomial time to the separability problem for OCA\xspace.
\myparagraph{Organization}
In Section~\ref{sec:oca} we define the models of OCA\xspace and OCN\xspace,
then Sections~\ref{sec:approx}--\ref{sec:pspace-hard} are devoted to the proof of Theorem~\ref{thm:pspace-comp},
and finally Section~\ref{sec:undecid} contains the proof of Theorem~\ref{thm:undecidability}.
The proof of Theorem~\ref{thm:pspace-comp} is factorized as follows:
in Section~\ref{sec:approx} we introduce the regular over-approximation of OCN\xspace languages,
in Section~\ref{sec:in-pspace} we provide a \textsc{PSpace}\xspace procedure for testing the disjointness property of approximants, as discussed above,
and in Section~\ref{sec:pspace-hard} we give a \textsc{PSpace}\xspace lower bound.
The final Section~\ref{sec:remarks} contains some concluding remarks, including a discussion of undecidability of the regularity problem for OCN\xspace.
\section{Regular over-approximation of OCN\xspace} \label{sec:approx}
For an OCN\xspace $\mathcal{A}$ and $n > 0$, we are going to define an NFA $\mathcal{A}_n$ which we call the \emph{$n$-approximation} of $\mathcal{A}$.
As long as the counter value is below $n$, the automaton $\mathcal{A}_n$ stores this value exactly
(we say then that $\mathcal{A}_n$ is in \emph{low} mode);
if the counter value exceeds $n$, the automaton $\mathcal{A}_n$ only stores the remainder of the counter value modulo $n$
(we say then that $\mathcal{A}_n$ is in \emph{high} mode).
Thus $\mathcal{A}_n$ can pass from low to high mode; it can also nondeterministically decide to pass the other way, from high back to low mode.
Let $Q$ be the state space of $\mathcal{A}$, and let $(q_0, 0)$ and $(q_f, 0)$ be its initial and final configurations.
As the state space of $\mathcal{A}_n$ we take the set
\[
Q_n = Q \times \{0, \ldots, n-1\} \times \{\textsc{low}, \textsc{high}\}.
\]
The initial and final state of $\mathcal{A}_n$ are $(q_0, 0, \textsc{low})$ and $(q_f, 0, \textsc{low})$, respectively.
Every transition $(q, a, q', z)$ of $\mathcal{A}$ induces a number of transitions of $\mathcal{A}_n$, as defined below
(for any $c$ satisfying $0 \leq c < n$):
\begin{align*}
& \big( (q, c, \textsc{low}), a, (q', c{+}z, \textsc{low}) \big) && \text{if } 0 \leq c{+}z < n \\
& \big( (q, c, \textsc{low}), a, (q', (c{+}z) \modulo{n}, \textsc{high}) \big) && \text{if } n \leq c{+}z \\
& \big( (q, c, \textsc{high}), a, (q', (c{+}z) \modulo{n}, \textsc{low}) \big) && \text{if } c{+}z < 0 \\
& \big( (q, c, \textsc{high}), a, (q', (c{+}z) \modulo{n}, \textsc{high}) \big).
\end{align*}
Note that passing from high to low mode is only possible if applying the update to the stored value yields a negative result,
i.e., $c{+}z < 0$; in particular, this requires $z < 0$.
\begin{example}
Recall the languages $K$ and $L$ from Example~\ref{ex:ocnsep}, and consider an OCN\xspace $\mathcal{A}$ recognizing $K$ that has two states $q_0$, $q_f$,
and three transitions:
\begin{align*}
& (q_0, a, q_0, +1) \\
& (q_0, \varepsilon, q_f, 0) \\
& (q_f, b, q_f, -1).
\end{align*}
The 2-approximating automaton $\mathcal{A}_2$ has 8 states $\set{q_0, q_f}\times\set{0, 1}\times\set{\textsc{low}, \textsc{high}}$.
In state $(q_0, 1, \textsc{low})$ on letter $a$, the automaton is forced to change the mode to $\textsc{high}$; symmetrically,
in state $(q_f, 0, \textsc{high})$ on letter $b$, the automaton can change its mode back to $\textsc{low}$:
\begin{align*}
\big( (q_0, 1, \textsc{low}), a, (q_0, 0, \textsc{high}) \big) \\
\big( (q_f, 0, \textsc{high}), b, (q_f, 1, \textsc{low}) \big).
\end{align*}
Otherwise, the mode is preserved by transitions; for instance, in high mode the automaton flips the stored value modulo 2:
for $(q, x) \in \set{(q_0, a), (q_f, b)}$ and $c\in\set{0,1}$, there is a transition
\begin{align*}
\big( (q, c, \textsc{high}), x, (q, 1-c, \textsc{high}) \big).
\end{align*}
The language recognized by $\mathcal{A}_2$ is $$\setof{a^n b^m}{(n = m < 2) \vee (n, m \geq 2 \wedge n \equiv m \modulo{2})}.$$
\end{example}
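The construction of $\mathcal{A}_n$ and the example above can be replayed mechanically. The following sketch is our own illustration (not from the paper; all names are ours): it generates the transitions of $\mathcal{A}_n$ following the four rules, and decides membership by standard NFA reachability with $\varepsilon$-closure.

```python
# Our own illustration of the n-approximation; names are ours.  An OCN
# transition is a tuple (q, a, q2, z): source state, letter ('' = epsilon),
# target state, counter update.

def approx_transitions(T, n):
    """Transitions of the NFA A_n, following the four rules above."""
    trans = []
    for (q, a, q2, z) in T:
        for c in range(n):
            if 0 <= c + z < n:
                trans.append(((q, c, 'low'), a, (q2, c + z, 'low')))
            if c + z >= n:
                trans.append(((q, c, 'low'), a, (q2, (c + z) % n, 'high')))
            if c + z < 0:
                trans.append(((q, c, 'high'), a, (q2, (c + z) % n, 'low')))
            trans.append(((q, c, 'high'), a, (q2, (c + z) % n, 'high')))
    return trans

def accepts(trans, init, final, word):
    """Standard NFA membership test with epsilon-closure ('' labels)."""
    def closure(states):
        states, stack = set(states), list(states)
        while stack:
            s = stack.pop()
            for (u, a, v) in trans:
                if u == s and a == '' and v not in states:
                    states.add(v)
                    stack.append(v)
        return states

    current = closure({init})
    for letter in word:
        current = closure({v for (u, a, v) in trans
                           if u in current and a == letter})
    return final in current

# The OCN of the example, recognizing { a^n b^n }
T = [('q0', 'a', 'q0', 1), ('q0', '', 'qf', 0), ('qf', 'b', 'qf', -1)]
A2 = approx_transitions(T, 2)
init, final = ('q0', 0, 'low'), ('qf', 0, 'low')
```

Running `accepts(A2, init, final, w)` agrees with the language stated in the example: for instance, $ab$ and $a^2b^4$ are accepted, while $a^2b^3$ is not.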
According to the definition above, the automaton $\mathcal{A}_n$ can oscillate between low and high mode arbitrarily many times.
Actually, as we argue below, it is enough to allow for at most one oscillation.
\begin{proposition} \label{prop:high}
For every run of $\mathcal{A}_n$ between two states
in high mode,
there is a run over the same word between the same states which never exits the high mode.
\end{proposition}
\begin{proof}
Indeed, observe that if $\mathcal{A}_n$ has any of the following transitions
\begin{align*}
& \big( (q, m, \textsc{low}), a, (q', m', \textsc{low}) \big) \\
& \big( (q, m, \textsc{low}), a, (q', m', \textsc{high}) \big) \\
& \big( (q, m, \textsc{high}), a, (q', m', \textsc{low}) \big)
\end{align*}
then $\mathcal{A}_n$ necessarily has also the transition
\begin{align*}
\big( (q, m, \textsc{high}), a, (q', m' \modulo{n}, \textsc{high}) \big).
\end{align*}
Thus every run oscillating through high and low modes that starts and ends in high mode can be simulated by one that never exits high mode.
\end{proof}
We call a run of an OCN\xspace $\mathcal{A}$ \emph{$n$-low} if the counter value is strictly below $n$ in all configurations of the run.
Proposition~\ref{prop:appr-char} below characterizes the language of $\mathcal{A}_n$ in terms of runs of $\mathcal{A}$, and will be useful
for proving the Approximation Lemma below.
Then Corollary~\ref{cor:appr-prop}, its direct consequence, summarizes some properties of approximation
useful in the sequel.
\begin{proposition} \label{prop:appr-char}
Let $\mathcal{A} = (Q, (q_0, 0), (q_f ,0), T)$ be an OCN\xspace, and let $n > 0$.
Then $w \in L(\mathcal{A}_n)$ iff
\begin{enumerate}
\item[(a)] either $\mathcal{A}$ has an $n$-low run over $w$,
\item[(b)] or $w$ factorizes into $w = w_\textsc{pref} w_\textsc{mid} w_\textsc{suff}$, such that $\mathcal{A}$ has the following runs
\begin{align}
\begin{aligned} \label{eq:threeruns}
& (q_0, 0) \trans{w_\textsc{pref}} (q, n+d) \\
& (q, c n+d) \trans{w_\textsc{mid}} (q', c' n+d') \\
& (q', n+d') \trans{w_\textsc{suff}} (q_f, 0),
\end{aligned}
\end{align}
for some states $q, q' \in Q$ and natural numbers $c, c' \geq 1$ and $d, d' \geq 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
We start with the `if' direction. If there is an $n$-low run over $w$ in $\mathcal{A}$ then clearly $w \in L(\mathcal{A}_n)$.
Otherwise, suppose that $w = w_\textsc{pref} w_\textsc{mid} w_\textsc{suff}$ and the words $w_\textsc{pref}$, $w_\textsc{mid}$ and $w_\textsc{suff}$ admit the runs
as stated in~\eqref{eq:threeruns} above.
Then clearly $\mathcal{A}_n$ admits the following runs:
\begin{align*}
& (q_0, 0, \textsc{low}) \trans{w_\textsc{pref}} (q, d \modulo{n}, \textsc{high}) \\
& (q, d \modulo{n}, \textsc{high}) \trans{w_\textsc{mid}} (q', d' \modulo{n}, \textsc{high}) \\
& (q', d' \modulo{n}, \textsc{high}) \trans{w_\textsc{suff}} (q_f, 0, \textsc{low})
\end{align*}
and thus $(q_0, 0) \trans{w} (q_f, 0)$ in $\mathcal{A}_n$ as required.
For the `only if' direction, suppose $w \in L(\mathcal{A}_n)$.
If $\mathcal{A}_n$ has a run over $w$ that never exits low mode, then clearly $\mathcal{A}$ has an $n$-low run over $w$.
Otherwise, consider any run of $\mathcal{A}_n$ over $w$. Distinguish the first and the last configuration in high mode along this run,
say $(q, d, \textsc{high})$ and $(q', d', \textsc{high})$.
The two configurations determine a factorization of the word $w$ into three parts $w = w_\textsc{pref} w_\textsc{mid} w_\textsc{suff}$ such that
$\mathcal{A}_n$ admits the following runs:
\begin{align*}
& (q_0, 0, \textsc{low}) \trans{w_\textsc{pref}} (q, d, \textsc{high}) \\
& (q, d, \textsc{high}) \trans{w_\textsc{mid}} (q', d', \textsc{high}) \\
& (q', d', \textsc{high}) \trans{w_\textsc{suff}} (q_f, 0, \textsc{low}).
\end{align*}
The first and the last of these runs imply the existence of the first and the last runs in~\eqref{eq:threeruns}, respectively.
For the middle one, by Proposition~\ref{prop:high} we may assume w.l.o.g.\ that the run never exits high mode,
which immediately implies the existence of the middle run in~\eqref{eq:threeruns}.
\end{proof}
\begin{corollary} \label{cor:appr-prop}
Let $\mathcal{A}$ be an OCN\xspace and let $m, n > 0$.
Then
\begin{enumerate}
\item[(a)] $L(\mathcal{A}) \subseteq L(\mathcal{A}_n)$,
\item[(b)] $L(\mathcal{A}_n) \subseteq L(\mathcal{A}_m)$ if $m \mid n$.
\end{enumerate}
\end{corollary}
\begin{proof}
The first inclusion follows easily by the characterization of Proposition~\ref{prop:appr-char}.
The second one follows directly from the definition of $n$-approximation.
\end{proof}
Now we state and prove the Approximation Lemma, which is the crucial property of approximation.
In the sequel we will strongly rely on direct consequences of this lemma, formulated as
Corollaries~\ref{cor:apprlemma} and~\ref{cor:appr} below.
\begin{lemma}[Approximation Lemma] \label{lem:appr}
For an OCN\xspace $\mathcal{A}$, the following conditions are equivalent:
\begin{enumerate}
\item[(a)] $L(\mathcal{A})$ is empty,
\item[(b)] $L(\mathcal{A}_n)$ is empty, for some $n > 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
Clearly (b) implies (a), by Corollary~\ref{cor:appr-prop}(a).
In order to prove that (a) implies (b), fix $\mathcal{A} = (Q, (q_0, 0), (q_f, 0), T)$ and
suppose that the languages $L(\mathcal{A}_n)$ are \emph{non-empty}
for \emph{all} $n > 0$; our aim is to show that $L(\mathcal{A})$ is non-empty as well.
In fact we will not need the non-emptiness assumption for \emph{all} $n$;
it will suffice to use it for one fixed $n$, computed as follows.
Let $|Q|$ be the number of states of $\mathcal{A}$ and $d_\mathcal{A}$ be the maximal absolute value of
integer constants appearing in transitions $T$ of $\mathcal{A}$.
Then let $K = |Q| \cdot d_\mathcal{A}$, and let $n = K!$ (the factorial of $K$).
Let $w$ be a fixed word that belongs to $L(\mathcal{A}_n)$.
Our aim is to produce a word $w'$ that belongs to $L(\mathcal{A})$ by pumping inside the word $w$;
the pumping will turn a run of $\mathcal{A}_n$ into a correct run of $\mathcal{A}$.
As $w\in L(\mathcal{A}_n)$, by Proposition~\ref{prop:appr-char} we learn that $w$ satisfies
one of conditions (a), (b). If $w$ satisfies (a) then $w' = w\in L(\mathcal{A})$ as required.
We thus concentrate, from now on, on the case when $w$ satisfies condition (b) in Proposition~\ref{prop:appr-char}.
Let us focus on the first run of $\mathcal{A}$ in~\eqref{eq:threeruns} (fixed from now on), namely
\[
(q_0, 0) \trans{w_\textsc{pref}} (q, n+d),
\]
for some prefix $w_\textsc{pref}$ of $w$ and $d \geq 0$.
This run starts with the counter value $0$, and ends with the counter value at least $n$.
We are going to analyze closely the prefix of the run that ends immediately before the counter value exceeds $K$
for the first time; denote this prefix by $\rho$.
We call a configuration $(q, m)$ in $\rho$ \emph{latest} if the counter value stays strictly above
$m$ in all the following configurations of $\rho$; in other words, a latest configuration is the last one in $\rho$
whose counter value is at most $m$.
A crucial but easy observation is that the difference of counter values of two consecutive latest configurations is at most $d_\mathcal{A}$.
Therefore, as $K$ has been chosen large enough, $\rho$ must contain more than $|Q|$ latest configurations.
By the pigeonhole principle, some state of $\mathcal{A}$, say $q$, appears in at least two latest configurations.
In consequence, for some infix $v$ of $w_\textsc{pref}$, the OCN\xspace $\mathcal{A}$ has a run over $v$ of the form
\[
(q, m) \trans{v} (q, m'), \quad \text{ for some } m < m' \leq m+K.
\]
As a consequence, the word $v$ can be repeated an arbitrary number of times,
preserving correctness of the run but increasing the final counter value.
Recall that the final counter value of the run over $w_\textsc{pref}$ is $n+d$, while we would like to achieve
$c n + d$ (for $c$ as in Proposition~\ref{prop:appr-char}). Modify the word $w_\textsc{pref}$ by inserting
$(c-1) \cdot n / (m' -m)$ additional repetitions of the word $v$; this number is an integer, as $m' - m \leq K$ divides $n = K!$.
We thus obtain a new word $w'_\textsc{pref}$ such that $\mathcal{A}$ has a run
\begin{align} \label{eq:runpref}
(q_0, 0) \trans{w'_\textsc{pref}} (q, cn + d).
\end{align}
In exactly the same way we modify the suffix $w_\textsc{suff}$ of $w$, thus obtaining a word $w'_\textsc{suff}$
over which the OCN\xspace $\mathcal{A}$ has a run
\begin{align} \label{eq:runsuff}
(q', c' n + d') \trans{w'_\textsc{suff}} (q_f, 0).
\end{align}
By concatenation we obtain a word $w' = w'_\textsc{pref} w_\textsc{mid} w'_\textsc{suff}$ which is
accepted by $\mathcal{A}$, by composition of the run~\eqref{eq:runpref}, the middle run in~\eqref{eq:threeruns}, and the run~\eqref{eq:runsuff}.
Thus $L(\mathcal{A})$ is non-empty, as required.
\end{proof}
As {OCN\xspace}s are closed under products with finite automata and these products commute with $n$-approximations, we get:
\begin{restatable}{corollary}{CorApprLemma} \label{cor:apprlemma}
For an OCN\xspace $\mathcal{A}$ and a regular language $R$, the following conditions are equivalent:
\begin{enumerate}
\item[(a)] $L(\mathcal{A})$ and $R$ are disjoint,
\item[(b)] $L(\mathcal{A}_n)$ and $R$ are disjoint, for some $n > 0$.
\end{enumerate}
\end{restatable}
\begin{proof}
Fix an OCN\xspace $\mathcal{A} = (Q, (q_0, 0), (q_f, 0), T)$ and an NFA $\mathcal{B} = (P, p_0, p_f, U)$ recognizing the language $R$.
For convenience we assume here that $\mathcal{A}$ has an $\varepsilon$-transition of the form $(q, \varepsilon, q, 0)$ in every state $q\in Q$,
and that $\mathcal{B}$ has a self-loop $\varepsilon$-transition $(p, \varepsilon, p)$ in every state $p\in P$.
We will use the synchronized product $\mathcal{A} \times \mathcal{B}$ of OCN\xspace $\mathcal{A}$ and NFA $\mathcal{B}$, which is the OCN\xspace defined
by
\[
\mathcal{A} \times \mathcal{B} \ = \ \big(Q\times P, ((q_0, p_0), 0), ((q_f, p_f), 0), V\big),
\]
where $\big((q, p), a, (q', p'), z\big) \in V$ if, and only if $(q, a, q', z) \in T$ and $(p, a, p') \in U$.
Observe that the synchronized product construction commutes with $n$-approximation: up to isomorphism of finite automata,
\begin{align} \label{eq:crossprodn}
(\mathcal{A} \times \mathcal{B})_n \ = \ \mathcal{A}_n \times \mathcal{B}.
\end{align}
Condition (a) in Corollary~\ref{cor:apprlemma} is equivalent to emptiness of the product $\mathcal{A}\times\mathcal{B}$ which,
by the Approximation Lemma applied to $\mathcal{A}\times \mathcal{B}$, is equivalent to emptiness of the \emph{left} automaton in~\eqref{eq:crossprodn}, for some $n$.
Therefore condition (a) is also equivalent to emptiness of the \emph{right} automaton in~\eqref{eq:crossprodn}, for some $n$.
Finally, the latter condition is equivalent to condition (b).
\end{proof}
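For concreteness, the synchronized product used in the proof admits a one-line sketch (our own illustration; the function name is ours):

```python
def product_transitions(T, U):
    """V of A x B: pair up OCN and NFA transitions reading the same letter.

    An OCN transition is (q, a, q2, z); an NFA transition is (p, a, p2).
    """
    return [((q, p), a, (q2, p2), z)
            for (q, a, q2, z) in T        # OCN transitions of A
            for (p, b, p2) in U           # NFA transitions of B
            if a == b]                    # synchronize on the letter
```

Together with the assumed $\varepsilon$-self-loops, this also lets one automaton move on $\varepsilon$ while the other stays put.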
\begin{restatable}{corollary}{CorAppr} \label{cor:appr}
For two OCN\xspace $\mathcal{A}$ and $\mathcal{B}$, the following conditions are equivalent:
\begin{enumerate}
\item[(a)] $L(\mathcal{A})$ and $L(\mathcal{B})$ are regular separable,
\item[(b)] $L(\mathcal{A}_n)$ and $L(\mathcal{B})$ are disjoint, for some $n > 0$,
\item[(c)] $L(\mathcal{A}_n)$ and $L(\mathcal{B}_n)$ are disjoint, for some $n > 0$.
\end{enumerate}
\end{restatable}
\begin{proof}
In order to prove that (a) implies (b), suppose that a regular language $R$ separates $L(\mathcal{B})$ from $L(\mathcal{A})$, i.e., $R$ includes $L(\mathcal{B})$ and
is disjoint from $L(\mathcal{A})$. By Corollary~\ref{cor:apprlemma} we learn that for some $n > 0$, the languages $R$ and $L(\mathcal{A}_n)$ are disjoint. Thus necessarily
$L(\mathcal{B})$ and $L(\mathcal{A}_n)$ are disjoint too.
To show that (b) implies (c) use Corollary~\ref{cor:apprlemma} for OCN\xspace $\mathcal{B}$ and regular language $L(\mathcal{A}_n)$.
We get that there exists $m > 0$ such that $L(\mathcal{B}_m)$ and $L(\mathcal{A}_n)$ are disjoint. Then
using Corollary~\ref{cor:appr-prop}(b) we have that $L(\mathcal{A}_{nm})$ and $L(\mathcal{B}_{nm})$ are disjoint as well.
Finally, (c) easily implies (a), as any of the regular languages $L(\mathcal{A}_n)$, $L(\mathcal{B}_n)$ can serve as a separator
(Corollary~\ref{cor:appr-prop}(a) is used here).
\end{proof}
Our decision procedure for OCN\xspace, to be presented in the next section,
will test condition (b) in Corollary~\ref{cor:appr}.
\begin{remark} \label{rem:oca} \rm
Interestingly, exactly the same notion of approximation can be defined for OCA\xspace as well.
Even if Propositions~\ref{prop:high} and~\ref{prop:appr-char} are no longer valid for OCA\xspace,
all other facts proved in this section still hold for this more general model,
in particular the Approximation Lemma and Corollaries~\ref{cor:apprlemma} and~\ref{cor:appr}.
Confronting this with undecidability of regular separability for OCA\xspace (which we prove in Section~\ref{sec:undecid}) leads to the conclusion that
the characterizations of Corollary~\ref{cor:appr} are not effectively testable in the case of OCA\xspace, while they are in the case of OCN\xspace.
\end{remark}
\section*{Acknowledgment}
We thank James Worrell for allowing us to include his undecidability proof of OCN\xspace regularity~\cite{BensUndecidability}\footnote{
Found out at \emph{Autob{\'o}z'16}, the annual research camp of Warsaw automata group and friends.},
and Christoph Haase, Mahsa Shirmohammadi, Patrick Totzke and Georg Zetzsche for fruitful discussion and valuable suggestions.
We are also grateful to the reviewers for their valuable comments.
\bibliographystyle{plain}
\section{Undecidability for one counter automata} \label{sec:undecid}
In this section we prove Theorem~\ref{thm:undecidability}.
The argument is similar to the proof of the previous section, except that instead of reducing from a fixed undecidable problem,
we provide a polynomial reduction from \emph{every} decidable one.
This idea derives from the insight of~\cite{DBLP:journals/jacm/Hunt82a}.
A universal model of computation that will be convenient for us is 2-counter machines.
A deterministic 2-counter machine $\mathcal{M}$ consists of a finite set of \emph{states} $Q$ with distinguished \emph{initial} state $q_0 \in Q$,
\emph{accepting} state $q_\textup{acc} \in Q$ and \emph{rejecting} state $q_\textup{rej} \in Q$,
two \emph{counters} $c_1, c_2$,
and a set of transitions, one per state $q\in Q - \set{q_\textup{acc}, q_\textup{rej}}$.
Thus the accepting state and the rejecting one have no outgoing transitions.
There are two types of transitions. Type 1 transitions increment one of the counters ($i \in \set{1, 2}$):
\begin{enumerate}
\item[(1)] in state $q$, increment $c_i$ and go to state $q'$;
\end{enumerate}
and type 2 transitions conditionally decrement one:
\begin{enumerate}
\item[(2)] in state $q$, if $c_i>0$ then decrement $c_i$ and go to state $q'$, otherwise go to state $q''$.
\end{enumerate}
A configuration $(q, n_1, n_2)$ of $\mathcal{M}$ consists of a state $q$ and values $n_1, n_2 \geq 0$ of the counters.
We write $(q, n_1, n_2) \trans{} (q', n'_1, n'_2)$ if a sequence of transitions leads from configuration $(q, n_1, n_2)$ to
$(q', n'_1, n'_2)$.
We say that $\mathcal{M}$ \emph{accepts} a number $k \in \mathbb{N}$ if $(q_0, k, 0) \trans{} (q_\textup{acc}, 0, 0)$,
and \emph{rejects} $k$ if $(q_0, k, 0) \trans{} (q_\textup{rej}, 0, 0)$.
Note our specific requirement that acceptance or rejection only happens with both counter values equal to 0.
The machine $\mathcal{M}$ is \emph{total} if every $k \in \mathbb{N}$ is either accepted or rejected by $\mathcal{M}$.
The language $L(\mathcal{M})$ recognized by $\mathcal{M}$ is the set of all numbers accepted by $\mathcal{M}$.
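The semantics just defined can be illustrated by a minimal Python sketch (ours, not part of the formal development); the encoding of transitions as a dictionary, the function name, and the example machine are all assumptions made for illustration only.

```python
def run_2cm(transitions, q0, q_acc, q_rej, k, max_steps=10**6):
    """Run a deterministic 2-counter machine on input k.

    `transitions` maps a state q to either
      ("inc", i, q')       -- type 1: increment counter i, go to q'
      ("dec", i, q', q'')  -- type 2: if c_i > 0 decrement and go to q',
                              otherwise go to q''
    (counters are indexed 0 and 1 here, standing for c_1 and c_2).
    As in the text, acceptance and rejection require both counters to be 0.
    """
    q, c = q0, [k, 0]
    for _ in range(max_steps):
        if q == q_acc:
            return c == [0, 0]      # accept only with both counters at 0
        if q == q_rej:
            return False
        t = transitions[q]
        if t[0] == "inc":
            _, i, q = t
            c[i] += 1
        else:
            _, i, q1, q2 = t
            if c[i] > 0:
                c[i] -= 1
                q = q1
            else:
                q = q2
    raise RuntimeError("step budget exceeded")

# A tiny machine accepting exactly the even numbers:
# repeatedly decrement c_1, alternating states; accept when it hits 0
# in state "start", reject when it hits 0 in state "odd".
M = {
    "start": ("dec", 0, "odd", "acc"),
    "odd":   ("dec", 0, "start", "rej"),
}
```

For instance, `run_2cm(M, "start", "acc", "rej", 4)` accepts, while input 3 is rejected.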
Every decidable language, say over the alphabet $\set{0, 1}$, is recognized by some total, deterministic
2-counter machine, under a suitable encoding.
Indeed, every word $w \in \set{0, 1}^*$ can be encoded, using binary representation, as a natural number $n(w)$.
It is quite standard to show that then for every total deterministic Turing machine $\mathcal{T}$,
there is a total deterministic 2-counter machine $\mathcal{M}$ such that
$w \in L(\mathcal{T})$ if, and only if $2^{n(w)} \in L(\mathcal{M})$\footnote{
The exponent arises from the standard simulation of a Turing machine by a 3-counter machine;
the latter is further simulated by a 2-counter machine which stores the values of the 3 counters $c, d, e$
in the form $2^c 3^d 5^e$.}.
Thus, modulo the encoding, decidable languages are a subclass of (in fact, the same class as)
subsets $L \subseteq \mathbb{N}$ of natural numbers recognized by total deterministic 2-counter machines.
These subsets $L\subseteq \mathbb{N}$ we call below \emph{decidable problems}.
Let $\mathcal{F}$ be a class of languages containing all definite\xspace languages.
We are going to show a polynomial time reduction from any decidable problem $L \subseteq \mathbb{N}$ to
$\mathcal{F}${ }separability of OCA\xspace languages.
This implies undecidability of the latter problem.
Indeed, decidability of $\mathcal{F}${ }separability of OCA\xspace languages, say in time
$f(n)$ where $n$ is the size of the input,
would imply that every decidable problem $L \subseteq \mathbb{N}$ is actually decidable in time
$f(p(n))$ for some polynomial $p$, thus contradicting the time hierarchy theorem
(see for instance Thm.~9.10 in~\cite{Sipserbook}; one can assume without loss of generality that
$f$ is time-constructible, i.e., fulfills the conditions of the time hierarchy theorem).
\begin{proposition}\label{prop:oca-reduction}
Every decidable problem $L \subseteq \mathbb{N}$ reduces polynomially
to the $\mathcal{F}$ separability problem of OCA\xspace languages.
\end{proposition}
\begin{proof}
Let $\mathcal{M}$ be a fixed total deterministic 2-counter machine recognizing a language $L$.
Given $k \in \mathbb{N}$, we construct two OCA\xspace $\mathcal{A}_1, \mathcal{A}_2$ with the following properties:
\begin{enumerate}
\item[(a)] if $k\in L(\mathcal{M})$ then $L(\mathcal{A}_1) \cap L(\mathcal{A}_2) \neq \emptyset$
(and thus $L(\mathcal{A}_1)$ and $L(\mathcal{A}_2)$ are not $\mathcal{F}$ separable);
\item[(b)] if $k \notin L(\mathcal{M})$ then $L(\mathcal{A}_1)$ and $L(\mathcal{A}_2)$ are $\mathcal{F}$ separable.
\end{enumerate}
As the input alphabet $\Sigma$ of $\mathcal{A}_1$ and $\mathcal{A}_2$ we take the set of transitions of $\mathcal{M}$.
We define two OCA\xspace:
\begin{align*}
\mathcal{A}_1 & = (Q, (q_0, k), (q_\textup{acc}, 0), T_1, T_{1, = 0}), \\
\mathcal{A}_2 & = (Q, (q_0, 0), (q_\textup{acc}, 0), T_2, T_{2, = 0}),
\end{align*}
where transitions $T_1$ (resp.~$T_2$) and zero tests $T_{1, =0}$ (resp.~$T_{2, =0}$) are, roughly speaking, transitions of
$\mathcal{M}$ where the second (resp.~first) counter is ignored.
Formally, for every transition $t$ of type 1 on counter $c_1$, there is a transition
$
(q, t, q', +1) \in T_1;
$
and for every transition $t$ of type 1 on counter $c_2$, there is a transition
$
(q, t, q', 0) \in T_1.
$
For every transition $t$ of type 2 on counter $c_1$, we include the following transition and zero test:
\[
(q, t, q', -1) \in T_1 \qquad (q, t, q'') \in T_{1, =0}.
\]
Finally, for every transition $t$ of type 2 on counter $c_2$, we include the following two transitions:
\[
(q, t, q', 0) \in T_1 \qquad (q, t, q'', 0) \in T_1.
\]
Transitions and zero tests of $\mathcal{A}_2$ are defined symmetrically, with the roles of $c_1$ and $c_2$ swapped.
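The construction of $T_1, T_{1,=0}$ (and, symmetrically, $T_2, T_{2,=0}$) can be sketched in code. The following is an illustrative Python fragment of ours: the tuple encoding of machine transitions and all names are hypothetical, chosen only to mirror the case distinction above.

```python
# Hypothetical encoding of M's transitions: a state q maps to either
# ("inc", i, q') or ("dec", i, q', q''), with counters indexed 0 and 1.
M = {
    "start": ("dec", 0, "odd", "acc"),
    "odd":   ("dec", 0, "start", "rej"),
}

def project(transitions, watched):
    """Build A_1 (watched=0) or A_2 (watched=1): the OCA tracks only the
    watched counter of M, mirroring T_i and T_{i,=0} from the text."""
    T, T_zero = [], []
    for q, t in transitions.items():
        a = (q, t)                    # the transition itself is the input letter
        if t[0] == "inc":
            _, i, q1 = t
            T.append((q, a, q1, 1 if i == watched else 0))
        else:
            _, i, q1, q2 = t
            if i == watched:          # conditional decrement becomes
                T.append((q, a, q1, -1))    # a decrement ...
                T_zero.append((q, a, q2))   # ... plus a zero test
            else:                     # other counter is ignored: guess branch
                T.append((q, a, q1, 0))
                T.append((q, a, q2, 0))
    return T, T_zero
```

Note how ignoring the other counter turns a deterministic conditional branch into a nondeterministic choice of two zero-change transitions.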
We need to argue that the implications~(a) and~(b) hold.
The first one is immediate: every sequence of transitions of $\mathcal{M}$ leading from $(q_0, k, 0)$ to $(q_\textup{acc}, 0, 0)$,
treated as a word over $\Sigma$, belongs both to $L(\mathcal{A}_1)$ and $L(\mathcal{A}_2)$.
In order to prove implication~(b), suppose $k\notin L(\mathcal{M})$. We first observe that $L(\mathcal{A}_1)$ and $L(\mathcal{A}_2)$ are
necessarily disjoint; indeed, any $w\in L(\mathcal{A}_1) \cap L(\mathcal{A}_2)$ is a sequence of transitions that accepts $k$.
As $\mathcal{M}$ is total by assumption, we know that $(q_0, k, 0) \trans{} (q_\textup{rej}, 0, 0)$ in $\mathcal{M}$;
let $n$ be the length of the corresponding sequence of transitions.
Let $L_1$ contain all prefixes of words from $L(\mathcal{A}_1)$, and likewise $L_2$ for $L(\mathcal{A}_2)$.
It is crucial to observe that the intersection $L_1 \cap L_2$ contains no word of length $n$ or longer.
Indeed, any $w\in L_1 \cap L_2$ is a sequence of transitions of $\mathcal{M}$ starting from $(q_0, k, 0)$,
and thus cannot be longer than $n$. Moreover $w \in L_1 \cap L_2$ cannot
lead, as a sequence of transitions of $\mathcal{M}$, to the rejecting state (as it has no outgoing transitions),
and thus $w$ cannot have length $n$ either.
The rest of the proof is along the same lines as in the previous section.
Informally, we claim that for a word of length $n$ or longer, it is enough to inspect its prefix of length $n$ in order
to classify the word between $L(\mathcal{A}_1)$ and $L(\mathcal{A}_2)$.
Formally, we define a language $K\in\mathcal{F}$ as follows:
\[
K \ := \ \big(L(\mathcal{A}_1) \cap \Sigma^{< n}\big) \ \cup \ \bigcup_{w\in L_1, |w| = n} w \Sigma^* .
\]
The language $K$ belongs to $\mathcal{F}$ for the reasons discussed in the previous section.
It remains to argue that $K$ separates $L(\mathcal{A}_1)$ and $L(\mathcal{A}_2)$.
By the very definition $L(\mathcal{A}_1) \subseteq K$,
as $K$ contains all words from $L(\mathcal{A}_1)$ of length strictly smaller than $n$,
and all words starting with a prefix, of length $n$, of a word from $L(\mathcal{A}_1)$.
For disjointness of $K$ and $L(\mathcal{A}_2)$, observe that the languages $L(\mathcal{A}_1) \cap \Sigma^{< n}$ and $L(\mathcal{A}_2)$ are disjoint,
as already $L(\mathcal{A}_1)$ and $L(\mathcal{A}_2)$ are.
Moreover, for every $w\in L_1$ of length $|w| = n$, the languages $w\Sigma^*$ and $L(\mathcal{A}_2)$ are disjoint, as already
the intersection $L_1\cap L_2$ contains no word of length $n$ or longer.
\end{proof}
\begin{remark} \rm
Theorem~\ref{thm:undecidability} is used in~\cite{CCLP16} to prove undecidability of the regular separability problem
for visibly one counter automata (cf.~Theorem 5 in~\cite{CCLP16}).
The proof assumes that the alphabets of the input visibly one counter automata can be different;
on the other hand, when two input visibly one counter automata are assumed to be over the same alphabet (i.e., they perform
increment and decrement operations on the same input letters) the problem seems to be decidable~\cite{ChristofsDecidability}.
This shows that the decidability border is quite subtle. In addition,
the regular separability problem becomes once more undecidable when one
extends visibly one-counter automata over the same alphabet to
visibly pushdown automata over the same alphabet, as shown in~\cite{Kopczynski16}.
\end{remark}
\section{One counter automata and nets} \label{sec:oca}
In order to fix notation, we start by recalling finite automata, in a variant specifically chosen to be convenient later when working with one counter automata.
A \emph{nondeterministic finite automaton} (NFA) $\mathcal{A} = (Q, q_0, q_f, T)$ over a finite alphabet $\Sigma$
consists of a finite set of control states $Q$, distinguished initial and
final states $q_0, q_f \in Q$ (for convenience we assume here, w.l.o.g., a single final state),
and a set of \emph{transitions} $T \subseteq Q \times \Sigma_{\varepsilon} \times Q$, where $\Sigma_{\varepsilon} = \Sigma\cup\set{\varepsilon}$.
For a word $v \in (\Sigma_{\varepsilon})^*$, let $\deeps{v}$ be the word obtained by removing all occurrences of $\varepsilon$.
A run of $\mathcal{A}$ over a word $w\in\Sigma^*$ is a sequence of transitions of the form
$$(p_0, a_1, p_1), (p_1, a_2, p_2), \ldots, (p_{n-1}, a_n, p_n)$$
such that $\deeps{(a_1 \ldots a_n)} = w$.
The run is \emph{accepting} if $p_0 = q_0$ and $p_n = q_f$.
The language of $\mathcal{A}$, denoted $L(\mathcal{A})$, is the set of all words $w$ over which $\mathcal{A}$ has an accepting run.
Languages of NFA are called \emph{regular}.
\myparagraph{One counter automata and nets}
In brief, a one counter automaton (OCA\xspace) is an NFA with a non-negative counter, where
we allow for arbitrary changes of the counter value in one step.
Formally, an OCA\xspace is a tuple $\mathcal{A} = (Q, \alpha_0, \alpha_f, T, T_{=0})$, where $Q$ are control states as above.
A \emph{configuration} $(q, n)\in Q\times\mathbb{N}$ of $\mathcal{A}$ consists of a control state and a non-negative counter value.
There are two distinguished configurations, the initial one $\alpha_0 = (q_0, n_0)$ and the final one $\alpha_f = (q_f, n_f)$.
The finite set $T \subseteq Q \times \Sigma_{\varepsilon} \times Q \times \mathbb{Z}$
contains \emph{transitions} of $\mathcal{A}$. A transition $(q, a, q', z)$ can be fired in a configuration $\alpha = (q, n)$ if $n+z \geq 0$, leading to a new configuration $\alpha' = (q', n+z)$.
We write $\alpha \trans{a} \alpha'$ if this is the case.
Finally, the set $T_{=0} \subseteq Q\times\Sigma_{\varepsilon}\times Q$ contains \emph{zero tests}. A zero test $(q, a, q')$ can be fired
in a configuration $\alpha = (q, n)$ only
if $n = 0$, leading to a new configuration $\alpha' = (q', n)$. Again, we write $\alpha \trans{a} \alpha'$ if this is the case.
A run of an OCA\xspace over a word $w\in \Sigma^*$ is a sequence of transitions and zero tests of the form
$$\alpha_0 \trans{a_1} \alpha_1 \trans{a_2} \ldots \trans{a_n} \alpha_n$$
such that $\deeps{(a_1 \ldots a_n)} = w$;
we briefly write $\alpha_0 \trans{w} \alpha_n$ if this is the case, and $\alpha_0 \trans{} \alpha_n$ if a word $w$ is irrelevant.
The run is accepting if $\alpha_0$ is the initial configuration of $\mathcal{A}$, and $\alpha_n$ is the final one.
The language of $\mathcal{A}$, denoted $L(\mathcal{A})$, is the set of all words $w$ over which $\mathcal{A}$ has an accepting run.
A one counter net (OCN\xspace) is an OCA\xspace without zero tests, i.e., one with $T_{=0} = \emptyset$.
We drop the component $T_{= 0}$ and denote OCN\xspace as $(Q, \alpha_0, \alpha_f, T)$.
In other words, an OCN\xspace is exactly a VASS\xspace in dimension 1.
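The operational semantics of OCA\xspace just defined can be made concrete with a small Python sketch (ours, purely illustrative; all names and the tuple encodings are assumptions). Since counter values are unbounded, the search is capped by an explicit bound to stay finite.

```python
from collections import deque

def oca_accepts(alpha0, alphaf, T, T_zero, word, bound):
    """Breadth-first exploration of runs of an OCA over `word`, with counter
    values capped at `bound` (the real model is unbounded; the cap just
    keeps the search finite).  T holds transitions (q, a, q', z) with a a
    letter or '' for epsilon; T_zero holds zero tests (q, a, q')."""
    start = (alpha0[0], alpha0[1], 0)          # (state, counter, position)
    seen, todo = {start}, deque([start])
    while todo:
        q, n, i = todo.popleft()
        if (q, n) == alphaf and i == len(word):
            return True
        steps = [(p2, n + z, a) for (p, a, p2, z) in T
                 if p == q and 0 <= n + z <= bound]
        if n == 0:                             # zero tests fire only at 0
            steps += [(p2, 0, a) for (p, a, p2) in T_zero if p == q]
        for p2, n2, a in steps:
            if a == '':
                nxt = (p2, n2, i)
            elif i < len(word) and word[i] == a:
                nxt = (p2, n2, i + 1)
            else:
                continue
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return False

# An OCN (no zero tests) for { a^n b^n }: count a's up, then b's down.
T_anbn = [("q0", "a", "q0", 1), ("q0", "", "q1", 0), ("q1", "b", "q1", -1)]
```

With initial configuration `("q0", 0)` and final `("q1", 0)`, the net above accepts `"aabb"` but not `"aab"`.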
\begin{example} \label{ex:ocnsep}
Consider two OCN\xspace languages over the alphabet $\set{a,b}$:
\[
K = \setof{a^n b^n}{n\in\mathbb{N}}
\qquad
L = \setof{a^n b^{n+1}}{n\in\mathbb{N}}.
\]
An example regular language separating $K$ from $L$ is
$R = \setof{a^n b^m}{n \equiv m \modulo{2}}.$
Indeed, $R$ includes $K$ and is disjoint from $L$.
On the other hand, $K$ and $L' = \setof{a^n b^m}{m > n}$ are not regular separable (which follows by Corollary~\ref{cor:appr} below).
\end{example}
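The example above can be sanity-checked by brute force over short words; the following quick script (ours, not part of the development) verifies that $R$ includes $K$ and is disjoint from $L$ up to a small length bound.

```python
def in_R(w):
    """Membership in R = { a^n b^m : n = m (mod 2) }."""
    n = len(w) - len(w.lstrip('a'))   # length of the leading block of a's
    m = len(w) - n
    return w == 'a' * n + 'b' * m and n % 2 == m % 2

K = {'a' * n + 'b' * n for n in range(50)}          # a^n b^n
L = {'a' * n + 'b' * (n + 1) for n in range(50)}    # a^n b^(n+1)
assert all(in_R(w) for w in K)       # R includes K
assert not any(in_R(w) for w in L)   # R is disjoint from L
```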
\myparagraph{Other modes of acceptance}
We briefly discuss other possible modes of acceptance of OCA\xspace.
First, consider a variant of OCA\xspace with a finite set of initial configurations, and a finite set of final ones.
This variant can be easily simulated by OCA\xspace as defined above.
Indeed, add two fresh states $q_0, q_f$, and fix the initial and final configurations $\alpha_0 = (q_0, 0)$ and
$\alpha_f = (q_f, 0)$. Moreover, add transitions leading from $\alpha_0$ to each of the former initial configurations, and
symmetrically add transitions leading from each of the former final configurations to $\alpha_f$.
The above simulation reveals that w.l.o.g. we can assume that the counter values $n_0$ and $n_f$
in the initial and final configurations are 0. This will be implicitly assumed in the rest of the paper.
Yet another possibility is accepting solely by control state: instead of a final configuration $\alpha_f = (q_f, n_f)$,
such an OCA\xspace would have solely a final control state $q_f$, and every run ending in a configuration $(q_f, n)$, for arbitrary
$n$, would be considered accepting.
Again, this variant is easily simulated by our model: it is enough to assume w.l.o.g.~that $q_f$ has no outgoing transitions or zero tests,
add a transition $(q_f, \varepsilon, q_f, -1)$
decrementing the counter in the final state, and fix the final configuration as $(q_f, 0)$.
Finally, note that all the simulations discussed above work for OCN\xspace as well.
In particular, in the sequel we may assume, w.l.o.g., that the counter values in initial and final configurations of OCN\xspace are 0.
\section{PSPACE algorithm}\label{sec:in-pspace}
In this section we prove the \textsc{PSpace}\xspace upper bound of Theorem~\ref{thm:pspace-comp}.
All the \textsc{PSpace}\xspace complexity statements below are understood with respect to the size of the two input OCN\xspace,
under binary encoding of integers.
The proof splits into two parts. In the first one (up to Remark~\ref{rem:decidability})
we reduce the (non-)separability problem to a kind of reachability property in the cross-product of $\mathcal{A}$ and $\mathcal{B}$.
In the second (more technical) part
we concentrate on testing this reachability property in \textsc{PSpace}\xspace.
\myparagraph{Vector addition systems with states}
We start by recalling the notion of \emph{integer} vector addition systems with states (integer-VASS\xspace).
For $d > 0$, a $d$-dimensional integer-VASS\xspace $\mathcal{V} = (Q, T)$, or $d$-integer-VASS\xspace, consists of a finite set $Q$ of control states, and a finite
set of transitions $T \subseteq Q\times \mathbb{Z}^d\times Q$.
A configuration of $\mathcal{V}$ is a pair $(q, v) \in Q\times\mathbb{Z}^d$ consisting of a state and an integer vector.
Note that we thus allow, in general, negative values in configurations (this is what distinguishes integer-VASS\xspace from VASS\xspace);
however, later we will typically
impose non-negativeness constraints on a selected subset of coordinates.
A $d$-integer-VASS\xspace $\mathcal{V}$ determines a step relation between configurations: there is a step from $(q, v)$ to $(q', v')$ if
$T$ contains a transition $(q, z, q')$ such that $v' = v + z$.
We write $(q, v) \trans{} (q', v')$ if there is a sequence of steps leading from $(q, v)$ to $(q', v')$, and say that
$(q', v')$ is \emph{reachable} from $(q, v)$ in $\mathcal{V}$.
\myparagraph{Cross-product operation}
We will use a cross-product operation over one counter nets.
For two OCN\xspace $\mathcal{A} = (Q, \alpha_0, \alpha_f, T)$ and $\mathcal{B} = (P, \beta_0, \beta_f, U)$, their cross-product
$\mathcal{A} \otimes \mathcal{B}$ is a 2-integer-VASS\xspace
whose
states are pairs of states $Q\times P$ of $\mathcal{A}$ and $\mathcal{B}$, respectively, and whose transitions contain all triples
\[
\big( (q, p), (z, v), (q', p') \big)
\]
such that there exists $a\in \Sigma_{\varepsilon}$ with $(q, a, q', z) \in T$ and $(p, a, p', v) \in U$.
For convenience we assume here that every OCN\xspace has an $\varepsilon$-transition of the form $(q, \varepsilon, q, 0)$ in every control state $q$.
Note that $\mathcal{A} \otimes \mathcal{B}$ is unlabeled --- the alphabet letters are only used to synchronize $\mathcal{A}$ and $\mathcal{B}$ ---
and allows, contrarily to $\mathcal{A}$ and $\mathcal{B}$, for negative values on both coordinates.
Moreover note that there is no distinguished initial or final configuration in an integer-VASS\xspace.
We will later need to impose a selective non-negativeness constraint on values of configurations.
For a $d$-integer-VASS\xspace $\mathcal{V}$ and a sequence $C_1, \ldots, C_d$, where $C_i = \mathbb{N}$ or $C_i = \mathbb{Z}$ for each $i$, by
$\constrvass{V}{C_1, \ldots, C_d}$ we mean the transition system of $\mathcal{V}$ truncated to the subset
$Q\times C_1 \times \ldots \times C_d \subseteq Q\times\mathbb{Z}^d$ of configurations.
For instance, $\constrvass{(\mathcal{A}\otimes\mathcal{B})}{\mathbb{N}, \mathbb{N}}$ differs from $\mathcal{A} \otimes \mathcal{B}$ by imposing the non-negativeness constraint
on both coordinates, and is thus a 2-VASS\xspace.
On the other hand, in $\constrvass{(\mathcal{A}\otimes\mathcal{B})}{\mathbb{Z}, \mathbb{N}}$ the counter of $\mathcal{A}$ can get arbitrary integer values while the counter of $\mathcal{B}$
is restricted to be non-negative.
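The cross-product operation itself is straightforward to write down; here is an illustrative Python sketch of ours (the function name and set-of-tuples encoding are assumptions).

```python
from itertools import product

def cross_product(T, U, Q, P):
    """The 2-integer-VASS A (x) B.  T, U are OCN transition sets (q, a, q', z);
    letters (with '' for epsilon) only synchronize the two nets and are then
    dropped.  As in the text, every state gets an implicit epsilon self-loop."""
    T = set(T) | {(q, '', q, 0) for q in Q}
    U = set(U) | {(p, '', p, 0) for p in P}
    return {((q, p), (z, v), (q2, p2))
            for (q, a, q2, z), (p, b, p2, v) in product(T, U)
            if a == b}
```

For two one-state nets synchronizing on a single letter $a$, the result consists of the synchronized $a$-step plus the epsilon self-loop.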
\myparagraph{Disjointness assumption}
Fix for the rest of this section two input OCN\xspace
$$\mathcal{A} = (Q, (q_0, 0), (q_f, 0), T) \quad \text{ and } \quad \mathcal{B} = (P, (p_0, 0), (p_f, 0), U),$$
and let $\mathcal{V} = \mathcal{A}\otimes\mathcal{B}$ be their cross-product.
If the intersection of $L(\mathcal{A})$ and $L(\mathcal{B})$ is non-empty, the answer to the separability question is obviously negative.
We may thus consider only input OCN\xspace $\mathcal{A}$ and $\mathcal{B}$ such that $L(\mathcal{A})$ and $L(\mathcal{B})$ are disjoint.
This is legitimate, as disjointness can be effectively checked in \textsc{PSpace}\xspace.
Indeed, the intersection of $L(\mathcal{A})$ and $L(\mathcal{B})$ is nonempty if, and only if
\[
\big( (q_0, p_0), 0, 0 \big) \trans{} \big( (q_f, p_f), 0, 0 \big)
\]
in the 2-VASS\xspace $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$, which can be checked in \textsc{PSpace}\xspace
by the result of~\cite{DBLP:conf/lics/BlondinFGHM15}.
\begin{assumption}
In the sequel, w.l.o.g.,~we assume that $L(\mathcal{A})$ and $L(\mathcal{B})$ are disjoint.
\end{assumption}
Our strategy is to reduce regular separability of $\mathcal{A}$ and $\mathcal{B}$ to (a kind of) reachability property in their
cross-product $\mathcal{V}$,
and then to encode this property using (multiple) systems of linear Diophantine equations.
The number of systems will not be polynomial; however, they will all be enumerable in polynomial space.
Using the enumeration,
our decision procedure will boil down to checking a suitable property of the solution sets of these systems.
\myparagraph{Reduction to reachability in $\mathcal{V}$}
Recall Corollary~\ref{cor:appr}(b) which characterizes regular non-separability by non-emptiness of the intersection of
$L(\mathcal{A}_n)$ and $L(\mathcal{B})$, for all $n > 0$, which, roughly speaking, is equivalent to a reachability property
in the cross-product of the NFA $\mathcal{A}_n$ and the OCN\xspace $\mathcal{B}$, for all $n > 0$.
We are going now to internalize the quantification over all $n$, by transferring the reachability property to the cross-product $\mathcal{V}$
of the two OCN\xspace $\mathcal{A}$ and $\mathcal{B}$.
For convenience we introduce the following terminology. For $n > 0$ we say that $\mathcal{V}$ \emph{admits $n$-reachability} (or
$n$-reachability holds in $\mathcal{V}$) if
there are $q, q' \in Q$, $p, p'\in P$, $m, m' \geq n$, $l, l' \geq 0$ and $m'' \in \mathbb{Z}$ such that
$m'' \equiv m' \modulo{n}$ and
\begin{enumerate}
\item[(a)] $\big( (q_0, p_0), 0, 0 \big) \trans{} \big( (q, p), m, l \big)$ in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$,
\item[(b)] $\big( (q, p), m, l \big) \trans{} \big( (q', p'), m'', l' \big)$ in $\constrvass{\mathcal{V}}{\mathbb{Z}, \mathbb{N}}$,
\item[(c)] $\big( (q', p'), m', l' \big) \trans{} \big( (q_f, p_f), 0, 0 \big)$ in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$.
\end{enumerate}
The $n$-reachability in $\mathcal{V}$ differs in three respects from ordinary reachability
$\big( (q_0, p_0), 0, 0 \big) \trans{} \big( (q_f, p_f), 0, 0 \big)$
in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$.
First, we require two intermediate values of the counter in $\mathcal{A}$, namely $m, m'$, to be at least $n$.
Second, in the middle part we allow the counter of $\mathcal{A}$ to be negative.
Finally, we allow for a mismatch between $m'$ and $m''$.
Thus $n$-reachability does \emph{not} imply non-emptiness $(q_0, 0) \trans{} (q_f, 0)$ of $\mathcal{A}$.
On the other hand, $n$-reachability \emph{does} imply non-emptiness $(p_0, 0) \trans{} (p_f, 0)$ of $\mathcal{B}$.
\begin{proposition} \label{prop:three-parts}
$\mathcal{A}$ and $\mathcal{B}$ are not regular separable if, and only if $\mathcal{V}$ admits $n$-reachability for all $n > 0$.
\end{proposition}
\begin{proof}
Using the characterization of Corollary~\ref{cor:appr}(b), it suffices to show that for every $n > 0$,
$L(\mathcal{A}_n) \, \cap \, L(\mathcal{B}) \neq \emptyset$ if, and only if $\mathcal{V}$ admits $n$-reachability.
Fix $n > 0$ in the sequel.
For the ``only if'' direction, let $w \in L(\mathcal{A}_n) \, \cap \, L(\mathcal{B})$.
As $w \in L(\mathcal{A}_n)$, we may apply Proposition~\ref{prop:appr-char}.
Note that the condition (a) of Proposition~\ref{prop:appr-char} surely does not hold, as we know that $w \notin L(\mathcal{A})$;
therefore condition (b) must hold
for some states $q, q' \in Q$ and natural numbers $c, c' \geq 1$ and $d, d' \geq 0$.
Put $m := n + d$, $m' := n + d'$ and $m'' := m' + (c' - c + 1)n$ (recall that $m''$ may be negative).
As $w\in L(\mathcal{B})$, the corresponding states $p, p'$ and counter values $l, l'$ can be taken from the corresponding two positions
in an accepting run of $\mathcal{B}$ over $w$.
The chosen states $q, q', p, p'$ and integer values $m, m', m'', l, l'$ prove $n$-reachability in $\mathcal{V}$, as required.
For the ``if'' direction suppose that $\mathcal{V}$ admits $n$-reachability, and let $w_\textsc{pref}$, $w_\textsc{mid}$ and $w_\textsc{suff}$ be some words
witnessing the conditions (a)--(c) of $n$-reachability.
In particular, this implies
\begin{align} \label{eq:wgore}
(q, m+(c-1)n) \trans{w_\textsc{mid}} (q', m''+(c-1)n) \text{ in } \mathcal{A}
\end{align}
for $c\geq 1$ large enough.
This also implies that the word $w = w_\textsc{pref} w_\textsc{mid} w_\textsc{suff}$ belongs to $L(\mathcal{B})$.
We will prove that $w$ also belongs to $L(\mathcal{A}_n)$, by demonstrating
that the factorization $w = w_\textsc{pref} w_\textsc{mid} w_\textsc{suff}$ satisfies the condition (b) in Proposition~\ref{prop:appr-char}.
(Note that (a) in Proposition~\ref{prop:appr-char} can not hold for $w$, as it would be in contradiction with
disjointness of $L(\mathcal{A})$ and $L(\mathcal{B})$.)
Indeed, for $d := m - n$, $d' := m' - n$, we then obtain runs over $w_\textsc{pref}$ and $w_\textsc{suff}$ as required in (b)
in Proposition~\ref{prop:appr-char}.
In order to get a run over $w_\textsc{mid}$, we take $c\geq 1$ large enough so that~\eqref{eq:wgore} holds;
for $c' := c + (m'' -m')/n$, \eqref{eq:wgore} rewrites to
\[
(q, cn+d) \trans{w_\textsc{mid}} (q', c'n + d') \text{ in } \mathcal{A},
\]
as required.
\end{proof}
Building on Proposition~\ref{prop:three-parts}, we are going to design a decision procedure to check
whether $\mathcal{V}$ admits $n$-reachability for all $n>0$.
To this end we slightly re-formulate $n$-reachability, using the following relations
expressing the conditions (a)--(c) of $n$-reachability:
\begin{align} \label{eq:relations}
\textsc{pref}_{q}^{p}, \ \textsc{suff}_{q}^{p} \subseteq \mathbb{N}^2, \qquad
\textsc{mid}_{q q'}^{p p'} \subseteq \mathbb{N}^2 \times\mathbb{Z} \times \mathbb{N},
\end{align}
for $q, q' \in Q$ and $p, p'\in P$, defined as follows:
\begin{align*}
\textsc{pref}_{q}^{p}(m, l) \iff \ & (a) \text{ holds} \\
\textsc{mid}_{q q'}^{p p'}(m, l, m'', l') \iff \ & (b) \text{ holds} \\
\textsc{suff}_{q'}^{p'}(m', l') \iff \ & (c) \text{ holds.}
\end{align*}
Let $R \subseteq \mathbb{N}^2 \times \mathbb{Z}$ contain all triples $(m, m', x)$ satisfying the following formula:
\newcommand{\defR}{& \text{there exist } q, q' \in Q, \ p, p'\in P, \ l, l' \in \mathbb{N}
\text{ s.~t.} \\
& \textsc{pref}_q^p(m, l) \land \textsc{mid}_{q q'}^{p p'}(m, l, m' {+} x, l') \land \textsc{suff}_{q'}^{p'}(m', l').
}
\begin{align} \begin{aligned}
\label{eq:defR}
\defR
\end{aligned} \end{align}
Then $n$-reachability is equivalent to saying that some $(n_1, n_2, n_3)\in R$ satisfies
\begin{align} \label{eq:witness}
n_1, n_2 \geq n \quad \text{ and } \quad n | n_3.
\end{align}
Any triple $(n_1, n_2, n_3)$ satisfying the condition~\eqref{eq:witness} we call \emph{$n$-witness} in the sequel.
In this terminology, our algorithm is to decide whether $R$ contains $n$-witnesses for all $n>0$.
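The $n$-witness condition~\eqref{eq:witness} is a simple arithmetic predicate; as a quick illustration (the function and its name are ours), it can be checked directly:

```python
def is_n_witness(triple, n):
    """A triple (n1, n2, n3) in N x N x Z is an n-witness when n1, n2 >= n
    and n divides n3.  Note n3 may be negative; in Python, n3 % n == 0
    still captures divisibility for negative n3."""
    n1, n2, n3 = triple
    return n1 >= n and n2 >= n and n3 % n == 0
```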
\myparagraph{Semi-linear sets}
For a set $P \subseteq \mathbb{Z}^l$ of vectors, let $P^*\subseteq \mathbb{Z}^l$ contain all vectors that can be obtained as a finite sum, possibly the empty one,
and possibly with repetitions, of vectors from $P$.
In other words, $P^*$ is the set of \emph{non-negative} linear combinations of vectors from $P$.
\emph{Linear sets} are sets of the form $L = \set{b} + P^*$, where $b\in \mathbb{Z}^l$, $P$ is a finite subset of $\mathbb{Z}^l$,
and addition $+$ is understood element-wise.
Thus $L$ contains sums of the vector $b$ and a vector from $P^*$.
The vector $b$ is called \emph{base}, and vectors in $P$ \emph{periods}; we write
shortly $b + P^*$.
Finite unions of linear sets are called \emph{semi-linear}.
We use sometimes a special case of semi-linear sets of the form $B + P^*$,
for finite sets $B, P$.
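To make the definition of linear sets concrete, the following small script (ours, purely illustrative) enumerates a finite window of a linear set $b + P^*$ by bounding the coefficient of each period.

```python
from itertools import product

def linear_set_window(b, P, max_coeff):
    """Elements of the linear set b + P* obtained by using every period at
    most `max_coeff` times -- a finite window into the (generally infinite)
    set, enough for small experiments."""
    dim = len(b)
    out = set()
    for coeffs in product(range(max_coeff + 1), repeat=len(P)):
        out.add(tuple(b[i] + sum(c * p[i] for c, p in zip(coeffs, P))
                      for i in range(dim)))
    return out
```

For example, $(1,0) + \{(2,0), (0,3)\}^*$ restricted to coefficients at most $2$ yields the nine points $(1+2i,\, 3j)$ for $i, j \in \{0,1,2\}$.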
\begin{remark} \label{rem:decidability} \rm
For decidability,
observe that
all the sets
appearing in~\eqref{eq:defR} are effectively semi-linear.
Indeed, $\textsc{pref}_q^p$ is essentially the reachability set of a 2-VASS\xspace, and thus effectively
semi-linear~\cite{DBLP:conf/lics/BlondinFGHM15}, and likewise for $\textsc{suff}_q^p$; and
effective semi-linearity of $\textsc{mid}_{q q'}^{p p'}$ can be derived from
Parikh theorem (see, e.g., Lemma 3.4 in~\cite{Georg}).
In consequence, the set $R$ is effectively semi-linear too.
Thus non-separability reduces to checking if a given semi-linear set contains $n$-witnesses for all $n>0$.
However, in order to get a tight \textsc{PSpace}\xspace upper bound, we need to provide suitable bounds on the representation size of semi-linear sets.
To this aim we introduce \emph{\pspace-enumerable\xspace sets}.
\end{remark}
\myparagraph{\pspace-enumerable\xspace sets}
Recall that complexity estimations are with respect to the sizes of the input OCN\xspace $\mathcal{A}$ and $\mathcal{B}$.
For a finite set of vectors $P$, we say that an algorithm \emph{enumerates} $P$ if it computes consecutive elements of a sequence
$p_1, \ldots, p_m$, possibly with repetitions, such that $P = \set{p_1, \ldots, p_m}$; in other words, every element of
$P$ appears at least once in the sequence, but no other element does.
An algorithm enumerates a linear set $L = b + P^*$ if it first computes $b$ and then enumerates $P$.
If there is a polynomial space algorithm which enumerates $L$, the set $L$ is called \emph{\pspace-enumerable\xspace}.
A semi-linear set $S$ we call \pspace-enumerable\xspace (slightly abusing the notation) if for some sequence of linear sets
$L_1, \ldots, L_k$ such that
$$
S = L_1 \cup \ldots \cup L_k,
$$
there is a polynomial space algorithm that
first enumerates $L_1$, then enumerates $L_2$, and so on, and finally enumerates $L_k$.
In particular, this means that for some polynomial bound $N$,
every base and every period can be stored using at most $N$ bits.
Propositions~\ref{prop:effsolv} and~\ref{prop:effsolv2} below state that all the sets appearing in~\eqref{eq:defR}
are \pspace-enumerable\xspace.
The next Proposition~\ref{prop:Reffsolv}, their direct consequence,
says the same about the set $R$; it will be the cornerstone of our decision procedure.
Proofs of the propositions are postponed towards the end of this section.
\begin{proposition} \label{prop:effsolv}
For every $q \in Q$ and $p \in P$, the sets
$\textsc{pref}_{q}^{p}$ and $\textsc{suff}_{q}^{p}$ are \pspace-enumerable\xspace.
\end{proposition}
\begin{proposition} \label{prop:effsolv2}
For every $q, q' \in Q$ and $p, p' \in P$, the set
$\textsc{mid}_{q q'}^{p p'}$ is \pspace-enumerable\xspace.
\end{proposition}
\begin{restatable}{proposition}{PropR} \label{prop:Reffsolv}
The set $R$ is \pspace-enumerable\xspace.
\end{restatable}
\noindent
The set $R$ is therefore a finite union of linear sets,
\begin{align} \label{eq:R}
R = L_1 \cup \ldots \cup L_k,
\end{align}
each of them being \pspace-enumerable\xspace.
The next lemma allows us to consider each of the linear sets separately:
\begin{lemma} \label{lem:unionwitness}
If a finite union $X_1 \cup \ldots \cup X_k \subseteq \mathbb{N}^2\times\mathbb{Z}$
contains $n$-witnesses for all $n>0$,
then some of $X_1, \ldots, X_k$ also does.
\end{lemma}
\begin{proof}
We use a monotonicity property: if $n' | n$ then
every $n$-witness is automatically also an $n'$-witness.
Consider a sequence of $(n!)$-witnesses,
for $n>0$, contained in $X_1 \cup \ldots \cup X_k$. One of the sets $X_1, \ldots, X_k$ necessarily
contains infinitely many of them.
By monotonicity, this set contains $(n!)$-witnesses for all $n>0$, and hence $n$-witnesses for all $n>0$.
\end{proof}
Relying on Lemma~\ref{lem:unionwitness} and Proposition~\ref{prop:Reffsolv}, our procedure
guesses one of the linear sets~\eqref{eq:R}.
It thus remains to describe a \textsc{PSpace}\xspace algorithm for the following core problem: for a given
\pspace-enumerable\xspace linear set $L = b+P^* \subseteq \mathbb{N}^2\times\mathbb{Z}$, determine whether
it contains $n$-witnesses for all $n>0$.
\myparagraph{Decision procedure for the core problem}
In the case of a linear set $L$, the condition we are to check boils down to two separate sub-conditions:
\begin{lemma} \label{lem:separately}
$L = b+P^*$ contains $n$-witnesses for all $n>0$ if, and only if
\begin{enumerate}
\item[(a)] for every $n$, there is $(n_1, n_2, n_3) \in L$ with $n_1, n_2 \geq n$;
\item[(b)] for every $n$, there is $(n_1, n_2, n_3)\in L$ with $n | n_3$.
\end{enumerate}
\end{lemma}
\begin{proof}
Put $b = (b_1, b_2, b_3)$.
Indeed, if $(b_1, b_2, b_3) + (k_1, k_2, k_3) \in L$ with $b_1 + k_1, b_2 + k_2 \geq n$,
and $(b_1, b_2, b_3) + (m_1, m_2, m_3) \in L$ with $n \mid (b_3 + m_3)$,
where $(k_1, k_2, k_3), (m_1, m_2, m_3) \in P^*$, then
$(b_1, b_2, b_3) + n(k_1, k_2, k_3) + (m_1, m_2, m_3) \in L$ is an $n$-witness.
Hence conditions (a) and (b) imply that $L$ contains $n$-witnesses for all $n>0$.
The opposite direction is obvious.
\end{proof}
Condition (a) in Lemma~\ref{lem:separately} is easy to verify algorithmically: enumerate the vectors in $P$
while checking
whether some vector is positive on the first coordinate, and some (possibly different) vector
is positive on the second coordinate.
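This check can be sketched as follows (a minimal Python illustration; the enumeration of the period set $P$ is abstracted here as a finite iterable of integer triples, and the function name is ours):

```python
def condition_a(periods):
    """Condition (a): for every n, the linear set b + P* contains an
    element whose first two coordinates are >= n.  This holds iff some
    period is positive on the first coordinate and some (possibly
    different) period is positive on the second coordinate."""
    pos_first = any(p[0] > 0 for p in periods)
    pos_second = any(p[1] > 0 for p in periods)
    return pos_first and pos_second
```
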
As the last bit of our decision procedure,
it remains to check condition (b) in Lemma~\ref{lem:separately}.
Writing $b_3$, resp.~$P_3$, for the projection of $b$, resp.~$P$, on the third coordinate, we need to check
whether the set $b_3 + {P_3}^* \subseteq \mathbb{Z}$ contains (possibly negative) multiples of all $n>0$.
We build on:
\begin{restatable}{proposition}{PropLast} \label{prop:last}
The set $b_3 + {P_3}^*$ contains multiples of all $n>0$ if, and only if
$b_3$ is a linear combination of $P_3$, i.e.,
\begin{align} \label{eq:proplast}
b_3 = a_1 p_1 + \ldots + a_k p_k,
\end{align}
for $a_1, \ldots, a_k \in \mathbb{Z}$ and $p_1, \ldots, p_k \in P_3$.
\end{restatable}
\begin{proof}
For the `only if' direction,
suppose that $b_3 + {P_3}^*$ contains multiples of all positive numbers.
If $b_3 = 0$ then it is (the empty) linear combination of $P_3$; suppose therefore that $b_3 \neq 0$.
Note that this implies in particular that $P_3$ is necessarily nonempty.
Fix an arbitrary $n \in P_3$. Suppose $n>0$ (if $n<0$ take $-n$ instead of $n$).
By the assumption,
$b_3 + p \equiv 0 \modulo{n}$ for some $p\in {P_3}^*$, i.e.,
\[
b_3 \equiv - p \modulo{n}.
\]
Then $b_3 = - p + a n$ for some $a\in\mathbb{Z}$,
hence $b_3$ is
a linear combination of $P_3$ as required.
For the `if' direction, suppose $b_3$ is a linear combination of $P_3$ as in~\eqref{eq:proplast},
and let $n>0$.
It is possible to decrease the numbers $a_1, \ldots, a_k$ by multiples of $n$ so that they become non-positive.
Thus we have
\[
b_3 \equiv -a_1 p_1 - \ldots - a_k p_k \modulo{n},
\]
for $a_1, \ldots, a_k \in \mathbb{N}$, i.e., $b_3 \equiv - p \modulo{n}$ for some $p\in {P_3}^*$.
In consequence $b_3 + p \equiv 0 \modulo{n}$,
as required.
\end{proof}
Thus we only need to check whether $b_3$ is a linear combination of $P_3$.
By Bézout's identity, this is equivalent to
$b_3$ being a multiple of the greatest common divisor
of all numbers in $P_3$.
Thus our decision procedure enumerates the set $P$, computes the greatest common divisor $g$
of projections $p_3$ on the third coordinate of all vectors $p\in P$, and finally checks whether $g | b_3$.
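The whole check for condition (b) can be summarized in a few lines (a Python sketch; the convention $g = 0$ for an empty $P_3$ is our implementation choice, in accordance with $b_3 + \emptyset^* = \{b_3\}$):

```python
from math import gcd
from functools import reduce

def condition_b(b3, periods3):
    """Condition (b): b3 + P3* contains a multiple of every n > 0.
    Equivalently (Bezout), b3 is divisible by the gcd g of all numbers
    in P3; with g = 0 for an empty P3, only b3 = 0 passes."""
    g = reduce(gcd, (abs(p) for p in periods3), 0)
    return b3 == 0 if g == 0 else b3 % g == 0
```
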
The upper bound of Theorem~\ref{thm:pspace-comp} is thus proved.
\begin{remark} \rm
From the proof of the \textsc{PSpace}\xspace upper bound we can extract only a \emph{doubly exponential} bound on $n$ in Corollary~\ref{cor:appr}(b)
(exhaustive checking if $L(\mathcal{A}_n) \cap L(\mathcal{B}) \neq \emptyset$ for all $n$ so bounded would only yield an \textsc{ExpSpace}\xspace algorithm).
First, the proof of Proposition~\ref{prop:last} reveals that if the set $b_3 + {P_3}^*$ does not contain multiples of all
$n>0$, and $P_3$ is nonempty, then it does not contain multiples of some period $n\in P_3$, hence $n$ is bounded exponentially
(one can obtain an even better bound for $n$, namely the greatest common divisor of all periods from $P_3$, which is still only exponentially bounded in general).
Thus by Lemma~\ref{lem:separately} we get an exponential bound on the smallest $n$ such that a linear set $L$ does not contain an $n$-witness.
Then we lift the bound to semi-linear sets (cf.~Lemma~\ref{lem:unionwitness}), but at the price of increasing it to doubly exponential:
indeed, if every component linear set $L_i$ does not contain an $n_i$-witness for some $n_i>0$,
the semi-linear set $R = L_1\cup \ldots\cup L_k$ does not contain
an $n$-witness, for $n$ the least common multiple of all $n_i$, and hence doubly exponential.
We have thus bounded the smallest $n$ such that $n$-reachability does not hold in $\mathcal{V}$, and consequently
(cf.~the proof of Proposition~\ref{prop:three-parts}) $n$ in Corollary~\ref{cor:appr}(b).
We do not know if this bound can be improved to single exponential (which would immediately make the exhaustive check a \textsc{PSpace}\xspace algorithm).
The question seems to be quite challenging, as our construction of the set $R$ relies on
a combination of several nontrivial results~\cite{DBLP:conf/lics/BlondinFGHM15,Hofman16,Kopczynski10}
(cf.~the proofs of Propositions~\ref{prop:effsolv} and~\ref{prop:effsolv2}).
\end{remark}
\subsection{Proof of Proposition~\ref{prop:effsolv}} \label{sec:proofeffsolv}
We concentrate on showing that the sets $\textsc{pref}_{q}^{p}$ are \pspace-enumerable\xspace.
(The sets $\textsc{suff}_q^p$ can be dealt with in exactly the same way as $\textsc{pref}_q^p$, but with $\mathcal{V}$ replaced by the
reverse of $\mathcal{V}$.)
In the sequel fix states $q, p$ of $\mathcal{A}$ and $\mathcal{B}$, respectively.
The set $\textsc{pref}_q^p$ is nothing but the reachability set of a 2-VASS\xspace $\constrvass{\mathcal{V}}{\mathbb{N},\mathbb{N}}$ in control state $(q, p)$,
from the initial configuration
$((q_0, p_0), 0, 0)$.
We build on a result of~\cite{DBLP:conf/lics/BlondinFGHM15} which describes the reachability set in terms of
sets reachable via a finite set of \emph{linear path schemes}, a notion that we are going to recall now.
Let $T$ be the set of transitions of $\mathcal{V}$.
A linear path scheme is a regular expression over $T$ of the form:
\begin{align} \label{eq:lps}
E = \alpha_0 \beta_1^* \alpha_1 \ldots \beta_k^* \alpha_k,
\end{align}
where $\alpha_i, \beta_i \in T^*$. The sequences $\beta_1, \ldots, \beta_k$ are called \emph{loops} of $E$.
By the \emph{length} of $E$ we mean the sum of the lengths of all $\alpha_i$ and $\beta_i$.
Let $\textsc{Reach}_E$ (the reachability set via $E$) contain all pairs $(n, m)\in\mathbb{N}^2$ such that
$((q_0, p_0), 0, 0) \trans{} ((q, p), n, m)$ in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$ via a sequence of transitions that belongs to $E$.
Here is Thm.~1 in~\cite{DBLP:conf/lics/BlondinFGHM15}, translated to our terminology:
\begin{lemma}[\cite{DBLP:conf/lics/BlondinFGHM15}] \label{lem:blondyn}
There are computable bounds $N_1$, $N_2$, where $N_1$ is exponential and $N_2$ is polynomial in the size of $\mathcal{V}$,
such that $\textsc{pref}_q^p$ is the union of sets $\textsc{Reach}_E$, for linear path schemes $E$ of length at most $N_1$,
with at most $N_2$ loops.
\end{lemma}
In order to test whether a configuration is reachable in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$ by a given linear path scheme $E$,
it is not necessary to know the whole scheme.
For our purposes it is enough to describe $E$ as in~\eqref{eq:lps} using $4k+2$ pairs of integers.
Let $a_i \in \mathbb{Z}^2$, for $i = 0, \ldots, k$, be the total effect of executing the sequence $\alpha_i$,
and likewise $b_i$ for the sequence $\beta_i$, for $i = 1, \ldots, k$.
Moreover, let $c_i \in \mathbb{N}^2$, for $i = 0, \ldots, k$, be the (point-wise) minimal nonnegative values of counters that allow one to execute the sequence $\alpha_i$ (in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$),
and likewise $d_i$ for the sequence $\beta_i$, for $i = 1, \ldots, k$.
We jointly call the $4k+2$ pairs of numbers, namely $a_i, c_i$ (for $i = 0, \ldots, k$) and $b_i, d_i$ (for $i = 1, \ldots, k$),
the \emph{profile} of the linear path scheme $E$.
\begin{lemma} \label{lem:checknumbers}
Given pairs $a_i \in \mathbb{Z}^2$, $c_i \in \mathbb{N}^2$ (for $i = 0 \ldots k$) and $b_i \in \mathbb{Z}^2$, $d_i \in \mathbb{N}^2$ (for $i=1\ldots k$),
it can be checked in \textsc{PSpace}\xspace if they form the profile of some linear path scheme.
\end{lemma}
\begin{proof}
Guess intermediate control states $(q_1, p_1)$, \ldots, $(q_{k}, p_{k})$ and
put
$(q_{k+1}, p_{k+1}) = (q, p)$.
Check that the following reachability properties hold in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$, for $i = 0, \ldots, k$ and $i = 1, \ldots, k$, respectively:
\begin{align*}
& \big( (q_{i}, p_{i}), c_i \big) \trans{} \big( (q_{i+1}, p_{i+1}), c_i + a_i \big) \\
& \big( (q_{i}, p_{i}), d_i \big) \trans{} \big( (q_i, p_i), d_i + b_i \big),
\end{align*}
and that the above properties fail to hold if any $c_i$ (resp.~$d_i$) is replaced by a point-wise smaller pair of numbers.
All the required checks are instances of the reachability problem for 2-VASS\xspace, hence doable in \textsc{PSpace}\xspace~\cite{DBLP:conf/lics/BlondinFGHM15}.
\end{proof}
Denote by $\textsc{Reach}_p$ the set of configurations reachable in $\constrvass{\mathcal{V}}{\mathbb{N}, \mathbb{N}}$ via some linear path scheme
with profile $p$.
Using Lemma~\ref{lem:checknumbers} we can enumerate
all profiles of linear path schemes~\eqref{eq:lps} of length at most $N_1$ with $k \leq N_2$ loops.
Note that each such profile can be represented (in binary) in polynomial space.
Thus, by virtue of Lemma~\ref{lem:blondyn}, it is enough to show, for a fixed profile $p$,
that the set $\textsc{Reach}_p$ is \pspace-enumerable\xspace.
Fix a profile $p$ from now on.
As a convenient tool we will use \emph{linear Diophantine equations}.
These are systems of equations of the form
\begin{align} \label{eq:lineq}
a_1 x_1 + \ldots + a_l x_l = a,
\end{align}
where $x_1, \ldots, x_l$ are variables, and $a, a_1, \ldots, a_l$ are integer coefficients.
For a system $\mathcal{U}$ of such equations,
we denote by $\sol{\mathcal{U}} \subseteq \mathbb{N}^l$ the solution set of $\mathcal{U}$,
i.e., the set of all non-negative integer vectors
$(n_1, \ldots, n_l)$ such that the valuation $x_1 \mapsto n_1, \ldots, x_l \mapsto n_l$ satisfies all the equations in $\mathcal{U}$.
We say that a vector is bounded by $m$ if it is smaller than $m$ on every coordinate.
By $\size{\mathcal{U}}$ we denote the size of $\mathcal{U}$, with integers encoded in binary.
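As a toy illustration of these definitions, the bounded part of the solution set of a small system can be enumerated by brute force (a Python sketch of ours; real instances are of course handled symbolically, not by enumeration):

```python
from itertools import product

def solutions(equations, bound):
    """Brute-force sol(U) restricted to bounded vectors: each equation
    is a pair (coeffs, rhs) representing
    coeffs[0]*x_1 + ... + coeffs[l-1]*x_l = rhs, and we search over
    non-negative integer vectors with every coordinate < bound."""
    l = len(equations[0][0])
    return [v for v in product(range(bound), repeat=l)
            if all(sum(c * x for c, x in zip(coeffs, v)) == rhs
                   for coeffs, rhs in equations)]
```
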
By Prop.~2 in~\cite{taming} we get:
\begin{lemma} \label{lem:hybrid}
$\sol{\mathcal{U}} = B + P^*$, with every base $b\in B$ and period $p\in P$ bounded by $2^N$, for a computable bound
$N \in \mathbb{N}$ polynomial in $\size{\mathcal{U}}$.
\end{lemma}
Observe that, necessarily, $P \subseteq \sol{\mathcal{U}_0}$, where
$\mathcal{U}_0$ denotes a modification of the system of linear equations $\mathcal{U}$ with all
right-hand side constants $a$ (cf.~\eqref{eq:lineq}) replaced by 0.
We will use Lemma~\ref{lem:hybrid} once we state the last lemma we need:
\begin{restatable}{lemma}{LemProfile}\label{lem:profile}
The set $\textsc{Reach}_p$ is a projection of the union
$$\sol{\mathcal{U}^1} \cup \ldots \cup \sol{\mathcal{U}^l},$$
for systems of linear Diophantine equations $\mathcal{U}^1 \ldots \mathcal{U}^l$
that can be enumerated in polynomial space.
\end{restatable}
\begin{proof}
We assume that $c_0 = (0,0)$, as otherwise the set $\textsc{Reach}_p$ is empty.
Introduce variables $x_i$, for $i = 1 \ldots k$, to represent the number of times the loop $\beta_i$ has been executed.
The necessary and sufficient condition for executing the linear path scheme~\eqref{eq:lps}
can be described by a positive Boolean combination of linear inequalities (that can be further transformed into linear
equations using auxiliary variables), which implies Lemma~\ref{lem:profile}.
Indeed, for every $i = 1\ldots k$, we distinguish two sub-cases, $x_i = 0$ or $x_i > 0$. In the former case
(the loop $\beta_i$ is \emph{not} executed) we put the two inequalities described succinctly as (note that $a_j, b_j \in \mathbb{Z}^2$, while $c_j$, $d_j \in \mathbb{N}^2$ for all $j$)
\begin{align*}
& a_0 + b_1 x_1 + a_1 + \ldots + b_{i-1} x_{i-1} + a_{i-1} \geq c_i
\end{align*}
to say that $\alpha_i$ can be executed.
In the latter case (the loop $\beta_i$ \emph{is} executed) we put the following four inequalities
\begin{align*}
& a_0 + b_1 x_1 + a_1 + \ldots + b_{i-1} x_{i-1} + a_{i-1} \geq d_i \\
& a_0 + b_1 x_1 + a_1 + \ldots + b_{i-1} x_{i-1} + a_{i-1} + b_i (x_i - 1) \geq d_i
\end{align*}
to say that the first and the last iteration of the loop $\beta_i$ can be executed (which implies that each intermediate iteration of $\beta_i$ can be executed as well), plus the two inequalities
\begin{align*}
& a_0 + b_1 x_1 + a_1 + \ldots + b_{i-1} x_{i-1} + a_{i-1} + b_i x_i \geq c_i
\end{align*}
to assure that $\alpha_i$ can be executed next.
Finally, the two variables $(y_1, y_2)$, representing the final configuration, are related to the other variables by the two equations:
\begin{align*}
& a_0 + b_1 x_1 + a_1 + \ldots + b_{k} x_{k} + a_{k} = (y_1, y_2).
\end{align*}
The set $\textsc{Reach}_p$ is the projection, onto $(y_1, y_2)$, of the union of solution sets of all the above-described systems of equations.
\end{proof}
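The conditions from this proof can be checked mechanically for concrete loop counts. Below is a hypothetical Python sketch (names and data layout are ours): the profile is given as lists of integer pairs (`a`, `c` of length $k+1$; `b`, `d` of length $k$, with `b[i-1]` standing for $b_i$), and `executable` verifies, for given loop counts $x_1, \ldots, x_k$, the inequalities above, returning the final counter values on success.

```python
def executable(a, b, c, d, x):
    """Check whether loop counts x = (x_1, ..., x_k) admit an execution
    of a linear path scheme with the given profile, following the
    inequalities in the proof: each alpha_i needs counters >= c[i],
    the first and last iteration of an executed loop beta_i need
    counters >= d[i-1].  Returns the final pair (y1, y2) or None."""
    k = len(x)
    def ge(u, v):            # point-wise comparison of pairs
        return u[0] >= v[0] and u[1] >= v[1]
    def add(u, v, m=1):      # u + m*v, point-wise
        return (u[0] + m * v[0], u[1] + m * v[1])
    cur = (0, 0)
    if not ge(cur, c[0]):    # alpha_0 must be enabled initially
        return None
    cur = add(cur, a[0])
    for i in range(1, k + 1):
        xi = x[i - 1]
        if xi > 0:
            # first and last iteration of beta_i must be enabled
            if not ge(cur, d[i - 1]):
                return None
            if not ge(add(cur, b[i - 1], xi - 1), d[i - 1]):
                return None
            cur = add(cur, b[i - 1], xi)
        if not ge(cur, c[i]):  # alpha_i must be enabled next
            return None
        cur = add(cur, a[i])
    return cur
```

Checking only the first and last iteration of each loop suffices because, the per-iteration effect being constant, the counter values along the iterations are linear in the iteration index.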
The last two lemmas immediately imply that $\textsc{Reach}_p$ is \pspace-enumerable\xspace.
Indeed, by Lemma~\ref{lem:hybrid} applied to each of the systems $\mathcal{U}^i$, we have
$\sol{\mathcal{U}^i} = B_i+{P_i}^*$ for bases $B_i$ containing all vectors $b\in \sol{\mathcal{U}^i}$ bounded by $2^N$, and
periods $P_i$ containing all vectors $p\in \sol{{\mathcal{U}^i}_0}$ bounded by $2^N$, where $N$ is polynomial and computable.
Relying on Lemma~\ref{lem:profile},
the algorithm enumerates all systems $\mathcal{U}^i$, then enumerates
all $b\in B_i$ satisfying the above constraints, and for each $b$ it enumerates all periods $p\in P_i$ satisfying the above constraints.
The proof of Proposition~\ref{prop:effsolv} is thus completed.
\subsection{Proof of Proposition~\ref{prop:effsolv2}} \label{sec:proofeffsolv2}
In the sequel we fix states $q, q'$ of $\mathcal{A}$ and $p, p'$ of $\mathcal{B}$, respectively.
Our aim is to prove that $\textsc{mid}_{q q'}^{p p'}$ is \pspace-enumerable\xspace, by encoding this set as the Parikh image of an OCN\xspace.
Recall that the Parikh image $\pim{w}$ of a word $w \in \Sigma^*$, for a fixed ordering $a_1 < \ldots < a_k$
of $\Sigma$, is defined as the vector
$(n_1, \ldots, n_k)$ where $n_i$ is the number of occurrences of $a_i$ in $w$, for $i = 1, \ldots, k$. The Parikh image lifts to languages: $\pim{L} = \setof{\pim{w}}{w\in L}$.
We call an OCN\xspace a \emph{1-OCN\xspace} if all its transitions $(q, a, q', z)$ satisfy $z \in \set{-1, 0, 1}$.
We define a 1-OCN\xspace $\mathcal{C}$ of exponential size, over the 5-letter alphabet
$\set{a_0, b_0, a_+, a_-, b_f}$, such that
$\textsc{mid}_{q q'}^{p p'}$ is the image of $\pim{L(\mathcal{C})}$ under a linear function, defined below.
$\mathcal{C}$ starts with the zero counter value, and its execution splits into three phases.
In the first phase $\mathcal{C}$ reads the letter $a_0$ arbitrarily many times without modifying the counter,
and the letter $b_0$ arbitrarily many times, increasing the counter by $1$ at every $b_0$.
Thus the counter value of $\mathcal{C}$ at the end of the first
phase is equal to the number of $b_0$s.
In the last phase, $\mathcal{C}$ reads the letter $b_f$ arbitrarily many times, decreasing the counter by $1$ at every $b_f$.
The accepting configuration of $\mathcal{C}$ requires the counter to be $0$.
Thus the counter value of $\mathcal{C}$ at the beginning of the last
phase must be equal to the number of $b_f$s.
In the intermediate phase $\mathcal{C}$ simulates an execution of $\constrvass{\mathcal{V}}{\mathbb{Z}, \mathbb{N}}$.
The counter value of $\mathcal{C}$ corresponds, during this phase, to the counter value of $\mathcal{B}$.
On the other hand, the counter value of $\mathcal{A}$ will only be reflected by the number of $a_+$ and $a_-$ read by $\mathcal{C}$.
States of $\mathcal{C}$ correspond to pairs of states of $\mathcal{A}$ and $\mathcal{B}$, respectively; there will also be exponentially many
auxiliary states.
The phase starts in state $(q, p)$, and ends in state $(q', p')$.
A transition $\big( (q_1, p_1), (z_1, z_2), (q_2, p_2) \big)$ of $\mathcal{V}$ is simulated in $\mathcal{C}$ as follows.
First, if $z_1 \geq 0$ then $\mathcal{C}$ reads $z_1$ letters $a_+$; otherwise, $\mathcal{C}$ reads $-z_1$ letters $a_-$.
Second, if $z_2 \geq 0$ then $\mathcal{C}$ performs $z_2$ consecutive increments of the counter; otherwise $\mathcal{C}$ performs $-z_2$ decrements.
In both tasks, fresh auxiliary states are used.
We assume w.l.o.g.~that every transition of $\mathcal{V}$ satisfies $(z_1, z_2) \neq (0, 0)$; hence $\mathcal{C}$ has no $\varepsilon$-transitions.
This completes the description of the 1-OCN\xspace $\mathcal{C}$.
Let $S = \pim{L(\mathcal{C})} \subseteq \mathbb{N}^5$. Then
$\textsc{mid}_{q q'}^{p p'} = f(S)$, for the linear function $f: \mathbb{Z}^5 \to \mathbb{Z}^4$ defined by
(intentionally, we re-use alphabet letters in the role of variable names):
\[
(a_0, b_0, a_+, a_-, b_f) \mapsto (a_0, b_0, a_0 + a_+ - a_-, b_f).
\]
Therefore, if $S$ is \pspace-enumerable\xspace then so is $f(S)$; it thus remains to prove that $S$ is \pspace-enumerable\xspace.
Our proof builds on results of~\cite{Hofman16,Kopczynski10}. In order to state it we need to introduce the concept of \emph{pump}
of an accepting run $\rho$ of $\mathcal{C}$ (called \emph{direction} in~\cite{Hofman16}). We treat accepting runs $\rho$ as sequences of transitions.
A pump of $\rho$ of the first kind is a sequence $\alpha$ of transitions such that $\rho$ factorizes into $\rho = \rho_1 \rho_2$, and
$\rho_1 \alpha \rho_2$ is again an accepting run. Note that in this case the effect of $\alpha$ on the counter is necessarily 0.
A pump of the second kind is a pair $\alpha, \beta$ of sequences of transitions, where the effect of $\alpha$ is non-negative,
such that $\rho$ factorizes into
$\rho = \rho_1 \rho_2 \rho_3$, and $\rho_1 \alpha \rho_2 \beta \rho_3$ is again an accepting run. Note that
in this case the effect of $\beta$ is necessarily opposite to the effect of $\alpha$.
The Parikh image $\pim{\rho}$ of a sequence of transitions $\rho$ is understood as a shorthand for the Parikh image of the input word of $\rho$.
Furthermore, we use a shorthand notation for the Parikh image of a pump $\pi$: let $\pim{\pi}$ mean either
$\pim{\alpha}$ or $\pim{\alpha \beta}$, in case of the first or second kind, respectively.
Similarly, the length of $\pi$ is either the length of $\alpha$, or the length of $\alpha \beta$.
Lemma~\ref{lem:hofman} follows from~\cite{Hofman16}, Lem.~15, and~\cite{Hofman16arxiv}, Lem.~58 (see also~\cite{Kopczynski10}, Thm.~6):
\begin{lemma} \label{lem:hofman}
There is a computable bound $N$, polynomial in the size of $\mathcal{C}$, such that
$S$ is a union of linear sets of the form
\[
\pim{\rho} + \set{\pim{\pi_1}, \ldots, \pim{\pi_l}}^* \quad (l\leq 5),
\]
where $\rho$ is an accepting run of $\mathcal{C}$ of length at most $N$, and $\pi_1, \ldots, \pi_l$ are
pumps of $\rho$ of length at most $N$.
\end{lemma}
We need one more fact:
\begin{restatable}{lemma}{Lemsz} \label{lem:6}
For $b\in \mathbb{N}^5$ and $P = \set{p_1, \ldots, p_l} \subseteq \mathbb{N}^5$, $l \leq 5$,
it is decidable in \textsc{PSpace}\xspace if there is an accepting run $\rho$ of $\mathcal{C}$ of length at most $N$
and pumps $\pi_1, \ldots, \pi_l$ of $\rho$ of length at most $N$,
such that $b = \pim{\rho}$ and $p_i = \pim{\pi_i}$ for $i = 1, \ldots, l$.
\end{restatable}
\begin{proof}
The algorithm simulates in parallel, in polynomial space, at most 6 different runs of $\mathcal{C}$.
Concretely, it guesses, for each $p_i$, whether it corresponds to a pump of the first or second kind.
We assume below that all pumps are of second kind, i.e., pairs $\alpha_i, \beta_i$
-- pumps of first kind are treated in a simpler but similar way.
Note that $N$ is exponential in the size of $\mathcal{A}$ and $\mathcal{B}$, thus counting up to $N$ is possible in polynomial space.
Start by simulating nondeterministically a run $\rho$ of $\mathcal{C}$ of length at most $N$ (call this simulation the \emph{main thread};
in addition, there will also be at most 5 \emph{pump threads}). The algorithm thus maintains the current configuration $c$
of $\mathcal{C}$, and nondeterministically chooses consecutive transitions to execute. In addition, the algorithm updates the Parikh image of the
run executed so far (in polynomial space).
In the course of the simulation the algorithm nondeterministically guesses $l$ points where pump $i$ intervenes
into the simulated run, $i = 1, \ldots, l$.
At each such point the algorithm suspends all the threads and starts a new \emph{pump} thread,
responsible for simulating from the current configuration $c$ of $\mathcal{C}$ some sequence of transitions $\alpha_i$ of length at most $N$.
The simulation of $\alpha_i$ finishes nondeterministically, say in configuration $c_i$, if the counter value of $c_i$ is greater than or
equal to the counter value of $c$ (the effect of $\alpha_i$ is non-negative) and the state of $c_i$ is the same as the state of $c$.
Then the Parikh image $a_i = \pim{\alpha_i}$ is stored (in polynomial space), and the simulation of
all threads (the suspended ones and the new one) is continued, with the proviso that all threads use \emph{the same}
nondeterministically chosen sequence of transitions. Thus the algorithm maintains up to 6 configurations
$c, c_1, \ldots, c_l$ of $\mathcal{C}$, and stores up to 5 vectors $a_1 \ldots a_l$.
Later in the course of the simulation the algorithm also nondeterministically guesses $l$ points where pump $i$
intervenes into the simulated run for the second time, $i = 1, \ldots, l$.
Similarly as above, at each such point the algorithm suspends all other threads
except for the one corresponding to pump $i$, and simulates in that thread some sequence $\beta_i$
of length at most $N$. This simulation terminates only if its current configuration becomes equal to the current
configuration of the main thread, i.e., $c_i = c$, and moreover Parikh image of $\beta_i$, say $b_i$, satisfies
$a_i + b_i = p_i$; and once this happens, the pump thread is cancelled.
Note that one pump thread might be cancelled before another pump thread starts; this, however, does
not introduce any problem, as the total number of all pump threads is at most 5.
The whole simulation finishes when all pump threads are cancelled, the current configuration $c$ is
the final configuration of $\mathcal{C}$, and the Parikh image of the run executed in the main thread equals $b$.
\end{proof}
The last two lemmas imply that $S$ is \pspace-enumerable\xspace. Indeed, it is enough to enumerate all
candidates $b, P$ bounded by $N$, as specified in Lemma~\ref{lem:hofman},
and validate them, using Lemma~\ref{lem:6}.
This completes the proof of Proposition~\ref{prop:effsolv2}.
\subsection{Proof of Proposition~\ref{prop:Reffsolv}}
Recall the definition~\eqref{eq:defR} of the set $R$ as a projection of a conjunction of constraints:
\begin{align*} \begin{aligned}
\defR
\end{aligned} \end{align*}
We are going to prove that $R$ is \pspace-enumerable\xspace.
The algorithm enumerates quadruples of states $q, q', p, p'$. For each fixed quadruple,
it enumerates (by Propositions~\ref{prop:effsolv} and~\ref{prop:effsolv2})
component linear sets of $\textsc{pref}_q^p$, $\textsc{mid}_{q q'}^{p p'}$ and $\textsc{suff}_q^p$.
Thus it is enough to consider three fixed \pspace-enumerable\xspace linear sets
\begin{align*}
L_\textsc{pref} & = b_\textsc{pref} + {P_\textsc{pref}}^* \subseteq \textsc{pref}_q^p \subseteq \mathbb{N}^2 \\
L_\textsc{mid} & = b_\textsc{mid} + {P_\textsc{mid}}^* \subseteq \textsc{mid}_{q q'}^{p p'} \subseteq \mathbb{N}^2\times\mathbb{Z}\times\mathbb{N} \\
L_\textsc{suff} & = b_\textsc{suff} + {P_\textsc{suff}}^* \subseteq \textsc{suff}_q^p \subseteq \mathbb{N}^2 .
\end{align*}
We now treat each of these linear sets as a system of linear Diophantine equations.
For instance, if $P_\textsc{pref} = \set{p_1, \ldots, p_k}$, all pairs $(x, y) \in L_\textsc{pref}$ are described by two equations
\begin{align} \label{eq:coeff}
(x_\textsc{pref}, y_\textsc{pref}) = b_\textsc{pref} + p_1 x_1 + \ldots + p_k x_k,
\end{align}
for fresh variables $x_1, \ldots, x_k$. Note that the number of variables is exponential, but the number of equations
is constant, namely two.
The same can be done with the two other linear sets, yielding 6 more equations involving 6 variables
$x_\textsc{mid}, y_\textsc{mid}, x'_\textsc{mid}, y'_\textsc{mid}, x_\textsc{suff}, y_\textsc{suff}$, plus exponentially many other fresh variables.
Points in $L_\textsc{suff}$ are represented as $(x_\textsc{suff}, y_\textsc{suff})$, while points in $L_\textsc{mid}$ as $(x_\textsc{mid}, y_\textsc{mid}, x'_\textsc{mid}, y'_\textsc{mid})$.
(Since we only consider non-negative solutions of systems of equations, in case of
$L_\textsc{mid}$ two separate cases should be considered, depending on whether the final value $x'_\textsc{mid}$
is non-negative or not. For simplicity we only consider here the case when it is non-negative.)
In addition, we add the following equations (and one variable $x$):
\begin{align*}
x_\textsc{pref} & = x_\textsc{mid} \\
y_\textsc{pref} & = y_\textsc{mid} \\
y'_\textsc{mid} & = y_\textsc{suff} \\
x & = x'_\textsc{mid} - x_\textsc{suff}.
\end{align*}
In total, we have a system $\mathcal{U}$ of 12 equations (8 mentioned before and 4 presented above) involving exponentially many variables,
including these 9 variables
\begin{align} \label{eq:vars}
x_\textsc{pref}, y_\textsc{pref}, x_\textsc{mid}, y_\textsc{mid}, x'_\textsc{mid}, y'_\textsc{mid}, x_\textsc{suff}, y_\textsc{suff}, x.
\end{align}
The values of 3 of them are relevant for us, namely $x_\textsc{pref}, x_\textsc{suff}$ and $x$.
We claim that the projection of the solution set $\sol{\mathcal{U}}$ onto these 3 \emph{relevant} coordinates is \pspace-enumerable\xspace.
To prove this, we use a refinement of Lemma~\ref{lem:hybrid}, which also follows from
Prop.~2 in~\cite{taming}:
\begin{lemma} \label{lem:hybridrefined}
$\sol{\mathcal{U}} = B + P^*$, with every base $b\in B$ and period $p\in P$ bounded by $N$, for a computable bound
$N \in \mathbb{N}$ polynomial in the number of variables in $\mathcal{U}$ and maximal absolute values of coefficients in $\mathcal{U}$,
but exponential in the number of equations in $\mathcal{U}$.
\end{lemma}
By Lemma~\ref{lem:hybridrefined} we know that
the solution set of $\mathcal{U}$ is of the form $B + P^*$, for all bases $b \in B$ and all periods $p\in P$ bounded
exponentially.
Hence each number appearing in a base or a period is representable in polynomial space; the same applies to the projection
onto the 3 relevant coordinates.
Finally, observe that the projections of the sets $B$ and $P$ onto the 3 relevant coordinates
can be enumerated in polynomial space.
Indeed, in order to check whether a vector in $\mathbb{N}^3$ is the projection of an element of $B$ or $P$, we proceed similarly as in the proof of Proposition~\ref{prop:effsolv}:
the algorithm first guesses the values of the other 6 variables among~\eqref{eq:vars},
and then guesses the values of the other (exponentially many) variables on-line,
in the course of enumerating the coefficients $p_i$ (cf.~\eqref{eq:coeff}); the latter is possible as the
sets $L_\textsc{pref}, L_\textsc{mid}$ and $L_\textsc{suff}$ are \pspace-enumerable\xspace.
\section{Final remarks} \label{sec:remarks}
Our main contribution is to show that the regular separability problem for OCN\xspace
is decidable (we also provide a tight complexity estimate for the problem, namely \textsc{PSpace}\xspace-completeness, which we however consider less significant),
but that it becomes undecidable for OCA\xspace (when zero tests are allowed).
We believe that this reveals a delicate decidability borderline.
For instance, recall (cf.~Remark~\ref{rem:oca}) that the concept of $n$-approximation, a core technical ingredient of our decidability proof,
still works for OCA\xspace, including the Approximation Lemma, but is not amenable to effective testing.
Below we discuss in more detail two other aspects:
the relation to the regularity problem for OCN\xspace, and obstacles towards extending our approach to regular separability
of the many-dimensional extension of OCN\xspace, i.e., of VASS\xspace.
\myparagraph{Undecidability of regularity}
Our decidability result contrasts with the undecidability of
the regularity problem for OCN\xspace (given an OCN\xspace $\mathcal{A}$, decide whether $L(\mathcal{A})$ is regular), shown in~\cite{ValkVidal81}.
The proof of~\cite{ValkVidal81} works for OCN\xspace accepting by final configuration
(as assumed in this paper, cf.~Section~\ref{sec:oca}), but not for OCN\xspace accepting solely by final state.
But even in this weaker model the regularity problem is undecidable, as discovered recently by
James Worrell~\cite{BensUndecidability}.
The proof is by reduction from finiteness of the reachability set of a lossy counter machine,
which is an undecidable problem~\cite{cheatsheet10}.
Consider a standard encoding of runs of such a machine as words, and consider the language of
\emph{reverses} of such encodings, i.e., encodings read backward.
It is not difficult to prove that the language is regular if, and only if the reachability set of the lossy counter machine is finite.
Moreover, one can construct an OCN\xspace that recognizes the complement of the language.
\myparagraph{Towards regular separability of VASS\xspace}
Our decidability proof builds upon a notion of $n$-approximation: an OCN\xspace $\mathcal{A}$ is over-approximated
by an NFA $\mathcal{A}_n$ which remembers the counter value of $\mathcal{A}$ exactly only below $n$, and modulo $n$ above this threshold.
Could one define $n$-approximation $\mathcal{V}_n$ of a VASS\xspace $\mathcal{V}$ by treating all the counters of $\mathcal{V}$
in that way?
In particular, such an $n$-approximation would commute with the cross-product:
$\mathcal{V}_n \otimes \mathcal{U}_n = (\mathcal{V} \otimes \mathcal{U})_n$ for two VASS\xspace $\mathcal{V}$ and $\mathcal{U}$
(we extend here naturally the cross-product operation).
The Approximation Lemma (cf.~Lemma~\ref{lem:appr}), quite surprisingly, does not hold for the notion
of over-approximation so defined.
Indeed, the Approximation Lemma would imply that regular separability of
$\mathcal{V}$ and $\mathcal{U}$ is equivalent to disjointness of languages of $\mathcal{V}_n$ and $\mathcal{U}_n$, for some $n>0$
(cf.~Corollary~\ref{cor:appr}), which is the same as
$L(\mathcal{V}_n \otimes \mathcal{U}_n) = L((\mathcal{V} \otimes \mathcal{U})_n) = \emptyset$ for some $n > 0$;
and finally, the latter condition would be equivalent, again due to the
Approximation Lemma, to $L(\mathcal{V} \otimes \mathcal{U}) = \emptyset$, which is the same
as the languages of $\mathcal{V}$ and $\mathcal{U}$ being disjoint.
Thus regular separability of $\mathcal{V}$ and $\mathcal{U}$ would be equivalent to disjointness of $\mathcal{V}$ and $\mathcal{U}$, which is not true in general.
The decidability status of the regular separability problem for \vass languages\xspace thus remains open.
\section*{Abstract}
{\bf
The study of non-equilibrium dynamics of many-body systems after a quantum quench received a considerable
boost and a deep theoretical understanding from the path integral formulation in imaginary time.
However, the celebrated problem of a quench in the Luttinger parameter of a one dimensional quantum critical system (massless quench)
has so far only been solved in the real-time Heisenberg picture.
In order to bridge this theoretical gap and to understand massive and massless quenches on the same footing,
we study the problem of a gaussian field characterized by a coupling parameter $K$ within a strip
and a different one $K_0$ in the remaining two semi-infinite planes.
We give a fully analytical solution using the electrostatic analogy with the problem of a dielectric material within a strip surrounded by an infinite medium of different dielectric constant, and exploiting the method of charge images.
After analytic continuation, this solution allows us to obtain all the correlation functions after the quench within a path integral approach in imaginary time,
thus recovering and generalizing the results in real time.
Furthermore, this imaginary-time approach establishes a remarkable connection between the quench
and the famous problem of the conductivity of a Tomonaga-Luttinger liquid coupled to two semi-infinite leads: the two are in fact related by a rotation of the spacetime coordinates.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
The gaussian free field is arguably the simplest field theory.
Its euclidean action in 2D reads
\begin{equation}\label{action}
S = \frac{1}{2 \pi K } \int{\rm d} y \int {\rm d}z \left[ (\partial_y \phi)^2+(\partial_z \phi)^2 \right]
\end{equation}
where $\phi$ is a scalar field, and $K$ a (uniform) parameter related to the compactification radius of the bosonic field.
The gaussian free field can be solved by a variety of methods. In fact, using that the theory is quadratic, all correlation functions are determined in terms of its propagator only. Moreover, one can also exploit the fact that the theory is conformally invariant, which allows one to use all the powerful methods of conformal field theories (CFTs)~\cite{yellowbook}.
Simple as it is, it has nonetheless found highly non-trivial applications in many-body quantum physics.
In 1D, in particular, it captures all the universal properties of interacting fermions and bosons by virtue of the Tomonaga-Luttinger liquid (TLL) paradigm~\cite{luttinger1963exactly, haldane1981effective, giamarchi2004}, in which case $K$ is referred to as Luttinger parameter and encodes the entire information about the interaction strength among particles.
In this case, the action \eqref{action} enters in the path integral formulation of equilibrium problems, namely in the study of TLLs at zero and finite temperature.
Moreover, the non-equilibrium properties of the same class of systems can be understood via a path integral formulation on a different geometry
\cite{Calabrese_QuenchesCorrelationsPRL,Calabrese_QuenchesCorrelations} (see Sec.~\ref{sec:pathintegral_quenches} below).
More recently, various inhomogeneous generalisations of \eqref{action} were studied in connection to inhomogeneous and time-dependent problems in TLLs,
both in- \cite{allegra2016inhomogeneous,dubail2017conformal,brun2017one,dubail2017emergence,brun2018inhomogeneous,bastianello2020entanglement,scopa2020one}
and out-of-equilibrium \cite{dubail2017conformal,ruggiero2019conformal,ruggiero2020quantum,ruggiero2021quantum,scopa2021exact,scopa2021exact2,collura2020domain,langmann_nonuniformtemperature,langmann2018diffusive,moosavi2019inhomogeneous,rsc-22,sc-08},
mainly aiming at describing gases in traps and their non-equilibrium dynamics
(see Section V in Ref.~\cite{alba2021generalized} for a more comprehensive review of the literature).
However, if we consider an inhomogeneous Luttinger parameter $K \to K(y,z)$, conformal invariance is generically lost and,
while the theory remains quadratic, its propagator generically has to be determined numerically, as done, for example, in Ref.~\cite{brun2018inhomogeneous}; in some cases, however, it can be treated analytically, as e.g. in Refs.~\cite{safi1995transport,safi_prop,safi_ac}.
We mention that TLLs are also characterised by a second parameter $u$, known as sound velocity.
However, an inhomogeneous $u \to u (y,z)$ can be reabsorbed in a change of metric in Eq. \eqref{action}, as explicitly worked out, e.g., in~\cite{allegra2016inhomogeneous,dubail2017conformal,brun2018inhomogeneous}.
In this work we consider a particular situation, where $K(y,z)$ is chosen to be piecewise-homogeneous, as in Fig. \ref{Fig_massless}, with a central strip characterised by a Luttinger parameter $K$ and two semi-infinite planes with parameter $K_0$.
In this case a fully analytical solution can be found by exploiting the electrostatic analogy with the problem of a dielectric material with piece-wise homogeneous dielectric constant, and relying on the method of charge images.
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{initial_geometry}
\caption{Space-time representation of a piecewise-homogeneous 2D gaussian free field \eqref{Sinhom} with alternating value of the coupling parameter, $K_0 - K - K_0$. Finding the propagator of the theory is equivalent to finding the potential generated by a unit charge $Q_0^+=1$ within a material characterized by dielectric constant $\frac{1}{\pi K}$ (yellow strip) surrounded by a material with different dielectric constant $\frac{1}{\pi K_0}$ (orange). In the quench problem (Sec.~\ref{sec:results_quench}), $z=x$ is the spatial direction, while $y=\tau$ is the (imaginary) time direction. In the problem of a TLL coupled to leads (Sec.~\ref{sec:LL_leads}), instead, the role of the coordinates is reversed.
}\label{Fig_massless}
\end{center}
\end{figure}
Our motivation here is again the study of quantum quenches in TLLs.
In fact, back in 2006, Refs.~\cite{Calabrese_QuenchesCorrelationsPRL,Calabrese_QuenchesCorrelations} provided a solution for
the quantum quenches in general CFTs starting from a massive (finite correlation) state.
The quench was studied using an imaginary time path integral approach
and conformal transformations, together with techniques of boundary CFTs ~\cite{cardy1984conformal,cardy2004boundary}, which inherit
methods of classical electromagnetism.
These papers provided some very general results for the space-time decay of correlation functions and generated an enormous following literature
(see Ref.~\cite{calabrese2016quantum} for a review).
Unfortunately, the method of Refs.~\cite{Calabrese_QuenchesCorrelationsPRL,Calabrese_QuenchesCorrelations}
does not apply to quenches starting from an initial critical (massless) state.
The latter, instead, has been solved, always back in 2006~\cite{Cazalilla_MasslessQuenchLL}, for the special case of a Luttinger model (with central charge $c=1$), i.e., a quench in the Luttinger parameter $K_0 \to K$ (see also \cite{ic-09,ic-10,cc-16,cce-15,rsm-12,dbz-12,phd-13,cbp-13}), but relying on the operator formalism (via Bogoliubov transformations).
Since then, however, a characterization in terms of a path integral approach has been lacking, and the question (already posed in Ref.~\cite{Cazalilla_MasslessQuenchLL})
of whether it is possible to extend the method of Refs.~\cite{Calabrese_QuenchesCorrelationsPRL,Calabrese_QuenchesCorrelations} to account for massless initial states, in TLLs or general CFTs, remained unanswered.
In this work we bridge this gap in the case of TLLs, by first arguing that the path integral formulation of massless quenches requires to deal with the action of an inhomogeneous 2D gaussian free field with piece-wise homogeneous Luttinger parameter.
Once established such relation, we can use the electrostatic solution of the latter problem to recover all the correlation functions after the quench from a massless state within a path integral approach in imaginary time.
In order to really put massive and massless quenches on the same footing, we further show that the massive quench can also be solved directly via a very similar electrostatic analogy, where only the values of the image charges change, while their positions remain the same.
An interesting outcome of this path integral perspective on massless quenches is their connection with the celebrated problem of a Tomonaga-Luttinger liquid with a TLL parameter $K$ coupled to two semi-infinite leads with TLL parameter $K_0$.
For this problem it was shown \cite{safi1995transport,maslov1995landauer,ponomarenko1995renormalization} that, contrary
to what was initially claimed~\cite{kane_qwires_tunnel_lettre},
the conductance depends on the TLL parameter $K_0$ of the \emph{leads} and not on that of the system.
The two settings, of the quench and of the conductance with leads, turn out to be simply related by an exchange of space and (imaginary) time directions, as will become clear in the following.
The manuscript is organized as follows.
In Section~\ref{sec:pathintegral_quenches} we summarize the path integral approach to quenches in TLLs.
In Section~\ref{Sec_gen_prob} we report the electrostatic solution of the piece-wise homogeneous gaussian free field (whose derivation is detailed in Appendix~\ref{App_sol}).
In Section~\ref{sec:results_quench} we apply such solutions to get results for massless quenches in TLLs.
In Section~\ref{appB:massive_electro} we report on a similar electrostatic solution of the massive quench.
Section~\ref{sec:LL_leads} is an independent section about the problem of a TLL coupled to leads.
In Section \ref{sec:extensions} we discuss the problems to which our method straightforwardly applies, and explicitly work out the example of the massless quench in a system of finite size with both periodic and open boundary conditions.
We conclude in Section~\ref{sec:conclusions} with a summary, some further comments on simple generalizations of our results, and future perspectives.
\section{Path integral approach to quenches in Tomonaga-Luttinger liquids} \label{sec:pathintegral_quenches}
In this section we introduce the path integral approach in imaginary time to quenches in Tomonaga-Luttinger liquids. Namely, we consider the problem of a one-dimensional many-body quantum system initialized in the state $|\psi_0 \rangle$ which is then let evolve with the Luttinger liquid hamiltonian
\begin{equation} \label{HLL}
\hat{H} [\hat{\phi}, \hat{\Pi}] = \frac{u}{2\pi } \int dx \; \left[ \frac{1}{K} (\partial_x \hat{\phi} )^2 +K (\pi \hat{\Pi} )^ 2 \right]
\end{equation}
with the fields $\hat{\phi}$ and $\hat{\Pi}$ satisfying the commutation relations $[\hat{\phi} (x), \hat{\Pi} (x')] = i \hbar \delta (x-x')$, and where $\{u,K \}$ are the sound velocity and Luttinger parameter, respectively, which fully define the model. Below we are going to set $\hbar =1$ and $u=1$.
We are interested in computing the expectation values of operators at some time $t$ after the quench.
In particular, the expectation value of a local operator $\hat{O}(x)$ at time $t$ can be written as
\begin{equation}\label{O}
\langle \hat{O}(x,t) \rangle = \lim_{\epsilon\to 0} \frac{ \langle \psi_0 | e^{- \hat{H} \epsilon} e^{i \hat{H} t} \hat{O}(x) e^{-i \hat{H} t} e^{- \hat{H} \epsilon} |\psi_0\rangle}
{\langle \psi_0 | e^{-2 \epsilon \hat{H}} | \psi_0 \rangle}
\end{equation}
where we introduced a damping factor $e^{-\epsilon \hat{H}}$, with $\epsilon>0$, in such a way to make the path-integral expression convergent.
In the path-integral formalism in imaginary time, the numerator in Eq.~\eqref{O} can be represented as
\begin{equation} \label{pathintegral}
\int [D \phi (x, \tau)] \langle \phi (x, \tau_1) | \psi_0 \rangle \langle \psi_0 | \phi (x, \tau_2) \rangle O(x, \tau=0) e^{- \int_{\tau_1}^{\tau_2} d \tau \mathcal{L} (\tau)}
\end{equation}
where $ \int_{\tau_1}^{\tau_2} d \tau \mathcal{L} (\tau)$ is the euclidean action associated to the TLL, $| \phi (x, \tau) \rangle$ is the coherent states basis, and $\tau_{1,2}$ are supposed to be real and only at the end should be analytically continued to $ \pm \epsilon - i t$.
At this point, Eq.~\eqref{pathintegral} represents a path-integral over a strip of width $\tau_1 - \tau_2 = 2 \epsilon$, with the operator $\hat{O}(x,\tau=0)$ in path integral formalism (namely, $O(x, \tau=0)$) inserted at $\tau=0$, while the initial state $|\psi_0 \rangle$ plays the role of boundary condition.
Depending on the nature of the initial state $|\psi_0 \rangle$, however, such geometry gets modified as we are now going to explain.
\subsection{Massive quench}
If the initial state $|\psi_0 \rangle$ is a massive state, namely has a finite correlation length, using renormalization group (RG) arguments~\cite{diehl1986theory}, it can be replaced by the RG-boundary state $|B \rangle$ to which it flows. Effectively, this is taken into account at leading order by introducing an \emph{extrapolation length} $\tau_0$, namely one makes the replacement $|\psi_0 \rangle \propto e^{-\tau_0 \hat{H}} | B \rangle$, so that Eq.~\eqref{O} becomes
\begin{equation}\label{Omassive}
\langle \hat{O}(x,t) \rangle \simeq
\frac{\langle B | e^{- \hat{H} \tau_0} e^{i \hat{H} t} \hat{O}(x) e^{-i \hat{H} t} e^{- \hat{H} \tau_0} |B\rangle}
{\langle B|e^{-2 \hat{H} \tau_0} |B \rangle}
\end{equation}
where, since $\tau_0 >0$, we could safely take the limit $\epsilon \to 0$. The equation above can be represented in a similar way in the $(x,\tau)$ plane (with $\tau$ being the imaginary time) as a path-integral over a strip, but this time of width $2\tau_0$ with boundary condition fixed by $|B \rangle$.
As pointed out in Refs. \cite{Calabrese_QuenchesCorrelationsPRL,Calabrese_QuenchesCorrelations}, since both the theory \eqref{HLL} and the boundary state $|B\rangle$ are conformally invariant, one can use conformal maps and transformation of operators under those to evaluate the expectation value in the r.h.s. of Eq.~\eqref{Omassive} exactly.
In particular this path-integral approach paved the way for the determination of several entanglement measures \cite{cc-05,ctc-14,d-17}
that otherwise are very cumbersome to obtain even numerically in the operator approach \cite{bastianello2020entanglement,mck-22b,mck-22,ek-22}.
\subsection{Massless quench}
A different class of quenches is that starting from massless states, where, e.g., $|\psi_0 \rangle$ is the ground state of the Tomonaga-Luttinger liquid hamiltonian $\hat{H}_0$
with $K_0 \neq K$ (cf. \eqref{HLL}).
This means that in this case we are looking at a quench in the Luttinger parameter, i.e., $K_0\to K$.
While this problem has been solved in Ref.~\cite{Cazalilla_MasslessQuenchLL} by using Bogoliubov transformations,
the explicit formulation and solution in the path-integral approach has not yet been worked out. This is what we do in the following.
The initial state can now be viewed as $|\psi_0\rangle \propto \lim_{\beta\to\infty} e^{-\beta \hat{H}_0} |\psi\rangle$ with some generic state $|\psi\rangle$ (which has a non-zero overlap with the ground state). Indeed, the action of the limit in $\beta$ is to project onto the ground state of $\hat H_0$.
Eq.~(\ref{O}) therefore becomes
\begin{equation}\label{Omassless}
\langle \hat{O}(x,t) \rangle =
\lim_{\beta \to \infty}
\frac{\langle \psi | e^{- \hat{H}_0 \beta} e^{- \hat{H} \epsilon} e^{i \hat{H} t} \hat{O}(x) e^{-i \hat{H} t} e^{- \hat{H} \epsilon} e^{- \hat{H}_0 \beta} |\psi\rangle}
{\langle \psi| e^{- 2\hat{H}_0 \beta} |\psi\rangle}
\end{equation}
which is the path-integral over a plane with an inhomogeneous Luttinger parameter
(the imaginary evolution is in fact governed by two different hamiltonians, $\hat{H}$ and $\hat{H}_0$).
Specifically, the Luttinger parameter is equal to $K$ in a slab of size $2 \epsilon$ and $K_0$ otherwise.
The geometry is the one shown in Fig.~\ref{Fig_massless}, with $(z,y) =(x,\tau)$, and $L=2\epsilon$.
The discontinuity is along the imaginary time, while along the spatial direction the Luttinger parameter is uniform.
At this point let us stress an important difference between the massive and massless quench.
The action of the massive quench is defined within a strip of finite width with appropriate boundary conditions. This allows one to
map (conformally) the geometry to the upper half plane and {\it then} use the analogy with electrostatics and the method of charge images to solve the problem.
The geometry of the massless quench is quite different because it concerns a strip (whose length will be sent to zero at the end)
within two semi-infinite planes.
This means that an analogous transformation as the one used for the massive quench to map the strip into the upper half plane
cannot be used.
For this reason here we will approach the problem via its electrostatic analogy directly for the strip, without invoking any conformal mapping.
For comparison, we will also show that the same strategy can be applied to the massive quench.
\section{The general problem}\label{Sec_gen_prob}
In the previous section we reduced the problem of studying a massless quench to that of computing the path integral of an \emph{inhomogeneous} free gaussian theory in two dimensions, whose Euclidean action reads
\begin{equation} \label{Sinhom}
S = \frac{1}{2\pi} \int_{\Omega} {\rm d} y {\rm d}z \frac{1}{K(y,z)} \left[ (\partial_y \phi)^2+(\partial_z \phi)^2 \right],
\end{equation}
with $\Omega = \mathbb{R}^2$ defining the space where the theory lives.
In our special case, $K(y,z)$ is a piece-wise homogeneous function, namely it has the form (see Fig.~\ref{Fig_massless})
\begin{equation}
\label{Kyz}
K(y,z) =
\begin{cases}
K & 0<y<L \\
K_0 & \text{otherwise}
\end{cases}
\end{equation}
so it is constant in the $z$-direction and has an alternating discontinuity in $y$.
Since the theory is quadratic, solving it amounts to finding the propagator $G(y,z;y',z')=\langle \phi(y,z) \phi(y',z')\rangle$. The latter
satisfies the Poisson equation:
\begin{equation}\label{Eq_Poisson}
-\left[ \partial_y \frac{1}{K(y,z)} \partial_y + \partial_z \frac{1}{K(y,z)} \partial_z \right] G(y,z;y',z') = \pi \delta(y-y')\delta(z-z')
\end{equation}
and, in the electrostatic analogy, represents the electrostatic potential at position $(y,z)$ generated by a unit charge in $(y',z')$.
Because of the form \eqref{Kyz} of $K(y, z)$, the propagator $G$ satisfies the Poisson equation (\ref{Eq_Poisson}) with constant $K$
in each domain and at the boundaries one has to impose the continuity of
$G(y,z;y',z')$ and $\frac{1}{K(y,z)} \partial_y G(y,z;y',z')$ when $y \to 0^{\pm}$ and $y \to L^{\pm}$.
These are the standard conditions that an electrostatic potential has to satisfy
at the boundary between two materials with different dielectric constant $\frac{1}{\pi K_0}$ and $\frac{1}{\pi K}$ \cite{jackson1999classical}.
In the following, we will compute the propagator of this inhomogeneous system via the method of charge images.
It is then worth recalling that the 2D electrostatic potential generated by a point charge $Q_P$ at $(y_P, z_P)$ in a uniform dielectric medium is
\begin{equation} \label{V2D}
V_{2D}^{P} (y, z;y_P,z_P) = - Q_P \frac{K}{4} \log\frac{|(y-y_P)^2+(z-z_P)^2|}{a^2}
\end{equation}
with $a$ a cutoff distance;
this logarithm is also the well-known form of the free-boson propagator in CFT.
Note that while this potential is infinite at $(y,z)=(y_P,z_P)$, in field theory this divergence is usually regularized via a short-distance cutoff.
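As a quick numerical sanity check (ours, not part of the original derivation), one can verify that the potential \eqref{V2D} is harmonic away from the charge and that the outward flux of $-\nabla V$ through any circle enclosing the charge equals $\pi K Q_P$, consistently with the normalization of the Poisson equation \eqref{Eq_Poisson} at constant $K$; the numerical values of $K$, $Q_P$, $a$ below are arbitrary test choices.

```python
import math

# Sanity check of the 2D potential of Eq. (V2D): it is harmonic away from the
# charge, and the outward flux of E = -grad V through a circle around the
# charge is pi*K*Q, matching the Poisson equation with constant K.
K, Q, a = 1.7, 1.0, 1.0

def V(y, z, yP=0.0, zP=0.0):
    return -Q * K / 4 * math.log(((y - yP)**2 + (z - zP)**2) / a**2)

h = 1e-4
y0, z0 = 0.8, -0.5  # any point away from the charge
lap = (V(y0 + h, z0) + V(y0 - h, z0)
       + V(y0, z0 + h) + V(y0, z0 - h) - 4 * V(y0, z0)) / h**2

r, N = 0.3, 2000    # circle around the charge, trapezoidal rule
flux = 0.0
for k in range(N):
    th = 2 * math.pi * k / N
    dVdr = (V((r + h) * math.cos(th), (r + h) * math.sin(th))
            - V((r - h) * math.cos(th), (r - h) * math.sin(th))) / (2 * h)
    flux += -dVdr * (2 * math.pi * r / N)
```

The Laplacian vanishes (up to finite-difference error) away from the source, while the flux reproduces the delta-function source term.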
\subsection{Solution}
The method of charge images amounts to finding the solution of the original problem via the introduction of fictitious charges
which guarantee the right boundary conditions at $y=0,L$ and so, by uniqueness of the solution, generate the potential of interest \cite{jackson1999classical}.
While the solution for two half planes with different dielectric constants can be directly found
in \cite{jackson1999classical}, for the problem of the strip one has to iterate the method by
finding the correct set of charges that impose the desired boundary conditions at the
edges of the strip and guess its asymptotic solution.
In Appendix \ref{App_sol} we outline the first steps of such iteration, while below we only state the full solution.
This solution can likely be found in some textbook, but it is easier to re-derive it than to search for it.
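The building block of this iteration, the single-interface (half-plane) problem of \cite{jackson1999classical}, can be checked explicitly: on the charge's side ($y>0$, medium $K$) the potential is that of the charge plus an image $-\lambda$ at the reflected point, while on the other side ($y<0$, medium $K_0$) it is that of a single charge $1+\lambda$ at the original position. The sketch below (ours; the transmitted-charge value $1+\lambda$ is the standard textbook result, not stated explicitly above) verifies numerically that this ansatz satisfies both matching conditions at $y=0$.

```python
import math

K, K0 = 2.0, 0.5
lam = (K - K0) / (K + K0)
yp, zp = 0.7, 0.0   # unit charge at (y', z') in the upper medium K

def pot(Kmed, q, ys, y, z):
    # 2D potential (a = 1) of charge q at (ys, zp) in a uniform medium Kmed
    return -q * Kmed / 4 * math.log((y - ys)**2 + (z - zp)**2)

def dpot_dy(Kmed, q, ys, y, z):
    return -q * Kmed / 4 * 2 * (y - ys) / ((y - ys)**2 + (z - zp)**2)

def V_in(y, z):    # y > 0: the charge plus image -lambda at (-y', z')
    return pot(K, 1.0, yp, y, z) + pot(K, -lam, -yp, y, z)

def V_out(y, z):   # y < 0: transmitted charge 1 + lambda at (y', z')
    return pot(K0, 1.0 + lam, yp, y, z)

# dielectric matching at y = 0: continuity of G and of (1/K) dG/dy
zs = (-1.3, 0.4, 2.0)
mismatch_G = max(abs(V_in(0.0, z) - V_out(0.0, z)) for z in zs)
mismatch_D = max(abs(dpot_dy(K, 1.0, yp, 0.0, z) / K
                     + dpot_dy(K, -lam, -yp, 0.0, z) / K
                     - dpot_dy(K0, 1.0 + lam, yp, 0.0, z) / K0) for z in zs)
```

Both mismatches vanish identically, which is the first step of the iteration carried out in Appendix \ref{App_sol}.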
We find that the potential inside the strip generated by a unit charge at position $(y',z')$ inside the strip (i.e., the propagator)
can be interpreted as the potential of a uniform medium of dielectric constant $\frac{1}{\pi K}$ generated by an infinite sequence of positive and negative charges with values and positions as follows
\begin{equation} \label{Q+}
Q_{2|n|}^+ = \lambda^{2|n|} >0 \qquad \qquad (y_{n}^+ = y'+ 2 L n, z') \quad\text{for}\quad n \in {\mathbb Z}
\end{equation}
and
\begin{equation} \label{Q-}
\begin{array}{l}
Q_{2|n|-1}^- =-\lambda^{2|n|-1} \qquad \qquad
\begin{cases}
(y_{n}^- = - y' + 2 n L, z') &\text{for}\quad n > 0 \\
(y_{n}^- = - y' + 2( n+1) L, z') &\text{for}\quad n < 0
\end{cases}
\end{array}
\end{equation}
with $\lambda=(K-K_0)/(K+K_0)$. Note that, except for $Q_0^+ (=1)$, they all come in pairs.
In Fig. \ref{Fig_charges_massless} we show the location of the charges.
This implies that the overall potential can be found as the superposition of the logarithmic
potential generated by each charge in a medium with dielectric constant $\frac{1}{\pi K}$.
Note that, as expected, if $K=K_0$ there is only the charge in the slab with unit value.
The propagator (electrostatic potential) is thus given by
\begin{equation} \label{propagator}
\begin{array}{l}
\displaystyle
G(y,z ; y',z') = \langle \phi(y,z) \phi(y',z') \rangle = - \frac{K}{4} \sum_{n\in \mathbb{Z}} Q_{2|n|}^+ \log \frac{| (z-z')^2+(y-y_n^+)^2| }{a^2}
\\ \vspace{-0.2cm} \\
\displaystyle \qquad\qquad\qquad
- \frac{K}{4} \sum_{n \in \mathbb{Z} \backslash \{0\}} Q_{2|n|-1}^- \log \frac{| (z-z')^2+(y- y_n^-)^2|}{a^2} + C
\end{array}
\end{equation}
where $C$ is a constant. This is fixed by imposing some condition for $(y, z)$ on the boundary $\partial \Omega$ of the space $\Omega$ where the theory lives. In this case, as mentioned, $\Omega = \mathbb{R}^2$ and for $y,z \to \pm \infty$ the logarithm in \eqref{propagator} would diverge. Therefore we need to regularize $G(y,z ; y',z') $ by considering an infinite additive constant ($|C| \to \infty$).
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{charges_massless}
\caption{Position of the charges defined in Eqs.~\eqref{Q+}-\eqref{Q-} that solve the piecewise-homogeneous gaussian field problem with alternating value of the coupling parameter, $K_0 - K - K_0$ (see Eqs. \eqref{Sinhom}-\eqref{Eq_Poisson} in the main text).
}\label{Fig_charges_massless}
\end{center}
\end{figure}
\section{Results for the massless quench} \label{sec:results_quench}
In order to specialize the general solution of Sec.~\ref{Sec_gen_prob} to the massless quench problem, we take $z=x$, the spatial direction, and $y=\tau$, the imaginary time axis.
As we mentioned, for this problem we work with a strip of size $L=2\epsilon$ and eventually will take $\epsilon\to0$.
Note that if we send $L\to 0$,
all positive charges collapse onto the same point,
and so do all the negative charges (cf. Eqs.\eqref{Q+}-\eqref{Q-}).
This implies that if we place a unit positive charge in $(x,\tau)$
we will get two effective charges sitting at $(x,\tau)$ and $(x,-\tau)$ of charge
\begin{equation}\label{Qpiu}
Q^+_{tot} = 1+ 2\lambda^2 + 2\lambda^4 +\dots = 2\sum_{n=0}^{\infty} \lambda^{2 n}-1
= \frac{1}{2} \left(\frac{K_0}{K}+\frac{K}{K_0} \right) = \mu^+
\end{equation}
\begin{equation}\label{Qmeno}
Q^-_{tot} = - 2( \lambda+ \lambda^3 + \lambda^5 +\dots) = - 2 \lambda \sum_{n=0}^{\infty} \lambda^{2 n}
= \frac{1}{2} \left(\frac{K_0}{K}-\frac{K}{K_0} \right) = - \mu^-
\end{equation}
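These resummations, and the collapse of the image series for a thin strip, can be checked numerically. The sketch below (ours) truncates the series \eqref{Q+}-\eqref{Q-} and compares the resulting potential, for small $L$, with that of the two effective charges $\mu^+$ and $-\mu^-$ of Eqs.~\eqref{Qpiu}-\eqref{Qmeno}:

```python
import math

K, K0 = 2.0, 0.5
lam = (K - K0) / (K + K0)
N = 200   # truncation order; lam**(2N) is utterly negligible here

# effective charges of Eqs. (Qpiu)-(Qmeno)
mu_p = 0.5 * (K0 / K + K / K0)
mu_m = 0.5 * (K / K0 - K0 / K)
Qp = 1 + 2 * sum(lam**(2 * n) for n in range(1, N))
Qm = -2 * sum(lam**(2 * n - 1) for n in range(1, N))

def pot(q, ys, y, z):
    # 2D potential (a = 1) of charge q at (ys, 0) in the medium K
    return -q * K / 4 * math.log((y - ys)**2 + z**2)

# thin strip: source at (y', 0) with 0 < y' < L, observed far away
L, yp = 1e-5, 0.3e-5
y_obs, z_obs = 0.7, 1.1

full = sum(pot(lam**(2 * abs(n)), yp + 2 * L * n, y_obs, z_obs)
           for n in range(-N, N + 1))
full += sum(pot(-lam**(2 * n - 1), -yp + 2 * L * n, y_obs, z_obs)
            for n in range(1, N + 1))
full += sum(pot(-lam**(2 * abs(n) - 1), -yp + 2 * L * (n + 1), y_obs, z_obs)
            for n in range(-N, 0))
eff = pot(mu_p, yp, y_obs, z_obs) + pot(-mu_m, -yp, y_obs, z_obs)
```

The truncated sums reproduce $\mu^\pm$ to machine precision, and the full image potential agrees with the two-effective-charge potential up to corrections of order $L$ over the observation distance.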
We want to compute $k$-point correlation functions of operators in the theory at time $t>0$ after the quench. The operators we consider are the derivatives, $\partial_x \hat{\phi} (x, t) $ and $\partial_t \hat{\phi} (x, t)$, and the vertex operators $\hat{V}_{\alpha} (x, t) = : e^{i \alpha \hat\phi(x,t)}: $.
We start by considering the correlation functions of the derivative operators at equal time. We focus on $k=2$, while the generalisation to $k>2$ is straightforward.
In this case, we define
\begin{equation}
J^{(2)} (x, t ; 0, t) \equiv \langle \partial_x \hat \phi (x, t) \partial_{x'} \hat \phi (x', t) \rangle |_{x'=0},
\quad
D^{(2)} (x, t ; 0, t ) \equiv \langle \partial_{t} \hat \phi (x, t) \partial_{s} \hat \phi (0, s) \rangle |_{ s=t}.
\end{equation}
Moving to imaginary time, they can be readily written in terms of the propagator \eqref{Eq_Poisson} specialised to the quench as
\begin{equation}
J^{(2)} (x,\tau ; 0, \tau) = \partial_x \partial_{x'} G(\tau, x; \tau, x') |_{x'=0},
\quad
D^{(2)} (x,\tau ; 0, \tau) = - \partial_{\tau} \partial_{\sigma} G(\tau, x; \sigma, 0) |_{\sigma=\tau}
\end{equation}
In the limit $\epsilon \to 0$, and after considering the analytic continuation $\tau \to i t$, we get the following equal-time correlations
\begin{eqnarray}
J^{(2)} (x,t ; 0, t) &=& -\frac{K \mu^+}{2} \frac{1}{x^2} + \frac{K \mu^-}{4}
\left[ \frac{1}{(x-2t)^2} + \frac{1}{(x+2t)^2}\right] \\
D^{(2)} (x,t ; 0, t) &=& - \frac{K \mu^+}{2} \frac{1}{x^2} - \frac{K \mu^-}{4}
\left[ \frac{1}{(x-2t)^2} + \frac{1}{(x+2t)^2}\right]
\end{eqnarray}
which show a power law behavior, as expected from the massless nature of the initial state.
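These expressions can be verified numerically from the effective two-charge form of the propagator discussed above. The sketch below (ours) takes mixed finite differences of the propagator directly at complex $\tau = i t$ and compares with the formulas just quoted:

```python
import cmath

K, K0 = 2.0, 0.5
mu_p = 0.5 * (K0 / K + K / K0)
mu_m = 0.5 * (K / K0 - K0 / K)

def G_eff(tau, x, taup, xp):
    # eps -> 0 propagator: effective charges mu_+ at (taup, xp), -mu_- at (-taup, xp)
    return (-K / 4) * (mu_p * cmath.log((x - xp)**2 + (tau - taup)**2)
                       - mu_m * cmath.log((x - xp)**2 + (tau + taup)**2))

x, t, h = 3.0, 1.0, 1e-4
tau = 1j * t   # analytic continuation tau -> i t

# J^(2) = d_x d_x' G at x' = 0 (mixed central differences)
J = (G_eff(tau, x + h, tau, h) - G_eff(tau, x + h, tau, -h)
     - G_eff(tau, x - h, tau, h) + G_eff(tau, x - h, tau, -h)) / (4 * h * h)

# D^(2) = - d_tau d_sigma G at sigma = tau
D = -(G_eff(tau + h, x, tau + h, 0) - G_eff(tau + h, x, tau - h, 0)
      - G_eff(tau - h, x, tau + h, 0) + G_eff(tau - h, x, tau - h, 0)) / (4 * h * h)

J_pred = -K * mu_p / 2 / x**2 + K * mu_m / 4 * (1 / (x - 2*t)**2 + 1 / (x + 2*t)**2)
D_pred = -K * mu_p / 2 / x**2 - K * mu_m / 4 * (1 / (x - 2*t)**2 + 1 / (x + 2*t)**2)
```

Since derivatives of an analytic function are direction-independent, the finite differences can be taken along the real axis even at the continued point $\tau = it$.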
We then move to the correlation functions of vertex operators
\begin{equation} \label{vertexn}
C_{ \{ \alpha_j \}}^{(k)} \equiv \langle \hat V_{\alpha_1} (x_1 ,\tau) \cdots \hat V_{\alpha_k} (x_k,\tau) \rangle \; .
\end{equation}
The easiest way to compute it is to recall that, in the electrostatic analogy, the logarithm of \eqref{vertexn} is (up to a sign) the electrostatic potential energy of a system of point charges $\{ \alpha_1, \cdots, \alpha_k \}$~\cite{brun2018inhomogeneous}, i.e., $\log C_{ \{ \alpha_j \}}^{(k)} = - U^{(k)}_{\{ \alpha_j \}} $.
Therefore, we need to modify the source term (r.h.s.) of the Poisson equation \eqref{Eq_Poisson}, that now takes the form of a sum of delta functions at the positions of the charges.
By linearity, the solution of such modified equation, namely the total potential $G(y, z)$, will be given by
\begin{equation}
G(y, z) = \sum_{i=1}^k \alpha_i G (y, z ; y_i, z_i)
\end{equation}
Now, the total electrostatic potential energy of the system
can be written as
\begin{equation}\label{eq_U}
U^{(k)}_{\{ \alpha_j \}} = \frac{1}{2} \int_{\Omega} {\rm d y} {\rm d} z \, E(y,z) \cdot D(y,z) - \frac12 \sum_{i =1}^{k} \alpha_i \lim_{(y,z)\to (y_i, z_i)} V_{2D}^{i}(y, z;y_i,z_i)
\end{equation}
where we defined the electric field $E = - \nabla G(y,z)$, and the displacement field $D(y,z) = \frac{1}{\pi K(y,z)} E(y,z)$ \cite{jackson1999classical}.
The last term (cf. Eq.~\eqref{V2D}) cancels the self-interaction contained in the first term; it is needed because in our case we are dealing with point-like charges.
The integration can be carried out by parts
\begin{equation}
\int_{\Omega} {\rm d y} {\rm d} z \, E(y,z) \cdot D(y,z) =
\sum_{i =1}^k \alpha_i G (y_i , z_i)
\end{equation}
where we used that $G(y, z) = 0$ on the boundary $\partial \Omega$, and we evaluated $\nabla \cdot D $ via the Poisson equation.
Therefore
\begin{equation}
\label{eq_U2}
U^{(k)}_{\{ \alpha_j \}} =
\frac{1}{2} \sum_{i =1}^k \alpha_i \lim_{(y,z)\to (y_i, z_i)} \left[ G (y , z) - V_{2D}^{i}(y, z;y_i,z_i) \right]
\equiv \frac{1}{2} \sum_{i =1}^k \, \alpha_i G_i (y_i,z_i)
\end{equation}
where we defined the function $G_i(y_i,z_i)$ as the potential generated by all the charges (real and image), denoted as the set $ \{ \alpha_j Q^{s}_m \} $ with $s=\pm$, except the charge $\alpha_i Q^{+}_0 = \alpha_i$ itself,
i.e.,
\begin{equation}\label{Gi}
G_i (y, z) = - \frac{K}{2} \left( \sum_{ ( j, m, s ) \backslash ( i, 0, + )} \alpha_j Q^{s}_m \log \frac{|r_{i,0}^+ -r_{j,m}^s |}{a} + C \right)
\end{equation}
where $r_{j, m}^s = (y_{j,m}^s, z_{j,m}^s)$ are the positions of the image charges associated to $\alpha_j$ (in particular $r_{j, 0}^+$ is the position of $\alpha_j$ itself).
In order to write an explicit expression, we now specify to $k=2$, namely the two point function
\begin{equation} \label{vertex}
C_{\alpha}^{(2)} (x,\tau ; 0, \tau) \equiv \langle V_{\alpha} (x,\tau) V_{-\alpha} (0,\tau) \rangle \; .
\end{equation}
So, in the electrostatic problem, we have two charges with value $\pm \alpha$ at position $(x,\tau)$ and $(0,\tau)$ inside the strip. The potential energy is given by \eqref{eq_U2} together with \eqref{Gi} with $k=2$ and $\alpha_1= -\alpha_2 =\alpha$.
As mentioned, however, in the $\epsilon \to 0$ limit many image charges collapse onto the same point.
Eventually, the potential energy can be effectively computed as the sum of the following contributions:
\begin{itemize}
\item the one of the charge $\alpha$ in $(x, \tau)$ due to the potential generated by the charge $-\alpha \mu^+$ in $(0,\tau)$, the charge $\alpha \mu^-$ in $(x,-\tau)$, and $- \alpha \mu^-$ in $(0,-\tau)$;
\item the one of the charge $\alpha$ in the strip and its image charges with the same sign $\alpha Q_{2|n|}^+$ (the latter, however, are associated with a distance $\log |4 \epsilon n|$ which diverges when $\epsilon \to 0$ and is independent of $x$ and $\tau$);
\item the same contributions for the potential energy of the charge $-\alpha$.
\end{itemize}
By summing all such contributions one gets (at leading order in $\epsilon$)
\begin{multline}
\label{pot_en}
U_{\alpha}^{(2)}(x,\tau;0,\tau) =
\frac12 \alpha^2 \mu^+ K \log \frac{| x +a |}{a} - \frac12 \alpha^2 \mu^- K \log \frac{| x + i 2 \tau|}{a} + \frac12 \alpha^2 \mu^- K \log \frac{|2 \tau|}{a} +\\
+ \frac12 \alpha^2 (\mu^+-1)\log \frac{\epsilon}{a} + \text{const}
\end{multline}
where we explicitly introduced the short-distance cutoff $a$.
Note that the (infinite) constant $C$ in the potential cancels because the total (real) charge is zero.
Note also that the same is not true for the one-point function, which, due to the presence of such an infinite constant, is instead zero (as it should be).
The result for the correlation function \eqref{vertex} is finally given by
\begin{equation}
C_{\alpha}^{(2)}(x,\tau;0,\tau) = e^{- U_{\alpha}^{(2)}(x,\tau;0,\tau) }
\end{equation}
where in the last expression the dependence on $\epsilon$ has disappeared.
Now, to get its behavior in real time, the last step is to consider the analytic continuation $\tau \to i t$, leading to the equal-time correlation after the quench
\begin{equation}
C_{\alpha}^{(2)} (x, t; 0, t) =\left(\left| \frac{x}{a} \right|^{- \mu^+ K/2}\left|1-\left(\frac{x}{2t}\right)^{2}\right|^{\mu^- K/ 4}\right)^{\alpha^{2}}
\end{equation}
where we only kept the leading order in $a$. So, again, we find that it decays as a power law.
We stress that our results perfectly match those obtained in Refs.~\cite{Cazalilla_MasslessQuenchLL,ThierryAditi1,ThierryAditi2,ruggiero2021quenches} by using the real-time operator approach.
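As a further consistency check (ours), one can continue the $x,t$-dependent part of Eq.~\eqref{pot_en} numerically with complex arithmetic and compare the modulus of $e^{-U}$ with the power law above; note that in the homogeneous limit $K_0 = K$ (i.e. $\mu^+ \to 1$, $\mu^- \to 0$) the formula correctly reduces to the standard CFT result $|x/a|^{-K\alpha^2/2}$.

```python
import cmath

K, K0, alpha, a = 2.0, 0.5, 0.7, 1.0
mu_p = 0.5 * (K0 / K + K / K0)
mu_m = 0.5 * (K / K0 - K0 / K)

def C2(x, t):
    # x,t-dependent part of Eq. (pot_en) at leading order in a, continued to
    # tau -> i t; the epsilon-dependent and constant terms only affect the
    # overall normalization and are dropped
    tau = 1j * t
    U = (0.5 * alpha**2 * mu_p * K * cmath.log(x / a)
         - 0.25 * alpha**2 * mu_m * K * cmath.log((x**2 + 4 * tau**2) / a**2)
         + 0.5 * alpha**2 * mu_m * K * cmath.log(2 * tau / a))
    return abs(cmath.exp(-U))

def C2_pred(x, t):
    return (abs(x / a)**(-mu_p * K / 2)
            * abs(1 - (x / (2 * t))**2)**(mu_m * K / 4))**(alpha**2)
```

The agreement holds both outside ($x > 2t$) and inside ($x < 2t$) the light cone, where the argument of the middle logarithm becomes negative and the complex branch of `cmath.log` takes care of the continuation.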
\section{Electrostatic solution of the massive quench: a comparison}\label{appB:massive_electro}
In this section we show how to solve the massive quench without invoking any conformal transformation,
via its analogy with an electrostatic problem similar to the one used for the massless quench: this is very useful, because it really puts the two types of quenches on an equal footing.
The main differences are that now the size of the strip is finite, $2\tau_0$, and,
most importantly, that the boundary conditions one has to impose
at the edges of the strip are different.
In fact for the massive quench one has to solve the \emph{homogeneous} Poisson equation
\begin{equation}\label{Eq_Poisson2}
\nabla^2 G(y,z;y',z') = - \pi K \delta(y-y')\delta(z-z')
\end{equation}
within the strip domain $\Omega$, and impose at the boundary $\partial \Omega$
(now given by the boundaries of such strip) the appropriate boundary conditions compatible with the chosen boundary CFT.
In the following we focus on the boundary condition $G(y=0, z)=G(y=2\tau_0, z) =0$, which is one of the possible conformal invariant boundary conditions \cite{cardy1984conformal}.
The other possible choices require minor adjustments that are easily taken into account and not worth discussing.
Thus, for $G=0$ at the boundaries, we end up with the electrostatic problem of a dielectric between two conducting lines (see Fig.~\ref{Fig_massive} (a)).
This problem can be solved, again, via the method of charge images, and the solution is given by a sequence of charges, as
shown in Fig.~\ref{Fig_massive} (b) \cite{jackson1999classical}.
The positions of these charges are the same as those in Fig.~\ref{Fig_charges_massless}, the only difference being that
now the value is the same for all the positive and all the negative charges.
Moreover, the overall constant in the propagator here is set to zero in order to satisfy the boundary conditions at the edges of the strip.
For the quench problem, again, in Fig.~\ref{Fig_massive} one sets $y=\tau$, $z= x$ and now $L= 2 \tau_0$.
With the same logic as in the previous section, one can now use this electrostatic picture to compute correlation functions.
Below we focus in particular on the vertex operators.
\begin{figure}[t]
\begin{center}
\includegraphics[height=7cm]{fig_charges_massive}
\caption{Panel (a): Problem of a charge $Q$ in a strip of dielectric material between two conducting lines at $y=0,L$. Panel (b): Positions and values of the image charges that give the correct electrostatic potential inside the strip.
This is the electrostatic analog of a massive quench problem, where the propagator plays the role of the electrostatic potential (see discussion in the text). To be compared with Fig.~\ref{Fig_charges_massless}.
}\label{Fig_massive}
\end{center}
\end{figure}
In order to compute the one point function
\begin{equation}
C^{(1)}_{\alpha}(0,\tau) = \langle e^{i \alpha \hat\phi(0,\tau)} \rangle
\end{equation}
we consider the potential energy of a charge in the strip at position $(0,\tau)$ due to all
other (image) charges.
Applying Eq. (\ref{eq_U}) we obtain
\begin{equation}
U^{(1)}_{\alpha}(0,\tau) = - \frac{\alpha^2 K}{4} \left[ \sum_{n\neq 0} \log \frac{|4 \tau_0 n|}{a} - \sum_{n} \log \frac{|2 \tau+ 4 \tau_0 n |}{a} \right]
\end{equation}
and therefore
\begin{equation}
C^{(1)}_{\alpha}(0,\tau) = e^{- U^{(1)}_{\alpha}(0,\tau)} = \left[ \frac{a}{2(\tau+2 \tau_0 n)} \Big|_{n=0} \prod_{n>0} \frac{4 \tau_0^2 n^2}{|\tau^2-4\tau_0^2 n^2|} \right]^{\frac{K\alpha^2}{4}}=
\left[\frac{\pi a}{4\tau_0} \frac{1}{\sin(\frac{\pi \tau}{2\tau_0} ) } \right]^{\frac{K\alpha^2}{4}}.
\end{equation}
We thus recognize the one-point function obtained in \cite{Calabrese_QuenchesCorrelations} via conformal maps,
which, upon analytic continuation, gives an exponential decay in time.
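The infinite product over image charges in the last equation can also be checked numerically; the sketch below (with illustrative values of $\tau$, $\tau_0$, $a$, $K$ and $\alpha$) truncates the product at a large $N$ and compares it with the resummed $1/\sin$ form.

```python
import math

def one_point_product(tau, tau0, a, K, alpha, N=100_000):
    """Truncated image-charge product for C^(1)_alpha(0, tau)."""
    log_bracket = math.log(a / (2.0 * tau))
    for n in range(1, N + 1):
        log_bracket += math.log(4.0 * tau0**2 * n**2
                                / abs(tau**2 - 4.0 * tau0**2 * n**2))
    return math.exp(K * alpha**2 / 4.0 * log_bracket)

def one_point_closed(tau, tau0, a, K, alpha):
    """Resummed form [pi a / (4 tau0 sin(pi tau / 2 tau0))]^{K alpha^2/4}."""
    return (math.pi * a
            / (4.0 * tau0 * math.sin(math.pi * tau / (2.0 * tau0)))) ** (K * alpha**2 / 4.0)
```

The agreement follows from the product representation $\sin x = x \prod_{n\geq 1}(1 - x^2/\pi^2 n^2)$.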
In order to compute the two point function at equal time
\begin{equation}
C^{(2)}_{\alpha}(x, \tau; 0,\tau) = \langle e^{i \alpha \hat\phi(x,\tau)} e^{-i \alpha \hat\phi(0,\tau)} \rangle
\end{equation}
we place a positive charge $\alpha>0$ at $(x,\tau)$ and a negative one $-\alpha$ at $(0,\tau)$.
There will then be two sets of image charges.
We thus compute the potential energy of these two charges in the strip, given by (again, we reintroduce the short-distance cutoff $a$)
\begin{equation}
\begin{array}{l}
\displaystyle
U^{(2)}_{\alpha}(x,\tau;0,\tau) = \frac12 \alpha^2 K \sum_n \log \frac{| x+a + i (4 \tau_0 n) |}{a}
\\ \vspace{-0.2cm} \\
\displaystyle
\qquad\qquad\qquad\qquad
- \frac12 \alpha^2 K \sum_n \log \frac{| x+a + i (2\tau + 4 \tau_0 n)|}{a} + 2 U^{(1)}_{\alpha}(0,\tau).
\end{array}
\end{equation}
This gives the two point function as
\begin{equation}
\begin{array}{l}
\displaystyle
e^{- U^{(2)}_{\alpha}(x,\tau;0,\tau) } = \left[ \frac{|(x+a)+i2 \tau|^2}{(x+a)^2} \Big| \prod_{n>0} \frac{(x+a+i2 \tau)^2+16 \tau_0^2 n^2}{(x+a)^2+ 16 \tau_0^2 n^2} \Big|^2 \right]^{\frac{\alpha^2K}{4}} e^{-2 U^{(1)}_{\alpha}(0,\tau)}
\\ \vspace{-0.2cm} \\
\displaystyle
=\left( \frac{|\sinh \frac{\pi (x+a+i 2 \tau)}{4 \tau_0}|^2}{|\sinh\frac{\pi (x+a)}{4 \tau_0}|^2}\right)^{\frac{K \alpha^2}{4}} e^{-2 U^{(1)}_{\alpha}(0,\tau)}
\simeq \left[ \left(\frac{\pi a}{4 \tau_0}\right)^{2} \frac{\cosh\frac{\pi x}{2\tau_0} - \cos\frac{\pi \tau}{\tau_0}}{2\sinh^2\frac{\pi (x+a)}{4 \tau_0} \sin^2\frac{\pi \tau}{2 \tau_0}} \right]^{\frac{K \alpha^2}{4}},
\end{array}
\end{equation}
where by $\simeq $ we mean the leading order in $a$.
Again, we recover the result obtained in \cite{Calabrese_QuenchesCorrelations}, and the analytic continuation allows us to recover the real time behavior.
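Analogously, the resummation into hyperbolic sines above can be verified term by term; in the following sketch (again with illustrative parameter values) the truncated product over image charges is compared with the closed $|\sinh|^2$ ratio, using $\sinh z = z \prod_{n\geq 1}(1 + z^2/\pi^2 n^2)$.

```python
import cmath
import math

def two_point_product(x, tau, tau0, a, N=100_000):
    """Truncated image-charge product entering exp(-U^(2)) (bracket only)."""
    w = complex(x + a, 2.0 * tau)   # x + a + 2 i tau
    v = complex(x + a, 0.0)
    val = abs(w) ** 2 / abs(v) ** 2
    for n in range(1, N + 1):
        c = 16.0 * tau0**2 * n**2
        val *= abs((w**2 + c) / (v**2 + c)) ** 2
    return val

def two_point_closed(x, tau, tau0, a):
    """Resummed form |sinh(pi w / 4 tau0)|^2 / |sinh(pi v / 4 tau0)|^2."""
    w = complex(x + a, 2.0 * tau)
    v = complex(x + a, 0.0)
    return (abs(cmath.sinh(math.pi * w / (4.0 * tau0))) ** 2
            / abs(cmath.sinh(math.pi * v / (4.0 * tau0))) ** 2)
```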
\section{An apparently unrelated problem: Tomonaga-Luttinger liquid coupled to leads} \label{sec:LL_leads}
Quite remarkably, the problem of the massless quench that we have studied in Sec.~\ref{sec:results_quench}, turns out to be intimately related to the apparently different problem considered in the series of works \cite{safi1995transport,ponomarenko1995renormalization,maslov1995landauer}.
These works deal with the study of a one-dimensional conductor, a Tomonaga-Luttinger liquid (TLL) with parameter $K$, coupled
to two semi-infinite leads, modelled as semi-infinite TLLs with parameter $K_0$.
The main focus of these papers is the study of the dc conductivity of the system, within linear response.
In a path integral formulation one can immediately see the similarity between the two problems:
the latter corresponds to considering a discontinuity of the parameter $K$ along the $x$ axis while keeping it constant along $\tau$.
The solution for the propagator is therefore the same, with the space and time directions exchanged.
One can therefore use the general result of Sec.~\ref{Sec_gen_prob} for the imaginary-time propagator $G$ with $y=x$ and $z=\tau$ to derive the result
of \cite{safi1995transport,ponomarenko1995renormalization,maslov1995landauer} for the conductivity, following a similar path to that described in \cite{maslov1995landauer}.
In order to get the conductivity $\sigma_{\omega} (x,x')$, one just needs to Fourier transform the imaginary time propagator $G(x, \tau; x', 0)$ in frequency and take the $\omega\to 0$ limit of that propagator.
In fact it holds \cite{safi1995transport,ponomarenko1995renormalization,maslov1995landauer}
\begin{equation}
\sigma_{\omega}(x,x')= - e^2 \frac{\overline{\omega}}{\pi} G_{\overline{\omega}}(x,x')
\end{equation}
with $\omega= i \overline{\omega} + \epsilon$.
To get the Fourier transform, we use that
\begin{equation}
\int_{-\infty}^{\infty} d\tau \, e^{- i \omega \tau} \log[ a^2+\tau^2] = - \frac{2\pi}{|\omega|} e^{-a|\omega|} - 4 \pi \gamma_{E}\delta(\omega)
\end{equation}
obtaining
\begin{equation}
\begin{array}{l}
\displaystyle
\mathcal{F}_{\tau \to \omega}[G(x, \tau; x', 0)] = \frac{K}{2} \frac{\pi}{|\omega|} \left\{
\sum_{n\in \mathbb{Z}} \left(\frac{K-K_0}{K+K_0} \right)^{2|n|} e^{- |\omega(x-x'-4 n L)|}
\right.
\\ \vspace{-0.2cm} \\
\displaystyle \qquad\qquad\qquad\qquad\qquad\qquad\qquad \left.
- \sum_{n \in \mathbb{Z} \backslash \{0\}} \left(\frac{K-K_0}{K+K_0} \right)^{2|n|-1} e^{- |\omega(x+x'-4 n L)|} \right\}.
\end{array}
\end{equation}
This reproduces the ansatz given, e.g., in Ref.~\cite{safi1995transport} for the propagator.
Let us just mention that, since we give all the details of the propagator, one could read off its value at \emph{arbitrary} frequency.
However, for the purpose of computing the dc conductivity, one is interested in the $\omega\to 0$ limit only \cite{maslov1995landauer}. This reads
\begin{equation}
\begin{array}{ll}
\displaystyle
\lim_{\omega\to 0} \frac{\omega}{\pi} \mathcal{F}_{\tau \to \omega}[\langle\phi(x,\tau)\phi(x',0)\rangle] & \displaystyle = \frac{K} {2} \left[
\sum_{n\in \mathbb{Z}} \left(\frac{K-K_0}{K+K_0} \right)^{2|n|} - \sum_{n \in \mathbb{Z} \backslash \{0\}} \left(\frac{K-K_0}{K+K_0} \right)^{2|n|-1} \right]
\\ \vspace{-0.2cm} \\
& \displaystyle
= \frac{K} {2} \left[ \mu^+ - \mu^- \right]= \frac{K_0}{2}.
\end{array}
\end{equation}
This result shows that the conductivity only depends on the properties of the leads, in agreement with the results of \cite{safi1995transport,ponomarenko1995renormalization,maslov1995landauer}.
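The cancellation leading to $K_0/2$ is purely algebraic (geometric sums in $r=(K-K_0)/(K+K_0)$), and is easy to verify numerically; the following sketch truncates the sums at a large $N$ and checks the result for two illustrative pairs $(K, K_0)$.

```python
def dc_limit(K, K0, N=2000):
    """K/2 * (mu_plus - mu_minus) from truncated geometric sums."""
    r = (K - K0) / (K + K0)
    mu_p = 1.0 + 2.0 * sum(r ** (2 * n) for n in range(1, N + 1))
    mu_m = 2.0 * sum(r ** (2 * n - 1) for n in range(1, N + 1))
    return K / 2.0 * (mu_p - mu_m)
```

Independently of $K$, the result is $K_0/2$, i.e. the dc conductivity is fixed by the leads, since $\mu^+ - \mu^- = (1-r)/(1+r) = K_0/K$.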
Our method allows for generalization to either frequency dependence or to more complicated cases. These will be discussed elsewhere.
\section{Massless quench in finite systems} \label{sec:extensions}
It is straightforward to generalise our results to problems whose path integral formulation has the following structure: the associated worldsheet has only two straight interfaces along one of the two directions.
In this section, in particular, we take this direction to be the imaginary time, and focus on the massless quench in a system of finite size $L$. This amounts to considering a finite space direction, ending up in a cylinder (PBC) or in a strip (OBC) geometry, as shown in Fig.~\ref{Fig:PBC_OBC}. In both cases, all the results discussed in Section~\ref{sec:results_quench} can be derived exactly as in the case of infinite system size; only the core building block, namely the homogeneous propagator \eqref{V2D}, has to be replaced by its counterpart at finite system size (with the appropriate boundary conditions). Below we report directly some final useful formulas.
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{PBC_OBC}
\caption{Path integral geometries corresponding to the massless quench in finite system size: PBC give rise to the cylinder geometry on the left, while OBC to the strip geometry on the right.
}\label{Fig:PBC_OBC}
\end{center}
\end{figure}
\subsection{Periodic boundary conditions}
When considering a finite system with PBC, the only change with respect to the computations of the previous sections is to replace the homogeneous propagator \eqref{V2D} of
an infinite system with its counterpart on the cylinder, i.e.
\begin{equation}
V_{\text{PBC}}^{Q}(x_{1},\tau_{1};x_{2},\tau_{2}) =-\frac{K Q}{4}\log\left(\sin^{2}\left[\frac{\pi}{L}(x_{1}-x_{2})\right]+\sinh^{2}\left[\frac{\pi}{L}(\tau_{1}-\tau_{2})\right]\right) \ .
\end{equation}
From the above equation, we can simply get the full propagator after a massless quench $K_0\to K$ by considering the geometry in Fig.~\ref{Fig:PBC_OBC} (left). The interfaces along the $\tau$-axis will give rise to the usual charge images $\{ Q_m^{s}\}$ at positions $\{ \tau_m^s \}$ (with $s=\pm$), as given in Eqs.~\eqref{Q+}-\eqref{Q-}, and thus
\begin{equation}
G_{\text{PBC}} (x,\tau;x',\tau')=\sum_{m} \sum_{s=\pm } V^{Q_m^s}_{\text{PBC}} (x,\tau;x',\tau_{m}^s) \ .
\end{equation}
Finally the correlation functions after the quench are obtained upon analytic continuation $\tau \to i t$. For example, the two-point correlation function of vertex operators now becomes
\begin{equation}
C_{\alpha}^{(2)}(x,t;0,t) =
\left[\left(\frac{1}{\sin^{2}\frac{\pi x}{L}}\right)^{\mu^{+}}\left(1-\frac{\sin^{2}\frac{\pi x}{L}}{\sin^{2}\frac{2\pi t}{L}}\right)^{\mu^{-}}\right]^{K\alpha^{2}/4}
\end{equation}
again in agreement with the result in \cite{Cazalilla_MasslessQuenchLL}.
Finally, note that, by exchanging the roles of the space and time directions, the same solution allows one to treat the problem of a conductor coupled to leads at finite temperature (and eventually to get the ac conductivity) \cite{safi_prop,safi_ac}.
\subsection{Open boundary conditions}
One can also work out the solution for a massless quench in a finite system of size $L$ with OBC imposed at the boundaries, see Fig.~\ref{Fig:PBC_OBC} (right). More specifically, we choose Dirichlet boundary conditions, namely a vanishing bosonic field at $x=0,L$.
Similarly to the PBC case, the full propagator after the quench can be written in terms of the one in the homogeneous strip, $V^Q_{\rm OBC}$, as
\begin{equation} \label{G_obc}
G_{\text{OBC}} (x,\tau;x',\tau')=\sum_{m} \sum_{s=\pm } V^{Q_m^s}_{\text{OBC}} (x,\tau;x',\tau_{m}^s) \ .
\end{equation}
However, we further note that, by exchanging the roles of the space and time coordinates, the propagator in the homogeneous strip is the one computed in Section~\ref{appB:massive_electro} for the massive quench. The solution was given in terms of infinitely many image charges with alternating signs at the usual positions $\{ x^{\pm}_{n} \}$, with $x_{n}^{\pm}=\pm x'+2Ln$
($n\in\mathbb{Z}$), cf. Fig.~\ref{Fig_massive}. In formulas
%
\begin{equation} \label{V_obc}
V^{Q}_{\text{OBC}} (x,\tau;x',\tau' ) = \sum_{n} \sum_{s'=\pm} V^{s' Q}(x,\tau;x^{s'}_{n},\tau)
\end{equation}
%
where we used the notation $V^Q ( y, z ; y_P, z_P ) \equiv V^Q_{2D} (y,z;y_P,z_P)$ (cf. \eqref{V2D}).
An interesting perspective emerges if we combine \eqref{G_obc} and \eqref{V_obc}: the propagator $G_{\rm OBC}$, written as two infinite sums (one in the space and one in the time direction), can be interpreted via a single distribution of charge images, as shown in Fig.~\ref{Fig:charges_OBC}.
\begin{figure}[t]
\begin{center}
\includegraphics[height=8cm]{charges_OBC}
\caption{Distribution of charge images in the massless quench in a finite system with OBC. The physical strip corresponding to the one in Fig.~\ref{Fig:PBC_OBC} (right) is highlighted (light orange) for readability.
}\label{Fig:charges_OBC}
\end{center}
\end{figure}
Note that \eqref{V_obc} can be resummed giving
\begin{align}
V^{Q}_{\rm OBC}(x,\tau;x',\tau') & =-\frac{K}{4} Q \log\left(\frac{\cos\left[\frac{\pi(x-x')}{L}\right]-\cosh\left[\frac{\pi(\tau-\tau' )}{L}\right]}{\cos\left[\frac{\pi(x+x')}{L}\right]-\cosh\left[\frac{\pi(\tau-\tau')}{L}\right]}\right) \ .
\end{align}
Remarkably, in the case of the quench (i.e., when considering the limit $\epsilon \to 0$), the full propagator $G_{\rm OBC}$ in \eqref{G_obc} can be resummed as well.
The final step to get correlation functions is again the analytic continuation $\tau \to it$. As an example, we report the result for the two-point correlation function of vertex operators, which can be written as
\begin{multline}
C_{\alpha}^{(2)}(x,t;x',t) =
\left[
\left(\frac{\cos\left[\frac{\pi(x+x')}{L}\right]-1}{\cos\left[\frac{\pi(x-x')}{L}\right]-1}\right)^{\mu^{+}}\left(
\frac{\cos\left[\frac{\pi(x-x')}{L}\right]-\cos\left[\frac{2\pi t}{L}\right]
}{\cos\left[\frac{\pi(x+x')}{L}\right]-\cos\left[\frac{2\pi t}{L}\right]}
\right)^{\mu^{-}} \right.
\times \\ \times
\left. \left(\frac{1-\cos\left[\frac{2\pi t}{L}\right]}{\cos\left[\frac{2\pi x}{L}\right]-\cos\left[\frac{2\pi t}{L}\right]}\right)^{-\mu^{-}/2}\left(\frac{1-\cos\left[\frac{2\pi t}{L}\right]}{\cos\left[\frac{2\pi x'}{L}\right]-\cos\left[\frac{2\pi t}{L}\right]}\right)^{-\mu^{-}/2}
\right]^{\frac{K\alpha^{2}}{4}} \ .
\end{multline}
\section{Conclusions} \label{sec:conclusions}
In this work we have reconsidered the problem of the (massless) quench of the Luttinger parameter in a Tomonaga-Luttinger model.
This problem was already solved long ago in real time by Bogoliubov transformations \cite{Cazalilla_MasslessQuenchLL}, but we present a new solution
that works in the path integral formalism in imaginary time.
This path integral formulation maps the quench into the electrostatic problem of a strip of a dielectric medium surrounded by another infinite dielectric.
Our approach bridges a theoretical gap, allowing one to understand massive and massless quenches in Luttinger models on the same footing.
As a byproduct, we have shown that the quench problem is equivalent, after an exchange of space and time,
to the one of a finite one dimensional conductor coupled to two semi-infinite leads.
Our solution is very general and paves the way to the study of interfaces in other dimensions or geometries.
Apart from the generalization to finite-system quenches considered in Sec.~\ref{sec:extensions} (or the equivalent problems for the LL coupled to leads), another straightforward generalization concerns the $d$-dimensional problem.
In fact, the values and positions of charge images \eqref{Q+}-\eqref{Q-} do not depend on the specific form of the potential generated by a single charge
(this can be easily understood from the derivation in Appendix \ref{App_sol}).
Thus, we can consider the problem of a heterogeneous Gaussian theory in $d$ dimensions with
$d-1$ dimensional interfaces and use the same solution changing only the specific form of the potential generated by each charge.
A less straightforward and more interesting outlook concerns the study of entanglement (both in and out of equilibrium)
in the presence of a permeable interface \cite{Bachas_PermeableWalls}.
Such a problem has been extensively considered in the literature \cite{ss-08,ep-10,cmv-11c,bb-15,gm-17,cc-13,ep-12b,ep-12,ge-20,mt-21},
but we believe that the electrostatic analogy could lead to a simpler and more transparent solution.
\section*{Acknowledgements}
We thank J\'{e}r\^{o}me Dubail for useful discussions.
This work is supported by ``Investissements d'Avenir" LabEx PALM
(ANR-10-LABX-0039-PALM) (EquiDystant project, L. Foini).
PC acknowledges support from ERC under Consolidator grant number 771536 (NEMO).
This work was supported in part by the Swiss National Foundation under division II.
\begin{appendix}
\section{Solution of the equivalent electrostatic problem}\label{App_sol}
In this Appendix we report the solution of the electrostatic problem discussed in Sec.~\ref{Sec_gen_prob}, namely that of a strip of dielectric material surrounded by a different material. This geometrical setting leads to a discontinuity in the dielectric constant, which, as in Fig.~\ref{Fig_massless}, we assume to be along the $y$ direction.
The value of the dielectric constant is $\epsilon=\frac{1}{\pi K}$ inside the strip of width $L$,
and $\epsilon'=\frac{1}{\pi K_0}$ outside.
The solution is derived via the method of charge images, which amounts to
finding the true potential by introducing fictitious charges and treating the medium as homogeneous.
Such a charge configuration will be found iteratively, and is inspired by the solution for a dielectric between two conducting lines (briefly reviewed in Section~\ref{appB:massive_electro} in the context of its relation to massive quenches).
To fix the ideas, we suppose there is a charge $q$ at the point $(a,0)$ with $0< a < L$.
The problem is translationally invariant along $z$, so without loss of generality we set $z=0$.
We will enforce the continuity of the potential $V(y,z;a,0)$
and of $\epsilon (y,z) \partial_y V(y,z;a,0)$ across the interface.
The goal is to find the potential $V_{in} (y, z; a, 0)$ within the strip, corresponding to the propagator $G(y, z; a, 0)$ in Eq.~\eqref{Eq_Poisson}.
The potential generated by a charge $q$ in two dimensions is logarithmic.
However, to show the generality of this approach we will consider a potential of the form
$V(y,z;y',z')=\frac{1}{\epsilon} q f((y-y')^2+(z-z')^2)$ where $f$ is a generic (smooth) function, and $(y',z')$ is the position of the charge.
{\it First step:} Let us first consider the interface at $y=0$, and solve the problem while ignoring the presence of the second interface.
In this case, the potential inside the strip is found by considering a single image charge, placed symmetrically to $q$ with respect to $y=0$.
Following \cite{jackson1999classical,mcdonald2018dielectric},
this can be understood by considering, in $0\leq y \leq L$, a potential generated by two charges $q,q_1$ (the true one plus the image)
located at $y=a$ and $y=-b<0$,
\begin{equation} \label{Vin1}
V_{in}(y,z;a,0) = \frac{q}{\epsilon} f(z^2+(y-a)^2) + \frac{q_1}{\epsilon} f(z^2+(y+b)^2),
\end{equation}
and the potential in the outer region $y<0$ as generated by the charges $q,p_1$:
\begin{equation}
V_{out}^-(y,z;a,0) = \frac{q}{\epsilon} f(z^2+(y-a)^2) + \frac{p_1}{\epsilon} f(z^2+(y-c)^2).
\end{equation}
The continuity of $V$ at $y=0$ implies
\begin{equation}
b=c \qquad \text{and}\qquad q_1=p_1.
\end{equation}
The continuity of $\epsilon \, \partial_y V(y,z)$ implies
\begin{equation}
\left( q a f'(z^2+a^2) - q_1 b f'(z^2+b^2) \right) = \frac{\epsilon'}{\epsilon} \left( q a f'(z^2+a^2) + q_1 b f'(z^2+b^2) \right).
\end{equation}
This gives
\begin{equation}
b=a \qquad \text{and}\qquad q_1=-q \frac{\epsilon'-\epsilon}{\epsilon+\epsilon'}
\end{equation}
as anticipated.
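This first-step construction is easy to check numerically for the logarithmic potential $f(s)=-\tfrac12\log s$: with the image charge $q_1=-q(\epsilon'-\epsilon)/(\epsilon+\epsilon')$, both matching conditions at $y=0$ hold. The Python sketch below (with illustrative values of $q$, $a$, $\epsilon$, $\epsilon'$) verifies them by finite differences.

```python
import math

def f(s):
    """2D logarithmic potential profile, evaluated at s = (y-y')^2 + (z-z')^2."""
    return -0.5 * math.log(s)

def first_step_potentials(q, a, eps, eps_out):
    r = (eps_out - eps) / (eps_out + eps)
    q1 = -q * r  # image of q with respect to the y=0 interface
    V_in = lambda y, z: (q * f(z**2 + (y - a)**2)
                         + q1 * f(z**2 + (y + a)**2)) / eps
    # for y<0 both charges q and p1 = q1 sit at y = b = c = a
    V_out = lambda y, z: (q + q1) * f(z**2 + (y - a)**2) / eps
    return V_in, V_out
```

Continuity of $V$ at $y=0$ is exact by construction, while continuity of $\epsilon\,\partial_y V$ follows from the value of $q_1$ above.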
{\it Second step:} We now focus on the other interface at $y=L$, and note that the solution \eqref{Vin1} does not satisfy the continuity conditions at $y=L$. Therefore $V_{in}$ has to be modified.
To do that, the idea is to ``balance'' the two charges $q, q_1$ generating the potential in the strip at the previous step, by placing two more charges symmetric to those with respect to the axis $y=L$.
To see that, we consider inside the strip ($0<y<L$) a potential of the form
\begin{equation}
V_{in}(y,z;a,0) = \frac{q}{\epsilon} f(z^2+(y-a)^2) + \frac{q_1}{\epsilon} f(z^2+(y+a)^2) + \frac{q_2}{\epsilon} f(z^2+(y-b)^2) + \frac{q_3}{\epsilon} f(z^2+(y-d)^2)
\end{equation}
with $b,d>L$, while for $y>L$ we take the following ansatz
\begin{equation}
V_{out}^+(y,z;a,0) = \frac{q}{\epsilon} f(z^2+(y-a)^2) + \frac{q_1}{\epsilon} f(z^2+(y+a)^2) + \frac{p_2}{\epsilon} f(z^2+(y-c)^2) + \frac{p_3}{\epsilon} f(z^2+(y-e)^2)
\end{equation}
with $c,e<L$.
The continuity of $V$ at $y=L$ implies
\begin{equation}
c=2L-b, \qquad e = 2 L-d, \qquad q_2=p_2, \qquad q_3 = p_3 \ .
\end{equation}
The continuity of $\epsilon \, \partial_y V$ at $y=L$ implies:
\begin{equation}
\begin{array}{l}
\displaystyle
\frac{\epsilon-\epsilon'}{\epsilon} \left( q (L-a) f'(z^2+(L-a)^2) + q_1 (L+a) f'(z^2+(L+a)^2) \right)
\\ \vspace{-0.2cm} \\
\displaystyle =
\frac{\epsilon+\epsilon'}{\epsilon} \left( q_3 (d-L) f'(z^2+(L-d)^2) + q_2 (b-L) f'(z^2+(L-b)^2) \right),
\end{array}
\end{equation}
namely
\begin{equation}
d= 2 L-a \quad q_3 = q_1 = -q \frac{\epsilon'-\epsilon}{\epsilon+\epsilon'} \quad b=2 L +a \quad q_2 = - q_1 \frac{\epsilon'-\epsilon}{\epsilon+\epsilon'} = q \left(\frac{\epsilon'-\epsilon}{\epsilon+\epsilon'} \right)^2.
\end{equation}
{\it Following steps:} One then proceeds iteratively. It is easy to realize that at each step a new pair of image charges has to be added, alternately on one of the two sides of the strip, in order to satisfy the continuity conditions at $y=0$ and $y=L$, respectively. While the positions of such pairs can be easily guessed by symmetry, the continuity conditions also fix the values of the image charges, which after a few steps can be guessed as well (and proved by induction).
In this way, we finally arrive at the solution presented in Sec.~\ref{Sec_gen_prob}, Eqs.~\eqref{Q+}-\eqref{Q-}. For comparison, one has to identify
\begin{equation}
Q_0^+ = q, \quad Q_1^- = q_1= q_3, \quad Q_2^{+} = q_2 = q_4, \quad \cdots
\end{equation}
and set $q=1$ (a unit charge).
\vspace{0.2cm}
Finally we note that our approach generalises even further.
In fact, one can take a potential of the form
\begin{equation}
V(y,z;y',z')=\frac{1}{\epsilon} q f(y-y',z-z',z+z')
\end{equation}
with $f$ even with respect to its arguments. This allows one to relax the constraint of translational invariance along the $z$ direction, as needed for the quench with open boundary conditions (cf. Sec.~\ref{sec:extensions}).
\end{appendix}
\subsection{Computational Complexity}
We first analyze the computational complexity of a Massive MIMO base
station. Figure~\ref{fig:proc_dist} shows a high-level block diagram
of the signal processing for an OFDM-based massive MIMO system. Other
modulation options can be used, and single-carrier schemes may be
preferred.
The overall partition
of the processing presented here will still hold.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{processing_distribution.pdf}
\caption{Signal processing in an OFDM-based massive MIMO system for $M$ BS antennas and $K$ UEs.}
\label{fig:proc_dist}
\end{figure*}
The processing in Massive MIMO systems is logically grouped into three
categories:
\begin{enumerate}
\item The outer modem performing symbol (de)mapping,
(de)interleaving and channel (de)coding. This processing is
performed on the transmitted/received bits and applies to each User
Equipment (UE) individually.
\item The inner modem comprising channel estimation, and detection and precoding of the
uplink and downlink data, respectively. This central
processing aggregates/distributes data from/to all the antenna
chains.
\item The per-antenna processing which primarily consists of the analog and digital front-end (mainly re-sampling and filtering) and OFDM processing.
\end{enumerate}
We identify inherent parallelism and observe that the processing
complexity scales with the number of BS antennas, $M$, the number of
UEs, $K$, or both \cite{Steffen2016SIPS}:
\begin{itemize}
\item Per-antenna processing: Scales with $M$ as each antenna
requires OFDM (de)modulation and a digital/analog front-end.
\item Central processing: Scales with $M$ and $K$.
\item Per-user processing: Scales with $K$.
\end{itemize}
The number of digital signal processing operations performed in the
sub-systems provides a high-level estimate of
complexity. Table~\ref{tb:complexity_case} gives numbers for a sample
system with $M=100$ antennas at the base-station and $K=10$
simultaneous terminals. It is acknowledged that these estimates
represent an over-simplification, as the nature and precision of the
operations will be an important determining factor in the eventual
hardware complexity and power consumption.
\begin{table}[t!]
\caption{Estimated number of DSP operations in GOPS, for $M=100$ and $K=10$, $20$ MHz bandwidth,
and $3$ bps/Hz (16-QAM, code rate 3/4).\label{tb:complexity_case}}
\centering
\begin{tabular}{ | c | c | c | c | }
\hline
\textbf{\textit{Subcomponent}} & \textbf{\textit{Downlink data (DL)}} & \textbf{\textit{Uplink data (UL)}} & \textbf{\textit{Training}} \\
& \textbf{\textit{[GOPS]}} & \textbf{\textit{[GOPS]}} & \textbf{\textit{[GOPS]}} \\\hline
Inner modem & 175 & 520 & 290 \\\hline
Outer modem & 7 & 40 & 0 \\\hline
Per-antenna DSP & 920 & 920 & 920 \\\hline
\end{tabular}
\end{table}
Table~\ref{tb:complexity_case} demonstrates that the collective
per-antenna digital processing is demanding, and requires a
minimal-complexity implementation. Interestingly, the per-antenna
processing does not need to be performed with high precision to offer
very good performance. An in-depth analysis and efficient
implementation options are presented in Section~\ref{sec:per-antenna}.
For the inner modem processing in Massive MIMO, a high degree of
reconfigurability is desired in order to adapt to changing operating
conditions, such as the number of connected UEs, and their SNRs/path
losses. Section~\ref{sec:precoding-decoding} discusses efficient
algorithm-hardware co-design solutions for the Massive MIMO precoding
and detection.
Furthermore, reciprocity calibration needs to be performed
occasionally. Elegant solutions have been proposed and demonstrated,
see Section~\ref{sec:RF}.
Channel coding is clearly an essential component
of the wireless transmission, yet it is not Massive MIMO-specific and
therefore not treated further in this paper.
\subsection{Signal Interconnection and Data Transfer Complexity}
The transfer of data between processing components creates a
significant challenge, as the amount of signals and data to be
aggregated/distributed from/to all the antennas is very high. The
required data shuffling rate between the per-antenna processing and
the central processing is \cite{Steffen2016SIPS}
\begin{equation}
R_\textnormal{antennas2central}=M\times R_\textnormal{OFDM}\times W,
\label{eq:data_size}
\end{equation}
where $R_\textnormal{OFDM}$ is the sampling rate after OFDM processing
and $W$ is the word-length of one data sample. For a 100-antenna
20~MHz bandwidth system, the sampling rate $R_\textnormal{samp}$ at
each antenna is 30.72 $MS/s$ and thus
\begin{equation}
R_\textnormal{OFDM}=R_\textnormal{samp} \times \frac{N_\textnormal{data}}{N_\textnormal{sub}+N_\textnormal{CP}}=16.8\
\mbox{MS/s},
\end{equation}
where $N_\textnormal{data}$, $N_\textnormal{sub}$, and
$N_\textnormal{CP}$ are the number of data subcarriers, the total
number of subcarriers, and the number of cyclic prefix samples,
respectively. Assuming that 24 bits are used for one complex sample,
$R_\textnormal{antennas2central}$ equals 40.32~Gb/s. This
requirement is an order of magnitude higher than in a conventional
system.
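As an illustration, the rate in Eq.~\eqref{eq:data_size} can be reproduced with an LTE-like 20~MHz numerology; the subcarrier and cyclic-prefix numbers below ($N_\textnormal{data}=1200$, $N_\textnormal{sub}=2048$, $N_\textnormal{CP}=144$) are assumptions consistent with the quoted 16.8~MS/s, not values stated in the text.

```python
def shuffle_rate(M, R_samp, N_data, N_sub, N_cp, W):
    """Aggregate antenna-to-central data rate R_antennas2central in bit/s."""
    R_ofdm = R_samp * N_data / (N_sub + N_cp)  # sampling rate after OFDM processing
    return M * R_ofdm * W

# assumed LTE-like 20 MHz numerology (not given explicitly in the text)
rate = shuffle_rate(M=100, R_samp=30.72e6, N_data=1200, N_sub=2048, N_cp=144, W=24)
```

This yields roughly 40.3~Gb/s, matching the figure quoted above.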
Additionally, the data transfer network must re-organize data among
different dimensions. Figure~\ref{fig:data_network} illustrates the
uplink data shuffling between the per-antenna and the central
processing. First, \circled{1} in the figure, the data shuffling
network aggregates data samples of all subcarriers from all antenna
chains. Next, \circled{2} in the figure, it divides the entire data
into bandwidth chunks depending on the number of central processing
units in the system, and distributes the data to the corresponding
processing unit.
\begin{figure}[t!]
\centering \includegraphics[width=0.45\textwidth]{network.pdf} \caption{Illustration
of the data shuffling between the per-antenna and central
processing.} \label{fig:data_network}
\end{figure}
These high data transfer requirements have motivated the development of
decentralized processing architectures, which are introduced next.
\subsection{Decentralized Processing}
Depending on the selected MIMO processing algorithms, both the
processing performed in the per-antenna and in the central units, and
the communication between these two, will influence the resulting
system performance and overall complexity. For instance, the
maximum-ratio precoding operation $\sum_k \alpha_k \hat{\boldsymbol{g}}_k^*x_k$ can
be performed in each antenna path in a distributed manner, whereas
the zero-forcing algorithm requires centralized processing, specifically for the
inversion of the Gram matrix $(\hat{\boldsymbol{G}}^H\hat{\boldsymbol{G}})^{-1}$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{decentralized_processing.pdf}
\caption{Decentralized processing architecture, performing group-based operations
between the per-antenna processing (PAP) and the central unit.}
\label{fig:decentralized_processing}
\end{figure}
Decentralized processing enables parallel computing and offers a
balanced trade-off between system performance and data transfer
requirements \cite{SYA2012,DBLP:journals/pieee/PuglielliTLMLTW16,li2017decentralized,DBLP:conf/sigcomm/YangLYFTHZZ13}. The
authors of \cite{li2017decentralized} propose a decentralized
architecture for both uplink and downlink, illustrated
in Figure~\ref{fig:decentralized_processing}. Instead of aggregating
the full channel state information and transmit/received data vectors
at the centralized processing node, $M$ antenna nodes are grouped
into $B$ equally sized groups, each comprising $C$ antenna nodes. A
middle-level processing node, labeled group processor, is introduced
between the per-antenna and central processors to handle the
data dedicated to its group of $C$ antenna nodes. As a result, a
limited amount of data is then aggregated/distributed to/from the
central processor, relaxing the requirements on the data transfer
network. For instance, the Gram matrix calculation
${\boldsymbol{Z}}=\hat{{\boldsymbol{G}}}^H\hat{{\boldsymbol{G}}}$ can be rewritten as
\begin{equation}
{\boldsymbol{Z}} = \sum_{b=1}^{B}\hat{{\boldsymbol{G}}}_b^H\hat{{\boldsymbol{G}}}_b,
\label{eq:dec_proc}
\end{equation}
where $\hat{{\boldsymbol{G}}}_b \in \mathbb{C}^{C \times K}$ is the local channel
estimate for each group of $C$ antennas. The decentralized processing
is performed such that the terms $\hat{{\boldsymbol{G}}}_b^H\hat{{\boldsymbol{G}}}_b$ are
computed at each group processor locally, and the results are
aggregated at the central processor for the final summation. The
tree-like distributed processing architecture is further elaborated
in \cite{Bertilsson2017}, with special focus on modularity and
scalability. Especially, the trade-off between data processing,
storage, and shuffling is investigated for maximum-ratio transmission,
zero-forcing, and MMSE algorithms.
\subsection{Signal Processing at Work in Massive MIMO Demonstrations}
Demonstrations that have proven the superior
spectral efficiency of Massive MIMO and the adequacy of the DSP solutions
in real-life testbeds are illustrated below. Furthermore, we summarize the conclusions of this
paper and outline future research directions.
\begin{figure}[t!]
\centering
\begin{subfigure}[]
\centering
\includegraphics[width=0.20\textwidth]{lumami.jpg}
\end{subfigure}
\begin{subfigure}[]
\centering
\includegraphics[width=0.25\textwidth]{DistributedMassiveMIMONarrow}
\end{subfigure}
\caption{Two different Massive MIMO testbeds: (a) the LuMaMi testbed at
Lund University with a collocated antenna array (from \cite{Malkowsky2017IEEEACESS}) and (b) the KU Leuven testbed with separated antenna arrays.}
\label{fig:lumami}
\end{figure}
To prove a new wireless technology, it is essential
to build testbeds to conduct verification and evaluate performance
in real-life environments with over-the-air transmission. For Massive
MIMO it is especially crucial, since performance is
dependent on propagation characteristics, and measurement-based
channel models themselves are still under development. Thanks to
recent advances in Software-Defined Radio (SDR) technology, several
Massive MIMO prototype systems have been built by both industry and
academia, including the Argos testbed with 96 antennas \cite{SYA2012},
Eurecom's 64-antenna testbed \cite{TestbedEurecom}, Facebook's ARIES
project \cite{TestbedFacebook}, the 100-antenna LuMaMi testbed from
Lund University
(Figure~\ref{fig:lumami}a) \cite{Malkowsky2017IEEEACESS}, SEU's
128-antenna testbed \cite{SEU_NUPT}, and testbeds exploring
distributed arrays from the KU Leuven
(Figure~\ref{fig:lumami}b) \cite{Chen2016} and University of
Bristol \cite{Harris2016SIPS}.
\subsubsection{World-Record in Spectral Efficiency and Massive MIMO in Mobility}
The signal processing techniques discussed in this paper, especially
the cross-level optimization methodology, have been exploited in the
development of Massive MIMO testbeds to enable real-time processing of
wide-band signals for large numbers of antennas. For instance, the
LuMaMi testbed adopts the processing distribution scheme in
Figure~\ref{fig:proc_dist}, where 50 SDRs with Field-Programmable
Gate-Arrays (FPGAs) are used to perform per-antenna processing in a
parallel fashion. Four centralized FPGAs are responsible for
per-subcarrier processing, and the Peripheral Component Interconnect
Express (PCIe) with direct memory access (DMA) channels handles the
data shuffling. QR-decomposition based ZF processing has
been implemented to fully leverage the available parallel processing
resources in the FPGAs.
Diverse field trials, both indoors and outdoors with static and mobile
users, have been conducted using the Massive MIMO testbeds. In a 2016
experiment, a 128-antenna Massive MIMO base
station served 22 users, each transmitting with 256-QAM
modulation, on the same time-frequency
resource \cite{Harris2016SIPS}. The spectral efficiency benefits from
the spatial multiplexing as well as from the high constellation order,
enabled by the array gain. In practice, protocol overhead and FEC
redundancy will determine the actual net spectral efficiency. In the
actual demonstration a spectral efficiency of 145.6 bits/s/Hz was
achieved on a 20~MHz radio channel, representing a $\sim 20$~times
increase with respect to the current 4G air interface. The performance
was achieved in an environment without mobility and multi-cell
interference, which would constitute the limiting factors for
performance in a practical deployment.
The same research group also demonstrated Massive MIMO operation in an
outdoor scenario with moderate
mobility \cite{Harris2017JSAC}. Figure~\ref{fig:moblity_test} shows
the measurement scenario where the 100-antenna LuMaMi testbed is
placed on the rooftop of a building facing a parking lot $\sim 75$~m
away. Ten single-antenna users are served in real time at 3.7~GHz,
including six users moving at pedestrian speed and four terminals on
vehicles moving at a speed up to around $50$ km/h. The spatial
multiplexing was fully achieved and the communication quality was on
average well maintained for all terminals \cite{MAMMOETD4_2}. Sporadic
interruptions could be traced back to temporary loss of
synchronization. It should be noted that both the speed of the cars
and the number of terminals could be larger in a real deployment. In
the proof of concept they were limited by the available test space and
equipment. In fact, at 3.7 GHz carrier frequency and with a slot
length of 0.5~ms, the maximum permitted mobility (assuming a two-ray
model with Nyquist sampling, and a factor-of-two design margin, as
in \cite{Marzetta16book}) is over 140 km/h \cite{MaMiBlog2016}.
\subsubsection{Further Investigation Needed for Synchronization}
A critical challenge requiring further investigation is the initial
synchronization between the base station and the user
terminals. This initial synchronization has to start without
any knowledge of the channels, and therefore cannot
benefit from an array gain. How to efficiently perform initial
time and frequency synchronization acquisition without the massive
array gain and how to exploit the (partial) array gain to provide
faster and more robust synchronization are still open
questions. Two methods were studied during the LuMaMi
testbed experiments. One method is to reserve a dedicated RF chain for
the synchronization signal, which is transmitted using an
omni-directional antenna. In this case, a higher-power
PA (which is not available in LuMaMi) is needed to
provide coverage. Another method is to use
beam-sweeping for the synchronization signal \cite{Barati2015TWC}, but
this is inefficient, as it is essentially equivalent to repetition
coding, and there is a risk of synchronization loss when the users
are not hit by a beam. Improved techniques, based on space-time block
codes, have been investigated
\cite{Karlsson2017arXiv,Xia16,Meng16}. Iterative search and
tracking methods \cite{Marco2016CM} may have potential,
especially for mobile users.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{mobility_test.pdf}
\caption{Overview of the testbed demonstration of Massive MIMO in a mobility scenario, at the campus of Lund University, Sweden.}
\label{fig:moblity_test}
\end{figure}
\subsection{Concluding on the Signal Processing}
Appropriate co-design of algorithms, hardware architectures, and
circuits in Massive MIMO implementations brings significant benefits:
\begin{itemize}
\item Energy efficient implementations of ``theoretically optimal''
Massive MIMO DSP architectures are nontrivial but possible. We have
detailed some of the most important innovations required, and
explained their analysis. The power consumption of conventional
macro base stations is dominated by the PA stage. In Massive MIMO,
this stage benefits from the ability to operate with an order of
magnitude less transmit power.
\item The sufficiency of low-precision quantization and processing,
predicted by information-theoretic studies, has now also been
validated through real signal processing experiments. A reduction in
word length by up to a factor of six compared to conventional systems
translates into corresponding savings in complexity, power
consumption, and memory.
\item Dedicated and scalable hardware architectures implementing tailored
algorithms for large matrix processing facilitate zero-forcing
precoding at the base station in real time, at 30 mW power
consumption in relevant scenarios for a $128\times 8$ system.
\item Voltage over-scaling, a speculative concept just 5 years ago, has
found appropriate application in the Massive MIMO per-antenna
processing.
\item Smart control of algorithmic modes and scalable devices,
including body bias adaptation, can guarantee suitable
performance-power trade-offs over a wide range of communication
scenarios and channel propagation conditions.
\item Lean terminals could operate in typical broadband cellular Massive MIMO
networks at about $10\%-20\%$ of the power consumption
of equivalent conventional terminals, both in data
transmission and reception.
\item The efficiency of Massive MIMO base stations can be further
improved by relaxing the requirements of the RF and analogue
hardware. However, caution is needed as (non-linear) distortion may
under specific conditions combine coherently.
\end{itemize}
\subsection{Future Directions}
\subsubsection{Progressing Massive MIMO Deployment in Actual Networks}
Integrating all components into deployments in actual networks
represents a vast design and
development effort that will include:
\begin{itemize}
\item Overcoming challenges related to connection of the many antenna paths
to the central processing units. This involves implementing high-speed
interconnects and coping with potential coupling effects in the
front-end modules.
\item Devising efficient schedulers for large numbers of users.
Achieving the high spatial multiplexing
gains offered by Massive MIMO fundamentally requires that many terminals are
scheduled for service simultaneously. Tuning or re-design of higher-layer protocols
could be beneficial to shape the traffic patterns, such that aggressive spatial
multiplexing can be performed.
\item Designing antenna arrays.
Massive MIMO arrays do not have to be linear, rectangular or cylindrical.
Small antenna elements could be naturally integrated into the environment,
onto the surface of existing structures, or faces of buildings, for example,
in an aesthetically pleasing manner.
Insights from electromagnetics may guide the design of new types of arrays.
Specifically, for a given volume $V$, consider the corresponding
smallest possible sphere that contains $V$. If one covers the
surface of this sphere with antennas at a density of $\sim
1/\lambda^2$ elements per square meter, then there is no point in
installing any additional elements in the interior of $V$
\cite{FranceschettiM2015}. Sampling the surface on a
$\lambda\times \lambda$-grid captures all information in the
radiated field. In conclusion, what goes into the interior of $V$ is unimportant,
only the surface matters.
\end{itemize}
Industrial recognition of the value of Massive MIMO technology is
evidenced by the large number of contributions on the topic in the
3GPP-LTE standardization of New Radio (NR) for 5G systems. Leading
operators have already started to perform commercial field trials of the technology
\cite{MaMiBlog2016}.
\subsubsection{Enhanced Functionalities}
Large antenna arrays can also be used to perform accurate positioning
and localization. This feature can offer improved context-awareness to
services. Also, the Massive MIMO communication system itself could
exploit this information to perform smart pilot allocation, for example.
\subsubsection{Scale Up Capacity and Efficiency}
The call for more and higher-quality wireless services is expected to
increase for many years, and the quest for wireless systems offering
higher spectral and energy efficiency will continue. Higher peak-rates
can be offered in Massive MIMO by performing spatial multiplexing of
several streams to one terminal. Actual gains may be limited due to
insufficient rank of the channel, yet for two streams this will mostly
be achievable with co-located antennas exploiting cross-polarization.
Wider bandwidth channels can be allocated especially in mmWave bands. Radio propagation and in particular absorption is considerably different at these frequencies. Arrays with a large number of antennas can be small in size, yet their effective gain may suffer from high losses on the interconnect. Consequently, Massive MIMO systems in these bands call for other architectures and their deployment will best suit particular use cases, for example hotspots.
With larger antenna arrays, both better spatial multiplexing and
array gains can be achieved. The new concepts of cell-free Massive MIMO \cite{ngo2017cell}
and intelligent
surfaces \cite{LIS2017} accelerate this trend to a next level.
With cell-free Massive MIMO, coherently cooperating antennas are spread out over a larger geographical area, providing improved macro-diversity and improved channel rank for
multiple-antenna terminals.
The intelligent surface concept envisages
distributed nodes that form electromagnetically active walls, floors,
and planar objects.
New research is urgently needed to bring these new concepts to their full potential.
\section{Introduction}
Massive MIMO is an efficient sub-6 GHz physical-layer technology for
wireless access, and a key component of the 5G New Radio (NR)
interface \cite{nr}. The main concept is to use large antenna arrays
at base stations to simultaneously serve many autonomous terminals, as
illustrated in Figure~\ref{fig:systemPict}
\cite{Larsson2014,Marzetta16book}. Smart processing at the
array exploits differences among the propagation signatures of the
terminals to perform spatial multiplexing. Massive MIMO offers two
main benefits:
\begin{enumerate}
\item Excellent spectral efficiency, achieved by spatial multiplexing
of many terminals in the same time-frequency resource \cite{Harris2017JSAC,SEU_NUPT}. Efficient multiplexing requires
channels to different terminals to be sufficiently distinct. Theory
as well as experiments have demonstrated that this can be achieved
both in line-of-sight and in rich scattering.
\item Superior energy efficiency, by virtue of the array gain, that
permits a reduction of radiated power. Moreover, the ability to
achieve excellent performance while operating with low-accuracy
signals and linear signal processing further enables considerable
savings in the power required for signal processing.
\end{enumerate}
This overview paper focuses on sub-6 GHz Massive MIMO systems
implemented with fully digital per-antenna signal processing. Massive
MIMO at mmWave frequencies is also possible, and can benefit from the
large bandwidth available at these frequencies. Propagation and
hardware implementation aspects are different at mmWaves; for example,
hybrid analog-digital beamforming approaches are typically considered
\cite{Molish2017}. However, this is not discussed further
here.
The complexity of the signal processing has been considered a
potential obstacle to actual deployment of Massive MIMO technology. An
obvious concern is how operations on large matrices and the
interconnection of the many antenna signals can be efficiently
performed in real-time. Moreover, real-life experiments have shown
that the channel responses to different terminals can be highly
correlated in some propagation environments. Appropriate digital
signal processing hence needs to feature interference suppression
capabilities, which further increases complexity.
This paper discusses the digital signal processing required to realize
the Massive MIMO system concept, and examines in detail the co-design
of algorithms, hardware architecture, and circuits
(Figure~\ref{fig:cross_level}). Unconventional, low-complexity
digital circuitry implementations in deeply scaled silicon are
possible, despite (and thanks to) the excess number of antenna
signals. A careful choice of algorithmic and circuit parameters
permits considerable reduction of the average energy consumption.
Terminals in turn can be implemented at low complexity while
benefiting from the channel hardening effect, that offers increased
reliability.
Proof of concept implementations and demonstrations have revealed
constraints that turned out to be harsher than anticipated in initial
theoretical assessments. This concerns the interconnection of the
signals from all antennas, which poses a bottleneck that partly
necessitates distributed processing. Also, relaxing the specifications
of the analog and RF chains can result in higher distortion both in-band and out-of-band than initially anticipated, as hardware imperfections can in general not be considered uncorrelated.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{Spatial_multi}
\caption{Massive MIMO exploits large antenna arrays at the base stations, to spatially multiplex many terminals.\label{fig:systemPict}}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{cross_level}
\caption{Massive MIMO opens up new hardware-software co-design opportunities for low-complexity circuitry.\label{fig:cross_level}}
\end{figure}
\section{Massive MIMO System Model} \label{sec:concept}
This section introduces the notation for MIMO transmission that is
used in the paper. Further details can be found in, for example,
\cite{Marzetta16book}. We consider the block-fading model where the
time-frequency domain is partitioned into coherence intervals within
which the channel is static. The number of samples in each coherence
interval is equal to the coherence time in seconds multiplied by the
coherence bandwidth in Hertz. For the signal processing algorithms
discussed in this paper, it does not matter whether there is coding
across coherence intervals or not.
In every coherence interval, a flat fading complex baseband channel
model applies. Let $M$ be the number of antennas at the base station,
and $K$ the number of terminals served simultaneously. Also, denote by
${\boldsymbol{g}}_k$ the $M$-vector of channel responses between the $k$th terminal
and the array. Then on uplink, for every sample in the coherence interval,
\begin{align}\label{eq:ul}
{\boldsymbol{y}} = \sum_{k=1}^K {\boldsymbol{g}}_k x_k + {\boldsymbol{w}},
\end{align}
where ${\boldsymbol{y}}$ is an $M$-vector comprising samples received at the base
station array, $x_k$ is the symbol sent by the $k$th terminal, and
${\boldsymbol{w}}$ is noise. On downlink, assuming linear precoding,
\begin{align}\label{eq:dl}
y_k = {\boldsymbol{g}}_k^T \sum_{k'=1}^K {\boldsymbol{a}}_{k'} x_{k'} + w_k,
\end{align}
where $y_k$ is the sample received by the $k$th terminal, ${\boldsymbol{a}}_k$ is a
precoding vector associated with the $k$th terminal, $x_k$ is the
symbol destined to the $k$th terminal, and $w_k$ is $\mathcal{CN}(0,1)$ receiver
noise.
The base station forms a channel estimate, $\hat{\boldsymbol{g}}_k$, of ${\boldsymbol{g}}_k$ for
each terminal $k$ by measurements on uplink pilots. Channel
estimation is discussed extensively in for example
\cite{Marzetta16book} (for independent Rayleigh fading) and
\cite{bjornson2017massive} (for correlated fading models).
On uplink, the data streams from the terminals are detected through
linear processing. This entails multiplication of ${\boldsymbol{y}}$ with a vector,
${\boldsymbol{a}}_k$ for each terminal, yielding the scalar $ {\boldsymbol{a}}_k^H {\boldsymbol{y}}$. Common
choices of the detection vector ${\boldsymbol{a}}_k$ include
\begin{align}
\begin{cases}
\text{max.-ratio:} & {\boldsymbol{a}}_k = \alpha_k \hat{\boldsymbol{g}}_k \\
\text{zero-forcing:} & {\boldsymbol{a}}_k = \alpha_k \left[ \hat{\boldsymbol{G}} (\hat{\boldsymbol{G}}^H\hat{\boldsymbol{G}})^{-1} \right]_{(:,k)} \\
\text{MMSE:} & {\boldsymbol{a}}_k = \alpha_k \left[ \hat{\boldsymbol{G}} (\hat{\boldsymbol{G}}^H\hat{\boldsymbol{G}} + {\boldsymbol{I}})^{-1} \right]_{(:,k)}
\end{cases}
\end{align}
where $\alpha_k$ is a normalizing constant (different for the three
methods), and $\hat{\boldsymbol{G}}=[\hat{\boldsymbol{g}}_1,\ldots,\hat{\boldsymbol{g}}_K]$. The result of
this linear processing will comprise the desired signal, embedded in
additive interference and noise.
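The three uplink detection choices above can be sketched numerically. This is a minimal illustration with arbitrary dimensions, setting the normalizing constants $\alpha_k = 1$ (an assumption made here purely for simplicity); the check at the end verifies the defining property of zero-forcing on the estimated channel.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 32, 4                # antennas, users (illustrative sizes)
G_hat = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))

Gram = G_hat.conj().T @ G_hat               # K x K Gram matrix

# Columns of each A matrix are the detection vectors a_k (alpha_k = 1).
A_mr   = G_hat                                      # max.-ratio
A_zf   = G_hat @ np.linalg.inv(Gram)                # zero-forcing
A_mmse = G_hat @ np.linalg.inv(Gram + np.eye(K))    # MMSE

# Zero-forcing fully suppresses inter-user interference on the
# estimated channel: a_k^H g_hat_k' = delta_{kk'}.
assert np.allclose(A_zf.conj().T @ G_hat, np.eye(K))
```

Max.-ratio leaves residual inter-user interference but requires no matrix inversion, while MMSE trades some interference suppression for noise robustness via the regularizing identity term.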
On downlink, channel reciprocity is leveraged. Low-complexity
front-ends typically introduce non-reciprocity, which needs to be
compensated for; see Section~\ref{sec:RF}. The base station forms the
transmitted vector
$\sum_k {\boldsymbol{a}}_kx_k$ in (\ref{eq:dl}) where the precoding vector ${\boldsymbol{a}}_k$
is given by:
\begin{align}
\begin{cases}
\text{max.-ratio:} & {\boldsymbol{a}}_k = \alpha_k \hat{\boldsymbol{g}}_k^* \\
\text{zero-forcing:} & {\boldsymbol{a}}_k = \alpha_k \left[ \hat{\boldsymbol{G}}^*(\hat{\boldsymbol{G}}^T\hat{\boldsymbol{G}}^*)^{-1} \right]_{(:,k)} \\
\text{regularized zero-forcing:} & {\boldsymbol{a}}_k = \alpha_k \left[ \hat{\boldsymbol{G}}^*(\hat{\boldsymbol{G}}^T\hat{\boldsymbol{G}}^* + \lambda {\boldsymbol{I}})^{-1} \right]_{(:,k)}
\end{cases}
\end{align}
where, again, $\{\alpha_k\}$ are normalizing constants and $\lambda$
is a regularization parameter.
The signal received at the terminal will contain the symbol of interest, plus
additive interference and noise.
Many variations exist; in particular, detection and precoding that
take multi-cell interference into account are also possible
\cite{bjornson2017massive,li2017massive}.
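The downlink precoders can be sketched analogously to the uplink detectors. Again the dimensions, the unit normalization $\alpha_k = 1$, and the regularization value $\lambda = 1$ are illustrative assumptions; the assertion checks that zero-forcing precoding delivers interference-free symbols on the estimated channel.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 32, 4                # antennas, users (illustrative sizes)
G_hat = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))

lam = 1.0                   # regularization parameter (illustrative value)

# Columns of each A matrix are the precoding vectors a_k (alpha_k = 1).
A_mr  = G_hat.conj()                                              # max.-ratio
A_zf  = G_hat.conj() @ np.linalg.inv(G_hat.T @ G_hat.conj())      # zero-forcing
A_rzf = G_hat.conj() @ np.linalg.inv(G_hat.T @ G_hat.conj()
                                     + lam * np.eye(K))           # regularized ZF

# With ZF precoding, terminal k receives only its own symbol on the
# estimated channel: g_hat_k^T a_k' = delta_{kk'}.
assert np.allclose(G_hat.T @ A_zf, np.eye(K))
```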
\subsection{Per-Antenna Functions: Coarse Processing Provides Excellent Performance}
\label{ref:course_PAP}
Massive MIMO can operate well with low-resolution signals. A profiling
of the per-antenna functionality in terms of generic operations per
second shows that for an LTE-like setup, about $80\%$ of the
complexity is in the filtering and the remaining $20\%$ is in the
(I)FFT operation. The filtering functionality is the most demanding
because of the need to over-sample and hence process at high speed.
Significant savings in complexity are therefore possible by minimizing
the resolution of this processing. An exploration of the word lengths
of the data signals, $n_s$, and of the filtering coefficients, $n_f$,
is reported in \cite{Gunnar2017}. The circuit area complexity
$C_\text{Filt}$ of the $T$-tap FIR filtering of I- and Q-signals as a
function of the word lengths is calculated using basic formulas for
the complexity of adders and multipliers, which are dependent on the
word-lengths $n$ and $m$ of the operands as follows:
\begin{equation}
\label{eq:filt_compl}
\begin{array}{l}
C_\text{add} = n\cdot \log_2 n \\
C_\text{mult} = n\cdot m \\
C_\text{Filt} = 2\cdot T\cdot (m+n)\cdot \log_2(m+n)+2\cdot T\cdot m\cdot n.
\end{array}
\end{equation}
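The area model in (\ref{eq:filt_compl}) is easy to evaluate directly. The following sketch uses illustrative word lengths and tap count (they are assumptions, not the exact design points of the cited study) to show how shrinking the operand resolutions reduces the modeled filter area.

```python
from math import log2

def filter_complexity(T, m, n):
    """Area model of T-tap FIR filtering of I- and Q-signals per
    (eq:filt_compl): 2T adders of width m+n plus 2T m-by-n multipliers."""
    c_add = (m + n) * log2(m + n)      # adder cost:      n' * log2(n')
    c_mult = m * n                     # multiplier cost: n * m
    return 2 * T * c_add + 2 * T * c_mult

# Illustrative comparison: a 12-bit design versus a coarse 5x4-bit design.
full = filter_complexity(T=40, m=12, n=12)
reduced = filter_complexity(T=40, m=5, n=4)
print(f"relative area of the coarse design: {reduced / full:.2f}")
```

The quadratic multiplier term $m \cdot n$ dominates the savings, which is why reducing both operand word lengths together is particularly effective.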
If a smaller number of bits is used to represent the signals and the
filter coefficients, the hardware complexity as given in
(\ref{eq:filt_compl}) is reduced. However, decreasing the word length
will increase the quantization noise. For a desired transmission
quality the just-sufficient precision can be determined. Considering
that the quantization noise will be independent among the antennas,
its combined impact will be smaller for larger numbers of
antennas. This effect is illustrated in Figure~\ref{fig:FilterQuant}
for the rather demanding 64-QAM case, and an uncoded Bit-Error Rate
(BER) of $10^{-3}$. The curves were generated based on individual BER vs. SNR simulations for different coefficient and signal resolutions, from which the equal performance points were extracted. Dotted lines show equal-complexity (in terms of
area) solutions. For a 128$\times$4 Massive MIMO system, 4 and 5 bits
are sufficient for the signals and the coefficients, respectively, for
the targeted performance. This brings a 62\% complexity reduction for
the filters compared to the 8$\times$4 case. The outer right points on the curves are clearly always suboptimal and demonstrate that high-precision filter coefficients do not improve performance, while they can cause a significant complexity penalty. A similar observation holds for the upper left points.
\begin{figure}[t!]\centering
\includegraphics[width=1\linewidth]{FilQuan.png}
\caption{Representation of the relative circuit complexity (area) as a function of the signal
and filter coefficient word lengths. The markers show possible
operating points with a BER of $10^{-3}$. The dashed lines with
numbers show operating points with equal complexity. The graphs
demonstrate that low-resolution processing is feasible with large
antenna arrays. From \cite{Gunnar2017}.}\label{fig:FilterQuant}
\end{figure}
For higher system loads, more bits are needed. At the system level one
could trade off system load for constellation order to satisfy
throughput requirements.
This analysis provides evidence that low-complexity, coarse processing in the digital filters of the individual antenna signals can offer the required performance in Massive MIMO. In the downlink the signals will next be passed to the D/A converters. The latter could be low resolution as well. The more demanding design challenge for D/A converters however is to meet out-of-band emission specifications, as introduced in Section~\ref{sec:RF}.
The (I)FFT operations required in Massive
MIMO systems with multicarrier modulation can also be designed for
Massive MIMO operation specifically and benefit from the complexity
reduction brought by the coarse quantization. A thorough optimization
is however quite complex and should consider varying quantization at
the different butterfly stages.
\subsection{Processing at the Semiconductor's Edge}
\label{sec:edge}
Applications have benefited over the last decades from Moore's law,
providing ever higher performance at lower power consumption.
Integrated Circuits (ICs) have been able to operate at lower dynamic
power thanks to the scaling of the supply voltage $V_\text{dd}$. For
digital circuits, the average dynamic power consumption is
\begin{align}
P_\text{dyn, av} = (\alpha C)\cdot V_\text{dd}^2 \cdot f_s,
\end{align}
where $\alpha C$ is the effective switching capacitance of the module
and $f_s$ is the switching frequency. Clearly, $P_\text{dyn, av}$ scales
quadratically with the supply voltage $V_\text{dd}$.
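This quadratic dependence is what makes supply-voltage scaling so attractive, as a quick calculation shows. The capacitance and clock values below are placeholders chosen only to illustrate the scaling law.

```python
def dynamic_power(alpha_c, v_dd, f_s):
    """Average dynamic power P = (alpha*C) * Vdd^2 * f_s."""
    return alpha_c * v_dd**2 * f_s

# Scaling Vdd from 1.0 V to 0.8 V at unchanged effective switching
# capacitance and clock frequency (illustrative values) saves
# 1 - 0.8^2 = 36% of the dynamic power.
p_nominal = dynamic_power(alpha_c=1e-9, v_dd=1.0, f_s=500e6)
p_scaled = dynamic_power(alpha_c=1e-9, v_dd=0.8, f_s=500e6)
print(f"dynamic power saving: {1 - p_scaled / p_nominal:.0%}")   # prints 36%
```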
However, with scaling towards deep sub-micron CMOS technologies (65~nm
and smaller), designers are facing ever-increasing variability
challenges. The process, voltage and temperature (PVT) variabilities
are considered to be the three main contributors to circuit
variability. Conventionally, to cope with this challenge, ICs are
designed at the worst PVT corners, to ensure that they always operate
correctly. Figure~\ref{fig:VOS} illustrates the different operating
regions for ICs suffering from manufacturing variability.
\begin{figure}[t!]\centering
\includegraphics[width=0.9\linewidth]{vdd_VOS}
\caption{Different approaches to scaling of the supply voltage $V_\text{dd}$ to cope
with speed variability. Operation at the worst-case corner misses out
on potential energy savings. Adaptive Voltage Scaling (AVS) provides
the just-needed $V_\text{dd}$ for the circuit to function error-free. A
further reduction of $V_\text{dd}$ by Voltage Over-Scaling (VOS) would save
more power, yet would introduce processing errors.}\label{fig:VOS}
\end{figure}
The conventional design approach for worst-case conditions introduces
considerable margins, leading to reduced peak performance and wasted
power consumption. The worst-case synthesis assumes that all devices
in the circuit operate in the slow-process corner and experience the
least favorable voltage and temperature conditions. Temperature
variations can yield up to 20\% speed differences for a single D
flip-flop. For instance, \cite{Huang2016} shows that for 28-nm
technology, the performance (speed) difference for a representative circuit is as large as $2.2$
times between the typical case and the worst case. Adaptive scaling
techniques manage power dissipation and temperature by using a
variable supply voltage $V_\text{dd}$.
Scaling down the supply voltage is regarded as an error-free power
saving method as long as the signal timing constraints are
met.
However, the critical (minimum) $V_\text{dd}$ that
guarantees timing closure cannot be determined at design time due to
PVT variabilities and aging effects.
A third design approach has recently gained interest, namely to scale
the $V_\text{dd}$ below the critical supply voltage, which is called
Voltage Over-Scaling (VOS). In the VOS approach, the designer accepts
that sporadic errors might occur: for logic components, the signal
from the longest propagation paths can be mis-captured; for memory
components, it may lead to incorrect write/read data/address or data
loss. This methodology of approximate computing enables very
energy-efficient processing \cite{Han2013}.
Wireless communication systems are designed to cope with
distortions and errors occurring on the channel. They are hence
inherently good candidates for error-resilient processing
solutions.
In Massive MIMO, the large
number of antennas implies redundancy in the system. It is promising
to apply VOS specifically in the per-antenna processing,
reaching beyond the reliability margins of the circuits, but still
operating at a point where the computations are more often correct
than wrong.
\subsection{Massive MIMO Resilience to Circuit Errors}
Massive MIMO is inherently resilient to some circuit errors in the
per-antenna processing. Hardware errors in a number of antenna paths
can be absorbed by the system thanks to the averaging induced by the
large number of antennas -- reminiscent of how the effects of small-scale fading
average out in the coherent multi-user MIMO
processing \cite{Marzetta16book}. Semiconductor process variability
was at first experienced globally, between wafers or between circuits
separated in space on a silicon wafer, hence die-to-die. Designers
have thus realistically assumed transistor parameters to be correlated
for nearby circuits on a given die. However, in deeply scaled
technologies, device variability is mostly caused by the inaccuracy of
lithography and etch technology. Intra-die (local) variations have
consequently become significant, and are even reported to dominate
global variations \cite{Saha2010}. This apparent design challenge
comes with a new opportunity to shave margins in the implementation of
Massive MIMO. Indeed, different
from the distortion resulting from non-linearities, the digital
distortion is independent of the signal and hence uncorrelated over
the antennas. The Massive MIMO system will continue functioning even
when, sporadically, one or a few individual antenna signals are subject
to full failure. As mentioned in Section~\ref{sec:edge}, this opens
the door to operation of circuits with much lower design margins
compared to traditional specifications, and most interestingly at
lower supply voltages and hence power consumption.
The digital hardware errors in (I)FFT and filters introduced by silicon
unreliability and by ambitious design methodologies result in
incorrect bits during signal processing. This can be regarded as
digital circuit distortion. We characterize the impact on the purity
of the signal in terms of the signal-to-digital distortion ratio (SDDR):
\begin{equation}
\text{SDDR} = 10 \log_{10} \frac{\sigma_s^2}{\sigma_d^2},
\end{equation}
where $\sigma_s^2$ and $\sigma_d^2$ are the power of the error-free
digital antenna signal output and the power of the digital distortion
due to circuit unreliability, respectively. First,
we consider VOS errors, which are temporary and local in nature. The
BER performance is shown in Figure~\ref{fig:SDDR} for a severe SDDR distortion, where signals get stuck at a fixed value. Results for different modulation orders and both uncoded and coded performance (rate-3/4 soft-decoded LDPC) are shown. The resulting SNR degradation remains limited to $<1$~dB for $3\%$ of the antennas being a ``victim'' of circuit errors in the coded 16-QAM case, and even up to $10\%$ of the antennas in the QPSK case.
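The SDDR metric itself is a simple power ratio in dB, as the following sketch shows; the numerical values are illustrative only.

```python
import math

def sddr_db(sigma_s2, sigma_d2):
    """Signal-to-digital-distortion ratio: 10*log10(signal/distortion power)."""
    return 10 * math.log10(sigma_s2 / sigma_d2)

# Distortion power at 1% of the signal power corresponds to 20 dB SDDR.
assert abs(sddr_db(1.0, 0.01) - 20.0) < 1e-9

# As the distortion power grows without bound relative to the signal
# (a fully stuck antenna output), the SDDR diverges toward -infinity.
print(sddr_db(1.0, 1e12))   # prints -120.0
```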
\begin{figure}[t!]\centering
\includegraphics[width=1\linewidth]{ldpc.pdf} \caption{BER
performance versus channel SNR. Randomly affected ``victim
antennas" from significant digital hardware errors for uncoded and coded (3/4 soft LDPC) QPSK, and uncoded and coded (3/4 soft LDPC) 16-QAM. From \cite{Huang2017}. The legend denotes: i) error-free (star markers), ii) 3\% victim antennas (circle markers), and iii) 10\% victim antennas (triangle markers).}\label{fig:SDDR}
\end{figure}
When operating deeply scaled circuits without margins, occasionally a
full circuit failure may occur. The impact of this effect on the
Massive MIMO system performance is called ``antenna outage''. The
digital outputs are then permanently stuck at a fixed value, which
is assumed to be the maximum possible value. The SDDR of an outage
antenna is $-\infty$, as the signals from the victim antennas are
completely lost. This model is regarded as a worst-case hardware
failure. Note that the $-\infty$ SDDR does not imply infinite noise to
the whole system, as only the victim antennas are affected and their
PA power is normalized among all antennas. Therefore, a single antenna
outage will not cause the system to fail entirely. The impact on the
system performance is shown in Figure~\ref{fig:fail} for different
antenna outage and system loads, for the pessimistic case where the
errors are not detected.
\begin{figure}[t!]\centering
\includegraphics[width=1\linewidth]{fail.pdf} \caption{Impact
of antenna outage on Massive MIMO system performance depends
on the system load, for the pessimistic case where the errors
are not detected. Disabling antennas will limit the impact of
antenna outage on the Massive MIMO system
performance. From \cite{Huang2017}.}\label{fig:fail}
\end{figure}
As demonstrated, Massive MIMO can operate well with rather severe
circuit errors, and thus allows significant VOS. The
impact increases with higher system load and modulation
constellations.
The $V_\text{DD}$ may be adapted according to the system
parameters to always offer just sufficient performance. In-situ
monitoring based on test signals can be applied to perform adequate
$V_\text{DD}$ scaling \cite{Huang2017}.
In order to further improve the system robustness towards hardware
errors, techniques to first detect hardware errors, and next either
neglect, or if needed disable, defective hardware can be
applied. Importantly, the distortion originating from digital circuit
errors fundamentally differs from pure random noise. While process
variations may feature continuous random distributions, their effect
typically results in discrete error events. Dedicated monitoring
circuitry can be established \cite{fojtik2012}
for functional components such as (I)FFTs and filters to detect these errors. If the Massive
MIMO system is operated such that it receives information from the hardware
level about failing circuits, it can adapt its signal processing accordingly. One option is
to disable systematically failing antenna paths and no longer consider
them in the central processing. The BER results are given in Figure~\ref{fig:discarded} for a case with moderate system load (10$\times$100 in this simulation). It shows that excluding defective circuits limits the degradation level to $<0.5$~dB on uncoded QPSK for up to $\sim 10\%$ of the antenna paths failing. This approach is equivalent to
operating the Massive MIMO system with a reduced number of BS antennas
$M$. For a representative case of QPSK transmission in a 100-antenna, 10-user scenario and with
28~nm standard CMOS technology, up to 40\% power savings can be
achieved with negligible performance degradation \cite{liu2010}.
\begin{figure}[t!]\centering
\includegraphics[width=0.9\linewidth]{Antenna_discarded} \caption{BER
performance is only slightly degraded for up to $\sim 10\%$ of antennas failing. Systematic failure of circuits is detected and corresponding antenna signals are discarded. From \cite{Huang2017}.}\label{fig:discarded}
\end{figure}
In conclusion, lean per-antenna processing can
be performed in Massive MIMO systems. The very large number of
operations, due to the large number of antenna paths, can be
performed with low precision and with a profoundly scaled supply
voltage. Combined, these techniques can reduce the power consumption due
to the digital processing on each antenna path by an order of
magnitude. For an exemplary system with 100 antennas at the base
station, the total digital processing power is comparable to that of a
conventional MIMO system with an order of magnitude fewer antennas.
\subsection{Implementation Challenges and Design Considerations}
Linear processing provides good precoding and detection performance
under favorable propagation conditions. However, linear processing in
Massive MIMO does not necessarily result in low computational
complexity given that the operations need to be performed on large
matrices. For instance, the complexity of computing $({\boldsymbol{G}}^H{\boldsymbol{G}})^{-1}$
for an $M\times K$ matrix ${\boldsymbol{G}}$ is
\begin{equation}
MK^2+K^3.
\end{equation}
This number is on the order of $10^4$ for an $M=128$, $K=16$
system. In TDD Massive MIMO systems, processing latency is a crucial
design consideration, especially for high-mobility scenarios. The
analysis in \cite{Steffen2016SIPS} shows that the time budget for
computing the precoding is 150~$\mu$s to support a moderate mobility
of $70$ km/h. The high computational complexity and processing speed
need to be handled with reasonable hardware cost and power
consumption. These implementation challenges necessitate meticulously
optimized solutions following a systematic algorithm-hardware
co-design methodology.
A central property of Massive MIMO is that the column vectors of the
channel matrix are asymptotically orthogonal under favorable
propagation conditions. As a result, the Gram matrix, ${\boldsymbol{Z}}={\boldsymbol{G}}^H{\boldsymbol{G}}$,
becomes diagonally dominant, i.e.,
\begin{equation}
|z_{i,i}|\gg |z_{i,j}|, \quad \text{for } i\neq j \text{ and } M\gg K,
\end{equation}
and for i.i.d.\ channels,
\begin{equation}
\frac{1}{M}{\boldsymbol{Z}} \rightarrow {\boldsymbol{I}}, \quad \text{for } M\rightarrow\infty \text{ and for fixed } K.
\end{equation}
The extent of the diagonal dominance varies with the
characteristics of the antenna array, the propagation environment, and
the number of users served. Exploiting this dominance, approximate matrix
inversion can be performed to reduce the computational complexity.
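A small numpy experiment (assuming an i.i.d.\ complex Gaussian channel) illustrating the convergence of ${\boldsymbol{Z}}/M$ to the identity as $M$ grows for fixed $K$:

```python
import numpy as np

def gram_identity_deviation(M, K, seed=0):
    # Frobenius-norm distance between the normalized Gram matrix Z/M and I
    rng = np.random.default_rng(seed)
    G = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    Zn = (G.conj().T @ G) / M
    return np.linalg.norm(Zn - np.eye(K))

# deviation shrinks roughly as 1/sqrt(M)
print(gram_identity_deviation(16, 4), gram_identity_deviation(1024, 4))
```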
Matrix inversion approaches can be categorized into three types:
explicit computation, implicit computation, and hybrid methods. We
next assess the complexity and suitability of these methods.
\subsection{Explicit Matrix Inversion}
Explicit matrix inversion can be performed using approaches such as
Gauss-elimination, Neumann series expansion
\cite{ZhuICC2015}, and truncated polynomial expansion
\cite{Mueller2016EURASIP}. Recently, the Neumann series approximation
has been identified as one of the most hardware-friendly algorithms for
Massive MIMO systems \cite{Hemanth2014ISCAS,Wu2014}. If a $K\times K$
matrix ${\boldsymbol{Z}}$ satisfies
\begin{equation}
\lim_{n \to \infty} \left( {\boldsymbol{I}} - {\boldsymbol{X}}^{-1}{\boldsymbol{Z}}\right)^n = \mathbf{0}_K,
\end{equation}
its inverse can be approximated by a Neumann series with $L$ terms as:
\begin{equation}
{\boldsymbol{Z}}^{-1} \approx \sum_{n=0}^{L} \left( {\boldsymbol{I}} - {\boldsymbol{X}}^{-1} {\boldsymbol{Z}} \right)^n {\boldsymbol{X}}^{-1},
\label{eq:neuman_inv}
\end{equation}
where ${\boldsymbol{X}}$ is a pre-conditioning matrix. The number of terms, $L$, can
be used as a tuning parameter to trade off between complexity and accuracy. It
is shown in \cite{Wu2014} that using the main diagonal of the Gram
matrix,
\begin{equation}
{\boldsymbol{Z}}_d=\text{diag}[{\boldsymbol{Z}}_{1,1},\cdots,{\boldsymbol{Z}}_{K,K}],
\end{equation}
as the pre-conditioning matrix, the Neumann series approximation can
provide close-to-exact-inversion performance with $L=3$ when $K\ll
M$. However, a significant performance loss is demonstrated when $M/K
< 8$. To improve the accuracy, the following weighted Neumann series
approximation was introduced in \cite{Lee2016tvt,Nagy2017WCL}:
\begin{equation}
{\boldsymbol{Z}}^{-1} \approx \sum_{n=0}^{L} \alpha_n \left( {\boldsymbol{I}} - {\boldsymbol{X}}^{-1} {\boldsymbol{Z}} \right)^n {\boldsymbol{X}}^{-1}.
\label{eq:neuman_inv_modified}
\end{equation}
In \cite{Lee2016tvt}, the coefficients $\alpha_n$ are selected by solving the equation
\begin{equation}
\sum_{n=0}^{\infty} {\boldsymbol{B}}^n \approx \sum_{n=0}^{L} \alpha_n {\boldsymbol{B}}^n,
\label{eq:neuman_inv_coef}
\end{equation}
where
\begin{equation}
{\boldsymbol{B}}=-{\boldsymbol{Z}}_d^{-1/2}({\boldsymbol{Z}}-{\boldsymbol{Z}}_d){\boldsymbol{Z}}_d^{-1/2}.
\end{equation}
At the price of extra computational complexity, the method in
(\ref{eq:neuman_inv_modified}) improves the performance significantly,
especially in cases with a high user load.
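A minimal numpy sketch of the unweighted Neumann approximation in (\ref{eq:neuman_inv}), using the diagonal pre-conditioner ${\boldsymbol{Z}}_d$ (variable names are ours):

```python
import numpy as np

def neumann_inverse(Z, L):
    # Z^{-1} ~ sum_{n=0}^{L} (I - X^{-1} Z)^n X^{-1}, with X = diag(Z) = Z_d
    K = Z.shape[0]
    X_inv = np.diag(1.0 / np.diag(Z))
    B = np.eye(K) - X_inv @ Z
    acc = np.zeros((K, K), dtype=complex)
    term = np.eye(K, dtype=complex)
    for _ in range(L + 1):
        acc += term          # accumulate B^n
        term = term @ B
    return acc @ X_inv
```

With $M\gg K$ the spectral radius of ${\boldsymbol{I}}-{\boldsymbol{Z}}_d^{-1}{\boldsymbol{Z}}$ is small, so a few terms suffice; the weighted variant in (\ref{eq:neuman_inv_modified}) further improves the accuracy at high user loads.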
\subsection{Implicit Matrix Inversion}
Implicit matrix inversion uses linear-solvers such as
conjugate-gradient \cite{Yin2014}, coordinate-descent \cite{Wu2016},
and Gauss-Seidel \cite{Gao2015ICC} to perform linear precoding and
detection, without explicitly calculating the Gram matrix inverse. In
\cite{Wu2016}, the coordinate-descent method is adopted to realize an
MMSE detector. The regularized squared Euclidean distance,
\begin{equation}
f({\boldsymbol{x}})= \|{\boldsymbol{y}}-{\boldsymbol{G}}{\boldsymbol{x}}\|_2^2+N_0\|{\boldsymbol{x}}\|_2^2,
\label{eq:mse}
\end{equation}
is minimized sequentially for each variable in ${\boldsymbol{x}}$ in a round-robin
fashion. In (\ref{eq:mse}), $N_0$ is the variance of each complex entry in the noise
vector ${\boldsymbol{w}}$. In each iteration, the solution for the $i$th element in ${\boldsymbol{x}}$ is
\begin{equation}
\hat{x}_i= \frac{1}{\|{\boldsymbol{g}}_i\|_2^2+N_0}{\boldsymbol{g}}_i^H \pp{{\boldsymbol{y}}-\sum_{j\neq i}{\boldsymbol{g}}_jx_j}.
\label{eq:code}
\end{equation}
This procedure is then repeated for $L$ iterations.
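The round-robin updates in (\ref{eq:code}) can be sketched as follows (a simplified numpy implementation; variable names are ours):

```python
import numpy as np

def cd_detect(G, y, N0, L):
    # coordinate-descent MMSE detection: L round-robin sweeps over the K symbols
    M, K = G.shape
    x = np.zeros(K, dtype=complex)
    norms = np.sum(np.abs(G) ** 2, axis=0)        # ||g_i||_2^2 for each column
    for _ in range(L):
        for i in range(K):
            r = y - G @ x + G[:, i] * x[i]        # y - sum_{j != i} g_j x_j
            x[i] = (G[:, i].conj() @ r) / (norms[i] + N0)
    return x
```

Each sweep drives the estimate toward the exact MMSE solution $({\boldsymbol{G}}^H{\boldsymbol{G}}+N_0{\boldsymbol{I}})^{-1}{\boldsymbol{G}}^H{\boldsymbol{y}}$, converging quickly when the Gram matrix is diagonally dominant.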
\subsection{Hybrid Method}
Matrix decomposition algorithms factorize the Gram matrix into
intermediate matrices, which are generally triangular. Forward or
backward substitution is then performed to accomplish the
corresponding precoding and detection operation. The solution in
\cite{Prabhu2017} utilizes QR-decomposition. The Gram matrix ${\boldsymbol{Z}}$ is
decomposed as
\begin{equation}
{\boldsymbol{Z}}={\boldsymbol{Q}}{\boldsymbol{R}},
\end{equation}
where ${\boldsymbol{Q}}$ is unitary and ${\boldsymbol{R}}$ is upper triangular. The linear
equation $\hat{{\boldsymbol{s}}}={\boldsymbol{Z}}^{-1}{\boldsymbol{s}}$ is then rewritten as
\begin{equation}
{\boldsymbol{R}}\hat{{\boldsymbol{s}}}= {\boldsymbol{Q}}^H{\boldsymbol{s}},
\end{equation}
which can be solved using backward substitution. This method avoids
the explicit computation of matrix inverses, relaxing (to some extent)
the requirements on data representation accuracy. By exploiting the
diagonally dominant property of the Gram matrix, modified
QR-decomposition can be performed \cite{Prabhu2017}. For instance, the
original solutions
\begin{equation}
\begin{array}{l}
c = a/r \\
s =b^*/r\\
r=\sqrt{|a|^2+|b|^2}
\end{array}
\label{eq:qr_givens}
\end{equation}
to the Givens rotation operation
\begin{equation}
\left[
\begin{array}{cc}
c & s \\
-s^{*} & c
\end{array}
\right]
\left[
\begin{array}{c}
a\\
b
\end{array}
\right]=
\left[
\begin{array}{c}
r\\
0
\end{array}
\right],
\label{eq:qr_givens_val}
\end{equation}
are approximated by
\begin{equation}
\begin{array}{l}
c=c_{\textrm{const}} \\
s=b^*/a.
\end{array}
\label{eq:qr_givens_approx}
\end{equation}
Equation (\ref{eq:qr_givens_approx}) makes use of the fact that
$|a|\gg |b|$ and results in 50\% complexity savings by introducing the
constant $c_{\textrm{const}}$.
Cholesky decomposition (${\boldsymbol{Z}}={\boldsymbol{L}}{\boldsymbol{L}}^H$) has also been studied for
Massive MIMO precoding and detection implementation
\cite{Alshamary2016ISIT,Rakesh2017ISCAS}. It has lower computational
complexity than the Neumann series expansion method (with $L\geq 4$)
\cite{Wu2014} and provides accurate processing independent of $M$ and
$K$. More importantly, the Cholesky decomposition imposes lower
memory requirements, since only the lower triangular
matrix ${\boldsymbol{L}}$ needs to be stored.
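A compact numpy sketch of the QRD-based solve described above (QR via numpy rather than the hardware Givens array; the back substitution is written out explicitly):

```python
import numpy as np

def qrd_solve(Z, s):
    # solve Z shat = s via Z = QR, then backward substitution on R shat = Q^H s
    Q, R = np.linalg.qr(Z)
    b = Q.conj().T @ s
    K = s.shape[0]
    shat = np.zeros(K, dtype=complex)
    for i in range(K - 1, -1, -1):
        shat[i] = (b[i] - R[i, i + 1:] @ shat[i + 1:]) / R[i, i]
    return shat
```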
\subsection{Complexity versus Accuracy Trade-Off}
\begin{figure}[t]
\centering
\includegraphics[width=0.38\textwidth]{performanceK.pdf}
\caption{Simulated performance of different detection methods. The
subscripts in the legend indicate the fixed-point resolution of the
fractional part. Markers at $-4$ dB performance loss mean that the corresponding
detection scheme has a performance loss greater than $4$ dB or shows
an error floor before reaching a BER of $10^{-4}$.}
\label{fig:performK}
\end{figure}
Selecting appropriate processing algorithms for Massive MIMO is
non-trivial, and an analysis of the trade-off between computational
complexity and processing performance is necessary. Reference
\cite{Mirsad2014} presents such an analysis for different MMSE
detection techniques.
To evaluate the processing accuracy, we simulate the performance of
different detection techniques including Neumann series approximation
(NSA), Cholesky decomposition (ChD), modified QRD (MQRD), and
coordinate descent (CD). The effects of fixed-point arithmetic are
also taken into consideration to examine the required data
precision. In the simulations, $M=128$, $K$ sweeps from 8 to 32, and
an i.i.d.\ block Rayleigh fading channel with perfect channel
estimation and synchronization was considered.
A rate-$1/2$ convolutional code with generator polynomial
[171, 133] and a constraint length of 7 was used. Figure~\ref{fig:performK} shows the
performance at $10^{-4}$ BER relative to floating-point
ZF detection. The number of iterations $L$ for the NSA and CD was set
to 3. Implicit and hybrid methods are more robust to lower
resolutions, while NSA requires a larger number of bits to calculate
the matrix inverse explicitly. When $M/K$ is small the Gram matrix
becomes less diagonally dominant and approximate matrix inversion
methods suffer from a larger performance loss.
CD offers better interference cancellation when the user load is relatively high.
\begin{figure}[t]
\centering
\subfigure{
\includegraphics[width=0.38\textwidth]{compM.pdf}
}
\subfigure{
\includegraphics[width=0.38\textwidth]{compK.pdf}
}
\subfigure{
\includegraphics[width=0.38\textwidth]{compP.pdf}
}
\caption{Computational complexity (per instance of the detection
problem) of different implementations of ZF detection, for different
numbers of base station antennas, numbers of users, and channel
coherence duration.}
\label{fig:comp}
\end{figure}
Table \ref{tb:comp} lists the corresponding computational
complexity in terms of number of real multiplications. The
computation is divided into two parts depending on how frequently it
needs to be executed, i.e., per channel realization and per channel
use (instance of the detection problem). The Gram matrix calculation,
matrix decomposition, and matrix inversion are performed when the
channel changes, while matched-filtering and backward/forward
substitution are performed for each received vector. Consequently, the
computational complexity depends on the channel dynamics, i.e., the
number of samples ($P$) during which the channel is constant.
Figure~\ref{fig:comp} depicts the results. Different system setups and
channel conditions are analyzed. While changing $M$, $K$, and $P$ in
the three sub-figures, the other two are fixed to $M=128$, $K=16$, and
$P=5$, respectively. Several observations can be made. The detection
complexity grows linearly with $M$, enabling
large savings in transmit power by deploying
large numbers of antennas, with a mild increase in the processing
power. Moreover, the processing complexity (for explicit and hybrid
matrix inversion algorithms) can be dramatically reduced in static
environments, in which case the channel matrix-dependent operations
are performed very rarely.
In addition to the processing accuracy and computational complexity,
parallelism is an important aspect to be considered, and it
highly impacts the processing latency. Iterative algorithms such as
Neumann series approximation and coordinate descent can suffer from a
long processing latency for MUI-dominant channels. On the other hand,
matrix decomposition can be performed in a more parallel fashion and
was thus selected for the first Massive MIMO precoder-detector chip
introduced in the next section. Moreover, the intermediate results
${\boldsymbol{Z}}^{-1}$, ${\boldsymbol{L}}$, and ${\boldsymbol{Q}}{\boldsymbol{R}}$ can be shared between the uplink and
downlink processing, further simplifying the hardware.
\subsection{128$\times$8 Massive MIMO Precoder-Detector Chip Achieving 300 Mb/s at 60 pJ/b}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{qrd_givens_systolic_arr.jpg}
\caption{Simple, configurable, and scalable architecture for QRD-based massive MIMO precoder (From \cite{Prabhu2017}).}
\label{fig:arc}
\end{figure*}
Integrated hardware implementations will ultimately define both the
performance and power consumption of Massive MIMO systems. Hence,
algorithms should be selected such that the corresponding operations
can be mapped into simple, configurable, and scalable hardware
architectures to enable high throughput, low latency, and flexible
implementation. The reconfigurability and scalability are essential to
enable efficient operation in a wide range of conditions. In this
section we present a design \cite{Prabhu2017} demonstrating such an
algorithm and hardware architecture co-design, where the
QR-decomposition based ZF precoding is mapped onto a systolic array
architecture; see Figure~\ref{fig:arc}. The systolic array consists of
a homogeneous network of elementary processing nodes, where each node
performs the same pre-defined tasks. Due to the homogeneity, the
architecture is scalable to support different $M$ and $K$.
The data flow in a systolic array is straightforward and parallel, leading to a simple and
high-speed hardware implementation.
\begin{figure}[t!]
\centering
\begin{subfigure}[]
\centering
\includegraphics[width=0.20\textwidth]{isscc2017.jpg}
\end{subfigure}
\begin{subfigure}[]
\centering
\includegraphics[width=0.20\textwidth]{isscc2018.jpg}
\end{subfigure}
\caption{Microphotographs of massive MIMO precoder and detector chips: (a) From \cite{Prabhu2017} (b) From \cite{Tang2018}.}
\label{fig:chip}
\end{figure}
The QR-decomposition based precoder, together with a
Cholesky decomposition based detector, was fabricated using $28$ nm
FD-SOI (Fully Depleted Silicon On Insulator)
technology. Figure~\ref{fig:chip}(a) shows a photograph of the chip. It
occupies only $1.1$~mm$^2$ of silicon area and consumes $\sim 50$~mW of
power for precoding and detection for
a 128$\times$8 Massive MIMO system with a $300$ Mb/s throughput. The
fabricated chip and the measurement results prove that the Massive
MIMO concept works in practice and that system-algorithm-hardware
co-optimization enables record energy-efficient signal processing. The
cross-level design approach also applies advanced circuits techniques
leveraging on the flexible FD-SOI body bias feature
\cite{Nicolas2012}. Using forward body bias or reverse body bias
allows systems to dynamically adjust processing speed and power
consumption of the chip towards the most efficient operating point.
The algorithm-hardware co-design method is further exploited in \cite{Tang2018} to map an iterative expectation-propagation detection (EPD) onto a condensed systolic array for higher hardware resource utilization. This detector chip (Figure~\ref{fig:chip}(b)) is fabricated using $28$ nm FD-SOI technology and provides 1.8~Gb/s throughput with $127$~mW power consumption. It offers 3 dB processing gain compared to \cite{Prabhu2017}, equivalent to a 2$\times$ boost in link margin that can be utilized to lower the TX power and relax the front-end requirements.
\subsection{Power Amplifiers Benefit from the Large Array}
The required output power of a Massive MIMO base station can be
reduced inversely proportionally to the square root of the number of BS
antennas, or even linearly in operating regimes with good channel
estimation quality, thanks to the coherent combination of all antenna
signals. This results in significantly reduced output specifications
for the Power Amplifiers (PAs). The power amplification stage typically
accounts for $>70\%$ of the power consumption of base stations in
wireless broadband macro-cells \cite{Auer2011}. Moreover, it
necessitates cooling, causing a $\sim 10\%$ overhead. The
reduced output power in Massive MIMO hence can reduce the total power
by a factor of 3 in an exemplary 100-antenna base station, assuming
that all other contributions remain equal.
The PA mostly operates at a low efficiency as a consequence of a
considerable back-off, required to avoid entering the saturation region. For
OFDM-based systems such as 3GPP-LTE, the PA typically operates
with a back-off of 8--12~dB. Best-in-class solutions need complex
techniques that achieve an efficiency of $\sim 30\%$ \cite{Li2011}.
Entering the saturation region introduces non-linear distortion, which
comes with two detrimental effects: distortion of the intended
signal within the band of interest, and out-of-band (OOB) emissions
that result in adjacent channel leakage.
We consider a polynomial memoryless model \cite{Horlin2008} for the
non-linear behaviour of the PA. The impact on the signal at RF can be
expressed as:
\begin{equation}
y(t) = \sum_{p} \alpha_p {x}^p_{\text{RF}}(t),
\end{equation}
where $x_{\text{RF}}(t)$ is the input signal to the PA, $y(t)$ is the output
signal, and $\alpha_p$ is the non-linear distortion coefficient of the
PA for the $p$th harmonic component. The third-order harmonic will
have the largest impact both in terms of in-band distortion and
adjacent channel leakage. Furthermore, the amplitude will be limited
to the saturation amplitude $a_{\text{out,sat}}$ for input values exceeding
the input saturation amplitude $a_{\text{in,sat}}$:
\begin{equation}
\left|y(t)\right| = a_{\text{out,sat}}, \quad \left|x_{\text{RF}}(t)\right| > a_{\text{in,sat}}.
\end{equation}
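A toy real-valued sketch of this clipped polynomial model (a cubic term is used, matching the third-order emphasis above; the coefficients and saturation levels are hypothetical):

```python
import numpy as np

def pa(x, a1=1.0, a3=-0.05, a_in_sat=2.0, a_out_sat=1.6):
    # memoryless cubic PA model with hard output saturation beyond a_in_sat
    x = np.asarray(x, dtype=float)
    y = a1 * x + a3 * x ** 3
    return np.where(np.abs(x) > a_in_sat, np.sign(x) * a_out_sat, y)
```

For small inputs the response is nearly linear; beyond $a_{\text{in,sat}}$ the output is pinned at $a_{\text{out,sat}}$, producing the in-band distortion and OOB emissions discussed above.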
The non-linear distortion resulting from the PAs in the many antenna
paths is hence signal dependent. The input signals to the PAs can be
correlated, depending on the specific communication scenario in terms
of users, channel responses, and power (im)balance among the
users. In \cite{Larsson2017PA} we analyzed how the distortion terms
can combine by means of a basic dual-tone modulation scheme. The
following effects can occur:
\begin{enumerate}
\item The distortions may add up coherently in the channel and
generate considerable out-of-band emissions. This will be the
case for example in a single-user situation with one strongly
dominating propagation direction.
\item In most multi-user scenarios
      the precoder will provide significantly different compositions
      of signals to the antenna paths, and hence to the power amplifiers. In
      general, this will randomize the harmonic distortion terms.
\end{enumerate}
The constellation diagrams in Figure~\ref{fig:PAconstellation}
illustrate the impact of increasing the number of antennas at the base
station on the Error Vector Magnitude (EVM),
for a
case with equal-strength signals for the different users and i.i.d.\
Rayleigh fading channels. The results were simulated based on a cubic
polynomial model for the PA, which operates in saturation ($0$~dB with
respect to the $1$ dB compression point). With $M=30$ antennas at the
base station, the constellation points are seriously dispersed and an
EVM of $-10$ dB is measured. When increasing the number of antennas,
in steps of 10 in the graph, the clarity of the constellation diagram
greatly improves and for $M=100$ an EVM of $-22$ dB is observed.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{PAconstellation.png}
\caption{Increasing number of base station antennas improves the EVM with PAs operating in saturation.\label{fig:PAconstellation}}
\end{figure}
In conclusion, the power amplifiers benefit from the large array owing
to the drastically reduced total output power requirement. Moreover in
many typical conditions, Massive MIMO systems will not transmit
predominantly to one user and in one direction. One could then operate
the PAs efficiently in their non-linear region. Hence, a considerable
further improvement of the power consumption could be achieved.
However, the inconvenient truth is that directive
emissions of OOB radiation can arise under some conditions. More
detailed mathematical models and results can be found
in \cite{DBLP:journals/corr/abs-1711-02439}.
\subsection{Coarse and Lean Convertors}
The impact of low-resolution data converters on system performance has been investigated. We give an overview of these theoretical results and discuss them in light of actual design constraints and the merits of state-of-the-art data converters. These reveal that strictly minimizing the resolution (e.g., below 6 bits) does not result in a significant power reduction in a conventional base station. One should hence question any penalty in system performance and/or additional DSP complexity when considering very-low-resolution data converters.
A specific type of hardware distortion arises if low-precision A/D
converters are used at the base station. Such converters are highly
desirable owing to their low cost and power consumption. In principle, for each bit of reduction in resolution, the A/D converter power is halved. Doubling the sampling frequency
will double the power. This is reflected in the common figure-of-merit (F.o.M.) in terms of energy consumption per conversion step (cs) \cite{Pelgrom17book} used to assess the design merit of A/D converters implementing different architectural principles and resolution/bandwidth specifications:
\begin{equation}
\text{F.o.M.}_{\text{A/D}}= \frac{\text{Power}_\text{A/D}}{2^{\text{ENOB}}\cdot f_{s}}
\label{eq:FOMADC}
\end{equation}
where ENOB is the Effective Number of Bits resolution as measured and $f_{s}$ is the sampling frequency.
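Plugging representative numbers into this figure-of-merit (a 10 fJ/cs F.o.M., 6-bit ENOB, and 300 MS/s sampling; all assumed values) illustrates the sub-milliwatt power levels of state-of-the-art converter cores:

```python
def adc_power(fom_j_per_cs, enob_bits, fs_hz):
    # invert the F.o.M. definition: Power = F.o.M. * 2^ENOB * f_s
    return fom_j_per_cs * (2 ** enob_bits) * fs_hz

p = adc_power(10e-15, 6, 300e6)
print(p)  # -> ~1.92e-04 W, i.e., well below 1 mW
```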
The
resulting quantization noise of A/D conversion is fairly easy to model accurately, and
rigorous information-theoretic analyses of its effect are available. In some cases, such as line-of-sight operation with a single terminal, the quantization
noise may combine constructively. However, in frequency-selective,
Rayleigh fading channels with large delay-spreads and multi-user
beamforming, the distortion averages out over the
antennas to a significant extent. Specifically, with 1-bit quantization, the quantization
noise has a power equal to $(\pi/2-1)P$ where $P$ is the received
signal power \cite{mollen1bit}, and the aggregate effect of the
quantization is approximately a loss in effective SINR of 4~dB.
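The $(\pi/2-1)P$ result can be checked numerically via a Bussgang decomposition of the 1-bit quantizer (a sketch for a real Gaussian input):

```python
import numpy as np

rng = np.random.default_rng(3)
P = 1.0
x = np.sqrt(P) * rng.standard_normal(500_000)   # Gaussian antenna-signal samples
y = np.sign(x)                                  # 1-bit quantizer output
alpha = np.mean(x * y) / P                      # Bussgang linear gain
d = y - alpha * x                               # distortion, uncorrelated with x
ratio = np.mean(d ** 2) / (alpha ** 2 * P)      # distortion power / useful signal power
print(ratio)  # close to pi/2 - 1 ~ 0.571
```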
The 1-bit A/D converter case is of particular interest as it allows operation without automatic
gain control (AGC), which simplifies hardware complexity.
With
$N$-bit quantization, $N>1$, corresponding results can be found
in \cite{mollen2016achievable}, and when $N$ grows eventually the
capacity formulas for the un-quantized
case \cite[Ch.~3]{Marzetta16book} are rediscovered. Other authors
have derived similar results subsequently \cite{7931630} -- and
earlier, using heuristic
arguments, \cite{fan2015uplink,zhang2016spectral}. Importantly, these
analyses take into account the fact that both the received pilots and
the payload data will be affected by quantization noise.
The loss in effective SINR due to quantization needs to be weighed against the extra power consumption resulting from adding bits of resolution in the A/D converters.
Circuit innovation in data converters has brought great improvements in power efficiency. State-of-the-art designs for A/D converter cores achieve figures-of-merit following (\ref{eq:FOMADC}) on the order of $10$ fJ/cs \cite{vanderplas2008,Choo2016}.
A 6-bit ADC with a sampling rate of several hundred MS/s consumes $<1$~mW.
Massive MIMO systems operating with low-resolution Digital-to-Analog (D/A) converters at the base station in the downlink transmission have also been studied. There is some evidence that they are sufficient to attain a good performance in terms of achievable link rate \cite{7887699,Jacobsson2017rate}. Also, while
these analyses are independent of the actual modulation and coding
used in the system, numerical end-to-end link simulations have
independently arrived at essentially the same
conclusion that the degradation of BER performance due to low-precision ($<6$~bits) D/A converters is negligible \cite{Desset2015}. It is however a misconception that the number of bits of resolution affects the D/A converter power consumption in a similar way as it does for A/D converters. The constraint on OOB emission, in combination with the swing to be delivered to the analog output signal, are the dominant factors in the power and complexity of a D/A converter \cite{Pelgrom17book}. A relevant standard figure-of-merit (F.o.M.) for current-steering D/A converters is given by
\begin{equation}
\text{F.o.M.}_{\text{D/A}}= \frac{V_{\text{pp}}\cdot f_{\text{out}}\cdot 10^{\text{SFDR}/20}}{\text{Power}_{\text{D/A}}}
\label{eq:FOMDAC}
\end{equation}
where SFDR is the spurious-free dynamic range, i.e., the distance between the signal and the largest single unwanted component (the spurious signal),
and $V_{\text{pp}}$ is the peak-to-peak signal swing, which accounts for the
power (and design problems) needed to generate the analog signal in a
digital-to-analog converter. D/A converters with a resolution $<10$~bits are conveniently implemented by current-injection or resistive architectures, whose power consumption is typically not directly impacted by their resolution.
In contrast, the complexity of the reconstruction filter in the D/A converter is mostly determined by the SFDR specification, which will eventually determine the out-of-band (OOB) harmonic distortion. Digital predistortion and analog filtering to reduce OOB emissions have been proposed for coarsely quantized precoding in Massive MIMO \cite{Jacobsson2017}. The extra processing complexity in deeply scaled technology will be very reasonable, yet a degradation of the in-band signal-to-interference-noise-and-distortion ratio (SINDR) on the link is introduced. This presents the same trade-off between in-band transmission versus out-of-band rejection encountered in D/A converter design.
The trend in broadband wireless systems to increase spectral efficiency through a combination of higher-order modulation constellations and conventional multi-layer MIMO has raised the resolution requirement for data converters to $\sim 12$~bits or more. Massive MIMO can operate without noticeable implementation loss with only $4$--$6$-bit A/D and D/A converter resolution. This reduces the power consumption of an individual A/D converter by a factor $>50$, which more than compensates for the fact that $10$--$30$ times more converters are needed.
It is however neither necessary nor overall beneficial to reduce the resolution of A/D and D/A converters below 6~bits:
\begin{itemize}
\item On uplink, reducing the A/D resolution further will save less than $100$~mW in a 100~antenna basestation.
\item On downlink, a potential implementation loss of $0.5$~dB or more due to D/A converters with a lower resolution may require $10\%$ more power in the PA stage. More importantly, the constraints on OOB emission will not be met. Dedicated processing will hence be needed to avoid or filter out unacceptable leakage in adjacent bands.
\end{itemize}
\subsection{Reciprocity Calibration in RF Front-Ends}
Channel estimates are obtained
from uplink pilots; see Section~\ref{sec:concept}. In
practice, the response observed by the digital
baseband processing for each user includes both the propagation
channel and the transceiver transfer functions. The full
responses for uplink and downlink can be expressed as:
\begin{equation}
\begin{array}{l}
{\boldsymbol{g}}_{k,UL}={\boldsymbol{R}}_{B} \tilde{{\boldsymbol{g}}}_{k} t_{k} \\
{\boldsymbol{g}}_{k,DL}^T=r_{k} \tilde{{\boldsymbol{g}}}_{k}^T {\boldsymbol{T}}_{B},
\end{array}
\end{equation}
where ${\boldsymbol{R}}_B$ and ${\boldsymbol{T}}_B$ are complex diagonal matrices containing the
base station receiver and transmitter responses, and $t_k$ and $r_k$
are the responses of the transmitter and receiver of user terminal
$k$. While the responses of the propagation channel $\tilde{\boldsymbol{g}}_{k}$
are reciprocal, the responses of the front-ends will typically cause
non-reciprocity in the full response. In the precoded Massive MIMO
downlink reception the following holds:
\begin{equation}
\begin{array}{l}
{\boldsymbol{R}}_{B} \neq {\boldsymbol{T}}_{B}\\
r_{k} \neq t_{k}\\
\Rightarrow {\boldsymbol{g}}_{k,DL} \neq {\boldsymbol{g}}_{k,UL}.
\end{array}
\end{equation}
When the corresponding estimates $\hat{\boldsymbol{g}}_{k}$ of ${\boldsymbol{g}}_{k,UL}$ are used
to calculate the precoding coefficients, they will introduce
Multi-User Interference (MUI) and potentially an SNR loss, depending on
the precoding vectors ${\boldsymbol{a}}_k$. We include the derivation for the
zero-forcing precoder, and refer to \cite{MAMMOETD2_4} for a
comprehensive treatment. Under the assumption of negligible channel
estimation errors and considering normalized responses to simplify
notation,\footnote{Power control does not impact
reciprocity, and it will show up as a scalar multiplication on the
individual terminal signals.} the received signals at the terminals
${\boldsymbol{y}}=[y_1,\ldots,y_K]^T$ are given by
\begin{equation}
{\boldsymbol{y}} = {\boldsymbol{G}}_{DL}^T {\boldsymbol{G}}_{UL}^*({\boldsymbol{G}}_{UL}^T{\boldsymbol{G}}_{UL}^*)^{-1} {\boldsymbol{x}} + {\boldsymbol{w}}
\end{equation}
where ${\boldsymbol{x}}$ and ${\boldsymbol{w}}$ are the $K$-vectors of transmitted symbols and received
noise samples, respectively. Writing out the front-end responses gives
the following expression:
\begin{equation}\label{eq:recip}
{\boldsymbol{y}} = ({\boldsymbol{R}}_{U} \tilde{\boldsymbol{G}}^T {\boldsymbol{T}}_{B}) ({\boldsymbol{R}}_{B}^{*}\tilde{\boldsymbol{G}}^*{\boldsymbol{T}}_{U}^*)({\boldsymbol{G}}_{UL}^T{\boldsymbol{G}}_{UL}^*)^{-1} {\boldsymbol{x}} + {\boldsymbol{w}},
\end{equation}
where ${\boldsymbol{R}}_{U}$ and ${\boldsymbol{T}}_{U}$ are diagonal matrices containing the
terminal receiver and transmitter responses $r_{k}$ and
$t_{k}$, respectively. Equation (\ref{eq:recip}) shows that in general the combined
precoding, channel, and transceiver responses will not result in a
diagonal matrix. As a result, MUI will occur. Structurally it is the
multiplication of the base station's front-end responses
${\boldsymbol{T}}_{B}{\boldsymbol{R}}_{B}^*$ that is responsible for the MUI. The terminal
responses appear as scalar multiplications on the received symbols and
will be contained in the equalization processing in the terminal. A
suitable calibration procedure operating locally at the base station
can restore the reciprocity. Calibration data needs to be obtained
through measurements of the transceiver front-end responses, for which
several approaches have been proposed and validated:
\begin{itemize}
\item Utilization of
an auxiliary front-end, which sequentially measures the RF transceiver
front-ends. The method works well in conventional MU-MIMO
systems \cite{Bourdoux2003}. However, it does not scale well to large
numbers of antennas.
\item Exploitation of the coupling, essentially radio propagation, between antennas in
the array to derive the relative differences among the transceiver
responses. This solution has been implemented in real-life testbeds
and performs well \cite{Vieira2017}.
\end{itemize}
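The mechanism above, and the fact that a purely local diagonal calibration removes it, can be illustrated with a short numerical sketch. The array dimensions, random front-end phases, and the calibration choice $C=T_B^{-1}R_B$ below are our illustrative assumptions (terminal responses are ignored, since they are absorbed by the terminal equalization):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 3  # illustrative: 16 BS antennas, 3 single-antenna terminals

# Reciprocal propagation channel; unequal diagonal BS front-end responses
G = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
TB = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, M)))  # transmit chains
RB = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, M)))  # receive chains

G_ul = RB @ G  # uplink response as seen in digital baseband
# ZF precoder built from uplink estimates: A = G_ul^* (G_ul^T G_ul^*)^{-1}
A = G_ul.conj() @ np.linalg.inv(G_ul.T @ G_ul.conj())

H = G.T @ TB @ A  # effective K x K downlink response (terminal responses ignored)
mui = np.linalg.norm(H - np.diag(np.diag(H)))  # off-diagonal part = MUI
assert mui > 1e-3  # T_B != R_B breaks reciprocity and creates MUI

# Local calibration C = T_B^{-1} R_B applied before the precoder output
C = np.diag(np.diag(RB) / np.diag(TB))
H_cal = G.T @ TB @ C @ A
assert np.linalg.norm(H_cal - np.eye(K)) < 1e-9  # reciprocity restored
```

With the calibration applied, the effective downlink matrix collapses to the identity, i.e. the zero-forcing property is recovered without any involvement of the terminals.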
Analysis has shown that non-reciprocity
requirements are not as severe for Massive MIMO as in conventional systems \cite{MAMMOETD2_4} and depend on the system
load and precoding algorithms. The RF transceiver responses may vary
in time mainly due to temperature differences. The calibration
procedure hence needs to be repeated on a regular basis. In typical
conditions the required updating frequency is on the order of
hours, so it introduces only very limited overhead.
\section{Signal Processing and Data Transfer Complexity Assessment}
\input{complexity.tex}
\section{Analog and RF Processing: Relax with Caution!}\label{sec:RF}
\input{RFrelax.tex}
\section{Algorithm-Hardware Co-Design for Precoding and Detection}
\label{sec:precoding-decoding}
\input{precode_decode.tex}
\section{Per-Antenna Chain processing at the Semiconductor Edge}
\label{sec:per-antenna}
\input{PAntP.tex}
\section{Terminals: Increased Reliability with Low-Complexity Signal Processing}
\label{sec:terminal}
\input{terminal.tex}
\section{Demonstrations, Conclusions and Future Directions}\label{sec:conclusions}
\input{conclusions.tex}
\section{Acknowledgment}
The authors thank their colleagues and collaborators in the European FP7-MAMMOET
project for the nice cooperation that truly progressed Massive MIMO technology.
\input{SP_overview_final.bbl}
\clearpage
\textbf{Liesbet Van der Perre} is Professor at the Department of
Electrical Engineering at the KU Leuven in Leuven, Belgium and a guest
Professor at the Electrical and Information Technology Department at
Lund University, Sweden. Dr. Van der Perre was with the
nano-electronics research institute imec in Belgium from 1997 till
2015, where she took up responsibilities as senior researcher, system
architect, project leader and program director. She was appointed
honorary doctor at Lund University, Sweden, in 2015. She was a
part-time Professor at the University of Antwerp, Belgium, from 1998
till 2002. She received her Ph.D. degree from the KU Leuven, Belgium,
in 1997.
Her main research interests are in wireless communication, with a
focus on physical layer and energy efficiency in transmission,
implementation, and operation. Prof. L. Van der Perre was the
scientific leader of FP7-MAMMOET, Europe's prime project on Massive
MIMO technology. Dr. Van der Perre has been serving as a scientific
and technological advisor, reviewer and jury member for companies,
institutes, and funding agencies. She is a member of the Board of
Directors of the company Zenitel since 2015. Liesbet Van der Perre is
an author and co-author of over 300 scientific publications. She was a
system architect for the OFDM ASICs listed in the IEEE
International Solid-State Circuits Conference (ISSCC) \emph{Best of 50
Years} papers in 2003. She co-authored the paper winning the
DAC/ISSCC 2006 design contest.
~
\textbf{Liang Liu} is an Associate Professor at Electrical and
Information Technology Department, Lund University, Sweden. He
received his Ph.D. in 2010 from Fudan University in China. In 2010,
he was with Electrical, Computer and Systems Engineering Department,
Rensselaer Polytechnic Institute, USA as a visiting researcher. He
joined Lund University as a Post-doc researcher in 2010 and is now
associate professor there. His research interests include signal
processing for wireless communication and digital integrated circuit
design. He is active in several EU and Swedish national projects,
including FP7 MAMMOET, VINNOVA SoS, SSF HiPEC, and DARE. He is a
board member of the IEEE Swedish Solid-State Circuits/Circuits and
Systems chapter. He is also a member of the technical committees of
VLSI systems and applications and CAS for communications of the IEEE
Circuit and Systems society.
~
\textbf{Erik G. Larsson} received the Ph.D. degree from Uppsala
University, Uppsala, Sweden, in 2002.
He is currently Professor of Communication Systems at Link\"oping
University (LiU) in Link\"oping, Sweden. He was with the KTH Royal
Institute of Technology in Stockholm, Sweden, the George Washington
University, USA, the University of Florida, USA, and Ericsson
Research, Sweden. In 2015 he was a Visiting Fellow at Princeton
University, USA, for four months. His main professional interests are
within the areas of wireless communications and signal processing. He
has co-authored some 150 journal papers on these topics, and he is
co-author of the two Cambridge University Press textbooks
\emph{Space-Time Block Coding for Wireless Communications} (2003) and
\emph{Fundamentals of Massive MIMO} (2016). He is co-inventor on 18
issued and many pending patents on wireless technology.
He is a member of the IEEE Signal Processing Society Awards Board
during 2017--2019. He is an editorial board member of the
\emph{IEEE Signal Processing Magazine} during 2018--2020.
From 2015 to 2016 he served as chair of the IEEE
Signal Processing Society SPCOM technical committee. From 2014 to
2015 he was chair of the steering committee for the \emph{IEEE
Wireless Communications Letters}. He was the General Chair of the
Asilomar Conference on Signals, Systems and Computers in 2015, and its
Technical Chair in 2012. He was Associate Editor for, among others,
the \emph{IEEE Transactions on Communications} (2010--2014) and the
\emph{IEEE Transactions on Signal Processing} (2006--2010).
He received the IEEE Signal Processing Magazine Best Column Award
twice, in 2012 and 2014, the IEEE ComSoc Stephen O. Rice Prize in
Communications Theory in 2015, the IEEE ComSoc Leonard G. Abraham
Prize in 2017, and the IEEE ComSoc Best Tutorial Paper Award in 2018.
He is a Fellow of the IEEE.
\end{document}
\subsection{Increased Service Levels on Low Complexity Terminals}
It has been shown that the Massive MIMO system concept does
not require any additional specific functionality at the UE side.
Massive MIMO terminals that have a single antenna, or apply simple diversity reception, will only be able
to receive a single spatial stream. However, large numbers of terminals can be multiplexed in the same
time-frequency slot, and every terminal can be allocated the full bandwidth of the system.
This results in a throughput per terminal comparable with that of conventional UEs
that receive multiple spatial streams in parallel.
5G terminals are expected to come in large numbers and support a
diverse set of service requirements. Next to the continued traffic
growth towards terminals allocated to human users, a variety of
devices will require Machine Type Communication
(MTC). Figure~\ref{fig:5Gtriangle} illustrates three main use cases
envisioned by
industry alliances and the International Telecommunication Union (ITU) \cite{ITU_5G}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{future_imt.jpg}
\caption{Envisioned use cases for future international mobile
telecommunication. (source: Recommendation ITU-R M.2083-0 ``Framework and overall objectives of the future development of IMT for 2020 and beyond'' \cite{ITU_5G})} \label{fig:5Gtriangle}
\end{figure}
Figure~\ref{fig:5Gtriangle} demonstrates that 5G technologies need to do
more than enhance mobile broadband links: new solutions are
needed to connect a very large number of (ultra-)low-power devices and
machines requiring very reliable and low-latency services. Massive
MIMO can simultaneously support many broadband terminals in
sub-6~GHz bands in indoor, outdoor, and mobile environments. The technology
can also be tailored to optimally serve new MTC-based applications.
Especially for narrowband MTC, the high array gain and the high degree of spatial diversity offered by Massive MIMO
will help. The spatial diversity specifically gives rise to channel hardening.
The effects of array gain and channel hardening are
illustrated for a 128-antenna setup in Figure~\ref{fig:channel_hard}.
Consistently boosted signal levels over all terminal positions, thanks to the array gain, are
observed. Terminals can potentially transmit data at several
tens of dB
lower output powers. The latter however requires high-quality CSI to be available, and the
power allocated to pilots will limit the savings in practice. The channel hardening effect enhances the
reliability of the links and improves the quality of service; most
notably:
\begin{enumerate}
\item Increased performance at the cell edges, where terminals may experience limited or, in the worst case, no connectivity in current networks. Massive MIMO addresses
this challenge, provided good uplink pilot-based CSI acquisition is
ensured.
\item Power savings and hence longer autonomy for battery-powered
devices.
\item Improved reliability. Fewer packet retransmissions can also
reduce the end-to-end latency. The specifications put forward for Ultra-Reliable
Low-Latency Communication (URLLC) in 5G are to support
$99.9999\%$ reliability and an end-to-end latency better than 1~ms.
\item Sustained good service levels in conditions with many
simultaneously active users.
\end{enumerate}
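The channel-hardening effect underlying these benefits can be made concrete with a small Monte-Carlo sketch; the i.i.d. Rayleigh fading model is an assumption made here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, fluct = 2000, {}
for M in (1, 8, 128):
    # i.i.d. Rayleigh channels from M BS antennas to one terminal
    h = (rng.normal(size=(trials, M)) + 1j * rng.normal(size=(trials, M))) / np.sqrt(2)
    g = np.sum(np.abs(h) ** 2, axis=1) / M  # per-antenna-normalized channel gain
    fluct[M] = g.std() / g.mean()           # relative fluctuation, ~ 1/sqrt(M)

# The effective channel "hardens": fading variability shrinks with array size
assert fluct[128] < fluct[8] < fluct[1]
```

The relative gain fluctuation drops roughly as $1/\sqrt{M}$, which is what makes the link quality predictable and retransmissions rare.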
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{channel_hard.png}
\caption{The array gain and channel hardening effect demonstrated
experimentally, for a $M=128$, $K=8$ setup. With permission and
\copyright Ove Edfors, Lund University. \label{fig:channel_hard}}
\end{figure}
In the next paragraphs, we first zoom in on a typical broadband user
equipment and indicate how low-power operation can be
achieved while keeping backward compatibility with 4G air
interfaces. Next, we discuss how tailored Massive MIMO systems have great
potential to address the challenging requirements of MTC terminals.
\subsection{Energy Efficient Broadband Terminals}
No advanced processing is required at the UE in Massive MIMO systems.
In contrast, 4G systems deliver broadband services to UEs
through multiplexing of several spatial layers. We compare a typical Massive MIMO terminal with the reference case of a $4\times4$ MIMO link. The latter requires MIMO
detection at the terminal side in the downlink. Figure~\ref{fig:ConvRx}
shows a functional block scheme of a conventional broadband, multiple-antenna terminal receiver.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{ConvRx.png}
\caption{A conventional wideband receiver for multiple spatial
layers requires complex MIMO detection. \label{fig:ConvRx}}
\end{figure}
The complexity breakdown of a typical MIMO-OFDM baseband chain
identifies channel estimation and MIMO
detection as the main bottlenecks. We take as a reference the $4\times
4$ MIMO-OFDM case, where the multiple-antenna processing can be
conveniently performed per subcarrier, resulting in relatively low-complexity
implementations \cite{Vandenameele2000}. We consider a
basic linear MIMO detector, and non-linear detectors implementing (ordered) Successive
Interference Cancellation (SIC). The latter are
required to achieve acceptable system performance especially in the
low-SNR regime and in high-mobility scenarios. The power
consumption of the inner modem receiver of the terminal in a Massive
MIMO system is estimated relative to published VLSI implementations for
conventional MIMO receivers \cite{Yoshizawa2009, Ketonen2010}. A range
of algorithms and implementations for MIMO detectors has been
reported, differing substantially in complexity. Our analysis is
based on typical data for the specific components, and our own design
know-how. Table~\ref{tb:UEcomplexity} summarizes the assessment for
both single-antenna and dual-antenna diversity-reception terminals,
demonstrating an expected reduction in power consumption of a factor 5 to
50.\footnote{A similar reduction in hardware complexity could be
achieved for UE radios custom-designed to operate in Massive MIMO
networks specifically. Backward compatibility with previous
broadband systems may require the presence of MIMO detection
hardware in broadband UEs in practice.}
The instantaneous throughput will be higher for conventional MIMO terminals receiving several spatial layers. To compare the energy efficiency (in Joule/bit), the same average throughput needs to be considered.
\begin{table}[t!]
\caption{Relative power consumption estimates for UE inner modem receivers.}
\label{tb:UEcomplexity} \centering
\begin{tabular}{ | c | c | c | c | }
\hline
\textbf{\textit{$4\times4$}} & \textbf{\textit{$4\times4$}}
& \textbf{\textit{Massive MIMO}} & \textbf{\textit{Massive MIMO}} \\
\textbf{\textit{linear}} & \textbf{\textit{non-linear}}
& \textbf{\textit{single}} & \textbf{\textit{2-antenna}} \\
\textbf{\textit{detector}} & \textbf{\textit{detector}}
& \textbf{\textit{antenna}} & \textbf{\textit{diversity}}
\\\hline
$P_{ref}$ & $1.5$--$5\,P_{ref}$ & $\sim 10\%\,P_{ref}$ & $\sim 20\%\,P_{ref}$ \\\hline
\end{tabular}
\end{table}
\subsection{Tailored Solutions Fit for Low-Power Connected Devices}
MTC for sensors and actuators opens the door for a variety of new IoT
applications. Low energy consumption is essential to enable long
autonomy of devices powered by batteries or even relying on harvested
energy. The physics of radio propagation dictates a strong attenuation
on the link with distance, $d$:
\begin{equation}
P_\text{Rx} \propto G_\text{Tx}G_\text{Rx}\,d^{-n}\,P_\text{Tx},\qquad n=2\ \text{in free space},\quad n > 2\ \text{typically},
\label{eq:PathLoss}
\end{equation}
where $P_\text{Rx}$ and $P_\text{Tx}$ are the received and transmitted powers,
respectively, and $G_\text{Rx}$ and $G_\text{Tx}$ are directivity gains at
the receiving and transmitting end of the link. The above is
especially unfortunate for mostly uplink-dominated MTC. Low-Power
Wide-Area Network (LPWAN) technologies are dedicated to connecting IoT
nodes at long ranges. We performed measurements with an IoT node communicating via a LoRa gateway \cite{DRAMCO_tutorial}. Inspection of the power consumption of this
illustrative node in Table~\ref{tb:LoraPower} provides valuable
insights. The transmit power is relatively
high since the power amplifier needs to provide sufficient power to
cope with large-scale fading losses. The energy consumption, which
will ultimately determine the autonomy of the node, is shown in
Figure~\ref{fig:LoraEnergy}.
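Equation~(\ref{eq:PathLoss}) and the array gain can be put into numbers with a back-of-the-envelope sketch; the path-loss exponent, carrier frequency and reference distance below are illustrative choices, not measured values:

```python
import math

def rx_power_dbm(p_tx_dbm, d_m, n=3.5, f_hz=868e6):
    """Log-distance path loss with a free-space reference at 1 m (illustrative)."""
    fspl_1m_db = 20 * math.log10(4 * math.pi * f_hz / 3e8)
    return p_tx_dbm - fspl_1m_db - 10 * n * math.log10(d_m)

# Doubling the range costs 10 n log10(2) ~ 10.5 dB of link budget at n = 3.5 ...
loss_2x_db = rx_power_dbm(14, 1000) - rx_power_dbm(14, 2000)

# ... whereas coherent combining over M = 128 BS antennas recovers
# 10 log10(M) ~ 21 dB, which the node can trade for transmit-power backoff.
array_gain_db = 10 * math.log10(128)
print(round(loss_2x_db, 1), round(array_gain_db, 1))  # 10.5 21.1
```

In other words, the array gain alone can more than compensate the extra path loss of a doubled cell radius, or equivalently let the node back off its transmit power by roughly two orders of magnitude.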
\begin{table}[t!]
\caption{Power consumption in different modes measured on a LPWAN IoT node.}
\label{tb:LoraPower} \centering
\begin{tabular}{ | c | c | }
\hline
\textbf{\textit{Operation mode}} & \textbf{\textit{Power consumption (mW)}} \\
\hline
Transmit & $\ge 140$\footnote{The module used to perform the measurements has a current limited to 45~mA.} \\\hline
Receive & $40$ \\\hline
Sense & $13$ \\\hline
Sleep & $0.1$ \\\hline
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Lora_energy}
\caption{The transmit energy will dominate the battery time on a LPWAN IoT node.}\label{fig:LoraEnergy}
\end{figure}
This pinpoints the fierce challenge of connecting sensor nodes and
other autonomous devices at a longer range. Their traffic is mostly
dominated by uplink, hence putting the node in the most energy-consuming transmitting mode. Equation~(\ref{eq:PathLoss}) reveals that
fundamentally only few parameters can be influenced to improve the
link budget. Antennas at IoT nodes, due to size and cost constraints,
can hardly offer any gain and, on the contrary, often introduce losses. Massive MIMO systems exhibit a
large-antenna array gain and apply an adaptive channel-matched
beamforming approach. They offer the opportunity to reduce the
transmit power in constrained MTC nodes proportionally to the square
root of the number of BS antennas $M$, or even proportionally to $M$ if
accurate CSI is acquired. This enables the simultaneous service of a large number of
devices. This asset
is important to keep up with the predicted evolution towards Massive
MTC. A Massive MIMO-based LPWAN could also offer extended
coverage and increased reliability, provided that a power-efficient solution for the pilot-based
CSI-acquisition is implemented. This challenge, to develop Massive
MIMO technology for MTC services, is further discussed in
Section~\ref{sec:conclusions}.
\section{Introduction}
Computing perturbative scattering amplitudes in gauge theories is a key tool in confronting
theories of particle physics with experimental results, and there is considerable demand for new predictions, particularly
at next-to-next-to-leading order (NNLO)~\cite{Bendavid:2018nar,Azzi:2019yne}.
Amplitudes are also the custodians of the symmetries
of the theory and as such are important for exploring
properties of theories which are not always manifest in a Lagrangian approach.
Computing amplitudes in closed analytic form is particularly useful in this regard.
Amplitudes for the scattering of gluons are key, being both phenomenologically important
and central to the structure of gauge theory. Modern techniques have driven progress in the calculation of analytic expressions for tree and one-loop gluon scattering amplitudes, but analytic expressions for two-loop amplitudes and beyond are relatively rare (although in theories of extended supersymmetry a great deal more progress has been made~\citep{Caron-Huot:2019vjl,Bourjaily:2019gqu}).
Computing two-loop amplitudes for gluon scattering in analytic form has proceeded by separating the amplitude into its physical components. Specifically,
amplitudes with a given color structure and specific choice of external helicities have been computed. For four-point scattering,
all of these components have been calculated~\citep{Glover:2001af,Bern:2002tk} (and more recently to all orders of dimensional regularisation in
\citep{Ahmed:2019qtg}). At five-point and beyond,
progress has been made in a variety of stages. In terms of color structure, the simplest amplitudes are the ``leading in color'' amplitudes which only
require planar two-loop integrals to be computed. For external helicity, the ``all-plus'' amplitude, where all external (outgoing) legs have the same helicity, has the most symmetry and is the simplest. The all-plus amplitudes vanish at tree level and so they have a relatively simple singularity structure at loop level.
In \citep{Badger:2013gxa,Badger:2015lda}
the five-point all-plus leading in color amplitude was computed using generalised unitarity techniques and subsequently presented in a very simple analytic form~\cite{Gehrmann:2015bfy}. In ref.~\cite{Dunbar:2016aux} it was recomputed using simpler four-dimensional unitarity and recursion methods, which is the methodology we use in this article.
The remaining leading in color five-point
helicity amplitudes have also been computed: in ref.~\cite{Badger:2018enw} the ``single-minus'' (an amplitude which also vanishes at tree level) was computed and the remaining helicities in \cite{Abreu:2019odu}.
In ref.~\cite{Badger:2019djh} the remaining parts of the full color all-plus five-point amplitude were calculated.
Beyond five-point only a few amplitudes are known. The leading in color all-plus amplitudes have been
computed using our
methodology for six gluons~\citep{Dunbar:2016gjb} and seven gluons~\citep{Dunbar:2017nfy}. In ref.~\cite{Dunbar:2020wdh} a conjecture for a specific color sub-amplitude was presented, valid for $n$ gluons.
In this article, we compute and present in closed analytic form the full color
all-plus six-point amplitude ${\cal A}_6^{(2)}(1^+,2^+,3^+,4^+,5^+,6^+)$.
This is the first full color six-point amplitude and contains a wider
class of color amplitudes than the four- and five-point cases.
Our methodology involves computing the polylogarithmic and rational parts of the finite remainder by a combination of techniques.
The polylogarithms are computed using four-dimensional unitarity cuts and the rational parts are determined by recursion.
The amplitude contains double poles in (complex) momenta and we overcome the concomitant issues by
using augmented recursion~\cite{Alston:2012xd}. Our methods bypass the need to calculate non-planar integrals.
\section{Full Color Amplitudes}
A general two-loop amplitude for the scattering of $n$ gluons in a pure $SU(N_c)$ or $U(N_c)$ gauge theory
may be expanded in a color trace basis as
\begin{eqnarray}
& & {\cal A}_n^{(2)}(1,2,\cdots ,n) =
N_c^2 \sum_{S_n/\mathcal{P}_{n:1}} {\rm tr}[T^{a_1}T^{a_2}\cdots T^{a_n}] A_{n:1}^{(2)}(a_1,a_2,\cdots ,a_n) \notag \\
&+&
N_c\sum_{r=2}^{[n/2]+1}\sum_{S_n/\mathcal{P}_{n:r} } {\rm tr}[T^{a_1}T^{a_2}\cdots T^{a_{r-1}}]{\rm tr}[T^{b_r} \cdots T^{b_n}]
A_{n:r}^{(2)}(a_1,a_2,\cdots ,a_{r-1} ; b_{r}, \cdots, b_n)
\notag \\
&+& \sum_{s=1}^{[n/3]} \sum_{t=s}^{[(n-s)/2]}\sum_{S_n/\mathcal{P}_{n:s,t}}
{\rm tr}[T^{a_1}\cdots T^{a_s}]{\rm tr}[T^{b_{s+1}} \cdots T^{b_{s+t}}]
{\rm tr}[T^{c_{s+t+1}}\cdots T^{c_n}]
\notag
\\
& &
\hskip 7.0truecm
\times A_{n:s,t}^{(2)}(a_1,\cdots ,a_s;b_{s+1} ,\cdots, b_{s+t} ;c_{s+t+1},\cdots, c_n )
\notag \\
&+&\sum_{S_n/\mathcal{P}_{n:1}} {\rm tr}[T^{a_1}T^{a_2}\cdots T^{a_n}] A_{n:1B }^{(2)}(a_1,a_2,\cdots ,a_n)\,.
\end{eqnarray}
The partial amplitudes multiplying any trace of color matrices are cyclically symmetric in the indices within the trace. The
summations count each color structure exactly once. Specifically,
when the sets are of different lengths ($r-1\neq \frac{n}{2}$, $s\neq t$, $t\neq\frac{n-s}{2}$ and $3s\neq n$)
the sets $\mathcal{P}_{n:\lambda}$
are
\begin{align}
\mathcal{P}_{n:1}&=Z_{n}(a_1,\cdots, a_n),\notag\\
\mathcal{P}_{n:r}&=Z_{r-1}(a_1,\cdots,a_{r-1})\times Z_{n+1-r}(a_r,\cdots,a_n), \;\;\ r > 1, r-1 \neq n+1-r \notag\\
\mathcal{P}_{n:s,t}&=Z_s(a_1,\cdots,a_s)\times Z_t(a_{s+1},\cdots,a_{s+t})\times Z_{n-s-t}(a_{s+t+1},\cdots,a_n)\,.
\end{align}
When the sets have equal lengths, to avoid double counting
\begin{align}
\mathcal{P}_{2m:m+1}&=Z_{m}(a_1,\cdots,a_m)\times Z_{m}(a_{m+1},\cdots,a_{2m})\times Z_2, \\
\mathcal{P}_{n:s,s}&= Z_s(a_1,\cdots,a_s)\times Z_s(a_{s+1},\cdots,a_{2s})\times Z_{n-2s}(a_{2s+1},\cdots,a_n)\times Z_2,\notag\\
\mathcal{P}_{3m:m,m}&=Z_{m}(a_1,\cdots,a_m)\times Z_{m}(a_{m+1},\cdots,a_{2m})\times Z_{m}(a_{2m+1},\cdots,a_{3m})\times S_{3},\notag \\
\mathcal{P}_{2m:2s,m-s}&= Z_{2s}(a_1,\cdots,a_{2s})\times Z_{m-s}(a_{2s+1},\cdots,a_{s+m})\times
Z_{m-s}(a_{s+m+1},\cdots,a_{2m})\times Z_2 \,.\notag
\end{align}
For example, for $A_{6:2,2}(a,b;c,d;e,f)$ the manifest symmetry is
\begin{align}
\mathcal{P}_{6:2,2}=Z_2(a,b)\times Z_2(c,d)\times Z_2(e,f) \times S_3(\{a,b\},\{c,d\},\{e,f\})
\end{align}
which means the summation for this particular term runs over $6!/(2^3\, 3!)=15$ distinct orderings.
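This counting can be confirmed by brute force: enumerating all $6!$ orderings and identifying those related by the manifest symmetry leaves exactly 15 representatives (a small sketch; the helper `canon` is ours):

```python
from itertools import permutations

def canon(order):
    # Canonical representative of (a,b;c,d;e,f) under Z2 x Z2 x Z2 x S3:
    # sort within each pair, then sort the three pairs themselves
    pairs = sorted(tuple(sorted(order[i:i + 2])) for i in (0, 2, 4))
    return tuple(pairs)

reps = {canon(p) for p in permutations(range(1, 7))}
print(len(reps))  # 15
```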
The above expansion is valid for both an $SU(N_c)$ and a $U(N_c)$ gauge theory. In the expansion for $SU(N_c)$, the color trace terms containing a
trace of a single generator, ${\rm tr}[T^{a_i}]$, are omitted. Specifically these are the terms $A_{n:2}^{(2)}$, $A_{n:1,s}^{(2)}$ and $A_{n:1,1}^{(2)}$.
These functions are nonetheless consistent gauge-invariant objects whose role is to cancel other terms.
Letting one or more of the external gluons lie in the $U(1)$ part of $U(N_c)$ and requiring the full amplitude to vanish generates
relations between the partial amplitudes known as decoupling identities. For example, letting leg $1$ be a $U(1)$ gluon and examining the
coefficient of ${\rm tr}[T^{a_2}T^{a_3}\cdots T^{a_n}]$ we obtain
\begin{equation}
A_{n:2}^{(2)}(1 ; 2,3,\cdots,n) +A_{n:1}^{(2)}(1,2,3,\cdots, n ) +A_{n:1}^{(2)}(2,1,3,\cdots, n )+\cdots +A_{n:1}^{(2)}(2,\cdots, 1, n ) =0 \,.
\end{equation}
This allows $A_{n:2}^{(2)}$ to be expressed in terms of the
$A_{n:1}^{(2)}$. Similarly the $A_{n:1,s}^{(2)}$ and $A_{n:1,1}^{(2)}$ may be expressed in terms of the $A_{n:1}^{(2)}$ and $A_{n:r}^{(2)}, r> 2$.
The decoupling identities can be used iteratively to express the sub-sub-leading $SU(N_c)$ terms $A_{n:s,t}^{(2)}$, $s=1,2$,
in terms of $A_{n:1}^{(2)}$ and $A_{n:r}^{(2)}$, $r> 2$, although this may not yield the most efficient expressions for these.
Finally, if we consider $A_{n:1B}^{(2)}$,
the decoupling identities provide consistency constraints but do not relate these to the other amplitudes:
\begin{equation}
A_{n:1B}^{(2)}(1,2,3,\cdots, n ) +A_{n:1B}^{(2)}(2,1,3,\cdots, n )+\cdots+ A_{n:1B}^{(2)}(2,\cdots, 1, n )=0\,.
\label{eq:decoupleB}
\end{equation}
Decoupling identities do not exhaust the color relations and further constraints arise
from recursive approaches~\citep{Edison:2011ta,Edison:2012fn}
which imply extra relations
involving both $A_{n:1B}^{(2)}$ and other amplitudes. For $n=5$ these contain sufficient information to determine $A_{5:1B}^{(2)}$, but at $n=6$
and beyond $A_{n:1B}^{(2)}$ is a further independent function which must be determined.
In summary, the minimal set of color trace amplitudes which must be determined to fully specify the amplitude consists of
$A_{n:1}^{(2)}$, $A_{n:r}^{(2)}$ with $r >2$, $A_{n:s,t}^{(2)}$ with $s >2$ and $A_{n:1B}^{(2)}$.
At six-point all partial amplitudes can be expressed in terms of $A_{6:1}^{(2)}$, $A_{6:3}^{(2)}$, $A_{6:4}^{(2)}$ and $A_{6:1B}^{(2)}$.
Explicitly, the amplitudes specific to $U(N_c)$
are given by
\begin{align}
A_{6:2}^{(2)}( 1;2,3,4,5,6 )&=-A_{6:1}^{(2)}( 1,2,3,4,5,6 )-A_{6:1}^{(2)}( 2,1,3,4,5,6 )-A_{6:1}^{(2)}( 2,3,1,4,5,6 )\notag\\
&-A_{6:1}^{(2)}( 2,3,4,1,5,6 )-A_{6:1}^{(2)}( 2,3,4,5,1,6 )\,,
\notag \\
A^{(2)}_{6:1,1}( 1;2;3,4,5,6 )&=-A^{(2)}_{6:3}( 1,2;3,4,5,6 )+\sum_{\sigma\in OP\{\bar\alpha\}\{\beta\}}A^{(2)}_{6:1}(\sigma)
\notag
\intertext{and}
A^{(2)}_{6:1,2}(1;2,3;4,5,6)&=-A^{(2)}_{6:4}(1,2,3;4,5,6)-A^{(2)}_{6:4}(2,1,3;4,5,6)\notag\\
&-A^{(2)}_{6:3}(2,3;1,4,5,6)-A^{(2)}_{6:3}(2,3;4,1,5,6)-A^{(2)}_{6:3}(2,3;4,5,1,6).
\label{eq:U1decouple}
\end{align}
where $\{\bar\alpha\}=\{2,1\}$, $\{\beta\}=\{3,4,5,6\}$ and $OP\{S_1\}\{S_2\}$ is the set of all mergers of $S_1$ and $S_2$ which preserve the
order of $S_1$ and $S_2$ within the merged list. Note that the first list in these sums appears reversed, although for a list of two legs this is immaterial.
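The ordered mergers $OP\{S_1\}\{S_2\}$ are straightforward to enumerate; a minimal sketch (the function name is ours) confirms that the sum appearing in $A^{(2)}_{6:1,1}$ runs over $\binom{6}{2}=15$ orderings:

```python
from itertools import combinations

def ordered_mergers(s1, s2):
    """All interleavings of s1 and s2 preserving each list's internal order."""
    n = len(s1) + len(s2)
    result = []
    for slots in combinations(range(n), len(s1)):
        merged, i1, i2 = [], 0, 0
        for pos in range(n):
            if pos in slots:
                merged.append(s1[i1]); i1 += 1
            else:
                merged.append(s2[i2]); i2 += 1
        result.append(tuple(merged))
    return result

mergers = ordered_mergers([2, 1], [3, 4, 5, 6])
print(len(mergers))  # 15
```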
%
The remaining $SU(N_c)$ partial amplitude is given by
\begin{align}
&A^{(2)}_{6:2,2}(1,2;3,4;5,6)=-\sum_{Z_2(\mu)}\;\sum_{\eta\;\in OP\{1\}\{\nu\}}
\;\sum_{\eta'\in OP\{2\}\{\rho\}} A^{(2)}_{6:4}(\eta;\eta')
\notag\\
&-\sum_{Z_2(\nu,\rho)}\;\sum_{\sigma\in OP\{\bar\mu\}\{\nu\}}
A^{(2)}_{6:3}(\rho;\sigma)
-\sum_{Z_2(\mu)}\sum_{Z_2(\{\nu,\rho\})}\;
\sum_{\sigma\in OP\{2\}\{\rho\}}A^{(2)}_{6:1,2}(1;\nu;\sigma),
\label{eq:a622dec}
\end{align}
where $\{\bar\mu\}=\{2,1\}$, $\{\nu\}=\{3,4\}$ and $\{\rho\}=\{5,6\}$.
This is an inefficient expression with considerable cancellation amongst the terms on the RHS. For example, the RHS of the above contains
terms with double poles in complex momenta whilst $A^{(2)}_{6:2,2}$ does not.
We calculate all eight $U(N_c)$ functions directly and we use (\ref{eq:decoupleB}), (\ref{eq:U1decouple}) and (\ref{eq:a622dec}) as consistency checks.
\section{Structure of the Amplitudes}
The IR singular structure of a color partial amplitude is determined by general theorems~\cite{Catani:1998bh}. Consequently we can split the amplitude into
singular terms $U^{(2)}_{n:\lambda}$ and finite terms $F^{(2)}_{n:\lambda}$,
\begin{equation}
\label{definitionremainder}
A^{(2)}_{n:\lambda} = U^{(2)}_{n:\lambda} + F^{(2)}_{n:\lambda} + {\mathcal O}(\epsilon)\, .
\end{equation}
As the all-plus
tree amplitude vanishes, $U^{(2)}_{n:\lambda}$ simplifies considerably and is at worst $1/\epsilon^2$~\cite{Kunszt:1994np}.
Specifically, $U^{(2)}_{n:1}$ is proportional to the one-loop amplitude,
\begin{equation}
U_{n:1}^{(2)} =A_{n:1}^{(1)} \times
\left[ - \sum_{i=1}^{n} \frac{1}{\epsilon^2} \left(\frac{\mu^2}{-s_{i,i+1}}\right)^{\epsilon} \right]
\end{equation}
and the two-loop IR divergences for the other un-renormalised partial amplitudes are presented
in a color trace basis in ref.~\cite{Dunbar:2019fcq}.
The finite remainder function $F_{n:\lambda}^{(2)}$ can be split into polylogarithmic and rational pieces,
\begin{equation}
F_{n:\lambda}^{(2)} = P^{(2)}_{n:\lambda}+R_{n:\lambda}^{(2)}\; .
\end{equation}
We calculate the former piece using four-dimensional unitarity and the latter using recursion.
\def{\rm F}{{\rm F}}
The one-loop all-plus amplitude is rational to leading order in $\epsilon$ and in four-dimensional unitarity effectively provides an additional
on-shell vertex~\citep{Dunbar:2016cxp,Dunbar:2017nfy}. The two-loop cuts effectively become one-loop cuts with a single insertion of this vertex
which
yield~\footnote{The functions ${\rm F}^{\rm 2m }$ and ${\rm F}^{\rm 1m }$ are the polylogarithmic parts of the two-mass-easy and one-mass one-loop box functions respectively.}
\begin{equation}
P_{n:\lambda}^{(2)} = \sum_{i} c^\lambda_{i} {\rm F}^{2m}_{i}\,,
\end{equation}
where $c^\lambda_i$ are rational coefficients,
\begin{eqnarray}
{\rm F}^{\rm 2m }(S,T,K^2_2,K_4^2) &=&
\mathop{\hbox{\rm Li}}\nolimits_2\left(1-{ K_2^2 \over S }\right)
+\ \mathop{\hbox{\rm Li}}\nolimits_2\left(1-{K_2^2 \over T}\right)
+ \mathop{\hbox{\rm Li}}\nolimits_2\left(1-{ K_4^2 \over S }\right)
+ \ \mathop{\hbox{\rm Li}}\nolimits_2\left(1-{K_4^2 \over T}\right)
\notag \\
& & -\mathop{\hbox{\rm Li}}\nolimits_2\left(1-{K_2^2K_4^2\over S T}\right)
+{1\over 2} \ln^2\left({ S \over T}\right)
\end{eqnarray}
and, in the specific case where $K_2^2=0$,
\begin{eqnarray}
{\rm F}^{\rm 2m }(S,T,0,K_4^2) &\equiv & {\rm F}^{\rm 1m }(S,T,K_4^2)
\notag\\ &=&
\mathop{\hbox{\rm Li}}\nolimits_2\left(1-{ K_4^2 \over S }\right)
+ \ \mathop{\hbox{\rm Li}}\nolimits_2\left(1-{K_4^2 \over T}\right)
+{1\over 2} \ln^2\left({ S \over T}\right)+{\pi^2 \over 6} \; .
\end{eqnarray}
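As a quick numerical check of the reduction ${\rm F}^{\rm 2m}(S,T,0,K_4^2)={\rm F}^{\rm 1m}(S,T,K_4^2)$, the box functions above can be coded directly from their definitions. The sketch below (standard-library Python, illustrative Euclidean-region invariants chosen so that every dilogarithm argument lies in $[0,1]$) is not part of the calculation, only a sanity check:

```python
import math

def Li2(x):
    """Dilogarithm Li_2(x) via its defining series (valid for |x| <= 1)."""
    if x == 1.0:
        return math.pi**2 / 6
    return sum(x**k / k**2 for k in range(1, 2000))

def F2m(S, T, K2sq, K4sq):
    """Polylogarithmic part of the two-mass-easy box function."""
    return (Li2(1 - K2sq/S) + Li2(1 - K2sq/T)
            + Li2(1 - K4sq/S) + Li2(1 - K4sq/T)
            - Li2(1 - K2sq*K4sq/(S*T))
            + 0.5 * math.log(S/T)**2)

def F1m(S, T, K4sq):
    """Polylogarithmic part of the one-mass box function."""
    return (Li2(1 - K4sq/S) + Li2(1 - K4sq/T)
            + 0.5 * math.log(S/T)**2 + math.pi**2 / 6)
```

The reduction follows because at $K_2^2=0$ three of the dilogarithms collapse to $\mathop{\rm Li}_2(1)=\pi^2/6$, leaving a net $+\pi^2/6$; the $S\leftrightarrow T$ symmetry of ${\rm F}^{\rm 2m}$ is also manifest term by term.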
Defining\footnote{Here a null momentum is represented as a
pair of two component spinors $p^\mu =\sigma^\mu_{\alpha\dot\alpha}
\lambda^{\alpha}\bar\lambda^{\dot\alpha}$.
We are using a spinor helicity formalism with the usual
spinor products $\spa{a}.{b}=\epsilon_{\alpha\beta}
\lambda_a^\alpha \lambda_b^{\beta}$ and
$\spb{a}.{b}=-\epsilon_{\dot\alpha\dot\beta} \bar\lambda_a^{\dot\alpha} \bar\lambda_b^{\dot\beta}$.
\noindent{Also}
$ s_{ab}=(k_a+k_b)^2=\spa{a}.b \spb{b}.a=\langle a|b|a]$, $t_{abc}=(k_a+k_b+k_c)^2$, $[a|P_{bc}|d\rangle =[ab]\langle bd\rangle+ [ac]\langle cd\rangle$ etc.,
$ {\rm tr}_-[ijkl]\equiv {\rm tr}( \frac{(1-\gamma_5)}{2} \slashed{k}_{i} \slashed{k}_{j} \slashed{k}_{k} \slashed{k}_{l} ) =\spa{i}.{j}\spb{j}.{k}\spa{k}.{l}\spb{l}.{i}
\equiv \langle i|jkl|i]$ ,\\
\noindent{
${\rm tr}_+[ijkl]\equiv {\rm tr}( \frac{(1+\gamma_5)}{2} \slashed{k}_{i} \slashed{k}_{j} \slashed{k}_{k} \slashed{k}_{l} ) =\spb{i}.{j}\spa{j}.{k}\spb{k}.{l}\spa{l}.{i}$
and $\epsilon(i,j,k,l)={\rm tr}_+[ijkl]-{\rm tr}_-[ijkl]$.}
}
\begin{align}
C^{2m}_{a}(a;b,c;d;e,f) &=
\frac{i}{3}
\frac{\spb{e}.f^2}{\spa{a}.b\spa{b}.c\spa{c}.d\spa{d}.a}
\times {\rm F}^{2m}(t_{abc},t_{bcd},s_{bc},s_{ef})
\notag\\
C^{2m}_{b}(a;b,c;d;e,f) &=
\frac{i}{3}
\frac{\spb{e}.f^2}{\spa{a}.b\spa{b}.d\spa{d}.c\spa{c}.a}
\times {\rm F}^{2m}(t_{abc},t_{bcd},s_{bc},s_{ef}),
\label{eq:2mcoefs}
\end{align}
and
\begin{align}
C^{1m}_{a}(a,b,c;d,e,f)&=
\frac{i}{3}
\frac{t_{abc}\langle c|dP_{abc}|a\rangle+\langle c|defP_{def}|a\rangle+\langle a|fP_{def}|c\rangle s_{ef}}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{c}.d\spa{d}.e\spa{e}.f\spa{f}.a}\times {\rm F}^{1m}(s_{ab},s_{bc},t_{def})
\notag\\
\notag\\
C^{1m}_{b}(a,b,c;d,e,f)&=
\frac{i}{3}
\left(\frac{\langle a|dP_{abc}|c\rangle\langle c|dP_{abc}|a\rangle+\spa{c}.a(s_{ef}\langle a|fP_{abc}|c\rangle-\langle a|P_{abc}\,efd|c\rangle)}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{a}.d\spa{d}.c\spa{c}.e\spa{e}.f\spa{f}.a}\right)
\notag \\
& \hskip 10.0truecm\times {\rm F}^{1m}(s_{ab},s_{bc},t_{def})
\notag\\
\notag\\
C^{1m}_{c}(a,b,c;d,e,f)&=
- i\,
\frac{\spa{c}.a[d|ef|d]-[d|P_{abc}|c\rangle[d|P_{abc}|a\rangle}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{c}.e\spa{e}.f\spa{f}.a}\times {\rm F}^{1m}(s_{ab},s_{bc},t_{def})
\notag\\
\notag\\
C^{1m}_{d}(a,b,c;d,e,f)&=
i\,
\frac{[d|P_{abc}|a\rangle[d|f|c\rangle-[d|P_{abc}|c\rangle[d|e|a\rangle}
{\spa{a}.b\spa{b}.c\spa{a}.e\spa{e}.c\spa{c}.f\spa{f}.a}\times {\rm F}^{1m}(s_{ab},s_{bc},t_{def})
\notag\\
\notag\\
C^{1m}_{e}(a,b,c;d,e,f)&=
-2i\,
\frac{t_{abc}^2}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{d}.e\spa{e}.f\spa{f}.d}\times {\rm F}^{1m}(s_{ab},s_{bc},t_{def})
\notag\\
\notag\\
C^{1m}_{f}(a,b,c;d,e,f)&=
-2i\,
\frac{[d|P_{abc}|c\rangle^2}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{c}.e\spa{e}.f\spa{f}.c}\times {\rm F}^{1m}(s_{ab},s_{bc},t_{def})
\notag\\
\notag\\
C^{1m}_{g}(a,b,c;d,e,f)&=
-2i\,
\frac{\spb {d}.e^2\spa{c}.a^2}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{a}.c\spa{c}.f\spa{f}.a}\times {\rm F}^{1m}(s_{ab},s_{bc},t_{def}).
\label{eq:1mcoefs}
\end{align}
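The spinor-helicity conventions collected in the footnote above can be verified numerically. The following standard-library sketch builds explicit two-component spinors for real null momenta via a light-cone construction; the sign of $\spb{a}.b$ is fixed here only up to little-group phases, but the relations checked below ($s_{ab}=\spa{a}.b\spb{b}.a$, the Schouten identity, and ${\rm tr}_-+{\rm tr}_+$ reducing to the standard four-point $\gamma$-matrix trace) are phase-independent. The momenta used in the check are illustrative.

```python
import cmath

def spinors(p):
    """Two-component spinors for a real null momentum p = (E, px, py, pz), E + pz > 0."""
    E, px, py, pz = p
    pp = cmath.sqrt(E + pz)                       # sqrt of the light-cone component p^+
    lam = (pp, (px + 1j*py) / pp)                 # holomorphic spinor lambda
    lamt = tuple(z.conjugate() for z in lam)      # real momenta: tilde-lambda = conjugate
    return lam, lamt

def ang(a, b):
    """Angle bracket <ab>."""
    la, lb = spinors(a)[0], spinors(b)[0]
    return la[0]*lb[1] - la[1]*lb[0]

def sqr(a, b):
    """Square bracket [ab], with the sign fixed so that s_ab = <ab>[ba]."""
    return -ang(a, b).conjugate()

def dot(a, b):
    """Minkowski product a.b in the mostly-minus signature."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]
```

Such numerical spinor checks are a convenient way to validate the sign conventions entering coefficients like those of eqs.~(\ref{eq:2mcoefs}) and (\ref{eq:1mcoefs}).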
Note that these six-point coefficients are conformally invariant, a feature noticed for the five-point all-plus amplitude in ref.~\citep{Henn:2019mvc}.
Using these definitions the results for $P_{6:\lambda}^{(2)}$ are:
\begin{align}
P_{6:1}^{(2)}(a,b,c,d,e,f)&=\sum_{\mathcal{P}_{6:1}}\Bigl( C^{1m}_{a}(a,b,c;d,e,f) + C^{2m}_{a}(a;b,c;d;e,f)\Bigr),
\label{eq:poly61}
\\
P_{6:3}^{(2)}(a,b;c,d,e,f)&
\notag\\
=\sum_{\mathcal{P}_{6:3}}\Bigg( &C^{1m}_{a}(a,b,c;d,e,f)+C^{1m}_{a}(a,c,b;d,e,f)+C^{1m}_{a}(c,a,b;d,e,f)
\notag\\
&-C^{1m}_{b}(a,c,d;b,e,f)-C^{1m}_{b}(c,a,d;b,e,f)-C^{1m}_{b}(c,d,a;b,e,f)
\notag\\
&-C^{1m}_{b}(d,e,f;c,a,b)+\frac12C^{1m}_{g}(d,e,f;a,b,c)
\notag\\
&+4C^{2m}_{a}(c;d,e;f;a,b)
+C^{2m}_{a}(b;e,f;a;c,d)+C^{2m}_{a}(f;b,a;e;c,d)
\notag\\
&-C^{2m}_{b}(e;f,a;b;c,d)-C^{2m}_{b}(f;e,b;a;c,d)
\notag\\
&+C^{2m}_{b}(d;e,b;f;a,c)-C^{2m}_{a}(b;d,e;f;a,c)-C^{2m}_{a}(d;e,f;b;a,c)\Bigg),
\label{eq:poly63}
\end{align}
\begin{align}
P_{6:4}^{(2)}(a,b,c;d,e,f)&
\notag\\
=\sum_{\mathcal{P}_{6:4}}\Bigg(
&\frac13 C^{1m}_{e}(a,b,c;d,e,f)-C^{1m}_{a}(a,b,c;f,e,d)
\notag\\
&+C^{1m}_{b}(d,b,a;c,e,f)+C^{1m}_{b}(b,d,a;c,e,f)+C^{1m}_{b}(b,a,d;c,e,f)
\notag\\
&+C^{2m}_{a}(a;f,e;b;c,d)-\frac{1}{2}C^{2m}_{b}(a;b,f;e;c,d)-\frac{1}{2}C^{2m}_{b}(f;a,e;b;c,d)
\notag\\
&+C^{2m}_{b}(a;b,d;c;e,f)-C^{2m}_{a}(a;b,c;d;e,f)-C^{2m}_{a}(d;a,b;c;e,f)\Bigg),
\label{eq:poly64}
\end{align}
\begin{align}
P_{6:2,2}^{(2)}(a,b;c,d;e,f)&
\notag\\
=\frac12\sum_{\mathcal{P}_{6:2,2}}\Bigg(
&C^{1m}_{g}(a,b,c;e,f,d)+C^{1m}_{g}(b,a,c;e,f,d)+C^{1m}_{g}(b,c,a;e,f,d)
\notag\\
&+6C^{2m}_{a}(d;a,b;c;e,f)-3C^{2m}_{b}(a;b,c;d;e,f)-3C^{2m}_{b}(b;a,d;c;e,f)\Bigg),
\label{eq:poly622}
\end{align}
and
\begin{align}
P_{6:1B}^{(2)}(a,b,c,d,e,f)&
\notag\\
=\sum_{\mathcal{P}_{6:1}}\Bigg(
&C^{1m}_{f}(a,b,c;f,d,e)-C^{1m}_{f}(c,b,a;d,e,f)
\notag\\
&+C^{1m}_{f}(b,f,e;a,c,d)+C^{1m}_{f}(f,b,e;a,c,d)+C^{1m}_{f}(f,e,b;a,c,d)
\notag\\
&-C^{1m}_{f}(f,b,c;a,d,e)-C^{1m}_{f}(b,f,c;a,d,e)-C^{1m}_{f}(b,c,f;a,d,e)
\notag\\
&+6C^{2m}_{b}(f;b,e;d;a,c)-6C^{2m}_{a}(b;f,e;d;a,c)-6C^{2m}_{a}(f;e,d;b;a,c)
\notag\\
&+6C^{2m}_{a}(a;b,c;d;e,f)+3C^{2m}_{a}(f;b,c;e;a,d)+3C^{2m}_{a}(c;e,f;b;a,d)
\notag\\
&-3C^{2m}_{b}(b;c,f;e;a,d)-3C^{2m}_{b}(c;e,b;f;a,d)\Bigg).
\label{eq:poly61B}
\end{align}
This expression for $P_{6:1B}^{(2)}$ matches the $n$-point form of $P_{n:1B}^{(2)}$ given in \cite{Dunbar:2020wdh}.
The $U(N_c)$ pieces are:
\begin{align}
P_{6:2}^{(2)}(a;b,c,d,e,f)&
\notag\\
=\sum_{\mathcal{P}_{6:2}}
\Bigg(&C^{1m}_{b}(b,c,d;a,e,f)+C^{1m}_{c}(b,c,d;a,e,f)-C^{1m}_{a}(a,b,c;d,e,f)
\notag\\
&-C^{1m}_{a}(b,a,c;d,e,f)-C^{1m}_{a}(b,c,a;d,e,f)-2C^{2m}_{a}(b;c,d;e;f,a)
\notag\\
&+C^{2m}_{b}(b;c,a;d;e,f)-C^{2m}_{a}(a;b,c;d;e,f)-C^{2m}_{a}(b;c,d;a;e,f)\Bigg),
\label{eq:poly62}
\end{align}
\begin{align}
P_{6:1,1}^{(2)}(a;b;c,d,e,f)&
\notag\\
=\sum_{\mathcal{P}_{6:1,1}}
\Bigg(
&C^{1m}_{d}(c,d,e;a,b,f)-3C^{2m}_{a}(c;d,e;f;a,b)
\notag\\
&-C^{1m}_{c}(b,c,d;a,e,f)-C^{1m}_{c}(c,b,d;a,e,f)-C^{1m}_{c}(c,d,b;a,e,f)
\notag\\
&+3C^{2m}_{a}(b;c,d;e;f,a)+3C^{2m}_{a}(c;d,e;b;f,a)-3C^{2m}_{b}(c;d,b;e;f,a)\Bigg),
\label{eq:poly611}
\end{align}
and
\begin{align}
P_{6:1,2}^{(2)}(a;b,c;d,e,f)&
\notag\\
=\sum_{\mathcal{P}_{6:1,2}}
\Bigg(
&\frac12(C^{1m}_{c}(c,b,d;a,e,f)+C^{1m}_{c}(b,c,d;a,e,f)+C^{1m}_{c}(b,d,c;a,e,f))
\notag\\
&+\frac12(C^{1m}_{c}(c,b,d;a,f,e)+C^{1m}_{c}(b,c,d;a,f,e)+C^{1m}_{c}(b,d,c;a,f,e))
\notag\\
&-\frac12(C^{1m}_{g}(a,d,e;b,c,f)+C^{1m}_{g}(d,a,e;b,c,f)+C^{1m}_{g}(d,e,a;b,c,f))
\notag\\
&-\frac12C^{1m}_{g}(d,e,f;b,c,a)-C^{1m}_{c}(d,e,f;a,b,c)
\notag\\
&-C^{1m}_{d}(b,d,e;a,c,f)-C^{1m}_{d}(d,b,e;a,c,f)-C^{1m}_{d}(d,e,b;a,c,f)
\notag\\
&+3C^{2m}_{b}(c;b,f;e;a,d)+3C^{2m}_{b}(b;e,c;f;a,d)
\notag\\
&-3C^{2m}_{a}(b;e,f;c;a,d)-3C^{2m}_{a}(e;b,c;f;a,d)
\notag\\
&+3C^{2m}_{a}(c;d,e;f;a,b)+3C^{2m}_{a}(d;e,f;c;a,b)-3C^{2m}_{b}(d;e,c;f;a,b)
\notag\\
&-3C^{2m}_{a}(a;d,e;f;b,c)-3C^{2m}_{a}(d;e,f;a;b,c)+3C^{2m}_{b}(d;e,a;f;b,c)\Bigg)\,.
\label{eq:poly612}
\end{align}
\section{Rational Terms}
As $R_{n:\lambda}^{(2)}$ is a rational function we may calculate it using recursion techniques
by performing a complex shift of its external legs \cite{Britto:2005fq,Risager:2005vk} and analysing the singularities of the resulting complex function $R(z)$.
This is complicated because the amplitude has double poles in complex momenta.
The leading poles are determined by the amplitude's factorisation but there are no general theorems that determine the subleading poles.
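The reconstruction underlying the recursion can be illustrated on a toy rational function: if $R(z)\to 0$ as $z\to\infty$, Cauchy's theorem gives $R(0)=-\sum_i {\rm Res}\!\left[R(z)/z\right]_{z=z_i}$, summed over the finite poles. The standard-library sketch below uses a hypothetical function with simple poles only; it is the presence of double poles in the actual amplitudes that necessitates the augmented treatment:

```python
import cmath

def residue(f, z0, r=1e-2, n=2048):
    """Residue of f at z0: average of f(z)*(z - z0) over a small circle around z0."""
    pts = (z0 + r * cmath.exp(2j * cmath.pi * k / n) for k in range(n))
    return sum(f(z) * (z - z0) for z in pts) / n

# toy rational function vanishing at infinity, with simple poles at z = 1 and z = -2
R = lambda z: 1.0 / ((z - 1.0) * (z + 2.0))

# reconstruct R(0) from the residues of R(z)/z at the finite poles
recon = -(residue(lambda z: R(z) / z, 1.0) + residue(lambda z: R(z) / z, -2.0))
```

The residues are evaluated by the trapezoidal rule on a small circle, which is exponentially accurate for analytic integrands.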
We use color-dressed augmented recursion, as reviewed in \cite{Dunbar:2017nfy,Dunbar:2019fcq}, to overcome the issue of double poles.
This requires generating certain doubly off-shell currents
which we present in appendix~\ref{app_currents}. The specific rational pieces are:
\subsection{$\mathbf{R_{6:1}^{(2)}}$}
\begin{align}
R_{6:1}^{(2)}(a,b,c,d,e,f)&= \frac{i}{9}\sum_{\mathcal{P}_{6:1}}
\frac{G_{6:1}^1+G_{6:1}^2+G_{6:1}^3+G_{6:1}^4+G_{6:1}^5}{\spa{a}.b\spa{b}.c\spa{c}.d\spa{d}.e\spa{e}.f\spa{f}.a}
\intertext{where}
G_{6:1}^1(a,b,c,d,e,f)&=\frac{s_{cd}s_{df}\langle f|a\,P_{abc}|e\rangle}
{\spa{f}.e\,t_{abc}}
+\frac{s_{ac}s_{cd}\langle a|f\,P_{def}|b\rangle}
{\spa{a}.b\,t_{def}},
\notag\\
G_{6:1}^2(a,b,c,d,e,f)&=\frac{\spb{a}.b\spb{e}.f}{\spa{a}.b\spa{e}.f}
\spa{a}.e^2\spa{b}.f^2
+\frac12\frac{\spb{f}.a\spb{c}.d}{\spa{f}.a\spa{c}.d}
\spa{a}.c^2\spa{d}.f^2,
\notag\\
G_{6:1}^3(a,b,c,d,e,f)&=\frac{s_{df}\spa{f}.a\spa{c}.d\spb{a}.c\spb{d}.f}{t_{abc}},
\notag\\
G_{6:1}^4(a,b,c,d,e,f)&=\frac{\langle a|be|f\rangle t_{abc}}{\spa{a}.f}
\intertext{and}
G_{6:1}^5(a,b,c,d,e,f)&=s_{fa} s_{bc}
+ s_{ac} s_{be}
+ \frac52 s_{af} s_{cd}
-8 [a|bcf|a\rangle
-8 [a|cde|a\rangle
-\frac12 [a|cdf|a\rangle
- \frac{11}2 [b|cef|b\rangle \,.
\label{eq:leadingrat}
\end{align}
This was first calculated in \cite{Dunbar:2016gjb} and later presented in an alternative form \cite{Dunbar:2017nfy}. It was subsequently confirmed
by Badger et al.~\cite{Badger:2016ozq}.
\subsection{$\mathbf{R_{6:3}^{(2)}}$}
\begin{align}
R_{6:3}^{(2)}(a,b;c,d,e,f)&=
\sum_{\mathcal{P}_{6:3}}
\Bigg[\frac{i}3\Big(H^1_{6:3}(a,b,c,d,e,f)-H^1_{6:3}(a,b,c,d,f,e)\Big)
\notag\\
&+\frac{i}3
\frac{\Big(G_{6:3}^2(a,b,c,d,e,f)+G_{6:3}^3(a,b,c,d,e,f)+G_{6:3}^4(a,b,c,d,e,f)\Big)}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{d}.e\spa{e}.f\spa{f}.d}
\notag\\
&+\frac{i}{12}
\frac{G_{6:3}^5(a,b,c,d,e,f)}
{\spa{a}.b\spa{b}.c\spa{c}.d\spa{d}.e\spa{e}.f\spa{f}.a}
\Bigg]
\label{eq:r63full}
\end{align}
where
\begin{align}
H_{6:3}^1(a,b,c,d,e,f)&=
\frac{G_{6:3}^1(a,b,c,d,e,f)}
{\spa{a}.b\spa{b}.c\spa{c}.d^2\spa{d}.e\spa{e}.f\spa{f}.a}
+ \frac{\spb{c}.d}{\spa{c}.d^2}
\frac{\spa{c}.f \spa{d}.b [b|f|d\rangle}
{\spa{a}.b \spa{a}.f \spa{b}.f \spa{d}.e \spa{e}.f}
\notag\\
G_{6:3}^1(a,b,c,d,e,f)&=
s_{ce} \langle c|bf|d\rangle - s_{cf} \langle c|be|d\rangle
\notag\\
G_{6:3}^2(a,b,c,d,e,f)&=\frac{[d|P_{def}b|a]\langle d|fP_{def}|a\rangle
+s_{de}[f|cbd|f\rangle
+[b|df|e]\langle b|c P_{abc}|e\rangle}
{t_{def}}
\notag\\
G_{6:3}^3(a,b,c,d,e,f)&=-\frac{s_{df} \langle d|fb|c\rangle [c|P_{abc}|e\rangle}
{\spa{d}.e t_{def}}
-
\frac{s_{de} \langle f|db|c\rangle [c|d|e\rangle}
{\spa{e}.f t_{def}}
\notag\\
G_{6:3}^4(a,b,c,d,e,f)&=
-s_{bd}s_{de} - [a|bde|a\rangle + [b|cde|b\rangle
-[a|bdf|a\rangle
\notag\\
&+[b|cdf|b\rangle
+[b|cef|b\rangle
-[b|def|b\rangle
\notag\\
G_{6:3}^5(a,b,c,d,e,f)&=
-4s_{ac}^2 + 2s_{ab}s_{ad} - 2s_{ac}s_{ad}
+2s_{ab}s_{ae} - 2s_{ac}s_{ae} + 2s_{bd}^2 - 2s_{be}^2
+2s_{bf}^2
\notag\\
&- 8s_{ac}s_{cd} + 4s_{bc}s_{cd} +
12 s_{bd}s_{cd} + 6 s_{cd}^2 - 8 s_{ac} s_{ce} +
12 s_{bc} s_{ce} + 16 s_{bd} s_{ce}
\notag\\
&+ 4 s_{be} s_{ce} +
8 s_{cd} s_{ce} + 2 s_{c e}^2 + 2 s_{c f}^2 - 8 s_{a c} s_{d e} -
4 s_{a d} s_{d e} - 4 s_{b c} s_{d e} + 4 s_{c d} s_{d e}
\notag\\
& +4 s_{c e} s_{d e} - 8 [a|bce|a\rangle
- 39 [a|bcf|a\rangle -
18 [a|bdf|a\rangle + 2 [a|bef|a\rangle
\notag\\
&- 10 [a|cdf|a\rangle - 2 [a|cef|a\rangle - 4 [a|def|a\rangle + 8 [b|cde|b\rangle -
4 [b|cdf|b\rangle
\notag\\
&- 4 [b|cef|b\rangle - 4 [b|def|b\rangle -
4 [c|def|c\rangle \,.
\label{eq:r63pieces}
\end{align}
\subsection{$\mathbf{R_{6:4}^{(2)} }$}
\begin{align}
R_{6:4}^{(2)}(a,b,c,d,e,f)&=\frac{i}{36}\sum_{\mathcal{P}_{6:4}}
\Bigg[\frac{\Big(G_{6:4}^{1}(a,b,c,d,e,f)+G_{6:4}^{2}(a,b,c,d,e,f)\Big)
}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{d}.e\spa{e}.f\spa{f}.d}
\notag\\
&\hspace{1.3cm}+12\frac{\Big(G^{3}_{6:4}(a,b,c,d,e,f)+G^{4}_{6:4}(a,b,c,d,e,f)\Big)}{\spa{a}.b\spa{c}.d\spa{d}.e\spa{e}.f\spa{f}.c}
\Bigg],
\end{align}
%
where
\begin{align}
G^{1}_{6:4}(a,b,c,d,e,f)&=\frac{4\,\langle e|P_{abc}a|b\rangle[e|dP_{abc}|b]}{t_{abc}},
\notag\\
G^{2}_{6:4}(a,b,c,d,e,f)&= s_{ad}^2 + 106\,s_{ab}s_{ad} + 102\,[a|bcd|a\rangle - 4\,[a|bde|a\rangle - 4\,[a|dbe|a\rangle,
\notag\\
G^{3}_{6:4}(a,b,c,d,e,f)&=-\frac{\spb{a}.b}{\spa{a}.b}\Bigl(\langle a|cd|b\rangle+\langle a|ef|b\rangle\Bigr),
\notag\\
G^{4}_{6:4}(a,b,c,d,e,f)&=[a|cd|b]+[a|ef|b].
\end{align}
\subsection{$\mathbf{R_{6:2,2}^{(2)}}$}
\begin{align}
R_{6:2,2}^{(2)}(a,b;c,d;e,f)&=\sum_{\mathcal{P}_{6:2,2}}
i\,\frac{G_{6:2,2}^{1}(a,b,c,d,e,f)+G_{6:2,2}^{2}(a,b,c,d,e,f)
}
{\spa{a}.b\spa{b}.c\spa{c}.a\spa{d}.e\spa{e}.f\spa{f}.d},
\end{align}
where
\begin{align}
G_{6:2,2}^{1}(a,b,c,d,e,f)&= \frac{\langle b|P_{abc}f|d\rangle[b|cP_{abc}|d] }
{t_{abc}},
\notag\\
G_{6:2,2}^{2}(a,b,c,d,e,f)&=s_{ad}[e|P_{bc}|e\rangle -s_{ac}[e|P_{fa}|e\rangle -s_{af}s_{ae} -s_{ae}s_{cd}.
\end{align}
\subsection{$\mathbf{R_{6:1B}^{(2)}}$}
An $n$-point formula was conjectured in \cite{Dunbar:2020wdh} and we find agreement.
\begin{equation}
R_{6:1B}^{(2)}(a,b,c,d,e,f)=R_{6:1B_1}^{(2)}(a,b,c,d,e,f)+R_{6:1B_2}^{(2)}(a,b,c,d,e,f)
\end{equation}
where
\begin{equation}
R_{6:1B_1}^{(2)}(a,b,c,d,e,f) =
{-2i \over {Cy(a, b, c, d, e, f) } }
\times\sum_{a\leq i < j < k < l \leq f } \epsilon(i,j,k,l)
\end{equation}
and
\begin{eqnarray}
& &R_{6:1B_2}^{(2)}(a,b,c,d,e,f) =4i
\Bigl(
{ \epsilon(c, d, e, f) \over Cy(a, b, d, e, c, f) } +
{ \epsilon(c, d, e, f)\over Cy(a, b, e, c, d, f) }+
{ \epsilon(c, d, e, f)\over Cy(a, b, e, d, c, f) }
\notag\\
& &+ { \epsilon(a, b, c, d)\over Cy(a, c, d, b, e, f) }
-{ \epsilon(a, b, c, f)\over Cy(a, c, d, e, b, f) } +
{ \epsilon(a, b, c, d)\over Cy(a, d, b, c, e, f) }
-{ \epsilon(a, c, d, f)\over Cy(a, d, b, e, c, f) }
\notag \\
& & +{ \epsilon(a, b, c, d) \over Cy(a, d, c, b, e, f) } +
{ \epsilon(a, b, d, f)\over Cy(a, d, c, e, b, f) }
-{ \epsilon(a, c, d, f)\over Cy(a, d, e, b, c, f) } +
{ \epsilon(a, b, d, f)\over Cy(a, d, e, c, b, f) }
\notag\\
& & -{ \epsilon(a, d, e, f)\over Cy(a, e, b, c, d, f) } +
{ \epsilon(a, c, e, f)\over Cy(a, e, b, d, c, f) } +
{ \epsilon(a, c, e, f)\over Cy(a, e, d, b, c, f) }
-{ \epsilon(a, b, e, f)\over Cy(a, e, d, c, b, f) }
\Bigr)
\; ,
\end{eqnarray}
where $Cy$ is the Parke-Taylor denominator,
\begin{equation}
Cy(a, b, c, d, e, f) = \spa{a}.b \spa{b}.c \spa{c}.d \spa{d}.e \spa{e}.f \spa{f}.a \,.
\end{equation}
\subsection{$\mathbf{R_{6:1,1}^{(2)}}$}
We also calculate the $U(N_c)$ amplitudes
\begin{align}
R_{6:1,1}^{(2)}(a;b;c,d,e,f) = \sum_{\mathcal{P}_{6:1,1}}&\Bigg(
i\,\frac{G_{6:1,1}^1(a,b,c,d,e,f)+G_{6:1,1}^2(a,b,c,d,e,f)}
{\spa{b}.c\spa{c}.d\spa{d}.b\spa{a}.e\spa{e}.f\spa{f}.a}
\notag\\
&+i\,\frac{G_{6:1,1}^3(a,b,c,d,e,f)}
{\spa{a}.c\spa{c}.d\spa{d}.b\spa{b}.e\spa{e}.f\spa{f}.a}\Bigg)
\end{align}
where
\begin{align}
G_{6:1,1}^1(a,b,c,d,e,f)&=\frac{[c|P_{bcd}\,efP_{bcd}\,b|c\rangle}{t_{bcd}},
\notag\\
G_{6:1,1}^2(a,b,c,d,e,f)&=2s_{ab} s_{cd}
-s_{ac}s_{ae}
+s_{ac}s_{cd}
+s_{ad} s_{cd}
-s_{cd}^2
-s_{cd} s_{ce}
-s_{cd} s_{cf}
-s_{cd} s_{df}
\notag\\
&-[a|cde|a\rangle
+ \frac12 [c|def|c\rangle
\notag
\intertext{and}
G_{6:1,1}^3(a,b,c,d,e,f)&=2 s_{ab} s_{ac}
+2 s_{ac}^2
+2 s_{ac} s_{ad}
+2 s_{ac} s_{ae}
+2 s_{ac} s_{bc}
- s_{ae} s_{bc}
+ s_{ab} s_{cd}
+ s_{ac} s_{cd}
\notag\\
&+ s_{ad} s_{cd}
-2 s_{ae} s_{cd}
+2 s_{ad} s_{ce}
-2 s_{ae} s_{ce}
- s_{cd} s_{ce}
- s_{ce}^2
- s_{cd} s_{cf}
+ s_{ce} s_{df}
\notag\\
&-\frac12 s_{cd} s_{ef}
+ 2 [a|cbd|a\rangle
+ 2 [a|cbe|a\rangle
+ 4 [a|cde|a\rangle
- [c|def|c\rangle.
\end{align}
\subsection{$\mathbf{R_{6:1,2}^{(2)}}$}
\begin{align}
R_{6:1,2}^{(2)}(a;b,c;d,e,f) &= \sum_{\mathcal{P}_{6:1,2}}
\frac{i\Big(G_{6:1,2}^1(a,b,c,d,e,f)+G_{6:1,2}^2(a,b,c,d,e,f)\Big)}
{\spa{e}.f\spa{f}.a\spa{a}.e\spa{b}.c\spa{c}.d\spa{d}.b}
\end{align}
where
\begin{align}
G_{6:1,2}^1(a,b,c,d,e,f)&= -\frac{[e|fP_{bcd}dbP_{bcd}|e\rangle+[e|P_{bcd}bcP_{bcd}a|e\rangle}{t_{bcd}},
\notag\\
G_{6:1,2}^2(a,b,c,d,e,f)&=[a|bce|a\rangle-2[b|dce|b\rangle+[b|def|b\rangle \,.
\end{align}
\subsection{$\mathbf{R_{6:2}^{(2)}}$}
$\mathbf{R_{6:2}^{(2)}}$ is most compactly written via its decoupling identity, which we have checked numerically:
\begin{align}
R_{6:2}^{(2)}(a;b,c,d,e,f)=&-R_{6:1}^{(2)}(a,b,c,d,e,f)-R_{6:1}^{(2)}(b,a,c,d,e,f)-R_{6:1}^{(2)}(b,c,a,d,e,f)
\notag \\ &
-R_{6:1}^{(2)}(b,c,d,a,e,f)-R_{6:1}^{(2)}(b,c,d,e,a,f)\,.
\end{align}
These expressions are valid for both $U(N_c)$ and $SU(N_c)$ gauge groups and are remarkably compact.
We have confirmed that they satisfy the constraints arising from the decoupling identities.
The $SU(N_c)$ amplitudes have the correct collinear limits:
all non-adjacent and inter-trace
limits vanish and adjacent limits within a single trace factorize correctly.
All of the partial amplitudes have the correct symmetries.
Recursion involves choosing specific legs to shift, breaking the symmetry of the amplitude. Restoration of
this symmetry is a powerful check of the validity of our results.
We have checked that none of the $R_{6:\lambda}^{(2)}$ are annihilated by the conformal operator.
\vfill\eject
\section{Conclusions}
Computing perturbative gauge theory amplitudes to high orders is an important but difficult task.
In this article, we have calculated the full color all-plus six-point two-loop amplitude and presented the results in simple analytic forms.
We have computed all the color components directly, thus presenting the first complete six-gluon two-loop scattering amplitude.
Our methodology obtains these results while bypassing the need to determine two-loop non-planar integrals.
There are some inherent assumptions in our methods; however, the results satisfy a variety of consistency checks. Firstly, they give the correct results for the five-point amplitudes and for $A_{6:1}^{(2)}$, which was computed subsequently. Secondly, we have generated the full set of amplitudes and checked that the decoupling identities are satisfied. Thirdly, we have checked the collinear limits of the amplitudes. Note that
the singular terms $U_{n:\lambda}^{(2)}$ and the polylogarithms $P_{n:\lambda}^{(2)}$ must combine to give the correct collinear limits, as in
refs.~\cite{Dunbar:2016aux,Dunbar:2016cxp}.
Analytic forms are particularly useful in studying formal properties of amplitudes.
For example we have confirmed that the coefficients of the polylogarithms are conformally invariant whilst the rational terms are not.
\section{Acknowledgements}
DCD was supported by STFC grant ST/L000369/1. JMWS was
supported by STFC grant ST/S505778/1. ARD was
supported by an STFC studentship.
\section{Results at other redshifts}
\label{app:red}
In figure \ref{fig:otherz} we show the ratios of both the 1D and 3D
flux power spectra for the {\it massive} and {\it rescaled} simulations
at redshifts $z=4.5, 3.5, 2$.
We have matched the mean flux
and temperature at mean density. The ratios of the 1D and 3D flux power
spectra have essentially the same amplitude and shape as the $z=3$ ratios presented
in section \ref{ss:lya}. This shows that over the full redshift range relevant
for the Ly$\alpha$ forest, the flux power spectrum is primarily dependent on the amplitude
of the linear power and the state of the IGM, and the effects of massive
neutrinos can be replicated to within $2\%$ by matching these parameters.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.65]{otherZ.pdf}
\caption{Ratios of 1D (top) and 3D (bottom) flux power spectra in
simulations with massive and massless neutrinos, at different redshifts.
In both panels, the solid lines show the ratios of the flux power spectra in the
{\it massive} simulation and the {\it rescaled}, the ratios for the
{\it massive} and {\it massless} simulation are shown in dashed lines.
The mean flux and the temperature at mean density have been matched in
both simulations to reduce the effect of subtle changes in the thermal
and ionization history of the IGM. Vertical lines mark the highest $k$
mode measured from BOSS data in \cite{PD2013} for each redshift bin.
}
\label{fig:otherz}
\end{figure}
\section{Dependence on box size and resolution}
\label{app:box}
In section \ref{sec:res} we have presented the results from a set of
simulations with a box size of $L=133.85 \ \text{Mpc}$ ($h\,L= 90 \ \text{Mpc}$), and
$1024^3$ CDM and baryon particles.
In order to test that the main results do not depend on the chosen box size
or resolution, in figure \ref{fig:box} we reproduce the results of the left panel
in figure \ref{fig:lyaf} from simulations with $L=89.23 \ \text{Mpc}$ ($h\,
L= 60 \ \text{Mpc}$), and $512^3$ CDM and baryon particles. The original results from the main
text are shown in solid lines, and the dashed lines show the results from the
simulation with lower box size and lower resolution.
There is very little difference between the results shown in the solid and dashed lines,
indicating that the results shown in section \ref{ss:lya} are independent of simulation
box size and resolution.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{box_res.pdf}
\caption{Ratios of the 3D flux power spectra in the {\it massive} and {\it rescaled}
cosmologies at $z=3$ for two different sets of simulations;
one with box sizes of $L=90 \ h^{-1}\text{Mpc}$ and $1024^3$ particles (solid lines),
and one with box sizes of $L=60 \ h^{-1}\text{Mpc}$ and $512^3$ particles (dashed lines).
We have matched $\bar{F}$ and $T_0$ in each simulation pair.
The results presented in section \ref{ss:lya} are independent of box size
and resolution.
}
\label{fig:box}
\end{figure}
\section{Dependence on neutrino implementation}
\label{app:nu}
In order to verify that our results are independent of neutrino implementation,
we show in figure \ref{fig:nu_implementation} the ratio of power spectra at $z=2$
in simulations with the linear response approximation and particle neutrinos.
We include the same number of neutrino particles as CDM and baryons
($N_\mathrm{part}=512^3$), with initial conditions also starting at $z=99$.
The particle implementation can include (small) non-linear clustering
in the neutrino component, whereas the Fourier
space approach only includes non-linearities in the cold dark matter.
We show the comparison at the lowest redshift we have run simulations to as
this is where any disagreement between the two methods would be strongest.
For the CDM + baryon density (left), 1D (center) and 3D (right) flux power
spectra, the agreement between the two approaches is better than $0.5\%$. This is
consistent with the expectation that there is minimal non-linear clustering of
neutrinos at $z=2$ (see refs.~\cite{Bird2012,Haimoud2013}), and the
residual difference is likely due to shot noise in the neutrino particles.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{nu_implementation.pdf}
\caption{Comparison of the Fourier space based neutrinos with the particle
implementation at $z=2$.
From left to right, we show the ratios of the CDM + baryon matter power
spectra, the 1D flux power and then the 3D flux power spectra.
}
\label{fig:nu_implementation}
\end{figure}
\section{Conclusion}
\label{sec:con}
We have considered the effects of massive neutrinos on the growth of structure within the length
scales and redshift range relevant for Ly$\alpha$ forest\ analysis. These effects can be split
into three categories. First, there is a suppression of the overall amplitude
of the power spectrum. Second, there is an increase in the non-linear growth caused by the
presence and clustering of massive neutrinos. Third, there is an effect
on the growth rate and the velocity power spectrum.
The first effect is large, with a $9\%$ suppression of the power spectrum
for $\Sigma m_\nu=0.3$ eV. The Ly$\alpha$ forest\ is sensitive to the amplitude of the late-time, small
scale power spectrum, and so this is the strongest signal of neutrino
mass when using Ly$\alpha$ forest\ data. However we have shown that this signal is
extremely degenerate with a change in the primordial perturbation amplitude,
$A_s$. In section \ref{ss:non} we showed that even the non-linear effects caused by
massive neutrinos are highly degenerate with a change in the overall
amplitude of the power spectrum.
In figure \ref{fig:relabel} we showed that
the effect of massive neutrinos on the growth rate is small, with a
$<2\%$ effect on the non-linear modes. The Ly$\alpha$ forest\ is far less sensitive
to the growth rate - Ref \cite{McDonald2005} measured the growth rate
to a precision of $30\%$, while the amplitude of the linear power was
measured to a precision of $13\%$ - and so we argue that effects on the
growth rate do not break the degeneracy with $A_s$. We demonstrated
this degeneracy on the flux power spectra of the Ly$\alpha$ forest\, and showed that,
after matching the mean flux and temperature at mean density,
the effect of $\Sigma m_\nu=0.3$ eV massive neutrinos
is degenerate with $A_s$ to within $<1\%$ in the 1D
flux power at $k_\parallel<3 \ \text{Mpc}^{-1}$ and in the 3D flux power at $|k|<4\ \text{Mpc}^{-1}$.
Therefore, from the point of view of a Ly$\alpha$ forest\ only likelihood,
we have shown that it is not necessary to include an extra parameter to
describe neutrino mass, and that doing so introduces a strong degeneracy.
Our results also suggest that other parametrisations based on the amplitude
of the linear power at low $z$ and high $k$ would more closely describe the
observables, but we leave the exact specification of such parametrisations
for future work.
We finish by stressing that, even in the presence of this
degeneracy, the Ly$\alpha$ forest\ is still a very competitive probe of massive neutrinos
when combined with results from CMB experiments.
The CMB temperature fluctuations depend on the linear power spectrum at
early times, before neutrinos become non-relativistic, and provide a
measurement of $A_s$ that can be used to break the $A_s - \Sigma m_\nu$
degeneracy discussed in this work.
We expect that combined CMB + Ly$\alpha$ forest\ analyses will continue to play an
important role in cosmological studies of neutrino masses in the coming years.
\section{Introduction}
\label{sec:int}
The results of neutrino oscillation experiments show that at least two of the
neutrino mass eigenstates must have small but non-zero mass \cite{deSalas2018}.
Whilst these experiments are able to constrain the mass differences of the
eigenstates, they are far less sensitive to the absolute mass scale, or
to the sum of the mass eigenstates $\Sigma m_{\nu}$.
The extremely small cross section of neutrinos makes designing laboratory
experiments to measure their absolute mass scale challenging.
Attempts are continuing to be made through measuring the
$\beta$-decay spectrum of tritium \cite{BetaDecay,Katrin},
with the most recent results finding an upper limit
of $\Sigma m_\nu < 1.1$ eV (90\% C.L.) \cite{Katrin2019}.
The subtle effects of neutrino properties on cosmology have been studied for
decades \cite{BondEfstathiou1980,Julien06}, and the onset of precision data
sets in cosmology has opened up the possibility of measuring the neutrino mass
scale by detecting the effect on the growth of structure and expansion rate
of the Universe.
Constraints have been put on $\Sigma m_{\nu}$\ using several cosmological probes, such
as observations of the CMB \cite{Planck2018} or galaxy clustering and weak lensing
measurements from galaxy surveys \cite{DES2018}, and will continue to be a major
science goal in future surveys \cite{Hamann2012}.
Another cosmological probe that has emerged as an especially strong tool to
constrain neutrino mass is the clustering of the Lyman-$\alpha$ (Ly$\alpha$) forest:
a series of absorption features observed in the spectra of high redshift
($2<z<5$) quasars.
Some of the tightest constraints to date come from the combination of CMB and
Ly$\alpha$\ studies \cite{Seljak2006,PD2015}, with a current limit of only
$\Sigma m_{\nu} < 0.12$ eV (95\% C.L.).
With the upcoming Dark Energy Spectroscopic Instrument (DESI) survey
constraints are forecast to shrink to $\sigma_{\Sigma m_{\nu}}=0.041$ eV when
utilising the full 3D power spectrum of the Ly$\alpha$ forest\ and combined with data from
the CMB \cite{Font-Ribera2014}. Given the current lower limit from oscillation
experiments of $\Sigma m_\nu \geq 0.06$ eV, these observations should begin
to show evidence for neutrino mass.
When including information from the Ly$\alpha$ forest\ bispectrum
the constraints could be further improved \cite{Mandelbaum2003}.
Three-dimensional correlations in the Ly$\alpha$ forest\ have been measured at separations
of hundreds of Megaparsecs \cite{Busca2013,Slosar2013,Delubac2015,Bautista2017},
allowing a very precise determination of the expansion rate at $z \approx 2.4$
from baryon acoustic oscillations.
Most of the constraining power on total neutrino mass, however, comes from smaller
separations as massive neutrino free-streaming suppresses
clustering on small scales at late times.
Current Ly$\alpha$ forest\ constraints on neutrino mass are
restricted to the average correlation of one-dimensional Fourier modes along
quasar spectra, a summary statistic commonly known as the
1D flux power spectrum.
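For reference, for an isotropic 3D spectrum the 1D power spectrum is obtained by integrating over transverse modes, $P_{\rm 1D}(k_\parallel)=\frac{1}{2\pi}\int_{k_\parallel}^{\infty} k\,P_{\rm 3D}(k)\,{\rm d}k$ (the flux field is in fact anisotropic in redshift space, so this is an illustrative simplification). A standard-library sketch with a toy Gaussian spectrum, for which the integral is analytic:

```python
import math

def p1d_from_p3d(p3d, kpar, kmax=30.0, n=200000):
    """P_1D(kpar) = (1/2pi) * integral_{kpar}^{inf} k P_3D(k) dk, via the trapezoidal rule."""
    h = (kmax - kpar) / n
    f = lambda k: k * p3d(k)
    s = 0.5 * (f(kpar) + f(kmax)) + sum(f(kpar + i * h) for i in range(1, n))
    return s * h / (2.0 * math.pi)

# toy spectrum: P_3D(k) = exp(-k^2/2)  =>  P_1D(kpar) = exp(-kpar^2/2) / (2 pi)
p3d_toy = lambda k: math.exp(-0.5 * k * k)
```

For the toy spectrum the quadrature reproduces the analytic answer $P_{\rm 1D}(k_\parallel)=e^{-k_\parallel^2/2}/(2\pi)$; the relation also makes explicit why the 1D power at a given $k_\parallel$ mixes in all 3D modes with $k>k_\parallel$.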
In order to extract cosmological information from the measured correlations
we need to be able to generate theoretical predictions for the flux power
spectrum as a function of cosmological model and as a function of several
nuisance parameters describing the uncertain thermal and ionization history
of the intergalactic medium (IGM).
In the absence of computing time constraints, one would run a large
hydrodynamical simulation for each of the $10^5-10^6$ likelihood evaluations
in a Monte Carlo Markov Chain.
However, these simulations are computationally expensive, typically requiring
$10^4-10^5$ core hours in high performance computing facilities, and past
Ly$\alpha$ forest\ analyses were only able to simulate few tens of models.
Some studies used the simulations to calibrate a Taylor expansion of the
likelihood around a fiducial model \cite{Viel2006,Viel2010,Borde2014,Rossi2014,PD2015},
while others designed different interpolation frameworks to predict the flux
power spectrum for models that were not simulated \cite{McDonald2005}.
Recent works have used Gaussian processes to perform this interpolation,
which have the benefit of estimating an error on the prediction \cite{Bird2019}.
This allows for
the use of Bayesian optimisation to distribute the hydrodynamical simulations more
efficiently throughout parameter space, minimising the total number of
simulations required and helping to ensure convergence to the true posterior distribution \cite{Rogers2019}.
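The Gaussian-process idea can be illustrated with a minimal, dependency-free sketch: one input dimension, an RBF kernel, and the standard predictive mean and variance (the emulators cited above work in a multi-dimensional cosmological parameter space; the training function and hyper-parameters below are purely illustrative). The key feature exploited by Bayesian optimisation is that the predictive variance is small near training points and grows away from them:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xstar, ell=1.0, noise=1e-6):
    """GP regression with an RBF kernel: predictive mean and variance at xstar."""
    kern = lambda a, b: math.exp(-0.5 * (a - b) ** 2 / ell ** 2)
    n = len(xs)
    K = [[kern(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    ks = [kern(x, xstar) for x in xs]
    alpha = solve(K, list(ys))   # K^-1 y
    beta = solve(K, ks)          # K^-1 k_*
    mean = sum(ks[i] * alpha[i] for i in range(n))
    var = kern(xstar, xstar) - sum(ks[i] * beta[i] for i in range(n))
    return mean, var
```

Far from the training set the predictive variance returns to the prior value of one, flagging where a new simulation would be most informative.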
The analysis is further complicated by the existence of several parameter
degeneracies.
For instance, both the mean transmitted flux fraction (or mean flux) and the
amplitude of the linear power spectrum affect the overall amplitude of the
1D power spectrum of the Ly$\alpha$ forest, and we are only able to break the degeneracy
because they affect its shape in a different way \cite{McDonald2005}.
These degeneracies are difficult to capture in an interpolating framework
or in a Taylor expansion, and numerical approximations might accidentally
break these degeneracies in the estimated likelihood.
Therefore it is important to choose a likelihood parameterization that minimizes the
parameter degeneracies.
In this paper we investigate the degeneracy between the sum of the neutrino
masses $\Sigma m_{\nu}$\ and the amplitude of the power spectrum.
This degeneracy has been studied in the context of the matter power
spectrum in real \cite{FVN2014b,Zennaro2019,Archidiacono2017} and redshift space \cite{FVN2018}.
However, as discussed in \cite{Viel2010}, the degeneracy is stronger in
studies of the 1D flux power spectrum of the Ly$\alpha$ forest, since it is primarily sensitive to
the linear power on very small scales.
We revisit this degeneracy in the 1D flux power spectrum, using a simulation
setup, different from that of previous studies, that captures the degeneracy
more closely.
For the first time we also study the degeneracy in the 3D flux power spectrum.
In section \ref{sec:lin} we review the effect of massive neutrinos in
linear theory and discuss the parameter degeneracies.
In section \ref{sec:sim} we introduce a set of simulations to investigate
how well the degeneracy predicted by linear theory carries over into the
non-linear regime and into Ly$\alpha$ forest\ observables. Here we also describe how
our simulation setup differs from that of previous studies of
the degeneracy in the Ly$\alpha$ forest.
In section \ref{sec:res} we present our results, and we conclude in
section \ref{sec:con}.
\section{Linear theory}
\label{sec:lin}
In this section we review the effect of massive neutrinos on the linear
growth of structure (see \cite{Julien06} for a full review), focusing on
effects on the linear power spectrum in the range of scales and redshifts
relevant in studies of the small-scale clustering of the Ly$\alpha$ forest.
We denote the density parameters defined at $z=0$ for Cold Dark Matter (CDM),
baryons, neutrinos, and dark energy as
$\omega_\mathrm{c}$, $\omega_\mathrm{b}$, $\omega_\nu$ and $\omega_\Lambda$
respectively. For each component, these are related to the critical density
$\rho_\mathrm{c}$ via
$\omega = \Omega h^2$, where $\Omega=\rho/\rho_\mathrm{c}$.
The total non-relativistic
matter density at $z=0$ is
$\omega_{\mathrm{cb}\nu}=\omega_\mathrm{c}+\omega_\mathrm{b}+\omega_\nu$.
The first effect of a non-zero neutrino mass is a subtle change in the
expansion history of the Universe.
At early times, massive neutrinos are relativistic and indistinguishable from
massless ones, and their energy density decreases with the expansion of the
universe like radiation.
At late times, massive neutrinos become non-relativistic at a redshift $z_\mathrm{nr}$
that depends on their mass, $m_\nu$:
\begin{equation}
1 + z_{\mathrm{nr}}= 189.4 ~ \left(\frac{m_\nu}{0.1 \mathrm{eV}}\right) ~.
\end{equation}
From this point their energy density evolves like non-relativistic matter, and the
total non-relativistic matter density is increased at the percent level:
\begin{equation}
f_\nu \equiv \frac{\omega_\nu}{\omega_{\mathrm{cb}\nu}} = 0.023
\left( \frac{\Sigma m_\nu}{0.3\ \mathrm{eV}} \right)
\left( \frac{0.14}{ \omega_{\mathrm{cb}\nu}} \right) ~,
\end{equation}
where the neutrino energy density is given by
\begin{equation}
\omega_\nu = 0.00322 ~ \left( \frac{\Sigma m_\nu}{0.3 \mathrm{eV}} \right)~.
\end{equation}
After the non-relativistic transition neutrinos effectively behave like
hot dark matter: they free-stream and do not cluster below a characteristic
length scale. This scale is set by the wavenumber of the mode that enters the horizon when the neutrinos become non-relativistic:
\begin{equation}
\label{eq:freestream}
k_{\mathrm{nr}} ~ \simeq ~ 0.00213 \left( \frac{\omega_{\mathrm{cb}\nu}}{0.14} \right)^{1/2} \left( \frac{m_\nu}{0.10\mathrm{eV}} \right)^{1/2} \ \text{Mpc}^{-1}~.
\end{equation}
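As a quick numerical cross-check of the scaling relations above, the sketch below evaluates them in Python (the helper names are our own; the numerical coefficients are taken directly from the equations in the text):

```python
# Cross-check of the linear-theory scaling relations quoted in the text.
# The coefficients come from the equations above; nothing here is taken
# from the simulations.

def z_nr(m_nu_eV):
    """Redshift of the non-relativistic transition for one mass eigenstate."""
    return 189.4 * (m_nu_eV / 0.1) - 1.0

def omega_nu(sum_m_nu_eV):
    """Neutrino density parameter, omega_nu = Omega_nu h^2."""
    return 0.00322 * (sum_m_nu_eV / 0.3)

def f_nu(sum_m_nu_eV, omega_cbnu=0.14):
    """Fraction of the non-relativistic matter density in neutrinos."""
    return 0.023 * (sum_m_nu_eV / 0.3) * (0.14 / omega_cbnu)

def k_nr(m_nu_eV, omega_cbnu=0.14):
    """Free-streaming wavenumber in Mpc^-1."""
    return 0.00213 * (omega_cbnu / 0.14) ** 0.5 * (m_nu_eV / 0.1) ** 0.5

# A 0.1 eV eigenstate becomes non-relativistic at z ~ 188, the value used
# later in the text.
print(round(z_nr(0.1)))     # -> 188
# Three degenerate 0.1 eV neutrinos make up ~2.3% of the matter density.
print(round(f_nu(0.3), 3))  # -> 0.023
```

In particular $0.00322/0.14 = 0.023$, so the quoted coefficients for $\omega_\nu$ and $f_\nu$ are mutually consistent.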
This gives rise to the second effect of non-zero neutrino masses: the spatial
distribution of CDM and baryons is affected by the distribution of neutrinos,
as captured in the evolution of the linear power spectrum.
In figure \ref{fig:linear} we compare the linear power spectra
\footnote{Computed using the publicly available code \texttt{CAMB} \cite{CAMB}.}
of CDM and baryons in a massless neutrino cosmology and a cosmology with
three $0.1{\rm eV}$ neutrinos. In this paper we assume that the Ly$\alpha$ forest\
is sensitive to the combined CDM
and baryon power spectrum, and so we focus on this quantity rather than on what is
traditionally referred to as the matter power spectrum, which also includes the massive
neutrinos.
\begin{figure}
\centering
\includegraphics[scale=0.5]{LINEARPLOTS_fixomm.pdf}
\includegraphics[scale=0.5]{LINEARPLOTS_fixomc.pdf}
\caption{The effect of degenerate hierarchy $\Sigma m_\nu=0.3 \mathrm{eV}$ massive
neutrinos on the linear theory CDM + baryon power spectrum at several
different redshifts, in two different scenarios:
(left) $\omega_c$ is reduced to conserve the value of $\omega_{\mathrm{cb}\nu}$ in both
models;
(right) $\omega_c$, $\omega_b$, and $h$ are kept fixed, with the
increase in $\omega_{\mathrm{cb}\nu}$ when adding
massive neutrinos compensated by
reducing the value of $\omega_\Lambda$ in the model.
The suppression on small scales due to massive neutrino free streaming
is clear in both examples; however, the suppression is only
scale-independent in the right panel.
In blue dashed lines we show the neutrino free-streaming scale,
$k_{\rm nr}$, for a $0.1 \,\rm{eV}$ mass
eigenstate, and the shaded area approximately represents the length scales
probed by the one-dimensional flux power spectrum of
the Ly$\alpha$ forest.
}
\label{fig:linear}
\end{figure}
As can be seen in figure \ref{fig:linear}, the effect of massive neutrinos on
the linear power spectrum is both redshift- and scale-dependent.
The scale dependence is set by the neutrino free-streaming scale
(eq. \ref{eq:freestream}),
and we plot this as a vertical dashed line in figure \ref{fig:linear}.
When comparing massive and massless neutrino cosmologies, there are several
different choices one can make about which parameters to vary, and we show
two options in the panels.
In the left panel we change $\omega_c$ to keep constant the total density
of non-relativistic matter at low-redshift $\omega_{\mathrm{cb}\nu}$.
In the right panel we keep $\omega_c$ fixed and change the value of
$\Omega_\Lambda = 1 - \Omega_{\mathrm{cb}\nu}$.
In both cases we keep fixed the baryon density parameter $\omega_b$
and the Hubble parameter $h$.
In the left panel the effect at $z=1000$ shown in blue is
caused by the change in $\omega_c$, as this is well before
the neutrino non-relativistic transition ($z \gg z_{\mathrm{nr}} = 188$).
At low redshift, we see the characteristic suppression of power
below the neutrino free-streaming scale
($k \gg k_{\mathrm{nr}} = 0.0021 \ \text{Mpc}^{-1}$). We note that the
suppression has a mild scale- and
redshift-dependence even on small scales ($k\approx 1 \ \text{Mpc}^{-1}$).
When we keep $\omega_c$ fixed (right panel), adding a non-zero mass to
neutrinos does not affect the physics of the early Universe
($z \gg z_{\mathrm{nr}} = 188$), and the linear power at $z=1000$
is practically unaffected.
At much later times ($z \ll z_{\mathrm{nr}} = 188$), and on scales smaller
than the free-streaming scale,
the effect of neutrinos is a scale-independent suppression of the linear
power spectrum.
Studies of the small-scale clustering of the Ly$\alpha$ forest\ are sensitive to the linear
power in the approximate range shown shaded in gray in figure \ref{fig:linear},
which is well below the neutrino free-streaming scale.
This suggests an almost perfect degeneracy in the linear power spectrum between
the effect of massive neutrinos and the amplitude of the primordial power
spectrum $A_s$ when considering only these length scales and redshift ranges.
In the remaining sections of this paper we use hydrodynamical simulations
to show that this degeneracy is also valid in the non-linear regime,
and is still very strong in the Ly$\alpha$\ flux power spectra.
\section{Results}
\label{sec:res}
In this section we present the results of our simulations.
We examine how well the degeneracy in linear theory continues
into the non-linear regime by looking at the matter power spectra in the
three simulations described in Table \ref{tab:sims}.
We then study the effect of massive neutrinos on the Ly$\alpha$ forest\
observables, the 1D and 3D flux power spectrum, as these are the key statistics
that are ultimately used to constrain cosmology.
As well as being dependent on the underlying non-linear power spectrum, the
Ly$\alpha$ forest\ is also affected by the thermal and ionisation states of the IGM, which we discuss in more detail in section
\ref{ss:lya}.
\subsection{Non-linear growth of structure}
\label{ss:non}
First we look at the effect of massive neutrinos on the growth of structure in
the non-linear regime, and examine to what extent these effects can be
replicated simply by rescaling $A_s$.
In figure \ref{fig:matpow} we plot the ratio of the CDM + baryon matter power
spectra at $z=3$ in our three cosmologies:
The black line shows the linear theory result, also shown in the right panel
of figure \ref{fig:linear}.
The solid orange line compares the power in {\it massive} and {\it massless}
simulations, where $\Omega_c$ and $\Omega_b$ are kept constant,
and a $\Sigma m_{\nu}=0.3\,\mathrm{eV}$ neutrino mass has been added, changing
$\Omega_{\mathrm{cb}\nu}$.
We see the characteristic `spoon' effect also seen in \cite{Viel2010,Bird2012}
and references therein, which arises because the onset of non-linearities
is delayed as a result of the suppression of linear power.
\begin{figure}
\centering
\includegraphics[scale=0.6]{matterpow_ratios_z3.pdf}
\caption{The ratio of the matter power spectra (CDM+baryons) in simulations
with massive and massless neutrinos.
The black line shows the linear theory result for the cosmologies in the
{\it massive} and {\it massless} simulations described in Table \ref{tab:sims},
which is the same as the red line in the right panel of figure \ref{fig:linear}.
The solid orange line shows a comparison between the full non-linear matter
power spectra in the {\it massive} and {\it massless} simulations.
The solid purple line represents the comparison between the {\it massive}
and {\it rescaled} cosmology, a massless neutrino cosmology with a lower
clustering amplitude $A_s$ to match the small-scale linear power at $z=3$.
In dashed blue, we compare the two massless neutrino cosmologies, showing
that the `spoon' effect can be recreated without any massive neutrinos.
The gray band shows the region of $1\%$ agreement.}
\label{fig:matpow}
\end{figure}
The solid purple line shows the ratio of the power spectra in
massive and massless neutrino
cosmologies; however in this case we have rescaled the perturbation amplitude
of the massless cosmology to match the small-scale linear power at $z=3$
({\it rescaled} cosmology in Table \ref{tab:sims}).
The matter power spectrum at $z=3$ in these two simulations agree to within
$1\%$ on all length scales relevant for Ly$\alpha$ forest\ analysis.
The blue dashed line shows the ratio of the power spectra in the two
massless cosmologies described in
Table \ref{tab:sims}, that differ only in the value of $A_s$.
In this case we see the same spoon effect as in the massive neutrino
comparison, highlighting the fact that this feature is a consequence of the
suppression of structure growth on linear scales and can be replicated
without massive neutrinos.
If the Ly$\alpha$ forest\ power spectrum were measured at a single redshift,
this would be sufficient: the solid purple line in figure \ref{fig:matpow}
shows that the effect of massive neutrinos can be reproduced to sub-percent
agreement by a rescaling of the amplitude of initial perturbations.
However, in the right panel of figure \ref{fig:linear} we see that the effect
of massive neutrinos on the small scale linear power has a redshift
dependence, varying by a couple of percent over the redshift range covered
by the Ly$\alpha$ forest, $5 > z > 2$, meaning that rescaling $A_s$ will only match
the linear power perfectly at a single redshift.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.6]{redshiftRescaling.pdf}
\caption{The ratios of the matter power spectra in the {\it massive}
and {\it rescaled} simulations for a linear (solid lines) and non-linear
(dashed) mode across the full redshift range $5>z>2$. In purple lines, the
ratio has a small redshift dependence due to the fact that the growth
in the two cosmologies is different. By design, the ratios in the linear
modes are matched at $z=3$. In green lines, we have modified the redshifts
at which snapshots are output to match the linear power in the two simulations.
When the linear power is matched, the power in the non-linear mode agrees
to within $0.5\%$.
}
\label{fig:relabel}
\end{figure}
To investigate this effect we pick two modes, one linear ($k_{\rm L} = 0.3 \ \text{Mpc}^{-1}$)
and one non-linear ($k_{\rm NL} = 3 \ \text{Mpc}^{-1}$). In the purple lines of figure \ref{fig:relabel},
we plot the ratio of the power in these modes in the {\it massive} and {\it rescaled}
simulations, with the ratio for $k_{\rm L}$ shown in solid lines, and
$k_{\rm NL}$ shown in dashed lines. By construction the amplitude in the linear
mode is matched at $z=3$, but due to the different growth rates in the two
cosmologies, there are disagreements at the percent level at $z=5$ and $z=2$.
We note that this is a very small effect that is still well below the precision
of current measurements of the amplitude of the linear power from the Ly$\alpha$ forest, which
are on the order of $10\%$ \cite{McDonald2005,Chabanier2019}. This effect is also very small
when compared with the size of the suppression caused by massive neutrinos, which
is $9\%$ for our case of $\Sigma m_\nu=0.3\,\mathrm{eV}$.
In the green lines we plot the same ratios, except where we have output the snapshots
in the {\it rescaled} simulation at a slightly different redshift in order to match
the amplitude of the linear power.
For instance, we compare the snapshot at $z=5$ for
the {\it massive} simulation with a snapshot of the {\it rescaled} simulation at $z=4.97$.
While the ratio of the power in the linear modes changes as a function of redshift
in the purple lines, it is constant in the green lines. When matching the amplitude
of the power in the linear mode, the power in the non-linear mode agrees to within
$0.5\%$ across the full redshift range, even when massive
neutrinos are not included in the simulation. This tells us again that the non-linear structure
is primarily sensitive to the amplitude in the linear power, and that any non-linear
effects caused by massive neutrinos themselves are negligible with respect to current
precision.
\subsection{Lyman-$\alpha$ forest clustering}
\label{ss:lya}
In this section we look at the statistics of the Ly$\alpha$ forest, and examine the
degeneracy between massive neutrinos and the amplitude of primordial
fluctuations.
In particular, we look at correlations of the fluctuating transmitted flux
fraction, $ \delta_F (\mathbf{x}) = F (\mathbf{x}) / \bar{F} -1$, where $\bar{F}$ is the
mean transmitted flux fraction.
In principle, one could measure the full 3D power spectrum,
\begin{equation}
\left\langle \delta_F(\mathbf{k}) \delta_F(\mathbf{k}') \right\rangle
= (2 \pi)^3 \delta^D(\mathbf{k}+\mathbf{k}') P_{3D}(k,\mu)
\end{equation}
where $\delta_F(\mathbf{k})$ is the Fourier transform of $\delta_F(\mathbf{x})$ and $\mu$ is
the cosine of the angle between the Fourier mode and the line-of-sight, and
use it to constrain cosmology \cite{Font-Ribera2018}.
However, current Ly$\alpha$\ constraints on neutrino mass use exclusively the
1D power spectrum,
\begin{equation}
P_{\mathrm{1D}}(k_\parallel) = \int_{0}^{\infty} \frac{dk_\perp k_\perp}{2\pi}
P_{\mathrm{3D}}(k_\perp,k_\parallel) ~,
\label{eq:p1d}
\end{equation}
where $k_\parallel$ and $k_\perp$ are the components of the Fourier mode along
and perpendicular to the line of sight respectively,
$k_\parallel=k \mu$, and $k^2 = k_\parallel^2 + k_\perp^2$.
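As a concrete illustration of the $k_\perp$ integral in eq.~\ref{eq:p1d}, the sketch below evaluates it numerically for a toy isotropic $P_{\rm 3D}(k)=e^{-k^2/2}$ (an illustrative choice only, not the simulated Ly$\alpha$\ power), for which the integral has the closed form $P_{\rm 1D}(k_\parallel)=e^{-k_\parallel^2/2}/2\pi$:

```python
import numpy as np

def p3d_toy(k):
    # Toy isotropic 3D power spectrum (illustrative choice only).
    return np.exp(-k ** 2 / 2.0)

def p1d_from_p3d(k_par, kperp_max=20.0, n=200000):
    # Midpoint-rule evaluation of the cylindrical integral
    # P_1D(k_par) = \int_0^inf dk_perp k_perp/(2 pi) P_3D(k),
    # with k^2 = k_par^2 + k_perp^2.
    dk = kperp_max / n
    k_perp = (np.arange(n) + 0.5) * dk
    k = np.hypot(k_perp, k_par)
    return np.sum(k_perp / (2.0 * np.pi) * p3d_toy(k)) * dk

# For the Gaussian toy model the integral is analytic:
# P_1D(k_par) = exp(-k_par^2 / 2) / (2 pi).
for k_par in (0.1, 1.0, 3.0):
    exact = np.exp(-k_par ** 2 / 2.0) / (2.0 * np.pi)
    assert abs(p1d_from_p3d(k_par) - exact) < 1e-6
```

The same quadrature applies to any tabulated $P_{\rm 3D}(k_\perp,k_\parallel)$, with the upper cutoff chosen where the integrand has decayed.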
The 1D and 3D flux power spectra of the Ly$\alpha$ forest\ are dependent
on both the matter density field and the state of the IGM, and there is also
some dependence on the velocity power spectrum through redshift space distortions.
Due to the different initial conditions, the IGM history in the {\it massive} and
{\it rescaled} simulations is different, which will propagate to differences in the
flux power spectra. However these differences do not mean that the degeneracy
shown in figures \ref{fig:matpow} and \ref{fig:relabel} is broken as the state of the
IGM is marginalised over in cosmological analysis of the Ly$\alpha$ forest\ due to uncertainties
in the astrophysics of reionisation. These subtle changes in the state of the IGM
therefore cannot be interpreted as signatures of massive neutrinos.
Given imperfect parametrisations of the IGM and the highly non-linear relationship
between these parameters and the observable flux power spectra, it is impossible to
isolate all of these effects and match them by hand in the way that we have done with
the linear power.
A full marginalisation is beyond the scope of this paper, but we match two
of the IGM parameters -- the mean flux and the temperature at mean density in the
simulations -- to remove some effects of neutrinos that are degenerate with
nuisance parameters. The temperature of the gas in the IGM is well approximated by
a power-law distribution of the form $T(\delta_b) = T_0 (1 + \delta_b)^{\gamma-1}$
where $\delta_b$ is the baryon overdensity, and $T_0$ is the temperature
at mean density \cite{Hui1997}. Given that the power-law approximation
only holds at low densities, we perform the fit in the range
$-2 < \log_{10}(\delta_b) < 0.5$. This is appropriate given that these
overdensities are also the regions which give rise to the Ly$\alpha$ forest\ \cite{Lukic2014}.
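To illustrate the power-law fit, a minimal sketch on synthetic data (the sampling, scatter level and variable names here are our own; the actual fit is performed on the simulated gas particles):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic IGM "particles": a known power law T = T0 * delta^(gamma-1),
# with log-normal scatter, sampled inside the density cut quoted in the text.
T0_true, gamma_true = 1.2e4, 1.5
log10_delta = rng.uniform(-2.0, 0.5, 50000)
delta = 10.0 ** log10_delta
T = (T0_true * delta ** (gamma_true - 1.0)
     * np.exp(rng.normal(0.0, 0.05, delta.size)))

# Linear least squares in log-log space:
# log T = log T0 + (gamma - 1) log delta.
slope, intercept = np.polyfit(np.log(delta), np.log(T), 1)
T0_fit, gamma_fit = np.exp(intercept), 1.0 + slope

# With 5e4 points the input parameters are recovered to better than 1%.
assert abs(T0_fit / T0_true - 1.0) < 0.01
assert abs(gamma_fit - gamma_true) < 0.01
```

Restricting the fit to the quoted density range mirrors the fact that the power law only describes the low-density gas responsible for the forest.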
We match the mean flux by re-scaling the optical depth in the skewers by a
constant in post-processing.
Similarly, we match the temperature at mean density by rescaling the internal
energy by a constant in post-processing.
In both cases, the rescaling was fairly small, of order 2\%.
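The mean-flux matching reduces to a one-dimensional root find: choose a constant $A$ such that $\langle e^{-A\tau}\rangle$ equals the target mean flux. A minimal sketch (the function name and toy optical depths are ours, not from the analysis pipeline):

```python
import numpy as np

def rescale_tau(tau, mean_flux_target, tol=1e-10):
    """Find A such that mean(exp(-A * tau)) == mean_flux_target.
    mean(exp(-A tau)) decreases monotonically with A for tau >= 0,
    so simple bisection is guaranteed to converge."""
    lo, hi = 0.0, 1.0
    while np.mean(np.exp(-hi * tau)) > mean_flux_target:
        hi *= 2.0  # expand bracket until the target is enclosed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(np.exp(-mid * tau)) > mean_flux_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
tau = rng.lognormal(mean=-0.5, sigma=1.0, size=100000)  # toy optical depths
A = rescale_tau(tau, 0.7)
assert abs(np.mean(np.exp(-A * tau)) - 0.7) < 1e-6
```

The same constant $A$ is applied to every pixel of every skewer, so only the overall normalisation of $\tau$ changes, not its spatial structure.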
\begin{figure}[t!]
\centering
\includegraphics[scale=0.47]{3DRatio_z3.pdf}
\includegraphics[scale=0.47]{1DRatio_z3.pdf}
\caption{\textit{Left:} Ratios of the 3D flux power spectra in simulations
with and without massive neutrinos as a function of 3D
Fourier mode orientation $\mu$.
Dashed lines show the ratios for the {\it massless} simulation, while
solid lines show the ratios for the {\it rescaled} simulation. In both
cases, $T_0$ and the mean flux have been matched in each simulation pair.
\textit{Right:} Ratios of 1D flux power spectra in massive and
massless neutrino cosmologies.
The ratio for the {\it massless} simulation is shown in orange dashed lines
and the ratio for the {\it rescaled} simulation in purple solid lines.
In the gray dot-dashed line we show the highest $k_\parallel$ bin measured in
BOSS data \cite{PD2013}, and in gray dashed we show
$k_{\mathrm{max}}$ for higher resolution data \cite{Irsic2016}.
\label{fig:lyaf}
}
\end{figure}
In figure \ref{fig:lyaf} we look at the effect of massive neutrinos on the
flux power spectra after recalibrating $\bar{F}$ and $T_0$ as described above,
and study the degeneracy with the amplitude of the linear power.
The left panel shows the ratios of the 3D flux power spectra in simulations with
and without massive neutrinos. The dashed lines show the comparison between the
{\it massive} and {\it massless} simulations. There is a significant difference
between the flux power spectra in these two simulations, particularly along the line
of sight, with $\mu \sim 1$. The solid lines show that the majority of this difference
comes from the difference in the amplitude of the linear power, as the difference
in the flux power spectra shrinks to $\lesssim 1\%$ for $k<4 \ \text{Mpc}^{-1}$ once the amplitude
of the linear power is matched.
In particular for modes transverse to the line of sight (shown in black) there is a
near perfect match. The high-$k$ modes along the line of sight still show some deviation,
suggesting a residual difference in the thermal state of the IGM.
Additionally, there is
a signature of neutrinos that is not captured in the matter power spectrum
but does affect the flux power spectrum, which comes from the change in the growth
rate. As discussed in section \ref{ss:non}, the presence of massive neutrinos
causes a $2\%$ difference in the growth rate, which has an effect on the flux
power spectrum through a change in the gas velocities. This is a feature of
massive neutrinos that we do not account for in our current setup, but the results
in figure \ref{fig:lyaf} indicate that the effect is very small.
The right panel shows the ratios of the 1D flux power spectra for the same
simulations, where the orange dashed line is the ratio of the {\it massless}
and {\it massive} simulations, and the purple solid line is the ratio of the
{\it massive} and {\it rescaled}. The vertical gray dashed lines show the
highest $k_\parallel$ modes that have been used in recent cosmological
analysis. The purple solid line shows that the effects of massive neutrinos
can be replicated by rescaling the linear power to within at least $1\%$ on scales measured by BOSS,
and within $2.5\%$ for the higher resolution data.
Upcoming measurements from DESI are not expected
to extend to significantly higher values of $k_\parallel$; instead, the improvement
will come from a more precise measurement of the same $k_\parallel$ range.
Therefore the results that we present in this paper will still be applicable
for analysing DESI datasets.
We reiterate that our approach of matching the mean flux and $T_0$ does not
fully explore the degeneracy. For example, the optimum match of the ratios
shown in figure \ref{fig:lyaf} might be for IGM values that are not the same in each
simulation, i.e. $\bar{F}$ and $T_0$ could be tweaked to push the ratios
closer to 1. What will ultimately matter is to what extent the effects of $\Sigma m_\nu$
are degenerate with a change in these nuisance parameters, which would require
a full marginalisation.
While the results in figure \ref{fig:lyaf} are shown only at $z=3$, in appendix
\ref{app:red} we show the equivalent plots for other redshifts
and note that the results are largely independent of redshift. Together these results
indicate a strong degeneracy between $A_s$ and $\Sigma m_\nu$
when considering only the length scales and redshift ranges that are observed using the
Ly$\alpha$ forest.
\section{Simulations}
\label{sec:sim}
The simulations are run using the Tree-SPH code \texttt{MP-Gadget}
\footnote{\url{https://github.com/MP-Gadget/MP-Gadget}}, a modified version of
\textsc{Gadget-2} \cite{Gadget2} with the ability to simulate massive neutrinos
and with an improved performance in massively parallelized runs \cite{mp-gadget}.
Initial conditions for all our simulations were generated at $z=99$, using
separate, species specific, transfer functions for CDM and baryons computed using
{\texttt{CLASS}} \cite{CLASS}.
The CDM and baryon particles are both initialised on regular grids, offset to
prevent particles being initialised at the same position.
The initial conditions are generated using the Zel'dovich approximation at $z=99$
with {\tt MP-Gadget}'s inbuilt initial condition code, {\tt MP-GenIC}.
We do not use 2LPT because the terms have only been computed for a single
initial fluid.
The main results presented in this paper are from a set of simulations with box
size $L=133.85 \ \text{Mpc}$ ($h\,L= 90 \ \text{Mpc}$), and $1024^3$ CDM and baryon particles,
and we show in appendix \ref{app:box} that the conclusions of this paper do
not depend on box size or resolution. In order to reduce cosmic variance,
we use the `paired and fixed' simulations introduced in \cite{Angulo16,Anderson18}.
In this procedure, the initial amplitudes in each Fourier mode are fixed to the
ensemble average instead of being randomly drawn, and for each cosmological
model two simulations are run with the phases in each mode inverted.
Clustering statistics such as the power spectrum are then taken to be the average
of those calculated in each of the two simulations.
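The `paired and fixed' construction can be sketched in one dimension (a schematic assuming a given input power spectrum; {\tt MP-GenIC}'s actual implementation works on the full 3D mode grid):

```python
import numpy as np

def fixed_paired_modes(P, nmodes, seed=0):
    """Draw Fourier amplitudes with |delta_k| *fixed* to sqrt(P(k)) instead
    of Rayleigh-distributed, and random phases; the paired realisation has
    every phase shifted by pi."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, nmodes)
    delta_1 = np.sqrt(P) * np.exp(1j * phases)
    delta_2 = np.sqrt(P) * np.exp(1j * (phases + np.pi))  # inverted phases
    return delta_1, delta_2

P = np.full(64, 2.0)                     # toy input power spectrum
d1, d2 = fixed_paired_modes(P, 64)
assert np.allclose(np.abs(d1) ** 2, P)   # amplitudes fixed to ensemble mean
assert np.allclose(d2, -d1)              # phase-inverted pair
```

Averaging clustering statistics over the pair then cancels the leading contribution of the (still random) phases to the sample variance.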
To include the effects of reionisation, we use a uniform UV background following the
model presented in ref.~\cite{Puchwein2019}, which has been tuned to approximately
match the observed IGM thermal history.
Including massive neutrinos in cosmological simulations presents its own
set of technical challenges \cite{Hannestad2012,Banerjee2016,Emberson2017,Bird2018,Dakin2019}.
Neutrinos are often included as another species
of particle in the simulation alongside the CDM and baryons
\cite{Bird2012,FVN2014,Castorina2014}. However due to
the comparatively low clustering of neutrino particles, especially for the
lighter neutrino masses considered as the upper limit on $\Sigma m_\nu$ becomes
tighter, it is necessary to include a large number of neutrino particles in the
simulation in order to reduce the shot noise below the level of the physical neutrino
clustering \cite{Wang2007}.
This is in turn more computationally intensive, adding $\sim 50\%$ to the walltime of a
given simulation when using the same number of neutrino particles as CDM,
as well as increasing memory and storage requirements.
An alternative approach was proposed in \cite{Brandbyge2009}, which we refer to as
the {\it Fourier space} approach, where the neutrino clustering
is calculated using linear theory and then included in the gravitational potential
used to evolve the baryon and CDM particles. This technique was further improved to a
linear response in \cite{Haimoud2013}, where the linear evolution of the neutrino component is
determined using the full non-linear density field of the CDM and baryon particles
in the simulation. We use this implementation for the results presented in the
main text of this paper, and demonstrate in appendix \ref{app:nu} that our findings
do not depend on which approach is used.
Unlike the particle implementation, the storage and memory requirements for massive
neutrino simulations using the linear response approach
are very similar to massless neutrino simulations, with a $\sim 5\%$ increase
in walltime with respect to the massless case that is largely independent of
cosmological parameters.
The computation time of our simulations is significantly reduced by turning
regions with gas density of $\rho_b/\bar{\rho}_b > 10^3$ directly into stars
({\tt QuickLymanAlpha} option in Gadget simulations \cite{Viel2004}),
since these large overdensities
do not affect the Ly$\alpha$ forest\ observables but are extremely computationally costly
to evolve.
The flux skewers are computed using \texttt{fake\_spectra}
\footnote{\url{https://github.com/sbird/fake\_spectra}}, which calculates the
optical depth along a given line of sight.
We compute the flux along a regular grid of $600^2$ lines of sight at
a cell resolution of $10 \mathrm{km/s}$,
providing high resolution both along and transverse to the line of sight.
To calculate the 1D flux power spectrum, we take the Fourier transform of
flux perturbations along each skewer, and then average each Fourier mode
across all skewers in a given simulation. For the 3D flux power spectrum we utilise
the fact that the skewers are calculated in an evenly spaced grid with the same
line of sight resolution, and take a Fourier
transform of flux perturbations for the entire box.
The results are then averaged in bins of $k=|\vec{k}|$
and $\mu$, where $\mu$ is the cosine of the angle of each Fourier mode
with respect to the line of sight \cite{Rogers2017a,Rogers2017b}.
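The 1D estimator described above amounts to the following sketch (the normalisation convention $P(k_n)=L\,\langle|\tilde\delta_n|^2\rangle/N^2$ is our own choice here; {\tt fake\_spectra} may normalise differently):

```python
import numpy as np

def p1d_from_skewers(delta_F, L):
    """1D flux power from an array of skewers, shape (n_skewers, n_pix).
    Fourier-transform each skewer along the line of sight and average
    |delta_k|^2 over all skewers."""
    n_pix = delta_F.shape[1]
    dk = np.fft.rfft(delta_F, axis=1)
    # Convention: P(k_n) = L * <|dk_n|^2> / n_pix^2.
    return L * np.mean(np.abs(dk) ** 2, axis=0) / n_pix ** 2

# Check against a single plane wave: delta_F = A cos(2 pi n z / L)
# carries P(k_n) = L A^2 / 4 in this convention.
L, n_pix, n_mode, A = 90.0, 512, 7, 0.3
z = np.arange(n_pix) * L / n_pix
skewers = np.tile(A * np.cos(2.0 * np.pi * n_mode * z / L), (10, 1))
P = p1d_from_skewers(skewers, L)
assert abs(P[n_mode] - L * A ** 2 / 4.0) < 1e-10
```

The 3D estimator proceeds analogously with a full-box FFT, followed by averaging $|\tilde\delta_F(\vec k)|^2$ in bins of $k$ and $\mu$.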
In this study we run simulations for three different cosmologies:
the {\it massive} simulation uses a cosmology with massive neutrinos
($\Sigma m_{\nu}=0.3$eV);
the {\it massless} simulation uses a similar cosmology but with massless
neutrinos, resulting in a slightly lower $\omega_{\mathrm{cb}\nu}$
at low redshift;
the {\it rescaled} simulation uses the same cosmology as the
{\it massless} simulation, but with a slightly lower amplitude of primordial
fluctuations $A_s$.
The value of $A_s$ in the {\it rescaled} simulation is chosen in order to
match the amplitude of the linear power spectrum of CDM + baryons in the
{\it massive} simulation at a
central redshift $z=3$, and at a wavenumber\footnote{The exact scale where we
match the linear power is not important,
see the right panel of figure \ref{fig:linear}.} $k=0.7 \ \text{Mpc}^{-1}$.
Parameters for these simulations are shown in Table \ref{tab:sims}.
The cosmology in the {\it massive} simulation has three neutrinos of
degenerate hierarchy and $\Sigma m_\nu=0.3\mathrm{eV}$, which is slightly
larger than the upper constraint provided by Planck alone \cite{Planck2018}.
We choose this extreme case as the idea we present will only become more
reliable with lower values of $\Sigma m_{\nu}$.
The first detailed study into the degeneracy between the amplitude of the
small scale linear power spectrum and $\Sigma m_\nu$ in the context of the
Ly$\alpha$ forest\ was presented in \cite{Viel2010}.
In our simulation setup we build upon this work, and aim
to capture the degeneracy more completely with the following changes.
Firstly, when adding massive neutrinos
they kept the total matter content $\Omega_{\mathrm{cb}\nu}$ fixed,
which results in a slight
scale dependence in the suppression even on small scales as seen
in the left panel of figure \ref{fig:linear} \cite{Viel2010,FVN2014}.
The effect of this is that the linear
theory power spectrum will not match on all length scales relevant for the Ly$\alpha$ forest.
Secondly, in order to mimic the effect of massive neutrinos, they match $\sigma_8$ at
$z=7$, where $\sigma_8$ is calculated in each case from the total matter power
spectrum including massive neutrinos. The Ly$\alpha$ forest\ is primarily sensitive to the clustering
of the baryons and CDM rather than the clustering of the neutrinos themselves,
and so we instead opt to match the CDM + baryon power spectrum at $z=3$, neglecting the
contribution from massive neutrinos.
The suppression also has a slight redshift dependence, so fixing $\sigma_8$ at
$z=7$ would still result in imperfect agreement between the linear theory
power spectra at lower redshifts where the Ly$\alpha$ forest\ is observed.
\begin{table}
\centering
\begin{tabular}{ | l | l | l | l |}
\hline
& {\it massive} & {\it massless} & {\it rescaled} \\ \hline
$\Sigma m_{\nu}$ (eV) & 0.3 & 0.0 & 0.0 \\
$\Omega_{\mathrm{cb}\nu}$ & 0.3192 & 0.3121 & 0.3121 \\
$A_s$ & $2.142\times 10^{-9}$ & $2.142\times 10^{-9}$ & $1.952\times 10^{-9}$ \\ \hline
$\Omega_c$ & \multicolumn{3}{c|}{0.2628} \\
$\Omega_b$ & \multicolumn{3}{c|}{0.0493} \\
$n_s$ & \multicolumn{3}{c|}{0.9667} \\
$h$ & \multicolumn{3}{c|}{0.6724} \\ \hline
\end{tabular}
\caption{Simulations from which we obtain key results.
We run three types of simulation: a simulation with $\Sigma m_\nu=0.3\mathrm{eV}$,
a massless neutrino simulation, and a massless neutrino
simulation where $A_s$ has been rescaled in order to replicate the effect
of massive neutrinos, referred to as a {\it rescaled} simulation.
All cosmological parameters are kept constant except $A_s$,
$\Sigma m_{\nu}$, and consequently $\Omega_{\mathrm{cb}\nu}$.}
\label{tab:sims}
\end{table}
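Since the linear power scales proportionally to $A_s$, the values in Table \ref{tab:sims} directly encode the size of the small-scale suppression absorbed by the rescaling; a quick check (numbers taken from the table):

```python
# The linear power scales as P_lin ∝ A_s, so the ratio of the A_s values in
# the table equals the small-scale suppression absorbed by the rescaling.
A_s_massless = 2.142e-9
A_s_rescaled = 1.952e-9

suppression = 1.0 - A_s_rescaled / A_s_massless
# ~8.9%, consistent with the ~9% suppression quoted in the text for
# Sigma m_nu = 0.3 eV.
print(f"{100 * suppression:.1f}%")  # -> 8.9%
```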
In the next section we compare the non-linear power spectra and Ly$\alpha$ forest\
observables for the simulations described in Table \ref{tab:sims}, and study
to what degree the effects of massive neutrinos on these quantities can be
replicated.
\section{Low-energy limit of the four-band model Hamiltonian of a type-II Weyl semimetal}
\label{4band_derivation}
The dispersion along the $k_z$-axis (for $k_x=k_y=0$) of the four-band Hamiltonian \eqref{Hmodel} is given by
\begin{align}
E^{\rm high}_\pm &= \pm b \pm \sqrt{(2t-\mu)^2+t^2+2t(2t-\mu)\cos k_z}, \\
E^{\rm low}_\pm &= \pm b \mp \sqrt{(2t-\mu)^2+t^2+2t(2t-\mu)\cos k_z}.
\label{eq:Eofkz}
\end{align}
For $\mu>2t$ the two low-energy bands $E_\pm^{\rm low}$ form a pair of Weyl cones located at
\begin{align}
K_z = \pm \arccos \left(\frac{(2t-\mu)^2+t^2-b^2}{2t(\mu-2t)}\right).
\label{eq:Kz}
\end{align}
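For reference, eq.~\eqref{eq:Kz} follows from requiring the low-energy bands \eqref{eq:Eofkz} to touch zero energy, $E^{\rm low}_\pm = 0$ (a step we spell out here for completeness):
\begin{align}
b^2 &= (2t-\mu)^2 + t^2 + 2t(2t-\mu)\cos K_z \nonumber\\
\Rightarrow\quad \cos K_z &= \frac{b^2-(2t-\mu)^2-t^2}{2t(2t-\mu)}
= \frac{(2t-\mu)^2+t^2-b^2}{2t(\mu-2t)} ,
\end{align}
which admits real solutions, and hence Weyl points, only when the right-hand side lies in $[-1,1]$.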
We wish to derive the corresponding low-energy Hamiltonian. Notice that for $k_x=k_y=0$ the Hamiltonian~\eqref{Hmodel} commutes with $\sigma_z$ and is thus block-diagonal. Each of the two blocks contains one low and one high energy band. At $\bm K = (0, 0, K_z)$ the corresponding low energy eigenstates are given by
\begin{align}
\Psi^{\rm low}_+ = \frac{1}{N_+}\left(\frac{(2t-\mu)(2b-2t\sqrt{1-\cos^2K_z})}{(2t-\mu)^2-t^2+b^2}, 1, 0, 0\right)
\label{eq:psip}, \\
\Psi^{\rm low}_- = \frac{1}{N_-}\left(0, 0,-\frac{(2t-\mu)(2b+2t\sqrt{1-\cos^2K_z})}{(2t-\mu)^2-t^2+b^2}, 1\right).
\label{eq:psim}
\end{align}
We expand the four-band Hamiltonian in the basis of these eigenstates:
\begin{widetext}
\begin{align}
\left(\begin{array}{cc|cc}
\langle \Psi^{\rm low}_+ | H | \Psi^{\rm low}_+ \rangle & \langle \Psi^{\rm low}_+ | H | \Psi^{\rm low}_- \rangle & \langle \Psi^{\rm low}_+ | H | \Psi^{\rm high}_+ \rangle & \langle \Psi^{\rm low}_+ | H | \Psi^{\rm high}_- \rangle \\
\langle \Psi^{\rm low}_- | H | \Psi^{\rm low}_+ \rangle & \langle \Psi^{\rm low}_- | H | \Psi^{\rm low}_- \rangle & \langle \Psi^{\rm low}_- | H | \Psi^{\rm high}_+ \rangle & \langle \Psi^{\rm low}_- | H | \Psi^{\rm high}_- \rangle \\ \hline
\langle \Psi^{\rm high}_+ | H | \Psi^{\rm low}_+ \rangle & \langle \Psi^{\rm high}_+ | H | \Psi^{\rm low}_- \rangle & \langle \Psi^{\rm high}_+ | H | \Psi^{\rm high}_+ \rangle & \langle \Psi^{\rm high}_+ | H | \Psi^{\rm high}_- \rangle \\
\langle \Psi^{\rm high}_- | H | \Psi^{\rm low}_+ \rangle & \langle \Psi^{\rm high}_- | H | \Psi^{\rm low}_- \rangle & \langle \Psi^{\rm high}_- | H | \Psi^{\rm high}_+ \rangle & \langle \Psi^{\rm high}_- | H | \Psi^{\rm high}_- \rangle
\end{array}\right) =
\left( \begin{array}{c|c}
H_{\rm low} & V_{\rm high,low} \\ \hline
V_{\rm high,low}^\dagger & H_{\rm high}
\end{array}\right).
\label{eq:HKz}
\end{align}
\end{widetext}
At $\bm K$ the high and low energy blocks are uncoupled ($V_{\rm high,low}(0, 0, K_z) = 0$). Close to the Weyl point we have
\begin{align}
\mathcal H\approx H_{\rm low} - V_{\rm high,low} (H_{\rm high})^{-1}V_{\rm high,low}^\dagger.
\label{eq:SW}
\end{align}
Thus, to linear order in the deviation from $\bm K$ we can neglect this coupling and simply linearize $H_{\rm low}$. After some algebra we find the corresponding low-energy Weyl Hamiltonian,
\begin{align}
\mathcal H = a_{\rm tilt} k_x \tilde\sigma_0 - v_x k_x \tilde\sigma_x + v_y k_y\tilde\sigma_y + v_z(k_z-K_z)\tilde\sigma_z.
\label{eq:Hlow}
\end{align}
The matrices $\tilde\sigma_{0,x,y,z}$ are the identity and the Pauli matrices in the basis of $|\Psi^{\rm low}_\pm\rangle$. The anisotropic velocity components were given in the main text, Eq.\ \eqref{vxyzresult}.
\begin{figure}[tb]
\centerline{\includegraphics[width=0.8\linewidth]{low_energy.pdf}}
\caption{Dispersion close to a type-II Weyl point. Shown are five $k_z$-subbands at $k_y=0$; blue-solid lines show the dispersion of the four-band model~\eqref{Hmodel}; yellow-dashed lines show the corresponding low energy description~\eqref{eq:Hlow}. Parameters are the same as in Fig.\ \ref{fig:Landau_levels}.}
\label{fig:low_energy}
\end{figure}
Fig.\ \ref{fig:low_energy} shows a comparison between the type-II Weyl cone of the four-band model and its effective low-energy description.
\section{Topological protection of the special magnetic field axis for Klein tunneling between electron and hole pockets}
\label{app:magic_axis_proof}
The topology of the M\"{o}bius strip (the projective plane ${\mathbb P}_2$) protects the intersection of two incontractible loops, ensuring the existence of a special magnetic field axis where the extremal contours ${\cal C}_\pm$ in the electron and hole pockets both touch the Weyl point at $E=0$. This is the arrangement shown in Fig.\ \ref{fig:magic_axis_fig}. Contractible loops can avoid the intersection, as they do in Fig.\ \ref{fig:magic_axis_fig_app}. For convex electron and hole Fermi surfaces the existence of incontractable loops is guaranteed by the following argument.
Consider the full set ${\cal S}_+$ of magnetic field axes for which the extremal contour ${\cal C}_+$ in the electron pocket touches the Weyl point. If this set consisted only of contractible loops, then we would be able to pass an incontractible loop $L$ through $\mathbb{P}_2$ that avoids ${\cal S}_+$. We will now see that this leads to a contradiction.
For a convex Fermi surface each field axis $\hat{\bm{n}}$ on $L$ is associated with a unique extremal contour $C(\hat{\bm{n}})$. By construction, the contour $C(\hat{\bm{n}})$ lies in a plane normal to $\hat{\bm{n}}$. The direction $\hat{\bm{n}}$ defines whether the Weyl point lies above or below this plane. Inversion of the axis produces the same extremal contour and therefore the same normal plane, with ``above'' and ``below'' interchanged. As we follow the incontractible loop $L$ from polar angle $\phi=0$ to $\phi=\pi$, the field axis is inverted, so at some axis $\hat{\bm{n}}_0$ on $L$ the Weyl point must move from above to below the plane. As the motion of the plane is continuous, this can only happen if the Weyl point actually lies on the contour $C(\hat{\bm{n}}_0)$. But then $\hat{\bm{n}}_0\in {\cal S}_+$, which we had excluded by the construction of $L$.
\begin{figure}[tb]
\centerline{\includegraphics[width=0.8\linewidth]{maximal_surface_app.pdf}}
\caption{Same as Fig.\ \ref{fig:magic_axis_fig}, but now the incontractible loop $L_+$ is replaced by a set of contractible loops, containing the entire set of magnetic field axes with extremal contours in the electron pocket that touch the Weyl point. This arrangement would avoid the topological protection of the intersection of incontractible loops in a M\"{o}bius strip, but we show by contradiction that it cannot happen in a convex electron pocket.
\label{fig:magic_axis_fig_app}
}
\end{figure}
The same argument can be applied to the hole pocket, and we conclude that for both the (convex) electron and hole pockets there must exist incontractible loops $L_\pm$ of field axes with extremal contours that touch the Weyl point.
\section{Klein tunneling for pairs of connected type-II Weyl points}
\label{app_angular_dep}
The curves in Fig.\ \ref{fig:exp_approx} are calculated as follows. The Fermi level is fixed at the energy $E=0$ of the Weyl points and the magnetic field $\bm{B}$ is rotated in the $x$--$y$ plane, staying close to the $y$-axis (angles $\theta=0$, $|\phi/\pi-1/2|\ll 1$). We assume that the dominant $\phi$-dependence of the amplitude of the magnetic quantum oscillations is then given by the Klein tunneling probability.
For a given field orientation $\hat{\bm{n}}$ we define $T(q)$ as the Klein tunneling probability between electron and hole pockets at $E=0$ and $\hat{\bm{n}}\cdot\bm{k}=q$. Because of the symmetry of our band structure, both Weyl points have the same $T$. We then take a planar cross-section of the Fermi surface at a momentum $q$ parallel to the field and select one of the contours indicated in the left panels of Fig.\ \ref{fig:exp_approx}. The contour encloses a signed area $A(q)$ and we determine the $q_c$ at which the area is extremal, $A'(q_c)=0$. We calculate $T_c=T(q_c)$ using the general Landau-Zener formula \eqref{TLZ_general}. Finally, we follow the contour for one period, collecting a factor $\sqrt{T_c}$ for each transmission through a Weyl point and a factor $\sqrt{1-T_c}$ for each reflection. The product of these factors is plotted in Fig.\ \ref{fig:exp_approx} (left panels) as a function of the field orientation $\phi$.
\end{document}
\section{Introduction}
Modification of general relativity is a viable explanation for current
cosmic acceleration and has several further predictable consequences
beyond the expansion history, such as the change in large scale structure
growth relative to the expansion history, change in light deflection
behavior, gravitational slip (distinction between time-time and space-space
metric gravitational potentials), and altered propagation of gravitational
waves relative to light (even for the same speed of propagation).
This offers a rich array of signatures that current and upcoming observations
can test (see recent reviews such as
\cite{1806.10122,1809.08735,1902.10503,1907.03150}).
However, unlike the expansion history, which for a broad variety
of models can be described by a few parameters (e.g.\ the matter density,
dark energy equation of state today $w_0$, and dark energy time variation
$w_a$ have been shown to reconstruct the expansion history to 0.1\%
accuracy \cite{calde}), modified gravity theories need not just a few
parameters but four functions of time in general, in addition to the
expansion history \cite{gubitosi,eft1,glpv,bellsaw,eft2}.
Still, if our first goal is to detect any modifications of gravity, one
can parametrize the effects on the observables. For example, binning the
modified Poisson equation gravitational strength $\gm$ in redshift
delivers subpercent accuracy for growth of structure with just 2--3
parameters \cite{misha1,misha2}. Comparison of observational data in
expansion vs in growth offers another useful avenue for alerts, e.g.\
\cite{huterer1709.01091,conjoin}.
A middle path is also desirable, where one has some closer connection
to theory, and a way of seeing covariances between the several observable
effects mentioned in the opening paragraph. One would like to narrow
down the multiple functional freedom but still have a viable theory that
can predict the multiple effects simultaneously. We call this limited modified
gravity.
Section~\ref{sec:limit} introduces limited modified gravity in two forms,
from the theory side and the phenomenology side, giving a systematic
summary of the limiting cases. In Sec.~\ref{sec:quant} we investigate
the main effects of the three new cases on gravitational strengths,
sound speed stability, and related quantities. Section~\ref{sec:results} then propagates
this to observables, showing how these models can be tested, and introduces
the $D_G$ statistic, which reveals signatures beyond general relativity.
We conclude in Sec.~\ref{sec:concl}.
\section{The System of Limited Modified Gravity} \label{sec:limit}
Many individual theories of modified gravity exist, but under certain
physical assumptions the most general scalar-tensor theory involving
no more than second derivatives is the Horndeski class, e.g.\
\cite{horn,deffayet,gubitosi,glpv}. This will contain
four functions of the scalar field $\phi$ and its kinetic term $X$,
$G_i(\phi,X)$ for $i=2$--5. No general principle for how to specify the
functional forms is known. Within linear theory one can equivalently
phrase the gravitational action in effective field theory terms, which
can in turn be treated by property functions, which are functions of time,
e.g.\ \cite{bellsaw,eft2}. No general principle for how to specify those
functional forms is known.
The situation improves somewhat if one applies the constraint that
gravitational waves must propagate at the speed of light, as strongly
suggested by observations \cite{gwspeed1,gwspeed2,gwspeed3}.
This (in the simplest
interpretation) removes $G_5$ and restricts $G_4$ to a function of $\phi$ alone, independent of $X$.
On the property function side, it removes $\alpha_T$. It is still difficult
to see how to choose, or parametrize, the Horndeski functions, but there
are some interesting perspectives on the property function side. Some
modified gravity theories have specific relations between the property
functions $\alpha_i(t)$, for example $f(R)$ gravity has $\alb(t)=-\alm(t)$.
For the theory avenue, therefore, we work with the property functions.
Since the kineticity $\alpha_K$ generally has negligible observational
impact \cite{bellsaw,nscmb}, we mostly ignore it (setting it to $10^{-4}$,
see \cite{nscmb}),
leaving two functions:
the Planck mass running $\alm$ and the braiding $\alb$. We explore limited
modified gravity through the limits where one or the other of these
functions is zero (when both are zero then general relativity holds), or
one is determined by the other. That is, we focus on how one function
determines the observational signatures.
For the phenomenology avenue, we work with the modified Poisson equations
that relate the metric gravitational potentials to the observable
large scale structure spatial density perturbations:
\begin{eqnarray}
\nabla^2\psi&=&4\pi G_N\delta\rho\times \gm\\
\nabla^2(\psi+\phi)&=&8\pi G_N\delta\rho\times\gl \ .
\end{eqnarray}
The first equation governs the growth of structure, with a
gravitational strength $\gm$ relative to Newton's constant $G_N$,
and the second governs the deflection
of light, with a gravitational strength $\gl$. Both $\gm$ and $\gl$
are functions of time. We explore limited modified gravity here through
the limits where one or the other of these functions is unity (when both
are unity then general relativity holds), or they are equal to each other.
Other quantities that come from the two main functions are the
gravitational slip,
\begin{equation}
\bar\eta=\frac{\gm}{\gl}=\frac{2\psi}{\psi+\phi}\ ,
\end{equation}
measuring the offset between the metric potentials or gravitational
strengths, and the sound speed of scalar fluctuations, $c_s$,
important for testing stability of the theory, with $c_s^2\ge0$ required.
The relations between the phenomenological approach and the theory
quantities are (with $\alpha_T=0$):
\begin{eqnarray}
\gm&=&\frac{m_p^2}{M_\star^2}\frac{(2+2\alm)(\alb+2\alm)+2\alb'}{(2-\alb)(\alb+2\alm)+2\alb'} \label{eq:gmgen} \\
\gl&=&\frac{m_p^2}{M_\star^2}\frac{(2+\alm)(\alb+2\alm)+2\alb'}{(2-\alb)(\alb+2\alm)+2\alb'} \label{eq:glgen} \\
\bar\eta&=&\frac{(2+2\alm)(\alb+2\alm)+2\alb'}{(2+\alm)(\alb+2\alm)+2\alb'}
\label{eq:etagen} \\
c_s^2&=&\frac{1}{\alpha_{K}+3\alpha_{B}^{2}/2} \left[\left(1-\frac{\alpha_B}{2}\right)\left(\alb+2\alpha_{M}\right)\right. \label{eq:csgen} \\
&\qquad&\left.+\frac{(H\alpha_B)'}{H}+\frac{\rho_m}{H^2}\left(1-\frac{m_p^2}{\ms}\right)+\frac{{\rho}_{\rm de}(1+w)}{H^2}\right] \ , \notag
\end{eqnarray}
where $m_p$ is the usual (constant) Planck mass, $M_\star(t)$ is the
running Planck mass, prime denotes $d/d\ln a$ with $a$ the scale factor,
$H$ is the Hubble parameter, $\rho_m$ is the matter density, $\rho_{\rm de}$
the effective dark energy density, and $w$ the dark energy equation of state.
We can now identify several special limiting cases that will hopefully
offer some interesting insights into how each limit affects observations,
and provide benchmark theories that are tractable to test.
From the
theory side, these are when $\alm=0$ and when $\alb=0$. The first of these
is known as No Run Gravity, since the Planck mass is constant, and was
investigated in \cite{nrg}. The second has not been specifically studied,
and is explored here as
``Only Run Gravity''. There are also well known theories relating nonzero
$\alm$ and $\alb$, such as various scalar-tensor theories like $f(R)$
gravity, Brans-Dicke, or chameleon theories where $\alb=-\alm$, and
No Slip Gravity where $\alb=-2\alm$ \cite{nsg,nscmb}. One can write more
generally $\alb=R\alm$ where $R$ is constant \cite{island}, but this
does not seem to have clear physical significance unlike the previous
two cases. No Slip Gravity, as the name suggests, has the special property
that the gravitational slip $\bar\eta=1$, as in general relativity, despite
modifying other parts of gravitation. The $\alb=-\alm$ scalar-tensor
cases reduce the effect on light deflection to arise only from the Planck
mass change. Thus, three of the four limited modified gravity classes
of this sort are known, and we will explore the fourth.
From the phenomenology side, we can either set $\gl=m_p^2/M_\star^2$,
i.e.\ the only effect on light is from the Planck mass, or do the same
for $\gm=m_p^2/M_\star^2$. The former leads to either the above scalar-tensor
theories or to No Slip Gravity. The latter leads to No Slip Gravity.
If we set $\gm=\gl$ then the only solutions are either general relativity
(where they are both unity) or No Slip Gravity (where they are both
$m_p^2/M_\star^2$). These properties demonstrate that No Slip Gravity
is a particularly physically meaningful limit (even apart from having
no slip).
If we force $\gm$ or $\gl$ to be completely unaffected, i.e.\ to be unity,
then we have two new theories, which we call Only Light Gravity and Only
Growth Gravity, since only one of $\gl$ or $\gm$ is modified. Of course if
both are unity then we have general relativity. Imposing such conditions
on $\gm$ or $\gl$ defines a relation between $\alm$ and $\alb$ as we
discuss below.
Table~\ref{tab:models} summarizes the limited modified gravity classes,
and gives the expressions for $\gm$, $\gl$, $\bar\eta$, $\alm$, and $\alb$
in each one.
\begin{table*}[tb]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Model & $\gm$ & $\gl$ & $\bar\eta$ & $\alpha_M$ & $\alpha_B$ & Reference \\
\hline
\rule{0pt}{1.1\normalbaselineskip}Scalar-tensor & $\gm$ & $\frac{m_p^2}{M_\star^2}$ & $\bar\eta$ & $\alm$ &
$-\alm$ & e.g.~\cite{0610532,0805.1726,1002.4928,1901.08690} \\
\rule{0pt}{1.1\normalbaselineskip}No Slip & $\frac{m_p^2}{M_\star^2}$ & $\frac{m_p^2}{M_\star^2}$ & 1 &
$\alm$ & $-2\alm$ & \cite{nsg} \\
\rule{0pt}{1.1\normalbaselineskip}No Run & $\gm$ & $\gm$ & 1 & 0 & $\alb$ & \cite{nrg} \\
\rule{0pt}{1.1\normalbaselineskip}Only Run & $\frac{m_p^2}{M_\star^2}(1+\alm)$ & $\frac{m_p^2}{M_\star^2}
\left(1+\frac{\alm}{2}\right)$ & $\frac{1+\alm}{1+\alm/2}$ & $\alm$ & 0 &
new \\
\rule{0pt}{1.1\normalbaselineskip}Only Light & 1 & $\gl$ & $1/\gl$ & $\alm$ & dif.eq.($\alm$) & new \\
\rule{0pt}{1.1\normalbaselineskip}Only Growth & $\gm$ & 1 & $\gm$ & $\alm$ & dif.eq.($\alm$) & new \\
\hline
\end{tabular} \\
\caption{Several limited modified gravity models are of special interest,
having specific characteristics in either theoretical or observational
functions. The lower three are new and analyzed in this article. The
notation ``dif.eq.($\alm$)'' denotes that $\alb$ is determined from $\alm$
by a differential equation.
}
\label{tab:models}
\end{table*}
\section{Gravitational Functions} \label{sec:quant}
We focus in the rest of the article on the three new theories, and here
relate the key quantities in each.
\subsection{Only Run Gravity} \label{sec:onlyrun}
When $\alb=0$ then the scalar sector is not braided with the tensor
sector and the effective dark energy does not cluster on subhorizon
scales (see \cite{bellsaw} for details), however there is still modification
to matter perturbation growth. The main quantities are determined by $\alm$
(and the expansion history), with
\begin{eqnarray}
\gm&=&\frac{m_p^2}{\ms}(1+\alm)\\
\gl&=&\frac{m_p^2}{\ms}\left(1+\frac{\alm}{2}\right)\\
\bar\eta&=&\frac{1+\alm}{1+\alm/2}\ .
\end{eqnarray}
Note that $\alb=0$ implies in the Horndeski approach that
$G_{4\phi}=XG_{3X}$. This can be compared to No Slip Gravity which
has $G_{4\phi}=-XG_{3X}$.
In the early universe we wish all quantities to restore to general
relativity, to preserve primordial nucleosynthesis and the cosmic
microwave background (CMB) results, so we want $\alm(a\ll1)\to0$.
At late times, if we seek a de Sitter state then all time variations
must stop, so $\ms$ freezes and again $\alm(a\gg1)\to0$, since
$\alm\equiv d\ln\ms/d\ln a$. This tells us that at early times
$\gm\to\gl\to1$, $\bar\eta\to1$ and at late times
$\gm\to\gl\to m_p^2/\ms$, $\bar\eta\to1$. Moreover, $\bar\eta$ will have
its maximum deviation from unity when $\alm$ does. Thus a deviation
of $\alm$ (and hence growth and other effects) from general relativity in
the observable epoch will also give slip in that epoch.
One can also see from Eq.~(\ref{eq:csgen}) that the sound speed squared
will be proportional to $\alm$ at early times, requiring $\alm(a\ll1)\ge0$
for stability. The same holds in the approach to a de Sitter state. Thus
the simplest model will be of a hill form, as in \cite{nsg}. We adopt
that here:
\begin{equation}
\alm=\frac{4c_M\,(a/a_t)^\tau}{[(a/a_t)^\tau+1]^2} \ , \label{eq:hill}
\end{equation}
which also gives an analytic expression
\begin{equation}
\frac{\ms}{m_p^2}=e^{(2c_M/\tau)(1+\tanh[(\tau/2)\ln(a/a_t)])}\ .
\end{equation}
The Planck mass squared therefore goes from 1 in the past to $e^{4c_M/\tau}$
in the future. Note the maximum of $\alm$ is $c_M$, occurring at $a=a_t$,
and $\tau$ is a measure of the transition width.
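This closed form can be cross-checked numerically (a sketch, not part of the original analysis): integrating $\alm=d\ln\ms/d\ln a$ for the hill form should reproduce the analytic $\ms/m_p^2$, with $\ms\to m_p^2$ deep in the matter era. The parameter values below are illustrative:

```python
import numpy as np

# Hill-form parameters (illustrative values).
cM, at, tau = 0.05, 0.5, 1.5

def alpha_M(a):
    x = (a/at)**tau
    return 4*cM*x/(x + 1)**2

def Msq_analytic(a):
    # M_*^2/m_p^2 from the closed-form integral of alpha_M
    return np.exp((2*cM/tau)*(1 + np.tanh((tau/2)*np.log(a/at))))

# Integrate alpha_M = d ln(M_*^2)/d ln a from deep in the matter era,
# where M_*^2 -> m_p^2 (normalized to 1).
lna = np.linspace(np.log(1e-6), 0.0, 20001)
alm = alpha_M(np.exp(lna))
lnMsq = np.concatenate(([0.0], np.cumsum(0.5*(alm[1:] + alm[:-1])*np.diff(lna))))

# Maximum mismatch between trapezoidal integration and the closed form.
err = np.max(np.abs(lnMsq - np.log(Msq_analytic(np.exp(lna)))))
print(err)
```

The trapezoidal integral agrees with the analytic expression to high accuracy, and $\alm$ indeed peaks at $c_M$ for $a=a_t$.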
Unlike in No Slip Gravity and No Run Gravity, stability at early times
only depends on the sign of $c_M$, requiring it to be positive, and not
on the value of $\tau$. We also assume $\alpha_K>0$. What is especially
interesting is the influence of $\tau$ on $\gm$ and $\gl$. At early times
$\alm\to 4c_M(a/a_t)^\tau$ and
\begin{eqnarray}
\gm&\to&1+(\tau-1)\alm \label{eq:gm1}\\
\gl&\to&1+(\tau/2-1)\alm\ . \label{eq:gl1}
\end{eqnarray}
This implies that for $1<\tau<2$, the gravitational strength deviations
from general relativity $\gm-1$ and $\gl-1$ will have opposite signs,
one being weaker and one being stronger than Einstein gravity. This
gives a direct proof that such a condition is possible, as argued by
\cite{1607.03113} in response to the conjecture of \cite{1606.05339}
that the two deviations must have the same sign.
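The opposite-sign behavior is straightforward to verify from the exact expressions for $\gm$ and $\gl$ of this section. A minimal numerical sketch, using the hill-form parameters of Fig.~\ref{fig:quant4} ($c_M=0.05$, $a_t=0.5$, $\tau=1.5$, so $1<\tau<2$):

```python
import numpy as np

# Hill-form parameters with 1 < tau < 2.
cM, at, tau = 0.05, 0.5, 1.5

def alpha_M(a):
    x = (a/at)**tau
    return 4*cM*x/(x + 1)**2

def mu(a):
    # m_p^2/M_*^2 from the analytic running Planck mass of the hill form
    return np.exp(-(2*cM/tau)*(1 + np.tanh((tau/2)*np.log(a/at))))

def G_matter(a):
    return mu(a)*(1 + alpha_M(a))

def G_light(a):
    return mu(a)*(1 + alpha_M(a)/2)

# Early times: gravity strengthened for growth, weakened for light deflection.
a_early = 0.05
print(G_matter(a_early) - 1, G_light(a_early) - 1)
```

At $a=0.05$ this gives $\gm-1>0$ while $\gl-1<0$, the opposite-sign regime discussed above.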
Figure~\ref{fig:quant4} exhibits the evolution of the key quantities
$\gm$, $\gl$, $\bar\eta$, and $c_s^2$. Indeed we see that $\gm-1$ and
$\gl-1$ can have opposite signs -- gravity for growth is strengthened
while gravity for light deflection is weakened -- at early times.
Gravitational slip is present and $c_s^2>0$ shows the theory is stable.
Extending the numerical evolution further to the future than shown
verifies that all quantities freeze to constant values (with $\bar\eta\to1$).
Figure~\ref{fig:quant2tau} explores the opposite sign conditions on
$\gm-1$ and $\gl-1$ in further detail, verifying the analytic derivation
that this occurs at early times when $1<\tau<2$. Note however that the
opposite signs can occur more generally at later times, including during
the key observational window $a\approx 0.25$--0.6.
\begin{figure}[tbp!]
\includegraphics[width=\columnwidth]{quant4.ps}
\caption{
The deviations from general relativity (GR) for the gravitational coupling
strength for matter $\gm-1$, for light $\gl-1$,
the gravitational slip $\bar\eta-1$, and the sound speed squared $c_s^2$
for Only Run Gravity are plotted vs scale factor. Here we take the hill
form Eq.~(\ref{eq:hill}) for $\alm$ with $c_M=0.05$, $a_t=0.5$, $\tau=1.5$.
}
\label{fig:quant4}
\end{figure}
\begin{figure}[tbp!]
\includegraphics[width=\columnwidth]{quant2tau.ps}
\caption{
The gravitational strengths $\gm$ and $\gl$ for Only Run Gravity can exhibit
opposite deviations from general relativity (opposite signs in $\gm-1$ and
$\gl-1$) for some of their evolution. We show cases for three different
values of $\tau$, with the behavior following the analytic predictions of
Eqs.~(\ref{eq:gm1})--(\ref{eq:gl1}).
}
\label{fig:quant2tau}
\end{figure}
\subsection{Only Growth Gravity} \label{sec:onlygrow}
When we limit the modification of gravity to the
growth sector, leaving light deflection unchanged from general relativity,
$\gl=1$, this imposes a condition relating $\alb$ to $\alm$ through a
differential equation,
\begin{equation}
\alb'=(\alb+2\alm)\left[-1+\frac{\alb+\mu\alm}{2(1-\mu)}\right]\ ,
\end{equation}
where $\mu=m_p^2/\ms$.
We can plug this back into Eq.~(\ref{eq:gmgen}) to obtain
\begin{equation}
\gm=\frac{\alb+\alm(2-\mu)}{\alb+\alm}\ .
\end{equation}
Note however that one must solve the differential equation to obtain
$\alb(\alm)$.
The early universe limit is $\gm\to1$ so $\mu\to1$, $\alm\to0$, $\alb\to0$.
The de Sitter limit is $\gm\to1$ with
$\alb\to 2(1-m_p^2/M^2_{\star,{\rm dS}})$ since $\alb'\to0$.
The stability condition $c_s^2\ge0$ at early times requires $\alm>0$.
The differential equation is straightforward to solve and we present the
numerical results below, also checking stability for all times. The
results are insensitive to initial conditions, as long as
$\alpha_{B,i}<-\alpha_{M,i}$ (otherwise $\gm$ will diverge when
$\alb+\alm$ crosses zero).
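The algebra above can be spot-checked numerically: substituting this $\alb'$ into the general expression (\ref{eq:glgen}) must return $\gl=1$ identically, while (\ref{eq:gmgen}) must reduce to the form just given. A minimal sketch with arbitrary (hypothetical) values of $\mu$, $\alm$, $\alb$:

```python
# General expressions (alpha_T = 0) for the gravitational strengths,
# with mu = m_p^2/M_*^2 and aBp = alpha_B'.
def G_general(mu, aM, aB, aBp):
    den = (2 - aB)*(aB + 2*aM) + 2*aBp
    Gm = mu*((2 + 2*aM)*(aB + 2*aM) + 2*aBp)/den
    Gl = mu*((2 + aM)*(aB + 2*aM) + 2*aBp)/den
    return Gm, Gl

# Arbitrary illustrative values (hypothetical; not a solution of the
# background evolution).
mu, aM, aB = 0.95, 0.04, -0.07

# alpha_B' imposed by the Only Growth (G_light = 1) condition.
aBp = (aB + 2*aM)*(-1 + (aB + mu*aM)/(2*(1 - mu)))

Gm, Gl = G_general(mu, aM, aB, aBp)
Gm_reduced = (aB + aM*(2 - mu))/(aB + aM)
print(Gl, Gm, Gm_reduced)  # Gl = 1; Gm matches the reduced form
```
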
Figure~\ref{fig:qmatter} shows $\gm$ and $c_s^2$ for the hill form of
$\alm$ with $c_M=0.03$, $a_t=0.5$, and two different values of $\tau$.
(Recall that for this model $\bar\eta=\gm$.)
Note that the modified Poisson equation for growth shows weaker gravity
than general relativity (while the Poisson equation for light deflection
is the same as general relativity by construction). The minimum strength
$G_{\rm matter,min}\approx 1-4c_M$ and then it slowly approaches $\gm\to1$
in the de Sitter limit (verified by extending the integration to $a\gg1$);
also $c_s^2\to0$ in that limit. Only Growth Gravity, like No Slip Gravity
but unlike many scalar-tensor theories, suppresses growth -- this can
ease some tensions in $f\sigma_8$ measurements from redshift space
distortions with respect to $\Lambda$CDM cosmology, and possibly also
$\sigma_8$ tensions from weak lensing -- making it a theory worth further
study.
\begin{figure}[tbp!]
\includegraphics[width=\columnwidth]{qmatter.ps}
\caption{
The gravitational coupling strength for matter $\gm$ in Only Growth Gravity
can be weaker than in general relativity. The evolution $\gm(a)$ is shown
with the solid curves, and the sound speed squared $c_s^2(a)$ by the
dashed curves. For each, two values of $\tau$ are exhibited for $\alm$ in
the hill form, with $c_M=0.03$, $a_t=0.5$.
}
\label{fig:qmatter}
\end{figure}
\subsection{Only Light Gravity} \label{sec:onlylight}
The third new theory limits the modification of gravity to the
light deflection sector, leaving growth unchanged from general relativity,
$\gm=1$. This again gives a relation for $\alb$ in terms of $\alm$, in
the form of
\begin{equation}
\alb'=(\alb+2\alm)\left[-1+\frac{\alb+2\mu\alm}{2(1-\mu)}\right]\ .
\end{equation}
We can plug this back into Eq.~(\ref{eq:glgen}) to obtain
\begin{equation}
\gl=\frac{\alb+\alm(1+\mu)}{\alb+2\alm}\ .
\end{equation}
Again note that one must solve the differential equation to obtain
$\alb(\alm)$.
The early universe limit is $\gl\to1$ so $\mu\to1$, $\alm\to0$, $\alb\to0$.
The de Sitter limit is $\gl\to1$ with
$\alb\to 2(1-m_p^2/M^2_{\star,{\rm dS}})$, as in the Only Growth Gravity
case, and again the differential equation is straightforward to solve.
Only Light Gravity is more difficult, however, in that the denominator
of $\gl$ involves $\alb+2\alm$ and this is exactly the prefactor in the
$\alb'$ equation. This means that if at some point in the evolution of
$\alb$ it reaches or crosses $-2\alm$, as the dynamical equation
motivates, then the gravitational strength diverges.
We have not yet been able to find cases where this divergence does not occur
(e.g.\ trying the hill form for $\alm$, or power laws times Gaussians),
though neither have we found a proof that no nondivergent solution exists.
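The constraint itself can still be spot-checked numerically: this $\alb'$ must give $\gm=1$ in the general expression (\ref{eq:gmgen}), while (\ref{eq:glgen}) must reduce to the $\gl$ form above. A minimal sketch with arbitrary (hypothetical) input values:

```python
# General expressions (alpha_T = 0) for the gravitational strengths,
# with mu = m_p^2/M_*^2 and aBp = alpha_B'.
def G_general(mu, aM, aB, aBp):
    den = (2 - aB)*(aB + 2*aM) + 2*aBp
    Gm = mu*((2 + 2*aM)*(aB + 2*aM) + 2*aBp)/den
    Gl = mu*((2 + aM)*(aB + 2*aM) + 2*aBp)/den
    return Gm, Gl

mu, aM, aB = 0.95, 0.04, -0.07   # arbitrary illustrative (hypothetical) values

# alpha_B' imposed by the Only Light (G_matter = 1) condition.
aBp = (aB + 2*aM)*(-1 + (aB + 2*mu*aM)/(2*(1 - mu)))

Gm, Gl = G_general(mu, aM, aB, aBp)
Gl_reduced = (aB + aM*(1 + mu))/(aB + 2*aM)
print(Gm, Gl, Gl_reduced)  # Gm = 1; Gl matches the reduced form
```
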
\section{Observational Functions} \label{sec:results}
These modified gravity theories are highly predictive (in the linear
regime at least). With the expressions for $\gm$, $\gl$, and $\ms$
one can calculate observables in growth and light propagation. Furthermore,
\cite{nsg} identified a clear link between predictions for cosmic
growth and for gravitational wave propagation. Basically, deviations in
cosmic growth predict deviations in gravitational waves and vice versa.
This allows an important
test for modified gravity -- if a signature is seen in growth of large
scale structure, it could be seen as well in the luminosity distances
of gravitational wave standard sirens vs standard candles. Such a
crosscheck is a valuable systematics test; while one might find other
cosmological model parameters or astrophysical uncertainties that could
change growth and, say, the CMB or lensing dynamics in a way that mimics
modified gravity (e.g.\ neutrinos or selection effects),
such common systematics are much less
likely with a gravitational wave comparison.
Therefore in this section we not only look at the observational effects
on large scale structure growth through the growth rate $f\sigma_8$, but also
their connection to observational effects on gravitational wave propagation.
Recall that luminosity distances for photon sources, such as supernovae,
only depend on the background expansion, which we are holding fixed when
we change gravity from general relativity. However gravitational wave
propagation is sensitive to the Planck mass running
\cite{1406.7139,1408.2224,1509.08458,1710.04825,1711.03776,1712.08623,1712.08108,nsg,holz}, and so
\begin{equation}
\frac{d_{L,{\rm GW}}(a)}{d_{L,\gamma}(a)}=\left[\frac{\ms(a=1)}{\ms(a)}\right]^{1/2} \ .
\end{equation}
Figure~\ref{fig:nsggw} shows the prediction for both probes
for No Slip Gravity.
We see the characteristic suppression of growth, at the 3--5\% level,
relative to general relativity, over the currently measured range of
redshifts using redshift space distortions as in Fig.~3 of \cite{nsg}.
But in addition we plot the deviation in luminosity distance to gravitational
wave standard sirens relative to photon luminosity distances, e.g.\ from
standardized candles such as Type Ia supernovae. At redshift $z=1$ this
model predicts a 1\% deviation in $d_L$, concomitant with a 3\% deviation
in $f\sigma_8$. As measurements move to higher redshift, say $z=2$,
the deviations become
1.6\% in $d_L$ and 2\% in $f\sigma_8$. The numbers given are for $c_M=0.03$ and
will scale linearly with $c_M$. The key point is that the gravity model
predicts exactly how they should be related at all redshifts, allowing
for leverage by combining several low signal to noise measurements.
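The quoted deviations follow directly from the analytic running Planck mass of the hill form (a sketch, not part of the original analysis; it reproduces the $\sim$1\% and $\sim$1.6\% numbers above):

```python
import numpy as np

# No Slip Gravity parameters used in Fig. nsggw: c_M = 0.03, a_t = 0.5, tau = 1.5.
cM, at, tau = 0.03, 0.5, 1.5

def lnMsq(a):
    # ln(M_*^2/m_p^2) for the hill form of alpha_M
    return (2*cM/tau)*(1 + np.tanh((tau/2)*np.log(a/at)))

def dL_ratio(z):
    # d_L,GW / d_L,gamma = [M_*^2(a=1) / M_*^2(a)]^(1/2)
    a = 1.0/(1.0 + z)
    return np.exp(0.5*(lnMsq(1.0) - lnMsq(a)))

# Fractional deviation of the standard-siren distance at z = 1 and z = 2.
print(dL_ratio(1.0) - 1, dL_ratio(2.0) - 1)
```

Since $d_{L,{\rm GW}}/d_{L,\gamma}$ depends only on $\ms$, the deviation scales linearly with $c_M$ in this regime, as stated in the text.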
\begin{figure}[tbp!]
\includegraphics[width=\columnwidth]{nsggw.ps}
\caption{
Deviations from general relativity in the cosmic growth and gravitational
wave distance predictions are connected, and serve as a valuable crosscheck.
Here the relations are shown for $d_{L,{\rm GW}}^{\rm MG}/d_L^{\rm GR}-1$
and $f\sigma_8^{\rm MG}/f\sigma_8^{\rm GR}-1$ for No Slip Gravity, with model parameters
$c_M=0.03$, $a_t=0.5$, $\tau=1.5$. Deviations will scale linearly with $c_M$.
}
\label{fig:nsggw}
\end{figure}
Figure~\ref{fig:onlyrungw} shows the growth and gravitational wave
quantities for Only Run Gravity. Here, the deviation of the growth
from general relativity is partially canceled because the gravitational
strength $\gm$ is enhanced at high redshift, but suppressed at low
redshift, as seen in Fig.~\ref{fig:quant2tau}. This increases $f\sigma_8$
relative to general relativity for $a\lesssim0.5$ but decreases it for
$a\gtrsim0.5$. That allows higher values of Planck mass running amplitude
$c_M$ to be viable for growth observations. However, the hiding of the
deviation in growth due to the cancellation does not hold for the
gravitational wave luminosity distance, which sees simply the enhancement
of $\ms$ relative to $m_p^2$. Thus the two observational probes work
extremely well together.
\begin{figure}[tbp!]
\includegraphics[width=\columnwidth]{onlyrungw.ps}
\caption{
As Fig.~\ref{fig:nsggw} but for Only Run Gravity, with model parameters
$c_M=0.1$, $a_t=0.5$, $\tau=1.5$. Relatively large values of $c_M$ still
give viable results for growth, allowing for strong effects on gravitational
waves.
}
\label{fig:onlyrungw}
\end{figure}
Figure~\ref{fig:onlygrowgw} shows the growth and gravitational wave
quantities for Only Growth Gravity. This has a third, distinct behavior
for the relation between growth and gravitational waves. Due to the
rapid suppression of $\gm$ at early times, the growth gets off to a slow
start, and the continued weakness of gravity does not allow it to recover,
giving a strongly suppressed growth rate in the observational epoch.
This requires a small value of $c_M$ for viability, which substantially
reduces the signature of deviation in gravitational waves. However this
does mean that cosmic growth measurements can probe much smaller $c_M$
values than the other models discussed.
\begin{figure}[tbp!]
\includegraphics[width=\columnwidth]{onlygrowgw.ps}
\caption{
As Fig.~\ref{fig:nsggw} but for Only Growth Gravity, with model parameters
$c_M=0.01$, $a_t=0.5$, $\tau=1$. Note that the early time, and sustained,
weakening of $\gm$ as seen in Fig.~\ref{fig:qmatter} have a strong effect
to suppress growth. This indicates that even small values of $c_M$ can
have an observable effect on growth, though then the effect on gravitational
waves becomes negligible.
}
\label{fig:onlygrowgw}
\end{figure}
Thus we have seen that cosmic growth rate measurements through redshift
space distortions and gravitational wave luminosity distance measurements
through standard sirens have great complementarity. The three models we
discussed in this section have distinct signatures in each, with predictions
for their respective redshift dependences. Measurements
through both probes could not only test general relativity but distinguish
between these classes of gravity models: No Slip Gravity gives discernible
deviations in each, Only Run Gravity has a larger effect on gravitational
waves, and Only Growth Gravity has a larger effect on the cosmic growth
rate. (And of course Only Light Gravity has no effect on growth, only
on gravitational waves, while No Run Gravity has no effect on gravitational
waves, but enhances growth.)
We demonstrate the clear leverage for distinguishing the classes of
gravity by defining a new statistic,
\begin{equation}
D_G(a)=\frac{d_{L,{\rm GW}}^{\rm MG}/d_L^{\rm GR}}{f\sigma_8^{\rm MG}/f\sigma_8^{\rm GR}}\ .
\end{equation}
In general relativity this is simply a constant with value unity for all
$a$. However each of the classes of modified gravity we discussed will
not only show in the $D_G$ statistic deviations from unity (testing general
relativity), but have a distinct shape with redshift. While scaling $c_M$
will change the amplitude, it will not mix the shapes.
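Concretely, $D_G$ is simply a ratio of ratios; the following minimal sketch computes it from the two probe deviations (the input values are illustrative, assumed sampled at the same scale factors $a$, and are not outputs of our growth code):

```python
def d_g_statistic(dl_gw_mg, dl_gr, fs8_mg, fs8_gr):
    """Conjoined statistic D_G(a):
    (d_L,GW^MG / d_L^GR) / (f sigma_8^MG / f sigma_8^GR).
    All four inputs are evaluated at the same scale factor a."""
    return (dl_gw_mg / dl_gr) / (fs8_mg / fs8_gr)
```

In general relativity both ratios are unity, so $D_G(a)=1$ identically; a model that enhances the gravitational wave distance while suppressing growth yields $D_G>1$.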
Figure~\ref{fig:dlfsgw} illustrates that indeed the different models are
highly distinct in the $D_G$ statistic.
\begin{figure}[tbp!]
\includegraphics[width=\columnwidth]{dlfsgw.ps}
\caption{
The new $D_G$ statistic, using the complementarity of the gravitational
wave luminosity distance $d_{L,{\rm GW}}$ and the cosmic matter growth
rate $f\sigma_8$, can clearly distinguish different classes of gravity. Each
class has a distinct shape in its redshift dependence $D_G(a)$. General
relativity has constant $D_G=1$.
}
\label{fig:dlfsgw}
\end{figure}
\section{Conclusions} \label{sec:concl}
We assessed in a systematic way limits of modified gravity in terms of
property functions and observational functions, including introducing
three new classes of modified gravity. Such limits are simpler than the
full freedom of gravity theories but are more predictive, and display
clear signatures that observations can use to test general relativity
and distinguish between theory classes.
For the three new theories -- Only Run Gravity, Only Growth Gravity,
and Only Light Gravity -- we compute the key functions of the
gravitational strengths for cosmic growth and for light deflection,
$\gm$ and $\gl$, and the gravitational slip $\bar\eta$ and scalar
perturbation sound speed squared $c_s^2$. Interestingly, Only
Run Gravity provides a definite demonstration that the deviations
from general relativity $\gm-1$ and $\gl-1$ for matter and light
can have opposite signs, which has been a topic of conjecture.
These theories can also provide suppressed matter growth, in
contrast to many scalar-tensor theories and in some accord with
observations.
In addition to solving for the evolution of these key functions,
we also calculate two observational quantities. One is $f\sigma_8$,
the cosmic growth rate for large scale structure perturbations,
measurable through redshift space distortions in galaxy
surveys such as DESI \cite{desi}. The other is the luminosity
distance to gravitational wave standard siren events,
$d_{L,{\rm GW}}$, which can differ from the photon luminosity
distance to standard candles such as Type Ia supernovae,
despite a gravitational wave propagation speed equal to the
speed of light.
Conjoined analysis of the two observables, $f\sigma_8$ and
$d_{L,{\rm GW}}$, as introduced by \cite{nsg}, is highly
insightful. For one thing, they offer a critical crosscheck
for systematic control. As well, there is a diversity of
behaviors between the classes of gravity in the magnitude
of deviations in one vs the other, and predictive power in
the specific redshift dependence between the two. This
enables even low signal to noise measurements at individual
redshifts to combine to give significant evidence to test
general relativity and distinguish classes of gravity.
We defined a new statistic $D_G$ to use for the conjoined
analysis of the two probes, illustrating that it has distinct redshift
dependence for different classes.
Future measurements will demonstrate the strong
complementarity of these probes.
Other combinations of gravitational wave and large scale structure
information are discussed in, e.g., \cite{1901.07832,1908.08951}.
There is still much to understand about modified gravity,
especially if one starts furthest from the observations with the
$G_i(\phi,X)$ functions in the Horndeski Lagrangian. The relation
between these functions exhibited by, e.g., Only Run Gravity
and No Slip Gravity may provide some direction to future
investigations, but here we focused on quantities closer to the
observations. The approach of Limited Modified Gravity gives
a framework that is tractable, predictive, and yet with a range
of important characteristics that can yield insights when
confronted with forthcoming data.
\acknowledgments
This work is supported in part by the Energetic Cosmos Laboratory and by the
U.S.\ Department of Energy, Office of Science, Office of High Energy Physics,
under Award DE-SC-0007867 and contract no.\ DE-AC02-05CH11231.
\section{Introduction}
Attacking a machine learning model with adversarial perturbations is the process of making changes to its input to maximize an adversarial goal, such as mis-classification \cite{Szegedy2013IntriguingPO} or mis-translation \cite{zhao2018generating}.
These attacks provide insight into the vulnerabilities of machine learning models and their brittleness to samples outside the training distribution. Lack of robustness to these attacks poses security concerns to safety-critical applications, \eg{} self-driving cars \cite{bojarski2016end}.
Adversarial attacks were first defined and investigated for computer vision systems (\citet{Szegedy2013IntriguingPO,Goodfellow2014ExplainingAH,MoosaviDezfooli2016DeepFoolAS} inter alia), where the input space is continuous, making minuscule perturbations largely imperceptible to the human eye.
In discrete spaces such as natural language sentences, the situation is more problematic; even a flip of a single word or character is generally perceptible by a human reader.
Thus, most of the mathematical framework in previous work is not directly applicable to discrete text data.
Moreover, there is no canonical distance metric for textual data like the $\ell_p$ norm in real-valued vector spaces such as images, and evaluating the level of semantic similarity between two sentences is a field of research of its own \cite{cer-EtAl:2017:SemEval}.
This elicits a natural question: \textit{what does the term ``adversarial perturbation'' mean in the context of \ac{nlp}}?
We propose a simple but natural criterion for adversarial examples in \ac{nlp}, particularly untargeted\footnote{Here we use the term untargeted in the same sense as \cite{Ebrahimi2018OnAE}: an attack whose goal is simply to decrease performance with respect to a reference translation.} attacks on \ac{seq2seq} models: \emph{adversarial examples should be meaning-preserving on the source side, but meaning-destroying on the target side}.
The focus on explicitly evaluating meaning preservation is in contrast to previous work on adversarial examples for \ac{seq2seq} models \cite{belinkov2018synthetic,zhao2018generating,cheng2018seq2sick,Ebrahimi2018OnAE}.
Nonetheless, this feature is extremely important; given two sentences with equivalent meaning, we would expect a good model to produce two outputs with equivalent meaning.
In other words, any meaning-preserving perturbation that results in the model output changing drastically highlights a fault of the model.
A first technical contribution of this paper is to lay out a method for formalizing this concept of meaning-preserving perturbations (\S\ref{sec:eval_adv_attacks}).
This makes it possible to evaluate the effectiveness of adversarial attacks or defenses either using gold-standard human evaluation, or approximations that can be calculated without human intervention.
We further propose a simple method of imbuing gradient-based word substitution attacks (\S\ref{sec:attack_paradigm}) with simple constraints aimed at increasing the chance that the meaning is preserved (\S\ref{sec:constraints}).
Our experiments are designed to answer several questions about meaning preservation in \ac{seq2seq} models.
First, we evaluate our proposed ``source-meaning-preserving, target-meaning-destroying'' criterion for adversarial examples using both manual and automatic evaluation (\S\ref{sec:corr_human_auto}) and find that a less widely used evaluation metric (\goodscore{}) provides significantly better correlation with human judgments than the more widely used BLEU and METEOR metrics.
We proceed to perform an evaluation of adversarial example generation techniques, finding that \goodscore{} does help to distinguish between perturbations that are more meaning-preserving across a variety of languages and models (\S\ref{sec:attack_results}).
Finally, we apply existing methods for adversarial training to the adversarial examples with these constraints and show that making adversarial inputs more semantically similar to the source is beneficial for robustness to adversarial attacks and does not decrease test performance on the original data distribution (\S\ref{sec:adv_train}).
\section{A Framework for Evaluating Adversarial Attacks}
\label{sec:eval_adv_attacks}
In this section, we present a simple procedure for evaluating adversarial attacks on \ac{seq2seq} models. We will use the following notation: $x$ and $y$ refer to the source and target sentence respectively. We denote $x$'s translation by model $M$ as $y_M$. Finally, $\hat x$ and $\hat y_M$ represent an adversarially perturbed version of $x$ and its translation by $M$, respectively. The nature of $M$ and the procedure for obtaining $\hat x$ from $x$ are irrelevant to the discussion below.
\subsection{The Adversarial Trade-off}
\label{sec:adv_tradeoff}
The goal of adversarial perturbations is to produce failure cases for the model $M$. Hence, the evaluation must include some measure of the \emph{target similarity} between $y$ and $y_{M}$, which we will denote $\stgt(y, \hat y_M)$.
However, if no distinction is being made between perturbations that preserve the meaning and those that don't, a sentence like ``he's very \textit{friendly}'' is considered a valid adversarial perturbation of ``he's very \textit{adversarial}'', even though its meaning is the opposite.
Hence, it is crucial, when evaluating adversarial attacks on \ac{mt} models, that the discrepancy between the original and adversarial input sentence be quantified in a way that is sensitive to meaning. Let us denote such a \emph{source similarity} score $\ssrc(x,\hat x)$.
Based on these functions, we define the \emph{target relative score decrease} as:
\begin{equation}
\dtgt(y, y_M, \hat y_M) =
\begin{cases}
0 & \text{if } \stgt(y, \hat y_M) \ge \stgt(y, y_M) \\
\frac{\stgt(y, y_M)-\stgt(y, \hat y_M)}{\stgt(y, y_M)} & \text{otherwise}
\end{cases}
\end{equation}
The choice to report the \emph{relative} decrease in $\stgt$ makes scores comparable across different models or languages\footnote{Note that we do not allow negative $\dtgt$ to keep all scores between 0 and 1.}. For instance, for languages that are comparatively easy to translate (\eg{} French-English), $\stgt$ will be higher in general, and so will the gap between $\stgt(y, y_M)$ and $\stgt(y, \hat{y}_M)$. However this does not necessarily mean that attacks on this language pair are more effective than attacks on a ``difficult'' language pair (\eg{} Czech-English) where $\stgt$ is usually smaller.
We recommend that both $\ssrc$ and $\dtgt$ be reported when presenting adversarial attack results. However, in some cases where a single number is needed, we suggest reporting the attack's \emph{success} $\mathcal S\coloneqq \ssrc + \dtgt $.
The interpretation is simple: $\mathcal S>1 \Leftrightarrow\dtgt>1-\ssrc$, which means that the attack has destroyed the target meaning ($\dtgt$) more than it has destroyed the source meaning ($1-\ssrc$).
Importantly, this framework can be extended beyond strictly meaning-preserving attacks. For example, for targeted keyword introduction attacks \cite{cheng2018seq2sick,Ebrahimi2018OnAE}, the same evaluation framework can be used if $\stgt$ (resp. $\ssrc$) is modified to account for the presence (resp. absence) of the keyword (or its translation in the source). Similarly this can be extended to other tasks by adapting $\stgt$ (\eg{} for classification one would use the zero-one loss, and adapt the success threshold).
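As a concrete illustration, the target relative score decrease and the success criterion above can be computed as follows (a minimal sketch; the similarity values used below are illustrative numbers on a 0--1 scale, not outputs of any particular metric):

```python
def target_rel_decrease(s_orig, s_adv):
    """Relative decrease d_tgt of the target similarity, clipped at 0
    so that all scores stay between 0 and 1."""
    if s_adv >= s_orig:
        return 0.0
    return (s_orig - s_adv) / s_orig

def attack_success(s_src, d_tgt):
    """Success S = s_src + d_tgt. S > 1 means the attack destroyed the
    target meaning (d_tgt) more than the source meaning (1 - s_src)."""
    return s_src + d_tgt

d = target_rel_decrease(0.60, 0.15)  # (0.60 - 0.15) / 0.60 = 0.75
s = attack_success(0.80, d)          # 0.80 + 0.75 > 1: a successful attack
```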
\subsection{Similarity Metrics}
\label{sec:eval_metrics}
Throughout \S\ref{sec:adv_tradeoff}, we have not given an exact description of the semantic similarity scores $\ssrc$ and $\stgt$. Indeed, automatically evaluating the semantic similarity between two sentences is an open area of research and it makes sense to decouple the definition of adversarial examples from the specific method used to measure this similarity. In this section, we will discuss manual and automatic metrics that may be used to calculate it.
\subsubsection{Human Judgment}
\label{sec:human_judgement}
Judgment by speakers of the language of interest is the \textit{de facto} gold standard metric for semantic similarity. Specific criteria such as adequacy/fluency \cite{Ma2006CorpusSF}, acceptability \cite{Goto2013OverviewOT}, and 6-level semantic similarity \cite{cer-EtAl:2017:SemEval} have been used in evaluations of \ac{mt} and sentence embedding methods.
In the context of adversarial attacks, we propose the following 6-level evaluation scheme, which is motivated by previous measures, but designed to be (1) symmetric, like \citet{cer-EtAl:2017:SemEval}, and (2) focused largely on meaning preservation, while accounting for fluency of the output at the very low and high levels\footnote{This is important to rule out nonsensical sentences and distinguish between clean and ``noisy'' paraphrases (\eg{} typos, non-native speech\ldots). We did not give annotators additional instruction specific to typos.}, like \citet{Goto2013OverviewOT}:
{
\begin{center}
\framebox{
\begin{minipage}{0.9\columnwidth}
How would you rate the similarity between the meaning of these two sentences?
\begin{enumerate}[itemsep=-4pt]
\setcounter{enumi}{-1}
\item The meaning is completely different or one of the sentences is meaningless
\item The topic is the same but the meaning is different
\item Some key information is different
\item The key information is the same but the details differ
\item Meaning is essentially equal but some expressions are unnatural
\item Meaning is essentially equal and the two sentences are well-formed English\footnote{Or the language of interest.}
\end{enumerate}
\end{minipage}
}
\end{center}
}
\subsubsection{Automatic Metrics}
\label{sec:auto_metrics}
Unfortunately, human evaluation is expensive, slow and sometimes difficult to obtain, for example in the case of low-resource languages. This makes automatic metrics that do not require human intervention appealing for experimental research.
This section describes 3 evaluation metrics commonly used as alternatives to human evaluation, in particular to evaluate translation models.%
\footnote{
Note that other metrics of similarity are certainly applicable within the overall framework of \S\ref{sec:human_judgement}, but we limit our examination in this paper to the three noted here.
}
\textbf{BLEU:} \cite{papineni-EtAl:2002:ACL} is an automatic metric based on n-gram precision coupled with a penalty for shorter sentences. It relies on exact word-level matches and therefore cannot detect synonyms or morphological variations.
\textbf{METEOR:} \cite{denkowski:lavie:meteor-wmt:2014} first estimates alignment between the two sentences and then computes unigram F-score (biased towards recall) weighted by a penalty for longer sentences. Importantly, METEOR uses stemming, synonymy and paraphrasing information to perform alignments. On the downside, it requires language specific resources.
\textbf{chrF:} \cite{popovic:2015:WMT} is based on the character $n$-gram F-score. In particular we will use the chrF2 score (based on the F2-score --- recall is given more importance), following the recommendations from \citet{popovic:2016:WMT}. By operating on a sub-word level, it can reflect the semantic similarity between different morphological inflections of one word (for instance), without requiring language-specific knowledge which makes it a good one-size-fits-all alternative.
Because multiple possible alternatives exist, it is important to know which is the best stand-in for human evaluation.
To elucidate this, we will compare these metrics to human judgment in terms of Pearson correlation coefficient on outputs resulting from a variety of attacks in \S\ref{sec:corr_human_auto}.
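To make the character $n$-gram idea behind chrF concrete, the following toy sketch computes a character $n$-gram F$_\beta$ score in the spirit of chrF2 (a simplified illustration only; the scores reported in this paper use the official sacreBLEU implementation, which differs in details such as smoothing):

```python
from collections import Counter

def chrf(hyp, ref, max_n=6, beta=2.0):
    """Toy character n-gram F-beta score. Whitespace is stripped and
    F-scores are averaged over n-gram orders 1..max_n; beta=2 weights
    recall twice as much as precision, as in chrF2."""
    hyp, ref = hyp.replace(" ", ""), ref.replace(" ", "")
    f_scores = []
    for n in range(1, max_n + 1):
        h = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        r = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        if not h or not r:
            continue  # sentence shorter than n characters
        overlap = sum((h & r).values())  # clipped n-gram matches
        prec, rec = overlap / sum(h.values()), overlap / sum(r.values())
        if prec + rec == 0:
            f_scores.append(0.0)
        else:
            f_scores.append((1 + beta**2) * prec * rec
                            / (beta**2 * prec + rec))
    return 100 * sum(f_scores) / len(f_scores) if f_scores else 0.0
```

Because matching happens at the character level, morphological variants of a word still share most of their $n$-grams, which is what lets chrF reward near-synonymous inflections without language-specific resources.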
\section{Gradient-Based Adversarial Attacks}
\label{sec:attacks}
In this section, we overview the adversarial attacks we will be considering in the rest of this paper.
\subsection{Attack Paradigm}
\label{sec:attack_paradigm}
\begin{table*}[!h]
\centering
{
\begin{tabular}{ll}
\hline\hline
Original & {\bf Pourquoi} faire cela ? \\
English gloss & {\bf Why} do this? \\
\unconstrained{} & {\color{red}\bf construisant} {\color{orange}(English: building)} faire cela ? \\
\knn{} & {\color{red}\bf interrogez} {\color{orange}(English: interrogate)} faire cela ? \\
\unkonly{} & {\color{red}\bf Puorquoi} {\color{orange}(typo)} faire cela ?\\ \hline\hline
Original& Si seulement je pouvais me muscler {\bf aussi} rapidement.\\
English gloss& If only I could build my muscle {\bf this} fast.\\
\unconstrained{} & Si seulement je pouvais me muscler {\color{red}\bf etc} rapidement.\\
\knn{} & Si seulement je pouvais me muscler {\color{red}\bf plsu} {\color{orange}(typo for ``more'')} rapidement.\\
\unkonly{} & Si seulement je pouvais me muscler {\color{red}\bf asusi} {\color{orange}(typo)} rapidement.\\\hline\hline
\end{tabular}
}
\caption{\label{tab:qual_constraints} Examples of different adversarial inputs. The substituted word is highlighted.}
\end{table*}
We perform gradient-based attacks that replace one word in the sentence so as to maximize an adversarial loss function $\Ladv$, similar to the substitution attacks proposed in \cite{ebrahimi2018hotflip}.
\subsubsection{General Approach}
Precisely, for a word-based translation model $M$%
\footnote{Note that this formulation is also valid for character-based models (see \citet{Ebrahimi2018OnAE}) and subword-based models. For subword-based models, additional difficulty would be introduced due to changes to the input resulting in different subword segmentations. This poses an interesting challenge that is beyond the scope of the current work.}, and given an input sentence $w_1,\ldots,w_n$, we find the position $i^*$ and word $w^*$ satisfying the following optimization problem:
\begin{equation}\label{eq:adv_optim}
\argmax_{1\leq i\leq n, \hat w\in \mathcal V}\Ladv(w_1,\ldots,w_{i-1},\hat w, w_{i+1},\ldots,w_n)
\end{equation}
\noindent where $\Ladv$ is a differentiable function which represents our adversarial objective. Using the first order approximation of $\Ladv$ around the original word vectors $\w_1,\ldots,\w_n$\footnote{More generally we will use the bold $\w$ when talking about the embedding vector of word $w$}, this can be shown to be equivalent to optimizing
\begin{equation}
\argmax_{1\leq i\leq n, \hat w\in \mathcal V}\left[\hat\w-{\w}_i\right]^\intercal\nabla_{\w_i}\Ladv
\end{equation}
The above optimization problem can be solved by brute-force in $\bigO{n\vert\mathcal V\vert}$ space complexity, whereas the time complexity is bottlenecked by a $\vert\mathcal V\vert\times d$ times $n\times d$ matrix multiplication, which is not more computationally expensive than computing logits during the forward pass of the model. Overall, this naive approach is sufficiently fast to be conducive to adversarial training.
We also found that the attacks benefited from normalizing the gradient by taking its sign.
Extending this approach to finding the optimal perturbations for more than 1 substitution would require exhaustively searching over all possible combinations.
However, previous work \cite{Ebrahimi2018OnAE} suggests that greedy search is a good enough approximation.
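The single-substitution search can be sketched as follows (a brute-force illustration with toy embeddings, not the authors' exact code; in practice the inner loops collapse into the single $\vert\mathcal V\vert\times d$ by $d\times n$ matrix multiplication mentioned above):

```python
def best_substitution(emb, grads, word_ids, use_sign=True):
    """First-order word substitution attack. emb: list of |V| embedding
    vectors; grads: gradient of L_adv w.r.t. each of the n input word
    vectors; word_ids: the n input word indices. Returns the position
    i* and word w* maximizing [w_hat - w_i] . grad_i."""
    sgn = lambda v: [(-1.0 if x < 0 else 1.0 if x > 0 else 0.0) for x in v]
    best = (float("-inf"), None, None)
    for i, g in enumerate(grads):
        if use_sign:
            g = sgn(g)  # normalize the gradient by taking its sign
        w_i = emb[word_ids[i]]
        for w_hat, e in enumerate(emb):
            score = sum(gj * (ej - wj) for gj, ej, wj in zip(g, e, w_i))
            if score > best[0]:
                best = (score, i, w_hat)
    return best[1], best[2]
```

Restricting the inner loop over `emb` to a word's 10 nearest neighbors yields the \knn{} variant of the attack described in the next section.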
\subsubsection{The Adversarial Loss $\Ladv$}
We want to find an adversarial input $\hat x$ such that, assuming that the model has produced the correct output $y_1,\ldots,y_{t-1}$ up to step $t-1$ during decoding, the probability that the model makes an error at the next step $t$ is maximized.
In the log-semiring, this translates into the following loss function:
\begin{equation}
\Ladv(\hat x, y)=\sum_{t=1}^{\vert y\vert}\log(1-p(y_t\mid \hat x, y_1,\ldots,y_{t-1}))
\end{equation}
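This loss rewards perturbations that lower the probability of every gold token; a minimal sketch (the per-step gold-token probabilities here are illustrative inputs, not model outputs):

```python
import math

def adv_loss(gold_probs):
    """L_adv(x_hat, y) = sum_t log(1 - p(y_t | x_hat, y_<t)), where
    gold_probs[t] is the model's probability of the correct token y_t
    given the perturbed source and the gold prefix y_1..y_{t-1}."""
    return sum(math.log(1.0 - p) for p in gold_probs)
```

A perturbation that drives the gold-token probabilities down makes $\Ladv$ larger (closer to 0), i.e.\ makes an error at some step more likely.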
\subsection{Enforcing Semantically Similar Adversarial Inputs}
\label{sec:constraints}
In contrast to previous methods, which don't consider meaning preservation, we
propose simple modifications of the approach presented in \S\ref{sec:attack_paradigm} to create adversarial perturbations at the word level that are more likely to preserve meaning.
The basic idea is to restrict the possible word substitutions to similar words. We compare two sets of constraints:
\textbf{\knn:} This constraint enforces that the word be replaced only with one of its 10 nearest neighbors in the source embedding space. This has two effects: first, the replacement will be likely semantically related to the original word (if words close in the embedding space are indeed semantically related, as hinted by Table~\ref{tab:qual_constraints}). Second, it ensures that the replacement's word vector is close enough to the original word vector that the first order assumption is more likely to be satisfied.
\textbf{\unkonly:} This constraint requires that the substituted words must be obtained by swapping characters. Word internal character swaps have been shown to not affect human readers greatly \cite{mccusker1981word}, hence making them likely to be meaning-preserving. Moreover we add the additional constraint that the substitution must not be in the vocabulary, which will likely be particularly meaning-destroying on the target side for the word-based models we test here. In such cases where word-internal character swaps are not possible or can't produce \ac{oov} words, we resort to the naive strategy of repeating the last character of the word. The exact procedure used to produce this kind of perturbations is described in Appendix \ref{sec:gen_char_swap}. Note that for a word-based model, every \ac{oov} will look the same (a special \unk{} token), however the choice of \ac{oov} will still have an influence on the output of the model because we use unk-replacement.
In contrast, we refer to the base attack without constraints as {\bf \unconstrained} henceforth. Table \ref{tab:qual_constraints} gives qualitative examples of the kind of perturbations generated under the different constraints.
For subword-based models, we apply the same procedures at the subword-level on the original segmentation. We then de-segment and re-segment the resulting sentence (because changes at the subword or character levels are likely to change the segmentation of the resulting sentence).
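A \unkonly{}-style perturbation can be sketched as follows (an approximation of the procedure detailed in the appendix, not the exact implementation; the vocabulary is assumed to be given as a set of strings):

```python
def char_swap(word, vocab):
    """Swap two adjacent word-internal characters (first and last
    characters stay fixed) so that the result falls out of vocabulary;
    if no such swap produces an OOV word, fall back to repeating the
    last character."""
    for i in range(1, len(word) - 2):
        cand = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        if cand != word and cand not in vocab:
            return cand
    return word + word[-1]  # fallback, e.g. for very short words
```

For instance, with ``Pourquoi'' in the vocabulary this yields ``Puorquoi'', the typo shown in Table \ref{tab:qual_constraints}.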
\section{Experiments}
\label{sec:experiments}
Our experiments serve two purposes.
First, we examine our proposed framework of evaluating adversarial attacks (\S\ref{sec:eval_adv_attacks}), and also elucidate which automatic metrics correlate better with human judgment for the purpose of evaluating adversarial attacks (\S\ref{sec:corr_human_auto}). Second, we use this evaluation framework to compare various adversarial attacks and demonstrate that adversarial attacks that are explicitly constrained to preserve meaning receive better assessment scores (\S\ref{sec:attack_results}).
\subsection{Experimental setting}
\textbf{Data:}
Following previous work on adversarial examples for \ac{seq2seq} models \cite{belinkov2018synthetic,Ebrahimi2018OnAE}, we perform all experiments on the IWSLT2016 dataset \cite{Cettolo2016TheI2} in the \{French,German,Czech\}$\rightarrow$English directions (\fren{}, \deen{} and \csen{}). We compile all previous IWSLT test sets before 2015 as validation data, and keep the 2015 and 2016 test sets as test data. The data is tokenized with the Moses tokenizer \cite{Koehn:2007:MOS:1557769.1557821}. The exact data statistics can be found in Appendix \ref{sec:iwslt2016_stats}.
\textbf{\ac{mt} Models:}
We perform experiments with two common \ac{nmt} models. The first is an LSTM based encoder-decoder architecture with attention~\citep{luong-pham-manning:2015:EMNLP}. It uses 2-layer encoders and decoders, and dot-product attention. We set the word embedding dimension to 300 and all others to 500.
The second model is a self-attentional Transformer \cite{vaswani2017attention}, with 6 1024-dimensional encoder and decoder layers and 512 dimensional word embeddings. Both the models are trained with Adam \cite{Kingma2014Adam}, dropout \cite{srivastava2014dropout} of probability 0.3 and label smoothing \cite{Szegedy2016RethinkingTI} with value 0.1. We experiment with both word based models (vocabulary size fixed at 40k) and subword based models (BPE \cite{sennrich-haddow-birch:2016:P16-12} with 30k operations). For word-based models, we perform \unk{} replacement, replacing \unk{} tokens in the translated sentences with the source words with the highest attention value during inference. The full experimental setup and source code are available at \codeurl{}.
\textbf{Automatic Metric Implementations:} To evaluate both sentence and corpus level BLEU score, we first de-tokenize the output and use {\tt sacreBLEU}\footnote{\url{https://github.com/mjpost/sacreBLEU}} \cite{post2018call} with its internal {\tt intl} tokenization, to keep BLEU scores agnostic to tokenization. We compute METEOR using the official implementation\footnote{\url{http://www.cs.cmu.edu/~alavie/METEOR/}}. ChrF is reported with the {\tt sacreBLEU} implementation on detokenized text with default parameters. A toolkit implementing the evaluation framework described in \S\ref{sec:adv_tradeoff} for these metrics is released at \teapoturl{}.
\begin{table*}[ht]
\centering
\begin{tabular}{clccccccc}
& & \multicolumn{3}{c}{LSTM} &\ & \multicolumn{3}{c}{Transformer} \\\hline\hline
& Language pair & \csen & \deen & \fren&\ & \csen & \deen & \fren\\\cline{3-9}
\multirow{8}{*}{Word-based}& & \multicolumn{3}{c}{Target RDChrF} & & \multicolumn{3}{c}{Target RDChrF}\\\cline{3-5}\cline{7-9}
& Original chrF & 45.68 & 49.43 & 57.49 & & 47.66 & 51.08 & 58.04\\
&\unconstrained{} & 25.38 & 25.54 & 25.59 & & 25.24 & 25.00 & 24.68\\
&\unkonly{} & 24.11 & 24.94 & 23.60 & & 21.59 & 23.23 & 21.75\\
&\knn{} & 15.00 & 15.59 & 15.22& & 20.74 & 19.97 & 18.59 \\
&&\multicolumn{3}{c}{Source \goodscore{}}&&\multicolumn{3}{c}{Source \goodscore{}}\\\cline{3-5}\cline{7-9}
&\unconstrained{} & 70.14 & 72.39 & 74.29 & & 69.03 & 71.93 & 73.23\\
&\unkonly{} & 82.65 & 84.40 & 86.62 & & 84.13 & 85.97 & 87.02\\
&\knn{} & 78.08 & 78.11 & 77.62 & & 74.94 & 77.92 & 77.88\\\hline
\multirow{8}{*}{Subword-based}& & \multicolumn{3}{c}{Target RDChrF} & & \multicolumn{3}{c}{Target RDChrF}\\\cline{3-5}\cline{7-9}
& Original chrF &48.30 & 52.42 & 59.08 & & 49.70 & 54.01 & 59.65\\
&\unconstrained{} & 25.79 & 26.03 & 26.96 & & 23.97 & 25.07 & 25.28\\
&\unkonly{} & 18.65 & 19.15 & 19.75 & & 16.98 & 18.38 & 17.85\\
&\knn{} & 15.00 & 16.26 & 17.12 & & 19.02 & 18.58 & 18.63\\
&&\multicolumn{3}{c}{Source \goodscore{}}&&\multicolumn{3}{c}{Source \goodscore{}}\\\cline{3-5}\cline{7-9}
&\unconstrained{} & 69.32 & 72.12 & 73.57 & & 68.66 & 71.51 & 72.65\\
&\unkonly{} & 85.84 & 87.46 & 87.98 & & 85.79 & 87.07 & 87.99\\
&\knn{} & 76.17 & 77.74 & 78.03 & & 73.05 & 75.91 & 76.54\\
\hline
\end{tabular}
\caption{\label{tab:all_attacks_results} Target RDchrF and source \goodscore{} scores for all the attacks on all our models (word- and subword-based LSTM and Transformer).}
\end{table*}
\subsection{Correlation of Automatic Metrics with Human Judgment}
\label{sec:corr_human_auto}
We first examine which of the automatic metrics listed in \S\ref{sec:eval_metrics} correlates most with human judgment for our adversarial attacks. For this experiment, we restrict the scope to the case of the LSTM model on \fren{}. For the French side, we randomly select 900 sentence pairs $(x,\hat x)$ from the validation set, 300 for each of the \unconstrained{}, \knn{} and \unkonly{} constraints. To vary the level of perturbation, the 300 pairs contain an equal amount of perturbed input obtained by substituting 1, 2 and 3 words.
On the English side, we select 900 pairs of reference translations and translations of adversarial input $(y, \hat y_M)$ with the same distribution of attacks as the source side, as well as 300 $(y, y_M)$ pairs (to include translations from original inputs). This amounts to 1,200 sentence pairs in the target side.
These sentences are sent to English and French speaking annotators to be rated according to the guidelines described in \S\ref{sec:human_judgement}. Each sample (a pair of sentences) is rated by two independent evaluators. If the two ratings differ, the sample is sent to a third rater (an auditor and subject matter expert) who makes the final decision.
\begin{table}[!h]
\begin{tabular}{lccc}
Language & BLEU & METEOR & chrF\\\hline\hline
French&0.415&0.440&\bf0.586$^*$\\
English&0.357&0.478$^*$&\bf0.497\\\hline
\end{tabular}
\caption{\label{tab:human_eval_results} Correlation of automatic metrics to human judgment of adversarial source and target sentences. ``$^*$'' indicates that the correlation is significantly better than the next-best one.}
\end{table}
Finally, we compare the human results to each automatic metric with Pearson's correlation coefficient. The correlations are reported in Table \ref{tab:human_eval_results}. As evidenced by the results, \goodscore{} exhibits higher correlation with human judgment, followed by METEOR and BLEU. This is true both on the source side ($x$ vs $\hat x$) and on the target side ($y$ vs $\hat y_M$). We evaluate the statistical significance of this result using a paired bootstrap test for $p<0.01$. Notably, we find that chrF is significantly better than METEOR in French but not in English. This is not too unexpected because METEOR has access to more language-dependent resources in English (specifically synonym information) and thereby can make more informed matches of these synonymous words and phrases. Moreover the French source side contains more ``character-level'' errors (from \unkonly{} attacks) which are not picked up well by word-based metrics like BLEU and METEOR. For a breakdown of the correlation coefficients according to number of perturbations and type of constraints, we refer to Appendix \ref{sec:human_breakdown}.
Thus, in the following, we report attack results both in terms of \goodscore{} in the source ($\ssrc$) and \ac{rdb} in the target ($d_{\text{tgt}}$).
\begin{figure*}[!t]
\centering
\includegraphics[width=0.8\textwidth]{figures/chrf_plots.png}
\caption{\label{fig:chrf_plots} Graphical representation of the results in Table \ref{tab:all_attacks_results} for word-based models. High source \goodscore{} and target \ac{rdb} (upper-right corner) indicates a good attack.}
\end{figure*}
\subsection{Attack Results}
\label{sec:attack_results}
We can now compare attacks under the three constraints \unconstrained{}, \knn{} and \unkonly{} and draw conclusions on their capacity to preserve meaning in the source and destroy it in the target. Attacks are conducted on the validation set using the approach described in \S\ref{sec:attack_paradigm} with 3 substitutions (this means that each adversarial input is at edit distance at most 3 from the original input). Results (on a scale of 0 to 100 for readability) are reported in Table \ref{tab:all_attacks_results} for both word- and subword- based LSTM and Transformer models. To give a better idea of how the different variables (language pair, model, attack) affect performance, we give a graphical representation of these same results in Figure \ref{fig:chrf_plots} for the word-based models.
The rest of this section discusses the implication of these results.
\textbf{Source \goodscore{} Highlights the Effect of Adding Constraints:}
Comparing the \knn{} and \unkonly{} rows to \unconstrained{} in the ``source'' sections of Table \ref{tab:all_attacks_results} clearly shows that constrained attacks have a positive effect on meaning preservation. Beyond validating our assumptions from \S\ref{sec:constraints}, this shows that source \goodscore{} is useful to carry out the comparison in the first place\footnote{It can be argued that using \goodscore{} gives an advantage to \unkonly{} over \knn{} for source preservation (as opposed to METEOR for example). We find that this is the case for Czech and German (source METEOR is higher for \knn{}) but not French. Moreover we find (see Appendix \ref{sec:human_breakdown}) that \goodscore{} correlates better with human judgment even for \knn{}.}. To give a point of reference, results from the manual evaluation carried out in \S\ref{sec:corr_human_auto} show that $90\%$ of the French sentence pairs to which humans gave a score of 4 or 5 in semantic similarity have a \goodscore{} $>78$.
\begin{table}[!h]
{
\small
\setlength\tabcolsep{2pt}
\begin{tabular}{ll}
\hline\hline
\multicolumn{2}{c}{Successful attack}\\
\multicolumn{2}{c}{(source \goodscore{} $=80.89$, target \ac{rdb} $=84.06$)}\\\hline
Original & Ils le r\'{e}investissent directement en engageant\\
& plus de proc\`{e}s.\\
Adv. src& {\color{red}Ilss} le r\'{e}investissent {\color{red}dierctement} en {\color{red}engagaent}\\
&plus de proc\`{e}s.\\
Ref. & They plow it right back into filing more troll \\
&lawsuits.\\
Base output& They direct it directly by engaging more cases.\\
Adv. output& .. de plus.\\\hline\hline
\multicolumn{2}{c}{Unsuccessful attack}\\
\multicolumn{2}{c}{(source \goodscore{} $=54.46$, target \ac{rdb} $=0.00$)}\\\hline
Original & C'\'{e}tait en Juillet 1969.\\
Adv. src & C'{\color{red}\'{e}tiat} en {\color{red}Jiullet} 1969.\\
Ref. & This is from July, 1969.\\
Base output & This was in July 1969.\\
Adv. output & This is. in 1969.\\\hline\hline
\end{tabular}
}
\caption{\label{tab:qual_unkonly_attack} Example of \unkonly{} attacks on the \fren{} LSTM. The first example is a successful attack (high source \goodscore{} and target \ac{rdb}) whereas the second is not.}
\end{table}
\textbf{Different Architectures are not Equal in the Face of Adversity:}
Inspection of the target-side results yields several interesting observations. First, the high \ac{rdb} of \unkonly{} for word-based models is yet another indication of their known shortcomings when presented with words out of their training vocabulary, even with \unk{}-replacement.
Second, and perhaps more interestingly, Transformer models appear to be less robust to small embedding perturbations (\knn{} attacks) compared to LSTMs. Although the exploration of the exact reasons for this phenomenon is beyond the scope of this work, this is a good example that \ac{rdb} can shed light on the different behavior of different architectures when confronted with adversarial input.
Overall, we find that the \unkonly{} constraint is the only one that consistently produces attacks with $>1$ average success (as defined in Section \ref{sec:adv_tradeoff}) according to Table \ref{tab:all_attacks_results}. Table \ref{tab:qual_unkonly_attack} contains two qualitative examples of this attack on the LSTM model in \fren{}.
\section{Adversarial Training with Meaning-Preserving Attacks}
\label{sec:adv_train}
\subsection{Adversarial Training}
Adversarial training \cite{Goodfellow2014ExplainingAH} augments the training data with adversarial examples. Formally, the \ac{nll} objective on a sample $x, y$, $\mathcal{L}(x,y)=NLL(x,y)$, is replaced with an interpolation of the \ac{nll} of the original sample $x,y$ and that of an adversarial sample $\hat x, y$:
\begin{equation}
\mathcal{L}'(x,y)=(1-\alpha)NLL(x,y) + \alpha NLL(\hat x,y)
\end{equation}
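As a sketch, the interpolated objective is a thin wrapper around any \ac{nll} routine; the callable \texttt{nll} below is a stand-in for the model's loss function:

```python
def adversarial_loss(nll, x, y, x_adv, alpha):
    """Interpolate the clean and adversarial NLL terms:
    (1 - alpha) * NLL(x, y) + alpha * NLL(x_adv, y)."""
    return (1.0 - alpha) * nll(x, y) + alpha * nll(x_adv, y)
```

Setting $\alpha=1.0$ recovers training on the perturbed input only, and $\alpha=0$ recovers standard training.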
\citet{Ebrahimi2018OnAE} suggest that while adversarial training improves robustness to adversarial attacks, it can be detrimental to test performance on non-adversarial input.
We investigate whether this is still the case when adversarial attacks are largely meaning-preserving.
In our experiments, we generate $\hat x$ by applying 3 perturbations on the fly at each training step. To maintain training speed we do not solve Equation (\ref{eq:adv_optim}) iteratively but in one shot by replacing the argmax by top-3. Although this is less exact than iterating, this makes adversarial training time less than $2\times$ slower than normal training. We perform adversarial training with perturbations without constraints (\unconstrained{}-adv) and with the \unkonly{} constraint (\unkonly{}-adv). All experiments are conducted with the word-based LSTM model.
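The one-shot variant can be sketched as follows; here \texttt{score} and \texttt{candidates} are hypothetical stand-ins for the gradient-based objective of Equation (\ref{eq:adv_optim}) and the constraint set (\knn{} or \unkonly{}), respectively:

```python
def one_shot_perturb(words, score, candidates, k=3):
    """Apply the k highest-scoring word substitutions simultaneously,
    instead of k greedy argmax iterations.  score(i, w) estimates the
    attack gain of putting candidate w at position i; candidates(i)
    enumerates the allowed replacements for position i."""
    scored = [(score(i, w), i, w)
              for i in range(len(words))
              for w in candidates(i)]
    scored.sort(reverse=True)
    out, used = list(words), set()
    for s, i, w in scored:
        if i in used:
            continue  # at most one substitution per position
        out[i] = w
        used.add(i)
        if len(used) == k:
            break
    return out
```

This trades some attack strength for speed, since the $k$ substitutions are scored against the original sentence rather than re-scored after each edit.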
\subsection{Results}
\label{sec:adv_train_experiments}
Test performance on non-adversarial input is reported in Table \ref{tab:adv_train_bleu_scores}. In keeping with the rest of the paper, we primarily report \goodscore{} results, but also show the standard BLEU as well.
We observe that when $\alpha=1.0$, \ie{}\ the model only sees the perturbed input during training\footnote{This setting is reminiscent of word dropout \cite{iyyer-EtAl:2015:ACL-IJCNLP}.}, the \unconstrained{}-adv model suffers a drop in test performance, whereas \unkonly{}-adv's performance is on par with the original. This is likely attributable to the spurious training samples $(\hat x, y)$, introduced by the lack of constraint, where $y$ is not an acceptable translation of $\hat x$. This effect disappears when $\alpha=0.5$ because the model sees the original samples as well.
Not unexpectedly, Table \ref{tab:adv_train_robustness} indicates that \unkonly{}-adv is more robust to \unkonly{} constrained attacks for both values of $\alpha$, with $1.0$ giving the best results. On the other hand, \unconstrained{}-adv is similarly or more vulnerable to these attacks than the baseline.
Hence, we can safely conclude that adversarial training with \unkonly{} attacks improves robustness while not impacting test performance as much as unconstrained attacks.
\begin{table}[t]
\centering
\begin{tabular}{lccc}
Language pair & \csen & \deen & \fren\\\hline\hline
\multirow{2}{*}{Base} & 44.21 & 49.30 & 55.67 \\
& \small (22.89) & \small (28.61) & \small (35.28)\\\cline{2-4}
& \multicolumn{3}{c}{$\alpha=1.0$} \\\cline{2-4}
\multirow{2}{*}{\unconstrained{}{}-adv} & 41.38 & 46.15 & 53.39 \\
& \small (21.51) & \small (27.06) & \small (33.96)\\
\multirow{2}{*}{\unkonly{}-adv} & 43.74 & 48.85 & 55.60 \\
& \small (23.00) & \small (28.45) & \small (35.33)\\\cline{2-4}
& \multicolumn{3}{c}{$\alpha=0.5$} \\\cline{2-4}
\multirow{2}{*}{\unconstrained{}{}-adv} & 43.68 & 48.60 & 55.55 \\
& \small (22.93) & \small (28.30) & \small (35.25)\\
\multirow{2}{*}{\unkonly{}-adv} & 44.57 & 49.14 & 55.88 \\
& \small (23.66) & \small (28.66) & \small (35.63)\\
\hline
\end{tabular}
\caption{\label{tab:adv_train_bleu_scores}\goodscore{} (BLEU) scores on the original test set before/after adversarial training of the word-based LSTM model.}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{lccc}
Language pair & \csen & \deen & \fren\\\hline\hline
Base & 24.11 & 24.94 & 23.60 \\\cline{2-4}
& \multicolumn{3}{c}{$\alpha=1.0$} \\\cline{2-4}
\unconstrained{}-adv & 25.99 & 26.24 & 25.67 \\
\unkonly{}-adv & 16.46 & 17.19 & 15.72 \\\cline{2-4}
& \multicolumn{3}{c}{$\alpha=0.5$} \\\cline{2-4}
\unconstrained{}-adv & 26.52 & 27.26 & 24.92 \\
\unkonly{}-adv & 20.41 & 20.24 & 16.08 \\
\hline
\end{tabular}
\caption{\label{tab:adv_train_robustness} Robustness to \unkonly{} attacks on the validation set with/without adversarial training (\ac{rdb}). Lower is better.}
\end{table}
\section{Related work}
\label{sec:related}
Following seminal work on adversarial attacks by \citet{Szegedy2013IntriguingPO},
\citet{Goodfellow2014ExplainingAH} introduced gradient-based attacks and adversarial training. Since then, a variety of attack \cite{MoosaviDezfooli2016DeepFoolAS} and defense \cite{Ciss2017ParsevalNI,Kolter2017ProvableDA} mechanisms have been proposed.
Adversarial examples for \ac{nlp} specifically have seen attacks on sentiment \cite{papernot2016crafting,samanta2017towards,ebrahimi2018hotflip}, malware \cite{grosse2016adversarial}, gender \cite{reddy-knight:2016:NLPandCSS} or toxicity \cite{hosseini2017deceiving} classification, to name a few.
In \ac{mt}, methods have been proposed to attack word-based \cite{zhao2018generating,cheng2018seq2sick} and character-based \cite{belinkov2018synthetic,Ebrahimi2018OnAE} models. However, these works side-step the question of meaning preservation in the source: they mostly focus on target-side evaluation. Finally, there is work centered around meaning-preserving adversarial attacks for \ac{nlp} via paraphrase generation \cite{iyyer-EtAl:2018:N18-1} or rule-based approaches \cite{jia-liang:2017:EMNLP2017,ribeiro-singh-guestrin:2018:Long,naik-EtAl:2018:C18-1,alzantot-EtAl:2018:EMNLP}. However, the proposed attacks are highly engineered and focused on English.
\section{Conclusion}
This paper highlights the importance of performing \emph{meaning-preserving} adversarial perturbations for \ac{nlp} models (with a focus on \ac{seq2seq}).
We proposed a general evaluation framework for adversarial perturbations and compared various automatic metrics as proxies for human judgment to instantiate this framework.
We then confirmed that, in the context of \ac{mt}, ``naive'' attacks do not preserve meaning in general, and proposed alternatives to remedy this issue.
Finally, we have shown the utility of adversarial training in this paradigm.
We hope that this helps future work in this area of research to evaluate meaning preservation more consistently.
\section*{Acknowledgments}
The authors would like to extend their thanks to members of the LATTE team at Facebook and Neulab at Carnegie Mellon University for valuable discussions, as well as the anonymous reviewers for their insightful feedback. This research was partially funded by Facebook.
\section{Supplemental Material}
\label{sec:supplemental}
\subsection{Generating \ac{oov} Replacements with Internal Character Swaps}
\label{sec:gen_char_swap}
We use the following snippet to produce an \ac{oov} word from an existing word:
\begin{lstlisting}[language=Python]
import random

def make_oov(
    word,
    vocab,
    max_scrambling,
):
    """Modify a word to make it OOV
    (while keeping the meaning)"""
    # If the word has >3 letters,
    # try scrambling them
    L = len(word)
    if L > 3:
        # For a fixed number of steps
        for _ in range(max_scrambling):
            # Swap two adjacent letters
            # in the middle of the word
            # (built in one expression so
            # that all indices refer to
            # the pre-swap word)
            pos = random.randint(1, L - 3)
            word = (word[:pos]
                    + word[pos + 1]
                    + word[pos]
                    + word[pos + 2:])
            # If we got an OOV already,
            # just return it
            if word not in vocab:
                return word
    # If nothing worked, or the word is
    # too short for scrambling, just
    # repeat the last letter ad nauseam
    char = word[-1]
    while word in vocab:
        word = word + char
    return word
\end{lstlisting}
\subsection{IWSLT2016 Dataset}
\label{sec:iwslt2016_stats}
See Table \ref{tab:iwslt2016_stats} for statistics on the size of the IWSLT2016 corpus used in our experiments.
\begin{table}[!h]
\centering
\begin{tabular}{lccc}
& \#train & \#valid & \#test \\ \hline
\fren & 220.4k & 6,824 & 2,213 \\
\deen & 196.9k & 11,825 & 2,213 \\
\csen & 114.4k & 5,716 & 2,213\\
\hline
\end{tabular}
\caption{\label{tab:iwslt2016_stats}IWSLT2016 data statistics.
}
\end{table}
\subsection{Breakdown of Correlation with Human Judgement}
\label{sec:human_breakdown}
We provide a breakdown of the correlation coefficients of automatic metrics with human judgment for source-side meaning preservation, both in terms of the number of perturbed words (Table \ref{tab:corr_edits_breakdown}) and of the constraint (Table \ref{tab:corr_constraints_breakdown}). While these coefficients are computed on a much smaller sample size, and their differences are not all statistically significant at $p<0.01$, they exhibit the same trend as the results from Table \ref{tab:human_eval_results} (BLEU $<$ METEOR $<$ chrF). In particular, Table \ref{tab:corr_edits_breakdown} shows that the good correlation of chrF with human judgment is not only due to its ability to distinguish between different numbers of edits.
\begin{table}
\begin{tabular}{cccc}
\# edits & BLEU & METEOR & chrF\\\hline\hline
1&0.351&0.352&\bf0.486$^*$\\
2&0.403&0.424&\bf0.588$^*$\\
3&0.334&0.393&\bf0.560$^*$\\
\hline
\end{tabular}
\caption{\label{tab:corr_edits_breakdown} Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by number of perturbed words. ``$^*$'' indicates that the correlation is significantly better than the next-best one.}
\end{table}
\begin{table}
\begin{tabular}{cccc}
Constraint & BLEU & METEOR & chrF\\\hline\hline
\unconstrained{} &0.274 & 0.572 & \bf0.599\\
\unkonly{}&0.274 & 0.319 & \bf0.383\\
\knn{} & 0.534 & 0.584 & \bf0.606\\
\hline
\end{tabular}
\caption{\label{tab:corr_constraints_breakdown} Correlation of automatic metrics to human judgment of semantic similarity between original and adversarial source sentences, broken down by type of constraint on the perturbation.}
\end{table} |
3,212,635,537,514 | arxiv | \section{Introduction}
Following the discovery of topological insulators~\cite{RevModPhys.82.3045,RevModPhys.83.1057}, the search for novel symmetry-protected topological phases of quantum matter has become one of the central themes in condensed matter physics. Topological crystalline insulators (TCIs)~\cite{PhysRevLett.106.106802} are such a novel topological phase protected by crystalline symmetries. The gaplessness of Dirac surface states in TCIs usually depends on geometries of surface terminations crucially, which makes them more fragile than those in conventional topological insulators protected by time-reversal symmetry. Recently, the pursuit of topological phases has been extended to non-Hermitian systems~\cite{PhysRevB.84.205128,alvarez2018topological,PhysRevX.8.031079,AdvPhys2020,PhysRevX.9.041015}. Non-Hermitian systems have unique topological properties beyond the Hermitian framework owing to complex-valued energy spectra. One of the most salient characteristics of non-Hermitian systems is the emergence of exceptional points~\cite{moiseyev2011non,berry2004physics, Heiss_2012}, where pairs of eigenvalues and the corresponding eigenvectors coalesce. The exceptional point introduces several fascinating topological phenomena~\cite{RevModPhys.93.015005}, such as the exceptional rings~\cite{zhen2015spawning,PhysRevLett.120.146402,PhysRevLett.118.045701,cerjan2019experimental}, the bulk Fermi arcs and half-integer topological charges~\cite{Zhou1009}, and also the non-Hermitian skin effect as well~\cite{PhysRevLett.77.570,PhysRevLett.116.133903,PhysRevLett.121.086803,PhysRevLett.121.026808,PhysRevLett.121.136802,PhysRevLett.124.086801,PhysRevLett.124.250402}.
Interestingly, the bulk-edge correspondence, one of the essential concepts in topological materials, has been proved to be subtle in non-Hermitian systems~\cite{Xiong_2018,PhysRevLett.121.026808,PhysRevLett.121.086803,PhysRevB.99.155431,PhysRevB.99.201103,PhysRevB.99.081103,PhysRevA.99.052118,PhysRevB.100.054105,PhysRevLett.124.056802,PhysRevB.100.161105,PhysRevLett.123.066404,PhysRevB.101.195147}.
This is partially due to the appearance of abnormal geometric structures (points/rings/disks) in complex energy space, at which eigenstates and eigenvalues can coalesce. Although many unique non-Hermitian properties of topological systems have been revealed in this context, the non-Hermitian properties of TCIs with various geometries, especially various edge terminations, are still worth exploring.
As a prototype system, we use the two-dimensional (2D) honeycomb lattice decorated with the modulated Kekul\'{e} hopping texture, as displayed in Fig.~\ref{fig1}(a). Due to the asymmetric hopping texture, each unit cell in the Kekul\'{e} lattice consists of six sites, in contrast to the ideal honeycomb lattice. The hopping texture in the Kekul\'{e} lattice couples the valley degrees of freedom and gaps out the Dirac cones~\cite{PhysRevB.62.2806,PhysRevLett.98.186809,PhysRevB.80.233409,CHEIANOV20091499,Gamayun_2018}. The Kekul\'{e} texture can be realized experimentally for various solid-state materials, such as in the molecular graphene \cite{gomes2012designer} and Lithium-intercalated graphene~\cite{PhysRevLett.126.206804}. Remarkably, the Kekul\'{e}-texture-modulated honeycomb lattice has been recognized as a 2D TCI when the intercellular hopping is greater than the intracellular hopping~\cite{kariyado2017topological,PhysRevLett.122.086804,PSJ.86.123707}, and the gapless topological edge states are protected by mirror symmetry $M_y$ and chiral symmetry. Very recently, the topological edge states are observed in the artificial Kekul\'{e} lattice by positioning the CO molecules on Cu(111) surface~\cite{PhysRevLett.124.236404}. Moreover, the TCI on the Kekul\'{e} lattice exhibits higher-order topology~\cite{noh2018topological,PhysRevLett.122.086804,PhysRevLett.123.053902,JPSJ.88.104703,PhysRevB.101.241109} and the corner states are detected in photonic systems~\cite{noh2018topological}, electrical circuits~\cite{PhysRevLett.123.053902}, and acoustic systems~\cite{PhysRevLett.125.255502}. The Kekul\'{e} lattice has attracted intensive research interest, and the previous studies focus only on the Hermitian case. The study on the non-Hermitian Kekul\'{e} lattice is still lacking.
\begin{figure}[t]
\includegraphics[width=8cm]{fig1.pdf} \caption{(a) Schematic of the non-Hermitian honeycomb lattice with a Kekul\'{e} bond texture. The thick solid (thin dashed) black lines represent the intracellular (intercellular) bonds. The sublattices in the unit cell are indexed as $1,2,...,6$, accordingly. The inset shows the two types of gain and loss configurations, the $PT$-symmetric type I and the $PT$-asymmetric type II. The red filled and the unfilled circles in the hexagons denote the sites with gain $i\gamma$ and the sites with loss $-i\gamma$, respectively. The red arrows show the enlarged lattice vectors $\vec{a}_{1}$ and $\vec{a}_{2}$. (b) Reduced Brillouin zone of the Kekul\'{e} lattice. The gray hexagon represents the original Brillouin zone of the honeycomb lattice. (c) A Kekul\'{e} lattice showing two types of boundaries compatible with the unit cell defined in (a). The armchair boundary is along the $x$-direction and the molecular-zigzag boundary is along the $y$-direction.}
\label{fig1}
\end{figure}
In this work, we study the Kekul\'{e} lattice subject to balanced gain and loss, i.e., the gain and the loss have the same amplitude. In the Hermitian case, the topological edge states of the TCI on the Kekul\'{e} lattice are sensitive to the edge geometries. In particular, for the molecular-zigzag-terminated Kekul\'{e} lattice, which preserves the mirror symmetry $M_y$, the edge states are gapless, and the Dirac point of the edge states is pinned at zero energy thanks to chiral symmetry. In the armchair-terminated Kekul\'{e} lattice, the edge states are gapped because of mirror symmetry breaking. We consider two types of balanced gain and loss configurations that introduce non-Hermiticity, i.e., the $PT$-symmetric type I and the $PT$-asymmetric type II displayed in the inset of Fig.~\ref{fig1}(a). For both configurations, the bulk gap is reduced by increasing the strength of the gain and loss. For the $PT$-symmetric configuration, the edge states of the molecular-zigzag-terminated ribbon show a pair of exceptional points and acquire a finite imaginary energy. In the armchair-terminated ribbon, the energy gap of the edge states closes and a Dirac point forms as the strength of the $PT$-symmetric gain and loss is tuned. Furthermore, the non-Hermiticity-induced Dirac point in the armchair-terminated ribbon splits into a pair of exceptional points as the strength of gain and loss increases. For the $PT$-asymmetric gain and loss, the energy spectra of the bulk and edge states become complex, and the edge and bulk gaps close simultaneously.
This paper is organized as follows: In Sec.~\ref{The Model}, we introduce the tight-binding model of the Kekul\'{e} hopping texture modulated honeycomb lattice in the presence of balanced gain and loss. Then, we present the non-Hermitian effects on the TCI phase of the Kekul\'{e} lattice in Sec.~\ref{nonhermitian} for $PT$-symmetric and $PT$-asymmetric gain and loss configurations. Finally, a brief summary is presented in Sec.~\ref{Conclusion}.
\section{Model}
\label{The Model}
\begin{figure*}[t]
\includegraphics[width=14cm]{fig2.pdf} \caption{ Bulk energy band structure of the tight-binding model on the Kekul\'{e} hopping texture modulated honeycomb lattice with hopping parameters $t_0$ and $t_1$. (a) $(t_{0},t_{1})=(1.5,1)$. The topologically trivial phase. (b) $(t_{0},t_{1})=(1,1)$. The critical semimetal phase. (c) $(t_{0},t_{1})=(1,1.5)$. The topologically nontrivial TCI phase.}%
\label{fig2}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=8.0cm]{fig3.pdf} \caption{ Boundary geometry dependent edge states in the 2D TCI phase. (a) The energy spectrum for a molecular-zigzag terminated ribbon. The gapless edge modes (red lines) appear inside the bulk energy gap. (b) The spectrum for an armchair-terminated ribbon. The edge modes (red lines) inside the bulk gap display an energy gap. The hopping parameters are $(t_{0},t_{1})=(1,1.5)$ in both (a) and (b).}%
\label{fig3}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=14cm]{fig4.pdf} \caption{ Bulk band structure of the non-Hermitian TCI phase in the presence of the type I configuration of gain and loss. (a) $\gamma=0.3$, (b) $\gamma=0.5$, (c) $\gamma=0.6$. The blue lines correspond to the real part of the energy, and the dashed green lines to the imaginary part. The spectrum becomes complex and flat bands appear when $\gamma>0.5$. The $PT$-symmetric gain and loss per unit cell is given by $( {i\gamma, -i\gamma, i\gamma, -i\gamma, i\gamma, -i\gamma})$. The hopping parameters are $(t_{0},t_{1})=(1,1.5)$.}%
\label{fig4}
\end{figure*}
\begin{figure*}[hptb]
\includegraphics[width=17cm]{fig5.pdf} \caption{The energy spectra for a molecular-zigzag-terminated ribbon in the non-Hermitian TCI phase; the $PT$-symmetric gain and loss in the unit cell is given by $ ({i\gamma, -i\gamma, i\gamma, -i\gamma, i\gamma, -i\gamma})$. (a) $\gamma=0.1$, (b) $\gamma=0.3$ and (c) $\gamma=0.6$. The red lines mark the real and imaginary parts of the energy spectrum of the edge states. The green curves mark the imaginary energies of the bulk states. The hopping parameters are $(t_{0},t_{1})=(1,1.5)$.}%
\label{fig5}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=17cm]{fig6.pdf} \caption{The spectra for an armchair-terminated ribbon in the non-Hermitian TCI phase; the $PT$-symmetric gain and loss in each unit cell is given by $ ({i\gamma, -i\gamma, i\gamma, -i\gamma, i\gamma, -i\gamma})$. The right parts of (a)-(c) show the imaginary parts of the edge spectra. (a) $\gamma=0.1$. (b) $\gamma=0.2192$, the critical point at which the edge gap closes. (c) $\gamma=0.3$. The red lines mark the real and imaginary parts of the energy spectrum of the edge states. The hopping parameters are $(t_{0},t_{1})=(1,1.5)$.}%
\label{fig6}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=8cm]{fig7.pdf} \caption{A sample composed of $20 \times 20$ unit cells supporting the edge states. The molecular-zigzag boundary is along the $y$-direction, and the armchair boundary is along the $x$-direction. (a) The distribution of edge states with energy around $0.15$ in the Hermitian case. (b) The distribution of edge states with energy around $0.28$, which is above the edge gap but still in the bulk gap. (c) In the non-Hermitian case of $\gamma=0.2$, the distribution of edge states with energy around $0.15$, which lies in the original edge gap.}%
\label{fig7}
\end{figure}
The Kekul\'{e} lattice can be viewed as a honeycomb lattice with an alternating bond texture, as depicted in Fig. \ref{fig1}(a). Owing to the alternating bond texture, the unit cell is enlarged, and there are six sublattices in a unit cell. The two primitive lattice vectors are defined as $\mathbf{a}_1=a(3/2,-\sqrt{3}/2)$ and $\mathbf{a}_2=a(0,\sqrt{3})$ with $a$ the lattice constant. Correspondingly, the two unit vectors in the reciprocal lattice are $\mathbf{b}_1=\frac{2\pi}{3a}(2,0)$ and $\mathbf{b}_2=\frac{2\pi}{3a}(1,\sqrt{3})$. We introduce two types of nearest-neighbor hopping parameters, consisting of the intracellular hopping $t_{0}$ and the intercellular hopping $t_{1}$~[see the orange dashed rectangle in Fig.~\ref{fig1}(a)]. Then the Hermitian tight-binding Hamiltonian reads
\begin{equation}
H_0=\sum_{\langle i,j\rangle}t _{i,j}c_{i}^{\dag}c_{j},
\end{equation}
where $\langle i,j \rangle$ represents the nearest-neighbors pairs on the honeycomb lattice, $c_{i}^{\dag}$ and $c_{i}$ are the creation and annihilation operators at the site $i$. The hopping parameter $t _{i,j}=t_{0}$ if $i$ and $j$ are connected by a solid bond and belong to the same cell in Fig.~\ref{fig1}(a), and $t _{i,j}=t_{1}$ if $i$ and $j$ are connected by a dashed bond and belong to the adjacent cells. On this basis of $\big( c_{\mathbf{k},1},c_{\mathbf{k},2},c_{\mathbf{k},3},c_{\mathbf{k},4},c_{\mathbf{k},5},c_{\mathbf{k},6}\big)^T$, the Hamiltonian matrix in momentum space can be written as
\begin{widetext}
\begin{equation}
H_0(\mathbf{k})=\left(\!
\begin{array}{cccccc}
\!0 & t_{0} & 0 & t_{1}e^{i\mathbf{k} \cdot \mathbf{a}_{2}} & 0 & t_{0}\! \\
\!t_{0} & 0 & t_{0} & 0 & t_{1}e^{-i\mathbf{k} \cdot \mathbf{a}_{1}} & 0\! \\
\!0 & t_{0} & 0 & t_{0} & 0 & t_{1}e^{-i \mathbf{k} \cdot (\mathbf{a}_{1}+\mathbf{a}_{2})}\! \\
\!t_{1}e^{-i\mathbf{k} \cdot \mathbf{a}_{2}} & 0 & t_{0} & 0 & t_{0} & 0\! \\
\!0 & t_{1}e^{i\mathbf{k} \cdot \mathbf{a}_{1}} & 0 & t_{0} & 0 & t_{0}\! \\
\!t_{0} & 0 & t_{1}e^{i\mathbf{k} \cdot (\mathbf{a}_{1}+\mathbf{a}_{2})} & 0 & t_{0} & 0\! \\
\end{array}%
\!\right).
\label{H0}
\end{equation}
\end{widetext}
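The Bloch Hamiltonian of Eq.~(\ref{H0}) is straightforward to study numerically. The following sketch (setting the lattice constant $a=1$; function and variable names are ours) builds $H_0(\mathbf{k})$ and can be used to reproduce the band structures discussed below:

```python
import numpy as np

def h0(kx, ky, t0=1.0, t1=1.5):
    """Bloch Hamiltonian H0(k) of Eq. (2), with lattice constant a = 1."""
    a1 = np.array([1.5, -np.sqrt(3) / 2])
    a2 = np.array([0.0, np.sqrt(3)])
    k = np.array([kx, ky])
    H = np.zeros((6, 6), dtype=complex)
    # intracellular t0 bonds around the hexagon 1-2-3-4-5-6
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]:
        H[i, j] = t0
    # intercellular t1 bonds with their Bloch phases
    H[0, 3] = t1 * np.exp(1j * k @ a2)
    H[1, 4] = t1 * np.exp(-1j * k @ a1)
    H[2, 5] = t1 * np.exp(-1j * k @ (a1 + a2))
    return H + H.conj().T
```

One can check that the resulting spectrum is symmetric about zero energy (chiral symmetry) and that the gap at $\Gamma$ closes for $t_0=t_1$.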
Before introducing the non-Hermitian effect, it is useful to discuss the symmetries and band structure of the Hermitian Kekul\'{e} lattice. Equations (1) and (2) preserve time-reversal symmetry and chiral symmetry, which are two internal symmetries. For this spinless system, time-reversal symmetry is expressed as $\mathcal{T}=\mathcal{K}$ with $\mathcal{K}$ the complex conjugation. Chiral symmetry $\mathcal{C}$ is defined as
\begin{equation}\label{CS}
\mathcal{C}H_0(\mathbf{k})\mathcal{C}^{-1}=-H_0(\mathbf{k}), \quad \mathcal{C}^2=1,
\end{equation}
where $\mathcal{C}=\sigma_z\oplus(\sigma_0\otimes\sigma_z)$ in the basis of Eq.~(\ref{H0}), with $\sigma_{z}$ the $z$-component of the Pauli matrix vector and $\sigma_0$ the $2\times2$ identity matrix. Besides the two internal symmetries, the Hamiltonian has inversion symmetry, which can be written in matrix form as follows:
\begin{equation}
\mathcal{P}=\left(\!
\begin{array}{cccccc}
\!0 & 0 & 0 & 1 & 0 & 0\! \\
\!0 & 0 & 0 & 0 & 1 & 0\! \\
\!0 &0 & 0& 0 & 0 &1\! \\
\!1 & 0 & 0 & 0 & 0 & 0\! \\
\!0 & 1 & 0 & 0 & 0 & 0\! \\
\!0 & 0 & 1 & 0 & 0 & 0\! \\
\end{array}%
\!\right).\nonumber
\end{equation}
In addition, the Hamiltonian is also invariant under the crystalline symmetries that include a six-fold rotation symmetry $C_6$ as well as the two inequivalent mirror reflection symmetries $M_x$ and $M_y$.
The competition between the intercellular and intracellular hoppings plays an essential role in controlling the gap-opening of the bulk energy bands that determine the system's topology. Therefore, the Kekul\'{e} lattice can be viewed as a 2D extension of the Su-Schrieffer-Heeger~(SSH) model~\cite{SSHmodel}. When $t_0\neq t_1$, an energy gap is opened at the $\Gamma$ point and the system is an insulator at half-filling. At $t_0=t_1$, the system reduces to graphene, which is a Dirac semimetal. The system is a normal insulator for $t_0/t_1>1$. As the ratio $t_0/t_1$ decreases, a topological phase transition occurs at the critical point $t_0/t_1=1$, and the system becomes a 2D TCI for $t_0/t_1<1$~\cite{kariyado2017topological,PhysRevLett.122.086804,PSJ.86.123707}.
Figure~\ref{fig2} displays the bulk energy spectra for various ratios $t_0/t_1$. In Fig.~\ref{fig2}(a), we show the energy spectrum of the topologically trivial insulator with $t_0>t_1$. The energy gap at the $\Gamma$ point closes when $t_0/t_1=1$, as shown in Fig.~\ref{fig2}(b), while Fig.~\ref{fig2}(c) shows the spectrum of the 2D TCI. To demonstrate the topological edge states of the TCI, we plot the energy spectra of the Kekul\'{e} nanoribbons with the molecular-zigzag boundary and the armchair boundary in Figs.~\ref{fig3}(a) and \ref{fig3}(b), respectively. The topological edge states are sensitive to the edge geometries of the sample. The molecular-zigzag-terminated Kekul\'{e} lattice supports gapless edge states, and the Dirac point formed by the band crossing of the edge states is protected by both the mirror reflection symmetry $M_y$ and the chiral symmetry $\mathcal{C}$. $M_y$ is broken in the Kekul\'{e} lattice with the armchair boundary, and the edge states are therefore gapped.
\section{Non-Hermitian effects}
\label{nonhermitian}
To study non-Hermitian effects in the Kekul\'{e} lattice, we consider that each unit cell is subject to balanced gain and loss, described by the following Hamiltonian
\begin{equation}
\Delta H=i\gamma\sum_{i,\alpha,\beta}\left(c_{i,\alpha}^{\dag}c_{i,\alpha}- c_{i,\beta}^{\dag}c_{i,\beta}\right),
\label{H}
\end{equation}
where $\gamma$ denotes the gain/loss strength, and $\alpha$ and $\beta$ label the sublattices subject to gain ($i\gamma$) and loss ($-i\gamma$), respectively. Here we focus on two distinct configurations of gain and loss, marked as type I and type II in the inset of Fig.~\ref{fig1}(a). For type I, $\alpha=1,3,5$ and $\beta=2,4,6$, while for type II, $\alpha=1,2,6$ and $\beta=3,4,5$. The total non-Hermitian Hamiltonian is given by $H=H_0+\Delta H$. The non-Hermiticity induced by gain and loss has led to intriguing topological phase transitions and phenomena in 1D and 2D SSH models~\cite{PhysRevA.89.062102,PhysRevLett.116.133903,PhysRevA.96.032103,PhysRevA.95.053626,pan2018photonic,PhysRevB.98.094307,PhysRevLett.121.213902,PhysRevA.97.042118,PhysRevB.99.155431,PhysRevLett.123.165701,PhysRevA.100.032102,PhysRevA.99.012113}.
In the non-Hermitian case, chiral symmetry transforms the Hamiltonian in the following way~\cite{PhysRevX.9.041015}
\begin{equation}
\mathcal{C}H^\dagger(\mathbf{k})\mathcal{C}^{-1}=-H(\mathbf{k}).
\end{equation}
This relation reduces to Eq.~(\ref{CS}) in the Hermitian case, where $H^\dagger=H$. Both types of gain and loss configurations preserve chiral symmetry, which ensures symmetric energy spectra of the non-Hermitian TCI.
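This can be verified numerically: for any Hermitian part that couples only the two sublattice sectors and any balanced diagonal gain/loss pattern, the chiral-symmetry relation above holds and the complex eigenvalues come in $(E,-E^{*})$ pairs. A minimal sketch, with a random chiral-symmetric $H_0$ standing in for the Kekul\'{e} Bloch Hamiltonian:

```python
import numpy as np

def chiral_pairing_holds(pattern, gamma=0.4, seed=0):
    """Check C H^dag C^{-1} = -H and the (E, -E*) eigenvalue pairing
    for a random bipartite H0 plus the gain/loss term i*gamma*pattern."""
    rng = np.random.default_rng(seed)
    H0 = np.zeros((6, 6), dtype=complex)
    for i in (0, 2, 4):          # hoppings connect even and odd
        for j in (1, 3, 5):      # sublattices only (chiral symmetry)
            H0[i, j] = rng.normal() + 1j * rng.normal()
    H0 = H0 + H0.conj().T
    H = H0 + 1j * gamma * np.diag(pattern)
    C = np.diag([1, -1, 1, -1, 1, -1])
    symmetric = np.allclose(C @ H.conj().T @ C, -H)
    e = np.linalg.eigvals(H)
    paired = np.allclose(np.sort_complex(e), np.sort_complex(-e.conj()))
    return symmetric and paired
```

Both the type I pattern $(+,-,+,-,+,-)$ and the type II pattern $(+,+,-,-,-,+)$ pass this check, since any real diagonal gain/loss commutes with the diagonal chiral operator $\mathcal{C}$.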
\subsection{Type I configuration of gain and loss}
\label{Pattern 1}
\begin{figure*}[t]
\includegraphics[width=14cm]{fig8.pdf} \caption{Bulk energy band structure for the non-Hermitian 2D TCI phase with $(t_{0},t_{1})=(1,1.5)$. The $PT$-asymmetric gain and loss in each unit cell is given by $ ({i\gamma, i\gamma, -i\gamma, -i\gamma, -i\gamma, i\gamma})$. (a) $\gamma=0.3$, (b) $\gamma=0.5$, (c) $\gamma=0.6$. The blue lines are the real part of the energy, and the dashed green lines describe the imaginary part. }%
\label{fig8}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=17cm]{fig9.pdf} \caption{The energy spectra for a molecular-zigzag terminated ribbon in the presence of $PT$ asymmetric gain and loss in the unit cell, which is given by $ ({i\gamma, i\gamma, -i\gamma, -i\gamma, -i\gamma, i\gamma})$. (a) $\gamma=0.3$, (b) $\gamma=0.5$ and (c) $\gamma=0.6$. The red lines mark the real part of energy spectrum of edge states, while the green lines represent the imaginary parts of the bulk and edge states. The hopping parameters are $(t_{0},t_{1})=(1,1.5)$.}%
\label{fig9}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=17cm]{fig10.pdf} \caption{The energy spectra for an armchair terminated ribbon in the presence of $PT$-asymmetric gain and loss in the unit cell, which is given by $ ({i\gamma, i\gamma, -i\gamma, -i\gamma, -i\gamma, i\gamma})$. (a) $\gamma=0.3$, (b) $\gamma=0.5$ and (c) $\gamma=0.6$. The red lines mark the real part of the energy spectrum of the edge states, while the green lines represent the imaginary parts of the bulk and edge states. The hopping parameters are $(t_{0},t_{1})=(1,1.5)$.}%
\label{fig10}
\end{figure*}
In this subsection, we consider the type I configuration of gain and loss that preserves $PT$ symmetry, which can be described as $\Delta H=i\gamma \sigma_z\oplus(\sigma_0\otimes\sigma_z)$. In the presence of the type I configuration of gain and loss, time-reversal symmetry is broken. However, the combined inversion and time-reversal symmetry $PT$ is preserved. Therefore, the bulk energy spectrum remains real unless the $PT$ symmetry is spontaneously broken. In the remainder of this article, we focus on the regime $t_{0}<t_{1}$, where the system is in the TCI phase. In particular, the non-Hermitian Hamiltonian $H(\mathbf{k})$ has
pure imaginary eigenvalues when $\gamma>|t_{1}-t_{0}|$, which implies a topological phase transition. This topological phase transition is characterized by the formation of degenerate eigenstates that have equal chirality and are connected to each other by inversion symmetry. Furthermore, the non-Hermiticity can also change the size of the bulk energy gap. In Fig.~\ref{fig4}, we plot the bulk band structures for different values of $\gamma$. We can see that as $\gamma$ increases, the band gap gradually decreases to zero at the critical point $\gamma=|t_{1}-t_{0}|$. When $\gamma$ is further increased, the spectrum develops a finite imaginary part, while the real part of the spectrum exhibits flat bands pinned at zero energy around the $\Gamma$ point, as displayed in Fig.~\ref{fig4}(c).
Besides the non-Hermitian effects on the bulk band structure, the topological edge states of the 2D TCI phase also show interesting phenomena. As shown in Figs.~\ref{fig5}(a) and \ref{fig5}(b), the gapless edge states in the molecular-zigzag terminated TCI are robust against the balanced gain and loss if $\gamma<|t_{1}-t_{0}|$. However, the Dirac point of the edge states splits into two exceptional points connected by flat bands. Meanwhile, the eigenvalues of the edge states acquire finite imaginary parts while the bulk spectrum remains real. The imaginary part of the edge spectrum grows as $\gamma$ increases, as displayed in Fig.~\ref{fig5}(b). When $\gamma>|t_1-t_0|$, the edge states mix with the bulk states as the bulk gap closes. The spectrum of the bulk states also acquires an imaginary part, whose magnitude is smaller than that of the edge states, as depicted in Fig.~\ref{fig5}(c).
The non-Hermitian effects on the edge states of the armchair-terminated TCI are even more striking. In the absence of gain and loss, the edge states are gapped owing to the breaking of the mirror symmetry $M_y$. Upon turning on $\gamma$, the edge gap decreases as $\gamma$ increases, as shown in Fig.~\ref{fig6}(a). The edge gap eventually closes and a non-Hermiticity-induced Dirac point forms when $\gamma$ is large enough, as depicted in Fig.~\ref{fig6}(b). As $\gamma$ is further increased, the formed Dirac point splits into two separated exceptional points [see Fig.~\ref{fig6}(c)]. In addition, as shown in Fig.~\ref{fig6}(c), the real part of the edge spectrum shows a flat band, and the imaginary part is finite within the range of the flat band.
The effect of gain and loss on the edge states can be understood via a two-band effective edge Hamiltonian, which reads $H_\text{eff}=v_\text{F}k\tau_z+\Delta\tau_x+i\gamma\tau_y$, where $v_\text{F}$ denotes the Fermi velocity of the edge states, $\tau_{x,y,z}$ are the Pauli matrices acting on the subspace formed by the edge states, and $\Delta$ represents the band gap of the edge states. The eigenvalues of this Hamiltonian are $E_\pm=\pm\sqrt{v^2_\text{F}k^2+\Delta^2-\gamma^2}$. Evidently, the edge gap determined by $\Delta$ closes when $\gamma=\Delta$. The eigenvalues $E_\pm$ become purely imaginary for $\gamma^2>v^2_\text{F}k^2+\Delta^2$, and flat bands pinned at zero energy appear since the real part of $E_\pm$ vanishes in that regime.
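The closed-form eigenvalues of the effective edge model can be checked numerically. The sketch below (with hypothetical parameter values) diagonalizes $H_\text{eff}$ and compares against $E_\pm$, including the gap closing at $\gamma=\Delta$ and the $PT$-broken regime with purely imaginary eigenvalues:

```python
import numpy as np

tx = np.array([[0, 1], [1, 0]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]])
tz = np.diag([1.0 + 0j, -1.0])

def edge_spectrum(k, vF=1.0, delta=0.4, gamma=0.2):
    """Eigenvalues of H_eff = vF*k*tau_z + delta*tau_x + i*gamma*tau_y."""
    H = vF * k * tz + delta * tx + 1j * gamma * ty
    return np.linalg.eigvals(H)

def closed_form(k, vF=1.0, delta=0.4, gamma=0.2):
    """E_pm = +/- sqrt(vF^2 k^2 + delta^2 - gamma^2), complex-valued."""
    E = np.sqrt(complex(vF**2 * k**2 + delta**2 - gamma**2))
    return np.array([-E, E])

for k in [0.0, 0.3, 0.8]:
    assert np.allclose(np.sort_complex(edge_spectrum(k)),
                       np.sort_complex(closed_form(k)))

# the edge gap closes at k = 0 when gamma = delta
assert np.allclose(edge_spectrum(0.0, gamma=0.4), 0.0)
# PT-broken regime (gamma^2 > vF^2 k^2 + delta^2): purely imaginary pair
E = edge_spectrum(0.0, gamma=0.6)
assert np.allclose(E.real, 0.0) and not np.allclose(E.imag, 0.0)
```

The vanishing real part over the momentum window $\gamma^2>v^2_\text{F}k^2+\Delta^2$ is exactly the flat band pinned at zero energy described above.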
In order to further study the non-Hermitian effect on the edge states, we consider a finite square-shaped Kekul\'{e} lattice sample under the open boundary conditions along both the $x$ and $y$ directions. This finite size sample has two types of edges, the molecular-zigzag edge along the $y$ direction and the armchair edge along the $x$ direction. In the Hermitian case, when the chemical potential is located in the edge gap of the armchair boundaries, the molecular-zigzag edges show a finite probability density of electrons, as displayed in Fig.~\ref{fig7}(a). When the chemical potential is shifted out of the edge gap of the armchair-terminated ribbon, the probability density of electrons is finite for both molecular-zigzag and armchair edges [see Fig.~\ref{fig7}(b)]. In the non-Hermitian case, even for a chemical potential located in the edge gap, by increasing $\gamma$ the probability distribution pattern will change from that shown in Fig.~\ref{fig7}(a) to the pattern exhibited in Fig.~\ref{fig7}(c), which suggests that the distribution of edge states can be tuned by $\gamma$.
\subsection{Type II configuration of gain and loss}
\label{Pattern 2}
For comparison, we consider the type II configuration of gain and loss, which is $PT$ asymmetric [see the inset of Fig.~\ref{fig1}(a)]: the sublattices 3, 4, 5 carry the imaginary potential $-i\gamma$ and the sublattices 1, 2, 6 carry $i\gamma$. The non-Hermitian Hamiltonian is no longer $PT$-invariant in the presence of the type II configuration, so the bulk energy spectrum becomes complex once $\gamma$ is turned on. Figure~\ref{fig8} plots the bulk spectrum for several different values of $\gamma$. We can see that the imaginary part of the spectrum grows while the size of the real bulk gap shrinks as $\gamma$ increases. We also plot the energy spectrum of a molecular-zigzag terminated sample for different values of $\gamma$ in Fig.~\ref{fig9}. The Dirac point of the gapless edge states again splits into a pair of exceptional points. For the armchair boundary, this kind of gain and loss also reduces the edge gap, as the type I configuration does. Meanwhile, as shown in Fig.~\ref{fig10}, the edge gap and the bulk gap close simultaneously as $\gamma$ increases, in contrast to the $PT$-symmetric case, where the edge gap closes before the bulk gap. This suggests that, for the Kekul\'{e} lattice with the armchair boundary, the $PT$-symmetric gain and loss can be used in future experiments to close the edge gap of the 2D TCI phase while keeping the bulk spectrum real and gapped.
\section{Conclusion}
\label{Conclusion}
To summarize, we have studied the 2D TCI phase in the honeycomb lattice with Kekul\'{e}-like hopping texture under balanced gain and loss.
Particularly, we consider two types of gain and loss configurations that are $PT$ symmetric and $PT$ asymmetric, respectively.
We found that both types of gain and loss can close the bulk gap. However, for the $PT$-symmetric gain and loss, the bulk spectrum remains real and gapped before the spontaneous $PT$-symmetry breaking. In contrast, the bulk spectrum becomes complex once we introduce the $PT$-asymmetric gain and loss.
The edge states are dramatically affected by the gain and loss. The $PT$-symmetric gain and loss drives the Dirac point of the edge states in the molecular-zigzag-terminated sample to split into a pair of exceptional points. The edge gap in the armchair-terminated sample can be closed by the $PT$-symmetric gain and loss, and a Dirac point forms. As the gain-loss strength further increases, the non-Hermiticity-induced Dirac point also splits into two separated exceptional points before the bulk gap closes. The $PT$-asymmetric gain and loss can also drive the Dirac point to split into exceptional points for the molecular-zigzag boundary. In the case of the armchair boundary, the edge gap and the bulk gap are closed simultaneously by the $PT$-asymmetric gain and loss. With the rapid progress in the experimental implementation of non-Hermiticity in artificial systems, we believe these exotic non-Hermitian phenomena uncovered in the 2D TCI state will soon be demonstrated experimentally.
\section*{Acknowledgments}
The authors acknowledge the support by the NSFC (under Grant Nos. 12074108, 11704106 and 11974256), the NSF
of Jiangsu Province (under Grant No. BK20190813) and the Priority Academic Program Development (PAPD) of Jiangsu
Higher Education Institution. D.-H.X. also acknowledges the financial support of the Chutian Scholars Program in Hubei Province. F. Liu acknowledges the financial support by the Research Starting Funding of Ningbo
University, NSFC Grant No. 12074205, and NSFZP Grant No. LQ21A040004.
\section{Introduction}
Person re-identification (ReID) aims at matching persons of the same identity across multiple camera views. Recent works in ReID mainly focus on three settings, \ie, fully-supervised~\cite{zhang2020relation,zheng2019joint,zhou2019omni}, fully-unsupervised~\cite{lin2019bottom, lin2020unsupervised,wang2020unsupervised} and unsupervised domain adaptive~\cite{fu2019self,zhai2020multiple,zhong2019invariance} ReID. Despite their good performance on a seen domain (\ie, a domain with training data), most of them suffer from drastic performance decline on unseen domains. In real-world applications, the ReID systems will inevitably search persons in new scenes. Therefore, it is necessary to learn a model that has good generalization ability to unseen domains.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{figures/DG.pdf}
\subcaption*{}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.45\textwidth}
\begin{center}
\fontsize{8pt}{9pt}\selectfont
\vspace{-.2in}
\subcaption*{Comparison of different settings in ReID.}
\vspace{-.1in}
\begin{tabular}{p{2.5cm}|p{0.4cm}<{\centering}p{0.5cm}<{\centering}p{0.5cm}<{\centering}p{0.5cm}<{\centering}p{0.5cm}<{\centering}}
\toprule
\multirow{2}{*}{Setting} & \multicolumn{3}{c}{Source(s)} & \multicolumn{2}{c}{Target(s)} \\
~ & Multi & Images & Labels & Images & Labels \\
\midrule
Fully-Supervised &$\times$ & \checkmark & \checkmark & --- & --- \\
Fully-Unsupervised & $\times$ &\checkmark & $\times$ & --- & --- \\
Single-Source UDA & $\times$ & \checkmark & \checkmark & \checkmark & $\times$ \\
Single-Source DG & $\times$ & \checkmark & \checkmark & $\times$ & $\times$ \\
Multi-Source DG & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ \\
\bottomrule
\end{tabular}
\end{center}
\end{subfigure}
\vspace{-.1in}
\caption{Comparison of different settings in person ReID. Different background colors indicate different distributions, \ie, domains. Solid/dashed ellipses denote data subset with/without labels. Domain generalization (DG) is designed to learn models for unseen domains, while other settings focus on learning models for specific domains. Compared to single-source DG, multi-source DG leverages knowledge from multiple labeled datasets, enforcing the model to learn more underlying patterns across domains.}
\label{fig:DG}
\end{figure}
To meet this goal, domain generalization (DG) is a promising solution that aims to learn generalizable models with one or several labeled source domains. As shown in Fig.~\ref{fig:DG}, compared to other settings, DG does not require access to target domains. Generally, DG can be divided into two categories, single-source DG~\cite{jin2020style,Liao2020QAConv,zhou2019learning} and multi-source DG~\cite{kumar2019fairest,song2019generalizable}, according to the number of source domains.
Recent works mainly focus on single-source DG, where only one labeled source domain is available. However, a single domain provides limited training samples and scene information, restricting the improvement of single-source DG methods. In contrast, multi-source DG utilizes multiple datasets of different distributions, providing more training data that contain numerous variations and environmental factors. However, due to the strong fitting capability of deep networks, directly aggregating all source domains together might lead the model to overfit to the domain bias, hampering the generalization ability of the model. Although we can sample balanced training data from all source domains during training to reduce the impact of domain bias, the above issue still remains.
In this paper, we study the multi-source DG and aim to enforce the model to learn discriminative features without domain bias so that the model can be generalized to unseen domains.
To achieve this goal, this paper introduces a meta-learning strategy for multi-source DG, which simulates the train-test process of DG during model optimization. In our method, we dynamically divide the source domains into meta-train and meta-test sets at each iteration. The meta-train is regarded as source data, and the meta-test is regarded as ``unseen'' data. During training, we encourage the loss of meta-train samples to optimize the model towards a direction that can simultaneously improve the accuracy of meta-test samples.
Nevertheless, meta-learning causes a problem for traditional parametric-based identification loss --- unstable optimization. On the one hand, ReID datasets contain numerous IDs, so the number of classifier parameters will surge when multiple domains are used for training.
On the other hand, the unified optimization of classifiers is unstable due to the asynchronous update by the high-order gradients of the meta-test.
Consequently, we propose a memory-based identification loss, which uses a non-parametric memory to take full advantage of meta-learning while avoiding unstable optimization.
We also introduce a meta batch normalization layer (MetaBN), which mixes meta-train knowledge with meta-test features to simulate the feature variations in different domains.
Our full method is called \textbf{M}emory-based \textbf{M}ulti-Source \textbf{M}eta-\textbf{L}earning (M$^3$L). Experiments on four large-scale ReID datasets demonstrate the effectiveness of our M$^3$L when testing on unseen domains and show that our M$^3$L can achieve state-of-the-art results.
Our contributions are summarized as follows:
\begin{itemize}
\item We propose a Multi-Source Meta-Learning framework for multi-source DG, which can simulate the train-test process of DG during training. Our method enables the model to learn domain-invariant representations and thus improves the generalization ability.
\item We equip our framework with a memory-based module, which implements the identification loss in a non-parametric way and can prevent unstable optimization caused by traditional parametric manner during meta-optimization.
\item We present MetaBN to generate diverse meta-test features, which can be directly injected into our meta-learning framework and obtain further improvement.
\end{itemize}
\section{Related Work}
\label{sec:relatedwork}
\textbf{Person Re-identification.} Recently, supervised learning approaches~\cite{chen2019abd,suh2018part,tay2019aanet,wang2018learning,zhang2020relation,zheng2019joint,zhong2020random,zhong2019camstyle,zhou2019omni} have achieved significant performance in person re-identification (ReID), relying on labeled training data. Considering the difficulties and complexities of annotations, unsupervised learning (USL)~\cite{fan2018unsupervised,lin2019bottom,lin2020unsupervised,wang2020unsupervised} and unsupervised domain adaptation (UDA)~\cite{chen2019instance, fu2019self,zhai2020ad,zhai2020multiple,zhong2018generalizing,zhong2019invariance,zou2020joint} methods are proposed. UDA aims to utilize labeled source data and unlabeled target data to improve the model performance on the target domain. UDA methods mainly focus on generating pseudo-labels on target data ~\cite{fu2019self, zhai2020multiple,zhong2019invariance} or transferring source images to the styles of the target domain for providing extra supervision during adaptation ~\cite{chen2019instance,zhong2018generalizing,zou2020joint}.
USL approaches learn discriminative features only from unlabeled target data, the mainstream~\cite{fan2018unsupervised,lin2019bottom} of which is to train models with pseudo-labels obtained by clustering.
\par
\textbf{Domain Generalization.} Although USL and UDA ReID methods show good performance, they still need to collect a large amount of target data for training models. In contrast, domain generalization (DG) has no access to any target domain data. By carefully designing, DG methods~\cite{2020EccvDMG,jin2020style,Li2018MLDG} can improve the model performance on unseen domains. Most existing DG methods focus on closed-set tasks~\cite{2020EccvDMG,khosla2012undoing,Li2018MLDG,muandet2013domain,qiao2020learning}, assuming that the target data have the same label space as the source data.
Lately, several works~\cite{jin2020style,Liao2020QAConv,song2019generalizable,zhou2019learning} were introduced to learn generalizable models for person ReID.
SNR~\cite{jin2020style} disentangles identity-relevant and identity-irrelevant features and reconstructs more generalizable features.
Liao \etal~\cite{Liao2020QAConv} propose a novel QAConv for calculating the similarity between samples, which can effectively improve ReID accuracy on unseen data but is inefficient during testing.
DIMN~\cite{song2019generalizable} proposes a mapping subnet to match persons within a mini-batch and trains the model with data from one domain at each iteration. Song \etal~\cite{song2019generalizable} claim that DIMN uses meta-learning in the training stage. However, DIMN optimizes the model with the common training strategy, which is completely different from our meta-learning strategy.
\par
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.92\linewidth]{figures/Overall.pdf}
\end{center}
\vspace{-7mm}
\caption{The framework of the proposed M$^3$L. During training, we are given several (three in this example) source domains. At each iteration, source domains are divided into one meta-test and two meta-train domains. In the meta-train stage, memory-based identification loss and triplet loss are calculated from meta-train data as the meta-train loss. In the meta-test stage, the original model is copied and then the copied model is updated with meta-train loss. We compute the meta-test loss on the updated model. In this stage, MetaBN is used to diversify the meta-test features. Finally, the combination of meta-train and meta-test losses is used to optimize the original model.}
\label{fig:overall}
\vspace{-1mm}
\end{figure*}
\textbf{Meta Learning.} The concept of meta-learning~\cite{thrun1998learning} is learning to learn, and has been initially proposed in the machine learning community. Recently, meta-learning has been applied to various deep-based applications, including model optimization~\cite{andrychowicz2016learning,li2016learning}, few-shot learning~\cite{finn2017model,snell2017prototypical,sun2019meta,vinyals2016matching} and domain generalization~\cite{balaji2018metareg,guo2020learning,Li2018MLDG,Li_2019_ICCV,li2019feature}.
MAML~\cite{finn2017model} and its variant Reptile~\cite{nichol2018first} are proposed to learn a good initialization for fast adapting a model to a new task.
Li \etal~\cite{Li2018MLDG} first extend MAML~\cite{finn2017model} to closed-set DG.
Later, meta-learning was applied to closed-set DG~\cite{balaji2018metareg,Li_2019_ICCV,li2019feature} and open-set DG~\cite{guo2020learning}. In this paper, we propose a memory-based meta-learning approach, which is tailor-made for multi-source DG in ReID.
\section{Methodology}
For multi-source domain generalization (DG) in person ReID, we are provided with $N_S$ source domains $\mathcal D_S = \{\mathcal D^1_S, ...,\mathcal D^{N_S}_S\}$ in the training stage. The label spaces of the source domains are disjoint. The goal is to train a generalizable model with the source data. In the testing stage, the model is evaluated directly on a given unseen domain $\mathcal D_T$.
\subsection{Overview}
This paper designs a Memory-based Multi-source Meta-Learning (M$^3$L) framework for multi-source domain generalization (DG) in person ReID task.
In our framework, we introduce a meta-learning strategy, which simulates the train-test process of DG during model optimization.
Specifically, we dynamically split the source domains into meta-train and meta-test at each iteration. During training,
we first copy the original model and update it with the loss from meta-train data. Then we use the updated model to compute the meta-test loss.
The memory-based identification loss and triplet loss are adopted for effective meta-learning. We also inject a meta batch normalization layer (MetaBN) into the network, which diversifies the meta-test features with meta-train distributions to further facilitate the effect of meta-learning.
Finally, the combination of the meta-train and meta-test losses is used to update the original model towards a generalizable direction that performs well on meta-train and meta-test domains.
\subsection{Meta-Learning for Multi-Source DG}
\label{sec:metaoptimization}
We adopt the concept of ``learning to learn'' to simulate the train-test process of domain generalization during the model optimization. At each training iteration, we randomly divide the $N_S$ source domains into $N_S-1$ domains as \textbf{meta-train} and the remaining \emph{one} domain as \textbf{meta-test}. The process of computing the meta-learning loss includes the meta-train and the meta-test stages.
\textit{In the meta-train stage}, we calculate the meta-train loss $\mathcal{L}_{mtr}$ on the meta-train samples to optimize the model. \textit{In the meta-test stage}, the optimized model is used to calculate the meta-test loss $\mathcal{L}_{mte}$ with the meta-test samples.
Finally, the network is optimized by the combination of meta-train and meta-test losses, \ie,
\begin{equation}
\label{loss:meta}
\argmin_{\Theta} \mathcal{L}_{mtr}(\Theta) + \mathcal{L}_{mte}(\Theta^{'}),
\end{equation}
where $\Theta$ denotes the parameters of the network, and $\Theta^{'}$ denotes the parameters of the model optimized by the $\mathcal{L}_{mtr}$.
Note that $\mathcal{L}_{mte}$ is only used to update $\Theta$; its derivative involves high-order gradients with respect to $\Theta$.
\textbf{Remark.} In the proposed meta-learning objective, the meta-test loss encourages the loss of meta-train samples to optimize the model towards a direction that can improve the accuracy of meta-test samples. By iteratively enforcing the generalization process from meta-train domains to meta-test domain, the model can avoid overfitting to domain bias and can learn domain-invariant representations that generalize well on unseen domains.
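The role of the high-order gradient in this objective can be illustrated with a deliberately tiny scalar example. The quadratic losses, learning rates, and step count below are all hypothetical and stand in for the actual ReID losses; every gradient, including the second-order term contributed by $\mathcal{L}_{mte}(\Theta^{'})$, is written by hand:

```python
# Toy version of   argmin_theta  L_mtr(theta) + L_mte(theta'),
# with theta' = theta - alpha * dL_mtr/dtheta (one inner SGD step).
a, b = 0.0, 2.0            # minima of the meta-train / meta-test losses
alpha, beta = 0.1, 0.05    # inner / outer learning rates

L_mtr = lambda t: (t - a) ** 2
L_mte = lambda t: (t - b) ** 2

def meta_objective(theta):
    theta_prime = theta - alpha * 2 * (theta - a)   # inner SGD step
    return L_mtr(theta) + L_mte(theta_prime)

theta = 5.0
start = meta_objective(theta)
for _ in range(500):
    g_mtr = 2 * (theta - a)                  # dL_mtr/dtheta
    theta_prime = theta - alpha * g_mtr
    # chain rule: dL_mte(theta')/dtheta = dL_mte/dtheta' * dtheta'/dtheta,
    # where dtheta'/dtheta = 1 - 2*alpha is the high-order term
    g = g_mtr + 2 * (theta_prime - b) * (1 - 2 * alpha)
    theta -= beta * g

assert meta_objective(theta) < start
assert a < theta < b   # the optimum lies strictly between both minima
```

The converged parameter sits between the two minima: the meta-test term biases each meta-train update toward a solution that also performs well on the held-out domain, which is the intuition behind the objective above.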
Next, we will introduce the loss functions used in meta-learning in Sec.~\ref{sec:memory}, the MetaBN layer in Sec.~\ref{sec:metabn} and detailed training procedure of meta-learning in Sec.~\ref{sec:procedures}.
\subsection{Memory-based Identification Loss}
\label{sec:memory}
Identification loss can effectively learn discriminative person representations in a classification manner. Commonly, a fully-connected layer is adopted as the classifier to produce the probabilities that are used for computing the cross-entropy loss. Although existing works~\cite{balaji2018metareg,han2018coteaching,sun2019meta} show the effectiveness of meta-learning in the classification task, the parametric classifier is inadequate in the context of ReID. This is because ReID is an open-set task, where different domains contain completely different identities and the number of identities in each domain is commonly large. In multi-source DG of ReID, we have two kinds of parametric classifier selections, one global FC classifier or $N_S$ parallel FC classifiers for each domain, both of which will lead to problems during meta-learning.
For the global FC classifier (Fig.~\ref{fig:classifier}(a)), the dimension of the FC layer is the sum of all source identities. Different from closed-set tasks~\cite{balaji2018metareg,2020EccvDMG}, the global FC classifier contains a large number of parameters when trained with multiple person ReID datasets. This will lead to unstable optimization during meta-learning. As for the parallel FC classifiers in Fig.~\ref{fig:classifier}(b), although we can alleviate the parameter burden by only identifying persons within their own domain classifier, the number of parameters of all classifiers is still large. Moreover, during meta-learning, the classifier of the meta-test domain is only updated by high-order gradients, which is asynchronous with the feature encoder. This optimization process is unbalanced and unstable, leading to incomplete use of meta-learning.
Taking all the above into consideration, inspired by~\cite{ge2020self,memory,zhong2019invariance,zhong2020memory}, we propose a memory-based identification loss for multi-source DG, which is non-parametric and suitable for both meta-learning and person ReID.
As shown in Fig.~\ref{fig:classifier}(c), we maintain a feature memory for each domain, which contains the centroids of each identity. The similarities between features and memory centroids are used to compute the identification loss. The memory-based identification loss has two advantages to our meta-learning framework. \textit{First}, the memory is a non-parametric classifier, which avoids the unstable optimization caused by a large number of parameters. \textit{Second}, the asynchronous update between the feature encoder and memory has a slight influence on model training. This is because the memory is updated smoothly by a momentum instead of being updated by an optimizer. Thus, the memory is insensitive to the changes of the feature encoder caused by the last few training iterations. In Sec.~\ref{sec:ablation}, we show that our meta-learning framework gains more improvements with the memory-based identification loss than with the FC-based identification loss. Next, we will introduce the memory-based identification loss in detail.
\par
\textbf{Memory Initialization.} We maintain an individual memory for each source domain. For a source domain $\mathcal{D}^i_S$ with $n_i$ identities, the memory $\mathcal{M}^i$ has $n_i$ slots, where each slot saves the feature centroid of the corresponding identity. At initialization, we use the model to extract features for all samples of $\mathcal{D}_S^i$ and initialize the centroid of each identity as the mean of the features belonging to that identity. For simplicity, we omit the superscript of the domain index and introduce the memory updating and the memory-based identification loss for one domain.
\par
\textbf{Memory Updating.} At each training iteration, we update the memory with the features in the current mini-batch. A centroid in the memory is updated through,
\begin{equation}
\mathcal{M}[k] \gets m\cdot \mathcal{M}[k] + (1-m)\cdot \frac{1}{|\mathcal{B}_k|}\sum\limits_{x_i\in \mathcal{B}_k}f(x_i) ,
\end{equation}
where $\mathcal{B}_k$ denotes the samples belonging to the $k$th identity and $|\mathcal{B}_k|$ denotes the number of samples for the $k$th identity in current mini-batch. $m\in [0,1]$ controls the updating rate.
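The update rule above can be sketched in a few lines. The momentum value and array shapes below are hypothetical; the sketch only illustrates the smooth, optimizer-free nature of the centroid update:

```python
import numpy as np

def update_memory(memory, feats, labels, m=0.2):
    """Momentum update of per-identity centroids.
    memory: (n_ids, d); feats: (B, d); labels: (B,) identity indices.
    m is a hypothetical momentum value."""
    for k in np.unique(labels):
        batch_mean = feats[labels == k].mean(axis=0)
        memory[k] = m * memory[k] + (1 - m) * batch_mean
    return memory

mem = np.zeros((3, 4))                  # 3 identities, 4-dim features
feats = np.ones((4, 4))
labels = np.array([0, 0, 1, 1])
mem = update_memory(mem, feats, labels, m=0.2)
assert np.allclose(mem[0], 0.8)         # 0.2*0 + 0.8*1
assert np.allclose(mem[2], 0.0)         # absent identity is untouched
```

Because only a momentum-weighted average is applied, the memory drifts slowly even when the feature encoder changes quickly, which is why the asynchronous update is harmless here.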
\par
\textbf{Memory-based identification loss.} Given an embedding feature $f(x_i)$ from the forward propagation, we calculate the similarities between $f(x_i)$ and each centroid in the memory. The memory-based identification loss aims to classify $f(x_i)$ into its own identity, which is calculated by:
\begin{equation}
\label{loss:identification}
\mathcal{L}_M = -\log{\frac{\exp{\left(\mathcal{M}[i]^T f(x_i) / \tau \right)}}{\sum_{k=1}^{n_i}\exp{\left(\mathcal{M}[k]^T f(x_i) / \tau \right)}}} ,
\end{equation}
where $\tau$ is the temperature factor that controls the scale of distribution.
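A minimal sketch of this non-parametric loss is given below. The temperature and the assumption that features and centroids are L2-normalized are hypothetical choices for illustration:

```python
import numpy as np

def memory_id_loss(memory, f, identity, tau=0.05):
    """Cross-entropy over similarities between a feature and all
    memory centroids; tau is a hypothetical temperature."""
    logits = memory @ f / tau              # (n_ids,)
    logits -= logits.max()                 # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[identity]

mem = np.eye(5, 8)                 # five orthonormal toy centroids
f = 0.9 * mem[2] + 0.1 * mem[0]    # feature close to identity 2
f /= np.linalg.norm(f)
losses = [memory_id_loss(mem, f, i) for i in range(5)]
# the loss is smallest when the feature is classified to its own identity
assert int(np.argmin(losses)) == 2
```

Since the log-partition term is identical across identities, minimizing this loss simply pulls $f(x_i)$ toward its own centroid and away from the others, with no classifier weights to optimize.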
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figures/classifier.pdf}
\vspace{-.2in}
\caption{Comparison of different classifiers. Layers within dashed rectangles are updated together by an optimizer.}
\label{fig:classifier}
\end{figure}
\textbf{Triplet loss.} We also use triplet loss~\cite{triplet} to train the model, which is formulated as,
\begin{equation}
\label{loss:triplet}
\mathcal{L}_{Tri} = [d_p - d_n + \delta]_+ ,
\end{equation}
where $d_p$ is the Euclidean distance between an anchor feature and a hard positive feature, and $d_n$ is the Euclidean distance between an anchor feature and a hard negative feature. $\delta$ is the margin of the triplet loss and $[\cdot]_+$ refers to $\max(\cdot,0)$.
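The hard positives and negatives above suggest batch-hard mining; a sketch under that assumption (margin value and toy features are hypothetical) is:

```python
import numpy as np

def batch_hard_triplet(feats, labels, margin=0.3):
    """Batch-hard triplet loss: for each anchor, the farthest positive
    and the nearest negative.  margin is a hypothetical value."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)  # (B, B)
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(feats)):
        d_p = d[i][same[i] & (np.arange(len(feats)) != i)].max()
        d_n = d[i][~same[i]].min()
        losses.append(max(d_p - d_n + margin, 0.0))
    return float(np.mean(losses))

feats = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [1.1, 0.0]])
labels = np.array([0, 0, 1, 1])
# each anchor has d_p = 0.1 and d_n >= 0.9, so every hinge is inactive
assert batch_hard_triplet(feats, labels) == 0.0
```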
\subsection{MetaBN}
\label{sec:metabn}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/MetaBN.pdf}
\vspace{-.1in}
\caption{Detailed architecture of MetaBN. MetaBN first utilizes the mean and variance of meta-train mini-batch features to construct Gaussian Distributions. Then, features sampled from these distributions are mixed with meta-test mini-batch features for generating new meta-test features.}
\label{fig:MetaBN}
\end{figure}
In our meta-learning strategy, the meta-test loss is important for learning generalizable representations, since the meta-test plays the role of the ``unseen'' domain. Intuitively, if the meta-test examples are sampled from more diverse distributions, the model will be optimized to be more robust to variations and thus be more generalizable to unseen domains. To achieve this goal, we introduce MetaBN to generate more diverse meta-test features at the feature-level. As shown in Fig.~\ref{fig:overall}, we replace the last batch normalization layer (BN)~\cite{bn} in the network with MetaBN. During training, MetaBN utilizes the domain information from meta-train domains to inject domain-specific information into meta-test features. This process can diversify meta-test features, enabling the model to simulate more feature variations. The operation of MetaBN is illustrated in Fig.~\ref{fig:MetaBN}.
\par
In the meta-train stage, for the $i$th meta-train domain, MetaBN normalizes the meta-train features as the traditional BN, and saves the mini-batch mean $\mu_i$ and mini-batch variance $\sigma_i$, which are used in the following meta-test stage.
In the meta-test stage, MetaBN uses the saved means and variances to form $N_S-1$ Gaussian distributions. Note that the generated distributions mainly reflect high-level domain information instead of specific identity information. This is because each saved mean and variance is calculated over samples belonging to dozens of identities. Considering this factor, we sample features from these distributions and inject these domain-specific features into the meta-test features.
Specifically, for the $i$th distribution, we sample one feature $z^i_j$ for each meta-test feature:
\begin{equation}
z^i_j \sim \mathcal{N}\left(\mu_i, \sigma_i \right),
\end{equation}
where $\mathcal{N}$ denotes Gaussian Distribution. By doing so, we obtain $B$ (the batch size of meta-test features) sampled features, which are mixed with the original meta-test features for generating new features $F_{T}^{i}$,
\begin{equation}
F_{T}^{i} = \lambda F_T + (1-\lambda) Z^i,
\end{equation}
where $F_T$ denotes the original meta-test features. $Z^i = [z^i_0, z^i_1, \cdots, z^i_B]$ denotes $B$ sampled features from the $i$th Gaussian Distribution.
$\lambda$ is the mixing coefficient, which is sampled from a Beta distribution, \ie, $\lambda \sim \mathrm{Beta}(1,1)$.
Finally, the mixed features are normalized by batch normalization,
\begin{equation}
f_{T}^i = \gamma \frac{F_{T}^{i}-\mu_{T}^{i}}{\sqrt{{\sigma_{T}^{i}}^2 + \epsilon}} + \beta,
\end{equation}
where $\mu_{T}^{i}$ and ${\sigma_{T}^{i}}$ denote mini-batch mean and variance of $F_{T}^{i}$. $\gamma$ and $\beta$ denote the learnable parameters that scale and shift the normalized value.
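As a concrete (and simplified) illustration of the three steps above --- sampling $z^i_j \sim \mathcal{N}(\mu_i, \sigma_i)$, Beta-mixing, and re-normalizing --- the following NumPy sketch mirrors the computation. The function name, the fixed affine parameters ($\gamma = 1$, $\beta = 0$), and the assumption that the saved $\sigma_i$ holds per-channel variances are our own simplifications, not the paper's implementation:

```python
import numpy as np

def metabn_mix(F_T, mu_i, var_i, rng, eps=1e-5):
    """Sketch of MetaBN at meta-test time: inject the i-th meta-train
    domain's statistics into the meta-test features, then normalize."""
    B, C = F_T.shape
    # one sampled domain-specific feature per meta-test feature
    Z = rng.normal(loc=mu_i, scale=np.sqrt(var_i), size=(B, C))
    # mixing coefficient lambda ~ Beta(1, 1), i.e. uniform on [0, 1]
    lam = rng.beta(1.0, 1.0)
    F_mix = lam * F_T + (1.0 - lam) * Z
    # batch normalization of the mixed features (affine params fixed here)
    mu, var = F_mix.mean(axis=0), F_mix.var(axis=0)
    gamma, beta = 1.0, 0.0
    return gamma * (F_mix - mu) / np.sqrt(var + eps) + beta
```

In the actual layer, $\gamma$ and $\beta$ remain learnable parameters and the operation runs on mini-batches of convolutional feature maps.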
\subsection{Training procedure of M$^3$L}
\label{sec:procedures}
During training, $N_S$ source domains are separated into $N_S-1$ meta-train domains and \emph{one} meta-test domain at each iteration. The model is optimized by the losses calculated in the meta-train and meta-test stages.
\begin{algorithm}[t]
\caption{Training procedure of M$^3$L}
\label{alg:metalearning}
\SetKwInput{Kwinit}{Init}{}{}
\SetKwInput{kwInput}{Input}{}{}
\kwInput{$N_S$ source domains $\mathcal D_S = \{\mathcal D^1_S, ...,\mathcal D^{N_S}_S\}$}
\Kwinit{Feature encoder $F$ parametrized by $\Theta$; \qquad
Inner loop learning rate $\alpha$; \qquad \qquad \qquad \qquad
Outer loop learning rate $\beta$;
Batch-size $B$;}
\For{iter in train\_iters}{
Randomly select a domain as meta-test $D_{mte}$; \\
Sample remaining domains as meta-train $D_{mtr}$; \\
\textbf{Meta-Train:}\\
Sample $B$ images $X_S^k$ from each meta-train domain $D^k_{mtr}$; \\
Compute meta-train loss $\mathcal{L}_{mtr}$ (Eq.\ref{loss:totalmtr});\\
Copy the original model and update the copied parameters $\Theta$ by Adam and inner loop learning rate $\alpha$:\\
$\Theta^{'} \leftarrow Adam(\nabla_{\Theta}\mathcal{L}_{mtr}, \Theta, \alpha)$;\\
\textbf{Meta-Test:}\\
Sample $B$ images $X_T$ from the meta-test domain $D_{mte}$;\\
Diversify meta-test features with MetaBN; \\
Compute meta-test loss $\mathcal{L}_{mte}$ (Eq.\ref{loss:mte}); \\
\textbf{Meta Optimization:}\\
Update the original model parameters $\Theta$ by Adam and outer loop learning rate $\beta$: \\
Gradient: $g \leftarrow \nabla_{\Theta}(\mathcal{L}_{mtr}(\Theta) + \mathcal{L}_{mte}(\Theta^{'}))$;\\
$\Theta \leftarrow Adam(g,\Theta,\beta)$.\\
}
\end{algorithm}
\textbf{Meta-train.} For each meta-train domain, the meta-train loss is a combination of memory-based identification (Eq.\ref{loss:identification}) and triplet losses (Eq.\ref{loss:triplet}), \ie,
\begin{equation}
\label{loss:mtr}
\mathcal{L}_{mtr}^i = \mathcal{L}_{Tri}(X^i_S;\Theta) + \mathcal{L}_{M}(X^i_S, \mathcal{M}^i_S;\Theta),
\end{equation}
where $\Theta$ denotes the parameters of the network. $X^i_S$ and $\mathcal{M}^i_S$ denote the training samples and memory of the $i$th meta-train domain, respectively.
The total loss for meta-train is averaged over $N_S-1$ meta-train domains, formulated as,
\begin{equation}
\label{loss:totalmtr}
\mathcal{L}_{mtr} = \frac{1}{N_S-1}\sum\limits_{i=1}^{N_S-1}\mathcal{L}_{mtr}^i.
\end{equation}
\textbf{Meta-test.} In the meta-test stage, the meta-test loss is computed on the meta-test domain with the new parameters $\Theta^{'}$, which are obtained by optimizing $\Theta$ with $\mathcal{L}_{mtr}$. With the MetaBN proposed in Sec.~\ref{sec:metabn}, we can obtain $N_S-1$ mixed features for each meta-test sample. The average memory-based identification loss over these features is considered as the meta-test memory-based identification loss. The meta-test loss is:
\begin{equation}
\small
\label{loss:mte}
\begin{aligned}
\mathcal{L}_{mte} &= \mathcal{L}_{Tri}(X_T;\Theta^{'})+\frac{1}{N_S-1}\sum\limits_{k=1}^{N_S-1}\mathcal{L}_M(f_{T}^{k}, \mathcal{M}_T;\Theta^{'}),
\end{aligned}
\end{equation}
where $X_T$ denotes the meta-test samples and $f_{T}^{k}$ denotes the $k$th mixed features generated by the MetaBN.
\textbf{Meta Optimization.} Finally, the model is optimized by the objective in Eq.\ref{loss:meta}. The optimization procedure of our M$^3$L is summarized in Alg.~\ref{alg:metalearning}.
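Structurally, one iteration of the training procedure follows the familiar meta-learning pattern: an inner update on a copy of the parameters, followed by an outer update that combines the meta-train and meta-test gradients. The toy sketch below uses scalar parameters, plain gradient steps instead of Adam, and a first-order approximation of the meta-gradient (it does not differentiate through the inner update), so it illustrates only the control flow, not the exact update used in the paper:

```python
def meta_step(theta, grad_mtr, grad_mte, alpha, beta_lr):
    """One toy meta-iteration on scalar parameters.
    grad_mtr / grad_mte: gradient functions of the meta-train and
    meta-test losses (first-order approximation, plain SGD)."""
    # inner loop: update a *copy* of the parameters with the meta-train loss
    theta_prime = theta - alpha * grad_mtr(theta)
    # outer loop: combine the meta-train gradient at theta with the
    # meta-test gradient evaluated at the updated copy
    g = grad_mtr(theta) + grad_mte(theta_prime)
    return theta - beta_lr * g
```

Because the outer step uses the meta-test gradient at $\Theta^{'}$, the update favors parameters whose inner-loop adaptation also helps the held-out domain.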
\section{Experiments}
\label{sec:experiments}
\subsection{Benchmarks and Evaluation Metrics}
We conduct experiments on four large-scale person re-identification benchmarks: Market-1501~\cite{market1501}, DukeMTMC-reID~\cite{dukemtmc2,dukemtmc}, CUHK03~\cite{cuhk03,rerank} and MSMT17~\cite{msmt17}. \textit{For studying the multi-source DG}, we divide these four datasets into two parts: three domains as source domains for training and the other one as target domain for testing.
The statistics of these four benchmarks are shown in Table \ref{tab:datasets}.
For simplicity, we denote Market-1501, DukeMTMC-reID, CUHK03, and MSMT17 as M, D, C, and MS in tables.
\begin{table}[t]
\caption{Statistics of Person ReID Benchmarks.}
\centering
\label{tab:datasets}
\vspace{-3mm}
\fontsize{9pt}{10pt}\selectfont
\begin{tabular}{p{3cm}|p{1cm}<{\centering}p{1.2cm}<{\centering}p{1.3cm}<{\centering}}
\toprule
Benchmarks & \# IDs & \# images & \# cameras \\
\midrule
Market-1501~\cite{market1501} & 1,501 & 32,217 & 6 \\
DukeMTMC-reID~\cite{dukemtmc} & 1,812 & 36,411 & 8 \\
CUHK03~\cite{cuhk03} & 1,467 & 28,192 & 2 \\
MSMT17~\cite{msmt17} & 4,101 & 126,441 & 15 \\
\bottomrule
\end{tabular}
\end{table}
\textit{\textbf{Note:} By default, for CUHK03, we use the old protocol (CUHK03, 26,263 images of 1,367 IDs for training) as the source domain for training the model and the detected subset of the new protocol (CUHK-NP~\cite{rerank}) as the target domain for testing; for MSMT17, we use the MSMT17\_V2 for both training and testing.
In Table~\ref{tab:sota}, we also provide the results of using the detected subset of CUHK-NP (7,365 images of 767 IDs for training) and MSMT17\_V1 for both training and testing, and we recommend using this setting in future studies.}
The cumulative matching characteristic (CMC) at Rank-1 and mean average precision (mAP) are used to evaluate performance on the target testing set.
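For readers unfamiliar with these metrics, the sketch below shows how Rank-1 and mAP are computed from a query-gallery distance matrix. It omits the camera-ID and junk-image filtering used in the official evaluation protocols, so it is an illustration rather than the benchmark code (it also assumes every query has at least one correct gallery match):

```python
import numpy as np

def rank1_and_map(dist, q_ids, g_ids):
    """dist: (num_query, num_gallery) distance matrix;
    q_ids / g_ids: integer identity labels of queries and gallery."""
    r1_hits, aps = [], []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])            # gallery sorted by distance
        matches = g_ids[order] == q_ids[i]     # True where identity matches
        r1_hits.append(float(matches[0]))      # CMC at Rank-1
        hits = np.flatnonzero(matches)
        # average precision: precision at each correct match position
        aps.append(np.mean([(k + 1) / (pos + 1) for k, pos in enumerate(hits)]))
    return float(np.mean(r1_hits)), float(np.mean(aps))
```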
\subsection{Implementation Details}
We implement our method with two common backbones, \textit{i.e.}, ResNet-50~\cite{resnet} and IBN-Net50~\cite{ibn}.
Images are resized to 256$\times$128 and the training batch size is set to 64. We use random flipping and random cropping for data augmentation. For the memory, the momentum coefficient $m$ is set to 0.2 and the temperature factor $\tau$ is set to 0.05. The margin $\delta$ of triplet loss is 0.3. To optimize the model, we use Adam optimizer with a weight decay of 0.0005. The learning rate of inner loop $\alpha$ and outer loop $\beta$ are initialized to $3.5\times10^{-5}$ and increase linearly to $3.5\times10^{-4}$ in the first 10 epochs. Then, $\alpha$ and $\beta$ are decayed by 0.1 at the 30th epoch and 50th epoch. The total training stage takes 60 epochs.
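The learning-rate schedule just described can be written out explicitly; whether the peak value is reached exactly at epoch 9 or epoch 10 is our assumption, since the text does not pin the warmup endpoints down:

```python
def lr_at_epoch(epoch, lr_min=3.5e-5, lr_max=3.5e-4):
    """Linear warmup over the first 10 epochs, then step decay (x0.1)
    at epochs 30 and 50; training runs for 60 epochs in total."""
    if epoch < 10:
        # linear warmup: epoch 0 -> lr_min, epoch 9 -> lr_max (assumed endpoints)
        return lr_min + (lr_max - lr_min) * epoch / 9
    if epoch < 30:
        return lr_max
    if epoch < 50:
        return lr_max * 0.1
    return lr_max * 0.01
```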
\textbf{Baseline.} For the baseline, we directly train the model with the memory-based identification loss and triplet loss using the data of all the source domains. That is, the baseline does not apply the meta-learning strategy and MetaBN.
\subsection{Comparison with State-of-the-Art methods}
\begin{table*}[t]
\caption{Comparison with State-of-the-Arts domain generalization methods on four large-scale person ReID benchmarks --- Market-1501 (M), DukeMTMC-reID (D), CUHK03 (C) and MSMT17 (MS).
The performance is evaluated quantitatively by mean average precision (mAP) and cumulative matching characteristic (CMC) at Rank-1 (R1).}
\vspace{-.15in}
\centering
\label{tab:sota}
\fontsize{8.5pt}{9.5pt}\selectfont
\begin{threeparttable}
\begin{tabular}{p{2.4cm}|p{1.3cm}<{\centering}|p{1cm}<{\centering}|p{1cm}<{\centering}|p{0.8cm}<{\centering}p{0.8cm}<{\centering}|p{1.3cm}<{\centering}|p{1cm}<{\centering}|p{1cm}<{\centering}|p{0.8cm}<{\centering}p{0.8cm}<{\centering}}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Source} & \multirow{2}{*}{IDs} & \multirow{2}{*}{Images} & \multicolumn{2}{c|}{Market-1501}& \multirow{2}{*}{Source} & \multirow{2}{*}{IDs} & \multirow{2}{*}{Images} & \multicolumn{2}{c}{DukeMTMC} \\
&&&& mAP & R1 &&&& mAP & R1\\
\midrule
OSNet-IBN~\cite{zhou2019omni} & \multirow{4}{*}{Com-MS} & \multirow{4}{*}{4,101} & \multirow{4}{*}{126,441} & 37.2 & 66.5 & \multirow{4}{*}{Com-MS} & \multirow{4}{*}{4,101} & \multirow{4}{*}{126,441} & 45.6 & 67.4 \\
OSNet-AIN~\cite{zhou2019learning} & & & & 43.3 & 70.1 & & & & 52.7 & 71.1 \\
SNR~\cite{jin2020style} & & & & 41.4 & 70.1 & & & & 50.0 & 69.2 \\
QAConv$_{50}$~\cite{Liao2020QAConv} & & & & 43.1 & 72.6 & & & & 52.6 & 69.4 \\
\midrule
QAConv$_{50}$~\cite{Liao2020QAConv}* & \multirow{3}{*}{MS+D+C} & \multirow{3}{*}{3,110} & \multirow{3}{*}{75,406} & 35.6 & 65.7 & \multirow{3}{*}{MS+M+C} & \multirow{3}{*}{3,159} & \multirow{3}{*}{71,820} & 47.1 & 66.1 \\
M$^3$L~(ResNet-50) & & & & 48.1 & 74.5 & & & & 50.5 & \textbf{69.4} \\
M$^3$L~(IBN-Net50) & & & & \textbf{50.2} & \textbf{75.9} & & & & \textbf{51.1} & 69.2 \\
\midrule
QAConv$_{50}$~\cite{Liao2020QAConv}*\dag & \multirow{3}{*}{\shortstack{MS+D\\+C-NP}} &\multirow{3}{*}{2,510} & \multirow{3}{*}{56,508} & 39.5 & 68.6 & \multirow{3}{*}{\shortstack{MS+M\\+C-NP}}& \multirow{3}{*}{2,559} & \multirow{3}{*}{52,922} & 43.4 & 64.9 \\
M$^3$L~(ResNet-50)\dag & & & & 51.1 & 76.5 & & & & 48.2 & 67.1 \\
M$^3$L~(IBN-Net50)\dag & & & & \textbf{52.5} & \textbf{78.3} & & & & \textbf{48.8} & \textbf{67.2} \\
\bottomrule
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{Source} & \multirow{2}{*}{IDs} & \multirow{2}{*}{Images} & \multicolumn{2}{c|}{CUHK-NP}& \multirow{2}{*}{Source} & \multirow{2}{*}{IDs} & \multirow{2}{*}{Images} & \multicolumn{2}{c}{MSMT17} \\
&&&& mAP & R1 &&&& mAP & R1\\
\midrule
QAConv$_{50}$~\cite{Liao2020QAConv} & Com-MS & 4,101 & 126,441 & 22.6 & 25.3 & D & 702 & 16,522 & 8.9 & 29.0 \\
\midrule
QAConv$_{50}$~\cite{Liao2020QAConv}* & \multirow{3}{*}{MS+D+M} & \multirow{3}{*}{2,494} & \multirow{3}{*}{62,079} & 21.0 & 23.5 & \multirow{3}{*}{D+M+C} & \multirow{3}{*}{2,820} & \multirow{3}{*}{55,748} & 7.5 & 24.3 \\
M$^3$L~(ResNet-50) & & & & 29.9 & 30.7 & & & & 12.9 & 33.0 \\
M$^3$L~(IBN-Net50) & & & & \textbf{32.1} & \textbf{33.1} & & & & \textbf{14.7} & \textbf{36.9} \\
\midrule
QAConv$_{50}$~\cite{Liao2020QAConv}*\dag & \multirow{3}{*}{MS+D+M} &\multirow{3}{*}{2,494} & \multirow{3}{*}{62,079} & 19.2 & 22.9 & \multirow{3}{*}{\shortstack{D+M\\+C-NP}}& \multirow{3}{*}{2,220} & \multirow{3}{*}{36,823} & 10.0 & 29.9 \\
M$^3$L~(ResNet-50)\dag & & & & 30.9 & \textbf{31.9} & & & & 13.1 & 32.0 \\
M$^3$L~(IBN-Net50)\dag & & & & \textbf{31.4} & 31.6 & & & & \textbf{15.4} & \textbf{37.1} \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\scriptsize
\item[*] We reimplement this work based on the authors' code on Github with the same source datasets as us. \\
\item[\dag] Model is trained and tested on the detected subset of CUHK-NP and MSMT17\_V1.
\end{tablenotes}
\end{threeparttable}
\end{table*}
Since there is no multi-source DG method evaluated on large-scale datasets, we compare our method with state-of-the-art single-source DG methods,
including OSNet-IBN~\cite{zhou2019omni}, OSNet-AIN~\cite{zhou2019learning}, SNR~\cite{jin2020style} and QAConv~\cite{Liao2020QAConv}. SNR~\cite{jin2020style} and QAConv~\cite{Liao2020QAConv} use the ResNet-50 as the backbone. OSNet-IBN~\cite{zhou2019omni} and OSNet-AIN~\cite{zhou2019learning} use their self-designed networks that have better performance than ResNet-50. When testing on Market-1501, DukeMTMC-reID, and CUHK03, the existing single-source DG methods utilize MSMT17 as the source domain for model training. They combine the train set and test set of MSMT17, which is denoted as Combined MSMT17 (Com-MS) in this paper.
To verify that the effectiveness of our method is obtained by multi-source meta-learning instead of training with more IDs and images, we only use the training sets of the source domains for model training.
For example, when using Market-1501 as the target domain, we train the model with the train sets of DukeMTMC-reID, CUHK03, and MSMT17, including 3,110 IDs and 75,406 images. The numbers of IDs and images are less than those of Combined MSMT17 (3,110 IDs \vs 4,101 IDs, and 75,406 images \vs 126,441 images). To conduct a fair comparison, we reimplement the recently published QAConv~\cite{Liao2020QAConv} with the same training data as ours.
Comparison results are reported in Table \ref{tab:sota}.
\textbf{Results on Market-1501 and DukeMTMC-reID.} From Table \ref{tab:sota}, we can make the following observations.
%
First, when using Combined MSMT17 as the source data, OSNet-AIN~\cite{zhou2019learning} and QAConv~\cite{Liao2020QAConv} achieve the best results on both Market-1501 and DukeMTMC-reID.
%
Second, compared to single-source DG methods that use more training data (Combined MSMT17), our M$^3$L outperforms them by a large margin on Market-1501 and achieves comparable results with them on DukeMTMC-reID. Specifically, when testing on Market-1501, with the same backbone, our M$^3$L surpasses SNR~\cite{jin2020style} by 6.7\% in mAP and 4.4\% in Rank-1 accuracy.
%
Third, when training with multiple source domains, with the same backbone, our M$^3$L produces significantly higher results than QAConv$_{50}$. Specifically, our M$^3$L outperforms QAConv$_{50}$ by 12.5\% in mAP on Market-1501 and by 3.4\% in mAP on DukeMTMC-reID. This demonstrates the superiority of our method over the method that considers all the source domains as one domain.
%
Fourth, when using the IBN-Net50 as the backbone, our M$^3$L can achieve better mAP than using ResNet-50.
\textbf{Results on CUHK03 and MSMT17.} There is only one method (QAConv~\cite{Liao2020QAConv}) evaluated on CUHK03 and MSMT17. When testing on MSMT17, QAConv~\cite{Liao2020QAConv} uses DukeMTMC-reID as the source data. Clearly, our M$^3$L achieves higher results than QAConv~\cite{Liao2020QAConv} on both datasets, no matter how many source domains QAConv is trained with.
We also find that both our M$^3$L and QAConv produce poor results on CUHK03 and MSMT17, indicating that there is still large room for improving generalizable models in DG.
\subsection{Ablation Studies}
\label{sec:ablation}
\textbf{Effectiveness of Meta-Learning.} To investigate the effectiveness of the proposed meta-learning strategy, we conduct ablation studies in Table \ref{tab:metalearning}. Clearly, the model trained with the proposed meta-learning strategy consistently improves the results with different backbones.
Specifically, with ResNet-50, adding meta-learning optimization increases the baseline by 5.3\% in Rank-1 accuracy on Market-1501 and by 3.7\% in Rank-1 accuracy on CUHK03.
With IBN-Net50 backbone, meta-learning strategy gains 5.4\% and 2.8\% improvement in mAP on Market-1501 and CUHK03, respectively.
This demonstrates that by simulating the train-test process during training, the meta-learning strategy helps the model to learn domain-invariant representations that can perform well on unseen domains.
\textbf{Effectiveness of MetaBN.}
As shown in Table~\ref{tab:metalearning}, plugging MetaBN into the meta-learning-based model further improves the generalization ability.
For ResNet-50 backbone, MetaBN improves the meta-optimized model by 1.3\% and 1.6\% in Rank-1 accuracy on Market-1501 and CUHK03. For IBN-Net50, we can observe similar improvements.
The results validate that diversifying meta-test features by MetaBN is able to help the model to learn more generalizable representations for unseen domains.
\begin{table}[t]
\centering
\caption{Ablation studies on meta-learning strategy and MetaBN. Models are trained with the other three datasets except the target dataset. ``Meta'': training with meta-learning strategy; ``MetaBN'': training with MetaBN.}
\label{tab:metalearning}
\vspace{-3mm}
\fontsize{8pt}{8pt}\selectfont
\begin{tabular}{p{1.3cm}|p{0.5cm}<{\centering}|p{0.9cm}<{\centering}|p{0.6cm}<{\centering}p{0.6cm}<{\centering}|p{0.6cm}<{\centering}p{0.6cm}<{\centering}}
\toprule
\multirow{2}{*}{Backbone} & \multirow{2}{*}{Meta} & \multirow{2}{*}{MetaBN} & \multicolumn{2}{c|}{MS+D+C$\rightarrow$M}
& \multicolumn{2}{c}{MS+D+M$\rightarrow$C} \\
&&& mAP & R1 & mAP & R1\\
\midrule
\multirow{3}{*}{ResNet-50} & $\times$& $\times$ & 41.1 & 67.9 & 25.7 & 25.4 \\
& \checkmark & $\times$ & 47.4 & 73.2 & 29.1 & 29.1 \\
& \checkmark & \checkmark & \textbf{48.1} & \textbf{74.5} & \textbf{29.9} & \textbf{30.7} \\
\midrule
\multirow{3}{*}{IBN-Net50} & $\times$& $\times$ & 43.6 & 71.1 & 28.2 & 29.4 \\
& \checkmark& $\times$ & 49.0 & 75.0 & 31.0 & 31.8 \\
& \checkmark& \checkmark & \textbf{50.2} & \textbf{75.9} & \textbf{32.1} & \textbf{33.1} \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Loss function components.} We conduct experiments to evaluate the impact of the memory-based identification loss and triplet loss. Results in Table \ref{tab:loss} show that the memory-based identification loss $\mathcal{L}_M$ is the predominant supervision for training a generalizable model and additionally adding the triplet loss $\mathcal{L}_{Tri}$ can slightly improve the performance.
\textbf{Comparison of different classifiers.} In Table~\ref{tab:idloss}, we compare different types of identification classifiers.
We have the following observations.
First, compared with the two parametric classifiers, our proposed non-parametric classifier gains higher improvement with the meta-learning strategy. Second, when directly training with multi-source data without the meta-learning, the model trained with memory-based identification loss achieves higher results. These two observations demonstrate that the proposed memory-based identification loss is suitable for multi-source DG and our meta-learning strategy.
\begin{table}[t]
\centering
\caption{Comparison of loss function components. $\mathcal{L}_M$ and $\mathcal{L}_{Tri}$ denote memory-based identification loss and triplet loss. Experiments are conducted with ResNet-50.}
\label{tab:loss}
\vspace{-3mm}
\fontsize{8pt}{9pt}\selectfont
\begin{tabular}{p{0.7cm}<{\centering}|p{0.7cm}<{\centering}|p{0.8cm}<{\centering}p{0.8cm}<{\centering}|p{0.8cm}<{\centering}p{0.8cm}<{\centering}}
\toprule
\multirow{2}{*}{$\mathcal{L}_M$} & \multirow{2}{*}{$\mathcal{L}_{Tri}$} & \multicolumn{2}{c|}{MS+D+C$\rightarrow$M}
& \multicolumn{2}{c}{MS+D+M$\rightarrow$C} \\
&& mAP & R1 & mAP & R1\\
\midrule
\checkmark& $\times$ & 47.8 & 73.6 & \textbf{29.9} & 30.3 \\
$\times$ & \checkmark & 35.1 & 59.7 & 20.9 & 19.9 \\
\checkmark & \checkmark & \textbf{48.1} & \textbf{74.5} & \textbf{29.9} & \textbf{30.7} \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Comparison of different classifiers. $\mathcal{L}_M$ denotes memory-based identification loss. $\mathcal{L}_{FCG}$ and $\mathcal{L}_{FCP}$ denote traditional identification loss with global classifier and parallel classifiers. ``Meta'' denotes training with the meta-learning strategy and MetaBN. Numbers in parentheses denote performance improvement gained by ``Meta''. Experiments are conducted with ResNet-50.}
\label{tab:idloss}
\vspace{-3mm}
\fontsize{8pt}{9pt}\selectfont
\begin{tabular}{p{1cm}|p{0.5cm}<{\centering}|p{1cm}<{\centering}p{1cm}<{\centering}|p{1cm}<{\centering}p{1cm}<{\centering}}
\toprule
\multirow{2}{*}{Loss} & \multirow{2}{*}{Meta} & \multicolumn{2}{c|}{MS+D+C$\rightarrow$M}
& \multicolumn{2}{c}{MS+D+M$\rightarrow$C} \\
& & mAP & R1 & mAP & R1\\
\midrule
\multirow{2}{*}{$\mathcal{L}_{FCG}$} & $\times$ & 37.7 & 67.0 & 21.2 & 20.9 \\
& \checkmark & 39.7~(\textcolor{red}{2.0}) & 68.3~(\textcolor{red}{1.3}) & 21.2~(\textcolor{red}{0.0}) & 21.9~(\textcolor{red}{1.0}) \\
\midrule
\multirow{2}{*}{$\mathcal{L}_{FCP}$} & $\times$ & 37.7 & 67.0 & 21.2 & 20.9 \\
& \checkmark & 40.9~(\textcolor{red}{3.2}) & 69.3~(\textcolor{red}{2.3}) & 23.9~(\textcolor{red}{2.7}) & 24.3~(\textcolor{red}{3.4}) \\
\midrule
\multirow{2}{*}{$\mathcal{L}_{M}$} & $\times$ & 41.1 & 67.9 & 25.7 & 25.4 \\
& \checkmark & 48.1~(\textbf{\textcolor{red}{7.0}}) & 74.5~(\textbf{\textcolor{red}{6.6}}) & 29.9~(\textbf{\textcolor{red}{4.2}}) & 30.7~(\textbf{\textcolor{red}{5.3}}) \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Effectiveness of Multi-Source.}
Table \ref{tab:source} shows the comparison between two-source DG and multi-source DG. Despite bringing more domain bias, training with more source domains consistently produces higher results when testing on an unseen domain. This demonstrates the significance of studying multi-source DG.
\begin{table}[t]
\centering
\caption{Comparison of training with different source domains. Experiments are conducted with ResNet-50.}
\label{tab:source}
\vspace{-3mm}
\fontsize{8pt}{9pt}\selectfont
\begin{tabular}{p{1.2cm}|p{0.7cm}<{\centering}p{0.7cm}<{\centering}|p{1.2cm}|p{0.7cm}<{\centering}p{0.7cm}<{\centering}}
\toprule
\multirow{2}{*}{Sources} & \multicolumn{2}{c|}{Market-1501}
& \multirow{2}{*}{Sources} & \multicolumn{2}{c}{CUHK03} \\
& mAP & R1 & & mAP & R1\\%& && mAP & Rank-1 \\
\midrule
MS+D & 38.5 & 66.2 & MS+D & 21.9 & 23.7 \\
MS+C & 39.8 & 65.8 & MS+M & 27.1 & 27.8 \\
MS+D+C & \textbf{48.1} & \textbf{74.5} & MS+D+M & \textbf{29.9} & \textbf{30.7} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Visualization}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/tsne_rebuttal.pdf}
\vspace{-.2in}
\caption{Visual distributions of four person ReID benchmarks. The distributions are obtained from inference features of (a)~ImageNet pretrained model, (b) Baseline, and (c) M$^3$L. All of the models are trained with ResNet-50, and the dimension of inference features is reduced by t-SNE~\cite{tsne}.}
\vspace{-.02in}
\label{fig:distribution}
\end{figure}
To better understand the effectiveness of our approach, we visualize the t-SNE~\cite{tsne} distributions of the features on the four datasets for different models, \textit{i.e.}, ImageNet pretrained model, baseline, and M$^3$L. Results are shown in Fig.~\ref{fig:distribution}.
As shown in Fig.~\ref{fig:distribution}(a), without training, distributions vary by domains: (1) MSMT17 is the largest dataset that contains images in a variety of situations; (2) DukeMTMC-reID and Market-1501 are closely related to MSMT17 and each other; (3) CUHK03 has a relatively more distinct distribution compared with the others.
Fig.~\ref{fig:distribution}(b) and Fig.~\ref{fig:distribution}(c) show the visual distributions of the four datasets after training. The model is trained on DukeMTMC-reID, CUHK03, and MSMT17, and tested on Market-1501.
Comparing Fig.~\ref{fig:distribution}(b) with Fig.~\ref{fig:distribution}(c), we observe that the features from the source and target domains of M$^3$L~(Fig.~\ref{fig:distribution}(c)) are clustered more compactly, indicating that M$^3$L leads the model to learn more generic and domain-agnostic representations.
\section{Conclusion}
In this paper, we propose a Memory-based Multi-source Meta-Learning (M$^3$L) framework for multi-source domain generalization (DG) in person ReID. The proposed meta-learning strategy enables the model to simulate the train-test process of DG during training, which can efficiently improve the generalization ability of the model on unseen domains. Besides, we introduce a memory-based module and MetaBN to take full advantage of meta-learning and obtain further improvement.
Extensive experiments demonstrate the effectiveness of our framework for training a generalizable ReID model. Our method achieves state-of-the-art generalization results on four large-scale benchmarks.
\vspace*{\baselineskip}
{\noindent\textbf{Acknowledgements}} This work is supported by the National Nature Science Foundation of China (No. 61876159, 61806172, 61662024, 62076116 and U1705286); the China Postdoctoral Science Foundation Grant (No. 2019M652257); the Fundamental Research Funds for the Central Universities (Xiamen University, No. 20720200030); the EU H2020 SPRING No. 871245 and AI4Media No. 951911 projects; and the Italy-China collaboration project TALENT: 2018YFE0118400.
\section{\label{s:intro}Introduction}
One of the most remarkable achievements of the past decade concerns the advent of experimental studies in ultracold fermionic atom gases of the crossover from the regime of Bardeen-Cooper-Schrieffer (BCS) weakly bound Cooper pairs to the regime of Bose-Einstein condensation (BEC) of diatomic molecules~\cite{r:OHara:2002ly, r:Gehm:2003ve, r:Gehm:2003qf, r:Bourdel:2003bh, r:Regal:2003dq, r:Regal:2003cr, r:Regal:2004nx, r:Gupta:2003oq, r:Gupta:2004kl, r:Zwierlein:2003tg, r:Bartenstein:2004hc}. In turn, these studies made possible for the first time the experimental study of the ground-state properties of a many-body system composed of spin-1/2 fermions interacting via a zero-range, infinite scattering length contact interaction. This regime is known as the unitarity limit~\cite{r:unitarity} and is of particular interest in astrophysics because of its implications regarding the equation of state for neutron matter \cite{r:Heiselberg:2000zr}, thus emphasizes the far-reaching implications of these recent studies.
Much of the theoretical work on systems composed of spin-1/2 fermions interacting via an adjustable, attractive potential has focused on interactions that are governed by a single parameter, namely the s-wave scattering length, $a$, of two atoms with different spin components~\cite{r:Leggett:1980fk,r:Randeria:1995ij}. This description is valid only if $|a| \gg r_0$ and $k_F \, r_0 \ll 1$, where $r_0$ is the range of the potential and $k_\mathrm{F}$ denotes the Fermi momentum of the gas and is conventionally related to the total density of particles, $\rho$, by the noninteracting Fermi gas formula
\begin{equation*}
\rho
=
\sum_{\sigma}
\int_0^{\kF} \!\!\! \frac{\mathrm{d}^3 k}{(2\pi)^3}
=
\frac{\kF^3}{3 \pi^2} \>.
\end{equation*}
The above momentum integral is performed over the interior volume of the Fermi sphere and $\sigma$ denotes the spin component of the fermion, i.e. $\sigma = \pm 1/2$. Then, the only independent dimensionless variable in the problem is $a \, k_F$. Thus, this description is only applicable to dilute systems like the ultracold fermionic atom gases, not the high-density regime found in conventional superconductors \cite{r:Schrieffer:1964bs}. The potential supports a 2-body bound state for $(a \, k_F)^{-1}>0$, but this molecular state passes through zero energy and vanishes into the continuum at $(a \, k_F)^{-1} = 0$, the position of the Feshbach resonance. Then, the BCS and BEC limits correspond to $(a \, k_F)^{-1}$$\rightarrow$$- \infty$ and $(a \, k_F)^{-1}$$\rightarrow$$+\infty$, respectively, whereas the unitarity limit is defined as the limit near Feshbach resonances where $a$ is much larger than the inter-particle distance ($|a| \, \kF \gg 1$) and corresponds to the BCS to BEC crossover at the singularity of the scattering length. This limit is the same when approached with positive or negative scattering length. In the unitarity limit, the correlations are deemed to be significant and the system attains universal behavior, independent of the shape of the potential and dependent only on the particle density and the system dimensionality~\cite{r:Nishida:2006fk,r:Nishida:2007uq,r:Mihaila:2009kx}.
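For reference, the spin sum contributes a factor of 2 and the angular integration evaluates explicitly, giving
\begin{equation*}
\rho
=
\sum_{\sigma}
\int_0^{\kF} \!\!\! \frac{\mathrm{d}^3 k}{(2\pi)^3}
=
\frac{2}{(2\pi)^3} \, \frac{4\pi}{3} \, \kF^3
=
\frac{\kF^3}{3 \pi^2} \>,
\qquad
\kF = \bigl( 3 \pi^2 \rho \bigr)^{1/3} \>.
\end{equation*}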
The importance of correlations in the ground state of dilute fermionic matter in the unitarity limit is measured by the numerical value of the ratio, $\varepsilon / \varepsilon_0$, where $\varepsilon$ and $\varepsilon_0$ denote the ground-state energies per particle of the interacting and noninteracting systems, respectively. We recall that the noninteracting energy per particle is defined as
\begin{equation*}
\varepsilon_0
=
\frac{1}{\rho} \,
\sum_{\sigma}
\int_0^{\kF} \!\!\! \frac{\mathrm{d}^3 k}{(2\pi)^3} \ \epsilon_k
=
\frac{3}{5} \, \epsilonF \>,
\end{equation*}
where we introduced the notation,
$\epsilon_k = \gamma k^2$ with $\gamma = \hbar^2 / (2m)$, and $\epsilonF = \gamma \kF^2$ is the Fermi energy.
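Carrying out the radial integral makes the origin of the factor $3/5$ explicit:
\begin{equation*}
\varepsilon_0
=
\frac{2}{\rho}
\int_0^{\kF} \!\!\! \frac{\mathrm{d}^3 k}{(2\pi)^3} \ \gamma k^2
=
\frac{2}{\rho} \, \frac{4\pi}{(2\pi)^3} \, \frac{\gamma \kF^5}{5}
=
\frac{\gamma \kF^5}{5 \pi^2 \rho}
=
\frac{3}{5} \, \gamma \kF^2
=
\frac{3}{5} \, \epsilonF \>.
\end{equation*}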
An upper bound to the value of $\varepsilon / \varepsilon_0$ at zero temperature was set by the quantum Monte Carlo (QMC) study performed by
Carlson \emph{et al}.~\cite{r:Carlson:2008fk,r:Carlson:2005uq,r:Chang:2004kx,r:Carlson:2003fv} that gave the value $(\varepsilon / \varepsilon_0)_\mathrm{QMC} = 0.40(01)$. Instead, the ``universal'' curve describing the BCS to BEC crossover in the standard BCS variational picture~\cite{r:Engelbrecht:1997fk,r:Parish:2005fk} derived by Leggett~\cite{r:Leggett:1980fk} gives the BCS value of the ratio of the chemical potential to the Fermi energy, $(\mu / \epsilon_F)_{BCS} = 0.59$.
Other theoretical and experimental values for $\varepsilon/ \varepsilon_0$ are summarized elsewhere~\cite{r:Heiselberg:2004dz,r:Levinsen:2006kl,r:Burovski:2006qa,r:Nikolic:2007fu}.
Recently we introduced a new theoretical framework for a dilute gas of Bose particles with tunable interactions~\cite{r:Cooper:2010fk}, the bosonic counterpart of the system of fermions discussed in this paper. For the Bose system, our theoretical description is based on a loop expansion of the one-particle irreducible (1-PI) effective action in terms of composite-field propagators by rewriting the Lagrangian in terms of auxiliary fields related to the normal and anomalous densities~\cite{r:Cooper:2010fk}. The leading-order auxiliary field (LOAF) approximation in the case of an interacting dilute Bose gas describes a large interval of values of the coupling constant, satisfies Goldstone's theorem and yields a Bose-Einstein transition that is second order, while also predicting reasonable values for the depletion.
In this paper we will derive the corresponding auxiliary field formalism for a dilute fermionic atom gas with tunable interactions, thus establishing the generality of our auxiliary field formalism. We will show that the LOAF approximation in the fermionic case corresponds to the BCS ansatz. At zero temperature the fermionic LOAF equations are the same as the equations derived by Leggett~\cite{r:Leggett:1980fk}, whereas the finite-temperature results correspond to those discussed earlier by S\'a de~Melo, Randeria, and Engelbrecht~\cite{r:Melo:1993vn,r:Engelbrecht:1997fk}. Hence, we find that the BCS ansatz is the only relevant auxiliary field theory in a dilute interacting Fermi gas. In the auxiliary field approach, one can systematically improve upon the LOAF approximation by calculating the 1-PI action corrections, order by order.
A related approach for the relativistic four-fermi theory can be found in Refs.~\onlinecite{r:Chodos:2000fk,r:CCMS01,r:Mihaila:2006fk}.
This paper is organized as follows: In Sec.~\ref{s:thm} we discuss the partition function for an infinite homogeneous system of spin-1/2 fermions with arbitrary populations of spin-up and spin-down fermions. In Sec.~\ref{s:auxformI} we discuss rewriting the Lagrangian in terms of auxiliary fields. The resulting effective action is discussed in Sec.~\ref{s:Seff}. The corresponding properties of a uniform system in equilibrium are derived in Sec.~\ref{s:thermalequib}. In Sec.~\ref{ss:caseA}, we specialize to the case of systems with equal populations of spin-up and spin-down fermions. We summarize our findings in Sec.~\ref{s:concl}.
\section{\label{s:thm}The partition function and path integrals}
For a grand canonical ensemble, the partition function for a collection of interacting Fermi particles can be written as
\begin{equation}\label{thm.e:rhoGCEI}
Z[T,\mu,V]
=
e^{- \beta \, \Omega[T,\mu,V] }
=
\Tr{ e^{- \beta \, ( \, \hat{H} - \mu \, \hat{N} \, ) } } \>,
\end{equation}
where $\Omega[T,\mu,V]$ is the grand potential and we have set $\beta = 1 / T$, with the temperature measured in units of $\kBolt$. Here $T$ is the temperature, $\mu$ the chemical potential, and $V$ the volume. From the differential of the grand potential, we find
\begin{align}
N[T,\mu,V]
&=
- \Partial{\,\Omega}{\mu}{T}{V} \>,
\notag \\
p[T,\mu,V]
&=
- \Partial{\,\Omega}{V}{T}{\mu} \>,
\label{thm.e:SNpI} \\
S[T,\mu,V]
&=
- \Partial{\,\Omega}{T}{\mu}{V} \>,
\notag
\end{align}
and the energy $E$ is given by
\begin{equation}\label{thm.e:U}
E
=
\Omega + T S + \mu N \>.
\end{equation}
The partition function can be written as a path integral,
\begin{equation}\label{pf.e:Z-I}
Z[T,\mu,V]
=
\mathcal{N}
\iint \text{D} \psi \, \text{D} \psi^{\ast} \,
e^{-S[\psi,\psi^{\ast};T,\mu,V]} \>,
\end{equation}
where $S[\psi,\psi^{\ast};T,\mu,V]$ is the negative of the Euclidean (thermal) action obtained by mapping the physical action to imaginary time, $t \mapsto - i \hbar \tau$. The action $\mathcal{S}$ for a collection of fermions interacting by means of a short-range contact potential $V(r) = \lambda_0 \, \delta(\mathbf{r})$ is given by
\begin{equation}\label{pf.e:Sdef}
\mathcal{S}[ \, \psi,\psi^{\ast} \, ]
=
\int [\mathrm{d} x] \,
\mathcal{L}[ \, \psi,\psi^{\ast} \, ] \>,
\end{equation}
where we have put $\int [\mathrm{d} x] \equiv \int \! \mathrm{d}^3 x \int_{0}^{\beta} \! \mathrm{d} \tau$, and where
\begin{align}
\mathcal{L}[ \, \psi,\psi^{\ast} \, ]
&=
\sum_{\sigma}
\biggr \{ \,
\frac{1}{2} \,
\Bigl [ \,
\psi_{\sigma}^{\ast}(x) \,
\frac{\partial \psi_{\sigma}(x)}{\partial \tau}
+
\psi_{\sigma}(x) \,
\frac{\partial \psi_{\sigma}^{\ast}(x)}{\partial \tau} \,
\Bigr ]
\notag \\
& \qquad
+
\psi^{\ast}_{\sigma}(x) \,
\Bigl [ \,
-
\gamma \nabla^2
-
\mu_{\sigma} \,
\Bigr ] \,
\psi^{\phantom\ast}_{\sigma}(x)
\label{pf.e:LagI} \\
& \qquad
+
\frac{\lambda_0}{2} \,
\psi^{\ast}_{\sigma}(x) \, \psi^{\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{-\sigma}(x) \, \psi^{\phantom\ast}_{\sigma}(x) \,
\biggr \} \>.
\notag
\end{align}
We have suppressed the dependence of quantities on the thermodynamic variables $(T,\mu_{\pm},V)$. The fields are described by two-component complex anticommuting Grassmann fields $\psi_{\sigma}(x)$ which obey the algebra,
\begin{gather}
\AntiComm{\psi^{\phantom\ast}_{\sigma}(x)}
{\psi^{\ast}_{\sigma'}(x')}
=
\AntiComm{\psi^{\phantom\ast}_{\sigma}(x)}
{\psi^{\phantom\ast}_{\sigma'}(x')}
\label{pf.e:psialgebra} \\
=
\AntiComm{\psi^{\ast}_{\sigma}(x)}
{\psi^{\ast}_{\sigma'}(x')}
=
0 \>,
\notag
\end{gather}
with $x \equiv \Set{\mathbf{r},\tau}$; the values $\sigma = \pm 1$ correspond to the usual spin-up ($\uparrow$) and spin-down ($\downarrow$) fermions. Using a \emph{left} derivative convention for Grassmann derivatives, variation of the action with respect to $\psi^{\ast}_{\sigma}(x)$ leads to the thermal equations of motion,
\begin{equation*}\label{pf.e:psieomI}
\Bigl \{ \,
-
\gamma \nabla^2
+
\frac{ \partial }{\partial \tau}
-
\mu_{\sigma}
+
\lambda_0 \,
\bigl [ \,
\psi^{\ast}_{-\sigma}(x) \, \psi^{\phantom\ast}_{-\sigma}(x) \,
\bigr ] \,
\Bigr \} \,
\psi_{\sigma}(x)
=
0 \>.
\end{equation*}
(No sum over $\sigma$ here.)
Normal and anomalous densities $\rho_{\sigma}^{\phantom\ast}(x)$ and $\kappa_{\sigma}^{\phantom\ast}(x)$ are defined by
\begin{subequations}\label{pf.e:nadensdef}
\begin{align}
\rho_{\sigma}^{\phantom\ast}(x)
&=
\psi^{\ast}_{\sigma}(x) \, \psi^{\phantom\ast}_{\sigma}(x)
=
\rho_{\sigma}^{\ast}(x) \>,
\label{fp.e:rhodendef} \\
\kappa_{\sigma}(x)
&=
\psi^{\phantom\ast}_{\sigma}(x) \,
\psi^{\phantom\ast}_{-\sigma}(x)
=
- \kappa_{-\sigma}(x) \>.
\label{fp.e:kappadendef}
\end{align}
\end{subequations}
These densities have the property that
\begin{subequations}\label{pf.e:denprops}
\begin{align}
\sum_{\sigma}
\rho^{\phantom\ast}_{\sigma}(x) \, \rho^{\ast}_{-\sigma}(x)
&=
\sum_{\sigma}
\rho^{\ast}_{\sigma}(x) \, \rho^{\phantom\ast}_{-\sigma}(x)
\label{pf.e:rhoprop} \\
&=
+
\sum_{\sigma}
\psi^{\ast}_{\sigma}(x) \,
\psi^{\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{\sigma}(x) \>,
\notag \\
\sum_{\sigma}
\kappa_{\sigma}^{\phantom\ast}(x) \,
\kappa_{-\sigma}^{\ast}(x)
&=
\sum_{\sigma}
\kappa_{\sigma}^{\ast}(x) \,
\kappa_{-\sigma}^{\phantom\ast}(x)
\label{pf.e:kappadenprop} \\
&=
-
\sum_{\sigma}
\psi^{\ast}_{\sigma}(x) \,
\psi^{\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{\sigma}(x) \>.
\notag
\end{align}
\end{subequations}
We define $\kappa(x) \equiv \kappa_{+}(x) = - \kappa_{-}(x)$.
The densities $\rho_{\pm}(x)$ are real and independent, whereas $\kappa(x)$ is complex. Introducing the four-component basis vectors $\psi^{a}(x)$ and $\psi_{a}(x)$,
\begin{align}
\psi^{a}(x)
&=
\Set{
\psi^{\phantom\ast}_{+}(x),
\psi^{\phantom\ast}_{-}(x),
\psi^{\ast}_{+}(x),
\psi^{\ast}_{-}(x)
} \>,
\label{pf.e:psiu} \\
\psi_{a}(x)
&=
\Set{
\psi^{\ast}_{+}(x),
\psi^{\ast}_{-}(x),
\psi^{\phantom\ast}_{+}(x),
\psi^{\phantom\ast}_{-}(x)
} \>,
\notag
\end{align}
the density matrix can then be written as
\begin{align}
\rho_a{}^b(x)
&=
\psi_{a}(x) \, \psi^{b}(x)
\label{pf.e:denmat} \\
&=
\begin{pmatrix}
\rho_{+}(x) & 0 & 0 & - \kappa^{\ast}(x) \\
0 & \rho_{-}(x) & \kappa^{\ast}(x) & 0 \\
0 & \kappa(x) & - \rho_{+}(x) & 0 \\
-\kappa(x) & 0 & 0 & - \rho_{-}(x)
\end{pmatrix} \>.
\notag
\end{align}
\section{\label{s:auxformI}Auxiliary fields}
Following the Bose case \cite{r:Cooper:2010fk}, we use the Hubbard--Stratonovich transformation \cite{r:Hubbard:1959kx,*r:Stratonovich:1958vn} to introduce auxiliary fields for each of the densities described above, in order to eliminate the quartic interaction term in the Lagrangian in favor of cubic interactions between the Fermi fields and the auxiliary fields. In our case there are six independent real auxiliary-field components: two real fields $\chi_{\sigma}(x)$ and two complex fields $\Delta_{\sigma}(x)$, corresponding to the densities $\rho_{\sigma}(x)$ and $\kappa_{\sigma}(x)$, respectively.
The auxiliary Lagrangian is defined by
\begin{equation}\label{AF.e:Lauxdef}
\mathcal{L}_{\text{aux}}[\psi,\chi,\Delta]
=
-
\mathcal{L}_{\chi}[\psi,\chi]
+
\mathcal{L}_{\Delta}[\psi,\Delta] \>,
\end{equation}
where
\begin{subequations}\label{AF.e:Ldefs}
\begin{align}
\mathcal{L}_{\chi}[\psi,\chi]
&=
\frac{1}{2\lambda_0} \,
\sum_{\sigma}
\bigl [ \,
\chi_{\sigma}^{\phantom\ast}(x)
-
\lambda_0 \, \rho_{\sigma}(x) \, \sin\theta \,
\bigr ]
\label{AF.e:Lauxchi} \\
& \qquad\qquad
\times
\bigl [ \,
\chi_{-\sigma}(x)
-
\lambda_0 \, \rho_{-\sigma}(x) \, \sin\theta \,
\bigr ]
\notag \\
&=
\frac{1}{2\lambda_0} \,
\sum_{\sigma}
\chi_{\sigma}(x) \, \chi_{-\sigma}(x)
\notag \\
& \quad
-
\frac{\sin\theta}{2}
\sum_{\sigma}
\bigl [ \,
\chi_{\sigma}(x) \,
\rho_{-\sigma}(x)
+
\rho_{\sigma}(x) \,
\chi_{-\sigma}(x) \,
\bigr ]
\notag \\
& \quad
+
\frac{\lambda_0 \, \sin^2\theta}{2} \,
\sum_{\sigma}
\psi^{\ast}_{\sigma}(x) \,
\psi^{\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{\sigma}(x) \>,
\end{align}
and
\begin{align}
\mathcal{L}_{\Delta}[\psi,\Delta]
&=
\frac{1}{2\lambda_0} \,
\sum_{\sigma}
\bigl [ \,
\Delta_{\sigma}^{\phantom\ast}(x)
-
\lambda_0 \, \kappa_{\sigma}(x) \, \cos\theta \,
\bigr ] \\
& \qquad\qquad
\times
\bigl [ \,
\Delta_{-\sigma}^{\ast}(x)
-
\lambda_0 \, \kappa_{-\sigma}^{\ast}(x) \, \cos\theta \,
\bigr ]
\notag \\
&=
\frac{1}{2\lambda_0} \,
\sum_{\sigma}
\Delta_{\sigma}^{\phantom\ast}(x) \, \Delta_{-\sigma}^{\ast}(x)
\notag \\
& \quad
-
\frac{\cos\theta}{2}
\sum_{\sigma}
\bigl [ \,
\Delta_{\sigma}^{\phantom\ast}(x) \,
\kappa_{-\sigma}^{\ast}(x)
+
\kappa_{\sigma}^{\phantom\ast}(x) \,
\Delta_{-\sigma}^{\ast}(x) \,
\bigr ]
\notag \\
& \quad
-
\frac{\lambda_0 \, \cos^2\theta}{2} \,
\sum_{\sigma}
\psi^{\ast}_{\sigma}(x) \,
\psi^{\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{-\sigma}(x) \,
\psi^{\phantom\ast}_{\sigma}(x) \>.
\notag
\end{align}
\end{subequations}
Here we have introduced an arbitrary mixing angle $\theta$ that apportions the interaction between the two channels. The auxiliary fields obey the same properties as the corresponding densities, so we define $\Delta(x) \equiv \Delta_{+}(x) = - \Delta_{-}(x)$.
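At each point $x$, the Hubbard--Stratonovich step is a completing-the-square identity. For an ordinary commuting variable it reads $(2\pi\lambda)^{-1/2} \int \mathrm{d}\sigma\, e^{-\sigma^2/(2\lambda) + \sigma x} = e^{\lambda x^2/2}$; the sketch below checks this numerically. This is illustrative only: in the text the corresponding manipulation is a formal identity for Grassmann bilinears, and the quadrature parameters here are arbitrary choices.

```python
import math

def hs_gaussian_lhs(x, lam, smax=40.0, n=20001):
    """Trapezoid estimate of (2 pi lam)^(-1/2) * int exp(-s^2/(2 lam) + s x) ds."""
    ds = 2.0 * smax / (n - 1)
    total = 0.0
    for i in range(n):
        s = -smax + i * ds
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.exp(-s * s / (2.0 * lam) + s * x)
    return total * ds / math.sqrt(2.0 * math.pi * lam)

def hs_rhs(x, lam):
    """Result of completing the square: exp(lam * x^2 / 2)."""
    return math.exp(0.5 * lam * x * x)
```

The trapezoid rule converges very rapidly here because the integrand is a smooth Gaussian that vanishes at the cutoff.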
\emph{Adding} $\mathcal{L}_{\text{aux}}$ to the Lagrangian $\mathcal{L}$ of Eq.~\eqref{pf.e:LagI} then eliminates the four-point Fermi interaction. Using the basis vectors given in Eq.~\eqref{pf.e:psiu}, the action can be written compactly as
\begin{align}
&\mathcal{S}[\Psi,J]
=
\frac{1}{2} \,
\iint [\mathrm{d} x] \, [\mathrm{d} x'] \,
\psi_{a}(x) \,
\mathcal{G}^{-1}{}^{a}{}_{b}[\phi](x,x') \,
\psi^{b}(x')
\notag \\
& \quad
+
\int [\mathrm{d} x] \,
\Bigl [ \,
-
\frac{ (\phi_i(x) + \mu_i) \,
g^i{}_j(\theta) \,
(\phi^j(x) + \mu^j) }
{ 2 \, \lambda_0 }
\notag \\
& \qquad\qquad
+
\psi_{a}(x) \, g^a{}_b \, j^b(x)
+
\phi_i(x) S^{i}(x) \,
\Bigr ] \>,
\label{AF.e:action}
\end{align}
where the inverse Green function is given by
\begin{equation}\label{AF.e:calGdef}
\mathcal{G}^{-1}{}^{a}{}_{b}[\phi](x,x')
=
\delta(x,x') \,
\bigl [ \,
\mathcal{G}^{-1}_{0}{}^{a}{}_{b}[\phi]
+
\mathcal{V}^{a}{}_b[\phi](x) \,
\bigr ]
\end{equation}
with
\begin{equation}\label{AF.e:calGzerodef}
\mathcal{G}^{-1}_{0}{}^{a}{}_{b}[\phi]
=
\begin{pmatrix}
h_{+} & 0 & 0 & 0 \\
0 & h_{+} & 0 & 0 \\
0 & 0 & -h_{-} & 0 \\
0 & 0 & 0 & -h_{-}
\end{pmatrix} \>,
\end{equation}
where we have defined $h_{\pm}$ as the operators
\begin{equation}\label{AF.e:hdefs}
h_{\pm}
=
-
\gamma \nabla^2
\pm
\frac{\partial}{ \partial \tau } \>,
\end{equation}
and
\begin{equation}\label{AF.e:calVdef}
\mathcal{V}^{a}{}_b[\phi](x)
=
\begin{pmatrix}
\chi'_{-}(x) & 0 & 0 & -\Delta'(x) \\
0 & \chi'_{+}(x) & \Delta'(x) & 0 \\
0 & \Delta^{\prime\,\ast}(x) & - \chi'_{-}(x) & 0 \\
-\Delta^{\prime\,\ast}(x) & 0 & 0 & - \chi'_{+}(x)
\end{pmatrix} \>.
\end{equation}
Here we have shifted and rescaled the auxiliary fields by setting
\begin{gather}
\chi'_{\pm}(x)
\equiv
\chi_{\pm}(x) \, \sin\theta - \mu_{\mp} \>,
\label{AF.e:chiptchipDeltapdefs} \\
\Delta'(x)
\equiv
\Delta_{+}(x) \, \cos\theta
=
- \Delta_{-}(x) \, \cos\theta \>,
\notag
\end{gather}
and introduced four-component auxiliary fields $\phi^i(x)$, a constant vector $\mu_0^i$, and currents $S^i(x)$ by the definitions
\begin{align}
\phi^{i}(x)
&=
\Set{ \chi'_{+}(x), \chi'_{-}(x), \Delta'(x), \Delta^{\prime\,\ast}(x) } \>,
\label{ea.e:AFASupperdef} \\
\mu^i_0
&=
\Set{ \, \mu_{-}, \mu_{+},0,0 } \>,
\notag \\
S^{i}(x)
&=
\Set{ s_{+}(x), s_{-}(x), S(x), S^{\ast}(x) } \>,
\notag
\end{align}
with
\begin{align}
\phi_{i}(x)
&=
\Set{ \chi'_{-}(x), \chi'_{+}(x), \Delta^{\prime\,\ast}(x), \Delta'(x) } \>,
\label{ea.e:AFASlowerdef} \\
\mu_{0\,i}
&=
\Set{ \, \mu_{+}, \mu_{-},0,0 } \>,
\notag \\
S_{i}(x)
&=
\Set{ s_{-}(x), s_{+}(x), S^{\ast}(x), S(x) } \>.
\notag
\end{align}
The tensors $g^a{}_b$ and $g^i{}_j(\theta)$ are defined by
\begin{align}
g^a{}_b
&=
\Diag{ 1, 1, -1, -1 } \>,
\label{ea.e:ZabIijdef} \\
g^i{}_j(\theta)
&=
\Diag{ 1/\sin^2\theta, 1/\sin^2\theta, 1/\cos^2\theta, 1/\cos^2\theta } \>.
\notag
\end{align}
The Grassmann fields $\psi^{a}(x)$ and $\psi_{a}(x)$ are defined as in Eq.~\eqref{pf.e:psiu}. It will be useful for notational purposes to also define eight-component fields and currents using Greek indices as
\begin{align}
\Psi^{\alpha}(x)
&=
\Set{ \psi^{a}(x),\phi^{i}(x) } \>,
\label{ea.e:PhiJdefs} \\
J^{\alpha}(x)
&=
\Set{ j^{a}(x), S^i(x) } \>.
\notag
\end{align}
Note that the $\psi^a(x)$ fields are Grassmann fields whereas the $\phi^{i}(x)$ fields are commuting fields, so the vectors $\Psi^{\alpha}(x)$ and $J^{\alpha}(x)$ are superfields.
Setting $\chi' = 0$ and then $\theta = 0$, so that only $\Delta$ survives, recovers the first-order S\'a de~Melo--Randeria--Engelbrecht theory~\cite{r:Melo:1993vn}.
\section{\label{s:Seff}Effective action}
The partition function is now given by a path integral over all fields,
\begin{align}
Z[\, J \,]
&=
e^{-\beta \, \Omega[\, J \,]}
\label{ea.e:Z-I} \\
&=
\mathcal{N} \iint
\prod_a \text{D} \psi^a \, \text{D} \psi_a \,
\!\! \iint
\prod_i \text{D} \phi^i \, \text{D} \phi_i \,
e^{ - \mathcal{S}[\, \Psi,J \,]} \>,
\notag
\end{align}
where $\mathcal{S}[\, \Psi,J \,]$ is given in Eq.~\eqref{AF.e:action} and again we have suppressed the dependence of $Z$, $\Omega$, and $\mathcal{S}$ on the thermodynamic variables. The thermodynamic partition function is found by setting the currents to zero. Thermal average values of the fields are given by
\begin{align}
\Expect{\psi^a(x)}
&=
\frac{-1}{Z} \,
\frac{\delta Z}{\delta j_a(x)} \, \Big |_{j=S=0}
=
\beta \frac{\delta \Omega}{\delta j_a(x)} \, \Big |_{j=S=0} \>,
\label{ea.e:avefields} \\
\Expect{\phi^i(x)}
&=
\frac{-1}{Z} \,
\frac{\delta Z}{\delta S_i(x)} \, \Big |_{j=S=0}
=
\beta \frac{\delta \Omega}{\delta S_i(x)} \, \Big |_{j=S=0} \>.
\notag
\end{align}
Since the action is now quadratic in the fields $\psi^a(x)$, we can integrate these out and obtain an effective action,
\begin{equation}\label{ea.e:Z-II}
Z[\, J \,]
=
\mathcal{N} \iint
\prod_i \text{D} \phi^i \, \text{D} \phi_i \,
e^{ - \mathcal{S}_{\text{eff}}[\, \phi,J \,]} \>,
\end{equation}
where the effective action $\mathcal{S}_{\text{eff}}[\, \phi,J \,]$ is given by
\begin{align}
&\mathcal{S}_{\text{eff}}[\, \phi,J \,]
=
\frac{1}{2}
\iint [\mathrm{d} x ]\, [ \mathrm{d} x' ] \,
j_a(x) \, g^a{}_b \, \mathcal{G}^b{}_c[\phi](x,x') \, j^c(x')
\notag \\
& \qquad
-
\frac{1}{2} \,
\int [\mathrm{d} x] \,
\mathrm{Tr}
\bigl [ \,
\Ln{ \mathcal{G}^{-1}[\phi](x,x) } \,
\bigr ]
\notag \\
&
+
\int [\mathrm{d} x] \,
\Bigl [ \,
-
\frac{ ( \phi_i(x) + \mu_{0\,i} ) \,
g^i{}_j \,
( \phi^j(x) + \mu_0^j ) }
{ 2 \, \lambda_0 }
+
\phi_i(x) S^{i}(x) \,
\Bigr ] \>.
\label{ea.e:Seff}
\end{align}
We evaluate the remaining path integral by expanding the effective action about a point $\phi_0^i$,
\begin{align}
&\mathcal{S}_{\text{eff}}[\, \phi,J \,]
=
\mathcal{S}_{\text{eff}}[\, \phi_0,J \,]
\label{ea.e:Sexpand} \\
& \qquad
+
\int [\mathrm{d} x] \,
\frac{\delta \mathcal{S}_{\text{eff}}[\, \phi,J \,]}
{\delta \phi^i(x)} \Big |_{\phi_0} \!\!
( \, \phi^i(x) - \phi_0^{i}(x) \, )
\notag \\
& \quad
+
\frac{1}{2}
\iint [\mathrm{d} x] \, [\mathrm{d} x'] \,
\frac{\delta^2 \mathcal{S}_{\text{eff}}[\, \phi,J \,]}
{\delta \phi^i(x) \, \delta \phi^j(x')} \Big |_{\phi_0} \!\!
\notag \\
& \qquad\qquad
\times
( \, \phi^i(x) - \phi_{0}^{i}(x) \, ) \,
( \, \phi^j(x') - \phi_{0}^{j}(x') \, )
+
\dotsb
\notag
\end{align}
and evaluating the path integral by the method of steepest descent. The vanishing of the first derivative defines the saddle point $\phi_0^i$, which gives
\begin{align}
&\frac{ g^i{}_j \, [ \, \phi_0^j(x) + \mu^j \, ] }{ \lambda_0 }
=
\rho_0^{i}[\phi_0,J](x)
\label{ea.e:statpt} \\
& \qquad\qquad\qquad
-
\frac{1}{2} \,
\mathrm{Tr}
\bigl [ \,
\mathcal{V}^i \, \mathcal{G}[\phi_0](x,x) \,
\bigr ]
+
S^{i}(x) \>.
\notag
\end{align}
Here we have used
\begin{equation*}\label{ea.e:GinvG}
\int [\mathrm{d} x'] \,
\mathcal{G}^{-1}{}^a{}_b[\phi](x,x') \,
\mathcal{G}^b{}_c[\phi](x',x'')
=
g^a{}_c \, \delta(x - x'') \>,
\end{equation*}
so that
\begin{align}
&\frac{\delta \mathcal{G}^a{}_e[\phi](x_1,x_4) }
{ \delta \phi_i(x) }
=
-
\iint [\mathrm{d} x_2] \, [\mathrm{d} x_3] \,
g^a{}_b \,
\mathcal{G}^{b}{}_{c}[\phi](x_1,x_2) \,
\notag \\
& \qquad\qquad
\times
\frac{\delta \mathcal{G}^{-1\,c}{}_{d}[\phi](x_2,x_3) }
{ \delta \phi_i(x) } \,
\mathcal{G}^{d}{}_{e}[\phi](x_3,x_4)
\notag \\
&=
-
g^{a}{}_{b} \,
\mathcal{G}^b{}_c[\phi](x_1,x) \,
\mathcal{V}^{i\,c}{}_d \,
\mathcal{G}^d{}_e[\phi](x,x_4) \>,
\label{ea.e:dGdchi}
\end{align}
and defined the constant matrices $\mathcal{V}^{i\,a}{}_b$ by
\begin{equation}\label{ea.e:calVidefs}
\frac{\delta \mathcal{G}^{-1\,a}{}_b[\phi](x_2,x_3) }
{ \delta \phi_i(x) }
=
\mathcal{V}^{i\,a}{}_b \, \delta(x_2 - x_3) \, \delta(x - x_2) \>.
\end{equation}
The densities $\rho_0^{i}[\phi_0,J](x)$ are given by the equation,
\begin{equation}\label{ea.e:rhoidef}
\rho_0^{i}[\phi_0,J](x)
=
\frac{1}{2} \,
\psi_{0\,a}[\phi_0,J](x) \,
\mathcal{V}^i{}^a{}_b \,
\psi_0^b[\phi_0,J](x) \>,
\end{equation}
with Grassmann fields $\psi_0^a[\phi_0,J](x)$ given by
\begin{equation}\label{ea.e:barphidef}
\psi_0^a[\phi_0,J](x)
=
\int [\mathrm{d} x'] \,
\mathcal{G}^a{}_b[\phi_0](x,x') \, j^b(x') \>.
\end{equation}
The densities $\rho_0^{i}[\phi_0,J](x)$ and fields $\psi_0^a[\phi_0,J](x)$ are functionals of both $\phi_0^i(x)$ and all the currents $J^{\alpha}(x)$. We define the fluctuation inverse Green function, the $4 \times 4$ matrix $\mathcal{D}^{-1}_{ij}[\phi_0](x,x')$, by the second-order derivatives evaluated at the stationary point,
\begin{align}
\mathcal{D}^{-1}_{ij}[\phi_0](x,x')
&=
\frac{ \delta^2 \, \mathcal{S}_{\text{eff}}[\phi] }
{ \delta \phi^i(x) \, \delta \phi^j(x') } \, \bigg |_{\phi_0}
\label{ea.e:calDdef} \\
&=
-
\frac{ g_{ij} }{ \lambda_0 } \, \delta(x,x')
+
\Pi_{ij}[\phi_0](x,x') \>,
\notag
\end{align}
where the polarization tensor $\Pi_{ij}[\phi_0](x,x')$ is given by
\begin{equation}\label{ea.e:Sigmadef}
\Pi_{ij}[\phi_0](x,x')
=
\frac{1}{2} \,
\Trb{ \mathcal{V}^{i} \, \mathcal{G}[\phi_0](x,x') \,
\mathcal{V}^{j} \, \mathcal{G}[\phi_0](x',x) } \>.
\end{equation}
Inserting these results into the path integral \eqref{ea.e:Z-II} and integrating over the $\phi^i$ fields, we find that the grand potential is given by
\begin{align}
&\beta \, \Omega[J]
\label{ea.e:Omega} \\
&=
\beta \, \Omega_0
+
\mathcal{S}_{\text{eff}}[\phi_0,J]
+
\frac{1}{2}
\Tr{ \Ln{ \mathcal{D}^{-1}[\phi_0](x,x) } }
+
\dotsb
\notag \\
&=
\beta \, \Omega_0
-
\frac{1}{2}
\iint [\mathrm{d} x ]\, [ \mathrm{d} x' ] \,
j_a(x) \, g^{a}{}_b \, \mathcal{G}^b{}_c[\phi_0](x,x') \, j^c(x')
\notag \\
&
+
\int [\mathrm{d} x] \,
\Bigl [ \,
-
\frac{ (\phi_{0\,i}(x) + \mu_{0\,i}) \,
g^i{}_j \,
(\phi_0^j(x) + \mu_0^j) }
{ 2 \, \lambda_0 }
+
\phi_{0\,i}(x) S^{i}(x) \,
\Bigr ]
\notag \\
&
-
\frac{1}{2} \,
\mathrm{Tr}
\bigl [ \,
\Ln{ \mathcal{G}^{-1}[\phi_0] } \,
\bigr ]
+
\frac{1}{2}
\Tr{ \Ln{ \mathcal{D}^{-1}[\phi_0](x,x) } }
+
\dotsb
\notag
\end{align}
where $\Omega_0$ is an integration constant to be determined.
Instead of writing the thermodynamic potential in terms of the currents $J^{\alpha}(x)$, we can write them in terms of fields $\Psi^{\alpha}(x)$ by Legendre transforming to the thermal vertex potential $\Gamma[\, \Psi \,]$,
\begin{align}
&\beta \, \Gamma[\, \Psi \,]
\label{ea.e:GammaI} \\
&=
\beta \, \Omega[\, J \,]
-
\int [\mathrm{d} x] \,
\bigl \{ \,
\psi_a(x) \, g^a{}_b \, j^b(x)
+
\phi_i(x) \, S^i(x) \,
\bigr \}
\notag \\
&
=
\beta \, \Omega_0
+
\frac{1}{2}
\iint [\mathrm{d} x] \, [\mathrm{d} x'] \,
\psi_a(x) \,
\mathcal{G}^{-1}[\phi]{}^a{}_b(x,x') \,
\psi^b(x')
\notag \\
& \qquad
-
\int [\mathrm{d} x] \,
\frac{ (\phi_i(x) + \mu_i) \,
g^i{}_j \,
(\phi^j(x) + \mu^j) }
{ 2 \, \lambda_0 }
\notag \\
& \qquad
-
\frac{1}{2} \,
\mathrm{Tr}
\bigl [ \,
\Ln{ \mathcal{G}^{-1}[\phi] } \,
\bigr ]
+
\frac{1}{2} \,
\mathrm{Tr}
\bigl [ \,
\Ln{ \mathcal{D}^{-1}[\phi] } \,
\bigr ] \,
+
\dotsb \>,
\notag
\end{align}
which is the classical action plus the trace-log terms. The currents are given by functional derivatives of $\Gamma[\, \Psi \,]$ with respect to the fields,
\begin{equation}\label{aux.e:dGammadphis}
\frac{\delta \Gamma[\, \Psi \,]}{\delta \psi_a(x)}
=
- g^a{}_b \, j^b(x) \>,
\quad
\frac{\delta \Gamma[\, \Psi \,]}{\delta \phi_i(x)}
=
- S^i(x) \>.
\end{equation}
So the derivatives of $\Gamma[\, \Psi \,]$ with respect to the fields vanish for zero currents. As shown in Ref.~\onlinecite{r:Bender:1977bh}, the last term in Eq.~\eqref{ea.e:GammaI} is of second order in a loop expansion of the effective action in terms of $\phi$-propagators, and will be ignored here. For the original path integral of Eq.~\eqref{pf.e:Z-I}, the loop expansion in terms of $\psi$ propagators is obtained by noting that $\mathcal{S}$ is measured in units of $\hbar$, so that the path integral can be evaluated as $\hbar \rightarrow 0$ by the saddle-point (Laplace) method; the expansion in powers of $\hbar$ is the loop expansion. Similarly, one can insert an artificial small parameter $\epsilon$ into the effective action by replacing $\mathcal{S}_\mathrm{eff}$ with $\mathcal{S}_\mathrm{eff}/\epsilon$ in Eq.~\eqref{ea.e:Z-II}. Powers of $\epsilon$ then count loops of the composite-field $\phi$ propagators. After organizing the series in $\epsilon$ to some specified order, one sets $\epsilon=1$.
\section{\label{s:thermalequib}Uniform system in thermal equilibrium}
For a uniform system in thermal equilibrium, the mean fields $\psi^a$ and $\phi^i$ are independent of $x \equiv (\mathbf{r},\tau)$. In addition, since the Green functions are periodic or anti-periodic in $\tau$, we can expand them in a Fourier series,
\begin{equation} \label{te.e:Gexpand}
\mathcal{G}[\phi](x,x')
=
\frac{1}{\beta}
\sum_{\mathbf{k},n}
\tilde{\mathcal{G}}[\phi](\mathbf{k},n) \,
e^{i [ \mathbf{k} \cdot ( \mathbf{r} - \mathbf{r}' ) - \omega_n ( \tau - \tau' ) ] } \>,
\end{equation}
where $\omega_n = ( 2n + 1 ) \, \pi / \beta$ are the fermionic Matsubara frequencies. So using $\int [\mathrm{d} x] = \beta \, V$, at thermal equilibrium and for uniform systems, the thermal effective potential from Eq.~\eqref{ea.e:GammaI} is given by
\begin{align}
&\Veff[\, \Psi \,]
\equiv
\Gamma[\, \Psi \,] / V
=
\Veffzero
+
\frac{1}{2} \,
\psi_a \,
\mathcal{V}[\phi]{}^a{}_b \,
\psi^b
\label{te.e:Veff-I} \\
& \qquad
-
\frac{ (\phi_i + \mu_i) \, g^i{}_j \, (\phi^j + \mu^j) }
{ 2 \, \lambda_0 }
\notag \\
& \qquad
-
\frac{1}{2\beta}
\Intk \,
\sum_{n}
\Tr{ \Ln{ \tilde{\mathcal{G}}^{-1}[\phi](\mathbf{k},n) } }
+
\dotsb
\notag
\end{align}
Here the matrix $\tilde{\mathcal{G}}^{-1}[\phi](\mathbf{k},n)$ is given by
\begin{equation}\label{te.e:calGinv}
\tilde{\mathcal{G}}^{-1}[\phi](\mathbf{k},n)
=
\begin{pmatrix}
A(\mathbf{k},n) & B(\mathbf{k},n) \\
-B^{\ast}(\mathbf{k},n) & - A^{\ast}(\mathbf{k},n)
\end{pmatrix}
\end{equation}
with
\begin{subequations}\label{te.e:AB}
\begin{align}
A(\mathbf{k},n)
&=
\begin{pmatrix}
\epsilon_k + \chi'_{-} - i \omega_n & 0 \\
0 & \epsilon_k + \chi'_{+} - i \omega_n
\end{pmatrix} \>,
\label{te.e:AI} \\
- A^{\ast}(\mathbf{k},n)
&=
\begin{pmatrix}
-\epsilon_k - \chi'_{-} - i \omega_n & 0 \\
0 & - \epsilon_k - \chi'_{+} - i \omega_n
\end{pmatrix} \>,
\label{te.e:AII} \\
B(\mathbf{k},n)
&=
\begin{pmatrix}
0 & -\Delta' \\
\Delta' & 0
\end{pmatrix} \>,
\label{te.e:BI} \\
- B^{\ast}(\mathbf{k},n)
&=
\begin{pmatrix}
0 & \Delta^{\prime\,\ast} \\
-\Delta^{\prime\,\ast} & 0
\end{pmatrix} \>.
\label{te.e:BII}
\end{align}
\end{subequations}
Note that the matrices $A$ and $B$ satisfy the integration conditions for the Grassmann integral.
From \eqref{AF.e:calVdef}, the matrix $\mathcal{V}[\phi]$ is given by
\begin{equation}\label{te.e:Vabdef}
\mathcal{V}[\phi]
=
\begin{pmatrix}
\chi'_{-} & 0 & 0 & - \Delta' \\
0 & \chi'_{+} & \Delta' & 0 \\
0 & \Delta^{\prime\,\ast} & - \chi'_{-} & 0 \\
-\Delta^{\prime\,\ast} & 0 & 0 & - \chi'_{+}
\end{pmatrix} \>.
\end{equation}
Then we find
\begin{equation}\label{te.e:psiVpsi}
\frac{1}{2} \,
\psi_a \,
\mathcal{V}[\phi]{}^a{}_b \,
\psi^b
=
\chi'_{-} \rho_{+}
+
\chi'_{+} \rho_{-}
+
\Delta^{\prime\,\ast} \kappa
+
\Delta' \kappa^{\ast} \>.
\end{equation}
Also we find
\begin{align}
&\frac{ (\phi_i + \mu_i) \, g^i{}_j(\theta) \, (\phi^j + \mu^j) }
{ 2 \, \lambda_0 }
\label{te.e:phigphi} \\
& \qquad
=
\frac{ (\chi'_{+} + \mu_{+})(\chi'_{-} + \mu_{-})}{\lambda_0 \sin^2\theta}
+
\frac{ | \Delta' |^2 }{\lambda_0 \cos^2\theta} \>.
\notag
\end{align}
The grand potential per unit volume is the value of the effective potential evaluated at zero current, or when
\begin{equation}\label{te.e:dVeffdPhi}
\frac{ \partial \, \Veff[\, \Psi \,] }{ \partial \, \Psi_{\alpha} }
=
0 \>,
\end{equation}
for all values of $\alpha$.
In order to compute $\Tr{ \Ln{ \tilde{\mathcal{G}}^{-1}[\phi](\mathbf{k},n) } }$, it is simpler to interchange rows and columns of the $\mathcal{G}^{-1}[\phi]$ matrix so as to bring it into block diagonal form. To do this, we redefine the fields $\psi^a(x)$ in the following way:
\begin{subequations}\label{cII.e:redefphiul}
\begin{align}
\psi^a(x)
&\mapsto
\Set{
\psi^{\phantom\ast}_{+}(x),
\psi^{\ast}_{-}(x),
\psi^{\ast}_{+}(x),
\psi^{\phantom\ast}_{-}(x)
} \>,
\label{cII.e:redefphiu} \\
\psi_a(x)
&\mapsto
\Set{
\psi^{\ast}_{+}(x),
\psi^{\phantom\ast}_{-}(x),
\psi^{\phantom\ast}_{+}(x),
\psi^{\ast}_{-}(x)
} \>.
\label{cII.e:redefphil}
\end{align}
\end{subequations}
Then from Eqs.~\eqref{te.e:calGinv} and \eqref{te.e:AB}, the Fourier transform of the inverse $\mathcal{G}$-matrix is of the form,
\begin{equation}\label{cII.e:calGinvdef}
\tilde{\mathcal{G}}^{-1}(\mathbf{k},n)
=
\begin{pmatrix}
\tilde{G}^{-1}(\mathbf{k},n) & 0 \\
0 & - \tilde{G}^{-1\,\dagger}(\mathbf{k},n)
\end{pmatrix} \>,
\end{equation}
where
\begin{equation}\label{cII.e:Ginvdef}
\tilde{G}^{-1}(\mathbf{k},n)
=
\begin{pmatrix}
\epsilon_k + \chi'_{-} - i \omega_n & - \Delta' \\
- \Delta^{\prime\,\ast} & - \epsilon_k - \chi'_{+} - i \omega_n
\end{pmatrix} \>.
\end{equation}
After some algebra, we find
\begin{align}
\Det{ \tilde{G}^{-1}(\mathbf{k},n) }
&=
- ( \, \alpha_1 + i \alpha_2 \, ) \>,
\label{cII.e:detGs} \\
\Det{ \tilde{G}^{-1\,\dagger}(\mathbf{k},n) }
&=
- ( \, \alpha_1 - i \alpha_2 \, ) \>,
\notag
\end{align}
where
\begin{subequations}\label{cII.e:alphabeta}
\begin{align}
\alpha_1
&=
( \epsilon_k + \chi'_{+} ) ( \epsilon_k + \chi'_{-} )
+
| \Delta' |^2
+
\omega_n^2 \>,
\label{cII.e:alpha} \\
\alpha_2
&=
( \chi'_{-} - \chi'_{+} ) \, \omega_n \>.
\end{align}
\end{subequations}
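The determinant identity in Eq.~\eqref{cII.e:detGs} can be checked directly from the $2\times 2$ matrix of Eq.~\eqref{cII.e:Ginvdef}. A small numerical sketch (the parameter values are arbitrary):

```python
import math

def det_Ginv(eps_k, chi_p, chi_m, Delta, omega_n):
    """Determinant of the 2x2 matrix tilde{G}^{-1}(k,n), Eq. (cII.e:Ginvdef)."""
    a11 = eps_k + chi_m - 1j * omega_n
    a22 = -eps_k - chi_p - 1j * omega_n
    off = abs(Delta) ** 2          # (-Delta')(-Delta'*) = |Delta'|^2
    return a11 * a22 - off

def minus_alpha(eps_k, chi_p, chi_m, Delta, omega_n):
    """-(alpha_1 + i alpha_2) from Eqs. (cII.e:alphabeta)."""
    a1 = (eps_k + chi_p) * (eps_k + chi_m) + abs(Delta) ** 2 + omega_n ** 2
    a2 = (chi_m - chi_p) * omega_n
    return -(a1 + 1j * a2)
```

The two expressions agree to machine precision for any choice of the parameters.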
So we find
\begin{align}
&\Trb{ \Ln{ \tilde{\mathcal{G}}^{-1}(\mathbf{k},n) } }
=
\Ln{ \Det{ \tilde{\mathcal{G}}^{-1}(\mathbf{k},n) } }
\label{cII.e:trlogG} \\
&\qquad
=
\Ln{
\Det{ \tilde{G}^{-1}(\mathbf{k},n) } \,
\Det{ \tilde{G}^{-1\,\dagger}(\mathbf{k},n) }
}
\notag \\
&\qquad
=
\Ln{ \alpha_1^2 + \alpha_2^2 }
=
\Ln{ \omega_n^4 + 2 \, b \, \omega_n^2 + c }
\notag \\
&\qquad
=
\Ln{
( \omega_n^2 + \omega_{+}^2 ) \,
( \omega_n^2 + \omega_{-}^2 ) }
\notag \\
&\qquad
=
\Ln{ \omega_n^2 + \omega_{+}^2 }
+
\Ln{ \omega_n^2 + \omega_{-}^2 } \>,
\notag
\end{align}
where $\omega_{\pm}^2$ are the two roots of the equation
\begin{equation}\label{cII.e:tworoots}
\omega_{\pm}^4
-
2 \, b \, \omega_{\pm}^2
+
c
=
0 \>,
\Quad{$\Rightarrow$}
\omega_{\pm}^2
=
b
\pm
\sqrt{b^2 - c} \>,
\end{equation}
with
\begin{align}
b
&=
( \epsilon_k + \chi'_{+} ) ( \epsilon_k + \chi'_{-} )
+
| \Delta' |^2
+
( \chi'_{+} - \chi'_{-} )^2 / 2
\label{cII.e:abdefs} \\
&=
\frac{1}{2} \,
\bigl [ \,
( \epsilon_k + \chi'_{+} )^2
+
( \epsilon_k + \chi'_{-} )^2
+
2 \, | \Delta' |^2 \,
\bigr ]
\notag \\
c
&=
\bigl [ \,
( \epsilon_k + \chi'_{+} ) ( \epsilon_k + \chi'_{-} )
+
| \Delta' |^2 \,
\bigr ]^2 \>.
\notag
\end{align}
The square root of the discriminant is given by
\begin{align}
&\sqrt{b^2 - c}
\label{cII.e:discriminant} \\
&=
\frac{1}{2} \,
| \chi'_{+} - \chi'_{-} | \,
\sqrt{
[ \,
( \epsilon_k + \chi'_{+} )
+
( \epsilon_k + \chi'_{-} ) \,
]^2
+
4 \, | \Delta' |^2 \,
} \>.
\notag
\end{align}
So the frequencies $\omega_{\pm}^2$, which depend on $k$, are given by
\begin{align}
\omega_{\pm}^2
&=
\frac{1}{2} \,
\Bigl \{ \,
\bigl [ \,
( \epsilon_k + \chi'_{+} )^2
+
( \epsilon_k + \chi'_{-} )^2
+
2 \, | \Delta' |^2 \,
\bigr ]
\label{cII.e:omegapm} \\
& \!\!
\pm
| \chi'_{+} - \chi'_{-} | \,
\sqrt{
[ \,
( \epsilon_k + \chi'_{+} )
+
( \epsilon_k + \chi'_{-} ) \,
]^2
+
4 \, | \Delta' |^2 \,
} \,
\Bigr \} \>.
\notag
\end{align}
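As a consistency check of Eqs.~\eqref{cII.e:alphabeta}--\eqref{cII.e:omegapm}, $\alpha_1^2 + \alpha_2^2$ should factorize as $( \omega_n^2 + \omega_{+}^2 )( \omega_n^2 + \omega_{-}^2 )$ for any $\omega_n$. A numerical sketch with arbitrary parameter values:

```python
import math

def omega_pm_sq(eps_k, chi_p, chi_m, D):
    """omega_{+-}^2 from Eq. (cII.e:omegapm); D = |Delta'|."""
    p, q = eps_k + chi_p, eps_k + chi_m
    base = p * p + q * q + 2.0 * D * D
    root = abs(chi_p - chi_m) * math.sqrt((p + q) ** 2 + 4.0 * D * D)
    return 0.5 * (base + root), 0.5 * (base - root)

def check_factorization(eps_k, chi_p, chi_m, D, omega_n):
    """Return alpha_1^2 + alpha_2^2 and the product over the two branches."""
    p, q = eps_k + chi_p, eps_k + chi_m
    a1 = p * q + D * D + omega_n ** 2
    a2 = (chi_m - chi_p) * omega_n
    wp2, wm2 = omega_pm_sq(eps_k, chi_p, chi_m, D)
    lhs = a1 * a1 + a2 * a2
    rhs = (omega_n ** 2 + wp2) * (omega_n ** 2 + wm2)
    return lhs, rhs
```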
So from \eqref{cII.e:trlogG}
\begin{align}
&\frac{1}{2} \,
\Tr{ \Ln{ \tilde{\mathcal{G}}^{-1}(\mathbf{k},n) } }
\label{cII.e:TrLn} \\
&
=
\frac{1}{2\beta} \Intk \, \sum_n
\Ln{ \Det{ \tilde{\mathcal{G}}^{-1}(\mathbf{k},n) } }
\notag \\
&
=
\frac{1}{2\beta} \Intk \,
\Bigl \{ \,
\sum_n
\Ln{ \omega_n^2 + \omega_{+}^2 }
+
\sum_n
\Ln{ \omega_n^2 + \omega_{-}^2 } \,
\Bigr \}
\notag \\
&
=
\Intk \,
\Bigl \{ \,
\frac{ \omega_{+} + \omega_{-} }{2}
\notag \\
& \qquad\qquad
+
\frac{1}{\beta} \,
\bigl \{ \,
\Ln{ 1 + e^{ -\beta \omega_{+}} }
+
\Ln{ 1 + e^{ -\beta \omega_{-}} } \,
\bigr \} \,
\Bigr \} \>.
\notag
\end{align}
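The Matsubara sum in the last step of Eq.~\eqref{cII.e:TrLn} is defined only up to an $\omega$-independent (divergent) constant, which cancels in differences. The convergent difference form can be checked numerically; the truncation level and parameter values below are arbitrary choices:

```python
import math

def matsubara_log_diff(w1, w2, beta, N=100000):
    """(1/(2 beta)) sum_{n in Z} ln[(wn^2 + w1^2)/(wn^2 + w2^2)], wn = (2n+1) pi / beta.

    The n < 0 terms duplicate the n >= 0 terms, giving an overall factor 1/beta."""
    s = 0.0
    for n in range(N):
        wn2 = ((2 * n + 1) * math.pi / beta) ** 2
        s += math.log((wn2 + w1 * w1) / (wn2 + w2 * w2))
    return s / beta

def closed_form(w, beta):
    """omega/2 + (1/beta) ln(1 + e^{-beta omega}), per Eq. (cII.e:TrLn)."""
    return 0.5 * w + math.log(1.0 + math.exp(-beta * w)) / beta
```

The truncated sum converges to the difference of closed forms like $1/N$, since the summand falls off as $1/\omega_n^2$.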
Then from \eqref{te.e:Veff-I}, the effective potential becomes
\begin{align}
&\Veff[\, \Psi \,]
=
\Veffzero
+
\chi'_{+} \rho_{-}
+
\chi'_{-} \rho_{+}
+
\Delta' \, \kappa^{\ast}
+
\Delta^{\prime\,\ast} \kappa
\label{cII.e:Veff-II} \\
& \qquad
-
\frac{ ( \chi'_{+} + \mu_{+} ) ( \chi'_{-} + \mu_{-} ) }
{ \lambda_0 \sin^2\theta }
-
\frac{ | \Delta' |^2 }{ \lambda_0 \cos^2\theta }
\notag \\
& \qquad
-
\Intk \,
\Bigl \{ \,
\frac{ \omega_{+} + \omega_{-} }{2}
\notag \\
& \qquad\qquad
+
\frac{1}{\beta} \,
\bigl \{ \,
\Ln{ 1 + e^{ -\beta \omega_{+}} }
+
\Ln{ 1 + e^{ -\beta \omega_{-}} } \,
\bigr \} \,
\Bigr \} \>.
\notag
\end{align}
Expanding $\omega_{\pm}$ in powers of $1/\epsilon_k$ for $k \rightarrow \infty$, we find
\begin{equation}\label{cII.e:resultII}
\frac{\omega_{+} + \omega_{-}}{2}
=
\epsilon_k
+
\frac{1}{2} \, [ \, \chi'_{+} + \chi'_{-} \, ]
+
\frac{ | \Delta' |^2 }
{ 2 \, \epsilon_k }
+
\dotsb
\end{equation}
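The large-$k$ behavior in Eq.~\eqref{cII.e:resultII} can be verified numerically: the remainder after subtracting the displayed terms should fall off like $1/\epsilon_k^2$. A sketch with arbitrary parameter values:

```python
import math

def avg_omega(eps_k, chi_p, chi_m, D):
    """(omega_+ + omega_-)/2 from Eq. (cII.e:omegapm); D = |Delta'|."""
    p, q = eps_k + chi_p, eps_k + chi_m
    base = p * p + q * q + 2.0 * D * D
    root = abs(chi_p - chi_m) * math.sqrt((p + q) ** 2 + 4.0 * D * D)
    return 0.5 * (math.sqrt(0.5 * (base + root)) + math.sqrt(0.5 * (base - root)))

def asymptotic(eps_k, chi_p, chi_m, D):
    """Leading large-k terms of Eq. (cII.e:resultII)."""
    return eps_k + 0.5 * (chi_p + chi_m) + D * D / (2.0 * eps_k)
```

Doubling $\epsilon_k$ reduces the residual, consistent with an $O(1/\epsilon_k^2)$ remainder.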
So using dimensional regularization~\cite{r:Papenbrock:1999fk}, the effective potential becomes
\begin{align}
&\Veff[\, \Psi \,]
=
\chi'_{+} \rho_{-}
+
\chi'_{-} \rho_{+}
+
\Delta' \, \kappa^{\ast}
+
\Delta^{\prime\,\ast} \kappa
\label{cII.e:Veff-III} \\
& \quad
-
\frac{ ( \chi'_{+} + \mu_{+} ) ( \chi'_{-} + \mu_{-} ) }
{ \lambda \sin^2\theta }
-
\frac{ | \Delta' |^2 }{ \lambda \cos^2\theta }
\notag \\
& \quad
-
\Intk \,
\Bigl \{ \,
\frac{ \omega_{+} + \omega_{-} }{2}
-
\epsilon_k
-
\frac{1}{2} \, [ \, \chi'_{+} + \chi'_{-} \, ]
-
\frac{ | \Delta' |^2 }
{ 2 \, \epsilon_k }
\notag \\
& \qquad\qquad
+
\frac{1}{\beta} \,
\bigl \{ \,
\Ln{ 1 + e^{ -\beta \omega_{+}} }
+
\Ln{ 1 + e^{ -\beta \omega_{-}} } \,
\bigr \} \,
\Bigr \} \>,
\notag
\end{align}
where the coupling constant $\lambda$ is related to the s-wave scattering length $a$ by $\lambda = 8\pi \gamma \, a$.
Recall that $\rho_{\pm} = \psi^{\ast}_{\pm} \, \psi^{\phantom\ast}_{\pm}$ and $\kappa = \psi_{+} \, \psi_{-}$.
We recover the thermodynamic grand potential by evaluating $\mathcal{V}_{\text{eff}}[\, \Psi \,]$ at the minimum of the potential when
\begin{equation}\label{cII.e:dVeffdpsidpi}
\frac{\partial \, \mathcal{V}_{\text{eff}}[\, \Psi \,]}{\partial \, \psi_{a}}
=
0 \>,
\Qquad{and}
\frac{\partial \, \mathcal{V}_{\text{eff}}[\, \Psi \,]}{\partial \, \phi_{i}}
=
0 \>,
\end{equation}
for all values of $a$ and $i$. For the Grassmann $\psi_a$ fields, derivatives of the effective potential \eqref{cII.e:Veff-III} with respect to $\psi_{\pm}$ give
\begin{equation}\label{cII.e:dVdpsi}
\bigl [ \,
\chi'_{+} \chi'_{-}
+
| \Delta' |^2 \,
\bigr ] \, \psi_{\pm}
=
0 \>.
\end{equation}
The above can be satisfied only if $\psi_{\pm} = 0$.
\section{\label{ss:caseA}Equal chemical potentials}
In this section we set $\mu_{+} = \mu_{-} \equiv \mu$ and $\chi'_{+} = \chi'_{-} \equiv \chi'$, so that only the total particle density is fixed. In this case the only solution for the Grassmann fields is again $\psi_{\pm} = 0$. The frequency spectrum is given by
\begin{equation}\label{cA.e:omega}
\omega^2_k
\equiv
\omega_{+}^2
=
\omega_{-}^2
=
( \epsilon_k + \chi' )^2
+
| \Delta' |^2 \>,
\end{equation}
and from \eqref{cII.e:Veff-III}, the effective potential for this case is given by
\begin{align}
\Veff[\, \Psi \,]
&=
-
\frac{ ( \chi' + \mu )^2 }
{ \lambda \sin^2\theta }
-
\frac{ | \Delta' |^2 }{ \lambda \cos^2\theta }
\label{cIIA.e:Veff-III} \\
&
-
2 \Intk \,
\Bigl \{ \,
\frac{1}{2} \,
\Bigl [ \,
\omega_{k}
-
\epsilon_k
-
\chi'
-
\frac{ | \Delta' |^2 }
{ 2 \, \epsilon_k } \,
\Bigr ]
\notag \\
& \qquad
+
\frac{1}{\beta} \,
\Ln{ 1 + e^{ -\beta \omega_k } }
\Bigr \} \>.
\notag
\end{align}
The gap equation for the $\chi'$ field is now,
\begin{align}
&\frac{ \chi' + \mu }{ \lambda \sin^2\theta }
=
\frac{1}{2}
\Intk \,
\Bigl \{ \,
\Bigl ( \frac{ \partial \, \omega_k }{ \partial \chi' } \Bigr ) \,
\bigl [ \, 2 n(\beta \omega_k) - 1 \, \bigr ]
+
1 \,
\Bigr \}
\notag \\
&=
\frac{1}{2}
\Intk \,
\Bigl \{ \,
\frac{ \epsilon_k + \chi' }{ \omega_k } \,
\bigl [ \, 2 n(\beta \omega_k) - 1 \, \bigr ]
+
1 \,
\Bigr \} \>,
\label{cIIA.e:chigapI}
\end{align}
where the non-interacting Fermi particle number factor $n(x)$ is defined by
\begin{equation}\label{cIIA.e:nFermi}
n(x)
=
1 / [ \, e^{x} + 1 \, ] \>.
\end{equation}
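In passing from the Matsubara sums to the gap equations, the combination $1 - 2\,n(x) = \tanh(x/2)$ appears repeatedly. A one-line numerical check of this identity:

```python
import math

def n_fermi(x):
    """Fermi occupation factor of Eq. (cIIA.e:nFermi)."""
    return 1.0 / (math.exp(x) + 1.0)
```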
The gap equation for $\Delta'$ is
\begin{align}
&\frac{ \Delta' }{ \lambda \cos^2\theta }
=
\Intk \,
\Bigl \{ \,
\Bigl ( \frac{ \partial \, \omega_k }{ \partial \Delta^{\prime\,\ast} } \Bigr ) \,
\bigl [ \, 2 n(\beta \omega_k) - 1 \, \bigr ]
+
\frac{\Delta'}{2 \epsilon_k} \,
\Bigr \}
\notag \\
&=
\Intk \,
\Bigl \{ \,
\frac{ \Delta' }{2 \omega_k} \,
\bigl [ \, 2 n(\beta \omega_k) - 1 \, \bigr ]
+
\frac{\Delta'}{2 \epsilon_k} \,
\Bigr \} \>.
\label{cIIA.e:DeltagapI}
\end{align}
Again, the factor of $\Delta'$ cancels, and we get for the $\Delta'$ gap equation
\begin{equation}\label{cIIA.e:DeltagapII}
\frac{1}{ \lambda \cos^2\theta }
=
\frac{1}{2}
\Intk \,
\Bigl \{ \,
\frac{1}{\omega_k} \,
\bigl [ \, 2 n(\beta \omega_k) - 1 \, \bigr ]
+
\frac{1}{\epsilon_k} \,
\Bigr \} \>.
\end{equation}
The total particle density is given by
\begin{align}
\rho
&=
- \frac{\partial \, \Veff[ \Psi ]}{\partial \mu}
=
2 \, \frac{\chi' + \mu}{\lambda \sin^2\theta}
\label{cIIA.e:rho} \\
&
=
\Intk \,
\Bigl \{ \,
\frac{ \epsilon_k + \chi' }{ \omega_k } \,
\bigl [ \, 2 n(\beta \omega_k) - 1 \, \bigr ]
+
1 \,
\Bigr \} \>.
\notag
\end{align}
It is convenient to scale momenta and energies in terms of the Fermi momentum, $\kF$, and Fermi energy, $\epsilonF = \gamma \kF^2$, respectively. We introduce
\begin{gather}
\bar{k} = k / \kF \>,
\quad
\bar{\mu}
=
\mu / \epsilonF \>,
\quad
\bar{\Delta}
=
\Delta / \epsilonF \>,
\label{scale.e:scaling} \\
\bar{\omega}_{\bar{k}}
=
\omega_{k} / \epsilonF
=
\sqrt{( \bar{k}^2 + \bar{\chi}' )^2+ \bar{\Delta}^2 } \>,
\notag \\
\bar{T} = T / \epsilonF = T / T_\mathrm{F} \>,
\quad
\bar{\beta} = \epsilonF \, \beta = T_\mathrm{F} / T = 1 / \bar{T} \>.
\notag
\end{gather}
Then, the rescaled equations are
\begin{subequations}\label{cIIA.e:gapdenI}
\begin{gather}
\frac{1}{\xi \, \cos^2\theta}
=
\frac{2}{\pi}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\Bigl \{ \,
\frac{1}{ \bar{k}^2 }
-
\frac{ 1 - 2 \, n( \bar{\beta} \, \bar{\omega}_{\bar{k}} ) }
{ \bar{\omega}_{\bar{k}} } \,
\Bigr \} \>,
\label{cIIA.e:gapA} \\
1
=
\frac{3}{2}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\Bigl \{ \,
1
-
\frac{ \bar{k}^2 + \bar{\chi}' }{ \bar{\omega}_{\bar{k}} } \,
[ \, 1 - 2 \, n( \bar{\beta} \, \bar{\omega}_{\bar{k}} ) \, ]
\Bigr \} \>,
\label{cIIA.e:gapB} \\
\bar{\chi}'
=
\frac{4}{3\pi} \, \xi \, \sin^2\theta
-
\bar{\mu} \>,
\label{cIIA.e:gapC}
\end{gather}
\end{subequations}
where now $\bar{\omega}_{\bar{k}}^2 = ( \bar{k}^2 + \bar{\chi}' )^2 + |\bar{\Delta}|^2$ and we have introduced $\xi = \kF a$. Eqs.~\eqref{cIIA.e:gapdenI} are to be solved self-consistently for $\bar{\mu}$, $\bar{\chi}'$, and $|\bar{\Delta}'|$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\columnwidth]{Fig1_mu-delta}
\caption{\label{f:T0deltamu-eta}(Color online) Zero temperature solutions for $\Delta'$
and $\mu$ of the gap equations in scaled units
\vs\ $1/\xi$ for several values of the parameter~$\theta$.}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\columnwidth]{Fig2_muc-Tc}
\caption{\label{f:Deltazero}(Color online) Solutions for $T$ and $\mu$ of the gap equations
in scaled units at the critical point ($\Delta'=0$)
\vs\ $1/\xi$ for several values of the parameter~$\theta$.}
\end{figure}
\subsection{\label{ss:T0}Zero temperature ($T = 0$)}
At zero temperature, $n( \bar{\beta} \, \bar{\omega}_{\bar{k}} ) = 0$ and Eqs.~\eqref{cIIA.e:gapdenI} reduce to
\begin{align}
\frac{1}{\xi \, \cos^2\theta}
&=
\frac{2}{\pi}
\int_{0}^{\infty} \!\! \mathrm{d} \bar{k} \,
\Bigl \{ \,
1
-
\frac{\bar{k}^2}{ \bar{\omega}_{\bar{k}} } \,
\Bigr \} \>,
\label{cIIA.e:gapdenzerotemp} \\
1
&=
\frac{3}{2}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\Bigl \{ \,
1
-
\frac{ \bar{k}^2 + \bar{\chi}' }{ \bar{\omega}_{\bar{k}} } \,
\Bigr \} \>,
\notag \\
\bar{\chi}'
&=
\frac{4}{3\pi} \, \xi \, \sin^2\theta
-
\bar{\mu} \>.
\notag
\end{align}
In Fig.~\ref{f:T0deltamu-eta} we illustrate the solutions of the gap equations \eqref{cIIA.e:gapdenzerotemp} for $\Delta'$ and $\mu$ in reduced units as a function of $1/\xi$ for values of $\theta = 0$, $\pi/12$, $\pi/6$ and $\pi/4$.
For $\theta = 0$, our results reduce to the variational equations discussed by Leggett~\cite{r:Leggett:1980fk}. In the unitarity limit (i.e. for $1/\xi = 0$) one obtains $\mu / \epsilon_F = 0.59$ and $\Delta / \epsilon_F = 0.69$.
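As a sanity check on these quoted unitarity values, the $T=0$ equations for $\theta=0$ can be evaluated numerically. The following Python sketch (ours; simple midpoint quadrature truncated at $\bar{k}=60$, so residuals of order $1/60$ remain) verifies that $\mu/\epsilon_F = 0.59$ and $\Delta/\epsilon_F = 0.69$ nearly satisfy the gap and density equations:

```python
# Sanity check of the T = 0 gap and density equations in scaled units
# (theta = 0, unitarity 1/xi = 0) with the quoted mean-field values
# mu = 0.59 and Delta = 0.69 (in units of eps_F).
import math

MU, DELTA = 0.59, 0.69

def omega(k: float) -> float:
    # scaled dispersion with chi' = -mu: omega = sqrt((k^2 - mu)^2 + Delta^2)
    return math.hypot(k * k - MU, DELTA)

def midpoint(f, a, b, m=200_000):
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

# At unitarity the gap equation reads 0 = (2/pi) Int dk [1 - k^2/omega].
gap = (2.0 / math.pi) * midpoint(lambda k: 1.0 - k * k / omega(k), 0.0, 60.0)

# Density equation: 1 = (3/2) Int k^2 dk [1 - (k^2 - mu)/omega].
dens = 1.5 * midpoint(lambda k: k * k * (1.0 - (k * k - MU) / omega(k)),
                      0.0, 60.0)
print(gap, dens)   # both residuals small: |gap| and |dens - 1| well below 0.05
```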
For $\theta \ne 0$, the chemical potential has a singularity at $1/\xi = 0$. This indicates that the only physical theory corresponds to choosing $\theta = 0$. Hence, the BCS theory is the only relevant auxiliary field theory for a dilute gas of fermions.
\subsection{\label{ss:Tc}Critical temperature ($\Delta' = 0$)}
At finite temperature, the critical temperature and critical chemical potential correspond to the point where the gap vanishes, $\Delta' = 0$. At the critical point the spectrum becomes $\omega_k = | \epsilon_k + \chi' |$. For this case, Eqs.~\eqref{cIIA.e:gapdenI} become
\begin{gather}
\frac{1}{\xi \, \cos^2\theta}
=
\frac{2}{\pi}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\Bigl \{ \,
\frac{1}{ \bar{k}^2 }
-
\frac{ \tanh( \bar{\beta} \, \bar{\omega}_{\bar{k}} / 2 ) }
{ \bar{\omega}_{\bar{k}} } \,
\Bigr \} \>,
\label{scale.e:zeroDelta} \\
1
=
\frac{3}{2}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\bigl \{ \,
1 - \Sgn{\bar{k}^2 + \bar{\chi}'} \, \tanh( \bar{\beta} \, \bar{\omega}_{\bar{k}} / 2 ) \,
\bigr \} \>,
\notag \\
\bar{\chi}'
=
\frac{4}{3\pi} \, \xi \, \sin^2\theta
-
\bar{\mu} \>.
\notag
\end{gather}
Solutions of Eqs.~\eqref{scale.e:zeroDelta} for various values of $\theta$ are shown in Fig.~\ref{f:Deltazero}. For $\theta = 0$, our results are the same as those discussed extensively by S\'a de Melo, Randeria and Engelbrecht in Refs.~\onlinecite{r:Melo:1993vn,r:Engelbrecht:1997fk}. For $\theta \neq 0$, the chemical potential at the critical temperature has a singularity in the unitarity limit, again indicating that the case of $\theta=0$ corresponds to the only physical theory for a dilute gas of fermions in the auxiliary field formalism.
\subsection{\label{ss:EOS}Thermodynamics}
From Eq.~\eqref{cIIA.e:Veff-III}, the particle number density is
\begin{align}
\rho
&=
\frac{N}{V}
=
- \frac{1}{V} \Partial{\,\Omega}{\mu}{T}{V}
=
- \frac{\partial \Veff}{\partial \mu}
=
2 \, \frac{\chi' + \mu}{\lambda \, \sin^2\theta}
\label{XXX.e:N} \\
&=
2 \Intk \, \rho(k) \>,
\notag
\end{align}
where
\begin{equation}\label{XXX.e:rhokdef}
\rho(k)
=
\frac{1}{2} \,
\Bigl \{ \,
1
-
\frac{ \epsilon_k + \chi' }{ \omega_k } \,
\bigl [ \, 1 - 2 n(\beta \omega_k) \, \bigr ] \,
\Bigr \} \>.
\end{equation}
The zero-temperature particle momentum distribution, $\rho(k)$, is shown in Fig.~\ref{f:T0theta0-rhok} for several values of the parameter~$1/\xi$. For completeness, we also depict the momentum dependence of the dispersion relations, $\omega_k$, for $\theta=0$ and the same values of~$1/\xi$. We note that the location of the minimum in the dispersion relation shifts smoothly to zero momentum and disappears for $\xi > \xi_c \approx 0.55$, indicative of the crossover character of the BCS to BEC transition~\cite{r:Parish:2005fk}.
The pressure is also obtained from Eq.~\eqref{cIIA.e:Veff-III}, as
\begin{align}
p
&=
- \Partial{\,\Omega}{V}{T}{\mu}
=
- \Veff
=
\frac{ ( \chi' + \mu )^2 }
{ \lambda \sin^2\theta }
+
\frac{ | \Delta' |^2 }{ \lambda \cos^2\theta }
\label{XXX.e:pres} \\
& \qquad
+
2 \Intk \,
\Bigl \{ \,
\frac{1}{2} \,
\Bigl [ \,
\omega_{k}
-
\epsilon_k
-
\chi'
-
\frac{ | \Delta' |^2 }
{ 2 \, \epsilon_k } \,
\Bigr ]
\notag \\
& \qquad\qquad\qquad\qquad\qquad
+
\frac{1}{\beta} \,
\Ln{ 1 + e^{ -\beta \omega_k } }
\Bigr \} \>.
\notag
\end{align}
In scaled variables, the pressure is given by
\begin{align}
\frac{p}{\rho \, \epsilonF}
&=
\frac{2}{3 \pi} \, \xi \, \sin^2\theta
+
\frac{3 \pi}{8 \, \xi} \,
\frac{ | \bar{\Delta}' |^2 }{ \cos^2\theta }
\label{XXX.e:pressII} \\
& \quad
+
\frac{3}{2}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\Bigl \{ \,
\bar{\omega}_{\bar{k}}
-
\bar{k}^2
-
\bar{\chi}'
-
\frac{ | \bar{\Delta}' |^2 }{ 2 \, \bar{k}^2 } \,
\notag \\
& \qquad\qquad
+
\frac{2}{\bar{\beta}} \,
\Ln{ 1 + e^{ -\bar{\beta} \, \bar{\omega}_{\bar{k}} } } \,
\Bigr \} \>.
\notag
\end{align}
From Eq.~\eqref{thm.e:SNpI}, the entropy per unit volume, $s$, is given by
\begin{align}
s
&=
\frac{S}{V}
=
- \frac{1}{V} \, \Partial{\,\Omega}{T}{\mu}{V}
=
\frac{\beta^2}{V} \, \frac{\partial \, \Omega}{\partial \beta}
=
\beta^2 \, \frac{\partial \, \Veff}{\partial \beta}
\notag \\
&=
2 \beta
\Intk \,
\Bigl \{ \,
n(\beta \omega_k) \, \omega_k
+
\frac{1}{\beta} \,
\Ln{ 1 + e^{-\beta \omega_k} } \,
\Bigr \}
\notag \\
&=
- 2
\Intk \,
\bigl \{ \,
n(\beta \omega_k) \, \Ln{ n(\beta \omega_k) }
\label{XXX.e:S} \\
& \qquad\qquad
+
[ \, 1 - n(\beta \omega_k) \, ] \,
\Ln{ 1 - n(\beta \omega_k) } \,
\bigr \} \>,
\notag
\end{align}
or, in scaled units,
\begin{equation}
\frac{s}{\rho}
=
\frac{3}{\bar{T}}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\Bigl \{ \,
n( \, \bar{\beta}\bar{\omega}_{\bar{k}} \, ) \,
\bar{\omega}_{\bar{k}}
+
\frac{1}{\bar{\beta}} \,
\Ln{ 1 + e^{-\bar{\beta} \bar{\omega}_{\bar{k}} } } \,
\Bigr \} \>.
\end{equation}
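The equality of the two forms of the entropy integrand used above, $\beta\,[\,n\,\omega + \beta^{-1}\ln(1+e^{-\beta\omega})\,] = -[\,n\ln n + (1-n)\ln(1-n)\,]$ with $n = n(\beta\omega)$, can be confirmed numerically:

```python
# Check of the identity relating the two entropy integrands, with x = beta*omega
# and n = 1/(e^x + 1):
#   n x + ln(1 + e^{-x})  ==  -[ n ln n + (1 - n) ln(1 - n) ].
import math

def lhs(x: float) -> float:
    n = 1.0 / (math.exp(x) + 1.0)
    return n * x + math.log(1.0 + math.exp(-x))

def rhs(x: float) -> float:
    n = 1.0 / (math.exp(x) + 1.0)
    return -(n * math.log(n) + (1.0 - n) * math.log(1.0 - n))

for x in (0.1, 0.5, 1.0, 3.0, 10.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("entropy integrand identity verified")
```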
From Eq.~\eqref{thm.e:U}, the energy per unit volume, $e$, is given by
\begin{align}
e
&=
E/V
=
\Veff + T \, s + \mu \, \rho
\label{XXX.e:u} \\
&=
-
\frac{ \chi^{\prime\,2} - \mu^2 }
{ \lambda \sin^2\theta }
-
\frac{ | \Delta' |^2 }{ \lambda \cos^2\theta }
\notag \\
&
+
\Intk \,
\Bigl \{ \,
[ \, 2 n(\beta \omega_k) - 1 \, ] \, \omega_k
+
\epsilon_k
+
\chi'
+
\frac{ | \Delta' |^2 }
{ 2 \, \epsilon_k } \,
\Bigr \} \>,
\notag
\end{align}
or, in scaled units,
\begin{align}
&\frac{e}{\rho \, \epsilonF}
=
-
\frac{p}{\rho \, \epsilonF}
+
\bar{T} \, \frac{s}{\rho}
+
\frac{\mu}{\epsilonF}
\label{XXX.e:uIII} \\
& \qquad
=
-
\frac{2}{3\pi} \, \xi \, \sin^2\theta
+
\bar{\mu}
-
\frac{3\pi}{8 \, \xi} \,
\frac{ | \bar{\Delta}' |^2 }{ \cos^2\theta }
\notag \\
&
-
\frac{3}{2}
\int_{0}^{\infty} \!\! \bar{k}^2 \, \mathrm{d} \bar{k} \,
\Bigl \{ \,
\bar{\omega}_{\bar{k}} \,
[ \, 1 - 2 n(\bar{\beta}\bar{\omega}_{\bar{k}}) \, ] \,
-
\bar{k}^2
-
\bar{\chi}'
-
\frac{ | \bar{\Delta}' |^2 }{ 2 \, \bar{k}^2 } \,
\Bigr \} \>.
\notag
\end{align}
Here $\bar{\chi}'$ and $\bar{\Delta}'$ are the solutions of Eqs.~\eqref{cIIA.e:gapdenI}.
Comparing Eqs.~\eqref{XXX.e:pressII} and \eqref{XXX.e:uIII}, we see that at $T=0$,
\begin{equation}\label{XXX.e:Tzero-u-p}
e
=
- p + \mu \, \rho \>.
\end{equation}
For illustrative purposes, in Fig.~\ref{f:T0pu-eta} we depict the zero-temperature pressure and energy per unit volume as functions of $1/\xi$, for several values of~$\theta$.
The pressure and energy have singularities in the unitarity limit for $\theta \neq 0$, consistent with our previous finding that the case of $\theta=0$ corresponds to the only physical theory for a dilute gas of fermions in the auxiliary field formalism.
In Fig.~\ref{f:T0theta0-ratio}, we illustrate the equation of state, $E / p V$, \vs\ $1/\xi$ for $\theta=0$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\columnwidth]{Fig3_rho-om}
\caption{\label{f:T0theta0-rhok}(Color online)
Momentum dependence of the zero temperature particle distributions, $\rho(k)$, and dispersion relations, $\omega_k$, for $\theta=0$ and
several values of the parameter~$1/\xi$. We note that the location of the minimum in the dispersion relation shifts smoothly to zero momentum
and disappears for $\xi > \xi_c \approx 0.55$, indicative of the crossover character of the BCS to BEC transition.}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\columnwidth]{Fig4_p-e}
\caption{\label{f:T0pu-eta}(Color online) Zero temperature pressure and energy per unit volume
\vs\ $1/\xi$ for several values of the parameter~$\theta$.}
\end{figure}
\subsection{\label{s:contact}Contact interaction relations}
For fermions interacting via a short-range potential, Tan derived a set of universal relations in Refs.~\onlinecite{r:Tan:2008uq,*r:Tan:2008kx,*r:Tan:2008vn} that are independent of the details of the short-range interactions, some of which have been verified in experiments \cite{r:Stewart:2008zr,r:Stewart:2010ys}. In particular, Tan relates the fermion momentum distribution, $\rho(k)$, at asymptotically large momentum to thermodynamic quantities such as the energy of the system per unit volume.
Tan showed~\cite{r:Tan:2008kx} that the fermion momentum distribution satisfies the property that
\begin{equation}\label{tan1}
\rho(k) \, \rightarrow \, \frac{C}{k^4}
\>,
\end{equation}
in the large momentum limit, where $C$ is the contact density. This result was observed experimentally by Stewart \emph{et al.}~\cite{r:Stewart:2008zr}. Next, according to Tan's ``adiabatic sweep'' theorem~\cite{r:Tan:2008vn}, the variation of the energy per unit volume, $e$, with respect to the inverse scattering length is given by
\begin{equation}\label{tan2}
\frac{d e}{d a^{-1}} \, = \,
- \, \frac{\gamma}{2\pi} \, C
\>.
\end{equation}
This result was also verified experimentally by Stewart \emph{et al.}~\cite{r:Stewart:2010ys}.
We now show that the LOAF approximation satisfies these two Tan relations.
First, from Eq.~\eqref{XXX.e:rhokdef}, we find that indeed
\begin{equation}
\rho(k) \, = \, \frac{C_\mathrm{LOAF}}{k^4} \, + \, \mathcal{O} \Bigl ( \frac{1}{k^6} \Bigr )
\>,
\end{equation}
with the LOAF contact density
\begin{equation}
C_\mathrm{LOAF} \, = \,
\frac{\Delta'^2}{4 \gamma^2}
\>.
\end{equation}
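The $C/k^4$ tail can be checked numerically at $T=0$; in the sketch below (ours), we set $\gamma = 1$ (so $\epsilon_k = k^2$) and use illustrative values of $\mu$ and $\Delta'$:

```python
# Numerical check of the large-momentum tail rho(k) -> C/k^4 with the LOAF
# contact density C = Delta'^2/(4 gamma^2).  Units with gamma = 1; T = 0 so
# n(beta omega_k) = 0; theta = 0 so chi' = -mu.  Parameter values illustrative.
import math

MU, DELTA = 0.59, 0.69

def rho(k: float) -> float:
    x = k * k - MU                       # eps_k + chi' with chi' = -mu
    return 0.5 * (1.0 - x / math.hypot(x, DELTA))

C = DELTA**2 / 4.0                       # C_LOAF for gamma = 1
k = 100.0
ratio = rho(k) * k**4 / C                # tends to 1 as k -> infinity
print(ratio)
```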
Second, we take the derivative of the energy per unit volume, $e$, given in Eq.~\eqref{XXX.e:u} with respect to the inverse scattering length.
Using Eq.~\eqref{XXX.e:u} and recalling that at the minimum we have $\partial\Veff/\partial\chi^{\prime}=0$, $\partial\Veff/\partial\Delta'=0$, and $\partial\Veff/\partial\mu=-\rho$, we find that
\begin{align}
\frac{d e}{d a^{-1}} \, = \, &
- \, \frac{\partial p}{\partial a^{-1}}
\\ \notag \, = \, &
- \, \frac{\Delta'^2}{8\pi \, \gamma} \, = \,
- \, \frac{\gamma}{2\pi} \, C_\mathrm{LOAF}
\>,
\end{align}
as indicated by Tan's relation~\eqref{tan2}.
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{Fig5-eos}
\caption{\label{f:T0theta0-ratio}(Color online) Zero temperature equation of state for $\theta = 0$ \vs\ $1/\xi$. We note that the ratio $e/p$ is equal to 3/2
both in the limit of a noninteracting Fermi gas and in the unitarity limit.}
\end{figure}
\subsection{\label{s:unitarity}Unitarity limit}
From Eqs.~\eqref{cIIA.e:gapdenI}, we see that in the unitarity limit, $1/\xi \rightarrow 0$, the gap equations can only be satisfied if $\theta = 0$, which shows yet again that the case of $\theta=0$ corresponds to the only physical theory for a dilute gas of fermions in the auxiliary field formalism.
Hence, in the unitarity limit, the scaled gap equations become
\begin{subequations}\label{u.e:gapdenI}
\begin{gather}
0
=
\frac{2}{\pi}
\int_{0}^{\infty} \!\! \mathrm{d} k \,
\Bigl \{ \,
1
-
\frac{ k^2}{ \omega_k } \,
[ \, 1 - 2 \, n( \beta \omega_k ) \, ]
\Bigr \} \>,
\label{u.e:gapA} \\
1
=
\frac{3}{2}
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl \{ \,
1
-
\frac{ k^2 - \mu }{ \omega_k } \,
[ \, 1 - 2 \, n( \beta \omega_k ) \, ]
\Bigr \} \>,
\label{u.e:gapB}
\end{gather}
\end{subequations}
where now $\omega_k^2 = ( k^2 - \mu )^2 + |\Delta|^2$, and we have dropped the bar notation.
The pressure and energy per unit volume are now given by
\begin{subequations}\label{u.e:pu}
\begin{align}
\frac{p}{\rho \, \epsilonF}
&=
\frac{3}{2}
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl \{ \,
\omega_k
-
k^2
+
\mu
-
\frac{ | \Delta |^2 }{ 2 \, k^2 } \,
\label{u.e:pressI} \\
& \qquad\qquad
+
\frac{2}{\beta} \,
\Ln{ 1 + e^{ -\beta \omega_k } } \,
\Bigr \} \>,
\notag \\
\frac{e}{\rho \, \epsilonF}
&=
\mu
-
\frac{3}{2}
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl \{ \,
\omega_k \,
[ \, 1 - 2 n( \beta \omega_k ) \, ]
\label{u.e:energyI} \\
& \qquad\qquad
-
k^2
+
\mu
-
\frac{ | \Delta |^2 }{ 2 \, k^2 } \,
\Bigr \} \>.
\notag
\end{align}
\end{subequations}
Integrating by parts, we have
\begin{align}
&\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\frac{1}{\beta} \,
\Ln{ 1 + e^{ -\beta \omega_k } }
\label{u.e:bypartsI} \\
& \qquad\qquad\qquad
=
\frac{2}{3}
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\frac{ k^2 \, ( k^2 - \mu ) }{ \omega_k } \, n( \beta \omega_k ) \>.
\notag
\end{align}
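This by-parts identity can be verified numerically; the sketch below (parameter values ours) compares the two sides with a simple midpoint rule:

```python
# Numerical check of the integration-by-parts identity, Eq. (bypartsI):
#   Int k^2 dk (1/beta) ln(1 + e^{-beta omega})
#     = (2/3) Int k^2 dk  k^2 (k^2 - mu)/omega * n(beta omega),
# with omega^2 = (k^2 - mu)^2 + Delta^2.  Illustrative parameter values.
import math

BETA, MU, DELTA = 2.0, 0.59, 0.69

def omega(k): return math.hypot(k * k - MU, DELTA)
def n(x): return 1.0 / (math.exp(x) + 1.0)

def midpoint(f, a, b, m=100_000):
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

# integrands decay like e^{-beta k^2}, so truncation at k = 8 is negligible
lhs = midpoint(lambda k: k * k / BETA * math.log1p(math.exp(-BETA * omega(k))),
               0.0, 8.0)
rhs = (2.0 / 3.0) * midpoint(
    lambda k: k**4 * (k * k - MU) / omega(k) * n(BETA * omega(k)), 0.0, 8.0)
print(lhs, rhs)
```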
Substituting this into Eq.~\eqref{u.e:pressI}, the pressure can be written as
\begin{align}
\frac{p}{\rho \, \epsilonF}
&=
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl \{ \,
\frac{3}{2} \,
\Bigl [ \,
\omega_k
-
k^2
+
\mu
-
\frac{ | \Delta |^2 }{ 2 \, k^2 } \,
\Bigr ]
\label{u.e:pressII} \\
& \qquad\qquad
+
2 \,
\frac{ k^2 \, ( k^2 - \mu ) }{ \omega_k } \, n( \beta \omega_k ) \,
\Bigr \} \>.
\notag
\end{align}
For the energy expression, multiply Eq.~\eqref{u.e:gapB} by $\mu$ and substitute the result into Eq.~\eqref{u.e:energyI}. This gives for the energy
\begin{align}
\frac{e}{\rho \, \epsilonF}
&=
\frac{3}{2}
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl \{ \,
-
\frac{ \omega_k^2 + \mu ( k^2 - \mu) }{ \omega_k } \,
[ \, 1 - 2 n( \beta \omega_k ) \, ]
\notag \\
& \qquad\qquad
+
k^2
+
\frac{ | \Delta |^2 }{ 2 \, k^2 } \,
\Bigr \} \>.
\label{u.e:energyII}
\end{align}
Now form the quantity
\begin{align}
&\frac{ 2 e - 3 p }{ \rho \, \epsilonF }
=
3
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl \{ \,
-
\frac{ \omega_k^2 + \mu ( k^2 - \mu) }{ \omega_k }
+
k^2
+
\frac{ | \Delta |^2 }{ 2 \, k^2 }
\notag \\
&
-
\frac{3}{2} \,
\Bigl [ \,
\omega_k
-
k^2
+
\mu
-
\frac{ | \Delta |^2 }{ 2 \, k^2 } \,
\Bigr ]
+
\frac{ 2 | \Delta |^2 }{ \omega_k } \, n( \beta \omega_k ) \,
\Bigr \} \>.
\label{u.e:calcI}
\end{align}
Integrating by parts again, we find
\begin{align}
&
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl [ \,
\omega_k
-
k^2
+
\mu
-
\frac{ | \Delta |^2 }{ 2 \, k^2 } \,
\Bigr ]
\label{u.e:bypartsII} \\
&=
\frac{2}{3}
\int_{0}^{\infty} \!\! k^2 \, \mathrm{d} k \,
\Bigl [ \,
k^2
-
\frac{ k^2 ( k^2 - \mu )}{\omega_k}
-
\frac{ | \Delta |^2 }{ 2 \, k^2 } \,
\Bigr ] \>.
\notag
\end{align}
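This identity, too, can be checked numerically (illustrative parameters, simple quadrature; both integrands fall off like $1/k^2$, so the truncation tails nearly cancel):

```python
# Numerical check of the temperature-independent by-parts identity (bypartsII):
#   Int k^2 dk [ omega - k^2 + mu - Delta^2/(2 k^2) ]
#     = (2/3) Int k^2 dk [ k^2 - k^2 (k^2 - mu)/omega - Delta^2/(2 k^2) ],
# with omega^2 = (k^2 - mu)^2 + Delta^2.  Parameter values are illustrative.
import math

MU, DELTA = 0.59, 0.69

def omega(k): return math.hypot(k * k - MU, DELTA)

def midpoint(f, a, b, m=200_000):
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

KMAX = 200.0   # both tails are O(1/KMAX) and largely cancel in the difference
# multiply through by k^2 so both integrands stay finite at k = 0
lhs = midpoint(lambda k: k * k * (omega(k) - k * k + MU) - DELTA**2 / 2.0,
               0.0, KMAX)
rhs = (2.0 / 3.0) * midpoint(
    lambda k: k**4 * (1.0 - (k * k - MU) / omega(k)) - DELTA**2 / 2.0,
    0.0, KMAX)
print(lhs, rhs)
```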
Inserting this into \eqref{u.e:calcI} gives
\begin{equation}\label{u.e:calcII}
\frac{ 2 e - 3 p }{ \rho \, \epsilonF }
=
3 \, | \Delta |^2
\int_{0}^{\infty} \!\! \mathrm{d} k \,
\Bigl \{ \,
1
-
\frac{ k^2 [ 1 - 2 \, n( \beta \omega_k ) ] }
{ \omega_k } \,
\Bigr \}
=
0 \>,
\end{equation}
where we have used the gap equation \eqref{u.e:gapA}. This shows that in the unitarity limit,
\begin{equation}\label{u.e:purelation}
e
=
\frac{3}{2} \, p \>,
\end{equation}
for all temperatures $T$ (see e.g. Ref.~\onlinecite{r:He:2007vn}). In Fig.~\ref{f:T0theta0-ratio} we show numerically that this relation holds for $T=0$.
Using Eqs.~\eqref{XXX.e:Tzero-u-p} and \eqref{u.e:purelation}, we find the unitarity limit results at zero temperature,
\begin{equation}\label{u.e:energyTzeroUnitarity}
\frac{e}{\rho \, \epsilonF}
=
\frac{3}{5} \, \bar \mu \>,
\end{equation}
in reduced units. At $T=0$ in the unitarity limit, we have $\bar{\mu} = 0.59$. Therefore, introducing the energy per particle
\begin{equation}
\varepsilon = \frac{E}{N} = \frac{e}{\rho}
\>,
\end{equation}
we obtain that at zero temperature, in the unitarity limit,
\begin{equation}
(\varepsilon / \varepsilon_0)_\textrm{LOAF} = 0.59
\>,
\end{equation}
where $\varepsilon_0 = \tfrac{3}{5} \, \epsilonF$ is the energy per particle of the noninteracting Fermi gas.
\section{\label{s:concl}Conclusions}
To summarize, in this paper we derived the auxiliary field formalism for a dilute fermionic atom gas with tunable interactions. This formalism is the fermionic counterpart of a similar auxiliary field formalism introduced recently to describe the properties of a dilute gas of Bose particles~\cite{r:Cooper:2010fk}. We demonstrated that at zero temperature the fermionic LOAF equations are the same as the equations derived by Leggett~\cite{r:Leggett:1980fk}, whereas the finite-temperature results correspond to those discussed earlier by S\'a de~Melo, Randeria, and Engelbrecht~\cite{r:Melo:1993vn,r:Engelbrecht:1997fk}. The LOAF formalism shows that the BCS ansatz represents the \emph{only} physical auxiliary field theory for a dilute Fermi gas. Furthermore, we showed that LOAF satisfies Tan's relation for the momentum distribution of fermions at asymptotically large momenta and Tan's ``adiabatic sweep'' theorem. As in the Bose case, the auxiliary field approach for fermions provides a systematic framework that allows one to improve the LOAF results by calculating the 1-PI action corrections, order by order.
\bigskip
\begin{acknowledgments}
This work was performed in part under the auspices of the U.~S.~Dept.~of Energy. The authors would like to thank the Santa Fe Institute for hospitality during this work. JFD would like to thank LANL for travel support and hospitality.
\end{acknowledgments}
\section{Introduction and Main Results}
In view of the well-known Riemann mapping theorem in classical complex analysis,
the unit disk $\mathbb{D}=\{z\in\mathbb{C}:\,|z|<1\}$ is usually considered
as a standard domain. The analytic functions such as convex, starlike, and close-to-convex
functions defined in the unit disk have been extensively studied and found numerous
applications to various problems in complex analysis and related topics.
Part of this development is the study of subclasses of the class of
univalent functions, more general than the classes of
convex, starlike, and close-to-convex functions. Analytic and geometric characterizations of such functions
are of considerable interest to function theorists in general.
Background knowledge in this theory can be found from standard books
(see for instance \cite{Dur83,Goo83}).
In 1916, Bieberbach posed a conjecture on the coefficient estimates of univalent
functions. This conjecture was a long-standing open problem in univalent function theory and a
challenge to all mathematicians.
In this regard a number of methods and concepts were developed. One of the important concepts
is the {\em Herglotz representation theorem} for analytic functions with positive real part.
Initially, the Bieberbach conjecture
was proved for the first few coefficients of univalent functions. Then the conjecture was considered in
many special cases. In one direction, it was considered for certain subclasses of univalent functions
such as starlike, convex, close-to-convex, and typically real functions. The concept of order for
starlike and convex functions was also introduced, yielding subclasses of the classes of starlike and convex
functions, respectively, and
the conjecture was proved for these subclasses. In another direction, related conjectures, namely the Zalcman conjecture, the Robertson conjecture, and the Littlewood--Paley conjecture, were investigated as approaches to the Bieberbach conjecture.
Finally, the full conjecture for univalent functions was settled by L. de Branges in 1985 \cite{deB85}.
In 1990, Ismail et al. \cite{IMS90} introduced a link between starlike functions
and $q$-theory by introducing a $q$-analog of the starlike functions. We call these
functions $q$-starlike functions.
They proved the Bieberbach conjecture for the $q$-starlike functions through the
Herglotz representation theorem for these functions.
In this connection, we aim to introduce the concept of order of $q$-starlikeness
and prove the corresponding Bieberbach conjecture.
In particular, we also discuss several other basic properties of the class of
$q$-starlike functions of order $\alpha$.
We now collect some standard notations and basic definitions used in the sequel.
We denote by $\mathcal{H}({\mathbb D})$, the set of all analytic (or holomorphic)
functions in ${\mathbb D}$. We use the symbol $\mathcal{A}$ for the class of functions
$f \in \mathcal{H}({\mathbb D})$ with
the standard normalization $f(0)=0=f'(0)-1$. This means that the functions
$f\in\mathcal{A}$ have the power series representation of the form $z+\sum_{n=2}^\infty a_nz^n$.
The principal value of the logarithmic function $\log z$ for $z\neq 0$ is denoted by
${\operatorname{Log}\,} z:=\ln |z|+i {\operatorname{Arg}\,}(z)$, where $-\pi\le {\operatorname{Arg}\,}(z)<\pi$.
For $0<q<1$, {\em the $q$-difference operator} denoted as $D_qf$ is defined by
the equation
\begin{equation}\label{sec1-eqn1}
(D_qf)(z)=\frac{f(z)-f(qz)}{z(1-q)},\quad z\neq 0, \quad (D_qf)(0)=f'(0).
\end{equation}
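For concreteness, the action of $D_q$ on monomials, $D_q z^n = [n]_q z^{n-1}$ with the $q$-bracket $[n]_q = (1-q^n)/(1-q)$, and the limit $D_q \to d/dz$ as $q\to 1^{-}$ can be illustrated in Python (a sketch of ours, not part of the original text):

```python
# Sketch of the q-difference operator (sec1-eqn1) on monomials:
# D_q z^n = [n]_q z^{n-1}, with [n]_q = (1 - q^n)/(1 - q) -> n as q -> 1^-.
def dq(f, z: complex, q: float) -> complex:
    """(D_q f)(z) = (f(z) - f(qz)) / (z (1 - q)) for z != 0."""
    return (f(z) - f(q * z)) / (z * (1.0 - q))

f = lambda z: z**3
z0, q = 0.4 + 0.1j, 0.5
bracket3 = (1.0 - q**3) / (1.0 - q)           # [3]_q = 1 + q + q^2 = 1.75
assert abs(dq(f, z0, q) - bracket3 * z0**2) < 1e-12

# q -> 1^-: D_q f approaches the ordinary derivative f'(z) = 3 z^2.
assert abs(dq(f, z0, 1 - 1e-8) - 3 * z0**2) < 1e-6
print("D_q checks passed")
```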
The operator $D_qf$ plays an important role in the theory of basic hypergeometric series
(see \cite{AS14,And74,Fin88,Sla66}); see also Section~4 in this paper.
It is evident that, when $q\to 1^{-}$, the difference operator $D_qf$ converges to the ordinary
differential operator $Df=df/{dz}=f'$.
A function $f\in \mathcal{A}$ is called starlike of order $\alpha$, $0\le \alpha<1$, if
$${\rm Re}\,\left(\frac{zf'(z)}{f(z)}\right)>\alpha,\quad z\in \mathbb{D}.
$$
We use the notation $\mathcal{S}^*(\alpha)$ for the class of starlike functions of order
$\alpha$. Set $\mathcal{S^*}:=\mathcal{S^*}(0)$, the class of all starlike functions.
One way to generalize the starlike functions of order $\alpha$ is to replace the derivative function $f'$ by the
$q$-difference operator $D_qf$ and replace the right-half plane $\{w:\,{\rm Re}\,w>\alpha\}$ by a suitable
domain in the above definition of the starlike functions of order $\alpha$. The
appropriate definition turned out to be the following:
\begin{definition}\label{main:defn}
A function $f\in\mathcal{A}$ is said to {\em belong to the class $\mathcal{S}^*_q(\alpha)$}, $0\le \alpha<1$, if
$$\left|\frac{\displaystyle\frac{z(D_qf)(z)}{f(z)}-\alpha}{1-\alpha}-\frac{1}{1-q}\right|\leq \frac{1}{1-q}, \quad z\in \mathbb{D}.
$$
\end{definition}
The following is the equivalent form of Definition~\ref{main:defn}.
\begin{equation}\label{main=defn}
f\in\mathcal{S}_q^*(\alpha) \iff \left|\frac{z(D_qf)(z)}{f(z)}-\frac{1-\alpha q}{1-q}\right|
\le \frac{1-\alpha}{1-q}.
\end{equation}
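The equivalence of Definition~\ref{main:defn} and \eqref{main=defn} is elementary (multiply through by $1-\alpha$); the following Python sketch (ours) spot-checks that the two inequalities cut out the same disk:

```python
# Spot-check that, for w = z (D_q f)(z)/f(z), the two conditions
#   |(w - a)/(1 - a) - 1/(1-q)| <= 1/(1-q)
#   |w - (1 - a q)/(1 - q)|     <= (1 - a)/(1 - q)
# agree at random sample points w in the plane.
import random

random.seed(1)
q, a = 0.3, 0.25
for _ in range(1000):
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    c1 = abs((w - a) / (1 - a) - 1 / (1 - q)) <= 1 / (1 - q)
    c2 = abs(w - (1 - a * q) / (1 - q)) <= (1 - a) / (1 - q)
    assert c1 == c2
print("definition forms agree")
```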
Observe that as $q\to 1^{-}$ the closed disk $|w-(1-q)^{-1}|\le (1-q)^{-1}$ becomes the right-half plane
and the class $\mathcal{S}^*_q(\alpha)$ reduces to $\mathcal{S}^*(\alpha)$, $0\le \alpha<1$.
In particular, when $\alpha=0$, the class $\mathcal{S}^*_q(\alpha)$ coincides with the class
$\mathcal{S}^*_q:=\mathcal{S}^*_q(0)$, which was
first introduced by Ismail et al. \cite{IMS90} in 1990 and has since been considered in
\cite{AS14,RS12,Ro92,SS14}.
In words, we call $\mathcal{S}^*_q(\alpha)$ the class of {\em $q$-starlike functions of order $\alpha$}.
The main objective in this paper is to prove the following theorems. The first main theorem describes
the {\em Herglotz Representation} for functions belonging to the class $\mathcal{S}^*_q(\alpha)$ in
the form of a Poisson-Stieltjes integral (see Herglotz Representation Theorem for analytic functions
with positive real part in \cite[pp.~22]{Dur83}).
\begin{theorem}\label{thm2}
Let $f\in \mathcal{A}$. Then $f\in\mathcal{S}^*_q(\alpha)$ if and only if there exists
a probability measure $\mu$ supported on the unit circle such that
$$\frac{zf'(z)}{f(z)}=1+\int_{|\sigma|=1}\sigma z F_{q, \alpha}^{'}(\sigma z)\rm{d}\mu(\sigma)
$$
where
\begin{equation}\label{MainThm1:eq}
F_{q,\alpha}(z)=\displaystyle \sum_{n=1}^\infty \frac{(-2)\left(\ln \frac{q}{1-\alpha(1-q)}\right)}{1-q^n}z^n, \quad z\in {\mathbb D} .
\end{equation}
\end{theorem}
\begin{remark}
When $q$ approaches $1$, Theorem~\ref{thm2} leads to the Herglotz Representation Theorem for
starlike functions of order $\alpha$ (see for instance \cite[Problem~3, pp. 172]{Goo83}).
\end{remark}
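The limit described in the remark can be verified numerically: the Taylor coefficients of $F_{q,\alpha}$ approach those of $-2(1-\alpha){\operatorname{Log}\,}(1-z)$ as $q\to 1^{-}$, so that $z\exp[F_{q,\alpha}(z)]$ approaches the Koebe function $k_\alpha$. A Python sketch (ours):

```python
# Check of the q -> 1^- limit of the coefficients of F_{q,alpha} in
# Eq. (MainThm1:eq):
#   b_n(q) = -2 ln( q / (1 - alpha (1 - q)) ) / (1 - q^n)  ->  2 (1 - alpha)/n,
# the n-th coefficient of -2 (1 - alpha) Log(1 - z).
import math

alpha, q = 0.25, 0.999
for n in range(1, 6):
    b = -2.0 * math.log(q / (1.0 - alpha * (1.0 - q))) / (1.0 - q**n)
    target = 2.0 * (1.0 - alpha) / n
    assert abs(b / target - 1.0) < 0.01
print("coefficients of F_{q,alpha} approach the Koebe coefficients")
```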
Our second main theorem concerns the Bieberbach conjecture problem for functions in $\mathcal{S}^*_q(\alpha)$.
The extremal
function is also explicitly obtained in terms of exponential of the function $F_{q, \alpha}(z)$.
This exponential form generalizes the Koebe function $k_\alpha(z)=z/(1-z)^{2(1-\alpha)}$, $z\in \mathbb{D}$.
That is, when $q\to 1^{-}$, the exponential form $G_{q,\alpha}(z):=z\, \exp [F_{q, \alpha}(z)]$ representing the
extremal function for the class $\mathcal{S}^*_q(\alpha)$ turns into the Koebe function $k_\alpha(z)$.
\begin{theorem}\label{sec2-thm7}
Let
\begin{equation}\label{MainThm2:eq}
G_{q, \alpha}(z):=z\, \exp [F_{q, \alpha}(z)]
=z+\displaystyle \sum_{n=2}^\infty c_n z^n.
\end{equation}
Then $G_{q, \alpha}\in \mathcal{S}^*_q(\alpha)$. Moreover, if $f(z)=z+\sum_{n=2}^\infty a_n z^n\in
\mathcal{S}^*_q(\alpha)$, then $|a_n|\le c_n$ with equality holding for all $n$ if and only if
$f$ is a rotation of $G_{q, \alpha}$.
\end{theorem}
\begin{remark}
When $q$ approaches $1$, Theorem~\ref{sec2-thm7} leads to the Bieberbach conjecture for
starlike functions of order $\alpha$ (see for instance \cite[Theorem~2, pp. 140]{Goo83}).
\end{remark}
Motivation behind this comes from the work of Ismail et al., where the $q$-analog of starlike functions was
introduced in 1990 (see \cite{IMS90}).
The $q$-theory has important role in special functions and quantum physics
(see for instance \cite{And74,Ern02,Fin88,KC02,Kir95,Sla66}). For up-to-date research work
in function theory related to $q$-analysis, readers may refer to \cite{AS14,IMS90,RS12,Ro92,SS14}.
In \cite{IMS90}, the authors have obtained the Herglotz representation for functions of the class
$\mathcal{S}_q^*$ in the following form:
\medskip
\noindent
{\bf Theorem~A.} \cite[Theorem~1.15]{IMS90}
{\em Let $f\in \mathcal{A}$. Then $f\in\mathcal{S}^*_q$ if and only if there exists
a probability measure $\mu$ supported on the unit circle such that
$$\frac{zf'(z)}{f(z)}=1+\int_{|\sigma|=1}\sigma z F_{q}^{'}(\sigma z)\rm{d}\mu(\sigma)
$$
where
$$F_{q}(z)=\displaystyle \sum_{n=1}^\infty \frac{-2\ln q}{1-q^n}z^n, \quad z\in {\mathbb D} .
$$
}
\medskip
\noindent
They also proved the Bieberbach conjecture for $q$-starlike functions in the following form:
\medskip
\noindent
{\bf Theorem~B.} \cite[Theorem~1.18]{IMS90}
{\em Let
$$G_{q}(z):=z\, \exp [F_{q}(z)]
=z+\displaystyle \sum_{n=2}^\infty c_n z^n.
$$
Then $G_{q}\in \mathcal{S}^*_q$. Moreover, if $f(z)=z+\sum_{n=2}^\infty a_n z^n\in
\mathcal{S}^*_q$, then $|a_n|\le c_n$ with equality holding for all $n$ if and only if
$f$ is a rotation of $G_{q}$.
}
\medskip
\noindent
\begin{remark}
It is remarkable that when $\alpha=0$, Theorem~\ref{thm2} and Theorem~\ref{sec2-thm7} coincide
with Theorem~A and Theorem~B, respectively.
\end{remark}
Section~2 is devoted to basic properties of the class $\mathcal{S}^*_q(\alpha)$, which are
used in the proofs of the main theorems. In Section~3, we prove our main results.
The order of $q$-starlikeness of the basic hypergeometric functions is discussed in Section~4.
Finally, we conclude in Section~5 with a few questions for future research in this direction.
\section{Properties of the class $\mathcal{S}^*_q(\alpha)$}
As a matter of fact, the following proposition says that a function $f$ in $\mathcal{S}^*_q(\alpha)$
can be obtained in terms of a function $g$ in $\mathcal{S}^*_q$. The proof is immediate
from the definition of $\mathcal{S}^*_q(\alpha)$, $0\le \alpha< 1$.
\begin{proposition}\label{sec1-prop1}
Let $f \in \mathcal{S}^*_q(\alpha)$. Then there exists a unique function $g \in \mathcal{S}^*_q$ such that
\begin{equation}\label{prop1-eqn}
\frac{\displaystyle\frac{z(D_qf)(z)}{f(z)}-\alpha}{1-\alpha}=\frac{z(D_qg)(z)}{g(z)}
~~\mbox{ or }~~
\frac{f(qz)-\alpha qf(z)}{(1-\alpha)f(z)}=\frac{g(qz)}{g(z)}
\end{equation}
holds. Similarly, for a given function $g\in \mathcal{S}_q^*$ there exists
$f\in \mathcal{S}_q^*(\alpha)$ satisfying the above relation. Uniqueness follows trivially.
\end{proposition}
Next we present an easy characterization of functions in the class $\mathcal{S}^*_q(\alpha)$.
It shows that if $f\in \mathcal{S}^*_q(\alpha)$ then $f(z) = 0$ implies $z = 0$; otherwise
$f(qz)/f(z)$ would have a pole at a zero of $f(z)$ of least nonzero modulus.
\begin{theorem}\label{sec2-thm1}
Let $f\in \mathcal{A}$. Then $f\in\mathcal{S}^*_q(\alpha)$ if and only if
$$\left|\frac{f(qz)}{f(z)}-\alpha q\right|\leq 1-\alpha, \quad z\in \mathbb{D}.
$$
\end{theorem}
\begin{proof}
The proof can be easily obtained from the fact
$$
\frac{z(D_qf)(z)}{f(z)}= \left(\frac{1}{1-q}\right)\left(1-\frac{f(qz)}{f(z)}\right)
$$
and the definition of $\mathcal{S}^*_q(\alpha)$.
\end{proof}
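As an illustration of this criterion, one can test it numerically for $f(z)=z/(1-z)$, which is classically starlike of order $1/2$; that it also lies in $\mathcal{S}^*_q(1/2)$ for the sampled value of $q$ is a claim we verify numerically here, not a statement taken from the text:

```python
# Sampling check of the criterion |f(qz)/f(z) - alpha q| <= 1 - alpha
# for f(z) = z/(1 - z) with q = alpha = 1/2, on a grid near |z| = 1.
import cmath, math

q, alpha = 0.5, 0.5
f = lambda z: z / (1 - z)
worst = 0.0
for j in range(720):
    z = 0.999 * cmath.exp(1j * math.pi * j / 360)
    worst = max(worst, abs(f(q * z) / f(z) - alpha * q))
assert worst <= 1 - alpha
print(worst)
```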
The next result is an immediate consequence of Theorem~\ref{sec2-thm1}.
\begin{corollary}
The class $\mathcal{S}^*_q(\alpha)$ satisfies the inclusion relation
$$\bigcap_{q<p<1}\mathcal{S}^*_p(\alpha)\subset \mathcal{S}^*_q(\alpha)
~~\mbox{ and }~~
\bigcap_{0<q<1}\mathcal{S}^*_q(\alpha) = \mathcal{S}^*(\alpha).
$$
\end{corollary}
\begin{proof}
The inclusions
$$\bigcap_{q<p<1}\mathcal{S}^*_p(\alpha)\subset \mathcal{S}^*_q(\alpha)
~~\mbox{ and }~~
\bigcap_{0<q<1}\mathcal{S}^*_q(\alpha) \subset \mathcal{S}^*(\alpha)
$$
clearly hold. It remains to show that
$$\mathcal{S}^*(\alpha) \subset \bigcap_{0<q<1}\mathcal{S}^*_q(\alpha)
$$
holds. For this, we let $f\in \mathcal{S}^*(\alpha)$. Then it is enough
to show that $f\in \mathcal{S}^*_q(\alpha)$ for all $q\in (0,1)$.
Since $f\in \mathcal{S}^*(\alpha)$ there exists a unique $g\in \mathcal{S}^*$
satisfying
$$\frac{\displaystyle\frac{zf'(z)}{f(z)}-\alpha}{1-\alpha}=\frac{zg'(z)}{g(z)},\quad |z|<1.
$$
Since $\mathcal{S}^*=\cap_{0<q<1}\mathcal{S}^*_q$, it follows that $g\in \mathcal{S}^*_q$ for all $q\in(0,1)$.
Thus, by Proposition~\ref{sec1-prop1} there exists a unique $h\in \mathcal{S}_q^*(\alpha)$ satisfying the
identity (\ref{prop1-eqn}) with $h(z)=f(z)$. The proof now follows immediately.
\end{proof}
We now define two sets and collect some basic results that will be used
to prove the main results of this section. The sets are
$$B_q=\{g:g\in \mathcal{H}({\mathbb D}),~g(0)=q \mbox{ and } g:{\mathbb D} \to {\mathbb D}\}
~~\mbox{ and }~~
B_q^0=\{g:g\in B_q \mbox{ and } 0\notin g({\mathbb D}) \}.
$$
\begin{lemma}{\label{lm2}}
If $h\in B_q$ then the infinite product
$\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}$ converges uniformly on compact subsets of ${\mathbb D}$.
\end{lemma}
\begin{proof}
We set $(1-\alpha)h(z)+\alpha q=g(z)$. Since $h \in B_q$, it easily follows that $g \in B_q$.
By \cite[Lemma~2.1]{IMS90}, the conclusion of our lemma follows.
\end{proof}
\begin{lemma}{\label{lm3}}
If $h\in B_q^0$ then the infinite product $\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}$ converges
uniformly on compact subsets of ${\mathbb D}$ to a nonzero function in $\mathcal{H}({\mathbb D})$ with no zeros. Furthermore, the function
\begin{equation}\label{eq3}
f(z)=\frac{z}{\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}}
\end{equation}
belongs to $\mathcal{S}^*_q(\alpha)$ and $h(z)=((f(qz)/f(z))-\alpha q)/(1-\alpha)$.
\end{lemma}
\begin{proof}
The convergence of the infinite product is proved in Lemma \ref{lm2}.
Since $h\in B_q^0$, we have $h(z)\neq 0$ in
${\mathbb D}$ and the infinite product does not vanish in ${\mathbb D}$. Thus, the function $f\in \mathcal{A}$ and
we find the relation
$$\frac{f(qz)}{f(z)}=(1-\alpha)h(z)+\alpha q, ~~\mbox{ equivalently }~~
\frac{\displaystyle\frac{f(qz)}{f(z)}-\alpha q}{1-\alpha}=h(z) .
$$
Since $h\in B_q^0$, we get $f\in \mathcal{S}^*_q(\alpha)$ and the proof of our lemma is complete.
\end{proof}
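Lemma~\ref{lm3} can be illustrated numerically (this sketch is ours, not part of the original text): truncating the infinite product in (\ref{eq3}) for a sample $h\in B_q^0$, the functional equation $f(qz)/f(z)=(1-\alpha)h(z)+\alpha q$ should hold to high accuracy. The particular choice of $h$ below is an assumption made for illustration only.

```python
import cmath

def f_truncated(z, h, q, alpha, N=200):
    """Truncated product representation (3):
    f(z) = z / prod_{n=0}^{N-1} ((1-alpha)*h(z*q**n) + alpha*q)/q."""
    prod = 1.0 + 0j
    for n in range(N):
        prod *= ((1 - alpha) * h(z * q**n) + alpha * q) / q
    return z / prod

# A sample h in B_q^0 (our choice): analytic, zero-free, h(0) = q,
# and |h(z)| <= q*e^{0.2} < 1 on the unit disk for q = 0.5.
q, alpha = 0.5, 0.3
h = lambda z: q * cmath.exp(0.2 * z)

z = 0.4 + 0.3j
ratio = f_truncated(q * z, h, q, alpha) / f_truncated(z, h, q, alpha)
target = (1 - alpha) * h(z) + alpha * q   # expected value of f(qz)/f(z)
err = abs(ratio - target)
```

Because the product telescopes, the truncation error decays like $q^N$, so with $N=200$ the agreement is at machine precision.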
We define two classes $B_{q,\alpha}$ and $B_{q,\alpha}^0$ by
$$B_{q,\alpha}=\left\{g: g\in \mathcal{H}({\mathbb D}),~g(0)=\frac{q}{1-\alpha(1-q)}\mbox{ and } g:{\mathbb D} \to {\mathbb D} \right\}
$$
and
$$
B_{q,\alpha}^0=\{g:g\in B_{q,\alpha} \mbox{ and } 0\notin g({\mathbb D}) \}.
$$
\begin{lemma}\label{lm}
A function $g\in B_{q,\alpha}^0$ if and only if it has the representation
\begin{equation}\label{eq4}
g(z)=\exp\left\{\left(\ln\frac{q}{1-\alpha(1-q)}\right)p(z)\right\},
\end{equation}
where $p(z)$ belongs to the class
$$
\mathcal{P}=\{p: p\in \mathcal{H}({\mathbb D}), p(0)=1 \mbox{ and } {\operatorname{Re}\,} \{p(z)\}\ge 0 \mbox{ for } z\in{\mathbb D}\}.
$$
\end{lemma}
\begin{proof}
For $g\in B_{q,\alpha}^0$, define the function $L(z)={\operatorname{Log}\,} g(z)$. Then it is easy to show
that the function $p(z)=\displaystyle\frac{L(z)}{\ln \frac{q}{1-\alpha(1-q)}}$ belongs to $\mathcal{P}$ and that (\ref{eq4}) holds.
Conversely, if $g$ is given by (\ref{eq4}), then it is obvious that $g\in B_{q,\alpha}^0$.
\end{proof}
\begin{theorem}\label{thm1}
The mapping $\rho:\mathcal{S}^*_q(\alpha) \to B_q^0$ defined by
$$\rho(f)(z)=\frac{\displaystyle\frac{f(qz)}{f(z)}-\alpha q}{1-\alpha}
$$
is a bijection.
\end{theorem}
\begin{proof}
For $ h \in B_q^0 $, define a mapping $\sigma:\,B_q^0 \to \mathcal{A}$ by
$$ \sigma(h)(z)=\frac{z}{\prod_{n=0}^\infty \{((1-\alpha)h(zq^n)+\alpha q)/q\}}.
$$
It is clear from Lemma~\ref{lm3} that $\sigma(h) \in \mathcal{S}^*_q(\alpha)$ and $(\rho\circ \sigma)(h)=h$. Considering the composition $\sigma\circ \rho$ we compute that
$$(\sigma\circ \rho)(f)(z)=\frac{z}{\prod_{n=0}^\infty \{f(zq^{n+1})/(qf(zq^n))\}}=\frac{z}{z/f(z)}=f(z).
$$
Hence $\sigma\circ \rho$ and $\rho\circ \sigma$ are identity mappings, so $\sigma$
is the inverse of $\rho$; that is, $\rho$ is invertible and hence a bijection. This completes the proof of our theorem.
\end{proof}
\section{Proof of the main theorems}
This section is devoted to the proofs of the main theorems, using the supplementary results proved in Section~2.
\begin{figure}[H]
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=5.5cm]{F1by21by2z.pdf}
\vskip 0.1cm \hskip 0.4cm
Graph of $F_{1/2,1/2}(z)$, $|z|<1$
\end{minipage}
\begin{minipage}[b]{0.35\textwidth}
\includegraphics[width=5.5cm,height=8.9cm]{F5by61by2z.pdf}
\vskip 0.1cm \hskip 0.4cm
Graph of $F_{5/6,1/2}(z)$, $|z|<1$
\end{minipage}
\vskip 0.3cm
\caption{Graphs of the complex functions $F_{q,\alpha}(z)$ for $|z|<1$.}\label{Fz}
\end{figure}
\begin{proof}[\bf Proof of Theorem~\ref{thm2}]
For $0<q<1$ and $0\le \alpha<1$, let $F_{q, \alpha}$ be defined by (\ref{MainThm1:eq}).
The geometry of $F_{q, \alpha}$ is illustrated in Figure~\ref{Fz} for different choices
of the parameters $q$ and $\alpha$.
Suppose that $f\in \mathcal{S}^*_q(\alpha)$.
Then by Theorem~\ref{thm1} and Lemma~\ref{lm3}, it is clear that $f(z)$ has the
representation (\ref{eq3}) with $h\in B_q ^0$. The logarithmic derivative of $f$ gives
\begin{equation}\label{eq5}
\frac{zf'(z)}{f(z)}=1-\sum_{n=0}^\infty \frac{(1-\alpha)zq^nh'(zq^n)}{(1-\alpha)h(zq^n)+\alpha q}.
\end{equation}
Now, let us assume that
$$
g(z)=\frac{(1-\alpha)h(z)+\alpha q}{1-\alpha(1-q)}.
$$
Clearly, $g\in B_{q,\alpha}^0$ and hence Lemma~\ref{lm} guarantees that $g(z)$ has the
representation (\ref{eq4}). Taking the logarithmic derivative of $g$ we have
\begin{equation}\label{eq6}
\frac{zg'(z)}{g(z)}=\left(\ln\frac{q}{1-\alpha(1-q)}\right)zp'(z),
\end{equation}
where ${\operatorname{Re}\,} \{p(z)\}\ge 0$. By Herglotz representation of $p(z)$, there exists a probability measure
$\mu$ supported on the unit circle $|\sigma|=1$ such that
\begin{equation}\label{eq7}
zp'(z)=\int_{|\sigma|=1}2\sigma z(1-\sigma z)^{-2}d\mu(\sigma).
\end{equation}
Using (\ref{eq6}) and (\ref{eq7}) in (\ref{eq5}), we have
\begin{eqnarray*}
\frac{zf'(z)}{f(z)}&=& 1-2\left(\ln\frac{q}{1-\alpha(1-q)}\right)\sum_{n=0}^\infty
\int_{|\sigma|=1}\sigma z q^n(1-\sigma zq^n)^{-2}d\mu(\sigma)\\
&=& 1-2\left(\ln\frac{q}{1-\alpha(1-q)}\right)\int_{|\sigma|=1}
\left\{\sum_{n=0}^\infty\sum_{m=1}^\infty m{\sigma}^m z^m q^{mn}\right\}d\mu(\sigma)\\
&=& 1-2\left(\ln\frac{q}{1-\alpha(1-q)}\right)\int_{|\sigma|=1}\left\{\sum_{m=1}^\infty
m{\sigma}^m z^m\frac{1}{1-q^m}\right\}d\mu(\sigma)\\
&=&1+\int_{|\sigma|=1}\sigma z F_{q, \alpha}^{'}(\sigma z)\,d\mu(\sigma).
\end{eqnarray*}
This completes the proof of our theorem.
\end{proof}
\begin{figure}[H]
\begin{minipage}[b]{0.5\textwidth}
\includegraphics[width=6cm,height=8.15cm]{G1by21by2z.pdf}
\vskip 0.1cm \hskip 0.6cm
Graph of $G_{1/2,1/2}(z)$, $|z|<1$
\end{minipage}
\begin{minipage}[b]{0.4\textwidth}
\includegraphics[width=6.5cm]{G5by61by2z.pdf}
\vskip 0.1cm \hskip 1cm
Graph of $G_{5/6,1/2}(z)$, $|z|<1$
\end{minipage}
\vskip 0.3cm
\caption{Graphs of the complex functions $G_{q,\alpha}(z)$ for $|z|<1$.}\label{Gz}
\end{figure}
\begin{proof}[\bf Proof of Theorem~\ref{sec2-thm7}]
For $0<q<1$ and $0\le \alpha<1$, let $G_{q, \alpha}$ be defined by (\ref{MainThm2:eq}).
The geometry of the mapping $G_{q, \alpha}$ is illustrated in Figure~\ref{Gz} for different choices
of the parameters $q$ and $\alpha$.
As a special case of Theorem~\ref{thm2}, when the measure is a unit point mass, it is clear that
$G_{q, \alpha}\in \mathcal{S}^*_q(\alpha)$. Let $f\in \mathcal{S}^*_q(\alpha)$.
Then by Theorem~\ref{thm1}, the function
$h(z)=\rho(f)(z)=\displaystyle\left(\frac{f(qz)}{f(z)}-\alpha q\right)/(1-\alpha)$ belongs to $B_q ^0$.
Since $h\in B_q ^0$, $g(z)=((1-\alpha)h(z)+\alpha q)/(1-\alpha(1-q))\in B_{q,\alpha} ^0 $.
By Lemma~\ref{lm}, $g(z)$ has the representation (\ref{eq4}) and on solving we get,
\begin{equation}\label{eq8}
\frac{f(qz)}{f(z)}=(1-\alpha(1-q))\exp\left\{\left(\ln\frac{q}{1-\alpha(1-q)}\right)p(z)\right\}.
\end{equation}
Define the function $\phi(z)={\operatorname{Log}\,}\{f(z)/z\}$ and set
\begin{equation}\label{eq9}
\phi(z)={\operatorname{Log}\,}\frac{f(z)}{z}=\sum_{n=1}^\infty \phi_n z^n.
\end{equation}
Substituting (\ref{eq9}) into (\ref{eq8}) and taking logarithms, we get
$$
\ln\frac{q}{1-\alpha(1-q)}+\phi(qz)=\phi(z)+\left(\ln\frac{q}{1-\alpha(1-q)}\right)p(z).
$$
This implies
$$
\phi_n=p_n\left(\ln\frac{q}{1-\alpha(1-q)}\right)/(q^n-1).
$$
Since $|p_n|\le 2$, we have
$$
|\phi_n|\le \frac{2}{1-q^n}\ln\frac{1-\alpha(1-q)}{q}.
$$
From this inequality, together with the expression of $G_{q, \alpha}(z)$ and (\ref{eq9}), the conclusion follows.
\end{proof}
\section{Order of $q$-starlikeness of $z\Phi[a,b;c;q,z]$}
The basic hypergeometric function is associated with the
{\em Watson symbol} $(a;q)_n$ (also called {\em the $q$-shifted factorial}), $n\ge 0$. The Watson symbol is defined by
$$(a;q)_n=(1-a)(1-aq)(1-aq^2)\cdots (1-aq^{n-1})=\prod_{k=0}^\infty \frac{1-aq^k}{1-aq^{k+n}}, \quad (a;q)_0=1
$$
for all real or complex values of $a$.
In the unit disk $\mathbb{D}$, the {\em basic hypergeometric series} (also called the {\em Heine hypergeometric series})
is defined by
$$\sum_{n=0}^\infty \frac{(a;q)_n(b;q)_n}{(c;q)_n(q;q)_n}z^n
= 1+\frac{(1-a)(1-b)}{(1-c)(1-q)}z+\frac{(1-a)(1-aq)(1-b)(1-bq)}{(1-c)(1-cq)(1-q)(1-q^2)}z^2+\cdots,
$$
where $|q|<1$ and $a,b,c$ are real or complex parameters with $(c;q)_n\neq 0$; the series converges in $\mathbb{D}$.
The corresponding
functions are denoted by $\Phi[a,b;c;q,z]$ and are referred to as the
{\em basic (or Heine) hypergeometric functions} \cite{AAR99,Sla66}.
The function $z\Phi[a,b;c;q,z]$ is called the {\em shifted basic hypergeometric function.}
The limit
$$\lim_{q\to 1^-}\frac{(q^a;q)_n}{(q;q)_n}=\frac{a(a+1)\cdots (a+n-1)}{n!}
$$
shows that, with the substitution $a\mapsto q^a$ (and similarly for $b$ and $c$), the Heine hypergeometric function reduces to the well-known
Gauss hypergeometric function $F(a,b;c;z)$ as $q$ approaches $1^-$.
For basic properties of Heine's hypergeometric series, readers may refer to \cite{GR90}.
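As a quick numerical illustration of these definitions (ours, not part of the original text), the Watson symbol and a truncation of the Heine series can be coded directly; the q-binomial theorem, which covers the special case $a=c$, provides a consistency check.

```python
def qpoch(a, q, n):
    """Watson symbol (a;q)_n = (1-a)(1-a*q)...(1-a*q**(n-1))."""
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

def heine_phi(a, b, c, q, z, N=400):
    """Truncation of the Heine series Phi[a,b;c;q,z]."""
    return sum(qpoch(a, q, n) * qpoch(b, q, n) * z**n
               / (qpoch(c, q, n) * qpoch(q, q, n))
               for n in range(N))

# q-binomial theorem check: for a = c the series collapses to
# sum_n (b;q)_n/(q;q)_n z^n = (b*z;q)_inf / (z;q)_inf.
q, b, z = 0.5, 0.3, 0.4
lhs = heine_phi(0.7, b, 0.7, q, z)
rhs = qpoch(b * z, q, 300) / qpoch(z, q, 300)   # truncated infinite products
```

For $0<q<1$ the factors of $(a;q)_n$ tend to $1$ geometrically, so the truncations above converge quickly.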
K\"{u}stner \cite{Kus02} studied the order of starlikeness for functions $f\in \mathcal{A}$
by introducing the quantity
$$\sigma(f):=\inf_{z\in \mathbb{D}}{\rm Re}\,\left(\frac{zf'(z)}{f(z)}\right)\in [-\infty,1].
$$
Certainly, $\sigma(f)=1$ for the identity function. K\"{u}stner showed in \cite{Kus02} that
the well-known shifted Gauss hypergeometric functions $zF(a,b;c;z)$ have order of
starlikeness $-\infty$ under certain constraints on the real parameters $a,b,c$.
For $f\in\mathcal{A}$, let us now define the quantity
$$\sigma_q(f)=\inf_{z\in {\mathbb D}} {\operatorname{Re}\,} \left(\frac{z(D_qf)(z)}{f(z)}\right) \in [-\infty,1].
$$
We call this quantity the order of $q$-starlikeness of the function $f$.
Clearly, $\sigma_q(f)=1$ for $f(z)=z$. Note that $\lim_{q\to 1^-}\sigma_q(f)=\sigma(f)$.
We consider the shifted basic hypergeometric functions introduced by
Heine \cite{Hei46} and study the $q$-analog of their order of starlikeness.
Basic background knowledge on the order of starlikeness of the well-known Gauss hypergeometric
functions can be found in \cite{HPV10,Kus02,MM90,Pon97,PV01,Sil93}.
In Theorem~\ref{sec3-thm1} we find the order of $q$-starlikeness of shifted basic hypergeometric functions $z\Phi[a,b;c;q,z]$.
\begin{theorem}\label{sec3-thm1}
Let $a,b,c$ be non-negative real numbers with $0<1-aq< 1-cq$ and $0<1-b< 1-c$.
For $0<q<1$ and $r \in (0,1]$,
the function $z\mapsto z\Phi[a,b;c;q,rz]$ has the order of $q$-starlikeness
$$\sigma_q(z\Phi[a,b;c;q,rz])=1+\rho q\frac{(1-a)(1-b)}{(1-c)(1-q)}\,\frac{\Phi[aq,bq;cq;q,\rho]}
{\Phi[a,b;c;q,\rho]}
$$
where
$$ \rho=-r \mbox{ if } \frac{q(1-a)}{a(1-q)}=:s > 0 \mbox{ and } \rho=r \mbox{ if } s < 0.
$$
In particular, we have
$$1+\frac{s\rho}{1-\rho}\le \sigma_q(z\Phi[a,b;c;q,rz]) \le 1+\frac{\rho s(1-b)}{2(1-c)}.
$$
\end{theorem}
\begin{remark}
The case $s<0$ with $r=1$ in Theorem~\ref{sec3-thm1} is considered in the limiting sense.
In this case, the lower bound $1+\displaystyle\frac{sr}{1-r}$ is equal to $-\infty$.
\end{remark}
\begin{proof}[\bf Proof of Theorem~\ref{sec3-thm1}]
Set $\Phi(z)=\Phi[a,b;c;q,z]$ and $f(z)=z \Phi(z)$.
Now, by (\ref{sec1-eqn1}) we have
\begin{eqnarray*}
(D_qf)(z)&=&\frac{\Phi(z)-q\Phi(qz)}{1-q}\\
&=&\frac{\Phi(z)-q\Phi(z)+q\Phi(z)-q\Phi(qz)}{1-q}\\
&=&\frac{\Phi(z)(1-q)+q(\Phi(z)-\Phi(qz))}{1-q}\\
&=&\Phi(z)+zq(D_q \Phi)(z).
\end{eqnarray*}
Hence,
\begin{equation}\label{eqn0.2}
w=\frac{z(D_qf)(z)}{f(z)}=1+zq\frac{(1-a)(1-b)}{(1-c)(1-q)}\,\frac{\Phi[aq,bq;cq;q,z]}
{\Phi[a,b;c;q,z]},
\end{equation}
where the last equality holds by \cite[1.12(ii), pp. 27]{GR90}. Recall the difference equation stated in \cite{AS14}, which is equivalent to
\begin{equation}\label{sec3-eq}
\frac{\Phi[aq,bq;cq;q,z]}{\Phi[a,b;c;q,z]}=\frac{(1-c)}{a(1-b)z} \left[\frac{\Phi[aq,b;c;q,z]}{\Phi[a,b;c;q,z]}-1\right] .
\end{equation}
Substituting this ratio in (\ref{eqn0.2}), we get
$$w = 1+s\left[\frac{\Phi[aq,b;c;q,z]}{\Phi[a,b;c;q,z]}-1\right]
= 1-s+s\frac{\Phi[aq,b;c;q,z]}{\Phi[a,b;c;q,z]},
$$
where $s$ is defined in the statement of our theorem with $q \in (0,1)$.
It follows from \cite{AS14} that $w$ has an integral representation
\begin{equation}\label{eqn0.3}
w=1-s+s\int_0^1 \frac{1}{1-tz}\mbox{d}\mu(t),
\end{equation}
with the non-negative real numbers $a,b,c$ satisfying the conditions
$0\le 1-aq \le 1-cq$ and $0<1-b<1-c$.
Now, for $ s>0$, $r\in (0,1]$ and from equation (\ref{eqn0.3}) it follows that the minimum of
${\operatorname{Re}\,} w$ for $|z|\le r$ is attained at the point $z=-r$ and that the minimum is
$1-\displaystyle \frac{rs}{(1+r)}$ .
Secondly, for $s<0$, $r\in (0,1]$ and from equation (\ref{eqn0.3}),
it follows that the minimum of ${\operatorname{Re}\,} w$ for $|z|\le r$ is attained at
the point $z=r$ and that the minimum is $1+\displaystyle \frac{rs}{(1-r)}$.
This in combination with (\ref{eqn0.2}), yields the order of $q$-starlikeness of
$z\Phi[a,b;c;q,rz]$.
The upper estimate for ${\rm Re}\,w$
follows from (\ref{eqn0.2}) and an integral representation of the ratio
${\Phi[aq,bq;cq;q,z]}/{\Phi[a,b;c;q,z]}$ obtained in \cite[Theorem~2.13]{AS14}.
Hence, the conclusion of our theorem follows.
\end{proof}
\begin{remark}
Making the substitutions $a\to q^a$, $b\to q^b$ and $c\to q^c$, and taking the limit
as $q\to 1^-$, we achieve the result of K\"ustner \cite[Theorem~1.1]{Kus02}.
\end{remark}
\begin{remark}
If $f\in \mathcal{S}_q^*(\alpha)$, $0\le \alpha <1$, the order of $q$-starlikeness for $f$
can be equivalently defined by the quantity
$$\sigma_q(f):=\inf_{z\in \mathbb{D}} \left \{\frac{1}{1+q}\left(1+q {\operatorname{Re}\,} w -
\sqrt{(1-q {\operatorname{Re}\,} w)^2 -2(1-q){\operatorname{Re}\,} w+(1-q^2)|w|^2}\right)\right \},
$$
where $w=\displaystyle\frac{z}{f(z)}(D_qf)(z)$.
Also, one can prove that $\sigma_q(f)\ge \alpha$.
Squaring the inequality given in (\ref{main=defn}) on both the
sides, we get
$$f\in\mathcal{S}_q^*(\alpha) \iff \alpha^2(1+q)-2\alpha(1+q{\rm Re}\,w)
+2{\rm Re}\,w-(1-q)|w|^2\ge 0.
$$
Solving the inequality for $\alpha$, we obtain the required
order, $\sigma_q(f)$, of $q$-starlikeness of functions $f\in\mathcal{S}_q^*(\alpha)$.
Since for all $f\in \mathcal{S}_q^*(\alpha)$, ${\operatorname{Re}\,} w\ge \alpha$ and the lower bound
$\alpha$ is attained whenever ${\operatorname{Re}\,} w=|w|$, we get
$$\sigma_q(f)=\inf_{z\in {\mathbb D}} {\operatorname{Re}\,} w=\alpha.
$$
\end{remark}
\section{Concluding remarks}
At the beginning of the last century, studies on $q$-difference equations appeared in intensive
works especially by Jackson \cite{Jac10}, Carmichael \cite{Car12}, Mason \cite{Mas15}, Adams \cite{Ada29},
Trjitzinsky \cite{Trj33}, and later by others such as Poincar\'{e}, Picard, Ramanujan. Unfortunately,
from the thirties up to the beginning of the eighties only non-significant interest in this area was
investigated. Recently some research in this topic is carried out by Bangerezako \cite{Ban08};
see also references therein for other related work.
Research works in connection with function theory and $q$-theory together were first introduced
by Ismail and et. al. \cite{IMS90}. Later it is also studied in \cite{Ro92,RS12,SS14,AS14}.
Since only few work have been carried out in this direction, as indicated in \cite{AS14},
there are a lot can be done. For instance, $q$-analog of convexity
of analytic functions in the unit disk and even more general in arbitrary
simply connected domains may be interesting for researchers in this field.
Recently, the concept of $q$-convexity for basic hypergeometric functions is
considered in \cite{BS14}.
Bieberbach conjecture problem for $q$-close-to-convex functions is
estimated optimally in a recent paper \cite{SS14}. In fact sharpness of this result is
still an open problem and concerning this, a conjecture is stated there.
\vskip 1cm
\noindent
{\bf Acknowledgement.} The work of the first author is supported by University
Grants Commission, New Delhi (grant no. F.2-39/2011 (SA-I)). The authors would like to thank the referee for his/her
careful reading of the manuscript and valuable suggestion.
\section{Introduction}
\label{intro}
Direct laser subwavelength micromachining is currently a challenging topic. Photonic jets have already demonstrated the ability to reduce the laser etching size beyond the diffraction limit using micro--beads \cite{Abdurrochman,Munzer,Wu,Guo,Grojo,Mcleod} or, more recently, using shaped optical fiber tips \cite{Zelgowski,Pierron,Pierron2}.
A photonic jet (PJ) is a highly concentrated propagative light beam with a full width at half maximum (FWHM) smaller than the diffraction limit \cite{Chen,Lecler,Itagi,Li}. The power density can be more than 200 times higher than that of the incident wave \cite{Abdurrochman,Lecler,Heifetz}.
To achieve a PJ, shaped fiber tips are obviously easier to move than a microsphere and, therefore, to implement in an industrial process. Moreover, the fiber tips have no contact with the processed surface and are not altered by the removed material \cite{Zelgowski,Pierron,Pierron2}. Until now, only sub--micro PJ etching was reported. Micro--peak formation was reported for processes using femtosecond laser pulses on thin metal and silicon films \cite{Her,Ivanov,Kuznetsov,Unger,Zhu,Kuznetsov2}. Micro--peaks were also generated using nanosecond laser pulses on thin gold film \cite{Moening,Moening2}, but never on silicon bulk.
In this paper, we report for the first time the possibility to achieve direct micro--peak surface texturing using nanosecond pulses. Taking advantage of the photonic jet at the exit of a shaped optical fiber tip, peaks with a FWHM of around 1~$\mu$m, a height of almost half a micrometer and an apex radius of a few tens of nanometers were repeatably achieved on a silicon wafer. Surfaces with micro--peaks can have a wide range of applications. For example, a few square millimeters of surface textured with micro--spikes has shown a decrease in wettability.
\section{Experiment details}
\label{sec:1}
The laser source is a commercial near--infrared pulsed laser (VGEN ISP 1-40-30) emitting at 1064~nm with pulses of 100~ns at a repetition rate of 35~kHz. The output beam has a diameter of 6~mm at 1/e$^2$ and a beam quality factor (M$^2$) of 1.3. This corresponds to a quasi--Gaussian beam profile. An achromatic doublet, with a focal length of 19~mm, couples the laser beam into the fiber, whose position is controlled by XYZ microstages.
The fiber system is a multimode step--index silica fiber with a core diameter of 100 $\mu$m and a cladding diameter of 140 $\mu$m. The numerical aperture (NA) is 0.22. The tip shape is numerically described by a B\'ezier curve set by a base radius ($a$~=~50~$\mu$m), a tip length ($b$ = 63 $\mu$m) and a B\'ezier weight ($w_0$~=~1) \cite{Zelgowski}. It has been designed to achieve a photonic jet of around 1 $\mu$m at a distance of around 100 $\mu$m from the tip when excited by the fundamental mode (Fig.~\ref{fig:fiber_tip}~a). The numerical method is described in \cite{Zelgowski,Pierron,Pierron2}. The tip has been fabricated by LovaLite using an electric--arc thermoforming technique \cite{Borsuk}. The resulting tip has a base radius $a$ slightly larger due to the cladding: $a~\simeq$~80~$\mu m$ (Fig.~\ref{fig:fiber_tip}~b).
\begin{figure*}
\includegraphics[width=1\textwidth]{TE_sim_fiber_tip.png}
\caption{a) Simulation of the electrical field norm through and outside of the shaped silica fiber tip (a = 50 $\mu$m, b = 63 $\mu$m, $w_0$ = 1) excited by the fundamental mode of the fiber. b) Optical microscope view of the shaped fiber tip (a = 80 $\mu$m, b = 63 $\mu$m, $w_0$ = 1).}
\label{fig:fiber_tip}
\end{figure*}
The shaped optical fiber tip is placed orthogonal to the sample. The fiber position is controlled by a Z--motorized stage and the sample position by XY--motorized stages. The distance between the tip and the sample is controlled by a camera with a 5x telecentric objective, thanks to an image processing based on the reflection of the tip in the sample. The laser, the motorized stages and the camera processing are controlled by LabView. The experiments are carried out under ambient atmospheric conditions. The power at the outputs of the laser and the fiber tip has been measured with an Ophir Vega calorimeter with a 12A--P sensor.
In the following, we call working distance the distance between the tip end and the maximum of intensity of the photonic jet. Experimentally, when both the laser pulse energy and the distance between the tip and the sample vary, the working distance corresponds to the distance for which the smallest mark is achieved with the smallest energy. It is intrinsic to the fiber tip and was determined experimentally in our previous work to be 100 $\pm$ 2 $\mu$m for the tip shown in Fig. \ref{fig:fiber_tip}~b) \cite{Pierron2}, not far from the 113 $\mu$m predicted by simulation (Fig.\ref{fig:fiber_tip}~a).
Our experiments have shown that this working distance is enough to avoid the re--deposition of melted material on the fiber tip if material is ablated. It also helps ensure the tip integrity during the motion control: no shock, and easier control due to the 2 $\mu$m PJ positioning tolerance. From simulations, we can give some general rules: (1) A photonic jet with the same FWHM (full width at half maximum) can be obtained at a larger working distance using a larger optical fiber core. However, in this case more energy is required to achieve the same process; namely, it is not so easy to couple energy into the fundamental mode. (2) For a given fiber radius, the working distance generally increases with the B\'ezier weight ($w_0$) or with decreasing tip length ($b$). However, the photonic jet FWHM then increases. Therefore this tip was a good compromise.
The sample is a monocrystalline silicon wafer with a passivation layer (thickness of approximately 2 nm) and an initial roughness of 2 $\pm$ 1 nm. For each laser irradiation, 35 pulses have been used. Before and after the laser process, the sample was cleaned and dried with alcohol and dry air.
The micro--peaks have been characterized with two different methods: a white light interferometric microscope and an atomic force microscope (AFM). The interferometric system based on coherence scanning interferometry was a Zygo NewView 7200 profilometer with an axial resolution of 3 nm and a lateral resolution of 550 nm (50x Mirau objective, NA~=~0.55).
The AFM, a Park Systems XE--70 isolated inside an acoustic enclosure, worked in the non--contact mode. The field size used was 20x20 $\mu$m with a lateral resolution of 39 nm and an axial resolution of 2 nm.
The wettability of the flat and fakir's bed of nails--like textured sample was measured with a Kr\"uss Drop Shape Analyser DSA25. Static contact angles were measured with around 10 $\mu$L distilled water droplets by the sessile drop method. A computation method based on the analysis of the droplet shape determined by the Laplace--Young model was used.
\section{Results and discussion}
\label{sec:results_discussion}
A 110x100 $\mu$m matrix with peaks every 5 $\mu$m has been achieved on silicon. In Fig. \ref{fig:peaks_matrix}, the 3D view obtained by the interferometric microscope shows that micro--peak machining by PJ with a shaped optical fiber tip is a repeatable process. The measured mean height for 30 peaks was 354 $\pm$ 3 nm, the mean FWHM was 1 $\pm$ 0.6 $\mu$m and the maximum height was 590 $\pm$ 3 nm. Thus, peaks have a width larger than their height. Note that, due to the scales, the aspect ratio of the micro--peaks is different from what appears in Fig. \ref{fig:profile_zygo}.
\begin{figure*}
\includegraphics[width=1\textwidth]{Zygo3D.png}
\caption{3D view with Zygo profilometer. a) 110x110 $\mu$m matrix on silicon with peaks every 5 $\mu$m; 35 pulses for each PJ peaks; pulse energy of 30~$\mu$J --- b) Zoom. Note that the unit lengths are not the same in the transverse plane and for height.}
\label{fig:peaks_matrix}
\end{figure*}
The PJ micro--peaks were obtained with 35 pulses, which corresponded to our minimum controllable number of pulses: the laser source has a minimum repetition rate of 35 kHz and its minimum controllable shot time is 1~ms. The energy per pulse was 30~$\mu$J. Peak formation occurs only when the pulse energy is just under the ablation threshold (36 $\mu$J in our experimental conditions \cite{Pierron2}). For comparison purposes, the dimensions of a PJ ablation have been measured. An example of ablation with 35 pulses with a pulse energy of 36~$\mu$J has been achieved (cf. Fig. \ref{fig:profile_zygo}~b)). The PJ ablation is a sub--micro ablation with a depth of 456~$\pm$~3~nm and a FWHM of 0.9 $\pm$ 0.6 $\mu$m. An example of a PJ peak from the matrix is presented (cf. Fig. \ref{fig:profile_zygo}~a)). The PJ peak is a sub--micro peak with a height of 403~$\pm$~3~nm and a FWHM of 1.3~$\pm$~0.6~$\mu$m. Hence, the PJ peak has a FWHM of the same order of magnitude (around 1~$\mu$m) as the FWHM of the PJ ablation, which is also the width of the PJ.
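To put these energies in perspective, a rough peak-fluence estimate can be made by spreading the 30~$\mu$J pulse over the measured 1~$\mu$m PJ spot. This is a back-of-the-envelope top-hat model of ours, not a value reported in the text, and it assumes all of the pulse energy reaches the PJ spot.

```python
import math

# Values taken from the text; the top-hat spot model is our simplification.
E_pulse = 30e-6      # J, pulse energy used for peak formation
fwhm = 1e-6          # m, photonic-jet FWHM

radius = fwhm / 2                        # treat the PJ as a 1 um top-hat spot
area_cm2 = math.pi * radius**2 * 1e4     # m^2 -> cm^2
fluence = E_pulse / area_cm2             # J/cm^2
```

Under this crude model the fluence comes out in the kJ/cm$^2$ range; the actual PJ profile is far from a top hat and only part of the energy is concentrated in the jet, so this is an order-of-magnitude figure at best.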
If a different distance between the tip and the sample is used, as illustrated in Fig. \ref{fig:profile_zygo}~c) with 110~$\mu$m, no peaks are generated anymore. The affected area (around 17~$\mu$m) has several maxima and minima, and the highest maximum (around 120~nm) is not as high as the peaks and has a FWHM of 4 $\mu$m. This confirms the role of the PJ in the peak generation.
\begin{figure*}
\includegraphics[width=1 \textwidth]{Zygo_profiles.png}
\caption{Examples of profiles on silicon with the Zygo microscope: a) a micro--peak; 35 pulses; pulse energy of 30 $\mu$J; tip-sample distance of 100 $\mu$m --- b) an ablation; 35 pulses; pulse energy of 36 $\mu$J; tip-sample distance of 100 $\mu$m --- c) damage area without peak; pulse energy of 30 $\mu$J; tip-sample distance of 110 $\mu$m. Note that the unit lengths are not the same in the x and y axes.}
\label{fig:profile_zygo}
\end{figure*}
In order to confirm the interferometer results, a micro--peak has been measured by AFM (cf. Fig. \ref{fig:peak_afm}). A height of 335~$\pm$~2~nm and a FWHM of 1.330 $\pm$ 0.040 $\mu$m have been measured (cf.~Fig.~\ref{fig:peak_profile_afm}). The micro--peak has a quasi-conical shape with an apex radius of 14~$\pm$~2~nm. Thus, with a small deviation (inferior to 10 \%), the AFM measurement confirms the interferometer profiles.
\begin{figure*}
\includegraphics[width=0.75\textwidth]{peak_afm.png}
\caption{3D view with the AFM. Example of a micro--peaks on silicon. Height of 335~$\pm$~2~nm, width 1.330~$\pm$~0.040~$\mu$m, apex radius of 14~$\pm$~2~nm -- 35 pulses -- pulse energy of 30 $\mu$J.}
\label{fig:peak_afm}
\end{figure*}
\begin{figure}
\includegraphics[width=0.48\textwidth]{peak_afm_profile.png}
\caption{Example of a micro--peak profile on silicon with the AFM. Height of 335~$\pm$~2~nm, width of 1.330~$\pm$~0.040~$\mu$m, apex radius of 14~$\pm$~2~nm; 35 pulses; pulse energy of 30 $\mu$J.}
\label{fig:peak_profile_afm}
\end{figure}
The physical mechanisms of the micro--peak formation are hypothetical. A similar phenomenon is the micro--peak formation on thin metal films by femtosecond lasers \cite{Ivanov,Kuznetsov,Unger,Kuznetsov2}. The absorption processes are very different; however, the properties of the PJ (characteristic sizes) allow interaction with a volume of matter similar to the femtosecond laser case. In both cases, the micro--peaks are generated by fluences just below the ablation threshold. Independently of the peaks, under nanosecond laser irradiation, melt dynamics is a dominant mechanism; femtosecond irradiation induces more complex dynamics including thermoplastic deformation \cite{Kuznetsov}. In silicon, as with femtosecond pulses, the formation of the micro--peaks could apparently be due to the hydrodynamic flow of molten material governed by surface--tension forces. The mechanism stems from the thermal expansion of the heated solid part of the material. This thermal expansion induces stresses, which produce forces directed perpendicular to the solid--liquid interface. These forces push the melted material towards the center of the irradiated region. The peak formation would be arrested by the surface--tension forces and material solidification \cite{Kuznetsov,Kuznetsov2}.
An interest in microstructured surfaces with peaks has arisen thanks to the superhydrophobic effect \cite{Nayak,Hairaye}. As an application example, a 5 x 5 mm matrix with micro--peaks every 50 $\mu$m, called a fakir's bed of nails surface, has been achieved on silicon. A wettability test has been carried out on the non--textured surface and the textured surface (cf. Fig.~\ref{fig:hydrophobia}) with a drop of water. When the contact angle is below 90 degrees, the surface is considered hydrophilic; above 90 degrees, the surface is hydrophobic \cite{Young,Wenzel,Cassie}.
The contact angle of the initial surface was 39.3 $\pm$ 0.9 degrees, whereas that of the textured surface increased to 42.8 $\pm$ 0.8 degrees. The silicon surface has become less hydrophilic.
\begin{figure*}
\includegraphics[width=1\textwidth]{hydrophobia.png}
\caption{A drop of water ($\simeq$ 10 $\mu$L) on: a) a non--textured silicon wafer; contact angle of 39.3 $\pm$ 0.9 degrees -- b) the silicon wafer with micro-peaks, contact angle of 42.8 $\pm$ 0.8 degrees.}
\label{fig:hydrophobia}
\end{figure*}
\section{Conclusion}
Direct micro--spike machining by photonic jet using nanosecond laser pulses has been observed. Repeatable micro--peaks have been fabricated by 35 pulses of 100 nanoseconds at 1064~nm with a photonic jet generated in the vicinity of a shaped optical fiber tip. They are obtained on silicon with 30~$\mu$J per pulse, slightly under the ablation threshold (36~$\mu$J). A hypothesis is that the micro--peaks are formed due to the hydrodynamic flow of molten material governed by surface--tension forces. This will be investigated. The potential of these micro--peaks for reducing the hydrophilicity of silicon has been illustrated.
\paragraph{Acknowledgment}
The authors are grateful to Camille Hairaye (ICube Laboratory, France) for her technical assistance in the use of the characterization system of wettability.
\section{Introduction}
The recent calculations of nucleon parton distributions within
the chiral quark soliton model (CQSM) exclusively utilize the so-called
Pauli-Villars regularization scheme [1-6]. This is to be contrasted with the
fact that most of the past calculations of the nucleon static observables
were carried out using the proper-time regularization scheme [7-8].
There are some reasons for this. The first reason is mainly technical.
For obtaining parton distributions, one needs to evaluate nucleon matrix
elements of quark bilinear operators which are nonlocal in two
space-time coordinates. The problem is that we have no unanimous idea
about how to generalize the proper-time scheme for the regularization of
such unusual quantities. The second, more positive reason for using the
Pauli-Villars regularization scheme has been advocated by Diakonov
et al. [1,2].
They emphasize that this regularization scheme preserves certain general
properties of parton distributions such as positivity, factorization
properties, sum rules, etc., which are easily violated by other regularization
schemes like the proper-time one.
Recently, there was a controversial debate on the stability of
soliton solutions in the CQSM regularized with the Pauli-Villars
subtraction scheme [10,11].
It seems that the problem has been settled by now,
since stable soliton solutions do exist provided that the
Pauli-Villars regularization is applied to the quark sea only, not to
the discrete bound state sometimes called the valence
quark orbital. Unfortunately, this is not the end of the story.
In fact, soliton solutions of the CQSM with use of the Pauli-Villars
regularization scheme were obtained many years ago by D\"{o}ring
et al. [12]. (To be more precise, the model used by them is not the
CQSM but the Nambu-Jona-Lasinio model. In fact, they were forced to
impose an {\it ad hoc} nonlinear constraint on the scalar and
pseudoscalar meson fields at a later stage of the manipulation.
Otherwise, they would not have obtained any convergent solutions [13].)
The fact that the single-subtraction Pauli-Villars scheme cannot
regularize the vacuum quark condensate was already noticed in
an earlier paper [14] as well as in ref. [12] itself.
To remove this divergence, which is necessary for obtaining a finite
gap equation, D\"{o}ring et al. propose to add some counter terms,
which depend on the meson fields, to the original effective action.
It is very important to
recognize that this procedure is not workable within the CQSM, since
their counter terms reduce to mere constants under the chiral circle
condition which we impose from the very beginning. Thus, one must
conclude that the simplest Pauli-Villars scheme with the
single-subtraction term is unable to fully get rid of the divergence of
the vacuum quark condensate at least in the nonlinear model.
One should take this fact seriously, because
it brings about trouble also in the physics of the soliton sector.
To understand it, one has only to remember the
fact that the scalar quark density appearing in the soliton equation
of motion is expected to approach a finite and nonzero value
characterizing the vacuum quark condensate as the distance from
the soliton center becomes large [15].
This necessarily means that the
scalar quark density appearing in the soliton equation of motion
cannot be free from divergences either.
The purpose of the present study is then twofold. On the one hand,
we want to show that the single-subtraction Pauli-Villars scheme is
not a fully satisfactory regularization scheme, and that at least one more
subtraction term is necessary for a consistent regularization
of the effective theory. This will be demonstrated through the
formal discussion given in Sec. II and also through the explicit
numerical results shown in Sec. III.A. On the other hand, we also want to know
the regularization-scheme dependence of the CQSM through the comparative
analysis of typical static observables of the nucleon predicted by the two
regularization schemes, i.e. the Pauli-Villars one and the
proper-time one. The discussion on this second issue will
be given in Sec. III.B. We then summarize our conclusions in Sec. IV.
\vspace{4mm}
\section{Pauli-Villars regularization scheme}
We begin with the effective lagrangian of the chiral quark model
with an explicit chiral symmetry breaking term as
\begin{equation}
{\cal L}_{CQM} = {\cal L}_0 + {\cal L}^\prime ,
\end{equation}
where ${\cal L}_0$ denotes the chiral symmetric part [16] given by
\begin{eqnarray}
{\cal L}_0 = \bar{\psi} \,(\,i \not\!\partial -
M U^{\gamma_5} (x) \,) \psi ,
\end{eqnarray}
with
\begin{equation}
U^{\gamma_5} (x) = e^{\,i \gamma_5 \mbox{\boldmath $\tau$}
\cdot \mbox{\boldmath $\pi$} (x) / f_\pi \,} =
\frac{1 + \gamma_5}{2} \,U(x) + \frac{1 - \gamma_5}{2} \,U^\dagger (x),
\end{equation}
while
\begin{equation}
{\cal L}^\prime = \frac{1}{4} f_\pi^2 m_\pi^2 \,\mbox{tr}
( U(x) + U^\dagger (x) - 2),
\end{equation}
is thought to simulate a small deviation from the chiral symmetric limit.
Here the trace in (4) is to be taken with respect to flavor indices.
(One could have taken an alternative choice that introduces explicit
chiral-symmetry-breaking effects in the form of a quark mass term.
We did not do so, because it turns out that this form of the action
cannot be regularized consistently with the Pauli-Villars
subtraction method.)
The idea of the Pauli-Villars regularization can most easily be
understood by examining the form of the effective meson action
derived from (1) with the help of the standard derivative expansion :
\begin{equation}
S_{eff} [U] = S_f [U] + S_m [U] ,
\end{equation}
where
\begin{eqnarray}
S_f [U] &=& - \,i \,N_c \,\mbox{Sp} \log ( i \! \not\!\partial -
M U^{\gamma_5} ) \nonumber \\
&=& \int d^4 x \,\{ 4 N_c M^2 I_2 (M)
\,\mbox{tr} ( \partial_\mu U \partial^\mu U^\dagger ) +
\mbox{higher derivative terms} \} ,\\
S_m [U] &=& \int d^4 x \,\frac{1}{4} f_\pi^2 m_\pi^2
\,\mbox{tr} ( U(x) + U^\dagger (x) - 2) .
\end{eqnarray}
In eq.(6), the coefficient
\begin{equation}
I_2 (M) \equiv - i \int \frac{d^4 k}{{(2 \pi)}^4}
\frac{1}{{(k^2 - M^2)}^2} ,
\end{equation}
of the pion kinetic term diverges logarithmically. In fact, by
introducing an ultraviolet cutoff momentum $\alpha$ that should eventually
be made infinite, one finds that
\begin{equation}
I_2 (M) \sim \frac{1}{16 \pi^2} \{ \ln \alpha^2 - \ln M^2 - 1 \} .
\end{equation}
This logarithmic divergence can be removed if one introduces a
regularized action as follows :
\begin{equation}
S_{eff}^{reg} [U] = S_f^{reg} [U] + S_m [U] ,
\end{equation}
where
\begin{equation}
S_f^{reg} [U] \equiv S_f [U] - {\left( \frac{M}{M_{PV}} \right)}^2
S_f^{M_{PV}} [U] .
\end{equation}
Here $S_f^{M_{PV}}$ is obtained from $S_f [U]$ with $M$ replaced by
the Pauli-Villars regulator mass $M_{PV}$. Further requiring that the
above regularized action reproduces correct normalization for the pion
kinetic term, one obtains the condition :
\begin{equation}
\frac{N_c M^2}{4 \pi^2} \ln {\left( \frac{M_{PV}}{M} \right)}^2 =
f_\pi^2 ,
\end{equation}
which can be used to fix the regulator mass $M_{PV}$. Once the
effective action is regularized, the static soliton energy should be a
finite functional of the soliton profile $F(r)$ under the standard
hedgehog ansatz $U (\mbox{\boldmath $x$}) =
\exp [ i \mbox{\boldmath $\tau$}
\cdot \hat{\mbox{\boldmath $r$}} F(r) ]$. Since the soliton equation
of motion is obtained from the stationary condition of the static
energy against the variation of $F(r)$, everything seems to be going
well with the above single-subtraction Pauli-Villars regularization
procedure. Unfortunately, this is not the case. To understand what the
problem is, we first recall the fact that the scalar quark density
appearing in the soliton equation of motion is expected to approach a
finite and nonzero constant characterizing the vacuum quark condensate
as the distance from the soliton center becomes large [15].
(This is a natural consequence of our demand that both the soliton
($B=1$) and the vacuum ($B=0$) sectors must be described by the same (or
a single) equation of motion.) On the other hand,
it has been known that the vacuum quark condensate contains quadratic
divergences that cannot be removed by the single-subtraction
Pauli-Villars scheme [12,14]. This then indicates that the scalar quark
density appearing in the soliton equation of motion cannot be free
from divergences either.
To get rid of all the troublesome divergences, we propose here to
increase the number of subtraction terms, thereby starting with the
following action :
\begin{equation}
S_{eff}^{reg} [U] = S_f^{reg} [U] + S_m [U] ,
\end{equation}
where
\begin{equation}
S_f^{reg} [U] \equiv S_f [U] - \sum_{i = 1}^{N}
c_i S_f^{\Lambda_i} [U] ,
\end{equation}
with $N$ being the number of subtraction terms. The logarithmic divergence
of the original action is removed if the condition
\begin{equation}
1 - \sum_{i = 1}^{N} c_i {\left( \frac{\Lambda_i}{M} \right)}^2
= 0
\end{equation}
is fulfilled. Similarly, the normalization condition (12) is
replaced by
\begin{equation}
\frac{N_c M^2}{4 \pi^2} \sum_{i = 1}^N c_i
{\left( \frac{\Lambda_i}{M} \right)}^2 \ln
{\left( \frac{\Lambda_i}{M} \right)}^2 = f_\pi^2 .
\end{equation}
The single-subtraction Pauli-Villars scheme corresponds to taking
$N = 1, \Lambda_1 = M_{PV}$, and $c_1 = {( M / M_{PV} )}^2$.
This is naturally the simplest case that satisfies both conditions
(15) and (16).
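As a numerical cross-check (a minimal Python sketch; $M = 400$ MeV and $f_\pi = 93$ MeV are the parameter values used later in Sec. III), in the $N = 1$ case condition (15) forces $c_1 = (M/M_{PV})^2$, and condition (16) can then be inverted for the regulator mass in closed form:

```python
import math

# Parameters as used in Sec. III: M = 400 MeV, f_pi = 93 MeV, N_c = 3
Nc, M, fpi = 3, 400.0, 93.0

# For N = 1, eq. (15) gives c1 = (M/M_PV)^2, so eq. (16) collapses to
#   (Nc M^2 / 4 pi^2) ln(M_PV/M)^2 = f_pi^2,
# which can be inverted for the regulator mass:
M_PV = M * math.exp(2.0 * math.pi**2 * fpi**2 / (Nc * M**2))
print(M_PV)  # ≈ 570.9 MeV
```

The result reproduces the value $M_{PV} = 570.86$ MeV quoted in Sec. III.A.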
To derive the soliton equation of motion, we must first write down a
regularized expression for the static soliton energy.
Under the hedgehog ansatz $\mbox{\boldmath $\pi$}
(\mbox{\boldmath $x$}) = f_\pi \hat{\mbox{\boldmath $r$}} F(r)$ for
the background pion fields, it is obtained in the form :
\begin{equation}
E_{static}^{reg} [F(r)] = E_f^{reg} [F(r)] + E_m [F(r)] ,
\end{equation}
where the meson part is given by
\begin{equation}
E_m [F(r)] = - f_\pi^2 m_\pi^2 \int d^3 x \left(
\cos F(r) - 1 \right) ,
\end{equation}
while the fermion (quark) part is given as
\begin{equation}
E_f^{reg} [F(r)] = E_{val} + E_{vp}^{reg} ,
\end{equation}
with
\begin{eqnarray}
E_{val} &=& N_c E_0 \\
E_{vp}^{reg} &=& N_c \sum_{n < 0} \left( E_n - E_n^{(0)} \right) -
\sum_{i = 1}^N c_i \,N_c \sum_{n < 0} \left( E_n^{\Lambda_i} -
E_n^{(0) \Lambda_i} \right) .
\end{eqnarray}
Here $E_n$ are the quark single-particle energies, given as the
eigenvalues of the static Dirac hamiltonian in the background pion
fields :
\begin{equation}
H \,| \,n > = E_n \,| \,n > ,
\end{equation}
with
\begin{equation}
H = \frac{\mbox{\boldmath $\alpha$} \cdot \nabla}{i}
+ \beta M
\left( \cos F(r) + i
\gamma_5 \mbox{\boldmath $\tau$} \cdot
\hat{\mbox{\boldmath $r$}} \sin F(r) \right) ,
\end{equation}
while the energies $E_n^{(0)}$ denote the eigenvalues of the
vacuum hamiltonian given by eq.(23) with $F(r) = 0$ or $U = 1$.
Eq.(19) means that the quark part of the static energy is given as
a sum of the contribution of the discrete bound-state level and
that of the negative energy Dirac continuum. The latter part
is regularized by subtracting from the Dirac sea contribution a
linear combination of the corresponding sum evaluated with the
regulator mass $\Lambda_i$ instead of the dynamical quark mass.
($E_n^{\Lambda_i}$ in these subtraction terms are the eigenenergies
of the Dirac hamiltonian (23) with $M$ replaced by $\Lambda_i$
and with the same background pion field.)
Now the soliton equation of motion is obtained from the stationary
condition of $E_{static}^{reg} [F(r)]$ with respect to the variation
of the profile function $F(r)$ :
\begin{eqnarray}
0 &=& \frac{\delta E_{static} [F(r)]}{\delta F(r)} \nonumber \\
&=& 4 \pi r^2 \left\{ - M \left[
S(r) \sin F(r) - P(r) \cos F(r) \right] +
f_\pi^2 m_\pi^2 \sin F(r) \right\} ,
\end{eqnarray}
which gives
\begin{equation}
F(r) = \arctan \left( \frac{P(r)}{S(r) -
\frac{f_\pi^2 m_\pi^2}{M}} \right) .
\end{equation}
Here $S(r)$ and $P(r)$ are regularized scalar and pseudoscalar
densities given as
\begin{eqnarray}
S(r) &=& S_{val} (r) + \sum_{n < 0} S_n (r) -
\sum_{i = 1}^N c_i \frac{\Lambda_i}{M} \sum_{n < 0}
S_n^{\Lambda_i} (r) , \\
P(r) &=& P_{val} (r) + \sum_{n < 0} P_n (r) -
\sum_{i = 1}^N c_i \frac{\Lambda_i}{M} \sum_{n < 0}
P_n^{\Lambda_i} (r) ,
\end{eqnarray}
with
\begin{eqnarray}
S_n (r) &=& \frac{N_c}{4 \pi} \int d^3 x
< n | \mbox{\boldmath $x$} > \,\gamma^0 \,
\frac{\delta ( | \mbox{\boldmath $x$} | - r )}{r^2}
< \mbox{\boldmath $x$} | n > , \\
P_n (r) &=& \frac{N_c}{4 \pi} \int d^3 x
< n | \mbox{\boldmath $x$} > \,i \gamma^0 \gamma_5 \,
\mbox{\boldmath $\tau$} \cdot \hat{\mbox{\boldmath $r$}} \,
\frac{\delta ( | \mbox{\boldmath $x$} | - r )}{r^2}
< \mbox{\boldmath $x$} | n > ,
\end{eqnarray}
and $S_{val} (r) = S_{n = 0} (r)$ and $P_{val} (r) = P_{n = 0} (r)$,
while $S_n^{\Lambda_i} (r)$ and $P_n^{\Lambda_i} (r)$ are the
corresponding densities evaluated with the regulator mass $\Lambda_i$
instead of the dynamical quark mass $M$.
As usual, a self-consistent soliton solution is obtained in an
iterative way. First, assuming an appropriate (though arbitrary)
soliton profile $F(r)$, the eigenvalue problem of the Dirac hamiltonian
is solved. Using the resultant eigenfunctions and their associated
eigenenergies, one can calculate the regularized scalar and
pseudoscalar quark densities $S(r)$ and $P(r)$. Eq.(25) can then be
used to obtain a new soliton profile $F(r)$. The whole procedure above
is repeated with this new profile $F(r)$ until the self-consistency
is fulfilled.
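The iteration just described can be sketched in code. The following toy loop (Python; the density functionals and the constant standing in for $f_\pi^2 m_\pi^2 / M$ are invented for illustration and are {\it not} the model's actual Dirac-sea densities) shows the structure of the self-consistency cycle built around eq.(25):

```python
import numpy as np

r = np.linspace(0.01, 5.0, 200)   # radial grid (arbitrary units)
pion_term = 0.1                   # stand-in for f_pi^2 m_pi^2 / M

def densities(F):
    """Invented smooth stand-ins for the regularized S(r), P(r)."""
    S = 2.0 + np.cos(F)           # hypothetical scalar density
    P = np.sin(F) * np.exp(-r)    # hypothetical pseudoscalar density
    return S, P

F = np.pi * np.exp(-r)            # initial guess: F(0) ~ pi, F(inf) ~ 0
for it in range(100):
    S, P = densities(F)
    F_new = np.arctan(P / (S - pion_term))   # profile update, eq. (25)
    if np.max(np.abs(F_new - F)) < 1e-10:    # self-consistency reached
        break
    F = F_new
print(it, np.max(np.abs(F)))
```

In this toy the map contracts to the trivial profile $F \equiv 0$; in the actual model the spectrum of the hamiltonian (23) supplies the nontrivial densities that sustain the soliton.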
Now we recall an important observation made before. The scalar quark density
$S (r)$ at the spatial infinity $r = \infty$ with respect to the
soliton center should coincide with the scalar quark density in the
vacuum ($B = 0$) sector, which is nothing but the familiar vacuum
quark condensate (per unit volume) ${\langle \bar{\psi}
\psi \rangle}_{vac}$. That is, the following simple relation must hold :
\begin{equation}
{\langle \bar{\psi} \psi \rangle}_{vac} \ = \ \frac{1}{V}
\int S (r = \infty) \,d^3 r \ = \ S(r = \infty) .
\end{equation}
(Later, this relation will be checked numerically.)
What we must do now is to find necessary conditions for the subtraction
constants $c_i$ and the regulator masses $\Lambda_i$ in the multi-subtraction Pauli-Villars
scheme to make the vacuum quark condensate finite.
This can be achieved by examining the expression of the vacuum quark
condensate obtained consistently with the soliton
equation of motion :
\begin{equation}
M {\langle \bar{\psi} \psi \rangle}_{vac}^{reg} =
M {\langle \bar{\psi} \psi \rangle}_{vac} - \sum_{i = 1}^N c_i
\left( \frac{\Lambda_i}{M} \right) \Lambda_i
{\langle \bar{\psi} \psi \rangle}_{vac}^{\Lambda_i} ,
\end{equation}
or equivalently
\begin{equation}
{\langle \bar{\psi} \psi \rangle}_{vac}^{reg} =
{\langle \bar{\psi} \psi \rangle}_{vac} - \sum_{i = 1}^N c_i
{\left( \frac{\Lambda_i}{M} \right)}^2
{\langle \bar{\psi} \psi \rangle}_{vac}^{\Lambda_i} ,
\end{equation}
where
\begin{equation}
{\langle \bar{\psi} \psi \rangle}_{vac} = - 4 N_c M
\int \frac{d^3 k}{{(2 \pi)}^3} \frac{1}{E_k^{(0)}} ,
\end{equation}
with $E_k^{(0)} = {( k^2 + M^2)}^{1 / 2}$, while
${\langle \bar{\psi} \psi \rangle}_{vac}^{\Lambda_i}$ are obtained
from ${\langle \bar{\psi} \psi \rangle}_{vac}$ with the replacement
of $M$ by $\Lambda_i$. Using the integration formula
\begin{equation}
\int^\alpha \frac{d^3 k}{{(2 \pi)}^3} \frac{1}{\sqrt{k^2 + M^2}} =
\frac{1}{8 \pi^2} \left\{ 2 \alpha^2 - M^2 \ln \alpha^2 +
(1 - 2 \ln 2) M^2 + M^2 \ln M^2 \right\} ,
\end{equation}
with $\alpha$ being an ultraviolet cutoff momentum, we obtain
\begin{eqnarray}
{\langle \bar{\psi} \psi \rangle }_{vac}^{reg} &=&
- \frac{N_c M}{2 \pi^2}
\Biggl\{ \left[ 1 - \sum_{i = 1}^N c_i
{\left( \frac{\Lambda_i}{M} \right)}^2 \right] \cdot 2 \alpha^2
\ - \ \left[ M^2 - \sum_{i = 1}^N c_i
{\left( \frac{\Lambda_i}{M} \right)}^2 \Lambda_i^2 \right]
\cdot \ln \alpha^2 \nonumber \\
&+& \left[ M^2 - \sum_{i = 1}^N c_i
{\left( \frac{\Lambda_i}{M} \right)}^2 \Lambda_i^2 \right]
\cdot ( 1 - 2 \ln 2 ) \ + \ M^2 \ln M^2 - \sum_{i = 1}^N c_i
{\left( \frac{\Lambda_i}{M} \right)}^2 \Lambda_i^2 \ln \Lambda_i^2
\Biggr\} , \ \ \ \ \
\end{eqnarray}
which clearly shows that ${\langle \bar{\psi} \psi \rangle}_{vac}$
contains quadratic and logarithmic divergences as $\alpha$ goes to
infinity. These divergences can respectively be removed if the
subtraction constants are chosen to satisfy the following conditions :
\begin{eqnarray}
M^2 - \sum_{i = 1}^N c_i \Lambda_i^2 &=& 0 , \\
M^4 - \sum_{i = 1}^N c_i \Lambda_i^4 &=& 0 .
\end{eqnarray}
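The divergence structure just isolated rests on the cutoff integral (34); a quick numerical sketch (Python, with illustrative cutoff values) compares direct quadrature with the closed form, which agree up to the $O(M^4/\alpha^2)$ terms dropped in (34):

```python
import math
from scipy.integrate import quad

M = 400.0  # MeV, dynamical quark mass
for alpha in (2000.0, 8000.0):  # cutoff momenta, MeV (illustrative)
    # direct quadrature: int d^3k/(2 pi)^3 1/sqrt(k^2 + M^2), |k| < alpha
    num, _ = quad(lambda k: k**2 / math.sqrt(k**2 + M**2), 0.0, alpha)
    num /= 2.0 * math.pi**2
    # closed form of eq. (34); the separate logs of alpha^2 and M^2
    # combine into a dimensionless ratio
    cf = (2*alpha**2 - M**2*math.log(alpha**2)
          + (1 - 2*math.log(2))*M**2 + M**2*math.log(M**2)) / (8*math.pi**2)
    print(alpha, num, cf)   # agree to O(M^4/alpha^2)
```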
Using the second of these conditions, eq.(37), the finite part of ${\langle
\bar{\psi} \psi \rangle}_{vac}$ can also be expressed as
\begin{eqnarray}
{\langle \bar{\psi} \psi \rangle}_{vac}
= \frac{N_c M^3}{2 \pi^2} \sum_{i = 1}^N \,c_i \,
{\left( \frac{\Lambda_i}{M} \right)}^4
\ln {\left( \frac{\Lambda_i}{M} \right)}^2 .
\end{eqnarray}
It is now obvious that the single-subtraction Pauli-Villars
scheme cannot satisfy both conditions (36) and (37) simultaneously.
Although the quadratic divergence may be removed, the logarithmic
divergence remains in ${\langle \bar{\psi} \psi \rangle}_{vac}$
and consequently also in $S(r = \infty)$ in view of the relation (30).
To get rid of both of these divergences, we need at least two subtraction
terms, which contain four parameters $c_1, c_2$ and $\Lambda_1, \Lambda_2$.
The strategy for fixing these parameters is as follows. First by solving
the two equations (36) and (37) with $N = 2$ for $c_1$ and $c_2$, we obtain
\begin{eqnarray}
c_1 &=& \ \,\,\,{\left(\frac{M}{\Lambda_1}\right)}^2
\frac{\Lambda_2^2 - M^2}{\Lambda_2^2 - \Lambda_1^2} , \\
c_2 &=& - \,{\left(\frac{M}{\Lambda_2}\right)}^2
\frac{\Lambda_1^2 - M^2}{\Lambda_2^2 - \Lambda_1^2} ,
\end{eqnarray}
which constrains the values of $c_1$ and $c_2$, once $\Lambda_1$
and $\Lambda_2$ are given. For determining $\Lambda_1$ and $\Lambda_2$,
we can then use two conditions (16) and (38), which amounts to adjusting the
normalization of the pion kinetic term and the value of vacuum quark
condensate.
\section{Numerical Results and Discussion}
\subsection{Single- versus double-subtraction Pauli-Villars regularization}
The most important parameter of the CQSM is the dynamical quark
mass $M$, which plays the role of the quark-pion coupling constant thereby
controlling basic soliton properties. Throughout the present investigation,
we use the value $M = 400 \,\mbox{MeV}$ favored by previous
analyses of static baryon observables.
In the case of single-subtraction Pauli-Villars scheme, the regulator
mass $M_{PV}$ is uniquely fixed to be $M_{PV} = 570.86 \,\mbox{MeV}$ by
using the normalization condition (12) for the pion kinetic term,
and there is no other adjustable parameter in the model. In the case
of double-subtraction Pauli-Villars scheme, we have four regularization
parameters $c_1, c_2, \Lambda_1$, and $\Lambda_2$. From the
divergence-free conditions (36) and (37), $c_1$ and $c_2$ are constrained as
(39) and (40), while $\Lambda_1$ and $\Lambda_2$ are determined from
(16) and (38) with $f_\pi = 93 \,\mbox{MeV}$ and
${\langle \bar{\psi} \psi \rangle}_{vac} = - \,{(286.6 \,\mbox{MeV})}^3$.
In spite of their nonlinearity, the two conditions
(16) and (38) are found to uniquely fix the two parameters $\Lambda_1$ and
$\Lambda_2$ within the physically acceptable range of parameters.
The solution that we found is
\begin{equation}
c_1 = 0.445, \hspace{6mm} c_2 = -0.00612, \hspace{6mm}
\Lambda_1 = 630.01 \,\mbox{MeV}, \hspace{6mm}
\Lambda_2 = 1642.13 \, \mbox{MeV}.
\end{equation}
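The quoted solution can be checked directly against its defining conditions. The following sketch (Python) reproduces $c_1$ and $c_2$ from eqs.(39) and (40) and verifies eqs.(36), (37), (16) and (38) numerically:

```python
import math

Nc, M, fpi = 3, 400.0, 93.0            # MeV
L1, L2 = 630.01, 1642.13               # regulator masses of eq. (41)

# c1, c2 from eqs. (39)-(40)
c1 = (M/L1)**2 * (L2**2 - M**2) / (L2**2 - L1**2)
c2 = -(M/L2)**2 * (L1**2 - M**2) / (L2**2 - L1**2)
print(c1, c2)                          # ≈ 0.445, -0.00612

# divergence-free conditions (36), (37) hold identically in c1, c2
print(M**2 - (c1*L1**2 + c2*L2**2))    # ≈ 0 (rounding only)
print(M**4 - (c1*L1**4 + c2*L2**4))    # ≈ 0 (rounding only)

# pion kinetic-term normalization, eq. (16)
f2 = Nc*M**2/(4*math.pi**2) * (c1*(L1/M)**2*math.log((L1/M)**2)
                               + c2*(L2/M)**2*math.log((L2/M)**2))
print(f2)                              # ≈ 8649 MeV^2 = (93 MeV)^2

# vacuum quark condensate, eq. (38)
qq = Nc*M**3/(2*math.pi**2) * (c1*(L1/M)**4*math.log((L1/M)**2)
                               + c2*(L2/M)**4*math.log((L2/M)**2))
print((-qq)**(1.0/3.0))                # ≈ 286.6 MeV
```

With these inputs, eq.(38) indeed comes out negative and reproduces ${\langle \bar{\psi} \psi \rangle}_{vac} = - \,{(286.6 \,\mbox{MeV})}^3$ to the accuracy of the quoted parameters.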
As usual, all the numerical calculations are carried out by using the
so-called Kahana and Ripka basis [17].
Following them, the plane-wave basis, introduced as a set of
eigenstates of the free hamiltonian
$H_0 = \mbox{\boldmath $\alpha$} \cdot \nabla / i + \beta M$,
is discretized by imposing an appropriate boundary condition
for the radial wave functions at the radius $D$ chosen to be
sufficiently larger than the soliton size.
The basis is made finite by including only those states with
the momentum $k$ as $k < k_{max}$. The eigenvalue problem (22)
is then solved by diagonalizing the Dirac hamiltonian $H$ in
the above basis. We are thus able to solve the self-consistent
Hartree problem and also to calculate any nucleon observables
with full inclusion of the sea-quark degrees of freedom.
If the theory is consistently regularized, final answers
must be stable against increase of $k_{max}$ and $D$
(especially against the increase of $k_{max}$).
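As an illustration of the basis construction (a sketch only; the boundary condition $j_l(kD) = 0$ and the parameter values are assumptions made for this example, not details taken from ref.[17]), the discretized momenta of a given partial wave are the scaled zeros of a spherical Bessel function at the box radius $D$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

M = 1.0              # work in units of the dynamical quark mass
D = 12.0 / M         # box radius, chosen so that M*D = 12 as in the text
kmax = 3.0 * M       # momentum cutoff (illustrative value)
l = 1                # partial wave (illustrative)

f = lambda k: spherical_jn(l, k * D)

# scan for sign changes of j_l(kD) below kmax and polish each root
ks = np.linspace(1e-6, kmax, 2000)
vals = f(ks)
zeros = [brentq(f, ks[i], ks[i + 1])
         for i in range(len(ks) - 1) if vals[i] * vals[i + 1] < 0]
print(len(zeros))    # number of discrete momenta with k < kmax for l = 1
print(zeros[0] * D)  # ≈ 4.4934, the first zero of j_1
```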
Now we show in Fig.1 the $k_{max}$ dependence of the theoretical
pseudoscalar and scalar quark densities in the single-subtraction
Pauli-Villars scheme. These curves are obtained for a fixed value
of $D$ as $MD = 12$. The corresponding $k_{max}$ dependence of
the quark densities in the double-subtraction
Pauli-Villars scheme is shown in Fig.2.
Comparing the two figures, one immediately notices
that the quark densities obtained in the single-subtraction Pauli-Villars
scheme do not cease to increase in magnitude as $k_{max}$ increases.
Undoubtedly, this must be a signal of logarithmic divergences
contained in $S(r = \infty)$ (and generally also in $P(r)$ and $S(r)$).
On the other hand, in the case of double-subtraction Pauli-Villars
scheme, the magnitudes of $P(r)$ and $S(r)$ are seen to grow much more
slowly. To exhibit more clearly the above qualitative difference of
the two regularization schemes, we plot in Fig.3 the value
of $S(r = \infty)$, i.e. the scalar quark density at the spatial
infinity, as functions of $k_{max}$, and also
as functions of $\log ( k_{max} / M )$.
Contrary to the case of single-subtraction scheme
in which a clear signal of logarithmic divergence is observed, the
value of $S(r = \infty)$ obtained in the double-subtraction scheme is
seen to converge to some limiting value. Although the rate of this
convergence is rather slow, this limiting value appears indeed
to coincide with the prescribed value of the vacuum quark
condensate ${\langle \bar{\psi} \psi \rangle}_{vac} = - \,
{(286.6 \,\mbox{MeV})}^3 = - \,3.062 \,\mbox{fm}^{-3}$.
Now that we have confirmed that the naive Pauli-Villars
scheme with the single-subtraction term leaves a logarithmic
divergence in the quark densities appearing in the soliton equation
of motion, one may come to the following question.
Why could the authors of ref.[12] obtain self-consistent soliton solutions
despite the presence of the above-mentioned divergences?
The answer lies in the way of obtaining a self-consistent soliton
profile in the nonlinear model (not in the original
Nambu-Jona-Lasinio model). After evaluating the pseudoscalar and scalar
quark densities with some (large but) finite model space (especially with
finite $k_{max}$), a new profile function $F(r)$ to be used in the next
iterative step is obtained from (25). Since $P(r)$ and $S(r)$ appear
respectively in the numerator and denominator of the argument of the
arctangent, it can happen that the logarithmic divergences contained
in $P(r)$ and $S(r)$ offset each other.
(We point out that the effect of the term
$f_{\pi}^2 m_{\pi}^2 / M$ accompanying the scalar quark density is
rather small, anyway.) In fact, Fig.4 shows the $k_{max}$ dependence
of the self-consistent profile function $F(r)$ in both of the
single-subtraction scheme and the double-subtraction scheme.
One sees that the resultant $F(r)$ is quite stable against the increase
of $k_{max}$ even in the single-subtraction scheme, in spite of the
logarithmically divergent behavior of both $P(r)$ and
$S(r)$. Undoubtedly, this is the reason why the authors of [12] succeeded
in obtaining self-consistent soliton profile $F(r)$ despite the
divergences remaining in each of $P(r)$ and $S(r)$.
Because of this fortunate accident, self-consistent
soliton profiles $F(r)$ in the nonlinear model can be obtained with
good accuracy by using a modest value of $k_{max}$ not only for
the double-subtraction scheme but also for the single-subtraction one,
and besides the resultant
$F(r)$ are not much different in these two schemes. This also
applies to most nucleon observables which depend only on $F(r)$
and have no direct dependence on $S(r)$ and/or $P(r)$.
The previous calculation of parton distributions with use of
the single-subtraction Pauli-Villars scheme may be justified
in this sense [1-6]. To verify the validity of this expectation,
we investigate the $k_{max}$ dependence of a typical nucleon
observable which contains only a logarithmic divergence, i.e.
the isovector axial-vector coupling constant $g_A^{(3)}$.
Fig.5 shows the $k_{max}$ dependence of $g_A^{(3)}$ in the
single- and double-subtraction Pauli-Villars regularization
schemes. One sees that this quantity certainly shows a tendency of
convergence in both regularization schemes, though the
rate of convergence in the double-subtraction scheme
is much faster than for the scalar and pseudoscalar densities in
the same regularization scheme. Nonetheless, one must be very careful if
one is interested in nucleon observables which have a
direct dependence on $S(r)$ or $P(r)$. The most important
nucleon observable, which falls into this category, is the
nucleon scalar charge (or the quark condensate in the nucleon)
given by
\begin{equation}
\langle N | \bar{\psi} \psi | N \rangle \equiv
\int d^3 r \,[S(r) - S(r = \infty)].
\end{equation}
The superiority of the double-subtraction scheme over the
single-subtraction one must be self-explanatory in this case,
since this quantity is convergent only in the former scheme.
\subsection{Pauli-Villars versus proper-time regularization}
How to introduce ultraviolet cutoff into our effective chiral
theory is a highly nontrivial problem. Diakonov et al. advocated
the Pauli-Villars subtraction scheme as a ``good'' regularization scheme
for evaluating leading-twist parton distribution functions of the
nucleon within the chiral quark soliton model [1,2]. The reason is that
it preserves several general properties of the parton distributions
(such as positivity, factorization properties, sum rules etc.), which can
easily be violated by a naive ultraviolet regularization.
On the other hand, Schwinger's proper-time regularization has most
frequently been used for investigating low energy nucleon properties
within the chiral quark soliton model [7-9]. One might then wonder how
these predictions obtained by using the proper-time regularization scheme
would be altered if one uses the Pauli-Villars one.
Before entering into this discussion, we think it useful to recall
some basic properties of the proper-time regularization scheme. In this
scheme, the regularized effective meson action takes the same form
as (10) except that $S_f^{reg} [U]$ is now given in the form :
\begin{equation}
S_f^{reg} [U] = \frac{1}{2} \,i \,N_c \int_0^\infty
\frac{d \tau}{\tau} \,\varphi (\tau) \,
\mbox{Sp} \left( e^{- \tau D^\dagger D} -
e^{- \tau D_0^\dagger D_0} \right) ,
\end{equation}
with
\begin{equation}
D = i \not\!\partial - M U^{\gamma_5} , \hspace{10mm}
D_0 = i \not\!\partial - M .
\end{equation}
The regularization function $\varphi (\tau)$ is introduced so as to cut off
ultraviolet divergences which now
appear as a singularity at $\tau = 0$. For determining it, we can use
a criterion similar to that used in the Pauli-Villars scheme.
That is, we require that the regularized theory reproduces the correct
normalization of the pion kinetic term as well as the empirical value
of the vacuum quark condensate. This gives two conditions :
\begin{eqnarray}
\frac{N_c M^2}{4 \pi^2} \int_0^\infty \frac{d \tau}{\tau} \,
\varphi (\tau) \,e^{- \tau M^2} &=& f_\pi^2 , \\
\frac{N_c M}{2 \pi^2} \int_0^\infty \frac{d \tau}{\tau^2} \,
\varphi (\tau) \,e^{- \tau M^2} &=&
{\langle \bar{\psi} \psi \rangle}_{vac} .
\end{eqnarray}
Schwinger's original choice corresponds to taking
\begin{equation}
\varphi (\tau) = \theta \left( \tau - \frac{1}{\Lambda^2} \right) ,
\end{equation}
with $\Lambda$ being a physical cutoff energy. However, this simplest choice
cannot fulfill the two conditions (45) and (46) simultaneously.
Then, we use here a slightly more complicated form :
\begin{equation}
\varphi (\tau) = c \,\theta \left( \tau - \frac{1}{\Lambda_1^2} \right)
+ (1-c) \,\theta \left( \tau - \frac{1}{\Lambda_2^2} \right) ,
\end{equation}
which contains three parameters $c, \Lambda_1$ and $\Lambda_2$ [18].
Although the above two conditions are not enough to uniquely fix the
above three parameters, we find that solution sets
$(c, \Lambda_1, \Lambda_2)$ lie only in a small range of parameter
space and that this slight difference of regularization parameters
hardly affects the soliton properties. We use the following set of
parameters in the numerical investigation below :
\begin{equation}
c = 0.720, \hspace{8mm} \Lambda_1 = 412.79 \,\mbox{MeV}, \hspace{8mm}
\Lambda_2 = 1330.60 \,\mbox{MeV}.
\end{equation}
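These values can be checked against conditions (45) and (46). For the double-step function (48) the proper-time integrals reduce to exponential integrals $E_1$; the sketch below (Python) adopts the overall sign convention for (46) that makes the condensate come out negative:

```python
import math
from scipy.special import exp1   # exponential integral E1

Nc, M, fpi = 3, 400.0, 93.0                  # MeV
c, L1, L2 = 0.720, 412.79, 1330.60           # parameter set of eq. (49)

# int_{1/L^2}^inf dtau/tau e^{-tau M^2} = E1(M^2/L^2), so eq. (45) reads:
norm = Nc*M**2/(4*math.pi**2) * (c*exp1((M/L1)**2) + (1-c)*exp1((M/L2)**2))
print(norm)                                  # ≈ 8649 MeV^2 = fpi^2

# int_{1/L^2}^inf dtau/tau^2 e^{-tau M^2} = L^2 e^{-M^2/L^2} - M^2 E1(M^2/L^2)
def J(L):
    x = (M/L)**2
    return L**2*math.exp(-x) - M**2*exp1(x)

# eq. (46), with the sign fixed so that the condensate is negative
qq = -Nc*M/(2*math.pi**2) * (c*J(L1) + (1-c)*J(L2))
print((-qq)**(1.0/3.0))                      # ≈ 286.6 MeV
```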
Within the framework of the chiral quark soliton model, which assumes
slow collective rotation of a hedgehog soliton as
\begin{equation}
U^{\gamma_5} (\mbox{\boldmath $x$},t) = A(t) U^{\gamma_5}_0
(\mbox{\boldmath $x$}) A^\dagger (t), \hspace{15mm}
A(t) \in \mbox{SU(2)} ,
\end{equation}
the nucleon matrix element of any quark bilinear operator $\bar{\psi}
O \psi$ is given as a perturbative series in the collective angular
velocity operator $\Omega$ defined by
\begin{equation}
\Omega = i \,A^\dagger (t) \frac{d}{d t} A(t) .
\end{equation}
It is shown below that a noteworthy difference between the proper-time
regularization and the Pauli-Villars one appears at the zeroth order
term in $\Omega$. We recall that, in both schemes,
the $O (\Omega^0)$ contribution to this matrix element is given as
\begin{equation}
{\langle O \rangle}^{\Omega^0} = \int {\cal D} A \,\,
\Psi_{M_J M_T}^{(J)^*} [A] \,
{\langle O \rangle}_A^{\Omega^0} \,
\Psi_{M_J M_T}^{(J)} [A] ,
\end{equation}
with
\begin{equation}
{\langle O \rangle}^{\Omega^0}_A = {\langle O \rangle}_{val}^{\Omega^0}
+ {\langle O \rangle}_{vp}^{\Omega^0} ,
\end{equation}
where $\Psi_{M_J M_T}^{(J)} [A]$ is a wave function describing the
collective rotational motion. In eq.(53),
\begin{equation}
{\langle O \rangle}_{val}^{\Omega^0} =
N_c \,\langle 0 | \tilde{O} | 0 \rangle , \ \ \
\mbox{with} \ \ \ \ \tilde{O} = A^\dagger O A ,
\end{equation}
represents the contribution of the discrete
bound state level called the valence quark one.
Within the Pauli-Villars scheme, the contribution of the Dirac continuum
can be given in either of the following two forms :
\begin{eqnarray}
{\langle O \rangle}_{vp}^{\Omega^0} &=& \ \,\,\,
N_c \,\sum_{n < 0} \,\langle n | \tilde{O} | n \rangle -
\mbox{Pauli-Villars subtraction} , \nonumber \\
&=& - \,N_c \,\sum_{n \ge 0} \,\langle n | \tilde{O} | n \rangle -
\mbox{Pauli-Villars subtraction} .
\end{eqnarray}
Note that the first form is given as a sum over the occupied single-quark
levels, while the second is given as a sum over the nonoccupied levels.
The equivalence of the two expressions follows from the
identity
\begin{equation}
0 = \mbox{Sp} \,\tilde{O} =
\sum_{n < 0} \,\langle n | \tilde{O} | n \rangle +
\sum_{n \ge 0} \,\langle n | \tilde{O} | n \rangle ,
\end{equation}
which holds for most operators including the isovector magnetic
moment operator investigated below, if it is combined with the fact
that a similar identity holds also for the corresponding Pauli-Villars
subtraction terms. The situation is a little different for the proper-time
regularization scheme. The regularized Dirac sea contribution in this
scheme is given in the following form [8] :
\begin{equation}
{\langle O \rangle}_{vp}^{\Omega^0} = - \frac{N_c}{2} \sum_{n = all}
\mbox{sign} (E_n) g(E_n) \langle n | \tilde{O} | n \rangle ,
\end{equation}
with
\begin{equation}
g(E_n) = \frac{1}{\sqrt{\pi}} \int_0^\infty
\frac{d \tau}{\sqrt{\tau}} \,\varphi (\tau) \,| E_n | \,e^{- \tau E_n^2} .
\end{equation}
To compare this with the corresponding expression in the Pauli-Villars
scheme, it is convenient to rewrite it as
\begin{eqnarray}
{\langle O \rangle}_{vp}^{\Omega^0} &=& \frac{1}{2} \,\Bigl\{ N_c
\sum_{n < 0} g(E_n) \langle n | \tilde{O} | n \rangle -
N_c \sum_{n \ge 0} g(E_n) \langle n | \tilde{O} | n \rangle \Bigr\} .
\end{eqnarray}
One sees that here the answer is given as an average of the two
expressions, i.e. the one given as a sum over the occupied levels and
the other given as a sum over the nonoccupied levels.
(This feature is a consequence of the starting covariant expression for
an operator expectation value in the proper-time scheme.)
However, contrary to the previous case in which ultraviolet
regularization is introduced in the form of the Pauli-Villars
subtraction, now there is no reason to believe that the above two
terms give the same answer. In fact, the introduction of the
energy dependent cutoff factor $g(E_n)$ generally breaks the
equivalence of the two expressions because of the spectral asymmetry
of the positive- and negative-energy levels induced by the background
pion field of hedgehog form.
Now we start a comparative analysis of the two regularization schemes
on the basis of the numerical results.
For reference, we also solve the soliton equation of motion
in the chiral limit. By assuming no (or at least weak) $m_\pi$
dependence of ${< \bar{\psi} \psi >}_{vac}$ appearing in (16) and (38),
this calculation can be done by setting $m_\pi = 0$ in (18) and (25)
without changing the sets of regularization parameters given in (41)
and (49). Since the way of cutting off
the ultraviolet component is totally different for the two regularization
schemes, it naturally affects solutions
of the soliton equation of motion. Although the detailed breakdown of the
soliton energy is a highly model-dependent concept and not a direct
observable, it is nevertheless very sensitive to this difference in the
self-consistent solutions. Table 1 shows this comparison.
Comparing the answers of the two regularization schemes, one finds that
the Pauli-Villars scheme leads to a more strongly deformed soliton, which
means a deeper binding of the discrete valence level and larger vacuum
polarization energy. One sees that the total soliton energy is lower
for the Pauli-Villars scheme than for the proper-time scheme.
One also observes that the soliton energy is very sensitive to the
pion mass. When one goes from the finite pion mass case to the chiral
limit, one obtains much lower soliton energy.
\begin{table}[h]
\begin{center}
\renewcommand{\baselinestretch}{1.2}
\caption{The static soliton energy in the proper-time regularization
scheme and the (double-subtraction) Pauli-Villars one.
$E_{val}, E_{v.p.}^{reg}$ respectively stand for the valence quark
contribution and the Dirac sea one to the fermionic
energy, while $E_m$ represents the mesonic part of the energy.
The sum of these three parts gives the total static energy
$E_{static}^{reg}$.}
\renewcommand{\baselinestretch}{1.38}
\vspace{5mm}
\begin{tabular}{ccccc}
\hline\hline
& $E_{val}$ [MeV] & $E_{v.p.}^{reg}$ [MeV] & $E_m$ [MeV] & $E_{static}^{reg}$
[MeV] \\
\hline
proper-time ($m_\pi = 138 \,\mbox{MeV}$) & 633.0 & 617.6 & 37.2 & 1287.9 \\
Pauli-Villars ($m_\pi = 138 \,\mbox{MeV}$) & 447.6 & 569.2 & 51.3 & 1068.1 \\
\hline
proper-time ($m_\pi = 0 \,\mbox{MeV}$) & 555.6 & 688.6 & 0 & 1244.2 \\
Pauli-Villars ($m_\pi = 0 \,\mbox{MeV}$) & 351.5 & 655.4 & 0 & 1006.9 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\renewcommand{\baselinestretch}{1.2}
\caption{The quark spin content of the nucleon $< \Sigma_3 >$
in the proper-time regularization scheme and the Pauli-Villars
one.}
\renewcommand{\baselinestretch}{1.38}
\vspace{5mm}
\begin{tabular}{cccc}
\hline\hline
& ${< \Sigma_3 >}_{val}$ & ${< \Sigma_3 >}_{v.p.}$ & ${< \Sigma_3 >}$ \\
\hline
proper-time ($m_\pi = 138 \,\mbox{MeV}$) & 0.484 & 0.005 & 0.489 \\
Pauli-Villars ($m_\pi = 138 \,\mbox{MeV}$) & 0.391 & 0.008 & 0.399 \\
\hline
proper-time ($m_\pi = 0 \,\mbox{MeV}$) & 0.374 & 0.007 & 0.380 \\
Pauli-Villars ($m_\pi = 0 \,\mbox{MeV}$) & 0.286 & 0.011 & 0.298 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Probably, the most important observable which has strong sensitivity to the
above difference of the self-consistent solutions is the flavor-singlet
axial charge or the quark spin content of the nucleon ${\langle \Sigma_3
\rangle}$. The theoretical predictions for this quantity in the
two regularization schemes are shown in Table 2. In evaluating this
quantity, we did not introduce any regularization, because it is
related to the imaginary part of the (Euclidean) effective action and
is convergent itself. This means that the difference between the two
schemes purely comes from that of the self-consistent solutions.
One sees that the Pauli-Villars scheme leads to smaller quark spin
content. The reason can easily be understood. Within the framework of
the chiral quark soliton model, the rest of the nucleon spin is
carried by the orbital angular momentum of quark fields and this
latter portion increases as the deformation of the soliton becomes
larger [8]. A similar tendency is also observed when one goes from
the finite pion mass case to the chiral limit.
\begin{table}[h]
\begin{center}
\renewcommand{\baselinestretch}{1.2}
\caption{The $O (\Omega^0)$ contributions to the
isovector magnetic moment of the nucleon in the proper-time
regularization scheme and the Pauli-Villars one.
The second column represents the valence quark contribution.
The third and fourth columns stand for the answers
for the vacuum polarization contributions respectively obtained
with the occupied and nonoccupied formulas, while the fifth column
gives the average of the two answers. The total $O(\Omega^0)$
contributions are shown in the sixth column.}
\renewcommand{\baselinestretch}{1.38}
\vspace{5mm}
\begin{tabular}{cccccc}
\hline\hline
& \raisebox{-1.5ex}[0pt]{$\mu_{val}^{(3)} (\Omega^0)$}
& \multicolumn{1}{c}{} &
\multicolumn{1}{c}{\raisebox{-1mm}[0pt]{$\mu_{v.p.}^{(3)reg} (\Omega^0)$}} &
\multicolumn{1}{c}{} &
\raisebox{-1.5ex}[0pt]{$\mu^{(3)} (\Omega^0)$} \\
& & \raisebox{1mm}[0pt]{occupied} &
\raisebox{1mm}[0pt]{nonoccupied} & \raisebox{1mm}[0pt]{average} & \\
\hline
proper-time ($m_\pi = 138 \,\mbox{MeV}$) & 1.611 & 1.312 & 0.210 & 0.761 & 2.372 \\
Pauli-Villars ($m_\pi = 138 \,\mbox{MeV}$) & 1.762 & 0.996 & 0.996 & 0.996 & 2.759 \\
\hline
proper-time ($m_\pi = 0 \,\mbox{MeV}$) & 1.623 & 1.908 & 0.588 & 1.248 & 2.875 \\
Pauli-Villars ($m_\pi = 0 \,\mbox{MeV}$) & 1.810 & 1.738 & 1.738 & 1.738 & 3.547 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
There are different kinds of nucleon observables, which contain (potential)
logarithmic divergence and thus depend directly on how they are regularized.
Most typical are the $O (\Omega^0)$ contribution to the isovector
axial-vector coupling constant $g_A^{(3)}$ and the isovector
magnetic moment $\mu^{(3)}$ of the nucleon. Let us first show the results for
the isovector magnetic moment, since it turns out to
have stronger dependence on the choice of the regularization scheme.
Table 3 shows the $O (\Omega^0)$ contribution to the isovector magnetic
moment. For each regularization scheme, the third column represents
the answer for the vacuum polarization contribution obtained with the
occupied expression, while the fourth column gives the answer obtained
with the nonoccupied one. In the case of
Pauli-Villars scheme, the equivalence of the two expressions is
nicely confirmed by the explicit numerical calculation.
In the case of proper-time scheme, however, we encounter quite a
dissimilar situation. First, the answer obtained with the occupied
expression is about $30 \,\%$ larger than the corresponding answer
of the Pauli-Villars scheme, while the answer obtained with the
nonoccupied expression is about $80 \,\%$ smaller than the answer obtained
with the occupied one. Since the final answer of the proper-time scheme
is given as an average of the occupied and nonoccupied expressions,
the consequence is that the
prediction of the proper-time scheme for the $O (\Omega^0)$
contribution to $\mu^{(3)}$ is about $14 \,\%$ smaller than the corresponding
prediction of the Pauli-Villars scheme. (See the sixth column of
Table 3.) Note that the difference between the two regularization schemes
becomes much more drastic when one goes to the chiral limit. This is
due to the fact that the $O (\Omega^0)$ vacuum polarization contribution to
the isovector magnetic moment is extremely sensitive to the pion mass
effect such that it is much larger in the chiral limit.
\begin{table}[h]
\begin{center}
\renewcommand{\baselinestretch}{1.2}
\caption{The final predictions for the isovector magnetic moment of the
nucleon, given as sums of the $O(\Omega^0)$ and
$O(\Omega^1)$ contributions.}
\renewcommand{\baselinestretch}{1.38}
\vspace{5mm}
\begin{tabular}{cccc}
\hline\hline
& $\mu^{(3)} (\Omega^0)$ & $\mu^{(3)} (\Omega^1)$ & $\mu^{(3)} (\Omega^0 + \Omega^1)$ \\
\hline
proper-time ($m_\pi = 138 \,\mbox{MeV}$) & 2.372 & 1.072 & 3.445 \\
Pauli-Villars ($m_\pi = 138 \,\mbox{MeV}$) & 2.759 & 1.211 & 3.970 \\
\hline
proper-time ($m_\pi = 0 \,\mbox{MeV}$) & 2.875 & 1.032 & 3.907 \\
Pauli-Villars ($m_\pi = 0 \,\mbox{MeV}$) & 3.547 & 1.182 & 4.729 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\renewcommand{\baselinestretch}{1.2}
\caption{The final predictions for the isovector axial-coupling
constant of the nucleon, given as sums of the $O(\Omega^0)$ and
$O(\Omega^1)$ contributions.}
\renewcommand{\baselinestretch}{1.38}
\vspace{5mm}
\begin{tabular}{cccc}
\hline\hline
& $g_A^{(3)} (\Omega^0)$ & $g_A^{(3)} (\Omega^1)$ & $g_A^{(3)} (\Omega^0 + \Omega^1)$ \\
\hline
proper-time ($m_\pi = 138 \,\mbox{MeV}$) & 0.848 & 0.412 & 1.260 \\
Pauli-Villars ($m_\pi = 138 \,\mbox{MeV}$) & 0.976 & 0.408 & 1.384 \\
\hline
proper-time ($m_\pi = 0 \,\mbox{MeV}$) & 0.921 & 0.348 & 1.269 \\
Pauli-Villars ($m_\pi = 0 \,\mbox{MeV}$) & 1.054 & 0.344 & 1.398 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
Before comparing our theoretical predictions with the observed
isovector magnetic moment of the nucleon, we must take account of
the $O (\Omega^1)$ contribution, too, since it is known to give
sizable correction to the leading-order result [19,20]. Although we
do not go into the detail here, it turns out that this $O (\Omega^1)$
piece is not as sensitive to the choice of regularization
scheme as the $O (\Omega^0)$ piece is. The reason is that this
$O (\Omega^1)$ term is given as a double sum over the occupied
levels and the nonoccupied ones and the formula has some symmetry
under the exchange of these two types of single-quark orbitals [21].
The final predictions for the nucleon isovector magnetic moment
obtained as a sum of the $O (\Omega^0)$ and $O (\Omega^1)$
contributions are shown in Table 4. After all, the prediction
of the Pauli-Villars scheme is about $15 \,\%$ larger than that of the
proper-time scheme and a little closer to the observed moment.
The effect is much more drastic in the chiral limit. The prediction
of the Pauli-Villars scheme is about $20 \,\%$ larger than that of
the proper-time scheme and nearly reproduces the observed
isovector magnetic moment of the nucleon, i.e.
$\mu_{exp}^{(3)} \simeq 4.71$.
Finally, we show in Table 5 the predictions for the isovector
axial-charge of the nucleon obtained as a sum of the $O (\Omega^0)$
and $O (\Omega^1)$ contributions. Also for this quantity, there are
some detailed differences between the predictions of the two
regularization schemes. Nonetheless, the final answers for
$g_A^{(3)}$ turn out to be not so sensitive to the difference of
the regularization schemes as compared with the case of the
isovector magnetic moment. Besides, one also notices that the
finite pion mass effect hardly influences the final prediction
for this particular quantity.
\section{Conclusion}
In summary, the single-subtraction Pauli-Villars regularization
scheme, which is often used in evaluating nucleon structure functions
within the framework of the CQSM, cannot be regarded as
a fully consistent regularization scheme in that it still contains
ultraviolet divergences in the scalar and pseudoscalar quark densities
appearing in the soliton equation of motion.
However, these divergences can easily be removed by
increasing the number of subtraction terms from one to two.
After this straightforward generalization, the effective theory is
totally divergence free. In particular, both the vacuum quark condensate
and the isoscalar piece of the nucleon scalar charge now become finite.
Nonetheless, we find that, owing to the accidental
cancellation explained in the text, one can obtain a finite soliton
profile $F(r)$ even in the single-subtraction scheme, and besides the
resultant soliton solution is not extremely different from the
corresponding one obtained in the double-subtraction scheme.
Furthermore, it turns out that, for most nucleon observables, which
contain only the logarithmic divergence, the predictions of the two
regularization schemes are not much different. The previous calculations
of quark distribution functions with use of the single-subtraction
Pauli-Villars regularization scheme would be justified in this sense.
We have also carried out a comparative analysis of typical nucleon
observables based on the Pauli-Villars regularization scheme and the
proper-time one. A nice property of the Pauli-Villars regularization
scheme, which is not possessed by the proper-time one, is that it
preserves a nontrivial symmetry of the original theory, i.e. the
equivalence of the occupied and nonoccupied expressions for
$O (\Omega^0)$ contributions to nucleon observables. The improvement
obtained for the isovector magnetic moment of the nucleon was shown
to be related to this favorable property of the Pauli-Villars
regularization scheme. How to introduce ultraviolet cutoff into an
effective low energy model should in principle be predictable from
the underlying QCD dynamics. For lack of precise information about it,
however, phenomenology must provide us with an important criterion
for selecting regularization schemes. The regularization scheme
based on the Pauli-Villars subtraction appears to be a good candidate
also in this respect.
\vspace{3mm}
\section*{Acknowledgement}
Numerical calculation was performed by using the workstations
at the Laboratory of Nuclear Studies, and those at the Research Center
for Nuclear Physics, Osaka University.
\vspace{3mm}
\section*{References}
\newcounter{refnum}
\begin{list}%
{[\arabic{refnum}]}{\usecounter{refnum}}
\item D.I.~Diakonov, V.Yu.~Petrov, P.V.~Pobylitsa, M.V.~Polyakov,
and C.~Weiss, \\
Nucl. Phys. {\bf B480}, 341 (1996).
\item D.I.~Diakonov, V.Yu.~Petrov, P.V.~Pobylitsa, M.V.~Polyakov,
and C.~Weiss, \\
Phys. Rev. {\bf D56}, 4069 (1997).
\item M.~Wakamatsu and T.~Kubota, Phys. Rev. {\bf D57}, 5755 (1998).
\item M.~Wakamatsu and T.~Kubota, Osaka University preprint
OU-HET-310/98, \\
hep-ph/9809443.
\item C.~Weiss and K.~Goeke, Bochum University preprint RUB-TPII-12/97,\\
hep-ph/9712447.
\item P.V.~Pobylitsa, M.V.~Polyakov, K.~Goeke, T.~Watabe, and C.~Weiss,\\
Bochum University preprint RUB-TPII-4/98, hep-ph/9804436.
\item H.~Reinhardt and R.~W\"{u}nsch, Phys. Lett. {\bf B215},
577 (1988) ;\\
T.~Meissner, F.~Gr\"{u}mmer, and K.~Goeke, Phys. Lett. {\bf B227},
296 (1989).
\item M.~Wakamatsu and H.~Yoshiki, Nucl. Phys. {\bf A524}, 561 (1991).
\item For reviews, see M.~Wakamatsu, Prog. Theor. Phys. Suppl.
{\bf 109}, 115 (1992) ; \\
Chr.V.~Christov, A.~Blotz, H.-C.~Kim, P.~Pobylitsa, T.~Watabe,
Th.~Meissner, \\
E.~Ruiz Arriola and K.~Goeke, Prog. Part.
Nucl. Phys. {\bf 37}, 91 (1996) ;\\
R.~Alkofer, H.~Reinhardt and H.~Weigel, Phys. Rep. {\bf 265}, 139
(1996).
\item H.~Weigel, L.~Gamberg, and H.~Reinhardt, Phys. Rev. {\bf D58},
038501 (1998).
\item D.I.~Diakonov, V.Yu.~Petrov, P.V.~Pobylitsa, M.V.~Polyakov,
and C.~Weiss, \\
Phys. Rev. {\bf D58}, 038502 (1998).
\item F.~D\"{o}ring, A.~Blotz, C.~Sch\"{u}ren, Th.~Meissner,
E.~Ruiz-Arriola, and K.~Goeke, \\
Nucl. Phys. {\bf A536}, 548 (1992).
\item T.~Watabe and H.~Toki, Prog. Theor. Phys. {\bf 87}, 651 (1992).
\item M.~Jaminon, G.~Ripka, and P.~Stassart, Phys. Lett. {\bf B227},
191 (1989).
\item M.~Wakamatsu, Phys. Rev. {\bf D46}, 3762 (1992).
\item D.I.~Diakonov, V.Yu.~Petrov, and P.V.~Pobylitsa, Nucl. Phys.
{\bf B306}, 809 (1988).
\item S.~Kahana and G.~Ripka, Nucl. Phys. {\bf A429}, 462 (1984) ;\\
S.~Kahana, G.~Ripka, and V.~Soni, Nucl. Phys. {\bf A415}, 351 (1984).
\item A.~Blotz, D.I.~Diakonov, K.~Goeke, N.W.~Park, V.Yu.~Petrov,
and P.V.~Pobylitsa, \\
Nucl. Phys. {\bf A555}, 765 (1993).
\item M.~Wakamatsu and T.~Watabe, Phys. Lett. {\bf B312}, 184 (1993).
\item Chr.V.~Christov, A.~Blotz, K.~Goeke, P.V.~Pobylitsa,
V.Yu.~Petrov, \\
M.~Wakamatsu, and T.~Watabe, Phys. Lett. {\bf B325},
467 (1994).
\item M.~Wakamatsu, Prog. Theor. Phys. {\bf 95}, 143 (1996).
\end{list}
\vspace{8mm}
\begin{flushleft}
\Large\bf{Figure caption} \\
\end{flushleft}
\ \\
\begin{minipage}{2cm}
Fig. 1.
\end{minipage}
\begin{minipage}[t]{13cm}
The $k_{max}$ dependence of the scalar quark density $S(r)$
and the pseudoscalar density $P(r)$ in the single-subtraction
Pauli-Villars scheme.
\end{minipage}
\ \\
\vspace{6mm}
\ \\
\begin{minipage}{2cm}
Fig. 2.
\end{minipage}
\begin{minipage}[t]{13cm}
The $k_{max}$ dependence of the scalar quark density $S(r)$
and the pseudoscalar density $P(r)$ in the double-subtraction
Pauli-Villars scheme.
\end{minipage}
\ \\
\vspace{6mm}
\ \\
\begin{minipage}{2cm}
Fig. 3.
\end{minipage}
\begin{minipage}[t]{13cm}
The scalar quark densities at the spatial infinity $S(r = \infty)$
as functions of $k_{max} / M$ and as functions of
$\log (k_{max} / M)$ in the single- and double-subtraction
Pauli-Villars schemes.
\end{minipage}
\ \\
\vspace{6mm}
\ \\
\begin{minipage}{2cm}
Fig. 4.
\end{minipage}
\begin{minipage}[t]{13cm}
The $k_{max}$ dependence of the self-consistent soliton profiles
$F(r)$ in the single- and double-subtraction Pauli-Villars
schemes. The curves with different $k_{max}$ are almost
indistinguishable.
\end{minipage}
\ \\
\vspace{6mm}
\ \\
\begin{minipage}{2cm}
Fig. 5.
\end{minipage}
\begin{minipage}[t]{13cm}
The $k_{max}$ dependence of the nucleon isovector axial-charges
$g_A^{(3)}$ in the single- and double-subtraction Pauli-Villars
schemes.
\end{minipage}
\end{document}
\section{Introduction}
Recent prevalent pre-trained language models such as ELMo~\citep{Peters2018DeepCW}, BERT~\citep{Devlin2018BERTPO}, and XLNet~\citep{Yang2019XLNetGA} achieve state-of-the-art performance for a diverse array of downstream NLP tasks.
An interesting area of research is to investigate the interpretability of these pre-trained models (i.e., the linguistic properties they capture).
Most recent approaches are built upon the idea of {\it probing classifiers}~\cite{Shi2016, Adi2017, conneau2018you, peters2018dissecting, Hewitt2019, Clark2019, Tenney2019, Jawahar2019}.
A {\it probe} is a simple neural network (with a small additional set of parameters) that uses the feature representations generated by a pre-trained model (e.g., hidden state activations, attention weights) and is trained to perform a supervised task (e.g., dependency labeling). The performance of a {\it probe} is used to measure the quality of the generated representations with the assumption that the measured quality is mostly attributable to the pre-trained language model.
One downside of such an approach, as pointed out in~\cite{hewitt2019designing}, is that a probe introduces a new set of additional parameters, which makes the results difficult to interpret. Is it the pre-trained model that captures the linguistic information, or is it the probe that learns the downstream task itself and thus encodes the information in its additional parameter space?
In this paper we propose a parameter-free probing technique called {\it Perturbed Masking} to analyze and interpret pre-trained models. The main idea is to introduce the {\it Perturbed Masking} technique into the masked language modeling (\textbf{MLM}) objective to measure the impact a word $x_j$ has on predicting another word $x_i$ (Sec~\ref{sec:token-mask}) and then induce the global linguistic properties (e.g., dependency trees) from this inter-word information.
Our contributions are threefold:
\noindent $\bullet$ We introduce a new parameter-free probing technique, \textit{Perturbed Masking}, to estimate inter-word correlations. Our technique enables global syntactic information extraction.
\noindent $\bullet$ We evaluate the effectiveness of our probe over a number of linguistic driven tasks (e.g., syntactic parsing, discourse dependency parsing). Our results reinforce the claims of recent probing works, and further complement them by quantitatively evaluating the validity of their claims.
\noindent $\bullet$ We feed the empirically induced dependency structures into a downstream task to make a comparison with a parser-provided, linguist-designed dependency schema and find that our structures perform on par with or even better than the parser-created one (Sec~\ref{sec:sentiment}). This offers an insight into the remarkable success of BERT on downstream tasks.
\section{Perturbed Masking}
We propose the perturbed masking technique to assess the impact one word has on the prediction of another in MLM. The inter-word information derived serves as the basis for our later analysis.
\subsection{Background: BERT}
BERT\footnote{In our experiments, we use the base, uncased version from \cite{Wolf2019HuggingFacesTS}. }~\cite{Devlin2018BERTPO} is a large Transformer network that is pre-trained on 3.3 billion tokens of English text. It performs two tasks: (1) Masked Language Modeling (\textbf{MLM}): randomly select and mask 15\% of all tokens in each given sequence, and then predict those masked tokens. In masking, a token is (a) replaced by the special token [MASK], (b) replaced by a random token, or (c) kept unchanged. These replacements are chosen 80\%, 10\%, and 10\% of the time, respectively. (2) Next Sentence Prediction: given a pair of sentences, predict whether the second sentence follows the first in an original document or is taken from another random document.
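As an illustration, the 15\%/80-10-10 masking policy can be sketched as follows (a toy re-implementation for exposition only, not BERT's actual pre-processing code; it works on plain token lists and ignores special tokens, and the \texttt{vocab} list used for random replacement is a stand-in):

```python
import random

MASK = "[MASK]"

def mlm_mask(tokens, vocab, p_select=0.15, seed=0):
    """Toy sketch of BERT's masking policy: select ~15% of positions;
    of those, 80% -> [MASK], 10% -> a random token, 10% -> unchanged.
    `vocab` is a hypothetical token list used for random replacement."""
    rng = random.Random(seed)
    out = list(tokens)
    targets = []  # (position, original token) pairs the model must predict
    for i, tok in enumerate(tokens):
        if rng.random() < p_select:
            targets.append((i, tok))
            r = rng.random()
            if r < 0.8:
                out[i] = MASK               # (a) replace with [MASK]
            elif r < 0.9:
                out[i] = rng.choice(vocab)  # (b) replace with a random token
            # (c) else: keep the token unchanged
    return out, targets
```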
\subsection{Token Perturbation}
\label{sec:token-mask}
Given a sentence as a list of tokens $\mathbf{x}=[x_1, \ldots, x_T]$, BERT maps each $x_i$ into a contextualized representation $H_{\theta}(\mathbf{x})_i$, where $\theta$ represents the network's parameters. Our goal is to derive a function $f(x_i, x_j)$ that captures the impact a context word $x_j$ has on the prediction of another word $x_i$.
We propose a two-stage approach to achieve our goal.
First, we replace $x_i$ with the [MASK] token and feed the new sequence $\mathbf{x} \backslash \{x_i\} $ into BERT. We use $H_{\theta}(\mathbf{x} \backslash \{x_i\})_i$ to denote the representation of $x_i$. To calculate the impact $x_j \in \mathbf{x} \backslash \{x_i\} $ has on $H_{\theta}(\mathbf{x} \backslash \{x_i\})_i$, we further mask out $x_j$ to obtain the second corrupted sequence $\mathbf{x} \backslash \{x_i, x_j\} $. Similarly, $H_{\theta}(\mathbf{x} \backslash \{x_i, x_j\})_i$ denotes the new representation of token $x_i$.
We define $f(x_i, x_j)$ as:
$$ f(x_i, x_j) = d \left(H_{\theta}(\mathbf{x} \backslash \{x_i\})_i, H_{\theta}(\mathbf{x} \backslash \{x_i, x_j\})_i\right)
$$
where $d(\mathbf{x}, \mathbf{y})$ is the distance metric that captures the difference between two vectors. We experimented with two options for $d(\mathbf{x}, \mathbf{y})$:
\noindent $\bullet$ \textbf{Dist:} Euclidean distance between $\mathbf{x}$ and $\mathbf{y}$
\noindent $\bullet$ \textbf{Prob}: $ d(\mathbf{x}, \mathbf{y}) = a(\mathbf{x})_{x_i} - a(\mathbf{y})_{x_i}$,
\newline
where $a(\cdot)$ maps a vector into a probability distribution among the words in the vocabulary. $a(\mathbf{x})_{x_i}$ represents the probability of predicting token $x_i$ based on $\mathbf{x}$.
By repeating the two-stage perturbation on each pair of tokens $x_i, x_j \in \mathbf{x}$ and calculating $f(x_i, x_j)$, we obtain an \textbf{impact matrix} $\mathcal{F} \in \mathbb{R}^{T \times T}$, where $\mathcal{F}_{ij} = f(x_i, x_j)$.
Now, we can derive algorithms to extract syntactic trees from $\mathcal{F}$ and compare them with ground-truth trees that are obtained from benchmarks.
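The construction of $\mathcal{F}$ can be sketched as follows. This is a minimal illustration: \texttt{encode} is a stand-in for BERT that returns one contextual vector per token (a real run would use the masked-LM's hidden states), and only the \textbf{Dist} metric is shown.

```python
import numpy as np

MASK = "[MASK]"

def impact_matrix(tokens, encode):
    """Two-stage perturbation. `encode(tokens)` stands in for BERT and must
    return one vector per token (a real run would use the masked-LM's hidden
    states). F[i, j] = f(x_i, x_j), here with the Dist (Euclidean) metric."""
    T = len(tokens)
    F = np.zeros((T, T))
    for i in range(T):
        once = list(tokens)
        once[i] = MASK                      # corrupted sequence x \ {x_i}
        h_i = encode(once)[i]
        for j in range(T):
            if j == i:
                continue
            twice = list(once)
            twice[j] = MASK                 # second corruption x \ {x_i, x_j}
            h_ij = encode(twice)[i]
            F[i, j] = np.linalg.norm(h_i - h_ij)
    return F
```

Note that this requires $O(T^2)$ forward passes per sentence; in practice the inner loop can be batched.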
Note that BERT uses byte-pair encoding~\cite{sennrich2016subword} and may split a word into multiple tokens (subwords). To evaluate our approach on word-level tasks, we make the following changes to obtain inter-word impact matrices. In each perturbation, we mask all tokens of a split-up word.
The impact \textit{on} a split-up word is obtained by averaging\footnote{We also experimented with other alternatives, but observe no significant difference.} the impacts over the split-up word's tokens. To measure the impact exerted \textit{by} a split-up word, we assume the impacts given by its tokens are the same; we use the impact given by the first token for convenience.
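This token-to-word aggregation can be sketched as follows (assuming a hypothetical \texttt{word\_spans} list that maps each word to its token range):

```python
import numpy as np

def token_to_word_matrix(F, word_spans):
    """Collapse a token-level impact matrix F to word level.
    `word_spans[w] = (start, end)` is word w's token range (end exclusive;
    a hypothetical interface for illustration).  Impact ON a split-up word:
    average over its token rows; impact BY it: its first token's column."""
    W = len(word_spans)
    G = np.zeros((W, W))
    for a, (s_a, e_a) in enumerate(word_spans):
        for b, (s_b, _) in enumerate(word_spans):
            G[a, b] = F[s_a:e_a, s_b].mean()
    return G
```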
\subsection{Span Perturbation}
\label{sec:clause-mask}
Given the token-level perturbation above, it is straightforward to extend it to span-level perturbation. We investigate how BERT models the relations between spans, which can be phrases, clauses, or paragraphs. As a preliminary study, we investigate how well BERT captures document structures.
We model a document $D$ as $N$ non-overlapping text spans
$D = [e_1, e_2, \ldots, e_N]$, where each span $e_i$ contains a sequence of tokens $e_i = [x_1^i, x_2^i, \ldots, x_M^i]$.
For span-level perturbation, instead of masking one token at a time, we mask an array of tokens in a span simultaneously. We obtain the span representation by averaging the representations of all the tokens the span contains. Similarly, we calculate the impact $e_j$ has on $e_i$ by:
$$ f(e_i, e_j) = d \left(H_{\theta}(D \backslash \{e_i\})_i, H_{\theta}(D \backslash \{e_i, e_j\})_i\right)
$$
where $d$ is the \textbf{Dist} function.
\section{Visualization with Impact Maps}
\label{sec:visualization}
Before we discuss specific syntactic phenomena, let us first analyze some example impact matrices derived from sample sentences. We visualize an impact matrix of a sentence by displaying a heatmap. We use the term ``impact map'' to refer to a heatmap of an impact matrix.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/fig-heatmap.pdf}
\caption{Heatmap of the impact matrix\ for the sentence ``For those who follow social media transitions on Capitol Hill, this will be a little different.''}
\label{fig:heatmap}
\end{figure}
\textbf{Setup.} We extract impact matrices by feeding BERT with 1,000 sentences from the English Parallel Universal Dependencies (PUD) treebank of the CoNLL 2017 Shared Task~\cite{zeman2017conll}. We follow the setup and pre-processing steps employed in pre-training BERT. An example impact map is shown in Figure~\ref{fig:heatmap}.
\textbf{Dependency.} We notice that the impact map contains many \textit{stripes}, which are short series of vertical/horizontal cells, typically located along the diagonal.
Take the word ``\textit{different}'' as an example (which is illustrated by the second-to-last column in the impact matrix). We observe a clear vertical stripe above the main diagonal. The interpretation is that this particular occurrence of the word ``\textit{different}'' strongly affects the occurrences of those words before it. These strong influences are shown by the darker-colored pixels seen in the second-to-last column of the impact map. This observation agrees with the ground-truth dependency tree, which selects ``\textit{different}'' as the head of all remaining words in the phrase ``\textit{this will be a little different}.'' We also observe similar patterns on ``\textit{transitions}'' and ``\textit{Hill}''. Such correlations lead us to explore the idea of extracting dependency trees from the matrices (see Section~\ref{sec:dependency}).
\textbf{Constituency.} Figure~\ref{fig:goldtree} shows part of the constituency tree of our example sentence generated by Stanford CoreNLP~\cite{corenlp}. In this sentence, ``\textit{media}'' and ``\textit{on}'' are two words that are adjacent to ``\textit{transitions}''. From the tree, however, we see that ``\textit{media}'' is closer to ``\textit{transitions}'' than ``\textit{on}'' is in terms of syntactic distance. If a model is syntactically uninformed, we would expect ``\textit{media}'' and ``\textit{on}'' to have comparable impacts on the prediction of ``\textit{transitions}'', and vice versa. However, we observe a far greater impact (darker color) between ``\textit{media}'' and ``\textit{transitions}'' than that between ``\textit{on}'' and ``\textit{transitions}''. We will further support this observation with empirical experiments in Section~\ref{sec:constituency}.
\textbf{Other Structures.} Along the diagonal of the impact map, we see that words are grouped into four contiguous chunks that have specific intents (e.g., a noun phrase -- \textit{on Capitol Hill}). We also observe that the two middle chunks have relatively strong inter-chunk word impacts and thus a bonding that groups them together, forming a larger verb phrase. This observation suggests that BERT may capture the compositionality of the language.
\begin{figure}
\centering
\input{figures/fig-gold-tree.tex}
\caption{Part of the constituency tree.}
\label{fig:goldtree}
\end{figure}
In the following sections we quantitatively evaluate these observations.
\section{Syntactic Probe}
We start with two syntactic probes -- dependency probe and constituency probe.
\subsection{Dependency Probe}
\label{sec:dependency}
With the goal of exploring the extent to which dependency relations are captured in BERT, we set out to answer the following question: Can BERT outperform linguistically uninformed baselines in unsupervised dependency parsing? If so, to what extent?
We begin by using the token-level perturbed masking technique to extract an impact matrix $\mathcal{F}$ for each sentence. We then utilize graph-based algorithms to induce a dependency tree from $\mathcal{F}$, and compare it against ground-truth whose annotations are linguistically motivated.
\paragraph{Experiment Setup.}
We evaluate the induced trees on two benchmarks: (1) the PUD treebank described in Section~\ref{sec:visualization}. (2) the WSJ10 treebank, which contains 7,422 sentences (all less than 10 words after punctuation removal) from the Penn Treebank (PTB)~\cite{marcus1993penn}. Note that the original PTB does not contain dependency annotations. Thus, we convert them into Universal Dependencies using Stanford CoreNLP. We denote this set as WSJ10-U.
Next, two parsing algorithms, namely, the Eisner algorithm~\shortcite{eisner1996three} and Chu-Liu/Edmonds (CLE) algorithm~\shortcite{chu1965shortest, edmonds1967optimum}, are utilized to extract the projective and non-projective unlabeled dependency trees, respectively. Given that our impact matrices have no knowledge about the dependency root of the sentence, we use the gold root in our analysis. Introducing the gold root may artificially improve our results slightly. We thus apply this bias evenly across all baselines to ensure a fair comparison, as done in~\cite{Tiedemann2018, htut2019attention}.
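For concreteness, a compact version of the first-order Eisner decoder is sketched below. This is a textbook re-implementation, not the code behind the reported numbers; \texttt{scores[h, m]} is assumed to hold the score of arc $h \rightarrow m$ (derived from the impact matrix), with index 0 the dummy root.

```python
import numpy as np

def eisner(scores):
    """First-order Eisner decoder for projective dependency trees.
    scores[h, m] = score of arc h -> m; index 0 is the dummy root.
    Returns heads with heads[m] = head of token m, heads[0] = -1."""
    N = scores.shape[0]
    C = np.zeros((N, N, 2))              # complete spans
    I = np.zeros((N, N, 2))              # incomplete spans
    bC = np.zeros((N, N, 2), dtype=int)  # backpointers (split positions)
    bI = np.zeros((N, N, 2), dtype=int)
    for k in range(1, N):                # d=1: head at left end; d=0: at right
        for i in range(N - k):
            j = i + k
            # incomplete spans: add the arc between i and j
            cand = [C[i, r, 1] + C[r + 1, j, 0] for r in range(i, j)]
            best = i + int(np.argmax(cand))
            I[i, j, 0] = max(cand) + scores[j, i]
            I[i, j, 1] = max(cand) + scores[i, j]
            bI[i, j, 0] = bI[i, j, 1] = best
            # complete spans: absorb a finished incomplete span
            left = [C[i, r, 0] + I[r, j, 0] for r in range(i, j)]
            C[i, j, 0] = max(left)
            bC[i, j, 0] = i + int(np.argmax(left))
            right = [I[i, r, 1] + C[r, j, 1] for r in range(i + 1, j + 1)]
            C[i, j, 1] = max(right)
            bC[i, j, 1] = i + 1 + int(np.argmax(right))
    heads = [-1] * N
    def rec_I(i, j, d):
        if d == 1:
            heads[j] = i
        else:
            heads[i] = j
        r = bI[i, j, d]
        rec_C(i, r, 1)
        rec_C(r + 1, j, 0)
    def rec_C(i, j, d):
        if i == j:
            return
        r = bC[i, j, d]
        if d == 1:
            rec_I(i, r, 1)
            rec_C(r, j, 1)
        else:
            rec_C(i, r, 0)
            rec_I(r, j, 0)
    rec_C(0, N - 1, 1)
    return [int(h) for h in heads]
```

Fixing the gold root then amounts to boosting (or constraining) the root's outgoing arc scores before decoding.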
We compared our approach against the following baselines: (1) right-(left-) chain baseline, which always selects the next(previous) word as dependency head. (2) A \textit{random} BERT baseline, with which we randomly initialize weights of the BERT model~\cite{htut2019attention}, then use our methods to induce dependency trees.
We measure model performance using Unlabeled Attachment Score (UAS). We note that UAS has been shown to be highly sensitive to annotation variations~\cite{schwartz2011neutralizing, tsarfaty2011ned, kubler2009dependency}. Therefore, it may not be a fair evaluation metric for analyzing and interpreting BERT. To reflect the real quality of the dependency structures that are retained in BERT, we also report Undirected UAS (UUAS)~\cite{klein2004dmv} and the Neutral Edge Direction (NED) scores~\cite{schwartz2011neutralizing}.
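The two simpler metrics can be computed directly from head lists, as in the illustrative scorer below (NED, which additionally neutralizes annotation-convention differences in edge direction, is omitted for brevity):

```python
def uas_uuas(pred_heads, gold_heads):
    """Illustrative scorer for one sentence; index 0 is the root (excluded).
    UAS: fraction of tokens whose predicted head matches gold.
    UUAS: fraction of predicted edges present in gold, ignoring direction."""
    n = len(gold_heads) - 1
    correct = sum(pred_heads[m] == gold_heads[m] for m in range(1, n + 1))
    gold_edges = {frozenset((m, gold_heads[m])) for m in range(1, n + 1)}
    undirected = sum(frozenset((m, pred_heads[m])) in gold_edges
                     for m in range(1, n + 1))
    return correct / n, undirected / n
```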
\begin{table}[]
\centering
\begin{tabular}{lcc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c}{\textbf{Parsing UAS}}\\
& \multicolumn{1}{c}{\textbf{WSJ10-U}} & \multicolumn{1}{c}{\textbf{PUD}} \\
\toprule
Right-chain & 49.5 & 35.0 \\
Left-chain & 20.6 & 10.7 \\
Random BERT & 16.9 & 10.2 \\
\toprule
Eisner+Dist & \textbf{58.6} & \textbf{41.7}\\
Eisner+Prob & 52.7 & 34.1\\
CLE+Dist & 51.5 & 33.2\\
\toprule
\end{tabular}
\caption{UAS results of BERT on unsupervised dependency parsing. }
\label{tab:dep-result}
\end{table}
\textbf{Results.} Tables~\ref{tab:dep-result} and \ref{tab:new-metric} show the results of our dependency probes. From Table~\ref{tab:dep-result}, we see that although BERT is trained without any explicit supervision from syntactic dependencies, a syntax-aware representation exists in it to some extent. The best UAS scores it achieves (Eisner+Dist) are substantially higher than those of the random BERT baseline on both WSJ10-U (+41.7) and PUD (+31.5). Moreover, the \textit{Dist} method significantly outperforms the \textit{Prob} method on both datasets we evaluated. We thus use \textit{Dist} as the default distance function in our later discussion. We also note that the Eisner algorithm shows a clear advantage over CLE, since English sentences are mostly projective. However, our best performing method does not go much beyond the strong right-chain baseline (with the gold root provided), showing that the dependency relations learned are mostly simple and local ones.
For reference, the well-known unsupervised parser DMV~\cite{klein2004dmv} achieves a UAS of 43.2 on WSJ10 with the Collins~\shortcite{collins1999head} head conventions. Note that the DMV parser utilizes POS tags for training while ours starts with the gold root. The results are therefore not directly comparable. Putting them together, however, reveals potential room for improvement for current neural unsupervised dependency parsing systems in the BERT era.
\begin{table}[]
\centering
\begin{tabular}{lccc}
\toprule
\textbf{Model} & \textbf{UAS} & \textbf{UUAS} & \textbf{NED} \\
\toprule
Eisner+Dist & 41.7 & 52.1 & 69.6 \\
Right-chain & 35.0 & 39.9 & 41.2 \\
\toprule
\end{tabular}
\caption{Performance on PUD when evaluated using UAS, UUAS, and NED.}
\label{tab:new-metric}
\end{table}
From Table~\ref{tab:new-metric}, we see that although BERT only outperforms the right-chain baseline modestly in terms of UAS, it shows significant improvements on UUAS (+12.2) and NED (+28.4). We make a similar observation on WSJ10-U.
This suggests that BERT does capture inter-word dependencies, even though it may not totally agree with any one specific human-designed governor-dependent schema.
We manually inspect those discrepancies and observe that they can also be syntactically valid. For instance, consider the sentence ``It closed on Sunday.''. For the phrase ``on Sunday'', our method selects the functional word ``on'' as the head, while the gold-standard annotation uses a lexical head (``Sunday'')\footnote{This specific choice actually agrees with the YM \cite{yamada2003statistical} schema.}.
The above findings prove that BERT has learned its own syntax as a by-product of self-supervised training, not by directly copying any human design. However, given the superior performance of BERT on downstream tasks, it is natural to ask whether BERT is learning an empirically useful structure of language. We investigate this question in Sec~\ref{sec:sentiment}.
\subsection{Constituency Probe}
\label{sec:constituency}
We now examine the extent to which BERT learns the constituent structure of sentences. We first present our algorithm for unsupervised constituency parsing, which executes in a top-down manner by recursively splitting larger constituents into smaller ones.
\paragraph{Top-Down Parsing.}
Given a sentence as a sequence of tokens $\mathbf{x}=[x_1, \ldots, x_T]$ and the corresponding impact matrix $\mathcal{F}$, we start by finding the best splitting position $k$ that separates the sentence into constituents $( (\mathbf{x}_{<k} ), (x_k, (\mathbf{x}_{>k} ) ))$, where $\mathbf{x}_{<k} = [x_1, \ldots, x_{k-1}]$. The best splitting position ensures that each constituent has a large average impact between the words within it (so those words are more likely to form a constituent), while the impact between words of different constituents is kept as small as possible (so they are unlikely to belong to the same constituent). Mathematically, we decide the best $k$ for the constituent $\mathbf{x}=[x_i, x_{i+1}, \ldots, x_j]$ by the following optimization:
\begin{equation}
\begin{split}
\argmax_k & \quad \mathcal{F}_{i,\ldots,k}^{i,\ldots,k} + \mathcal{F}_{k+1,\ldots,j}^{k+1,\ldots,j} \\
& - \mathcal{F}_{i,\ldots,k}^{k+1,\ldots,j} - \mathcal{F}_{k+1,\ldots,j}^{i,\ldots,k}
\end{split}
\end{equation}
where $\mathcal{F}_{i,\ldots,k}^{i,\ldots,k} = \frac{\sum_{a=i}^{k}\sum_{b=i}^{k} f(x_a,x_b)}{|\theta|}$, and $|\theta|$ is the number of off-diagonal elements in the corresponding impact matrix $
\left[
\begin{matrix}
x_{i, i} & ... & x_{i, k} \\
\vdots & \ddots & \vdots \\
x_{k, i} & ... & x_{k, k}
\end{matrix}
\right]
$.
We recursively split $ (\mathbf{x}_{<k} )$ and $(\mathbf{x}_{>k} )$ until only single words remain. Note that this top-down strategy is similar to that of ON-LSTM~\cite{shen2018ordered} and PRPN~\cite{shen2018prpn}, but differs in that ON-LSTM and PRPN decide the splitting position based on a ``syntactic distance vector'' which is explicitly modeled by a special network component. To distinguish our approach from the others, we denote our parser as \textbf{MART} (\textbf{MA}t\textbf{R}ix-based \textbf{T}op-down parser).
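As a hedged illustration (our own re-implementation, not released code), the splitting rule and recursion can be sketched as follows; the matrix $\mathcal{F}$ is represented as a dense list of lists $F$, and the score of a split is the average-impact objective from the optimization above:

```python
def avg_impact(F, rows, cols):
    # mean of f(x_a, x_b) over the block, excluding diagonal entries
    vals = [F[a][b] for a in rows for b in cols if a != b]
    return sum(vals) / len(vals) if vals else 0.0

def split_score(F, i, k, j):
    # within-constituent impact minus cross-constituent impact
    left, right = range(i, k + 1), range(k + 1, j + 1)
    return (avg_impact(F, left, left) + avg_impact(F, right, right)
            - avg_impact(F, left, right) - avg_impact(F, right, left))

def mart(F, i, j):
    # returns a binary bracketing of token indices i..j as nested tuples
    if i == j:
        return i
    k = max(range(i, j), key=lambda k: split_score(F, i, k, j))
    return (mart(F, i, k), mart(F, k + 1, j))
```

On a toy matrix in which tokens 0-1 and tokens 2-3 interact strongly within each pair but weakly across pairs, the parser brackets the sentence as ((0,1),(2,3)).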
\begin{table*}[]
\centering
\begin{tabular}{lccccccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c}{\textbf{Parsing F1}} & \multicolumn{5}{c}{\textbf{Accuracy on PTB23 by Tag}} \\
& \textbf{WSJ10} & \textbf{PTB23} & \textbf{NP} & \textbf{VP} & \textbf{PP} & \textbf{S} & \textbf{SBAR}\\
\toprule
PRPN-LM & 70.5 & 37.4 & 63.9 & - & 24.4 & - & - \\
ON-LSTM 1st-layer & 42.8 & 24.0 & 23.8 & 15.6 & 18.3 & 48.1 & 16.3\\
ON-LSTM 2nd-layer & 66.8 & 49.4 & 61.4 & 51.9 & 55.4 & 54.2 & 15.4\\
ON-LSTM 3rd-layer & 57.6 & 40.4 & 57.5 & 13.5 & 47.2 & 48.6 & 10.4\\
300D ST-Gumbel w/o Leaf GRU & - & 25.0 & 18.8 & - & 9.9 & - & -\\
300D RL-SPINN w/o Leaf GRU & - & 13.2 & 24.1 & - & 14.2 & - & -\\
\toprule
\textbf{MART} & 58.0 & 42.1 & 44.6 & 47.0 & 50.6 & 66.1 & 51.9 \\
Right-Branching & 56.7 & 39.8 & 25.0 & 71.8 & 42.4 & 74.2 & 68.8 \\
Left-Branching & 19.6 & 9.0 & 11.3 & 0.8 & 5.0 & 44.1 & 5.5 \\
\toprule
\end{tabular}
\caption{Unlabeled parsing F1 results evaluated on WSJ10 and PTB23.}
\label{tab:constituent}
\end{table*}
\paragraph{Experiment Setup.}
We follow the experimental settings of Shen et al.~\shortcite{shen2018ordered, shen2018prpn} and evaluate our method on the 7,422 sentences of the WSJ10 dataset and on PTB23 (the traditional PTB test set for constituency parsing).
\paragraph{Results.} Table~\ref{tab:constituent} shows the results of our constituency probes. From the table, we see that BERT outperforms most baselines on PTB23, except for the second layer of ON-LSTM. Note that all these baselines have specifically-designed architectures for the unsupervised parsing task, while BERT's knowledge about constituent formalism emerges purely from self-supervised training on unlabeled text.
It is also worth noting that recent results~\cite{dyer2019critical, li2019imitation} suggest that the parsing algorithm used by ON-LSTM (PRPN) is biased towards the right-branching trees of English, leading to inflated F1 compared to unbiased parsers. To ensure a fair comparison with them, we also introduce this right-branching bias. However, our method remains robust without this bias (e.g., F1 drops by only 0.9 on PTB23).
To further understand the strengths and weaknesses of each system, we analyze their accuracies by constituent tags. In Table~\ref{tab:constituent}, we show the accuracies on the five most common tags in PTB23. We find that the success of PRPN and ON-LSTM mainly comes from the accurate identification of NP (noun phrase), which accounts for 38.5\% of all constituents. For other phrase-level tags like VP (verb phrase) and PP (prepositional phrase), the accuracies of BERT are competitive. Moreover, for clause-level tags, BERT significantly outperforms ON-LSTM. Take SBAR (clause introduced by a subordinating conjunction) for example: BERT achieves an accuracy of 51.9\%, which is about 3.4 times higher than that of ON-LSTM. One possible interpretation is that BERT is pre-trained on long contiguous sequences extracted from a document-level corpus, and the masking strategy (randomly masking 15\% of tokens) may allow BERT to learn to model a sequence of words (which might form a clause).
\section{Discourse Probe}
\label{sec:discourse}
Having shown that clause-level structures are well-captured in BERT using the constituency probe, we now explore a more challenging probe -- probing BERT's knowledge about the structure of a document.
A document contains a series of coherent text spans, which are named Elementary Discourse Units (EDUs)~\cite{yang2018scidtb, polanyi1988formal}. EDUs are connected to each other by discourse relations to form a document.
We devise a discourse probe to investigate how well BERT captures structural correlations between EDUs. As the foundation of the probe, we extract an EDU-EDU impact matrix for each document using span-level perturbation.
\paragraph{Setup.} We evaluate our probe on the discourse dependency corpus SciDTB~\cite{yang2018scidtb}. We do not use the popular discourse corpora RST-DT~\cite{carlson2003builrst} and PDTB~\cite{prasad2008pdtb} because PDTB focuses on local discourse relations but ignores the whole document structure, while RST-DT introduces intermediate nodes and does not cover non-projective structures. We follow the same baseline settings and evaluation procedure as in Sec~\ref{sec:dependency}, except that we remove the gold root from our evaluation, since we want to compare accuracies by syntactic distance.
\begin{table}[]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{UAS}} & \multicolumn{4}{c}{\textbf{Accuracy by distance}}\\
& & 0 & 1 & 2 & 5 \\
\toprule
Right-chain & 10.7 & 20.5 & - & - & - \\
Left-chain & \textbf{41.5} & \textbf{79.5} & - & - & - \\
Random BERT & 6.3 & 20.4 & 7.5 & 3.5 & 0.0 \\
Eisner+Dist & 34.2 & 61.6 & 7.3 & \textbf{7.6} & \textbf{12.8}\\
CLE+Dist & 34.4 & 63.8 & 3.3 & 3.5 & 2.6 \\
\toprule
\end{tabular}
}
\caption{Performance of different discourse parsers. The distance is defined as the number of EDUs between head and dependent.}
\label{tab:discourse}
\end{table}
\paragraph{Results.} Table~\ref{tab:discourse} shows the performance of our discourse probes. We find that both Eisner and CLE achieve significantly higher UAS (+28) than the random BERT baseline. This suggests that BERT is aware of the structure of the document it is given. In particular, we observe a decent accuracy in identifying discourse relations between adjacent EDUs, perhaps due to the ``next sentence prediction'' task in pre-training, as pointed out in~\cite{shi2019next}. However, our probes fall behind the left-chain baseline, which benefits from its strong structural prior\footnote{For reference, a supervised graph-based parser~\cite{li2014text} achieves a UAS of 57.6 on SciDTB.} (principal clauses mostly precede their subordinate clauses). Our finding sheds some light on BERT's success in downstream tasks that take paragraphs as input (e.g., Question Answering).
\section{BERT-based Trees VS Parser-provided Trees}
\label{sec:sentiment}
Our probing results suggest that although BERT has captured a certain amount of syntax, there are still substantial disagreements between the syntax BERT learns and the formalisms designed by linguists. For instance, our constituency probe on PTB23 significantly outperforms most baselines, but it only roughly agrees with the PTB formalism (41.2\% F1). However, BERT has already demonstrated its superiority in many downstream tasks. An interesting question is whether \textit{BERT is learning an empirically useful or even better structure of a language}.
To answer this question, we turn to neural networks that adopt dependency parse trees as an explicit structural prior to improve downstream tasks. We replace the ground-truth dependency trees those networks use with ones induced from BERT, and approximate the effectiveness of the different trees by the improvements they introduce.
We conduct experiments on the Aspect Based Sentiment Classification (\textbf{ABSC}) task~\cite{absc}. ABSC is a fine-grained sentiment classification task aiming at identifying the sentiment expressed towards each aspect of a given target entity. As an example, in the following comment on a restaurant, ``I hated their fajitas, but their salads were great'', the sentiment polarity for the aspect \textit{fajitas} is negative while that for \textit{salads} is positive. It has been shown in \citet{zhang2019aspect} that injecting syntactic knowledge into neural networks can improve ABSC accuracy. Intuitively, given an aspect, a syntactically closer context word should play a more important role in predicting that aspect's sentiment. They integrate the distances between context words and the aspect on a dependency tree into a convolution network and build a Proximity-Weighted Convolution Network (PWCN). As a naive baseline, they compare with a network weighted by the relative position between the aspect and context words.
\paragraph{Setup.} We experimented on two datasets from SemEval 2014~\cite{absc}, which consist of reviews and comments from two categories: \textsc{Laptop} and \textsc{Restaurant}. We adopt the standard evaluation metrics: Accuracy and Macro-Averaged F1. We follow the instructions of \citet{zhang2019aspect} to run the experiments 5 times with random initialization and report the averaged performance. We denote the original PWCN with relative position information as PWCN-Pos, and the variant that utilizes dependency trees constructed by SpaCy\footnote{https://spacy.io/} as PWCN-Dep. SpaCy reports a UAS of 94.5 on the English PTB, so it can serve as a good reference for a human-designed dependency schema. We also compare against two trivial trees (left-chain and right-chain trees). For our model, we feed the corpus into BERT and extract dependency trees with the best performing setting: Eisner+Dist. For parsing, we introduce an inductive bias to favor short dependencies~\cite{eisner2010favor}. To ensure a fair comparison, we induce the root word from the impact matrix $\mathcal{F}$ instead of using the gold root. Specifically, we select the root word $x_k$ based on the simple heuristic $\argmax_i \sum_{j=1}^T f(x_i, x_j)$.
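The root heuristic amounts to a single line; a sketch (with $\mathcal{F}$ again represented as a dense list of lists, rows indexed by the candidate word):

```python
def induce_root(F):
    # root = argmax_i sum_j f(x_i, x_j): the word with the largest
    # total impact on the rest of the sentence
    return max(range(len(F)), key=lambda i: sum(F[i]))
```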
\paragraph{Results.} Table~\ref{tab:sentiment} presents the performance of different models. We observe that the trees induced from BERT are either on-par (\textsc{Laptop}) or marginally better (\textsc{Restaurant}) in terms of the downstream task's performance when compared with trees produced by SpaCy.
\textsc{Laptop} is considerably more difficult than \textsc{Restaurant} due to the fact that the sentences are generally longer, which makes inducing dependency trees more challenging.
We also see that the Eisner trees generally perform better than the right-/left-chain baselines. It is also worth noting that the right-chain baseline itself outperforms PWCN+Dep on \textsc{Restaurant}, which suggests an exciting line of future work: investigating how encoding structural knowledge can help ABSC.
Our results suggest that although the tree structures BERT learns can disagree with parser-provided, linguistically motivated ones to a large extent, they are also empirically useful for downstream tasks, at least for ABSC. As future work, we plan to extend our analysis to more downstream tasks and models, like those reported in Shi et al.~\shortcite{shi2018tree}.
\begin{table}[]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c}{\textbf{Laptop}} & \multicolumn{2}{c}{\textbf{Restaurant}}\\
& Acc & Macro-F1 & Acc & Macro-F1 \\
\hline
LSTM & 69.63 & 63.51 & 77.99 & 66.91 \\
\hline
\textbf{PWCN} & & & &\\
$\quad$+Pos & 75.23 & 71.71 & 81.12 & 71.81 \\
$\quad$+Dep & 76.08 & 72.02 & 80.98 & 72.28 \\
$\quad$+Eisner & 75.99 & 72.01 & \textbf{81.21} & \textbf{73.00} \\
$\quad$+right-chain& 75.64 & 71.53 & 81.07 & 72.51 \\
$\quad$+left-chain & 74.39 & 70.78 & 80.82 & 72.71 \\
\toprule
\end{tabular}
}
\caption{Experimental results of aspect based sentiment classification.}
\label{tab:sentiment}
\end{table}
\section{Related Work}
There has been substantial research investigating what pre-trained language models have learned about languages' structures.
One rising line of research uses probing classifiers to investigate the different syntactic properties captured by a model. These are generally referred to as ``probing tasks''~\cite{conneau2018you}, ``diagnostic classifiers''~\cite{giulianelli2018under}, or ``auxiliary prediction tasks''~\cite{Adi2017}. The syntactic properties investigated range from basic ones like sentence length~\cite{Shi2016, Jawahar2019}, syntactic tree depth~\cite{Jawahar2019}, and segmentation~\cite{liu2019linguistic} to challenging ones like syntactic labeling~\cite{tenney2019bert,Tenney2019}, dependency parsing~\cite{Hewitt2019, Clark2019}, and constituency parsing~\cite{peters2018dissecting}. However, when a probe achieves high accuracy, it is difficult to tell whether it is the representation that encodes the targeted syntactic information, or the probe itself that just learns the task~\cite{hewitt2019designing}.
In line with our work, recent studies seek to find correspondences between parts of the neural network and certain linguistic properties, without explicit supervision.
Most of them focus on analyzing the attention mechanism, extracting syntactic trees from each attention head and layer individually~\cite{Tiedemann2018, Clark2019}. Their goal is to check if the attention heads of a given pre-trained model can track syntactic relations better than chance or baselines. In particular, \citet{Tiedemann2018} analyze a machine translation model's encoder by extracting dependency trees from its self-attention weights, using the Chu-Liu/Edmonds algorithm. \citet{Clark2019} conduct a similar investigation on BERT, but the simple head selection strategy they use does not guarantee a valid dependency tree. \citet{Marecek2018} propose heuristic methods to convert attention weights to syntactic trees. However, they do not quantitatively evaluate their approach. In their later study~\cite{Marecek2019}, they propose a bottom-up algorithm to extract constituent trees from transformer-based NMT encoders and evaluate their results on three languages. \citet{htut2019attention} reassess these works but find that there are no generalist heads that can do holistic parsing. Hence, analyzing attention weights directly may not reveal much of the syntactic knowledge that a model has learned. The recent dispute about attention as explanation~\cite{jain2019attention, serrano-smith-2019-attention, wiegreffe2019attention} also suggests that attention's behavior does not necessarily represent that of the original model.
Another line of research examines the outputs of language models on carefully chosen input sentences~\cite{Goldberg2019, Bacon2019}. They extend previous works~\cite{Linzen2016, Gulordava2018, marvin2018targeted} on the subject-verb agreement test (generating the correct number of a verb far away from its subject) to provide a measure of the model's syntactic ability. Their results show that the BERT model captures syntax-sensitive agreement patterns well in general. However, subject-verb agreement cannot provide more nuanced tests of other complex structures (e.g., dependency structure, constituency structure), which are the interest of our work.
Two recent works also perturb the input sequence for model interpretability~\cite{Rosa2019, Xintong2019}. However, these works only perturb the sequence once. \citet{Rosa2019} utilize the original MLM objective to estimate each word's ``reducibility'' and import simple heuristics into a right-chain baseline to construct dependency trees. \citet{Xintong2019} focus on evaluating word alignment in NMT, but unlike our two-step masking strategy, they only replace the token of interest with a zero embedding or a randomly sampled word in the vocabulary.
\section{Discussion \& Conclusion}
One concern shared by our reviewers is that the performance of our probes is underwhelming: the induced trees are barely closer to linguist-defined trees than simple baselines (e.g., right-branching) and are even worse in the case of discourse parsing. However, this does not mean that supervised probes are wrong or that BERT captures less syntax than we thought. In fact, there is no guarantee that our probe will find a strong correlation with human-designed syntax, since we do not introduce the human-designed syntax as supervision. What we find is the ``natural'' syntax inherent in BERT, acquired from self-supervised learning on plain text. We would rather say our probe complements the supervised probing findings in two ways. First, it provides a lower bound (on the unsupervised syntactic parsing ability of BERT). By improving this lower bound, we could uncover more ``accurate'' information to support supervised probes' findings. Second, we show that when combined with a downstream application (Sec~\ref{sec:sentiment}), the syntax learned by BERT can be empirically helpful despite not being totally identical to the human design.
In summary, we propose a parameter-free probing technique to complement the current line of work on interpreting BERT through probes. With a carefully designed two-stage perturbation, we obtain impact matrices from BERT. Such a matrix mirrors the function of the attention mechanism in capturing inter-word correlations, except that it emerges from the output of the BERT model instead of from intermediate representations. We devise algorithms to extract syntactic trees from this matrix. Our results reinforce those of~\cite{Hewitt2019, liu2019linguistic, Jawahar2019, Tenney2019, tenney2019bert}, who demonstrated that BERT encodes rich syntactic properties. We also extend our method to probe document structure, which sheds light on BERT's effectiveness in modeling long sequences. Finally, we find that feeding the empirically induced dependency structures into a downstream system~\cite{zhang2019aspect} can further improve its accuracy. The improvement is comparable with, or even superior to, that obtained with a human-designed dependency schema. This offers an insight into BERT's success in downstream tasks. We leave it for future work to use our technique to test other linguistic properties (e.g., coreference) and to extend our study to more downstream tasks and systems.
\section{Acknowledgement}
We would like to thank Lingpeng Kong from DeepMind for his constructive feedback on the paper. This research is supported by Hong Kong Research Grants Council GRF grant 17254016.
\section{Introduction}
String theory is a powerful and multifaceted formalism for studying topics in physics and pure math. Included among the subjects encompassed by the formalism are formulations of string theory based on the field $\mathbb{Q}_p$ of $p$-adic numbers.
The notion of $p$-adic string theory was introduced independently by Volovich \cite{volovich1987p} and Grossman \cite{grossman1987p} in 1987, who each proposed a theory where spacetime and momenta are valued over the $p$-adic numbers $\mathbb{Q}_p$, and amplitudes are given in terms of Morita gamma functions. The conception of $p$-adic string theory which has since become standard, and which was put forward by Freund and Olson \cite{freund1987non}, operates with real-valued spacetime coordinates and momenta, while scattering amplitudes, now given by Gelfand-Graev gamma functions, are valued over the complex numbers; the role of the $p$-adic numbers is to serve as integration variables in the integral representation of $N$-point tachyon amplitudes $A_N^{(p)}$. As clarified by Zabrodin \cite{zabrodin1989non}, the worldsheet of $p$-adic string theory consists of a regular $(p+1)$-valent tree known as the Bruhat-Tits tree, while the boundary of the world sheet is given by $\mathbb{Q}_p$. The tachyon amplitudes obtained in this setup are significantly simpler than those of real bosonic string theory and contain only a finite number of poles, all due to tachyons, rather than semi-infinite sequences of poles arising from an infinite tower of higher spin particles. This simplicity has enabled $p$-adic string theory to serve as a helpful toy model, particularly in the context of string field theory. In 1988, long before the corresponding calculation was carried out in real string theory, Brekke, Freund, Olson, and Witten \cite{brekke1988non} managed to compute the tree-level effective spacetime action for the tachyonic field of $p$-adic string theory. 
The stationary configurations of this action were subsequently interpreted by Ghoshal and Sen as exotic $D$-branes\footnote{ We refer the reader to Huang, Stoica, and Zhong's paper \cite{huang2021massless} for a recent proposal for a theory of $p$-adic 2-branes.} and understood to provide an explicit manifestation of tachyon condensation \cite{ghoshal2000tachyon} --- an interpretation corroborated by Minahan's work \cite{minahan2001mode} demonstrating the consistency of tree-level $p$-adic string amplitudes and tachyon fluctuations about lumps in the effective action; but also an interpretation requiring modifications at loop-level \cite{minahan2001quantum}. Some 12 years after the computation of the spacetime effective action of $p$-adic string theory, Gerasimov and Shatashvili \cite{gerasimov2000exact} derived a tree-level Lagrangian for the tachyon of real open string theory. Their result allowed them to make the curious observation that their Lagrangian could have also been obtained by taking the $p\rightarrow 1$ limit of the tachyon Lagrangian in the $p$-adic formalism.
The idea that $p$-adic string theories, beyond serving as mere toy models, might enjoy a precise relation to real string theory was advanced already in 1987, when Freund and Witten \cite{freund1987adelic} raised the suggestion that all the $p$-adic string theories and real bosonic string theory might be commingled in a theory associated to the adelic numbers $\mathbb{A}$ that conjoin $\mathbb{R}$ with all the fields $\mathbb{Q}_p$ for every prime $p$. The first step towards explicating this suggestion is to present a unified description of 4-point amplitudes in real and $p$-adic string theory. To this end, it is customary to introduce a label $v$ such that $v=\infty$ or $v=p$ for some prime $p$, where the value $v=\infty$ refers to the real case, meaning that $\mathbb{Q}_\infty=\mathbb{R}$ and that $|\cdot|_\infty$ is the familiar absolute value norm, while $|\cdot|_p$ is the $p$-adic norm. In this notation, working with spacetime signature $(-,+,...,+)$, introducing dimensionless Mandelstam invariants given by
\begin{align}
s=\,-\alpha'(k_1+k_2)^2&\,, \nonumber
\\
t=\,-\alpha'(k_1+k_3)^2&\,, \label{Mandelstam}
\\
u=\,-\alpha'(k_1+k_4)^2&\,, \nonumber
\end{align}
and setting aside the question of overall normalization, the 4-point tachyon amplitudes in the various theories are given by
\begin{align}
A_4^{(v)}=\int_{\mathbb{Q}_v} dx\,|x|_v^{2\alpha' k_1\cdot k_2}|1-x|_v^{2\alpha' k_1\cdot k_3}
=\int_{\mathbb{Q}_v} dx\,|x|_v^{-2-s}|1-x|_v^{-2-t}\,.
\label{A4}
\end{align}
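For completeness, the second equality in \eqref{A4} follows from the open-string tachyon mass-shell condition $\alpha' k_i^2 = -\alpha' m^2 = 1$ together with momentum conservation $\sum_i k_i = 0$:

```latex
% exponents in (A4): the mass-shell condition gives
s = -\alpha'(k_1+k_2)^2
  = -\alpha' k_1^2 - \alpha' k_2^2 - 2\alpha' k_1\cdot k_2
  = -2 - 2\alpha' k_1\cdot k_2\,,
% so that 2\alpha' k_1.k_2 = -2-s, and likewise 2\alpha' k_1.k_3 = -2-t.
% Summing the three invariants in (Mandelstam) and using k_2+k_3+k_4 = -k_1:
s+t+u = -\alpha'\Big(3k_1^2 + \sum_{i=2}^{4}k_i^2 + 2k_1\cdot(k_2+k_3+k_4)\Big)
      = -\alpha'\sum_{i=1}^{4}k_i^2 = -4\,.
```

The constraint $s+t+u=-4$ is what allows the amplitudes to be written symmetrically in all three invariants.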
Throughout this paper, we will only be considering tree-amplitudes and only full amplitudes, not partial amplitudes. When $v=\infty$, we recover the standard Veneziano amplitude:
\begin{align}
A_4^{(\infty)}=
\frac{\Gamma(-1-s)\Gamma(-1-t)}{\Gamma(-2-s-t)}+
\frac{\Gamma(-1-s)\Gamma(-1-u)}{\Gamma(-2-s-u)}+
\frac{\Gamma(-1-t)\Gamma(-1-u)}{\Gamma(-2-t-u)}\,.
\label{A4real}
\end{align}
In the $p$-adic case, direct integration over $\mathbb{Q}_p$ gives
\begin{align}
A_4^{(p)}=&\,
\frac{\zeta_p(-1-s)\zeta_p(-1-t)\zeta_p(-1-u)}{\zeta_p(2+s)\zeta_p(2+t)\zeta_p(2+u)}
=\,\Gamma_p(-1-s)\Gamma_p(-1-t)\Gamma_p(-1-u)\,,
\label{A4padic}
\end{align}
where the local zeta and gamma functions are given by
\begin{align}
\zeta_p(x)=\frac{1}{1-p^{-x}}\,,\hspace{20mm}\Gamma_p(x)=\frac{\zeta_p(x)}{\zeta_p(1-x)}\,. \label{zetap}
\end{align}
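The direct integration leading to \eqref{A4padic} proceeds shell by shell: $\mathbb{Q}_p$ decomposes into the sets $|x|_p = p^{\pm n}$, of Haar measure $(1-1/p)\,p^{\pm n}$, on which the integrand of \eqref{A4} is constant. The following Python sketch is our own numerical check (with illustrative parameter choices, valid in the region $s,t,u<-1$ where every geometric sum converges) that the truncated shell sum reproduces the product of gamma functions:

```python
def zeta_p(p, x):
    # local zeta function 1 / (1 - p^(-x))
    return 1.0 / (1.0 - p ** (-x))

def gamma_p(p, x):
    # Gelfand-Graev gamma function zeta_p(x) / zeta_p(1 - x)
    return zeta_p(p, x) / zeta_p(p, 1.0 - x)

def A4_p_shells(p, s, t, N=1000):
    # integrate |x|^(-2-s) |1-x|^(-2-t) over Q_p by summing norm shells;
    # u = -4 - s - t, and each series converges when s, t, u < -1
    u = -4.0 - s - t
    total = 1.0 - 2.0 / p          # the region |x|_p = |1-x|_p = 1
    w = 1.0 - 1.0 / p
    for n in range(1, N + 1):
        total += w * p ** (n * (1.0 + s))  # |x|_p = p^-n (so |1-x|_p = 1)
        total += w * p ** (n * (1.0 + t))  # |1-x|_p = p^-n (so |x|_p = 1)
        total += w * p ** (n * (1.0 + u))  # |x|_p = |1-x|_p = p^+n
    return total
```

For instance, at $p=3$, $s=-1.5$, $t=-1.4$ (hence $u=-1.1$), the shell sum agrees with $\Gamma_p(-1-s)\,\Gamma_p(-1-t)\,\Gamma_p(-1-u)$ to within floating-point accuracy.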
There also exist related zeta and gamma functions associated to the real case $v=\infty$,
\begin{align}
\zeta_\infty(x)=\pi^{-\frac{x}{2}}\,\Gamma\big(\frac{x}{2}\big)\,,\hspace{20mm}\Gamma_\infty(x)=\frac{\zeta_\infty(x)}{\zeta_\infty(1-x)}\,,
\end{align}
and a fascinating observation of Freund and Witten's is that the three terms of the full Veneziano amplitude \eqref{A4real} combine into a single expression of the same form as the $p$-adic answer:
\begin{align}
A_4^{(\infty)}=&\,
\frac{\zeta_\infty(-1-s)\zeta_\infty(-1-t)\zeta_\infty(-1-u)}{\zeta_\infty(2+s)\zeta_\infty(2+t)\zeta_\infty(2+u)}
=\,\Gamma_\infty(-1-s)\Gamma_\infty(-1-t)\Gamma_\infty(-1-u)\,.
\end{align}
By invoking the functional equation for the Riemann zeta function
\begin{align}
\zeta_\infty(x)\,\zeta(x)=\zeta_\infty(1-x)\,\zeta(1-x)\,,
\end{align}
we can also express the Veneziano amplitude solely in terms of the Riemann zeta function:
\begin{align}
A_4^{(\infty)}=&\,
\frac{\zeta(2+s)\zeta(2+t)\zeta(2+u)}{\zeta(-1-s)\zeta(-1-t)\zeta(-1-u)}\,.
\label{VenezianoRiemann}
\end{align}
We now recall the formula due to Euler that expresses the Riemann zeta function as a product running over the set $\mathbb{P}$ of all primes:
\begin{align}
\prod_{p\in \mathbb{P}}\zeta_p(z) = \zeta(z)\,.
\label{Euler}
\end{align}
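Both sides of \eqref{Euler} can be compared numerically in the region Re$[z]>1$; a short self-contained sketch (the truncation points and tolerances are our own choices):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

def euler_product(z, pmax):
    # truncated product of local zeta factors 1 / (1 - p^(-z))
    prod = 1.0
    for p in primes_up_to(pmax):
        prod *= 1.0 / (1.0 - p ** (-z))
    return prod

def zeta_sum(z, nmax):
    # truncated Dirichlet series for the Riemann zeta function
    return sum(n ** (-z) for n in range(1, nmax + 1))
```

At $z=3$, for example, the truncated product over primes below $10^4$ matches the truncated Dirichlet series to within $10^{-6}$.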
The product on the left-hand side is convergent when Re$[z]>1$, whereas the Riemann zeta function on the right-hand side admits an analytic continuation to the entire complex plane except for the simple pole at $z=1$. Following \cite{freund1987adelic}, we therefore observe that if we are allowed to distribute an infinite product of ratios of local zeta functions into each zeta function and to perform separate analytic continuations of these sub-products,
\begin{align}
\label{distribute}
\prod_p \frac{\zeta_p(...)\zeta_p(...)...}{\zeta_p(...)\zeta_p(...)...}
\,\leftrightarrow\,
\frac{\Big(\prod_p\zeta_p(...)\Big)\Big(\prod_p\zeta_p(...)\Big)...}{\Big(\prod_p\zeta_p(...)\Big)\Big(\prod_p\zeta_p(...)\Big)...}
\,\leftrightarrow\,
\frac{\zeta(...)\zeta(...)...}{\zeta(...)\zeta(...)...}\,,
\end{align}
then we arrive at what is known as the adelic product formula for the Veneziano amplitude:
\begin{align}
A_4^{(\infty)}\prod_{p\in \mathbb{P}}A_4^{(p)}=1\,.
\label{A4adelic}
\end{align}
The assumption of distributivity \eqref{distribute}, or some other regularization scheme that results in the same effect, is necessary because there is no kinematic regime for which the product in equation \eqref{A4adelic} converges; the criterion Re$[z]>1$ required for the convergence of \eqref{Euler} cannot be simultaneously satisfied for all the local zeta functions in \eqref{A4padic}.\footnote{There are arguments in the literature \cite{vladimirov1993freund,huang2020green,huang2022quadratic} attesting to the validity of the equation
\begin{align}
\Gamma_\infty(z)\prod_{p\in\mathbb{P}}\Gamma_p(z)=1
\label{gammaProd}
\end{align}
when understood in a suitably regulated sense, but even with equation \eqref{gammaProd} given, an assumption of distributivity is needed in order to split the product over $A_4^{(p)}$ into three products over local gamma functions.}
These issues of non-convergence can be circumvented by turning to the adelic 5-point amplitude, for which there exist kinematic regimes where the prime product does converge. Introducing the shorthands
\begin{align}
s_{ij} \equiv 2\alpha'k_i \cdot k_j\,,
\hspace{20mm}
s_{ijk} \equiv 2\alpha'k_i\cdot k_j+2\alpha'k_i\cdot k_k+2\alpha'k_j\cdot k_k\,,
\end{align}
the real $(v=\infty)$ and $p$-adic $(v=p)$ tachyonic 5-point amplitudes are given by
\cite{virasoro1969generalization,bardakci1969meson,koba1969reaction,fairlie1970integral}
\begin{align}
A_5^{(v)}=&
\int_{\mathbb{Q}_v^2} dx\,dy\,|x|_v^{s_{12}}|y|_v^{s_{13}}|y-x|_v^{s_{23}}
|1-y|_v^{s_{34}}|1-x|_v^{s_{24}}\,.
\label{A5}
\end{align}
In the real case, the double-integral can be carried out analytically as in \cite{kitazawa1987effective}, giving
\begin{align}
& \hspace{40mm}
A_5^{(\infty)} = \frac{1}{10} \sum_{\sigma \in S_5} F\big[\sigma(\vec{k}_1,...,\vec{k}_5)\big]\,,
\nonumber\\[-16pt]
\hspace{1mm}
\\ \nonumber
&\text{where}
\hspace{5mm}
F\big[\vec{k}_1,...,\vec{k}_5\big]
=
B\big(s_{123}+2,s_{34}+1\big)B\big(s_{12}+1,s_{23}+1\big)
\\
&\hspace{43mm}
{}_3F_2
\Big[
\big\{
-s_{24},\,s_{12}+1,\,s_{123}+2
\big\},
\big\{
s_{123}+s_{34}+3,\,
s_{12}+s_{23}+2
\big\};
1
\Big]\,,
\nonumber
\end{align}
and the factor of $\frac{1}{10}$ cancels over-counting due to cyclic permutations and their inversions. For the $p$-adics, the five-point amplitude, as computed in \cite{marinari1988p,brekke1988non}, evaluates to
\begin{align}
A_5^{(p)}=
\frac{(p-2)(p-3)}{p^2}
+
\sum_{i\neq j}\frac{(p-1)(p-2)}{2p^2(p^{1+s_{ij}}-1)}
+
\sum_{i,j,k,l\text{ distinct}}
\frac{(p-1)^2}{8p^2(p^{1+s_{ij}}-1)(p^{1+s_{kl}}-1)}\,.
\end{align}
While evaluating the adelic product $A_5^{(\infty)}\prod_{p\in\mathbb{P}}A_5^{(p)}$ presents a somewhat formidable task, Refs. \cite{marinari1988p,brekke1988non} were each able to argue, by demonstrating that the zeros and poles of $A_5^{(\infty)}$ and $\prod_{p\in\mathbb{P}}A_5^{(p)}$ do not align, that this product is not a momentum-independent constant and so in particular does not equal one. As with the derivation of the adelic 4-point formula \eqref{A4adelic}, the arguments of both references require splitting an infinite product over $\mathbb{P}$ into several such products, which do not all converge.
With the benefit of modern computing power, it is not hard to check numerically that $A_5^{(\infty)}\prod_{p\in\mathbb{P}}A_5^{(p)}$ is indeed not a constant in the kinematic regions where the product converges. Furthermore, as will be explicitly demonstrated in the next section of this paper, there exists a codimension-one subset of kinematic space for which the adelic product $A_5^{(\infty)}\prod_{p\in\mathbb{P}}A_5^{(p)}$ can be evaluated exactly. The exact result reveals two important facts:
the 5-point tree-amplitude is not an analytic function of the kinematic invariants, and it is not generally valid to apply the identifications \eqref{distribute} to the adelic 5-point product. Since analyticity and the distribution of an infinite product into subfactors played a key role in the regularization of the adelic four-point amplitude sketched above, we are motivated to investigate alternative regularization methods. Such an alternative was considered previously by Aref'eva, Dragovi\'c and Volovich \cite{aref1988adelic}, who observed that while the product $\prod_{p\in\mathbb{P}}A_4^{(p)}(s,t)$ never converges, there are $s$- and $t$-values for which the product $\prod_{p\in\mathbb{P}}\big(-A_4^{(p)}(s,t)\big)$ does converge and can be interpreted as the $p$-adic component of a non-constant adelic 4-point amplitude. This amplitude is given by a separate function for each scattering channel and therefore does not obey crossing symmetry, for which reason it has generated sparse interest. However, since we will discover in the next section that the 5-point adelic amplitude possesses precisely such a piecewise analytic structure, the proposal of Ref. \cite{aref1988adelic} merits closer inspection. Taking seriously the notion of a non-constant adelic 4-point amplitude, it behoves us to inquire whether such an amplitude complies with the demands of causality, locality, and unitarity, and also whether there are other candidates for non-constant adelic amplitudes besides that of \cite{aref1988adelic}. Finally, we may pose the question, what is the critical dimension of the adelic string?
To address these questions, the present paper adopts a prescription for regulating divergent products that does not rely on splitting an infinite product via $\prod_n\big(a_n \,b_n\big) \leftrightarrow \big(\prod_n a_n\big)\big(\prod_n b_n\big)$ and then performing a separate analytic continuation for each sub-product. In applying this prescription to adelic products, we arrive at a number of non-constant candidate amplitudes, which are given by ratios of Riemann zeta functions or other Dirichlet $L$-functions. To evaluate the suitability of these expressions as actual scattering amplitudes, we will analyze their partial wave expansions and study their high energy asymptotics. In so doing, we discover that while the candidate amplitudes exhibit unorthodox pole structures and only piecewise analyticity, they display the well-behaved high energy behaviour of conventional string amplitudes as well as, within particular ranges of target space dimensionality, the positivity properties required by unitarity.
\section{The Adelic 5-point Amplitude at Special Kinematics}
\label{2}
The real and $p$-adic 5-point amplitudes \eqref{A5} simplify greatly if we consider the special case when any two momenta are orthogonal. We will consider the case when $k_2\cdot k_3=0$, noting that for tachyonic scattering such a kinematic configuration is perfectly permissible. In this special case the double integral in \eqref{A5} factorizes and equals
\begin{align}
&A_5^{(v)}\Big|_{s_{23}=0}
\hspace{-2mm}
=
\Gamma_v(1+s_{12})
\Gamma_v(1+s_{24})
\Gamma_v(-1 \hspace{-0.5mm}-\hspace{-0.5mm}s_{12}\hspace{-0.5mm}-\hspace{-0.5mm}s_{24})\,
\Gamma_v(1+s_{13})
\Gamma_v(1+s_{34})
\Gamma_v(-1\hspace{-0.5mm}- \hspace{-0.5mm}s_{13}\hspace{-0.5mm}-\hspace{-0.5mm}s_{34})
\nonumber
\\[12pt]
\label{A5special}
&=
\frac{\zeta_v(1+s_{12})\zeta_v(1+s_{24})\zeta_v(-1-s_{12}-s_{24})}{\zeta_v(-s_{12})\zeta_v(-s_{24})\zeta_v(2+s_{12}+s_{24})}
\,
\frac{\zeta_v(1+s_{13})\zeta_v(1+s_{34})\zeta_v(-1-s_{13}-s_{34})}{\zeta_v(-s_{13})\zeta_v(-s_{34})\zeta_v(2+s_{13}+s_{34})}\,.
\end{align}
Since $A_5^{(v)}\big|_{s_{23}=0}$ is given by a product of local gamma functions, it is tempting to conclude that $A_5^{(\infty)}\prod_{p\in\mathbb{P}}A_5^{(p)}\big|_{s_{23}=0}$ is equal to one, but it can be numerically checked that this is not the case. In fact the product can be evaluated analytically through the use of a simple algebraic identity. From the definition \eqref{zetap} of the local zeta function, it follows that for any $z,w\in\mathbb{C}$:
\begin{align}
\frac{\zeta_p(z)\zeta_p(w)}{\zeta_p(z+w)}
=\frac{1-p^{-z-w}}{(1-p^{-z})(1-p^{-w})}
=-\frac{1-p^{z+w}}{(1-p^z)(1-p^w)}
=
-\frac{\zeta_p(-z)\zeta_p(-w)}{\zeta_p(-z-w)}\,.
\label{simpleid}
\end{align}
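As a quick sanity check, the identity \eqref{simpleid} can be spot-checked numerically. The sketch below is our own illustration; the helper \texttt{zeta\_p} implements the local zeta function $\zeta_p(z)=1/(1-p^{-z})$, and the tested arguments and primes are arbitrary choices.

```python
# Numerical spot-check of the identity (simpleid):
#   zeta_p(z) zeta_p(w) / zeta_p(z+w) = -zeta_p(-z) zeta_p(-w) / zeta_p(-z-w)
# for the local zeta function zeta_p(z) = 1 / (1 - p^{-z}).

def zeta_p(z, p):
    """Local zeta function at the prime p (argument may be complex)."""
    return 1.0 / (1.0 - p ** (-z))

def check_identity(z, w, p, tol=1e-9):
    lhs = zeta_p(z, p) * zeta_p(w, p) / zeta_p(z + w, p)
    rhs = -zeta_p(-z, p) * zeta_p(-w, p) / zeta_p(-z - w, p)
    return abs(lhs - rhs) < tol * max(abs(lhs), 1.0)

# A few arbitrary complex arguments, away from the poles of zeta_p.
for p in (2, 3, 5, 7):
    for z, w in [(1.3, -0.7), (2.0 + 1.0j, -0.4 - 0.3j), (0.5, 0.25)]:
        assert check_identity(z, w, p)
```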
By twice applying this identity to $A_5^{(v)}\big|_{s_{23}=0}$, we find that
\begin{align}
&\hspace{65mm}A_5^{(v)}\Big|_{s_{23}=0}
=\,
\label{A5I}
\\[6pt]
&
\frac{\zeta_v(-1-s_{12})\zeta_v(-1-s_{24})\zeta_v(-1-s_{12}-s_{24})}{\zeta_v(-s_{12})\zeta_v(-s_{24})\zeta_v(-2-s_{12}-s_{24})}
\,
\frac{\zeta_v(-1-s_{13})\zeta_v(-1-s_{34})\zeta_v(-1-s_{13}-s_{34})}{\zeta_v(-s_{13})\zeta_v(-s_{34})\zeta_v(-2-s_{13}-s_{34})}\,. \nonumber
\end{align}
Suppose now we are in the following kinematic regime:
\begin{align}
\text{(I)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{12},s_{24},s_{13},s_{34}<-2\,.
\label{I}
\end{align}
The advantage of doing so is that the convergence requirement Re$[z]>1$ is satisfied for each local zeta function $\zeta_p(z)$ in \eqref{A5I}, so that we can reliably infer that
\begin{align}
&\hspace{62mm}
\prod_{p\in\mathbb{P}}A_5^{(p)}\Big|_{\text{(I)}}
=\,
\\
&
\frac{\zeta(-1-s_{12})\zeta(-1-s_{24})\zeta(-1-s_{12}-s_{24})}{\zeta(-s_{12})\zeta(-s_{24})\zeta(-2-s_{12}-s_{24})}
\,
\frac{\zeta(-1-s_{13})\zeta(-1-s_{34})\zeta(-1-s_{13}-s_{34})}{\zeta(-s_{13})\zeta(-s_{34})\zeta(-2-s_{13}-s_{34})}\,.
\nonumber
\end{align}
By multiplying this expression with the real 5-point amplitude, we find that the adelic 5-point amplitude in region (I) is given by
\begin{align}
A_5^{(\mathbb{A})}\Big|_{\text{(I)}}=\,&A_5^{(\infty)}\prod_{p\in\mathbb{P}}A_5^{(p)}\Big|_{\text{(I)}}
\label{A5Iadelic}
\\
=\,&
\frac{\zeta(-1-s_{12})\zeta(-1-s_{24})\zeta(2+s_{12}+s_{24})}{\zeta(1+s_{12})\zeta(1+s_{24})\zeta(-2-s_{12}-s_{24})}
\,
\frac{\zeta(-1-s_{13})\zeta(-1-s_{34})\zeta(2+s_{13}+s_{34})}{\zeta(1+s_{13})\zeta(1+s_{34})\zeta(-2-s_{13}-s_{34})}\,.
\nonumber
\end{align}
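Both the convergence of the region-(I) prime product and the resulting ratio of Riemann zeta functions can be checked by brute force. The following sketch is our own illustration: \texttt{zeta} evaluates $\zeta$ by a truncated Dirichlet series with an integral tail estimate (adequate for real arguments $\geq 2$), and the sample point in region (I) is an arbitrary choice.

```python
# Truncated check of the region-(I) prime product: the product over primes of
# the p-adic 5-point amplitude, in the form (A5I) valid at s23 = 0, should
# approach the corresponding ratio of Riemann zeta functions.

def primes_up_to(n):
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            for j in range(i * i, n + 1, i):
                flags[j] = False
    return [i for i, f in enumerate(flags) if f]

def zeta(x, n_terms=100000):
    """Riemann zeta for real x >= 2: truncated sum plus integral tail estimate."""
    return sum(k ** (-x) for k in range(1, n_terms + 1)) + n_terms ** (1 - x) / (x - 1)

def zeta_p(z, p):
    return 1.0 / (1.0 - p ** (-z))

def block(zfun, a, b):
    """The six-factor ratio appearing (twice) in eq. (A5I)."""
    num = zfun(-1 - a) * zfun(-1 - b) * zfun(-1 - a - b)
    den = zfun(-a) * zfun(-b) * zfun(-2 - a - b)
    return num / den

# Arbitrary sample point in region (I): all four invariants below -2.
s12, s24, s13, s34 = -3.0, -4.0, -3.5, -3.0

prod = 1.0
for p in primes_up_to(2000):
    zp = lambda z: zeta_p(z, p)
    prod *= block(zp, s12, s24) * block(zp, s13, s34)

exact = block(zeta, s12, s24) * block(zeta, s13, s34)
assert abs(prod / exact - 1.0) < 1e-2
```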
Region (I) is not the only regime in which the adelic amplitude converges and admits a closed-form answer. The same arguments used above can be applied in the eight following instances to produce the results listed below:
\begin{align}
\text{(II)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{13}\,,\,s_{24}\,,\,s_{34}<-2\,,
\hspace{10mm}
s_{12}>-s_{24}\,,
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(II)}}
&=
\frac{\zeta(-s_{12})\zeta(-1-s_{24})\zeta(1+s_{12}+s_{24})}{\zeta(s_{12})\zeta(1+s_{24})\zeta(-1-s_{12}-s_{24})}\,
\frac{\zeta(-1-s_{13})\zeta(-1-s_{34})\zeta(2+s_{13}+s_{34})}{\zeta(1+s_{13})\zeta(1+s_{34})\zeta(-2-s_{13}-s_{34})}\,,
\nonumber
\end{align}
\begin{align}
\text{(III)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{12}\,,\,s_{13}\,,\,s_{34}<-2\,,
\hspace{10mm}
s_{24}>-s_{12}\,,
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(III)}}
&=
\frac{\zeta(-1-s_{12})\zeta(-s_{24})\zeta(1+s_{12}+s_{24})}{\zeta(1+s_{12})\zeta(s_{24})\zeta(-1-s_{12}-s_{24})}\,
\frac{\zeta(-1-s_{13})\zeta(-1-s_{34})\zeta(2+s_{13}+s_{34})}{\zeta(1+s_{13})\zeta(1+s_{34})\zeta(-2-s_{13}-s_{34})}\,,
\nonumber
\end{align}
\begin{align}
\text{(IV)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{12}\,,\,s_{24}\,,\,s_{34}<-2\,,
\hspace{10mm}
s_{13}>-s_{34}\,,
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(IV)}}
&=
\frac{\zeta(-1-s_{12})\zeta(-1-s_{24})\zeta(2+s_{12}+s_{24})}{\zeta(1+s_{12})\zeta(1+s_{24})\zeta(-2-s_{12}-s_{24})}\,
\frac{\zeta(-s_{13})\zeta(-1-s_{34})\zeta(1+s_{13}+s_{34})}{\zeta(s_{13})\zeta(1+s_{34})\zeta(-1-s_{13}-s_{34})}\,,
\nonumber
\end{align}
\begin{align}
\text{(V)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{12}\,,\,s_{13}\,,\,s_{24}<-2\,,
\hspace{10mm}
s_{34}>-s_{13}\,,
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(V)}}
&=
\frac{\zeta(-1-s_{12})\zeta(-1-s_{24})\zeta(2+s_{12}+s_{24})}{\zeta(1+s_{12})\zeta(1+s_{24})\zeta(-2-s_{12}-s_{24})}\,
\frac{\zeta(-1-s_{13})\zeta(-s_{34})\zeta(1+s_{13}+s_{34})}{\zeta(1+s_{13})\zeta(s_{34})\zeta(-1-s_{13}-s_{34})}\,,
\nonumber
\end{align}
\begin{align}
\text{(VI)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{24}\,,\,s_{34}<-2\,,
\hspace{10mm}
s_{12}>-s_{24}\,,
\hspace{10mm}
s_{13}>-s_{34}\,,
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(VI)}}
&=
\frac{\zeta(-s_{12})\zeta(-1-s_{24})\zeta(1+s_{12}+s_{24})}{\zeta(s_{12})\zeta(1+s_{24})\zeta(-1-s_{12}-s_{24})}\,
\frac{\zeta(-s_{13})\zeta(-1-s_{34})\zeta(1+s_{13}+s_{34})}{\zeta(s_{13})\zeta(1+s_{34})\zeta(-1-s_{13}-s_{34})}\,,
\nonumber
\end{align}
\begin{align}
\text{(VII)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{13}\,,\,s_{24}<-2\,,
\hspace{10mm}
s_{12}>-s_{24}\,,
\hspace{10mm}
s_{34}>-s_{13}\,,
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(VII)}}
&=
\frac{\zeta(-s_{12})\zeta(-1-s_{24})\zeta(1+s_{12}+s_{24})}{\zeta(s_{12})\zeta(1+s_{24})\zeta(-1-s_{12}-s_{24})}\,
\frac{\zeta(-1-s_{13})\zeta(-s_{34})\zeta(1+s_{13}+s_{34})}{\zeta(1+s_{13})\zeta(s_{34})\zeta(-1-s_{13}-s_{34})}\,,
\nonumber
\end{align}
\begin{align}
\text{(VIII)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{12}\,,\,s_{34}<-2\,,
\hspace{10mm}
s_{24}>-s_{12}\,,
\hspace{10mm}
s_{13}>-s_{34}\,,
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(VIII)}}
&=
\frac{\zeta(-1-s_{12})\zeta(-s_{24})\zeta(1+s_{12}+s_{24})}{\zeta(1+s_{12})\zeta(s_{24})\zeta(-1-s_{12}-s_{24})}\,
\frac{\zeta(-s_{13})\zeta(-1-s_{34})\zeta(1+s_{13}+s_{34})}{\zeta(s_{13})\zeta(1+s_{34})\zeta(-1-s_{13}-s_{34})}\,,
\nonumber
\end{align}
\begin{align}
\text{(IX)}&\hspace{10mm} s_{23}=0\,,\hspace{10mm}
s_{12}\,,\,s_{13}<-2\,,
\hspace{10mm}
s_{24}>-s_{12}\,,
\hspace{10mm}
s_{34}>-s_{13}\,,
\label{A5IXadelic}
\\[6pt]
A_5^{(\mathbb{A})}\Big|_{\text{(IX)}}
&=
\frac{\zeta(-1-s_{12})\zeta(-s_{24})\zeta(1+s_{12}+s_{24})}{\zeta(1+s_{12})\zeta(s_{24})\zeta(-1-s_{12}-s_{24})}\,
\frac{\zeta(-1-s_{13})\zeta(-s_{34})\zeta(1+s_{13}+s_{34})}{\zeta(1+s_{13})\zeta(s_{34})\zeta(-1-s_{13}-s_{34})}\,.
\nonumber
\end{align}
It should be noted that none of the regimes (I) to (IX) are physical, in the sense that no real momenta $k_1$ to $k_5$ that are conserved, $\sum_{i=1}^5k_i=0$, and on-shell, $k_i^2=1$, will lead to kinematic invariants satisfying the conditions for any of these regimes. Take, for example, region (I): the inequalities $s_{12},s_{24},s_{13},s_{34}<-2$ imply that momenta $k_1$, $k_2$, $k_3$, and $k_4$ have the same sign for their zero components (all in-going or all out-going), which entails that $s_{14}<1$. But from the fact that $1=k_5^2=(\sum_{i=1}^4k_i)^2$, it follows that
\begin{align}
s_{14}=-3-s_{12}-s_{13}-s_{23}-s_{24}-s_{34}\,, \label{s14}
\end{align}
and this equation together with the conditions \eqref{I} implies that $s_{14}>5$. It is likewise not hard to show that regions (II) to (IX) are not physical either, and neither is the convergent regime studied in Ref. \cite{brekke1988non}. For the purpose of establishing that $A_5^{(\mathbb{A})}$ fails to satisfy extended analyticity, the unphysicality of the convergent kinematic configurations of course poses no issue.
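The arithmetic behind the unphysicality of region (I) can be checked mechanically; the sketch below (our own illustration) randomly samples the region-(I) inequalities and confirms $s_{14}>5$ via \eqref{s14} with $s_{23}=0$.

```python
# In region (I) the constraint (s14), with s23 = 0, reads
#   s14 = -3 - s12 - s13 - s24 - s34,
# so the four conditions s12, s13, s24, s34 < -2 force s14 > 5,
# confirming that region (I) is unphysical.
import random

def s14_region_I(s12, s13, s24, s34):
    return -3.0 - s12 - s13 - s24 - s34

random.seed(0)
samples = [tuple(random.uniform(-10.0, -2.0) for _ in range(4))
           for _ in range(1000)]
assert all(s14_region_I(*pt) > 5.0 - 1e-9 for pt in samples)
```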
\subsection{Lessons from the 5-point amplitude}
\label{2.1}
From the exact expressions \eqref{A5Iadelic} to \eqref{A5IXadelic} we are able to draw two conclusions about adelic amplitudes:
\begin{adjustwidth}{20pt}{20pt}
1) The 5-point tree-amplitude is not given by a single analytic function of the kinematic invariants $s_{ij}$.
\end{adjustwidth}
Each of the nine expressions \eqref{A5Iadelic} to \eqref{A5IXadelic} for the adelic amplitude admits its own analytic continuation to the remaining eight regimes, but none of these analytic functions match. Analyticity breaks down even if we vary just a single kinematic invariant $s_{ij}$ and even if we restrict this invariant to real values. We conclude that the adelic amplitude is not an analytic function of the Mandelstam variables, although it is piecewise analytic in the regimes where we have been able to compute it. It is not an uncommon occurrence in number theory that functions fail to admit an analytic continuation throughout the complex plane. Consider, for example, the prime zeta function $P(s)=\sum_{p\in \mathbb{P}}\frac{1}{p^s}$. For Re$[s]>1$, the sum converges absolutely, and $P(s)$ can be analytically continued into the strip with $0<\text{Re}[s]<1$. But $P(s)$ does not admit an analytic continuation to values of $s$ with non-positive real part \cite{landau1920nichtfortsetzbarkeit,froberg1968prime}; essentially, the sum ceases to be smooth as $s$ approaches the imaginary axis due to a clustering of singular points. A similar phenomenon occurs for the Dedekind eta function $\eta(\tau)=e^{\frac{\pi i \tau}{12}}\prod_{n=1}^\infty(1-e^{2n\pi i \tau})$, which cannot be analytically continued beyond the upper half-plane. The failure of analyticity of the adelic amplitude is of a different kind in that the right-hand sides of \eqref{A5Iadelic} to \eqref{A5IXadelic} are all meromorphic functions, which however only equal the adelic amplitude on restricted domains of kinematic invariants. Incidentally, Ref. \cite{frampton1988adelic} already cautioned in 1988 against $A_5^{(\mathbb{A})}$ possibly admitting multiple analytic continuations depending on the values of the arguments, although this reference paradoxically made this point in order to argue that $A_5^{(\mathbb{A})}$ equals one.
\begin{adjustwidth}{20pt}{20pt}
2) In the context of adelic amplitudes, it is not generally valid to distribute an infinite product into multiple subproducts, as in \eqref{distribute}.
\end{adjustwidth}
An identification of the form $\prod_n\big(a_n \,b_n\big) \leftrightarrow \big(\prod_n a_n\big)\big(\prod_n b_n\big)$ is only guaranteed to be valid when the subproducts $\big(\prod_n a_n\big)$ and $\big(\prod_n b_n\big)$ each converges on its own. To regulate an infinite product by splitting it into multiple products, several of which require analytic continuation, or by any other regularization method that effectuates the same splitting, is a procedure that demands careful justification, since even in some convergent cases, the value of a product can be changed by reorganizing factors. A convergent product $\prod_n(1+c_n)$ for which $\prod_n(1+|c_n|)$ is not convergent is said to be conditionally convergent, and by a multiplicative analog of Riemann's rearrangement theorem, such products can be made to converge to any desired value by changing the order of multiplication.
The failure of otherwise divergent, regulated products to factorize is a well-studied phenomenon in the context of operator determinants, which goes by the name of the multiplicative anomaly \cite{wodzicki1987non,kassel1989residu,kontsevich1993functional,elizalde1998zeta}. Given two operators $A$ and $B$ with eigenvalues $a_i$ and $b_i$, this anomaly is present whenever
\begin{align}
\det AB
=\prod_i a_i b_i
\neq
\prod_i a_i\, \prod_j b_j
=
\det A\,
\det B\,,
\end{align}
where the products are understood to have been regulated. If we use zeta function regularization, we introduce a zeta function $\zeta_A(s)$ defined for sufficiently large values of Re$[s]$ by the formula
\begin{align}
\zeta_A(s)=\sum_i \frac{1}{(a_i)^s}\,,
\end{align}
and defined via analytic continuation everywhere else. Using this function, the zeta regulated determinant is given by
\begin{align}
\det A= \exp \Big[-\zeta_A'(0)\Big]\,.
\end{align}
The existence of the multiplicative anomaly amounts to the observation that there exist operators $A$ and $B$ for which
\begin{align}
\zeta'_{AB}(0)
\neq
\zeta'_A(0)+\zeta'_B(0)\,.
\end{align}
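To make the zeta-regularization prescription concrete, consider the simplest example: an operator $A$ with eigenvalues $a_n=n$, for which $\zeta_A(s)=\zeta(s)$ and the regulated determinant is $\det A=\exp(-\zeta'(0))=\sqrt{2\pi}$, the classic regularized value of $\prod_n n$. The sketch below is our own illustration; \texttt{zeta\_hasse} implements Hasse's globally convergent series for $\zeta(s)$ (valid for $s\neq1$), and $\zeta'(0)$ is obtained by a central finite difference.

```python
# Zeta-function regularization in the simplest case: the operator A with
# eigenvalues a_n = n has zeta_A(s) = zeta(s), so the regulated determinant is
#   det A = exp(-zeta'(0)) = sqrt(2 pi).
# zeta(s) is evaluated through Hasse's globally convergent series.
import math

def zeta_hasse(s, n_max=60):
    """Riemann zeta via Hasse's globally convergent series (s != 1)."""
    total = 0.0
    for n in range(n_max):
        inner = sum((-1) ** k * math.comb(n, k) * (k + 1) ** (-s)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1.0 - 2 ** (1.0 - s))

h = 1e-4
zeta_prime_0 = (zeta_hasse(h) - zeta_hasse(-h)) / (2 * h)  # central difference
det_A = math.exp(-zeta_prime_0)

assert abs(zeta_hasse(0.0) + 0.5) < 1e-12                       # zeta(0) = -1/2
assert abs(zeta_prime_0 + 0.5 * math.log(2 * math.pi)) < 1e-6   # zeta'(0)
assert abs(det_A - math.sqrt(2 * math.pi)) < 1e-5
```

The multiplicative anomaly is precisely the statement that this regulated determinant, applied to a product $AB$ of operators, need not factorize over $A$ and $B$ separately.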
In general, then, it is necessary to exercise caution in splitting up an infinite product. In the specific case of the adelic 5-point amplitude we see explicitly that naive application of the prescription \eqref{distribute} to the right-hand side of \eqref{A5special} results in an incorrect answer of one. Moreover, a regularization procedure that consists in separately taking an infinite product over individual factors of the local zeta function $\zeta_p(x)$ does not produce a unique result, for as the equality between \eqref{A5special} and \eqref{A5I} illustrates, there are multiple ways of expressing the $p$-adic 5-point amplitude in terms of local zeta functions.
\section{Adelic Scalar 4-Point Amplitudes}
\label{3}
In light of the lessons learned from the adelic 5-point function, it may be worthwhile to consider new ways of regulating adelic products. This section offers one such alternative and applies it to the 4-point tachyon amplitude.
\subsection{Regularization through coefficients}
\label{3.1}
Equations \eqref{A4} and \eqref{A4padic} encapsulate the dependency of the $p$-adic 4-point amplitude on the Mandelstam invariants but do not accurately fix the overall normalization, which depends on the string theory coupling constant. To allow for a different normalization than in \eqref{A4} and \eqref{A4padic}, we can dress the amplitudes $A_4^{(p)}$ with coefficients $C_p$, which may depend on $p$ but not on the Mandelstam invariants. In the presence of such coefficients, we can redefine the adelic product as
\begin{align}
A_4^{(\mathbb{A})}(s,t) = A_4^{(\infty)}(s,t)\prod_{p\in\mathbb{P}} C_p\, A_4^{(p)}(s,t)\,.
\label{newA4}
\end{align}
This equation suggests a regularization procedure wherein, when possible, the coefficients $C_p$ are precisely chosen in such a way as to render the product convergent.\footnote{Divergent products occur in the physics literature also outside the context of adelic amplitudes. In appendix \ref{A}, it is shown how to validly apply the regularization procedure described here to a simple such example.} Such a choice of $C_p$ is not guaranteed to exist, and when it does exist, it will not be unique; we can always include additional factors $f_p$, substituting $C_p \rightarrow C_p f_p$, provided that the product $\prod_pf_p$ converges. But this ambiguity will at most contribute a momentum-independent prefactor to $A_4^{(\mathbb{A})}(s,t)$ and so does not pose a problem if we content ourselves with not determining the overall normalization of $A_4^{(\mathbb{A})}(s,t)$. A separate ambiguity, however, is introduced by the fact that different kinematic regimes require different choices of $C_p$ in order to converge, and that in some regimes no choice of coefficients results in convergence; for these latter regions it is necessary to perform a continuation from the convergent regions, which can be done in multiple ways. The upshot is that we will consider choices of values for $C_p$ that converge for different values of $s$ and $t$ as being associated to distinct candidate theories, whose merits and flaws need to be assessed separately.
The string theory tachyon has a mass of $m^2=-\frac{1}{\alpha'}$, which means the Mandelstam invariants \eqref{Mandelstam} are related by
\begin{align}
s+t+u=-4\,.
\end{align}
The physical ranges of values for the Mandelstam variables of tachyonic two-to-two scattering are given as follows:
\begin{align}
\nonumber
& \text{$s$-channel:} \hspace{3mm} s\geq -4\,, \hspace{5mm} -4-s\leq t \leq 0\,,
\\[3pt]
& \text{$t$-channel:} \hspace{3.5mm} t\geq -4\,, \hspace{5.5mm} -4-t\leq s \leq 0\,, \label{subsetRegions}
\\[3pt]
& \text{$u$-channel:} \hspace{3mm} s,t<0\,.
\nonumber
\end{align}
The union of these regions is marked in different hues of blue in figure \ref{fig:Regions}. Allowing $s$ and $t$ to assume more general values, it turns out that there are two possible choices of $C_p$ that render the product $\prod_pC_pA_4^{(p)}(s,t)$ convergent. For $C_p=-p$ the product is convergent in the following domains:
\begin{align*}
& \text{$s$-channel subset:} \hspace{3mm} t < -3\,, \hspace{5mm} s> - 1 - t\,,
\\[3pt]
& \text{$t$-channel subset:} \hspace{3mm} s < -3\,, \hspace{5mm} t> - 1 - s\,,
\\[3pt]
& \text{$u$-channel subset:} \hspace{3mm} s,t<-3\,.
\end{align*}
And for $C_p=-1$ the product is convergent precisely in the complement, marked in \textcolor{colour4}{\bf{orange}} in figure \ref{fig:Regions}, of the physical region, i.e.\ in the following three regions:
\begin{align}
\nonumber
& A) \hspace{3mm} t > 0\,, \hspace{5mm} s< - 4 - t\,,
\\[3pt]
& B) \hspace{3mm} s > 0\,, \hspace{5mm} t< - 4 - s\,, \label{abcRegions}
\\[3pt]
& C) \hspace{3mm} s,t>0\,.
\nonumber
\end{align}
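The channel and region inequalities above can be encoded directly. The following sketch (with helper names of our own choosing) verifies on a grid of sample points that the regions $A$, $B$, and $C$ of \eqref{abcRegions} are disjoint from the physical regions \eqref{subsetRegions}.

```python
# Encode the physical channels (subsetRegions) and the C_p = -1 regions
# (abcRegions), and check on a grid that the latter avoid the former.

def physical(s, t):
    in_s = s >= -4 and -4 - s <= t <= 0      # s-channel
    in_t = t >= -4 and -4 - t <= s <= 0      # t-channel
    in_u = s < 0 and t < 0                   # u-channel
    return in_s or in_t or in_u

def region_abc(s, t):
    if t > 0 and s < -4 - t:
        return "A"
    if s > 0 and t < -4 - s:
        return "B"
    if s > 0 and t > 0:
        return "C"
    return None

# Grid scan: regions A, B, C never overlap the physical region.
for i in range(-120, 121):
    for j in range(-120, 121):
        s, t = 0.1 * i, 0.1 * j
        if region_abc(s, t) is not None:
            assert not physical(s, t)
```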
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\linewidth]{regions1.pdf}
\end{center}
\caption{Different regions of kinematic space for 4-particle tachyon scattering. The regions in {\bf \textcolor{colour2}{blue}}, {\bf \textcolor{colour1}{dark blue}}, and {\bf \textcolor{colour3}{light blue}} represent physical regions, while the {\bf\textcolor{colour4}{orange}} regions are unphysical. In the {\bf \textcolor{colour4}{orange}} regions, the product for the adelic amplitude can be rendered convergent by dressing the $p$-adic amplitudes with a coefficient of $C_p=-1$. In the {\bf \textcolor{colour1}{dark blue}} regions, convergence is achieved with coefficients $C_p=-p$. The region in {\bf \textcolor{colour3}{light blue}} marks the overlap between the $s$-, $t$-, and $u$-channels.
}
\label{fig:Regions}
\end{figure}
$\hspace{-1.5mm}$These regions are marked in \textcolor{colour1}{\bf{dark blue}} in figure \ref{fig:Regions}. In each convergent regime, the adelic product \eqref{newA4} admits closed-form evaluation by a computation similar to the 5-point calculation in the last section.
For example, let us consider the case when $C_p=-p$ and $s,t<-3$. From the simple algebraic fact that
\begin{align}
-p\frac{\zeta_p(3+s+t)}{\zeta_p(2+s)\,\zeta_p(2+t)}
=
\frac{\zeta_p(-3-s-t)}{\zeta_p(-2-s)\,\zeta_p(-2-t)}\,,
\end{align}
it follows that when multiplied by coefficients $C_p=-p$, the $p$-adic amplitude can be rewritten as
\begin{align}
-pA_4^{(p)}(s,t)=
\frac{\zeta_p(-1-s)\,\zeta_p(-1-t)\,\zeta_p(-3-s-t)}{\zeta_p(-2-s)\,\zeta_p(-2-t)\,\zeta_p(-2-s-t)}\,.
\label{-pA4}
\end{align}
In the $u$-channel subset with $s,t<-3$, the product over primes converges separately for each of the six local zeta functions in \eqref{-pA4} and can consequently be evaluated straightforwardly:
\begin{align}
\prod_{p\in \mathbb{P}}\Big(-pA_4^{(p)}(s,t)\Big)=
\frac{\zeta(-1-s)\,\zeta(-1-t)\,\zeta(-3-s-t)}{\zeta(-2-s)\,\zeta(-2-t)\,\zeta(-2-s-t)}
\,.
\end{align}
Multiplying this product with the real Veneziano amplitude \eqref{VenezianoRiemann}, we arrive at the following expression for a putative non-constant $u$-channel adelic amplitude:
\begin{align}
A_4^{(\mathbb{A})}\Big|_{\text{$u$-channel}}
= A_4^{(\infty)}
\prod_{p\in \mathbb{P}}\Big(
-p A_4^{(p)}
\Big)\Big|_{\text{$u$-channel}}
=
\frac{\zeta(2+s)\,\zeta(2+t)\,\zeta(-3-s-t)}{\zeta(-2-s)\,\zeta(-2-t)\,\zeta(3+s+t)}\,.
\end{align}
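This $u$-channel evaluation can likewise be checked by brute force: truncating the product over primes of $-p\,A_4^{(p)}$ in the form \eqref{-pA4} at a sample point and comparing with the corresponding ratio of Riemann zeta values. The sketch below is our own illustration; \texttt{zeta} is a truncated Dirichlet-series helper adequate for real arguments $\geq 2$, and $s=t=-4$ is an arbitrary point with $s,t<-3$.

```python
# Truncated check of the u-channel product: at a sample point with s, t < -3,
# the product over primes of -p * A4^(p), in the form (-pA4), should approach
# zeta(-1-s) zeta(-1-t) zeta(-3-s-t) / (zeta(-2-s) zeta(-2-t) zeta(-2-s-t)).

def primes_up_to(n):
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            for j in range(i * i, n + 1, i):
                flags[j] = False
    return [i for i, f in enumerate(flags) if f]

def zeta(x, n_terms=100000):
    """Riemann zeta for real x >= 2: truncated sum plus integral tail estimate."""
    return sum(k ** (-x) for k in range(1, n_terms + 1)) + n_terms ** (1 - x) / (x - 1)

s = t = -4.0  # arbitrary point in the subset s, t < -3 of the u-channel

prod = 1.0
for p in primes_up_to(3000):
    zp = lambda z: 1.0 / (1.0 - p ** (-z))  # local zeta function at p
    prod *= (zp(-1 - s) * zp(-1 - t) * zp(-3 - s - t)
             / (zp(-2 - s) * zp(-2 - t) * zp(-2 - s - t)))

exact = (zeta(-1 - s) * zeta(-1 - t) * zeta(-3 - s - t)
         / (zeta(-2 - s) * zeta(-2 - t) * zeta(-2 - s - t)))
assert abs(prod / exact - 1.0) < 1e-3
```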
Since it is not possible to attain convergence of the adelic product throughout the physical regime, we must invoke a partial analytic continuation, but a continuation of the full product, not of separate sub-products. Just as in the case of the 5-point amplitude, the result we thereby arrive at is only piecewise analytic. A peculiarity of tachyon scattering is that the $s$-, $t$-, and $u$-channels overlap. In the region of overlap, displayed in \textcolor{colour3}{\bf{light blue}} in figure \ref{fig:Regions}, the answer for $A_4^{(\mathbb{A})}$ will depend on the scattering channel. Phrased differently, $A_4^{(\mathbb{A})}$ depends not only on $s$ and $t$ but also on the signs of the time components of the momenta. This peculiarity disappears once we turn to massless scattering in the next section.
For $C_p=-p$, we continue the answers obtained in each subset of the three scattering channels to the full respective channel. For $C_p=-1$ we continue region $A)$ to the $s$-channel, $B)$ to the $t$-channel, and $C)$ to the $u$-channel. By this procedure, we arrive at the following candidate amplitudes:
\begin{align}
C_p=-p:
\hspace{5mm}
A^{(\mathbb{A})}_4=
\begin{cases}
\displaystyle \frac{\zeta(1+s)\,\zeta(2+t)\,\zeta(2+u)}{\zeta(-1-s)\,\zeta(-2-t)\,\zeta(-2-u)}
&\hspace{5mm}\text{ $s$-channel}\,,
\\
\\
\displaystyle \frac{\zeta(2+s)\,\zeta(1+t)\,\zeta(2+u)}{\zeta(-2-s)\,\zeta(-1-t)\,\zeta(-2-u)}
&\hspace{5mm}\text{ $t$-channel}\,,
\\
\\
\displaystyle \frac{\zeta(2+s)\,\zeta(2+t)\,\zeta(1+u)}{\zeta(-2-s)\,\zeta(-2-t)\,\zeta(-1-u)}
&\hspace{5mm}\text{ $u$-channel}\,,
\end{cases}
\label{newAdelic1}
\end{align}
\begin{align}
C_p=-1:
\hspace{5mm}
A^{(\mathbb{A})}_4=
\begin{cases}
\displaystyle \frac{\zeta(2+s)\,\zeta(1+t)\,\zeta(1+u)}{\zeta(-2-s)\,\zeta(-1-t)\,\zeta(-1-u)}
&\hspace{5mm}\text{ $s$-channel}\,,
\\
\\
\displaystyle \frac{\zeta(1+s)\,\zeta(2+t)\,\zeta(1+u)}{\zeta(-1-s)\,\zeta(-2-t)\,\zeta(-1-u)}
&\hspace{5mm}\text{ $t$-channel}\,,
\\
\\
\displaystyle \frac{\zeta(1+s)\,\zeta(1+t)\,\zeta(2+u)}{\zeta(-1-s)\,\zeta(-1-t)\,\zeta(-2-u)}
&\hspace{5mm}\text{ $u$-channel}\,.
\end{cases}
\label{newAdelic2}
\end{align}
The $C_p=-1$ adelic amplitude given in \eqref{newAdelic2} is the non-constant adelic amplitude that was previously proposed in \cite{aref1988adelic}.\footnote{There is a different way to regulate the adelic product in the regions \eqref{abcRegions} without introducing a coefficient $C_p=-1$ by hand that also leads to \eqref{newAdelic2}. In the regions $A$, $B$, and $C$, the product over the $p$-adic amplitudes \eqref{A4padic} converges in modulus but the overall sign alternates between plus and minus. This means the limit set at fixed $s$ and $t$ consists of two points. If, in analogy with Ces\'aro summation, we take the geometric mean of the limit set, we recover \eqref{newAdelic2} up to an overall phase.}
Piecewise analyticity is not a customary property of tree-amplitudes, and so a natural question to ask is whether there are compelling reasons why we should think of either of these expressions as representing a scattering amplitude. To address this question, we will study in turn the partial wave decompositions and high energy asymptotics of \eqref{newAdelic1} and \eqref{newAdelic2}.
\subsection{Partial wave decomposition}
\label{3.2}
Scattering amplitudes in quantum theories admit partial wave decompositions into Gegenbauer polynomials $C_m^{(\alpha)}(x)$. Viewed abstractly, the Gegenbauer polynomials form an infinite family of polynomials, depending on a single parameter $\alpha$ and satisfying the orthogonality relation
\begin{align}
\int_{-1}^1dx\,C_m^{(\alpha)}(x)\,C_n^{(\alpha)}(x)\,(1-x^2)^{\alpha-\frac{1}{2}}
=\delta_{m,n}\frac{2^{1-2\alpha}\pi\Gamma(2\alpha+n)}{n!(n+\alpha)\Gamma(\alpha)^2}\,.
\label{GegOrtho}
\end{align}
The index $m$ takes values in $\mathbb{N}_0$ and labels the degree of a given polynomial, while $\alpha$ is a real parameter that in the context of partial wave decompositions assumes the value $\alpha = (d-3)/2$. In four dimensions the Gegenbauer polynomials reduce to the Legendre polynomials $C_m^{(\frac{1}{2})}(x)$, and in five dimensions they reduce to the Chebyshev polynomials of the second kind $C_m^{(1)}(x)$. For even $m$, $C_m^{(\alpha)}(x)$ contains only even powers of $x$; for odd $m$, only odd. The first four Gegenbauer polynomials are given by
\begin{align}
C_0^{(\alpha)}(x)=&\,1\,, \\
C_1^{(\alpha)}(x)=&\,2\alpha\, x\,, \\
C_2^{(\alpha)}(x)=&\,2\alpha(1+\alpha)x^2-\alpha\,, \\
C_3^{(\alpha)}(x)=&\,\frac{4}{3}\alpha(1+\alpha)(2+\alpha)x^3-2\alpha(1+\alpha)x\,.
\end{align}
Unitarity dictates that any residue of a four-point amplitude $A_4(s,t)$ decomposes into a positively-weighted sum of Gegenbauer polynomials $C_m^{(\frac{d-3}{2})}(\cos\theta)$, where $\theta$ is the scattering angle in the center-of-mass frame. Such a frame does not always exist for tachyonic scattering, but we can still perform the decomposition
\begin{align}
\underset{s=s^\ast}{\text{Res}}\,A_4\left(s,\frac{(s+4)(\cos\theta-1)}{2}\right)
=\sum_{m\in\mathbb{N}_0}K^{(s^\ast)}_m(d)\,C_m^{(\frac{d-3}{2})}(\cos\theta)\,,
\label{Geg}
\end{align}
where all non-zero coefficients $K^{(s^\ast)}_m(d)$ must be positive for unitarity to hold.\footnote{We refer the reader to the appendices of Caron-Huot, Komargodski, Sever, and Zhiboedov's paper \cite{caron2017strings} for a nice review of partial wave unitarity with application to the Veneziano amplitude.}
Let us review how this decomposition works for the real Veneziano amplitude $A_4^{(\infty)}(s,t)$, which has $s$-channel poles at $s=s^\ast \in 2\mathbb{N}_0-1$, and for which the number of non-zero Gegenbauer coefficients in a generic number of spacetime dimensions grows linearly as $(s^\ast+3)/2$. There are no poles at even values of $s$, since open string excitations of even mass in units of $\frac{1}{\alpha'}$ carry odd spin, and we are studying the full rather than the partial amplitude. In order for the coefficients $K_m^{(s^\ast)}$ in \eqref{Geg} to be positive, we must equip $A_4^{(\infty)}(s,t)$ with an overall minus sign.\footnote{To make positive the residues of the $p$-adic amplitude $A_4^{(p)}(s,t)$, it must also be multiplied by a negative prefactor. Conceivably this fact helps account for the negativity of the coefficients $C_p$.
}
The first residue trivially decomposes into a Gegenbauer polynomial:
\begin{align}
&\underset{s=-1}{\text{Res}}\left[-A_4^{(\infty)}\left(s,\frac{(s+4)(\cos\theta-1)}{2}\right)\right]=\,
2=2\,C_0^{(\frac{d-3}{2})}(\cos\theta)\,.
\end{align}
The second residue decomposes as
\begin{align}
\underset{s=1}{\text{Res}}\left[-A_4^{(\infty)}\left(s,\frac{(s+4)(\cos\theta-1)}{2}\right)\right]
=\,&\frac{1}{4}(25\cos^2\theta-1)
\\[6pt]
=\,&
\frac{26-d}{4(d-1)}\,
C_0^{(\frac{d-3}{2})}(\cos\theta)
+
\frac{25}{2d^2-8d+6}\,
C_2^{(\frac{d-3}{2})}(\cos\theta)\,.
\nonumber
\end{align}
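The stated decomposition of this residue can be confirmed for any $d$ by comparing both sides as polynomials in $\cos\theta$; a quick numerical sketch (our own illustration):

```python
# Check the stated decomposition of the second Veneziano residue,
#   (25 cos^2(theta) - 1)/4 = K0 * C0 + K2 * C2,
# with K0 = (26-d)/(4(d-1)), K2 = 25/(2d^2 - 8d + 6), and
# C2^{(a)}(x) = 2a(1+a)x^2 - a at a = (d-3)/2, for several dimensions d.

for d in (4, 5, 10, 26, 27):
    a = (d - 3) / 2.0
    K0 = (26.0 - d) / (4.0 * (d - 1.0))
    K2 = 25.0 / (2.0 * d * d - 8.0 * d + 6.0)
    for x in (-1.0, -0.5, 0.0, 0.3, 1.0):
        lhs = (25.0 * x * x - 1.0) / 4.0
        rhs = K0 + K2 * (2.0 * a * (1.0 + a) * x * x - a)
        assert abs(lhs - rhs) < 1e-12

# The coefficient K0 vanishes at d = 26 and turns negative just above it.
assert (26.0 - 26) / (4 * 25) == 0.0
assert (26.0 - 27) / (4 * 26) < 0.0
```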
From the coefficient $K_0^{(1)}(d)$, we observe that the Veneziano amplitude violates unitarity for $d>26$ as first observed by Frampton \cite{frampton1972n} in 1972, in accordance with Lovelace's discovery of the previous year that 26 is the critical dimension of bosonic string theory \cite{lovelace1971pomeron}. The no-ghost theorem implies the positivity of all coefficients for $d\leq 26$, but no direct proof is known. A partial proof in the case $d=4$ was offered by Maity \cite{maity2022positivity} just this year. For superstring amplitudes the positivity of all Gegenbauer coefficients in $d\leq 6$ was directly proven, also this year, by Arkani-Hamed, Eberhardt, Huang, and Mizera in \cite{arkani2022unitarity}.
For general string amplitudes, Gegenbauer coefficients do not have to become negative immediately above the critical dimension. The Gegenbauer decomposition of the first many residues of the Virasoro-Shapiro amplitude suggests that partial wave positivity is not violated until the number of spacetime dimensions exceeds 57.
\subsubsection*{$C_p=-p$ adelic amplitude}
In equation \eqref{newAdelic1}, the $C_p=-p$ candidate 4-point adelic amplitude was written down in terms of three expressions, one for each scattering channel. In actuality this notation is redundant, and without loss of generality, we are free to take the zero components $k_1^0$ and $k_2^0$ to be positive and $k_3^0$ and $k_4^0$ to be negative, so that we are in the $s$-channel.
In the $s$-channel subset depicted in {\bf \textcolor{colour1}{dark blue}} in figure \ref{fig:Regions}, where the $C_p=-p$ adelic product is convergent, the adelic amplitude shares its poles with those of the real Veneziano amplitude, meaning there are poles at $s\in 2\mathbb{N}+1$. Unlike the residues of the real Veneziano amplitude, the residues of $A_4^{(\mathbb{A})}$ are not polynomials in $\cos\theta$, for which reason the coefficients $K_m^{(s^\ast)}$ do not admit straightforward closed-form evaluation. Instead we resort to numeric computation via the formula
\begin{align}
K_m^{(s^\ast)}\hspace{-0.3mm}=\hspace{-0.3mm}
\frac{m!(2m+d-3)\Gamma\big(\frac{d-3}{2}\big)^2}{2^{5-d}\pi\Gamma(d-3+m)}
\hspace{-0.6mm}\int_{-1}^1 \hspace{-0.6mm}dx\,C_m^{(\frac{d-3}{2})}(x)\, \underset{s=s^\ast}{\text{Res}}\Big[\hspace{-0.5mm}-\hspace{-0.6mm}A^{(\mathbb{A})}_4\Big(s,\frac{(s+4)(x-1)}{2}\Big)\Big](1-x^2)^{\frac{d-4}{2}}
\,,
\nonumber
\end{align}
where $A_4^{(\mathbb{A})}$, whose overall normalization is undetermined, has been dressed with a prefactor of minus one, which will be needed for unitarity. Also unlike the real Veneziano amplitude, the $s$-channel residues of $A^{(\mathbb{A})}_4$ decompose into infinite rather than finite sums over Gegenbauer polynomials, an unorthodox property also present in the UV-complete gravity amplitude recently discovered by Huang and Remmen \cite{huang2022uv} and pointed out by Geiser and Lindwasser \cite{geiser2022generalized} to be present in generalizations of the Veneziano amplitude.
If we analytically extend the $C_p=-p$ amplitude to the entirety of the $s$-channel, we encounter new poles: at $s=1$, at $t,u=-1$, and at $s,t,u=0$. The tachyonic pole in $s$ is absent: only an in-going and an out-going tachyon can together generate a tachyonic resonance in this set-up. The poles at $s=0$ and $t,u=-1$ each have a constant residue of 12. The fact that the $s=0$ pole is a constant, unlike all the other $s$-channel poles, which are non-polynomial in $\cos\theta$, is a fortuitous circumstance, since the adelic amplitude thereby eschews exchanges of higher spins on the massless pole.
At the intersection of the poles at $s\in 2\mathbb{N}_0+1$ and at $t=0$ or $u=0$ we encounter problematic double poles. One consequence of these is the breakdown of the Gegenbauer decomposition for $d\leq 4$, since the numerical integral $K_m^{(s^\ast)}$ ceases to converge in low dimensions. The presence of double poles is certainly alarming, but an extenuating circumstance is provided by the fact that they are situated on the edge of the physically allowed region. It would be further problematic if we attempted to analytically continue $A_4^{(\mathbb{A})}$ into the complex plane, since the non-trivial zeros of the Riemann zeta function would give rise to complex poles.
While the above-mentioned issues cast doubt on the interpretation of $A_4^{(\mathbb{A})}$ as a scattering amplitude, there is also evidence to the contrary. There is no reason why Gegenbauer coefficients for different polynomials and at different poles should have the same sign without unitarity to enforce this constraint. Yet for $5\leq d \leq 27$, numerical evidence suggests that the coefficients are all positive. Figure \ref{fig:coeffs1} shows the values of the first 18 Gegenbauer coefficients at the first 25 $s$-channel poles. The coefficient of the zeroth Gegenbauer polynomial is positive for $4<d\leq 27$ and falls negative between $d=27$ and $d=28$, while all other coefficients are positive throughout the range of dimensions explored.
\begin{figure}
\begin{center}
\includegraphics[width=0.99\linewidth]{Geg2final.pdf}
\end{center}
\caption{Coefficients in the $s$-channel Gegenbauer decomposition of the tentative adelic amplitude with $C_p=-p$. All the coefficients are positive with the exception of $K^{(1)}_{0}(d)$, which falls negative between $d=27$ and $d=28$. For $d\leq 4$, the coefficients blow up. For the sake of visibility, the logarithm has been taken of coefficients $K^{(s^\ast)}_{m}(d)$ that are positive for all $d>4$.
}
\label{fig:coeffs1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.99\linewidth]{Geg1final.pdf}
\end{center}
\caption{Coefficients in the $s$-channel Gegenbauer decomposition of the tentative adelic amplitude with $C_p=-1$. In five to ten dimensions all the coefficients are positive. In 11 dimensions and higher, some coefficients are negative. For $d\leq 4$, the coefficients blow up. For the sake of visibility, the logarithm has been taken of coefficients $K^{(s^\ast)}_{m}(d)$ that are positive for all $d>4$.
}
\label{fig:coeffs2}
\end{figure}
\subsubsection*{$C_p=-1$ adelic amplitude}
In the $s$-channel region, the $C_p=-1$ adelic amplitude has poles at $s=-1$, at $s\in 2\mathbb{N}_0$, and at $t,u=0$. This means that in a hypothetical adelic string theory spectrum, the spin-even particles would carry even rather than odd masses in units of $\frac{1}{\alpha'}$. As with the $C_p=-p$ adelic amplitude, we encounter problematic double poles when the above equations for $s$ and $t$ or $u$ are simultaneously satisfied, and again these double poles entail that we can only perform the Gegenbauer decomposition \eqref{Geg} when $d>4$. And as before, imposing extended analyticity would result in complex poles corresponding to the non-trivial zeros of the Riemann zeta function.
The values of the first 18 Gegenbauer coefficients at the first 13 poles in $s$ are shown in figure \ref{fig:coeffs2}. Generally the coefficients $K_m^{(s^\ast)}(d)$ are not monotonic functions of $m$, $s^\ast$, or $d$. With increasing $s^\ast$, there is an increasing number of values of $m$ for which $K_m^{(s^\ast)}(d)$ assumes negative values, but in no case is $K_m^{(s^\ast)}(d)$ negative for $d\leq 10$. The first unitarity violation occurs when $K^{(-1)}_0(d)$ falls negative between $d=10$ and $d=11$. For $4<d\leq 10$ the numerically computed coefficients $K_m^{(s^\ast)}(d)$ are all positive. Unlike the case of the $C_p=-p$ amplitude, positivity here requires that the plotted Gegenbauer coefficients be associated to $A^{(\mathbb{A})}_4$ rather than $-A^{(\mathbb{A})}_4$. Another difference from the $C_p=-p$ amplitude, and very possibly a serious malady, is the presence of an infinite tower of Gegenbauer polynomials at the massless residue.
\subsection{High energy limit}
\label{3.3}
A desirable property of the Veneziano amplitude is its well-behaved high energy asymptotics, as nicely reviewed in the first chapter of Green-Schwarz-Witten \cite{green}. For external scalars exchanging an internal particle of spin $J$, the tree amplitude at large $s$ and fixed $t$ has the form
\begin{align}
A_J(s,t) \approx -\frac{g^2(-s)^J}{t-M^2}\,. \label{J}
\end{align}
By unitarity, loop amplitudes can be constructed by sewing together tree-amplitudes. But for $d\geq 4$, the high energy scaling \eqref{J} gives rise to unrenormalizable divergences when $J>1$, which is problematic if we want a theory with particles of spin greater than one. String theory comes to the rescue by furnishing us with an infinite tower of particles with increasingly higher spin that resums to a more benign high energy behaviour. In the Regge limit and the fixed angle high energy limit, using the asymptotic behaviour of the gamma function,
\begin{align}
x\gg 1:
\hspace{15mm}
\Gamma(x)\approx \sqrt{\frac{2\pi}{x}}\,\Big(\frac{x}{e}\Big)^{x}\,,
\hspace{15mm}
\Gamma(-x)\approx
-\sqrt{\frac{\pi}{2x}}\,\Big(\frac{x}{e}\Big)^{-x}\csc(\pi x)\,,
\label{gammaAsymptotic}
\end{align}
it can be shown that the Veneziano amplitude exhibits the following asymptotics:
\begin{align}
&\text{large $s$, fixed $t$:} \hspace{12mm}
A_4^{(\infty)}(s,t) \approx
2\,s^{1+t}\,\Gamma(-1-t)
\sec\Big(\frac{\pi s}{2}\Big)
\sin\Big(\frac{\pi t}{2}\Big)
\sin\Big(\frac{\pi (s+t)}{2}\Big)\,,
\nonumber \\[8pt] \label{decay}
&\text{large $s$, fixed $\theta$:} \hspace{12mm}
A_4^{(\infty)}\Big(s,\frac{s+4}{2}(\cos\theta-1)\Big)\approx
\\[5pt]
&
\bigg|\frac{2}{\sin\theta}\tan(\frac{\theta}{2})^{\cos\theta}\bigg|^{-s}
\,
\sqrt{\frac{\pi\sin^2\theta}{2s}}
\,
\Big(\cot^2\big(\frac{\theta}{2}\big)\Big)^{2\cos\theta}
\bigg(
1-\cos\Big(\frac{\pi(4+s)\cos\theta}{2}\Big)
\sec\Big(\frac{\pi s}{2}\Big)
\bigg)\,. \nonumber
\end{align}
We see that $A_4^{(\infty)}$ scales as $s^{1+t}$ in the Regge limit and decays exponentially in the fixed angle large energy limit.
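The Regge behaviour can be illustrated numerically. The sketch below assumes the crossing-symmetric Beta-function representation of the full Veneziano amplitude with $s+t+u=-4$; this representation is an assumption of ours, though it is consistent with the residues quoted above (e.g.\ $\mathrm{Res}_{s=-1}[-A_4^{(\infty)}]=2$). It compares the amplitude to the quoted large-$s$, fixed-$t$ form using arbitrary-precision arithmetic (mpmath), since the individual gamma factors over- and underflow ordinary floats:

```python
from mpmath import mp, gamma, sin, cos, pi, mpf

mp.dps = 40  # the individual Gamma factors are astronomically large/small

def veneziano(s, t):
    # Crossing-symmetric Beta-function form with s + t + u = -4 (an assumed
    # normalization, consistent with Res[-A] = 2 at s = -1 quoted earlier).
    u = -4 - s - t
    term = lambda a, b: gamma(-1 - a) * gamma(-1 - b) / gamma(-2 - a - b)
    return term(s, t) + term(t, u) + term(s, u)

def regge(s, t):
    # Large-s, fixed-t asymptotic form quoted in the text
    return (2 * mpf(s) ** (1 + t) * gamma(-1 - t) / cos(pi * s / 2)
            * sin(pi * t / 2) * sin(pi * (s + t) / 2))

s, t = mpf('300.3'), mpf('-2.5')
ratio = veneziano(s, t) / regge(s, t)
print(ratio)  # tends to 1 as s grows at fixed t
```

The ratio differs from unity by corrections of order $1/s$, as expected for a Regge asymptotic.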
To investigate whether the candidate adelic amplitudes \eqref{newAdelic1} and \eqref{newAdelic2} exhibit similar desirable high energy limits, it is convenient to write each adelic amplitude as the product of the Veneziano amplitude and the piece due to the $p$-adic amplitudes:
\begin{align}
A_4^{(\mathbb{A})}\big|_{s\text{-channel}}(s,t)=A_4^{(\infty)}(s,t)\,A_4^{(\mathbb{P})}(s,t)\big|_{s\text{-channel}}\,,
\end{align}
where the piece relating the adelic and real amplitudes is given by
\begin{align}
&
C_p=-p:
\hspace{12mm}
A^{(\mathbb{P})}_4(s,t)\big|_{s\text{-channel}}
=\frac{
\zeta(1+s)\,
\zeta(-1-t)\,
\zeta(3+s+t)
}{
\zeta(2+s)\,
\zeta(-2-t)\,
\zeta(2+s+t)
}\,,
\hspace{16mm}
\\[4pt]
&
C_p=-1:
\hspace{12mm}
A^{(\mathbb{P})}_4(s,t)\big|_{s\text{-channel}}
=\frac{
\zeta(-1-s)\,
\zeta(1+t)\,
\zeta(-3-s-t)
}{
\zeta(-2-s)\,
\zeta(2+t)\,
\zeta(-2-s-t)
}\,.
\hspace{16mm}
\end{align}
Since $\zeta(x)\rightarrow 1$ as $x \rightarrow \infty$, we immediately see that the $C_p=-p$ adelic amplitude has the same high energy asymptotics as $A_4^{(\infty)}(s,t)$. For the $C_p=-1$ adelic amplitude, it takes slightly more work to determine the asymptotics. But by invoking the functional equation for the Riemann zeta function, Euler's reflection formula for the gamma function, and a bit of trigonometry, one finds that
\begin{align}
\label{pFactor2}
&
C_p=-1:
\hspace{48mm}
A^{(\mathbb{P})}_4(s,t)\big|_{s\text{-channel}}=
\\[6pt]
&\hspace{6mm}
-\frac{2(1+t)(3+s+t)}{\pi(2+s)}
\frac{
\cos\big(\frac{\pi s}{2}\big)
\cos\big(\frac{\pi t}{2}\big)
\cos\big(\pi\frac{s+t}{2}\big)
}
{
\sin\big(\pi s\big)+\sin\big(\pi t\big)-\sin\big(\pi(s+t)\big)
}
\frac{
\zeta(2+s)\,
\zeta(-t)\,
\zeta(4+s+t)
}{
\zeta(3+s)\,
\zeta(-1-t)\,
\zeta(3+s+t)
}\,.
\nonumber
\end{align}
In the limit of large $s$ and fixed $t$, this expression tends to a periodically oscillating function of $s$, and in the limit of large $s$ and fixed $\theta$, \eqref{pFactor2} tends to a periodic function of $s$ times a function that grows linearly in $s$. The linear growth is subdominant compared to the exponential decay in \eqref{decay}, and so the healthy high energy behaviour of the real amplitude persists also for the $C_p=-1$ adelic amplitude.
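The equality of \eqref{pFactor2} with the defining zeta-function ratio can be spot-checked numerically; the sketch below (assuming the mpmath library; function names are ours) evaluates both forms at a generic kinematic point away from poles and zeros:

```python
from mpmath import mp, zeta, sin, cos, pi, mpf

mp.dps = 30

def AP_def(s, t):
    # Defining zeta ratio for the C_p = -1 amplitude, s-channel
    return (zeta(-1 - s) * zeta(1 + t) * zeta(-3 - s - t)
            / (zeta(-2 - s) * zeta(2 + t) * zeta(-2 - s - t)))

def AP_reflected(s, t):
    # Equivalent form obtained via the zeta functional equation
    pref = -2 * (1 + t) * (3 + s + t) / (pi * (2 + s))
    trig = (cos(pi * s / 2) * cos(pi * t / 2) * cos(pi * (s + t) / 2)
            / (sin(pi * s) + sin(pi * t) - sin(pi * (s + t))))
    zetas = (zeta(2 + s) * zeta(-t) * zeta(4 + s + t)
             / (zeta(3 + s) * zeta(-1 - t) * zeta(3 + s + t)))
    return pref * trig * zetas

s, t = mpf('2.3'), mpf('-0.7')
print(AP_def(s, t), AP_reflected(s, t))  # the two forms agree
```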
\subsection{Interpretation as integral over ring of adeles}
\label{3.4}
The term ``adelic" applied to product formulas and amplitudes is motivated by a tentative connection to the adelic numbers. This subsection discusses this connection and how it relates to the choices of $C_p$ introduced above.
The ring $\mathbb{A}$ of adeles over $\mathbb{Q}$ is given by the restricted product
\begin{align}
\mathbb{A}=\mathbb{R}\times \prod_{p\in\mathbb{P}}{}'\,\, \mathbb{Q}_p\,.
\end{align}
What this means concretely is that the adelic numbers are given by the following set:
\begin{align}
\nonumber
\mathbb{A}=\bigg\{
(x_\infty,\,x_2,\,x_3,\,x_5,...)\hspace{3mm}\Big|\hspace{3mm} &x_\infty \in \mathbb{R}\,,
\\[-2pt]
& x_p \in \mathbb{Q}_p \text{ for all } p\in \mathbb{P}\,,
\\
& x_p \in \mathbb{Z}_p \text{ for all but finitely many } p\in \mathbb{P}
\bigg\}\,.
\nonumber
\end{align}
An infinite product like \eqref{newA4} is referred to as an adelic amplitude because ideally it admits an interpretation as an integral over the adelic numbers:
\begin{align}
A_4^{(\mathbb{A})}(s,t)
\stackrel{?}{=}
\int_{\mathbb{A}}dx\,|x|_{\mathbb{A}}^{-2-s}|1-x|_{\mathbb{A}}^{-2-t}\,,
\label{adelicIntegral}
\end{align}
where the adelic norm is defined as follows:
\begin{align}
|x|_{\mathbb{A}}=|x_\infty|_\infty\prod_{p\in \mathbb{P}}|x_p|_p\,.
\end{align}
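A standard fact worth recalling here is the product formula: for any nonzero rational $x$, the adelic norm equals $1$, since only finitely many primes divide the numerator and denominator of $x$ and $|x|_p=1$ for all the rest. A minimal sketch (function names are ours):

```python
from fractions import Fraction

def padic_norm(q, p):
    """|q|_p = p^(-v_p(q)) for a nonzero rational q."""
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

def adelic_norm(q, primes):
    """|q|_A over a list of primes that includes every prime dividing
    the numerator or denominator of q; the remaining primes contribute
    |q|_p = 1 and do not affect the product."""
    norm = abs(q)
    for p in primes:
        norm *= padic_norm(q, p)
    return norm

q = Fraction(-72, 35)
print(adelic_norm(q, [2, 3, 5, 7]))  # 1: the adelic product formula
```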
The condition on an adelic number that all but finitely many of the coordinates $x_p$ belong to $\mathbb{Z}_p$ indicates that we cannot immediately factor an adelic integral into an infinite product of integrals over $\mathbb{Q}_p$. A more careful way to carry out the integral would be to partition the set of all primes into two subsets, $\mathbb{P}=P_1 \cup P_2$, where $P_1$ is finite and $P_2$ is infinite, and then integrate primes in $P_1$ over $\mathbb{Q}_p$ and primes in $P_2$ over $\mathbb{Z}_p$, before finally taking the limit as $P_1$ tends to $\mathbb{P}$:
\begin{align}
A_4^{(\mathbb{A})}(s,t)\stackrel{?}{=}
\lim_{P_1\rightarrow \mathbb{P}}
\bigg[
\prod_{p\in P_1}
\int_{\mathbb{Q}_p}dx\,|x|_p^{-2-s}|1-x|_p^{-2-t}
\bigg]\,
\bigg[
\prod_{p \in P_2}
\int_{\mathbb{Z}_p}dx\,|x|_p^{-2-s}|1-x|_p^{-2-t}
\bigg]\,.
\end{align}
For such a procedure to be well-defined, the precise details of how we partition $\mathbb{P}$ into $P_1$ and $P_2$ and take the limit $P_1\rightarrow \mathbb{P}$ should not matter. For the procedure to work we must also require that the product
\begin{align}
\prod_{p\in\mathbb{P}}\int_{\mathbb{Z}_p}dx\,|x|_p^{-2-s}|1-x|_p^{-2-t}
\end{align}
be well-defined. But we note that
\begin{align}
\int_{\mathbb{Z}_p}dx\,|x|_p^{-2-s}|1-x|_p^{-2-t}
=\frac{-2+p+p^{1+s}+p^{1+t}-p^{3+s+t}}{p(p^{1+s}-1)(p^{1+t}-1)}\,,
\end{align}
and the product of this expression over all primes tends to zero or infinity, depending on the values of $s$ and $t$. We can however render the product convergent by equipping the integrals with coefficients. In particular, in the kinematic regimes \eqref{subsetRegions} and \eqref{abcRegions}, we can use precisely the respective constants $C_p=-p$ and $C_p=-1$ to attain convergence of the product
\begin{align}
\prod_{p\in\mathbb{P}} C_p \int_{\mathbb{Z}_p}dx\,|x|_p^{-2-s}|1-x|_p^{-2-t}\,.
\end{align}
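The closed form for the $\mathbb{Z}_p$ integral quoted above can be reproduced by decomposing $\mathbb{Z}_p$ into shells $|x|_p=p^{-k}$. A sketch (function names are ours), valid for $s,t<-1$ where the shell sums converge:

```python
def zp_closed(p, s, t):
    # Closed form quoted in the text
    return ((-2 + p + p**(1 + s) + p**(1 + t) - p**(3 + s + t))
            / (p * (p**(1 + s) - 1) * (p**(1 + t) - 1)))

def zp_shells(p, s, t, kmax=300):
    # Shells |x|_p = p^{-k} with k >= 1 have measure p^{-k}(1 - 1/p),
    # and |1 - x|_p = 1 on them.
    total = sum((1 - 1 / p) * p ** (-k) * p ** (k * (2 + s)) for k in range(1, kmax))
    # On the unit shell substitute y = 1 - x: y runs over Z_p minus the
    # residue class y = 1 (mod p), which has measure 1/p and |y|_p = 1.
    total += sum((1 - 1 / p) * p ** (-k) * p ** (k * (2 + t)) for k in range(0, kmax)) - 1 / p
    return total

print(zp_closed(2, -3, -3), zp_shells(2, -3, -3))  # both 1/3
```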
With the coefficients in place, we can identify the adelic integral as
\begin{align}
A^{(\mathbb{A})}_4(s,t)
=
\lim_{P_1\rightarrow \mathbb{P}}
\bigg[
\prod_{p\in P_1}
C_p\int_{\mathbb{Q}_p}dx\,|x|_p^{-2-s}|1-x|_p^{-2-t}
\bigg]\,
\bigg[
\prod_{p\in \mathbb{P}\setminus P_1}
C_p\int_{\mathbb{Z}_p}dx\,|x|_p^{-2-s}|1-x|_p^{-2-t}
\bigg]\,. \nonumber
\end{align}
The limiting value for this expression does not depend on how the limit $P_1\rightarrow \mathbb{P}$ is taken and matches the expressions \eqref{newAdelic1} or \eqref{newAdelic2}, provided we set $C_p=-p$ or $C_p=-1$ and restrict $s$ and $t$ to the respective region of convergence for these two choices of coefficients.
\section{Adelic 4-Point Superamplitudes}
\label{4}
The full four-gluon amplitude in type-I string theory is given by
\begin{align}
\mathcal{A}_4=
K(k_i,\zeta_i) \bigg(
\frac{\Gamma(-s)\Gamma(-t)}{\Gamma(1+u)}+
\frac{\Gamma(-s)\Gamma(-u)}{\Gamma(1+t)}+
\frac{\Gamma(-t)\Gamma(-u)}{\Gamma(1+s)}
\bigg)\,,
\label{Agluon}
\end{align}
where the overall normalization and the polarization-dependent kinematic factor have been absorbed into the prefactor $K(k_i,\zeta_i)$, whose value can be read off from equations (4.23) and (4.24) of Schwarz' paper \cite{schwarz1982superstring}. Since we are now dealing with massless scattering, the Mandelstam invariants are related by
\begin{align}
s+t+u=0\,.
\end{align}
It was observed in 1989 by Ruelle, Thiran, Verstegen, and Weyers \cite{ruelle1989adelic} that, like the Veneziano amplitude, this amplitude can be expressed in terms of Gelfand-Graev gamma functions:
\begin{align}
\mathcal{A}_4=\frac{K}{2\pi i}\,\Gamma^{(-1)}_{\infty}(-s)\,\Gamma^{(-1)}_{\infty}(-t)\,\Gamma^{(-1)}_{\infty}(-u)
\equiv \frac{K}{2\pi}\,\mathcal{A}^{(\infty)}_4\,,
\label{AsuperReal}
\end{align}
where the kind of Gelfand-Graev gamma function relevant in this case is given by
\begin{align}
\Gamma^{(-1)}_{\infty}(z)=2i(2\pi)^{-z}\sin\left(\frac{\pi z}{2}\right)\Gamma(z)\,.
\end{align}
This kind of signed gamma function exists both in a real and a $p$-adic version, with the different versions defined by
\begin{align}
\Gamma^{(-1)}_\infty(z)=\int_{\mathbb{R}} \frac{dx}{|x|}\,e^{2\pi i x}\,|x|^z\,\text{sign}(x)\,,
\hspace{15mm}
\Gamma^{(-1)}_p(z)=\int_{\mathbb{Q}_p} \frac{dx}{|x|_p}\,e^{2\pi i\{ x\}}\,|x|_p^z\,\text{sign}_{-1}(x)\,,
\label{signedGammas}
\end{align}
where $\{x\}$ denotes the fractional part of $x\in \mathbb{Q}_p$ and the $p$-adic sign function is given by
\begin{align}
\text{sign}_{-1}(x)=\begin{cases}
1 \hspace{5mm}&\text{ if } x=a^2+b^2 \text{ for some }a,b\in \mathbb{Q}_p\,,
\\
-1 &\text{ otherwise}\,.
\end{cases}
\end{align}
In the $p$-adic case, carrying out the integral over $\mathbb{Q}_p$, the signed gamma function evaluates to
\begin{align}
&\Gamma_2^{(-1)}(z)=i\,2^{2z-1} \hspace{10mm}\text{for }p=2\,, \text{ while}
\label{GammaSigned}
\\[6pt]
&
\Gamma_p^{(-1)}(z)
=\frac{\zeta_p^{(-1)}(z)}{\zeta_p^{(-1)}(1-z)}\,,
\hspace{10mm}
\text{where}
\hspace{10mm}
\zeta_p^{(-1)}(z)
=\frac{1}{1+(-1)^{\frac{p+1}{2}}\, p^{-z}}\,,
\hspace{10mm}
\text{for }p>2\,.
\nonumber
\end{align}
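For odd $p$, the evaluation \eqref{GammaSigned} can be checked by a shell decomposition of $\mathbb{Q}_p$. The sketch below rests on two assumptions of ours, both standard: every unit of $\mathbb{Z}_p$ is a sum of two squares for odd $p$, so that $\text{sign}_{-1}(x)$ depends only on the valuation, equalling $1$ for $p\equiv 1\pmod 4$ and $(-1)^{v_p(x)}$ for $p\equiv 3\pmod 4$; and the additive character integrates to the shell measure for $|x|_p\leq 1$, to $-1$ on $|x|_p=p$, and to zero on higher shells:

```python
def gamma_signed_shells(p, z, vmax=400):
    """Gamma_p^{(-1)}(z) for odd p from a shell decomposition of Q_p,
    convergent for Re z > 0 (assumptions stated in the lead-in)."""
    sgn = 1 if p % 4 == 1 else -1
    total = sum(sgn ** v * (1 - 1 / p) * p ** (-v * z) for v in range(vmax))
    total += sgn * (-1) * p ** (z - 1)   # the |x|_p = p shell; 1/sgn = sgn
    return total

def gamma_signed_closed(p, z):
    # Closed form quoted in the text for p > 2
    zs = lambda w: 1 / (1 + (-1) ** ((p + 1) // 2) * p ** (-w))
    return zs(z) / zs(1 - z)

for p in (3, 5, 7):
    print(p, gamma_signed_shells(p, 0.75), gamma_signed_closed(p, 0.75))
```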
In analogy with \eqref{AsuperReal}, Ref. \cite{ruelle1989adelic} proposed a $p$-adic superamplitude:
\begin{align}
\mathcal{A}_4^{(p)}\big(s,t\big)=\Gamma^{(-1)}_{p}(-s)\,\Gamma^{(-1)}_{p}(-t)\,\Gamma^{(-1)}_{p}(s+t)\,.
\label{AsuperPadic}
\end{align}
Unlike the $p$-adic Veneziano amplitude and the alternative $p$-adic superstring amplitudes studied in \cite{aref1988p,marshakov1990new,garcia2022towards}, there is no integral formula to justify the identification of \eqref{AsuperPadic} with a string theory amplitude. The motivation for \eqref{AsuperPadic} was the goal of obtaining an adelic formula for superstring amplitudes. The signed $p$-adic zeta function satisfies the product formula
\begin{align}
\prod_{p>2}\zeta_p^{(-1)}(z)=\prod_{p>2} \frac{1}{1+(-1)^{\frac{p+1}{2}}\,p^{-z}}=L_{4,2}(z)\equiv\sum_{n=1}^\infty \frac{\chi_{4,2}(n)}{n^z}\,, \label{prodZetaSigned}
\end{align}
where the product over primes can be shown to converge absolutely for Re$[z]>1$ and conditionally for Re$[z]>\frac{1}{2}$. Here $L_{4,2}(z)$ is the Dirichlet L-function for the field of Gaussian rationals, and $\chi_{4,2}(n)$ is the Dirichlet character with modulus 4 and index 2, whose value for any integer argument can be determined from the facts that $\chi_{4,2}(n_1 n_2)=\chi_{4,2}(n_1) \chi_{4,2}(n_2)$ for any $n_1,n_2\in \mathbb{N}$, $\chi_{4,2}(2)=0$, and $\chi_{4,2}(p)=(-1)^{\frac{p+3}{2}}$ for any prime $p>2$. By virtue of being a Dirichlet L-function, $L_{4,2}(z)$ obeys a functional equation:
\begin{align}
L_{4,2}(1-z)
=\frac{4^{z}}{2i}\,\Gamma_\infty^{(-1)}(z)\,
L_{4,2}(z)\,.
\label{superFunctional}
\end{align}
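The character rules and the product formula \eqref{prodZetaSigned} are easy to confirm numerically in the absolutely convergent regime. A sketch (function names are ours), checking at $z=2$, where $L_{4,2}(2)$ is Catalan's constant $\approx 0.9159656$:

```python
def chi42(n):
    # Dirichlet character mod 4, index 2: completely multiplicative,
    # chi(2) = 0 and chi(p) = (-1)^((p+3)/2) for odd p; explicitly
    # chi(n) = 0, +1, -1 for n even, n = 1 (mod 4), n = 3 (mod 4).
    return 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

def L42_series(z, nmax=100000):
    return sum(chi42(n) / n ** z for n in range(1, nmax))

def L42_euler(z, pmax=100000):
    # Product over odd primes of the signed local zeta factors
    sieve = [True] * (pmax + 1)
    prod = 1.0
    for p in range(2, pmax + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
            if p > 2:
                prod /= 1 + (-1) ** ((p + 1) // 2) * p ** (-z)
    return prod

print(L42_series(2), L42_euler(2))  # both approach Catalan's constant
```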
Using this relation, the real superamplitude $\mathcal{A}_4^{(\infty)}$ can be recast solely in terms of the Dirichlet L-function:
\begin{align}
\mathcal{A}_4^{(\infty)}
=-8\,\frac{L_{4,2}(1+s)L_{4,2}(1+t)L_{4,2}(1+u)}{L_{4,2}(-s)L_{4,2}(-t)L_{4,2}(-u)}\,.
\end{align}
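This representation can be verified numerically by expressing $L_{4,2}$ through Hurwitz zeta functions, $L_{4,2}(z)=4^{-z}\big(\zeta(z,\tfrac14)-\zeta(z,\tfrac34)\big)$, which supplies the analytic continuation to all $z$ (an ingredient of the sketch below, standard for characters mod 4; the library assumed is mpmath):

```python
from mpmath import mp, zeta, gamma, sin, pi, mpf

mp.dps = 30

def L42(z):
    # L_{4,2} via Hurwitz zeta functions, valid for all z by continuation
    return 4 ** (-mpf(z)) * (zeta(z, mpf(1) / 4) - zeta(z, mpf(3) / 4))

def gamma_signed_inf(z):
    # Real signed Gelfand-Graev gamma function quoted in the text
    return 2j * (2 * pi) ** (-z) * sin(pi * z / 2) * gamma(z)

s, t = mpf('0.3'), mpf('0.4')
u = -s - t
lhs = gamma_signed_inf(-s) * gamma_signed_inf(-t) * gamma_signed_inf(-u) / 1j
rhs = -8 * (L42(1 + s) * L42(1 + t) * L42(1 + u)
            / (L42(-s) * L42(-t) * L42(-u)))
print(lhs, rhs)  # the two expressions agree (and are real)
```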
Also through the use of the functional equation, we observe that if we set aside the criterion Re$[z]>\frac{1}{2}$ required for the convergence of \eqref{prodZetaSigned} and allow ourselves to split up an infinite product, then we find that
\begin{align}
\prod_{p\in\mathbb{P}} \Gamma_p^{(-1)}(z)
=
\Gamma_2^{(-1)}(z)
\frac{\prod_{p>2}\zeta_p^{(-1)}(z)}{\prod_{p>2}\zeta_p^{(-1)}(1-z)}
=i\,2^{2z-1}\frac{L_{4,2}(z)}{L_{4,2}(1-z)}
=-\frac{1}{\Gamma_\infty^{(-1)}(z)}\,.
\label{gammaSignedProd}
\end{align}
If we are also allowed to split an infinite product over $\mathcal{A}_4^{(p)}$ into three products over the signed gamma function in \eqref{AsuperPadic}, then, as in \cite{ruelle1989adelic}, we arrive at the following formula:
\begin{align}
\mathcal{A}_4^{(\infty)}(s,t)
\prod_{p \in \mathbb{P}}
\mathcal{A}_4^{(p)}(s,t)
=-1
\,.
\label{superAdelicProd}
\end{align}
\subsection{Prefactor regulated adelic superamplitude}
\label{4.1}
In the interest of finding an alternative to the constant adelic superamplitude not requiring the assumptions needed to arrive at \eqref{superAdelicProd}, we introduce coefficients $\mathcal{C}_p$ and take the adelic amplitude to be given by
\begin{align}
\mathcal{A}_4^{(\mathbb{A})}(s,t)
=
\mathcal{A}_4^{(\infty)}(s,t)
\prod_{p \in \mathbb{P}}
\mathcal{C}_p\, \mathcal{A}_4^{(p)}(s,t)
\,.
\label{AadelSuper}
\end{align}
If there exists a theory to which such a product can be meaningfully associated, then it will be necessary to multiply $\mathcal{A}_4^{(\mathbb{A})}$ by a kinematic factor like $K(k_i,\zeta_i)$ in equation \eqref{Agluon}, but we will not examine this complication. As in the bosonic case, there exist two choices of $\mathcal{C}_p$ for which the product in \eqref{AadelSuper} converges in particular kinematic regimes, but there is no choice of $\mathcal{C}_p$ that will render the product convergent throughout the physical regimes given by
\begin{align}
\nonumber
& \text{$s$-channel:} \hspace{3mm} s\geq 0\,, \hspace{5mm} -s\leq t \leq 0\,,
\\[3pt]
\label{superSubsets}
& \text{$t$-channel:} \hspace{3.5mm} t\geq 0\,, \hspace{5.5mm} -t\leq s \leq 0\,,
\\[3pt]
& \text{$u$-channel:} \hspace{3mm} s,t<0\,,
\nonumber
\end{align}
and depicted in different shades of green in figure \ref{fig:SuperRegions}. If we set $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2$ for $p>2$, convergence is attained in the regions marked in {\bf \textcolor{colour6}{dark green}} in figure \ref{fig:SuperRegions} and given by
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\linewidth]{regions2.pdf}
\end{center}
\caption{Different regions of kinematic space for 4-particle gluon scattering. The regions in {\bf \textcolor{colour5}{light green}} and {\bf \textcolor{colour6}{dark green}} represent physical regions, while the {\bf \textcolor{colour7}{magenta}} and {\bf \textcolor{colour8}{purple}} regions are unphysical. In the {\bf \textcolor{colour6}{dark green}} regions, the product for the adelic amplitude can be rendered convergent by dressing the $p$-adic amplitudes with coefficients $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2$. In the {\bf \textcolor{colour8}{purple}} regions, convergence is achieved with coefficients $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p$.
}
\label{fig:SuperRegions}
\end{figure}
\begin{align*}
& \text{$s$-channel subset:} \hspace{3mm} t < -\frac{3}{2}\,, \hspace{5mm} s> \frac{3}{2} - t\,,
\\[3pt]
& \text{$t$-channel subset:} \hspace{3mm} s < -\frac{3}{2}\,, \hspace{5mm} t> \frac{3}{2} - s\,,
\\[3pt]
& \text{$u$-channel subset:} \hspace{3mm} s,t<-\frac{3}{2}\,.
\end{align*}
And if we set $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p$ for $p>2$, the product is convergent for
\begin{align*}
& \mathcal{A}) \hspace{3mm} s+t<-\frac{1}{2}\,, \hspace{5.5mm} t>\frac{1}{2}\,, \hspace{5.5mm} s< -\frac{3}{2}\,,
\\[3pt]
& \mathcal{B}) \hspace{3mm} s+t<-\frac{1}{2}\,, \hspace{5.5mm} s>\frac{1}{2}\,, \hspace{5.5mm} t< -\frac{3}{2}\,,
\\[3pt]
& \mathcal{C}) \hspace{3mm} s,t>\frac{1}{2}\,, \hspace{5.5mm} s+t> \frac{3}{2}\,.
\end{align*}
These regions are depicted in {\bf \textcolor{colour8}{purple}} in figure \ref{fig:SuperRegions}. For $p=2$, we set $\mathcal{C}_2=4i$ and $\mathcal{C}_2=2i$ respectively, but the precise values of $\mathcal{C}_2$ are unimportant since changing a finite number of coefficients only affects the overall normalization of $\mathcal{A}^{(\mathbb{A})}_4$, which we do not determine.
For the values of $\mathcal{C}_p$ just described, it is a simple exercise to evaluate the product \eqref{AadelSuper} in the convergent regions through the use of the formula
\begin{align}
(-1)^{\frac{p+1}{2}}p^n\frac{\zeta^{(-1)}_p(a+b-n)}{\zeta^{(-1)}_p(a)\,\zeta^{(-1)}_p(b)}
=
\frac{\zeta_p^{(-1)}(-a-b+n)}{\zeta^{(-1)}_p(-a)\,\zeta^{(-1)}_p(-b)}\,.
\end{align}
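This identity holds for either sign of $(-1)^{\frac{p+1}{2}}$ and can be spot-checked numerically at generic arguments (the values chosen below are arbitrary; function names are ours):

```python
def zeta_signed(p, z):
    return 1 / (1 + (-1) ** ((p + 1) // 2) * p ** (-z))

def lhs(p, a, b, n):
    return ((-1) ** ((p + 1) // 2) * p ** n * zeta_signed(p, a + b - n)
            / (zeta_signed(p, a) * zeta_signed(p, b)))

def rhs(p, a, b, n):
    return zeta_signed(p, -a - b + n) / (zeta_signed(p, -a) * zeta_signed(p, -b))

for p in (3, 5, 7):
    print(p, lhs(p, 0.7, -1.2, 2), rhs(p, 0.7, -1.2, 2))  # pairs agree
```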
Analytically continuing the products obtained for $\mathcal{C}_p=(-1)^\frac{p+1}{2}p^2$ in the convergent subsets listed above to the full respective scattering channels \eqref{superSubsets}, and continuing for $\mathcal{C}_p=(-1)^\frac{p+1}{2}p$ the product in region $\mathcal{A})$ to the $s$-channel, in $\mathcal{B})$ to the $t$-channel, and in $\mathcal{C})$ to the $u$-channel, we get the following candidate amplitudes:
\begin{align}
\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2:
\hspace{5mm}
\mathcal{A}^{(\mathbb{A})}_4=
\begin{cases}
\displaystyle
\,-4\,\frac{L_{4,2}(s)L_{4,2}(1+t)L_{4,2}(1+u)}{L_{4,2}(-s)L_{4,2}(-1-t)L_{4,2}(-1-u)}
&\hspace{5mm}\text{ $s$-channel}\,,
\\
\\
\displaystyle
\,-4\,\frac{L_{4,2}(1+s)L_{4,2}(t)L_{4,2}(1+u)}{L_{4,2}(-1-s)L_{4,2}(-t)L_{4,2}(-1-u)}
&\hspace{5mm}\text{ $t$-channel}\,,
\\
\\
\displaystyle
\,-4\,\frac{L_{4,2}(1+s)L_{4,2}(1+t)L_{4,2}(u)}{L_{4,2}(-1-s)L_{4,2}(-1-t)L_{4,2}(-u)}
&\hspace{5mm}\text{ $u$-channel}\,,
\end{cases}
\label{newAdelicSuper}
\end{align}
\begin{align}
\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p:
\hspace{5mm}
\mathcal{A}^{(\mathbb{A})}_4=
\begin{cases}
\displaystyle
\,-2\,\frac{L_{4,2}(1+s)L_{4,2}(t)L_{4,2}(u)}{L_{4,2}(-1-s)L_{4,2}(-t)L_{4,2}(-u)}
&\hspace{5mm}\text{ $s$-channel}\,,
\\
\\
\displaystyle
\,-2\,\frac{L_{4,2}(s)L_{4,2}(1+t)L_{4,2}(u)}{L_{4,2}(-s)L_{4,2}(-1-t)L_{4,2}(-u)}
&\hspace{5mm}\text{ $t$-channel}\,,
\\
\\
\displaystyle
\,-2\,\frac{L_{4,2}(s)L_{4,2}(t)L_{4,2}(1+u)}{L_{4,2}(-s)L_{4,2}(-t)L_{4,2}(-1-u)}
&\hspace{5mm}\text{ $u$-channel}\,.
\end{cases}
\end{align}
In the next two subsections, we will analyze the partial wave decompositions of these expressions and study their high energy asymptotics.
\subsection{Partial Wave Decomposition}
\label{4.2}
For massless 4-particle scattering in $d$ spacetime dimensions, the partial wave decomposition reads
\begin{align}
\underset{s=s^\ast}{\text{Res}}\,\mathcal{A}\Big(s,\frac{s(\cos\theta-1)}{2}\Big)
=\sum_{m\in \mathbb{N}_0}K^{(s^\ast)}_m(d)\,C_m^{(\frac{d-3}{2})}(\cos\theta)\,,
\label{GegSup}
\end{align}
where unitarity stipulates that all the coefficients $K_m^{(s^\ast)}$ are non-negative. For the real superamplitude $\mathcal{A}_4^{(\infty)}(s,t)$ the $s$-channel poles are situated at $s=s^\ast\in 2\mathbb{N}_0+1$, and the number of Gegenbauer polynomials with non-zero weight in the expansion \eqref{GegSup} grows linearly with $s^\ast$, being given by $(s^\ast+1)/2$ in generic spacetime dimensions. The first residue is simply a constant:
\begin{align}
&\underset{s=1}{\text{Res}}\,\mathcal{A}_4^{(\infty)}\Big(s,\frac{s(\cos\theta-1)}{2}\Big)=\,
4\pi=4\pi\,C_0^{(\frac{d-3}{2})}(\cos\theta)\,.
\end{align}
The second residue decomposes as
\begin{align}
\underset{s=3}{\text{Res}}\,\mathcal{A}_4^{(\infty)}\Big(s,\frac{s(\cos\theta-1)}{2}\Big)
=\,&\frac{\pi}{6}(9\cos^2\theta-1)
\\[6pt]
=\,&
\frac{(10-d)\pi}{6(d-1)}\,
C_0^{(\frac{d-3}{2})}(\cos\theta)
+
\frac{3\pi}{d^2-4d+3}\,
C_2^{(\frac{d-3}{2})}(\cos\theta)\,.
\nonumber
\end{align}
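As in the bosonic case, these low-lying coefficients can be verified numerically by projecting the residue onto Gegenbauer polynomials. A sketch assuming SciPy (function names are ours), checking the residue at $s^\ast=3$ for $d=6$:

```python
import math
from scipy.integrate import quad
from scipy.special import gegenbauer

def K(m, d, residue):
    # Gegenbauer projection with weight (1 - x^2)^{(d-4)/2} and the
    # normalization quoted earlier in the text
    lam = (d - 3) / 2
    norm = (math.factorial(m) * (2 * m + d - 3) * math.gamma(lam) ** 2
            / (2 ** (5 - d) * math.pi * math.gamma(d - 3 + m)))
    C = gegenbauer(m, lam)
    val, _ = quad(lambda x: C(x) * residue(x) * (1 - x ** 2) ** ((d - 4) / 2), -1, 1)
    return norm * val

# Residue of A_4^(infty) at s* = 3, with x = cos(theta)
res3 = lambda x: math.pi / 6 * (9 * x ** 2 - 1)

d = 6
print(K(0, d, res3))  # (10 - d) pi / (6 (d - 1)) = 2 pi / 15
print(K(2, d, res3))  # 3 pi / (d^2 - 4 d + 3) = pi / 5
```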
From the coefficient $K_0^{(3)}(d)$, we see that the real superamplitude violates unitarity for $d>10$, in accordance with 10 being the critical dimension of superstring theory.
\begin{figure}
\begin{center}
\includegraphics[width=0.99\linewidth]{GegSuperFinal.pdf}
\end{center}
\caption{Coefficients in the $s$-channel Gegenbauer decomposition of the tentative adelic superamplitude with $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2$. In five to 11 dimensions, all the coefficients are positive. In 12 dimensions and higher, some coefficients are negative. For $d\leq 4$, the coefficients blow up. For the sake of visibility, the logarithm has been taken of coefficients $K^{(s^\ast)}_{m}(d)$ that are positive for all $d>4$.
}
\label{fig:coeffsSuper}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.99\linewidth]{superGeg2.pdf}
\end{center}
\caption{Coefficients $K_m^{(2)}(d)$ in the $s$-channel Gegenbauer decomposition of the tentative adelic superamplitude with $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p$ for the residue at $s=2$. The coefficients $K_m^{(2)}(d)$ are positive for all $d>3$ when $m=4$, $8$ or $12$, marked in \textcolor{blue}{\bf{blue}}, and are all negative for $m=0$, $2$, $6$, $10$, and $14$, marked in \textcolor{red}{\bf{red}}, which implies unitarity violations. For $d\leq 3$, the coefficients blow up. For the sake of visibility, $\log|K_m^{(2)}|$ rather than $K_m^{(2)}$ is plotted along the $y$-axis.}
\label{fig:coeffsSuper2}
\end{figure}
\subsubsection*{$\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2$ adelic amplitude}
$L_{4,2}(x)$ equals zero whenever $x\in -2\mathbb{N}_0-1$, and as a result the $s$-channel poles of the $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2$ adelic amplitude are located at the values $s=s^\ast \in 2\mathbb{N}_0+1$ as in the real case. But unlike the real case, every even Gegenbauer polynomial is present at every residue. For the first 14 residues, figure \ref{fig:coeffsSuper} depicts the first 18 Gegenbauer coefficients, computed numerically using the orthogonality relation \eqref{GegOrtho}. On the boundary of the physical region, there is a pole at $t=0$. Wherever the line $t=0$ intersects any of the lines $s=s^\ast$, there is a double pole, causing the decomposition \eqref{GegSup} to blow up for $d\leq 4$. For the first pole at $s^\ast=1$, the coefficients are all positive for every $d>4$, but at higher values of $s^\ast$ an increasing number of coefficients $K_m^{(s^\ast)}(d)$ are negative for some values of $d$. The first unitarity violation occurs between $d=11$ and $d=12$ for the pole at $s^\ast=3$. In dimensions 5 to 11, all the computed coefficients are positive.
\subsubsection*{$\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p$ adelic amplitude}
The $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p$ adelic amplitude has $s$-channel poles situated at $s^\ast \in 2\mathbb{N}_0$. In the physical $s$-channel region, there are no poles in $t$ and $u$, and therefore no double poles at the intersection of $s$ and $t$ poles, in consequence whereof we can numerically compute the Gegenbauer coefficients for $d > 3$. The residue at $s=0$ is a constant given by
\begin{align}
\underset{s=0}{\text{Res}}\,\mathcal{A}_4^{(\mathbb{A})}\Big(s,\frac{s(\cos\theta-1)}{2}\Big)
=\frac{\pi}{2L'_{4,2}(-1)}\approx 2.69377\,.
\end{align}
While this number is positive, it is evident from the plots of Gegenbauer coefficients for the residue at $s=2$ shown in figure \ref{fig:coeffsSuper2} that other coefficients are negative. Unitarity is violated in every number of dimensions.
\subsection{High energy limit}
\label{4.3}
From the asymptotic behaviour of the gamma function \eqref{gammaAsymptotic}, it can be shown that the real superamplitude has the following high energy limits:
\begin{align}
&
\text{large $s$, fixed $t$:} \hspace{12mm}
\mathcal{A}_4^{(\infty)}(s,t) \approx
4\pi\, s^{t-1}\,
\Gamma(-t)
\sec\big(\frac{\pi s}{2}\big)
\sin\big(\frac{\pi t}{2}\big)
\sin\big(\frac{\pi (s+t)}{2}\big)\,,
\\[6pt]
&\text{large $s$, fixed $\theta$:} \hspace{12mm}
\mathcal{A}_4^{(\infty)}\Big(s,\frac{s}{2}(\cos\theta-1)\Big)\approx
\\[4pt]
&
\hspace{20mm}
\bigg|\frac{2}{\sin\theta\cot\big(\frac{\theta}{2}\big)^{\cos\theta}}\bigg|^{-s}
\,
\sqrt{\frac{32\,\pi^3}{\sin^2\theta\,s^3}}
\,
\bigg(
1-\cos\Big(\frac{\pi s\cos\theta}{2}\Big)
\sec\big(\frac{\pi s}{2}\big)
\bigg)\,. \label{superRegge}
\end{align}
We see that $\mathcal{A}_4^{(\infty)}(s,t)$ scales as $s^{t-1}$ in the Regge limit and decays exponentially in the high energy fixed scattering angle limit. To see that this benign high energy behaviour carries over in the adelic case, we express the relationship between the real and adelic amplitudes as
\begin{align}
\mathcal{A}_4^{(\mathbb{A})}\big|_{s\text{-channel}}(s,t)=\mathcal{A}_4^{(\infty)}(s,t)\,\mathcal{A}_4^{(\mathbb{P})}(s,t)\big|_{s\text{-channel}}\,,
\end{align}
where the factor relating the two is given by
\begin{align}
&
\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2:
\hspace{8mm}
\mathcal{A}^{(\mathbb{P})}_4(s,t)\big|_{s\text{-channel}}
=
\frac{L_{4,2}(s)\,L_{4,2}(-t)\,L_{4,2}(s+t)}{2\,L_{4,2}(1+s)\,L_{4,2}(-1-t)\,L_{4,2}(-1+s+t)}
\,,
\nonumber
\\
&
\\[-6pt]
&
\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p:
\hspace{10mm}
\mathcal{A}^{(\mathbb{P})}_4(s,t)\big|_{s\text{-channel}}
=
\frac{L_{4,2}(-s)\,L_{4,2}(t)\,L_{4,2}(-s-t)}{4\,L_{4,2}(-1-s)\,L_{4,2}(1+t)\,L_{4,2}(1-s-t)}
\,.
\nonumber
\end{align}
Since $L_{4,2}(x)$ tends to one as $x\rightarrow \infty$, we immediately infer that the $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p^2$ adelic amplitude has the same high energy asymptotics as the real amplitude. In the case $\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p$, it follows from the functional equation \eqref{superFunctional} that
\begin{align}
&\mathcal{C}_p=(-1)^{\frac{p+1}{2}}p:
\hspace{35mm}
\mathcal{A}^{(\mathbb{P})}_4(s,t)\big|_{s\text{-channel}}=
\label{100}
\\[6pt]
&\hspace{14mm}
-\frac{t(s+t)
\cot\big(\frac{\pi s}{2}\big)
\cot\big(\frac{\pi t}{2}\big)
\cot\big(\frac{\pi (s+t)}{2}\big)
}{2\pi (1+s)} \frac{L_{4,2}(1+s)\,L_{4,2}(1-t)\,L_{4,2}(1+s+t)}{4\,L_{4,2}(2+s)\,L_{4,2}(-t)\,L_{4,2}(s+t)}
\,.
\nonumber
\end{align}
In the limit of large $s$ and fixed $t$, this expression tends to a periodic function, and in the limit of large $s$ and fixed $\theta$, \eqref{100} tends to a periodic function in $s$ times a linearly growing function, which is suppressed compared to the exponential decay of \eqref{superRegge}.
\section{Discussion}
\label{5}
We have seen that the adelic 5-point amplitude, given by a sometimes convergent product over real and $p$-adic amplitudes, admits exact evaluation in special kinematic regimes and that the results of these evaluations suggest seeking out alternative regularization procedures for interpreting the 4-point amplitude, which is always given by a divergent product. We analyzed in detail a particular procedure that consists in endowing the $p$-adic amplitudes with momentum-independent coefficients so as to render their product convergent. Similarly, the 5-point and higher $p$-adic amplitudes can be dressed with coefficients that drastically alter their products and modify their regimes of convergence, and imposing factorization may point to a consistent set of choices for these coefficients, although it would be preferable to derive them from first principles.
Of the four non-constant adelic 4-point amplitudes we obtained through regularization via coefficients, three contain double poles at the boundary of the physical kinematic region but appear consistent with unitarity, while the fourth contains no double poles in the physical region but violates unitarity. The situation is reminiscent of the $q$-deformed version of the Veneziano amplitude known as the Coon amplitude \cite{coon1969uniqueness}. For $q>1$, the Coon amplitude is meromorphic but non-unitary, while for $q<1$, it is unitary but non-meromorphic. As to the absence of crossing symmetry in the adelic amplitudes, it is certainly an unusual feature but not one that need be fatal to the existence of an underlying theory. It has been known since the work of Bros, Epstein, and Glaser \cite{bros1964some,bros1965proof} that locality, causality, and unitarity together imply crossing symmetry in theories with a mass gap, but for theories with massless particles, if we abandon the mathematical assumption of analyticity, no such proof is known.\footnote{We refer the reader to Mizera's paper \cite{mizera2021crossing} for a nice review of, and recent progress on, crossing symmetry.} Taking a different view on piecewise analyticity and lack of crossing symmetry, the adelic formalism provides a method of engineering features otherwise difficult to realize for tree-amplitudes in string theory and quantum field theory, where Feynman tree-diagrams are largely insensitive to whether external momenta are in- or out-going --- conditions which, as we have seen, are crucial to the evaluation of infinite products with disjoint regions of convergence.
The most intriguing feature of the adelic amplitudes in \eqref{newAdelic1}, \eqref{newAdelic2}, and \eqref{newAdelicSuper} is arguably the numerical evidence of unitarity for target spaces of suitable dimensionalities. Unitarity of string theory tree-amplitudes has previously proved itself to be a powerful principle capable of deriving new mathematical results, as in the work of Green and Wen \cite{green2019superstring}. For the candidate amplitude \eqref{newAdelic1}, $d=27$ is the largest number of dimensions before unitarity violations are observed, while for the tentative amplitudes in \eqref{newAdelic2} and \eqref{newAdelicSuper} these numbers are $d=10$ and $d=11$ respectively, although \eqref{newAdelic2} should perhaps be dismissed, since it suffers from the malady of exchanging higher spins on a massless pole. It remains an intriguing prospect to ascertain if any of the above numbers truly represents the critical dimension of the adelic string. It will also be potentially interesting to investigate whether there is evidence for unitarity of broader classes of adelic amplitudes associated to general Dirichlet L-functions, and if so, to determine the associated critical dimensions. The family of local gamma functions in \eqref{signedGammas}, indexed by a superscript of minus one, admit of a generalization to families of gamma functions indexed by any non-zero rational number, and for each of these families it is possible to form an adelic construction. It would also be desirable to establish partial wave unitarity, if this is a genuine feature, on a firmer footing than a finite number of numerical tests. For unitarity to hold, a doubly infinite set of constraints must be obeyed: for each residue at an infinite tower of poles, each coefficient in an infinite sum of Gegenbauer polynomials must be non-negative.
The fact that these conditions are all satisfied in the many instances where they have so far been checked is a phenomenon that is difficult to explain without positing an underlying unitary theory associated to the ring of adeles.
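The positivity test described above can be sketched numerically: expand a candidate residue in Gegenbauer polynomials $C_j^{(\lambda)}(\cos\theta)$, with $\lambda=(d-3)/2$ the standard index for partial waves in $d$ spacetime dimensions, and inspect the signs of the coefficients. The sketch below uses a placeholder polynomial and $d=10$; it does not reproduce the actual residues of the adelic amplitudes.

```python
import numpy as np

def gegenbauer(n, lam, x):
    """C_n^(lam)(x) via the standard three-term recurrence."""
    c_prev = np.ones_like(x)
    if n == 0:
        return c_prev
    c = 2.0 * lam * x
    for k in range(2, n + 1):
        c_prev, c = c, (2.0 * (k + lam - 1.0) * x * c
                        - (k + 2.0 * lam - 2.0) * c_prev) / k
    return c

def gegenbauer_coeffs(f, lam, nmax, npts=200):
    """Project f(cos theta) onto C_j^(lam) with weight (1-x^2)^(lam-1/2)."""
    x, w = np.polynomial.legendre.leggauss(npts)
    weight = (1.0 - x * x) ** (lam - 0.5)
    coeffs = []
    for j in range(nmax + 1):
        Cj = gegenbauer(j, lam, x)
        num = np.sum(w * weight * f(x) * Cj)
        den = np.sum(w * weight * Cj * Cj)
        coeffs.append(num / den)
    return np.array(coeffs)

# Placeholder "residue" 1 + cos^2(theta) and a sample dimension d = 10.
lam = (10 - 3) / 2.0
a = gegenbauer_coeffs(lambda x: 1.0 + x ** 2, lam, nmax=4)
unitary_ok = bool(np.all(a > -1e-12))
print(np.round(a, 6))                       # only a_0 and a_2 are non-zero here
print("all coefficients non-negative:", unitary_ok)
```

For the actual amplitudes, one would replace the placeholder function by the residue at each pole and repeat the check for a range of spins and dimensions.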
\section*{Acknowledgements}
I am indebted to Gabriel Cuomo, Matthew Heydeman, An Huang, Ziming Ji, Justin Kaidi, Joseph Minahan, Yaron Oz, Gabi Zafrir, and Wayne Zhao for illuminating discussions.
\section{Introduction \label{sec:introduction}}
Grand Unified Theories (GUTs) offer an attractive framework for model building beyond the Standard Model (SM). Fermion unification in the large GUT representations, on top of gauge coupling unification, makes them a natural environment for addressing the flavour puzzle, i.e.\ the question about the origin of the observed fermion masses, mixings and CP violating phases. Popular GUT models are, e.g., based on the unifying gauge groups $\mathrm{SU}(5)$ \cite{Georgi:1979df} or $\mathrm{SO}(10)$ \cite{Fritzsch:1974nn,Georgi:1974my}. In this work we will focus on the framework of $\mathrm{SU}(5)$ based GUTs.
Depending on the GUT gauge group and on the choice of the GUT-Higgs representations involved in the GUT operators for the Yukawa matrices, the unification of fermions in GUT-matter representations leads to a variety of close connections between the elements of the Yukawa matrices, and thus between the masses and mixings in the quark and lepton sectors (cf.\ \cite{Antusch:2009gu,Antusch:2013rxa}). Furthermore, in particular towards understanding the observed charged fermion mass hierarchies and the large mixing in the lepton sector, family symmetries are often considered in addition to the unifying gauge symmetry. In the literature many options have been considered for family symmetries, including continuous or discrete symmetries, Abelian and/or non-Abelian groups etc., for reviews see e.g.\ \cite{King:2017guk,Meloni:2017cig,King:2013eh}. In such a ``flavour GUT" scenario, the vacuum expectation values (VEVs) of the family symmetry breaking fields (known as ``flavons'') play a crucial role in generating the Yukawa couplings.
Although the origin of the observed fermion masses, mixings and CP violating phases has been among the most important puzzles of particle physics already for a long time, the discovery of a non-zero leptonic mixing angle
$\theta_{13}^\text{PMNS}$ a few years back by T2K~\cite{Abe:2011sj}, Double Chooz~\cite{Abe:2011fz}, RENO~\cite{Ahn:2012nd}, and in particular Daya Bay~\cite{An:2012eh} has triggered new proposals for models towards its solution. In particular, the nowadays very precise result from combining the latest measurements in a global fit yielding $\theta_{13}^\text{PMNS} \approx 8.54^\circ \pm 0.15^\circ$~\cite{Esteban:2016qun} requires a substantial deviation of leptonic mixing (described by the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix) from the tri-bimaximal (TB) mixing pattern \cite{Harrison:2002er,Xing:2002sw}, which for some time was considered a valid mixing scheme for the lepton sector.
Classifying flavour models as ``direct'' or ``indirect'' as in \cite{King:2009ap}, depending on whether residual symmetries are used (``direct'' models) or whether a family symmetry gets completely broken to generate the flavour structure (``indirect'' models), one finds that different routes were followed. In the context of ``direct'' models, more and more complicated groups had to be chosen in order to reach approximate agreement with the $\theta_{13}^\text{PMNS}$ measurement (cf.\ e.g.\ Refs.\ in \cite{King:2017guk,Meloni:2017cig}). ``Indirect'' models, on the other hand, became appreciated when the ``corrections'' to a leading order mixing pattern with zero $1$-$3$ mixing (such as the TB mixing pattern) were present \textsl{a priori}.
The latter situation is typical for flavour GUTs, since due to the GUT relation between quarks and leptons the charged lepton Yukawa matrix typically features some non-zero mixing related to the mixing in the down-type Yukawa matrix, which can correct a leading order mixing pattern from the neutrino mass matrix. Ideas in this direction have been studied some time back under the name ``quark-lepton complementarity (QLC)'' \cite{Minakata:2004xt,Smirnov:2004ju,Raidal:2004iw,Frampton:2004vw,Antusch:2005ca,Li:2005yj,Minakata:2005rf,Hochmuth:2007wq,Goswami:2009yy,Qin:2010hn,Patel:2010hr,Qin:2011ub,Ahn:2011yj,Ahn:2011ep,Zheng:2011uz,Ahn:2011if} with a leading order pattern of ``bimaximal mixing'' \cite{Barger:1998ta} in the neutrino sector, which gets a correction from the charged lepton sector (with ``CKM-like'' mixing angles). Such a scenario was able to explain the large lepton mixing (with a large but non-maximal $\theta_{12}^\text{PMNS}$) at that time.
Furthermore, it was proposed (cf.~e.g.~\cite{King:2012vj,Antusch:2012fb}) and also realised (cf.~\cite{Meroni:2012ty,Antusch:2013kna,Zhao:2014qwa,Shimizu:2014ria}) in the context of flavour GUTs that a leading order TB mixing pattern in the neutrino sector, which then gets modified by a charged lepton mixing correction (via a predicted 1-2 mixing angle from the charged lepton Yukawa matrix related to the Cabibbo angle $\theta_\mathrm{C}$, cf.\ \cite{Antusch:2011qg,Marzocca:2011dh}), could be an interesting scheme for model building. Such a leading order TB mixing pattern in the neutrino sector can, e.g., be realised in ``indirect'' models via a type I seesaw mechanism with so-called Constrained Sequential Dominance (CSD, also referred to as CSD1) \cite{King:2005bj}. In CSD1, the VEVs of the ``flavons'', which break the family symmetry, point in the specific directions $(0,1,-1)$ and $(1,1,1)$ in flavour space, corresponding to two of the columns of the TB mixing matrix.
Interestingly, it was found that the general scenario that $\theta_{13}^\text{PMNS}$ emerges entirely from a charged lepton $1$-$2$ mixing contribution leads not only to the relation $\theta_{13}^\text{PMNS} = s_{23}^\text{PMNS} \theta_{12}^\text{e}$ (in leading order), with $\theta_{12}^\text{e}$ potentially related to the Cabibbo angle $\theta_\mathrm{C}$ in the context of GUTs, but also to a so-called lepton mixing sum rule which allows to predict $\delta^\mathrm{PMNS}$ once the mixing pattern in the neutrino mass matrix is fixed \cite{King:2005bj,Masina:2005hf,Antusch:2005kw,Antusch:2007rk,Antusch:2012fb,Girardi:2014faa,Ballett:2014dua}. Taking the $2$-$3$ mixing in the neutrino sector maximal and $\theta_{12}^\text{e} \approx \theta_\mathrm{C}$, we arrive at the prediction $\theta_{13}^\text{PMNS} \approx \theta_\mathrm{C}/{\sqrt{2}} \approx 9.2^\circ $ \cite{King:2012vj,Antusch:2012fb}. This value was originally close to the observed one, but is now disfavoured with the more precise measurement of $\theta_{13}^\text{PMNS}=8.54^\circ \pm 0.15^\circ$~\cite{Esteban:2016qun}. Furthermore, the experimentally preferred region for $s_{23}^\text{PMNS}$ is now larger than $1/\sqrt{2}$, making the prediction for $\theta_{13}^\text{PMNS}$ under the assumption of $\theta_{12}^\text{e} \approx \theta_\mathrm{C}$ even worse.
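The tension can be made explicit with a quick numerical check of the sum rule quoted above (the input value of $\theta_\mathrm{C}$ is a representative approximation):

```python
import math

# Leading order lepton mixing sum rule: theta13_PMNS = s23_PMNS * theta12_e.
# Assume maximal 2-3 mixing (s23 = 1/sqrt(2)) and theta12_e = theta_C, with
# theta_C ~ 13.0 deg a representative value (an assumption for illustration).
theta_C = 13.0                          # degrees
theta13_pred = theta_C / math.sqrt(2.0)

print(f"predicted theta13_PMNS ~ {theta13_pred:.2f} deg")   # ~9.2 deg
print("measured  theta13_PMNS ~ 8.54 +/- 0.15 deg")
```

The predicted value lies several experimental standard deviations above the measured one, which is what motivates departing from the pure $\theta_{12}^\text{e} \approx \theta_\mathrm{C}$ scenario.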
Already before the very precise measurements of $\theta_{13}^\text{PMNS}$, the pattern of CSD2 \cite{Antusch:2011ic} was proposed for the neutrino sector, as an alternative to CSD1.
Here, the ``flavons'' which break the family symmetry point in the directions $(0,1,-1)$ and $(1,2,0)$ (or ($1,0,2$)) in flavour space.
CSD2 features the same attractive prediction for the neutrino sector $1$-$2$ mixing $\theta_{12}^\nu$, close to the measured PMNS value of about $35^\circ$, but it predicts a non-zero mixing angle $\theta_{13}^{\nu}$ already in the neutrino sector, a deviation of $\theta_{23}^\text{PMNS}$ from $45^\circ$, as well as a leptonic Dirac CP phase $\delta^\text{PMNS}$ which has been shown in \cite{Antusch:2011ic} to be directly linked to the CP violation relevant for generating the baryon asymmetry via the leptogenesis mechanism \cite{Fukugita:1986hr}. When CSD2 is realised in the context of GUTs, then the combined mixing from the charged lepton sector (predicted by GUT relations) and the neutrino sector leads to an attractive class of models for explaining the observed PMNS parameters. Specific $\mathrm{SU}(5)$ GUT models realising this idea have been constructed in \cite{Antusch:2013wn,Antusch:2017ano}.
The purpose of this paper is to perform a systematic analysis of the above-described novel class of models. After defining the model class and identifying the possible choices of GUT operators and the free parameters, we will systematically investigate and classify the resulting predictions by fitting the known experimental results for fermion masses and mixings, in order to select the most promising routes for future model building. It will turn out that the promising models predict the lepton and quark Dirac CP phases $\delta^\mathrm{PMNS}, \delta^\mathrm{CKM}$, with $\delta^\mathrm{PMNS}$ between $230^\circ$ and $290^\circ$ and $\delta^\mathrm{CKM}$ in accordance with a right-angle unitarity triangle ($\alpha_\mathrm{UT}=90^\circ$). They also predict $\theta_{23}^\mathrm{PMNS}$ and $m_d/m_s$ with much less uncertainty than the experimentally allowed ranges. Such predictions of the considered class of models will be probed by future experiments. The DUNE experiment, for instance, can measure $\theta_{23}^\mathrm{PMNS}$ with a precision of less than $1^\circ$, and $\delta^\mathrm{PMNS}$ with a precision of ${\cal O}(10^\circ)$ \cite{Abi:2018dnh,Abi:2018alz,Abi:2018rgm}.
The paper is organized as follows. In Section~\ref{sec:model} we describe the class of models we consider in our study, including the specification of all fermion sectors and an extensive discussion on the texture and various predictive mechanisms used. In Section~\ref{sec:implementationmodel} we analyse the predictive power of the models and determine the best approach to a numerical analysis. In Section~\ref{sec:numericalresults} we present the results. In Section~\ref{sec:conclusions} we conclude with a summary of our work, as well as discuss the future outlook and application of our results.
\section{A new class of models: CSD2 in a simple and predictive GUT setup \label{sec:model}}
\subsection{General $\mathrm{SU}(5)$ GUT setup}\label{sec:GUTsetup}
In this section we define the setup for the class of models we consider in this paper. The general idea is to take supersymmetric (SUSY) $\mathrm{SU}(5)$ GUT models and assume a texture in the Yukawa sector which is as predictive as possible. We shall not be concerned with how these textures are dynamically achieved, i.e.~we shall not specify a flavour theory, as we want to do an analysis which is as model independent as possible.
We assume that the fermion sector consists of the usual three families of $\mathbf{\bar{5}}\oplus\mathbf{10}$, which decompose under the SM group $\mathrm{SU}(3)_C\times\mathrm{SU}(2)_L\times\mathrm{U}(1)_Y$ as
\begin{align}
\mathbf{\bar{5}}_i&=(\mathbf{\bar{3}},\mathbf{1},+\tfrac{1}{3})_i\;\oplus\;(\mathbf{1},\mathbf{2},-\tfrac{1}{2})_i\equiv d^c_i\oplus L_i,\\
\mathbf{10}_i&=(\mathbf{3},\mathbf{2},+\tfrac{1}{6})_i\;\oplus\;(\mathbf{\bar{3}},\mathbf{1},-\tfrac{2}{3})_i\;\oplus\; (\mathbf{1},\mathbf{1},+1)_i\equiv Q_i\oplus u^c_i\oplus e^c_i,
\end{align}
\noindent
where the family index $i$ goes from $1$ to $3$. In addition, the implementation of CSD2 via seesaw type~I in the neutrino sector would require additional right-handed neutrinos in the representation $\mathbf{1}$ of $\mathrm{SU}(5)$.
We make, however, no explicit assumptions on the Higgs sector or any top-down flavour theory, although the choice of our Yukawa texture does place implicit requirements on them. Since no Higgs sector is given, we remain agnostic about the exact superpotential terms of the Yukawa sector in the $\mathrm{SU}(5)$ theory. The most appropriate level at which such a superpotential is to be written is that of the MSSM with right-handed neutrinos:
\begin{align}
\begin{split}
W_{\text{Yuk}}&=\sum_{i,j}\;\; (\mathbf{Y}_u)_{ij}\;Q_i\cdot H_u\,u_j^c-
(\mathbf{Y}_d)_{ij}\;Q_i\cdot H_d\,d^c_j-(\mathbf{Y}_e)_{ij}\;L_i\cdot H_d\,e^c_j\\
&\quad +\sum_{ik}(\mathbf{Y}_\nu)_{ik}\; L_i\cdot H_u \,\nu_k^c+\sum_{kl}(\mathbf{M}_R)_{kl}\;\nu^c_k\,\nu^c_l,
\end{split}\label{eq:MSSM-superpotential}
\end{align}
\noindent
where the $H_u\sim (\mathbf{1},\mathbf{2},+\tfrac{1}{2})$ and $H_d\sim (\mathbf{1},\mathbf{2},-\tfrac{1}{2})$ are the two Higgs fields of the MSSM. The dot $\cdot$ represents a contraction of $\mathrm{SU}(2)$ fundamental indices of the form
\begin{align}
X\cdot Y\equiv \varepsilon_{ab}\;X^a\,Y^b,
\end{align}
\noindent
where $\varepsilon_{ab}$ is the completely anti-symmetric tensor with two indices and $\varepsilon_{12}=1$. The indices $i$ and $j$ run from $1$ to $3$, while we do not assume necessarily the same for $k$ and $l$. We have suppressed in this notation the $\mathrm{SU}(3)$ indices. The Yukawa matrices are written in the left-right convention, and the signs in front of the terms are chosen, so that we get positive terms for the fermion mass terms when the electrically neutral components of $H_u$ and $H_d$ acquire a VEV; this convention is equivalent to that of \cite{Martin:1997ns}.
The free parameters in Eq.~\eqref{eq:MSSM-superpotential} are the $3\times 3$ Yukawa matrices $\mathbf{Y}_u$, $\mathbf{Y}_d$ and $\mathbf{Y}_e$, the $3\times n$ neutrino Yukawa matrix $\mathbf{Y}_\nu$ and the $n\times n$ Majorana mass matrix $\mathbf{M}_{R}$, where $n$ is the number of right-handed neutrinos. At the $\mathrm{SU}(5)$ level, the various Yukawa terms are coming from the following type of operators, where each of $X,Y,Z$ stand for a GUT-Higgs field (or a product of GUT-Higgs fields) in $\mathrm{SU}(5)$ representations, such that the terms form an $\mathrm{SU}(5)$ invariant:
\begin{align}
(\mathbf{Y}_u)_{ij}:&\qquad \mathbf{10}_i\;\mathbf{10}_j\;X,\label{eq:operator-u}\\
(\mathbf{Y}_d)_{ij}:&\qquad \mathbf{10}_i\;\mathbf{\bar{5}}_j\;Y,\label{eq:operator-d}\\
(\mathbf{Y}_e^\textsf{T})_{ij}:&\qquad \mathbf{10}_i\;\mathbf{\bar{5}}_j\;Y,\label{eq:operator-e}\\
(\mathbf{Y}_\nu)_{ik}:&\qquad \mathbf{\bar{5}}_i\;\mathbf{1}_k\;Z.\label{eq:operator-nu}
\end{align}
This is a list of $3$ different types of operators, and therefore gauge unification imprints itself only in the form of relations between $\mathbf{Y}_d$ and $\mathbf{Y}_e$, while all other Yukawa parameters are completely independent. We therefore use the GUT concept to relate the parameters in the down-type quark and charged lepton sector in a particular manner, which we discuss later. Note though that the unknown parts $X,Y,Z$ are in general different for different choices of indices $i$ and $j$. In order to be as predictive as possible, we assume that the matrices $\mathbf{Y}_u$, $\mathbf{Y}_d$, $\mathbf{Y}_e$, $\mathbf{Y}_\nu$ and $\mathbf{M}_R$ have special textures at the GUT scale, which we discuss and motivate below.
The GUT setup will be studied in the framework of supersymmetry, however we will mainly be concerned with the predictions for the ``SM part'', i.e.\ for the prediction for the fermion masses, mixings and CP phases. SUSY enters mainly via the RGEs when we run the parameters from the GUT scale to the SUSY scale, and via the one-loop SUSY threshold corrections \cite{Hempfling:1993kv,Hall:1993gn,Carena:1994bv,Blazek:1995nv,Antusch:2008tf} for which we will use a general parameterisation as discussed later in section \ref{sec:GUT-operators} following \cite{Antusch:2013jca}. We will include the SUSY threshold correction parameters as free parameters in our analysis. Since they will be determined by the fit to the experimental data for the fermion flavour structure, they can give interesting constraints on the SUSY sparticle spectrum, and, together with the measured mass of the SM Higgs particle and the GUT constraints on the soft SUSY breaking terms, can even fully determine the sparticle spectrum (cf.\ \cite{Antusch:2017ano,Antusch:2016nak,Antusch:2015nwi}). We will leave the investigation of the consequences of the considered models for the sparticle spectrum for a future study.
\subsection{Choice of Yukawa sector\label{sec:Yukawa-sector}}
We now focus on our choice of textures in the Yukawa sector. We shall choose a specific texture based on previous analyses and model building ideas, as we discuss below, but remain agnostic regarding the underlying flavour theory.
The choice of our Yukawa textures will be guided by the principles of simplicity and predictivity, and we shall choose the explanation of the CKM CP violating phase as an important starting point. A summary of the train of thought determining the textures of both the quark and lepton sectors is the following:
\begin{enumerate}
\item \textbf{Phase sum rule}: as a guide to obtaining a viable CKM CP violating phase, we adopt the phase sum rule of \cite{Antusch:2009hq}, according to which a unitarity triangle angle of $\alpha_{UT}=\delta^{dL}_{12}-\delta^{uL}_{12}\approx 90^\circ$ gives a good prediction; a necessary condition for its implementation is that $\theta^{uL}_{13}=\theta^{dL}_{13}=0$.
\item \textbf{Simplicity in $\mathbf{Y}_d$}: we choose the down-sector to have no mixing between the first two families and the third family at all, i.e.~$\theta^{dL}_{23}=\theta^{dL}_{13}=0$, and further simplify the texture by taking $(\mathbf{Y}_d)_{11}=0$ and use phase redefinitions of the fields $\mathbf{10}_i$ to eliminate unphysical phases in the entries of $\mathbf{Y}_d$.
\item \textbf{CP violation in $\mathbf{Y}_u$}: in the up-type quark sector the matrix $\mathbf{Y}_u$ is symmetric; we achieve $\theta^{uL}_{13}=0$ by $(\mathbf{Y}_u)_{13}=0$ and implement the quark sector CP violating phase via a realisation of the ``phase sum rule mechanism'' \cite{Antusch:2009hq} by taking all entries real, except the $(\mathbf{Y}_u)_{12}$ entry to be imaginary.
\footnote{We would like to emphasize that any choice that realizes the phase sum rule leads to the same predictions. We therefore do not lose generality by our particular implementation.}
\item \textbf{Single operator dominance}: we assume that each non-vanishing entry in $\mathbf{Y}_d$ comes dominantly from only one operator of the type of Eq.~\eqref{eq:operator-d}, consequently relating $\mathbf{Y}_e$ to $\mathbf{Y}_d$ in the simplest and most predictive way.
\item \textbf{CSD2 for $\mathbf{Y}_\nu$}: we choose the form of ``Constrained Sequential Dominance 2'' (CSD2) for the neutrino sector.
\end{enumerate}
\noindent
Each of these arguments will be thoroughly explored later in this section; we shall discuss each one separately and also flesh-out the connections on how each one then leads to the next. For now, we simply specify the form that Yukawa matrices take considering the points mentioned above: the quark and charged lepton sector Yukawa matrices at the scale $M_{\text{GUT}}$ take the form
\begin{align}
\mathbf{Y}_u = \begin{pmatrix} u_1 & iu_2 & 0 \cr iu_2 & u_3 & u_4 \cr 0 & u_4 & u_5 \end{pmatrix}\,\quad
\mathbf{Y}_d = \begin{pmatrix} 0 & z & 0 \cr y e^{i \gamma} & x & 0 \cr 0 & 0 & y_b \end{pmatrix} \,, \quad
\mathbf{Y}_e = \begin{pmatrix} 0 & c_y ye^{i \gamma} & 0 \cr c_z z & c_x x & 0 \cr 0 & 0 & y_{\tau} \end{pmatrix}\,,\label{eq:Yukawa-texture-ude}
\end{align}
while the left-handed neutrino mass matrix at $M_{\text{GUT}}$ takes one of the following two forms:
\begin{align}
\mathbf{M}_{\nu}^{(102)} = m_a \begin{pmatrix} \epsilon e^{i\alpha} &0& 2 \epsilon e^{i\alpha} \cr 0&1&-1\cr 2\epsilon e^{i\alpha} & -1&1+4\epsilon e^{i\alpha} \end{pmatrix}\,,\quad
\mathbf{M}_{\nu}^{(120)} = m_a \begin{pmatrix} \epsilon e^{i\alpha} & 2 \epsilon e^{i\alpha} & 0 \cr 2\epsilon e^{i\alpha} & 1+4\epsilon e^{i\alpha} & -1 \cr 0 & -1 & 1 \end{pmatrix}\,.\label{eq:CSD2-2variants}
\end{align}
The reason behind two possible forms of $\mathbf{M}_\nu$ will be explained in Section~\ref{sec:reasoning-Ynu}.
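The CSD2 origin of these textures can be checked directly: up to the overall scale $m_a$, $\mathbf{M}_{\nu}^{(120)}$ is the sum of outer products of the flavon directions $(0,1,-1)$ and $(1,2,0)$ quoted in the Introduction, as a type~I seesaw with two right-handed neutrinos produces. A minimal numerical verification (the sample parameter values are arbitrary):

```python
import numpy as np

# Arbitrary sample values for the neutrino-sector parameters, used only to
# check the matrix identity.
eps, alpha, m_a = 0.07, 1.3, 1.0
e = eps * np.exp(1j * alpha)

# CSD2 flavon directions in flavour space.
v1 = np.array([0.0, 1.0, -1.0])
v2 = np.array([1.0, 2.0, 0.0])

# Sum of rank-one outer products, as generated by a two-right-handed-neutrino
# type I seesaw with flavon alignments v1 and v2.
M_seesaw = m_a * (np.outer(v1, v1) + e * np.outer(v2, v2))

# M_nu^(120) as written in the text.
M_120 = m_a * np.array([[e,     2 * e,      0],
                        [2 * e, 1 + 4 * e, -1],
                        [0,     -1,         1]])

assert np.allclose(M_seesaw, M_120)
print("M_nu^(120) = m_a [v1 v1^T + eps e^{i alpha} v2 v2^T] verified")
```

Replacing $v_2$ by $(1,0,2)$ reproduces $\mathbf{M}_{\nu}^{(102)}$ in the same way.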
Above, the Yukawa sector is parametrized by $14$ real parameters in total; this includes the $12$ parameters
\begin{align}
u_1,\quad u_2,\quad u_3,\quad u_4,\quad u_5, \quad x,\quad y,\quad z, \quad m_a, \quad \epsilon,\quad y_b, \quad y_\tau, \label{eq:list_parameters_1}
\end{align}
\noindent
and the $2$ phases
\begin{align}
\alpha,\quad \gamma. \label{eq:list_parameters_2}
\end{align}
In addition, the factors $c_x,c_y,c_z$ in $\mathbf{Y}_e$ are Clebsch-Gordan (CG) coefficients, which are fixed by the choice of particular GUT operators in Eqs.~\eqref{eq:operator-d} and \eqref{eq:operator-e}. We postpone a more in-depth discussion on the possible values of these coefficients to Section~\ref{sec:GUT-operators}.
The above texture considerably reduces the number of free parameters, allowing it to make a number of predictions. The way this texture works is the following:
\begin{itemize}
\item Since $\mathbf{Y}_d$ is block diagonal, the angles $\theta_{23}^{\text{CKM}}$ and $\theta^{\text{CKM}}_{13}$ are coming only from $\mathbf{Y}_u$. Since $(\mathbf{Y}_u)_{13}=0$ and $(\mathbf{Y}_d)_{13}=0$, the angle $\theta_{13}^\text{CKM}$ is generated indirectly:
\begin{align}
\theta_{13}^\text{CKM}\approx \theta_{12}^{uL} \theta_{23}^{uL}.\label{eq:theta13-generated-indirectly}
\end{align}
All in all, this means that the parameters $u_i$ in $\mathbf{Y}_u$, where $i=1\ldots 5$, are fitted to accommodate the three mass eigenvalues $m_u$, $m_c$, $m_t$, and two mixing angles $\theta_{23}^{\text{CKM}}$ and $\theta_{13}^{\text{CKM}}$ (the latter generated indirectly). Since $\theta_{23}^{uL}$ is fixed by the CKM angle $\theta_{23}^\text{CKM}$, the angle $\theta_{12}^{uL}$ is determined by Eq.~\eqref{eq:theta13-generated-indirectly}. Since there is no left mixing phase $\delta^{dL}_{12}$ in $\mathbf{Y}_d$, the relative factor $i$ in $\mathbf{Y}_u$ predicts the $\alpha_{\text{UT}}$ angle in the unitarity triangle $\alpha_{UT}\approx-\delta^{uL}_{12}\approx \pi/2$, and thus $\delta^{\text{CKM}}$.
\item In $\mathbf{Y}_e$ and $\mathbf{Y}_d$, the parameters $x$, $y$, $z$ are used to correctly fit the two well measured charged lepton masses $m_{e}$ and $m_{\mu}$, and produce a suitable $\theta_{12}^{dL}$ in order to produce the correct remaining CKM angle $\theta_{12}^{\text{CKM}}$. Given fixed Clebsch coefficients $c_x$, $c_y$ and $c_z$ (they are not free parameters in a chosen model), this automatically predicts the down-type masses $m_d$ and $m_s$ and the charged lepton mixing angle $\theta_{12}^{eL}$. The prediction of the masses $m_d$ and $m_s$ can be alternatively thought of as the prediction of the ratio $m_d/m_s$, which turns out to be fixed almost solely by the three Clebsch coefficients, while the SUSY threshold correction parameter $\eta_q$ (to be discussed later in Section~\ref{sec:GUT-operators}) is fit so as to give a correct overall scale for the $m_d$ and $m_s$ masses. Finally, the parameters $y_b$ and $y_\tau$ can be set independently, thus determining the correct $m_b$ and $m_\tau$ mass, respectively.
\item The form of CSD2 in the neutrino sector predicts one neutrino mass to be zero. The parameters $m_a$ and $\epsilon$ in $\mathbf{M}_{\nu}$ can be used to fit the two non-zero masses $m_{\nu_2}$ and $m_{\nu_3}$. The two remaining parameters are the phases $\alpha$ and $\gamma$ in the matrices $\mathbf{M}_\nu$ and $\mathbf{Y}_e$, respectively. These 2 parameters have to be used to fit the three PMNS mixing angles, and they also determine the PMNS CP violating phase. From a simplified perspective, successfully fitting $3$ angles with 2 parameters implies that $1$ angle is determined by the other $2$; we choose the least well measured of the angles, the angle $\theta_{23}^{\text{PMNS}}$, to be the predicted one. All in all, that means that the CSD2 form of the neutrino sector and the given texture in $\mathbf{Y}_e$ make 2 predictions: $\delta^{\text{PMNS}}$ and $\theta_{23}^{\text{PMNS}}$.
\end{itemize}
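The neutrino-sector statement in the first bullet point can be verified directly: since $\mathbf{M}_\nu$ is a sum of two rank-one terms, its determinant vanishes identically, giving exactly one massless neutrino, while $m_a$ and $\epsilon$ set the two non-zero masses. A quick numerical sketch (the parameter values are arbitrary, chosen only for illustration):

```python
import numpy as np

eps, alpha, m_a = 0.07, 1.3, 0.03     # arbitrary sample values
e = eps * np.exp(1j * alpha)
M = m_a * np.array([[e,     2 * e,      0],
                    [2 * e, 1 + 4 * e, -1],
                    [0,     -1,         1]])   # M_nu^(120) from the text

# For a complex symmetric Majorana mass matrix the physical masses are
# proportional to the singular values (Takagi decomposition).
masses = np.linalg.svd(M, compute_uv=False)

print("det M  =", abs(np.linalg.det(M)))   # vanishes: M has rank 2
print("masses =", np.round(masses, 6))     # the smallest one is numerically zero
```

The two non-zero singular values then play the role of $m_{\nu_2}$ and $m_{\nu_3}$, fitted by $m_a$ and $\epsilon$ as described above.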
Given the considerations above, we see that the chosen textures make $4$ predictions, which we summarize in a table given below:
\begin{center}
\begin{tabular}{ll}
\toprule
predicted quantity&root cause\\
\midrule
$\delta^{\text{CKM}}$& phase sum rule\\\addlinespace[4pt]
$m_d/m_s$& GUT connection\\\addlinespace[4pt]
$\theta_{23}^{\text{PMNS}}$& $\mathbf{Y}_e$ texture and CSD2\\\addlinespace[4pt]
$\delta^{\text{PMNS}}$& $\mathbf{Y}_e$ texture and CSD2\\
\bottomrule
\end{tabular}
\end{center}
Additionally, two more interesting quantities are fit: the charged lepton mixing angle $\theta_{12}^{eL}$ and the SUSY threshold parameter for the first two down-type families $\eta_{q}$. The quantity $\theta_{12}^{eL}$ may be of interest for more general model building approaches, e.g.\ when the charged fermion GUT setup may be combined with a different scheme for the neutrino sector. The value for $\eta_{q}$ would have to be realized by a realistic model of SUSY breaking, which can lead to interesting constraints on the sparticle spectrum as discussed at the end of Section~\ref{sec:GUTsetup}.
We note that a complete analysis has to take into account the RGE running of the Yukawa matrices to low energies, as well as SUSY threshold corrections. Such a complete analysis of all input parameters and observables of the model, with careful consideration of the involved energy scales, is performed later in Section~\ref{sec:implementationmodel}. The discussion in this section was intended only to demonstrate what the chosen textures can achieve.
It has thus been established that the chosen Yukawa textures are both simple and predictive. Based on our $5$-point step-by-step reasoning, we also claim that the choice of texture is far from arbitrary, and that the motivational points lead naturally from one to the next.
For an example how such an $\mathrm{SU}(5)$ GUT texture for the charged fermions can be realised in an explicit model, we refer the interested reader to Ref.\ \cite{Antusch:2013tta}.
We now return to this step-by-step motivation of the textures, discussing each of the $5$ considerations in greater detail.
\subsubsection{Phase sum rule \label{sec:reasoning-phase}}
The starting point for determining our textures was the ``phase sum rule mechanism'' in the quark sector, proposed in~\cite{Antusch:2009hq}, which leads to a predictive scheme for CP violation in the quark sector featuring a right-angled unitarity triangle with $\alpha_{\mathrm{UT}}= 90^\circ$ (corresponding to a prediction $\delta^{\text{CKM}} = 1.188\pm 0.016$, well within the current experimental range of $1.208 \pm 0.054$)\footnote{This is a prediction for $\alpha_\mathrm{UT}=90^\circ$, with the CKM angles $\theta_{12}^\text{CKM}$, $\theta_{23}^\text{CKM}$ and $\theta_{13}^\text{CKM}$ taken in their $1\sigma$ experimental ranges, with data at $M_{Z}$ taken from~\cite{Antusch:2013jca}. The experimental range for $\delta^{\text{CKM}}$ at $M_Z$ is also from~\cite{Antusch:2013jca}.}.
In~\cite{Antusch:2009hq} it was shown that a number of ``quark mixing sum rules'' arise under the condition that the 1-3 mixings from both $\mathbf{U}_{u}^{L}$ and $\mathbf{U}_{d}^{L}$ are zero (see Appendix~\ref{app:ckm_pmns} for notation). Assuming $\theta_{13}^{uL}=\theta_{13}^{dL}=0$ and the small angle approximation,
the mixing sum rules can be written as
\begin{align}
\delta^{dL}_{12}-\delta^{uL}_{12}&\approx \alpha_{\text{UT}}\approx \arg \left(1-\frac{\theta_{12}^{\text{CKM}}\theta_{23}^{\text{CKM}}}{\theta_{13}^{\text{CKM}}}e^{-i\delta^{\text{CKM}}}\right),\\
\theta_{12}^{uL}&\approx \frac{\theta_{13}^{\text{CKM}}}{\theta_{23}^{\text{CKM}}}, \label{eq:theta12uL}\\
\theta_{12}^{dL}&\approx \bigg|\theta_{12}^{\text{CKM}}-\frac{\theta_{13}^{\text{CKM}}}{\theta_{23}^{\text{CKM}}}\;e^{-i\delta^{\text{CKM}}}\bigg|,
\end{align}
where $\alpha_{\text{UT}}$ is the upper angle in the unitarity triangle (labelled $\alpha$ in the PDG~\cite{Patrignani:2016xqp}). Taking the central values and $1\sigma$ errors at the scale $M_{Z}$ from \cite{Antusch:2013jca}, we thus arrive at the following numerical values for the left-hand-side quantities:
\begin{align}
\delta^{dL}_{12}-\delta^{uL}_{12}&\approx 88.5^\circ\pm 3.2^\circ\,, \\
\theta_{12}^{uL}&\approx 4.96^\circ \pm 0.19^\circ\,,\\
\theta_{12}^{dL}&\approx 12.18^\circ \pm 0.27^\circ\,.\label{eq:sum-rule-d}
\end{align}
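These numbers can be reproduced from the sum rules with a few lines of arithmetic. The sketch below uses illustrative, rounded CKM central values at $M_Z$ (our own inputs for demonstration, not the exact fit values of \cite{Antusch:2013jca}):

```python
import math
import cmath

# Illustrative CKM central values at M_Z in degrees (rounded inputs,
# assumed here for demonstration; not the exact fit values)
th12, th23, th13, delta = map(math.radians, (13.02, 2.42, 0.209, 69.2))

r = th13 / th23  # the combination theta13^CKM / theta23^CKM

# Quark mixing sum rules in the small-angle approximation
alpha_UT = math.degrees(cmath.phase(1 - (th12 * th23 / th13) * cmath.exp(-1j * delta)))
theta12_uL = math.degrees(r)
theta12_dL = math.degrees(abs(th12 - r * cmath.exp(-1j * delta)))

print(alpha_UT, theta12_uL, theta12_dL)  # roughly 88.5, 4.95, 12.18
```

Varying the inputs within their experimental ranges would then map out the quoted uncertainties.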
Experimental data is thus consistent with the intriguing possibility that $\alpha_{\mathrm{UT}}= 90^\circ$. It has been proposed in \cite{Antusch:2009hq} that simple textures realising $\delta_{12}^{dL} - \delta_{12}^{uL} = \pi/2$ could thus be used for building predictive models for CP violation in the quark sector. This idea has been applied, e.g., in the GUT flavour models in Refs.~\cite{Antusch:2011sx,Meroni:2012ty,Antusch:2013wn,Antusch:2013kna,Antusch:2013tta,Antusch:2013rla,Antusch:2017ano}. Future more precise measurements of the CKM phase have the potential to verify or exclude this $90^\circ$ prediction.
As a final comment on the phase sum rule, we would like to point out that the generation of a CP violating phase from a phase $\pi/2$ in one of the entries is attractive from a model building point of view, since it can arise from an underlying discrete symmetry or spontaneous breaking thereof.
In models of flavour, where the structure of the Yukawa matrices arises from the vacuum expectation values of so-called ``flavons'', which break a certain family symmetry, phase differences of $\pi/2$ between different flavons, or between the different components of one flavon, can emerge in various ways, e.g.\ via ``discrete vacuum alignment'' with $\mathbb{Z}_4$ symmetry combined with spontaneous CP violation \cite{Antusch:2011sx} or from a flavon potential as discussed in \cite{Antusch:2013kna}.
\subsubsection{$\mathbf{Y}_d$: simplicity and predictivity \label{sec:reasoning-Yd}}
From among the fermion sectors we first turn to the down-type quark sector and discuss the form that the matrix $\mathbf{Y}_d$ takes. Here we rely on the principles of simplicity and predictivity. Applying these principles to the down sector brings added benefits also in the charged-lepton sector, since the two sectors are related due to gauge unification in the underlying $\mathrm{SU}(5)$ setup.
An important prerequisite for the phase mixing sum rule of Section~\ref{sec:reasoning-phase} to work was to have vanishing $1$-$3$ mixing angles; for $\mathbf{Y}_d$ this means that $\theta_{13}^{dL}=0$, which we can approximately achieve by imposing the texture zero $(\mathbf{Y}_d)_{13}=0$. We can further simplify $\mathbf{Y}_d$ by assuming that the $\theta_{23}^{dL}$ angle is also zero, so that all CKM mixing between the first two and the last family comes from the up sector. With this assumption only the largest CKM mixing angle $\theta_{12}^{\text{CKM}}$ in the hierarchy
\begin{align}
1&\gg \theta_{12}^{\text{CKM}}\gg\theta_{23}^{\text{CKM}}\gg \theta_{13}^{\text{CKM}}
\end{align}
is generated from the down sector. A simple choice with a minimal number of free parameters is a $2+1$ block diagonal structure. This structure then needs to generate the $4$ relevant quantities (excluding any phases): the $3$ down-type masses, as well as the $1$-$2$ contribution via the angle $\theta_{12}^{dL}$.
A $2+1$ block structure has $5$ non-zero entries, so we can still explain the $4$ relevant quantities if we eliminate one of the entries in the $2\times 2$ block. The dominant entry in this block will generate the mass $m_s$, so we require $(\mathbf{Y}_d)_{22}\neq 0$, while the non-zero $\theta_{12}^{dL}$ angle contribution will benefit from $(\mathbf{Y}_d)_{12}\neq 0$. Since the right mixing will dominantly come from $(\mathbf{Y}_d)_{21}$, and this mixing can be of use later in the lepton sector, we would like to keep that entry as well. We thus eliminate the parameter in the $1$-$1$ entry: $(\mathbf{Y}_d)_{11}=0$.\footnote{We note that another possibility here would be to take the $2$-$1$ entry to be zero. This, however, would decouple the mixing in the quark and lepton sectors, since then $\theta_{12}^{dL}$ would vanish. While this may be of interest for different model building ideas, we prefer to stick to $(\mathbf{Y}_d)_{11}=0$ in the following, since CSD2 will make use of a non-zero $\theta_{12}^{dL}$.}
The non-zero entries in such a texture are in general complex. We have the freedom, however, to absorb phases into redefinitions of the fields. Since we are using the left-right convention for the matrix $\mathbf{Y}_d$, the basis of the rows comes from the $\mathrm{SU}(5)$ representations $\mathbf{10}_{i}$ (where the left-handed down-quarks live), while the basis for columns comes from $\mathbf{\bar{5}}_{i}$. We use only the phase freedom of the $\mathbf{10}_i$, with which we can make $3$ entries, one in each row, real. A redefinition of $\mathbf{\bar{5}}_{i}$, on the other hand, would influence the neutrino mass matrix; we prefer not to absorb the one remaining phase in $\mathbf{Y}_d$ into $\mathbf{\bar{5}}_{i}$, for greater clarity later when considering the neutrino sector. The choice of which phases to absorb has now fixed the basis of the $\mathbf{10}_i$. As will be discussed below, to eliminate phases in the neutrino Yukawa matrix we will globally redefine the $\mathbf{\bar{5}}_i$, such that the bases of all the Yukawa matrices are fixed and only the physical phases remain; since in a flavon setup the three families of $\mathbf{\bar{5}}_i$ form a triplet, there is only the freedom to absorb one phase.
Given the considerations above, we have thus arrived at the following form of $\mathbf{Y}_d$:
\begin{align}
\mathbf{Y}_d&\sim\begin{pmatrix}
0&\ast&0\\
\ast&\ast&0\\
0&0&\ast\\
\end{pmatrix} \quad \to
\begin{pmatrix}
0&\star&0\\
\ast&\star&0\\
0&0&\star\\
\end{pmatrix} .
\end{align}
The symbols $\ast$ denote non-zero complex entries, while the $\star$ represents positive real entries. The arrow ``$\to$'' represents the absorption of phases into redefinitions of $\mathbf{10}_i$, which shows our choice of entries from which the phase is eliminated, arriving at the final form of $\mathbf{Y}_d$ given in Eq.~\eqref{eq:Yukawa-texture-ude}. We note that in this parametrisation the remaining complex entry $(\mathbf{Y}_d)_{21}$ does not affect the CKM CP phase, but will have an influence on CP violation in the lepton sector since in the considered $\mathrm{SU}(5)$ framework, $\mathbf{Y}_d$ is related to $\mathbf{Y}_e^\textsf{T}$. We will parametrise the complex $2$-$1$ element of $\mathbf{Y}_d$ as $ye^{i \gamma}$ with real parameter $y$ (cf.\ Eq.~\eqref{eq:Yukawa-texture-ude}).
We finish the motivation of $\mathbf{Y}_d$ with a remark comparing our texture to the one which gives rise to the Gatto--Sartori--Tonin (GST) relation \cite{Gatto:1968ss}:
The vanishing $1$-$1$ entry in the $2\times 2$ block relates the two mixing angles (left and right) to the two singular values of the block. Adapting the notation to the concrete case of $\mathbf{Y}_d$, the $2$-$2$ entry is roughly equal to the bigger singular value and thus to the strange quark mass $m_s$. We can then write the block using the small angle approximation $\theta_{12}^{dL},\theta_{12}^{dR}\ll 1$ as
\begin{align}
(\mathbf{Y}_d)_{2\times 2}&\approx\begin{pmatrix}
0&\theta_{12}^{dL}m_s\\
\theta_{12}^{dR}m_s&\phantom{\theta_{12}^{dL}}m_s
\end{pmatrix},
\end{align}
from which we can derive the relation
\begin{align}
\theta_{12}^{dL}\theta_{12}^{dR}&\approx \frac{m_d}{m_s},
\end{align}
where $m_d$ is the down quark mass, which is the smaller of the two singular values; since $m_d\ll m_s$ experimentally, the small angle approximation is justified.
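As a quick numerical cross-check of this relation, one can build a toy $2\times 2$ block with hypothetical small angles and compare the product of the mixing angles to the ratio of singular values (the values below are purely illustrative, not fitted):

```python
import numpy as np

# Toy 2x2 block with the texture zero in the 1-1 entry;
# thL, thR are hypothetical small left/right 1-2 mixing angles in radians
ms, thL, thR = 1.0, 0.21, 0.05
Y = np.array([[0.0, thL * ms],
              [thR * ms, ms]])

s = np.linalg.svd(Y, compute_uv=False)  # singular values, descending
md_over_ms = s[1] / s[0]

# Agrees with thL*thR up to small-angle corrections of order theta^2
print(md_over_ms, thL * thR)
```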
The aforementioned GST relation has a texture zero in the same $1$-$1$ location, but it is valid only when the matrix is symmetric, and the $1$-$2$ mixing angle is taken to be the Cabibbo angle $\theta_\mathrm{C}$: if $\theta_{12}^{dL}=\theta_{12}^{dR}\approx \theta_\mathrm{C}$, then we get the GST relation
\begin{align}
\sqrt{\frac{m_d}{m_s}}\approx \theta_\mathrm{C}.
\end{align}
We stress, however, that in our case the GST relation is not valid; besides our texture not being symmetric, it is also important that not all of the $\theta_{12}^{\text{CKM}}$ mixing is generated from $\mathbf{Y}_d$, such that we do not obtain a prediction for $m_d/m_s$ in terms of the Cabibbo angle $\theta_\mathrm{C}$. In our texture, the parameters $x,y$ and $z$ are determined by the very accurately measured $m_e$ and $m_\mu$, and by
$\theta_{12}^{dL}$ which in turn is fixed by the quark mixing sum rule in Eq.~\eqref{eq:sum-rule-d}. The masses $m_d$ and $m_s$, or more precisely the ratio $m_d/m_s$ and the SUSY threshold correction parameter $\eta_q$, are then obtained as predictions once $x,y$ and $z$ are fitted. The model predictions for $m_d/m_s$ thus in general differ from the one of the GST relation.
\subsubsection{$\mathbf{Y}_u$: generating CP violation in CKM \label{sec:reasoning-Yu}}
The up sector Yukawa matrix $\mathbf{Y}_u$ is taken to be symmetric. The masses in the up-quark sector are hierarchical and the mixing angles small. We will take a general $\mathbf{Y}_u$ under the two conditions that (i) the $1$-$3$ mixing in the up-type quark sector vanishes, which we achieve to a very good approximation by $(\mathbf{Y}_u)_{13} = 0$ and which is a condition for the phase sum rule to hold, and (ii) the phase of the $1$-$2$ mixing is equal to $-\pi/2$, which, applying the ``phase sum rule'' relation $\delta_{12}^{dL} - \delta_{12}^{uL} = \pi/2$ \cite{Antusch:2009hq}, gives $\alpha_{\mathrm{UT}}= 90^\circ$.
Due to the texture zero in the entries $(\mathbf{Y}_u)_{13}$ and $(\mathbf{Y}_d)_{13}$ the relation
\begin{align}
\theta_{13}^\text{CKM}\approx \theta_{12}^{uL}\,\theta_{23}^{uL}\approx \frac{(\mathbf{Y}_u)_{12}\,(\mathbf{Y}_u)_{23}}{(\mathbf{Y}_u)_{22}\,(\mathbf{Y}_u)_{33}},
\end{align}
holds, with $\theta_{12}^{uL}$ given by Eq.~\eqref{eq:theta12uL} and where $\theta_{23}^{uL} = \theta_{23}^{\text{CKM}}$.
Since the down sector is block diagonal, it contributes only $\theta_{12}^{dL}$, and the CKM angles $\theta^{\text{CKM}}_{13}$ and $\theta_{23}^{\text{CKM}}$ are thus generated exclusively from the up sector, while $\theta_{12}^\text{CKM}$ gets contributions from the up-sector and the down-sector.
For the following analysis it is only relevant that $\delta_{12}^{uL} = -\pi/2$; however, in order to be specific we will choose a special representative of the possible $\mathbf{Y}_u$ with this property, namely the case where most entries are real, except for the $(\mathbf{Y}_u)_{12}$ (and $(\mathbf{Y}_u)_{21}$) entry which has a complex phase of $\pi/2$ (cf.\ \cite{Antusch:2009hq}). The placement of the $i$ in $\mathbf{Y}_u$ is the sole generator of the quark CP phase $\delta^{\text{CKM}}$. Note that the freedom for phase redefinitions of $\mathbf{10}_i$ was already used for $\mathbf{Y}_d$, so the basis for $\mathbf{Y}_u$ is already fixed and there is no phase freedom remaining. In summary, the texture we consider for $\mathbf{Y}_u$ and $\mathbf{Y}_d$ is
\begin{align}
\mathbf{Y}_u&\sim \begin{pmatrix}
\star&i\star& 0\\
i\star&\star&\star\\
0&\star&\star\\
\end{pmatrix},&
\mathbf{Y}_d&\sim \begin{pmatrix}
0&\star& 0\\
\ast&\star&0\\
0&0&\star\\
\end{pmatrix},
\label{eq:general_texture_yu_yd}
\end{align}
where $\star$ denotes any positive real value, and the zero entries $(\mathbf{Y}_u)_{13}=(\mathbf{Y}_d)_{13}=0$ give approximately zero $1$-$3$ mixing, ensuring the validity of the phase sum rule to a good approximation. The asterisk $\ast$ denotes an arbitrary complex entry, and a complex phase in the $2$-$1$ entry of $\mathbf{Y}_d$ only contributes to the right mixing and not to the left mixing matrix relevant for the construction of the CKM mixing matrix.
Since $\mathbf{Y}_d$ is related to $\mathbf{Y}_e^\mathsf{T}$ in our $\mathrm{SU}(5)$ setup, however, this phase appears in the left-side mixing matrix of the charged lepton sector, helping to generate $\delta^{\text{PMNS}}$.
We shall make use of this form of the matrices $\mathbf{Y}_u$ and $\mathbf{Y}_d$ in the next steps.
\subsubsection{$\mathbf{Y}_e$: single operator dominance \label{sec:reasoning-Ye}}
In the following we will furthermore assume that the entries in Yukawa matrices are each dominantly generated by a single GUT operator of the type given in Eqs.~\eqref{eq:operator-u}--\eqref{eq:operator-nu}, which could be a tree-level operator (e.g.\ for the case of the $3$-$3$ element of $\mathbf{Y}_u$ to generate the comparatively large top quark mass) or an effective operator (which helps to explain the hierarchy of the quark and charged lepton masses). We refer to this principle as \textbf{single operator dominance}. The assumption is that possible effects of subdominant operators can be neglected.\footnote{It has been checked in explicit GUT flavour models, e.g.\ in \cite{Antusch:2013kna}, that this principle works very well, unless two operators are engineered to both contribute with similar strength. With the single operator dominance principle, one arrives at more predictive models, while engineering two operators to contribute with similar strength would introduce a new parameter to soften correlations which are otherwise induced by the GUT operators. To be as predictive as possible, we choose not to rely on such assumptions.}
This assumption enables us to establish a direct relation between $\mathbf{Y}_d$ and $\mathbf{Y}_e$, due to the same operator contributing to one entry of each of these matrices, cf.~Eqs.~\eqref{eq:operator-d} and \eqref{eq:operator-e}; the entries are related by a group-theoretic $\mathrm{SU}(5)$ Clebsch-Gordan coefficient, depending on which product of representations $Y$ stands for in the stated equations. Each entry in $\mathbf{Y}_d$ can come from a different type of operator, so each matrix entry $(\mathbf{Y}_e^\mathsf{T})_{ij}$ can have a different CG coefficient relative to the entry $(\mathbf{Y}_d)_{ij}$. The possible values of the Clebsch coefficients $c_x,c_y,c_z$ will be discussed later.
As we have already mentioned briefly in Section~\ref{sec:GUTsetup}, and as we will discuss in more detail in Section~\ref{sec:GUT-operators}, the relation between $\mathbf{Y}_d$ and $\mathbf{Y}_e$ is affected by RG running between the GUT scale and low energies, and also by the SUSY threshold correction when matching the MSSM to the SM at loop level. The latter effects can be particularly large since there are contributions that are loop suppressed but $\tan \beta$ enhanced \cite{Hempfling:1993kv,Hall:1993gn,Carena:1994bv,Blazek:1995nv,Antusch:2008tf,Antusch:2009gu}. For the $1$-$2$ blocks of $\mathbf{Y}_d$ and $\mathbf{Y}_e$, we will show that to a very good approximation both effects can be subsumed into a single factor that merely rescales one block compared to the other.
Regarding $(\mathbf{Y}_d)_{33}$, in an explicit model, there may also be a Clebsch coefficient relating $y_b$ and $y_\tau$ at the GUT scale, and an additional SUSY threshold correction parameter $\eta_b$ (analogously to $\eta_q$ for the $1$-$2$ block) which is fit to match the measured bottom quark and tau lepton masses. This gives an additional constraint on the SUSY particle spectrum (cf.\ the discussion at the end of Section~\ref{sec:GUTsetup}), but it will not be discussed any further in this paper. Possible Clebsch factors between $y_b$ and $y_\tau$ in $\mathrm{SU}(5)$ are e.g.\ $y_\tau/y_b = 1$ (i.e.\ $b$-$\tau$ unification \cite{Georgi:1979df}) or $y_\tau/y_b =3/2$ (see for example~\cite{Antusch:2009gu}). However, as already mentioned earlier, we will simply fit $y_b$ and $y_\tau$ to the experimental data, since we want our analysis to be as model independent as possible.
\subsubsection{Neutrino sector: using CSD2 \label{sec:reasoning-Ynu}}
In the neutrino sector, we choose the CSD2 texture for the light neutrinos, which is known to be very predictive~\cite{Antusch:2011ic,Antusch:2013wn}. This section provides a brief summary and motivation for this texture; for understanding the remainder of the paper, however, it is sufficient to simply note the forms of the light neutrino mass matrices of Eq.~\eqref{eq:CSD2-2variants} (description with $3$ real parameters $m_a$, $\epsilon$ and $\alpha$) and the approximate mixing angle predictions in Eqs.~\eqref{eq:12pmns}--\eqref{eq:23pmns}.
The discovery of neutrino oscillations made clear that at least two out of the three observed left-handed neutrinos possess mass, and that there is a mismatch between the flavour eigenbasis $\{\nu_e,\nu_\mu,\nu_\tau\}$ and the mass eigenbasis $\{\nu_1,\nu_2,\nu_3\}$. The two bases are related by the unitary PMNS matrix; see Appendix~\ref{app:ckm_pmns} for details on notation.
\vspace{0.5cm}
{\bf Large lepton mixing in type I seesaw models via Sequential Dominance:}\newline
To understand the origin of the two large lepton mixing angles in the context of the type I seesaw mechanism, the concept of Sequential Dominance (SD) \cite{King:2002nf,Antusch:2004gf} of the right-handed neutrino contributions to the neutrino mass matrix was proposed. Writing
\begin{align}
\mathbf{Y}_\nu=\begin{pmatrix} A_1 & B_1 & C_1 \cr A_2 & B_2 & C_2 \cr A_3 & B_3 & C_3 \end{pmatrix} \,,
\quad \mathbf{M}_{R}=\begin{pmatrix} M_A & 0 & 0 \cr 0 & M_B & 0 \cr 0 & 0 & M_C \end{pmatrix} \,,\label{nuyuk}
\end{align}
then according to the type~I seesaw mechanism, the neutrino masses are given by
\begin{align}
\mathbf{M}_{\nu} =v^2 \mathbf{Y}_\nu \mathbf{M}_R^{-1} \mathbf{Y}_\nu^\mathsf{T}=v^2 \left[ \frac{AA^\mathsf{T}}{M_A}+ \frac{BB^\mathsf{T}}{M_B}+ \frac{CC^\mathsf{T}}{M_C}\right]\,,\label{eq:Mnu-general}
\end{align}
where $A$, $B$, $C$ are the column vectors of the neutrino Yukawa matrix, e.g.~$A=(A_1,A_2,A_3)^\mathsf{T}$.
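The decomposition of the seesaw formula into outer products of the columns can be verified directly; the sketch below uses random complex Yukawa entries and arbitrary toy values for the heavy masses and the VEV:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random complex neutrino Yukawa matrix and toy right-handed masses (GeV)
Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
MA, MB, MC = 1e9, 1e11, 1e13
v = 174.0  # illustrative Higgs VEV in GeV

Mnu_seesaw = v**2 * Y @ np.linalg.inv(np.diag([MA, MB, MC])) @ Y.T

A, B, C = Y[:, 0], Y[:, 1], Y[:, 2]
Mnu_columns = v**2 * (np.outer(A, A) / MA + np.outer(B, B) / MB + np.outer(C, C) / MC)

# Both expressions give the same light neutrino mass matrix
assert np.allclose(Mnu_seesaw, Mnu_columns)
```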
SD is the assumption that
\begin{align}
\frac{AA^\mathsf{T}}{M_A}\gg \frac{BB^\mathsf{T}}{M_B}\gg \frac{CC^\mathsf{T}}{M_C} \;,
\end{align}
i.e.\ that the contribution of one of the right-handed neutrinos, the one with mass $M_A$, dominates $\mathbf{M}_{\nu}$, the one with mass $M_B$ is subdominant, and the one with mass $M_C$ can be neglected. Sequential Dominance thus
corresponds to a strong normal hierarchy, i.e.\ $m_{3}^{\nu}\gg m_{2}^{\nu}\gg m_{1}^{\nu}$. With this hierarchy and the simplifying assumption $A_1=0$, the neutrino mixing angles at leading order satisfy \cite{King:2002nf}
\begin{align}
\tan \theta_{12}^{\nu} &\approx \frac{|B_1|}{c^{\nu}_{23}|B_2|\cos(\phi^\prime_{B_2})-s^{\nu}_{23}|B_3|\cos(\phi^\prime_{B_3})}\,, \label{eq:SD_th12}\\
\theta_{13}^{\nu} &\approx \frac{|B_1||A_2^* B_2+A_3^* B_3|}{(|A_2|^2+|A_3|^2)^{3/2}} \frac{M_A}{M_B}\,,\label{eq:SD_th13} \\
\tan \theta_{23}^{\nu} &\approx\frac{|A_2|}{|A_3|}\,. \label{eq:SD_th23}
\end{align}
We used the definitions
\begin{align}
\phi^\prime_{B_2} &= \phi_{B_2} - \phi_{B_1} - \phi^\nu_2 - \chi^\nu\,, \label{eq:def_phase2}\\
\phi^\prime_{B_3} &= \phi_{B_3} - \phi_{B_1} + \phi_{A_2} - \phi_{A_3} - \phi^\nu_2 - \chi^\nu\,, \label{eq:def_phase3}
\end{align}
and the (complex) parameters in $\mathbf{Y}_\nu$ are written in the form $X=|X|e^{i\phi_X}$ $(X\in\{A_i,B_i\})$. Without loss of generality, $M_A$, $M_B$, $M_C$ are chosen real and positive. The values of the two auxiliary phases $\phi^\nu_2$ and $\chi^\nu$ (see the convention in Eq.~\eqref{eq:general_matrix_3}) are fixed by the equations
\begin{align}
\phi^\nu_2 - \phi_{A_2} + \phi_{B_1} &\approx \arg{(A_2^* B_2+A_3^* B_3)}\,, \\
c^{\nu}_{23}|B_2|\sin(\phi^\prime_{B_2}) &\approx s^{\nu}_{23}|B_3|\sin(\phi^\prime_{B_3})\,,
\end{align}
such that the angles $\theta_{12}^\nu$ and $\theta_{13}^\nu$ are real (which is already assumed in Eq.~\eqref{eq:SD_th12} and \eqref{eq:SD_th13}).
\vspace{0.5cm}
{\bf Before the measurement of $\theta_{13}^\text{PMNS}$: TB mixing via CSD1}\newline
Before the measurement of the ``reactor angle'' $\theta_{13}^\text{PMNS}$, the values of the mixing angles were consistent with the simple scenario
\begin{align}
\sin^2\theta^{\text{PMNS}}_{12}&\approx \tfrac{1}{3},\\
\sin^2\theta^{\text{PMNS}}_{13}&\approx 0,\\
\sin^2\theta^{\text{PMNS}}_{23}&\approx \tfrac{1}{2},
\end{align}
which can be summarized with a PMNS matrix of the form
\begin{align}
U_\text{PMNS}&=
\begin{pmatrix}
\sqrt{\tfrac{2}{3}} & \sqrt{\tfrac{1}{3}} & 0 \\
-\sqrt{\tfrac{1}{6}} & \sqrt{\tfrac{1}{3}} & \sqrt{\tfrac{1}{2}} \\
\sqrt{\tfrac{1}{6}} & -\sqrt{\tfrac{1}{3}} & \sqrt{\tfrac{1}{2}} \\
\end{pmatrix},
\end{align}
with bases for the rows and columns defined by the standard PDG convention stated in Eq.~\eqref{eq:matrix_explicit}: the matrix is $U_{fi}$, where the indices $f=e,\mu,\tau$ and $i=1,2,3$. This pattern is called tri-bimaximal (TB) mixing~\cite{Harrison:2002er,Xing:2002sw}. If the PMNS matrix takes the TB form, the atmospheric angle $\theta^{\text{PMNS}}_{23}$ is maximal, while the reactor angle $\theta^{\text{PMNS}}_{13}$ is predicted to be zero, and there is no complex CP-violating phase.
The TB mixing matrix in the neutrino sector can be realized with SD by imposing the conditions
$|A_1|=0\,,
|A_2|=|A_3|\,, |B_1|=|B_2|=|B_3|\,, \phi'_{B_2}=0\,,
\phi'_{B_3}=\pi$,
which corresponds to $\mathbf{Y}_\nu$ and $ \mathbf{M}_{R}$ of the form
\begin{align}
\mathbf{Y}_\nu=\begin{pmatrix} 0 & b \cr a & b \cr -a & b \end{pmatrix} \,, \quad \mathbf{M}_{R}=\begin{pmatrix} M_A & 0 \cr 0 & M_B \end{pmatrix} \,,\label{eq:alignement-CSD}
\end{align}
where the parameters $a$, $b$ are in general complex.
We assume that the heaviest right-handed neutrino is completely decoupled, either because it is very heavy or because the corresponding neutrino Yukawa couplings are very small; thus the contribution from the subsubleading term $CC^\mathsf{T}/M_C$ in the light neutrino mass matrix is neglected.\footnote{Alternatively, in $\mathrm{SU}(5)$ models, we may assume that only two right-handed neutrinos exist.}
When $\mathbf{Y}_e$ is diagonal, the PMNS matrix will be completely determined by the mixing in the neutrino sector. In this case the PMNS mixing matrix has the TB form discussed above:
The condition $|A_2|=|A_3|$ gives rise to $\tan{\theta^\nu_{23}}=1$, whereas $|B_1|=|B_2|=|B_3|$ with the phase relations imply $\tan{\theta^\nu_{12}}=1/2$. Furthermore, substituting $ \phi'_{B_2}=0$ and $ \phi'_{B_3}=\pi$ into the definitions in Eq.~\eqref{eq:def_phase2} and \eqref{eq:def_phase3}, we get the relation $(\phi_{B_3}-\phi_{A_3})-(\phi_{B_2}-\phi_{A_2})=\pi$ and it follows immediately that $\theta_{13}^\nu = 0$. The combination of SD and the above set of relations for the neutrino Yukawa couplings $A_i$ and $B_i$ is known as constrained sequential dominance \cite{King:2005bj}, which we may also refer to as CSD1.
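That the CSD1 structure indeed yields TB mixing (for diagonal $\mathbf{Y}_e$) can be checked by numerically diagonalizing the resulting light neutrino mass matrix; the sketch below uses toy real values for $a$, $b$ with the $M_A$ contribution dominant:

```python
import numpy as np

a, b = 1.0, 0.3             # toy real couplings; M_A = M_B = 1 so the a-term dominates
A = np.array([0.0, a, -a])  # dominant column of Y_nu (CSD1 alignment)
B = np.array([b, b, b])     # subdominant column of Y_nu

Mnu = np.outer(A, A) + np.outer(B, B)  # overall factor v^2 dropped

mass, U = np.linalg.eigh(Mnu)  # eigenvalues ascending: m1 = 0 < m2 < m3
# Tri-bimaximal pattern (up to irrelevant signs of the eigenvectors):
print(U[0, 1]**2)  # sin^2(theta12) = 1/3
print(U[0, 2])     # theta13 = 0
print(U[1, 2]**2)  # sin^2(theta23) = 1/2
```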
The exact TB mixing pattern in the PMNS matrix, however, has been ruled out ever since the measurement of a non-zero $\theta^{\text{PMNS}}_{13}$ mixing angle at T2K~\cite{Abe:2011sj}, Double Chooz~\cite{Abe:2011fz}, RENO~\cite{Ahn:2012nd} and Daya Bay~\cite{An:2012eh}. This implies that the TB structure of the PMNS matrix needs to be perturbed in some way.
\vspace{0.5cm}
{\bf TB neutrino mixing plus charged lepton mixing contribution:}\newline
After the $\theta^{\text{PMNS}}_{13}$ measurement, it was realised that in a GUT context, where a $1$-$2$ mixing contribution from the charged lepton sector is typically present due to GUT relations between $\mathbf{Y}_e$ and $\mathbf{Y}_d$ (with $\mathbf{Y}_d$ often being the dominant source for the CKM mixing and thus featuring a sizeable $1$-$2$ mixing), TB mixing could still be an attractive mixing pattern in the neutrino sector. The angle $\theta^{\text{PMNS}}_{13}$ is in this scenario generated via the $1$-$2$ charged lepton mixing contribution.
In typical flavour GUT models $\theta^{eL}_{12}$ will be dominant, because it is related to the largest (Cabibbo) mixing angle in the quark sector. This motivates the assumption that only $\theta^{eL}_{12}$ is non-zero ($\theta^{eL}_{13}=0$, $\theta^{eL}_{23}=0$). Under the assumption that $\theta^{eL}_{12}\ll 1$, and remembering that TB mixing implies $\theta_{13}^\nu=0$, the general formulas for the lepton mixing angles from Eqs.~\eqref{appeq:eqs12}--\eqref{appeq:eqs23}, including charged lepton contributions, give (cf.\ \cite{Antusch:2005kw})
\begin{align}
s_{12}^\text{PMNS} e^{-i\delta_{12}^\text{PMNS}} &\approx s_{12}^{\nu} e^{-i (\delta_{12}^{\nu}+\theta_{12}^{eL} t_{12}^\nu c_{23}^\nu \sin(\delta_{12}^\nu-\delta_{12}^{eL}))} + \theta_{12}^{eL} c_{12}^{\nu} c_{23}^{\nu} e^{-i\delta_{12}^{eL}}\,, \label{eqs12} \\
s_{13}^\text{PMNS} e^{-i \delta_{13}^\text{PMNS}} &\approx \theta_{12}^{eL} s_{23}^\nu e^{-i (\delta_{23}^{\nu}+\delta_{12}^{eL})}\,, \label{eqs13} \\
s_{23}^\text{PMNS} e^{-i \delta_{23}^\text{PMNS}} &\approx s_{23}^{\nu} e^{-i \delta_{23}^{\nu}} \,, \label{eqs23}
\end{align}
where $s^\nu_{ij}\equiv\sin\theta^\nu_{ij}$, $c^\nu_{ij}\equiv\cos\theta^\nu_{ij}$ and $t^\nu_{ij}\equiv\tan\theta^\nu_{ij}$. In particular, from Eq.~\eqref{eqs13} we obtain for the PMNS angle $\theta_{13}^\text{PMNS}$:
\begin{align}
s_{13}^\text{PMNS}&\approx \theta_{13}^\text{PMNS} \approx \theta_{12}^{eL} s_{23}^\nu \;,\label{eq:13pmnsCSD1}
\end{align}
in leading order in $\theta_{12}^{eL}$. With approximate TB mixing realised in the neutrino sector, e.g.\ via CSD1, $s_{23}^\nu = s_{23}^\text{PMNS} = 1/\sqrt{2}$ and we obtain $\theta_{13}^\text{PMNS} \approx \theta_{12}^{eL}/\sqrt{2}$. It has been pointed out that with $\theta_{12}^{eL} \approx \theta_\mathrm{C}$ one would obtain $\theta_{13}^\text{PMNS} \approx 9.2^\circ$, close to the experimental value at that time, and models along this line have been constructed e.g.\ in \cite{Meroni:2012ty,Antusch:2013kna}. However, with the present rather accurate measurement of $\theta_{13}^\text{PMNS}$ it has turned out that the predicted value for $\theta_{13}^\text{PMNS}$ from this consideration is not in agreement with the experimental data.
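The quoted number follows from simple arithmetic, taking for illustration a Cabibbo angle of about $13.0^\circ$:

```python
import math

theta_C = 13.02  # illustrative value of the Cabibbo angle in degrees
theta13_pmns = theta_C / math.sqrt(2)  # theta13^PMNS ~ theta12^eL / sqrt(2)
print(round(theta13_pmns, 1))  # 9.2
```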
\vspace{0.5cm}
{\bf A novel scheme for PMNS mixing: CSD2 plus charged lepton corrections}\newline
In \cite{Antusch:2011ic,Antusch:2013wn} it was proposed to use a novel vacuum alignment of the flavons, such that the form of $\mathbf{Y}_\nu$ differs from the one in Eq.~\eqref{eq:alignement-CSD}. In particular, the alternative flavon vacuum alignment retains the dominant flavon VEV in the first column, but makes a different subdominant flavon choice in the second column. This new form is called CSD2~\cite{Antusch:2011ic}, and it comes in two varieties based on two different VEV alignments of the subdominant column.\footnote{We remark that alongside CSD1 and CSD2, there exist further interesting types of vacuum alignment of flavons, such as CSD3~\cite{King:2013iva} and CSD4~\cite{King:2013xba,King:2013hoa}. These alignments, though, generate a good $\theta^{\text{PMNS}}_{13}$ from the neutrino sector alone, making them less attractive in a GUT setup (where the charged lepton contribution is linked to the down sector). We shall thus not consider CSD3 or CSD4 further in this paper.} They are denoted by $\phi_{102}$ and $\phi_{120}$, and they respectively correspond to the following neutrino Yukawa matrices:
\begin{align}
\mathbf{Y}_\nu^{(102)}=\begin{pmatrix} 0 & b \cr a & 0 \cr -a & 2b \end{pmatrix}\,, \quad
\mathbf{Y}_\nu^{(120)}=\begin{pmatrix} 0 & b \cr a & 2b \cr -a & 0 \end{pmatrix}\,.
\label{yukcsd2}
\end{align}
With these alternative vacuum alignments, the seesaw mechanism from Eq.~\eqref{eq:Mnu-general} (with $CC^\mathsf{T}/M_C \to 0$) delivers the following mass matrices for the left-handed neutrinos:
\begin{align}
\mathbf{M}_{\nu}^{(102)} &= m_a \begin{pmatrix} 0 & 0 & 0 \cr 0 & 1 & -1 \cr 0 & -1 & 1 \end{pmatrix} + m_b \begin{pmatrix} 1 & 0 & 2 \cr 0 & 0 & 0 \cr 2 & 0 & 4 \end{pmatrix}
= m_a \begin{pmatrix} \epsilon e^{i\alpha} &0& 2 \epsilon e^{i\alpha} \cr 0&1&-1\cr 2\epsilon e^{i\alpha} & -1&1+4\epsilon e^{i\alpha} \end{pmatrix}\,,\label{eq:Mnu-CSD2-b}\\
\mathbf{M}_{\nu}^{(120)} &= m_a \begin{pmatrix} 0 & 0 & 0 \cr 0 & 1 & -1 \cr 0 & -1 & 1 \end{pmatrix} + m_b \begin{pmatrix} 1 & 2 & 0 \cr 2 & 4 & 0 \cr 0 & 0 & 0 \end{pmatrix}
= m_a \begin{pmatrix} \epsilon e^{i\alpha} & 2 \epsilon e^{i\alpha} & 0 \cr 2\epsilon e^{i\alpha} & 1+4\epsilon e^{i\alpha} & -1 \cr 0 & -1 & 1 \end{pmatrix}\,,\label{eq:Mnu-CSD2-a}
\end{align}
where the complex mass parameters $m_a$ and $m_b$ are defined by
\begin{align}
m_a:=\frac{v^2 a^2}{M_A}\,,\quad
m_b:=\frac{v^2 b^2}{M_B}, \label{eq:ma-mb}
\end{align}
while their ratio is parametrized by the modulus $\epsilon$ and phase angle $\alpha$ via
\begin{align}
\frac{m_b}{m_a} &\equiv \epsilon e^{i\alpha}.
\end{align}
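The seesaw construction of the mass matrix from the two column alignments can be cross-checked numerically; the toy values below for the $3$ real parameters are arbitrary:

```python
import numpy as np

m_a, eps, alpha = 1.0, 0.1, 0.7  # arbitrary toy values for the 3 real parameters
e = eps * np.exp(1j * alpha)     # epsilon * e^{i alpha} = m_b / m_a

A = np.array([0.0, 1.0, -1.0])   # dominant alignment
B = np.array([1.0, 0.0, 2.0])    # phi_102 subdominant alignment

Mnu = m_a * np.outer(A, A) + (m_a * e) * np.outer(B, B)

# Matches the stated parametrisation of M_nu^(102) entry by entry
expected = m_a * np.array([[e, 0, 2 * e],
                           [0, 1, -1],
                           [2 * e, -1, 1 + 4 * e]])
assert np.allclose(Mnu, expected)
```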
By using the overall phase freedom for $\mathbf{\bar{5}}_i$ (which form a flavour triplet in a complete flavour theory, as already mentioned earlier), we can absorb the phase from the parameter $m_a$, making $m_a$ real.\footnote{In the effective theory with no right-handed neutrinos, the phase redefinitions of $\mathbf{1}_k$ (in contrast to the phase from $\mathbf{\bar{5}}_i$) do not appear anywhere; the phases do not change $m_a$ and $m_b$, since they cancel in the fractions of Eq.~\eqref{eq:ma-mb}.} The light neutrino mass matrix in Eq.~\eqref{eq:Mnu-CSD2-b} or \eqref{eq:Mnu-CSD2-a} will thus be parametrized by $3$ real parameters: $m_a$, $\epsilon$ and $\alpha$.
It is clear from the above equation that the neutrino sector mixing matrix depends only on the ratio $m_b/m_a = \epsilon e^{i\alpha}$, while the size of the parameter $m_a$ determines the overall scale of the masses. With the assumption $M_A \ll M_B$ we get $|m_b|\ll|m_a|$, and $\epsilon$ can be used as an expansion parameter for the angles in the neutrino rotation matrix. Besides the contribution from the neutrino sector to the lepton mixing parameters in the PMNS matrix, there is also one coming from the charged leptons. Using Eqs.~\eqref{appeq:12pmns_102}--\eqref{appeq:23pmns_102}, we obtain the PMNS angles as an expansion in the parameters $\epsilon$ and $\theta_{12}^{eL}$ when both lepton sectors contribute (cf.\ \cite{Antusch:2013wn,King:2005bj})
\begin{align}
\theta_{12}^\text{PMNS} &\approx 35.3^\circ - \frac{\theta_{12}^{eL}}{\sqrt{2}}\cos{\gamma}\,, \label{eq:12pmns} \\
\theta_{13}^\text{PMNS} &\approx \frac{1}{\sqrt{2}} \big( \epsilon^2 + {\theta_{12}^{eL}}^2 \pm 2\epsilon\theta_{12}^{eL} \cos{(\alpha + \gamma)} \big)^{1/2}\,, \label{eq:13pmns}\\
\theta_{23}^\text{PMNS} &\approx 45^\circ \mp \epsilon \cos{\alpha}\,, \label{eq:23pmns}
\end{align}
for the two CSD2 scenarios $\mathbf{M}_{\nu}^{(102)}$ and $\mathbf{M}_{\nu}^{(120)}$. As we can see, CSD1 and CSD2 share the good prediction that at leading order $\theta_{12}^\text{PMNS} \approx 35.3^\circ$ and $\theta_{23}^\text{PMNS} \approx 45^\circ $, as in the TB mixing pattern; however, a non-zero $\theta_{13}^\text{PMNS}$ is already predicted from the neutrino sector (even if $\theta_{12}^{eL}$ were zero). Interestingly, in contrast to CSD1 models where the decay asymmetry for leptogenesis is suppressed \cite{Antusch:2006cw}, it has been shown in \cite{Antusch:2011ic} that in CSD2 it is unsuppressed and directly linked to the leptonic Dirac CP phase $\delta^\text{PMNS}$.
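These leading-order expansions are easy to evaluate numerically. The following sketch (the function name and the sample parameter values are ours, chosen purely for illustration) encodes Eqs.~\eqref{eq:12pmns}--\eqref{eq:23pmns}, with `sign` selecting between the $\pm$/$\mp$ choices of the two CSD2 scenarios:

```python
import math

def pmns_angles_csd2(eps, alpha, th12eL, gamma, sign=+1):
    """Leading-order PMNS angles of Eqs. (12pmns)-(23pmns), in radians.

    sign = +1 picks the upper signs (one CSD2 scenario), -1 the lower.
    """
    th12 = math.radians(35.3) - th12eL / math.sqrt(2) * math.cos(gamma)
    th13 = math.sqrt(eps**2 + th12eL**2
                     + sign * 2 * eps * th12eL * math.cos(alpha + gamma)) / math.sqrt(2)
    th23 = math.radians(45.0) - sign * eps * math.cos(alpha)
    return th12, th13, th23
```

For $\alpha + \gamma = \pi/2$ the interference term drops out and $\theta_{13}^\text{PMNS} \approx \sqrt{(\epsilon^2 + {\theta_{12}^{eL}}^2)/2}$, which the sketch reproduces.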
A CSD2 set-up in combination with charged lepton corrections was first considered in a renormalizable model based on an $\mathrm{A}_4$ family symmetry and a specific $\mathrm{SU}(5)$ GUT set-up in Ref.~\cite{Antusch:2013wn}, and more recently also in \cite{Antusch:2017ano}. In the models considered in the present paper, we assume that the neutrino mass matrix $\mathbf{M}_{\nu}$ has the CSD2 form of either the $\phi_{102}$ or $\phi_{120}$ flavon vacuum alignment, as written down in Eq.~\eqref{eq:Mnu-CSD2-b} and \eqref{eq:Mnu-CSD2-a}, and we explore the possible sets of GUT operators, which essentially predict $\theta_{12}^{eL}$, to find out which of them are most promising for model building.
\subsection{Candidates for GUT operators in the Yukawa sector\label{sec:GUT-operators}}
We have just set the texture of $\mathbf{Y}_u$, $\mathbf{Y}_d$, $\mathbf{Y}_e$ and $\mathbf{M}_{\nu}$ at the GUT scale in the previous part of this section, but there are still undetermined quantities which are an integral part of a model: the Clebsch-Gordan coefficients $c_x$, $c_y$ and $c_z$. Their values depend on the yet unspecified choices for the unknown parts $Y$ of the $\mathrm{SU}(5)$ GUT operator in Eq.~\eqref{eq:operator-d} and \eqref{eq:operator-e}.
The possible Clebsch factors between the down and charged lepton sector in these operators have been classified in \cite{Antusch:2009gu,Antusch:2013rxa}. At first sight, there is potentially a very large number of viable choices for the Clebsch coefficients. The exact relations with Clebsch coefficients are valid only at the GUT scale, while the measured masses and mixing angles of the fermion sector are considered at $M_Z$. The RGE running of these parameters from the GUT scale to low scales, as well as unknown SUSY threshold corrections at the SUSY scale, can to some extent ``repair'' the high-scale relations so that they are compatible with experiment at low energy. Therefore it might appear that there are few constraints on the combination of Clebsch factors yielding realistic low energy masses and mixing angles.
It turns out, however, that we can greatly limit the number of Clebsch combinations by considering the following double ratio of the first two generations of Yukawa couplings (which, as we shall show below, is approximately invariant under RGE and SUSY threshold corrections):
\begin{align}
d:= \frac{y_\mu y_d}{ y_e y_s}.
\end{align}
In the model under consideration at the GUT scale, this ratio can be approximately written as a ratio of Clebsch factors
\begin{align}
d\Big|_{M_{\text{GUT}}} = \frac{y_\mu y_d}{ y_e y_s}\Big|_{M_{\text{GUT}}} \approx \Big|\frac{c_x^2}{c_y c_z}\Big |.\label{eq:doubleratio-GUT}
\end{align}
The last approximation comes from the following approximate formulas for the Yukawa couplings:
\begin{align}
y_d \approx \Big|\frac{yz}{x}\Big|\,, \quad y_s \approx |x| \,, \quad y_e \approx \Big|\frac{c_y c_z}{c_x}\Big| \Big|\frac{yz}{x}\Big|\,, \quad y_\mu \approx |c_x| |x|\,,
\end{align}
in the case where $x \gg y,z$, using the texture of Eq.~\eqref{eq:Yukawa-texture-ude}. On the other hand, this ratio is experimentally determined at low energies (at the $Z$ scale) to be
\begin{align}
d\Big|_{M_Z} = \frac{m_\mu m_d}{ m_e m_s} = 10.7\,{}^{+1.6}_{-0.9}\,,\label{eq:doubleratio-Z}
\end{align}
with the errors coming mostly from the quark masses $m_s$ and $m_d$, while the lepton masses $m_e$ and $m_\mu$ are very well measured. The values at $M_Z$ were taken from \cite{Antusch:2013jca}; the asymmetry of the error mostly comes from the measurement of $m_d$.
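The cancellation that makes $d$ depend only on Clebsch factors can be checked directly from the leading-order expressions above. In the sketch below, the parameter values are arbitrary illustrative inputs (with $x \gg y,z$), not fitted ones:

```python
def approx_yukawas(x, y, z, cx, cy, cz):
    """Leading-order Yukawa couplings for x >> y, z."""
    yd = abs(y * z / x)
    ys = abs(x)
    ye = abs(cy * cz / cx) * abs(y * z / x)
    ymu = abs(cx) * abs(x)
    return yd, ys, ye, ymu

cx, cy, cz = 3.0, 1.5, 0.5                       # one combination from Table 1
yd, ys, ye, ymu = approx_yukawas(0.05, 0.003, 0.001, cx, cy, cz)
d = ymu * yd / (ye * ys)                         # the double ratio
# x, y and z drop out completely: d = |cx^2 / (cy * cz)| = 12 here
```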
The ratio $d$ has the remarkable property that it is stable both under RGE running and under SUSY threshold corrections~\cite{Hempfling:1993kv,Hall:1993gn,Carena:1994bv,Blazek:1995nv,Antusch:2008tf}. This is easy to see by noticing that single ratios $y_d/y_s$ and $y_e/y_\mu$ within the same sector are already stable; the reason for using the double ratio is that its GUT scale expression depends only on group-theoretical Clebsch factors, and not on any of the unknown parameter values.
We now argue that the ratios $y_d/y_s$ and $y_e/y_\mu$ are stable under RGE and SUSY threshold corrections:
\begin{enumerate}
\item \textbf{RGE running}\par
We consider the $1$-loop RGE equations in the MSSM \cite{Martin:1993zk} for the down and charged lepton Yukawa (written in the LR convention):\footnote{We neglect the effects of the neutrino Yukawa couplings here (cf.\ e.g.\ \cite{Antusch:2002ek}). We can assume they are small, since they would stem from an effective operator in a model realisation. }
\begin{align}
\tfrac{d}{dt}\mathbf{Y}_d&=\tfrac{1}{16\pi^2}\left(\mathrm{Tr}(3\mathbf{Y}_d^\dagger\mathbf{Y}_d+\mathbf{Y}_e^\dagger\mathbf{Y}_e)+3\mathbf{Y}_d\YD^\dagger+\mathbf{Y}_u\YU^\dagger-\tfrac{16}{3}g_3^2-3g_2^2-\tfrac{7}{15}g_1^2\right)\,\mathbf{Y}_d,\\
\tfrac{d}{dt}\mathbf{Y}_e&=\tfrac{1}{16\pi^2}\left(\mathrm{Tr}(3\mathbf{Y}_d^\dagger\mathbf{Y}_d+\mathbf{Y}_e^\dagger\mathbf{Y}_e)+3\mathbf{Y}_e\YE^\dagger-3g_2^2-\tfrac{9}{5}g_1^2\right)\,\mathbf{Y}_e,
\end{align}
where $t=\log\mu$ is the log of the renormalization scale $\mu$, and the explicit writing of unit matrices next to the scalar terms has been suppressed in the above notation. We use the left and right basis of the matrices $\mathbf{Y}_d$ and $\mathbf{Y}_e$, where they are diagonal, which is simplest for our considerations. Due to the strong hierarchy in the down and charged lepton sector masses, in particular
$y_d,y_s\ll y_b$ and $y_e,y_\mu\ll y_\tau$, the 3rd generation Yukawa terms from the trace and the gauge coupling terms dominate the RGE beta functions of the first two families, and the contributions of first two generation Yukawas can be neglected; thus
\begin{align}
\tfrac{d}{dt}y_d&\approx \tfrac{1}{16\pi^2}\,y_d\,(3|y_b|^2+|y_\tau|^2-\tfrac{16}{3}g_3^2-3g_2^2-\tfrac{7}{15}g_1^2),\\
\tfrac{d}{dt}y_s&\approx \tfrac{1}{16\pi^2}\,y_s\,(3|y_b|^2+|y_\tau|^2-\tfrac{16}{3}g_3^2-3g_2^2-\tfrac{7}{15}g_1^2),\\
\tfrac{d}{dt}y_e&\approx\tfrac{1}{16\pi^2}\,y_e\,(3|y_b|^2+|y_\tau|^2-3g_2^2-\tfrac{9}{5}g_1^2),\\
\tfrac{d}{dt}y_\mu&\approx\tfrac{1}{16\pi^2}\,y_\mu\,(3|y_b|^2+|y_\tau|^2-3g_2^2-\tfrac{9}{5}g_1^2).
\end{align}
Now it is clear that $dy_d/y_d\approx dy_s/y_s$ and $dy_e/y_e\approx dy_\mu/y_\mu$, consequently keeping the ratios $y_d/y_s$ and $y_e/y_\mu$ approximately constant under RG running in the MSSM. Similar arguments hold also for the SM RG running below the SUSY scale.
\item \textbf{SUSY threshold corrections}\par
At the SUSY scale, where the MSSM is matched to the SM, the SUSY threshold corrections \cite{Hempfling:1993kv,Hall:1993gn,Carena:1994bv,Blazek:1995nv,Antusch:2008tf} of the Yukawa couplings are implemented as follows~\cite{Antusch:2013jca}:
\begin{align}
\label{thresh:yu}
\mathbf{Y}_u^\text{MSSM} &\approx \frac{\mathbf{Y}_u^\text{SM}}{\sin{\beta}}\,,\\
\label{thresh:yd}
\mathbf{Y}_d^\text{MSSM} &\approx \text{diag}\left(\frac{1}{1+\eta_q},\frac{1}{1+\eta_q},\frac{1}{1+\eta_b}\right) \frac{\mathbf{Y}_d^\text{SM}}{\cos{\beta}}\,,\\
\label{thresh:yl}
\mathbf{Y}_e^\text{MSSM} &\approx \frac{\mathbf{Y}_e^\text{SM}}{\cos{\beta}}\,.
\end{align}
The above equations are written in the basis where $\mathbf{Y}_u$ is diagonal and in the left-right convention for the Yukawa matrices. The SUSY threshold corrections are parametrized by $\eta_q$ and $\eta_b$, and they also depend on the $\tan\beta$ parameter defined as the ratio of the VEVs of the Higgs fields $H_u$ and $H_d$ in the MSSM: $\tan{\beta} := v_u/v_d$. Note that in Eqs.~\eqref{thresh:yu}--\eqref{thresh:yl}, we only considered $\tan{\beta}$ enhanced contributions from down-type quarks.\footnote{The analysis in fact also covers the general case. The 3rd family SUSY threshold corrections can be absorbed into $\tan\beta$ by relabelling it as $\tan\bar{\beta}$, cf.~\cite{Antusch:2013jca}. The $\eta_l$ correction to the first two families also has no qualitative effect on predictions of observables: it would change the overall scale of the $1$-$2$ charged lepton block, which can be compensated by a change in $x$, $y$ and $z$ by a common factor, while the consequent change of the overall scale in the down sector can then be absorbed by a shift in $\eta_q$. This leads to the same low energy prediction at a shifted parameter point.} From Eq.~\eqref{thresh:yd}, it is clear that the $1/\cos\beta$ and $1/(1+\eta_q)$ factors drop out of the ratio $y_d/y_s$. Similarly, according to Eq.~\eqref{thresh:yl} the $1/\cos\beta$ factor drops out of the ratio $y_e/y_\mu$.
\end{enumerate}
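The RGE part of this stability argument can be illustrated with a small numerical sketch. Below, two couplings with a common (here constant, purely illustrative) beta-function coefficient are Euler-integrated from $M_Z$ to the GUT scale; since the multiplicative factor is identical at every step, their ratio is preserved, just as for $y_d/y_s$ and $y_e/y_\mu$ above:

```python
import math

def run_ratio(y1, y2, beta_common, t0, t1, steps=1000):
    """Euler-integrate dy/dt = y * beta_common(t) / (16 pi^2) for two
    couplings sharing the same beta coefficient; return the final ratio."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k = h * beta_common(t) / (16 * math.pi**2)
        y1 += y1 * k
        y2 += y2 * k
        t += h
    return y1 / y2

# toy beta coefficient mimicking 3|y_b|^2 + |y_tau|^2 minus gauge terms
# (all numbers here are illustrative constants, not running couplings)
beta = lambda t: 3 * 0.7**2 + 0.5**2 - (16 / 3) * 1.1**2 - 3 * 0.65**2 - (7 / 15) * 0.46**2

r0 = 0.003 / 0.05                                # initial y_d / y_s
r1 = run_ratio(0.003, 0.05, beta, t0=math.log(91.2), t1=math.log(2e16))
# r1 agrees with r0 up to machine precision
```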
\par
We have thus seen that the double ratio $d$ is a very useful quantity and that it has approximately the same value at all scales. Equating Eq.~\eqref{eq:doubleratio-GUT} and \eqref{eq:doubleratio-Z}, the ratio of Clebsch factors $|c_x^2/(c_y c_z)|$ must thus be around $10.7$. This guideline enables us to greatly reduce the number of relevant Clebsch factor combinations that one needs to consider, since we automatically know that large deviations from this ratio will not provide a good fit to the low energy observables.
Taking the list of possible Clebsch factors of operators from \cite{Antusch:2009gu,Antusch:2013rxa}, we compute all the combinations giving a good value for the double ratio. Although the double ratio is only sensitive to the product $c_y c_z$, permutations of $c_y$ and $c_z$ are considered as different cases since the model predictions will be dependent on the individual values. As will be argued in Section~\ref{sec:parameters}, a change of sign in any of the Clebsch-Gordan coefficients returns an equivalent solution. Therefore only the absolute values $|c_x|$, $|c_y|$ and $|c_z|$ are distinguished. In addition, for $c_x$ we restrict the values to $3$, $9/2$ and $6$ (when we run up the ratio $y_\mu /y_s $ to the GUT scale, its value becomes roughly $4.5$ if there were no threshold corrections, while threshold effects can readily change this to $3$ and $6$, but not much further) and the values for $c_y$ and $c_z$ are then chosen in a way that the double ratio lies between $9$ and $14$, corresponding to a roughly $2\sigma$ region in Eq.~\eqref{eq:doubleratio-Z}.
We identify in this way the potentially good Clebsch combinations, and list them in Table~\ref{tab1}. This is the list of combinations we shall consider further in the numerical analysis of the models in the next sections.
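The selection just described can be reproduced by a brute-force scan. The candidate list of Clebsch values below is our stand-in for the classification of \cite{Antusch:2009gu,Antusch:2013rxa} (it contains the values appearing in Table~\ref{tab1}); exact rational arithmetic avoids rounding issues at the boundary value $9$:

```python
from fractions import Fraction as F

# candidate CG values (stand-in list; see the classification cited in the text)
cg_values = [F(1, 6), F(1, 2), F(2, 3), F(1), F(3, 2), F(2), F(3), F(9, 2), F(6), F(9)]
cx_values = [F(3), F(9, 2), F(6)]                # restricted range for c_x

good = sorted(
    (cx, cy, cz)
    for cx in cx_values
    for cy in cg_values
    for cz in cg_values
    if F(9) <= cx**2 / (cy * cz) <= F(14)        # double-ratio window
)
```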
\renewcommand{\arraystretch}{1.3}
\begin{table}
\begin{align*}
\begin{array}{c@{\kern1.0em}c@{\kern1.0em}c}
\toprule
{c_x,c_y,c_z} & {c_x,c_y,c_z} & {c_x,c_y,c_z} \\
\midrule
{3,\frac{1}{6},\frac{9}{2}} & {\frac{9}{2},\frac{1}{6},9} & {6,\frac{1}{2},6} \\
{3,\frac{1}{6},6} & {\frac{9}{2},\frac{1}{2},3} & {6,\frac{2}{3},\frac{9}{2}} \\
{3,\frac{1}{2},\frac{3}{2}} & {\frac{9}{2},\frac{1}{2},\frac{9}{2}} & {6,\frac{2}{3},6} \\
{3,\frac{1}{2},2} & {\frac{9}{2},\frac{2}{3},3} & {6,1,3} \\
{3,\frac{2}{3},1} & {\frac{9}{2},1,\frac{3}{2}} & {6,\frac{3}{2},2} \\
{3,\frac{2}{3},\frac{3}{2}} & {\frac{9}{2},1,2} & {6,2,\frac{3}{2}} \\
{3,1,\frac{2}{3}} & {\frac{9}{2},\frac{3}{2},1} & {6,2,2} \\
{3,1,1} & {\frac{9}{2},\frac{3}{2},\frac{3}{2}} & {6,3,1} \\
{3,\frac{3}{2},\frac{1}{2}} & {\frac{9}{2},2,1} & {6,\frac{9}{2},\frac{2}{3}} \\
{3,\frac{3}{2},\frac{2}{3}} & {\frac{9}{2},3,\frac{1}{2}} & {6, 6,\frac{1}{2}} \\
{3,2,\frac{1}{2}} & {\frac{9}{2},3,\frac{2}{3}} & {6,6,\frac{2}{3}} \\
{3,\frac{9}{2},\frac{1}{6}} & {\frac{9}{2},\frac{9}{2},\frac{1}{2}} & \\
{3,6,\frac{1}{6}} & {\frac{9}{2},9,\frac{1}{6}} & \\
\bottomrule
\end{array}
\end{align*}
\caption{The list of all combinations of $\mathrm{SU}(5)$ Clebsch-Gordan coefficients (only absolute values), which provide the
Yukawa double ratio $\frac{y_\mu y_d}{y_s y_e} \approx \Big|\frac{c_x^2}{c_y c_z}\Big|$ in the range between $9$ and $14$. The possible values of these coefficients were taken from the classification in \cite{Antusch:2009gu,Antusch:2013rxa}.}
\label{tab1}
\end{table}
\renewcommand{\arraystretch}{1.0}
\section{Model implementation and analysis}
\label{sec:implementationmodel}
The model is implemented at the GUT scale, using the texture described in the previous section for specific combinations of CG coefficients in the charged lepton Yukawa couplings. In order to compare the observables with experimental data and fit the parameters of the model, we use the MSSM and SM RGEs for the running, including SUSY threshold corrections, with boundary conditions at the GUT scale. The fitting is done by calculating the $\chi^2$ of the observables.
\subsection{Model setup}
\label{sec:modelsetup}
\subsubsection{Texture}
\label{sec:texture}
The texture for the down-type quark and charged lepton Yukawa couplings is stated in Eq.~\eqref{eq:Yukawa-texture-ude}. The matrices are given by
\begin{align}
\mathbf{Y}_d = \begin{pmatrix} 0 & z & 0 \cr y e^{i \gamma} & x & 0 \cr 0 & 0 & y_b \end{pmatrix} \,, \quad \mathbf{Y}_e = \begin{pmatrix} 0 & c_y ye^{i \gamma} & 0 \cr c_z z & c_x x & 0 \cr 0 & 0 & y_{\tau} \end{pmatrix}\,.
\label{yuk:ydye}
\end{align}
According to the texture in Eq.~\eqref{eq:Yukawa-texture-ude}, the (symmetric) up-type Yukawa matrix is implemented in the following way
\begin{align}
\mathbf{Y}_u= \mathbf{U}_{23}(\theta^\text{CKM}_{23})\, \mathbf{U}_{12}(\theta^{uL}_{12})\, \text{diag}(y_u,y_c,y_t)\,\mathbf{U}^\textsf{T}_{12}(\theta^{uL}_{12})\,\mathbf{U}^\textsf{T}_{23}(\theta^\text{CKM}_{23})\,,
\label{yuk:yu}
\end{align}
with the unitary matrices
\begin{align}
\mathbf{U}_{23}(\theta^\text{CKM}_{23}) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos{\theta^\text{CKM}_{23}} & \sin{\theta^\text{CKM}_{23}} \\ 0 & -\sin{\theta^\text{CKM}_{23}} & \cos{\theta^\text{CKM}_{23}} \end{pmatrix}\,,\quad
\mathbf{U}_{12}(\theta^{uL}_{12}) = \begin{pmatrix} \cos{\theta^{uL}_{12}} & i\sin{\theta^{uL}_{12}} & 0 \\ i\sin{\theta^{uL}_{12}} & \cos{\theta^{uL}_{12}} & 0 \\ 0 & 0 & 1 \end{pmatrix}\,.
\end{align}
The factor $i$ in $\mathbf{U}_{12}$, which corresponds to a phase $\delta_{12}^{uL}=-\pi/2$, is introduced to get an imaginary $1$-$2$ element in $\mathbf{Y}_u$, and a potential rotation angle $\theta_{13}^{uL}$ in Eq.~\eqref{yuk:yu} (cf. Eq.~\eqref{eq:general_matrix_2_1}) is chosen equal to zero, such that the $1$-$3$ element is negligible, which realizes the texture in Eq.~\eqref{eq:general_texture_yu_yd} to a very good approximation.\footnote{Although according to Eq.~\eqref{yuk:yu} the $1$-$3$ and $3$-$1$ elements of $\mathbf{Y}_u$ do not vanish exactly, the relative correction of $\theta_{13}^\text{CKM}$ compared to the texture in Eq.~\eqref{eq:general_texture_yu_yd}, where the two entries are zero, is of order $y_c/y_t$, which is much smaller than the experimental uncertainty.} The values for $y_b$, $y_\tau$, $y_u$, $y_c$, $y_t$ and $\theta^\text{CKM}_{23}$ in Eq.~\eqref{yuk:ydye} and \eqref{yuk:yu} are set to the experimental values at the GUT scale provided in \cite{Antusch:2013jca}.
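For concreteness, the construction of Eq.~\eqref{yuk:yu} can be sketched as follows, with rough illustrative values rather than the fitted GUT-scale inputs of \cite{Antusch:2013jca}; note the transpose (not the Hermitian conjugate), the purely imaginary $1$-$2$ element, and the negligible residual $1$-$3$ element:

```python
import numpy as np

def U23(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]], dtype=complex)

def U12(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, 1j * s, 0], [1j * s, c, 0], [0, 0, 1]])

def Yu(yu, yc, yt, th23ckm, th12uL):
    """Up-type Yukawa texture of Eq. (yuk:yu)."""
    V = U23(th23ckm) @ U12(th12uL)
    return V @ np.diag([yu, yc, yt]).astype(complex) @ V.T   # transpose, not dagger

# rough illustrative values only
Y = Yu(yu=3e-6, yc=1.4e-3, yt=0.5, th23ckm=0.04, th12uL=0.05)
# Y is symmetric, its 1-2 element is purely imaginary, and the residual
# 1-3 element is of order yc * th12 * th23, i.e. negligible next to yt
```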
The CSD2 mechanism provides two choices of flavon VEVs which determine the neutrino Yukawa matrices, i.e. $\mathbf{Y}_\nu^{(102)}$ and $\mathbf{Y}_\nu^{(120)}$, as stated in Eq.~\eqref{yukcsd2}. After integrating out the right-handed neutrinos, the corresponding mass matrices of the left-handed neutrinos are given by (as stated in Eq.~\eqref{eq:Mnu-CSD2-b} and \eqref{eq:Mnu-CSD2-a})
\begin{align}
\mathbf{M}_\nu^{(102)} = m_a \begin{pmatrix} \epsilon e^{i\alpha} & 0 & 2\epsilon e^{i\alpha} \\ 0 & 1 & -1 \\ 2\epsilon e^{i\alpha} & -1 & 1+4\epsilon e^{i\alpha} \end{pmatrix}\,,\quad
\mathbf{M}_\nu^{(120)} = m_a \begin{pmatrix} \epsilon e^{i\alpha} & 2\epsilon e^{i\alpha} & 0 \\ 2\epsilon e^{i\alpha} & 1+4\epsilon e^{i\alpha} & -1 \\ 0 & -1 & 1 \end{pmatrix}\,.
\label{mass:neutrino}
\end{align}
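Since the light masses are the singular values of these matrices, they are straightforward to obtain numerically. The sketch below (our own helper, using NumPy's SVD) also makes the rank-two structure explicit: one singular value vanishes for any $\epsilon$, reflecting the two right-handed neutrinos.

```python
import cmath
import numpy as np

def light_nu_masses(m_a, eps, alpha, variant="102"):
    """Light neutrino masses (ascending) as singular values of M_nu."""
    w = m_a * eps * cmath.exp(1j * alpha)        # w = m_b = m_a * eps * e^{i alpha}
    if variant == "102":
        M = np.array([[w, 0, 2 * w],
                      [0, m_a, -m_a],
                      [2 * w, -m_a, m_a + 4 * w]])
    else:                                        # variant "120"
        M = np.array([[w, 2 * w, 0],
                      [2 * w, m_a + 4 * w, -m_a],
                      [0, -m_a, m_a]])
    return np.sort(np.linalg.svd(M, compute_uv=False))
```

For $\epsilon = 0$ the spectrum is $(0, 0, 2 m_a)$; switching on $\epsilon$ lifts the second mass while the lightest stays exactly zero.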
At the SUSY scale, where the MSSM is matched to the SM, the threshold corrections of the Yukawa couplings are implemented according to Eqs.~\eqref{thresh:yu}--\eqref{thresh:yl}.
\subsubsection{Observables}
\label{sec:observables}
With the implementation shown above, the model provides $12$ experimentally measured observables, namely the Yukawa couplings $y_e$, $y_\mu$, $y_d$, $y_s$, the CKM angles and CKM Dirac phase $\theta_{12}^\text{CKM}$, $\theta_{13}^\text{CKM}$, $\delta^\text{CKM}$, the PMNS angles $\theta_{12}^\text{PMNS}$, $\theta_{13}^\text{PMNS}$, $\theta_{23}^\text{PMNS}$ and the neutrino mass squared differences $\Delta m^2_\text{21}$, $\Delta m^2_\text{31}$. There also exist other observables, which are in one-to-one correspondence with a parameter (such as the up-type Yukawa couplings, $3$rd generation Yukawa couplings in $\mathbf{Y}_e$ and $\mathbf{Y}_d$, and $\theta_{23}^\text{CKM}$) and can be fitted independently; these observable-parameter pairs are not counted. As mentioned earlier, the CSD2 scenario implies normal hierarchy for neutrino masses. Furthermore, the model predicts three observables which are not (or not well) measured: the PMNS Dirac phase $\delta^\text{PMNS}$, the ratio of the Yukawa couplings $\frac{y_d}{y_s}$ and the effective mass $\langle m_{\beta\beta} \rangle$ in neutrinoless double-beta ($0\nu\beta\beta$) decay. Although $\theta_{23}^\text{PMNS}$ is measured by experiment, the range of $\theta_{23}^\text{PMNS}$ predicted by the model for the different combinations of CG coefficients is usually much smaller than the uncertainty in the experimental data. The same holds true for the ratio $\frac{y_d}{y_s}$, which is stable under the RGE running and the SUSY threshold corrections. The general formula for the effective mass $\langle m_{\beta\beta} \rangle$ is given by
\begin{align}
\begin{split}
\langle m_{\beta\beta} \rangle &= \Big{|} \sum_i (\mathbf{U}^\text{PMNS}_{1i})^2 m_{\nu_i} \Big{|} \\
&= \Big{|} c_{12}^2 c_{13}^2 e^{-i\varphi^\text{PMNS}_1} m_{\nu_1} + s_{12}^2 c_{13}^2 e^{-i\varphi^\text{PMNS}_2} m_{\nu_2} + s_{13}^2 e^{-2i\delta^\text{PMNS}} m_{\nu_3} \Big{|}\,,
\end{split}
\label{eq:meff}
\end{align}
with the PMNS matrix $\mathbf{U}^\text{PMNS}$ and the abbreviations $c_{ij}=\cos{\theta_{ij}^\text{PMNS}}$, $s_{ij}=\sin{\theta_{ij}^\text{PMNS}}$. The left-handed neutrino masses are labelled by $m_{\nu_i}$ (where \hbox{$m_{\nu_1} < m_{\nu_2} < m_{\nu_3}$}) and $\varphi^\text{PMNS}_1$, $\varphi^\text{PMNS}_2$ are the two PMNS Majorana phases. Since the neutrino sector contains only two right-handed neutrinos, we have $m_{\nu_1}=0$ and consequently $\varphi^\text{PMNS}_1$ is unphysical; $\langle m_{\beta\beta} \rangle$ thus acts as a proxy for the Majorana phase $\varphi^\text{PMNS}_2$.
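A direct evaluation of Eq.~\eqref{eq:meff} with $m_{\nu_1}=0$ then reads as follows (a minimal sketch; angles, phases and masses below are generic inputs in radians and eV, not our fit results):

```python
import cmath
import math

def m_betabeta(th12, th13, m2, m3, phi2, delta):
    """Effective neutrinoless double-beta decay mass of Eq. (eq:meff)
    in the two right-handed neutrino case, i.e. with m_nu1 = 0."""
    c13, s13 = math.cos(th13), math.sin(th13)
    s12 = math.sin(th12)
    return abs(s12**2 * c13**2 * cmath.exp(-1j * phi2) * m2
               + s13**2 * cmath.exp(-2j * delta) * m3)
```

Only the relative phase $\varphi^\text{PMNS}_2 - 2\delta^\text{PMNS}$ enters the modulus; with $\delta^\text{PMNS}$ fixed by oscillation data, $\langle m_{\beta\beta} \rangle$ indeed probes the single physical Majorana phase.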
The total $\chi^2$ of the model is given by the sum of the individual $\chi^2$ of each measured observable, which are calculated by using the experimental data. The $\chi^2$ therefore consists of $12$ terms. If the $1\sigma$ experimental range for any of them is asymmetric relative to the central value, we take this into account. An exception is the observable $\theta_{23}^\text{PMNS}$ for which we use the exact $\Delta\chi^2$ function provided by NuFIT 3.2 (2018)~\cite{Esteban:2016qun}. The experimental values for the Yukawa couplings and the CKM parameters are taken at the GUT scale. They are provided in \cite{Antusch:2013jca}, including the corresponding $1\sigma$ errors, as functions of the parameters $\tan{\beta}$, $\eta_b$ and $\eta_q$. The PMNS angles and the neutrino mass squared differences are determined at the $Z$ boson mass scale $M_Z$, where they are fitted to the experimental values from NuFIT 3.2 (2018)~\cite{Esteban:2016qun}. Furthermore, the predictions for the PMNS Dirac phase and the effective mass in $0\nu\beta\beta$ decay are calculated at $M_Z$ too. A schematic illustration of the model quantities at the different scales is shown in Figure~\ref{fig:scales}. For all the observables determined at low scale the change of their values when running them from $M_\text{GUT}$ to $M_Z$ is calculated by using an interpolated data table, whose implementation is discussed in detail in Appendix~\ref{app:rg_running}. The data table is available under the link stated in \cite{running_data}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{graphic_scales.pdf}
\caption{Schematic illustration of the model quantities concerning the different mass scales. On the y-axis the three mass scales $M_\text{GUT}$, $M_\text{SUSY}$ and $M_Z$ are indicated, as well as the type of the RGEs needed for the running. At $M_\text{GUT}$ the model is implemented by fixing the CG coefficients, the CSD2 scenario and $8$ of the model parameters. Then $7$ observables corresponding to the Yukawa couplings and the CKM matrix are fitted directly to the experimental values at the GUT scale using the data from~\cite{Antusch:2013jca}. The other $5$ observables, corresponding to the neutrino masses and the PMNS matrix are run down to the $Z$ scale, where they are fitted to the data from NuFIT 3.2 (2018)~\cite{Esteban:2016qun}. At the SUSY scale, the threshold corrections are specified by $3$ parameters and we also switch from the $\overline{\text{DR}}$ to the $\overline{\text{MS}}$ scheme when matching. While the Yukawa ratio of the down and the strange quark is predicted at $M_\text{GUT}$, the other predictions are run down to $M_Z$.}
\label{fig:scales}
\end{figure}
\subsubsection{Parameters}
\label{sec:parameters}
Once the CG coefficients ${c_x,c_y,c_z}$ in the charged lepton Yukawa matrix are fixed, the model contains $11$ parameters according to Eqs.~\eqref{thresh:yu}--\eqref{thresh:yl} and Eqs.~\eqref{yuk:ydye}--\eqref{mass:neutrino}. These parameters are $x$, $y$, $z$, $\gamma$, $\theta_{12}^{uL}$, $m_a$, $\epsilon$, $\alpha$, $\tan{\beta}$, $\eta_b$ and $\eta_q$ (see also Figure~\ref{fig:scales}). In fact, the two parameters $\tan{\beta}$ and $\eta_b$ have only a minor impact via RGE effects on the observables. Thus the $12$ measured observables are basically fitted by $9$ parameters. The parameter and observable counting excludes direct parameter-observable pairs, for which a fit of the pair can be performed independently; thus for the fit of the model fewer parameters are effectively used than are present in Eq.~\eqref{eq:list_parameters_1} and \eqref{eq:list_parameters_2}.
In particular, the parameters $x$, $y$, $z$, $\theta_{12}^{uL}$ and $\eta_q$ are used to fit the four Yukawa couplings in the down-type quark and charged lepton sector, the two CKM angles and the CKM Dirac phase, while $\gamma$, $m_a$, $\epsilon$ and $\alpha$ determine the three PMNS angles and the two neutrino mass squared differences. Furthermore, all parameters are real, as discussed in Section~\ref{sec:model}.
Considering the parametrization in Section~\ref{sec:texture}, it turns out that there is some redundancy in the values of the parameters and CG coefficients, i.e. for certain different parameters and CG coefficients the model retains the same values for the observables. For example, a change of sign $c_y \rightarrow -c_y$ can be compensated by the shift $\gamma \rightarrow \gamma+\pi$. In the same manner $c_x \rightarrow -c_x$ is compensated, and $c_z \rightarrow -c_z$ has no impact on the observables at all. Thus, with no loss of generality all CG coefficients can be chosen positive as assumed in Table~\ref{tab1}. Furthermore, a simultaneous change of sign in $z$ and $\theta_{12}^{uL}$ or in $x$, $y$ and $z$ does not change the observables, and a change of sign in $y$ can again be compensated by the shift of $\gamma$ by $\pi$. In order to keep the factor $i$ in the $1$-$2$ element in $\mathbf{Y}_u$, which predicts a viable CKM Dirac phase, the quantities $\frac{z}{x}$ and $s_{12}^{uL}$ must have the same sign. Therefore $x$, $y$, $z$ and $\theta_{12}^{uL}$ are chosen non-negative in the analysis below.
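The sign-flip redundancy is easy to verify numerically; for instance, the charged-lepton masses (the singular values of $\mathbf{Y}_e$) are blind to the sign of $c_z$. A minimal sketch with arbitrary illustrative parameter values:

```python
import cmath
import numpy as np

def Ye(x, y, z, gamma, cx, cy, cz, ytau=0.2):
    """Charged-lepton Yukawa texture of Eq. (yuk:ydye)."""
    return np.array([[0, cy * y * cmath.exp(1j * gamma), 0],
                     [cz * z, cx * x, 0],
                     [0, 0, ytau]])

p = dict(x=0.05, y=0.003, z=0.001, gamma=0.4, cx=3.0, cy=1.5)
s_plus = np.linalg.svd(Ye(cz=+0.5, **p), compute_uv=False)
s_minus = np.linalg.svd(Ye(cz=-0.5, **p), compute_uv=False)
# flipping cz only flips the sign of an off-diagonal entry of
# Ye^dagger Ye, leaving its eigenvalues (the squared masses) unchanged
```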
For the numerical analysis we choose the following parameter ranges:
\begin{align}
\begin{split}
&x,\theta_{12}^{uL} \in [0,0.1]\,,\quad y,z \in [0,0.01]\,,\quad \gamma,\alpha \in [0,2\pi]\,,\quad \epsilon \in [0,1]\,,\quad m_a \in [0,0.1]\,\mathrm{eV}\,, \\
&\tan{\beta} \in [20,50]\,,\quad \eta_b,\eta_q \in [-0.6,0.6]\,,
\end{split}
\end{align}
and the different mass scales are fixed as
\begin{align}
M_Z=91.2\,\mathrm{GeV}\,,\quad M_\text{SUSY}=3\cdot10^3\,\mathrm{GeV}\,,\quad M_\text{GUT}=2\cdot10^{16}\,\mathrm{GeV}\,.
\end{align}
\subsection{Analytical considerations in the lepton sector}
\label{sec:analyticalconsideration}
When fitting the model for fixed CG coefficients to the experimental data the values for $x$, $y$, $z$, $\theta_{12}^{uL}$ and $\eta_q$ are completely fixed in the quark and charged lepton sector by the observables $y_e$, $y_\mu$, $y_d$, $y_s$, $\theta_{12}^\text{CKM}$, $\theta_{13}^\text{CKM}$ and $\delta^\text{CKM}$. In the neutrino sector the parameters $\gamma$, $m_a$, $\epsilon$ and $\alpha$ are then used to fit $\theta_{12}^\text{PMNS}$, $\theta_{13}^\text{PMNS}$, $\theta_{23}^\text{PMNS}$, $\Delta m^2_\text{21}$ and $\Delta m^2_\text{31}$. Once a local minimum of the $\chi^2$ function in the space of these four parameters is found, we expect further local minima with the same or a similar $\chi^2$ value. From an analytical point of view the different minima can be explained in three steps as follows, where for the sake of simplicity running effects are neglected:
\begin{enumerate}
\item \label{item:step1} Consider the neutrino mass matrix in Eq.~\eqref{mass:neutrino}, which is dependent on $m_a$, $\epsilon$ and $\alpha$. The masses of the light neutrinos are given by the singular values of this matrix. When fitting the two neutrino mass squared differences, $m_a$ and $\epsilon$ thus can be expressed as a function of $\alpha$, where they are not sensitive to the sign of $\alpha$.
\item \label{item:step2} Since the left angle $\theta_{12}^{eL} \approx \big|\frac{c_y}{c_x}\frac{y}{x}\big|$ is fixed by the Yukawa couplings and the CKM parameters, the parameter $\gamma$ is fixed too (up to a minus sign) when $\theta_{12}^\text{PMNS}$ is fitted, using Eq.~\eqref{appeq:12pmns_102}:
\begin{align}
\theta_{12}^\text{PMNS} &\approx 35.3^\circ - \frac{\theta_{12}^{eL}}{\sqrt{2}}\cos{\gamma}\,.
\end{align}
This means the solutions for $\gamma$ always come in pairs.
\item The final step in the analysis of minima depends on the CSD2 variant. We shall explicitly state here the argument for the $\mathbf{Y}_\nu^{(102)}$ variant, for which we make use of the identity in Eq.~\eqref{appeq:identity_delta_pmns_102} given in leading order of $\theta_{12}^{eL}$ and $\epsilon$:\footnote{The analysis for the $\mathbf{Y}_\nu^{(120)}$ variant is completely analogous, except that we use Eq.~\eqref{appeq:identity_delta_pmns_120}.}
\begin{align}
\theta_{13}^\text{PMNS} e^{i\delta^\text{PMNS}} \approx \frac{\epsilon}{\sqrt{2}} e^{i(\pi + \alpha)} + \frac{\theta_{12}^{eL}}{\sqrt{2}} e^{i(\pi - \gamma)}\,. \label{eq:graph}
\end{align}
While the left-hand side of the equation is determined by experiment, the right-hand side involves parameters of our model; in particular, the first term on the right can be viewed as a function of $\alpha$ only due to step~\ref{item:step1} and the second term represents merely a constant shift due to step~\ref{item:step2}. A successful fit of the model thus involves finding a good value for $\alpha$, which is the only remaining degree of freedom in Eq.~\eqref{eq:graph}.
We illustrate the stated features of Eq.~\eqref{eq:graph} in Figure~\ref{fig:pmns}, which is drawn based on data for the model with $(c_x,c_y,c_z)=(3,\frac{3}{2},\frac{1}{2})$ and the CSD2 variant $\mathbf{Y}_\nu^{(102)}$. The left- and right-hand side of the equation are represented by solid red and blue curves in the complex plane, respectively. The red curve is a circle with a radius equal to the central measured value for $\theta_{13}^\text{PMNS}$; the dark red part represents the $3\sigma$ experimental range for $\delta^\text{PMNS}$. The dashed blue line represents the first term on the right, which is to a good approximation shaped as an off-center circle; its exact shape depends on the function $\epsilon(\alpha)$. The solid blue curves in the figure represent the dashed curve shifted by the second term; there are two such curves due to solutions for $\gamma$ coming in $\pm$ pairs. The dark blue curves represent values of $\alpha$ that predict $\theta_{23}^\text{PMNS}$ in the experimental $1\sigma$ range via Eq.~\eqref{appeq:23pmns_102}.
A low $\chi^2$ for $\theta_{13}^\text{PMNS}$ is obtained only when Eq.~\eqref{eq:graph} is satisfied, i.e.~when the red and blue solid curves intersect. Geometrically each blue (approximate) circle can intersect the red circle in either $0$, $1$ (special case when they touch) or $2$ points. We therefore generically expect that if intersection points between the solid blue curves and the red circle exist, there are $4$ of them. Indeed, a geometrical consideration of Figure~\ref{fig:pmns} indicates that once we have found a point with low $\chi^2$ for some values of $(\gamma,\alpha)$, there are further good points for $(-\gamma,-\alpha)$, $(\gamma,2\gamma-\alpha)$ and $(-\gamma,-2\gamma+\alpha)$, where all other parameters are fixed. Since the first two points differ only by a minus sign in $\gamma$ and $\alpha$, they have the same $\chi^2$ value. The same holds true for the last two points. Note that since the dashed blue circle is not centred at the origin, the form stated for the second pair of points is only approximate.
Including the experimental $1\sigma$ range also for $\theta_{23}^\text{PMNS}$~\cite{Esteban:2016qun} (dark blue lines), which is also part of our $\chi^2$ function, $2$ of the $4$ points do not fit anymore. For certain CG coefficients $\theta_{23}^\text{PMNS}$ cannot be fitted well at all; in these cases there is no point with a low $\chi^2$. Otherwise we expect two best fit points with the same $\chi^2$ when fitting the model (with good values for both $\theta_{13}^\text{PMNS}$ and $\theta_{23}^\text{PMNS}$). Usually only one of them provides $\delta^\text{PMNS}$ within the experimental $3\sigma$ range~\cite{Esteban:2016qun} (dark red line). In Figure~\ref{fig:pmns}, the two best fit points predict $\delta^\text{PMNS}$ at around $90^\circ$ and $270^\circ$, the latter one being consistent with the $3\sigma$ range.
\end{enumerate}
This analytic consideration for minima holds in general: if in a specific model points with low $\chi^2$ exist, we expect $2$ of them, with possibly only $1$ of the $2$ in the correct $\delta^\text{PMNS}$ range. Our numerical results indeed confirm this, as we shall see in the next section.
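The intersection counting in the argument above can be mimicked numerically. In the sketch below, $\epsilon$ is held fixed instead of being the function $\epsilon(\alpha)$ from the mass fit, which suffices to exhibit the generic $0$-or-$2$ solution structure of Eq.~\eqref{eq:graph}; all numbers are illustrative:

```python
import cmath
import math

def n_intersections(eps, th12eL, gamma, th13, samples=20000):
    """Count alpha in [0, 2pi) solving
    |eps/sqrt(2) e^{i(pi+alpha)} + th12eL/sqrt(2) e^{i(pi-gamma)}| = th13,
    via sign changes on a fine grid (eps fixed, unlike the full fit)."""
    shift = th12eL / math.sqrt(2) * cmath.exp(1j * (math.pi - gamma))
    def f(a):
        return abs(eps / math.sqrt(2) * cmath.exp(1j * (math.pi + a)) + shift) - th13
    count = 0
    prev = f(0.0)
    for k in range(1, samples + 1):
        cur = f(2 * math.pi * k / samples)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count
```

For a target $\theta_{13}^\text{PMNS}$ inside the attainable band one finds two values of $\alpha$ (the two intersection points per solid blue curve); outside it, none.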
\begin{figure}
\centering
\includegraphics[scale=0.9]{analytic_solutions.pdf}
\caption{The different terms in Eq.~\eqref{eq:graph} are illustrated for $(c_x,c_y,c_z)=(3,\frac{3}{2},\frac{1}{2})$ and the CSD2 scenario $\mathbf{Y}_\nu^{(102)}$, where $\epsilon$ is taken as a function of $\alpha$ induced by neutrino mass fitting. There are two solid blue curves according to the two solutions for $\gamma$ when fitting $\theta_{12}^\text{PMNS}$. The dark blue lines represent the experimental $1\sigma$ range of $\theta_{23}^\text{PMNS}$ for given $\epsilon$ and $\alpha$ using Eq.~\eqref{appeq:23pmns_102}. The radius of the red circle is given by the experimental value of $\theta_{13}^\text{PMNS}$, where the darker part indicates the experimental $3\sigma$ range of $\delta^\text{PMNS}$~\cite{Esteban:2016qun}. The angles $\delta^\text{PMNS}$ and $\alpha$ run from $0$ to $2\pi$, and $\theta_{12}^{eL} = \big|\frac{c_y}{c_x}\frac{y}{x}\big|$ is fixed by the fitting of the Yukawa couplings.}
\label{fig:pmns}
\end{figure}
\section{Results}
\label{sec:numericalresults}
Having specified the implementation of the model at the GUT scale and how observables are compared to experimental data in Section~\ref{sec:implementationmodel}, we investigate in this section the following two questions: First, which tuples of CG coefficients listed in Table~\ref{tab1}, in combination with one of the two CSD2 neutrino Yukawa couplings, are compatible with the experimental data? Second, what are the predictions for $\theta_{23}^\text{PMNS}$, $\delta^\text{PMNS}$, $\frac{y_d}{y_s}$ and $\langle m_{\beta\beta} \rangle$ in these models?
\subsection{Suitable model candidates}
\label{sec:bestfitclebsches}
In Table~\ref{tab2} a complete list of combinations of CG coefficients $(c_x,c_y,c_z)$ that provide a $\chi^2$ less than $15$ is shown. They are ordered with respect to their best fit value and labelled by an integer. For each tuple $(c_x,c_y,c_z)$ both types of CSD2 neutrino Yukawa couplings ($\mathbf{Y}_\nu^\text{(102)}$ and $\mathbf{Y}_\nu^\text{(120)}$) are considered. According to the analytical discussion in Section~\ref{sec:analyticalconsideration}, the (local) best fit points always come in pairs with opposite signs of $\gamma$ and $\alpha$. In the table only the minima with $\chi^2 < 15$ are shown, and they are distinguished by the labels $a_1,a_2,\dots$ in the case of $\mathbf{Y}_\nu^\text{(102)}$ and $b_1,b_2,\dots$ in the case of $\mathbf{Y}_\nu^\text{(120)}$. For each local minimum listed in Table~\ref{tab2}, the values of certain selected quantities are shown:
\begin{itemize}
\item Beside the total $\chi^2$ ($\chi^2_\text{Tot}$), two partial sums are also listed: $\chi^2_\text{q}$ sums over the contributions from the Yukawa couplings and the CKM angles and phase, while $\chi^2_\nu$ sums over the terms for the neutrino mass squared differences and the PMNS angles. As expected, $\chi^2_\text{q}$ usually gives only a minor contribution to $\chi^2_\text{Tot}$, because the selection of the CG coefficients was guided by the Yukawa double ratio in Eq.~\eqref{eq:doubleratio-GUT}. Hence, in the models which do not fit well, the main contribution comes from $\chi^2_\nu$, and in most cases in particular from $\theta_{23}^\text{PMNS}$.
\item We list the values of the observables $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$, which are the predictions of each model; we discuss the results in Section~\ref{sec:predictions}. The values of all observables in Table~\ref{tab2} are given at the $Z$-boson mass scale.
\item The best-fit values of the parameters $\gamma$ and $\alpha$ and of the $1$-$2$ left angle $\theta_{12}^{eL}$ of the charged leptons are shown. These parameters provide insight for explicitly constructing new models, as discussed in the two points below:
\begin{itemize}
\item A full flavour model could predict the value of the phase $\gamma$ by a suitable flavon VEV alignment. A striking feature is that the most promising models (cf.~also Table~\ref{tab3}) feature $\gamma$ close to $270^\circ$. As discussed in Section~\ref{sec:reasoning-phase}, such phases (or phase differences) can emerge in flavour models in various ways, e.g.\ from ``discrete vacuum alignment'' \cite{Antusch:2011sx} combined with spontaneous CP violation, or from other methods for vacuum alignment with non-Abelian discrete symmetries, e.g.\ from a flavon potential as discussed in \cite{Antusch:2013kna}. We would also like to point out the very interesting possibility that the phase difference of $90^\circ$ for the ``phase sum rule mechanism'' and a phase $\gamma = 270^\circ$ could arise from a single imaginary entry in the $2$-$2$ element of $\mathbf{Y}_e$/$\mathbf{Y}_d$. Furthermore, in explicit flavour models also the phase $\alpha$ could emerge from the vacuum alignment, and for a specific model candidate one could try to find a model realisation where its value is close to the one given in Table~\ref{tab2} or~\ref{tab3}.\footnote{Alternatively, of course, one could try to construct models where $\gamma$ and/or $\alpha$ are kept as free parameters.}
\item
The values of $\theta_{12}^{eL}$ also make it possible to explore model building options within the considered $\mathrm{SU}(5)$ GUT setup beyond the CSD2 setup in the neutrino sector. For example, as already mentioned in Section~\ref{sec:reasoning-Ynu}, one can check whether a tri-bimaximal mixing pattern in the neutrino sector instead of CSD2 could be a valid option, with $\theta_{13}^\text{PMNS}$ generated solely from the charged lepton mixing contribution. The angle $\theta_{13}^\text{PMNS}$ is then predicted as $\theta_{13}^\text{PMNS} = \theta_{12}^{eL}/\sqrt{2}$, and one finds from Table~\ref{tab2} that no model candidate would give an acceptable value for $\theta_{13}^\text{PMNS}$. Analogously, one can also explore whether other leading order mixing patterns in the neutrino sector could be promising for $\mathrm{SU}(5)$ GUT model building in the considered framework.
\end{itemize}
\end{itemize}
We see from Table~\ref{tab2} that out of the $37$ tuples of CG factors giving potentially viable models listed in Table~\ref{tab1}, only $20$ have minima with $\chi^2 <15$. There are $10$ combinations of CG coefficients with an excellent fit of $\chi^2<4$, which means that the total deviation of all observables of the model from the experimentally measured values is less than $2\sigma$.
Before the present study, only two representatives from the considered class of models had been studied: model $18$ with the tuple of Clebsch factors $(\tfrac{9}{2},\tfrac{3}{2},\tfrac{3}{2})$ in Ref.~\cite{Antusch:2013wn}, and model $20$ with CG factors $(3,1,1)$ in Ref.~\cite{Antusch:2017ano}. We can see that the fits of these two models are not as promising given the latest results from NuFIT 3.2 (2018)~\cite{Esteban:2016qun} with a preference for $\theta_{23}^\text{PMNS}> 45^\circ$.
In Table~\ref{tab3} the model parameters of the $12$ best fit points with lowest $\chi^2$, namely $1a_1$, $3b_2$, $4b_2$, $3a_2$, $6a_2$, $7b_2$, $4a_2$, $8a_2$, $7a_2$, $8b_2$, $9a_2$, $10b_2$, are listed. Note that models~$2$ and~$5$ are not considered in Table~\ref{tab3}, since their tuples of CG coefficients differ only by an overall factor of $2$ and $3/2$ from those of models~$1$ and~$4$, respectively. Thus, the predictions for the observables in each of the two pairs of models are essentially the same.
Another general observation in comparing models is that neither of the CSD2 variants $\mathbf{Y}_\nu^{(102)}$ and $\mathbf{Y}_\nu^{(120)}$ is strongly preferred overall. There exist models where one of the variants is strongly preferred over the other, such as model 6 with CG factors $(\tfrac{9}{2},3,\tfrac{2}{3})$ preferring the $(102)$ flavon VEV alignment; there are also models where there is minimal difference between the variants, such as model 10 with CG factors $(3,\tfrac{3}{2},\tfrac{2}{3})$. Models $15$ to $20$ have a preference for the $(120)$ variant, the other variant having $\chi^2 > 15$ and therefore not being listed. In the list of $12$ best minima in Table~\ref{tab3}, $7$ of them are of the $(102)$ variant and $5$ are of the $(120)$ variant, again showing no strong preference overall.
\subsection{Predictions}
\label{sec:predictions}
\subsubsection{$\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$}
\label{sec:predictionspmns}
For the $12$ best fit points listed in Table~\ref{tab3}, the predictions of $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$ are shown in Figure~\ref{fig:contours1}. In this figure the minimal $\chi^2$ contours in the $\theta_{23}^\text{PMNS}$-$\delta^\text{PMNS}$ plane are shown around each local minimum. For fixed $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$, the minimal $\chi^2$ is determined by varying the remaining model parameters while the two observables are held at their fixed values. Up to a certain threshold, these $\chi^2$ values are then shown as contours around the chosen best fit point. In order to be in agreement with the experimental data, only best fit points with $\delta^\text{PMNS}$ within the experimental $3\sigma$ range are taken into account. In this way, we demonstrate how well a specific model can be fitted to the known values of the SM parameters for a given assumed $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$, showing in which $\theta_{23}^\text{PMNS}$-$\delta^\text{PMNS}$ regions the models work well.
For a given model, the range of $\theta_{23}^\text{PMNS}$ with low $\chi^2$, defined by the corresponding plot in Figure~\ref{fig:contours1}, is in most cases much smaller than the experimental $3\sigma$ range, given by the interval $[40.3^\circ,51.1^\circ]$~\cite{Esteban:2016qun}. This implies that, although $\theta_{23}^\text{PMNS}$ is used to fit the parameters, the models make distinct predictions for this observable. More accurate measurements of $\theta_{23}^\text{PMNS}$ in future experiments can thus distinguish between the different models. Furthermore, all models predict $\delta^\text{PMNS}$ within a range of around $230^\circ$ to $290^\circ$, which is quite restricted compared to the current experimental $3\sigma$ range, given by $[144^\circ,374^\circ]$~\cite{Esteban:2016qun}. Thus, independent of the choice of the CG coefficients and the CSD2 variant, the class of models under consideration delivers a prediction for the PMNS Dirac phase, which can be tested by future experiments.
To illustrate the above consideration further, all the plots listed in Figure~\ref{fig:contours1} have been combined into one plot in Figure~\ref{fig:contours2}, where the predictions of all the models can be compared in the $\theta_{23}^\text{PMNS}$-$\delta^\text{PMNS}$ plane, together with the experimental $3\sigma$ ranges of the two quantities. We see that all minimal $\chi^2$ regions of the models fall onto an almost horizontal trend line; this implies that a future more precise $\theta_{23}^\text{PMNS}$ measurement can indeed further reduce the set of viable models, if not outright discriminate between them\footnote{Future measurements by the DUNE experiment, for example, shall determine $\theta_{23}^\mathrm{PMNS}$ with a precision of less than $1^\circ$, and $\delta^\mathrm{PMNS}$ with a precision of ${\cal O}(10^\circ)$ \cite{Abi:2018dnh,Abi:2018alz,Abi:2018rgm}, which allows for precision model testing.}, while the rough range of $\delta^\text{PMNS}$ is a prediction of the entire class. There is also a slight positive trend: models with a higher predicted $\theta_{23}^\text{PMNS}$ also predict a slightly higher $\delta^\text{PMNS}$.
\subsubsection{Ratio of $y_d$ and $y_s$}
\label{sec:predictionsyukawaratio}
In order to compute the $1\sigma$ highest posterior density (HPD) interval of the ratio $\frac{y_d}{y_s}$, the Markov chain Monte Carlo (MCMC) method is used. For each of the combinations of CG coefficients listed in Table~\ref{tab2} we run a Markov chain, in which, among other quantities, the posterior density of $\frac{y_d}{y_s}$ is calculated. The posterior density of this ratio depends only on the choice of the CG coefficients, but not on the neutrino Yukawa coupling. In Figure~\ref{fig:ratio} the $1\sigma$ HPD intervals for each of the models are indicated as red lines. In addition, the experimental central value of $\frac{y_d}{y_s}$ is indicated by a dotted line and the regions outside the experimental $1\sigma$ range are represented by grey areas. Note that the values of the ratio $\frac{y_d}{y_s}$ in the Markov chain are computed at the GUT scale. Since this ratio is stable under RGE running and SUSY threshold corrections, the calculated values can be compared directly with the experimental value at the $M_Z$ scale.
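The $1\sigma$ HPD interval can be extracted from the Markov chain as the shortest interval containing $68.27\%$ of the samples, which coincides with the HPD interval for a unimodal posterior. A minimal sketch; the Gaussian toy samples merely stand in for the actual posterior of $y_d/y_s$:

```python
# Shortest-interval estimate of the 1-sigma HPD region from MCMC samples.
import numpy as np

def hpd_interval(samples, mass=0.6827):
    """Return the shortest interval containing `mass` of the samples.
    For a unimodal posterior this coincides with the HPD interval."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = int(np.ceil(mass * n))             # number of samples inside the interval
    widths = x[k - 1:] - x[: n - k + 1]    # width of every candidate interval
    i = int(np.argmin(widths))             # index of the shortest one
    return x[i], x[i + k - 1]

rng = np.random.default_rng(1)
# Toy posterior: Gaussian around the experimental central value of y_d/y_s.
samples = rng.normal(5.06e-2, 0.5e-2, size=100_000)
lo, hi = hpd_interval(samples)
```

For this symmetric toy posterior the interval reduces to roughly mean $\pm$ one standard deviation; for the skewed posteriors of the actual chains the shortest-interval construction is what distinguishes the HPD interval from a central quantile interval.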
Figure~\ref{fig:ratio} shows that the predicted range of $\frac{y_d}{y_s}$ for a given model is much smaller than the $1\sigma$ experimental range given by $5.06^{+0.78}_{-0.42} \cdot 10^{-2}$~\cite{Antusch:2013jca}. Since different models predict different ranges, more accurate future measurements of the masses $m_d$ and $m_s$, and consequently also of the Yukawa couplings $y_d$ and $y_s$, have the potential to distinguish between different models. The small widths of the $1\sigma$ HPD intervals of $\frac{y_d}{y_s}$ for each model in Figure~\ref{fig:ratio} can be explained as follows: Once the CG coefficients in the charged lepton Yukawa matrix are fixed, the double ratio $d = \frac{y_\mu y_d}{y_e y_s} \approx \Big|\frac{c_x^2}{c_y c_z}\Big|$ given in Eq.~\eqref{eq:doubleratio-GUT} is also fixed at leading order. Since in addition $y_e$ and $y_\mu$ have very small experimental uncertainties, much smaller than those of $y_d$ and $y_s$, the ratio $\frac{y_d}{y_s}$ is much more constrained in our models than by experiment.
\subsubsection{Effective mass in $0\nu\beta\beta$ decay}
\label{sec:predictionsmeff}
Once the CG coefficients and the CSD2 variant are chosen, all parameters in the PMNS matrix and the left-handed neutrino masses are predicted, including the one Majorana phase (there is only one, since the lightest left-handed neutrino in our setup is taken to be massless). Therefore Eq.~\eqref{eq:meff} implies that the effective mass in neutrinoless double-beta decay is predicted too; we shall show results for the experimentally more interesting quantity, the effective mass, rather than for the Majorana phase.
The $1\sigma$ HPD interval of $\langle m_{\beta\beta} \rangle$ is determined by calculating the posterior density using the MCMC method. For the twelve best fit points with lowest $\chi^2$ listed in Table~\ref{tab3}, the $1\sigma$ HPD intervals of the effective mass are shown in Figure~\ref{fig:meff} as red lines.
Different combinations of CG coefficients $(c_x,c_y,c_z)$ and CSD2 scenarios ($\mathbf{Y}_\nu^\text{(102)}$ or $\mathbf{Y}_\nu^\text{(120)}$) predict different ranges for $\langle m_{\beta\beta} \rangle$, as shown in Figure~\ref{fig:meff}. Furthermore, all predictions lie roughly in the interval $[2.5,4.0]\cdot10^{-3}\,\mathrm{eV}$. This means that the class of models under consideration predicts a well-defined range for the effective mass, independent of the choice of the CG coefficients and of the CSD2 scenario. A precise measurement of $\langle m_{\beta\beta} \rangle$ would have the potential to distinguish between different models, but unfortunately this is far beyond the reach of currently planned experiments, which have a detection threshold of around $0.1\,\mathrm{eV}$ (e.g.~see Table II in \cite{Agostini:2018tnm}).
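For orientation, with a massless lightest neutrino and normal ordering only two terms of Eq.~\eqref{eq:meff} survive, and the single physical Majorana phase controls their interference. The following sketch scans that phase using illustrative global-fit input values (not our model predictions):

```python
# Range of the effective 0nu-beta-beta mass for m1 = 0 (normal ordering),
# scanning the single physical Majorana phase. Input values are illustrative
# (close to current global-fit numbers), not the model predictions.
import numpy as np

s12sq, s13sq = 0.31, 0.022          # sin^2(theta12), sin^2(theta13)
dm21sq, dm31sq = 7.4e-5, 2.5e-3     # mass squared differences in eV^2
m2, m3 = np.sqrt(dm21sq), np.sqrt(dm31sq)   # m1 = 0

def m_bb(phi):
    """|U_e2|^2 m2 and |U_e3|^2 m3 interfere with one relative phase phi."""
    c13sq = 1.0 - s13sq
    return abs(c13sq * s12sq * m2 + s13sq * m3 * np.exp(1j * phi))

phis = np.linspace(0.0, 2.0 * np.pi, 1001)
values = np.array([m_bb(p) for p in phis])
lo, hi = values.min(), values.max()   # full range allowed by the phase
```

The resulting phase-allowed band is of the same order as the interval $[2.5,4.0]\cdot10^{-3}\,\mathrm{eV}$ quoted above; the models select narrow sub-ranges of such a band because the Majorana phase is fixed by the fit.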
\subsubsection{SUSY threshold parameter $\eta_q$}
\label{sec:predictionsetaq}
The SUSY threshold parameter $\eta_q$ is actually one of the input parameters we fit. However, a complete model involving SUSY breaking and a prediction of the SUSY spectrum would need to reproduce the correct threshold effect in the first $2$ fermion families. For this reason, we can consider the $\eta_q$ value also as one of the predictions, despite it not being directly observable experimentally.
We already stated in Section~\ref{sec:GUT-operators} that the $\eta_q$ value is linked to the Clebsch coefficient $c_x$, which determines the ratio $y_\mu/y_s$ at the GUT scale. Using SM and MSSM RGEs with no SUSY threshold corrections, the GUT scale value of $y_\mu/y_s$ is approximately $4.5$, suggesting that any deviation of the Clebsch factor $c_x$ from $4.5$ will need to be compensated by $\eta_q$. This requirement singled out only $3$, $\tfrac{9}{2}$ and $6$ as suitable $c_x$ candidates (corresponding to the possibilities of raising or lowering the $y_\mu/y_s$ ratio by $\pm33\%$), implying predicted values of $\eta_q$ of approximately $-0.33$, $0$ and $+0.33$, respectively. We confirm this expectation with the results in Table~\ref{tab3}.
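The compensation argument reduces to simple arithmetic. A sketch, assuming the effective relation $c_x \approx \tfrac{9}{2}\,(1+\eta_q)$ that the quoted numbers imply (the full analysis of course involves the complete RGE and threshold machinery):

```python
# Predicted SUSY threshold parameter eta_q for each viable Clebsch factor c_x,
# ASSUMING the effective relation c_x = (9/2) * (1 + eta_q) implied by the
# quoted +-33% compensation; an illustration of the argument, not the full RGE.
from fractions import Fraction

Y_RATIO_GUT = Fraction(9, 2)   # y_mu/y_s at the GUT scale without thresholds

def eta_q(c_x):
    return float(Fraction(c_x) / Y_RATIO_GUT - 1)

predictions = {c: round(eta_q(c), 2) for c in (3, Fraction(9, 2), 6)}
# eta_q is about -0.33, 0 and +0.33 for c_x = 3, 9/2 and 6, respectively
```

This reproduces the three $\eta_q$ values quoted above and makes explicit why only Clebsch factors within roughly a third of $4.5$ are viable for $c_x$.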
\section{Summary and Conclusions}
\label{sec:conclusions}
In this paper, we have systematically investigated the predictions of a novel class of supersymmetric $\mathrm{SU}(5)$ GUT flavour models with Constrained Sequential Dominance 2 (CSD2) in the neutrino sector. CSD2 is an attractive building block for flavour model building because it predicts a non-zero leptonic mixing angle $\theta_{13}^\text{PMNS}$, a deviation of $\theta_{23}^\text{PMNS}$ from $\pi /4$, as well as a leptonic Dirac CP phase $\delta^\text{PMNS}$, which is directly linked to the CP violation relevant for generating the baryon asymmetry via the leptogenesis mechanism.
When embedded into a predictive $\mathrm{SU}(5)$ GUT setup, the CSD2 predictions in the neutrino sector are modified in a calculable way by a charged lepton mixing contribution, which is determined by the $\mathrm{SU}(5)$ relations between the charged lepton and down quark Yukawa matrices $\mathbf{Y}_e$ and $\mathbf{Y}_d$, respectively.
The $\mathrm{SU}(5)$ quark-lepton relations in turn depend on GUT operators responsible for generating the entries of fermion Yukawa matrices. Under the assumption of single operator dominance, the choice of GUT operators and consequently the associated Clebsch-Gordan coefficients directly govern the ratios between the entries of $\mathbf{Y}_e$ and $\mathbf{Y}_d$ \cite{Antusch:2009gu,Antusch:2013rxa}.
Furthermore, another model building ingredient is the ``phase sum rule mechanism'' \cite{Antusch:2009hq}, used to obtain a valid scheme for CP violation in the quark sector. It leads to the prediction of a right-angled unitarity triangle with $\alpha_\mathrm{UT} = 90^\circ$ and thus to the prediction $\delta^\text{CKM}= 1.188\pm 0.016$ (in radians), in good agreement with the allowed experimental range.
This chosen setup defines the class of models under consideration, with a specific member defined by a $3$-tuple of Clebsch-Gordan factors between $\mathbf{Y}_d$ and $\mathbf{Y}_e$ in the $1$-$2$ block and the choice of the CSD2 variant. Once these choices are made, and once concrete values are given to the model parameters, all the SM fermion sector parameters are determined: this includes the masses, as well as the mixing angles and CP violating phases of both the CKM and PMNS matrices.
Making use of the approximately invariant double ratio $\tfrac{y_d}{y_s}/\tfrac{y_e}{y_\mu}$, we can narrow down the list of potentially viable Clebsch factors of the model class to $37$; this list is given in Table~\ref{tab1}. For each of the $37$ viable candidates, and for each of the $2$ CSD2 variants, we performed a fit of the parameters by minimizing the $\chi^2$ for the observables, thus identifying which models can be viable in at least some part of their parameter space; the minimization results are gathered in Table~\ref{tab2}, where all (local) minima with $\chi^2<15$ are listed, with the complete information on the input parameters for the $12$ best minima given in Table~\ref{tab3}. The goal of this study was to systematically explore the predictions of the whole model class to identify the most promising candidates for future model building; up to now only two representatives from this class of models had been studied in Refs.~\cite{Antusch:2013wn,Antusch:2017ano}.
A general observation from the results in Tables~\ref{tab2} and~\ref{tab3} is that while there may be a preference for the CSD2 variant $\mathbf{Y}_\nu^{(102)}$ or $\mathbf{Y}_\nu^{(120)}$ for an individual model, there is no strongly preferred overall variant across all models.
In the fitting procedure there are $11$ input parameters, which include $\tan\beta$ and $\eta_b$, whose effects on the observables are only indirect and minor; the $\chi^2$ function we minimize has $12$ terms associated with observables.\footnote{The parameter and observable counting excludes direct parameter-observable pairs, for which the fit can be performed independently of the other quantities.} Our model class is thus predictive, with the following results:
\begin{enumerate}
\item The predicted PMNS quantities are $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$, with results shown in Figures~\ref{fig:contours1} and \ref{fig:contours2}. The figures show that the predictions of $\theta_{23}^\text{PMNS}$ vary from model to model, while the entire model class predicts $\delta^\text{PMNS}$ roughly between $230^\circ$ and $290^\circ$. Future measurements planned for example by DUNE~\cite{Abi:2018dnh,Abi:2018alz,Abi:2018rgm} will determine $\theta_{23}^\mathrm{PMNS}$ and $\delta^\mathrm{PMNS}$ with a precision of less than $1^\circ$ and ${\cal O}(10^\circ)$, respectively, allowing for precision model testing and discrimination between the models.
\item Each set of GUT operators predicts the ratio $m_d/m_s$ from the induced quark-lepton mass relations using the precise existing measurements for $m_\mu$ and $m_e$, with very small errors. The predictions for $m_d/m_s$ are summarised in Figure \ref{fig:ratio}.
\item With CSD2 predicting one neutrino mass to be negligible, there is only $1$ Majorana phase in the neutrino sector. We use instead the effective mass $\langle m_{\beta\beta}\rangle$ for neutrinoless double-beta decay as a proxy; the $1\sigma$ HPD interval predictions are given in Figure~\ref{fig:meff}.
\item While the SUSY threshold parameter $\eta_q$ is one of the fit parameters, its value would need to be reproduced by the SUSY particle spectrum in any complete model. The $\eta_q$ value is determined already by the $c_x$ Clebsch factor choice (cf.~Section~\ref{sec:predictionsetaq}).
\end{enumerate}
Beyond the present study, the results of the fits provide useful insight for explicitly constructing new models, especially once better experimental precision for the PMNS parameters guides the direction. In building a complete flavour GUT model, our results provide the following guidance:
\begin{itemize}
\item A complete theory of flavour would be guided by the Yukawa textures used in the fermion sector, and potentially also by the results for viable values of the phases $\alpha$ and $\gamma$, which would be predicted by a suitable flavon VEV alignment. Interestingly, all of the most promising models in Table \ref{tab3} feature $\gamma$ close to $270^\circ$. We point out the intriguing possibility that such a phase $\gamma$, together with $\alpha_\text{UT}=90^\circ$ from the ``phase sum rule mechanism'', could arise from a single imaginary entry in the $2$-$2$ element of $\mathbf{Y}_e$ and $\mathbf{Y}_d$. Furthermore, the provided values of $\theta_{12}^{eL}$ could allow one to explore model building possibilities within the considered $\mathrm{SU}(5)$ GUT setup even with a neutrino sector texture other than CSD2 (cf.~Section~\ref{sec:bestfitclebsches}).
\item CG coefficients are crucial building blocks of GUT flavour models, since they link predictions for $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$ to the quark-lepton mass relation from $\mathrm{SU}(5)$ unification. The choice of CG factors actually reveals the choice of the underlying GUT operators in the Yukawa sector, thus suggesting the GUT matter content of the Higgs sector and perhaps guiding even towards a complete Higgs potential, from which the spontaneous breaking of GUT symmetry $\mathrm{SU}(5)\to\text{SM}$ arises.
\end{itemize}
Finally, an offshoot of the presented work is the extensive set of RGE data tables for the changes in neutrino observables when running from the GUT scale to the $Z$ scale (cf.~Appendix~\ref{app:rg_running}). The raw data is provided under the link stated in~\cite{running_data}. Interpolating these data, together with the existing data tables from Ref.~\cite{Antusch:2013jca} for the quark and charged lepton sectors, allows one to greatly speed up numerical fits of supersymmetric GUT flavour models to the experimental data.
In summary, we provide a systematic study for a novel class of CSD2 neutrino mixing models within a predictive $\mathrm{SU}(5)$ GUT setup. The candidate models have the potential to be highly predictive, and can therefore be tested in future experiments. Our study thus provides a roadmap for future work in constructing new flavour SUSY GUT models of this novel type.
\begin{figure}
\vspace{-1cm}
\centering
\includegraphics[width=0.3\textwidth]{plot1.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot2.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot3.pdf}\\ \vspace{0.2cm}
\includegraphics[width=0.3\textwidth]{plot4.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot5.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot6.pdf}\\ \vspace{0.2cm}
\includegraphics[width=0.3\textwidth]{plot7.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot8.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot9.pdf}\\ \vspace{0.2cm}
\includegraphics[width=0.3\textwidth]{plot10.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot11.pdf} \hspace{0.1cm}
\includegraphics[width=0.3\textwidth]{plot12.pdf}\\ \vspace{0.2cm}
\includegraphics[width=0.25\textwidth]{legend.pdf}
\caption{Minimal $\chi^2$ contours of the best fit models in the $\theta_{23}^\text{PMNS}$-$\delta^\text{PMNS}$ plane. From top left to bottom right the $12$ best fit points with lowest $\chi^2$ from Table~\ref{tab3} are presented. In each plot the minimal $\chi^2$ for fixed $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$ is plotted as contours around the local minimum, indicated by a black cross. Beside the chosen CG coefficients $(c_x,c_y,c_z)$ and the CSD2 variant ($\mathbf{Y}_\nu^{(102)}$ or $\mathbf{Y}_\nu^{(120)}$), the title of each plot contains a label specifying the best fit point according to Table~\ref{tab3}. Only local minima with $\delta^\text{PMNS}$ within the experimental $3\sigma$ range are chosen.}
\label{fig:contours1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{plotcom.pdf}
\caption{Summary of the minimal $\chi^2$ contours of the best fit models in the $\theta_{23}^\text{PMNS}$-$\delta^\text{PMNS}$ plane. The figure shows the combined $\chi^2$ contours of all the plots in Figure~\ref{fig:contours1}. The grey areas represent the regions outside the experimental $3\sigma$ ranges of $\theta_{23}^\text{PMNS}$ and $\delta^\text{PMNS}$ which are given by the intervals $[40.3^\circ,51.5^\circ]$ and $[144^\circ,374^\circ]$, respectively~\cite{Esteban:2016qun}.}
\label{fig:contours2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{ydys_ratio.pdf}
\caption{Predictions for the Yukawa ratio $y_d/y_s$. For each combination of CG coefficients listed in Table~\ref{tab2} the $1\sigma$ HPD intervals for $y_d/y_s$ are shown as red lines. The HPD intervals do not depend on the choice of the CSD2 scenario. The dotted line indicates the experimental central value of the Yukawa ratio and the grey areas represent the regions outside the experimental $1\sigma$ range, given by $y_d/y_s=5.06^{+0.78}_{-0.42} \cdot 10^{-2}$~\cite{Antusch:2013jca}.}
\label{fig:ratio}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{meff.pdf}
\caption{Predictions for the effective mass $\langle m_{\beta\beta}\rangle$ in neutrinoless double-beta decay in the best fit models. For the $12$ best fit points with lowest $\chi^2$ listed in Table~\ref{tab3} the $1\sigma$ HPD intervals for $\langle m_{\beta\beta}\rangle$ are shown as red lines.}
\label{fig:meff}
\end{figure}
\begin{longtable}{c}
$
\arraycolsep=8pt
\begin{array}{rrd{2}d{2}d{2}d{1}d{1}d{1}d{1}d{2}}
\toprule
\text{Label} & (c_x,c_y,c_z) & \aligncell{\chi_\text{Tot}^2} & \aligncell{\chi_\text{q}^2} & \aligncell{\chi_\nu^2} & \aligncell{\theta_{23}[{}^\circ]} & \aligncell{\delta[{}^\circ]} & \aligncell{\gamma[{}^\circ]} & \aligncell{\alpha[{}^\circ]} & \aligncell{\theta_{12}^{eL}[{}^\circ]} \\
\midrule
\spaceclebsches
1\kern1.3em & \left(3,\frac{3}{2},\frac{1}{2}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 0.17 & 0.06 & 0.11 & 47.9 & 92.7 & 68.7 & 233.1 & 7.23 \\
a_2 & & 0.17 & 0.06 & 0.11 & 47.9 & 267.3 & 291.3 & 126.9 & 7.23 \\
b_1 & (120) & 4.05 & 0.06 & 3.99 & 41.6 & 120.1 & 71.6 & 148.2 & 7.22 \\
b_2 & & 4.05 & 0.06 & 3.99 & 41.6 & 239.9 & 288.4 & 211.8 & 7.22 \\
\spaceclebsches
2\kern1.3em & (6,3,1) & & & & & & & & \\
\spacelines
a_1 & (102) & 0.19 & 0.06 & 0.14 & 47.9 & 93.7 & 67.7 & 233.9 & 7.23 \\
a_2 & & 0.19 & 0.06 & 0.14 & 47.9 & 266.3 & 292.3 & 126.1 & 7.23 \\
b_1 & (120) & 4.19 & 0.06 & 4.13 & 41.5 & 118.9 & 72.6 & 147.0 & 7.22 \\
b_2 & & 4.19 & 0.06 & 4.13 & 41.5 & 241.1 & 287.4 & 213.0 & 7.22 \\
\spaceclebsches
3\kern1.3em & \left(\frac{9}{2},2,1\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 1.62 & 1.06 & 0.56 & 43.9 & 103.0 & 72.9 & 263.2 & 5.49 \\
a_2 & & 1.62 & 1.06 & 0.56 & 43.9 & 257.0 & 287.1 & 96.8 & 5.49 \\
b_1 & (120) & 1.06 & 1.06 & 0.00 & 47.2 & 90.2 & 71.0 & 90.5 & 5.49 \\
b_2 & & 1.06 & 1.06 & 0.00 & 47.2 & 269.8 & 289.0 & 269.5 & 5.49 \\
\spaceclebsches
4\kern1.3em & \left(\frac{9}{2},\frac{3}{2},1\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 1.81 & 1.12 & 0.69 & 43.7 & 110.9 & 66.2 & 272.3 & 5.33 \\
a_2 & & 1.81 & 1.12 & 0.69 & 43.7 & 249.1 & 293.8 & 87.7 & 5.33 \\
b_1 & (120) & 1.24 & 1.12 & 0.12 & 47.4 & 93.0 & 68.8 & 94.0 & 5.33 \\
b_2 & & 1.24 & 1.12 & 0.12 & 47.4 & 267.0 & 291.2 & 266.0 & 5.33 \\
\spaceclebsches
5\kern1.3em & \left(3,1,\frac{2}{3}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 1.82 & 1.12 & 0.70 & 43.7 & 110.7 & 66.5 & 272.1 & 5.32 \\
a_2 & & 1.82 & 1.12 & 0.70 & 43.7 & 249.3 & 293.5 & 87.9 & 5.32 \\
b_1 & (120) & 1.24 & 1.12 & 0.12 & 47.4 & 93.1 & 68.7 & 94.1 & 5.32 \\
b_2 & & 1.24 & 1.12 & 0.12 & 47.4 & 266.9 & 291.3 & 265.9 & 5.32 \\
\spaceclebsches
6\kern1.3em & \left(\frac{9}{2},3,\frac{2}{3}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 1.64 & 0.92 & 0.72 & 48.8 & 83.9 & 72.5 & 215.3 & 8.29 \\
a_2 & & 1.64 & 0.92 & 0.72 & 48.8 & 276.1 & 287.5 & 144.7 & 8.29 \\
b_1 & (120) & 9.68 & 0.93 & 8.75 & 40.4 & 117.1 & 78.4 & 155.0 & 8.28 \\
b_2 & & 9.68 & 0.93 & 8.75 & 40.4 & 242.9 & 281.6 & 205.0 & 8.28 \\
\spaceclebsches
7\kern1.3em & \left(6,2,\frac{3}{2}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 2.97 & 0.06 & 2.91 & 42.3 & 117.7 & 64.3 & 283.7 & 4.80 \\
a_2 & & 2.97 & 0.06 & 2.91 & 42.3 & 242.3 & 295.7 & 76.3 & 4.80 \\
b_1 & (120) & 1.77 & 0.05 & 1.72 & 48.3 & 84.7 & 77.8 & 87.7 & 4.80 \\
b_2 & & 1.77 & 0.05 & 1.72 & 48.3 & 275.3 & 282.2 & 272.3 & 4.80 \\
\bottomrule
\end{array}$\\
$
\arraycolsep=8pt
\begin{array}{rrd{2}d{2}d{2}d{1}d{1}d{1}d{1}d{2}}
\toprule
\text{Label} & (c_x,c_y,c_z) & \aligncell{\chi_\text{Tot}^2} & \aligncell{\chi_\text{q}^2} & \aligncell{\chi_\nu^2} & \aligncell{\theta_{23}[{}^\circ]} & \aligncell{\delta[{}^\circ]} & \aligncell{\gamma[{}^\circ]} & \aligncell{\alpha[{}^\circ]} & \aligncell{\theta_{12}^{eL}[{}^\circ]} \\
\midrule
\spaceclebsches
8\kern1.3em & \left(6,6,\frac{1}{2}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 2.37 & 0.38 & 1.99 & 49.6 & 84.4 & 71.5 & 147.4 & 14.90 \\
a_2 & & 2.37 & 0.38 & 1.99 & 49.6 & 275.6 & 288.5 & 212.6 & 14.90 \\
a_3 & & 8.62 & 0.43 & 8.19 & 40.9 & 133.9 & 64.5 & 63.6 & 14.93 \\
a_4 & & 8.62 & 0.43 & 8.19 & 40.9 & 226.1 & 295.5 & 296.4 & 14.93 \\
b_1 & (120) & 3.11 & 0.38 & 2.74 & 42.1 & 121.6 & 68.9 & 218.5 & 14.90 \\
b_2 & & 3.11 & 0.38 & 2.74 & 42.1 & 238.4 & 291.1 & 141.5 & 14.90 \\
\spaceclebsches
9\kern1.3em & \left(3,2,\frac{1}{2}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 3.24 & 3.12 & 0.12 & 47.9 & 87.4 & 72.7 & 226.9 & 7.42 \\
a_2 & & 3.24 & 3.12 & 0.12 & 47.9 & 272.6 & 287.3 & 133.1 & 7.42 \\
b_1 & (120) & 8.59 & 3.15 & 5.45 & 41.2 & 114.2 & 77.5 & 144.0 & 7.41 \\
b_2 & & 8.59 & 3.15 & 5.45 & 41.2 & 245.8 & 282.5 & 216.0 & 7.41 \\
b_3 & & 11.64 & 3.19 & 8.45 & 49.2 & 97.0 & 49.2 & 75.2 & 7.40 \\
b_4 & & 11.64 & 3.19 & 8.45 & 49.2 & 263.0 & 310.8 & 284.8 & 7.40 \\
\spaceclebsches
10\kern1.3em & \left(3,\frac{3}{2},\frac{2}{3}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 3.76 & 3.27 & 0.49 & 44.0 & 102.7 & 72.6 & 262.3 & 5.54 \\
a_2 & & 3.76 & 3.27 & 0.49 & 44.0 & 257.3 & 287.4 & 97.7 & 5.54 \\
b_1 & (120) & 3.28 & 3.27 & 0.01 & 47.1 & 91.4 & 69.8 & 91.5 & 5.54 \\
b_2 & & 3.28 & 3.27 & 0.01 & 47.1 & 268.6 & 290.2 & 268.5 & 5.54 \\
\spaceclebsches
11\kern1.3em & \left(6,\frac{9}{2},\frac{2}{3}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 4.87 & 0.14 & 4.73 & 50.6 & 82.0 & 70.5 & 187.2 & 10.97 \\
a_2 & & 4.87 & 0.14 & 4.73 & 50.6 & 278.0 & 289.5 & 172.8 & 10.97 \\
b_1 & (120) & 8.16 & 0.14 & 8.01 & 40.5 & 128.9 & 69.9 & 188.8 & 10.98 \\
b_2 & & 8.16 & 0.14 & 8.01 & 40.5 & 231.1 & 290.1 & 171.2 & 10.98 \\
\spaceclebsches
12\kern1.3em & \left(6,6,\frac{2}{3}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 5.98 & 2.65 & 3.32 & 50.2 & 78.0 & 73.6 & 182.1 & 11.28 \\
a_2 & & 5.98 & 2.65 & 3.32 & 50.2 & 282.0 & 286.4 & 177.9 & 11.28 \\
b_1 & (120) & 14.58 & 2.66 & 11.92 & 39.8 & 125.1 & 72.7 & 187.3 & 11.28 \\
b_2 & & 14.58 & 2.66 & 11.92 & 39.8 & 234.9 & 287.3 & 172.7 & 11.28 \\
\spaceclebsches
13\kern1.3em & \left(\frac{9}{2},\frac{9}{2},\frac{1}{2}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 6.13 & 2.65 & 3.48 & 50.2 & 78.4 & 73.2 & 182.3 & 11.28 \\
a_2 & & 6.13 & 2.65 & 3.48 & 50.2 & 281.6 & 286.8 & 177.7 & 11.28 \\
b_1 & (120) & 14.66 & 2.66 & 12.00 & 39.8 & 125.0 & 72.8 & 187.2 & 11.28 \\
b_2 & & 14.66 & 2.66 & 12.00 & 39.8 & 235.0 & 287.2 & 172.8 & 11.28 \\
\spaceclebsches
14\kern1.3em & \left(\frac{9}{2},3,\frac{1}{2}\right) & & & & & & & & \\
\spacelines
a_1 & (102) & 6.40 & 1.56 & 4.84 & 50.6 & 82.9 & 69.9 & 189.3 & 10.81 \\
a_2 & & 6.40 & 1.56 & 4.84 & 50.6 & 277.1 & 290.1 & 170.7 & 10.81 \\
b_1 & (120) & 9.66 & 1.56 & 8.09 & 40.5 & 128.9 & 70.0 & 187.4 & 10.81 \\
b_2 & & 9.66 & 1.56 & 8.09 & 40.5 & 231.1 & 290.0 & 172.6 & 10.81 \\
\bottomrule
\end{array}$\\
$
\arraycolsep=8pt
\begin{array}{rrd{2}d{2}d{2}d{1}d{1}d{1}d{1}d{2}}
\toprule
\text{Label} & (c_x,c_y,c_z) & \aligncell{\chi_\text{Tot}^2} & \aligncell{\chi_\text{q}^2} & \aligncell{\chi_\nu^2} & \aligncell{\theta_{23}[{}^\circ]} & \aligncell{\delta[{}^\circ]} & \aligncell{\gamma[{}^\circ]} & \aligncell{\alpha[{}^\circ]} & \aligncell{\theta_{12}^{eL}[{}^\circ]} \\
\midrule
\spaceclebsches
15\kern1.3em & \left(6,\frac{3}{2},2\right) & & & & & & & & \\
\spacelines
b_1 & (120) & 11.62 & 0.07 & 11.55 & 50.3 & 67.1 & 99.7 & 74.5 & 3.60 \\
b_2 & & 11.62 & 0.07 & 11.55 & 50.3 & 292.9 & 260.3 & 285.5 & 3.60 \\
\spaceclebsches
16\kern1.3em & \left(\frac{9}{2},1,\frac{3}{2}\right) & & & & & & & & \\
\spacelines
b_1 & (120) & 13.25 & 1.10 & 12.16 & 50.3 & 66.4 & 101.3 & 74.3 & 3.54 \\
b_2 & & 13.25 & 1.10 & 12.16 & 50.3 & 293.6 & 258.7 & 285.7 & 3.54 \\
\spaceclebsches
17\kern1.3em & \left(3,\frac{2}{3},1\right) & & & & & & & & \\
\spacelines
b_1 & (120) & 13.28 & 1.09 & 12.19 & 50.3 & 66.5 & 101.2 & 74.5 & 3.54 \\
b_2 & & 13.28 & 1.09 & 12.19 & 50.3 & 293.5 & 258.8 & 285.5 & 3.54 \\
\spaceclebsches
18\kern1.3em & \left(\frac{9}{2},\frac{3}{2},\frac{3}{2}\right) & & & & & & & & \\
\spacelines
b_1 & (120) & 13.30 & 3.35 & 9.95 & 50.2 & 62.0 & 103.7 & 68.4 & 3.69 \\
b_2 & & 13.30 & 3.35 & 9.95 & 50.2 & 298.0 & 256.3 & 291.6 & 3.69 \\
\spaceclebsches
19\kern1.3em & (6,2,2) & & & & & & & & \\
\spacelines
b_1 & (120) & 13.36 & 3.36 & 10.00 & 50.2 & 62.6 & 103.1 & 69.0 & 3.69 \\
b_2 & & 13.36 & 3.36 & 10.00 & 50.2 & 297.4 & 256.9 & 291.0 & 3.69 \\
\spaceclebsches
20\kern1.3em & (3,1,1) & & & & & & & & \\
\spacelines
b_1 & (120) & 13.41 & 3.35 & 10.06 & 50.2 & 63.1 & 102.7 & 69.6 & 3.69 \\
b_2 & & 13.41 & 3.35 & 10.06 & 50.2 & 296.9 & 257.3 & 290.4 & 3.69 \\
\bottomrule
\end{array}$\\
\caption{Results of the fit for model candidates specified by the CG coefficients and the CSD2 scenario. The table shows a complete list of CG coefficients $(c_x,c_y,c_z)$ with $\chi^2 < 15$, ordered according to their best $\chi^2$ value. For each combination of $(c_x,c_y,c_z)$ and CSD2 scenario ($\mathbf{Y}_\nu^\text{(102)}$ or $\mathbf{Y}_\nu^\text{(120)}$) all local minima with $\chi^2 < 15$ are listed. The 1st column assigns a unique label to each local minimum. The 2nd column specifies the CG coefficients and the type of neutrino Yukawa coupling. The quantity $\chi^2_\text{Tot}$ indicates the $\chi^2$ of the model, which includes all observables. $\chi^2_\text{q}$ contains the contributions of the $\chi^2$ coming from the quark and charged lepton Yukawa couplings and the CKM parameters, whereas in $\chi^2_\nu$ the remaining contributions to the $\chi^2$ from the neutrino mass squared differences and the PMNS angles are incorporated. In the last five columns the values of the two observables $\theta_{23}\equiv\theta_{23}^\text{PMNS}$, $\delta\equiv\delta^\text{PMNS}$, the two parameters $\gamma$, $\alpha$ and the $1$-$2$ left angle $\theta_{12}^{eL}$ of the charged leptons are shown.}\\
\label{tab2}
\end{longtable}
\begin{table}
\begin{align*}
\begin{array}{r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r@{\kern0.7em}r}
\toprule
\text{Label} & \tan\beta & \eta _b & \eta _q & x & y & z & \gamma[{}^{\circ}] & \theta_{12}^{uL} & m_a[\mathrm{eV}] & \epsilon & \alpha[{}^{\circ}] \\
\midrule
1a_2 & 46.9 & 0.449 & -0.344 & 0.00722 & 0.001833 & 0.001642 & 291.3 & 0.0871 & 0.0283 & 0.103 & 126.9 \\
3b_2 & 33.4 & -0.170 & 0.017 & 0.00347 & 0.000752 & 0.000777 & 289.0 & 0.0871 & 0.0261 & 0.119 & 269.5 \\
4b_2 & 48.5 & 0.599 & -0.048 & 0.00498 & 0.001396 & 0.001147 & 291.2 & 0.0871 & 0.0266 & 0.117 & 266.0 \\
3a_2 & 31.1 & -0.147 & 0.016 & 0.00317 & 0.000688 & 0.000710 & 287.1 & 0.0872 & 0.0263 & 0.116 & 96.8 \\
6a_2 & 31.0 & -0.141 & 0.021 & 0.00314 & 0.000687 & 0.000704 & 287.5 & 0.0870 & 0.0285 & 0.099 & 144.7 \\
7b_2 & 48.0 & 0.395 & 0.310 & 0.00374 & 0.000945 & 0.000850 & 282.2 & 0.0872 & 0.0263 & 0.121 & 272.3 \\
4a_2 & 49.3 & 0.600 & -0.048 & 0.00507 & 0.001422 & 0.001169 & 293.8 & 0.0871 & 0.0264 & 0.119 & 87.7 \\
8a_2 & 48.7 & 0.568 & 0.328 & 0.00365 & 0.000970 & 0.000834 & 288.5 & 0.0872 & 0.0290 & 0.098 & 212.6 \\
7a_2 & 49.1 & 0.494 & 0.309 & 0.00381 & 0.000964 & 0.000866 & 295.7 & 0.0873 & 0.0258 & 0.125 & 76.3 \\
8b_2 & 49.6 & 0.590 & 0.328 & 0.00372 & 0.000991 & 0.000852 & 291.1 & 0.0872 & 0.0292 & 0.097 & 141.5 \\
9a_2 & 32.5 & -0.167 & -0.308 & 0.00502 & 0.000982 & 0.001116 & 287.3 & 0.0871 & 0.0282 & 0.102 & 133.1 \\
10b_2 & 35.0 & -0.078 & -0.310 & 0.00542 & 0.001053 & 0.001203 & 290.2 & 0.0871 & 0.0262 & 0.119 & 268.5 \\
\bottomrule
\end{array}
\end{align*}
\caption{List of best-fit models. The table shows the model parameters of the $12$ best fit points with lowest $\chi^2$ from Table~\ref{tab2}. Note that the models~$2$ and $5$ are not considered. The corresponding local minima are essentially the same as the ones in model~$1$ and $4$ respectively, since the tuple of CG coefficients in model~$1$ and $2$, and in model~$4$ and $5$, differ only by an overall factor.}
\label{tab3}
\end{table}
\section*{Acknowledgements}
The work of S.A., C.H.~and V.S.~has been supported by the Swiss National Science Foundation. C.K.K. wishes to acknowledge support from the Swiss Government Excellence Scholarship (2017.0527) and the Royal Society-SERB Newton International Fellowship (NF171488).
\section{Introduction}
Many laboratory experiments exist today to search for or otherwise strongly constrain deviations from Newtonian gravity on submillimeter scales
\cite{Adelberger:2003zx,Will:2014kxa,Burrage:2016bwy,Burrage:2017qrf}. These often give tight bounds on the parameters of hypothetical Yukawa fifth forces, although it has recently become interesting also to consider their implications for nonlinear scalar fields. It is now known that when a scalar field is allowed to have both self-interactions and nonlinear couplings to the Standard Model, its phenomenology becomes markedly different.
\subsection{Chameleonlike particles}
Despite the enormous range of possibilities (see \cite{Clifton:2011jh,Joyce:2014kja} for reviews), a defining feature common to such scalar fields is a nonperturbative effect known as screening. Screening mechanisms drive the scalar to dynamically alter its properties in response to its surroundings, thus suppressing or enhancing the fifth force it mediates. Two models of screening are particularly suited to being tested in the laboratory and have justly been the focal point of experiments in recent years. The first is the chameleon mechanism \cite{Khoury:2003aq,Khoury:2003rn}, wherein the mass of the scalar varies accordingly with the ambient density, thus resulting in a Yukawa-like suppression of the range of its fifth force in dense environments. The second, dubbed the symmetron \cite{Hinterbichler:2010es,Hinterbichler:2011ca}, utilizes a Higgs-like potential and the spontaneous breaking of its $\mathbb Z_2$ symmetry to couple the scalar to matter when in high vacuum while decoupling it in dense media. Both models belong to the same universality class of scalar-tensor theories, and serve as archetypal examples of how variations in density can elicit screening. In this paper, we introduce the blanket term ``chameleonlike particle'' (CLP) to make it easier to refer to this class of models collectively.\footnote{Our choice of nomenclature draws inspiration from and highlights the contrast with axionlike particles (ALPs), which are (pseudo)scalar fields that do not couple to matter.}
At the time of its introduction, this novel idea of screening found tremendous success in enabling a CLP's evasion of the stringent fifth force constraints enforced by tests of gravity on Earth and in the Solar System that were already in place \cite{Will:2014kxa}. However, in some sense this success has been its own demise, having spurred the onset of a number of dedicated experiments searching specifically for signatures of screening. Today, most of the parameter space of the original chameleon model has been ruled out, leaving only a sliver still out of reach of current experiments. (See Ref.~\cite{Burrage:2017qrf} for a review of current constraints on CLPs.) In contrast, the space of symmetron models remains mostly unexplored. This state of affairs is due primarily to a lack of theoretical work in translating bounds from existing experiments conducted for the chameleon, although some of the blame is also borne by the symmetron's distinct phenomenology. Many laboratory experiments conducted in vacuum chambers are only sensitive to a small range of the symmetron mass (discussed further in Sec.~\ref{sec:sym}), meaning a large number of complementary experiments are needed to probe the parameter space fully. All in all, the question of whether scalar fifth forces exist in our Universe still remains open today. Our aim in this paper is to make further progress in answering this question.
We do so by taking an approach complementary to dedicated searches: A small number of high-precision experiments conducted and refined over the years have verified the accuracy of the Standard Model, and QED in particular, to the level of about one part per trillion. As CLPs are assumed to interact with all matter species, if present, they can give rise to additional effects that might tarnish this spectacular agreement between experiment and theory. Theoretical work in reanalyzing precision QED tests while incorporating the effects of such scalar fields is therefore interesting, since models in conflict with known physics can immediately be deemed unviable. Moreover, such work is also useful in elucidating where in parameter space future searches should direct their focus.
\subsection{Anomalous magnetic moment}
In this work, we investigate how the precision measurement of the electron's magnetic moment places bounds on both chameleons and symmetrons. The magnetic moment $\bm\mu$ can be written as
\begin{equation*}
\bm\mu = - g \mu_B \mathbf{S}
\end{equation*}
in terms of the spin $\mathbf{S}$ and the Bohr magneton $\mu_B = e/2 m_e$. (We work in units with $\hbar = c = 1$ throughout.) In the current state of the art, what is measured experimentally is the dimensionless ratio $g/2$, which is exactly one for a classical field governed by the Dirac equation. As is well known, quantum fluctuations slightly increase this value, making it a promising probe for the existence of new physics. The difference between the true and tree-level values is called the anomalous magnetic moment
\begin{equation*}
a = (g-2)/2.
\end{equation*}
To measure this, Hanneke~\emph{et al.}~\cite{HannekePRL,HannekePRA} confine a single electron in a cylindrical Penning trap, within which an axial magnetic field and quadratic electrostatic potential are maintained. The value of $a$ can then be inferred by measuring the eigenfrequencies of the electron in this vacuum cavity. Three measurements are needed: The cyclotron frequency $\bar\w_c$, the anomaly frequency $\bar\w_a$, and the axial frequency $\bar\w_z$, from which one deduces \cite{HannekePRA}
\begin{equation}
\label{eq:a_exp}
a_\text{exp} = \frac{\bar\w_a - \bar\w_z^2/(2\bar\w_c)}{\bar\w_c + 3 \delta_\text{rel}/2 + \bar\w_z^2/(2\bar\w_c)} + \frac{\Delta g_\text{cav}}{2}.
\end{equation}
In this paper, we denote experimentally measured frequencies $\bar\w_i$ with an overline to distinguish them from their theoretical counterparts. These, along with other experimental details relevant to this work, are discussed further in Sec.~\ref{sec:cav}. Two other quantities are present in Eq.~\eqref{eq:a_exp}: A small shift $\delta_\text{rel}$ is necessary to include the leading relativistic correction, whereas $\Delta g_\text{cav}$ is put in by hand to account for systematics arising from the interaction between the electron and radiation modes in the cavity. These considerations yield a measurement of $g/2$ precise to 0.28 parts per trillion \cite{HannekePRL,HannekePRA}:
\begin{equation*}
(g/2)_\text{exp} = 1.001\,159\,652\,180\,73\,(28).
\end{equation*}
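The quoted precision of 0.28 parts per trillion can be recovered from the value itself: the bracketed $(28)$ denotes an absolute uncertainty of 28 in the last two digits shown, i.e. $28\times10^{-14}$. A minimal check:

```python
# Relative precision of (g/2)_exp = 1.001 159 652 180 73 (28):
# the bracketed (28) is an absolute uncertainty of 28e-14.
g_half = 1.00115965218073
sigma = 28e-14
precision_ppt = sigma / g_half * 1e12   # in parts per trillion, ~0.28
```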
Just as spectacular an achievement is its agreement with the Standard Model, which predicts a theoretical value
\begin{equation}
\label{eq:a_sm}
a_\text{SM} = \sum_{n=1}^\infty C_n (\alpha/\pi)^n + a_\text{ew} + a_\text{had}.
\end{equation}
The first term is the asymptotic series arising from QED, calculations for which have now been completed up to $n=5$ loops \cite{Aoyama:2012wj,Aoyama:2014sxa,*PhysRevD.96.019901}. Also relevant at the experiment's level of precision are small contributions from the electroweak and hadronic sectors, encapsulated in the remaining two terms. (See Ref.~\cite{Giudice:2012ms} for a more in-depth discussion.) The series in Eq.~\eqref{eq:a_sm} takes as input a value for the fine-structure constant that must be determined experimentally. For this purpose, the most precise, independent determination of $\alpha$ comes from combining measurements of the Rydberg constant \cite{codata} and the ratio $h/m_\text{Rb}$ obtained from recoil experiments with rubidium atoms \cite{Clade:2006zz,Cadoret:2008st,Bouchendira:2010es}. These yield the value
\begin{equation*}
\alpha^{-1}(\text{Rb}) = 137.035\,999\,049\,(90),
\end{equation*}
with the uncertainty dominated by the measurement of $h/m_\text{Rb}$. Substituting this into Eq.~\eqref{eq:a_sm}, the end result is an agreement between theory and experiment at 1.7 standard deviations \cite{Aoyama:2014sxa},
\begin{equation}
\label{eq:a_compare}
a_\text{SM} - a_\text{exp} = (1.30 \pm 0.77)\times 10^{-12}.
\end{equation}
The $1\sigma$ uncertainty above is dominated by the errors accrued in measuring $h/m_\text{Rb}$.
\subsection{Effects from a CLP}
\label{sec:intro_effects}
If a CLP exists in our Universe, three additional effects come into play:
\begin{enumerate}
\item \emph{Quantum corrections}: Virtual chameleons and symmetrons run in loops, generating additional corrections to the QED vertex function. These slightly increase the intrinsic value of the electron's magnetic moment.
\item\emph{Cavity shift}: Nonlinear scalar fields invariably form a bubblelike profile inside vacuum cavities, thus exerting an additional fifth force on the electron. This shifts its eigenfrequencies by a small amount $\omega_i \to \omega_i + \delta\omega_i$. Unlike the intrinsic change in~(1), this is a systematic effect coming from the experimental setup, which must be corrected for to obtain an accurate value of $a_\text{exp}$.
\item\emph{Charge rescaling}: Scalars that couple to the photon induce a field-dependent rescaling of the electron charge, or equivalently, of the fine-structure constant $\alpha \to \alpha(\phi)$ \cite{PhysRevD.25.1527,Sandvik:2001rv,Barrow:2011kr,Olive:2007aj,Brax:2009ey,Brax:2010uq,Brax:2013doa}. If the local values of $\phi$ present in the experiments used to determine $\alpha(\text{Rb})$ differ from that in the Penning trap, then $\alpha(\text{Rb})$ must be appropriately rescaled before being substituted into Eq.~\eqref{eq:a_sm}.
\end{enumerate}
All three effects add up to an overall deviation $\delta a$. Compatibility with Eq.~\eqref{eq:a_compare} requires that this must be constrained, at the $2\sigma$ level, to lie within
\begin{equation}
\label{eq:da_constraint}
| \delta a + 1.30 \times 10^{-12}| < 1.54 \times 10^{-12}.
\end{equation}
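The numbers above can be cross-checked directly: the 1.7 standard deviations quoted earlier is the ratio of the central value to the uncertainty in Eq.~\eqref{eq:a_compare}, and the $2\sigma$ window of Eq.~\eqref{eq:da_constraint} follows by doubling the error bar. A minimal sketch:

```python
# Cross-check of the 2-sigma window in Eq. (eq:da_constraint),
# starting from a_SM - a_exp = (1.30 +/- 0.77) x 10^-12.
central, sigma = 1.30e-12, 0.77e-12

tension = central / sigma      # ~1.7 standard deviations
half_width = 2 * sigma         # the 1.54e-12 appearing in Eq. (eq:da_constraint)

# Allowed range for the total deviation delta_a at 2 sigma:
# |delta_a + 1.30e-12| < 1.54e-12
lower = -central - half_width  # -2.84e-12
upper = -central + half_width  # +0.24e-12
```

Note that the allowed window is asymmetric: a positive deviation $\delta a$ (as produced by the loop corrections below) may be at most $0.24\times10^{-12}$.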
Contributions from both the quantum and cavity effects can be estimated by considering the experiment of Hanneke~\emph{et al.}~in isolation, but including the variation of the fine-structure constant requires, in addition, a good understanding of how the scalar behaves in the experimental setups leading to the value of $\alpha(\text{Rb})$. This is a far more involved task, which lies beyond the scope of this paper. For simplicity, we shall assume in what follows that the value of $\alpha$ is identical in all relevant experiments. This assumption is not expected to have a negative impact on our results. Considering only the first two effects is sufficient to provide conservative bounds on the model parameters, which can only be expected to improve once charge rescaling is properly taken into account. In fact, only the bound on the photon coupling has room for improvement; our constraints for the matter coupling are robust against charge rescaling since the relevant physics is independent of~$\alpha$.
\subsection{Outline of this paper}
The remainder of this paper is organized as follows: The details that go into quantifying the effect of quantum corrections and the cavity shift are discussed in Secs.~\ref{sec:quantum} and \ref{sec:cav}, respectively. Up to this point, the calculations are kept as general as possible, and will apply to any nonlinear scalar field with a canonical kinetic term, a self-interaction potential, and couplings to the Standard Model. The reader interested primarily in the punchline may prefer to jump directly to Sec.~\ref{sec:chm}. There, the calculations are completed by specializing to the chameleon model, and the constraints on parameter space are determined. The same process is repeated for the symmetron in Sec.~\ref{sec:sym}. We summarize in Sec.~\ref{sec:conclusions}.
\section{Quantum corrections}
\label{sec:quantum}
The scalar fields we consider couple universally to matter and mediate a fifth force. At the quantum level, virtual exchange of these scalars leads to additional loop corrections to the QED vertex function, in turn resulting in an increase in the intrinsic value of the electron's magnetic moment.
\subsection{Lagrangian}
\label{sec:L}
We begin this section by briefly reviewing the ingredients that constitute chameleon and symmetron models. Both belong to the same family of scalar-field theories governed by the Lagrangian\footnote{For the purposes of laboratory experiments, it suffices to work in flat space. See, e.g., the reviews in Refs.~\cite{Clifton:2011jh,Joyce:2014kja} for the covariant form of this action. Our metric signature is $(-,+,+,+)$.}
\begin{equation}
\label{eq:L}
\mathcal L = -\frac{1}{2}(\partial\phi)^2 - V(\phi) + \mathcal L_m(\Psi,\phi),
\end{equation}
where the Standard Model fields (denoted collectively by $\Psi$) and their couplings to $\phi$ are encapsulated in the third term $\mathcal L_m$. Massive fermions, such as the electron, obey the modified Dirac equation \cite{Brax:2010jk}
\begin{equation}
\label{eq:Dirac}
\mathcal L_m \supset \overline\psi[i\slashed D - \Omega(\phi) m_e]\psi,
\end{equation}
where $D_\mu = \partial_\mu + i e A_\mu$ is the usual gauge-covariant derivative, but the mass term has picked up a dependence on the scalar via the conformal function\footnote{$\Omega(\phi)$ is often also called $A(\phi)$ elsewhere in the literature. In this paper, we reserve $A$ for referring to the electromagnetic gauge field.} $\Omega(\phi)>0$. To satisfy the weak equivalence principle, nonrelativistic fluids with a conserved density distribution $\rho$ couple to $\phi$ via a similar interaction
\begin{equation}
\mathcal L_m \supset -\Omega(\phi)\rho.
\end{equation}
A coupling to the electromagnetic sector is also possible, since one is not forbidden by symmetries \cite{Brax:2009ey,Brax:2010uq}. Here one has the freedom to specify a different coupling function $\varepsilon(\phi)>0$, which modifies the kinetic term of the photon to read
\begin{equation}
\label{eq:defVarepsilon}
\mathcal L_m \supset - \frac{1}{4}\varepsilon(\phi) F_{\mu\nu} F^{\mu\nu}.
\end{equation}
As both $\Omega(\phi)$ and $\varepsilon(\phi)$ introduce nonrenormalizable operators into the Lagrangian, these theories should be viewed as low-energy effective field theories (EFTs) valid only below some cutoff. Well within this regime, these models typically satisfy $\Omega(\phi) \approx 1$ and $\varepsilon(\phi) \approx 1$. For this reason, their phenomenology is more aptly framed in terms of the dimensionless coupling strengths
\begin{equation}
\beta_m(\phi) = \ensuremath{M_\text{Pl}}\frac{\text{d}\log\Omega}{\text{d}\phi},
\quad
\beta_\gamma(\phi) = \ensuremath{M_\text{Pl}}\frac{\text{d}\log\varepsilon}{\text{d}\phi},
\end{equation}
where $\ensuremath{M_\text{Pl}} = (8\pi G_\text{N})^{-1/2}$ is the reduced Planck mass. These theories are most interesting when $\beta_m, \beta_\gamma \geq 1$, corresponding to interactions that are of gravitational strength or greater.
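As an illustration of these definitions, the coupling strength can be recovered numerically from a given conformal function. The exponential form $\Omega(\phi)=e^{\beta\phi/\ensuremath{M_\text{Pl}}}$ used below is only a common example, chosen here for illustration (for it, $\beta_m$ is the constant $\beta$); units are chosen so that $\ensuremath{M_\text{Pl}}=1$.

```python
import math

# Numerical check that beta_m = M_Pl * d(log Omega)/d(phi) recovers the
# constant beta for an (assumed, illustrative) exponential coupling
# Omega(phi) = exp(beta * phi / M_Pl).  Units with M_Pl = 1.
M_PL = 1.0
BETA = 2.0

def omega(phi):
    return math.exp(BETA * phi / M_PL)

def beta_m(phi, h=1e-6):
    # central finite difference of log(Omega)
    return M_PL * (math.log(omega(phi + h)) - math.log(omega(phi - h))) / (2 * h)
```

For a quadratic, symmetron-type coupling $\Omega = 1 + \phi^2/2M^2$, the same definition instead yields the field-dependent strength $\beta_m(\phi) \approx \ensuremath{M_\text{Pl}}\,\phi/M^2$.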
\subsection{Vertex corrections}
\label{sec:quantum_loops}
To compute loop corrections, let us consider quantum fluctuations $\chi = \phi - \avg\phi$ about the classical background field profile $\avg\phi$ in the cavity where $g/2$ is to be measured. As the electron remains very close to the center of the cavity (see Sec.~\ref{sec:cav_motion}), it suffices to take $\avg\phi \approx \phi_0$ to be a constant, where $\phi_0$ is the classical field value at the center.
We shall restrict ourselves to the one-loop level, which is sufficient for determining the leading effect. At this order, the only influence from $V(\phi)$ is a mass term for the $\chi$ field, with mass $m_0$ given by the second derivative
\begin{equation}
\label{eq:def_mphi}
m_0^2 = V_{\text{eff},\phi\phi}(\phi_0)
\end{equation}
evaluated at the center of the cavity.\footnote{CLPs suffer from the usual hierarchy problem, since heavy particles running in loops induce large corrections to the scalar's mass. Some fine tuning must be tolerated in these theories to keep the classical predictions reliable.} Linearizing Eqs.~\eqref{eq:Dirac} and \eqref{eq:defVarepsilon}, the interaction terms relevant at this order are \cite{Brax:2009ey,Brax:2009aw}
~\vspace{-1\baselineskip}
\begin{equation}
\label{eq:L_linear_couplings}
\mathcal L_m \supset - \left( \frac{\beta_m m_e}{\ensuremath{M_\text{Pl}}} \right) \overline\psi\psi \chi - \frac{1}{4}\left(\frac{\beta_\gamma}{\ensuremath{M_\text{Pl}}}\right) \chi F_{\mu\nu} F^{\mu\nu},
\end{equation}
where we write $\beta_m \equiv \beta_m(\phi_0)$ and $\beta_\gamma \equiv \beta_\gamma(\phi_0)$ for brevity. Overall factors of $\Omega(\phi_0) \approx 1$ and $\varepsilon(\phi_0) \approx 1$ can be absorbed into a renormalization of the electron mass $m_e$ and charge $-e$, respectively.
Three Feynman diagrams contribute to the value of $g/2$ at one-loop order, as shown in Fig.~\ref{fig:FeynmanDiagrams}. As these diagrams have been widely considered for many different scenarios (see, e.g., Refs.~\cite{Giudice:2012ms,Jegerlehner:2009ry,PhysRevD.5.2396,Chen:2015vqy,Marciano:2016yhf}), we shall merely quote their result here in the main text. For the benefit of the inquisitive reader, a brief description of how these computations are carried out is relegated to Appendix~\ref{app:feyn}.
\begin{figure}
\includegraphics[width=70mm]{fig_feyn}
\caption{Scalar field (dashed line) contributions at one-loop order to the magnetic moment of the electron.}
\label{fig:FeynmanDiagrams}
\end{figure}
The first diagram in Fig.~\ref{fig:FeynmanDiagrams}(a) gives the finite contribution
\begin{equation}
\label{eq:da_bMbM}
\delta a \supset 2\beta_m^2 \left(\frac{m_e}{4\pi\ensuremath{M_\text{Pl}}}\right)^2 I_1(m_0/m_e),
\end{equation}
whereas the remaining two diagrams are UV divergent. After renormalization in the $\overline{\text{MS}}$ scheme, they yield
\begin{equation}
\label{eq:da_bMbG}
\delta a \supset 4\beta_m\beta_\gamma\left(\frac{m_e}{4\pi\ensuremath{M_\text{Pl}}}\right)^2
\left[ \log\left(\frac{\mu}{m_e}\right) + I_2(m_0/m_e)\right],
\end{equation}
where $\mu$ is an arbitrary energy scale. These results are expressed in terms of two integrals,
\begin{subequations}
\label{eq:feyn_integrals}
\begin{align}
\label{eq:feyn_integral_1}
I_1(\eta) &= \int_0^1\text{d}x \frac{(1-x)^2(1+x)}{(1-x)^2+x \eta^2},
\\
\label{eq:feyn_integral_2}
I_2(\eta) &= \int_0^1\text{d}x\int_0^1\text{d}y (x-1)\log[x^2 + (1-x)y \eta^2];
\end{align}
\end{subequations}
for which closed-form expressions can be found. For $\eta \geq 0$, we have
\begin{subequations}
\label{eq:feyn_integrals_closedform}
\begin{align}
I_1(\eta) =&\,
\frac{3}{2} - \eta^2 - \eta^2 (3 - \eta^2) \log \eta
\nonumber\\
&
-\eta (\eta^2 - 4)^{1/2} (\eta^2 - 1) \log \left( \frac{\eta}{2} + \sqrt{\frac{\eta^2}{4} - 1} \right),
\label{eq:feyn_integrals_closedform_1}
\\
I_2(\eta) =&\,
\frac{3}{2} - \frac{\eta^2}{6} + \frac{\eta^2}{6} (\eta^2 - 6)\log\eta
\nonumber\\
&+
\frac{\eta}{6} (\eta^2 - 4)^{3/2}\log\left( \frac{\eta}{2} - \sqrt{\frac{\eta^2}{4} - 1} \right),
\label{eq:feyn_integrals_closedform_2}
\end{align}
\end{subequations}
where the principal branch should be taken when $\eta < 2$. Alternatively, a piecewise expression for $I_1$ can also be found in Ref.~\cite{Chen:2015vqy}. Most of the time, however, we shall find ourselves working in the regime $m_0 \ll m_e$, such that it suffices to set $m_0/m_e = 0$ in the integrals. Both then evaluate to
\begin{equation*}
I_1(0) = I_2(0) = \frac{3}{2}.
\end{equation*}
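The closed-form expressions in Eq.~\eqref{eq:feyn_integrals_closedform} can be checked against direct numerical quadrature of Eq.~\eqref{eq:feyn_integrals}. The sketch below uses only the standard library: composite Simpson integration in $x$, the elementary $y$ integral in $I_2$ done analytically, and complex arithmetic (principal branch) to evaluate the closed forms for $\eta<2$.

```python
import math, cmath

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def I1_quad(eta):
    # Direct quadrature of Eq. (eq:feyn_integral_1), for eta > 0.
    def f(x):
        return (1 - x) ** 2 * (1 + x) / ((1 - x) ** 2 + x * eta ** 2)
    return simpson(f, 0.0, 1.0)

def I2_quad(eta):
    # Eq. (eq:feyn_integral_2): the y integral is elementary,
    # int_0^1 log(a + b*y) dy = ((a+b)log(a+b) - a log a)/b - 1.
    def inner(x):
        a, b = x * x, (1 - x) * eta ** 2
        if b < 1e-9:                      # x -> 1 limit
            return math.log(a)
        t = (a + b) * math.log(a + b) - (a * math.log(a) if a > 0 else 0.0)
        return t / b - 1.0
    return simpson(lambda x: (x - 1) * inner(x), 0.0, 1.0)

def I1_closed(eta):
    # Eq. (eq:feyn_integrals_closedform_1); principal branch for eta < 2.
    e2 = eta ** 2
    s = complex(e2 - 4) ** 0.5
    val = (1.5 - e2 - e2 * (3 - e2) * math.log(eta)
           - eta * s * (e2 - 1) * cmath.log(eta / 2 + s / 2))
    return val.real

def I2_closed(eta):
    # Eq. (eq:feyn_integrals_closedform_2); principal branch for eta < 2.
    e2 = eta ** 2
    z = complex(e2 - 4)
    val = (1.5 - e2 / 6 + (e2 / 6) * (e2 - 6) * math.log(eta)
           + (eta / 6) * z ** 1.5 * cmath.log(eta / 2 - z ** 0.5 / 2))
    return val.real
```

At $\eta=1$ the closed form reduces to the exact value $I_1(1)=1/2$, and both closed forms approach $3/2$ as $\eta\to0$, as stated in the text.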
\subsection{Nonrenormalizability}
\label{sec:quantum_ren}
It is worth discussing the result in Eq.~\eqref{eq:da_bMbG} in more detail. The scalar-photon coupling $\chi F_{\mu\nu}F^{\mu\nu}$ is a dimension-five operator, whose inclusion renders the theory nonrenormalizable. This plagues the evaluation of the diagrams in Figs.~\ref{fig:FeynmanDiagrams}(b) and \ref{fig:FeynmanDiagrams}(c), as their UV-divergent parts cannot be renormalized into any of the existing parameters in the Lagrangian we started with, such as the electron charge or particle masses. This is not uncommon in a low-energy EFT, and it must be understood that the scalar-photon coupling cannot remain pointlike up to arbitrarily high energies. This is dealt with in Ref.~\cite{Marciano:2016yhf} by assuming a sharp momentum cutoff. Here, we shall take an alternative route compatible with dimensional regularization, although in practice the end results are similar, since physics should not depend on the choice of regulator.
The resolution is to recognize that under RG flow, the heavy degrees of freedom that have been integrated out to generate the scalar-photon coupling must also generate a bare term\footnote{We have written the coupling as $a_0 \mu_B$ to make manifest its contribution to the magnetic moment. Of course, in an EFT language, one should think of this as $a_0\mu_B \sim c_5/M_\star$, where $c_5$ is a dimensionless coupling and $M_\star$ is the appropriate cutoff scale.}
\begin{equation}
\mathcal{L} \supset - a_0 \mu_B \overline\psi S^{\mu\nu} F_{\mu\nu} \psi
\end{equation}
in the Lagrangian, where $S^{\mu\nu} = \frac{i}{4}[\gamma^\mu,\gamma^\nu]$. The UV divergences that arise at one loop can now be absorbed into counterterms that renormalize $a_0$. This naturally gives an extra contribution $\delta a \supset a_0$, which, in the absence of knowledge of the UV completion, is a new parameter to be constrained by experiment. For simplicity, we shall assume the UV completion is such that $a_0$ is so much smaller than the one-loop contributions in Eqs.~\eqref{eq:da_bMbM} and \eqref{eq:da_bMbG} that it can be safely neglected.
On the other hand, the arbitrary scale $\mu$ should in principle be fixed by measuring $g/2$ at a given energy, after which Eq.~\eqref{eq:da_bMbG} dictates how this changes as we vary the energy of the experiment. Unlike at particle colliders, however, there is an ambiguity in determining the scale $\mu$ of low-energy experiments like the one considered in this paper. Nevertheless, as $\mu$ appears only as the argument of a logarithm, its exact value is not crucial, and in practice a conservative estimate is to set
\begin{equation*}
\log(\mu/m_e) \sim 1.
\end{equation*}
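Putting the pieces of this section together gives a quick order-of-magnitude feel for the loop contribution. In the light-scalar regime $m_0 \ll m_e$, Eqs.~\eqref{eq:da_bMbM} and \eqref{eq:da_bMbG} with $I_1(0)=I_2(0)=3/2$ and $\log(\mu/m_e)\sim1$ reduce to a simple formula. The sketch below evaluates it for illustrative coupling strengths (the comparison value $0.24\times10^{-12}$ is the upper edge of the $2\sigma$ window implied by Eq.~\eqref{eq:da_constraint}); this is a rough estimate, not the full analysis of the later sections.

```python
import math

# One-loop CLP contribution to a = (g-2)/2 in the regime m_0 << m_e,
# using I1(0) = I2(0) = 3/2 and log(mu/m_e) ~ 1 (see text).
M_E = 0.51099895e-3      # electron mass [GeV]
M_PL = 2.435e18          # reduced Planck mass [GeV]
PREF = (M_E / (4 * math.pi * M_PL)) ** 2   # ~2.8e-46

def delta_a(beta_m, beta_gamma=0.0):
    matter = 2 * beta_m ** 2 * PREF * 1.5                # Eq. (eq:da_bMbM)
    photon = 4 * beta_m * beta_gamma * PREF * (1 + 1.5)  # Eq. (eq:da_bMbG)
    return matter + photon

# Upper edge of the 2-sigma window from Eq. (eq:da_constraint):
# positive deviations must satisfy delta_a < 0.24e-12.
UPPER = 0.24e-12
```

For the matter coupling alone, this crude estimate excludes $\beta_m \gtrsim 10^{17}$ while leaving $\beta_m \sim 10^{16}$ marginally allowed, indicating the scale at which quantum corrections become constraining.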
\section{Cavity shift}
\label{sec:cav}
A defining feature of CLPs is their predisposition for forming a bubblelike profile when trapped in a vacuum cavity. This nontrivial profile will couple to the electron confined to the center of the Penning trap, exerting a fifth force which mildly shifts the energies of the electron's eigenstates. Unlike the intrinsic change described in Sec.~\ref{sec:quantum}, this is a systematic effect arising from considerations of how the experiment is conducted, which can also be used to place constraints. In this section, we describe how to account for this cavity shift, and quantify its contribution to the total deviation $\delta a$. Details of the experiment are described along the way, when needed, but only at a cursory level sufficient for our analysis. We refer the interested reader to the original experimental papers \cite{HannekePRL,HannekePRA} or the associated review \cite{Brown} for a more comprehensive account.
\subsection{Vacuum cavity profile}
\label{sec:cav_profile}
The electron's magnetic moment is measured using what is called a one-electron quantum cyclotron. In this setup, a single electron is trapped in a cylindrical vacuum cavity of radius $r_0$ and half-height $z_0$. The values of all experimental parameters, and the measured frequencies, are curated in Table~\ref{table:values}. A uniform magnetic field
\begin{subequations}
\label{eq:bare_em_fields}
\begin{equation}
\label{eq:B}
\mathbf{B} = B_0 \hat{\mathbf{z}}
\end{equation}
is established within the cavity to split the energy levels of the electron's spin states. A quadratic electrostatic potential\footnote{This expression differs by an overall sign from Ref.~\cite{Brown} because we use the convention that the electron has charge $-e$. In our case, both constants $e$ and $V_0$ are positive.}
\begin{equation}
\label{eq:V}
V = \frac{V_0}{2d^2}\left( \frac{r^2}{2} - z^2 \right)
\end{equation}
\end{subequations}
is also present to keep the electron close to the center, where the constant
\begin{equation*}
d = (r_0^2/4 + z_0^2/2)^{1/2} \approx 3.5\,\text{mm}
\end{equation*}
can be thought of as a characteristic length scale of the trap.
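The quoted value follows directly from the cavity dimensions: with $r_0 = 4.5\,$mm and $z_0 = 3.85\,$mm, one finds $d \approx 3.53\,$mm, consistent with the quoted $3.5\,$mm. A minimal check:

```python
import math

# Characteristic trap length d = (r0^2/4 + z0^2/2)^(1/2)
# from the cavity radius r0 = 4.5 mm and half-height z0 = 3.85 mm.
r0 = 4.5e-3            # m
z0 = 7.7e-3 / 2        # m
d = math.sqrt(r0 ** 2 / 4 + z0 ** 2 / 2)   # ~3.53e-3 m
```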
\begin{table}
\caption{Values of the experimental parameters and frequencies, reproduced from Refs.~\cite{HannekePRL,HannekePRA}. Up to small differences, the theoretical frequencies $\{ \omega_+, \omega_0, \omega_z \}$ are approximately related to their experimentally measured counterparts by $\omega_+ \approx \omega_0 \approx \bar\w_c$ and $\omega_z \approx \bar\w_z$. (See text in Secs.~\ref{sec:cav_H0} and \ref{sec:cav_dH} for details.)}
\label{table:values}
{\renewcommand{\arraystretch}{0}
\begin{ruledtabular}
\begin{tabular}{lccc}
Magnetic field & $B_0$ & 5.36 & T \\
Electrode potential difference & $V_0$ & 101.4 & V \\
Cavity radius & $r_0$ & 4.5 & mm\\
Cavity height & $2z_0$ & 7.7 & mm\\[5pt]
Cyclotron frequency & $\bar\w_c/2\pi$ & 150 & GHz \\
Anomaly frequency & $\bar\w_a/2\pi$ & 174 & MHz \\
Axial frequency & $\bar\w_z/2\pi$ & 200 & MHz \\
Magnetron frequency & $\omega_-/2\pi$ & 133 & kHz \\[5pt]
\end{tabular}
\end{ruledtabular}}
\end{table}
The profile of the scalar inside the vacuum cavity is determined by solving its field equation in the static limit,
\begin{equation}
\label{eq:eom_scalar}
\nabla^2\phi = V_{\text{eff},\phi},
\end{equation}
where the comma denotes a partial derivative with respect to $\phi$. It follows from the Lagrangian in Sec.~\ref{sec:L} that the effective potential differentiates to give
\begin{equation}
V_{\text{eff},\phi} = V_{,\phi} + \frac{\beta_m(\phi)\rho}{\ensuremath{M_\text{Pl}}} + \frac{\beta_\gamma(\phi)\rho_\text{em}}{\ensuremath{M_\text{Pl}}}.
\end{equation}
The electromagnetic energy density $\rho_\text{em} = (\mathbf{B}^2 - \mathbf{E}^2)/2$ that enters on the rhs is given by Eq.~\eqref{eq:bare_em_fields} in the interior of the cavity, while it is assumed to be negligible in the exterior. The distribution $\rho$ of matter is assumed to be piecewise constant, such that
\begin{equation*}
\rho =
\begin{cases}
\rho_\text{cav} & \text{inside the cavity } (r < r_0, |z| < z_0), \\
\rho_\text{wall} & \text{in the surrounding walls}.
\end{cases}
\end{equation*}
While no direct measurement of the density of gas $\rho_\text{cav}$ in the cavity has been made, an estimate from a similar trap design places an upper bound on the number density of atoms at $100\,\text{cm}^{-3}$ \cite{HannekePRA,PhysRevLett.65.1317}. Assuming this remains true for the current implementation, and taking the average mass of a molecule to be that of nitrogen, we estimate
\begin{equation*}
\rho_\text{cav} \lesssim 5 \times 10^{-18}\,\text{kg}\,\text{m}^{-3}.
\end{equation*}
On the other hand, the trap electrodes and vacuum container surrounding the cavity are composed primarily of silver, quartz, titanium, and molybdenum \cite{HannekePRA}, which have typical densities
\begin{equation*}
\rho_\text{wall} \gtrsim 3 \times 10^3\,\text{kg}\,\text{m}^{-3}.
\end{equation*}
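These order-of-magnitude estimates are straightforward to reproduce; a minimal numerical sketch, assuming a nitrogen molecular mass of $28\,\text{u}$ and the quoted bound of $100\,\text{cm}^{-3}$:

```python
# Order-of-magnitude check of the density estimates above. The nitrogen
# molecular mass (28 u) is the stated assumption; 100 cm^-3 is the quoted
# upper bound on the number density of residual gas.
u = 1.66e-27                      # atomic mass unit [kg]
m_N2 = 28 * u                     # mass of an N2 molecule [kg]
n_gas = 100 * 1e6                 # 100 cm^-3 converted to m^-3

rho_cav = n_gas * m_N2            # upper bound on the gas density
contrast = 3e3 / rho_cav          # wall-to-cavity density contrast

print(f"rho_cav  ~ {rho_cav:.1e} kg m^-3")    # ~ 5e-18
print(f"contrast ~ {contrast:.0e}")           # ~ 1e21
```

The enormous density contrast between the walls and the cavity interior is what drives the screening behavior discussed below.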
For the two-dimensional cylindrical geometry considered here, an analytic solution to Eq.~\eqref{eq:eom_scalar} is not known. We postpone a full numerical solution of this equation to Secs.~\ref{sec:chm} and \ref{sec:sym}, where we specialize to chameleon and symmetron models, respectively. Nevertheless, we can continue to make analytic progress in this section because the experiment is cooled to an extremely low temperature $T \sim 100\,\text{mK}$, such that the electron remains very close to the center of the cavity. (We shall be more quantitative about this in Sec.~\ref{sec:cav_motion}.) Whatever the field profile is, it can be Taylor expanded about the center, which we take to be the origin, as
\begin{equation}
\label{eq:phi_Taylor}
\phi \simeq \phi_0 + \phi_{rr} \frac{r^2}{2 r_0^2} + \phi_{zz} \frac{z^2}{2z_0^2}.
\end{equation}
The central field value $\phi_0$ is a local maximum, hence we must have $\phi_{rr},\phi_{zz}<0$. Reflection symmetry in all three spatial directions ensures that the expansion contains only even powers of $r$ and $z$. Quartic and higher-order terms have been neglected since they are suppressed by additional powers of $\langle r^2/r_0^2 \rangle \ll 1$ and $\langle z^2/z_0^2 \rangle\ll 1$.
\subsection{Electromagnetic corrections}
\label{sec:cav_em}
The coupling function $\varepsilon(\phi)$ should be thought of as a relative permittivity of the vacuum, since it appears in the Maxwell equations as
\begin{equation}
\partial_\nu (\varepsilon F^{\mu\nu}) = J^\mu.
\end{equation}
The presence of a nontrivial scalar profile $\phi$ polarizes the vacuum, generating bound charges and currents that go on to source corrections to the bare electromagnetic fields. In a previous paper \cite{Wong:2017jer}, two of us showed that, at least in the case of the spectral lines of hydrogenlike atoms, this effect is large enough that it must be included. Moreover, it led to terms that allow a constraint on $\beta_\gamma$ independently of $\beta_m$. Given the large magnetic field in the cavity, it is worth exploring if the same is true for this experiment.
Solving Maxwell's equations perturbatively in the Lorenz gauge, the first-order corrections are given by
\begin{equation}
\nabla^2 \delta A_\mu = \frac{\beta_\gamma(\phi)}{\ensuremath{M_\text{Pl}}} F_{\mu\nu}^{(0)} \partial^\nu\phi,
\end{equation}
where $F_{\mu\nu}^{(0)}$ describes the bare (zeroth-order) electric and magnetic fields, as given in Eq.~\eqref{eq:bare_em_fields}. Restricting ourselves to the quadratic terms in Eq.~\eqref{eq:phi_Taylor}, the correction to the electrostatic potential is
\begin{equation}
\delta V = \frac{V_0}{2d^2}\frac{\beta_\gamma(\phi_0)}{\ensuremath{M_\text{Pl}}}\left( \phi_{rr} \frac{r^4}{16 r_0^2} - \phi_{zz} \frac{z^4}{6 z_0^2}\right),
\end{equation}
whereas the magnetic field receives corrections of the form
\begin{subequations}
\begin{align}
\delta\mathbf{A} &= B_0\phi_{rr}\frac{\beta_\gamma(\phi_0)}{\ensuremath{M_\text{Pl}}}\frac{r^2}{8 r_0^2} (y\hat{\mathbf{x}} - x \hat{\mathbf{y}}),
\\
\delta\mathbf{B} &= - B_0\phi_{rr}\frac{\beta_\gamma(\phi_0)}{\ensuremath{M_\text{Pl}}}\frac{r^2}{2 r_0^2} \hat{\mathbf{z}}.
\end{align}
\end{subequations}
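The relative factor of $4$ between the coefficients of $\delta\mathbf{A}$ and $\delta\mathbf{B}$ follows from $\delta\mathbf{B} = \nabla\times\delta\mathbf{A}$; a finite-difference sketch, with the overall constant $B_0\phi_{rr}\beta_\gamma(\phi_0)/(8\ensuremath{M_\text{Pl}} r_0^2)$ scaled to unity:

```python
# Check that delta B_z = -4 C r^2 for delta A = C r^2 (y, -x, 0), i.e. that
# the coefficient of delta B is four times that of delta A (here C = 1).
def Ax(x, y):
    return (x*x + y*y) * y

def Ay(x, y):
    return -(x*x + y*y) * x

def curl_z(x, y, h=1e-6):
    # z-component of the curl via central differences
    dAy_dx = (Ay(x + h, y) - Ay(x - h, y)) / (2*h)
    dAx_dy = (Ax(x, y + h) - Ax(x, y - h)) / (2*h)
    return dAy_dx - dAx_dy

x, y = 0.3, 0.4                        # arbitrary test point, r^2 = 0.25
print(curl_z(x, y), -4 * (x*x + y*y))  # both ≈ -1.0
```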
\subsection{Hamiltonian}
\label{sec:cav_H}
The electron at the center of the Penning trap is adequately described by nonrelativistic quantum mechanics. In this limit, the modified Dirac equation in Eq.~\eqref{eq:Dirac} reduces to the Schr\"odinger equation with Hamiltonian \cite{Brax:2010jk,Brax:2010gp}
\begin{equation}
\label{eq:H}
H = \frac{(\mathbf{p} + e\mathbf{A})^2}{2m_e} - eV + g \mu_B \mathbf{B}\cdot\mathbf{S} + \Omega(\phi) m_e,
\end{equation}
where subleading terms of the form $\sim\mathcal O(\Omega\mathbf{p}^2)$ have been discarded. Ignoring the constant mass term, this Hamiltonian can be split into two parts,
\begin{equation*}
H = H_0 + \delta H.
\end{equation*}
The unperturbed Hamiltonian, for which the eigenstates can be determined exactly, is
\begin{equation}
\label{eq:H0_raw}
H_0 = \frac{\bm\pi^2}{2m_e} - eV + g \mu_B \mathbf{B}\cdot\mathbf{S},
\end{equation}
where the mechanical momentum is defined as $\bm\pi = \mathbf{p} + e \mathbf{A}$. It should be understood that the electromagnetic fields appearing here take their bare values, as in Eq.~\eqref{eq:bare_em_fields}. We work in the gauge $\mathbf{A} = (\mathbf{B} \times \mathbf{x})/2$. The remaining terms, which we shall treat with linear perturbation theory, are
\begin{equation}
\label{eq:cav_dH}
\delta H = \frac{m_e}{\ensuremath{M_\text{Pl}}}\beta_m\delta\phi - e \delta V + \mu_B(2 \bm\pi \cdot\delta\mathbf{A} + g \delta\mathbf{B}\cdot\mathbf{S}).
\end{equation}
Here $\delta\phi$ denotes the quadratic terms in Eq.~\eqref{eq:phi_Taylor}. We have resumed writing $\beta_m \equiv \beta_m(\phi_0)$ and $\beta_\gamma \equiv \beta_\gamma(\phi_0)$ for brevity, and have once again absorbed factors of $\Omega(\phi_0)$ into the electron mass $m_e$ (see Sec.~\ref{sec:quantum_loops}).
\subsection{Unperturbed eigenstates}
\label{sec:cav_H0}
The unperturbed Hamiltonian in Eq.~\eqref{eq:H0_raw} can be split into three mutually commuting parts,
\begin{equation*}
H_0 = H_r + H_z + H_s.
\end{equation*}
The radial, axial, and spin interaction parts are, respectively,
\begin{subequations}
\begin{align}
H_r &= \frac{1}{2m_e}(\pi_x^2 + \pi_y^2) - \frac{1}{4} m_e \omega_z^2 r^2,
\\
H_z &= \frac{1}{2m_e}\pi_z^2 + \frac{1}{2} m_e \omega_z^2 z^2,
\\
H_s &= \frac{g}{2} \omega_0 S_z.
\end{align}
\end{subequations}
These expressions are written in terms of the (bare) cyclotron frequency $\omega_0$ and the axial frequency $\omega_z$, given by
\begin{equation}
\omega_0 = e B_0 /m_e, \quad
\omega_z = (e V_0/ m_e d^2)^{1/2}.
\end{equation}
It should already be clear at this stage that the axial motion, governed by $H_z$, simply corresponds to a harmonic oscillator with frequency $\omega_z$. Making the transformation
\begin{equation}
z = \frac{1}{\sqrt{2m_e \omega_z}}(a_z + a_z^\dagger), \quad
\pi_z = -i \sqrt{\frac{m_e \omega_z}{2}}(a_z - a_z^\dagger)
\end{equation}
allows us to write
\begin{equation}
H_z = \omega_z \left(a_z^\dagger a_z + \frac{1}{2}\right)
\end{equation}
in terms of creation and annihilation operators. It turns out that the same is true for the radial motion, which can be diagonalized to form two decoupled oscillators. To see this, first define two more frequencies $\omega_\pm$ via \cite{Brown}
\begin{equation}
\label{eq:def_wpm}
2 \omega_\pm = \omega_0 \pm (\omega_0^2 - 2\omega_z^2)^{1/2},
\end{equation}
and denote their difference by $\Delta\omega = \omega_+ - \omega_-$. Then, by writing
\begin{align}
x &= \frac{i}{\sqrt{2 m_e \Delta\omega}}(a_c - a_c^\dagger + a_m - a_m^\dagger),
\nonumber\\
y &= - \frac{1}{\sqrt{2 m_e \Delta\omega}}(a_c + a_c^\dagger - a_m - a_m^\dagger),
\nonumber\\
\pi_x &= \sqrt{\frac{m_e}{2 \Delta\omega}}[\omega_+(a_c + a_c^\dagger) - \omega_-(a_m + a_m^\dagger)],
\nonumber\\
\pi_y &= i \sqrt{\frac{m_e}{2\Delta\omega}}[\omega_+(a_c - a_c^\dagger) + \omega_-(a_m - a_m^\dagger)],
\end{align}
we ultimately end up with
\begin{align}
\label{eq:H0}
H_0 =&\; \omega_+ \left( a_c^\dagger a_c + \frac{1}{2} \right) + \omega_z \left(a_z^\dagger a_z + \frac{1}{2}\right) - \omega_-\left(a_m^\dagger a_m + \frac{1}{2}\right)
\nonumber\\
& + \frac{g}{2} \omega_0 S_z.
\end{align}
An eigenstate of this system $| n_c,n_z,n_m,m_s \rangle$ is specified by four quantum numbers: Three of these correspond to the occupation numbers $n_i = \langle a_i^\dagger a_i \rangle = 0,1,2,\dots$ of the harmonic oscillators, whereas the fourth is the spin state $m_s = \pm 1/2$.
Physically, the oscillators with frequencies $\{ \omega_+, \omega_z, \omega_- \}$ correspond to cyclotron, axial, and magnetron motion, respectively (see Sec.~II of Ref.~\cite{Brown} for further details). That $\omega_+$ is slightly smaller than the bare cyclotron frequency $\omega_0$ is due to the electrostatic potential, which is radially repulsive, while the minus sign in front of $\omega_-$ in Eq.~\eqref{eq:H0} makes clear that the magnetron motion is unstable, with an energy unbounded from below. Based on the parameters of the experiment (see Table~\ref{table:values}), these frequencies satisfy the hierarchy
\begin{equation}
\label{eq:hierarchy}
\omega_+ \gg \omega_z \gg \omega_-.
\end{equation}
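The magnitudes in Table~\ref{table:values} can be recovered from these definitions; a rough numerical sketch (constants rounded, and $d \approx 3.5\,\text{mm}$, so only approximate agreement with the measured values is expected):

```python
import math

# Trap frequencies from the definitions of w_0, w_z, and w_±, using the
# rounded parameter values of Table I. Agreement with the measured values
# is only approximate because d is rounded and the trap is not ideal.
e, m_e = 1.602e-19, 9.109e-31     # elementary charge [C], electron mass [kg]
B0, V0, d = 5.36, 101.4, 3.5e-3   # field [T], potential [V], trap scale [m]

w0 = e * B0 / m_e                           # bare cyclotron frequency
wz = math.sqrt(e * V0 / (m_e * d**2))       # axial frequency
root = math.sqrt(w0**2 - 2 * wz**2)
wp, wm = (w0 + root) / 2, (w0 - root) / 2   # cyclotron and magnetron

for name, w in [("w0", w0), ("wz", wz), ("w+", wp), ("w-", wm)]:
    print(f"{name}/2pi = {w / (2 * math.pi):.3g} Hz")
# w0/2pi ~ 1.5e11, wz/2pi ~ 1.9e8, w-/2pi ~ 1.2e5
```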
\subsection{Axial and magnetron motion}
\label{sec:cav_motion}
This large hierarchy ensures that both the axial and magnetron motions are semiclassical. When measurements of the anomalous and cyclotron frequencies are being made, the axial motion is in thermal equilibrium with the detection amplifier circuit at a temperature $T_z \sim 230\,\text{mK}$ \cite{HannekePRA}. The average axial quantum number is thus given by
\begin{equation*}
n_z \sim k_B T_z/\omega_z \sim 24.
\end{equation*}
Similarly, the magnetron motion thermalizes with a temperature $T_m \sim - (\omega_-/\omega_z) T_z$, assuming maximum axial sideband cooling \cite{HannekePRA,Brown}. This relation sets the axial and magnetron quantum numbers equal to each other,
\begin{equation*}
n_m \sim n_z \sim 24.
\end{equation*}
The negative temperature here again represents the fact that magnetron motion is unstable. Nevertheless, its decay time is on the order of billions of years, such that the state is metastable on the timescale of the experiment \cite{HannekePRA,Brown}.
These estimates justify us truncating the scalar field profile to quadratic order in Eq.~\eqref{eq:phi_Taylor}. For $n_c \sim 1$, the expectation values
\noindent
\begin{minipage}{\columnwidth}
{~}
{\vskip -1.7\baselineskip}
\begin{subequations}
\begin{align}
\avg{\frac{r^2}{r_0^2}} &= \frac{2(n_c + n_m + 1)}{m_e \Delta\omega r_0^2} \sim 10^{-10},
\\
\avg{\frac{z^2}{z_0^2}} &= \frac{n_z + 1/2}{m_e \omega_z z_0^2} \sim 10^{-7}
\end{align}
\end{subequations}
\medskip
\end{minipage}
\vskip -0.5em
\noindent
demonstrate that the spread of the electron wavefunction indeed remains very close to the center of the cavity.
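These estimates (with $\hbar$ and $k_B$ restored) can be checked numerically; a sketch using the approximate frequencies of Table~\ref{table:values}:

```python
import math

# Thermal occupation numbers and wavefunction spreads, with hbar and k_B
# restored. Frequencies are taken at their approximate measured values.
hbar, kB, m_e = 1.055e-34, 1.381e-23, 9.109e-31
wz = 2 * math.pi * 200e6          # axial frequency [rad/s]
dw = 2 * math.pi * 150e9          # Delta w ~ w_+ [rad/s]
r0, z0 = 4.5e-3, 3.85e-3          # cavity radius and half-height [m]
Tz = 0.23                         # axial temperature [K]

n_z = kB * Tz / (hbar * wz)       # ~ 24
n_m, n_c = n_z, 1                 # maximum sideband cooling, n_c ~ 1

r2 = 2 * hbar * (n_c + n_m + 1) / (m_e * dw * r0**2)   # <r^2/r0^2>
z2 = hbar * (n_z + 0.5) / (m_e * wz * z0**2)           # <z^2/z0^2>
print(round(n_z), f"{r2:.0e}", f"{z2:.0e}")   # 24, ~1e-10, ~1e-7
```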
\subsection{Frequency shifts}
\label{sec:cav_dH}
Three frequencies must be measured experimentally to determine the electron's magnetic moment. These are defined as follows:
\begin{subequations}
\label{eq:def_w}
\begin{enumerate}
\item The measured cyclotron frequency $\bar\w_c$ is obtained by exciting the electron from the state $(n_c,m_s) = (0,1/2) \to (1,1/2)$ at fixed $n_z$ and $n_m$. Taking the difference in the expectation values $\avg{H}$ for these two states, we get
\begin{equation}
\label{eq:def_w_c}
\bar\w_c = \omega_+ - \frac{3}{2}\delta_\text{rel} + \delta\omega_c.
\end{equation}
Note that the scalar-induced shift $\delta\omega_c$ refers to the terms arising from computing $\avg{\delta H}$ at first order. Explicit expressions for all $\delta\omega_i$ are given together below in Eq.~\eqref{eq:def_dw}.
In Eq.~\eqref{eq:def_w_c}, we have also added in by hand the leading relativistic correction $\delta_\text{rel}/\omega_+ \approx 10^{-9}$ relevant at the experimental precision \cite{HannekePRA,Brown}.
\item The measured anomaly frequency $\bar\w_a$ is similarly obtained by the excitation $(n_c,m_s) = (1,-1/2) \to (0,1/2)$. This yields
\begin{equation}
\bar\w_a = \frac{g}{2} \omega_0 - \omega_+ + \delta\omega_a.
\end{equation}
\item The measured axial frequency $\bar\w_z$ corresponds to the transition $|\Delta n_z| = 1$, with all other quantum numbers fixed. This yields
\begin{equation}
\bar\w_z = \omega_z + \delta\omega_z.
\end{equation}
While the result does not change significantly, for definiteness we define $\bar\w_z$ as being the average energy for the two transitions $n_z \to n_z \pm 1$.
\end{enumerate}
\end{subequations}
\begin{widetext}
The three scalar-induced shifts are
\begin{subequations}
\label{eq:def_dw}
\begin{align}
\delta\omega_c &= \frac{\phi_{rr}}{\ensuremath{M_\text{Pl}} r_0^2}
\left[
\frac{\beta_m}{\Delta\omega} - \frac{\beta_\gamma \omega_0}{2 m_e \Delta\omega}\left(\frac{g}{2} + (2n_m + 3)\frac{\omega_+}{\Delta\omega}\right)
+ (n_m + 1)\frac{\beta_\gamma \omega_z^2}{2 m_e \Delta\omega^2} - (2n_m+1)\frac{\beta_\gamma\omega_0 \omega_-}{2 m_e \Delta\omega^2}
\right],
\\
\label{eq:def_dw_a}
\delta\omega_a &= -\frac{\phi_{rr}}{\ensuremath{M_\text{Pl}} r_0^2}
\left[
\frac{\beta_m}{\Delta\omega} + (2n_m+3) \frac{\beta_\gamma \omega_0}{2 m_e \Delta\omega}\left(\frac{g}{2}-\frac{\omega_+}{\Delta\omega}\right)
+ (n_m + 1)\frac{\beta_\gamma \omega_z^2}{2 m_e \Delta\omega^2} - (2n_m+1)\frac{\beta_\gamma\omega_0 \omega_-}{2 m_e \Delta\omega^2}
\right],
\\
\delta\omega_z &= \frac{\phi_{zz}}{\ensuremath{M_\text{Pl}} z_0^2}
\left[
\frac{\beta_m}{2\omega_z} - (2n_z + 1)\frac{\beta_\gamma}{8m_e}
\right].
\end{align}
\end{subequations}
\newpage
\end{widetext}
At the moment, Eqs.~\eqref{eq:def_w} and \eqref{eq:def_dw} form a set of three simultaneous equations that relate $g/2$ to the measured frequencies $\bar\w_i = (\bar\w_c, \bar\w_a, \bar\w_z )$ and the theoretical parameters $\omega_i = (\omega_0, \omega_z, \omega_+,\omega_- )$. We infer the value of the magnetic moment by eliminating all instances of $\omega_i$ to obtain an expression for $g/2$ that depends only on $\bar\w_i$. This is necessary since $\bar\w_i$ are the only quantities measured to a high enough precision. To do so requires two more independent equations. These are provided by the definitions of $\omega_\pm$ in Eq.~\eqref{eq:def_wpm}, which can be rearranged to read
\begin{equation}
\label{eq:w_relations}
\omega_0 = \omega_+ + \omega_-, \quad
\omega_- = \omega_z^2/(2\omega_+).
\end{equation}
Note that these relations are exact for an ideal Penning trap, but are also approximately true in the presence of small imperfections of a real trap due to the hierarchy of Eq.~\eqref{eq:hierarchy} and an invariance theorem \cite{PhysRevA.25.2423}.
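Both relations follow identically from Eq.~\eqref{eq:def_wpm}; a toy numerical check, for arbitrary values satisfying $\omega_0^2 > 2\omega_z^2$:

```python
import math

# The definitions 2 w_± = w_0 ± sqrt(w_0^2 - 2 w_z^2) imply, exactly,
# w_+ + w_- = w_0 and w_+ w_- = w_z^2 / 2 (toy values below).
w0, wz = 10.0, 2.0
root = math.sqrt(w0**2 - 2 * wz**2)
wp, wm = (w0 + root) / 2, (w0 - root) / 2

assert math.isclose(wp + wm, w0)
assert math.isclose(wp * wm, wz**2 / 2)   # equivalently, w_- = w_z^2/(2 w_+)
print("relations verified")
```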
This set of five simultaneous equations will yield an approximate solution of the form
\begin{equation*}
(g/2)_\text{exp} = 1 + a_\text{exp} + \delta a_\text{cav},
\end{equation*}
where the zeroth-order term $a_\text{exp}$ is independent of the CLP, while the scalar-induced effects are encapsulated in the first-order correction $\delta a_\text{cav}$. Owing to the highly nonlinear dependence of Eq.~\eqref{eq:def_dw} on $\omega_i$, the desired result is most easily obtained in two stages. First, we solve this set of simultaneous equations at zeroth order by ignoring the scalar-induced shifts $\delta\omega_i$. This is easy enough and returns $(g/2)_\text{exp} = 1 + a_\text{exp}$, with $a_\text{exp}$ given unsurprisingly by Eq.~\eqref{eq:a_exp} as before.
We then reintroduce the frequency shifts $\delta \omega_i$ by perturbing $a_\text{exp}$ to first order to obtain the `cavity shift'\footnote{The minus sign is crucial, and reflects the fact that $\omega_i$ are still the parameters to be eliminated. It can most easily be traced back to seeing that Eqs.~\eqref{eq:def_w} can be rearranged such that their lhs's read $\bar\w_i - \delta\omega_i$.}
\begin{equation}
\delta a_\text{cav} = - \sum_i \frac{\partial a_\text{exp}}{\partial \bar\w_i} \delta\omega_i.
\end{equation}
The shifts $\delta\omega_i$ that appear on the rhs are functions of $\omega_i$ and $g/2$, but can now be recast in terms of $\bar\w_i$ by using the zeroth-order relations in Eqs.~\eqref{eq:a_exp}, \eqref{eq:def_w}, and \eqref{eq:w_relations} once more. Throughout both stages, judicious use of the hierarchy in Eq.~\eqref{eq:hierarchy} was made to keep only the terms relevant at the level of the experimental precision. The end result is
\begin{align}
\label{eq:da_cav_full}
\delta a_\text{cav} =&\;
\frac{\beta_m}{\ensuremath{M_\text{Pl}} \bar\w_c^2}\left( \frac{\phi_{rr}}{r_0^2} + \frac{\phi_{zz}}{2z_0^2} \right)
\nonumber\\
&- \frac{\beta_\gamma}{\ensuremath{M_\text{Pl}}\bar\w_c^2} \left( \frac{\bar\w_a}{2 m_e}\frac{\phi_{rr}}{r_0^2} + \frac{49 \bar\w_z}{8 m_e} \frac{\phi_{zz}}{z_0^2} \right).
\end{align}
Note that the coefficient of $\phi_{zz}$ in the second line contains a factor of $2n_z + 1 = 49$. Notice also that the second line, arising from the classical vacuum polarization effect due to the photon coupling (Sec.~\ref{sec:cav_em}), is strongly suppressed by factors of
\begin{equation*}
\bar\w_a/m_e \sim \bar\w_z/m_e \sim 10^{-12}.
\end{equation*}
As a consequence, this effect is unable to place any meaningful constraint on the photon coupling. While we initially imagined that the large magnetic field in the cavity would be helpful for such a purpose, on the contrary, it turns out to offer little advantage because of the particular combination of frequencies that have to be measured. The correction $\delta\mathbf{A}$ couples to the orbital angular momentum while $\delta\mathbf{B}$ couples to the spin in the Hamiltonian [see Eq.~\eqref{eq:cav_dH}], and the two contributions approximately cancel out when computing $\delta a_\text{cav}$. The leading effect that survives is due to the correction $\delta V$ to the electrostatic potential. This is much smaller, since the ratio of the electric to magnetic energy densities is
\begin{equation}
\label{eq:cav_E/B}
\frac{\mathbf{E}^2}{\mathbf{B}^2} \sim \frac{V_0^2}{B_0^2 d^2} \sim 10^{-10}.
\end{equation}
Moving forward, we shall neglect any effect of the photon coupling on the cavity shift. Conveniently, the combination of second derivatives in the first line of Eq.~\eqref{eq:da_cav_full} is exactly half the Laplacian of the profile in Eq.~\eqref{eq:phi_Taylor} evaluated at the origin. Use of Eq.~\eqref{eq:eom_scalar} allows us to rewrite this in terms of $V_{\text{eff},\phi}$, such that
\begin{equation}
\label{eq:da_cav}
\delta a_\text{cav} = \frac{\beta_m(\phi_0) V_{\text{eff},\phi}(\phi_0)}{2\ensuremath{M_\text{Pl}} \bar\w_c^2}.
\end{equation}
This effect contributes to the total deviation as $\delta a \supset - \delta a_\text{cav}$, where the minus sign can be traced back to the relative sign between $a_\text{SM}$ and $a_\text{exp}$ in Eq.~\eqref{eq:a_compare}.
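The Laplacian identity used in passing from Eq.~\eqref{eq:da_cav_full} to Eq.~\eqref{eq:da_cav} can be checked with a finite-difference sketch (illustrative values for $\phi_{rr}$, $\phi_{zz}$, $r_0$, and $z_0$):

```python
# Check that phi_rr/r0^2 + phi_zz/(2 z0^2) equals half the Laplacian of the
# quadratic profile at the origin (arbitrary illustrative values).
prr, pzz, r0, z0 = -0.7, -0.3, 2.0, 1.5

def phi(x, y, z):
    return prr * (x*x + y*y) / (2 * r0**2) + pzz * z*z / (2 * z0**2)

def laplacian_at_origin(f, h=1e-4):
    # sum of central second differences along x, y, z
    total = 0.0
    for j in range(3):
        step = [0.0, 0.0, 0.0]
        step[j] = h
        total += (f(*step) - 2 * f(0, 0, 0)
                  + f(-step[0], -step[1], -step[2])) / h**2
    return total

combo = prr / r0**2 + pzz / (2 * z0**2)
print(laplacian_at_origin(phi) / 2, combo)   # both ≈ -0.2417
```

The factor of $2$ arises because the two-dimensional radial term contributes $\nabla^2 r^2 = 4$ while the axial term contributes $\partial_z^2 z^2 = 2$.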
\section{Chameleon constraints}
\label{sec:chm}
We have seen so far that a CLP generates additional quantum corrections and an experimental cavity shift that together contribute to a total deviation $\delta a$. This must be constrained according to Eq.~\eqref{eq:da_constraint} to respect the agreement between the Standard Model prediction and the experimental measurement of the electron's magnetic moment. Individual contributions to $\delta a$ are given in Eqs.~\eqref{eq:da_bMbM}, \eqref{eq:da_bMbG}, and \eqref{eq:da_cav}. In these equations, the calculations were carried out in complete generality, and the results are expressed in terms of the coupling strengths $\beta_m(\phi_0)$ and $\beta_\gamma(\phi_0)$, and the first derivative of the effective potential $V_{\text{eff},\phi}(\phi_0)$. Crucially, all three quantities depend only on the choice of model and the central field value $\phi_0$. To complete the calculation and determine the constraints on parameter space, we must simply specify the former and determine the latter. We do so for the chameleon in this section, and for the symmetron in the next.
The prototypical chameleon model assumes an inverse power-law potential\footnote{Note that the chameleon mechanism can also be realized with positive power-law potentials, $V(\phi) \propto \phi^{2s}$ with integer values of $s \geq 2$ \cite{Gubser:2004uf}, although we shall not consider such models in this work.} \cite{Khoury:2003aq,Ratra:1987rm,Wetterich:1987fm}
\begin{equation}
\label{eq:chm_potential}
V(\phi) = \frac{\Lambda^{4+n}}{\phi^n}
\quad (n>0)
\end{equation}
and coupling functions of the form
\begin{equation}
\Omega(\phi) = \exp\left( \frac{\phi}{M_c} \right),
\quad
\varepsilon(\phi) = \exp\left( \frac{\phi}{M_\gamma} \right).
\end{equation}
With these definitions, the dimensionless coupling strengths
\begin{equation}
\beta_m = \frac{\ensuremath{M_\text{Pl}}}{M_c}, \quad \beta_\gamma = \frac{\ensuremath{M_\text{Pl}}}{M_\gamma}
\end{equation}
are independent of the value of the field. Putting these together, the effective potential differentiates to
\begin{equation}
V_{\text{eff},\phi} = - \frac{n \Lambda^{4+n}}{\phi^{n+1}} + \left(\frac{\rho}{M_c} + \frac{\rho_\text{em}}{M_\gamma}\right).
\end{equation}
While, in principle, all of parameter space is open to exploration, focus has primarily been devoted to models in which $\Lambda$ is chosen to be near the dark energy scale, $\Lambda = 2.4\,\text{meV}$. This choice makes the chameleon cosmologically relevant, if we view the potential in Eq.~\eqref{eq:chm_potential} as just the leading $\phi$-dependent term in an expansion
\begin{equation}
V(\phi) = \Lambda^4 f(\Lambda^n/\phi^n) \simeq \Lambda^4 + \frac{\Lambda^{4+n}}{\phi^n},
\end{equation}
assumed to arise from nonperturbative effects \cite{Brax:2004qh}. The constant piece $\Lambda^4$ has no effect on laboratory scales, but is an alternative to $\Lambda$CDM for driving the accelerated expansion of the Universe. (See Refs.~\cite{Brax:2004qh,Wang:2012kj} for more on the cosmology of the chameleon.)
\subsection{Analytic estimates}
\label{sec:chm_1D}
As stated in Sec.~\ref{sec:cav_profile}, it is difficult to solve Eq.~\eqref{eq:eom_scalar}---either exactly or approximately---for the chameleon profile in the interior of the Penning trap. This is because the cavity radius and height are of the same size, so the problem is strictly two-dimensional. However, as we are interested only in the central field value $\phi_0$, it turns out that analyzing an analogous one-dimensional cavity suffices to capture the most salient features of the solution. We discuss this one-dimensional ``toy model'' first, before turning to a numerical solution of the cylindrical geometry proper in Sec.~\ref{sec:chm_numerics}.
The toy model in question is the following: Consider a plane-parallel cavity in the region $z \in [-l,l]$ surrounded by walls on either side extending to infinity. The density of matter is assumed to be piecewise constant, such that
\begin{equation*}
\rho =
\begin{cases}
\rho_\text{cav} & z \in [-l,l],
\\
\rho_\text{wall} & \text{otherwise}.
\end{cases}
\end{equation*}
We shall neglect the electric field in the cavity, as its energy density is much smaller than that of the magnetic field; see Eq.~\eqref{eq:cav_E/B}. In doing so, the electromagnetic energy density is also piecewise constant,
\begin{equation*}
\rho_\text{em} \simeq
\begin{cases}
B_0^2/2 & z \in[-l,l],
\\
0 & \text{otherwise}.
\end{cases}
\end{equation*}
In this setup, Eq.~\eqref{eq:eom_scalar} then reduces to
\begin{equation}
\label{eq:chm_1D}
\frac{\text{d}^2\phi}{\text{d}z^2} = V_{\text{eff},\phi}.
\end{equation}
An exact solution to this equation is known \cite{Brax:2011hb,Ivanov:2012cb,Ivanov:2016rfs}, but only for $n\in\{1,2\}$ and when the interior of the cavity is pure vacuum. This is not general enough for our purposes. Instead, we use a standard technique to approximate the solution by solving linearized versions of Eq.~\eqref{eq:chm_1D} inside and outside the cavity, and imposing matching conditions at the adjoining boundaries \cite{Khoury:2003rn,Hinterbichler:2011ca,Tamaki:2008mf,Burrage:2014daa}. The linearized field equations are
\begin{equation}
\frac{\text{d}^2\phi}{\text{d}z^2} \simeq
\begin{cases}
m^2_0 (\phi-\phi_0) + V'_0 & |z| \leq l,
\\
m_\infty^2(\phi-\phi_\infty) & |z| > l.
\end{cases}
\end{equation}
In the interior of the cavity, we have expanded about the as-yet unknown central field value~$\phi_0$. The effective mass $m_0$ was defined previously in Eq.~\eqref{eq:def_mphi} as the second derivative $m_0^2 = V_{\text{eff},\phi\phi}(\phi_0)$ evaluated at the center, and the constant term is $V'_0 \coloneq V_{\text{eff},\phi}(\phi_0)$. Deep inside the walls, the chameleon will asymptote to the field value $\phi_\infty$ which minimizes the local effective potential, $V_{\text{eff},\phi}(\phi_\infty;\rho=\rho_\text{wall}) = 0$.
Solving this equation yields
\begin{equation}
\label{eq:chm_phi_infty}
\phi_\infty = \left( \frac{n \Lambda^{4+n} M_c}{\rho_\text{wall}} \right)^{1/(1+n)}.
\end{equation}
We have thus expanded the field equation in the walls about this point, with a mass $m_\infty^2 = V_{\text{eff},\phi\phi}(\phi_\infty)$ similarly defined.
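It is easy to verify, in arbitrary units and for toy parameter values, that this expression zeroes the slope of the effective potential in the walls:

```python
# Check that phi_infty = (n Lam^(4+n) M_c / rho)^(1/(1+n)) solves
# V_eff,phi = -n Lam^(4+n)/phi^(n+1) + rho/M_c = 0
# (toy values in arbitrary units; rho_em vanishes in the walls).
n, Lam, Mc, rho_wall = 2.0, 1.0, 1.0, 1e6

phi_inf = (n * Lam**(4 + n) * Mc / rho_wall) ** (1 / (1 + n))
slope = -n * Lam**(4 + n) / phi_inf**(n + 1) + rho_wall / Mc
print(phi_inf, slope)   # slope ≈ 0
```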
Solving these equations brings about four integration constants, which are determined uniquely by the boundary conditions. Two of them are
\begin{equation}
\left.\frac{\text{d}\phi}{\text{d}z}\right|_{z=0} = 0,
\quad
\phi(z \to\pm\infty) = \phi_\infty,
\end{equation}
while the remaining two come from imposing continuity of $\phi(z)$ and its first derivative at $|z| = l$. With these considerations, the solution in the cavity ($|z| \leq l$) is
\begin{subequations}
\begin{equation}
\label{eq:chm_pp_int}
\phi(z) =
\phi_0 - \frac{V'_0}{m_0^2} - \frac{(\phi_0 - \phi_\infty - V_0'/m_0^2)\cosh(m_0 z)}{\cosh(m_0 l)+(m_0/m_\infty)\sinh(m_0 l)},
\end{equation}
whereas the solution in the walls ($|z| > l$) is
\begin{equation}
\label{eq:chm_pp_ext}
\phi(z) = \phi_\infty + \frac{(\phi_0 - \phi_\infty - V_0'/m_0^2) e^{-m_\infty(|z|-l)}}{1 + (m_\infty/m_0) \coth(m_0 l)}.
\end{equation}
\end{subequations}
An implicit equation for the central field value $\phi_0$ is obtained by demanding the solution in Eq.~\eqref{eq:chm_pp_int} satisfy the self-consistency condition
\begin{equation}
\label{eq:chm_solve_phi0}
\phi(z = 0) = \phi_0.
\end{equation}
Two approximations can be made to simplify this result. (Their implications and validity are discussed in the next two subsections.) First, let us assume that once in the walls, the chameleon quickly reaches its limiting value $\phi_\infty$. By inspecting Eq.~\eqref{eq:chm_pp_ext}, this will be true if $m_\infty \gg m_0$. Second, let us also assume that the interior of the cavity is pure vacuum, such that $V'_0 \simeq -n \Lambda^{4+n}/\phi_0^{n+1}$ and $m_0^2 \simeq n(n+1)\Lambda^{4+n}/\phi_0^{n+2}$, which together imply $V'_0/m_0^2 = -\phi_0/(n+1)$. When both these assumptions hold, and noting that $\phi_\infty \ll \phi_0$, Eq.~\eqref{eq:chm_solve_phi0} simplifies to
\begin{equation}
\label{eq:chm_1D_phi0}
\cosh(m_0 l) = n+2.
\end{equation}
This result admits an intuitive physical interpretation: In a vacuum cavity, the chameleon adjusts itself until its local Compton wavelength $m_0^{-1}$ is on the order of the size of the cavity~$l$ \cite{Khoury:2003rn}. This feature appears to be generic. A similar calculation can be found in Ref.~\cite{Brax:2007hi} for the case of an infinitely long cylindrical cavity. The same result was obtained, except with the hyperbolic cosine replaced by the modified Bessel function of the first kind.
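That $\cosh(m_0 l) = n+2$ indeed solves the self-consistency condition in these limits can be verified directly; a sketch with arbitrary values of $\phi_0$:

```python
import math

# Verify that cosh(m0*l) = n + 2 solves phi(0) = phi0 in the zero-skin-depth
# limit (m_inf -> infinity, phi_inf -> 0) with V0'/m0^2 = -phi0/(n + 1).
for n in (0.5, 1.0, 3.0):
    phi0 = 1.7                         # arbitrary central field value
    ratio = -phi0 / (n + 1)            # V0'/m0^2 in the perfect-vacuum limit
    cosh_m0l = n + 2
    phi_at_0 = phi0 - ratio - (phi0 - ratio) / cosh_m0l
    assert math.isclose(phi_at_0, phi0)
print("self-consistency verified")
```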
We expect this result to extend to higher dimensions also, although now the function appropriate to the geometry is not known. To proceed, we first note that Eq.~\eqref{eq:chm_1D_phi0} can be approximated by
\begin{equation}
m_0^2 l^2 \simeq 17.4 \frac{n+1.05}{n+10.5}
\end{equation}
for $n$ of order unity, where the rhs is the [1/1]-order Pad\'e approximant of $[\cosh^{-1}(n+2)]^2$ about $n=1$. For arbitrary (convex) cavity shapes, we conjecture that this generalizes to
\begin{equation}
\label{eq:chm_1D_pade}
m_0^2 l^2 \simeq \frac{n+1}{n+\delta},
\end{equation}
where $\delta$ is a constant depending on the geometry, and any overall normalization of the rhs can be absorbed into the constant~$l$, which should now be thought of as a characteristic length scale of the cavity. Rearranging this equation and using the definition of $m_0$, we predict that the central field value has a dependence on $\Lambda$ and $n$ given by
\begin{equation}
\label{eq:chm_phi0}
\phi_0 \simeq \left[ n(n+\delta) \Lambda^{4+n} l^2 \right]^{1/(2+n)}.
\end{equation}
The two constants $(l,\delta)$ act as free parameters which should be tuned to best fit the numerical results.
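The quality of the quoted Pad\'e approximant is easy to check; a sketch comparing it with $[\cosh^{-1}(n+2)]^2$ over the range of interest:

```python
import math

# Compare the [1/1] Pade approximant with [acosh(n + 2)]^2 for n of
# order unity; the two agree to within a few percent across this range.
def exact(n):
    return math.acosh(n + 2) ** 2

def pade(n):
    return 17.4 * (n + 1.05) / (n + 10.5)

errors = {n: abs(pade(n) / exact(n) - 1) for n in (0.5, 1, 2, 3, 5)}
print(errors)   # all below a few percent
```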
\subsection{Numerical results}
\label{sec:chm_numerics}
We determine the full, nonlinear chameleon profile in the cylindrical Penning trap numerically by integrating Eq.~\eqref{eq:eom_scalar} through successive under-relaxation using the Gauss-Seidel scheme \cite{Press:2007:NRE:1403886} for 12 values of $n \in (0,13)$ with $\Lambda = 2.4\,\text{meV}$. Our code has been previously used to study similar problems in Ref.~\cite{Elder:2016yxm}, where more details on the method can be found. The dependence of $\phi_0$ on $n$ is shown in Fig.~\ref{fig:chm_phi0}, alongside the best-fitting analytic approximation, given in Eq.~\eqref{eq:chm_phi0}. The values of the best-fitting parameters are\footnote{These values, and analogous ones in Sec.~\ref{sec:sym}, were determined using the native \texttt{NonlinearModelFit} routine in \emph{Mathematica}.}
\begin{equation*}
l = 1.40\,\text{mm}, \quad
\delta = 2.78.
\end{equation*}
For illustrative purposes, we also present the full chameleon profile for $n=1$ in Fig.~\ref{fig:chm_profile}. The profiles for the remaining values of $n$ are qualitatively similar.
\begin{figure}
\includegraphics[width=70mm]{chm_phi0}
\caption{Best-fitting analytic approximation (dashed line) to the central field value $\phi_0$ of the chameleon in the cylindrical vacuum cavity for different values of $n$ with $\Lambda = 2.4\,\text{meV}$, compared with the numerical results (black dots). The lower plot displays the percentage difference between the numerical and analytic results: All points agree to less than one percent.}
\label{fig:chm_phi0}
\end{figure}
\begin{figure}
\includegraphics[width=70mm]{chm_profile}
\caption{Chameleon profile in the cylindrical vacuum cavity for $n=1$ and $\Lambda = 2.4\,\text{meV}$. The field value along the innermost contour is 90\% of the value at the origin. Moving outwards, successive contours are 80\%, 70\%, etc.~of the central field value. The field reaches 10\% near the boundary of the cavity, before quickly plummeting to $\phi\approx 0$ once inside the walls.}
\label{fig:chm_profile}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{chm_c_1}
\caption{Constraints on chameleon models due to the electron magnetic moment. The shaded regions are excluded at the 95\% confidence level. The panels correspond to the following slices in parameter space: (a) $n=1$, $\Lambda = 2.4\,\text{meV}$; (b) $\Lambda=2.4\,\text{meV}$, $\beta_\gamma \coloneq \ensuremath{M_\text{Pl}}/M_\gamma = 0$; (c) $n=1$, $\beta_\gamma = 0$. Numerical limitations mean that the cavity shift can be computed reliably only when both the zero-skin-depth and perfect-vacuum approximations are valid (see Sec.~\ref{sec:chm_numerics} for details). This corresponds to the region above the solid line, and below or to the left of the dotted line. Inside this region, the constraints arising from the cavity shift are shaded in gray. Outside this region, only the constraints from the quantum corrections (pink), which are still reliable, are shown.
}
\label{fig:chm_c_1}
\end{figure*}
Our approach is tractable only under two simplifying assumptions---the same as were made in the preceding subsection. We now give them names and discuss their implications:
\begin{enumerate}
\item\emph{Zero-skin-depth approximation}: We assume that the chameleon approaches its limiting value $\phi_\infty$ rapidly once inside the walls, such that we can approximate $\phi \approx \phi_\infty \approx 0$ at the boundary of the cavity. This is exactly true in the limit $\rho_\text{wall} \to \infty$, but will hold in practice provided
\begin{equation}
\label{eq:chm_approx_zsd}
m_0^2 \ll m_\infty^2.
\end{equation}
This approximation is essential, because in reality the walls of the cavity do not extend to infinity. By assuming that the chameleon quickly reaches $\phi_\infty$, we are assured that it has effectively decoupled itself from everything else happening beyond the walls, so that it is safe to neglect the complicated configuration of apparatuses surrounding the cavity.
\item\emph{Perfect-vacuum approximation}: We also assume that the interior of the cavity is a perfect vacuum. This is formally the limit $\rho_\text{cav}, \rho_\text{em} \to 0$, but will hold in practice provided
\begin{equation}
\label{eq:chm_approx_pv}
\frac{\rho_\text{cav}}{M_c} + \frac{\rho_\text{em}}{M_\gamma} \ll \frac{n \Lambda^{4+n}}{\phi_0^{n+1}}.
\end{equation}
This approximation is computationally convenient because the chameleon field equation reduces to
\begin{equation}
\nabla^2\phi = - \frac{n \Lambda^{4+n}}{\phi^{n+1}}
\end{equation}
in this limit. It follows that the central field value $\phi_0$ can then depend only on $\Lambda$ and $n$. More importantly, this equation admits the scaling symmetry
\begin{equation}
\label{eq:chm_lambda_scaling}
\Lambda \to f \Lambda, \quad
\phi \to f^{(4+n)/(2+n)} \phi,
\end{equation}
hence it suffices to perform the numerical integration for just one value of $\Lambda$; all other solutions are then accessible by rescaling.
\end{enumerate}
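As a quick cross-check of Eq.~\eqref{eq:chm_lambda_scaling} (illustrative only; the function and tolerances below are our own), one can verify that both sides of the field equation acquire the same overall factor under the rescaling, so that solutions map onto solutions:

```python
# Under Lambda -> f*Lambda and phi -> f^p * phi with p = (4+n)/(2+n),
# the left side of  nabla^2 phi = -n Lambda^(4+n)/phi^(n+1)  scales as f^p,
# while the right side scales as f^(4+n) / f^(p*(n+1)).  These must agree.
def scaling_mismatch(n, f):
    p = (4 + n) / (2 + n)
    lhs_factor = f**p
    rhs_factor = f**(4 + n) / f**(p * (n + 1))
    return abs(lhs_factor - rhs_factor)

for n in (1, 2, 4, 6):
    for f in (0.5, 2.0, 10.0):
        assert scaling_mismatch(n, f) < 1e-9 * f**((4 + n) / (2 + n))
```

Since the exponents agree identically, a single numerical solution at one value of $\Lambda$ indeed generates all others by rescaling.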
\subsection{Constraints}
\label{sec:chm_discussion}
The chameleon model contains four free parameters $(n,\Lambda,M_c,M_\gamma)$ which we wish to constrain. In terms of these parameters, the total deviation $\delta a$ takes the form
\begin{equation}
\label{eq:chm_da}
\delta a =
\frac{1}{2 M_c \bar\w_c^2}\frac{n \Lambda^{4+n}}{\phi_0^{n+1}}
+
3\left(\frac{m_e}{4\pi M_c}\right)^2
+
\frac{10}{M_c M_\gamma} \left(\frac{m_e}{4\pi}\right)^2,
\end{equation}
where the first term is due to the cavity shift, while the remaining two arise from the quantum corrections.
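For orientation, the relative sizes of the three contributions can be sketched numerically in natural units. All parameter values below are illustrative placeholders: in particular the cyclotron energy $\bar\w_c$, the trial coupling scale $M_c$, and the central field value $\phi_0$ are assumed for the purpose of this sketch, not taken from the fits in this paper.

```python
import math

# Illustrative evaluation of the three terms in Eq. (chm_da), in natural
# units (eV).  All numbers below are hypothetical placeholders, chosen
# only to show the relative sizes of the contributions.
m_e   = 5.11e5        # electron mass [eV]
M_Pl  = 2.435e27      # reduced Planck mass [eV]
w_c   = 6.2e-4        # assumed cyclotron energy, ~150 GHz [eV]
n     = 1
Lam   = 2.4e-3        # Lambda = 2.4 meV [eV]
phi0  = 0.04          # assumed central field value [eV]
M_c   = 1e-10 * M_Pl  # trial matter coupling scale
M_gam = float("inf")  # photon coupling switched off (beta_gamma = 0)

cavity  = n * Lam**(4 + n) / phi0**(n + 1) / (2.0 * M_c * w_c**2)
loop_m  = 3.0 * (m_e / (4.0 * math.pi * M_c))**2
loop_mg = 10.0 / (M_c * M_gam) * (m_e / (4.0 * math.pi))**2

delta_a = cavity + loop_m + loop_mg
assert delta_a > 0 and delta_a < 1e-12  # below the ~1e-12 sensitivity
```

At this test point the cavity shift dominates the loop terms yet sits far below the $\sim 10^{-12}$ sensitivity, consistent with $M_c = 10^{-10}\,\ensuremath{M_\text{Pl}}$ lying in the allowed region.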
The cavity shift term exhibits a strong dependence on the central field value $\phi_0$, which we can predict reliably using Eq.~\eqref{eq:chm_phi0} only when both the zero-skin-depth (ZSD) and perfect-vacuum (PV) approximations are valid. As the limit $\rho \to 0$ is equivalent to taking $M_c,M_\gamma \to \infty$, these approximations are easily satisfied in some regions of parameter space, but break down in others. (The boundary at which this happens is estimated in Appendix~\ref{app:approx}.) For easy reference, we shall refer to the region where both the ZSD and PV approximations hold as the numerically accessible region (NAR). Outside this NAR, we no longer have a good sense of how $\phi_0$ behaves, and consequently cannot determine constraints arising from the cavity shift. In contrast, the quantum correction terms extend well beyond the NAR, since this effect has virtually no dependence on $\phi_0$ as long as $m_0 \ll m_e$. (The boundary at which this approximation breaks down is also discussed in Appendix~\ref{app:approx}.) Regions of parameter space excluded at the 95\% confidence level by the cavity shift and quantum corrections are shown, separately, in Fig.~\ref{fig:chm_c_1}.
For $n=1$ and $\Lambda = 2.4\,\text{meV}$, the chameleon field profile near the center is sufficiently flat that the cavity shift has no impact within the NAR. The constraints in Fig.~\ref{fig:chm_c_1}(a) are thus set entirely by the quantum corrections. Note that the effect of the photon coupling only becomes noticeable for $\log_{10}(M_\gamma/\ensuremath{M_\text{Pl}}) \lesssim -16$, although couplings in the region $\lesssim -15.4$ are already ruled out by collider experiments \cite{Brax:2009aw}. We therefore find that the electron's magnetic moment places no meaningful constraint on the photon coupling scale $M_\gamma$. This statement is true for all values of $\Lambda$ and $n$, since the quantum corrections are independent of these parameters, at least at leading one-loop order.
Focusing on the matter coupling scale $M_c$ now, the quantum corrections provide a universal lower bound
\begin{equation*}
\log_{10}(M_c/\ensuremath{M_\text{Pl}}) \gtrsim -16.7
\end{equation*}
independent of $\Lambda$ and $n$, as shown in Fig.~\ref{fig:chm_c_1}(b). This is a weak constraint, stemming from the small ratio $(m_e/M_c)^2 \ll 1$ that sets the scale of the quantum corrections. Other experiments do much better. For instance, a different precision QED test---measurement of the $1S$--$2S$ transition in hydrogen---gives a slightly better lower bound $\log_{10}(M_c/\ensuremath{M_\text{Pl}}) \gtrsim -14$ \cite{Brax:2010gp,Wong:2017jer}. The best lower bound to date, however, comes from atom interferometry \cite{Burrage:2014oza,Burrage:2015lya,Brax:2016wjk,Hamilton:2015zga,Jaffe:2016fsh,Elder:2016yxm}. Depending on the value of $n$, the lower bound is between $-4$ and just under $-2.5$.
\begin{figure}
\includegraphics[width=68mm]{chm_c_3}
\caption{Constraints on the chameleon due to the electron magnetic moment in the $M_c$--$\Lambda$ plane. Parameters in the shaded region are excluded for the $n=1$ chameleon at the 95\% confidence level. The regions to the left of the solid, dotted, and dashed lines rule out parameters for other illustrative values of $n$.}
\label{fig:chm_c_3}
\end{figure}
\begin{figure}
\includegraphics[width=68mm]{chm_c_2}
\caption{The constraining power of the electron magnetic moment for the $n=1$ chameleon, compared with a selection of other experiments \cite{Brax:2007vm,Brax:2010gp,Upadhye:2012qu,Jain:2012tn,Vikram:2014uza,Jaffe:2016fsh}. See Ref.~\cite{Burrage:2017qrf} for details on all existing constraints.}
\label{fig:chm_c_2}
\end{figure}
Moving away from the dark energy scale, increasing $\Lambda$ drives the chameleon to climb to a larger central field value. When this happens, the cavity shift dominates until $\Lambda$ becomes too large, at which point we impinge on the boundary of the NAR. The end result is a triangular-shaped region excluded by this effect, as shown in Fig.~\ref{fig:chm_c_1}(c). For $n=1$, the lower bound on $M_c$ extends all the way out to $\log_{10}(M_c/\ensuremath{M_\text{Pl}}) = -10$ when $\Lambda \approx 300\,\text{eV}$. The shape of the excluded region is qualitatively similar for other values of $n$, as shown in Fig.~\ref{fig:chm_c_3}.
A comparison of our constraints with those from a selection of other experiments is shown in Fig.~\ref{fig:chm_c_2} for the $n=1$ chameleon. Although we do not cover any new region of parameter space not already ruled out by other experiments, it is worth remarking that our results represent the tightest constraints yet achievable by an experiment not originally designed to search for fifth forces.
\section{Symmetron constraints}
\label{sec:sym}
The symmetron model is characterized by a Higgs-like, double-well potential
\begin{equation}
V(\phi) = -\frac{1}{2} \mu^2 \phi^2 + \frac{\lambda}{4} \phi^4
\end{equation}
and coupling functions
\begin{align}
\Omega(\phi) &= 1 + \frac{\phi^2}{2 M_s^2} + \mathcal O\left(\frac{\phi^4}{M_s^4}\right),
\nonumber\\
\varepsilon(\phi) &= 1 + \frac{\phi^2}{2 M_\gamma^2} + \mathcal O\left(\frac{\phi^4}{M_\gamma^4}\right)
\end{align}
consistent with the field's $\phi \to -\phi$ symmetry. Differentiation gives the field-dependent dimensionless coupling strengths
\begin{equation}
\beta_m(\phi) = \ensuremath{M_\text{Pl}} \frac{\phi}{M_s^2}, \quad
\beta_\gamma(\phi) = \ensuremath{M_\text{Pl}} \frac{\phi}{M_\gamma^2}
\end{equation}
to leading order. Taken together, these yield an effective potential
\begin{equation}
V_\text{eff}(\phi) = \frac{1}{2}\mu^2\left( \frac{\rho}{\mu^2 M_s^2} + \frac{\rho_\text{em}}{\mu^2 M_\gamma^2} - 1 \right) \phi^2 + \frac{\lambda}{4} \phi^4.
\end{equation}
\subsection{Analytic estimates}
\label{sec:sym_1D}
As we did for the chameleon, it is helpful to first consider an analogous plane-parallel cavity whose solution will elucidate the relevant physics. Unlike the chameleon, this simple toy model admits an exact solution even in the presence of matter, provided only that it is distributed in a piecewise-constant fashion. The only spatially-varying source of matter is the energy density in the electric field, which for all intents and purposes is small enough to be neglected [recall Eq.~\eqref{eq:cav_E/B}]. Doing so, the symmetron's field equation can be integrated once to give
\begin{equation}
\left(\frac{\text{d}\phi}{\text{d}z}\right)^2 = \mu^2\left( \frac{\rho}{\mu^2 M_s^2} + \frac{\rho_\text{em}}{\mu^2 M_\gamma^2} - 1 \right) \phi^2 + \frac{\lambda}{2} \phi^4 + \text{const.},
\end{equation}
with the constant determined by boundary conditions.
Inside the cavity, let us define an effective mass scale
\begin{equation}
\mu_0^2 = \mu^2\left( 1 - \frac{\rho_\text{cav}}{\mu^2 M_s^2} - \frac{\rho_\text{em}}{\mu^2 M_\gamma^2} \right),
\end{equation}
which must satisfy $\mu_0^2 > 0$ as a necessary condition if the symmetron is to break its $\mathbb Z_2$ symmetry. When this is the case, we expect the field to climb to an as-of-yet unknown value $\phi_0$ in the center, assumed to be a local maximum satisfying
\begin{equation}
\label{eq:sym_1D_bc_grad}
\left.\frac{\text{d}\phi}{\text{d}z}\right|_{z=0} = 0.
\end{equation}
Indeed, if the cavity were infinitely large, the field would have sufficient room to minimize its effective potential, such that $\phi_0 \to \pm \mu_0/\sqrt\lambda$. As this gives the largest possible value for~$|\phi_0|$, it is convenient to define a dimensionless scalar field
\begin{equation}
\varphi = \frac{\phi}{\mu_0/\sqrt\lambda}
\end{equation}
with range $\varphi \in [-1,1]$. With this definition, the symmetron field equation inside the cavity ($|z| \leq l$) becomes
\begin{equation}
\frac{1}{\mu_0^2} \left(\frac{\text{d}\varphi}{\text{d}z}\right)^2
=
- (\varphi^2-\varphi_0^2) + \frac{1}{2}(\varphi^4-\varphi_0^4),
\end{equation}
which crucially depends only on the parameter $\mu_0$. This first-order differential equation can be integrated to yield \cite{Upadhye:2012rc}
\begin{equation}
- \mu_0 z \frac{\varphi_0}{\sqrt{2v^2}} =
F\left(\sin^{-1}\left(\frac{\varphi(z)}{\varphi_0}\right), v \right)
- K(v),
\end{equation}
where we have chosen the positive branch $\varphi(z) > 0$ without loss of generality, and have defined $v^2 = \varphi_0^2/(2-\varphi_0^2)$. This result is expressed in terms of the incomplete elliptic integral of the first kind
\begin{equation}
F(u,v) = \int_0^u \frac{\text{d}\theta}{\sqrt{1- v^2 \sin^2\theta}},
\end{equation}
and $K(v) = F(\pi/2,v)$. From the definitions of the Jacobi elliptic functions
\begin{align}
\text{sn}(u,v) &= \sin F^{-1}(u,v),
\nonumber\\
\text{cn}(u,v) &= \cos F^{-1}(u,v),
\nonumber\\
\text{dn}(u,v) &= \sqrt{1- v^2 \text{sn}^2(u,v)},
\end{align}
this can be inverted to give
\begin{equation}
\varphi(z) = \varphi_0\,\text{sn}\left( - \mu_0 z \frac{\varphi_0}{\sqrt{2v^2}} + K(v),v\right).
\end{equation}
As a final step, note that the elliptic functions satisfy the identity
\begin{equation}
\text{sn}(u + K(v),v) = \frac{\text{cn}(u,v)}{\text{dn}(u,v)} \eqcolon \text{cd}(u,v),
\end{equation}
where the function $\text{cd}$ is even in its first argument. Hence, the exact solution for the symmetron field in the cavity is (see also Ref.~\cite{Brax:2017hna})
\begin{equation}
\label{eq:sym_1D_sol_int}
\varphi(z) = \varphi_0\,\text{cd}\left( \mu_0 z \frac{\varphi_0}{\sqrt{2v^2}}, v \right).
\end{equation}
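Equation~\eqref{eq:sym_1D_sol_int} is straightforward to evaluate even without special-function libraries, by numerically inverting $F(u,v)$. The sketch below (our own construction, working in the dimensionless variable $x = \mu_0 z$) does exactly this, and then verifies the first integral of the field equation by finite differences:

```python
import math

# Sketch: evaluate the dimensionless cavity solution, Eq. (sym_1D_sol_int),
# by numerically inverting F(u, v) rather than calling a special-function
# library.  Works in the dimensionless variable x = mu0 * z.
def F_inc(u, v2, N=2000):
    """Incomplete elliptic integral of the first kind (Simpson's rule)."""
    h = u / N
    s = 0.0
    for i in range(N + 1):
        th = i * h
        w = 1 if i in (0, N) else (4 if i % 2 else 2)
        s += w / math.sqrt(1.0 - v2 * math.sin(th)**2)
    return s * h / 3.0

def varphi(x, phi0):
    """Dimensionless field at x >= 0, valid below the first zero."""
    v2 = phi0**2 / (2.0 - phi0**2)
    K = F_inc(0.5 * math.pi, v2)
    target = K - x * math.sqrt((2.0 - phi0**2) / 2.0)  # = F(asin(phi/phi0), v)
    lo, hi = 0.0, 0.5 * math.pi                        # invert F by bisection
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if F_inc(mid, v2) < target:
            lo = mid
        else:
            hi = mid
    return phi0 * math.sin(0.5 * (lo + hi))

phi0 = 0.8
assert abs(varphi(0.0, phi0) - phi0) < 1e-9   # field peaks at the center

# Finite-difference check of the first integral:
# (dvarphi/dx)^2 = -(varphi^2 - phi0^2) + (varphi^4 - phi0^4)/2
x, h = 1.0, 1e-3
lhs = ((varphi(x + h, phi0) - varphi(x - h, phi0)) / (2 * h))**2
p = varphi(x, phi0)
rhs = -(p**2 - phi0**2) + 0.5 * (p**4 - phi0**4)
assert abs(lhs - rhs) < 1e-4
```

The residual check confirms that the inverted profile satisfies the dimensionless first integral to the accuracy of the quadrature.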
\begin{figure*}
\includegraphics[width=\textwidth]{sym_1D}
\caption{(a) The central field value $\varphi_0$ of the symmetron in a plane-parallel cavity is determined by finding the root(s) of the function $\mathcal B(\varphi_0)$, shown for two illustrative values $\mu_0 = 0.1\,\text{meV}$ (dashed line) and $0.45\,\text{meV}$ (solid line). (b) Symmetron profiles corresponding to the roots $\varphi_0 \approx \{0.26, 0.90, 1.00 \}$ for $\mu_0 = 0.45\,\text{meV}$ are shown as dashed, dotted, and solid lines, respectively. (c) The largest root $\varphi_0$ as a function of the mass scale $\mu_0$. In all three panels, illustrative values $\mu_\infty = 1\,\text{eV}$ and $l = 3.5\,\text{mm}$ are used.}
\label{fig:sym_1D}
\end{figure*}
Similarly, the solution in the walls ($|z| \geq l$) is governed by the equation
\begin{equation}
\label{eq:sym_1D_ode_walls}
\frac{1}{\mu_0^2} \left(\frac{\text{d}\varphi}{\text{d}z}\right)^2
=
\left(\frac{\mu_\infty}{\mu_0}\right)^2 \varphi^2 + \frac{1}{2}\varphi^4,
\end{equation}
made to satisfy the boundary condition $\varphi(|z| \to \infty) = 0$. The corresponding effective mass scale $\mu_\infty$ is defined by
\begin{equation}
\mu_\infty^2 = \mu^2 \left(\frac{\rho_\text{wall}}{\mu^2 M_s^2} - 1 \right),
\end{equation}
which must be positive to restore the $\mathbb Z_2$ symmetry in this region.
If we were so inclined, Eq.~\eqref{eq:sym_1D_ode_walls} could then be integrated to give the exact solution in the walls, with the integration constant determined by requiring continuity of $\varphi$ at the boundary $|z| = l$. A self-consistency equation for $\varphi_0$ is then obtained by also demanding continuity of the first derivatives. However, as we are here only interested in the solution within the cavity, this process can be sidestepped in favor of a shortcut. An equivalent self-consistency condition can be obtained by substituting Eq.~\eqref{eq:sym_1D_sol_int} into Eq.~\eqref{eq:sym_1D_ode_walls} evaluated at $|z| = l$. This yields an implicit equation for the central field value $\varphi_0$.
In other words, we can solve for $\varphi_0$ by searching for the root of the function
\begin{equation}
\label{eq:sym_1D_B}
\mathcal B(\varphi_0;\mu_0,\mu_\infty,l) = \left.\left(\frac{\mu_\infty}{\mu_0}\right)^2 \varphi^2 + \frac{\varphi^4}{2} - \frac{1}{\mu_0^2} \left(\frac{\text{d}\varphi}{\text{d}z}\right)^2 \right|_{z=l},
\end{equation}
where $\varphi(z)$ on the rhs is given by Eq.~\eqref{eq:sym_1D_sol_int}. The function $\mathcal B(\varphi_0)$ is drawn for two illustrative values of $\mu_0$ in Fig.~\ref{fig:sym_1D}(a). Above a certain threshold value of $\mu_0$, the function begins to admit multiple roots. Each root is a valid solution of the field equation, with smaller values of $\varphi_0$ corresponding to field configurations with an increasing number of nodes, as seen in Fig.~\ref{fig:sym_1D}(b). For an intuitive picture, we should view a symmetron bubble as a solitonic object of a certain minimum width specified by $\mu_0$. If the length scale set by this mass matches the size of the cavity, $\mu_0 l \sim \mathcal O(1)$, then a single bubble can be contained within the walls. For larger values of $\mu_0$, the characteristic size of each solitonic packet decreases, and thus it becomes possible to fit multiple nodes within the same available space. In fact, when this is the case, we can relax the boundary condition in Eq.~\eqref{eq:sym_1D_bc_grad} to also allow for odd solutions in the cavity. Such solutions are discussed further in Ref.~\cite{Brax:2017hna}.
In an experimental setup, however, it is natural to expect that the symmetron will occupy the state of lowest free energy, corresponding to the solution with only one antinode. This is given by the largest root $\varphi_0$, which is shown as a function of $\mu_0$ in Fig.~\ref{fig:sym_1D}(c). This curve is also easy to understand intuitively: For very small values of $\mu_0$, the symmetron has too large a Compton wavelength and is unable to resolve the size of the cavity, and thus remains in its symmetry-unbroken phase, $\varphi_0 = 0$. At a threshold value of $\mu_0 l \sim 1.6$, the field is finally able to support a bubble that can fit within the cavity, and the curve starts to grow. For larger values of $\mu_0$, the curve starts its plateau at $\varphi_0 \approx 1$ when the Compton wavelength is sufficiently small that the field almost immediately reaches the minimum of its effective potential once inside the cavity. This qualitative picture also holds when we generalize to the two-dimensional cylindrical case in the next subsection.
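The threshold quoted above can be recovered with a back-of-the-envelope estimate (ours, not part of the exact treatment): linearizing the field equation about $\varphi = 0$ gives $\varphi \propto \cos(\mu_0 z)$ inside the cavity and a decaying exponential in the walls, and matching logarithmic derivatives at $|z| = l$ yields $\tan(\mu_0 l) = \mu_\infty/\mu_0$, whose root sits just below $\mu_0 l = \pi/2 \approx 1.6$ when $\mu_\infty \gg \mu_0$:

```python
import math

# Rough linearized estimate of the symmetry-breaking threshold:
# cos(mu0 z) inside, exp(-mu_inf (|z| - l)) in the walls, and matching
# logarithmic derivatives at |z| = l gives  tan(mu0 l) = mu_inf / mu0.
hbar_c = 1.9733e-7          # eV * m
l      = 3.5e-3 / hbar_c    # l = 3.5 mm in natural units [1/eV]
mu_inf = 1.0                # eV, as in Fig. sym_1D

# Solve tan(mu0 l) = mu_inf/mu0 by bisection, restricting mu0 l to
# (0, pi/2) where tan is increasing (unique crossing).
lo, hi = 1e-12, 0.5 * math.pi / l * (1 - 1e-12)
for _ in range(200):
    mu0 = 0.5 * (lo + hi)
    if math.tan(mu0 * l) < mu_inf / mu0:
        lo = mu0
    else:
        hi = mu0
mu0 = 0.5 * (lo + hi)

# For mu_inf >> mu0 the threshold sits just below mu0 * l = pi/2 ~ 1.6.
assert abs(mu0 * l - 0.5 * math.pi) < 0.01
```

This linearized estimate reproduces the onset of the rise in Fig.~\ref{fig:sym_1D}(c) without recourse to the elliptic-function machinery.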
\subsection{Numerical results}
\begin{figure}[b]
\includegraphics[width=75mm]{sym_phi0}
\caption{Best-fitting analytic approximation (dashed line) to the dimensionless central field value $\varphi_0$ of the symmetron in the cylindrical vacuum cavity for different values of $\mu_0$, compared with the numerical results (black dots). The lower plot displays the percentage difference between the numerical and analytic results: All points agree to less than one percent, except the first three near $\mu_0 = 10^{-3.9}\,\text{eV}$ where $\varphi_0$ differs from zero only in the eighth (or higher) decimal place. Any discrepancy here is of no concern, since the numerical accuracy is unreliable for such small values of the field.}
\label{fig:sym_phi0}
\end{figure}
\begin{figure}[b]
\includegraphics[width=70mm]{sym_profile}
\caption{Symmetron profile in the cylindrical vacuum cavity for $\mu_0 = 10^{-3.82}\,\text{eV}$. The field value along the innermost contour is 90\% of the value at the origin. Moving outwards, successive contours are 80\%, 70\%, etc.~of the central field value. The field reaches $\phi = 0$ once at the walls.}
\label{fig:sym_profile}
\end{figure}
The same numerical scheme as discussed in Sec.~\ref{sec:chm_numerics} is used to solve for the symmetron profile inside the cylindrical vacuum cavity. As we saw earlier, for this model the presence of piecewise-constant distributions of matter can be accounted for exactly by defining effective mass scales $\mu_0$ and $\mu_\infty$, hence only the zero-skin-depth (ZSD) approximation is needed. To recap, this assumes that the symmetron rapidly reaches its limiting value $\phi = 0$ once inside the walls, such that the field is essentially decoupled from its greater surroundings. Formally this is the limit $\rho_\text{wall}$ or $\mu_\infty \to \infty$, but will hold in practice provided
\begin{equation}
\label{eq:sym_zsd}
\mu_0^2 \ll \mu_\infty^2.
\end{equation}
\begin{figure*}
\includegraphics[width=\textwidth]{sym_c_1}
\caption{Constraints on symmetron models due to the electron magnetic moment in the limit of a negligible photon coupling $M_\gamma \to \infty$. The shaded regions are excluded at the 95\% confidence level. Constraints arising from the cavity shift (gray) and quantum corrections (blue) are shown separately for the case $\mu = 10^{-3.82}\,\text{eV}$ in (a). Numerical limitations mean that these constraints can be computed reliably only when the zero-skin-depth (ZSD) approximation is valid, which explains the sharp cutoff for large $M_s$, as indicated by the vertical dashed line. Furthermore, the quantum correction terms are valid only in the weak coupling regime, corresponding to the region sandwiched between the dotted lines. Finally, no constraints are given for sufficiently small values of $\lambda$ when the EFT itself becomes unworkable, as shown by the solid line (see text in Sec.~\ref{sec:sym_c_matter} for more details). The combined constraints from the cavity shift and quantum corrections are shown together as one shaded region in (b) and (c) for different values of $\mu$. The same limits from assuming the ZSD approximation, weak coupling, and a valid EFT apply to each shaded region. For comparison, the region ruled out by torsion balance experiments \cite{Upadhye:2012rc} for $\mu = 10^{-3}\,\text{eV}$ is also shown in (b). In (c), observe that the parameter space is unconstrained for $\mu < 10^{-3.88}\,\text{eV}$, which is when the symmetron remains in its symmetry-unbroken phase inside the cavity. This same effect is responsible for the sharp cutoff at low $M_s$ in (a) and (b).}
\label{fig:sym_c_1}
\end{figure*}
We have performed the numerical integration for 15 values of $\mu_0$ in the range $\log_{10}(\mu_0/\text{eV}) \in (-4,-3)$, with the results of the dimensionless central field value $\varphi_0$ shown in Fig.~\ref{fig:sym_phi0}. The curve has a similar shape to what we found in the one-dimensional case, beginning its rise above zero at $\mu_0 \sim 10^{-3.88}\,\text{eV}$ and reaching the plateau by $\mu_0 \sim 10^{-3.39}\,\text{eV}$. For illustrative purposes, the full symmetron profile for the intermediate value $\mu_0 = 10^{-3.82}\,\text{eV}$ is shown in Fig.~\ref{fig:sym_profile}.
With some educated guessing, we have found that the curve in Fig.~\ref{fig:sym_phi0} can be well described by an empirical formula. Our starting point is the function $\mathcal B(\varphi_0)$ in Eq.~\eqref{eq:sym_1D_B}, the roots of which give the correct value of $\varphi_0$ in the one-dimensional case. Imposing the ZSD approximation, the limit $\mu_\infty \to \infty$ reduces this to the problem of finding the root of
\begin{equation}
\label{eq:sym_1D_approx}
\varphi(z=l) = \varphi_0\,\text{cd}\left( \mu_0 l \frac{\varphi_0}{\sqrt{2v^2}}, v \right) = 0.
\end{equation}
Finally, we introduce an \emph{ad hoc} parameter $\delta$ that deforms the solution away from the plane-parallel geometry, such that the new implicit equation for $\varphi_0$ is
\begin{equation}
\label{eq:sym_varphi0}
\varphi_0\,\text{cd}\left( (\mu_0 l)^{1+\delta} \frac{\varphi_0}{\sqrt{2v^2}}, v \right) = 0.
\end{equation}
This is given in terms of two free parameters $(l,\delta)$ which we should fit to the numerical data. Roughly speaking, the role of the characteristic length scale~$l$ is to fix the point at which the curve starts to rise above zero. The deformation parameter $\delta$ then tells us how quickly the curve reaches its plateau. The best-fitting parameters for the cylindrical Penning trap considered here are
\begin{equation*}
l = 1.96\,\text{mm}, \quad
\delta = 0.70.
\end{equation*}
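Since the first zero of $\text{cd}(u,v)$ occurs at $u = K(v)$, Eq.~\eqref{eq:sym_varphi0} is equivalent to $(\mu_0 l)^{1+\delta}\sqrt{(2-\varphi_0^2)/2} = K(v)$, which is easily solved by bisection. The sketch below is our own; it reuses only the best-fitting parameters quoted above to recover $\varphi_0$ at $\mu_0 = 10^{-3.82}\,\text{eV}$:

```python
import math

# Sketch: solve the empirical relation, Eq. (sym_varphi0).  The first
# zero of cd(u, v) sits at u = K(v), so the implicit equation reduces to
#   (mu0*l)^(1+delta) * sqrt((2 - phi0^2)/2) = K(v),
# with v^2 = phi0^2 / (2 - phi0^2).
def K_comp(v2, N=2000):
    """Complete elliptic integral of the first kind (Simpson's rule)."""
    h = 0.5 * math.pi / N
    s = 0.0
    for i in range(N + 1):
        w = 1 if i in (0, N) else (4 if i % 2 else 2)
        s += w / math.sqrt(1.0 - v2 * math.sin(i * h)**2)
    return s * h / 3.0

hbar_c = 1.9733e-7                  # eV * m
l, delta = 1.96e-3 / hbar_c, 0.70   # best-fitting parameters from the text
mu0 = 10**-3.82                     # eV
a = (mu0 * l)**(1 + delta)

def g(phi0):
    v2 = phi0**2 / (2.0 - phi0**2)
    return a * math.sqrt((2.0 - phi0**2) / 2.0) - K_comp(v2)

lo, hi = 1e-6, 1.0 - 1e-9           # g > 0 at lo, g < 0 near phi0 = 1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
phi0 = 0.5 * (lo + hi)

assert 0.6 < phi0 < 0.8             # close to 1/sqrt(2), cf. Fig. sym_phi0
assert abs(g(phi0)) < 1e-4
```

The root lands close to $\varphi_0 \approx 1/\sqrt{2}$, consistent with this value of $\mu_0$ approximately maximizing $\varphi_0^2(1-\varphi_0^2)$.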
\subsection{Constraints}
The symmetron model is specified by four parameters $(\mu,\lambda, M_s, M_\gamma)$ which we now constrain. In terms of these parameters, the total deviation $\delta a$ takes the form
\begin{align}
\label{eq:sym_da}
\delta a = &\,
\frac{\mu_0^4 \varphi_0^2(1-\varphi_0^2)}{2 \bar\w_c^2 M_s^2 \lambda}
+
\left(\frac{m_e}{4\pi}\right)^2 \frac{2\mu_0^2 \varphi_0^2}{M_s^4 \lambda} I_1(m_0/m_e)
\nonumber\\
&+
\left(\frac{m_e}{4\pi}\right)^2\frac{4\mu_0^2 \varphi_0^2}{M_s^2 M_\gamma^2 \lambda}
[1 + I_2(m_0/m_e)],
\end{align}
where the first term is due to the cavity shift, while the remaining two arise from the quantum corrections. Unlike the chameleon, which always satisfies $m_0/m_e \ll 1$, the effective symmetron mass in the cavity
\begin{equation}
m_0^2 = V_{\text{eff},\phi\phi}(\phi_0) = \mu_0^2(3\varphi_0^2 -1)
\end{equation}
can be made arbitrarily large by increasing the value of $\mu$. For this reason, we have retained the integrals $I_1$ and $I_2$ in Eq.~\eqref{eq:sym_da}.
\subsubsection{Matter coupling only}
\label{sec:sym_c_matter}
It is instructive to first neglect the photon coupling and focus on the subspace $(\mu,\lambda, M_s)$. Regions excluded at the 95\% confidence level are shown in Fig.~\ref{fig:sym_c_1}. Notice that the cavity shift term in Eq.~\eqref{eq:sym_da} is proportional to $\varphi_0^2(1-\varphi_0^2)$, and thus switches off when $\varphi_0 = 0$ or $\varphi_0 = 1$. In terms of the symmetron mass, this means that the cavity shift exerts an appreciable force only in the small range $\mu \in [10^{-3.88}, 10^{-3.39}]\,\text{eV}$ (see Fig.~\ref{fig:sym_phi0}).\footnote{In most of the parameter space probed by this experiment, the mass scales $\mu$ and $\mu_0$ are essentially equivalent, and will be used interchangeably.} In Fig.~\ref{fig:sym_c_1}(a), constraints are shown for the illustrative value $\mu = 10^{-3.82}\,\text{eV} = 0.15\,\text{meV}$, which we have specifically chosen because it maximizes the quantity $\varphi_0^2(1-\varphi_0^2)$, and thus (approximately) maximizes the size of the cavity shift.
This sensitive dependence on $\mu$ is the reason why other laboratory experiments have hitherto left the symmetron parameter space mostly unexplored. Atom interferometry experiments \cite{Burrage:2016rkv,Jaffe:2016fsh}, for instance, place meaningful bounds only in the range $\mu \in [10^{-5}, 10^{-4}]\,\text{eV}$, whereas an analysis of torsion pendula \cite{Upadhye:2012rc} has so far only considered the range $[10^{-4},10^{-2}]\,\text{eV}$. This does not present an obstacle for the electron magnetic moment experiment, however, because in addition to the cavity shift, there also exist quantum correction terms that survive up to much larger values of $\mu$, which are primarily responsible for the constraints in Figs.~\ref{fig:sym_c_1}(b) and \ref{fig:sym_c_1}(c).
Having said that, not all of parameter space is accessible to this experiment. As always with the symmetron, the parameter space is unconstrained when spontaneous symmetry breaking fails to occur inside the cavity. This is the case for all values of $(\lambda, M_s,M_\gamma)$ when $\mu < 10^{-3.88}\,\text{eV}$. For larger masses, symmetry breaking occurs only above a minimum value of $M_s$, which explains the sharp cutoff at low $M_s$ seen in Figs.~\ref{fig:sym_c_1}(a) and \ref{fig:sym_c_1}(b). At the other end, the ZSD approximation breaks down beyond a maximum value of $M_s$---shown by the right vertical dashed line---at which point the central field value $\varphi_0$ can no longer be reliably predicted from Eq.~\eqref{eq:sym_varphi0}. Since every term in Eq.~\eqref{eq:sym_da} depends strongly on $\varphi_0$, constraints cannot be reliably determined to the right of this boundary. (Appendix~\ref{app:approx} describes how this boundary is estimated.)
Further limitations must be taken into account when determining the constraints arising from the quantum corrections. First, our perturbative approach requires a weak self-coupling\footnote{Recall that $\lambda$ appears in the potential as $V(\phi) \supset \lambda \phi^4/4$. However, when computing Feynman diagrams, the combinatorial factors are simplest if we organize the perturbative expansion in powers of $\lambda'$, where $\lambda'/4! = \lambda/4$. Imposing the condition $\lambda' \lesssim 1$ explains the factor of $1/6$ in Eq.~\eqref{eq:sym_validity_wc_1}.}
\begin{subequations}
\label{eq:sym_validity}
\begin{equation}
\label{eq:sym_validity_wc_1}
\lambda \lesssim 1/6.
\end{equation}
For the same reason, the Yukawa-like, scalar-matter coupling must also be weak [cf. Eq.~\eqref{eq:L_linear_couplings}],
\begin{equation}
\label{eq:sym_validity_wc_2}
\frac{\beta_m(\phi_0) m_e}{\ensuremath{M_\text{Pl}}} = \frac{\mu_0^2 \varphi_0 m_e}{\sqrt\lambda M_s^2} \lesssim 1.
\end{equation}
For sufficiently small values of $\lambda$, the EFT itself becomes unworkable. For a rough estimate of when this happens, we shall deem it a necessary condition that the functions $\Omega(\phi)$ and $\varepsilon(\phi)$ do not deviate too far from unity. Inside the cavity, the (classical) symmetron field reaches a maximum value of at most $\phi_0 = \mu/\sqrt\lambda$, so our condition is satisfied provided
\begin{equation}
\label{eq:sym_validity_eft}
\frac{\mu^2}{2\lambda M_s^2} \lesssim 1, \quad
\frac{\mu^2}{2\lambda M_\gamma^2} \lesssim 1.
\end{equation}
\end{subequations}
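These conditions are simple enough to encode directly; the helper below (illustrative only, with all dimensionful inputs in natural units of eV) evaluates Eqs.~\eqref{eq:sym_validity} at a given parameter point:

```python
# Helper (illustrative only) encoding the validity conditions of
# Eqs. (sym_validity): weak self-coupling, weak Yukawa coupling at phi0,
# and coupling functions close to unity.
def sym_validity(mu, lam, M_s, M_gamma, mu0, phi0, m_e=5.11e5):
    """All dimensionful inputs in eV; phi0 is dimensionless."""
    weak_self = lam <= 1.0 / 6.0
    weak_yukawa = mu0**2 * phi0 * m_e / (lam**0.5 * M_s**2) <= 1.0
    eft_ok = (mu**2 / (2 * lam * M_s**2) <= 1.0 and
              mu**2 / (2 * lam * M_gamma**2) <= 1.0)
    return weak_self and weak_yukawa and eft_ok

# A hypothetical point deep inside the allowed region...
assert sym_validity(mu=1e-3, lam=0.1, M_s=1e12, M_gamma=1e12,
                    mu0=1e-3, phi0=0.7)
# ...and one violating the weak self-coupling bound.
assert not sym_validity(mu=1e-3, lam=10.0, M_s=1e12, M_gamma=1e12,
                        mu0=1e-3, phi0=0.7)
```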
The boundary lines demarcating the regions in parameter space that satisfy these conditions are shown in Fig.~\ref{fig:sym_c_1}(a). To prevent an overcrowded plot, they are not drawn again in Figs.~\ref{fig:sym_c_1}(b) and \ref{fig:sym_c_1}(c), nor in the remaining figures that follow, although it should be understood that they continue to be in effect.
\begin{figure*}
\includegraphics[width=\textwidth]{sym_c_2}
\caption{Constraints on the $\mu = 10^{-3}\,\text{eV}$ symmetron due to the electron magnetic moment. The regions of parameter space excluded at the 95\% confidence level are shown as two-dimensional slices for different values of (a)~$M_\gamma$, (b)~$M_s$, and (c)~$\lambda$. We show constraints only for weak couplings $\lambda \lesssim 1/6$ which are amenable to our perturbative approach. Other approximations are also responsible for moulding the final shape of the shaded regions shown here. These are discussed towards the end of Sec.~\ref{sec:sym_c_matter}, and are primarily responsible for the awkward shapes of the bottom edges.}
\label{fig:sym_c_2}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{sym_c_3}
\caption{Constraints on the symmetron due to the electron magnetic moment for different values of $\mu$. Shaded regions denote the values of the parameters in the $M_s$--$M_\gamma$ plane that are excluded at the 95\% confidence level for each value of $\lambda$. We show constraints only for weak couplings $\lambda \lesssim 1/6$ which are amenable to our perturbative approach. In the $\mu = 10^{-3.82}\,\text{eV}$ panel, the slice for the largest value of $\lambda$ does not extend as far to the left and bottom as the others. This is because the quantum correction terms responsible for this slice suffer from a tachyonic instability near the edges, when $\varphi_0 < 1/\sqrt{3}$ (see the last paragraph of Sec.~\ref{sec:sym_c_matter} for details).}
\label{fig:sym_c_3}
\end{figure*}
One last subtlety must be brought to light. Our calculations for the quantum corrections also fail to hold when the symmetron becomes tachyonic at the center ($m_0^2 < 0$). Rather than signaling any kind of severe pathology with the theory, this merely indicates that we can no longer neglect the spatial variation of $\langle\phi\rangle$ when computing the quantum corrections. As such a calculation is beyond the scope of this paper, we have simply forgone placing constraints when this occurs. Luckily this does not affect the end results much, and explains why the shaded region due to the quantum corrections in Fig.~\ref{fig:sym_c_1}(a) does not extend as far to the left as the cavity shift.
\subsubsection{Photon coupling}
We now discuss the constraints on the symmetron when the photon coupling $M_\gamma$ is included. For an illustrative value of $\mu = 10^{-3}\,\text{eV}$, the region in the $(\lambda, M_s, M_\gamma)$ subspace that is excluded is shown in Fig.~\ref{fig:sym_c_2}. As before, the bottom edges of each shaded region in Figs.~\ref{fig:sym_c_2}(a) and \ref{fig:sym_c_2}(b) correspond to the boundary beneath which the weak coupling limit and, further down, the EFT itself stop being valid. These conditions correspond to Eqs.~\eqref{eq:sym_validity_wc_2} and \eqref{eq:sym_validity_eft}, respectively, and are universal to all experiments.\footnote{More precisely, Eq.~\eqref{eq:sym_validity_wc_2} applies only to experiments probing the quantum nature of the symmetron for which a perturbative calculation is unavoidable, whereas Eq.~\eqref{eq:sym_validity_eft} applies in all cases.} For this reason, the most essential information to be gained from this experiment is encapsulated in the top edges of the shaded regions, which give the lower bound on $\lambda$ that remains viable. Evidently, this lower bound on $\lambda$ increases for fixed $(\mu, M_s)$ as we decrease $M_\gamma$. This information is most efficiently conveyed in a ``top view'' plot as shown in Fig.~\ref{fig:sym_c_2}(c).
To observe the dependence on the symmetron mass, top-view plots for different values of $\mu$ are shown in Fig.~\ref{fig:sym_c_3}. For $\mu \lesssim 10^{-3}\,\text{eV}$, the photon coupling has no noticeable effect, whereas the shapes of the shaded regions are qualitatively similar for all $\mu > 10^{-3}\,\text{eV}$. As we increase $\mu$, the left and bottom edges of each slice in the $M_s$--$M_\gamma$ plane move further left and down, owing to the fact that spontaneous symmetry breaking in the cavity can occur for smaller values of $M_s$ and $M_\gamma$. This continues until about $\mu \sim 10\,\text{eV}$, when the opposite begins to occur and the edges retreat towards the top-right corner of the plot. This happens simply because Eqs.~\eqref{eq:sym_validity_wc_2} and \eqref{eq:sym_validity_eft} break down in larger and larger regions of parameter space as $\mu$ increases. When we reach $\mu \sim 10^8\,\text{eV}$, the theory becomes completely unworkable in the range of $M_s$ and $M_\gamma$ accessible to this experiment, such that no constraint can be placed.
Viewed from this perspective, the shaded regions in the $M_s$--$M_\gamma$ plane for a given value of $\lambda$ strongly resemble the chameleon constraints in the $M_c$--$M_\gamma$ plane of Fig.~\ref{fig:chm_c_1}(a). The effect of the photon coupling only becomes noticeable below a certain value of $M_\gamma$, and the lower bound depends on the specific value of $M_s$. This behavior can of course be traced back to the quantum correction terms, where the photon coupling always appears in tandem with the matter coupling when restricted to leading one-loop order. Recall that in the case of the chameleon we did not illustrate the weak constraints on the photon coupling any further, since they were found to be uncompetitive with those already placed by collider experiments \cite{Brax:2009aw}. The same might be true for the symmetron, although no work has yet been done to translate the bounds and demonstrate this definitively. Indeed, to our knowledge, this paper represents the first attempt at constraining the symmetron's coupling to photons.
\section{Conclusion}
\label{sec:conclusions}
Decades of exceptional work by theorists and experimentalists alike have now verified the accuracy of the Standard Model, and QED in particular, to about one part per trillion. We have shown that, beyond achieving their original objective, precision tests of QED can also be used to place meaningful constraints on the existence of chameleonlike particles (CLPs) that mediate screened fifth forces. In this work, we considered the implications of the precision measurement of the electron's magnetic moment, focusing on two main scalar-induced effects that could arise.
First, the virtual exchange of CLPs generates additional loop corrections to the QED vertex function, since the scalars are assumed to couple to electrons and photons with gravitational strength or greater (see Sec.~\ref{sec:quantum}). This leads to an increase in the intrinsic value of the magnetic moment, which must be constrained to be less than $\sim 10^{-12}$ lest it ruin the remarkable agreement between experiment and the Standard Model prediction. Second, nonlinear self-interactions drive the scalar to form a bubblelike profile within the cylindrical vacuum cavity of the experiment. This scalar profile exerts an additional fifth force on the electron confined to the Penning trap, thus perturbing its energy eigenvalues. A systematic shift of this form can also be used to place constraints, since the magnetic moment is determined experimentally by measuring the transition frequencies between energy levels (see Sec.~\ref{sec:cav}).
Accurate estimates of these effects require knowledge of the value of the scalar field at the center of the cavity, which can only be determined by fully solving the nonlinear field equation. The absence of any known closed-form solution---either approximate or exact---for the case of the cylindrical geometry considered here has led to a somewhat novel, semiempirical approach. It has already been shown that a chameleon in a vacuum cavity satisfies a resonance condition such that its local Compton wavelength is dynamically adjusted to match the size of the cavity \cite{Brax:2007hi}. In this paper, we have shown this explicitly for the case of a plane-parallel cavity by obtaining an approximate, one-dimensional solution. Through well-motivated arguments, the solution to this toy model was then deformed to describe more arbitrary convex cavity shapes. The resulting empirical formula for the central field value is a function of only two free parameters, which are tuned to best fit the full numerical solutions carried out for a small number of points in parameter space (see Sec.~\ref{sec:chm}).
We found that the quantum corrections were able to place a universal bound of $\log_{10}(M_c/\ensuremath{M_\text{Pl}}) \gtrsim -16.7$ for the chameleon model, independent of the values of $(\Lambda, n)$. However, for values near $\Lambda \approx 300\,\text{eV}$, the cavity shift dominates to give a much better lower bound of $\log_{10}(M_c/\ensuremath{M_\text{Pl}}) \gtrsim -10$. While this part of parameter space is already constrained by other laboratory experiments, the bound determined here represents the tightest constraint yet achieved by an experiment not originally intended to search for fifth forces.
Our results are able to break even more ground for the symmetron (see Sec.~\ref{sec:sym}). Again, a deformation of the one-dimensional solution to a plane-parallel cavity results in an empirical formula with only two free parameters that can be tuned to fit the numerical results with a high degree of accuracy. With this in hand, we saw that the cavity shift places constraints only for a small range of the symmetron mass, $\mu \in [10^{-3.88}, 10^{-3.39}]\,\text{eV}$. This limitation is unsurprising, and is generic to any laboratory experiment that probes the effect of the symmetron's fifth force. When $\mu$ is too small, the associated Compton wavelength is so large that the symmetron is unable to resolve the size of the vacuum cavity, and thus remains in the symmetry-unbroken phase. On the other end of the spectrum, the fifth force is strongly Yukawa-suppressed when $\mu$ is too large, resulting in a field profile that is essentially flat in the cavity except near the walls.
Nonetheless, the electron magnetic moment has an added advantage over other experiments that have hitherto provided constraints on the symmetron. The quantum corrections are able to yield constraints regardless of the value of $\mu$, provided only that the mass is large enough to enable spontaneous symmetry breaking, and small enough that the effective field theory remains valid. As a result, this experiment has probed, and decisively ruled out, a large and previously unexplored region of parameter space in the range $\mu \in [10^{-3.88}, 10^8]\,\text{eV}$ for couplings ($M_s, M_\gamma$) around the GeV scale.
To conclude, this work provides a clearer picture than ever of the space of CLP models that remain viable in our Universe. Our results also suggest a new direction for future work: while dedicated fifth-force experiments such as atom interferometry and torsion balances may well provide the best sensitivities in a given mass range near the meV scale, it will be interesting to explore other experiments that exploit the quantum nature of the symmetron in the hopes of covering large regions of parameter space more efficiently.
\begin{acknowledgments}
It is a pleasure to thank Clare Burrage for helpful discussions. This work has been partially supported by STFC Consolidated Grants No.~ST/P000673/1 and No.~ST/P000681/1. This work is supported in part by the EU Horizon 2020 research and innovation program under the Marie Sk\l{}odowska-Curie Grant No.~690575. This article is based upon work related to the COST Action CA15117 (CANTATA) supported by COST (European Cooperation in Science and Technology). B.E. is supported by a Leverhulme Trust Research Leadership Award. L.K.W. is supported by the Cambridge Commonwealth, European and International Trust, and Trinity College, Cambridge. We also thank the organizers of the \emph{PSI$^2$ DarkMod} and \emph{Dark Energy in the Laboratory} workshops for the conducive environments they have provided during which some of this work was completed.
\end{acknowledgments}
\section{Introduction}
Information gathering using mobile robots in dangerous or hard-to-access environments has significantly improved humanity's ability to understand our world~\cite{Cliff-RSS-15,dunbabin2012robots}. Research in improving the capabilities of these robots has largely focused on automating low level functionality, such as perception and obstacle avoidance. Higher level reasoning (and task level autonomy in particular) in unstructured real world environments has not received as much attention. However, this technology is critical to enable the study of more remote areas, where much of the interesting science lies. Such high level autonomy in the context of information gathering missions is known as \emph{Autonomous Science}. In this paper we use robotic planetary exploration as our motivating application, but the ideas presented here are applicable to exploration of remote environments in general.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{cont1024.jpg}
\caption{The Continuum rover in a Mars-analog environment using its robotic arm to closely examine rocks}
\label{fig:continuum}
\end{figure}
Planetary rovers are required to explore largely unknown environments under strong communication constraints such as high latency, limited bandwidth and infrequent communication windows. They are equipped with multiple heterogeneous sensors which must be used collaboratively to achieve a set of high level scientific goals such as finding evidence of water. In outdoor environments there is also significant noise in the form of shadows, sensor inaccuracies, and deformable terrain. These challenges induce the need for some form of autonomy to ensure safety and mission progress in the absence of human supervision.
Recent research in Autonomous Science has explored increasing autonomy through anomaly detection, selective data transmission, guiding data collection with template based feature matching and adaptive sampling through non-parametric models such as Gaussian processes (GPs) \cite{castano2007oasis,thompson2011autonomous,woods2009autonomous}.
Higher level reasoning such as deciding where to go in the short and long term, which sensors to deploy and most importantly, making inferences from observations to update scientific hypotheses, is handled primarily by human supervisors on Earth. This creates a bottleneck in the scientific progress made as communication can typically only be established twice a day on Mars. In this work we approach the problem of Autonomous Science from a novel cognitive robotics perspective by equipping the rover with an approximate representation of a scientist's domain knowledge. We then develop techniques to reason about this knowledge to explore and sample the environment in a more intelligent and goal-driven manner.
We represent geological knowledge as a Bayesian network~(BN). The BN's structure and conditional probability parameters allow us to capture many important aspects of scientific knowledge such as conditional dependencies between variables, causality relationships and any mathematical or process models that may be known prior to the mission. BNs are limited in expressivity as compared to knowledge representation languages such as Answer Set Prolog \cite{zhang2015mixed}, but have the advantage of handling uncertainty more robustly. This property is crucial in unstructured environments, such as Mars, where sensors and controls are both noisy. Further, there are many algorithms which allow fast approximate inference in BNs, which is an important advantage lacking in many other languages \cite{pearl2014probabilistic}.
We then show how Monte Carlo Tree Search~(MCTS) techniques can be applied to reason about the knowledge BN efficiently and plan goal oriented sensing actions over long horizons. The resulting knowledge representation and reasoning framework extends robotic information gathering in two ways: it enables the robot to reason about prior scientific knowledge in a principled manner, and it allows the robot to plan with multiple sensing modalities to study latent environmental variables which cannot directly be observed.
We apply the framework to a Mars exploration mission where the robot observes environmental features to determine the geological identity of different regions on the map, such as ancient riverbeds and volcanic zones. The robot is equipped with two sensors, a camera and an idealized spectrometer, and is required to autonomously plan where to move and which sensor to use at each time step while satisfying some sensing budget. We present extensive simulation results where our method outperforms alternative approaches in terms of information gain (confidence) and accuracy. We conclude by demonstrating the practicality of our approach in an analog Martian environment using our experimental rover, Continuum (shown in Fig.~\ref{fig:continuum}).
\section{Related Work}
\subsection{Bayesian Networks for Knowledge Representation}
Due to their desirable property of remaining robust under uncertainty, many authors have employed BNs to model domain knowledge, particularly in the form of expert systems. Applying these networks to robotic decision making problems in unstructured environments is, however, less studied. Most authors have limited their use to classification and have not closed the loop around path planning~\cite{sharif2015autonomous,apostolopoulos2001robotic}.
Work that is similar to ours is by Post et al.~\cite{post2016planetary}, who use BNs to create an obstacle map while integrating any sensor uncertainties that are present. A path is then planned to achieve a goal position while minimizing the probability of collisions. This work, however, does not attempt to model scientific knowledge, especially the spatial relationships.
Gallant et al.~\cite{gallant2011science} used a BN to classify minerals and assign benefit scores based on the current scientific goals of the mission. The benefit scores were then fed into a cost function to determine the best action to take. However, their approach does not reason about unobserved parts of the environment and does not consider the problem of selecting which sensor to use.
\subsection{Informative Path Planning}
The idea of planning the placement of sensors to achieve some information-theoretic goal can be viewed as an active sensing problem, or more generally, an informative path planning problem. When the problem is monotone submodular, greedy approaches are effective and offer performance guarantees~\cite{krause2012near}. Unfortunately, this property is often violated in field environments, leading to arbitrarily poor worst case performance. Branch and bound techniques which prune suboptimal branches early in the tree search have shown promise~\cite{bestprobabilistic}. However, efficiently calculating tight bounds is nontrivial in problems like ours, which involve unknown environments and multiple sensors. MCTS methods, in contrast, work for any general objective function and do not require bounds. They are anytime and hence suitable for online planning~\cite{browne2012survey}.
Approaches that involve initially unknown environments typically utilize GPs and exploit the monotone submodular nature of the mutual information or variance reduction function to avoid exhaustive search~\cite{binney2012branch,hollinger2014sampling}. While GPs can represent spatial phenomena in a probabilistic manner, they are not particularly useful tools for encoding domain knowledge, especially causal knowledge. Proposed methods are limited to: imposing priors on the co-variance parameters, transforming the training data and biasing the mean function~\cite{azmanincorporating}. Further, the computational complexity of GPs makes them difficult to use in online planning applications with long horizons such as the problem considered here.
\section{Autonomous Science for Planetary Rovers}
This section discusses the robot properties, the assumptions made about the world, and formally defines the planning problem that the robot is required to solve in the context of Mars exploration.
\subsection{Robot and Environment Setup}
The robot is a UGV which moves around in a world discretized into cells. The robot is equipped with a camera which can detect rocks and extract their visual features. The camera can take measurements within its field of view which may span multiple cells. The robot is also equipped with an ultraviolet (UV) light source which it can project onto the environment to reveal UV reflective minerals. The UV light source simulates what a spectrometer might do on a real Mars mission since it is energetically expensive to use and has a narrow sensing range, but gives more informative measurements than a camera. For the remainder of this paper we refer to the camera as the low cost `remote' sensor and the UV source as the high cost `local' sensor.
\subsection{Problem Setup}
Given this robot and environment setup and some representation of scientific knowledge, the robot is required to plan a sequence of informative sensing actions $a_{seq}$ to minimize entropy of some scientific latent variable of interest $L$ across all of the $N$ cells on the map. A sensing action is a tuple consisting of a movement action and which sensor to use. The robot is also constrained to some specified sensing (energy or time) budget. The optimization objective can be described by Eq. 1.
\begin{equation}
\begin{split}
&a^*_{seq} = \operatorname*{arg\,max}_{a_{seq} \in A} EI(a_{seq}) \\
&\textbf{s.t.} \sum_{i=1}^{|a_{seq}|}{\textnormal{cost($a_i$)}} \leq \text{Budget}
\end{split}
\end{equation}
The cost function and the action space $A$ we use will be defined in Section V. $EI$ is the expected information gain of an action sequence which is calculated by marginalizing out all possible observations $Z_{seq}$ that can result from the sensing sequence (Eq. 2). The $P(Z_{seq}|a_{seq})$ term is effectively a sensor model and $I$ is an information gain function given by Eq. 3 where $H$ is the Shannon entropy. The conditional entropy $H(L_n|Z_{seq})$ requires a mapping from observations to the latent variable of interest. This is the knowledge representation component of the framework while the optimization to determine sensing sequences is akin to scientific reasoning.
\begin{equation}
EI(a_{seq}) = \sum_{Z_{seq}}{I(Z_{seq})P(Z_{seq}|a_{seq})}
\end{equation}
\begin{equation}
I(Z_{seq}) = \sum_{n=1}^{N}{H(L_n) - H(L_n|Z_{seq})}
\end{equation}
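For a single cell and a single sensing action, Eq. 2 can be evaluated by direct enumeration over the discrete observation space. The following sketch is our illustration, not part of the system; the two-class distributions are assumed for brevity:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0.0)

def expected_info_gain(prior, likelihood):
    """Expected reduction in entropy of a latent variable L after one
    observation Z, for a single cell.

    prior:      P(L) as a list over latent classes
    likelihood: P(Z|L) as a matrix, rows indexed by L, columns by Z
    """
    n_obs = len(likelihood[0])
    h_prior = entropy(prior)
    ei = 0.0
    for z in range(n_obs):
        # P(Z=z) = sum_L P(Z=z|L) P(L)
        pz = sum(likelihood[l][z] * prior[l] for l in range(len(prior)))
        if pz == 0.0:
            continue
        # Posterior P(L|Z=z) via Bayes' rule
        post = [likelihood[l][z] * prior[l] / pz for l in range(len(prior))]
        ei += pz * (h_prior - entropy(post))
    return ei

# A perfectly informative sensor recovers all of the prior entropy
prior = [0.5, 0.5]
ident = [[1.0, 0.0], [0.0, 1.0]]
print(round(expected_info_gain(prior, ident), 3))  # 1.0
```

An uninformative sensor, whose likelihood is identical for every latent class, yields zero expected gain, which is why marginalizing over observations as in Eq. 2 matters when comparing sensing actions.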
\section{Approach}
In this section we present the two main components of the system: BN knowledge representation and a MCTS planner. The planner reasons about the knowledge network and the robot and environment state to determine a sequence of sensing actions which maximize the information gained on the scientific latent variable of interest.
\subsection{Knowledge Representation}
The purpose of the BN is to model the relationship between the observations made and the latent variable of interest through scientific knowledge. The structure of the network encodes causal knowledge while the conditional probability parameters encode quantitative knowledge.
Since rocks are key sources of geological cues in Martian environments, we design the BN around them (Fig.~\ref{knowledgebn}). Each rock in the environment belongs to a class $R$ and exhibits $N$ visual features represented by the variable $F$. The robot can observe these features through its camera, denoted by the variable $Z$. The variable $B$ is the UV reflective material that can be measured by the robot's local sensor. Lastly, $L$ represents the underlying latent variable which affects the environment. In this paper we take $L$ to be the type of location the robot is in, such as a desert or a riverbed; this is the scientifically interesting variable on which we seek to gain information.
All nodes in the network are discrete, as geologists often look for features which do not have associated continuous measurements, such as the presence of bedding on a rock. Discretization also simplifies inference. The structure of the BN can be adapted to account for different variables and dependencies that come with specific applications. In this paper every node can take one of three categories, but the approach works for an arbitrary number.
The proposed BN structure allows several sources of information to be integrated in the form of conditional probabilities. $P(Z|F)$ is the sensor model, $P(F|R)$ is effectively a classifier likelihood while $P(B|L)$ and $P(R|L)$ are geological properties of the environment. This network exists in every cell of the environment. If there are no rocks detected in a cell, then the $R$ node and its children will be removed to speed up future computations.
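To illustrate how these conditional probabilities combine, the sketch below computes the single-cell posterior $P(L \mid Z)$ by enumerating the hidden $R$ and $F$ nodes. All CPT values are illustrative assumptions, and each node is binary here rather than three-valued:

```python
import itertools

# Illustrative two-class CPTs for a single-cell slice of the network:
# L -> R -> F -> Z (camera chain) and L -> B (UV chain). All values assumed.
P_L = [0.5, 0.5]                          # prior over location type
P_R_given_L = [[0.8, 0.2], [0.3, 0.7]]    # geological property P(R|L)
P_F_given_R = [[0.9, 0.1], [0.2, 0.8]]    # classifier likelihood P(F|R)
P_Z_given_F = [[0.95, 0.05], [0.1, 0.9]]  # camera sensor model P(Z|F)

def posterior_L_given_Z(z):
    """P(L|Z=z) obtained by summing out the hidden nodes R and F."""
    joint = []
    for l in range(2):
        p = 0.0
        for r, f in itertools.product(range(2), range(2)):
            p += (P_L[l] * P_R_given_L[l][r]
                  * P_F_given_R[r][f] * P_Z_given_F[f][z])
        joint.append(p)
    total = sum(joint)
    return [p / total for p in joint]

print(posterior_L_given_Z(0))  # belief shifts towards L=0
```

In the full system this enumeration is replaced by message passing, but the arithmetic being performed per cell is the same.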
\begin{figure}[!t]
\centering
\includegraphics[trim={5cm 3cm 8cm 6.5cm},clip,width=0.5\textwidth]{bnet_diagram.pdf}
\caption{Left: The structure of the Bayesian network used to represent geological knowledge. Right: Spatial relationships between adjacent cells}
\label{knowledgebn}
\end{figure}
In natural environments there are also strong spatial correlations present. There are several methods of encoding this relationship. A common approach is through a Markov random field. However, this would make the inference problem difficult, as cycles would be introduced in the graphical model. Another alternative is to add links between the $L$, $R$ and $B$ nodes of adjacent cells. This implies that the variables $R$ and $B$ are dependent on the $L$ nodes in the neighborhood as opposed to just the one in its cell. Nodes that are far from where the observation was taken have less influence on the inference. This decreasing influence is modeled by a Gaussian function. Fig. \ref{knowledgebn} illustrates this spatial dependency. The resolution of the $L$ grid does not have to match the $R$ grid and can be adapted based on the expected spatial variability of variables.
The conditional probability parameters can either be specified directly through domain knowledge, learned from training data \cite{heckerman1995learning} or even learned online by modeling them as Dirichlet distributions \cite{girdhar2015modeling}. In this work we assume the maximum likelihood parameters are known a priori.
Due to this BN's structure, the belief on the value of nodes can be updated recursively without keeping a history of observations. The message passing technique is used for efficiently propagating belief updates through the BN \cite{yedidia2000generalized}.
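The recursive update amounts to using the previous posterior as the prior for each new observation, so no observation history needs to be stored. A minimal sketch, with placeholder likelihood values:

```python
def update_belief(belief, likelihood_z):
    """Recursive Bayesian update of a discrete belief over L.

    belief:       current P(L | all past observations)
    likelihood_z: P(z | L) for the newly received observation z
    The previous posterior serves as the new prior, so the full
    observation history never needs to be retained.
    """
    unnorm = [b * lz for b, lz in zip(belief, likelihood_z)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

belief = [1/3, 1/3, 1/3]
for _ in range(3):  # three independent observations favouring class 0
    belief = update_belief(belief, [0.7, 0.2, 0.1])
print([round(b, 3) for b in belief])
```

Repeated consistent observations sharpen the belief quickly, which is exactly the effect the information gain objective rewards.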
\subsection{Monte Carlo Tree Search}
In this problem, the robot acquires observations after executing every sensing action and has the freedom to adapt the sensing plan accordingly. Therefore at planning time, the robot only needs to decide the next best action to take which in expectation will give maximal future rewards. We propose the use of MCTS methods to address this sequential decision making problem. The algorithm is presented in Alg. 1.
MCTS is a best first, anytime algorithm which involves cycling through four stages: node selection, expansion, simulation and back-propagation. The key idea is to first select promising leaf nodes based on a tree policy. The selected node is expanded and a terminal reward is estimated by conducting simulations or `rollouts' in the decision space. The reward is then back propagated up the tree and the process is repeated until some computational budget is reached. At the end of the search, the child of the root node with the highest average reward is selected as the next best action. Since MCTS is sampling based, it is well suited for large state spaces, high branching factors and long horizon planning. For an overview on MCTS methods we refer the reader to Browne's comprehensive survey \cite{browne2012survey}.
\begin{algorithm}[!t]
\caption{MCTS Science Autonomy Planner}
\begin{algorithmic}[1]
\State \textbf{Input:} SensingBudget $S$, BeliefSpace $Bel$, DomainKnowledge BN $K$, RemainingBudget $R$
\Function{Main}{}
\State $R \gets S$
\While{$R > 0$}
\State $robotPose \gets getLocalisation()$
\State $a_{opt} \gets planner(robotPose, R, Bel, K)$
\State $Z\gets takeObservation(a_{opt})$
\State $Bel\gets updateBeliefSpace(Z, Bel, K)$
\State $R \gets R - cost(a_{opt})$
\EndWhile
\EndFunction
\State
\Function{planner}{$robotPose, R, Bel, K$}
\State $T \gets initialiseTree(robotPose, R)$
\State $currentNode \gets T.rootNode$
\While{within computational budget}
\State $currentNode \gets treePolicy(T)$
\State $sequence\gets rolloutPolicy(currentNode, R)$
\State $reward \gets getReward(sequence, Bel, K)$
\State $T \gets updateTree(T, reward)$
\EndWhile
\State
\Return $bestChild(T)$
\EndFunction
\State
\Function{rolloutPolicy}{$currentNode, R$}
\State $sequence \gets currentNode$
\While{$R > 0$}
\State $nextNode \gets defaultPolicy(currentNode)$
\State $currentNode \gets nextNode$
\State $sequence \gets sequence + currentNode$
\State $R \gets currentNode.R$
\EndWhile
\State
\Return $sequence$
\EndFunction
\State
\Function{getReward}{$sequence, Bel, K$}
\State $reward \gets 0$
\For{$i=1:length(sequence)$}
\State $currentAction \gets sequence(i)$
\State $Z \gets sampleObs(currentAction, Bel, K)$
\State $Bel_{new} \gets updateBelief(Z, Bel, K)$
\State $infoGain \gets calcInfoGain(Bel_{new}, Bel)$
\State $reward \gets reward + infoGain$
\State $Bel \gets Bel_{new}$
\EndFor
\State
\Return $reward$
\EndFunction
\end{algorithmic}
\end{algorithm}
We formulate the MCTS such that each node in the tree is a potential sensing action that can be made. It is a tuple consisting of the robot's x and y position, the orientation, the type of sensor used and the remaining sensing budget. Each node also stores the average reward $\bar{R_i}$ of all the simulations that have passed through it and the number of times it has been visited $n_i$ during the tree search. The children of the node are determined by the robot's action space and the remaining budget. We now describe each stage of the MCTS in detail and show how it has been adapted for our problem.
\textbf{Selection:} The first stage of MCTS is using a tree policy to select which leaf nodes to expand. We want to expand leaf nodes which are expected to have a good terminal reward but at the same time evaluate alternative nodes sufficiently to minimize chances of converging to local minima. The Upper Confidence Tree (UCT) policy based on the optimism in the face of uncertainty paradigm is known to be a good solution to balance the exploration/exploitation trade-off present here \cite{kocsis2006bandit}. UCT begins at the root node and iteratively selects leaf nodes with the highest Upper Confidence Bound (UCB) until a node with unexpanded children is reached. The UCB score for node $i$ is defined by Eq. 4 below.
\begin{equation}
UCB_i = \bar{R_i} + C_p\sqrt\frac{2\log N}{n_i}
\end{equation}
The first term is the `exploitation' component of UCB. $\bar{R_i}$ is the average reward of all rollouts that have passed through $node_i$. We define the reward function in the simulation subsection below. The second term in the equation is the `exploration' component where $N$ is the number of times the parent of the node has been evaluated and $n_i$ is the number of times node $i$ has been evaluated. $C_p$ is a constant that balances exploration and exploitation. We found empirically that a value of $0.1$ gave good results in both simulations and hardware experiments.
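A minimal sketch of selection under this policy follows; the convention of expanding unvisited children first is standard in UCT but is an assumption here, as is the tie-breaking order:

```python
import math

def ucb_score(avg_reward, n_parent, n_child, c_p=0.1):
    """Upper Confidence Bound of Eq. 4 for a single child node.
    c_p defaults to the empirically chosen value of 0.1."""
    return avg_reward + c_p * math.sqrt(2 * math.log(n_parent) / n_child)

def select_child(children):
    """Pick the index of the child with the highest UCB score.

    children: list of (avg_reward, visit_count) tuples; unvisited
    children (count 0) are given infinite score, so they are tried first.
    """
    n_parent = sum(n for _, n in children)
    best, best_score = None, -float("inf")
    for i, (r, n) in enumerate(children):
        score = float("inf") if n == 0 else ucb_score(r, n_parent, n)
        if score > best_score:
            best, best_score = i, score
    return best

# A rarely visited child can overtake one with a slightly higher average
print(select_child([(0.50, 90), (0.48, 2)]))  # 1
```

The exploration term dominates for under-sampled children, which is what prevents the search from committing prematurely to an apparently good branch.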
\textbf{Expansion:} From the leaf node selected by the UCT policy, an unexpanded child node is randomly selected and added to the tree.
\textbf{Simulation:} The aim of the simulation stage is to determine the terminal reward associated with this newly expanded child node by executing some default policy. Here we use a random action selection policy from the selected node until the sensing budget is exhausted. A random policy was used because it requires minimal computational overhead and ensures the decision space is uniformly explored. However, since we are sampling randomly, a large number of rollouts are often required to accurately estimate rewards. Using problem-specific rollout policies has been shown to significantly improve tree convergence, but we leave this as an interesting avenue for future work.
The expected information gain function defined earlier in Eq. 2 is the ideal reward function to evaluate a rollout. However, calculating this function analytically requires summing over all possible observations that can result from the rollout sequence. In our problem, the low cost remote sensor observes rocks in its field of view. Each rock can exhibit $|F|^N$ combinations of features, where $|F|$ is the number of classes each feature can take and $N$ is the total number of features. Furthermore, the number of rocks seen, as well as the positions of the rocks in the image, are all unknown at planning time if an area has not been observed before. The observation space is therefore very large and evaluating the reward exactly is not practical.
We define the reward function as $\frac{I_r}{H_{init}}$, where $I_r$ is the information gain during rollout $r$ and $H_{init}$ is the joint entropy of the $L$ variables at the current state of the mission. This division constrains the average reward to lie between $0$ and $1$, a requirement for the UCB convergence guarantees to hold. We approximate $I_r$ by sampling. We begin at the first node of the rollout sequence. Depending on the sensing action used, an observation is sampled from the belief space. The belief space is updated and passed on to the next node. The process is iterated until the terminal node is reached. The total information gain is the entropy of the initial belief space minus that of the terminal belief space.
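The normalized rollout reward can be sketched as follows, with illustrative belief distributions standing in for the sampled belief space:

```python
import math

def joint_entropy(beliefs):
    """Sum of the per-cell entropies (in bits) of the L variables."""
    h = 0.0
    for b in beliefs:
        h -= sum(p * math.log2(p) for p in b if p > 0.0)
    return h

def rollout_reward(beliefs_initial, beliefs_terminal):
    """Normalised reward I_r / H_init used to score one rollout.

    Dividing by the joint entropy at the start of the mission keeps
    the average reward within [0, 1], as required for the UCB
    convergence guarantees.
    """
    h_init = joint_entropy(beliefs_initial)
    if h_init == 0.0:
        return 0.0  # nothing left to learn
    i_r = h_init - joint_entropy(beliefs_terminal)
    return i_r / h_init

before = [[1/3, 1/3, 1/3], [1/3, 1/3, 1/3]]
after = [[0.9, 0.05, 0.05], [1/3, 1/3, 1/3]]
print(round(rollout_reward(before, after), 3))
```

A rollout that sharpens only one of the two cells earns a fraction of the maximum reward, so longer rollouts that reduce entropy in many cells are preferred.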
\textbf{Back-propagation:} Lastly the reward received by the rollout is back-propagated up the tree and the average reward and number of evaluations for each node involved is updated.
The four stages are repeated until the computational budget for the robot has expired, at which point the root child with the highest average reward is selected as the next best action. Given enough samples and an appropriate value for the exploration parameter $C_p$ in Eq. 4, it can be shown that the tree will converge to the optimal action sequence.
\section{Simulation Experiments}
\begin{figure}[t]
\centering
\includegraphics[trim = {0 0.2cm 0 0},clip,width = 0.5\textwidth]{sim_example.JPG}
\caption{Left: An example of a randomly generated location ground truth map where the colors signify different classes. Right: An example rock map generated by sampling from the Bayesian Network}
\label{egmap}
\end{figure}
This section aims to empirically demonstrate the performance of the MCTS planner for our Autonomous Science problem. While there are several algorithms for informative path planning in literature, they cannot be applied in situations where multiple sensing modalities are involved without significant algorithmic modifications and heuristics. We therefore compare performance over the following baselines:
\begin{itemize}
\item Random sampling: the robot selects a random action from its action space at each time step.
\item Fixed sampling: when only one sensor is involved, a lawnmower pattern is popular as it provides uniform coverage. When multiple sensors and a sensing budget are involved, it is nontrivial to design such paths. Here we use a five-stage policy in which the robot uses the remote sensor in the forward direction, 90 degrees to the left and 90 degrees to the right, uses the local sensor in the current cell, and then moves one step forward. The stages are repeated until the robot's sensing budget is exhausted.
\item A greedy planner, which selects the action with the highest ratio of immediate expected information gain to cost. Its behavior is similar to the frontier-based strategies often used in exploration problems. The expected information gain is approximated by sampling observations from the belief space and simulating belief updates.
\end{itemize}
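The greedy baseline reduces to a one-step ratio maximization. In the sketch below the action labels and gain values are hypothetical; only the relative costs (1 unit remote, 8 units local) come from our setup:

```python
def greedy_action(actions, expected_gain, cost):
    """Greedy baseline: pick the action maximising expected
    information gain per unit of sensing cost."""
    return max(actions, key=lambda a: expected_gain(a) / cost(a))

# Hypothetical one-step gains; costs follow the 1-vs-8 unit convention
gains = {"remote_fwd": 0.6, "remote_left": 0.4, "local_here": 3.0}
costs = {"remote_fwd": 1, "remote_left": 1, "local_here": 8}
print(greedy_action(list(gains), gains.get, costs.get))  # remote_fwd
```

Note that the expensive local sensor loses here despite its larger raw gain, illustrating why a myopic ratio rule can differ from a long-horizon plan.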
Large random environments were generated in which the location type and UV nodes were set to be a $40\times 40$ grid. Location type is the scientific latent variable of interest, which represents abstract geological features such as desert, riverbed, etc. The grid was further divided into 25 $8\times 8$ regions of homogeneous location types. The rock and feature space grids were of size $800 \times 800$. Each location grid cell therefore contains multiple rocks with associated features. The remote sensor can make observations in the feature space grid with a field of view of size 50 by 40 cells. All nodes were assigned ground truth labels by randomly sampling from the BN. An example environment is shown in Fig.~\ref{egmap}.
The robot can occupy any cell of the $40\times 40$ grid and orient itself in 8 directions in $45\degree$ increments. In each decision step the robot can either move one step forward in the direction it is facing or rotate on the spot by $-90\degree$, $-45\degree$, $45\degree$ or $90\degree$. It must also decide which of the two sensors to use, giving an action space of 10 actions. The $cost(a)$ function is defined as 1 unit for the remote sensor and 8 units for the local sensor.
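The resulting 10-action space (5 motion primitives combined with 2 sensor choices) and the cost function can be enumerated directly; the sketch below is illustrative, with hypothetical names.

```python
# 5 motion primitives x 2 sensors = 10 actions, with costs in budget units
MOTIONS = ['forward', 'rotate-90', 'rotate-45', 'rotate+45', 'rotate+90']
SENSORS = {'remote': 1, 'local': 8}   # cost(a) per sensor, as in the text

ACTIONS = [(m, s) for m in MOTIONS for s in SENSORS]

def cost(action):
    """Cost of an action depends only on the sensor it deploys."""
    _, sensor = action
    return SENSORS[sensor]
```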
\begin{table}[!t]
\centering
\caption{Information gain with varying sensing budgets}
\label{table1}
\begin{tabular}{|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{\textbf{Sensing Budget}} \\ \hline
\textbf{Policy} & \textbf{50} & \textbf{70} & \textbf{100} \\ \hline
Random & $103.67(18.68)$ & $114.03(17.87)$ & $130.99(18.29)$ \\ \hline
Fixed & $109.06(18.48)$ & $134.82(20.38)$ & $157.38(17.24)$ \\ \hline
Greedy & $176.34(25.76)$ & $192.44(32.76)$ & $231.55(49.57)$ \\ \hline
MCTS-50 & $166.56(38.20)$ & $202.55(39.63)$ & $243.59(53.45)$ \\ \hline
MCTS-100 & $193.63(39.76)$ & $203.36(40.11)$ & $256.65(50.80)$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Accuracy score with varying sensing budgets}
\label{table2}
\begin{tabular}{|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{\textbf{Sensing Budget}} \\ \hline
\textbf{Policy} & \textbf{50} & \textbf{70} & \textbf{100} \\ \hline
Random & $391.84(15.41)$ & $397.22(16.64)$ & $402.12(19.75)$ \\ \hline
Fixed & $389.62(16.27)$ & $402.55(17.27)$ & $412.47(18.27)$ \\ \hline
Greedy & $426.10(20.80)$ & $436.78(18.61)$ & $451.95(30.15)$ \\ \hline
MCTS-50 & $423.29(24.85)$ & $444.47(26.27)$ & $460.02(36.44)$ \\ \hline
MCTS-100 & $436.35(27.58)$ & $445.21(24.94)$ & $466.22(29.48)$ \\ \hline
\end{tabular}
\end{table}
We ran 50 trials for each policy with randomly generated environments and start locations. The policies were tested with sensing budgets of 50, 70 and 100 units. Two performance measures were used: the total information gained and an accuracy score, defined per cell as the probability the robot's belief assigns to the correct location class, summed over all cells. For example, if the robot's belief about the class of $L$ in a particular cell is $[0.1,0.2,0.7]$ and the true class is the second one, the accuracy for that cell is $0.2$. This metric is important because it penalizes situations in which the robot's belief converges to the wrong class.
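The accuracy score described above can be computed directly from the per-cell beliefs; a minimal sketch (function name and array layout are our own):

```python
import numpy as np

def accuracy_score(beliefs, true_classes):
    """Sum over cells of the probability the belief assigns to the true class.

    beliefs: (n_cells, n_classes) array of per-cell distributions over
    location type; true_classes: ground-truth class index for each cell.
    """
    beliefs = np.asarray(beliefs, dtype=float)
    idx = np.asarray(true_classes)
    return float(beliefs[np.arange(len(idx)), idx].sum())
```

With the paper's example, a single cell with belief $[0.1, 0.2, 0.7]$ and true class index 1 contributes 0.2 to the score.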
\begin{figure*}[!t]
\centering
\includegraphics[trim = {0 0.2 0 0.1cm},clip, width=\textwidth]{cont_diagram.JPG}
\caption{Left: System diagram of Continuum. Right: Continuum's UV light source in action}
\label{continuum_diag}
\end{figure*}
The average information gain and accuracy scores at the end of the mission are shown in Tables~\ref{table1} and~\ref{table2}, with standard deviations in parentheses. Since our MCTS-based planner is anytime, it was run with 50 and 100 iterations to test the effect of computation time on performance. For all budget sizes, the adaptive algorithms (greedy and the MCTS variants) significantly outperformed the random and fixed sampling policies. For budget sizes 70 and 100, both MCTS variants outperformed greedy in terms of both information gain and accuracy score.
For a budget of 50, however, the greedy algorithm outperformed the MCTS-50 variant. We believe there are two reasons for this. First, the simulation environment is open and unconstrained: with a small budget, the greedy strategy never reaches a point where the locally available information is exhausted. Second, over short planning horizons the next action has a large effect on final performance. Since the greedy algorithm allocates 20 samples to each action while MCTS-50 uses only 5 samples per action on average (the action space has size 10), the greedy approach obtains a better estimate of the information gain of the next action. The fact that MCTS-100 significantly outperformed greedy supports this hypothesis.
In terms of computation time, each iteration of the MCTS took between 0.2 and 0.5 seconds on an average desktop computer. The implementation was, however, in MATLAB and can be significantly sped up through more efficient memory management and data structures. Parts of the algorithm can also be parallelized, so multi-threading is another possibility.
\section{Planetary Rover Experiments}
In this section we demonstrate the practicality of our approach with a rover mission on an analog Martian terrain located at the Museum of Applied Arts and Sciences (MAAS) in Sydney. We summarize the platform capabilities, the testing environment and our computer vision technique, and conclude with trial experiments.
\subsection{Platform Details}
Our rover Continuum is pictured in Fig. \ref{continuum_diag}. It is equipped with an omni-directional drive, which gives it relatively unconstrained motion capabilities. The spiral shape of the rims acts as a shock absorber, while the double-bogie chassis allows the rover to climb over steep rocks and minimizes changes in orientation. Continuum has a 6-degree-of-freedom robotic arm with cameras, an ultraviolet light source and a 2D laser scanner mounted on the end effector. There are also several hazard cameras around the body to check for collisions. In this experiment we use one of the arm cameras and the UV light source as our two sensors. The light source illuminates the UV-reflective powder discussed in the next section and simulates what a spectrometer might do in a real mission. The arm camera was pointed towards the ground to constrain the information that can be gathered in a single sense.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{areatypes.JPG}
\caption{From left to right: The three classes of location type and a typical image when the local sensor is activated.}
\label{area_example}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{segmentation_eg.JPG}
\caption{Our rock segmentation technique in action. It can be seen that there are still some false positives in areas with shadows.}
\label{cv_example}
\end{figure*}
\subsection{Environment Setup}
Our testing environment, the MAAS Mars Lab, is a $20\times 7$m space designed to be a scientifically accurate representation of Martian terrain. The lab was divided into the three types of location shown in Fig. \ref{area_example}. Each location type had a slightly different distribution of rock types and the features they exhibit. UV-reflective powder was added in varying quantities to each category; there was, however, enough ambiguity between categories to encourage the robot to use a combination of both sensors to gather information. The rock grid was set to a resolution of 2cm per cell. Rocks vary in size, so they usually span many cells; to account for this we assume each rock is located in the cell nearest to its centroid. The conditional probability parameters of the BN were determined from intuition and are therefore not 100\% accurate. There were also rocks in the environment that were not explicitly modeled in the BN, a realistic source of noise not present in the simulations.
\subsection{Computer vision}
In a realistic unstructured environment the feature extraction process is more complex and requires first segmenting the rocks from the image. It can be seen in Figures 6, 7 and 8 that rocks look very similar to the ground in terms of color. There are also lighting variations and shadows that complicate the image processing step. Several methods proposed in the literature have achieved good results. Edge-based techniques such as \cite{thompson2007performance} run a Canny edge detector followed by a complex process of pruning and joining edges likely to belong to a rock. Texture-based techniques such as \cite{song2008automated} use multi-resolution histograms to achieve coarse segmentation, followed by an active contour technique for good edge detection performance. Another interesting and effective approach \cite{dunlop2007multi} calculates superpixels at different scales, then adds, subtracts, splits and merges superpixels to satisfy criteria learned by a Support Vector Machine. However, all of these approaches were designed for Martian imagery, which does not share the characteristics of our environment, and are not available open source. Furthermore, computation time was not considered in these studies, so the algorithms often take several minutes to yield a result.
We approach this problem by first over-segmenting the image into superpixels using the SLIC algorithm \cite{achanta2012slic}, which groups similarly colored pixels together while preserving strong edges. This is followed by adaptive normalization to reduce lighting variations and shadows. Histograms of intensity, the number of edges, LAB color and intensity variance were calculated for each superpixel and compared to a training image of the ground with no rocks. Applying appropriate thresholds allows us to classify most of the superpixels as rock, ground or shadow. For the more uncertain superpixels, the amount of texture correlation with their local neighborhoods was measured, followed by a voting process. This two-stage process yields the final image shown in Fig. \ref{cv_example}. As in most robotic applications, segmentation is sometimes noisy, especially in the presence of shadows, but the probabilistic nature of Bayesian networks helps minimize the resulting effects on decision making. For features we use circularity, size and color as they are simple to calculate and geologically meaningful. The UV measurement was obtained by calculating the blue-to-red ratio of the RGB channels. The features and UV measurements were both discretized into three categories.
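A simplified version of the first (superpixel classification) stage might look as follows. This is a sketch under our own assumptions, not the authors' implementation: it uses scikit-image's SLIC, compares only the lightness histogram of each superpixel against a rock-free ground reference with a chi-square-like distance, and omits the adaptive normalization, multi-cue thresholds and texture-voting second stage described above.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def segment_rocks(image, ground_hist, n_segments=400, threshold=0.3):
    """Mark superpixels whose lightness histogram differs from rock-free ground."""
    labels = slic(image, n_segments=n_segments, compactness=10)  # over-segment
    lightness = rgb2lab(image)[..., 0]                           # L channel, 0-100
    rock_mask = np.zeros(labels.shape, dtype=bool)
    for sp in np.unique(labels):
        region = lightness[labels == sp]
        hist, _ = np.histogram(region, bins=len(ground_hist), range=(0, 100))
        hist = hist / hist.sum()
        # Chi-square-like distance to the reference histogram of bare ground
        dist = 0.5 * np.sum((hist - ground_hist) ** 2 /
                            (hist + ground_hist + 1e-9))
        rock_mask[labels == sp] = dist > threshold
    return rock_mask
```

Superpixels whose appearance statistics stay close to the ground reference are kept as background; the rest are flagged as rock candidates for further processing.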
\subsection{Localization and control}
PID controllers were used in conjunction with a localization system detailed in previous work \cite{potiris2014terrain} to control the omni-directional drive such that the required position and orientation are achieved within a small error margin. Localization was fused with the computer vision to register observations on a map, which allowed the belief space to be updated. The action space was once again discretized into ten actions: the robot selects one of the two sensors and decides whether to move forward one step, move diagonally at -45 or 45 degrees, or rotate by -90 or 90 degrees. The robot also checked whether actions would lead to collisions or cause it to drive over valuable rocks, using an occupancy map provided to the robot prior to the mission.
\subsection{Results}
We compared our non-myopic planner against a random action policy with random start locations and orientations in the yard. Ten trials were run for each policy, with a sensing budget of 30 units and a cost function of 1 and 5 units for the remote and local sensor respectively. We also attempted to implement a greedy strategy, but found early in the trials that the robot often got stuck in local minima and was not able to give useful results. This is because, unlike the simulations, there were many non-traversable rocks present that often created concave areas in the occupancy map. A random policy was better able to recover from such situations, and hence was a better benchmark to compare our algorithm against. The information gain and accuracy scores, along with standard deviations, are shown in Table~\ref{my-label}.
\begin{table}[!t]
\centering
\caption{Performance comparison of MCTS planner with random for real robot experiments}
\label{my-label}
\begin{tabular}{|c|c|c|ll}
\hline
\textbf{Policy} & \textbf{Information Gain} & \textbf{Accuracy Score} \\ \hline
Random & 52.23 (11.76) & 161.89 (8.48) \\ \hline
MCTS-50 & 59.17 (18.63) & 170.04 (10.66) \\ \hline
\end{tabular}
\end{table}
If the robot holds a uniform belief over the class of $L$ across all cells, the accuracy score is $139.33$. The MCTS planner therefore gives almost a $25\%$ increase in accuracy score over the random policy and a $13\%$ increase in information gain. It is important to note that the testing environment was relatively small; longer-horizon plans are likely to generate even greater performance benefits.
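The uniform-belief baseline follows directly from the accuracy-score definition: with three location classes, every cell contributes exactly $1/3$. The implied cell count below is an inference on our part, not a figure stated in the text.

```python
# Under a uniform belief over 3 location classes, each cell contributes 1/3
# to the accuracy score, so a baseline score of 139.33 implies ~418 cells.
n_classes = 3
uniform_score = 139.33
n_cells = round(uniform_score * n_classes)  # inferred size of the mapped area
```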
\section{Conclusions and Future Work}
The results presented in this paper show that our approach has the potential to extend the autonomy of space rovers, and information gathering robots in general. A novel method for encoding scientific knowledge in a BN was proposed, along with a MCTS planner to reason about the network and create informative action policies. This enables robots to plan and deploy sensors to directly study scientifically interesting latent variables in a closed loop fashion. The reduced reliance on communication with scientists for navigation should lead to increased science returns in future missions. Our approach was tested extensively in simulation as well as in an analog Mars environment and showed significant performance improvements over simpler policies.
In future work we would like to evaluate our approach in different use cases such as agriculture and remote sensing. Richer knowledge representation frameworks such as statistical relational models could be explored, while the performance of the MCTS can be further improved through more informed rollout policies and better reward function approximations. Another interesting line of work is to adapt the structure and conditional probability parameters of the BN online to better fit and predict observations.
\section*{ACKNOWLEDGMENT}
We would like to thank ACFR, MAAS and the Mars Lab project for supporting this work. Thanks also go to Graeme Best, Oliver Cliff, Asher Bender and Steven Potiris for their valuable feedback.
\bibliographystyle{ieeetr}
\section{Introduction}
One critical step for cancer diagnosis and treatment is pathologists' analysis of histological glass slides obtained from a patient's tissue sections to identify evidence of tumor patterns and determine the diagnosis (\eg benign {\em vs.} malignant) and grade based on medical guidelines.
Such an analysis is often challenging for pathologists, due to the sheer amount of effort it requires to identify sparse, small-scaled, non-homogeneous histological patterns of tumors in each patient's case.
Further, the process suffers from subjectivity due to the intra- and inter-observer variations, \eg different interpretations of the grading guidelines and different ways of sampling and examining the slides to `implement' a given guideline \cite{cai2019hello}.
To overcome these challenges, digital pathology, enabled by advanced whole slide imaging techniques, promises to transform traditional manual optical microscopic examinations to be automated by data-driven artificial intelligence (AI) \cite{holzinger2017towards}. Specifically, there is a recent development of deep learning techniques to detect or grade carcinoma with digitized Whole Slide Images (WSI) \cite{pantanowitz2011review}, such as breast cancer \cite{wang2016deep, huang2018improving, rakhlin2018deep}, prostate cancer \cite{strom2020artificial}, lung cancer \cite{wang2019artificial}, uterine cervical cancer \cite{de2013fusion}, and glioblastoma \cite{barker2016automated}.
Although multiple AI-aided diagnostic systems have been designed preliminarily for pathologists \cite{schneider2012nih, bankhead2017qupath, carpenter2006cellprofiler, saltz2017containerized, martel2017image}, there remain barriers that prevent their adoption in clinical settings. Studies indicate that various factors are at play, including a lack of attention to clinicians' needs, which undermines their motivation to accept AI for clinical use, and a lack of workflow integration, which makes systems disruptive and time-consuming to use in practice \cite{yang2016investigating}. Other research pointed out that medical users desire more transparency to overcome subjectivity and more interpretability to correct the model when it makes mistakes \cite{cai2019hello, xie2020chexplain}. Building upon this prior work, we conducted a formative study with four board-certified pathologists that further summarizes three gaps in employing AI-enabled analysis of histological data:
\begin{enumerate}
\item The gap of {\bf comprehensiveness}. Most AI models tend to focus on one specific criterion inferred from one specific type of histological data, while pathologists in practice use multiple criteria and data types, \eg Hematoxylin and Eosin (H\&E) staining\footnote{A primary type of staining used for histopathology examinations.} for initial examination, and for instance, Ki-67 immunohistochemistry (IHC) staining\footnote{A more precise type of staining that utilizes the antibodies to highlight specific histological features, in this case, the histological pattern of mitosis.} \cite{abry2010significance} to collect different information to come up with a differential diagnosis \cite{gurda2014ki, shih1998ki};
\item The gap of {\bf explainability}. Most AI models function as `black boxes' and lack transparency in how they arrive at certain findings, including, locally, how an individual criterion is computed and, globally, how different criteria are combined to arrive at a diagnosis;
\item The gap of {\bf integrability}: most AI models abstract histological analysis as a computational problem, yet it remains unclear how to integrate such models into pathologists' existing workflow and practices.
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/main_view3.pdf}
\caption{A pathologist uses \xp~ in a top-down workflow to oversee how multiple AI models have contributed to the grading diagnosis of meningioma --- a type of brain tumor: (a) at the top-level, an AI-suggested diagnosis (\ie WHO Grade III) according to the World Health Organization guidelines, which consist of
(b) a list of meningioma grading criteria examined by multiple AI models;
(c) an arrow highlights the main contributing criterion to the current grading result;
(d) as the user selects the histological pattern of `mitosis' criterion, they see a list of sampled evidence based on which AI grades this criterion;
(e) each piece of evidence consists of AI's output probability, confidence level, and a saliency map that highlights the spatial support for the mitosis class;
(f) clicking a pair of thumbnail images registers the evidence to a whole slide image viewer with continuous magnification; the yellow outer box corresponds to the area of one high power field in the optical microscope, the blue box in the middle corresponds to a patch that includes the positive detection (a mitotic cell from \xp's AI), while the inner red box points out a more precise location of such a detection, corresponding to the evidence in the list;
(g) a user can verify each piece of sampled evidence by clicking on `approve', `decline' or `declare-uncertain' button;
(h) a heatmap can be enlarged to see a global distribution of each criterion;
(i) the user can also override each histological feature manually. Correspondingly, the findings demonstrated in (a,b) would be updated as the user overrides or modifies the AI's results.
}
\label{overview}
\end{figure}
To fill in these gaps, we propose a human-AI collaborative workflow for pathologists with two key design ingredients.
\begin{itemize}
\item {\bf Joint-analyses of multiple criteria}: at the top level, we present AI's findings based on multiple juxtaposed criteria across multiple data types, which are combined to produce a single diagnosis based on rules derived from existing medical guidelines. Such a design addresses comprehensiveness by following how pathologists often examine more than one criterion; the presentation of rules addresses global explainability, \ie how different AI-computed criteria contribute to a diagnosis.
\item {\bf Explanation by hierarchically traceable evidence}: for each criterion, we present a trace of evidence hierarchically across three levels: clicking a top-level finding based on a specific criterion (Figure \ref{overview}b) brings a user to the mid-level list of samples (Figure \ref{overview}d); The mid-level sample for the mitosis criterion consists of AI probability, confidence level and a saliency map \cite{chattopadhay2018grad} that highlights the spatial support for the mitosis class (Figure \ref{overview}e); clicking one sample further brings a user to the original WSI for examining low-level details (Figure \ref{overview}f). Such a design addresses local explainability by making the provenance of a criterion traceable and transparent; further, the top-down workflow is similar to (thus integrable with) pathologists' existing practices of delegating work to trainees and overseeing their findings.
\end{itemize}
\subsection{System Overview}
We instantiate these two design ingredients in the implementation of \xp~--- a human-AI collaborative brain tumor grading tool\footnote{Currently, we focus on the grading of meningioma---the most common primary type of brain tumor---as a point of departure for exploring the design of \xp. The goal is to aid pathologists to \textit{grade} meningioma with the aid of AI, which assumes that tumor areas have already been identified on a slide (\eg from an early radiology examination).} for pathologists to perform top-down, multi-criterion histological analysis. In this work, the medical task of meningioma grading is selected because it represents one of the most challenging tasks in pathology practice, covering three sources of difficulty: \one pathologists are required to locate and examine multiple histological features across cross-data-typed H\&E and IHC slides; \two the slides have high resolutions ($\sim$$(10^5)^2$ pixels) while the histological patterns of interest are small (\eg mitosis, $\sim$$90^2$ pixels); and \three the histological patterns are sparsely and non-homogeneously distributed. As such, the practice of grading meningiomas is a favorable arena for studying how AI should be applied to assist pathologists in carrying out the task.
Figure \ref{overview} shows an overview of \xp~'s interface. A typical workflow starts with a pathologist first seeing the top-level suggested tumor grade (Figure \ref{overview}a), where an arrow (Figure \ref{overview}c) highlights the main contributing criterion that leads to the suggested grading. The analysis of different criteria (Figure \ref{overview}b) is produced by multiple AI models that examine histological data --- H\&E and Ki-67 --- based on the World Health Organization (WHO) meningioma grading guideline \cite{louis20072007}.
The pathologist can further select and drill down to a specific criterion, which retrieves a set of examples as evidence (Figure \ref{overview}d) to explain AI's findings.
For example, for the criterion of mitotic count (the count number of a type of cell within a fixed size of area), the pathologist can see pieces of evidence (Figure \ref{overview}d) detected by \xp's mitosis classification model. Each piece of evidence (Figure \ref{overview}e) demonstrates multiple explainable components, including probability, confidence level, and a saliency map.
Moreover, the pathologist can open a heatmap (Figure \ref{overview}h) overlaying the whole slide image to overview the density distribution of positive mitotic cells recognized by the AI.
Selecting a piece of evidence directs the pathologists' attention to a high power field\footnote{One high power field corresponds to a field of view under x400 magnification from the optical microscope.} (HPF, yellow box) of the mitosis on the original whole slide image, where they can further examine the low-level histological features and approve/decline/declare-uncertain AI's analysis with one click (Figure \ref{overview}g,i), which in turn will update AI's findings on individual criterion and, if necessary, the overall suggested grading as well based on WHO guidelines.
We conducted two evaluations to validate \xp:
\one A technical evaluation shows AI models identifying multiple meningioma grading criteria, achieving F1 scores of 0.755, 0.904, 0.763, 0.946 in classifying four histological patterns of mitosis, necrosis, prominent nucleolus, and sheeting tissues, respectively.
Moreover, models used in \xp~achieve averaged error rates of 12.08\% and 29.36\% in counting nuclei (for the histological pattern of hypercellularity) and calculating Ki-67 proliferation index;
\two Work sessions with twelve\footnote{We would like to point out that our participants are highly specialized medical experts who come from a much smaller population than general users and are very difficult to recruit due to their busy schedules.} medical professionals\footnote{Two attendings, two fellows, seven senior residents, and one junior resident.} across three medical centers, comparing the performance of \xp~with an off-the-shelf whole slide image viewer as the baseline. The results show that, with less than an hour of learning, participants were able to use \xp~to make more accurate grading decisions: participants graded 7/12 cases correctly with the baseline interface, compared to 17/20 cases with \xp. Meanwhile, a post-study questionnaire shows that participants found \xp~more comprehensive ($p$=0.001), more integrable with their existing workflow ($p$=0.006), requiring less effort ($p$=0.002), and more effective at reducing workload ($p$=0.002) in grading meningiomas. Moreover, they gave \xp~high ratings on explainability ($\mu$=5.58/7) and trust ($\mu$=5.83 and 6.00/7). Last but not least, pathologists were more likely to use \xp~in the future ($p$=0.002) and expressed a strong overall preference for \xp~(9/12 ``totally prefer'', 3/12 ``much more prefer'').
\subsection{Contributions}
In terms of system input, \xp~goes beyond previous work that merely relies on a single data type (\eg H\&E \cite{rakhlin2018deep, liu2017detecting, wang2016deep, gu2020lessons}, or immunohistochemistry examinations \cite{saha2017advanced, xing2019pixel, anari2010computer}); instead, \xp's pipeline utilizes multiple H\&E-based criteria (Figure \ref{overview}b) and a complementary immunohistochemistry data type --- Ki-67 slides as input. In terms of system output, \xp~goes beyond prior work on automating diagnosis by `black box' AI \cite{irshad2013automated, lu2013toward, mishra2018convolutional, yap2015automated}; instead, \xp~examines whole slide images with joint analyses across multiple criteria where explanatory evidence can be traced hierarchically from top-level grading to mid-level samples and to low-level details on a whole slide image. Moreover, \xp~differs from previous medical human-AI collaboration systems \cite{cai2019human, corvo2017pathova, krueger2019facetto} \one by focusing on a different clinical task of diagnosing with AI, which requires an aggregation of detections and a fusion of decisions from multiple criteria and \two by contributing a top-down diagnosis workflow that assists pathologists in overseeing and verifying AI as a task integrable to their day-to-day work.
{\bf Our main contribution is a generalizable tool design for human-AI collaborative diagnosis that employs a workflow with joint-analyses of multiple criteria and explanation by hierarchically traceable evidence}, addressing three existing gaps of comprehensiveness, explainability and integrability. Our study provides initial insights into the application of the proposed AI-assistive diagnosis, which share the common requirements of processing multiple types of medical information and overseeing AI's performance explainably from the global findings to traces of local evidence.
\section{Background of Meningioma}
\label{sec:background}
In this work, we target the challenging task of meningioma grading to probe the design of human-AI collaborative tools for pathology diagnosis.
Meningioma is the most common primary brain tumor in adults and, according to the World Health Organization (WHO) guidelines (2007), can be graded as Grade I, Grade II, or Grade III \cite{louis20072007}. The new WHO guideline (2021) still recommends the same grading criteria, although the nomenclature is slightly different.
The accurate grading of meningioma is vital for the treatment planning: the Grade I tumors can be treated with either surgery or external beam radiation, while Grade II/III ones often need both treatments \cite{walcott2013radiation}; meanwhile, research shows that patients with Grade III meningiomas suffer a higher recurrence rate as well as lower survival rate in comparison to Grade II patients \cite{palma1997long}.
Pathologists need to search and locate features across various magnifications in order to grade meningiomas. Specifically, they first localize the regions of interest (ROIs) at low magnification (x40), then switch to the patch level at higher magnification (x100), and sometimes zoom further to the highest magnification (x400) at the cell level. These steps are usually repeated multiple times until pathologists have collected sufficient findings to conclude a grading and sign off the case. Figure \ref{fig:meningioma} visualizes such a workflow with typical pathology slides for demonstration. The grading workflow starts with a specimen resected from a patient, which is cut into slices and stained with Hematoxylin and Eosin (H\&E) solution on a glass slide (Figure \ref{fig:meningioma}a). Apart from H\&E, Ki-67 immunohistochemistry (IHC) \cite{abry2010significance} is an additional staining method that is often used (Figure \ref{fig:meningioma}b) to provide an estimated proliferation index (Figure \ref{fig:meningioma}d,k), which is highly correlated with meningioma grading. Next, we describe the WHO guidelines for meningioma grading \cite{louis20072007}:
\begin{itemize}
\item \textbf{Grade I} (benign) meningiomas include a ``histological variant other than clear cell, chordoid, papillary, and rhabdoid''~\cite{brat2008surgical} \textit{and} a lack of criteria from grade II and III meningiomas.
\item \textbf{Grade II} (atypical) meningiomas are recognized by meeting at least one of the four following criteria:
\begin{enumerate}[wide=\dimexpr\parindent+\labelsep\relax, leftmargin=* ]
\item the appearance of 4 to 19 mitoses (Figure \ref{fig:meningioma}c,l) in 10 consecutive high-power fields (HPFs). Moreover, since mitoses are challenging to recognize in H\&E, the Ki-67-positive nucleus (Figure \ref{fig:meningioma}k) in the corresponding areas of Ki-67 (Figure \ref{fig:meningioma}d) are often referred to for disambiguation;
\item at least three out of five histological features are observed: hypercellularity --- an abnormal excess of cells in an HPF (Figure \ref{fig:meningioma}f), prominent nucleoli --- enlarged nucleoli in a cell (Figure \ref{fig:meningioma}g,m), sheeting --- loss of `whirling' architecture (Figure \ref{fig:meningioma}h), necrosis --- irreversible injury to cells (Figure \ref{fig:meningioma}i), and small cell --- cell aggregation with high nuclear/cytoplasmic ratio (Figure \ref{fig:meningioma}j);
\item brain invasion --- invasive tumor cells in brain tissue is observed (Figure \ref{fig:meningioma}e);
\item the appearance of clear cell or chordoid histological subtype.
\end{enumerate}
\item \textbf{Grade III} meningiomas are justified if at least one of the following criteria is met \cite{backer2012histopathological}:
\begin{enumerate}[wide=\dimexpr\parindent+\labelsep\relax, leftmargin=* ]
\item 20 or more mitoses per 10 consecutive HPFs;
\item the appearance of frank anaplasia, papillary or rhabdoid histological subtype.
\end{enumerate}
\end{itemize}
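The grading rules above amount to a small decision procedure. The following sketch encodes them for illustration only; the function, its argument encoding, and the subtype strings are ours, not part of \xp's implementation:

```python
def who_meningioma_grade(mitoses_per_10hpf, features_present, brain_invasion,
                         subtype=None):
    """Simplified sketch of the WHO meningioma grading rules described above.

    mitoses_per_10hpf: mitotic count in 10 consecutive high-power fields.
    features_present: set drawn from {'hypercellularity', 'prominent_nucleoli',
        'sheeting', 'necrosis', 'small_cell'}.
    brain_invasion: whether invasive tumor cells are seen in brain tissue.
    subtype: histological subtype string (illustrative spellings), or None.
    """
    # Grade III: 20+ mitoses per 10 HPFs, or an anaplastic/papillary/rhabdoid subtype.
    if mitoses_per_10hpf >= 20 or subtype in {'frank anaplasia', 'papillary', 'rhabdoid'}:
        return 3
    # Grade II: any one of the four criteria listed above.
    if (4 <= mitoses_per_10hpf <= 19
            or len(features_present) >= 3
            or brain_invasion
            or subtype in {'clear cell', 'chordoid'}):
        return 2
    # Grade I: none of the grade II/III criteria met.
    return 1
```

Note that the grade III mitosis test must be checked first, since the grade II mitosis range (4 to 19) excludes it by definition.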
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/sample32.pdf}
\caption{An example workflow of pathologists' grading with whole slide images (WSIs) from our formative study. (a) The resected tissues are first stained with H\&E solution and scanned into WSIs. (b) An additional Ki-67 IHC staining is usually used to locate mitoses. Pathologists then zoom into the patch level and seek certain histological patterns listed in the WHO grading guidelines. Specifically, pathologists look for (c) mitotic cells (marked in the red box) in high power fields with the help of (d) Ki-67 stains; (e) brain invasion (invasive tumor cells in brain tissue); five histological patterns, including (f) hypercellularity (an abnormal excess of cells), (g) prominent nucleoli (enlarged nucleoli pointed by the arrow), (h) sheeting (loss of `whirling' architecture), (i) necrosis (irreversible injury to cells marked in the red box), (j) small cells (tumor cell aggregation with high nuclear/cytoplasmic ratio marked in the red box). For some criteria (\eg mitosis (k,l) and prominent nucleoli (m)), pathologists are required to zoom further into the cell level for examination.}
\label{fig:meningioma}
\end{figure}
As shown above, meningioma grading is not only challenging but also high-stakes --- an overestimated grade would subject patients to unnecessary treatment, while an overlooked one would delay necessary treatment.
\section{Related Work}
In this section, we review the related work of \xp~ from three areas: \one data-driven AI algorithms for digital pathology, \two tools for digital pathology, and \three human-AI collaborative tools.
\subsection{Data-Driven AI Algorithms for Digital Pathology}
According to Komura \etal~\cite{komura2018machine}, there are three primary categories of applications of data-driven AI algorithms in digital pathology: \one Computer-Aided Diagnosis (CAD), \two Content-Based Image Retrieval (CBIR), and \three feature-triggered biomarker discovery. We will review the representative works on CAD, since \xp~ falls into this category.
Current AI algorithms are primarily based on H\&E slides, the most commonly used stained slides for providing a detailed view of the tissue.
In particular, several existing approaches utilize AI algorithms to detect a single criterion in different diseases. While not targeted at meningiomas specifically, they seek the same criterion as in WHO grading guidelines \cite{louis20072007} (\eg mitosis). For example,
Irshad \etal~incorporate selected color spaces and morphological features into the mitotic cell detection pipeline to support breast cancer grading \cite{irshad2013automated}; Lu \etal~use Bayesian modeling and a local-region thresholding method to detect mitotic cells \cite{lu2013toward}; Mishra \etal~propose a CNN to identify necrotic tissue in osteosarcoma tumors \cite{mishra2018convolutional}; Zhou \etal~enhance the traditional U-Net models by applying nested, dense skip pathways for nuclei segmentation \cite{zhou2018unet++}; Yap \etal~use RankBoost-permutations to integrate multiple base classifiers to detect prominent nucleoli patterns from multiple tumor tissues \cite{yap2015automated}.
Besides H\&E slides, AI algorithms have also been devised on other stainings that can be applied in clinical settings to assist decision-making, \eg Ki-67 IHC tests. For example, Saha \etal~use CNN as a feature extractor with Gamma Mixture Model to detect immuno-positive and negative cells in breast cancer \cite{saha2017advanced}. Xing \etal~train a fully connected convolutional network that can perform nucleus detection and classification from Ki-67 slides in a single stage \cite{xing2019pixel}. Anari \etal~utilize fuzzy c-means clustering to extract positive and negative cells for meningioma tissues \cite{anari2010computer}.
Different from prior work that simply applies an AI model for predicting and red-flagging the target of a clinical task, the design of \xp~ has two unique features. First, previous work has been mainly based on a single staining examination as input, \eg H\&E images, which differs from clinical settings where pathologists usually refer to multiple IHC examinations to acquire comprehensive information \cite{gurda2014ki, shih1998ki}. Following such practices, \xp's~ AI models accept multiple examinations and direct users across different input sources during diagnosis. Second, previous studies have focused on the direct prediction of the diagnostic target (\eg classifying whether the tissue is tumor). Although some AI has reported performance on par with pathologists, the non-transparent, non-explainable characteristics of data-driven AI algorithms can still lead to distrust in high-stake medical decision processes \cite{bera2019artificial}. In contrast, \xp~ decomposes the diagnosis as rule-based joint analyses based on multiple juxtaposed WHO criteria. By demonstrating the example-based explanations according to the criteria, \xp~ supports pathologists to collaborate with AI by seeing, modifying, and verifying hierarchical evidence that leads to AI's findings.
\subsection{Tools for Digital Pathology}
In the domain of digital pathology, multiple tools have been designed to assist pathologists for the purpose of clinical diagnosis.
Rather than taking a pixel-based data-driven approach, most of these tools rely on morphological features (\eg nuclear shape and texture). Since such features are typically associated with attributes of the disease, they tend to have greater explainability and a stronger morphological underpinning than the ones based on data-driven models.
Meanwhile, because such features are shallow and low-level, these tools may suffer from inferior performance on diagnostic tasks and are mostly applied only for general image analysis \cite{bera2019artificial}.
For example, ImageJ \cite{schneider2012nih}, one of the most commonly used scientific image analysis tools with extensions for computational pathology, provides functions of nuclei segmentation and nuclei characteristics analysis (\eg intensity distribution and texture); CellProfiler \cite{carpenter2006cellprofiler} automates morphological analysis and classifies cell phenotypes; QuPath \cite{bankhead2017qupath} and CaMicroscope \cite{saltz2017containerized} provide extensive annotation and automation for nuclei segmentation and positive cell counting; Pathology Image Informatics Platform (PIIP) \cite{martel2017image} extends the Sedeen viewer\footnote{\url{https://pathcore.com/sedeen/}} by adding plugins on out-of-focus detection, region of interest transformation, and IHC slide analysis.
Despite the recent development of AI algorithms, only a few have been adapted into tools. For example, Steiner \etal's tool can red-flag skeptical regions on slides for grading prostate biopsies and evaluate the impact of the presence of AI on inter-observer consistency and time cost \cite{steiner2020evaluation}. Other models still lack integrability into physicians' existing workflow, which has been recognized as a long-standing issue. Teach \etal~ have studied physicians' attitudes on clinical consultation systems and offer suggestions on computer-based decision support systems, \eg ``minimizing changes to current clinical practices'' and ``enhancing the interactive capabilities'' \cite{teach1981analysis}. Middleton \etal~have reviewed research on clinical decision support since 1990 and point out that the poor integration in clinicians' workflow is becoming a key barrier preventing the application of such tools \cite{middleton2016clinical}.
Specifically related to histological diagnosis, there is limited research on tool design for pathologist-AI collaboration on a clinical task \cite{cai2019human}. Different from prior work, \xp~addresses the integrability of AI using a specific task of meningioma grading as a case study. By working closely with pathologists, \xp~ proposes a top-down workflow design for the tool inspired by how pathologists oversee trainees’ work --- presenting pathologists with a trace of evidence hierarchically from each computed criterion to contextual information and allowing pathologists to correct the AI on the fly.
\subsection{Human-AI Collaborative Tools}
Recent HCI research has demonstrated numerous examples of human-AI collaboration, where AI takes human input and conducts automation to ease humans' burden on performing repeated routines \cite{chen2018forte, willett2018mixed, lee2019smartmanikin}. In the digital pathology domain, multiple works have shown that the human + AI team has the potential to increase the quality of diagnoses. For example, Wang \etal~report that combining AI and human diagnoses improves pathologists’ performance on breast cancer metastasis classification with an $\sim$85\% reduction of human error rate \cite{wang2016deep}. More recent work by Bulten \etal~points out that the introduction of AI assistance increases pathologists' agreement with the expert reference standard in the prostate cancer grading \cite{bulten2021artificial}. Meanwhile, Buccinca \etal~show that users might over-rely on AI by failing to recognize or correct AI when its predictions are wrong \cite{buccinca2021trust}. Such a contradiction in the performance opens up a question: how should human-AI collaboration be employed to harness the AI without incurring bias? To answer this question, Horvitz first sheds light on the design of human-AI collaboration by proposing a series of principles of mixed-initiative interactions \cite{horvitz1999principles}. Amershi \etal~further enhance Horvitz's work with 18 design guidelines on how humans could better interact with AI, such as ``support efficient correction'' and ``make clear why the system did what it did'' \cite{amershi2019guidelines}.
However, Yang \etal~ point out that implementing these guidelines is a non-trivial task \cite{yang2020re} --- specifically, the uncertainty and complexity in AI's inference make it hard for users to control the data reasoning process. Going beyond collaborative research in the general domain, Cai \etal~highlight medical experts' need for information from AI, which includes the AI's capabilities measured in well-defined metrics and transparency to overcome subjectivity \cite{cai2019hello}. Meanwhile, merely automating a part of work would not be sufficient to motivate the medical users --- a medical human-AI collaborative tool should set the explicit goal of helping medical users increase the overall quality of work \cite{yang2016investigating}.
A number of existing human-AI collaboration projects on pathology have been focused on Content-Based Image Retrieval (CBIR). With a given slide (or patch) from pathologists, such tools retrieve previous examples of a similar pattern to help the decision-making. For instance, Hegde \etal~propose a reversed image searching tool to help pathologists find histological image patches of similar histological features or disease states \cite{hegde2019similar}; Cai \etal~enable pathologists to specify custom concepts that guide the retrieval of similar annotated patches of histological patterns \cite{cai2019human}. However, CBIR focuses on image searching, which is a low-level task: pathologists still need to decide what images to search for, how to use the search results, and what to conclude from them. On the contrary, diagnosing/grading carcinoma in digital pathology is a high-level task that automates the aggregation of detections and the fusion of decisions from multiple criteria. This is also confirmed by Tschandl \etal's work, which discovers that CBIR tools require significantly more time to interact with than those giving diagnostic predictions \cite{tschandl2020human}. In contrast to the above-mentioned CBIR tools \cite{hegde2019similar, cai2019human}, \xp~is considered a tool of Computer-Aided Diagnosis (CAD) or a Clinical Decision Support System (CDSS).
Going beyond CBIR, existing CAD/CDSS tools enhance the detection in digital pathology with visualization. For example, Corvo \etal~develop PathoVA that provides AI support for breast cancer grading by visualizing three types of pixel-level clues \cite{corvo2017pathova}. The system can also track pathologists' interactions and help them generate reports by providing snapshots for confirmed areas. Krueger \etal~ enhance users' exploration of multi-channel fluorescence images to support cell phenotype analysis \cite{krueger2019facetto}. Specifically, the tool maintains hierarchical statistics about the number of cell-level findings to help a user keep track of analysis and interactively update the statistics with machine learning algorithms on the fly. These tools provide a bottom-up approach to assist pathologists in making a diagnosis: pathologists are only prompted with low-level AI-generated clues (\eg highlighting tumor cells with a segmentation map); then, the diagnosis is drawn by pathologists from fusing observations with these clues. In contrast, \xp~implements a top-down workflow, where pathologists first see an overall grading based on joint analyses of multiple criteria, then drill down to localized areas with traceable evidence and further to low-level patterns for verification and correction. Such a design provides ``actionable information'' \cite{gu2020lessons} and reduces the total areas of study for pathologists. Moreover, to support such a workflow, \xp~makes comprehensive AI detections based on medical guidelines, whereas the above-mentioned prior studies \cite{krueger2019facetto,corvo2017pathova} only provide partial clues.
\section{Formative Study}
\label{sec:formative}
We conducted a formative study with four board-certified neuropathologists (average experience $\mu=21.25$ years) from a local medical center\footnote{Please see the supplementary material for the demographic information for the participants.} to reveal the system requirements for human-AI collaborative histological diagnosis. We started by describing the project's motivation and then asked pathologists to describe their typical process of examining a patient's case. Next, we asked them to describe the challenges in their practice and their expectations on an AI-enabled system to assist such a process.
\subsection{Existing Challenges for Pathologists}
We found two major challenges in the current pathology practice of meningioma grading, which validate our motivation for introducing AI into the diagnosis process.
\textbf{Time consumption}.
The small-scaled characteristics of the patterns of interest and the very high resolution of slides make meningioma grading highly time-consuming for pathologists. A resected section from a patient's brain tissue would generate eight to twelve H\&E slides, and pathologists need to look through all those slides and integrate the information found on each slide. Except for a few experienced pathologists, going through a single patient's case, which often consists of 10+ slides, can take up to several hours. Automating portions of the slide examination process with AI can potentially reduce such time consumption, alleviate pathologists' workload, and increase the overall throughput.
\textbf{Subjectivity}.
There are high intra- and inter-observer variations during the grading of tumors. Pathologists summarize three factors contributing to such subjectivity:
\one a lack of precise definitions --- the WHO guidelines do not always provide a quantified description for the five histological features of high-grade meningioma. For example, for the `prominent nucleoli' criterion, the WHO guideline does not specify how large a nucleolus must be to be considered `prominent';
\two implementation of the examination process --- for example, the mitotic count for grade II meningioma is defined as 4 to 19 mitotic cells in 10 consecutive HPFs. However, the guideline does not specify the sampling rules of these 10 HPFs.
As a result, different pathologists are likely to sample different areas on the slide;
\three natural variability in people, such as the level of experience, time constraint, and fatigue \cite{croskerry2017diagnosis}. For AI, the definition and implementation of guidelines can be codified into the model and visualized in the system that performs consistently to overcome people's variability.
\subsection{System Requirements for xPath}
In addition to the two innate requirements of reducing time and overcoming subjectivity, we further identified the following requirements related to the human-AI collaborative aspects.
{\bf Comprehensiveness}.
The grading of tumors involves multiple sources of information (from different staining, \eg H\&E, Ki-67) and criteria (\eg mitosis, nuclei density, necrosis, small cell, sheeting, prominent nucleoli, brain invasion). To incorporate \xp~ into the current practice, the system should comprehensively support all these sources and criteria.
{\bf Explainability}.
In lieu of a single grading result from a black-box AI model, the system should provide visual evidence to justify the AI's findings according to the medical definition of each criterion. This is because each criterion (based on histological features of an HPF) requires examining lower-level details in order to interpret an AI's finding and further needs to be traceable to the original location in the whole slide image for a review with more contextualized information. Overall, there should be explainability both globally (how results from multiple criteria are combined to yield a grading) and locally, which includes \one what evidence leads to the computed result of each criterion, \eg where mitoses are detected that lead to the mitosis count, and \two why a specific piece of evidence is captured by AI, \eg which part of the evidence convinces the AI that it contains mitoses.
{\bf Integrability}.
Similar to how attending pathologists oversee trainees' work, the system should allow pathologists to oversee AI's findings by retrieving detailed contextualized evidence on-demand. When showing the evidence of grading, the system should not overwhelm pathologists with all evidence from a whole slide; rather, it should direct pathologists to the representative regions of interest. Given that errors are inevitable for most existing AI models, the system should enable pathologists to cross-check each criterion and override the results manually when they detect an error.
\section{The Design of xPath}
Guided by the aforementioned system requirements, we developed \xp~ with two key designs: \one joint-analyses of multiple criteria and \two explanation by hierarchically traceable evidence. We first detail the two designs and then describe how a pathologist uses \xp~ to perform a meningioma grading task.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{figures/menu_bar2.pdf}
\caption{Joint-analyses of multiple criteria in \xp's design: (a) the overall suggested grading; (b) a structured overview of each WHO criterion with (c) an arrow highlighting the main contributing criterion to the suggested grading; (d) users can override criteria by right-clicking on each item and change the result to `found', `not found' or `uncertain'; \xp~provides color bars to indicate the status of each criterion: (e) red indicates a confirmed abnormal criterion (or \textit{presence}), (f) green indicates a confirmed normal criterion (or \textit{absence}), (g) orange indicates the criterion is unconfirmed/confirmed uncertain, and (h) gray indicates the criterion is not applicable in this case.}
\label{fig:high_level}
\end{figure}
\subsection{Joint Analyses of Multiple Criteria}
Based on the formative study, we found that pathologists rely on the WHO guideline to grade meningiomas, a process that involves reasoning jointly from multiple criteria. Thus \xp's design follows the WHO meningioma guideline and employs AI to compute eight criteria for meningioma grading, \ie mitotic count, Ki-67 proliferation index, hypercellularity, necrosis, small cell, prominent nucleoli, sheeting, and brain invasion\footnote{This work does not consider using AI to identify the subtypes (\eg clear cell, frank anaplasia) since they are easy for pathologists to discover and justify.}. Details on the AI implementation are described in Section \ref{AI_backend}. These criteria can be split into two categories: quantitative and qualitative. For the quantitative criteria of the mitotic count and Ki-67 proliferation index, we show their predicted \textit{quantitative values} directly. For the rest of the criteria, concerned with the \textit{presence} or \textit{absence} of a specific histological pattern, \xp~ provides recommendations of regions of interest (ROI) hotspots according to the largest aggregations of the AIs' probabilities.
Figure \ref{fig:high_level} demonstrates the interface of multiple criteria. Besides showing the current suggested grading result (\ie the suggested `WHO grade II', Figure \ref{fig:high_level}a) and a structured overview of each WHO criterion (\eg mitosis, Ki-67, Figure \ref{fig:high_level}b), \xp~ displays an arrow to indicate the main contributing criterion that leads to the grading based on the underpinning WHO guidelines, \eg the mitotic count (Figure \ref{fig:high_level}c). All the criteria are linked with ROIs related to the findings\footnote{Sampling rules of ROIs vary across criteria. Please refer to Section \ref{evidence_rule} and the supplementary material for details.}. Moreover, AI's recommendations on all the criteria can be confirmed and modified by the pathologist. \xp~ uses color bars (Figure \ref{fig:high_level}e,f,g,h) to indicate the status: as shown in Figure \ref{fig:high_level}, red indicates a confirmed abnormal criterion (or \textit{presence}), green indicates a confirmed normal criterion (or \textit{absence}), orange indicates that the criterion is unconfirmed/uncertain (neither presence nor absence), and gray indicates the criterion is not applicable in this case. Once a pathologist overrides the result of any criterion (Figure \ref{fig:high_level}d), the color bar and the final grading are updated correspondingly.
In summary, the joint analysis of multiple criteria addresses comprehensiveness by following how pathologists often examine more than one criterion; the presentation of rules addresses global explainability, \ie how different AI-computed criteria are combined to arrive at a diagnosis.
\subsection{Explanation by Hierarchically Traceable Evidence for Each Criterion}
\label{evidence_rule}
Another finding from the formative study is that, besides a global explanation of the overall grading, pathologists also would like to see evidence that justifies AI’s grading, \eg how AI processes the image of a local patch (for local explainability). Hence, we designed \xp~ to provide such explanations by top-down, hierarchically traceable evidence.
Further, such a workflow mimics a scenario that we found in the formative study: pathologists reviewing and overseeing trainees' work. By casting AI's findings in the role of a trainee's, we carry the pathologist-trainee relationship over to pathologist-AI collaboration, thus making AI more integral to pathologists' current practices.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/work_path3.pdf}
\caption{We designed a top-down human-AI collaboration workflow for pathologists to interact with \xp~ (left) and pathologists' corresponding footprints on the \xp's frontend user interface (right). A pathologist user starts from (a) the automatically-generated suggested grading result and then examines (b) the main contributing criterion. They can further examine (c) the evidence list, and register back into the original whole slide image in higher magnifications (d,e). Furthermore, users can (f) approve/decline/declare-uncertain on the evidence, or (g) override AI results directly by right-clicking on each criterion. For the rest of the criteria, the user could repeat (c-g) until they have collected sufficient confidence for a grading diagnosis.}
\label{fig:work_path}
\end{figure}
As shown in Figure \ref{fig:work_path}, \xp~ presents a hierarchical trace of evidence for each criterion with positive findings, which allows a pathologist to see a list of {mid-level} samples (Figure \ref{fig:work_path}c) that lead to a computed criterion (Figure \ref{fig:work_path}b). For the most important criterion --- mitosis, \xp~demonstrates a series of explanations in each mid-level sample, including AI's output probability (Figure \ref{fig:evidence}a), AI's confidence level (Figure \ref{fig:evidence}b), and a saliency map (Figure \ref{fig:evidence}c) that highlights the spatial support for the mitosis class in the reference image\footnote{Please refer to the supplementary material for the implementation of calculating the confidence level and the saliency map.}, allowing pathologists to check AI's validity on each sample quickly. Further, at the {low-level}, \xp~ supports registering each sample of the {mid-level} evidence into the whole slide image (WSI) to enable pathologists to examine in even higher magnification and search nearby for more contextual information (Figure \ref{fig:work_path}d,e). With the provided {mid-} and {low-level} information, a pathologist can approve/decline/declare-uncertain a sample for a criterion with one click (Figure \ref{fig:work_path}f), or return to the top-level and directly override AI's results on each criterion (Figure \ref{fig:work_path}g). Correspondingly, the overall suggested grading (Figure \ref{fig:work_path}a) is updated dynamically upon the user's input.
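The confidence level shown with each mid-level sample is derived from probability thresholds (detailed in the supplementary material). A minimal sketch of such a mapping, with illustrative cut-off values that are our assumption rather than \xp's actual thresholds:

```python
def confidence_level(prob, low=0.6, high=0.9):
    """Map a model's output probability to a coarse confidence label.

    The cut-offs `low` and `high` are illustrative placeholders; the
    actual thresholds would be chosen per model on validation data.
    """
    if prob >= high:
        return 'high'
    if prob >= low:
        return 'medium'
    return 'low'
```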
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{figures/evidence_explain.pdf}
\caption{For the most important criterion, mitosis, \xp~demonstrates a series of explanations in each mid-level sample, including the (a) AI's probability, (b) AI's confidence level, which is justified by the probability thresholds, and (c) a saliency map (calculated by the Grad-CAM++ algorithm \cite{chattopadhay2018grad}) that highlights the spatial support for the mitosis class in the reference image on the left.}
\label{fig:evidence}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/hpf_roi_sample2.pdf}
\caption{Selected pieces of sampled evidence from in-the-wild detection where we applied trained models directly on multiple H\&E and Ki-67 IHC slides:
(a) a highest focal region sampling result of mitotic count on H\&E slide (red box, 1HPF), the small blue frames are the mid-level samples (that are shown on the evidence list), the smaller red boxes in the blue frames mark the positions of mitoses found by \xp's AI;
(b) a highest focal region sampling result on the Ki-67 IHC slide (red box, 1HPF);
(c) a highest region sampling result of mitotic count on H\&E slide (red box, 10HPFs), the small blue frames are the mid-level samples;
(d) highest region sampling result on the Ki-67 IHC slide (red box, 10HPFs);
(e) a hypercellularity ROI sample (blue box);
(f) a necrosis ROI sample (blue box);
(g) a small cell ROI sample (the inner blue box, the outer yellow box marks the dimension of 1HPF);
(h) a prominent nucleoli ROI sample (blue box).}
\label{fig:hpf_roi_sample}
\end{figure}
Figure \ref{fig:hpf_roi_sample} demonstrates typical examples of evidence provided by \xp~ during the ``in-the-wild'' test, where we applied trained models directly on multiple H\&E and Ki-67 IHC slides. Particularly, for the mitosis-related criteria (\ie mitotic count and Ki-67 proliferation index) --- the most reliable and commonly used criteria in the grading --- we introduce two `shortcuts' for pathologists to look into AI's results:
\begin{itemize}
\item \textbf{Highest Region Sampling}.
One WHO criterion is ``mitotic count in 10 consecutive HPFs''. Our formative study found that the inter-observer consistency of ``10 consecutive HPFs'' is low. To address this problem, \xp~provides the highest region sampling tool. The highest region is defined as a $2\times 5$ HPF area with the highest number of mitotic counts (Figure \ref{fig:hpf_roi_sample}c) or the highest Ki-67 proliferation index (Figure \ref{fig:hpf_roi_sample}d).
This tool speeds up a pathologist's work by helping them locate the 10 consecutive HPFs required by the WHO guidelines.
\item \textbf{Highest Focal Region Sampling}.
From our formative study, pathologists mentioned that the high-grade meningiomas share a common feature of increased mitotic activities in a localized area. Hence, \xp~ provides the highest focal sampling tool to help pathologists better localize highly concentrated mitosis/Ki-67 proliferation index areas. In \xp, the highest focal region is calculated as the one HPF with the highest number of mitotic counts (Figure \ref{fig:hpf_roi_sample}a) or the highest Ki-67 proliferation index (Figure \ref{fig:hpf_roi_sample}b). Using this tool, pathologists can locate foci of highly-mitotic areas that the highest region sampling might miss.
\end{itemize}
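Both sampling tools reduce to the same computation: sliding a fixed-size window over a grid of per-HPF scores (mitotic counts, or local Ki-67 proliferation indices) and keeping the window with the highest total. The following is our illustrative sketch, not \xp's actual implementation:

```python
def highest_region(counts, rows=2, cols=5):
    """Find the rows x cols block of HPFs with the highest total score.

    counts: 2D list where counts[r][c] is the per-HPF score at grid
    position (r, c). Returns ((top, left), total) for the best block.
    """
    n_rows, n_cols = len(counts), len(counts[0])
    best_pos, best_total = None, float('-inf')
    for top in range(n_rows - rows + 1):
        for left in range(n_cols - cols + 1):
            total = sum(counts[r][c]
                        for r in range(top, top + rows)
                        for c in range(left, left + cols))
            if total > best_total:
                best_pos, best_total = (top, left), total
    return best_pos, best_total
```

With the default `rows=2, cols=5`, this realizes the $2\times 5$ HPF highest region; calling it with `rows=1, cols=1` gives the single-HPF highest focal region.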
Pathologists can go beyond the sampled areas and navigate the high-heat areas using heatmaps generated for the whole slide (please see the supplementary material for details). For example, the mitosis heatmap registers all AI-detected positive mitotic cells as a mitotic density atlas, where high-heat areas indicate a high density of mitotic cells. As such, the heatmap would serve as a `screening tool' to help pathologists filter out unrelated areas and rapidly narrow down to the ROIs that are scattered in an entire WSI. \xp~ provides such `screening tools' for all criteria.
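Such a density atlas can be sketched by binning detection coordinates into a coarse grid, where high-count cells correspond to high-heat areas on the slide overview. The grid cell size below is an illustrative assumption; \xp's actual heatmap construction is described in the supplementary material:

```python
def density_atlas(detections, slide_w, slide_h, cell=1000):
    """Bin (x, y) detection coordinates into a coarse grid.

    detections: iterable of pixel coordinates of AI-detected positives
    (e.g. mitotic cells). Returns a 2D grid of counts per cell.
    """
    cols = (slide_w + cell - 1) // cell   # ceil division
    rows = (slide_h + cell - 1) // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in detections:
        grid[y // cell][x // cell] += 1
    return grid
```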
In summary, in contrast to prior work that enables pathologists to define their own criteria for finding similar examples \cite{cai2019human}, \xp~aims at making examination based on an existing criterion traceable and transparent with evidence, allowing pathologists to view comprehensive AI findings from multiple criteria; further, the top-down workflow is integrable with pathologists' existing practices of delegating work to trainees, which enables pathologists to oversee AI's performance.
\section{Implementation of xPath's AI Backend}
\label{AI_backend}
\xp~implements an AI-aided pathology image processing backend to compute the eight WHO criteria of the mitotic count, Ki-67 proliferation index, hypercellularity, necrosis, small cell, prominent nucleoli, sheeting, and brain invasion.
In this section, we describe the tasks, problem formulations, datasets, and training details for the AI models.
Finally, we report the performance for each of the AI models from a technical evaluation.
\subsection{Tasks \& Problem Formulations}
\label{sec:problem}
In contrast to performing an image search by user-defined concepts \cite{cai2019human}, \xp~aims to screen the entire WSI to select regions of interest (ROIs) and then determine grades based on such ROIs. To achieve this, \xp~ includes six AI models and two detection rules, one for each criterion, to distill quantized information on histological features. Below we describe how \xp~uses these techniques to process a whole slide image (WSI).
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/xPath_diagram2.pdf}
\caption{Data processing pipeline of \xp: \one \xp~takes H\&E and Ki-67 whole slide images (WSIs) as input. \two For each WSI, \xp~uses a non-overlapping sliding window method to acquire (a) H\&E and (b) Ki-67 patches (size=$512\times512\times3$); Furthermore, each $512\times512\times3$ H\&E patch is processed with (c) resizing, (d) a sliding window ($240\times240\times3$), and (e) another sliding window ($96\times96\times3$) to fit the inputs of the down-stream models. \three \xp's AI backend takes over the pre-processed tiles and employs multiple AI models to detect WHO meningioma grading criteria from each patch. Given a $512\times512\times3$ H\&E patch, \xp~uses (f) a nuclei segmentation model to count the number of nuclei (for hypercellularity judgment), (g) a necrosis classification model to calculate necrosis probability, and (h) a sheeting classification model to calculate sheeting probability. \xp~further utilizes the nuclei counting results for (k) small cell recommendation, and (l) brain invasion recommendation. For a $240\times 240 \times 3$ tile, \xp~uses (i) a mitosis classification model to obtain mitosis probability. For a $96\times 96 \times 3$ tile, \xp~uses (j) a prominent nucleoli classification model to predict prominent nucleoli probability. For each $512\times512\times3$ Ki-67 patch, \xp~detects positive and negative nuclei and calculates the Ki-67 proliferation index; \four All AI-computed results (marked in the green boxes) are shown in \xp's frontend for pathologist users to oversee.}
\label{fig:backend_diagram}
\end{figure}
\subsubsection{Pre-process whole slide images} Figure \ref{fig:backend_diagram} shows the pipeline of how \xp~pre-processes a whole slide image. First, \xp~cuts a high-resolution ($\sim$$(10^5)^2$) whole slide image into $512\times 512 \times 3$-pixel\footnote{For the criteria of sheeting and Ki-67 proliferation index, the dimension of each pixel is 0.5$\mu$m. For other criteria, the dimension of each pixel is 0.25$\mu$m.} patches with non-overlapping sliding windows. A patch with an average pixel value greater than 240 is considered background and is discarded. Otherwise, \xp~further processes each patch separately to fit the tasks of detecting the eight criteria:
\begin{itemize}
\item For the criteria of Ki-67 proliferation index, hypercellularity, necrosis, small cell, and brain invasion, \xp~directly applies each AI model or detection rule to the patch;
\item For detecting sheeting patterns, \xp~resizes each patch to $224\times 224 \times 3$ to reduce the computation burden;
\item For counting the mitotic figures, \xp~further processes each patch using a $240\times 240$ sliding window with a step size of 120 to fit the input of the mitosis classification model;
\item For counting prominent nucleoli, \xp~cuts each patch with a $96\times 96$ non-overlapping sliding window to fit the prominent nucleoli classification model.
\end{itemize}
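The pre-processing steps above can be sketched as follows. This is a minimal sketch, not \xp's actual implementation: the function names are hypothetical, while the $512\times512$ patch size, the background threshold of 240, and the $240\times240$ window with step 120 for mitosis follow the description above.

```python
# Sketch of xPath-style WSI pre-processing (hypothetical helper functions;
# sizes and the 240-intensity background threshold follow the paper).
import numpy as np

def extract_patches(wsi: np.ndarray, patch: int = 512, bg_threshold: float = 240.0):
    """Cut a WSI (H x W x 3) into non-overlapping patches, skipping background."""
    h, w, _ = wsi.shape
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = wsi[y:y + patch, x:x + patch]
            if tile.mean() > bg_threshold:  # mostly white -> background, discard
                continue
            patches.append(((y, x), tile))
    return patches

def subtiles(tile: np.ndarray, size: int, step: int):
    """Sliding window over one patch, e.g. 240x240 with step 120 for mitosis."""
    h, w, _ = tile.shape
    return [tile[y:y + size, x:x + size]
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]
```

For the $96\times96$ prominent-nucleoli tiles, the same `subtiles` helper would be called with `size=96, step=96` (non-overlapping).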
\subsubsection{Detecting each criterion with machine learning techniques}
After \xp~pre-processes the whole slide image, it runs eight techniques to detect the eight criteria, covering three types of tasks: classification (to judge whether a given image is positive or not), semantic segmentation (to recognize and segment nuclei from the tissue background), and rule-based image recommendation\footnote{\xp~uses such an unsupervised approach to detect the small cell pattern because of a shortage of IRB-approved annotated data.} (to recommend candidates based on fixed rules). Below we describe the target of each task and its formulation. Specific thresholds were chosen to maximize the F1 score achieved by each model on the validation set.
\begin{itemize}
\item \textbf{Mitotic Count (Classification)}. \xp~uses an EfficientNet-b7 model \cite{tan2019efficientnet} to identify mitosis (Figure \ref{fig:backend_diagram}i). A $240\times 240$ tile with a prediction probability > 0.78 is counted as positive. \xp~further applies a non-maximum suppression technique to post-process the overlapping positive tiles.
The mitotic distribution of the slide is calculated by merging the results from each $512\times 512 \times 3$ H\&E patch.
\item \textbf{Ki-67 Proliferation Index (Semantic Segmentation)}. \xp~uses a pre-trained Cycle-GAN model \cite{ghahremani2021deepliif} to detect both Ki-67 positive and negative nuclei (Figure \ref{fig:backend_diagram}m).
Given a $512\times 512\times 3$ Ki-67 patch as the observation region, the Ki-67 proliferation index is calculated as \\ $\frac{\text{positive-count}}{\text{positive-count}+\text{negative-count}}\times 100\%$.
\item \textbf{Hypercellularity (Semantic Segmentation)}. \xp~uses a pre-trained deep neural network \cite{graham2019hover} to segment and count the nuclei in a $512\times 512\times 3$ H\&E patch (Figure \ref{fig:backend_diagram}f).
\item \textbf{Necrosis (Classification)}. \xp~uses an EfficientNet-b5 model to determine whether a $512\times 512\times 3$ H\&E patch contains the necrosis pattern (Figure \ref{fig:backend_diagram}g). A patch is considered necrosis-positive if its prediction probability is >0.74.
\item \textbf{Small Cell (Rule-Based Recommendation)}. \xp~applies fixed rules to recognize small cell patterns (Figure \ref{fig:backend_diagram}k): it selects the top-10 $512\times512\times3$ H\&E patches with the highest nuclei counts within each slide, and recommends those containing >125 nuclei/patch.
\item \textbf{Prominent Nucleoli (Classification)}. Similar to mitosis classification, \xp~uses an EfficientNet-b0 model to classify prominent nucleoli (Figure \ref{fig:backend_diagram}j). To avoid false-positive cases influencing the result, only tiles with a prediction probability >0.9 are counted as positive\footnote{We favor precision over recall in the prominent nucleoli classification because \one the performance of the prominent nucleoli classification model is not satisfactory (see Section 6.3); \two unlike mitosis, this criterion is justified by the presence of cell \textit{clusters} with prominent nucleoli, so missing one or a few detections would not significantly influence the overall result.}. \xp~counts positive tiles in each $512\times512\times3$ patch to calculate the distribution of prominent nucleoli.
\item \textbf{Sheeting (Classification)}. \xp~uses an EfficientNet-b1 model \cite{tan2019efficientnet} to classify whether a patch includes a sheeting pattern (Figure \ref{fig:backend_diagram}h). A patch is judged sheeting-positive if its prediction probability is >0.52.
\item \textbf{Brain Invasion (Classification)}. \xp~outlines the brain invasion pattern by classifying whether a given $512\times512\times3$ H\&E patch is tumor, brain, or background (Figure \ref{fig:backend_diagram}l). If tumor cells are invading normal brain tissue, this can be seen clearly in a heatmap visualization of tumor {\it vs.} brain areas. Because meningioma is a highly cellular tumor, \xp~classifies patches with the following rule: \one patches with >55 nuclei are counted as tumor; \two patches with [10,55] nuclei are counted as brain; \three otherwise, the patch is counted as background.
\end{itemize}
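Two of the simplest per-patch computations above, the Ki-67 proliferation index and the rule-based tumor/brain/background labeling for brain invasion, can be sketched as follows. The function names are hypothetical; the index formula and the nuclei-count thresholds follow the text.

```python
# Hedged sketch of two per-patch computations described above.
# Thresholds (>55 tumor, [10, 55] brain) come from the brain invasion rule.

def ki67_index(positive: int, negative: int) -> float:
    """Ki-67 proliferation index for one 512x512 Ki-67 patch, in percent."""
    total = positive + negative
    return 0.0 if total == 0 else 100.0 * positive / total

def brain_invasion_label(nuclei: int) -> str:
    """Rule-based label used for the tumor-vs-brain heatmap."""
    if nuclei > 55:
        return "tumor"
    if 10 <= nuclei <= 55:
        return "brain"
    return "background"
```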
\subsection{Dataset and Model Training}
We built an in-house dataset consisting of 30 WSIs from a local medical center for model training and evaluation. The WSIs were cropped into patches of the expected dimensions for each model. For the supervised tasks, we created ground-truth labels for all the patches by working with a board-certified neuropathologist. For the necrosis and sheeting patterns, we used a random-crop technique when generating patches from an annotated region. Note that patches in different sets were generated from different groups of regions. To train the models, we further randomly selected a subset of the training set as the validation set and used it to determine the optimal hyperparameters. Please refer to the supplementary material for specific training details.
\begin{table*}[t]
\footnotesize
\centering
\begin{tabular}{c | c c c}
\hline
Dataset & \begin{tabular}[x]{@{}c@{}}Dimension\\ (in pixels) \end{tabular} & \begin{tabular}[x]{@{}c@{}}\# of Samples\\ (Training) \end{tabular} & \begin{tabular}[x]{@{}c@{}}\# of Samples\\ (Testing) \end{tabular}\\
\hline \hline
Mitosis Nuclei & $240\times240\times3$ & 33,562 (1,925 positive, 31,637 negative) & 8,223 (336 positive, 7,887 negative)\\
\hline
Necrosis & $512\times512\times3$ & \begin{tabular}[x]{@{}c@{}}4,383 (from 190 regions) \\ (651 positive, 3,732 negative) \end{tabular} & \begin{tabular}[x]{@{}c@{}} 3,587 (from 162 regions) \\(770 positive, 2,817 negative) \end{tabular}\\
\hline
Prominent Nucleoli & $96\times96\times3$ & 15,042 (2,447 positive, 12,595 negative) & 3,753 (609 positive, 3,144 negative)\\
\hline
Sheeting & $224\times224\times3$ & \begin{tabular}[x]{@{}c@{}}3,660 (from 55 regions) \\ (1,605 positive, 2,055 negative) \end{tabular} & \begin{tabular}[x]{@{}c@{}}2,340 (from 45 regions)\\ (1,185 positive, 1,155 negative) \end{tabular}\\
\hline
\end{tabular}
\caption{Dataset descriptions for each task. The input patch dimension in pixels, the size of training/testing sets, and the distribution of positive/negative patches are provided. }
\label{tab_dataset}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/perf_copy3.pdf}
\caption{Classification performance for (a) mitosis, (b) necrosis, (c) prominent nucleoli, (d) sheeting. The solid lines in each sub-figure illustrate the Precision-Recall curves of each model. The blue crosses indicate the performance achieved by the models using the thresholds that maximized the F1 scores on the validation sets.}
\label{fig:performance}
\end{figure}
\subsection{Technical Evaluation}
\label{sec:tech_eval}
We report the performance of the AI models on the testing sets. Specifically, we tested the supervised models for recognizing the mitosis, necrosis, prominent nucleoli, and sheeting criteria, and report their Precision-Recall curves, as shown in Figure \ref{fig:performance}. In summary, \xp~achieved F1 scores of 0.755, 0.904, 0.763, and 0.946 in identifying the histological patterns of mitosis, necrosis, prominent nucleoli, and sheeting, respectively. The scores indicate the effectiveness of our models in implementing \xp. Moreover, for the cell-counting tasks in the hypercellularity and Ki-67 proliferation index criteria, we tested performance with 150 randomly-selected $512\times 512\times 3$ patches each and report the average error rate. The results show that the average error rates of nuclei counting (hypercellularity) and the Ki-67 proliferation index are 12.08\% and 29.36\%, respectively.
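The average error rate above is not given an explicit formula; one plausible reading is the mean relative error of the predicted counts against the ground-truth counts, sketched below. Both the metric choice and the function name are assumptions for illustration.

```python
# Assumed formulation of the "average error rate" for the counting tasks:
# mean relative error (in percent) over patches with a nonzero ground truth.

def mean_relative_error(pred, truth) -> float:
    errs = [abs(p - t) / t for p, t in zip(pred, truth) if t > 0]
    return 100.0 * sum(errs) / len(errs)
```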
Due to a lack of data at present, for the brain invasion and small cell patterns, rather than drawing a definitive conclusion, \xp~uses a rule-based, unsupervised approach to recommend areas for pathologists to examine. We planned to validate the performance on these two criteria later in the work sessions with pathologists; however, it was hard for the participants to distinguish small cell formation from inflammation areas without proper IHC tests. As such, \xp's AI performance on detecting small cell patterns was not validated. For brain invasion, most pathologists felt it was faster to examine it manually and did not rely on the AI's recommendations.
\section{Work Sessions with Pathologists}
The technical evaluation reported in the previous section validated the effectiveness of \xp's AI backend on the in-house dataset. However, it remains unanswered whether \xp~is beneficial to pathologist users in real clinical settings. Notably, many previous cases showed how easily AI models could break despite high accuracy on training/test data \cite{strickland2019ibm, kandula2019reappraising}. To address these concerns, we conducted work sessions with 12 medical professionals in pathology across three medical centers and studied their behavior when grading meningiomas using a traditional interface --- an open-source whole slide image viewer called ASAP\footnote{\url{https://computationalpathologygroup.github.io/ASAP/}. This tool was selected because it is open-source and has gained popularity in the digital pathology research domain \cite{litjens20181399}.} --- and \xp. In this study, we referred to the traditional interface as system 1 and \xp~as system 2 to avoid biasing participants. The main research questions are:
\textit{RQ1: Can \xp~ enable pathologists to achieve accurate diagnoses?}
One reason for utilizing AI in \xp~is that it can highlight ROIs of multiple histological patterns, freeing pathologists from examining the entire slide. However, it is still unclear whether introducing AI will have a positive or a negative effect on pathologists' diagnoses: on one hand, multiple previous works show that human-AI collaboration improves pathologists' performance \cite{wang2016deep, bulten2021artificial}; on the other hand, due to the existing limitations in AI models' accuracy, users risk generating wrong diagnoses if they over-rely on an imperfect AI \cite{bansal2019beyond, buccinca2021trust}. As such, we hypothesize that ---
\begin{itemize}
\item \textbf{[H1] Pathologists' grading decisions with \xp~will be as accurate as those with manual examinations.}
\end{itemize}
\textit{RQ2: Do pathologists work more efficiently with \xp?}
Another reason for using AI in \xp~is that it can improve the pathologists' throughput by alleviating their workload. However, it remains unanswered how AI will assist pathologists in \xp, given that previous work shows less-carefully-designed AI might incur extra burdens \cite{gu2020lessons}. As such, it is also necessary to find out whether pathologists can work efficiently with \xp's AI. We hypothesize that ---
\begin{itemize}
\item \textbf{[H2a] Pathologists will spend less time examining meningioma cases using \xp.}
\item \textbf{[H2b] Pathologists will perceive less effort using \xp.}
\end{itemize}
\textit{RQ3: Overall, does \xp~ add value to pathologists' existing workflow?}
Going beyond the influence brought by AI, we introduce two design ingredients --- joint-analyses of multiple criteria \textit{and} explanation by hierarchically traceable evidence --- to fulfill the three system requirements (\ie comprehensiveness, explainability, integrability). In this study, we investigate whether such designs will add value to pathologists' existing workflow. Specifically, we hypothesize that:
\begin{itemize}
\item \textbf{[H3a] \xp~will improve comprehensiveness with the joint-analyses of multiple criteria.}
\item \textbf{[H3b] \xp~will improve explainability with explanation by hierarchically traceable evidence.}
\item \textbf{[H3c] \xp~will improve integrability with the top-down human-AI collaboration workflow.}
\end{itemize}
\subsection{Participants}
We recruited 12 medical professionals in pathology across three medical centers. Our participants' experience ranged from two to ten years ($\mu$=4.38, $\sigma$=2.16), including two attendings (A), two fellows (F), seven senior residents (SR, $\geq$ PGY-3), and one junior resident (JR, $\leq$ PGY-2)\footnote{Please see the supplementary material for the participants' demographic information.}. All participants had received training in examining meningiomas prior to the work session. Regarding familiarity with digital pathology tools, pathologists had more experience with glass slides than with digital pathology interfaces: six of them used ImageScope\footnote{\url{https://www.leicabiosystems.com/digital-pathology/manage/aperio-imagescope/}} occasionally for training or reviewing remote cases.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/cookie-cut.pdf}
\caption{We used the `virtual cookie cut' technique to generate the test cases. Specifically, we first collected (a) pairs of H\&E (in x400) and Ki-67 (in x200) WSIs from a local medical center. Then, we generated `virtual cuts' by selecting (b) 30,000$\times$30,000-pixel regions in H\&E WSIs, and (c) 15,000$\times$15,000-pixel regions from the same positions as their H\&E counterparts. (d) Each virtual case consists of one mandatory H\&E slide with two nodes and one optional Ki-67 slide with two corresponding ones.}
\label{fig:cookie-cut}
\end{figure}
\subsection{Test Data}
We collected 18 IRB-approved meningioma WSIs\footnote{\dots which include eleven H\&E WSIs (scanned in x400), and seven Ki-67 WSIs (scanned in x200).} from the same medical center to generate the test cases. Under normal conditions, each patient's case consists of more than 10 WSIs, and a resident pathologist of average experience typically needs about one hour to finish examining a case of average difficulty (\ie the criteria found in the case do not lie on the grading borderlines). As such, we generated nine `virtual cases' with the `virtual cookie cut' technique (see Figure \ref{fig:cookie-cut}) to fit the task of grading meningiomas\footnote{We would like to point out that all cases used in this study are meningioma-positive. Pathologists' task was to \textit{grade} meningiomas instead of judging each case as positive {\it vs.} negative.} into the hour-long work sessions. Each virtual case consists of a mandatory H\&E slide (in x400) and an optional Ki-67 slide (in x200). Each H\&E slide contains two nodes (each sized 30,000$\times$30,000 pixels, relatively smaller than real cases), while each Ki-67 slide, if available, has two corresponding Ki-67 nodes (each sized 15,000$\times$15,000 pixels) cut from the same positions as their H\&E counterparts. In total, the nine virtual cases comprise nine H\&E slides and six Ki-67 slides, \ie three virtual cases do not come with a corresponding Ki-67 slide. As for the ground-truth grading diagnoses, 2/9 are grade I, 5/9 are grade II, and 2/9 are grade III. We selected three of the grade II cases for tutorial purposes in the work sessions, leaving the test set with two cases for each grade.
\subsection{Task \& Procedure}
All sessions were conducted online because of the COVID-19 pandemic. We first introduced the mission of the project and provided a detailed walkthrough of the traditional interface and \xp~with three pairs of H\&E and Ki-67 slides as examples. Participants used Microsoft Remote Desktop to interact with both systems, which ran on a remote server. After that, we ran a testing session in which participants graded one virtual case with the traditional interface and one to four other virtual cases with \xp, with the time cost logged\footnote{Variation in the number of cases in xPath was caused by differences in the participants' ability. Details of the arrangement are reported in the supplementary material.}. The order was counterbalanced across participants. For each case, the time was counted from when participants first clicked the WSI case until they reached the grading diagnosis. After participants finished each case, we asked them to report their grading diagnosis as well as their findings through a questionnaire adapted from the College of American Pathologists (CAP) cancer protocol template\footnote{\url{https://documents.cap.org/protocols/cp-cns-18protocol-4000.pdf}}. In this study, we did not compare \xp~with traditional optical microscopes because of the difficulty of instrumentation and observation in the remote setting. After participants had examined all the cases, we conducted a semi-structured interview to elicit their responses on \xp's perceived effort and added value. The average duration of each work session was $\sim$70 minutes. Each participant was paid a \$100 gift card as compensation for their time.
\subsection{Measurements}
In this study, we collected participants' grading decisions from the CAP questionnaire and analyzed the time log. We also asked them to fill in a post-study questionnaire (see Table \ref{tab:study_quant}) with seven-point Likert questions following \cite{cai2019human, jordan1996usability, hart1988development}. We tested our hypotheses via the following measurements:
For \textbf{H1}, we compared the gradings reported by participants and the `ground-truth' gradings provided by a board-certified neuropathologist. We measured the accuracy of both systems by calculating the error rates of gradings.
For \textbf{H2a}, we calculated the average time participants spent on each case using \xp~and the traditional interface. For \textbf{H2b}, we asked them to give both systems ratings to the effort needed for grading (Table \ref{tab:study_quant}, W1), and the effect of the system to reduce the workload (Table \ref{tab:study_quant}, W2) in the post-study questionnaire.
{\bf H3a-c} were evaluated by the post-study questionnaire. For \textbf{H3a}, we asked participants to rate the comprehensiveness of \xp~and the traditional interface (Table \ref{tab:study_quant}, C1). For \textbf{H3b}, we asked them to rate the explainability of \xp~only, since the traditional interface did not provide AI detections (Table \ref{tab:study_quant}, E1). For \textbf{H3c}, we asked participants to rate the integrability of both systems (Table \ref{tab:study_quant}, I1).
Apart from the hypotheses, we further investigated whether the participants trusted \xp~by asking them the following two questions: \one \textit{How capable is the system at helping grade meningiomas?} (Table \ref{tab:study_quant}, T1), \two \textit{How confident do you feel about the accuracy of your diagnoses using the system?} (Table \ref{tab:study_quant}, T2). We also studied whether the participants would like to use both systems in the future (Table \ref{tab:study_quant}, F1), and let the participants rate their overall preference of system 1 \textit{vs.} system 2 (Table \ref{tab:study_quant}, F2).
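The paired ratings collected above are later compared with p-values; the text does not name the statistical test used, so purely as an illustration, a minimal exact two-sided sign test on paired Likert ratings could look like the following (a more powerful common alternative is the Wilcoxon signed-rank test, e.g. `scipy.stats.wilcoxon`).

```python
# Exact two-sided sign test on paired ratings (illustration only; this is
# an assumption, not necessarily the test used in the study). Ties ignored.
from math import comb

def sign_test(xs, ys) -> float:
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    n = len(diffs)
    k = min(sum(d > 0 for d in diffs), sum(d < 0 for d in diffs))
    # Two-sided p-value: 2 * P(Binomial(n, 0.5) <= k), capped at 1.
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
    return min(1.0, p)
```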
\section{Results \& Findings}
In this section, we first discuss our initial research questions and hypotheses. Then, we summarize the recurring themes that we found in the work sessions.
\subsection{RQ1: Can \xp~ enable pathologists to achieve accurate diagnoses?}
We summarized the CAP questionnaire responses from our participants and collected 12 grading decisions made with the traditional interface and 20 made with \xp. We then followed previous works on digital histology \cite{tschandl2020human, steiner2020evaluation} and compared pathologists' responses against the ground-truth diagnoses established by a board-certified neuropathologist. In summary, with the traditional interface, participants gave correct gradings for 7/12 cases, lower-than-ground-truth gradings for 4/12 cases, and a higher-than-ground-truth grading for 1/12 cases. In comparison, using \xp, pathologists gave correct gradings for 17/20 cases and lower-than-ground-truth gradings in 3/20 diagnoses. Upon further analysis, we found that all three errors participants made with \xp~were caused by over-reliance on the AI. In these cases, participants spent the majority of their effort examining the evidence reported by \xp~and missed the false-negative features that \xp~failed to detect --- \quo{It's just that I got caught up in looking at the boxes and I would forget that I should look at the entire case myself.}{4}
In sum, based on the data collected by the study, we report that pathologists could make more accurate grading decisions with \xp~in comparison to the traditional interface (\textbf{H1}).
\subsection{RQ2: Do pathologists work more efficiently with \xp?}
Contrary to our hypothesis (\textbf{H2a}), participants spent an average of 7min13s examining each case using \xp, which is 1min17s longer than with the traditional interface (ASAP). Our study suggests that pathologists tended to ($p$=0.050) invest more time in \xp~than in the traditional interface. We believe this is partly because \xp~brings pathologists the extra workload of comprehending and overseeing the AI findings. In the traditional interface, our participants shared a similar workflow of examining the WSI --- they first scanned the entire WSI at low magnification, then prioritized studying one criterion (such as brain invasion or the mitotic count) to ascertain a probable diagnosis as quickly as possible. They also checked Ki-67 slides to assist the diagnosis. In this process, they collected evidence that accounted for a higher grade and memorized it. Once they had acquired enough evidence, they would stop and make a grading decision. When using \xp, participants did not abandon this standard workflow. Rather, on top of it, they performed the differential diagnosis based on AI's findings --- they clicked through each piece of evidence in \xp, justified it by registering it onto the WSI, and at times overrode the AI by clicking the approve/decline/declare-uncertain buttons. These extra interaction steps prolonged participants' workflow ---
\quo{System 2 (\xp) actually makes it longer because some of the images have sort of competing opinions --- whether this is mitosis or not \dots So I'd better take a closer look at what the machine suggests.}{3}
Regarding the perceived effort (\textbf{H2b}), participants reported significantly less effort (as shown in Table \ref{tab:study_quant}, W1, \xp: $\mu$=0.91, ASAP: $\mu$=3.67, $p$=0.002) and a stronger effect on reducing the workload (Table \ref{tab:study_quant}, W2, \xp: $\mu$=5.83, ASAP: $\mu$=2.17, $p$=0.002) while using \xp. Pathologists mentioned that automating the process of finding small-scaled histological features, especially mitosis, would save their time and effort ---
\quo{I spend a lot more time crawling around the slide in the high-power looking for mitosis (for system 1), which you don't have to do as much in system 2 (\xp).}{8}
\begin{table}[t]
\scalebox{0.84}{
\begin{tabular}{l|cc}
\hline
\textbf{Question} & \textbf{ASAP} & \textbf{\xp}\\
\hline\hline
C1: Rate the comprehensiveness of the system. & 2.83(1.27) & 5.75(0.75)\\
E1: Rate the explainability of the system. & N/A & 5.58(0.90)\\
I1: Rate the integrability of the system. & 4.17(1.70) & 5.91(1.08)\\
W1: Rate the effort needed to grade meningiomas when using the system. & 3.67(1.37) & 0.91(0.90)\\
W2: Rate the effect of the system on your workload to reach a diagnosis. & 2.17(1.40)& 5.83(1.03)\\
T1: How capable is the system at helping grade meningiomas? & N/A & 5.83(0.94) \\
T2: How confident do you feel about the accuracy of your diagnoses using the system? & N/A & 6.00 (0.95)\\
F1: If approved by the FDA, I would like to use this system in the future. & 3.75(1.76) & 6.42(0.79)\\
F2: Overall preference & \multicolumn{2}{c}{6.75(0.45)} \\
\hline
\end{tabular}
}
\caption{Participants' responses on the quantitative measurements of a traditional interface (ASAP) and \xp, with seven-point Likert scores. For the rating questions (C1, E1, I1, W1, W2), 1=lowest and 7=highest. For questions T1, T2, and F1, 1=very strongly disagree, 2=strongly disagree, 3=slightly disagree, 4=neutral, \dots, and 7=very strongly agree. For question F2, 1=totally prefer system 1 over system 2, 2=much more prefer system 1 over system 2, 3=slightly prefer system 1 over system 2, 4=neutral, \dots, and 7=very strongly prefer system 2 over system 1. Note that for question W1, a higher rating indicates users perceived more effort while using the system. Questions E1, T1, and T2 are not applicable to ASAP, since it does not provide AI detections to users.
}
\label{tab:study_quant}
\end{table}
\subsection{RQ3: Overall, does \xp~ add value to pathologists' existing workflow?}
For the comprehensiveness dimension (\textbf{H3a}), \xp~received a significantly higher rating than the traditional interface (Table \ref{tab:study_quant}, C1, \xp: $\mu$=5.75, ASAP: $\mu$=2.83, $p$=0.001). Specifically, pathologists responded positively that \xp~ provides sufficient information (\ie criteria and evidence) to assist the diagnosis ---
\quo{\dots it (\xp) kind of gives you a step-wise checklist to make sure that it's the correct diagnosis, and also provides you what is most likely a diagnosis.}{11}
For the explainability dimension (\textbf{H3b}), \xp~obtained an average rating of 5.58/7 (Table \ref{tab:study_quant}, E1). In general, pathologists could understand the logical relationship between the evidence and the suggested grading (global explainability). However, some (P1, P5) found it hard to interpret the saliency map, especially in cases where cues of attention were scattered across the entire piece of evidence (see Figure \ref{fig:fail_evidence}a) ---
\quo{For the heatmap \dots it is also a little bit confusing \dots it takes some time getting used to it and there are some false positives.}{1}
For the integrability dimension (\textbf{H3c}), pathologists gave overall higher scores for \xp~(Table \ref{tab:study_quant}, I1, \xp: $\mu$=5.91, ASAP: $\mu$=4.17, $p$=0.006). Specifically, pathologists were able to perform diagnoses based on the \xp's AI findings similar to their workflow of collaborating with human trainees ---
\quo{It's kind of like a first-year resident marking everything.}{1}
\quo{I'm a cytology fellow, and cases are pre-screened for us. And essentially this is doing similarly.}{4}
For the trust dimension, participants responded positively to \xp's capability of helping to grade meningiomas (T1: $\mu$=5.83) and the accuracy of the diagnoses while using the system (T2: $\mu$=6.00). However, some (P3, P4, P5) pointed out that they would have spent more time examining the entire WSI if more time had been granted ---
\quo{I just went to the areas that the system suggested. If I had more time, I would like to just go to all the areas, just to feel more comfortable that I'm not missing anything.}{5}
Last but not least, participants were more likely to use \xp~than the traditional interface (Table \ref{tab:study_quant}, F1, \xp: $\mu$=6.42, ASAP: $\mu$=3.75, $p$=0.002). Overall, 9/12 of the participants ``totally'' preferred \xp~over the traditional interface, while 3/12 ``much more'' preferred \xp~(Table \ref{tab:study_quant}, F2).
However, it is noteworthy that this study is based on pathologists' examination of WSIs, while pathologists use the optical microscope in their daily practice. During the study, 7/12 of our participants expressed that they preferred using an optical microscope with the glass slide \textit{vs.} a digital interface with the WSI --- \textit{``\dots it's much faster (in the microscope) than moving on the computer \dots we would prefer to look at a real slide instead of using a scan picture.''} (P2). As such, further comparison between \xp~and the optical microscope is left for future work.
\subsection{Recurring Themes}
Based on our observations of pathologists' using \xp~ and the interview with them, we discuss the following recurring themes that characterize how pathologists interacted with \xp.
\subsubsection{How pathologists use \xp's multiple criteria: prioritizing one, referring to others on-demand}
We noted that pathologists tended to focus on a specific criterion. If that criterion alone did not meet the bar of a diagnosis for a higher grade, pathologists would use \xp~to browse other criteria, looking for evidence of a differential diagnosis, until they identified sufficient evidence to support their hypothesis.
\quo{I'm done. Because with the mitosis that high, you're done, you don't have to go through that stuff (other criteria).}{12}
However, some pathologists would also like to see other criteria and examine the slide comprehensively ---
\quo{With the mitosis rate that high, you don't actually need it (Ki-67) for the diagnosis. But I will have a look at it.}{1}
\quo{I will just look at (other criteria) because I don't want to grade by one single criterion (mitosis).}{3}
Such a relationship between criteria is analogous to `focus + context' \cite{card1999readings} in information visualization --- different pathologists might focus on a few different criteria. Still, the other criteria are also important to serve as context at their disposal to support an existing diagnosis or find an alternative.
\subsubsection{\xp's top-down workflow with hierarchical explainable evidence enables pathologists to navigate between high-level AI results \& low-level WSI details}
One of the main reasons limiting the throughput of histological diagnosis is that criteria like the mitotic count involve very small-scale histological features.
As a result, pathologists have to switch to high-power magnification to examine such small features in detail. Given the high resolution of the WSI, it is possible to `get lost' in the narrow scope of a high-power field (HPF), resulting in a time-consuming process to go through the entire WSI. With \xp, pathologists found its hierarchical design and the provision of mid-level evidence (\eg AI's ROI samples) most helpful for diagnosis, as it connects high-level findings and low-level details ---
\quo{It (\xp) finds the best area to look at. \dots You can jump there, and if it is a grade III, then it is a grade III. You don't have to look at other areas.}{6}
Furthermore, pathologists appreciated that \xp~provided heatmap visualizations to assist them in navigating the WSI out of the ROI samples ---
\quo{The heatmap is very useful to assist pathologists to go through the entire slide \dots which saves time and makes sure not missing anything.}{12}
\subsubsection{\xp's explainable design helps pathologists see what AI is doing} We found \xp's evidence-based justification of AI findings assisted pathologists to relate AI-computed results with evidence, which added explainability ---
\quo{System 2 (\xp) does find some evidence and assigns it to a particular observation that is related to the grading, so that it helps with explainability.}{3}
In \xp, the AI might make two kinds of mistakes that may incur potential bias: \one false positive, where AI mistakenly identifies negative areas as positive for a given criterion; \two false negative, where the AI misses positive areas corresponding to a criterion. We observed a number of false-positive detections that confused some participants. We also found out that the participants would rather deal with more false positives than false negatives so that signs of more severe grade would not be missed ---
\quo{It's better that it picks them up and gives me the opportunity to decline it.}{10}
Furthermore, although some participants found the saliency map hard to interpret in some cases, the others used it to locate the cells that led to AI's grading ---
\quo{There were a couple of instances where it was a bit more difficult to figure out what it (the saliency map) was trying to point out to me. But for the majority of the time, I could tell which area they (the saliency maps) were trying to show me.}{9}
Further, with the aid of the saliency map, participants could understand AI's limitations and what might have misled the AI ---
\quo{You can see what this system counted as mitosis \dots the heatmap (the saliency map) helps to understand why AI chose this or that area, for example, I think AI chose neutrophils as mitotic figures in some areas.}{6}
\subsubsection{How pathologists oversee \xp: incrementing human findings onto justified AI results}
Given the explainable evidence provided by \xp, it was straightforward for pathologists to recognize and modify AI results when there was a disagreement. Specifically, pathologists could oversee AI by clicking on the approve/decline/declare-uncertain buttons or by modifying AI results directly on the criteria panel. If the overseen AI result was sufficient to conclude a grading decision (\eg seven mitoses in 10 HPFs, enough to make the case a grade II (>4), but still far from grade III (<<20)), they would stop examining and report the grading. However, if the overseen AI result appeared to be marginal (\eg 19 mitoses in 10 HPFs, which is only one mitosis away from upgrading the case to a grade III), pathologists would continue to search beyond the AI findings and add their own findings to the grading ---
\quo{I count a total of number of five \dots adding previous 19 makes it 24 \dots this is grade III.}{2}
What's more, for the cases where \xp~did not actively report positive detections, pathologists would examine the WSI manually as in a traditional interface --- that is, pathologists would use their experience to evaluate the case further and make a grading decision.
\section{Limitations, Design Recommendations \& Future Work}
\subsection{Limitations \& Future Work}
In this section, we discuss the limitations of this work and outline possible directions for future work.
\subsubsection{Increasing the scope of study}
One limitation of this work is the materials used. Specifically, we used data from a single institute to train and test the AI of \xp. This leaves open the question of how well \xp's AI performs on WSIs from other institutes, which might use a different staining process or a different type of scanner, causing a shift in the domain/distribution (see Figure \ref{fig:medical_center}).
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figures/medical_center.pdf}
\caption{Mitoses from meningiomas (in x400), scanned by (a) the medical center in this study and (b) a different medical center. The difference in appearance is caused by the difference in processing procedures and scanners used.}
\label{fig:medical_center}
\end{figure}
Another limitation stems from the whole slide images themselves. During the work sessions, more than half of the participants expressed that they preferred using an optical microscope with the glass slide \textit{vs.} a digital interface with the WSI. Remarkably, participants found it challenging to navigate a digital WSI, a difficulty that has also been described and discussed by Ruddle \etal~\cite{ruddle2016design}. We believe such a difficulty in navigation is partly related to the pathologists' unfamiliarity with the traditional (digital) interface. As such, we suggest that future work also compare against the optical microscope in addition to the traditional digital interface; with more data points collected, \xp's performance and generalizability can be validated more comprehensively.
\subsubsection{Enabling adjusting the thresholds in the frontend}
Currently, \xp~does not support adjusting the threshold directly on the frontend. In our user study, one participant mentioned that different pathologists might have different thresholds for judging whether a piece of evidence is positive ---
\quo{I only call the characteristic mitoses \dots other pathologists might have different thresholds.}{7}
Further, dealing with false positives and false negatives is another issue with the fixed-threshold scheme. From our study, we found out that pathologists would prefer high-sensitivity results that include some false positives rather than high-specificity results that have false negatives ---
\quo{I could have more faith if it could find all the candidates. And I could pretty easily click through and accept/reject, and know that it wasn't missing anything.}{8}
Therefore, the system should by default be designed to err on the side of caution, \eg showing a wide range of ROIs even though some are inevitably false positives. Pathologists are fast at examining ROIs (and ruling out false positives), whereas missing important features would come at a much higher cost (\eg delayed or missed treatment).
\subsubsection{Improving the quality and granularity of explanations}
In the study, we found a number of cases where the saliency maps failed to explain the detection results and confused the users. As shown in Figure \ref{fig:fail_evidence}, the failed saliency maps showed either scattered attention across the evidence (Figure \ref{fig:fail_evidence}a) or concentrated attention at a wrong place (Figure \ref{fig:fail_evidence}b). Such errors arise because the attention is inferred from patch-wise annotations rather than localized ones, since localized annotations of positive findings are extremely labor-intensive to obtain. The quality of the saliency maps can potentially be improved by increasing the training data for higher model generalization and by the advent of methodologies for unsupervised attention reasoning \cite{arrieta2020explainable}.
Besides, knowing the location of a potential positive finding can be insufficient for pathologists. Since the pathological imaging of tissues is merely an approximation of the real condition, there can often be uncertainty in diagnosis even for well-trained pathologists. As such, explaining why an area contains positive findings, \eg that a highlighted cell is detected as a mitosis because its boundary is jagged, can be critical for future systems. Such causality enables a system to imitate how pathologists discuss cases with their peers, which can improve the collaboration between a system and its users. Moreover, future work should also employ more formal measurements (\eg the System Causability Scale \cite{holzinger2020measuring}) to evaluate the quality of explanations.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/fail_evidence.pdf}
\caption{Examples of failed explanations, where the saliency map shows (a) scattered attention across the image or (b) misleading hot spots. The green arrows point to the location of a mitotic figure marked by a human pathologist.}
\label{fig:fail_evidence}
\end{figure}
\subsection{Design Recommendations for Physician-AI Collaborative Systems}
\subsubsection{Showing the logical relationship amongst multiple types of evidence at the top level} Carcinoma grading usually involves examining multiple criteria from multiple data sources (\eg H\&E slides, IHC slides, FISH (fluorescence in situ hybridization) test, patient's health record). As such, one-size-fits-all AI models are not sufficient. In practice, multiple AI models are employed to locate different types of disease markers. To organize these AI-computed results, medical AI systems (such as \xp) should seek to present the logical relationship that connects these multiple criteria/features/sources of information and update final results dynamically given any pathologists' input (\eg acceptance or rejection of how AI computes each criterion). Such a design is more likely to match the clinical practice of pathologists and cost minimal extra learning when users onboard a system. It is noteworthy that the `multiple criteria' design is not limited to this research but can also generalize to other tasks in digital pathology, such as breast cancer grading \cite{rakha2008prognostic}.
\subsubsection{Making AI's finding traceable with hierarchically organized evidence}
There is a pressing need to address the transparency of black-box models and the traceability of explanation evidence in high-stakes tasks (\eg medical diagnosis). As such, AI systems should provide local explainability where each piece of low-level evidence is traceable. In \xp, we employ the design of hierarchically traceable evidence for each criterion. Such an organization forms an `evidence chain' where each piece of direct evidence is accountable for the high-level system output. Similar intuitions can also be applied to medical applications in a more general context, such as cancer staging \cite{lydiatt2017head} and cancer scoring \cite{humphrey2004gleason}, where evidence is accumulated to arrive at a diagnosis.
\subsubsection{Employing a focus+context design towards presenting and/or interacting with multiple criteria} Medical diagnosis involves accumulating evidence from multiple criteria --- our study observed that pathologists started by focusing on one criterion while continuing to examine the others for a differential diagnosis. Thus, medical AI systems should make multiple criteria available, and support the navigation of such criteria following a `focus+context' design \cite{card1999readings}, which is commonly used in information visualization. The major design goal is to strike a balance between juxtaposing the focused criterion with sufficient contextual criteria and overwhelming the pathologists with too much information. It is also possible for a system to, based on a patient's prior history and the pre-processing of their data, recommend that a pathologist start by focusing on specific criteria before examining some others as context.
\section{Conclusion}
In this work, we identify three gaps of comprehensiveness, explainability, and integrability that prevent AI from being adopted in a clinical setting for pathologists. To close these gaps, we implement \xp~with two key design ingredients: \one joint-analyses of multiple criteria and \two explanation by hierarchically traceable evidence. To validate \xp, we conducted work sessions with twelve medical professionals in pathology across three medical centers. Our findings suggest that \xp~can leverage AI to reduce pathologists’ cognitive workload for meningioma grading. Meanwhile, pathologists learned the tool and benefited from the design (\eg working with multiple criteria in parallel while drilling down to more evidence for individual criteria), and made fewer mistakes. By observing pathologists’ use of \xp~with quantitative and qualitative feedback, we shed light on how pathologists collaborate with AI and summarize design recommendations. We believe this work can help future research on tool design for physician-AI collaborative diagnosis.
\section{Introduction}
Recent years have seen significant interest in the study of theoretical and experimental aspects of optomechanical systems~\cite{aspelmeyer2014cavity}. In particular, the reported achievements of ground-state cooling~\cite{chan2011laser,teufel2011sideband,delic2020cooling} as well as the entanglement of macroscopic systems~\cite{lee2011entangling,riedinger2018remote,ockeloen2018stabilized, thomas2020entanglement}, have significantly improved the prospects for using optomechanical systems as sensors~\cite{mason2019continuous,rademacher2020quantum, yu2020quantum} and for tests of fundamental physics~\cite{bose1999scheme,marshall2003towards, kleckner2008creating,Derakhshani2016,bose2017spin,marletto2017gravitationally}.
The central feature of optomechanical systems is the radiation-pressure interaction between light and matter, which allows for exquisite experimental readout and control. In most cavity-based experiments, the radiation pressure from the input laser couples the photon number to the center-of-mass motion of the mechanical element. This interaction is fundamentally \textit{nonlinear}~\cite{law1995interaction},~i.e., the interaction Hamiltonian is a product of three field operators and the resulting equations of motion for the optical and mechanical modes cannot be written as a linear system of equations.
The dynamics of the nonlinear optomechanical Hamiltonian with a constant light--matter coupling were first solved in two pioneering theoretical studies~\cite{mancini1997ponderomotive, bose1997preparation}. The solutions inspired numerous proposals for tests of fundamental physics~\cite{bose1999scheme,marshall2003towards, kleckner2008creating}, sensing schemes~\cite{qvarfort2018gravimetry, armata2017quantum, schneiter2020optimal, qvarfort2021optimal}, and studies of the generation of non-Gaussian states~\cite{qvarfort2019enhanced, qvarfort2020time}. In many cases, however, the nonlinear optomechanical Hamiltonian is linearized around a strong coherent input state~\cite{aspelmeyer2014cavity}, which sacrifices the nonlinearity and the ability to generate non-Gaussian states for a more tractable mathematical treatment~\cite{serafini2017quantum}. Indeed, most experiments to date are well described by the linearized optomechanical Hamiltonian~\cite{aspelmeyer2014cavity}, but as an increasing number of theoretical~\cite{ludwig2008optomechanical, nunnenkamp2011single,rabl2011photon,vanner2011selective} and experimental works~\cite{brawley2016nonlinear,leijssen2017nonlinear} enable the study and observation of nonlinear phenomena, it becomes imperative to develop theoretical tools that accurately describe experiments in the nonlinear regime.
An outstanding challenge involves a general and analytical treatment of optical decoherence in a nonlinear optomechanical system. Typically, open dynamics are modeled by solving either a master equation or the quantum Langevin equation~\cite{gardiner2004quantum}. Since the latter can be integrated into the input-output theory framework, it has long been the main focus of the community.
In contrast, modeling optical decay through a master equation has been generally challenging because the optical dissipation terms do not commute with the optomechanical interaction term. A perturbative solution for slowly decaying systems was taken as a first step by~\citet{mancini1997ponderomotive}. Mechanical loss, on the other hand, has been exactly modeled in terms of the Lindblad equation for phonon dissipation~\cite{bose1997preparation} and Brownian motion~\cite{bassi2005towards}. In addition, a treatment of both optical and mechanical losses through a damping-basis approach~\cite{briegel1993quantum} has also been put forward~\cite{torres2019optomechanical}.
\begin{figure}[b!]
\includegraphics[width =0.3\textwidth, trim = 0mm 0mm 0mm 0mm]{Cavity.pdf}
\caption{Optomechanical setup where the optical mode $\hat a$ is coupled to the mechanical position $\hat x_{\rm{m}}$ via the interaction term $\hat a^\dag \hat a \, \hat x_{\rm{m}}$. Imperfections cause the photons to leak from the cavity at a rate $\kappa_{\rm{c}}$, which we represent as a rescaled number with respect to the mechanical frequency $\omega_{\rm{m}}$ as $\tilde{\kappa}_{\rm{c}} = \kappa_{\rm{c}}/\omega_{\rm{m}}$.}
\label{fig:cavity}
\end{figure}
In this work we derive an expression for the nonunitary evolution of a nonlinear optomechanical system by combining a previously established Lie-algebra solution~\cite{wei1963lie} for the unitary dynamics~\cite{bruschi2018mechano,qvarfort2019enhanced} with a vectorization of the Lindblad equation. We also make use of the fact that the nonunitary evolution can be partitioned into separate products, in a manner analogous to the transformation into the interaction picture. To demonstrate how our solution to the Lindblad equation may be applied, we consider the preparation of optical cat states via the nonlinear optomechanical interaction in the presence of optical loss. Our results allow us to bound the optical decay rate given a desired fidelity with which we wish to prepare the states.
The work is structured as follows. In Sec.~\ref{sec:unitary:dynamics}, we review the known unitary solutions for a nonlinear optomechanical system. Following that, in Sec.~\ref{sec:tools} we introduce the Lindblad equation along with the two methods we use for solving it: vectorization and partitioning of the time evolution. We proceed to apply these methods in Sec.~\ref{sec:optical:decoherence:nonlinear:optomechanics} to a nonlinear optomechanical system with optical decoherence and consider example applications in Sec.~\ref{sec:examples}. We conclude our work with a summary and outlook in Sec.~\ref{sec:conclusions}.
\section{Unitary dynamics of the nonlinear optomechanical Hamiltonian} \label{sec:unitary:dynamics}
We begin by considering a single mode of an optical field that is nonlinearly coupled to the center-of-mass mode of a mechanical element (see Fig.~\ref{fig:cavity}). The full Hamiltonian for the cavity mode and mechanical mode reads
\begin{align} \label{eq:Hamiltonian}
\hat H(t) &= \hbar \, \omega_{\rm{c}} \, \hat a ^\dag \hat a + \hbar \, \omega_{\rm{m}} \, \hat b^\dag \hat b - \hbar \, g(t) \, \hat a^\dag \hat a\, \bigl( \hat b^\dag + \hat b \bigr),
\end{align}
where $\omega_{\rm{c}}$ and $\omega_{\rm{m}}$ are the oscillation frequencies of the optical and mechanical modes respectively, and $g(t)$ denotes the (possibly time-dependent) light--matter coupling strength. The modes are defined by the annihilation and creation operators $\hat a, \hat a^\dag$ and $\hat b, \hat b^\dag$, which satisfy the canonical commutator relations $[\hat a , \hat a^\dag ] = [\hat b, \hat b^\dag] = 1$.
For simplicity of notation, we proceed to rescale all frequencies by $\omega_{\rm{m}}$, which is equivalent to defining a dimensionless time parameter $\tau = t\,\omega_{\rm{m}}$. With this choice of notation, the optomechanical coupling can be written as $\tilde{g}(\tau) = g(\tau/\omega_{\mathrm{m}})/\omega_{\rm{m}}$. We redefine the Hamiltonian $\hat H(t) \rightarrow \hat H(\tau)$ in these new dimensionless units as
\begin{align} \label{eq:Hamiltonian:rescaled}
\hat{H}(\tau) &= \hbar \, \frac{\omega_{\rm{c}}}{\omega_{\mathrm{m}}} \, \hat a ^\dag \hat a + \hbar \, \hat b^\dag \hat b - \hbar \, \tilde{g}(\tau) \, \hat a^\dag \hat a\, \bigl( \hat b^\dag + \hat b \bigr).
\end{align}
The time evolution operator that corresponds to~\eqref{eq:Hamiltonian:rescaled} is given by
\begin{align} \label{eq:time:evolution}
\hat U(\tau) = \overleftarrow{\mathcal{T}} \mathrm{exp} \left[ - \frac{i}{\hbar } \int^\tau_0 \mathrm{d}\tau' \, \hat{H}(\tau') \right],
\end{align}
where $\overleftarrow{\mathcal{T}}$ denotes the time ordering of the exponential.
A solution of~\eqref{eq:time:evolution} for a constant optomechanical coupling was derived by~\citet{bose1997preparation} and~\citet{mancini1997ponderomotive}. When the optomechanical coupling is time dependent, however, the solutions become more complex. It has been previously shown that a Lie-algebra method can be used to obtain solutions for general time dependence~\cite{bruschi2018mechano, qvarfort2019enhanced}. Here we summarize the results.
By identifying a set of operators that is closed under commutation, the time-evolution operator $\hat U(\tau)$ in~\eqref{eq:time:evolution} can be written as
\begin{align} \label{eq:full:time:evolution}
\hat U(\tau)&= e^{- i \hat N_b \tau} e^{-i F_{a}\,\hat{N}^2_a} e^{-i F_{+} \hat{N}_a\,\hat{B}_+}
e^{-i F_{_-}\,\hat{N}_a\,\hat{B}_-} ,
\end{align}
where we have transformed into a frame that rotates with the free optical evolution $\mathrm{exp}\bigl[ - i \hat a^\dag \hat a \, \tau \, \omega_{\mathrm{c}}/\omega_{\mathrm{m}} \bigr]$, and we have defined the following Hermitian operators: $\hat{N}_a := \hat a^\dagger \hat a$, $\hat{N}_b := \hat b^\dagger \hat b$, $\hat{B}_+ := \hat b^\dagger +\hat b$, and $\hat{B}_- := i\,(\hat b^\dagger -\hat b)$.
The $F$ coefficients in~\eqref{eq:full:time:evolution} are functions of time $\tau$ and are given by the following integrals~\cite{bruschi2018mechano, qvarfort2019enhanced}:
\begin{align} \label{eq:definition:of:F:coefficients}
&F_{a} = - 2 \int^\tau_0 \mathrm{d}\tau' \, \tilde{g}(\tau') \sin(\tau') \, \int^{\tau'}_0 \mathrm{d}\tau'' \tilde{g}(\tau'') \, \cos(\tau''), \nonumber \\
&F_{+} = - \int^\tau_0 \mathrm{d}\tau' \tilde{g}(\tau') \, \cos(\tau'), \, \nonumber \\
&F_{-} = - \int^\tau_0 \mathrm{d} \tau' \tilde{g}(\tau') \, \sin(\tau').
\end{align}
For a constant optomechanical coupling $\tilde{g}(\tau) \equiv \tilde{g}_0 = g_0/\omega_{\rm{m}}$, the integrals in~\eqref{eq:definition:of:F:coefficients} evaluate to
\begin{align} \label{eq:F:coefficients}
&\quad\quad\quad\quad F_a = \frac{1}{2}\tilde{g}_0^2 \left[ \sin(2\tau) - 2\tau \right], \\
& F_+ = - \tilde{g}_0 \, \sin(\tau), && F_- = \tilde{g}_0 \, \left[ \cos(\tau) - 1 \right],\nonumber
\end{align}
which is equivalent to the previously obtained solutions~\cite{bose1997preparation, mancini1997ponderomotive} up to the ordering of the terms in~\eqref{eq:full:time:evolution}.
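As an illustrative sanity check (not part of the original derivation), the factorized evolution~\eqref{eq:full:time:evolution} with the constant-coupling coefficients~\eqref{eq:F:coefficients} can be compared against direct numerical exponentiation of the Hamiltonian. Since $\hat N_a$ is conserved, the sketch below works in a single photon-number sector, where $\hat N_a$ reduces to the number $n$; the cutoff, coupling, and time are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

dim = 60    # mechanical Fock-space cutoff (illustrative choice)
n = 2       # photon number; N_a is conserved, so we work in one sector
g0 = 0.2    # constant rescaled coupling g0/omega_m
tau = 1.3   # dimensionless time

b = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # mechanical annihilation operator
bd = b.conj().T
Nb = bd @ b
Bp = bd + b            # B_+ = b^dag + b
Bm = 1j * (bd - b)     # B_- = i (b^dag - b)

# Photon-number sector of the Hamiltonian (hbar = omega_m = 1,
# free optical evolution already rotated away)
H = Nb - g0 * n * Bp
U_exact = expm(-1j * tau * H)

# Constant-coupling F coefficients
Fa = 0.5 * g0**2 * (np.sin(2 * tau) - 2 * tau)
Fp = -g0 * np.sin(tau)
Fm = g0 * (np.cos(tau) - 1)

# Factorized evolution: free mechanical rotation, N_a^2 phase, displacements
U_fact = (np.exp(-1j * Fa * n**2) * expm(-1j * tau * Nb)
          @ expm(-1j * Fp * n * Bp) @ expm(-1j * Fm * n * Bm))

# Both operators should agree when acting on low-lying states (vacuum here),
# up to the truncation error of the Fock-space cutoff
vac = np.zeros(dim)
vac[0] = 1.0
err = np.linalg.norm((U_exact - U_fact) @ vac)
```

For these parameters the displacement amplitudes are of order $\tilde{g}_0 n \lesssim 1$, so a cutoff of 60 mechanical Fock states keeps the truncation error negligible.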
\section{The Lindblad equation} \label{sec:tools}
The Lindblad equation describes Markovian noise processes as an effective nonunitary contribution to the dynamics~\cite{gardiner2004quantum}. The most general form of the quantum master equation in Gorini-Kossakowski-Sudarshan-Lindblad form for an $N$-dimensional system reads~\cite{lindblad1976generators, gorini1976completely}
\begin{equation} \label{eq:Lindblad:general}
\dot{\hat{\varrho}} = - i [\hat H, \hat \varrho] + \sum_{n,m=1}^{N^2-1} h_{nm} \left( \hat L_n \, \hat \varrho \, \hat L_m^\dagger - \frac{1}{2} \{ \hat L_m^\dagger \hat L_n , \hat \varrho \} \right),
\end{equation}
where $\hat \varrho$ is the density matrix of a quantum state, $\hat H$ is the Hamiltonian operator, the $\hat L_n$ are (generally non-Hermitian) Lindblad operators, $h_{nm}$ is a positive-semidefinite coefficient matrix, and $\{\cdot, \cdot \}$ denotes the anticommutator.
To obtain a solution to~\eqref{eq:Lindblad:general}, we make use of two methods: vectorization and a factorization of the evolution operator akin to moving to the interaction picture. The combination of these methods allows us to write down a solution based on the previously obtained Lie-algebra solution for the unitary dynamics. We outline both methods in the following sections.
\subsection{Introduction to vectorization}
Here we introduce the vectorization procedure for linear operators that act on the Hilbert space and show how the vectorized Lindblad equation is derived. We also refer the reader to the excellent introduction to vectorization in~\cite{d2000bell} and in the Supplemental Material of~\cite{alipour2014quantum}, the notation of which we follow closely.
We start by considering a generic operator $\hat A$ that acts on the Hilbert space $\mathcal{H}$. Given an orthonormal basis $\{\ket{i}\}$ in $\mathcal{H}$, the operator $\hat A$ can be written as
\begin{equation}
\hat A = \sum_{ij} \bra{i} \hat{A} \ket{j} \ketbra{i}{j}.
\end{equation}
We then assign a vector to this operator by flipping one of the bras into a ket:
\begin{equation}
\kket{A} = \sum_{ij} \bra{i} \hat{A} \ket{j} \ket{i}\ket{j}.
\end{equation}
That is, every row in the matrix $\hat{A}$ defined through its elements $A_{ij}:=\bra{i} \hat{A} \ket{j}$ becomes stacked in the vector $\kket{A}$. We note that this makes the vectorization basis dependent.
To vectorize the Lindblad equation, we need the relation (see~\cite{alipour2014quantum} for the derivation)
\begin{equation}\label{app:eq:key:vector:identity}
\kket{ABC} = ( \hat A \otimes \hat C^{\rm{T}}) \kket{B},
\end{equation}
which demonstrates how a vectorized product of operators can be considered. We will later replace $\hat B$ by the density matrix in the Lindblad equation (see Sec.~\ref{sec:optical:decoherence:nonlinear:optomechanics}).
Finally, we note that expectation value for the general operator $\hat A$ and the state $\hat \varrho$ is given in the vectorized language as
\begin{equation} \label{eq:overlap}
\braket{\hat A} = \mathrm{Tr}\bigl[ \hat A \, \hat \varrho \bigr] = \bbraket{A^\dag | \varrho}.
\end{equation}
This will allow us to compute various quantities of interest once we have solved the dynamics.
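Both identities are straightforward to verify numerically. The sketch below (illustrative, with an arbitrary dimension and random matrices) checks the product rule~\eqref{app:eq:key:vector:identity} and the overlap formula~\eqref{eq:overlap}, using the row-stacking convention adopted above, which coincides with NumPy's row-major \texttt{reshape}.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

def randc():
    # Random complex d x d matrix
    return rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

A, B, C = randc(), randc(), randc()

def vec(M):
    # Row stacking: the rows of M are stacked into one vector |M>>,
    # matching the basis-dependent mapping in the text
    return M.reshape(-1)

# |ABC>> = (A \otimes C^T) |B>>
lhs = vec(A @ B @ C)
rhs = np.kron(A, C.T) @ vec(B)
print(np.allclose(lhs, rhs))   # True

# <A> = Tr[A rho] = <<A^dag | rho>>  (np.vdot conjugates its first argument)
X = randc()
rho = X @ X.conj().T
rho /= np.trace(rho)
expval = np.vdot(vec(A.conj().T), vec(rho))
print(np.isclose(np.trace(A @ rho), expval))   # True
```

Note that with a column-stacking convention the roles in the Kronecker product would be exchanged, which is why fixing the mapping once, as done here, matters.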
\subsection{Vectorizing the Lindblad equation}
As noted in the preceding section, vectorization transforms matrices into vectors. Crucially, it also allows us to transform superoperators into matrices. In fact, this method has been used to great effect in previous efforts to model nonunitary dynamics (see, e.g.,~\cite{alipour2014quantum, teuber2020solving, buvca2020bethe}).
In this work we denote the vectorized density matrix $\hat \varrho $ by $\kket{\varrho}$ and the free state evolution is subsequently written as
\begin{equation}
\hat U(t) \, \hat \varrho_0\, \hat U^\dag (t) \rightarrow \hat U(t) \otimes \hat U^*(t) \kket{\varrho_0}.
\end{equation}
Note that here we take the complex conjugate, rather than the full conjugate transpose, of $\hat U(t)$, as mandated by the vectorization mapping that we chose. Throughout this work, we use the tensor product to differentiate between left-hand and right-hand multiplication, rather than to show the structure of the Hilbert space in terms of the optical and mechanical modes.
\begin{table}
\caption{\label{tab:vectorized} Vectorized analogs of terms in the Lindblad equation~\eqref{eq:Lindblad:general}.}
\begin{ruledtabular}
\begin{tabular}{cc}
Operator product & Vectorized analog \\\hline
$\hat \varrho \, \hat H $ & $(\mathds{1}\otimes \hat H^{\mathrm{T}}) |\varrho \rrangle $ \\
$\hat H \, \hat \varrho$ & $(\hat H\otimes \mathds{1}) |\varrho \rrangle$\\
$\hat L_n \, \hat \varrho \, \hat L^\dagger_m$ & $(\hat L_n \otimes \hat L_m^{\dagger \mathrm{T}} ) |\varrho \rrangle$ \\
$\hat L_m^\dagger \hat L_n \, \hat \varrho $ & $(\hat L_m^\dagger \hat L_n \otimes \mathds{1} ) |\varrho \rrangle $ \\
$\hat \varrho \, \hat L^\dagger_m \hat L_n$ & $ (\mathds{1} \otimes (\hat L_m^\dagger \hat L_n)^{\mathrm{T}} ) |\varrho \rrangle$
\end{tabular}
\end{ruledtabular}
\end{table}
To apply the vectorization to the Lindblad equation,
we use the identity~\eqref{app:eq:key:vector:identity} on all terms of the Lindblad equation~\eqref{eq:Lindblad:general}. The terms and their vectorized analogs can be found in Table~\ref{tab:vectorized}. Where only two operators were multiplied, we inserted the identity to ensure that we obtain products of three operators. As a result,~\eqref{eq:Lindblad:general} can be written in the vectorized language as
\begin{equation} \label{eq:vectorized:Lindblad}
\frac{\mathrm{d}}{\mathrm{d} t}|\varrho \rrangle = \hat{\mathcal{L}}(t) \kket{\varrho}.
\end{equation}
Here we write $\hat{\mathcal{L}}(t)$ as:
\begin{equation} \label{eq:vectorized:Lindblad:super:operator}
\hat{\mathcal{L}} = \hat{\mathcal{L}}_H + \hat{\mathcal{L}}_L,
\end{equation}
where (according to the terms listed in Table~\ref{tab:vectorized}) $\hat{\mathcal{L}}_H$ is the unitary (Hamiltonian) contribution given by $\hat{\mathcal{L}}_H := - i \bigl( \hat H(t) \otimes \mathds{1} - \mathds{1}\otimes \hat H^{\rm{T}}(t) \bigr) $ and $\hat{\mathcal{L}}_L$ contains the nonunitary part
\begin{align} \label{eq:LH:LL:definitions}
\hat{\mathcal{L}}_L &:= \sum_{n,m = 1}^{N^2-1} \frac{h_{nm}}{2} \left[ 2 \hat L_n\otimes \hat L_m^{\dagger \rm{T}}- \hat L_m^\dagger \hat L_n \otimes \mathds{1} - \mathds{1}\otimes (\hat L_m^\dagger \hat L_n)^{\rm{T}} \right] .
\end{align}
These expressions might appear nonintuitive at first because of the notation used for the vectorization. The vectorization essentially splits the system into two modes (here explicitly indicated by use of the tensor product), one `right-handed' and one `left-handed' mode, which act on separate parts of the vectorized density matrix. We also notice the appearance of transposed operators in~\eqref{eq:LH:LL:definitions}, which follow from our choice of the vectorization mapping. However, we may simplify the expression by adopting a real basis, such as the Fock basis, where $\hat L_n$ and $\hat L_n^\dagger$ have exclusively real entries. This means that the transposition operation is equivalent to taking the Hermitian conjugate, which, for example, allows us to write $\hat L^{\rm{T}}_n= \hat L_n^\dag$. This will greatly simplify our calculations, but may have consequences if we wish to explicitly compute quantities using a complex basis. We do not, however, encounter such cases in this work.
The formal solution to the Lindblad equation~\eqref{eq:vectorized:Lindblad} in the vectorized language reads
\begin{equation} \label{eq:Lindblad:formal:solution}
\kket{\varrho(t)} = \hat{\mathcal{S}}(t) \kket{\varrho_0},
\end{equation}
where $\kket{\varrho_0}$ is the vectorized form of the initial state $\hat \varrho_0$ and $\hat{\mathcal{S}}(t)$ is the time-ordered exponential of $\hat{\mathcal{L}}(t)$:
\begin{equation} \label{eq:formal:solution:S}
\hat{\mathcal{S}}(t) = \overleftarrow{\mathcal{T}} \mathrm{exp} \left[ \int^t_0 \mathrm{d} t' \, \hat{\mathcal{L}}(t') \right].
\end{equation}
This is a key expression that captures both the unitary and the nonunitary evolution. In the next section, we proceed to show how $\hat{\mathcal{S}}(t)$ may be further simplified.
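As a minimal illustration of this solution for a time-independent generator, the sketch below (parameter values are arbitrary) builds the vectorized Liouvillian of a damped cavity mode with a single Lindblad operator $\hat L = \sqrt{\kappa}\,\hat a$, and checks trace preservation together with the known decay law $\langle \hat N_a \rangle(t) = \langle \hat N_a \rangle(0)\, e^{-\kappa t}$.

```python
import numpy as np
from scipy.linalg import expm

dim, kappa, t = 12, 0.3, 2.0   # Fock cutoff, decay rate, time (arbitrary)

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # cavity annihilation operator
ad = a.conj().T
Na = ad @ a
H = Na                      # free cavity Hamiltonian (rescaled units)
L = np.sqrt(kappa) * a      # single Lindblad operator
I = np.eye(dim)

# Vectorized Liouvillian in the row-stacking convention:
# unitary part and dissipator, with both anticommutator terms entering
# with a minus sign
LH = -1j * (np.kron(H, I) - np.kron(I, H.T))
LL = (np.kron(L, L.conj())
      - 0.5 * np.kron(L.conj().T @ L, I)
      - 0.5 * np.kron(I, (L.conj().T @ L).T))

S = expm((LH + LL) * t)     # nonunitary evolution superoperator S(t)

rho0 = np.zeros((dim, dim))
rho0[3, 3] = 1.0            # initial Fock state |3><3|
rho_t = (S @ rho0.reshape(-1)).reshape(dim, dim)

trace = np.trace(rho_t).real            # should remain 1
n_mean = np.trace(Na @ rho_t).real      # should equal 3 * exp(-kappa * t)
```

Since pure photon decay only transfers population downwards, the initial Fock state $\ket{3}$ never probes the cutoff, so the comparison is exact up to floating-point error.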
\subsection{Partitioning the time-evolution} \label{sec:partitioned:time:evolution}
The second method that we will use to solve the Lindblad equation relies on the fact that any time-evolution operator $\hat U(t)$ [or the nonunitary evolution operator $\hat{\mathcal{S}}(t)$, as will become evident] can be partitioned into products that arise from the different Hamiltonian terms. Once partitioned, each contribution can then be evaluated using a suitable method. For example, the time evolution that arises from a quadratic Hamiltonian can be treated using phase-space methods~\cite{serafini2017quantum}, while a cubic or higher-order Hamiltonian term can in some cases be treated with a Lie-algebra method~\cite{wei1963lie}, as we do here.
Formally, we consider the time-evolution operator $\hat U(t)$ generated by the Hamiltonian $\hat H(t) = \hat H_A(t) + \hat H_B(t)$, where the partition of $\hat H_A(t) $ and $\hat H_B(t)$ is arbitrary. We may then consider a frame that rotates with $\hat U_A(t)$, which is defined in the standard way as
\begin{equation}
\hat U_A(t) = \overleftarrow{\mathcal{T}} \mathrm{exp}\left[ - \frac{i}{\hbar } \int^t_0 \mathrm{d}t' \, \hat H_A(t') \right].
\end{equation}
It is then possible to write $\hat U(t)$ as the product $\hat U(t) = \hat U_A(t) \, \hat U_B(t)$, where $\hat U_B(t)$ is given by
\begin{equation} \label{eq:UB:evolution}
\hat U_B(t) = \overleftarrow{\mathcal{T}}\mathrm{exp}\left[ - \frac{i}{\hbar } \int^t_0 \mathrm{d}t' \, \hat U_A^\dag (t') \, \hat H_B (t')\, \hat U_A(t') \right].
\end{equation}
See Appendix~\ref{app:interaction:picture} for a detailed derivation, which follows the standard treatment of the interaction picture.
We now seek to generalize these notions to nonunitary dynamics. Consider $\hat{\mathcal{L}}(t) = \hat{\mathcal{L}}_A(t) + \hat{\mathcal{L}}_B(t)$, where again $\hat{\mathcal{L}}_A(t)$ and $\hat{\mathcal{L}}_B(t)$ are completely arbitrary. The formal solution for the evolution with $\hat{\mathcal{L}}_A(t)$ is given from~\eqref{eq:formal:solution:S} and reads
\begin{equation}
\hat{\mathcal{S}}_A(t) = \overleftarrow{\mathcal{T}} \mathrm{exp}\left[ \int^t_0 \mathrm{d}t'\, \hat{\mathcal{L}}_A(t') \right],
\end{equation}
and by considering a transformation similar to the interaction picture for unitary dynamics, we write $\hat{\mathcal{S}}(t) = \hat{\mathcal{S}}_A(t) \, \hat{\mathcal{S}}_B(t)$, where now
\begin{equation} \label{eq:simplified:S}
\hat{\mathcal{S}}_B(t) = \overleftarrow{\mathcal{T}} \mathrm{exp} \left[ \int^t_0 \mathrm{d}t' \, \hat{\mathcal{S}}^{-1}_A (t') \, \hat{\mathcal{L}}_B (t') \, \hat{\mathcal{S}}_A (t') \right].
\end{equation}
In some cases, partitioning $\hat{\mathcal{S}}(t)$ in this way simplifies the problem at hand. We provide a formal proof of the fact that the partitioning holds for nonunitary dynamics in Appendix~\ref{app:interaction:picture}.
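As a quick sanity check of this partitioning, one can verify the identity numerically for a pair of arbitrary constant generators by integrating the interaction-picture equation $\mathrm{d}\hat{\mathcal{S}}_B/\mathrm{d}t = \bigl(\hat{\mathcal{S}}_A^{-1}\,\hat{\mathcal{L}}_B\,\hat{\mathcal{S}}_A\bigr)\hat{\mathcal{S}}_B$ with a standard Runge-Kutta stepper. This is an illustrative sketch only; the matrix dimension, generators, and step counts are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

# Toy check of S(t) = S_A(t) S_B(t) for constant (hypothetical) generators.
rng = np.random.default_rng(1)
LA = 0.5 * rng.normal(size=(3, 3))
LB = 0.5 * rng.normal(size=(3, 3))

def M(t):
    # Interaction-picture generator S_A^{-1}(t) L_B S_A(t), with S_A(t) = exp(t LA).
    return expm(-t * LA) @ LB @ expm(t * LA)

T, steps = 0.5, 1000
h = T / steps
SB = np.eye(3)
for k in range(steps):  # RK4 for dS_B/dt = M(t) S_B
    t = k * h
    k1 = M(t) @ SB
    k2 = M(t + h / 2) @ (SB + h / 2 * k1)
    k3 = M(t + h / 2) @ (SB + h / 2 * k2)
    k4 = M(t + h) @ (SB + h * k3)
    SB = SB + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

S_full = expm(T * (LA + LB))   # direct solution of the full generator
S_split = expm(T * LA) @ SB    # partitioned solution S_A(T) S_B(T)
```

The two matrices agree to numerical precision, mirroring the formal proof in the appendix.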
We are now ready to consider the Lindblad master equation for optical decoherence in a nonlinear optomechanical system.
\section{Optical decoherence in a nonlinear optomechanical system} \label{sec:optical:decoherence:nonlinear:optomechanics}
The main loss mechanisms in an optical cavity comprise intrinsic losses, such as scattering and absorption, and extrinsic losses, such as imperfect mirror reflectivity or losses from the output coupling~\cite{aspelmeyer2014cavity}. The latter can generally be controlled in experiments, while the former are unavoidable. We denote the total decay rate by $\kappa_{\mathrm{c}}$; it gives rise to dissipation in the energy basis, which in turn leads to decoherence of the off-diagonal elements of the density matrix.
Our goal is to solve the Lindblad equation for optical decoherence in a nonlinear optomechanical system.
Concretely, we wish to derive an expression for $\hat{\mathcal{S}}(\tau)$ [shown in~\eqref{eq:Lindblad:formal:solution}] that can be used to evaluate quantities of interest. To do so, we start from the vectorized Lindbladian~\eqref{eq:vectorized:Lindblad:super:operator} for a single optical mode
\begin{align} \label{eq:Lindbladian}
\hat{\mathcal{L}}(\tau) &= - i\left[ \hat H(\tau) \otimes \mathds{1} - \mathds{1}\otimes \hat H (\tau) \right] + \hat L \otimes \hat L \nonumber \\
&\qquad - \frac{1}{2} \left( \hat L^\dagger \hat L \otimes \mathds{1} + \mathds{1} \otimes \hat L^\dagger \hat L \right),
\end{align}
where $\hat H(\tau)$ is the optomechanical Hamiltonian rescaled by $\omega_{\mathrm{m}}$ shown in~\eqref{eq:Hamiltonian:rescaled}. To model optical dissipation, we let the Lindblad operator be $\hat L = \sqrt{\tilde{\kappa}_{\rm{c}}} \, \hat a$, where $\tilde{\kappa}_{\rm{c}} = \kappa_{\rm{c}}/\omega_{\rm{m}}$ is the rescaled optical damping rate.
We proceed by partitioning~\eqref{eq:Lindbladian} into the following unitary and nonunitary parts:
\begin{align}
\hat{\mathcal{L}}_H &= i \, \mathds{1}\otimes \hat H(\tau) - i \, \hat H(\tau) \otimes \mathds{1}, \nonumber \\
\hat{\mathcal{L}}_L &= \frac{\tilde{\kappa}_{\rm{c}}}{2} \, \left( 2 \, \hat a \otimes \hat a - \hat N_a \otimes \mathds{1} + \mathds{1} \otimes \hat N_a \right).
\end{align}
This partition allows us to write the full solution to the Lindblad equation~\eqref{eq:formal:solution:S} as $\hat{\mathcal{S}}(\tau) = \hat{\mathcal{S}}_H(\tau) \, \hat{\mathcal{S}}_L(\tau)$ (see Sec.~\ref{sec:partitioned:time:evolution}),
where
\begin{align} \label{eq:def:of:SH:SL}
\hat{\mathcal{S}}_H &:= \overleftarrow{\mathcal{T}} \mathrm{exp} \left[ \int^\tau_0 \mathrm{d}\tau' \, \hat{\mathcal{L}}_H \right], \nonumber \\
\hat{\mathcal{S}}_L &:= \overleftarrow{\mathcal{T}} \mathrm{exp} \left[ \int^\tau_0 \mathrm{d} \tau' \, \hat{\mathcal{S}}_H^{-1} \, \hat{\mathcal{L}}_L(\tau') \, \hat{\mathcal{S}}_H \right].
\end{align}
Note that $\hat{\mathcal{S}}_H(\tau)$ encodes the unitary evolution, since
\begin{align} \label{eq:unitary:part}
\hat{\mathcal{S}}_H(\tau) &= \overleftarrow{\mathcal{T}} \mathrm{exp} \left[i \int^\tau_0 \mathrm{d} \tau' \, \left[ \mathds{1} \otimes \hat H(\tau') - \hat H(\tau')\otimes \mathds{1} \right] \right] \nonumber \\
&= \hat U(\tau) \otimes \hat U^*(\tau),
\end{align}
where the solution of $\hat U(\tau)$ is shown in~\eqref{eq:full:time:evolution}. The additional complex conjugate arises from the choice of the vectorization mapping.
We then once again split $\hat{\mathcal{L}}_L$ into the following two components: $\hat{\mathcal{L}}_{\hat a, \hat a } = \tilde{\kappa}_{\rm{c}} \, \hat a \otimes \hat a$ and
\begin{align}
\hat{\mathcal{L}}_{\hat N_a} = -\frac{1}{2} \tilde{\kappa}_{\rm{c}} \left( \mathds{1} \otimes\hat N_a + \hat N_a \otimes \mathds{1} \right).
\end{align}
Then, using the fact that $\hat N_a$ commutes with the Hamiltonian~\eqref{eq:Hamiltonian}, we write the full solution as $\hat{\mathcal{S}}(\tau) = \hat{\mathcal{S}}_H \, \hat{\mathcal{S}}_{\hat N_a} \, \hat{\mathcal{S}}_{\hat a}$, where $\hat{\mathcal{S}}_H $ is defined in~\eqref{eq:def:of:SH:SL} and $\hat{\mathcal{S}}_{\hat N_a}$ and $\hat{\mathcal{S}}_{\hat a }$ are given by
\begin{align} \label{eq:Sa}
\hat{\mathcal{S}}_{\hat N_a} &= e^{- \tilde{\kappa}_{\rm{c}} \tau \, \hat N_a /2} \otimes e^{- \tilde{\kappa}_{\rm{c}} \tau \, \hat N_a/2}, \nonumber \\
\hat{\mathcal{S}}_{\hat a } &= \overleftarrow{\mathcal{T}} \mathrm{exp} \left[ \int^\tau_0 \mathrm{d}\tau' \, \hat{\mathcal{S}}_{\hat N_a}^{-1} \, \hat{\mathcal{S}}_H ^{-1} \, \hat{\mathcal{L}}_{\hat a, \hat a} \, \hat{\mathcal{S}}_H \, \hat{\mathcal{S}}_{\hat N_a} \right] \, .
\end{align}
To compute $\hat{\mathcal{S}}_{\hat a}(\tau)$, we must first examine the nontrivial term $\hat{\mathcal{S}}^{-1}_H(\tau) \left( \hat a\otimes \hat a \right) \hat{\mathcal{S}}_H(\tau) $. Using~\eqref{eq:unitary:part}, we write $
\hat{\mathcal{S}}^{-1}_H(\tau) \left( \hat a\otimes \hat a \right) \hat{\mathcal{S}}_H(\tau)
= \hat U^\dag (\tau) \, \hat a \, \hat U(\tau) \otimes \hat U^{\dag*}(\tau) \, \hat a \, \hat U^* (\tau) $, where we again recall that we have disregarded the free optical evolution and that we used a basis where $\hat a$ has real entries, such that $\hat U^{\dag *}(\tau) \, \hat a \, \hat U^*(\tau) = \bigl[ \hat U^\dag (\tau) \, \hat a \, \hat U(\tau) \bigr]^*$.
These terms are just the usual unitary Heisenberg evolution of $\hat a$, which is given by~\cite{qvarfort2019enhanced}
\begin{align}\label{eq:evolution:of:a}
\hat U^\dag (\tau) \, \hat a\, \hat U(\tau) &= e^{- i \, F_a }\, e^{- 2 \, i \, (F_a + F_+ F_-)\, \hat{N}_a} e^{- i F_+ \, \hat{B}_+ } \, e^{- i F_- \, \hat{B}_- } \, \hat{a}.
\end{align}
Then, since $[\hat{\mathcal{S}}_{\hat N_a}(\tau), \hat U(\tau)] = 0$, the term under the integral can be written
\begin{align} \label{app:eq:two:side:evolution}
&\left( e^{ -\tilde{\kappa}_{\rm{c}} \tau \hat N_a \otimes \mathds{1}/2} \, e^{ - \tilde{\kappa}_{\rm{c}} \tau \mathds{1} \otimes \hat N_a /2 } \right)^{-1} \hat a\otimes \hat a \, e^{- \frac{1}{2} \tilde{\kappa}_{\rm{c}} \tau \hat N_a \otimes \mathds{1}} \, e^{- \frac{1}{2} \tilde{\kappa}_{\rm{c}} \tau \mathds{1} \otimes \hat N_a } \nonumber \\
&= e^{-\tilde{\kappa}_{\rm{c}} \tau} \, \hat a \otimes \hat a ,
\end{align}
where we have used the relation $( \hat N_a)^n \, \hat a = \hat a \, ( \hat N_a - 1 )^n $, which in turn yields $
e^{x \, \hat N_a} \, \hat a \, e^{-x \, \hat N_a} = \, e^{-x} \, \hat a \, $.
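This operator identity can be verified directly in a truncated Fock basis, where $\hat N_a$ is diagonal and the truncation introduces no error. A minimal numerical sketch (dimension and parameter values arbitrary):

```python
import numpy as np

d = 10                                    # truncated Fock-space dimension
n = np.arange(d)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator: a|n> = sqrt(n)|n-1>
x = 0.37                                  # arbitrary real parameter

# e^{x N} a e^{-x N}; the number operator is diagonal, so the exponentials are too.
left = np.diag(np.exp(x * n)) @ a @ np.diag(np.exp(-x * n))
right = np.exp(-x) * a
```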
Inserting the expressions~\eqref{eq:evolution:of:a} and~\eqref{app:eq:two:side:evolution} into $\hat{\mathcal{S}}_{\hat a}$~\eqref{eq:Sa}, we are able to write the full expression for $\hat{\mathcal{S}}(\tau)$ as
\begin{widetext}
\begin{align} \label{eq:explicit:main:result}
\hat{\mathcal{S}}(\tau) &= \left( e^{- i \hat N_b\, \tau} \, e^{- i F_a \, \hat N_a^2} \, e^{- i F_+ \hat N_a \, \hat B_+} \, e^{- i F_- \hat N_a \, \hat B_-}e^{- \tilde{\kappa}_{\rm{c}} \tau \hat N_a/2 }\right) \otimes \left( e^{i \hat N_b\, \tau} \, e^{i F_a \, \hat N_a^2} \, e^{i F_+ \hat N_a \, \hat B_+} \, e^{i F_- \hat N_a \, \hat B_-}e^{- \tilde{\kappa}_{\rm{c}} \tau \hat N_a/2 }\right) \nonumber \\
&\quad \times \overleftarrow{\mathcal{T}} \mathrm{exp} \left[\tilde{\kappa}_{\rm{c}} \int^\tau_0 \mathrm{d}\tau' \, e^{- \tilde{\kappa}_{\rm{c}} \tau'} \,e^{-2\,i\,(F_a+F_+ F_- )\,\hat{N}_a}\,e^{-i\,F_+\,\hat{B}_+}\,e^{-i\,F_-\,\hat{B}_-} \, \hat a \otimes e^{2\,i\,(F_a+F_+ F_- )\,\hat{N}_a}\,e^{i\,F_+\,\hat{B}_+}\,e^{i\,F_-\,\hat{B}_-} \, \hat a \, \right],
\end{align}
\end{widetext}
where the $F$ coefficients are defined in~\eqref{eq:definition:of:F:coefficients}, together with the operator definitions listed above it. We note that all $F$ coefficients inside the integral in~\eqref{eq:explicit:main:result} are functions of $\tau'$.
To further simplify~\eqref{eq:explicit:main:result}, we write the operators under the integral that are acting on the mechanical subsystem as Weyl displacement operators
\begin{align} \label{eq:rewrite:displacement}
e^{- i \, F_+ \hat B_+ } e^{- i \, F_- \hat B_-} &= \hat D(G(\tau)) \, e^{- i \, F_+ F_-} ,
\end{align}
where we have defined $G(\tau) = F_- - i F_+$ and the explicit form of the displacement operator is $\hat D(G(\tau)) \equiv e^{G(\tau) \hat b^\dag - G^*(\tau) \hat b }$.
Since the integral in~\eqref{eq:explicit:main:result} contains both $\hat D(G(\tau)) \, e^{- i \, F_+ F_-}$ and its complex conjugate, we find that the phases cancel and that the final expression can be written in the compact form
\begin{align}\label{eq:noisy:dynamics:general}
&\hat{\mathcal{S}}(\tau) = \hat U(\tau) \, e^{- \tilde{\kappa}_{\rm{c}} \tau \,\hat N_a/2} \otimes \hat U^*(\tau) \, e^{- \tilde{\kappa}_{\rm{c}} \tau \,\hat N_a /2 } \nonumber \\
&\quad \times \overleftarrow{\mathcal{T}} \mathrm{exp} \biggl[\tilde{\kappa}_{\rm{c}} \int^\tau_0 \mathrm{d}\tau' e^{- \tilde{\kappa}_{\rm{c}} \tau'} \hat D(G(\tau')) \, e^{- 2 i A(\tau') \, \hat N_a} \hat a \nonumber \\
&\qquad\qquad\qquad\quad \otimes \hat D(G^* (\tau')) \,e^{2 i A(\tau') \,\hat N_a} \, \hat a \biggr],
\end{align}
where we have defined $A(\tau) = F_a + F_+ \, F_-$ and $\hat U(\tau)$ can be found in~\eqref{eq:full:time:evolution}.
Equation~\eqref{eq:noisy:dynamics:general} is the main result of this paper. It allows for optical dissipation to be considered for any rescaled coupling strength $\tilde{g}_0$ and any decay rate $\tilde{\kappa}_{\rm{c}}$. It generalizes a previous first-order perturbative solution for small $\tilde{\kappa}_{\mathrm{c}}$ derived by~\citet{mancini1997ponderomotive}. While~\eqref{eq:noisy:dynamics:general} cannot be written in terms of a closed-form expression,\footnote{This is due to the fact that the Lie algebra that generates the nonunitary evolution is infinite-dimensional~\cite{wei1963lie}. We can see this by commuting the terms in $\hat{\mathcal{L}}$, which leaves us with terms proportional to $ \bigl(\hat b^\dag + \hat b \bigr)^N$ for any integer $N \geq 1$.} we show in the following sections that it does in fact allow for certain quantities of the system to be computed.
\begin{figure*}
\centering
\begin{minipage}{.45\textwidth}
\centering
\subfloat[ \label{fig:homodyne}]{%
\includegraphics[width=0.67\linewidth, trim = 7mm 0mm -7mm 0mm]{QuadPlot.pdf
} \\
\subfloat[ \label{fig:fidelity}]{%
\includegraphics[width=0.67\linewidth, trim = 7mm 0mm -7mm 0mm]{FidelityPlot.pdf
}
\end{minipage}%
\hfill
\begin{minipage}{.55\textwidth}
\subfloat[ \label{fig:Wigner}]{%
\includegraphics[width=\linewidth, trim = 0mm -1mm 0mm 5mm]{2021-05-18-21.28.06cat-states.png
}
\end{minipage}%
\caption{Impact of optical loss in a nonlinear optomechanical system. (a) Parametric plot of the optical quadratures $\braket{\hat X_{\rm{c}}} = \sqrt{2} \, \mathrm{Re} [\braket{\hat a}]$ and $\braket{\hat P_{\rm{c}}} = \sqrt{2} \, \mathrm{Im}[\braket{\hat a}]$ as a function of time $\tau= \omega_{\mathrm{m}}t$ for $ \tilde{g}_0 = 1$. The optical state is a coherent state with $|\alpha| = 1$ and the mechanical mode is in the ground state.
Time $\tau$ starts at the rightmost tip of the phase-space diagram and runs until $2\pi$. The expression for $\braket{\hat a}$ is given in~\eqref{eq:homodyne:signal}. For unitary dynamics (blue trajectory), the system returns to its initial state at $\tau = 2\pi$, while for nonunitary dynamics (green and orange lines), the quadratures decay and the system does not return to its original state. The larger $\tilde{\kappa}_{\mathrm{c}}$ is, the faster the state decays towards the vacuum expectation value. (b) Fidelity $\mathcal{F}$ for generating a two-component optical cat state as a function of the decay rate $\tilde{\kappa}_{\rm{c}}$. The lines show the fidelity for three different coherent state parameters $\alpha$ given an optomechanical coupling of value $\tilde{g}_0 = \frac{1}{2}$. The shaded regions indicate the lower and upper bounds to $\mathcal{F}$ shown in~\eqref{eq:fidelity:bounds}. (c) Grid of 3 $\times$ 3 numerically computed Wigner functions $W(X,P)$ of a noisy optical cat state with $|\alpha| = 3$. The coupling $\tilde{g}_0$ increases along the horizontal direction (from left to right) and the rescaled decay rate $\tilde{\kappa}_{\rm{c}}$ increases down along the vertical direction (from top to bottom). The negative values of $W(X,P)$, shown in red, indicate where the state is nonclassical. Even for $\tilde{\kappa}_{\rm{c}} \sim 0.05$ (middle row), the nonclassicality (red regions) rapidly decreases.}
\label{fig:}
\end{figure*}
\section{Examples} \label{sec:examples}
To demonstrate the utility of our method, we proceed to compute three quantities of interest: (i) the photon-number expectation value $\braket{\hat N_a}$, (ii) the intracavity quadratures, and (iii) the fidelity $\mathcal{F}$ for generating optical intracavity cat states in the presence of optical loss.
In all three examples, we work with the initially separable state of the mechanical and optical mode
\begin{equation} \label{eq:initial:state}
\ket{\Psi_0} = \ket{\alpha}_{\rm{c}}\otimes\ket{\beta}_{\rm{m}},
\end{equation}
where both $\ket{\alpha}_{\rm{c}}$ and $\ket{\beta}_{\rm{m}}$ are coherent states that satisfy the relations $\hat a\ket{\alpha}_{\rm{c}} = \alpha \ket{\alpha}_{\rm{c}}$ and $\hat b\ket{\beta}_{\rm{m}} = \beta \ket{\beta}_{\rm{m}}$.
While it is commonly assumed that the representation of the optical state as a coherent state is an accurate one, the mechanical state is more often found in a thermal state, which is given by
\begin{equation} \label{eq:thermal:state}
\hat \varrho_{\mathrm{th}} = \frac{1}{\pi \bar{n}} \int_{\mathbb{C}} \mathrm{d}^2 \beta \, e^{- |\beta|^2/\bar{n}} \ketbra{\beta},
\end{equation}
where $\beta \in \mathbb{C}$ and $\bar{n}$ is the average phonon number of the state. Often, results for~\eqref{eq:thermal:state} can be straightforwardly obtained by starting with the coherent state in~\eqref{eq:initial:state} and then integrating over $\beta$ with the appropriate weighting. We show below how this can be done for the intracavity optical quadratures of the state. In general, however, starting with an initial thermal state does not significantly further complicate the calculations, because the vectorization has been chosen specifically to model mixed states.
\subsection{Photon-number}
For our first example, we compute the expectation value of the photon-number operator $\hat N_a(\tau)$. Using the identity in~\eqref{eq:overlap}, we find $\braket{\hat N_a} = \llangle n | \hat{\mathcal{S}}(\tau)| \Psi_0 \rrangle$, where $\kket{n}$ is a vectorized Fock state (the eigenstate of $\hat N_a$) and $\kket{\Psi_0}$ is the initial state shown in~\eqref{eq:initial:state}.
The full calculations can be found in Appendix~\ref{app:photon:number}. The key step involves expanding the time-ordered exponential in~\eqref{eq:explicit:main:result} as a von Neumann series, and then acting on the various terms with the optical Fock states. We are then able to trace out the mechanical subsystem and contract the exponential again. We are left with the relatively simple result
\begin{equation} \label{eq:photon:decay}
\braket{\hat N_a(\tau)} = |\alpha|^2 \, e^{-\tilde{\kappa}_{\rm{c}} \, \tau},
\end{equation}
where $|\alpha|^2$ is the initial number of photons in the cavity. The photon number decays exponentially from its initial value $|\alpha|^2$ towards the vacuum as $\tau \rightarrow \infty$, in exact agreement with numerical results.
One might perhaps have expected the interaction between the optical and mechanical modes to influence $\braket{\hat N_a}$. However, we note that $\hat N_a$ is a constant of the motion, which means that it commutes with the light-matter interaction term; thus $\braket{\hat N_a(\tau)}$ decays just as a coherent state in a cavity would.
\subsection{Intracavity optical quadratures}
The optical quadratures $\hat X_{\rm{c}} = \bigl( \hat a^\dag + \hat a \bigr) / \sqrt{2}$ and $\hat P_{\rm{c}} = i \bigl( \hat a^\dag - \hat a \bigr)/\sqrt{2}$ are the dimensionless first moments of the optical state. They are often measured in experiments using homodyne measurements and offer insights into the phase-space trajectory of the system.
Our goal is to compute the expectation values $\braket{\hat X_{\rm{c}}(\tau)}$ and $\braket{\hat P_{\rm{c}}(\tau)}$. They are given in terms of the expectation value $\braket{\hat a(\tau)}$ as $\braket{\hat X_{\rm{c}}(\tau)} = \sqrt{2} \, \mathrm{Re} \braket{\hat a(\tau)}$ and $\braket{\hat P_{\rm{c}}(\tau)} = \sqrt{2} \, \mathrm{Im} \braket{\hat a(\tau)}$. Again using the identity~\eqref{eq:overlap}, we find that $\braket{\hat a}$ is given by $\braket{\hat a (\tau)} = \mathrm{Tr} \left[ \hat a \, \hat \varrho( \tau)\right] = \bbraket{\hat a^\dag | \varrho(\tau)} $. After again expanding the time-ordered exponential in~\eqref{eq:explicit:main:result} and effectively tracing out the mechanics (see Appendix~\ref{app:homodyne} for the full calculation), we find
\begin{align} \label{eq:homodyne:signal}
&\braket{\hat a(\tau)} =\alpha \, e^{|\alpha|^2 \left(e^{ - 2\, i \, A(\tau)} \, e^{- \tilde{\kappa}_{\rm{c}} \tau}-1\right) } e^{-|G(\tau)|^2/2}\, \nonumber \\
&\quad \times e^{- i\, A(\tau)} \, e^{ - \tilde{\kappa}_{\rm{c}} \tau/2} \, e^{G(\tau) \beta^* - G^*(\tau) \beta} \, \nonumber \\
&\quad\times \mathrm{exp} \left[\tilde{\kappa}_{\rm{c}} \, |\alpha|^2 \int^\tau_0 \mathrm{d}\tau' \, e^{- \tilde{\kappa}_{\rm{c}} \tau'} \,e^{-2\,i\,A(\tau')} \, e^{i \, B(\tau', \tau) } \right],
\end{align}
where we have defined $B(\tau', \tau) = 2\, \mathrm{Im}[G(\tau) G^*(\tau')]$. While the integral in~\eqref{eq:homodyne:signal} does not appear to have an analytical solution, it can be solved numerically. This is straightforward to do and requires fewer computational resources than modeling the full decohering state in a numerically truncated Hilbert space. Comparing with a numerically evolved state for small $|\alpha|$, we find that~\eqref{eq:homodyne:signal} corresponds exactly to the numerical solution.
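The numerical evaluation can be sketched in a few lines with a trapezoidal rule. The constant-coupling forms $G(\tau) = \tilde g_0 (1 - e^{-i\tau})$ and $A(\tau) = \tilde g_0^2(\tau - \sin\tau)$ used below are assumptions taken from the standard constant-coupling solution; the overall sign convention of $A$ does not affect the $\tilde\kappa_{\rm c} = 0$ consistency check.

```python
import numpy as np

# Assumed constant-coupling coefficient forms (sign conventions hypothetical):
def G(t, g0):
    return g0 * (1 - np.exp(-1j * t))

def A(t, g0):
    return g0**2 * (t - np.sin(t))

def a_expect(tau, alpha, beta, g0, kappa, ngrid=4000):
    """Evaluate the closed-form expression for <a(tau)> numerically."""
    a2 = abs(alpha)**2
    tp = np.linspace(0.0, tau, ngrid)
    B = 2 * np.imag(G(tau, g0) * np.conj(G(tp, g0)))
    f = np.exp(-kappa * tp) * np.exp(-2j * A(tp, g0)) * np.exp(1j * B)
    dt = tp[1] - tp[0]
    I = np.sum(0.5 * (f[1:] + f[:-1])) * dt       # trapezoidal rule
    return (alpha
            * np.exp(a2 * (np.exp(-2j * A(tau, g0)) * np.exp(-kappa * tau) - 1))
            * np.exp(-abs(G(tau, g0))**2 / 2)
            * np.exp(-1j * A(tau, g0))
            * np.exp(-kappa * tau / 2)
            * np.exp(G(tau, g0) * np.conj(beta) - np.conj(G(tau, g0)) * beta)
            * np.exp(kappa * a2 * I))
```

For $\tilde\kappa_{\rm c} = 0$ and $\tilde g_0 = 1$ the state returns exactly to $\alpha$ at $\tau = 2\pi$, while any nonzero $\tilde\kappa_{\rm c}$ shrinks the trajectory, consistent with the behavior described below.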
We plot the optical phase-space quadratures in Fig.~\ref{fig:homodyne} as a function of time for different values of the decay rate $\tilde{\kappa}_{\rm{c}}$, where we have assumed that the mechanical system is in the ground state with $|\beta| = 0$. We set the coupling to $\tilde{g}_0 = 1$, which means that the state should return to its starting point in phase-space at $\tau = 2\pi$ (after one mechanical oscillation). We note that for unitary dynamics with $\tilde{\kappa}_{\rm{c}} = 0$ (the blue line), this is indeed what happens. However, when $\tilde{\kappa}_{\rm{c}} \neq 0$, the trajectory slowly decays towards the vacuum state. We can prove this fact explicitly by examining the expression for $\braket{\hat a(\tau)}$ in~\eqref{eq:homodyne:signal}. The real value of the integral can be simplified and upper bounded, which allows us to prove that both of the real quantities $\braket{\hat X_{\rm{c}}(\tau)}$ and $\braket{\hat P_{\rm{c}}(\tau)}$ go to zero as $\tau \rightarrow \infty$, as expected. The proof can be found in Appendix~\ref{app:quadratures}.
We also consider the intracavity optical quadratures when the mechanics is in the thermal state~\eqref{eq:thermal:state}. Since thermal states represent a weighted average over coherent states, we focus on the term in~\eqref{eq:homodyne:signal} that contains $\beta$. By integrating with the appropriate weighting for the thermal state, we find
\begin{equation}
\frac{1}{\bar{n}\pi} \int_{\mathbb{C}} \mathrm{d}^2 \beta \, e^{- |\beta|^2/\bar{n}} \, e^{G(\tau) \beta^* - G^*(\tau) \beta} = e^{- |G|^2 \, \bar{n}}.
\end{equation}
Inserting this into~\eqref{eq:homodyne:signal}, we find the following expression for optical quadratures:
\begin{align} \label{eq:homodyne:signal:thermal}
&\braket{\hat a(\tau)}_{\mathrm{th}} =\alpha \, e^{|\alpha|^2 \left(e^{ - 2\, i \, A(\tau)} \, e^{- \tilde{\kappa}_{\rm{c}} \tau}-1\right) } e^{-|G(\tau)|^2(1 + 2 \bar{n}) /2}\, e^{- i\, A(\tau)} \, \nonumber \\
&\quad\times e^{ - \tilde{\kappa}_{\rm{c}} \tau/2} \, \mathrm{exp} \left[\tilde{\kappa}_{\rm{c}} \, |\alpha|^2 \int^\tau_0 \mathrm{d}\tau' \, e^{- \tilde{\kappa}_{\rm{c}} \tau'} \,e^{-2\,i\,A(\tau')} \, e^{i \, B(\tau', \tau) } \right].
\end{align}
We note that the quadratures tend to zero for large $\bar{n}$ and $|G|$. However, since $G$ is an oscillating function, the state still returns to its original value in phase space whenever $|G| = 0$, which occurs when the optical and mechanical modes disentangle.
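The Gaussian phase-space integral used in the thermal average above is straightforward to confirm numerically on a finite grid. A minimal sketch with arbitrary test values for $\bar n$ and $G$:

```python
import numpy as np

nbar, Gv = 0.7, 0.4 - 0.3j                # arbitrary test values
L, N = 12.0, 801                          # grid half-width and resolution
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x)
beta = X + 1j * Y

# Integrand of the thermal average over coherent-state labels beta.
f = np.exp(-np.abs(beta)**2 / nbar) * np.exp(Gv * np.conj(beta) - np.conj(Gv) * beta)
numeric = np.sum(f) * (x[1] - x[0])**2 / (np.pi * nbar)
exact = np.exp(-abs(Gv)**2 * nbar)
```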
\subsection{Fidelity for generating optical cat-states}
For our final example we consider the generation of optical cat states of the intracavity field in the presence of optical loss. The cat states allow, among other things, for logical qubits to be encoded~\cite{cochrane1999macroscopically, leghtas2013hardware, mirrahimi2014dynamically}, which makes them interesting for various information-processing schemes.
It has been shown that two initially coherent states [such as those shown in~\eqref{eq:initial:state}] evolve under the Hamiltonian~\eqref{eq:Hamiltonian} as~\cite{bose1997preparation, mancini1997ponderomotive}
\begin{align}\label{non:linear:state:evolution}
\ket{\Psi(\tau)} =& \, e^{-|\alpha|^2/2}\,\sum_{n = 0}^\infty \frac{\alpha^n}{\sqrt{n!}} \, e^{- i \,\left(F_{a}+ F_{+} \, F_{-} \right)\, n^2}e^{ - i \, \mathrm{Im}\left(G^*\, \beta \right)\, n} \,
\nonumber \\
&\qquad\qquad\times \ket{n}_{\rm{c}} \otimes \ket{e^{- i \tau} \, \beta + e^{- i \tau} \,G\, n}_{\rm{m}} \, ,
\end{align}
where $\ket{e^{- i \tau} \, \beta + e^{- i \tau} \,G \, n}_{\rm{m}}$ is a coherent state of the mechanics.~\footnote{Note that the notation used here is slightly different compared with that in~\cite{bose1997preparation}.}
When the optomechanical coupling is constant, with its rescaled form being $\tilde{g}(\tau) \equiv \tilde{g}_0 = g_0/\omega_{\mathrm{m}}$, we find that the optical and mechanical states evolve into separable states at $\tau = 2\pi$.
We see this from the expressions for $F_+$ and $F_-$ in~\eqref{eq:F:coefficients}, which become $F_+ = F_- = 0$ at $\tau = 2\pi$, which in turn implies that $G = 0$. The traced-out cavity state then becomes
\begin{align} \label{eq:pure:cat:state}
\ket{\Psi(2\pi) }_{\rm{c}}= e^{- |\alpha|^2/2} \sum_{n = 0}^\infty \frac{\alpha^n}{\sqrt{n!}} e^{2\pi \,i \,\tilde{g}_0^2 \, n^2} \ket{n}_{\rm{c}}.
\end{align}
The value of $\tilde{g}_0$ determines the number of components of the cat state~\cite{bose1997preparation}.
For example, $\tilde{g}_0 = \frac{1}{2}$ yields the two-component cat state
\begin{equation}
\ket{\Psi( 2\pi )}_{\rm{c}} = \left( \frac{ 1 + i }{2}\ket{+ \alpha} + \frac{1 - i}{2} \ket{- \alpha} \right),
\end{equation}
where the distance between the two components in phase space is given by the coherent-state parameter $\alpha$. Three- and four-component cat states can be similarly generated with $\tilde{g}_0 = 1/\sqrt{6}$ and $\tilde{g}_0 = 1/(2 \sqrt{2})$ (see~\cite{bose1997preparation}).
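The two-component decomposition can be checked directly on the Fock-space coefficients: for $\tilde g_0 = \frac{1}{2}$ the phase $e^{2\pi i \tilde g_0^2 n^2} = e^{i\pi n^2/2}$ equals $1$ for even $n$ and $i$ for odd $n$, which is exactly the pattern produced by the superposition of $\ket{\pm\alpha}$. A quick sketch in a truncated basis:

```python
import numpy as np
from math import factorial, sqrt

alpha, d = 1.5, 40
n = np.arange(d)
# Coherent-state coefficients c_n = e^{-|alpha|^2/2} alpha^n / sqrt(n!)
c = np.array([np.exp(-abs(alpha)**2 / 2) * alpha**k / sqrt(factorial(k))
              for k in range(d)])

psi = c * np.exp(2j * np.pi * 0.25 * n**2)                 # phases for g0 = 1/2
cat = (1 + 1j) / 2 * c + (1 - 1j) / 2 * c * (-1.0)**n      # (1+i)/2 |a> + (1-i)/2 |-a>
```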
To highlight the non-classical features of the state and how these are expected to decay with increasing $\tilde{\kappa}_{\rm{c}}$, we numerically evolve the state and compute its Wigner function $W(X,P)$ for various $\tilde{g}_0$ and $\tilde{\kappa}_{\rm{c}}$. The Wigner function for multicomponent cat states is shown in Fig.~\ref{fig:Wigner} for $|\alpha| = \sqrt{3}$. Here the red areas denote the nonclassical features of the state, which can be seen to degrade as the state decoheres. However, since this computation relies on a truncated Hilbert space, we are not able to examine large $|\alpha|$.
We proceed to derive an expression for the fidelity $\mathcal{F}$ of generating an optical cat state in the presence of optical decoherence. We do so by taking the overlap between the ideal cat state~\eqref{eq:pure:cat:state} and the noisy state evolving with $\hat{\mathcal{S}}(\tau)$ given in~\eqref{eq:noisy:dynamics:general}. The vectorized final state is given by $\kket{\varrho(\tau)} = \hat{\mathcal{S}} (\tau) \ket{\alpha}\ket{\beta} \otimes \ket{\alpha^*}\ket{\beta^*}$ and the overlap becomes
\begin{equation}
\mathcal{F}= \bra{\Psi(2\pi)} \hat \varrho(2\pi) \ket{\Psi(2\pi)} = \bbraket{\Psi^\dag(2\pi) | \varrho (2\pi)},
\end{equation}
where $\bbra{\Psi^\dag(2\pi)}$ is the vectorized ideal cat state~\eqref{eq:pure:cat:state}. We find the following expression for the fidelity (see Appendix~\ref{app:fidelity} for the full calculation):
\begin{align} \label{eq:fidelity}
\mathcal{F} &= e^{- 2|\alpha|^2} \sum_{n= 0}^\infty \sum_{n'= 0}^\infty \frac{|\alpha|^{2(n+n')} }{n!n'!} e^{- \tilde{\kappa}_{\rm{c}} \pi (n+n')} \\
&\times \mathrm{exp} \left[\tilde{\kappa}_{\rm{c}}\, |\alpha|^2 \int^{2\pi}_0 \mathrm{d}\tau' e^{- \tilde{\kappa}_{\rm{c}} \tau'} \, e^{- 2 \, i \, A(\tau')(n-n')}\right]. \nonumber
\end{align}
Setting $\tilde{\kappa}_{\mathrm{c}}= 0$, we recover $\mathcal{F} = 1$, as expected. For nonzero $\tilde{\kappa}_{\mathrm{c}}$, we find that~\eqref{eq:fidelity} corresponds exactly to numerical results.
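Equation~\eqref{eq:fidelity} is straightforward to evaluate by truncating the double sum and computing the integral with a trapezoidal rule. The sketch below again assumes the constant-coupling form $A(\tau) = \tilde g_0^2(\tau - \sin\tau)$, a sign convention that drops out entirely at $\tilde\kappa_{\rm c} = 0$:

```python
import numpy as np
from math import factorial

def fidelity(alpha2, kappa, g0, nmax=30, ngrid=2000):
    """Truncated evaluation of the double-sum fidelity expression."""
    tau = np.linspace(0.0, 2 * np.pi, ngrid)
    dt = tau[1] - tau[0]
    A = g0**2 * (tau - np.sin(tau))       # assumed constant-coupling form
    F = 0.0
    for n in range(nmax):
        for m in range(nmax):
            f = np.exp(-kappa * tau) * np.exp(-2j * A * (n - m))
            I = np.sum(0.5 * (f[1:] + f[:-1])) * dt    # trapezoidal rule
            F += (alpha2**(n + m) / (factorial(n) * factorial(m))
                  * np.exp(-kappa * np.pi * (n + m))
                  * np.exp(kappa * alpha2 * I))
    return (np.exp(-2 * alpha2) * F).real
```

At $\tilde\kappa_{\rm c} = 0$ the sum collapses to unity, and for $\tilde\kappa_{\rm c} > 0$ the result respects the analytic upper bound quoted below.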
The expression~\eqref{eq:fidelity} can be simplified further. In Appendix~\ref{app:fidelity}, we show how~\eqref{eq:fidelity} can be expanded in increasing orders of $A(\tau)$ and $\tilde{\kappa}_{\rm{c}} |\alpha|^2$. The somewhat lengthy result is shown in~\eqref{app:eq:simplified:fidelity}. While formally infinite, the expression indicates a complicated relationship between $\tilde{\kappa}_{\rm{c}}$, $\alpha$, and $\tilde{g}_0 $ [note that $A(\tau)\propto \tilde{g}_0^2$]. The advantage of the expression~\eqref{app:eq:simplified:fidelity} is that when either $\tilde{g}_0 \ll1$ or $\tilde{\kappa}_{\rm{c}} |\alpha|^2 \ll 1$, the fidelity can be straightforwardly expanded and evaluated to the desired order.
To obtain a more intuitive limit of the fidelity, we proceed to bound $\mathcal{F}$ from above and below. We find (see Appendix~\ref{app:bounding:fidelity} for the full calculation):
\begin{align} \label{eq:fidelity:bounds}
2 \, e^{- 2 \, |\alpha|^2} \mathrm{sh} + e^{- |\alpha|^2 ( 1 + e^{- \pi \tilde{\kappa}_{\rm{c}}})^2 } \leq \mathcal{F} \leq e^{- |\alpha|^2 ( 1 - e^{- \pi \tilde{\kappa}_{\rm{c}}}) ^2 },
\end{align}
where $\mathrm{sh}:= \sinh (2 \,|\alpha|^2 e^{- \pi \tilde{\kappa}_{\rm{c}}}) $.
We plot $\mathcal{F}$ and its upper and lower bounds~\eqref{eq:fidelity:bounds} as a function of $\tilde{\kappa}_{\rm{c}}$ for different values of $\alpha$ in Fig.~\ref{fig:fidelity}. The shaded areas indicate the upper and lower bounds in~\eqref{eq:fidelity:bounds}. We note that $\mathcal{F}$ rapidly decreases with $\tilde{\kappa}_{\rm{c}}$ for higher values of $|\alpha|$. We also note that a coherent state with $|\alpha| = 1$ retains a fairly high fidelity, which is due to the large non-zero overlap between $\ket{\alpha = 1}$ and the vacuum $\ket{0}$.
The upper bound~\eqref{eq:fidelity:bounds} allows us to bound the decay rate $\tilde{\kappa}_{\rm{c}}$ given a desired fidelity. As an example, let us consider the case where we wish to prepare a two-component optical cat state with $\tilde{g}_0 = 0.5$ using a coherent state with $|\alpha|^2 = 10$. If we wish to generate the cat state with a fidelity of $\mathcal{F} = 0.99$, we find that we require roughly $\tilde{\kappa}_{\rm{c}} \sim 0.01$. The linewidth of a cavity is given by the angular frequency $\kappa_{\rm{c}} = \pi c/2 L F$~\cite{hunger2010fiber}, where $c$ is the speed of light, $L$ is the cavity length, and $F$ is the cavity finesse. Given a cavity of length $L = 10$\,mm and a finesse of $F = $ 500\,000, we find $\kappa_{\rm{c}}/(2\pi) = 15$\,kHz. We thus require a mechanical frequency of $\omega_{\rm{m}} /2 \pi=1.5$\,MHz, such that $\tilde{\kappa}_{\rm{c}} = \kappa_{\rm{c}}/\omega_{\rm{m}} = 0.01$, and a coupling strength of $g_0/2\pi =0.75$\,MHz, such that $\tilde{g}_0 = g_0/\omega_{\rm{m}} = 0.5$. While a finesse, linewidth, and mechanical frequency of similar magnitude have been demonstrated experimentally~\cite{de2020strong,pontin2020ultranarrow}, a single-photon coupling of this strength has not yet been achieved. To access the intracavity cat state, we envision the utilization of a scheme that coherently opens the cavity, such as that proposed by~\citet{tufarelli2014coherently}.
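The numbers in this example are easy to reproduce (with the speed of light rounded to $3\times 10^8$\,m/s, as in the estimate above):

```python
import numpy as np

c, L, F = 3.0e8, 10e-3, 5.0e5             # speed of light (m/s), cavity length (m), finesse
kappa = np.pi * c / (2 * L * F)           # angular linewidth kappa_c, rad/s
linewidth_Hz = kappa / (2 * np.pi)        # kappa_c / 2 pi, approx 15 kHz

omega_m = kappa / 0.01                    # mechanical frequency for kappa_tilde = 0.01
g0 = 0.5 * omega_m                        # coupling strength for g0_tilde = 0.5
```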
\section{Summary and outlook} \label{sec:conclusions}
In this work we solved the Lindblad master equation for optical decoherence in a nonlinear optomechanical system. The solution involved vectorizing the Lindblad equation as well as partitioning the nonunitary time evolution into treatable contributions. Our main result, shown in~\eqref{eq:noisy:dynamics:general}, is a compact expression that encodes the full nonunitary evolution of the optical and mechanical states.
To demonstrate the applicability of our method, we derived the fidelity for preparing optical cat states with a leaking cavity. The resulting expressions allowed us to bound the optical decay rate required to produce cat states at a desired fidelity.
Our method opens up the possibility for considering optical decoherence in a variety of contexts, such as proposals for generating macroscopic superpositions~\cite{bose1999scheme, marshall2003towards, kleckner2008creating}, and sensing schemes~\cite{schneiter2020optimal, qvarfort2021optimal}. Potentially, the method could be used to provide a theoretical description of the regime of large thermal motion and weak single-photon coupling~\cite{brawley2016nonlinear,leijssen2017nonlinear}; however, we note that it does not yet include a drive of the cavity field or an input-output formalism, both of which are fundamental to many experimental setups. We also note that while some of the results presented here were given in closed-form expressions, such as the optical quadratures~\eqref{eq:homodyne:signal}, other properties of the system, such as the number of phonons of the mechanical state, must be studied perturbatively by expanding the expressions for small $\tilde{\kappa}_{\mathrm{c}}$. We also note that once the mechanical coupling is of strength comparable to the mechanical frequency, one can no longer consider optical and mechanical decoherence separately~\cite{hu2015quantum}. We leave these considerations to future work. Finally, we also note that our method applies to any system that exhibits dynamics captured by the nonlinear Hamiltonian~\eqref{eq:Hamiltonian}, such as electro-optical systems~\cite{tsang2010cavity}.
\section*{Acknowledgments}
We thank Lindsay Orr, Suocheng Zhao, Jack Clarke, Daniel Goldwater, Ying Lia Li, Dennis R\"{a}tzel, Marko Toro\v{s}, Doug Plato, Daniel Braun, Igor Pikovski, Myungshik Kim, Ivette Fuentes, Alessio Serafini, Tania Monteiro, Andr\'{e} Xuereb, Anja Metelmann and Sougato Bose for helpful discussions. We also thank the referees for their careful reading of the manuscript, which helped us improve it. S.Q.~was supported by an Engineering and Physical Sciences Research Council Doctoral Prize Fellowship.
\section*{Data availability statement}
The code used to generate the Wigner function plot (Fig.~\ref{fig:Wigner}) can be found in the following GitHub repository: \href{https://github.com/sqvarfort/noisy-optical-cat-states}{https://github.com/sqvarfort/noisy-optical-cat-states}.
\bibliographystyle{apsrev4-2}
\section{Introduction \label{sec:intro}}
High energy physics theories for dark energy causing the accelerated
expansion of the universe face issues of naturalness -- why is the
dark energy density measured today so different from the scale set by
the initial conditions of the high energy, early universe, and how is
the current low energy form of the potential energy related to the
initial high energy form, which should receive quantum corrections?
The cosmological constant in particular suffers both problems. Making
the field dynamical helps. To more fully solve the amplitude problem
one would like an attractor solution, where the present behavior is
largely insensitive to the exact initial conditions. To ameliorate the
form problem one would like a symmetry or geometric quantity that
protects the potential, or ideally have it predicted from a fundamental
theory such as string theory. Quintessence models cannot achieve both
properties, and even the attractor solutions have difficulty in naturally
reaching a dark energy equation of state $w\approx-1$ \cite{zws}
as indicated by cosmological observations.
Paper 1 \cite{akl}, following the pioneering paper of \cite{martinyam},
highlighted the DBI class from string theory as possessing
desirable properties to serve as dark energy. In particular, it found
not only the attractor solutions accessible to quintessence, but three
new classes that could achieve or approach $w=-1$, the cosmological
constant state.
String theory can impose a specific non-trivial kinetic
behavior through the Dirac-Born-Infeld (DBI) action that arises naturally
in consideration of D3-brane motion within a warped compactification.
The field properties are related to the geometric
position of a three dimensional brane within higher dimensions, and
the brane tension and potential functions are (in principle) given
by string theory, in particular through the AdS/CFT correspondence.
In this paper we extend the attractor solutions and more fully
consider the entire evolution and its observational consequences.
In \S\ref{sec:quad} we examine in detail the case motivated by the
simplest physics and find the viable regions of parameter space
constrained through cosmological
observations. We show in \S\ref{sec:exphis}
how to construct the required potential for a given
cosmic expansion history or equation of state. Generalizing DBI theory
to multiple branes or non-standard branes adds a degree of freedom
which we analyze in \S\ref{sec:wdbi}. We explore a new window on
constraining DBI dark energy with observations in terms of the dark
energy sound speed -- this gives a distinct prediction from
quintessence -- and its effects on the matter density power spectrum
in \S\ref{sec:sound}.
\section{Constraints on a Natural DBI Model \label{sec:quad}}
The DBI action arises in Type IIB string theory in terms of the
volume swept out by a D3-brane in a warped geometry, coupled to
gravity. The form is
\begin{equation}
S=\int d^4x\,\sqrt{-g}\,\left[-T(\phi)\sqrt{1-\dot\phi^2/T(\phi)}
+T(\phi)-V(\phi)\right], \label{eq:lag}
\end{equation}
where we ignore the spatial derivatives of $\phi$.
$T$ is the warped brane tension and $V$ is the potential arising
from interactions with Ramond-Ramond fluxes or other sectors.
See, e.g., \cite{siltong} for more details. The kinetic factor is
often written in terms of a Lorentz boost factor
\begin{equation}
\gamma\equiv\frac{1}{\sqrt{1-\dot\phi^2/T}}\,, \label{eq:gamdef}
\end{equation}
and the DBI dark energy equation of state is
\begin{equation}
w\equiv\frac{p_\phi}{\rho_\phi}=-\frac{\gamma^{-1}-1+v}{\gamma-1+v}
\,, \label{eq:wdef}
\end{equation}
where $v=V/T$. The nonrelativistic limit $\gamma-1\ll1$ leads
to the quintessence action and equation of state.
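As a quick numerical sanity check of this limit, the following sketch (with illustrative parameter values, not drawn from any fit) compares the DBI equation of state of Eq.~(\ref{eq:wdef}) at small $\dot\phi^2/T$ with the canonical quintessence result $w=(X-V)/(X+V)$, where $X=\dot\phi^2/2$:

```python
import math

def w_dbi(gamma, v):
    # DBI equation of state, Eq. (wdef): w = -(1/gamma - 1 + v)/(gamma - 1 + v)
    return -(1.0 / gamma - 1.0 + v) / (gamma - 1.0 + v)

def w_quint(X, V):
    # canonical scalar field: w = (X - V)/(X + V), with X = phidot^2/2
    return (X - V) / (X + V)

# illustrative values: tension T, potential V, and phidot^2 << T
T, V = 1.0, 0.5
phidot2 = 1e-4 * T
gamma = 1.0 / math.sqrt(1.0 - phidot2 / T)
print(w_dbi(gamma, V / T), w_quint(0.5 * phidot2, V))
```

The two expressions agree up to corrections of order $(\dot\phi^2/T)^2$, as expected from expanding the square root in the action.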
In \cite{akl} the main consideration was the critical points of the
equations of motion and the asymptotic attractor behavior. In this
section we consider perhaps the most natural forms for the tension and
potential and follow the specific dynamics throughout the history of
the universe. A complete string theory would predict the functions
$T$ and $V$; while this is not available one can use known behaviors
for certain circumstances. For a pure AdS$_5$ geometry with radius
$R$, the warped tension is given by
\begin{equation}
T(\phi)=\tau\,\phi^4\,, \label{eq:tphi}
\end{equation}
with $\tau = 1/(g_s\tilde\lambda)$, where $g_s$ is the string coupling,
$\alpha'$ is the inverse string tension, and $\tilde\lambda=R^4/\alpha'^2$
is identified as the 't Hooft coupling in the AdS/CFT correspondence.
The potential is expected to have quadratic terms arising from the
breaking of conformal invariance due to couplings to gravity and other
sectors. In addition, quartic terms enter from such interactions,
while higher order terms are suppressed, e.g.\ by powers of $1/R$
\cite{siltong,ast}. We therefore take an ansatz
\begin{equation}
V(\phi)=m^2\phi^2+cT=m^2\phi^2+c\tau\phi^4\,. \label{eq:vphi}
\end{equation}
Note that we take the potential to have a true zero minimum so that
there is no intrinsic cosmological constant.
For reference, we briefly review the equation of motion. The
DBI version of the Klein-Gordon equation is
\begin{equation}
\ddot\phi+3\gamma^{-2} H\dot\phi+\gamma^{-3}V_{,\phi}+\frac{1}{2}(3\gamma^{-2}
-2\gamma^{-3}-1)\,T_{,\phi}=0\,,
\end{equation}
where $H$ is the Hubble parameter, $V_{,\phi}=dV/d\phi$ and
$T_{,\phi}=dT/d\phi$. The energy-momentum tensor has perfect fluid form
with energy density $\rho_\phi$ and pressure $p_\phi$ given by
\begin{equation}
\rho_\phi=(\gamma-1)\,T+V\quad ; \quad p_\phi=(1-\gamma^{-1})\,T-V\,,
\end{equation}
and so the equation of motion can also be viewed in terms of the
continuity equation
\begin{equation}
\dot\rho_\phi=-3H(\rho_\phi+p_\phi)=-3H(\gamma-\gamma^{-1})\,T\,.
\end{equation}
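The last equality follows directly from the expressions for $\rho_\phi$ and $p_\phi$ above; a minimal numeric check of the identity $\rho_\phi+p_\phi=(\gamma-\gamma^{-1})\,T$ with arbitrary illustrative values:

```python
def rho_p(gamma, T, V):
    # DBI energy density and pressure of the homogeneous field
    rho = (gamma - 1.0) * T + V
    p = (1.0 - 1.0 / gamma) * T - V
    return rho, p

gamma, T, V = 3.0, 2.0, 0.7          # illustrative values only
rho, p = rho_p(gamma, T, V)
print(rho + p, (gamma - 1.0 / gamma) * T)   # the two sides agree
```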
For the form of Eq.~(\ref{eq:vphi}), the potential for large $\phi$ is
dominated by the quartic term
while for small $\phi$ it looks like a quadratic potential.
\cite{akl} identified the ratio $V/T$ as particularly important for
determining the attractor, if any. With Eq.~(\ref{eq:tphi}) this
implies that
\begin{equation}
v\equiv\frac{V}{T}=c+\mu^2\,(\kappa\phi)^{-2}\,, \label{eq:vmu}
\end{equation}
where $\mu^2=(m^2\kappa^2/\tau)$ and $\kappa^2=8\pi G$.
At late times, $\phi$ rolls to zero and the quantity $v$ is dominated
by the second term in Eq.~(\ref{eq:vmu}) so $v\to\infty$, giving the
ultrarelativistic class of attractor solutions discussed by \cite{akl}.
In particular, since $\lambda\equiv-(1/\kappa V)dV/d\phi\sim 1/\phi$
and $\gamma\sim v\sim\phi^{-2}$ in this limit, then the secondary
attractor parameter of \cite{akl} is $\lambda^2/\gamma=$ const. This implies
that it is the second class of attractor solution from Table I of
\cite{akl} that is reached and at late times $w=-1+\lambda^2/(3\gamma)$.
However the evolution at present and at all times before the asymptotic
future is of interest.
Figure~\ref{fig:cdmodels} illustrates the dynamical evolution of
these models in the $w$-$w'$ plane, where $w'=dw/d\ln a$, for
various values of $c$ and $\mu^2$.
The most noticeable common characteristic of the field evolution is
that it is a thawing field. That is, the dynamical history lies
within the thawing region of the $w$-$w'$ phase
plane defined originally for quintessence as bounded by
$1\le w'/(1+w)\le3$, as one of the two major classes
of evolution \cite{caldwelllinder}. Indeed, the field evolves away
from a frozen, $w=-1$ state in the high redshift, matter dominated
era along the $w'=3(1+w)$ line defined by \cite{caldwelllinder} and
shown to be a generic dynamical flow solution by \cite{cahndl}.
The evolution remains within
the thawing region; by today (defined by $\Omega_\phi=0.72$ and
denoted by an x along the evolutionary track) the field lies
roughly at $w'\approx 1+w$.
\begin{figure}[!htb]
\begin{center}
\psfig{file=dbi2models.ps,width=3.4in}
\caption{The DBI solutions using quartic/quadratic potential/tension
functions of Eq.~(\ref{eq:vmu}) are plotted in the $w$-$w'$ plane.
The initial thawing behavior, the values today (denoted by x's along
the curves) with property $w'\approx 1+w$, and future attractors
to a constant $w$ determined by the value of $\mu^2$ are all evident.
}
\label{fig:cdmodels}
\end{center}
\end{figure}
At early times, in
the matter dominated era, the field is frozen to a cosmological constant
state, until the DBI dark energy density becomes nonnegligible. This
is independent of initial field value $\phi_i$ and velocity $\gamma_i$,
as Fig.~\ref{fig:gami}
illustrates. The freezing represents the effects of matter domination
and is a different sort of attractor than the late time solution.
The thawing occurs in a manner that does depend on $\phi_i$, but is
insensitive today to $\phi_i$ for $|\phi_i|<1$. In the future, the
DBI attractor ensures the same solution regardless of $\phi_i$.
\begin{figure}[!htb]
\begin{center}
\psfig{file=dbic1d1gami.wa2.ps,width=3.4in}
\caption{The high Hubble friction during matter domination freezes the
field to $w=-1$ for many e-folds in expansion, despite an initial field
velocity measured in terms of the Lorentz factor $\gamma_i$. This
pseudo-attractor ensures that models with different $\gamma_i$ then follow
the same trajectory, as shown by the convergence of tracks from the
left side (early times) to the middle (later times). (Tracks start
in the plot at $\Omega_\phi=10^{-10}$, with $\phi_i=-4$ and the
$\gamma_i$ as labeled.) The light,
green curves diverging from the middle to the right side (today) show
this is not a true attractor since the thawing rate does
depend on the initial field value $\phi_i$ (here fixing $\gamma_i=1$).
However, the late time,
true attractor from DBI dynamics will bring all these trajectories
together; indeed at present all models with $\kappa\phi_i<1$ have the
same dynamics.
}
\label{fig:gami}
\end{center}
\end{figure}
At late times the field only notices the quadratic part of the
potential; that is, this attractor solution only requires that the
potential look quadratic near the minimum -- a highly generic state.
The evolution of the field up to the present, however, does depend on
the quartic term: contrast the $(c,\mu^2)=(0,16)$ and $(1,16)$ curves
in Fig.~\ref{fig:cdmodels}.
At all times until the final asymptotic
value the specific evolution differs, in particular up to the present.
These differences allow us to constrain the parameters of the theory
by comparing to cosmological observations. Here we consider the
distance-redshift relation over the range $z=0-1.7$, as given by
Type Ia supernovae. We examine the maximum fractional difference
$\delta d/d_\Lambda$ of the model predictions for distances from those
of the flat, cosmological constant plus matter universe with
$\Omega_\Lambda=1-\Omega_m=0.72$.
One question we can ask is what are the bounds on $\mu^2$ such that
the distance deviation is less than some value, say 2\%. Large values
of $\phi_i$ give a lengthy frozen state (note $(1/V)[dV/d(\kappa\phi)]
\sim 1/\phi$ becomes small), lasting until close to the present, so
$w\approx-1$. This will give little deviation from a cosmological
constant so the most stringent bounds on $\mu^2$ occur for small $\phi_i$.
For $\kappa\phi_i\lesssim1$, though, the potential tends to be dominated
by the quadratic, attractor part and the field quickly forgets the
initial value (compare the $\kappa\phi_i=-0.1$ vs.\ $\kappa\phi_i=-1$
curves in Fig.~\ref{fig:gami}). This also makes the bound fairly
insensitive to the value of $c$. A fitting formula to the constraint on
$\mu$ is
\begin{equation}
\mu>7.1\,(1+0.002c)\left(\frac{\delta d/d_\Lambda}{2\%}\right)^{-1}\,. \label{eq:mubound}
\end{equation}
Note the weak dependence on $c$. The inverse proportionality to
$\delta d/d_\Lambda$, for small deviations, arises from the maximum
deviation in the equation of state $1+w$. The attractor value is
given by \cite{akl}
\begin{equation}
1+w_c=\frac{2}{3\mu^2}\left[-1+\sqrt{1+3\mu^2}\right]\,,
\end{equation}
which is inversely proportional to $\mu$, for $\mu^2\gg1$.
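The stated $1/\mu$ scaling is easy to confirm numerically; the snippet below (pure arithmetic, no model input) compares the exact attractor value with the large-$\mu$ expansion $1+w_c\approx 2/(\sqrt{3}\,\mu)$:

```python
import math

def one_plus_wc(mu2):
    # attractor deviation from w = -1: (2/(3 mu^2)) * (-1 + sqrt(1 + 3 mu^2))
    return (2.0 / (3.0 * mu2)) * (-1.0 + math.sqrt(1.0 + 3.0 * mu2))

for mu in (10.0, 50.0, 250.0):
    exact = one_plus_wc(mu * mu)
    approx = 2.0 / (math.sqrt(3.0) * mu)    # large-mu limit
    print(mu, exact, approx, exact / approx)
```

The ratio of exact to approximate value approaches unity as $\mu$ grows, confirming the inverse proportionality quoted above.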
While Eq.~(\ref{eq:mubound}) gives the most stringent bound to agree
with observations, models with lesser values of $\mu$ are viable if
the values of $\phi_i$ are large enough. Figure~\ref{fig:muphic}
shows the constraints in the $c$-$\phi_i$ plane for a maximum allowed
distance deviation of 2\%. For $\mu^2\gtrsim55$, the distance deviation
is less than 2\% for all cases with $c<20$. The maximum tends to be
quite shallow: for $\mu^2=50$ most of the disallowed lower half plane
actually has $0.02<\delta d/d<0.021$. The largest deviation for
$\mu^2=50$ (40) occurs for $c=20$ and is at the 2.18\% (2.44\%) level.
The figure exemplifies how cosmological observations can directly
inform us on string theory parameters.
\begin{figure}[!htb]
\begin{center}
\psfig{file=muphic.ps,width=3.4in}
\caption{Model parameters can be constrained by comparison to distance
data, here taken to be within 2\% of $\Lambda$CDM. Above the solid curves
for each value of $\mu^2$ the deviation is less than 2\% (as $\phi_i$
gets large, $w$ has deviated less from the value $w=-1$ imposed by
the matter dominated freezing). Below the solid curves the distance
deviation is larger, but often by a small amount: only within the
dotted, black contour is the deviation more than 2.1\% for $\mu^2=50$,
and similarly the dashed, red contour bounds the deviation to 2.4\% for
$\mu^2=40$.
}
\label{fig:muphic}
\end{center}
\end{figure}
\section{Customized Expansion History \label{sec:exphis}}
From Eq.~(\ref{eq:wdef}), we can write down a solution for the form
of the potential for any expansion history desired, i.e.\ any given
equation of state evolution $w(a)$ (including $w$ constant).
The reduced potential $v=V(\phi)/T(\phi)$ must satisfy
\begin{equation}
v(a)=1-\frac{w(a)\,\gamma(a)}{1+w(a)}-\frac{\gamma(a)^{-1}}{1+w(a)}
\,. \label{eq:vwcon}
\end{equation}
Note this expression holds even for a time dependent $\gamma$ (we are
here interested in the full evolution, not just the attractor state).
Combining this expression for $v(a)$ with the solutions of the equations
of motion for $\gamma(a)$ and $\phi(a)$, one can construct the potential
$V(\phi)$ for any desired equation of state function.
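As a consistency check, Eq.~(\ref{eq:vwcon}) is simply the inversion of Eq.~(\ref{eq:wdef}) for $v$ at fixed $\gamma$; a round-trip test with arbitrary illustrative values:

```python
def w_of(gamma, v):
    # Eq. (wdef): w = -(1/gamma - 1 + v)/(gamma - 1 + v)
    return -(1.0 / gamma - 1.0 + v) / (gamma - 1.0 + v)

def v_of(gamma, w):
    # Eq. (vwcon): v = 1 - w*gamma/(1+w) - (1/gamma)/(1+w)
    return 1.0 - w * gamma / (1.0 + w) - (1.0 / gamma) / (1.0 + w)

gamma, v = 2.0, 5.0              # illustrative values only
w = w_of(gamma, v)
print(w, v_of(gamma, w))         # v is recovered
```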
Figure~\ref{fig:wconw} shows the potential $V(\phi)$ constructed
(taking $T\sim\phi^4$) to
give constant $w$ for all times, for the cases $w=-0.99$, $-0.9$, and
$-0.8$. (If $w=-1$ exactly then the field does not roll at all and
the potential cannot be reconstructed.)
The conditions for $w\approx-1$ to be realized (for constant $w$)
can be written through Eq.~(\ref{eq:wdef}) in terms of the initial
values (note we are not describing an attractor solution) and are that either
$v_i\gg \gamma_i$ or $v_i\gg \gamma_i-1$. For $\gamma_i=1+\epsilon$,
with $\epsilon$ a small quantity,
$w\approx -1+2\epsilon/v_i$ if the second condition holds. When the
first condition holds, $w\approx -1+[(\gamma_i^2-1)/\gamma_i](1/v_i)$.
In either case, $w\to-1$. The potential is steep initially (roughly
$\lambda^2\sim\Omega_{\phi,i}^{-1}$) and the field
rolls to $\phi=0$. The shape of the potential near $\phi=0$ is given
by $V(\phi\ll1)\sim\phi^2$ (since we took $T\sim\phi^4$), as required
by our previous results. However, as noted there, the potential when
the dynamics is off the attractor trajectory does not need to stay in
the asymptotic form.
\begin{figure}[!htb]
\begin{center}
\psfig{file=wcon2.vphi.ps,width=3.4in}
\caption{Solutions for the potential function are exhibited that deliver
constant values of the equation of state $w$. Solid portions of the curves
correspond to the region over which the field has rolled by the
present. Short dotted arcs near $\phi=0$ show the $V\sim\phi^2$
asymptotic behavior. The potentials do not contain an
explicit cosmological constant (i.e.\ $V(\phi)$ has a true zero minimum),
but the equations of state can approach $w=-1$ due to the DBI dynamics.
}
\label{fig:wconw}
\end{center}
\end{figure}
\section{Multi-Brane DBI \label{sec:wdbi}}
In the presence of multiple D3-branes or a non-BPS brane, the DBI
action acquires an additional potential $U(\phi)$ multiplying the DBI term
\cite{gumward,saridakis},
\begin{eqnarray}
S&=&\int d^4x\,\sqrt{-g} \,\times \nonumber \\
&\,&\left[-U(\phi)\,T(\phi)\sqrt{1-\dot\phi^2/T(\phi)}+T(\phi)-V(\phi)\right]. \label{eq:lagu}
\end{eqnarray}
The energy-momentum tensor
takes a perfect fluid form with energy density $\rho_\phi$ and pressure
$p_\phi$ given by
\begin{equation}
\rho_\phi=(\gamma U-1)\,T+V \quad ;\quad p_\phi=(1-\gamma^{-1}U)T-V\,. \label{eq:rhodef}
\end{equation}
The Lorentz factor $\gamma$ is still given by Eq.~(\ref{eq:gamdef}) and
the equation of state for the DBI field is
\begin{equation}
w\equiv \frac{p_\phi}{\rho_\phi}=-\frac{\gamma^{-1}U-1+v}{\gamma U-1+v}
\,. \label{eq:wudef}
\end{equation}
The extra freedom from the additional potential $U$ means that
interesting results occur in both the nonrelativistic and relativistic
cases, not just $\gamma\to\infty$ as in the standard DBI model.
\subsection{Equations of Motion and Attractors \label{sec:wattrx}}
The equation of motion for the field follows from either functional
variation of the action or directly from the continuity equation for
the energy density,
\begin{equation}
\rho'_\phi=-3(\rho_\phi+p_\phi)=-3(\gamma -\gamma^{-1})\,UT\,,
\end{equation}
where a prime denotes a derivative with respect to the e-folding parameter,
$d/d\ln a$.
To begin, we define the contributions of the tension and potential
to the vacuum energy density relative to the critical density,
\begin{equation}
x^2=\frac{\kappa^2}{3H^2}(\gamma U-1)\,T \quad ; \quad
y^2=\frac{\kappa^2}{3H^2}V \,, \label{eq:xdef}
\end{equation}
where $\kappa^2=8\pi G$ and $H$ is the Hubble parameter. We allow
the parameter $x^2<0$ so as to unify the treatment of when $\gamma U>1$
and $\gamma U<1$.
The equations of motion are given by
\begin{eqnarray}
\frac{1}{2}(x^2)'&=&-\frac{3}{2}x^2(1-x^2)\frac{1-\gamma^{-1}U}{\gamma U-1}
-\frac{3}{2}x^2y^2 \nonumber \\
&\,&+\frac{\sqrt{3}}{2}\lambda\,y^2\sqrt{\frac{(\gamma^2-1)\,x^2}{\gamma^2(\gamma U-1)}}\\
y'&=&\frac{3}{2}\,x^2y\,\frac{\gamma^2-1}{\gamma}\frac{U}{\gamma U-1}
+\frac{3}{2}\,y\,(1-x^2-y^2) \nonumber \\
&\,&-\frac{\sqrt{3}}{2}\lambda\,y\sqrt{\frac{(\gamma^2-1)\,x^2}{\gamma^2(\gamma U-1)}} \\
\kappa\phi'&=&\sqrt{\frac{3(\gamma^2-1)\,x^2}{\gamma^2(\gamma U-1)}}\,,
\end{eqnarray}
where $\lambda=-(1/\kappa V)dV/d\phi$ and
\begin{equation}
\gamma U=1+\frac{V}{T}\frac{x^2}{y^2}\,. \label{eq:gamv}
\end{equation}
When $U=1$ these equations reduce to those in \cite{akl}. The case
$\gamma U=1$ can be handled by the above equations since the denominator
$\gamma U-1$ always occurs in the finite ratio $x^2/(\gamma U-1)$.
We are interested in the DBI field as late time accelerating dark
energy, not for inflation, so we take the initial conditions in the
matter dominated universe and define the present by
$\Omega_\phi=0.72$.
The attractor solutions to the equations of motion have the critical values
\begin{eqnarray}
x^2_{c1}&=&\frac{\lambda^2}{3U^2}\frac{\gamma U-1}{\gamma^2-1} \quad ; \quad
x^2_{c2}=\frac{3\gamma^2}{\lambda^2}\frac{\gamma U-1}{\gamma^2-1} \\
y^2_{c1}&=&1-\frac{\lambda^2}{3U^2}\frac{\gamma U-1}{\gamma^2-1} \quad ; \quad
y^2_{c2}=\frac{3}{\lambda^2}\frac{\gamma^2-\gamma U}{\gamma^2-1} \\
\Omega_{\phi,c1}&=&1 \quad ; \quad \Omega_{\phi,c2}=\frac{3\gamma U}{\lambda^2} \label{eq:ophic} \\
w_{\phi,c1}&=&-1+\frac{\lambda^2}{3\gamma U} \quad ; \quad w_{\phi,c2}=0 \,. \label{eq:wc}
\end{eqnarray}
These are stable, late time attractors, with the $w\ne0$ solution reached
for $\lambda^2<3\gamma U$. The form of these solutions
reveals that paths to the attractor classes are more diverse compared
to standard DBI theory. For example, new windows appear for obtaining
$w=-1$ if $U(\phi_c)\to\infty$ sufficiently quickly. In particular,
this cosmological constant behavior can even be realized when $\gamma\to1$,
without the potential running to infinite field values.
Now the important limit is when $\gamma U\to\infty$ rather than
$\gamma$ alone. These attractors can therefore be achieved when $\gamma$
remains nonrelativistic but $U$ gets large for the asymptotic field value.
The attractor value for $w$ depends on two key parameters: $\lambda^2/U$
and $v\lambda^2/U^2$. The explicit solution is given by
\begin{equation}
w=-1+2\,\left[1+\sqrt{1+12\,\frac{v-1}{\lambda^2}+\left(\frac{6U}{\lambda^2}\right)^2}\right]^{-1}\,, \label{eq:wulim}
\end{equation}
and the value of the Lorentz boost factor is
\begin{equation}
\gamma=\frac{\lambda^2}{6U}+\sqrt{\left(\frac{\lambda^2}{6U}\right)^2
+\frac{\lambda^2\,(v-1)}{3U^2}+1}\,. \label{eq:gamlim}
\end{equation}
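Eqs.~(\ref{eq:wulim}) and (\ref{eq:gamlim}) can be cross-checked against the attractor relation $w=-1+\lambda^2/(3\gamma U)$ of Eq.~(\ref{eq:wc}); a numeric sketch with arbitrary illustrative parameter values:

```python
import math

lam2, U, v = 4.0, 1.5, 3.0       # illustrative lambda^2, U, and v = V/T

# gamma on the attractor, Eq. (gamlim)
s = lam2 / (6.0 * U)
gamma = s + math.sqrt(s * s + lam2 * (v - 1.0) / (3.0 * U * U) + 1.0)

# w from the attractor relation w = -1 + lambda^2/(3 gamma U), Eq. (wc)
w_from_gamma = -1.0 + lam2 / (3.0 * gamma * U)

# w directly from Eq. (wulim)
w_direct = -1.0 + 2.0 / (1.0 + math.sqrt(1.0 + 12.0 * (v - 1.0) / lam2
                                         + (6.0 * U / lam2) ** 2))
print(w_from_gamma, w_direct)    # the two expressions agree
```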
Table~\ref{tab:ucrit} shows the parameter combinations that lead to
attractors with accelerated expansion. As stated, although the
essential classes of attractors (the four groups divided by the
horizontal rows) are the same as with standard DBI (cf.~Table~1
of \cite{akl}), the {\it paths\/} to obtaining them are multiplied.
These can deliver cosmological constant like behavior nonrelativistically, due
to the influence of the multibrane potential $U$, as well as new
approaches to $w=$ constant, arbitrarily close to $w=-1$.
(However, as we discuss in the next subsection, one can also absorb
$U$ into standard DBI.)
\begin{table}[htbp]
\begin{center}
\begin{tabular*}{0.9\columnwidth}
{@{\extracolsep{\fill}} c c c c c c}
\hline
$V/T$ & $\lambda^2/U$ & $\lambda^2v/U^2$ & $\gamma$ & $\gamma U$ & $w$ \\
\hline
$\infty$ & moot & $\infty$ & $\infty$ & $\infty$ & $-1$ \\
$\infty$ & $0$ & 0 & 1 & $\infty$ & $-1$ \\
\hline
$\infty$ & $\infty$ & $\infty$ & $\infty$ & $\infty$ & const \\
$\infty$ & $\infty$ & const & $\infty$ & const & const \\
$\infty$ & const & const & const & const & const \\
$\infty^\dagger$ & 0 & 0 & 1 & $\infty$ & $-1$ \\
\hline
const & const & const & const & const & const \\
const & $\infty$ & $\infty$ & $\infty$ & const & const \\
const & 0 & 0 & 1 & $\infty$ & $-1$ \\
\hline
0 & const & 0 & 1 & const & const \\
0 & 0 & 0 & $1$ & const${}^*$ & $-1$ \\
\hline
\end{tabular*}
\end{center}
\caption{Summary of accelerating attractor properties. The columns
give the values of the quantities for the attractor solution, all of
which possess asymptotic $\Omega_\phi=1$. Each grouping of rows corresponds
to one of the classes of standard DBI from Table~1 of \cite{akl}, with
the first row of each group being the standard DBI solution. We see
that multibrane DBI increases the number of ways of obtaining
accelerating attractor
solutions by almost a factor 3 over standard DBI and a factor 11 over
quintessence. The dagger indicates that while $V/T=\infty$, $(V/T)/\lambda^2=$
const. The asterisk in the last row denotes that the constant
is 0 unless $U\to\infty$. The values of constant $w$ are given by
Eq.~(\ref{eq:wulim}).}
\label{tab:ucrit}
\end{table}
Class 1 in the first group of rows of the table achieves cosmological
constant behavior. This can be realized, for example, through taking
$T\sim\phi^m$,
$V\sim\phi^c$, $U\sim\phi^p$ with any $p<-2$. In other words, even
forms of the tension $T$ and potential $V$ that in standard DBI do
not give acceleration, let alone $w=-1$, can give an asymptotic
cosmological constant state if $U$ increases sufficiently rapidly, e.g.~having
an inverse power law form with $p<-2$. The steepness of $U$ trumps the
behavior of $V$, $T$ so also the standard case giving $w$ constant
(e.g.~$T\sim\phi^4$, $V\sim\phi^2$) would instead yield $w=-1$.
Class 2 in the second group of rows of the table delivers a constant $w$,
which can be made arbitrarily close to $-1$ depending on parameter values.
An example would be given by the additional multibrane potential with $p=-2$.
Here, though, if $V$ and $T$ were such that they would cause an attractor
to $w=-1$, then this still holds. Alternately, if $V$ and $T$ could not
attain an accelerating attractor, $U\sim\phi^{-2}$ can achieve this with
a constant $w$. Note that the presence of $U$ also alters the value of
constant $w$ (cf.\ Eq.~\ref{eq:wulim}) from the standard DBI case where
$V$, $T$ give a constant $w$.
However, if $U$ does not get large sufficiently quickly, e.g.~$p>-2$,
then $V$ and $T$ determine the attractor behavior in the same manner
as in standard DBI. Figures~\ref{fig:att42p} and \ref{fig:att43p}
illustrate these various behaviors, for cases where standard DBI would
predict a constant $w$ attractor and no accelerating attractor,
respectively. (We do not show the $V\sim\phi^1$ case because as stated
this has identical asymptotic behavior to the standard DBI theory.)
\begin{figure}[!htb]
\begin{center}
\psfig{file=att42p.ps,width=3.4in}
\caption{The presence of the multibrane potential $U$ alters the
conditions for attractor solutions and opens up new routes to approach $w=-1$.
When $\lambda^2/U\to0$, then the cosmological constant is the
asymptotic solution. When this combination goes to a constant value,
then $w\to$ constant given by Eq.~(\ref{eq:wulim}), and when the
combination goes to $\infty$ then the standard DBI solution is unchanged.
For a power law
potential $U\sim\phi^p$, these correspond to $p<-2$, $=-2$, $>-2$.
}
\label{fig:att42p}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\psfig{file=att43p.ps,width=3.4in}
\caption{As Fig.~\ref{fig:att42p} but for a form of the standard
potential $V$ that would not give an accelerating attractor in
standard DBI theory. Here, however, the multibrane potential can
give constant $w$ (for $p=-2$) or a cosmological constant (for $p<-2$).
}
\label{fig:att43p}
\end{center}
\end{figure}
Class 3 is characteristic of exponential potential and tension, where
the field runs off to infinity. However, the behavior of $U$ can
determine the value of $w$, leading to either a constant $w\ne-1$
attractor or
a cosmological constant state, unlike in standard DBI. Class 4 is similar
to standard quintessence but again $U$ can deliver $w=-1$.
Just as in \S\ref{sec:exphis}, one can design a function $U$ to fit a
specific expansion history, or equation of state, through
Eq.~(\ref{eq:wudef}). Also note that
a constraint on $U$ exists from the nonnegativity of the
energy density in Eq.~(\ref{eq:rhodef}). This imposes the condition
\begin{equation}
\gamma U \ge 1-v\,. \label{eq:ucond}
\end{equation}
This is automatically satisfied for $\gamma U\ge1$
(we always take $V$, $T$ nonnegative). For $\gamma U<1$ though it limits
the allowed forms of $U(\phi)$.
When $\gamma U=1$ then $w=-1+\lambda^2/3$ at all times, not just asymptotically
(when $\lambda^2>3$ there is no attractor). This looks like a standard
quintessence attractor solution, but can actually be realized by
a relativistic $\gamma$ model with $U<1$.
\subsection{Single Brane Equivalence \label{sec:nonrel}}
In examining the nonrelativistic limit of the action (\ref{eq:lagu})
we see that it approaches quintessence with a redefinition of the
field and potential. This suggests a deeper mapping between the
multibrane and standard single brane DBI actions. By defining
\begin{equation} \label{varphi}
\varphi \equiv \int \sqrt{U} d\phi
\end{equation}
we can rewrite the action \eqref{eq:lagu} in terms of $\varphi$:
\begin{equation}\label{eq:lagu2}
S=\int d^4x\,\sqrt{-g} \left[-TU\sqrt{1-\dot\varphi^2/(TU)}+T-V\right].
\end{equation}
Comparing this action with Eq.~\eqref{eq:lag}, we see that it
is equivalent to the original DBI action with tension $\hat T$ and potential
$\hat V$ given by
\begin{align}
\hat T &= TU, \\
\hat V &= TU - T + V\,.
\end{align}
Therefore, the general analysis of \cite{akl} applies to the multibrane
situation when viewed in terms of the equivalent single brane, hatted
quantities. Specifically,
the formulae \eqref{eq:xdef}--\eqref{eq:wc} hold for
\begin{equation}
x^2=\frac{\kappa^2}{3H^2}(\gamma -1)\,\hat T \quad ; \quad
y^2=\frac{\kappa^2}{3H^2}\hat V \,, \label{eq:xdefh}
\end{equation}
with the replacement $U \rightarrow 1$, $v\to\hat v$, $\phi\to\varphi$,
and $\lambda\to\hat\lambda$ where $\hat\lambda=-(1/\hat V)d\hat V/d(\kappa\varphi)$.
In this formulation, the attractor values for $w$ and $\gamma$,
Eqs.~(\ref{eq:wulim}) and (\ref{eq:gamlim}), take
the same form as in standard DBI.
As an explicit example of the mapping between the multibrane and
single brane views, let us consider the case where the (unhatted)
tension and the potentials are given by power laws,
\begin{equation}
T \sim \phi^m,\quad V \sim \phi^c,\quad U \sim \phi^p,
\end{equation}
and investigate how the attractor values of $\gamma$ and $w$ change
as the exponents are varied. This gives an alternate view and
derivation of the results in Sec.~\ref{sec:wattrx}.
We assume $m$ and $c$ are positive for
simplicity. From Eq.~\eqref{varphi}, the redefined field $\varphi$
is related to the original field $\phi$ as $\varphi \sim \phi^{(p+2)/2}$
and the hatted quantities become
\begin{align}
\hat T &= TU \sim \varphi^{\frac{2(m+p)}{p+2}}, \notag \\
\hat V &= TU - T + V
\sim \varphi^{\frac{2(m+p)}{p+2}}
- \varphi^{\frac{2m}{p+2}} + \varphi^{\frac{2c}{p+2}}, \notag \\
\hat v &= \hat V/ \hat T
\sim 1 - \varphi^{-\frac{2p}{p+2}} + \varphi^{\frac{2(c-m-p)}{p+2}}.
\end{align}
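The exponent mapping can be verified numerically; the sketch below uses power-law forms with illustrative exponents, dropping the overall normalization $2/(p+2)$ in $\varphi$ (which only rescales the proportionality constants):

```python
def hatted_tension(m, p, phi):
    # T*U in terms of phi, and the same quantity re-expressed via
    # varphi ~ phi^((p+2)/2) as varphi^(2(m+p)/(p+2))
    varphi = phi ** ((p + 2.0) / 2.0)
    direct = phi ** (m + p)
    mapped = varphi ** (2.0 * (m + p) / (p + 2.0))
    return direct, mapped

for (m, p) in [(4, 1), (4, -1), (2, 3)]:
    direct, mapped = hatted_tension(m, p, phi=2.0)
    print(m, p, direct, mapped)
```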
Note that, if $p<-2$, $\varphi$ is inversely proportional to $\phi$ and
the small-field limit for one is the large-field limit for the other.
Thus it is natural to separately study the cases $p>-2$ and $p<-2$.
For the case $p>-2$, all the powers of the terms in $\hat V$ are positive
and $\varphi$ would go to zero asymptotically. Then the logarithmic derivative
$\hat\lambda \sim 1/\varphi$ diverges, giving the ultrarelativistic class of
attractor solution $\gamma \rightarrow \infty$. To obtain $w=-1$,
$\hat v/\hat\lambda^2$ should diverge, which happens if $m-c>2$. Note that
this result is independent of $p$. Therefore we conclude that
if $U$ is less singular than $1/\phi^2$ there is no effect of $U$,
in agreement with Sec.~\ref{sec:wattrx}.
If $p=-2$, then $\phi\sim e^\varphi$ and the hatted potentials and tension
appear exponential. These give constant $w$ attractors, even if (unhatted)
$V$ and $T$ would not normally give acceleration. If $V$ and $T$ would
give $w=-1$ by themselves, then this is maintained.
If $p<-2$, then as noted above the small-field and large-field limits
are reversed. Thus we obtain $w=-1$ in any case: if $V$ and $T$ provide
$w=-1$ themselves, then this is maintained, while if they do not give
acceleration then $U$ operates in the opposite limit and drives the
field to a $w=-1$ attractor. Again, see
Figs.~\ref{fig:att42p}-\ref{fig:att43p} and Sec.~\ref{sec:wattrx}.
As a curiosity, note we could take the converse view and split the
single brane picture into multiple branes. For example, the usual
quartic single brane tension $\hat T\sim\varphi^4$ could be viewed
as $T\sim\phi^m$ and $U\sim\phi^{m-4}$ as a way of relaxing the conditions
on the brane tension. It is this extra freedom from $U$ that generates
further paths to the same attractors as in standard DBI.
Another interesting case arises by choosing $T=V=\text{const}$ and
$U(\phi)$ as a runaway type potential connecting $U(0)=1$ and $U(\infty)=0$.
Then the action can be interpreted as the action for an unstable D-brane in
string theory \cite{sen} and the field $\phi$ represents its tachyonic mode.
A standard form for $U(\phi)$ is \cite{buchel,kims,kutasov}
\begin{equation}
U = 1/\cosh{\alpha\phi}
\end{equation}
where $\alpha$ is a constant. In this case, $\varphi \sim e^{-\alpha\phi/2}$
and $\hat V \sim \varphi^2$. Then we get $\gamma \to \infty$ and
$w =0$.
\section{Sound Speed \label{sec:sound}}
Beyond the homogeneous field properties we can briefly consider perturbations
to the dark energy density. These propagate with sound speed $c_s$ and
define a Jeans wavelength above which the dark energy can cluster. The
sound speed is defined in terms of the Lagrangian density $L$ (given by
the term in square brackets in Eq.~\ref{eq:lag} or \ref{eq:lagu}) and
canonical kinetic energy $X=(1/2)\dot\phi^2$ as \cite{soundspeed}
\begin{equation}
c_s^2=\frac{L_{,X}}{L_{,X}+2XL_{,XX}}\,.
\end{equation}
The result is $c_s=1/\gamma$ for both the standard \cite{martinyam} and
generalized DBI
actions, since $U(\phi)$ does not change the kinetic structure.
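As a numerical sanity check of this expression, one can differentiate the $X$-dependent part of the standard DBI Lagrangian, $-T\sqrt{1-2X/T}$ (the constant pieces drop out of the $X$-derivatives), and verify $c_s^2=1/\gamma^2=1-2X/T$. The sketch below, in plain Python with constant $T$, is illustrative only:

```python
import math

def dbi_lagrangian(X, T=1.0):
    """X-dependent part of the standard DBI Lagrangian density, -T*sqrt(1 - 2X/T);
    the constant terms (T - V) do not affect the X-derivatives."""
    return -T * math.sqrt(1.0 - 2.0 * X / T)

def sound_speed_sq(X, T=1.0, h=1e-5):
    """c_s^2 = L_X / (L_X + 2 X L_XX), derivatives by central finite differences."""
    L = dbi_lagrangian
    L_X = (L(X + h, T) - L(X - h, T)) / (2.0 * h)
    L_XX = (L(X + h, T) - 2.0 * L(X, T) + L(X - h, T)) / h**2
    return L_X / (L_X + 2.0 * X * L_XX)

# gamma = 1/sqrt(1 - 2X/T), so c_s^2 should approach 1/gamma^2 = 1 - 2X/T
X, T = 0.3, 1.0
gamma = 1.0 / math.sqrt(1.0 - 2.0 * X / T)
print(sound_speed_sq(X, T), 1.0 / gamma**2)  # both close to 0.4
```

As $X \to T/2$ (the relativistic limit $\gamma \to \infty$), the computed $c_s^2$ goes to zero, matching the discussion above.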
For the attractors depending on the relativistic limit, such as for
$w\approx-1$ in the standard DBI case, this implies the sound speed
goes to 0 and dark energy can clump on all scales. One of the
interesting aspects of multibrane DBI is that this is no longer
necessary; $w=-1$ can be achieved with $\gamma=1$ and so $c_s=1$.
However, when $w\approx-1$ in either case, dark energy
perturbations cannot grow regardless of the sound speed, so the
sound speed is unlikely to give a clear signature of the
DBI theory for the cases we consider. Indeed, even models of dark
energy with $c_s=0$ cannot
be readily distinguished from those with $c_s=1$, when $w\approx-1$
and the dark energy does not couple to matter
\cite{beandore,huscranton,coragm}
(see \cite{matarrese,lazkoz} for the case of coupling).
\section{Conclusions \label{sec:concl}}
We have investigated possible constraints on DBI string theory from
cosmological observations, considering the entire field evolution not
just the asymptotic future behavior.
In particular, Eq.~(\ref{eq:mubound}) gives
a bound on the deviation of the locally warped region generated by the
form-field fluxes from the AdS geometry. It would be very interesting if more
accurate cosmological data could restrict fundamental string parameters.
To ameliorate the fine-tuning problem of initial conditions, we have
extended the attractor solutions to the case of generalized DBI theory, which
includes an additional potential arising from either multiple coincident
branes, or non-BPS branes, or D5-branes wrapping a two-cycle within the
compact space and carrying a non-zero magnetic flux along this cycle
\cite{gumward}. We have obtained exact cosmological constant behavior
from some attractors of the extended DBI theory. Also, we have noticed
that the extended DBI theory can have attractor behavior identical to
that of single-brane DBI with a different tension and potential.
An interesting novel feature of the DBI attractors is that the
sound speed can be driven to zero which enhances dark energy
clustering, although this is suppressed when $w\approx-1$.
We also showed that a straightforward quadratic plus quartic potential
acts like a thawing scalar field, and how more complicated potentials
could be designed for a specific cosmic expansion history.
We have analyzed in greater detail than in \cite{akl} how accurate
cosmological observations of the dark energy can constrain some
aspects of fundamental string theory within the DBI framework.
Input from high energy physics on the forms of the functions
is necessary as well.
The connections between string theory and astrophysical data offer
exciting prospects for revealing the nature of the cosmological constant
and the accelerating universe.
\acknowledgments
This work has been supported by the World Class University grant
R32-2008-000-10130-0. CK has been supported in part by the KOSEF grant
through CQUeST with grant No.\ R11 - 2005 - 021.
EL has been supported in part by the
Director, Office of Science, Office of High Energy Physics, of the
U.S.\ Department of Energy under Contract No.\ DE-AC02-05CH11231.
\section{Introduction}
\label{section1}
The inner\footnote{Here the term ``inner'' refers to objects distributed inside the Sun's galactocentric distance} Milky Way (MW) contains a significant population of globular clusters (GCs) with a wide range of metallicities ($-2.37<$ [Fe/H] $\lesssim0$; \citealt{Harris1996}, 2010 edition), most of which are still poorly explored due to the large foreground extinction and high field-star densities that complicate the analysis, especially at low Galactic latitudes. These limitations have been mitigated with wide-field near-infrared imaging such as the \textit{Vista Variables in the Via Lactea} survey (VVV), and its extension the VVVX survey \citep[][]{Minniti2010, Smith2018}, which has expanded the family of Galactic GCs to more than $300$ candidates by the inclusion of objects in the inner Galaxy \citep[see e.g.][]{Minniti2017a, Minniti2017b, Palma2019, Garro2020}, including those in the bulge (Geisler et al. in prep.).
In this context, the high-resolution ($R>22,000$) capabilities of the near-IR multi-fiber spectrographs of the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE-2;][]{Majewski2017} allow measurement of new parameters (radial velocity, metallicity, and detailed chemical abundances for many species) with high precision for a large number of Galactic GCs in a homogeneous way \citep[see e.g.][]{Meszaros2015, Schiavon2017, Masseron2019, Fernandez-Trincado2019d, Fernandez-Trincado2020d, Meszaros2020}.
Very metal-poor GCs are preferentially associated with the oldest components of galaxies, and are often used as cosmic clocks to track the enrichment history of their host galaxy \citep{Geisler1995}. Thus, beyond the intrinsic scientific value of identifying such ancient objects, the measurements of their chemical composition can therefore provide insight into the build-up of the chemical elements in the earliest epoch of the Milky Way.
To date, the most metal-poor MW GC known is ESO280-SC06 (located $\sim$15 kpc from the Galactic center), with an estimated metallicity of [Fe/H]$=-2.48^{+0.06}_{-0.11}$ \citep[][]{Simpson2018}, and is associated with the \textit{Gaia}-Enceladus-Sausage \citep[GES;][]{Massari2019}. Here, we report another possible extreme case, VVV~CL001, which is possibly the most metal-poor GC known inside the Sun's Galactocentric distance, and likely the most metal-poor GC in the Galaxy. It has survived the strong tidal field of the inner MW, and its existence implies that GCs with metallicities below the empirical metallicity floor \citep[e.g.][]{Wan2020, Larsen2020} have been formed, but that only a few exceptional cases have survived Galactic evolution.
In this Letter, we make use of APOGEE-2 spectra to provide the first spectroscopic study of VVV~CL001.
\begin{figure*}[t]
\begin{center}
\includegraphics[height = 12. cm]{Figure1.png}\\
\caption{Panel (a): The spatial distribution of APOGEE-2 stars (gray dots) observed towards VVV~CL001. The red-dashed circle highlights a radius of 0.1\arcmin\ centered on VVV~CL001 for reference. The black open squares indicate the member candidates in the innermost regions of VVV~CL001 from APOGEE-2, while the blue symbols indicate the stars with RV information compiled in \citet{Baumgardt2018}. The GC UKS~1, with its respective tidal radius, is also over-plotted for reference. Panel (b): The \textit{Gaia} EDR3 proper motion distribution of our sample. The symbols are the same as in panel (a). The concentric ellipses highlight the best-fit distribution after a 3$-\sigma$ clip, with red-dotted lines showing the best-fit proper motion of VVV~CL001, listed at the bottom of the panel. Panels (c) and (d): Metallicities and radial distribution of APOGEE-2 stars (gray dots) and Baumgardt's RV compilation (see Section \ref{section4}) in the field surrounding VVV~CL001. The blue-horizontal and shadowed-cyan region indicate the nominal metallicity and RV of VVV~CL001 within $\pm0.1$ dex and $\pm6.6$ km s$^{-1}$, respectively. The vertical red-dashed lines at 0.1\arcmin\ are highlighted for reference. Panel (e): The best isochrone fit in the $K_{s}$ versus $(J-K_s)$ CMD using DSED models, where the blue line shows the most probable solution and the blue-shadowed region indicates the solutions within 1$-\sigma$. The orange symbols mark the potential candidates from the VVV survey, while the black open squares and triangles refer to stars with RV information from APOGEE-2 and Baumgardt's compilation, respectively. Panel (f): The posterior distributions of the indicated quantities.}
\label{Figure1}
\end{center}
\end{figure*}
\begin{figure}[t]
\begin{center}
\includegraphics[height = 15. cm]{Figure2.png}
\caption{Detection of atomic and molecular lines. Spectral synthesis is shown for the determination of the [N/Fe], [O/Fe], [Mg/Fe], [Al/Fe], [Si/Fe], and [Fe/H] abundances for two stars in the innermost regions of VVV~CL001. Each panel shows the best-fit syntheses (red line) from \texttt{BACCHUS} compared to the observed spectra (black squares) of selected lines (black arrows).}
\label{Figure2}
\end{center}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[height = 11. cm]{Figure3.png}
\caption{Elemental abundance of stars in VVV~CL001 (black symbols) and four comparison metal-poor GCs. Each violin representation show the univariate kernel density estimate of the abundance ratios of each cluster as determined from \citet{Meszaros2020}. Each GC has been slightly offset horizontally to distinguish them.}
\label{Figure3}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[height = 11. cm]{Figure4.png}
\includegraphics[height = 9. cm]{Figure5.png}\\
\caption{Ensemble of ten thousand orbits of VVV~CL001, projected on the equatorial (a.1, b.1, and c.1) and meridional (a.2, b.2, and c.2) Galactic planes, in the inertial reference frame with a bar pattern speed of 41 km s$^{-1}$ kpc$^{-1}$, integrated over the past 2 Gyr. The red and orange colors correspond to more probable regions of the space, which are crossed most frequently by the simulated orbits. The black solid and dashed lines show the past and future orbits of VVV~CL001, integrated over 200 Myr. The white-dashed line indicates the Sun's radius. The filled and unfilled star symbols indicate the initial and final position of the cluster, respectively. The main orbital elements are listed in panels (a.2, b.2, and c.2), with uncertainty ranges given by the 16$^{\rm th}$ and 84$^{\rm th}$ percentile values. Panel (d): The characteristic orbital energy ($E_{char}$) versus the orbital Jacobi constant ($E_J$) in the non-inertial reference frame where the bar is at rest. The gray symbols refer to Galactic GCs associated with different progenitors, as suggested in \citet{Massari2019}. VVV~CL001 is shown with red open symbols, while other GCs (black triangles) associated with Seq and GES are highlighted for reference. Panel (e): Line-of-sight velocity dispersion versus radius for our target cluster stars from APOGEE-2 plus Baumgardt's data set (blue dots). The prediction of the best-fitting \textit{N}-body model from \citet{Baumgardt2018} and \citet{Baumgardt2019} is shown as a solid-gray line, and the light-gray shaded region indicates the 1$-\sigma$ uncertainty from the fit.}
\label{Figure4}
\end{center}
\end{figure*}
\section{OBSERVATIONAL DATA}
\label{section2}
We use an interim release data product of the APOGEE-2 survey \citep{Majewski2017}, part of the Sloan Digital Sky Survey IV \citep[SDSS-IV;][]{Blanton2017}, that includes data taken after the release of DR16 \citep{Ahumada2020}, to investigate, for the first time, the chemical composition of VVV~CL001.
The (300-fiber) APOGEE instruments are high-resolution ($R\sim22,500$), near-infrared (NIR) spectrographs \citep{Wilson2019} observing all the components of the MW (halo, disc, and bulge/bar) from the Northern Hemisphere on the 2.5m telescope at Apache Point Observatory \citep[APO, APOGEE-2N;][]{Gunn2006} and from the Southern Hemisphere on the Ir\'en\'ee du Pont 2.5m telescope \citep[][]{Bowen1973} at Las Campanas Observatory (LCO, APOGEE-2S). Each instrument records most of the \textit{H}-band (1.51$\mu$m -- 1.69$\mu$m). See \citet{Nidever2015}, \citet{Garcia2016}, and \citet{Holtzman2018} for details regarding the data reduction process and standard stellar parameter estimates. As of February 2020, the dual APOGEE-2 instruments have observed some $\sim$600,000 sources across the MW, targeting these stars according to methods described in \citet{Zasowski2013} and \citet{Zasowski2017}, with updates to the targeting plan described in Beaton et al. and Santana et al. (in prep.).
The present study focuses on VVV~CL001, a GC discovered by the VVV survey \citep[see e.g.,][]{Minniti2011}. This GC lies in the direction of the Galactic bulge and is strongly affected by large foreground extinction, E(B$-$V) $\gtrsim$ 2.0 mag \citep{Minniti2011}, hampering observations of this object in the optical bands.
VVV~CL001 stars fall on the same APOGEE-2 plug-plates as those associated with the very nearby GC UKS~1, as shown in panel (a) of Figure \ref{Figure1} \citep[see also][for details regarding to UKS~1]{Fernandez-Trincado2020d}; however, the high radial velocities (RVs) of potential VVV~CL001 members allow us to cleanly distinguish the UKS~1 sources from VVV~CL001 stars. Thus, two high-confidence VVV~CL001 members, based on proper motions (see panel (b) of Figure \ref{Figure1}) from \textit{Gaia} Early Data Release 3 \citep[\textit{Gaia} EDR3:][]{Bronw2020}, location in the color-magnitude diagrams (CMDs), and RVs from APOGEE-2 were identified $\lesssim0.1'$ from the cluster centre.
Panels (c) and (d) of Figure \ref{Figure1} show that both our derived [Fe/H] and RV of the VVV~CL001 stars are clearly distinct as compared to the foreground and background stars (hereafter field stars). The [Fe/H] and RVs are at least 1.5 dex and $\sim150$ km s$^{-1}$ offset from the field stars, respectively. Both of these offsets are very large and imply that VVV~CL001 is a truly extreme GC. The two potential cluster members from APOGEE-2 are red giant branch (RGB) stars close to the tip of the giant branch as shown in panel (e) of Figure \ref{Figure1}.
In order to have a self-consistent method for age derivation via statistical isochrone fitting, we use the \texttt{SIRIUS} code \citep[][]{Souza2020}, and the most probable cluster members in the VVV catalogue located inside 1.5$\arcmin$ from the cluster center that have proper motions compatible with that of VVV~CL001, as well as those sources with RV information (see Section \ref{section4}), which have been marked as triangle (for Baumgardt's data set) and square (for APOGEE-2 data set) open symbols in panel (e) of Figure \ref{Figure1}. Due to the quality of our data, we applied some assumptions to obtain an age distribution: a uniform prior in age between 1 and 15 Gyr combined with a slow drop above the age of the universe \citep[13.7 Gyr;][]{Planck2016}; the metallicity was varied around the value determined with high-resolution spectroscopy in the present work; and the isochrone was limited to $\log g < 4.5$, representing the RGB region. We dereddened and extinction-corrected the VVV$+$2MASS $K_{s}$ and $J-K_{s}$ colors with the bulge-specific reddening maps from \citet{Gonzalez2011, Gonzalez2012}, assuming the reddening law of \citet{Cardelli1989}. We also noticed that in a $\sim$2\arcmin$\times$ 2\arcmin area the differential reddening across the field does not affect the CMD of VVV~CL001, with a negligible variation of 0.03 mag in $K_{s}$. Finally, we adopted the Dartmouth Stellar Evolutionary Database \citep[DSED;][]{Dotter2008} isochrones with [$\alpha$/Fe]$=+0.4$ and canonical helium. Panels (e) and (f) of Figure \ref{Figure1} present the best isochrone fits in the $K_s$ versus ($J - K_s$) CMD. Our fit provides a reasonable solution both in the over-plotted isochrone (panel e) and the posterior distributions of the corner plot (panel f). To represent the distributions, we adopt the median as the most probable value and the uncertainties calculated from the $16^{\rm th}$ and $84^{\rm th}$ percentiles.
We found an age of $11.9^{+3.12}_{-4.05}$ Gyr and a probable distance of $\sim 8.22^{+1.84}_{-1.93}$ kpc. We also stress that without the adopted assumptions the internal error of the age determination would increase, and the age distribution would not be well constrained. Consequently, our probable solutions, within $1-\sigma$, fit the central part of the CMD well, providing confidence that the age estimate is a reasonable determination for VVV~CL001.
\section{ELEMENTAL ABUNDANCES}
\label{section3}
As the \texttt{ASPCAP}/APOGEE-2 pipeline \citep{Garcia2016} does not provide [Fe/H] and [X/Fe] determinations for VVV~CL001 stars, we followed the same technique as described in \citet[][]{Fernandez-Trincado2019a, Fernandez-Trincado2019b, Fernandez-Trincado2019c, Fernandez-Trincado2019d, Fernandez-Trincado2020a, Fernandez-Trincado2020b, Fernandez-Trincado2020c, Fernandez-Trincado2020d}, and carried out a consistent chemical-abundance analysis for the two VVV~CL001 stars with the \texttt{BACCHUS} code \citep{Masseron2016}. The spectra of our sample, in general, have a signal-to-noise (S/N) that is appropriate for elemental-abundance determinations; see Table \ref{Table1}. The atmospheric parameters ($T_{\rm eff}$, $\log{g}$, and $\xi_{t}$), metallicity ([Fe/H]), and elemental-abundance ratios ([X/Fe]) were derived with the \texttt{BACCHUS} code.
Figure \ref{Figure2} shows the good quality of the APOGEE-2 spectrum compared to the best-fit synthetic spectra for selected atomic and molecule lines of the two stars in VVV~CL001, from which the chemical-abundance ratios were determined. For each spectrum we were able to identify reliable lines for six chemical species (N, O, Mg, Al, Si, and Fe). The same figure also reveals the scarcely detectable Fe I lines. The resulting elemental-abundance ratios are listed in Table \ref{Table1}.
The resulting chemical abundances are also displayed in Figure \ref{Figure3}, and compared to four GCs at similar metallicity taken from the APOGEE-2 GC sample of \citet{Meszaros2020}. We find that the two program stars exhibit an iron abundance ratio of $-2.47$ to $-2.44$, suggesting that VVV~CL001 has a mean metallicity [Fe/H] $= -2.45$, with an uncertainty due to systematics of 0.24 dex, which makes this cluster possibly the most metal-poor GC identified so far within the Sun's Galactocentric distance, and with an extreme metallicity close to the apparent ``floor" in the empirical metallicity distribution function for GCs in the MW and Local Universe \citep[e.g.,][]{Geisler1995, Simpson2018, Kruijssen2019, Larsen2020, Wan2020}. Note that VVV~CL001 has the lowest Fe abundance amongst this sample, which includes the lowest metallicity GCs observed by APOGEE. With our limited sample, we do not find evidence for an intrinsic Fe-abundance spread.
Regarding the $\alpha$-elements (O, Mg, and Si), VVV~CL001 displays a modest $\alpha$-element enhancement, a clear signature of the fast enrichment provided by supernovae (SNe) II events, and compatible with other Galactic metal-poor GCs at similar metallicity (see Figure \ref{Figure3}). Furthermore, no Mg-Al anti-correlation is evident in our small sample; the two VVV~CL001 stars exhibit an aluminum deficit, which places them within the definition of \textit{first-generation} stars according to the criteria developed by \citet{Meszaros2020}. However, one star in our sample displays a very high enrichment in nitrogen, and likely belongs to one of the families of \textit{second-generation}\footnote{\textit{Second-generation} is used here to refer to stars in VVV~CL001 that display altered light-element abundances (e.g. He, C, N, O, Na, Al, and Mg), which are different from those of typical MW field stars.} stars with low-aluminum enrichment \citep[see e.g.][]{Meszaros2020}, indicating possible evidence for multiple stellar populations (MPs) in VVV~CL001. However, we caution the reader that the error bars in either the N or Al abundance (or both) are inconclusive; therefore, more cluster members need to be followed up in order to confirm or refute the existence of MPs in VVV~CL001.
\begin{table}
\begin{center}
\setlength{\tabcolsep}{3.0mm}
\caption{Atmospheric Parameters, and Elemental Abundances of Stars in VVV~CL001}
\begin{tabular}{|l|c|c|}
\hline
APOGEE$-$ID & 2M17544233 & 2M17544268\\
& $-$2400536 & $-$2400573 \\
\hline
$T_{\rm eff}$ (K) & 4100 & 4200 \\
$\log$ \textit{g} (cgs) & 0.50 & 0.50 \\
$\xi_{t}$ (km s$^{-1}$) & 1.81 & 1.69 \\
S/N (pixel$^{-1}$) & 229 & 193 \\
RV (km s$^{-1}$) & $-323.57$ & $-326.16$ \\
RV-scatter (km s$^{-1}$) & $2.55$ & $2.13$ \\
${\rm [N/Fe]}$ & $+1.23 \pm 0.31$ & $+0.23 \pm 0.40$ \\
${\rm [O/Fe]} $ & $+0.41 \pm 0.18$ & $+0.39 \pm 0.16$ \\
${\rm [Mg/Fe]} $ & $+0.04 \pm 0.02$ & $+0.15 \pm 0.09$ \\
${\rm [Al/Fe]} $ & $-0.63 \pm 0.11$ & $-0.19 \pm 0.09$ \\
${\rm [Si/Fe]} $ & $+0.26 \pm 0.14$ & $+0.33 \pm 0.10$ \\
${\rm [Fe/H]}$ & $-2.47 \pm 0.24$ & $-2.44 \pm 0.24$ \\
\hline
\end{tabular} \label{Table1}
\tablecomments{
The errors were determined in the same manner as described in \citet{Fernandez-Trincado2020a}, by varying the atmospheric parameters one at a time by the typical, albeit conservative, values of $\Delta T_{\rm eff} = \pm 100$ K, $\Delta \log$ \textit{g} $= \pm 0.3$ dex, and $\Delta \xi_t = \pm 0.05$ km s$^{-1}$. Thus, the reported uncertainties are defined as $\sigma^{2}_{total} = \sigma^2_{[X/H], T_{\rm eff}} + \sigma^2_{[X/H],{\rm log} g} + \sigma^2_{[X/H],\xi_t} + \sigma^2_{mean}$.
}
\end{center}
\end{table}
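The quadrature combination of the error terms in the table note can be sketched as follows; the per-parameter abundance shifts used here are hypothetical, purely to illustrate the formula, and are not the measured sensitivities.

```python
import math

def total_abundance_error(s_teff, s_logg, s_xi, s_mean):
    """Combine in quadrature the [X/H] shifts from varying each atmospheric
    parameter, plus the line-to-line scatter, as in the table note:
    sigma_total^2 = s_teff^2 + s_logg^2 + s_xi^2 + s_mean^2."""
    return math.sqrt(s_teff**2 + s_logg**2 + s_xi**2 + s_mean**2)

# Hypothetical per-parameter shifts (dex) for one species -- illustrative only
print(round(total_abundance_error(0.10, 0.05, 0.02, 0.08), 3))  # -> 0.139
```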
\section{DYNAMICAL PROPERTIES}
\label{section4}
We made use of the state-of-the-art MW model \texttt{GravPot16} to predict the orbital path of VVV~CL001 in a steady-state gravitational Galactic model that includes a ``boxy/peanut'' bar structure.
For the orbit computations, we adopt the same Galactic model configuration, solar position and velocity vector as described in \citet{Fernandez-Trincado2020e}, except for the angular velocity of the bar, for which we employed the recommended value of 41 km s$^{-1}$ kpc$^{-1}$ \citep[][]{Sanders2019}.
The most likely orbital parameters and their uncertainties are estimated using a simple Monte Carlo approach. An ensemble of ten thousand orbits was calculated under variations of the observational parameters according to their estimated errors (assumed as 1$-\sigma$ variations), where the errors were assumed to follow a Gaussian distribution.
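A minimal sketch of this Monte Carlo step, drawing Gaussian realizations of the observables that would seed each orbit integration; the 1.9 kpc distance error used below is an assumption loosely based on the isochrone fit, not a quoted uncertainty.

```python
import random

def sample_observables(n, rv, rv_err, pmra, pmdec, pm_err, dist, dist_err, seed=0):
    """Draw Gaussian realizations of (RV, mu_alpha*, mu_delta, distance);
    each tuple would seed one orbit integration in the ensemble."""
    rng = random.Random(seed)
    return [(rng.gauss(rv, rv_err),
             rng.gauss(pmra, pm_err),
             rng.gauss(pmdec, pm_err),
             rng.gauss(dist, dist_err))
            for _ in range(n)]

# Central values adopted in the text for VVV CL001
ensemble = sample_observables(10_000, -325.95, 6.6, -3.41, -1.97, 0.5, 8.0, 1.9)
```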
To compute the orbits, we adopt a mean RV of $-325.95 \pm 6.6$ km s$^{-1}$ computed from the combined APOGEE-2 RVs of the two program stars and Holger Baumgardt's RV compilation\footnote{\url{https://people.smp.uq.edu.au/HolgerBaumgardt}} of 34 stars, which lie in the RV range between $-341.13$ and $-310.22$ km s$^{-1}$. From those stars, we select objects having a Renormalized Unit Weight Error (\texttt{RUWE}) below 1.4, as extracted from \textit{Gaia} EDR3, which allows us to discard sources with problematic astrometric solutions \citep[see e.g.][]{Lindegren2018}. This reduces the 36 stars with RV information to 27 stars with both reliable proper motions and RV information, which were considered to compute the nominal proper motion of VVV~CL001 after applying a 3$-\sigma$ clipping to the data; i.e., only 25 of these 27 sources lie inside 3$-\sigma$ of the nominal proper motion of the cluster, as shown in panel (b) of Figure \ref{Figure1}. From this procedure, the nominal proper motion of VVV~CL001 is $(\mu_{\alpha}\cos(\delta), \mu_{\delta})= (-3.41, -1.97)$ mas yr$^{-1}$, with an assumed uncertainty of 0.5 mas yr$^{-1}$. The heliocentric distance ($d_{\odot}$) of VVV~CL001 remains uncertain; for this reason we assume three possible estimates (5.5, 8.0, and 10.5 kpc) close to the best isochrone fit, as shown in panel (f) of Figure \ref{Figure1}.
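The $3-\sigma$ clipping used to derive the nominal proper motion can be sketched as a simple iterative rejection; the routine and the sample values below are illustrative, not the actual cluster measurements.

```python
def sigma_clip(values, n_sigma=3.0, max_iter=10):
    """Iteratively reject points farther than n_sigma standard deviations
    from the mean; return the clipped mean and the surviving values."""
    kept = list(values)
    for _ in range(max_iter):
        mean = sum(kept) / len(kept)
        std = (sum((v - mean) ** 2 for v in kept) / len(kept)) ** 0.5
        survivors = [v for v in kept if abs(v - mean) <= n_sigma * std]
        if len(survivors) == len(kept):
            break
        kept = survivors
    return sum(kept) / len(kept), kept

# Twenty synthetic points scattered around -3.4 mas/yr plus one gross outlier
pm = [-3.4 + 0.05 * ((i % 5) - 2) for i in range(20)] + [-10.0]
clipped_mean, members = sigma_clip(pm)
```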
The main orbital elements and the ensemble of orbits of VVV~CL001 are displayed in panels (a) to (d) of Figure \ref{Figure4}. The orbital parameters reveal that VVV~CL001 lies on a radial and highly eccentric ($>$0.8) halo-like orbit with rather small excursions above the Galactic plane ($Z_{max} <$ 3 kpc), and pericentric ($r_{peri}$) distances below 1 kpc. We also find that VVV~CL001 exhibits both retrograde and prograde senses when assuming a close heliocentric distance, while it is exclusively retrograde at and beyond the Bulge, similar to other GCs in the inner Galaxy \citep[see][]{Perez-Villegas2020}. Therefore, a more robust heliocentric distance estimation will better constrain these dynamical scenarios.
Panel (d) of Figure \ref{Figure4} shows that the orbital energy configuration of VVV~CL001 is comparable to that of Galactic GCs associated with the major accretion events such as Sequoia (Seq.) and GES \citep[see e.g.,][]{Myeong2018, Massari2019}, indicating that VVV~CL001 could be the fossil relic of one of these accreted dwarf galaxies.
\section{Mass}
\label{section5}
\citet{Baumgardt2018} performed \textit{N}-body simulations of star clusters, and found that they could reproduce the surface-density profile of VVV~CL001, finding a present-day mass of 9$\times$10$^{4}$ M$_{\odot}$.
With the available RV data, we match the line-of-sight dispersion profiles to the \textit{N}-body simulations of VVV~CL001, as shown in panel (e) of Figure \ref{Figure4}, and thus determine the most likely mass of the cluster from kinematic constraints. We adopted three radial bins (with bin centers of 6\arcmin, 15\arcmin, and 30\arcmin), chosen to ensure that at least ten stars were in each bin, resulting in the three blue dots shown in panel (e) of Figure \ref{Figure4}. With the new data in panel (e) of Figure \ref{Figure4}, we find $\sigma_{0}\sim6.6$ km s$^{-1}$. This yields a present-day estimated mass of $\sim$2.1$\times$10$^{5}$ M$_{\odot}$, which suggests that VVV~CL001 is two times more massive than previously thought, and approximately four times more massive than ESO280-SC06 (the most metal-poor GC known in the MW).
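The binning of the line-of-sight velocities (radial bins required to hold at least ten stars each) can be sketched as below; the radii, velocities, and bin edges are synthetic illustrations, not the actual measurements.

```python
def dispersion_profile(radii, rvs, bin_edges, min_stars=10):
    """Line-of-sight velocity dispersion per radial bin (arcmin); bins holding
    fewer than min_stars stars are dropped. Returns (bin_center, sigma, n)."""
    profile = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sample = [v for r, v in zip(radii, rvs) if lo <= r < hi]
        if len(sample) < min_stars:
            continue
        mean = sum(sample) / len(sample)
        # Unbiased (n-1) sample standard deviation
        sigma = (sum((v - mean) ** 2 for v in sample) / (len(sample) - 1)) ** 0.5
        profile.append((0.5 * (lo + hi), sigma, len(sample)))
    return profile
```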
\section{CONCLUDING REMARKS}
\label{section6}
We have performed the first near-IR high-resolution spectral analysis of two likely members of the GC VVV~CL001, a cluster obscured by the heavy extinction and high field-star density in the direction of the Galactic Bulge. Based on high S/N APOGEE spectra, we measure a mean [Fe/H] metallicity of $-2.45\pm0.1$, which makes VVV~CL001 possibly the most metal-poor GC known, only slightly higher than ESO280-SC06 at $-2.48$.
VVV~CL001 is very close in projection to UKS~1, which motivated \citet{Minniti2011} to suggest the possibility that they could be gravitationally bound. However, the RV of VVV~CL001 is too large, and the orbits are very different, which allows us to rule out the binary-cluster scenario.
We find that the $\alpha$-element abundances of VVV~CL001 are typical of the lowest metallicity GCs known. Spectra for more members are required in order to confirm the presence of MPs in this cluster.
For the derivation of the age and reddening, we employed the new code \texttt{SIRIUS} \citep{Souza2020}. As shown in panel (e) of Figure \ref{Figure1}, we derived a median age of $\sim11.9^{+3.12}_{-4.05}$ Gyr, indicating that the very metal-poor GC VVV~CL001 is among the oldest and most massive ($\sim$2.1$\times$10$^{5}$ M$_{\odot}$) MW clusters.
A dynamical analysis of VVV~CL001 reveals that this object has a radial and highly eccentric halo-like orbit confined inside the Sun's galactocentric distance. Both its metallicity and orbit favor the interpretation of VVV~CL001 being a GC that belongs to an early accretion event in the MW corresponding to either the Seq or GES dwarf galaxies. Finally, multi-band photometry in the near-IR will be useful to identify other stellar tracers, such as RR Lyrae stars (if any) toward VVV~CL001, which will significantly help to constrain the distance, age, and origin of the cluster.
\acknowledgments
We thank the anonymous referee for helpful comments that greatly improved the paper. We warmly thank Holger Baumgardt for providing his published numerical \textit{N}-body modeling of the line-of-sight velocity dispersion of VVV~CL001.
J.G.F-T is supported by FONDECYT No. 3180210.
D.M. is supported by the BASAL Center for Astrophysics and Associated Technologies (CATA) through grant AFB 170002, and by project FONDECYT Regular No. 1170121.
S.O.S. acknowledges the FAPESP PhD fellowship 2018/22044-3.
T.C.B. acknowledges partial support for this work from grant PHY 14-30152: Physics Frontier Center / JINA Center for the Evolution of the Elements (JINA-CEE), awarded by the US National Science Foundation.
D.G. gratefully acknowledges support from the Chilean Centro de Excelencia en Astrof\'isica y Tecnolog\'ias Afines (CATA) BASAL grant AFB-170002. D.G. also acknowledges financial support from the Direcci\'on de Investigaci\'on y Desarrollo de la Universidad de La Serena through the Programa de Incentivo a la Investigaci\'on de Acad\'emicos (PIA-DIDULS).
S.V. gratefully acknowledges the support provided by Fondecyt reg. n. 1170518.
B.B. acknowledges partial financial support from FAPESP, CNPq, and CAPES - Finance Code 001.
A.P-V. and S.O.S acknowledge the DGAPA-PAPIIT grant IG100319.
L.H. gratefully acknowledges support provided by National Agency for Research and Development (ANID)/CONICYT-PFCHA/DOCTORADO NACIONAL/2017-21171231.
A.R-L. acknowledges financial support provided in Chile by Comisi\'on Nacional de Investigaci\'on Cient\'ifica y Tecnol\'ogica (CONICYT) through the FONDECYT project 1170476 and by the QUIMAL project 130001.
\newline
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
\newline
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'{i}sica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"{u}r Astrophysik Potsdam (AIP), Max-Planck-Institut f\"{u}r Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"{u}r Astrophysik (MPA Garching), Max-Planck-Institut f\"{u}r Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, the University of Notre Dame, Observat\'{o}rio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'{o}noma de M\'{e}xico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\newline
This work has made use of data from the European Space Agency (ESA) mission \textit{Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed by the \textit{Gaia} Data Processing and Analysis Consortium (DPAC, \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the \textit{Gaia} Multilateral Agreement.
\newline
Simulations have been executed on HPC resources on the Cluster Supercomputer Atocatl from Universidad Nacional Aut\'onoma de M\'exico (UNAM).
\section{Introduction}
Given a graph $G$, Postnikov defined a graph associahedron $P_G$ as an example of a \emph{generalized permutohedron}, a polytope whose normal fan coarsens the braid arrangement \cite{postnikov:2009permutohedra}. Graph associahedra were also introduced independently in \cite{carr.devadoss:2006coxeter} and \cite{davis.janus.scott:2003fundamental}. Some significant examples of graph associahedra include the associahedron, the cyclohedron, and the permutohedron. Combinatorially, the faces of the graph associahedron correspond to certain collections of connected subgraphs of $G$, called \emph{tubings}. We recall these definitions in Section~\ref{sec:tubing}. We consider a poset $L_G$ on the maximal tubings of $G$ whose Hasse diagram is an orientation of the $1$-skeleton of the graph associahedron.
In \cite{ronco:2012tamari}, Ronco defined a binary operation on a vector space generated by the tubings of an ``admissible'' family of graphs $\Gcal$, which gives this space the structure of an associative algebra. We call this algebra a \emph{tubing algebra}; see Section~\ref{subsec_hopf_algebra}. In particular, when $\Gcal$ is the set of complete graphs $K_n$ or path graphs $P_n$, the tubing algebra is isomorphic to either the Malvenuto-Reutenauer algebra on permutations \cite{malvenuto.reutenauer:1995duality} or the Loday-Ronco algebra on binary trees \cite{loday.ronco:1998hopf}, respectively. The interpretation of these algebras in terms of tubings was given previously in \cite{forcey.springfield:2010geometric}.
In Section~\ref{subsec_tubing_coalgebra}, we introduce the notion of a ``restriction-compatible'' family of graphs. Such families come with a comultiplication on their maximal tubings. We call the resulting coalgebra a \emph{tubing coalgebra}.
Reading introduced a general technique to construct subalgebras of the Malvenuto-Reutenauer algebra using lattice quotients of the weak order on permutations in \cite{reading:2005lattice}. Using the terminology of \cite{reading:2005lattice}, if a sequence of lattice congruences $\{\Theta_n\}_{n\geq 0}$ is translational (respectively, insertional), then the congruence classes of $\mathfrak{S}_n$ modulo $\Theta_n$ naturally index a basis of a subalgebra (respectively, sub-coalgebra) of the Malvenuto-Reutenauer algebra.
The main goal of this work is to compare the above constructions of Reading and Ronco. For any graph $G$ with vertex set $[n]$, there is a canonical surjective map ${\Psi_G:\mathfrak{S}_n\ra L_G}$ obtained by coarsening the braid arrangement in $\Rbb^n$ to the normal fan of~$P_G$. Our first main result characterizes graphs for which the map $\Psi_G$ is a lattice map. We say a graph $G$ is \emph{filled} if for each edge $\{i,k\}$ in $G$, there are edges $\{i,j\}$ and $\{j,k\}$ in $G$ whenever $i<j<k$.
\begin{theorem}\label{thm_main_lattice}
The map $\Psi_G$ is a lattice quotient map if and only if $G$ is filled.
\end{theorem}
Restricting attention to filled graphs, we have the following comparison between the constructions of Reading and Ronco.
\begin{theorem}\label{thm_main}
Let $\Gcal=\{G_n\}_{n\geq 0}$ be a sequence of filled graphs, and let $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ be the associated sequence of lattice congruences of the weak order.
\begin{enumerate}
\item\label{thm_main_1} The family $\Gcal$ is admissible if and only if $\mathbf{\Theta}$ is translational.
\item\label{thm_main_2} The family $\Gcal$ is restriction-compatible if and only if $\mathbf{\Theta}$ is insertional.
\end{enumerate}
\end{theorem}
In \cite{forcey:2012species}, Forcey posed the problem of determining whether $L_G$ is a lattice for any graph $G$. This turns out to be false in general; cf. Section~\ref{subsec:tubing_lattice}. We say a graph $G$ on $[n]$ is \emph{right-filled} if whenever $\{i,k\}$ is an edge, so is $\{j,k\}$ for $i<j<k$. Dually, we say $G$ is \emph{left-filled} if $\{i,j\}$ is an edge whenever there is an edge $\{i,k\}$ for $i<j<k$. We prove that $L_G$ is a lattice whenever $G$ is either left-filled or right-filled. More precisely, these are the cases when $L_G$ is a semilattice quotient of the weak order. For other graphs, the poset $L_G$ may still be a lattice, even if it is not a semilattice quotient of the weak order. Some additional examples and conjectures are discussed in Section~\ref{sec:other}.
The rest of the paper is organized as follows. We introduce the poset of maximal tubings $L_G$ in Section~\ref{sec:tubing}. The main result in this section is Theorem~\ref{thm:NRC}, which states that $L_G$ has the \emph{non-revisiting chain property}, defined in Section~\ref{subsec:NRC}. In Section~\ref{sec:lattice}, we recall the congruence-uniform lattice structure of the weak order on permutations and elaborate on the canonical map from permutations to maximal tubings. Sections~\ref{sec_lattice} and~\ref{sec_hopf} are devoted to proving Theorems~\ref{thm_main_lattice} and~\ref{thm_main}, respectively. We end the paper with some open problems and conjectures in Section~\ref{sec:other}.
\section{Poset of maximal tubings}\label{sec:tubing}
\subsection{Tubings and $G$-trees}
In this section, we recall the principal combinatorial objects in this paper, namely the maximal tubings of a graph and $G$-trees.
Let $G=(V,E)$ be a simple graph with vertex set $V=[n]:=\{1,\ldots,n\}$. If $I\subseteq V$, we let $G|_I$ denote the induced subgraph of $G$ with vertex set $I$. A \emph{tube} is a nonempty subset $I$ of vertices such that the induced subgraph $G|_I$ is connected. Any tube not equal to $V$ is called a \emph{proper tube}. We let $\Ical(G)$ be the set of all tubes of $G$.
We define the \emph{deletion} $G\setm I$ to be the graph $G|_{V\setm I}$ and the \emph{contraction} (or \emph{reconnected complement}) $G/I$ as the graph with vertex set $V\setm I$ and edges $\{i,j\}$ ($i\neq j$) if either $\{i,j\}\in E(G)$ or there exists a tube $J$ of $G|_I$ such that $\{i,k\}\in E(G)$ and $\{j,l\}\in E(G)$ for some $k,l\in J$.
Note that we define deletion and contraction on sets of vertices of $G$ rather than on edges as it is done for graphic matroids. Furthermore, the contracted graph $G/I$ is always simple, i.e. it has no loops or parallel edges.
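To make the reconnected complement concrete, here is a minimal Python sketch (ours, not from the text; graphs are assumed to be encoded as a vertex count $n$ together with a set of \texttt{frozenset} edges, and the function names are our own). It uses the observation that the existential condition over tubes $J$ of $G|_I$ can be checked one connected component of $G|_I$ at a time, since every tube lies in a single component and every component is itself a tube.

```python
from itertools import combinations

def components(vertices, edges):
    # connected components of the induced subgraph on `vertices`
    vs, comps = set(vertices), []
    while vs:
        comp, frontier = set(), [vs.pop()]
        while frontier:
            v = frontier.pop()
            comp.add(v)
            for u in list(vs):
                if frozenset((u, v)) in edges:
                    vs.discard(u)
                    frontier.append(u)
        comps.append(comp)
    return comps

def contraction(n, edges, I):
    """Reconnected complement G/I on vertex set [n] - I.  Two surviving
    vertices are adjacent iff they were adjacent in G, or each has a
    neighbour in a common connected component of G|I (equivalent to the
    tube condition in the definition above)."""
    rest = [v for v in range(1, n + 1) if v not in I]
    comps = components(I, edges)
    new_edges = {e for e in edges if e <= set(rest)}
    for i, j in combinations(rest, 2):
        for C in comps:
            if (any(frozenset((i, k)) in edges for k in C)
                    and any(frozenset((j, l)) in edges for l in C)):
                new_edges.add(frozenset((i, j)))
    return set(rest), new_edges
```

For the path $1$--$2$--$3$, contracting $I=\{2\}$ joins $1$ and $3$ by an edge, exactly as the definition predicts.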
Two tubes $I, J$ are said to be \emph{compatible} if either
\begin{itemize}
\item they are \emph{nested}: $I\subseteq J$ or $J\subseteq I$, or
\item they are \emph{separated}: $I\cup J$ is not a tube.
\end{itemize}
A \emph{tubing} $\Xcal$ of $G$ is any collection of pairwise compatible tubes. The collection $\Xcal$ is said to be a \emph{maximal tubing} if it is maximal by inclusion. We let $\MTub(G)$ be the set of maximal tubings of the graph $G$. If $\Xcal$ is a tubing of $G$ and $X_1,\ldots,X_r\in\Xcal$ are pairwise disjoint, then the union $I=X_1\cup\cdots\cup X_r$ is called an \emph{ideal} of $\Xcal$. This terminology may be explained by the connection to $G$-trees given later in this section.
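These definitions are easy to implement by brute force for small graphs. The following Python sketch (ours; the edge encoding and function names are assumptions, not from the text) enumerates tubes, tests compatibility, and lists maximal tubings, anticipating the fact, recorded later in this section, that every maximal tubing has exactly $n$ tubes.

```python
from itertools import combinations

def is_connected(vertices, edges):
    # breadth-first search on the induced subgraph
    vs = set(vertices)
    start = next(iter(vs))
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for u in vs - seen:
            if frozenset((u, v)) in edges:
                seen.add(u)
                frontier.append(u)
    return seen == vs

def tubes(n, edges):
    # tubes = nonempty vertex subsets inducing a connected subgraph
    return [frozenset(s) for r in range(1, n + 1)
            for s in combinations(range(1, n + 1), r)
            if is_connected(s, edges)]

def compatible(I, J, edges):
    # nested, or separated (the union is not a tube)
    return I <= J or J <= I or not is_connected(I | J, edges)

def maximal_tubings(n, edges):
    # every maximal tubing has exactly n tubes, so n-subsets suffice
    return [set(c) for c in combinations(tubes(n, edges), n)
            if all(compatible(a, b, edges) for a, b in combinations(c, 2))]
```

For the path on three vertices this recovers the $5$ vertices of the two-dimensional associahedron, and for the triangle the $6$ vertices of the hexagonal permutohedron.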
\begin{lemma}
If $\Xcal$ is a tubing of $G$ with an ideal $I$ then there is a unique collection $X_1,\ldots, X_r$ of pairwise disjoint tubes in $\Xcal$, namely the connected components of $G|_I$, such that $I=X_1\cup\cdots\cup X_r$.
\end{lemma}
Tubings of $G$ may be restricted to certain induced subgraphs or contracted graphs as follows.
If $I$ is a subset of $[n]$, let $\Comp(I)$ be the set of maximal tubes of $G|_I$; i.e., $J\in\Comp(I)$ if $J\subseteq I$ and $G|_J$ is a connected component of $G|_I$. If $\Xcal$ is a tubing of $G$, we set
$$\Xcal|_I:=\bigcup_{J\in\Xcal}\Comp(I\cap J).$$
\begin{lemma}\label{lem:tubing_restriction}
Let $\Xcal$ be a tubing of $G$ and $I\subseteq [n]$.
The collection $\Xcal|_I$ is a tubing of $G|_I$. If $\Xcal$ is maximal then so is $\Xcal|_I$.
\end{lemma}
Lemma~\ref{lem:tubing_restriction} can be deduced from a cellular map between different graph associahedra; see \cite[Definition 3.4]{forcey.springfield:2010geometric}. This map is a generalized form of the \emph{Tonks projection}, one of the standard maps from the faces of the permutahedron to the faces of the associahedron.
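The restriction $\Xcal|_I$ translates directly into code. The following is a sketch (ours, under the same assumed encoding of graphs by \texttt{frozenset} edges): collect the connected components of $I\cap J$ over all tubes $J$ in the tubing.

```python
def components(vertices, edges):
    # connected components of the induced subgraph on `vertices`
    vs, comps = set(vertices), []
    while vs:
        comp, frontier = set(), [vs.pop()]
        while frontier:
            v = frontier.pop()
            comp.add(v)
            for u in list(vs):
                if frozenset((u, v)) in edges:
                    vs.discard(u)
                    frontier.append(u)
        comps.append(comp)
    return comps

def restrict(tubing, I, edges):
    """X|_I: the components of I ∩ J, collected over all tubes J in X."""
    out = set()
    for J in tubing:
        for C in components(set(I) & J, edges):
            out.add(frozenset(C))
    return out
```

For the graph with edges $\{1,3\},\{2,3\}$ and the maximal tubing $\{\{1\},\{1,3\},\{1,2,3\}\}$, restricting to $I=\{2,3\}$ yields the maximal tubing $\{\{3\},\{2,3\}\}$ of the induced edge, illustrating the last assertion of the lemma.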
When $I$ is an ideal of $\Xcal$ we set
$$\Xcal/I:=\{J\setm I:\ J\in\Xcal,\ J\nsubseteq I\}.$$
\begin{lemma}\label{lem:tubing_del_con}
Let $\Xcal$ be a tubing of $G$ with an ideal $I$.
The collection $\Xcal/I$ is a tubing of~$G/I$.
If $\Xcal$ is maximal then so is $\Xcal/I$.
\end{lemma}
Any maximal tubing $\Xcal$ contains exactly $n$ tubes. Indeed, we have the following bijection between $\Xcal$ and $[n]$.
\begin{lemma}
If $\Xcal$ is a maximal tubing, then each tube $I$ contains a unique element $\topT_{\Xcal}(I)\in [n]$ not contained in any proper tube of $\Xcal|_I$. Furthermore, the function $\topT_{\Xcal}$ is a bijection between the tubes in $\Xcal$ and the vertex set $[n]$.
\end{lemma}
\begin{proof}
It is straightforward to check that $\topT_\Xcal(I)$ is well-defined for each tube $I\in \Xcal$.
Let $k\in [n]$ and let $\Ical$ be the set of tubes in $\Xcal$ which contain $k$.
Observe that $\Ical$ is not empty (because the connected component of $G$ containing $k$ is a tube in $\Xcal$).
Because the tubes in $\Ical$ are pairwise nested, there is a smallest tube $I\in \Ical$ (under containment) which contains $k$.
For this tube, we have $\topT_\Xcal(I) =k$.
It follows that if $\topT_\Xcal(I)=\topT_\Xcal(J)=k$ then $I=J$.
Therefore $\topT_\Xcal$ is indeed a bijection.
\end{proof}
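The proof above is constructive: $\topT_\Xcal$ is the inverse of the map sending each $k$ to the smallest tube of $\Xcal$ containing $k$. A minimal Python sketch (ours; tubes are assumed to be \texttt{frozenset}s):

```python
def top(tubing):
    """The bijection of the lemma: top(I) is the element of I whose
    smallest containing tube in the (maximal) tubing is I itself."""
    out = {}
    for k in set().union(*tubing):
        # tubes containing k are nested, so min by size is the smallest
        smallest = min((I for I in tubing if k in I), key=len)
        out[smallest] = k
    return out
```

For the maximal tubing $\{\{1\},\{1,2\},\{1,2,3\}\}$ of the path $1$--$2$--$3$, this returns the labels $1,2,3$ on the three nested tubes, respectively.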
\begin{figure}
\centering
\includegraphics{tubing_tree}
\caption{\label{fig:tubing_tree}(left) A maximal tubing; (right) its associated $G$-tree}
\end{figure}
Let $T$ be a forest poset on $[n]$.
That is, each connected component of $T$ is a rooted tree, and $i<_T k$ whenever $i$ and $k$ belong to the same connected component and the unique path from $i$ to the root of this component passes through $k$.
Let $i_\downarrow$ denote the principal order ideal generated by $i$ in $T$.
We say that $T$ is a \emph{$G$-forest}, or \emph{$G$-tree} when $T$ is connected, if it satisfies both of the following conditions (see also \cite[Definition~8.1]{postnikov.reiner.williams:2008faces}):
\begin{itemize}
\item For each $i\in [n]$, the set $i_\downarrow$ is a tube of $G$;
\item If $i$ and $k$ are incomparable in $T$, then $i_\downarrow \cup k_\downarrow$ is not a tube of $G$.
\end{itemize}
Given a $G$-forest $T$, observe that the collection $\chi(T)=\{i_\downarrow: i\in [n]\}$ is a maximal tubing on $G$.
Indeed, consider $I=i_\downarrow$ and $J=k_\downarrow$ for any pair $i$ and $k$ in $[n]$.
If $i$ and $k$ are not comparable, then it is immediate that $I$ and $J$ are compatible (because $I\cup J$ is not a tube).
On the other hand, if $i$ and $k$ are comparable, then either $I\subset J$ or $J\subset I$.
The following theorem is essentially \cite[Proposition~8.2]{postnikov.reiner.williams:2008faces}, specialized to the case where the building set $\Bcal$ is the collection of tubes of~$G$.
An example of this correspondence is shown in Figure~\ref{fig:tubing_tree}.
\begin{theorem}\label{G-trees}
Let $G$ be a graph with vertex set $[n]$.
Then the map $\chi$ which sends $T\mapsto \{i_\downarrow: i\in [n]\}$ is a bijection from the set of $G$-forests to the set of maximal tubings on $G$.
The inverse of $\chi$, which we denote by $\tau$, maps the maximal tubing $\Xcal$ to the forest poset $T$ satisfying $\topT_\Xcal(I)<\topT_\Xcal(J)$ if and only if $I\subset J$, where $I$ and $J$ are tubes in $\Xcal$.
\end{theorem}
It follows that $G$ is connected if and only if each $G$-forest is actually a $G$-tree.
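The bijection $\tau$ is likewise effective: the parent of $k$ in $\tau(\Xcal)$ is the top element of the smallest tube strictly containing $k_\downarrow$. A Python sketch (ours; \texttt{g\_forest} is our name for a function returning a parent map, with \texttt{None} at the roots):

```python
def g_forest(tubing):
    """Parent map of the G-forest tau(X): the parent of k is the top
    element of the smallest tube strictly containing k's smallest tube."""
    verts = set().union(*tubing)
    smallest = {k: min((I for I in tubing if k in I), key=len) for k in verts}
    top = {I: k for k, I in smallest.items()}   # invert the bijection
    parent = {}
    for k, I in smallest.items():
        above = [J for J in tubing if I < J]    # nested chain above I
        parent[k] = top[min(above, key=len)] if above else None
    return parent
```

For the graph with edges $\{1,3\},\{2,3\}$ and the maximal tubing $\{\{1\},\{2\},\{1,2,3\}\}$, this produces the $G$-tree in which $1$ and $2$ are children of the root $3$.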
\subsection{Graph associahedra}\label{subsec:graph_assoc}
Before defining the graph associahedron, the main polytopes discussed in this paper, we recall the definition of the normal fan of a polytope.
A \emph{(polyhedral) fan} $\Ncal$ is a set of cones in $\Rbb^n$ such that for any two elements $C,C^{\pr}\in\Ncal$, their intersection $C\cap C^{\pr}$ is in $\Ncal$ and it is a face of both $C$ and $C^{\pr}$. It is \emph{complete} if $\bigcup_{C\in\Ncal} C=\Rbb^n$ and \emph{pointed} if $\{0\}\in\Ncal$. A pointed fan $\Ncal$ is \emph{simplicial} if the number of extreme rays of each $C\in\Ncal$ is equal to its dimension. We consider a simplicial fan to be a type of ``realization'' of a simplicial complex; more accurately, it is a cone over a geometric realization.
For a polytope $P\subseteq\Rbb^n$ and $f\in(\Rbb^n)^*$ in the dual space, we let $P^f$ be the subset of $P$ at which $f$ achieves its maximum value. We consider an equivalence relation on $(\Rbb^n)^*$ where $f\sim g$ if $P^f=P^g$. It is not hard to show that each equivalence class is a relatively open polyhedral cone. The \emph{normal fan} of $P$ is the set of closures of these cones, which forms a complete polyhedral fan. A polytope is simple if and only if its normal fan is simplicial.
The set of tubings of a graph forms a flag simplicial complex $\Delta_G$, called the \emph{nested set complex}. A set $W$ consisting of the vertices of a connected component of $G$ is a tube that is compatible with every other tube, so it is a cone point in $\Delta_G$. The nested set complex is sometimes defined with these cone points removed since this subcomplex is a simplicial sphere. For our purposes, however, it will be convenient to consider the maximal tubes as part of every maximal tubing of $G$.
The nested set complex may be realized as a simplicial fan, which is the normal fan $\Ncal_G$ of a polytope $P_G$ known as the graph associahedron \cite[Theorem 2.6]{carr.devadoss:2006coxeter}, \cite[Theorem 3.14]{feichtner.sturmfels:2005matroid}, \cite[Theorem 7.4]{postnikov:2009permutohedra}. We recall Postnikov's construction below.
For polytopes $P,Q\subseteq\Rbb^n$, their \emph{Minkowski sum} $P+Q$ is the polytope
$$P+Q=\{\mathbf{x}+\mathbf{y}\ |\ \mathbf{x}\in P,\ \mathbf{y}\in Q\}.$$
The normal fan of $P$ is a coarsening of the normal fan of $P+Q$ \cite[Proposition~7.12]{ZieglerGu}.
Let $\mathbf{e}_1,\ldots,\mathbf{e}_n$ be the standard basis vectors in $\Rbb^n$. Given $I\subseteq[n]$, let $\Delta_I$ be the simplex with vertices $\{\mathbf{e}_i\ |\ i\in I\}$. The \emph{graph associahedron} $P_G$ is the Minkowski sum of simplices $\Delta_I$ over all tubes $I$ of $G$; that is,
$$P_G=\sum\Delta_I=\left\{\sum \mathbf{x}_I\ |\ (\mathbf{x}_I\in\Delta_I:\ I\ \mbox{is a tube})\right\}.$$
\begin{figure}
\centering
\includegraphics{mink}
\caption{The graph associahedron for the graph with edge set $E=\{\{1,3\},\{2,3\}\}$}
\end{figure}
Proofs that the face lattice of $P_G$ coincides with the nested set complex are given in \cite{feichtner.sturmfels:2005matroid} and \cite{postnikov:2009permutohedra}.
We recall the correspondence between maximal tubings and vertices, which will be most important for our purposes.
See \cite[Proposition~7.9]{postnikov:2009permutohedra}.
Recall that the notation $i_{\downarrow}$ refers to the principal order ideal generated by $i$ in a $G$-tree.
For a maximal tubing $\Xcal$, we interpret $i_{\downarrow}$ as the smallest tube in $\Xcal$ that contains the element $i$.
\begin{lemma}\label{polytope_poset}
If $\Xcal$ is any maximal tubing, the point $\mathbf{v}^\Xcal=(v_1,\ldots,v_n)$ is a vertex of $P_G$ where $v_i$ is the number of tubes $I$ such that $i\in I$ and $I\subseteq i_{\downarrow}$. Conversely, every vertex of $P_G$ comes from a maximal tubing in this way.
\end{lemma}
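Lemma~\ref{polytope_poset} gives an effective recipe for the vertex $\mathbf{v}^\Xcal$. The following sketch (ours, under our assumed \texttt{frozenset} encoding of graphs) computes it by brute force over all tubes; note that each tube contributes $1$ to exactly one coordinate, so the coordinates always sum to the number of tubes of $G$.

```python
from itertools import combinations

def is_connected(vertices, edges):
    vs = set(vertices)
    start = next(iter(vs))
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for u in vs - seen:
            if frozenset((u, v)) in edges:
                seen.add(u)
                frontier.append(u)
    return seen == vs

def tubes(n, edges):
    return [frozenset(s) for r in range(1, n + 1)
            for s in combinations(range(1, n + 1), r)
            if is_connected(s, edges)]

def vertex(n, edges, tubing):
    """v^X from the lemma: the i-th coordinate counts the tubes of G
    that contain i and sit inside i's smallest tube in X."""
    v = []
    for i in range(1, n + 1):
        down = min((I for I in tubing if i in I), key=len)
        v.append(sum(1 for T in tubes(n, edges) if i in T and T <= down))
    return tuple(v)
```

For the nested tubing $\{\{1\},\{1,2\},\{1,2,3\}\}$ one gets $(1,2,3)$ on the path $1$--$2$--$3$ and $(1,2,4)$ on the triangle, whose coordinate sums are the tube counts $6$ and $7$.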
Before we give the proof of Lemma~\ref{polytope_poset}, we need the following easy lemma.
\begin{lemma}\label{poset_polytope_helper}
Let $\Xcal$ be a tubing of $G$ and let $w_1\cdots w_n$ be a permutation of $[n]$ such that $\{w_1,\ldots, w_j\}$ is an ideal of $\Xcal$ for each $j\in[n]$.
Suppose that $i=w_j$ for some $j\in[n]$, and write the ideal $\{w_1,\ldots, w_j\}$ as a disjoint union of tubes $X_1\cup \cdots \cup X_r$.
Then $i_\downarrow=X_l$ for some $l\in [r]$.
\end{lemma}
\begin{proof}
Since $i_\downarrow$ is the smallest tube in $\Xcal$ containing $i$ there is a unique $l\in [r]$ such that $i_\downarrow \subseteq X_l$.
Assume that $i_\downarrow$ is a proper subset of $X_l$, and choose $k\in X_l\setminus i_\downarrow$ such that $i_\downarrow\cup \{k\}$ is a tube.
(This is possible because $X_l$ is a tube; that is, $G|_{X_l}$ is connected.)
Since $k\in \{w_1,\ldots, w_j\}$ (and clearly $k\ne i$), there is some $p<j$ such that $w_p=k$.
Now consider the tube $k_\downarrow\subseteq \{w_1,\ldots, w_p\}$.
Observe that $i_\downarrow\not\subseteq k_\downarrow$ because $i\not \in k_\downarrow$.
Also $k_\downarrow\not\subseteq i_\downarrow$ because $k\notin i_\downarrow$.
But $k_\downarrow \cup i_\downarrow$ is a tube (since $\{k\}\cup i_\downarrow$ is a tube), and that is a contradiction.
The statement follows.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{polytope_poset}]
By definition, a point $\mathbf{v}\in P_G$ is a vertex if there exists a linear functional $f:\Rbb^n\ra\Rbb$ such that $\mathbf{v}$ is the unique point in $P_G$ at which $f$ achieves its maximum value. We let $P_G^f$ denote this vertex. The key observation is that if $P_G=\sum\Delta_I$ is the decomposition of the graph associahedron $P_G$ as a Minkowski sum of simplices, then $P_G^f=\sum\Delta_I^f$.
If $f$ is any linear functional such that $f(\mathbf{e}_i)\neq f(\mathbf{e}_j)$ for all $i\neq j$, then $f$ is maximized at a unique vertex of the simplex $\Delta_I$ for any nonempty $I\subseteq[n]$. Namely, if $w=w_1\cdots w_n$ is the permutation of $[n]$ such that $f(\mathbf{e}_{w_1})<\cdots <f(\mathbf{e}_{w_n})$, then $\Delta_I^f=\mathbf{e}_{w_k}$ where $k$ is the maximum index such that $w_k\in I$.
Now let $\Xcal$ be a maximal tubing, and let $\mathbf{v}=\mathbf{v}^{\Xcal}$ be defined as above. Let $w=w_1\cdots w_n$ be a permutation such that $\{w_1,\ldots,w_j\}$ is an ideal of $\Xcal$ for all $j$.
(Such a permutation exists.
For example, take any linear extension of the $G$-tree corresponding to $\Xcal$.)
Set $$f(x_1,\ldots,x_n)=x_{w_1}+2x_{w_2}+\cdots+nx_{w_n}.$$ We claim that $P_G^f=\mathbf{v}$.
Let $I$ be a tube (not necessarily in $\Xcal$), and let $i\in I$.
To verify the claim, we will show that $f|_{\Delta_I}$ is maximized at the vertex $\mathbf{e}_i$ if and only if $\Delta_I$ contributes $\mathbf{e}_i$ to $\mathbf{v}$.
That is, $f|_{\Delta_I}$ is maximized at the vertex $\mathbf{e}_i$ if and only if $I \subseteq i_\downarrow$.
Suppose that $i=w_j$ in the permutation~$w$.
Observe that $f|_{\Delta_I}$ is maximized at $\mathbf{e}_i$ if and only if $I\subseteq \{w_1,w_2,\ldots, w_j\}$.
Write the ideal $\{w_1,w_2,\ldots, w_j\}$ as a disjoint union $X_1\cup X_2\cup\cdots \cup X_r$ of tubes in $\Xcal$.
By Lemma~\ref{poset_polytope_helper}, $i_\downarrow=X_l$ for some $l$.
If $I\subseteq X_1\cup X_2\cup \cdots \cup X_r$ then $I \subseteq X_l$ because $i\in I$ and $I$ is a tube.
Clearly, if $I\subseteq i_\downarrow =X_l$ then $I \subseteq X_1\cup\cdots \cup X_r$.
We have proved the claim that $P_G^f=\mathbf{v}$.
Next, we prove that every vertex of $P_G$ is of the form $\mathbf{v}^{\Xcal}$ for some $\Xcal$. Let $w$ be a permutation and $f$ any linear functional such that $f(\mathbf{e}_{w_1})<\cdots<f(\mathbf{e}_{w_n})$. If there exists some maximal tubing $\Xcal$ such that $\{w_1,\ldots,w_j\}$ is an ideal of $\Xcal$ for all $j$, then we know that $P_G^f=\mathbf{v}^{\Xcal}$. Indeed, one can define a tubing $\Xcal=\{X_1,\ldots,X_n\}$ where $X_j$ is the largest tube in the subset $\{w_1,\ldots,w_j\}$ containing $w_j$.
(That is, $X_j$ is the set of vertices of the connected component of $G|_{ \{w_1,\ldots,w_j\}}$ containing $w_j$.)
It is clear that $\Xcal$ has the desired property.
\end{proof}
If $I$ is any tube of $G$, then the subcomplex of tubings containing $I$ is isomorphic to the product of nested set complexes $\Delta_{G|_I}\times\Delta_{G/I}$. By induction, we may deduce that any face of $P_G$ is isomorphic to a product of graph associahedra.
When $G$ is a complete graph, the polytope $P_G$ is the ``standard'' permutohedron, and its normal fan $\Ncal_G$ is the set of cones defined by the braid arrangement. For a general graph $G$, the polytope $P_G$ is a Minkowski summand of the standard permutohedron, so its normal fan is coarser than that defined by the braid arrangement.
Besides the usual ordering of tubings by inclusion, there is an alternate partial order introduced by Forcey \cite{forcey:2012species} and Ronco \cite{ronco:2012tamari}. We describe the restriction of their poset to $\MTub(G)$.
Suppose that $I$ is a non-maximal tube in $\Xcal$.
Since $P_G$ is a simple polytope whose face lattice is dual to $\Delta_G$, there exists a unique tube $J$ distinct from $I$ such that $\Ycal=\Xcal\setm\{I\}\cup\{J\}$ is a maximal tubing of $G$. Define a \emph{flip} as the relation $\Xcal\ra \Ycal$ if $\topT_\Xcal(I)<\topT_{\Ycal}(J)$. We say $\Xcal\leq \Ycal$ holds if there exists a sequence of flips of maximal tubings of the form $\Xcal\ra\cdots\ra \Ycal$.
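Flips can be enumerated by brute force: remove a tube, try every replacement keeping the tubing maximal, and retain the orientation prescribed by the $\topT$ labels. A self-contained Python sketch (ours; encodings and names are assumptions):

```python
from itertools import combinations

def is_connected(vertices, edges):
    vs = set(vertices)
    start = next(iter(vs))
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for u in vs - seen:
            if frozenset((u, v)) in edges:
                seen.add(u)
                frontier.append(u)
    return seen == vs

def tubes(n, edges):
    return [frozenset(s) for r in range(1, n + 1)
            for s in combinations(range(1, n + 1), r)
            if is_connected(s, edges)]

def compatible(I, J, edges):
    return I <= J or J <= I or not is_connected(I | J, edges)

def maximal_tubings(n, edges):
    return [set(c) for c in combinations(tubes(n, edges), n)
            if all(compatible(a, b, edges) for a, b in combinations(c, 2))]

def top(tubing):
    # top(I) = the element whose smallest containing tube is I
    out = {}
    for k in set().union(*tubing):
        out[min((I for I in tubing if k in I), key=len)] = k
    return out

def flips(n, edges, X):
    """Flips X -> Y: swap one tube I in X for the unique other tube J
    keeping the tubing maximal; keep Y only when top_X(I) < top_Y(J)."""
    out = []
    for I in X:
        for J in tubes(n, edges):
            if J in X:
                continue
            Y = (X - {I}) | {J}
            if all(compatible(a, b, edges) for a, b in combinations(Y, 2)) \
                    and top(X)[I] < top(Y)[J]:
                out.append(Y)
    return out
```

Since each flip is recorded from its lower endpoint only, summing out-degrees over all maximal tubings counts the edges of $P_G$: $5$ for the pentagon of the path on three vertices, and $6$ for the hexagon of the triangle.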
\begin{figure}
\centering
\includegraphics{tamari}
\caption{\label{fig:L_ex}A poset of maximal tubings}
\end{figure}
\begin{lemma}\label{lem_poset}
The set $\MTub(G)$ is partially ordered by the relation $\leq$.
\end{lemma}
\begin{proof}
The edges of the graph associahedron $P_G$ take the following form. Let $\Xcal$ and $\Ycal$ be maximal tubings of $G$ such that $\Ycal=\Xcal\setm\{I\}\cup\{J\}$ for some distinct tubes $I,J$. Set $i=\topT_\Xcal(I)$ and $j=\topT_{\Ycal}(J)$. Then the vertices $\mathbf{v}^{\Xcal}$ and $\mathbf{v}^{\Ycal}$ agree on every coordinate except the $i^{th}$ and $j^{th}$ coordinates. Indeed, $\mathbf{v}^{\Ycal}-\mathbf{v}^{\Xcal}=\lambda(\mathbf{e}_i-\mathbf{e}_j)$ where $\lambda$ is equal to the number of tubes of $G$ contained in $I\cup J$ that contain both $i$ and $j$.
Let $\lambda:\Rbb^n\ra\Rbb$ be the linear functional $\lambda(x_1,\ldots,x_n)=nx_1+(n-1)x_2+\cdots+x_n$. If $\Xcal$ and $\Ycal$ are as above and $i<j$, then $\lambda(\mathbf{v}^{\Ycal}-\mathbf{v}^{\Xcal})>0$. Hence, the relation $\Xcal\ra \Ycal$ on maximal tubings is induced by the linear functional $\lambda$. Consequently, the relation is acyclic, so its transitive closure is a partial order.
\end{proof}
We let $(L_G,\leq)$ denote the poset on $\MTub(G)$ defined above. An example of the poset $L_G$ for the graph $G$ with vertex set $V=[3]$ and edge set $E=\{\{1,3\},\{2,3\}\}$ is given in Figure~\ref{fig:L_ex}. The figure demonstrates that $L_G$ is the transitive, reflexive closure of an orientation of the 1-skeleton of the graph associahedron $P_G$.
\begin{remark}
The proof of Lemma~\ref{lem_poset} identifies the poset of maximal tubings with a poset on the 1-skeleton of the polytope $P_G$ oriented by a linear functional. This type of construction of a poset on the vertices of a polytope appears frequently in the literature, e.g. in the shellability of polytopes \cite{bruggesser.mani:1972shellable}, the complexity of the simplex method \cite{kalai:1997linear}, and the generalized Baues problem \cite{bjorner:1992essential}, among others.
One may choose to orient the edges of $P_G$ by some other generic linear functional $\lambda^{\pr}$, giving some new partial order $L$ on the vertices of $P_G$. Letting $w=w_1\cdots w_n$ be the permutation such that $\lambda^{\pr}(\mathbf{e}_{w_1})>\cdots >\lambda^{\pr}(\mathbf{e}_{w_n})$, it is easy to see that $L\cong L_{G^{\pr}}$ where $G^{\pr}$ is the graph obtained by relabeling vertex $w_i$ in $G$ by $i$ for all $i\in[n]$. Hence, by considering the class of posets $L_G$, we are considering all posets on the vertices of a graph associahedron induced by a generic linear functional.
\end{remark}
\subsection{Properties of the poset of maximal tubings}\label{subsec_properties}
In this section, we cover some basic properties of $L_G$ that hold for any graph $G$.
If $H$ is a graph with $V(H)\subseteq\Nbb$, the \emph{standardization} $\std(H)$ is the same graph with vertex set $V(\std(H))=[n],\ n=|V(H)|$ such that the vertices of $\std(H)$ have the same relative order as in $H$. That is, there is a graph isomorphism $\phi:H\ra\std(H)$ such that if $i,j\in V(H),\ i<j$ then $\phi(i)<\phi(j)$.
\begin{lemma}\label{lem_decomposition}
Let $G$ be a graph and let $I\subseteq V(G)=[n]$ be such that $G$ does not have any edge $\{i,j\}$ with $i\in I$ and $j\in[n]\setm I$. Then
$$L_G\cong L_{\std(G|_I)}\times L_{\std(G|_{[n]\setm I})}.$$
\end{lemma}
\begin{proof}
Under the assumptions about $G$ and $I$, there do not exist any tubes $X$ such that $X\cap I\neq\emptyset$ and $X\cap([n]\setm I)\neq\emptyset$. Furthermore, any tube of $G|_I$ is compatible as a tube of $G$ with any tube of $G|_{[n]\setm I}$. Hence, the set $\MTub(G)$ naturally decomposes as a Cartesian product
$$\MTub(G)\stackrel{\sim}{\longlra}\MTub(\std(G|_I))\times\MTub(\std(G|_{[n]\setm I})).$$
We claim that this bijection induces the desired isomorphism of posets
$$L_G\cong L_{\std(G|_I)}\times L_{\std(G|_{[n]\setm I})}.$$
If $\Xcal,\Ycal\in\MTub(G)$ such that $\Ycal=\Xcal\setm\{J\}\cup\{J^{\pr}\}$ for some tubes $J,J^{\pr}$ then $J$ and $J^{\pr}$ must be incompatible. Consequently, either both tubes $J,J^{\pr}$ are contained in $I$, or both tubes are contained in $[n]\setm I$. Without loss of generality, assume that $\topT_{\Xcal}(J)<\topT_{\Ycal}(J^{\pr})$ and that $J$ and $J^{\pr}$ are both subsets of $I$, which implies $\Xcal\ra\Ycal$ holds in $L_G$.
Let $\phi:H\ra\std(H)$ be the natural graph isomorphism between $H$ and its standardization. Then the inequality $\topT_{\std(\Xcal|_I)}(\phi(J))<\topT_{\std(\Ycal|_I)}(\phi(J^{\pr}))$ still holds, so we have the relation $\std(\Xcal|_I)\ra\std(\Ycal|_I)$ in $L_{\std(G|_I)}$.
Conversely, if $\Xcal$ and $\Ycal$ are maximal tubings of $\std(G|_I)$ and $\Zcal$ is any maximal tubing of $G|_{[n]\setm I}$, then the relation $\Xcal\ra\Ycal$ in $L_{\std(G|_I)}$ implies a relation $(\phi^{-1}(\Xcal)\cup\Zcal)\ra(\phi^{-1}(\Ycal)\cup\Zcal)$ in $L_G$.
\end{proof}
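At the level of vertex counts, Lemma~\ref{lem_decomposition} predicts that $|\MTub(G)|$ factors over the two pieces. This is cheap to check by brute force; the sketch below (ours, under the same assumed encoding as before) verifies it for two disjoint edges.

```python
from itertools import combinations

def is_connected(vertices, edges):
    vs = set(vertices)
    start = next(iter(vs))
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for u in vs - seen:
            if frozenset((u, v)) in edges:
                seen.add(u)
                frontier.append(u)
    return seen == vs

def tubes(n, edges):
    return [frozenset(s) for r in range(1, n + 1)
            for s in combinations(range(1, n + 1), r)
            if is_connected(s, edges)]

def compatible(I, J, edges):
    return I <= J or J <= I or not is_connected(I | J, edges)

def maximal_tubings(n, edges):
    return [set(c) for c in combinations(tubes(n, edges), n)
            if all(compatible(a, b, edges) for a, b in combinations(c, 2))]

# G = the edge {1,2} disjoint from the edge {3,4}; its maximal tubings
# should biject with pairs of maximal tubings of the two components.
G = {frozenset((1, 2)), frozenset((3, 4))}
P2 = {frozenset((1, 2))}
assert len(maximal_tubings(4, G)) == len(maximal_tubings(2, P2)) ** 2
```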
If $(L,\leq)$ is any poset, its \emph{dual} $(L^*,\leq^*)$ is the poset with the same underlying set such that $a\leq b$ if and only if $b\leq^* a$.
If $G$ is any graph with $V(G)=[n]$, we let $G^*$ be the graph obtained by swapping vertices $i$ and $n+1-i$ for all $i$.
This induces a natural bijection between maximal tubings of $G$ and maximal tubings of $G^*$.
\begin{lemma}\label{graph duality}
The natural bijection $\MTub(G)\ra\MTub(G^*)$ induces an isomorphism of posets $L_G^*\cong L_{G^*}$.
\end{lemma}
\begin{proof}
Let $\Xcal,\Ycal\in\MTub(G)$ be distinct tubings such that $\Ycal=\Xcal\setm\{I\}\cup\{J\}$. Let $\Xcal^*,\Ycal^*\in\MTub(G^*)$ be the corresponding maximal tubings of $G^*$. Then
\begin{align*}
\Xcal\ra\Ycal &\LRa \topT_{\Xcal}(I)<\topT_{\Ycal}(J)\\
&\LRa \topT_{\Ycal^*}(J^*)<\topT_{\Xcal^*}(I^*)\\
&\LRa \Ycal^*\ra\Xcal^*.
\end{align*}
Passing to the transitive closure of $\ra$, we deduce that $L_G$ and $L_{G^*}$ are dual posets.
\end{proof}
\subsection{The non-revisiting chain property}\label{subsec:NRC}
In this section, we prove that graph associahedra have the non-revisiting chain property, defined below. This is equivalent to the statement that for any tubing $\Xcal$, the set of maximal tubings containing $\Xcal$ is an interval of~$L_G$.
Given a polytope $P$, we will say a linear functional $\lambda:\Rbb^n\ra\Rbb$ is \emph{generic} if it is not constant on any edge of $P$. When $\lambda$ is generic, we let $L(P,\lambda)$ be the poset on the vertices of $P$ where $v\leq w$ if there exists a sequence of vertices $v=v_0,v_1,\ldots,v_l=w$ such that $\lambda(v_0)<\lambda(v_1)<\cdots<\lambda(v_l)$ and $[v_{i-1},v_i]$ is an edge for all $i\in\{1,\ldots,l\}$.
The following properties of $L(P,\lambda)$ are immediate.
\begin{proposition}\label{prop:omega_properties}
Let $P$ be a polytope with a generic linear functional $\lambda$.
\begin{enumerate}
\item The dual poset $L(P,\lambda)^*$ is isomorphic to $L(P,-\lambda)$.
\item If $F$ is a face of $P$, then the inclusion $L(F,\lambda)\hookra L(P,\lambda)$ is order-preserving.
\item $L(P,\lambda)$ has a unique minimum $\mathbf{v}_{\hat{0}}$ and a unique maximum $\mathbf{v}_{\hat{1}}$.
\end{enumerate}
\end{proposition}
The pair $(P,\lambda)$ is said to have the \emph{non-revisiting chain (NRC) property} if whenever $\mathbf{x}<\mathbf{y}<\mathbf{z}$ in $L(P,\lambda)$ such that $\mathbf{x}$ and $\mathbf{z}$ lie in a common face $F$, then $\mathbf{y}$ is also in $F$.
The name comes from the fact that if $P$ has the NRC property, then any sequence of vertices following edges monotonically in the direction of $\lambda$ does not return to a face after leaving it.
By definition, the NRC property means that faces are \emph{order-convex} subsets of $L(P,\lambda)$.
(Recall that a subset $S$ of a poset is \emph{order-convex} provided that whenever elements $x,z\in S$ satisfy $x<z$ then the entire interval $[x,z]$ belongs to $S$.)
In light of Proposition~\ref{prop:omega_properties}, this is equivalent to the condition that for any face $F$, the set of vertices of $F$ form an interval of $L(P,\lambda)$ isomorphic to $L(F,\lambda)$.
\begin{remark}
There is also an unoriented version of the NRC property due to Klee and Wolfe called the \emph{non-revisiting path property}, which is the condition that for any two vertices $\mathbf{v}, \mathbf{w}$ of $P$, there exists a path from $\mathbf{v}$ to $\mathbf{w}$ that does not revisit any facet of $P$.
The Hirsch conjecture on the diameter of 1-skeleta of polytopes is known to be equivalent to the conjecture that every polytope has the non-revisiting path property.
These conjectures were formulated to determine the computational complexity of the simplex method from linear programming in the \emph{worst-case} scenario.
The Hirsch conjecture was disproved by Santos \cite{santos:2012counterexample}, but many interesting questions remain.
In particular, the polynomial Hirsch conjecture remains open.
\end{remark}
In contrast to the non-revisiting path property, many low-dimensional polytopes lack the non-revisiting chain property.
For example, if $P$ is a simplex of dimension at least $2$, then $[\mathbf{v}_{\hat{0}},\mathbf{v}_{\hat{1}}]$ is an edge of $P$ that is not an interval of $L(P,\lambda)$.
However, the property does behave nicely under Minkowski sum.
\begin{proposition}\label{prop:MS_NRF}
If $(P,\lambda)$ and $(Q,\lambda)$ have the non-revisiting chain property, then so does $(P+Q,\lambda)$.
\end{proposition}
The proof of Proposition~\ref{prop:MS_NRF} relies on Lemma~\ref{lem:sum_order_embed}. For polytopes $P$ and $Q$, the normal fan of $P+Q$ is the common refinement of $\Ncal(P)$ and $\Ncal(Q)$; that is,
$$\Ncal(P+Q)=\{C\cap C^{\pr}\ |\ C\in\Ncal(P),\ C^{\pr}\in\Ncal(Q)\}.$$
Let $V(P)$ be the set of vertices of $P$, and let $C_v$ be the normal cone to the vertex $v$ in $P$. From the description of the normal fan of $P+Q$, there is a canonical injection $\iota:V(P+Q)\hookra V(P)\times V(Q)$ that assigns to a vertex $\mathbf{v}\in P+Q$ the pair $(\mathbf{u},\mathbf{w})$ for which the normal cones satisfy $C_{\mathbf{v}}=C_\mathbf{u}\cap C_\mathbf{w}$.
\begin{lemma}\label{lem:sum_order_embed}
The map $\iota:V(P+Q)\hookra V(P)\times V(Q)$ is an order-preserving function from $L(P+Q,\lambda)$ to $L(P,\lambda)\times L(Q,\lambda)$.
\end{lemma}
\begin{proof}
Let $E=[\mathbf{v},\mathbf{w}]$ be an edge of $P+Q$, and suppose $\lambda(\mathbf{v})<\lambda(\mathbf{w})$. It suffices to show that $\iota(\mathbf{v})<\iota(\mathbf{w})$.
Let $\iota(\mathbf{v})=(\mathbf{v}^{\pr},\mathbf{v}^{\pr\pr})$ and $\iota(\mathbf{w})=(\mathbf{w}^{\pr},\mathbf{w}^{\pr\pr})$.
Then the normal cone $C_E$ is the intersection of $C_\mathbf{v}$ and $C_\mathbf{w}$, which themselves are the intersections of $C_{\mathbf{v}^{\pr}},\ C_{\mathbf{v}^{\pr\pr}}$ and $C_{\mathbf{w}^{\pr}},\ C_{\mathbf{w}^{\pr\pr}}$.
Since
$$C_E=(C_{\mathbf{v}^{\pr}}\cap C_{\mathbf{w}^{\pr}})\cap(C_{\mathbf{v}^{\pr\pr}}\cap C_{\mathbf{w}^{\pr\pr}})$$
is a cone of codimension 1, we may deduce that $C_{\mathbf{v}^{\pr}}\cap C_{\mathbf{w}^{\pr}}$ and $C_{\mathbf{v}^{\pr\pr}}\cap C_{\mathbf{w}^{\pr\pr}}$ are both of codimension $\leq 1$.
Hence, the segments $E^{\pr}=[\mathbf{v}^{\pr},\mathbf{w}^{\pr}]$ and $E^{\pr\pr}=[\mathbf{v}^{\pr\pr},\mathbf{w}^{\pr\pr}]$ are either vertices or edges of $P$ and $Q$, respectively.
Moreover, if both $E^{\pr}$ and $E^{\pr\pr}$ are edges, then they must be parallel and $E=E^{\pr}+E^{\pr\pr}$.
In the event one of them is a vertex, say $E^{\pr\pr}$ (so that $\mathbf{v}^{\pr\pr}=\mathbf{w}^{\pr\pr}$), then $E^{\pr}$ must be an edge, and
$$\lambda(\mathbf{v}^{\pr})=\lambda(\mathbf{v})-\lambda(\mathbf{v}^{\pr\pr})<\lambda(\mathbf{w})-\lambda(\mathbf{v}^{\pr\pr})=\lambda(\mathbf{w})-\lambda(\mathbf{w}^{\pr\pr})=\lambda(\mathbf{w}^{\pr}).$$
If both $E^\pr$ and $E^{\pr\pr}$ are edges, then since $\lambda$ achieves its minimum value on $E=E^{\pr}+E^{\pr\pr}$ at $\mathbf{v}=\mathbf{v}^{\pr}+\mathbf{v}^{\pr\pr}$, we have $\lambda(\mathbf{v}^{\pr})<\lambda(\mathbf{w}^{\pr})$ and $\lambda(\mathbf{v}^{\pr\pr})<\lambda(\mathbf{w}^{\pr\pr})$.
In both cases, $\iota(\mathbf{v})<\iota(\mathbf{w})$ holds.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:MS_NRF}]
Every face of $P+Q$ is of the form $F+F^{\pr}$ where $F$ is a face of $P$ and $F^{\pr}$ is a face of $Q$.
Suppose $\mathbf{u},\mathbf{v},\mathbf{w}$ are vertices of $P+Q$ such that $\mathbf{u}<\mathbf{v}<\mathbf{w}$ in $L(P+Q,\lambda)$ and $\mathbf{u},\mathbf{w}\in F+F^{\pr}$. Set $\iota(\mathbf{u})=(\mathbf{u}_P,\mathbf{u}_Q)$, and analogously for $\iota(\mathbf{v})$ and $\iota(\mathbf{w})$.
Then $\mathbf{u}_P\leq \mathbf{v}_P\leq \mathbf{w}_P$ in $L(P,\lambda)$ and $\mathbf{u}_Q\leq \mathbf{v}_Q\leq \mathbf{w}_Q$ in $L(Q,\lambda)$.
Since $P$ and $Q$ have the non-revisiting chain property, $\mathbf{v}_P$ is in $F$ and $\mathbf{v}_Q$ is in $F^{\pr}$.
Hence, $\mathbf{v}=\mathbf{v}_P+\mathbf{v}_Q$ is in $F+F^{\pr}$, as desired.
\end{proof}
\begin{corollary}[Proposition 7.2 \cite{hersh:2018nonrevisiting}]
Every zonotope has the non-revisiting chain property with respect to any generic linear functional.
\end{corollary}
We now return to graph associahedra.
Let $G$ be a graph on $[n]$, and let $\lambda$ be the linear functional in the proof of Lemma~\ref{lem_poset}, where $\lambda(\mathbf{x})=nx_1+(n-1)x_2+\cdots+x_n$, so that $L_G\cong L(P_G,\lambda)$. Using the decomposition $P_G=\sum\Delta_I$, Lemma~\ref{lem:sum_order_embed} implies that $\pi_J:L_G\ra L(\Delta_J,\lambda)$ obtained as the composition
$$L_G\hookra\prod_I L(\Delta_I,\lambda)\thra L(\Delta_J,\lambda)$$
is order-preserving. We note that the poset $L(\Delta_J,\lambda)$ is a chain where $\mathbf{e}_i>\mathbf{e}_j$ whenever $i,j\in J$ with $i<j$.
\begin{lemma}\label{NRC helper}
Suppose that $\Xcal$ is a maximal tubing of $G$ and $J$ is a tube of $G$, not necessarily in~$\Xcal$.
Then there exists a unique $k\in J$ such that $J\subseteq k_{\downarrow}$, and for this $k$ we have $\pi_J(\Xcal)=\mathbf{e}_k.$
\end{lemma}
\begin{proof}
Recall that $k_\downarrow$ is the smallest tube in $\Xcal$ that contains $k$.
Hence, there is at most one such element $k\in J$ satisfying $J\subseteq k_\downarrow$.
(Indeed, if $j\in J$ and $J\subseteq j_\downarrow$ then we have $j\in J\subseteq k_\downarrow$.
Thus $j_\downarrow\subseteq k_\downarrow$.
By symmetry, $k_\downarrow\subseteq j_\downarrow$.
Therefore $j=k$.)
Consider the vertex $\mathbf{v}^\Xcal$ in $P_G$.
Lemma~\ref{polytope_poset} implies that $\Delta_J$ contributes $\mathbf{e}_k$ to $\mathbf{v}^\Xcal$ if and only if $k\in J$ and $J\subseteq k_\downarrow$.
Since $\Delta_J$ contributes exactly one of its vertices to $\mathbf{v}^\Xcal$, such a $k$ exists, and $\pi_J(\Xcal) = \mathbf{e}_k$, as desired.
\end{proof}
\begin{theorem}\label{thm:NRC}
The pair $(P_G,\lambda)$ has the non-revisiting chain property.
\end{theorem}
\begin{proof}
Every face of $P_G$ is an intersection of facets, and the intersection of a family of order-convex sets is again order-convex. Hence, it suffices to prove that if $F$ is any facet of $P_G$, then $V(F)$ is an order-convex subset of $L(P_G,\lambda)$. We argue by contradiction, using an appropriately chosen projection $\pi_J$.
Under the dictionary between tubings of $G$ and faces of $P_G$, if $F$ is a facet, then there exists a tube $I$ such that
$$V(F)=\{\mathbf{v}^{\Xcal}\ |\ \Xcal\in\MTub(G),\ I\in\Xcal\}.$$
Suppose that there are maximal tubings $\Xcal<\Ycal<\Zcal$ such that $I\in\Xcal$ and $I\in\Zcal$ but $I$ is not in $\Ycal$. Given that such a triple exists, we are free to assume that $\Xcal\ra\Ycal$ is a flip. Then the flip exchanges $I$ for some tube $I^{\pr}$. Let $a=\topT_{\Xcal}(I)$ and $b=\topT_{\Ycal}(I^{\pr})$. The union $I\cup I^{\pr}$ is a tube in both $\Xcal$ and $\Ycal$ such that $b=\topT_{\Xcal}(I\cup I^{\pr})$ and $a=\topT_{\Ycal}(I\cup I^{\pr})$.
Hence, $I$ is a maximal tube in the ideal $(I\cup I^{\pr})\setminus \{b\}$ in $\Xcal$.
That means $G|_I$ is one of the connected components of $G|_{(I\cup I^{\pr})\setminus\{b\}}$. Since $I\cup I^{\pr}$ is a tube, this implies that $I\cup\{b\}$ is a tube as well.
Set $J=I\cup\{b\}$.
We claim that if $\Wcal$ is any maximal tubing containing $I$, then the projection $\pi_J(\Wcal)=\mathbf{e}_b$.
If $\pi_J(\Wcal)=\mathbf{e}_k\ne \mathbf{e}_b$ then Lemma~\ref{NRC helper} says that $k\in J$ and $J \subseteq k_\downarrow$.
Since $k\ne b$, it follows that $k\in I$.
Since $k_\downarrow$ is the smallest tube in $\Wcal$ that contains $k$, we have $k_\downarrow \subseteq I$.
But then $I\subsetneq J \subseteq k_\downarrow \subseteq I$, and that is a contradiction.
Therefore, $\pi_J(\Wcal) = \mathbf{e}_b$.
So $\pi_J(\Xcal)=\mathbf{e}_b=\pi_J(\Zcal)$, but $\pi_J(\Ycal)=\mathbf{e}_a$ with $a\ne b$, contradicting the fact that $\pi_J$ is order-preserving on the chain $\Xcal<\Ycal<\Zcal$.
\end{proof}
\begin{corollary}\label{faces_are_intervals}
For any tubing $\Ycal$ of $G$, the set of maximal tubings which contain $\Ycal$ is an interval in $L_G$.
\end{corollary}
\begin{remark}
Another property that a polytope graph may have is the \emph{non-leaving face property}, which is satisfied if for any two vertices $u,v$ that lie in a common face $F$ of $P$, every geodesic between $u$ and $v$ is completely contained in $F$. This property holds for all zonotopes, but is quite special for general polytopes. Although ordinary associahedra are known to have the non-leaving face property, not all graph associahedra do. We note that the example geodesic in \cite[Figure 6]{manneville.pilaud:2015graph} that leaves a particular facet cannot be made into a monotone path, so it does not contradict our Theorem~\ref{thm:NRC}.
\end{remark}
Recall that the M\"obius function $\mu=\mu_L:\Int(L)\ra\Zbb$ is the unique function on the intervals of a finite poset $L$ such that for $x\leq y$:
$$\sum_{x\leq z\leq y}\mu(x,z)=\begin{cases}1\ \mbox{if }x=y\\0\ \mbox{if }x\neq y\end{cases}.$$
When $L(P,\lambda)$ is a lattice with the non-revisiting chain property, the M\"obius function was determined in \cite{hersh:2018nonrevisiting}. One way to prove this is to show that $L(P,\lambda)$ is a crosscut-simplicial lattice; cf. \cite{mcconville:2017crosscut}. In the case of the poset of maximal tubings, we may express the M\"obius function as follows. For a tubing $\Xcal$, let $|\Xcal|$ be the number of tubes it contains.
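The defining recursion above can be computed directly for any small poset. The following sketch (the function name and example poset are ours, purely for illustration) fills in $\mu(x,y)$ by processing elements in a linear-extension order, so every value on a strictly smaller interval is already available when it is needed:

```python
def mobius(leq, elements):
    """Mobius function of a finite poset; `elements` must be listed in a
    linear-extension order, and leq(x, y) tests x <= y in the poset."""
    mu = {}
    for x in elements:
        for y in elements:
            if not leq(x, y):
                continue
            if x == y:
                mu[(x, y)] = 1  # base case of the defining recursion
            else:
                # the sum over x <= z <= y must vanish, so solve for mu(x, y)
                mu[(x, y)] = -sum(mu[(x, z)] for z in elements
                                  if leq(x, z) and leq(z, y) and z != y)
    return mu
```

On the Boolean lattice of subsets of $\{1,2\}$ this recovers $\mu(\emptyset,\{1,2\})=1$, consistent with the sign pattern $(-1)^{n-|\Xcal|}$ in Corollary~\ref{cor:mobius}.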
\begin{corollary}\label{cor:mobius}
Let $G$ be a graph with vertex set $[n]$ such that $L_G$ is a lattice. Let $\Xcal$ be a tubing that contains every maximal tube.
The set of maximal tubings containing $\Xcal$ is an interval $[\Ycal,\Zcal]$ of $L_G$ such that $\mu(\Ycal,\Zcal)=(-1)^{n-|\Xcal|}$. If $[\Ycal,\Zcal]$ is not an interval of this form, then $\mu(\Ycal,\Zcal)=0$.
\end{corollary}
Based on some small examples, we conjecture that Corollary~\ref{cor:mobius} is true even without the assumption that $L_G$ is a lattice.
\subsection{Covering relations and $G$-forests}\label{subsec_Gtrees}
As above, let $G$ be a graph with vertex set $[n]$.
In the following sections, it will be useful to realize $L_G$ as a partial order on the set of $G$-forests.
The advantage to working with $G$-forests, rather than maximal tubings, is that cover relations in $L_G$ are encoded by certain adjacent (covering) pairs in the forest poset.
As in Theorem~\ref{G-trees}, let $T$ be a $G$-forest and let $\Xcal$ be the maximal tubing $\chi(T)$.
Recall that we write $i<_T k$ if $k$ is in the unique path from $i$ to the root.
A covering relation \emph{in} $T$ is a pair $i$ and $k$ such that $i<_Tk$ and also, $i$ and $k$ are adjacent in $T$.
We say that $k$ \emph{covers} $i$ (or $i$ \emph{is covered by} $k$) and write $i{\,\,<\!\!\!\!\cdot\,\,\,}_T k$ (or $k{\,\,\,\cdot\!\!\!\! >\,\,}_T i$).
We say that $k$ has a \emph{lower (resp. upper) cover} if there exists an element $i\in T$ such that $i {\,\,<\!\!\!\!\cdot\,\,\,}_Tk$ (resp. $i{\,\,\,\cdot\!\!\!\! >\,\,}_T k)$.
The following easy lemma will be useful.
\begin{lemma}\label{parent}
Let $T$ be a $G$-tree or $G$-forest.
Each element in $T$ has at most one upper cover.
In particular, if $i<_T j$ and $i<_T k$, then $j$ and $k$ are comparable.
\end{lemma}
\begin{proof}
Suppose that $i$ is less than $j$ and $k$ in $T$.
Then the tubes $j_\downarrow$ and $k_\downarrow$ have nonempty intersection.
Thus, either $j_\downarrow\subseteq k_\downarrow$ or $k_\downarrow \subseteq j_\downarrow$; that is, $j$ and $k$ are comparable. In particular, each element has at most one upper cover.
\end{proof}
We say that the pair $(i,k)$ is a \emph{descent of $T$} if $k{\,\,<\!\!\!\!\cdot\,\,\,}_T~i$ and $i<k$ as integers.
Dually, the pair $(i,k)$ is an \emph{ascent of $T$} if $k{\,\,<\!\!\!\!\cdot\,\,\,}_T~i$ and $i>k$ as integers.
The next proposition follows from Theorem~\ref{G-trees}.
\begin{proposition}\label{cor: covering relations}
Suppose that $T$ is a $G$-forest and $\Xcal$ is the corresponding maximal tubing~$\chi(T)$.
\begin{itemize}
\item The descents $(i,k)$ of $T$ correspond bijectively to the covering relations $\Xcal {\,\,\,\cdot\!\!\!\! >\,\,} \Xcal'$ in~$L_G$.
\item The ascents $(i,k)$ of $T$ correspond bijectively to the covering relations $\Xcal'' {\,\,\,\cdot\!\!\!\! >\,\,} \Xcal$ in $L_G$.
\end{itemize}
\end{proposition}
\begin{proposition}\label{cover relations}
Let $T$ be a $G$-forest with descent $(i,k)$, and let $\Xcal=\chi(T)$ be its corresponding maximal tubing.
Write the ideal $\{x: x<_{T} k\}$ as the disjoint union of tubes $Y_1\cup \cdots \cup Y_t$.
Then swapping $i$ and $k$, we obtain a $G$-forest covered by $T$ in $L_G$, whose corresponding maximal tubing is $$\Xcal\setminus \{k_\downarrow\} \cup \left\{ i_\downarrow \setminus \left(\{k\} \cup \bigcup Y_{a_j}\right) \right\},$$
where the union $\bigcup Y_{a_j}$ is over all $Y_{a_j}\in \{Y_1,\ldots, Y_t\}$ such that $Y_{a_j}\cup \{i\}$ is not a tube.
(Throughout $x_\downarrow$ is interpreted as the principal order ideal in $T$.)
\end{proposition}
\begin{proof}
Write $S$ for $ i_\downarrow \setminus \left(\{k\} \cup \bigcup Y_{a_j}\right)$ and $\Ycal$ for $\Xcal\setminus \{k_\downarrow\} \cup \{S\}$.
First we show that $\Ycal$ is a maximal tubing.
Observe that $S$ is a tube.
We check that each tube $I$ in $\Xcal\setminus \{k_\downarrow\}$ is compatible with $S$.
Since both $I$ and $i_\downarrow$ are tubes in $\Xcal$, either $I\subset i_\downarrow$, $I\supset i_\downarrow$, or $I\cup i_\downarrow$ is not a tube.
If $I\supset i_\downarrow$ or $I\cup i_\downarrow$ is not a tube, then the fact that $S\subset i_\downarrow$ implies that $I$ and $S$ are compatible.
So, we assume that $I$ is a subset of $i_\downarrow$.
Write $X_1\cup X_2\cup \cdots \cup X_r$ for the ideal $\{x: x<_{T} i\}$.
Since $i{\,\,\,\cdot\!\!\!\! >\,\,}_{T} k$ we have $k_\downarrow= X_l$ for some $l$.
Thus $I\subseteq X_j$ for some $j\ne l$ or $I\subseteq Y_s$ for some $s\in [t]$.
If $I\subseteq X_j$ then it follows immediately that $I\subseteq S$.
Similarly, if $I\subseteq Y_s$ and $Y_s\cup \{i\}$ is a tube, then $I\subseteq S$.
Assume that $I\subseteq Y_s$ and $Y_{s}\cup \{i\}$ is not a tube.
Then $I\not\subseteq S$ and $S\not\subseteq I$.
We claim that $Y_s\cup S$ is not a tube.
Observe that $X_j$ and $Y_s$ are compatible in $\Xcal$, and neither $X_j\subseteq Y_s$ nor $Y_s\subseteq X_j$, for each $j\in [r]$ with $j\ne l$.
Thus, $X_j\cup Y_s$ is not a tube.
The same argument shows that $Y_j\cup Y_s$ is not a tube, for each $j\in[t]$ with $j\ne s$.
Thus $Y_s\cup S$ is not a tube, and hence $I\cup S$ is not a tube.
We conclude that $I$ and $S$ are compatible.
We conclude that $\Ycal$ is a maximal tubing of $G$.
Since $\Ycal$ differs from $\Xcal$ by a flip, it follows that $\Xcal$ covers $\Ycal$ in~$L_G$.
\end{proof}
\section{Lattices}\label{sec:lattice}
\subsection{Lattices and lattice congruences}
Recall that a poset $L$ is a \emph{lattice} if each pair of elements $x$ and $y$
has a greatest lower bound, or \emph{meet}, $x\wedge y$, and
has a least upper bound, or \emph{join}, $x\vee y$.
Throughout we assume that $L$ is finite.
A set map $\phi: L\to L'$ is a \emph{lattice map} if it satisfies $\phi(x\wedge y) = \phi(x)\wedge \phi(y)$ and $\phi(x\vee y) = \phi(x)\vee \phi(y)$.
We say that $\phi$ preserves both the meet and join operations.
When $\phi$ is surjective, we say that it is a \emph{lattice quotient map} and $L'$ is a lattice quotient of $L$.
We say that $\phi$ is a \emph{meet (join) semilattice map} if it preserves the meet (join) operation, and the image $\phi(L)$ is called a \emph{meet (join) semilattice quotient} of $L$.
To determine whether a given set map $\phi: L\to L'$ preserves either the meet or join operations, we consider the equivalence relation on $L$ induced by the fibers of $\phi$.
That is, set $x\equiv y \mod \Theta_\phi$ if $\phi(x)=\phi(y)$.
\begin{definition}\label{def: cong}
Let $L$ be a finite lattice, and let $\Theta$ be an equivalence relation on $L$.
We say that $\Theta$ is a \emph{lattice congruence} if it satisfies both of the following conditions for each $x,y,$ and $z$ in $L$.
\begin{equation}\label{meet-preserving}
\text{ if $x\equiv y \mod \Theta$ then $x\wedge z\equiv y\wedge z\mod \Theta$}\tag{$\Mcal$}
\end{equation}
\begin{equation}\label{join-preserving}
\text{ if $x\equiv y \mod \Theta$ then $x\vee z\equiv y\vee z\mod \Theta$}\tag{$\Jcal$}
\end{equation}
We say that $\Theta$ is a \emph{meet (join) semilattice congruence} if $\Theta$ satisfies~\ref{meet-preserving} (\ref{join-preserving}).
\end{definition}
Observe that $\phi: L\to L'$ preserves the meet (join) if and only if the equivalence relation $\Theta_\phi$ induced by its fibers is a meet (join) semilattice congruence.
The next proposition implies that each meet semilattice congruence on $L$ gives rise to a meet semilattice quotient.
\begin{proposition}\label{meet cong}
Let $\Theta$ be an equivalence relation on $L$.
Then $\Theta$ is a meet semilattice congruence if and only if $L$ satisfies each of the following conditions:
\begin{enumerate}
\item Each $\Theta$-class has a unique minimal element;
\item the map $\pi_\downarrow^\Theta:L\to L$ which sends $x$ to the unique minimal element in its $\Theta$-class is order preserving.
\end{enumerate}
In particular, the subposet of $L$ induced by $\pi_\downarrow^\Theta(L)$ is a meet semilattice quotient of $L$.
\end{proposition}
\begin{proof}
The proof of the first statement can be found in \cite[Proposition~9-5.2]{reading:2016lattice}.
We assume that $\Theta$ is a meet semilattice congruence or, equivalently, that the two conditions above hold.
We check that the subposet of $L$ induced by the image $\pi_\downarrow^{\Theta}(L)$ is a meet semilattice and that $\pi_\downarrow^{\Theta}$ is a meet semilattice map.
Suppose that $x$ and $y$ belong to $\pi_\downarrow^\Theta(L)$.
We write $x\wedge_\Theta y$ to distinguish the meet operation in $\pi_\downarrow^\Theta(L)$ from the meet operation in $L$.
(In general, these are different operations; that is, $x\wedge_\Theta y \ne x\wedge y$.)
It is enough to show that the meet $x\wedge_{\Theta} y$ is equal to $\pi_\downarrow^{\Theta}(x\wedge y)$.
Because $\pi_\downarrow^\Theta$ is order preserving, we have $\pi_\downarrow^{\Theta}(x\wedge y)\le x, y$.
If $z\in \pi_\downarrow^\Theta(L)$ and $z$ is a common lower bound for $x$ and $y$ then $z\le x\wedge y$.
Applying the fact that $\pi_\downarrow^\Theta$ is order preserving again, we have $z=\pi_\downarrow^\Theta(z) \le \pi_\downarrow^\Theta(x\wedge y)$.
\end{proof}
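Conditions (1) and (2) of the proposition can be checked mechanically on small examples. The sketch below is our own illustration (names hypothetical): it tests an equivalence relation, given as a list of classes, against both conditions.

```python
def is_meet_semilattice_congruence(elements, leq, classes):
    """Test conditions (1) and (2): every class has a unique minimal
    element, and the projection onto class minima is order-preserving."""
    proj = {}
    for cls in classes:
        minima = [x for x in cls
                  if all(not leq(y, x) for y in cls if y != x)]
        if len(minima) != 1:  # condition (1) fails
            return False
        for x in cls:
            proj[x] = minima[0]
    # condition (2): x <= y must imply proj(x) <= proj(y)
    return all(leq(proj[x], proj[y])
               for x in elements for y in elements if leq(x, y))
```

For instance, on the chain $0<1<2<3$ the partition $\{0\},\{1,2\},\{3\}$ passes both conditions, while $\{0,3\},\{1\},\{2\}$ fails condition (2).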
The set $\Con(L)$ of lattice congruences of $L$ forms a distributive lattice under the refinement order.
That is, $\Theta\leq\Theta^{\pr}$ holds if $x\equiv y\mod\Theta$ implies $x\equiv y\mod\Theta^{\pr}$ for $x,y\in L$.
Hence, when $\Con(L)$ is finite, it is the lattice of order ideals of its subposet of join-irreducible elements.
If $L$ is a lattice with a cover relation $x\lessdot y$, the \emph{contraction} $\con(x,y)$ is the most refined lattice congruence identifying $x$ and $y$.
It is known that $\con(x,y)$ is join-irreducible, and if $L$ is finite, then every join-irreducible lattice congruence is of this form \cite[Proposition 9-5.14]{reading:2016lattice}.
\subsection{Lattice congruences of the weak order}\label{subsec_weak_cong}
Recall $x\le y$ in the weak order on $\mathfrak{S}_n$ if $\operatorname{inv}(x) \subseteq \operatorname{inv}(y)$, where $\operatorname{inv}(x)$ is the set of inversions of $x$.
(A pair $(i,k)$ is an \emph{inversion} of $x$ if $i<k$, and $k$ precedes $i$ in $x=x_1\ldots x_n$.
That is, $i=x_s$ and $k=x_r$, where $r<s$.)
It is well-known that the weak order on $\mathfrak{S}_n$ is a lattice.
A \emph{descent} of $x$ is an inversion $(i,k)$ such that $i$ and $k$ are consecutive in $x_1\ldots x_n$.
That is, $i=x_s$ and $k=x_{s-1}$, where $s\in \{2,\ldots, n\}$.
The \emph{descent set} $\des(x)$ of $x$ is the set of all descents of $x$.
An \emph{ascent} is a noninversion $(i,k)$ in which $i=x_{s-1}$ and $k=x_{s}$.
If $y_s=i$ and $y_{s-1}=k$ is a descent of $y$, then swapping the positions of $i$ and $k$ we obtain a permutation $x$ (with $x_j=y_j$ for each $j\in [n]\setminus \{s-1, s\}$, $x_{s-1}=i$, and $x_s=k$) that is covered by $y$ in the weak order.
The lower cover relations $y{\,\,\,\cdot\!\!\!\! >\,\,} x$ correspond bijectively to the descents of $y$.
Dually, the upper cover relations $y{\,\,<\!\!\!\!\cdot\,\,\,} y'$ correspond bijectively to the ascents of $y$.
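These inversion and descent conventions are easy to make concrete. In the sketch below (an illustration with our own function names), permutations are tuples, the weak order is containment of inversion sets, and lower covers are produced by swapping each adjacent descent pair:

```python
def inversions(w):
    """Pairs (i, k) with i < k such that k precedes i in w."""
    pos = {v: s for s, v in enumerate(w)}
    return {(i, k) for i in w for k in w if i < k and pos[k] < pos[i]}

def weak_leq(x, y):
    """x <= y in the weak order iff inv(x) is contained in inv(y)."""
    return inversions(x) <= inversions(y)

def lower_covers(y):
    """Swap each adjacent descent pair y_{s-1} > y_s of y."""
    return [y[:s - 1] + (y[s], y[s - 1]) + y[s + 1:]
            for s in range(1, len(y)) if y[s - 1] > y[s]]
```

For example, $32514$ is a lower cover of $35214$, obtained from the descent $(2,5)$.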
The following lemma is immediate.
\begin{lemma}\label{descents and inversions}
Suppose that $x'> x$ in the weak order on $\mathfrak{S}_n$.
Then there exists a descent $(i,k)$ of $x'$ which is not an inversion of $x$.
Swapping $i$ and $k$ in $x'$ we obtain a permutation $x''$ which satisfies $x'{\,\,\,\cdot\!\!\!\! >\,\,} x''\ge x.$
\end{lemma}
Recall that each pair $x{\,\,<\!\!\!\!\cdot\,\,\,} y$ maps to a join-irreducible congruence $\con(x,y)$ in $\Con(\mathfrak{S}_n)$.
For $n>2$, this map is not injective.
We can obtain a bijection by restricting to pairs $x{\,\,<\!\!\!\!\cdot\,\,\,} y$ where $y$ is join-irreducible.
Below, we make this bijection explicit with the combinatorics of arc diagrams.
An \emph{arc} is a triple $\alpha=(i,k, \epsilon)$ where $1\leq i<k\leq n$ and $\epsilon=(\epsilon_1,\ldots,\epsilon_{k-i-1})$ such that $\epsilon_h\in\{+,-\}$ for $h\in[k-i-1]$.
Listing the numbers $1,\ldots,n$ vertically, an arc is typically drawn as a path from $i$ to $k$ that goes to the left of $j$ if $\epsilon_{j-i}=+$ and to the right of $j$ if $\epsilon_{j-i}=-$.
\begin{figure}
\centering
\includegraphics{arc}
\caption{\label{fig_arc}The arc $(2,5,(-,+))$}
\end{figure}
If $x\lessdot y$ is a cover relation of permutations that swaps $i$ and $k$, then we define $\alpha(x,y)=(i,k,\epsilon)$ to be the arc such that for $i<j<k$:
$$\epsilon_{j-i}=\begin{cases}+\ \mbox{if }x^{-1}(j) > x^{-1}(i)\\-\ \mbox{if }x^{-1}(j) < x^{-1}(k)\end{cases}.$$
For example, $\alpha(32514,35214)=(2,5,(-,+))$ is the arc in Figure~\ref{fig_arc}.
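The displayed formula can be transcribed directly. The helper below (a sketch; the function name is ours) recovers the arc of a cover relation by reading positions in the lower permutation $x$:

```python
def arc(x, y):
    """Arc alpha(x, y) of a weak-order cover x <. y that swaps i < k.
    Signs follow the displayed formula, using positions in x."""
    s = next(t for t in range(len(x)) if x[t] != y[t])
    i, k = x[s], x[s + 1]  # x = ... i k ...  and  y = ... k i ...
    pos = {v: t for t, v in enumerate(x)}
    eps = tuple('+' if pos[j] > pos[i] else '-' for j in range(i + 1, k))
    return (i, k, eps)
```

On the example in the text this returns $(2,5,(-,+))$.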
Given an arc $(x,y,\epsilon)$, write $\{l_1<l_2<\ldots < l_p\}$ for the set $\{x':\epsilon_{x'-x} = -\}$ and $\{r_1<r_2<\ldots<r_q\}$ for the set $\{y': \epsilon_{y'-x} = +\}$.
Informally, $l_1<\ldots<l_p$ are the nodes on the left side of the arc $(x,y,\epsilon)$, and $r_1<\ldots<r_q$ are the nodes on the right side of the arc.
The next result follows from \cite[Proposition~2.3]{reading:2015noncrossing}.
\begin{proposition}\label{arc to perm}
Let $\alpha=(x,y,\epsilon)$ be an arc, and let $l_1<\ldots<l_p$ and $r_1<\ldots<r_q$ be defined as above.
Then, among all permutations $w\in \mathfrak{S}_n$ such that $\alpha(u,w) =\alpha$ for some $u$ covered by $w$, the unique minimal element is
\[j_\alpha= 12\ldots (x-1) l_1\ldots l_p \,y\, x\, r_1\ldots r_q (y+1) (y+2) \ldots n\]
In particular, $j_\alpha$ is join-irreducible.
\end{proposition}
The map $\alpha$ induces a bijection between join-irreducible lattice congruences and join-irreducible permutations.
\begin{theorem}\label{thm_weak_arcs}
Given two cover relations $x\lessdot y$ and $x^{\pr}\lessdot y^{\pr}$, we have $\con(x,y)=\con(x^{\pr},y^{\pr})$ if and only if $\alpha(x,y)=\alpha(x^{\pr},y^{\pr})$.
\end{theorem}
In light of Theorem~\ref{thm_weak_arcs}, we will identify a join-irreducible lattice congruence $\Theta^{\alpha}$ of the weak order by its associated arc $\alpha$. For arcs $\alpha,\beta$, we say that $\alpha$ \emph{forces} $\beta$ if $\Theta^{\beta}\leq\Theta^{\alpha}$. An arc $\alpha=(i,k,\epsilon)$ is a \emph{subarc} of $\beta=(i^{\pr},k^{\pr},\epsilon^{\pr})$ if $i^{\pr}\leq i<k\leq k^{\pr}$ and for all $j\in[k-i-1]$, $\epsilon_j=\epsilon_{j+i-i^{\pr}}^{\pr}$. The following theorem is \cite[Theorem 4.4]{reading:2015noncrossing}, which is a translation of \cite{reading:2004lattice}.
\begin{theorem}\label{thm_forcing_arcs}
Given arcs $\alpha$ and $\beta$, $\alpha$ forces $\beta$ if and only if $\alpha$ is a subarc of $\beta$.
\end{theorem}
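Since the subarc condition is a finite check, forcing can be tested directly. The sketch below is our own transcription of the definition, with the sign sequence stored as a 0-indexed tuple:

```python
def is_subarc(alpha, beta):
    """alpha = (i, k, eps) is a subarc of beta = (ip, kp, epsp) if the
    endpoints nest and the signs agree on the shared interior nodes."""
    i, k, eps = alpha
    ip, kp, epsp = beta
    if not (ip <= i < k <= kp):
        return False
    # eps_j = epsp_{j + i - ip} for j in [k - i - 1], shifted to 0-indexing
    return all(eps[j - 1] == epsp[j + i - ip - 1] for j in range(1, k - i))
```

For instance, $(2,4,(+))$ is a subarc of $(1,4,(+,+))$, so it forces that arc.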
We say that a lattice congruence $\Theta$ \emph{contracts} an arc if $\Theta^{\alpha} \le \Theta$.
At times we say that $\Theta$ contracts a pair $x{\,\,<\!\!\!\!\cdot\,\,\,} y$, when we mean $\Theta^{\alpha(x,y)} \le \Theta$.
Equivalently, $x\equiv_\Theta y$.
Similarly, $\alpha$ is \emph{uncontracted} if $\Theta^{\alpha}\not\le \Theta$.
In this case each pair $x{\,\,<\!\!\!\!\cdot\,\,\,} y$ with $\alpha(x,y)=\alpha$ belongs to a distinct $\Theta$-class.
\begin{figure}
\centering
\includegraphics{weakquot}
\caption{\label{fig_cong}A lattice congruence of the weak order}
\end{figure}
\begin{example}
The weak order on $\mathfrak{S}_4$ is shown in Figure~\ref{fig_cong}. Permutations connected by blue zigzags are equivalence classes of the lattice congruence that contracts the arcs $(2,4,(+)),\ (1,4,(+,+))$ and $(1,4,(-,+))$. The first arc is a subarc of the latter two, so the congruence is the join-irreducible $\Theta^{(2,4,(+))}$.
\end{example}
The following is \cite[Corollary~4.5]{reading:2015noncrossing}.
\begin{corollary}\label{uncontracted}
A set $U$ of arcs is the set of arcs that are uncontracted by some lattice congruence $\Theta$ if and only if $U$ is closed under taking subarcs.
\end{corollary}
\begin{example}\label{subgraphs}
Let $V\subset [n]$, and consider the map $\rho: \mathfrak{S}_n\to \mathfrak{S}_{V}$ which sends the permutation $w=w_1\ldots w_n$ to the subword of $w$ obtained by deleting each $w_i\not\in V$.
Let $\Theta$ denote the smallest (or most refined) lattice congruence on $\mathfrak{S}_n$ in which $\rho(x)= \rho(y)$ implies that $x\equiv _\Theta y$.
We claim that $\rho$ is a lattice map if and only if $V$ is an interval \cite[Example~2.2]{reading:2017homomorphisms}.
First assume that $\rho$ is a lattice map, or, equivalently, the classes of $\Theta$ are precisely the fibers of $\rho$.
Then $\alpha(x,y)$ is uncontracted by $\Theta$ whenever $\rho(x)\ne \rho(y)$.
This happens precisely when $x=(i,k)y$ and $i,k\in V$.
Thus, the set of arcs uncontracted by $\Theta$ is $\{(i,k,\epsilon): i,k\in V\}$.
Since this set must be closed under taking subarcs, it follows that $V$ is an interval.
Conversely, if $V$ is an interval then the set $U=\{(i,k,\epsilon): i,k\in V\}$ is the set of uncontracted arcs for some lattice congruence $\Theta'$ (because this set is closed under taking subarcs).
For each $x{\,\,<\!\!\!\!\cdot\,\,\,} y$, we have $\alpha(x,y)\notin U$ if and only if $\rho(x)=\rho(y)$.
Thus $\Theta'= \Theta$, and the equivalence classes of $\Theta$ are precisely the fibers of $\rho$.
\end{example}
\subsection{Map from permutations to $G$-forests}
Recall that for any graph $G$ with vertex set $[n]$ and permutation $w=w_1\ldots w_n$, we have $\Psi_G(w) = \{X_1,\ldots, X_n\}$, where $X_j$ is the largest tube contained in $\{w_1,\ldots,w_j\}$ that contains $w_j$.
That is, $X_j$ is the set of vertices of the connected component of $G|_{ \{w_1,\ldots,w_j\}}$ containing $w_j$.
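This description of $\Psi_G$ is directly computable. The sketch below is our own illustration (the graph is an adjacency dict); it builds each tube $X_j$ as the component of the $j$-th prefix containing $w_j$:

```python
def psi(adj, w):
    """Tubing Psi_G(w): for each prefix {w_1, ..., w_j}, the vertex set
    of the component of the induced subgraph containing w_j."""
    tubes, seen = set(), set()
    for v in w:
        seen.add(v)
        comp, frontier = {v}, [v]  # DFS within the current prefix
        while frontier:
            u = frontier.pop()
            for u2 in adj[u]:
                if u2 in seen and u2 not in comp:
                    comp.add(u2)
                    frontier.append(u2)
        tubes.add(frozenset(comp))
    return tubes
```

For the path $1$--$2$--$3$ and $w = 132$ this yields the maximal tubing $\{\{1\},\{3\},\{1,2,3\}\}$.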
Next, we recursively describe the surjection $\Psi_G: \mathfrak{S}_n\to L_G$ as a map onto the set of $G$-trees.
Given a connected graph $G$ with vertex set $[n]$ and a permutation $w=w_1\ldots w_n$, we recursively construct a $G$-tree $T=\Psi_G(w)$ as follows:
Let $w_n$ be the root of $T$.
Let $G_1,\ldots, G_r$ be the connected components of the subgraph induced by $\{w_1,\ldots, w_{n-1}\}$.
Restricting $w_1\ldots w_{n-1}$ to each component $G_i$ gives a subword of $w$.
We apply the construction to each subword to obtain subtrees $T_1,\ldots, T_r$.
Finally, we attach each subtree to the root $w_n$.
The next proposition follows from \cite[Corollary~3.9]{postnikov.reiner.williams:2008faces}.
\begin{proposition}\label{linear extensions}
The fiber $\Psi_G^{-1}(T)\subseteq \mathfrak{S}_n$ is the set of linear extensions of~$T$.
\end{proposition}
The authors of \cite{postnikov.reiner.williams:2008faces} define a special section of the map $\Psi_G$, whose image we describe below.
See \cite[Definition~8.7]{postnikov.reiner.williams:2008faces} and Proposition~\ref{prw 8.9}.
\begin{definition}\label{b-permutations}
\normalfont
Let $G$ be a graph with vertex set $[n]$.
A permutation $w$ in $\mathfrak{S}_n$ is a \emph{$G$-permutation} provided that, for each $i\in[n]$, the entries $w_i$ and $\max\{w_1,\ldots, w_i\}$ lie in the same connected component of $G|_{\{w_1,\ldots, w_i\}}$.
\end{definition}
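The definition above amounts to a prefix-by-prefix connectivity check, sketched below (our own helper, with the graph given as an adjacency dict):

```python
def is_g_permutation(adj, w):
    """w is a G-permutation iff, for every prefix, w_i and the maximum
    of the prefix lie in the same component of the induced subgraph."""
    def component(v, allowed):
        comp, frontier = {v}, [v]
        while frontier:
            u = frontier.pop()
            for u2 in adj[u]:
                if u2 in allowed and u2 not in comp:
                    comp.add(u2)
                    frontier.append(u2)
        return comp
    return all(max(w[:i]) in component(w[i - 1], set(w[:i]))
               for i in range(1, len(w) + 1))
```

For the path $1$--$2$--$3$, the word $132$ is a $G$-permutation, while $312$ is not: after the prefix $31$, the maximum $3$ is disconnected from $w_2=1$.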
The following lemma is \cite[Proposition~8.10]{postnikov.reiner.williams:2008faces}.
\begin{lemma}\label{lex smallest}
Let $T$ be a $G$-forest and let $w\in \Psi_G^{-1}(T)$.
Then $w$ is a $G$-permutation if and only if it is the lexicographically minimal linear extension of $T$.
\end{lemma}
Let $V$ be a subset of the vertex set $[n]$, and let $G'$ be the subgraph of $G$ induced by $V$.
We write $\rho_{G'}: L_G\to L_{G'}$ for the map which takes a maximal tubing $\Xcal$ to $\Xcal|_{V}$.
Similarly, write $\rho_{V}: \mathfrak{S}_n\to \mathfrak{S}_{V}$ for the map which sends a permutation $w=w_1w_2\ldots w_n$ to the subword in which we delete each $w_i\notin V$ (without changing the order of the remaining entries).
\begin{lemma}\label{lem: interval lemma}
Let $G$ be a graph with vertex set $[n]$.
If $\Psi_G:\mathfrak{S}_n \to L_G$ is a lattice map, then for each $V\subseteq[n]$ and induced subgraph $G'=G|_{V}$, the map $\Psi_{\std(G')}: \mathfrak{S}_{\std(V)} \to L_{\std(G')}$ is a lattice map.
\end{lemma}
\begin{proof}
Let $V=\{a_1<a_2<\cdots<a_r\}$ and $[n]\setminus V = \{b_1<b_2<\cdots<b_s\}$, where $r+s=n$.
We consider the interval $I$ in the weak order on $\mathfrak{S}_n$ whose elements consist of all of the permutations on $V$ followed by the fixed permutation $b_1b_2\ldots b_s$.
Observe that for each $\Xcal\in \Psi_G(I)$, the set $V$ is an ideal in $\Xcal$.
Indeed, let $w=w_1\ldots w_rb_1\ldots b_s$ be a permutation in $I$ and let $\Psi_G(w)=\Xcal=\{X_1\ldots, X_n\}$.
For each connected component $H$ of $G'$ there is a largest integer $j$ in $[r]$ such that $w_j\in H$.
Recall that $X_j$ is the set of vertices of the connected component of $G|_{\{w_1,\ldots w_j\}}$ that contains~$w_j$.
Thus $X_j=H$.
We claim that the following diagram commutes.
\begin{center}
\begin{tikzpicture}[scale=0.75]
\node (A) at (0,0) {$I$};
\node(B) at (3,0) {$\Psi_G(I)$};
\node (C) at (0,-2) {$\mathfrak{S}_{V}$};
\node (D) at (3,-2) {$L_{G'}$};
\node (E) at (-0.25,-3.2) {$\mathfrak{S}_{\std(V)}$};
\node (F) at (3.35, -3.2) {$L_{\std(G')}$};
\node at (1.65,.25) {\scriptsize{$\Psi_G$}};
\node at (-.35,-1) {\scriptsize{$\rho_{V}$}};
\node at (3.35,-1) {\scriptsize{$\rho_{G'}$}};
\node at (1.65,-1.75) {\scriptsize{$\Psi_{G'}$}};
\node at (1.6,-2.85) {\scriptsize{$\Psi_{\std(G')}$}};
\draw[->>] (A.east)--(B.west);
\draw[->] (A.south)--(C.north);
\draw[->>] (C.east)--(D.west);
\draw[->] (B.south)--(D.north);
\draw[-] (3.,-3) -- (3, -2.5);
\draw[-] (3.1,-3) -- (3.1, -2.5);
\draw[-] (-.05,-3) -- (-.05, -2.5);
\draw[-] (.05,-3) -- (.05, -2.5);
\draw[->>] (E.east) -- (F.west);
\end{tikzpicture}
\end{center}
Set $\Psi_G(w):=\Xcal$ and $\Psi_{G'}(\rho_V(w)):=\Ycal$, where $w= w_1\ldots w_rb_1\ldots b_s$ as above.
Observe that the word $w_1 w_2\cdots w_r$ is a linear extension of both $\tau(\Xcal|_V)$ and $\tau(\Ycal)$.
Therefore, $\rho_{G'}(\Psi_{G}(w))=\Xcal|_{V}=\Ycal=\Psi_{G'}(\rho_{V}(w))$, and the diagram commutes.
Next, we check that $\rho_{G'}: \Psi_{G}(I)\to L_{G'}$ is a poset isomorphism.
Because $\rho_{V}:I\to \mathfrak{S}_V$ is a poset isomorphism, it follows that $\rho_{G'}$ is surjective.
Suppose that $\Xcal, \Ycal\in \Psi_G(I)$ with $\Xcal|_V =\Zcal= \Ycal|_V.$
We will argue that $\tau(\Xcal)=T_{\Xcal}$ and $\tau(\Ycal)= T_{\Ycal}$ are equal.
The only possible difference between $T_\Xcal$ and $T_\Ycal$ must occur among the elements of $V$.
(Each $i,k\notin V$ that are in the same connected component of $G|_{[n]\setminus V}$ are linearly ordered by $b_1<b_2<\cdots<b_s$.)
We write $<_\Xcal$ for the order relation in $T_\Xcal$ and similarly $<_\Ycal$ for the relation in $T_\Ycal$.
Assume that $i<_\Xcal k$ and $i\not<_\Ycal k$ for some $i,k\in V$.
Observe that the pair must be incomparable in the $G'$-forest $\tau(\Zcal)$.
Thus, $\{j\in [n]: j\le_\Xcal k\}\cap V$ is not a tube.
But since $V$ is an ideal in $\Xcal$, we have $\{j\in [n]: j\le_\Xcal k\}\cap V = \{j\in [n]: j\le_\Xcal k\}$.
The latter is clearly a tube.
By this contradiction, we conclude that $\Xcal= \Ycal$, as desired.
\end{proof}
\section{Lattices of maximal tubings}\label{sec_lattice}
\subsection{Right-filled graphs}
We say that a graph $G$ with vertex set $[n]$ and edge set $E$ is \emph{right-filled} provided that the following implication holds:
\begin{equation*}\label{right filled}
\text{If $\{i,k\}\in E$ then $\{j,k\}$ also belongs to $E$ for each $1\le i<j<k\le n$.}\tag{RF}
\end{equation*}
Dually, we say that $G$ is \emph{left-filled} provided that:
\begin{equation*}\label{left filled}
\text{If $\{i,k\}\in E$ then $\{i,j\}$ also belongs to $E$ for each $1\le i<j<k\le n$.}\tag{LF}
\end{equation*}
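Both closure conditions are straightforward to verify for a concrete edge set; the following sketch (our own helpers, purely illustrative) does so:

```python
def is_right_filled(edges):
    """(RF): {i, k} an edge and i < j < k implies {j, k} is an edge."""
    E = {frozenset(e) for e in edges}
    return all(frozenset({j, k}) in E
               for i, k in (sorted(e) for e in E)
               for j in range(i + 1, k))

def is_left_filled(edges):
    """(LF): {i, k} an edge and i < j < k implies {i, j} is an edge."""
    E = {frozenset(e) for e in edges}
    return all(frozenset({i, j}) in E
               for i, k in (sorted(e) for e in E)
               for j in range(i + 1, k))
```

For example, the star with edges $\{1,3\},\{2,3\}$ is right-filled but not left-filled, consistent with the duality via $G^*$ noted in Remark~\ref{rmk: dualizing and left-filled}.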
The goal of this section is two-fold:
First we show that if $G$ is right-filled, then the subposet of the weak order induced by the set of $G$-permutations in~$\mathfrak{S}_n$ is a lattice.
In fact, we show that this subposet is a meet semilattice quotient of the weak order.
(See Corollary~\ref{g-perm lattice cong}.)
Second, we prove that $L_G$ is isomorphic to the subposet of the weak order induced by the set of $G$-permutations in~$\mathfrak{S}_n$.
Hence, $L_G$ is a lattice.
(See Theorem~\ref{inversion order}.)
\begin{remark}\label{rmk: dualizing and left-filled}
Recall that $G^*$ is the graph obtained from $G$ by swapping the labels $i$ and $n+1-i$ for all $i\in [n]$.
Observe that $G$ is right-filled if and only if $G^*$ is left-filled.
Lemma~\ref{graph duality} says that $L_{G^*} \cong {L_G}^*$, thus we obtain dual versions of Corollary~\ref{g-perm lattice cong} and Theorem~\ref{inversion order} when $G$ is left-filled.
Some care is required.
In particular, we note that for left-filled graphs, $L_G$ is \emph{not} isomorphic to the subposet induced by the set of $G$-permutations.
\end{remark}
\begin{proposition}\label{connected}
Suppose that $G$ is a right-filled graph with vertex set $[n]$ and connected components $G_i=(V_i, E_i)$ where $i\in [s]$ and $s\ge 2$.
If $\Psi_i: \mathfrak{S}_{V_i} \to L_{G_i}$ is a lattice map for each $i$, then $\Psi_G:\mathfrak{S}_n\to L_G$ is a lattice map.
\end{proposition}
\begin{proof}
We claim that each $V_i$ is an interval.
Write $m_i$ for $\min(V_i)$ and $M_i$ for $\max(V_i)$.
Observe that each geodesic $M_i=q_0, q_1, \ldots, q_k=m_i$ in the graph $G$ monotonically decreases.
That is, $q_0>q_1>\ldots>q_k$.
Indeed, if there exists $q_r>q_{r+1}<q_{r+2}$, then the~\ref{right filled} property implies that $q_r$ and $q_{r+2}$ are adjacent, contradicting the minimality of the geodesic.
Applying the~\ref{right filled} property again, each closed interval $[q_{r+1}, q_r]$ is contained in $V_i$.
Thus, $V_i$ is an interval.
Observe that the following diagram commutes.
By Lemma~\ref{lem_decomposition}, the vertical map from $L_G$ to $\prod_{i=1}^sL_{G_i}$ is an isomorphism.
The vertical map from $\mathfrak{S}_n$ is $\rho:=\prod_{i=1}^s\rho_{V_i}$, where $\rho_{V_i}$ is the restriction map from Example~\ref{subgraphs}.
Since each $V_i$ is an interval, $\rho$ is a lattice map.
The statement of the proposition now follows.
\begin{center}
\begin{tikzpicture}[scale=0.75]
\node (A) at (0,0) {$\mathfrak{S}_n$};
\node(B) at (5,0) {$L_G$};
\node (C) at (0,-2) {$\prod_{i=1}^s\mathfrak{S}_{V_i}$};
\node (D) at (5,-2) {$\prod_{i=1}^s L_{G_i}$};
\node at (2.75,.35) {\scriptsize{$\Psi_G$}};
\node at (2.5,-1.7) {\scriptsize{$\prod_{i=1}^s\Psi_{G_{i}}$}};
\draw[->>] (A.east)--(B.west);
\draw[->>] (A.south)--(C.north);
\draw[->>] (C.east)--(D.west);
\draw[->] (B.south)--(D.north);
\end{tikzpicture}
\end{center}
\end{proof}
With Proposition~\ref{connected} in hand, we assume throughout that $G$ is connected.
We realize $L_G$ as a poset on the set of $G$-trees, where $T\le T'$ if and only if $\chi(T)\le \chi(T')$, where $\chi$ is the bijection $T\mapsto \{x_\downarrow: x\in [n]\}$ from Theorem~\ref{G-trees}.
\begin{lemma}\label{lem: child relations}
Let $G$ be a left or right-filled graph, and let $T\in L_G$.
If $x_1$ and $x_2$ are incomparable in $T$,
then there does not exist any triple $i<j<k$ such that $i$ and $k$ belong to ${x_1}_\downarrow$ and $j\in {x_2}_\downarrow$.
\end{lemma}
\begin{proof}
Consider a pair $i<k$ in ${x_1}_\downarrow$ such that $i<k-1$.
Because ${x_1}_\downarrow$ is a tube, there is a path $i=q_0,\ldots, q_m=k$ in $G$ such that each $q_l$ belongs to ${x_1}_\downarrow$.
Choose such a path so that $m$ is minimal.
We argue by induction on $m$ that there exists no vertex $j$ in ${x_2}_\downarrow$ satisfying $i<j<k$.
Observe that if $j\in {x_2}_\downarrow$ then neither $\{i,j\}$ nor $\{j,k\}$ is an edge in $G$.
Thus the base case holds because $G$ is either right-filled or left-filled.
Now assume that $m>1$, let $j\in \{i+1,\ldots, k-1\}$, and for the moment assume that $G$ is right-filled.
Consider $q_{m-1}$.
If $q_{m-1}<j$, then the \ref{right filled}-property implies that $j$ and $k$ are adjacent.
Hence $j\not \in {x_2}_\downarrow$.
If $j<q_{m-1}$ then we have $i<j<q_{m-1}$, and $i$ and $q_{m-1}$ are connected by a path of length~$m-1$.
By induction $j\not \in {x_2}_\downarrow$, and the statement follows.
If $G$ is left-filled the proof is similar, except that we compare $j$ with $q_2$ instead of $q_{m-1}$.
\end{proof}
\begin{proposition}\label{prop: child relations}
Let $G$ be a left or right-filled graph, and let $T\in L_G$.
Suppose that $x_1$ and $x_2$ are incomparable in $T$ and that $x_1<x_2$ as integers.
Then each element in ${x_1}_\downarrow$ is smaller than each element in ${x_2}_\downarrow$ (as integers).
\end{proposition}
\begin{proof}
Suppose, for contradiction, that some element of ${x_2}_\downarrow$ is smaller than some element of ${x_1}_\downarrow$, and set $i:= \max \{a\in {x_2}_{\downarrow}: \text{there exists } b\in {x_1}_\downarrow\text{ with }a< b\}$.
Fix some $j\in {x_1}_\downarrow$ with $i<j$.
Suppose first that $i$ is the largest element of ${x_2}_\downarrow$.
Then $x_1<i<j$, where the first inequality holds because $x_1<x_2\le i$.
Since $x_1$ and $j$ both belong to ${x_1}_\downarrow$ and $i\in {x_2}_\downarrow$, this triple contradicts Lemma~\ref{lem: child relations}.
So, there exists some $k\in {x_2}_\downarrow$ with $k>i$, and the maximality of $i$ implies that $k$ is not smaller than any element of ${x_1}_\downarrow$; in particular $j<k$.
Then the triple $i<j<k$ satisfies: $i,k$ both in ${x_2}_{\downarrow}$ and $j\in {x_1}_\downarrow$.
That is a contradiction to Lemma~\ref{lem: child relations} again.
(Note that the roles of $x_1$ and $x_2$ are symmetric in Lemma~\ref{lem: child relations}.)
The proposition follows.
\end{proof}
Below we recursively construct a special linear extension $\sigma(T)$ for $T\in L_G$.
First, if $T$ has a root $x$ then we remove it.
Let $C_1,\ldots,C_r$ be the connected components of $T\setminus \{x\}$.
We index the connected components so that each element of $C_i$ is less than each element of $C_j$ (as integers) whenever $i<j$.
Next, we recursively apply the construction to each component, obtaining a word $\sigma(C_i)$ for each $i\in [r]$.
Finally, we concatenate the words $\sigma(C_1)\ldots \sigma(C_r)$, ending with the root~$x$ (if there is one).
Observe that $\sigma(T)$ is the lexicographically minimal linear extension of the $G$-tree $T$.
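The recursion above is easy to carry out mechanically. The following Python sketch is an illustration only, not part of the formal development; it assumes the tree is encoded by a hypothetical \texttt{children} map sending each vertex to the list of elements it covers in $T$.

```python
def sigma(children, root):
    """Lexicographically minimal linear extension of the tree rooted at `root`.

    `children` maps each vertex to the list of vertices it covers in T;
    each child subtree plays the role of a component C_1, ..., C_r.
    """
    # Recursively linearize each component, then order the components so
    # that C_i precedes C_j when the elements of C_i are smaller.
    parts = sorted((sigma(children, c) for c in children.get(root, [])),
                   key=min)
    # Concatenate sigma(C_1) ... sigma(C_r), ending with the root.
    return [v for part in parts for v in part] + [root]
```

For instance, for the tree with root $3$ covering the two singletons $1$ and $2$ the recursion returns $123$, while for the chain in which $1$ covers $2$ and $2$ covers $3$ it returns $321$.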
The next proposition follows from Lemma~\ref{lex smallest} (see also \cite[Proposition~8.10]{postnikov.reiner.williams:2008faces}).
\begin{proposition}\label{prw 8.9}
The image $\sigma(L_G)$ is equal to the set of $G$-permutations in $\mathfrak{S}_n$.
Moreover, the map $\Psi_G$ induces a bijection between $G$-permutations and $G$-trees, and $\sigma: L_G\to \mathfrak{S}_n$ is a section of the map $\Psi_G$.
\end{proposition}
A pair of numbers $(i,j)$ is an \emph{inversion} of a $G$-tree $T$ if $i<j$ and $j<_T i$.
For example, a descent of $T$ is an inversion such that $i$ covers $j$ in $T$.
A pair $(i,j)$ is a non-inversion if $i<j$ and $i<_T j$.
(Pairs $i$ and $j$ which are incomparable in $T$ are neither inversions nor non-inversions.)
Write $\operatorname{inv}(T)$ for the set of all inversions of $T$ and $\operatorname{inv}^\wedge(T)$ for the set of non-inversions.
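As a sanity check on these definitions, $\operatorname{inv}(T)$ can be computed directly from a tree; the Python sketch below is an illustration only and assumes the same hypothetical \texttt{children} encoding (vertex $\mapsto$ list of vertices it covers).

```python
def tree_inversions(children, root):
    """Inversions of a tree T: pairs (i, j) with i < j and j strictly below i."""
    below = {}  # vertex -> set of its strict descendants in T

    def collect(x):
        s = set()
        for c in children.get(x, []):
            s |= {c} | collect(c)
        below[x] = s
        return s

    collect(root)
    return {(i, j) for i in below for j in below[i] if i < j}
```

For the chain in which $1$ covers $2$ and $2$ covers $3$, every pair is an inversion; for the tree with root $3$ covering $1$ and $2$ there are none, since the pair $(1,2)$ is incomparable there and so is neither an inversion nor a non-inversion.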
The next lemma follows immediately from Proposition~\ref{prop: child relations} and the construction of $\sigma(T)$.
The second item of the statement also follows from \cite[Proposition~9.5]{postnikov.reiner.williams:2008faces}.
\begin{lemma}\label{lem: pidown}
Let $G$ be a left- or right-filled graph with vertex set $[n]$.
Suppose that $x$ and $x'$ are incomparable in $T$ and $x$ precedes $x'$ in the linear extension $\sigma(T)$.
Then $x$ is less than $x'$ as integers.
In particular:
\begin{itemize}
\item the inversion set of $T$ is equal to the inversion set of $\sigma(T)$;
\item the descent set of $T$ is equal to the descent set of $\sigma(T)$.
\end{itemize}
\end{lemma}
\begin{remark}\label{rmk: dual section}
Dually we recursively construct a (lexicographically) largest linear extension $\sigma^*(T)$ as follows:
As before $C_1,\ldots, C_r$ are the connected components of $T\setminus \{x\}$ (if $T$ has root $x$) or $T$ (if $T$ does not have a root), indexed so that each element in $C_i$ is less than each element in $C_j$ if $i<j$.
Apply the construction $\sigma^*(C_i)$ to each connected component.
Concatenate the words: $\sigma^*(C_r)\ldots \sigma^*(C_1)$, and finally end with the root $x$.
Indeed, if $G$ is either left or right filled, then $\sigma^*(T)$ is the unique largest element of the fiber $\Psi_G^{-1}(T)$.
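The dual recursion can be sketched in the same illustrative fashion, again assuming a hypothetical \texttt{children} map from each vertex to the list of elements it covers:

```python
def sigma_star(children, root):
    """Lexicographically largest linear extension of the tree rooted at `root`."""
    # Linearize each component, then concatenate in *decreasing* order of
    # components: sigma*(C_r) ... sigma*(C_1), ending with the root.
    parts = sorted((sigma_star(children, c) for c in children.get(root, [])),
                   key=min, reverse=True)
    return [v for part in parts for v in part] + [root]
```

For the tree with root $3$ covering the singletons $1$ and $2$, this yields $213$, whereas the minimal linear extension is $123$.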
\end{remark}
\begin{proposition}\label{inversions}
Suppose that $G$ is a right-filled graph with vertex set $[n]$.
If $w\le w'$ in the weak order on $\mathfrak{S}_n$ then $\operatorname{inv}(\Psi_G(w))\subseteq \operatorname{inv}(\Psi_G(w'))$.
\end{proposition}
\begin{proof}
Write $T$ for $\Psi_G(w)$ and $T'$ for $\Psi_G(w')$.
Suppose that $(i,k)$ is an inversion in~$T$.
Since $w$ is a linear extension of $T$, we have $(i,k)\in \operatorname{inv}(w)$.
Hence $(i,k)\in \operatorname{inv}(w')$.
If $i$ and $k$ are comparable in $T'$, then $(i,k)\in \operatorname{inv}(T')$, since $w'$ is a linear extension of $T'$.
Because $(i,k)\in \operatorname{inv}(T)$, there is a path $i=q_0,\ldots, q_m=k$ (which we take to have minimal length) in $G$ connecting $i$ to $k$ such that $q_l<_T i$ for each $l\in[m]$.
We prove, by induction on $m$, that $i$ and $k$ are comparable in $T'$.
In the base case $i$ and $k$ are adjacent in~$G$, and the claim is immediate.
Assume $m>1$ (so, in particular, $i$ and $k$ are \textit{not} adjacent in $G$).
We make two easy observations:
First, because $G$ is right-filled, $q_{m-1}>i$.
(Indeed, if $q_{m-1}<i<k$ then $G$ must have the edge $\{i,k\}$, contrary to our assumption that $i$ and $k$ are not adjacent.)
Thus, $(i,q_{m-1})\in \operatorname{inv}(T)$.
By induction, $i$ and $q_{m-1}$ are comparable in $T'$.
Thus, $q_{m-1}<_{T'} i$.
Second, because $q_{m-1}$ is adjacent to $k$, they are also comparable in~$T'$.
If $k<_{T'} q_{m-1}$ then we are done by transitivity.
On the other hand, if $q_{m-1} <_{T'} k$ then Lemma~\ref{parent} implies that $k$ and $i$ are comparable in $T'$.
\end{proof}
We obtain the following corollary.
\begin{corollary}\label{g-perm lattice cong}
Let $G$ be a right-filled graph with vertex set $[n]$ and let $T\in L_G$.
Then the equivalence relation $\Theta_G$ induced by the fibers of $\Psi_G$ satisfies:
\begin{enumerate}
\item The $\Theta_G$-class $\Psi_G^{-1}(T)$ has a smallest element in the weak order, namely the $G$-permutation $v$ in $\Psi_G^{-1}(T)$;
\item the map $\pi_\downarrow^G:\mathfrak{S}_n\to \mathfrak{S}_n$ which sends $w$ to the unique $G$-permutation in its $\Theta_G$-class is order preserving.
\end{enumerate}
Thus, the subposet of the weak order on $\mathfrak{S}_n$ induced by the set of $G$-permutations is a meet-semilattice quotient of $\mathfrak{S}_n$.
In particular, the subposet induced by the set of $G$-permutations is a lattice.
\end{corollary}
\begin{proof}
Suppose that $w\in \Psi_G^{-1}(T)$.
Since $w$ is a linear extension of $T$, $\operatorname{inv}(T)\subseteq \operatorname{inv}(w)$.
Since $\operatorname{inv}(T)=\operatorname{inv}(v)$, we conclude that $v\le w$.
Thus, $v$ is the unique minimal element of the fiber $\Psi_G^{-1}(T)$.
Suppose that $w\le w'$ in the weak order on $\mathfrak{S}_n$.
Then Proposition~\ref{inversions} says that $\operatorname{inv}(\Psi_G(w))\subseteq \operatorname{inv}(\Psi_G(w'))$.
For each $u\in \mathfrak{S}_n$, $\operatorname{inv}(\pi_\downarrow^G(u)) = \operatorname{inv}(\Psi_G(u))$, by Lemma~\ref{lem: pidown}.
Thus, $\operatorname{inv}(\pi_\downarrow^G(w))\subseteq \operatorname{inv}(\pi_\downarrow^G(w'))$.
The remaining statements of the corollary follow immediately from Proposition~\ref{meet cong}.
\end{proof}
We are now prepared to state the main theorem of this section.
\begin{theorem}\label{inversion order}
Suppose that $G$ is right-filled and $T$ and $T'$ belong to $L_G$.
Then:
\begin{enumerate}
\item $T\le T'$ in $L_G$ if and only if $\operatorname{inv}(T)\subseteq \operatorname{inv}(T')$.
\item The poset of maximal tubings $L_G$ is isomorphic to the subposet of the weak order induced by the set of $G$-permutations in $\mathfrak{S}_n$.
In particular, $L_G$ is a lattice.
\item $\Psi_G: \mathfrak{S}_n\to L_G$ is a meet-semilattice map.
That is, for all $w, w'\in \mathfrak{S}_n$ we have \[\Psi_G(w\wedge w') = \Psi_G(w)\wedge \Psi_G(w').\]
\end{enumerate}
\end{theorem}
\begin{lemma}\label{descent helper}
Suppose that $G$ is a right-filled graph with vertex set $[n]$, let $a\in [n]$, and let $T'\in L_G$.
Write the ideal $a_\downarrow \setminus \{a\}$ in $T'$ as a disjoint union $C_1\cup C_2\cup\cdots \cup C_k$ of tubes, indexed so that each element of $C_i$ is less than each element of $C_j$ (as integers) whenever $i<j$.
\begin{enumerate}
\item If $(a,x)\in\operatorname{inv}(T')$, then $x$ belongs to~$C_k$.
\item If $(a,x)$ is a descent of $T'$, then swapping $a$ and $x$ we obtain a $G$-tree $T$ which satisfies: $$\operatorname{inv}(T)\subseteq \operatorname{inv}(T').$$
\end{enumerate}
\end{lemma}
\begin{proof}
By Proposition~\ref{prop: child relations}, the tubes $C_1,\ldots, C_k$ can be indexed as described in the statement of the lemma.
Suppose there exists $i<k$ and $x\in C_i$ such that $x>a$.
Let $y$ be any element of $C_{i+1}$ that is adjacent to $a$ in $G$.
Because $a<x<y$ and $G$ is right-filled, $x$ and $y$ are adjacent in $G$.
That is a contradiction.
Suppose that $(p,q)\in \operatorname{inv}(T)$.
Hence $p<q$ as integers and $q\in \{y\in [n]: y\le_{T} p\}$.
We must show that $q\in \{y\in [n]: y\le_{T'} p\}$.
If $p$ is not equal to $a$ or $x$ then the statement follows from the fact that $\{y\in [n]: y\le_{T'} p\}=\{y\in [n]: y\le_{T} p\}$.
If $p=a$ then Proposition~\ref{cover relations} implies that $\{y\in [n]: y\le_{T} a\}\subseteq\{y\in [n]: y\le_{T'} a\}$.
Thus $(p,q)\in \operatorname{inv}(T')$.
Assume that $p=x$, so that we have $a<x<q$, ordered as integers.
The first statement of the lemma implies that $q<_{T'} x$ (because $C_k = \{y\in [n]: y\le_{T'} x\}$).
Hence $(p,q)\in \operatorname{inv}(T')$.
\end{proof}
The next lemma is the $G$-tree analog to Lemma~\ref{descents and inversions} (which characterizes covering relations in the weak order on $\mathfrak{S}_n$).
\begin{lemma}\label{lem: inversion covers}
Let $G$ be a right-filled graph with vertex set $[n]$.
Suppose that $T$ and $T'$ are in $L_G$ such that $\operatorname{inv}(T)\subsetneq \operatorname{inv}(T')$.
Then there exists $T''$ such that $T'{\,\,\,\cdot\!\!\!\! >\,\,} T''$ and $\operatorname{inv}(T)\subseteq \operatorname{inv}(T'')\subset \operatorname{inv}(T')$.
\end{lemma}
\begin{proof}
We claim that there exists some descent $(i,k)$ of $T'$ that is not an inversion of~$T$.
The claim follows from Lemma~\ref{lem: pidown}.
Indeed, write $v$ for $\sigma(T)$ and $v'$ for $\sigma(T')$.
Lemma~\ref{lem: pidown} implies that $v'>v$ in the weak order.
By Lemma~\ref{descents and inversions}, there is some descent $(i,k)$ of $v'$ which is not an inversion of $v$.
Since $\operatorname{inv}(v)=\operatorname{inv}(T)$, $\operatorname{inv}(v')=\operatorname{inv}(T')$, and $\des(v')=\des(T')$ the claim follows.
Let $T''{\,\,<\!\!\!\!\cdot\,\,\,} T'$ via this $(i,k)$ descent.
Next, we apply Proposition~\ref{cover relations} to the covering relation $T'{\,\,\,\cdot\!\!\!\! >\,\,} T''$.
As in the notation of that proposition, we interpret $x_\downarrow$ as the principal order ideal in $T'$.
We will continue to do so for the remainder of the proof.
Write the ideal $\{x: x<_{T'} k\}$ as the disjoint union of tubes $Y_1\cup \cdots \cup Y_t$.
Proposition~\ref{cover relations} says that $\chi(T'')$ is equal to $$\chi(T')\setminus \{k_\downarrow\} \cup \left\{ i_\downarrow \setminus \left(\{k\} \cup \bigcup Y_{a_j}\right) \right\},$$
where the union $\bigcup Y_{a_j}$ is over all $Y_{a_j}\in \{Y_1,\ldots, Y_t\}$ such that $\{i\}\cup Y_{a_j}$ is not a tube.
We write $B$ for the set $\{b\in\bigcup Y_{a_j}: (i,b)\in \operatorname{inv}(T')\}.$
It follows that $$\operatorname{inv}(T')\setminus (\{(i,k)\}\cup \{(i,b): b\in B\})= \operatorname{inv}(T'').$$
Let $C$ be the set of $c\in \bigcup Y_{a_j}$ such that $(i,c)\in \operatorname{inv}(T)$.
(As above, each $Y_{a_j}$ satisfies: $Y_{a_j}\cup\{i\}$ is not a tube; so in particular, no element $c\in C$ is adjacent to $i$.)
To complete the proof, we argue that $C$ is empty.
Suppose not, and choose $c\in C$ so that there is a path $i=q_0, q_1,\ldots, q_m=c$ with $q_p \le_T i$ and $q_p\not \in C$ for each $p\in [m-1]$.
Consider $q_{m-1}$.
On the one hand, if $q_{m-1}<i$ (as integers) then the \ref{right filled}-property implies that $i$ and $c$ are adjacent.
But no element in $C$ is adjacent to $i$.
So we have a contradiction.
On the other hand, if $q_{m-1}>i$ then $(i,q_{m-1})$ is an inversion of $T$.
We will argue that $q_{m-1}$ must belong to $C$, and conclude a contradiction.
Since $\operatorname{inv}(T)\subset \operatorname{inv}(T')$, we have $(i,q_{m-1})\in \operatorname{inv}(T')$.
Thus $q_{m-1}$ is in the ideal $\{x: x<_{T'} i\}$, which we write as a disjoint union of tubes $X_1\cup X_2\cup \cdots \cup X_r$.
Because $i{\,\,\,\cdot\!\!\!\! >\,\,}_{T'} k$ and $(i,k)$ is a descent in $T'$, we have $k_{\downarrow}=X_r$ by Lemma~\ref{descent helper}.
Since $q_{m-1}$ is adjacent to $c$, both belong to the same tube in the disjoint union $X_1\cup\cdots \cup X_r$.
Because $c\in Y_{a_j}\subseteq k_\downarrow$, we have $q_{m-1}$ is also in $k_\downarrow$.
Similarly, because $c$ and $q_{m-1}$ are adjacent, they belong to the same tube $Y_{a_j}$ in the ideal $\{x: x<_{T'} k\} = Y_1\cup \cdots \cup Y_t$.
We conclude that $q_{m-1}\in C$, contradicting our choice of~$c$.
Therefore, $\operatorname{inv}(T)\subseteq \operatorname{inv}(T'') \subset \operatorname{inv}(T')$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{inversion order}]
Lemma~\ref{descent helper} implies that if $T\le T'$ then $\operatorname{inv}(T)\subseteq \operatorname{inv}(T')$.
Suppose that $\operatorname{inv}(T)\subseteq \operatorname{inv}(T')$.
We argue that $T\le T'$ by induction on the size of $\operatorname{inv}(T')\setminus \operatorname{inv}(T)$.
Lemma~\ref{lem: inversion covers} says there exists a $G$-tree $T''$ such that $$\text{$T'{\,\,\,\cdot\!\!\!\! >\,\,} T''$ and $\operatorname{inv}(T)\subseteq \operatorname{inv}(T'')\subset \operatorname{inv}(T')$}.$$
Lemma~\ref{lem: pidown} implies that $\operatorname{inv}(T)=\operatorname{inv}(T')$ if and only if $T=T'$.
When $\operatorname{inv}(T)$ and $\operatorname{inv}(T')$ differ by one element, we must have that $T=T''$.
When $\operatorname{inv}(T')\setminus \operatorname{inv}(T)$ has $m>1$ elements, the inductive hypothesis implies that $T\le T''{\,\,<\!\!\!\!\cdot\,\,\,} T'$, and we are done.
\end{proof}
\subsection{Left-filled graphs}
In this section we prove the analog of Corollary~\ref{g-perm lattice cong} and Theorem~\ref{inversion order} for left-filled graphs.
\begin{corollary}\label{left filled cor}
Let $G$ be a left-filled graph with vertex set $[n]$.
Then $L_G$ is a lattice, and $\Psi_G: \mathfrak{S}_n\to L_G$ is a join semilattice map.
That is, for all $w, w'\in \mathfrak{S}_n$, we have
\[\Psi_G(w\vee w')= \Psi_G(w)\vee \Psi_G(w').\]
\end{corollary}
\begin{proof}
Observe that $G^*$ is right-filled.
(Recall that $G^*$ is the graph we obtain by swapping labels $i$ and $n+1-i$ for all $i$.)
Lemma~\ref{graph duality} says that $L_{G^*} \cong L_G^*$ (where $L_G^*$ is the dual of $L_G$, as posets).
Since $L_{G^*}$ is a lattice (by Theorem~\ref{inversion order}), $L_G$ is a lattice as well.
Indeed, Lemma~\ref{graph duality} implies that the following diagram commutes.
\begin{center}
\begin{tikzpicture}[scale=0.75]
\node (A) at (0,0) {$\mathfrak{S}_n$};
\node(B) at (3,0) {$\mathfrak{S}_n^*$};
\node (C) at (0,-2) {$L_G$};
\node (D) at (3,-2) {$L_{G^*}$};
\node at (-.5,-.85) {\scriptsize{$\Psi_G$}};
\node at (3.5,-.85) {\scriptsize{$\Psi_{G^*}$}};
\draw[<->] (A.east)--(B.west);
\draw[->>] (A.south)--(C.north);
\draw[<->] (C.east)--(D.west);
\draw[->>] (B.south)--(D.north);
\end{tikzpicture}
\end{center}
The maps in the top and bottom rows of the diagram are essentially the same:
They both swap $i$ and $n+1-i$ for all $i\in [n]$.
Both maps are lattice anti-isomorphisms.
It follows from Theorem~\ref{inversion order} that $\Psi_G$ is a join semilattice map.
\end{proof}
\subsection{Filled graphs and lattice congruences}
We prove our main result (see Theorem~\ref{thm_main_lattice}).
\begin{theorem}\label{main}
Suppose that $G$ is a graph with vertex set $[n]$ and edge set $E$.
Then $\Psi_G: \mathfrak{S}_n\to L_G$ is a lattice quotient map if and only if $G$ is filled.
\end{theorem}
\begin{proof}
If $G$ is filled, then $\Psi_G: \mathfrak{S}_n \to L_G$ preserves the meet operation (by Theorem~\ref{inversion order}) and the join operation (by Corollary~\ref{left filled cor}).
Thus $\Psi_G:\mathfrak{S}_n\to L_G$ is a lattice quotient map.
Assume that $G$ is not filled.
Thus there exists $i<j<k$ such that $\{i,k\}\in E$ but either $\{i,j\}$ or $\{j,k\}$ is not in $E$.
Let $G'$ denote the induced subgraph $G|_{\{i,j,k\}}$.
We check that in all possible cases $\Psi_{\std(G')}:\mathfrak{S}_{3}\to L_{\std(G')}$ is not a lattice map.
By Lemma~\ref{lem: interval lemma}, $\Psi_G: \mathfrak{S}_n\to L_G$ is not a lattice map.
In the first case, assume $\{i,k\}$ and $\{j,k\}$ are edges, but $\{i,j\}$ is not.
Observe that $\Psi_{\std(G')}$ does not preserve the join operation.
On the one hand, $213\vee 132 = 321$ in the weak order on $\mathfrak{S}_3$.
Thus,
\[\Psi_{\std(G')} (213\,\vee\, 132) = \Psi_{\std(G')}(321)= \substack{1\\2\\3},\]
where we write $\Psi_{\std(G')}(321)$ as a $\std(G')$-tree (with the elements ordered vertically).
On the other hand,
\[ \Psi_{\std(G')}(213) \vee \Psi_{\std(G')}(132)= \substack{3\\[.25em]1\,2} \vee\, \substack{2\\3\\1} =\substack{3\\1\,2}\]
The reader can check the computation of the join in $L_{\std(G')}$ with Figure~\ref{fig:L_ex}.
The case in which $\{i,k\}$ and $\{i,j\}$ are edges (but $\{j,k\}$ is not) is proved dually.
Assume that $\{i,k\}$ is an edge and neither $\{j,k\}$ nor $\{i,j\}$ is an edge.
Then, for example, $\Psi_{\std(G')}$ does not preserve the join operation.
Indeed
\[\Psi_{\std(G')}(123)=\Psi_{\std(G')}(213) = \Psi_{\std(G')}(132)\]
is the smallest element in $L_{\std(G')}$.
But $\Psi_{\std(G')}(213\,\vee \,132)$ is the largest element in $L_{\std(G')}$.
We conclude that if $G$ is not filled, then $\Psi_G$ is not a lattice map.
\end{proof}
\subsection{Generators of the congruence $\Theta_G$}
Let $\Theta_G$ be the equivalence relation on $\mathfrak{S}_n$ induced by the fibers of $\Psi_G$.
In light of Theorem~\ref{main}, when $G$ is filled $\Theta_G$ is a lattice congruence on the weak order.
Recall from Section~\ref{subsec_weak_cong} that $\Con(\mathfrak{S}_n)$ is a finite distributive lattice.
We identify each congruence $\Theta$ with the corresponding order ideal of join-irreducible congruences.
The \emph{generators} of a congruence are the maximal elements of this order ideal.
Recall that the join-irreducible congruences of the weak order are given by arcs. (This is Theorem~\ref{thm_weak_arcs}.)
Let $(x,y, +)$ denote the arc with $\epsilon_i = +$ for each $i\in [y-x]$.
Occasionally we call such an arc a \emph{positive arc}.
(Pictorially, this is an arc which does not pass to the right of any point between its endpoints.)
A \emph{minimal non-edge} is a pair $x<y$ such that for each $z\in\{x+1, x+2,\ldots,y-1\}$, $\{x,z\}$ and $\{z,y\}$ are edges in $G$, but $\{x,y\}$ is not an edge.
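For example, in the path $1-2-3$ the pair $\{1,3\}$ is a minimal non-edge. The condition is straightforward to test; the Python sketch below is an illustration only and assumes the graph is given as a set of sorted edge pairs.

```python
def is_minimal_nonedge(edges, x, y):
    """Check whether {x, y} (with x < y) is a minimal non-edge of the graph.

    `edges` is a set of pairs (a, b) with a < b.
    """
    if (x, y) in edges:
        return False
    # Every z strictly between x and y must be adjacent to both endpoints.
    return all((x, z) in edges and (z, y) in edges for z in range(x + 1, y))
```

On the path $1-2-3$ (edges $\{1,2\}$ and $\{2,3\}$) the pair $(1,3)$ passes the test; dropping the edge $\{2,3\}$, or adding the edge $\{1,3\}$, makes it fail.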
\begin{theorem}\label{generators}
Suppose that $G$ is a filled graph, and consider the lattice congruence $\Theta_G$ induced by $\Psi_G$.
Then $\Theta_G$ is generated by $$\{(x,y,+): \{x,y\}\text{ is a minimal non-edge of $G$}\}.$$
\end{theorem}
Before we prove Theorem~\ref{generators} we gather some useful facts.
Throughout this section, we write $j$ for a join-irreducible permutation and $j_*$ for the unique element that it covers in the weak order.
Recall that we associate each $j$ with the join-irreducible congruence $\Theta^\alpha$ where $\alpha$ is the arc $\alpha(j_*,j)$.
Conversely, given an arc $\alpha=(x,y,\epsilon)$, the corresponding join irreducible permutation is
\[j_{\alpha}= 12\ldots (x-1) l_1\ldots l_p \,y\, x\, r_1\ldots r_q (y+1) (y+2) \ldots n\]
where $\{l_1<l_2<\ldots < l_p\}$ is the set $\{x':\epsilon_{x'-x} = -\}$ and $\{r_1<r_2<\ldots<r_q\}$ is the set $\{y': \epsilon_{y'-x} = +\}$.
(See Proposition~\ref{arc to perm}.)
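Concretely, the passage from an arc to its join-irreducible permutation can be sketched as follows; this is an illustration only, where \texttt{eps} is assumed to map each $z$ strictly between $x$ and $y$ to \texttt{'+'} or \texttt{'-'}.

```python
def j_from_arc(n, x, y, eps):
    """One-line notation of the join-irreducible permutation of the arc (x, y, eps)."""
    between = range(x + 1, y)
    left = [z for z in between if eps[z] == '-']   # l_1 < ... < l_p
    right = [z for z in between if eps[z] == '+']  # r_1 < ... < r_q
    # 1 ... (x-1)  l_1 ... l_p  y  x  r_1 ... r_q  (y+1) ... n
    return list(range(1, x)) + left + [y, x] + right + list(range(y + 1, n + 1))
```

For the positive arc $(2,5,+)$ in $\mathfrak{S}_5$ this gives $15234$, matching the description of $j_{x,y}$ used in the proof of Lemma~\ref{+ is contracted}.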
We say a join-irreducible element $j$ is \emph{contracted by $\Theta_G$} if $j\equiv_G j_*$.
The congruence $\Theta^\alpha$ is a generator for $\Theta_G$ if $j_\alpha$ is contracted by $\Theta_G$, and for each subarc $\beta$ of $\alpha$, the corresponding permutation $j_{\beta}$ is \textit{not} contracted.
The next result follows immediately from Corollary~\ref{g-perm lattice cong}.
\begin{proposition}
Let $G$ be a filled graph with vertex set $[n]$ and let $j$ be a join-irreducible permutation in $\mathfrak{S}_n$.
Then $j$ is not contracted by $\Theta_G$ if and only if $j$ is a $G$-permutation.
\end{proposition}
\begin{lemma}\label{not contracted}
Let $G$ be a filled graph with vertex set $[n]$, and let $1\le x<y\le n$.
Suppose that $\{x,y\}$ is an edge in $G$.
Then no arc $(x,y,\epsilon)$ is contracted by $\Theta_G$.
\end{lemma}
\begin{proof}
Let $j$ be join-irreducible with unique descent $(x,y)$.
Observe $G|_{[x,y]}$ is a complete graph because $G$ is filled.
Let $r=y-x$.
Write $j$ in one-line notation as:
$$j= j_1\ldots j_n=12\ldots (x-1) j_x\ldots j_{x+r} (y+1) (y+2)\ldots n.$$
We claim that $j_i$ and $\max\{j_1,\ldots j_i\}$ belong to the same connected component of $G|_{\{j_1,\ldots j_i\}}$.
If $i\le x$ or $i \ge x+r+1$ then $j_i = \max\{j_1,\ldots j_i\}$.
So the claim follows.
Suppose that $x<i\le x+r$.
Then $\max\{j_1,\ldots j_i\} = \max\{j_x,\ldots, j_i\}$.
Since $\{j_x,\ldots, j_i\}$ is a subset of $[x,y]$, which induces a complete graph, the claim follows.
Therefore $j$ is a $G$-permutation.
\end{proof}
\begin{lemma}\label{+ is contracted}
Suppose that $\{x,y\}$ is not an edge in $G$.
Then the arc $(x,y,+)$ is contracted by~$\Theta_G$.
\end{lemma}
\begin{proof}
Let $j_{x,y}$ denote the join-irreducible corresponding to the arc $(x,y,+)$.
We argue that $j_{x,y}$ is contracted by $\Theta_G$.
Since $x$ and $y$ are not adjacent and $G$ is filled, $y$ is not adjacent to any vertex $x'<x$.
Write $j_{x,y}$ as $$1\,2\,\ldots (x-1) \,y x \, (x+1)\ldots (y-1)\,(y+1)\,\ldots n.$$
Observe that $j_{x,y}$ is not a $G$-permutation because $y=\max\{1,2,\ldots, x,y\}$ is isolated in the subgraph $G|_{\{1,2,\ldots, x,y\}}$.
Thus $j_{x,y}$ is contracted by $\Theta_G$.
\end{proof}
\begin{lemma}\label{minimal nonedges}
Suppose that $\{x,y\}$ is a minimal non-edge of $G$.
Let $\alpha$ be the arc $(x,y,\epsilon)$.
If $\epsilon\ne +$ then $\alpha$ is not contracted by $\Theta_G$.
\end{lemma}
\begin{proof}
Let $j$ be the join-irreducible permutation corresponding to $\alpha$, and let $r=y-x$.
Write $j$ in one-line notation as $j_1\ldots j_n$.
Observe that $$j=12\ldots (x-1) j_x\ldots j_{x+r} (y+1) (y+2)\ldots n.$$
We claim that $j_i$ and $\max\{j_1,\ldots j_i\}$ belong to the same connected component of $G|_{\{j_1,\ldots j_i\}}$.
If $i\le x$ or $i \ge x+r+1$ then $j_i = \max\{j_1,\ldots j_i\}$.
So the claim follows.
Suppose that $x<i\le x+r$.
Then $\max\{j_1,\ldots j_i\} = \max\{j_x,\ldots, j_i\}$.
Because $G$ is filled, our hypotheses imply that every pair in $\binom{[x,y]}{2}$ except $\{x,y\}$ is an edge.
If $\max\{j_x,\ldots, j_i\}$ is not equal to $y$ then $G|_{\{j_x,\ldots, j_i\}}$ is a complete graph.
So the claim follows.
Assume that $\max\{j_x,\ldots, j_i\}=y$.
Because $\alpha\ne (x,y,+)$ we have $x<j_x<y$.
Thus $(x,j_x)$ and $(j_x,y)$ are both edges in $G$.
Therefore $G|_{\{j_x,\ldots, j_i\}}$ is connected.
So the claim follows, and $j$ is a $G$-permutation.
\end{proof}
\begin{figure}
\centering
\includegraphics{arc2}
\caption{\label{fig:arc2}An arc considered in the proof of Theorem~\ref{generators}}
\end{figure}
\begin{proof}[Proof of Theorem~\ref{generators}]
Let $\Gcal$ denote the set $\{(x,y,+): \{x,y\}\text{ is a minimal non-edge}\}$.
Lemma~\ref{+ is contracted} implies that the join-irreducible elements in $\Gcal$ are among the generators of~$\Theta_G$.
To prove the theorem we argue that they are the only generators.
By way of contradiction, assume that $(x,y,\epsilon)$ is a generator of $\Theta_G$ and $(x,y,\epsilon) \notin \Gcal$.
Write $j$ for the corresponding join-irreducible.
By Lemma~\ref{not contracted}, $(x,y)$ is not an edge (because $j$ is contracted by $\Theta_G$).
If $\{x,y\}$ is a minimal non-edge then Lemma~\ref{minimal nonedges} says that $\epsilon=+$, and hence $(x,y,\epsilon)\in \Gcal$.
Thus we may assume that $\{x,y\}$ is not a minimal non-edge; then $\alpha$ has a subarc $\alpha'$ with endpoints $x'<y'$ such that $\{x',y'\}$ is a minimal non-edge.
Since no subarc of $\alpha$ is contracted, in particular $\alpha'$ is not contracted.
It follows that $\alpha'$ is not a positive arc.
Therefore, $\epsilon \ne +$.
To obtain a contradiction we argue that $j$ is a $G$-permutation.
Write $j$ as $$12\ldots (x-1) l_1\ldots l_p \,y\,x\, r_1 \ldots r_q (y+1) \ldots n$$ where $\{l_1<\ldots< l_p, r_1< \ldots< r_q\}= \{x+1, \ldots y-1\}$.
Therefore, we get $j_i = \max\{j_1,\dots, j_i\}$ whenever $j_i \in \{1,2,\ldots, (x-1), l_1,\ldots, l_p\}\cup\{(y+1), \ldots, n\}$.
We claim that $r_i$ is in the same connected component as $y$ in the subgraph induced by $\{1,2,\ldots,x,y,r_1,\ldots r_i\}$.
Let $j'$ be the join-irreducible whose corresponding arc is the subarc of $(x,y,\epsilon)$ with endpoints $x'=r_i<y$.
As above, write
$$j'=1\,2\ldots (x'-1) \,l'_1\ldots l'_{p'} \,y\,x'\, r'_1 \ldots r'_{q'}\, (y+1) \ldots n.$$
Observe that $\{1,2,\ldots, (x'-1), l'_1,\ldots, l'_{p'} ,y,x'\} = \{1,2,\ldots,x,y,r_1,\ldots r_i\}$.
(Each entry $l_k$ remains to the left of the descent $(x',y)$ in $j'$ because $l_k<y$.
Each $r_k$ with $k<i$ is to the left of the descent $(x',y)$ because $r_k<r_i$.)
Because $j'$ is not contracted, it is a $G$-permutation.
Therefore $y$ and $r_i=x'$ belong to the same connected component in the subgraph $G|_{\{1,2,\ldots,x,y,r_1,\ldots r_i\}}$.
Finally we consider $x$ and $y= \max\{1,2,\ldots, (x-1), l_1, \ldots, l_p, x,y\}$ in the subgraph $G|_{\{1,2,\ldots, (x-1), l_1, \ldots, l_p, x,y\}}$.
We will be done if we can show that $x$ and $y$ belong to the same connected component.
We do so by showing, first, that $x$ and $l_p$ belong to the same connected component in this subgraph, and second, that $l_p$ and $y$ belong to the same connected component.
(Indeed we will see that $l_p$ and $y$ are adjacent in $G$.)
Observe that $l_p$ exists because $\epsilon \ne +$.
Consider the permutation $$j''=1\,2\ldots (x-1) l_1\ldots l_p \,x\, r_1 \ldots r_q \, y\, (y+1) \ldots n.$$
Observe that this permutation has a unique descent $(x,l_p)$, so it is join-irreducible.
Moreover, the arc corresponding to $j''$ is the subarc of $(x,y,\epsilon)$ with endpoints $x<l_p$.
Hence $j''$ is a $G$-permutation.
Thus $x$ and $l_p$ belong to the same connected component in the subgraph $G|_{\{1,2,\ldots, (x-1), l_1, \ldots, l_p, x\}}$.
So, $x$ and $l_p$ belong to the same connected component in~$G|_{\{1,2,\ldots, (x-1), l_1, \ldots, l_p, x, y\}}$.
Next consider the subarc of $(x,y,\epsilon)$ with endpoints $l_p <y$.
As none of the $l_i$ lie strictly between $l_p$ and $y$, this is a positive arc; see Figure~\ref{fig:arc2}.
Since it is not contracted, $\{l_p, y\}$ must be an edge (by Lemma~\ref{+ is contracted}).
We conclude that $x$ and $y$ belong to the same connected component in the subgraph induced by $\{1,2,\ldots, (x-1), l_1, \ldots, l_p, x,y\}$.
Thus $j$ is a $G$-permutation.
This contradiction completes the proof.
\end{proof}
\section{Algebras and coalgebras of tubings}\label{sec_hopf}
\subsection{The Malvenuto-Reutenauer algebra}\label{subsec_MR}
Fix a field $\Kbb$. For a set $X$, we let $\Kbb[X]$ denote the vector space over $\Kbb$ for which the set $X$ indexes a basis. For $X=\mathfrak{S}_n$, we let $\Kbb[\mathfrak{S}_n]$ have a distinguished basis $\{\Fbb_w:\ w\in\mathfrak{S}_n\}$. The \emph{Malvenuto-Reutenauer} algebra is a Hopf algebra on the graded vector space
\[ \Kbb[\mathfrak{S}_{\infty}]=\bigoplus_{n=0}^{\infty}\Kbb[\mathfrak{S}_n]. \]
If $v=v_1\cdots v_n$ is a permutation of $[n]$ and $m\geq 0$, we define the \emph{shift by $m$} to be the word $v[m]=(v_1+m)(v_2+m)\cdots(v_n+m)$. For basis elements $\Fbb_u\in\Kbb[\mathfrak{S}_m],\ \Fbb_v\in\Kbb[\mathfrak{S}_n]$, the product $\Fbb_u\cdot\Fbb_v$ is the sum of the elements $\Fbb_w$ for which $w$ is a shuffle of $u$ and $v[m]$. For example,
$$\Fbb_{21}\cdot\Fbb_{12}=\Fbb_{2134}+\Fbb_{2314}+\Fbb_{2341}+\Fbb_{3214}+\Fbb_{3241}+\Fbb_{3421}.$$
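The shifted-shuffle expansion can be checked mechanically. The following Python sketch, an illustration only, enumerates the permutations $w$ indexing the terms of $\Fbb_u\cdot\Fbb_v$:

```python
from itertools import combinations

def shifted_shuffles(u, v):
    """All permutations w such that F_w appears in the product F_u * F_v."""
    m, n = len(u), len(v)
    v_shift = [a + m for a in v]  # the shift v[m]
    words = set()
    # Choose which m of the m+n slots receive the letters of u, in order;
    # the remaining slots receive the letters of v[m], in order.
    for positions in combinations(range(m + n), m):
        w, ui, vi = [], 0, 0
        for k in range(m + n):
            if k in positions:
                w.append(u[ui]); ui += 1
            else:
                w.append(v_shift[vi]); vi += 1
        words.add(tuple(w))
    return words
```

Running this on $u=21$, $v=12$ recovers exactly the six terms displayed above.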
The coproduct $\Delta(\Fbb_u)\in\Kbb[\mathfrak{S}_{\infty}]\otimes\Kbb[\mathfrak{S}_{\infty}]$ for $u\in\mathfrak{S}_n$ is defined to be
$$\Delta(\Fbb_u)=\sum_{i=0}^n\Fbb_{\std(u_1\cdots u_i)}\otimes\Fbb_{\std(u_{i+1}\cdots u_n)},$$
where $\std(a_1\cdots a_i)$ for a sequence of distinct integers $a_1,\ldots,a_i$ is the element of $\mathfrak{S}_i$ with the same relative order as $a_1\cdots a_i$. For example,
$$\Delta(\Fbb_{3241})=\iota\otimes \Fbb_{3241} + \Fbb_1\otimes \Fbb_{231} + \Fbb_{21}\otimes \Fbb_{21} + \Fbb_{213}\otimes \Fbb_1 + \Fbb_{3241}\otimes\iota.$$
Here, the element $\iota\in\Kbb[\mathfrak{S}_0]$ is the multiplicative identity. The counit $\epsilon:\Kbb[\mathfrak{S}_{\infty}]\ra\Kbb$ is the linear map with $\epsilon(\iota)=1$ and $\epsilon(\Fbb_v)=0$ for $v\in\mathfrak{S}_n,\ n\geq 1$. These operations are compatible in a way that makes $\Kbb[\mathfrak{S}_{\infty}]$ a (connected, graded) bialgebra. This automatically gives the Malvenuto-Reutenauer algebra the structure of a Hopf algebra; that is, it comes with a (unique) antipode $S$. We refer to \cite{grinberg.reiner:2014hopf} for further background on Hopf algebras from a combinatorial perspective.
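The standardization map and the resulting coproduct terms admit a short computational sketch (an illustration only):

```python
def std(seq):
    """Standardization: the permutation with the same relative order as seq."""
    rank = {a: i + 1 for i, a in enumerate(sorted(seq))}
    return tuple(rank[a] for a in seq)

def coproduct_terms(u):
    """The pairs (std of prefix, std of suffix) indexing the coproduct of F_u."""
    return [(std(u[:i]), std(u[i:])) for i in range(len(u) + 1)]
```

For $u=3241$ this lists the five tensor factors shown above, with the empty tuple in positions $0$ and $4$ playing the role of $\iota$.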
The Malvenuto-Reutenauer algebra contains the algebra of noncommutative symmetric functions $\NCSym$ as a sub-Hopf algebra. Loday and Ronco \cite{loday.ronco:1998hopf} discovered a Hopf algebra $\Kbb[Y_{\infty}]=\bigoplus\Kbb[Y_n]$ on the vector space spanned by planar binary trees and a sequence of Hopf algebra embeddings
$$\NCSym\hookra\Kbb[Y_{\infty}]\hookra\Kbb[\mathfrak{S}_{\infty}].$$
More generally, we may consider a family of nonempty sets $\{Z_0,Z_1,Z_2,\ldots\}$ with surjections $f_n:\mathfrak{S}_n\thra Z_n$ for each $n\geq 0$. Letting $\{\Pbb_x:\ x\in Z_{\infty}\}$ be a basis for $\Kbb[Z_{\infty}]$, there is a vector space embedding $c:\Kbb[Z_{\infty}]\hookra\Kbb[\mathfrak{S}_{\infty}]$ where
$$c(\Pbb_x)=\sum_{w\in f_n^{-1}(x)} \Fbb_w\ \hspace{3mm}\ \mbox{for }x\in Z_n.$$
We are especially interested in the case where $Z_n$ is the set of vertices of a generalized permutahedron of rank $n$ and $f_n:\mathfrak{S}_n\thra Z_n$ is the canonical map. The main problem is to determine whether $c$ makes $\Kbb[Z_{\infty}]$ into an algebra or a coalgebra, i.e. whether $c(\Pbb_x)\cdot c(\Pbb_y)$ and $\Delta(c(\Pbb_x))$ lie in the image of $c$ for any $x,y\in Z_{\infty}$.
\subsection{Translational families of lattice congruences}\label{subsec_MR_alg}
As usual, we consider the symmetric group $\mathfrak{S}_n$ as a poset under the weak order. When the map $f_n:\mathfrak{S}_n\thra Z_n$ has the structure of a lattice quotient map, there is a generalized permutahedron known as a \emph{quotientope} with vertex set $Z_n$ associated to the map $f_n$ \cite{pilaud.santos:2017quotientopes}. In \cite{reading:2005lattice}, Reading proved that the embedding $c$ associated to a sequence of lattice quotient maps $\{f_n\}_{n\geq 0}$ is an algebra map (resp., coalgebra map) if the family $\{f_n\}$ is translational (resp., insertional). We recall the definition of a translational family in this section and of an insertional family in Section~\ref{subsec_insertional}.
Let $\Theta$ be a lattice congruence of the weak order on $\mathfrak{S}_n$ for some $n$.
Recall that $\Theta$ contracts a join-irreducible $j$ if $j\equiv j_*\mod \Theta$, where $j\gtrdot j_*$.
Equivalently, for the corresponding arc $\alpha=\alpha(j_*,j)$, we have $\Theta^{\alpha}\leq\Theta$ in the lattice $\Con(L)$.
We abuse notation, and say that $\Theta$ \emph{contracts} the arc $\alpha$ if $\Theta$ contracts $j_{\alpha}$.
(Indeed, $\Theta$ contracts an arc $\alpha$ if and only if there exists a covering relation $u\lessdot w$ such that $\alpha(u,w)=\alpha$ and $u\equiv w\mod \Theta$.)
In particular, the set of arcs contracted by $\Theta$ corresponds to the set of join-irreducible elements of $\Con(L)$ less than or equal to $\Theta$ in $\Con(L)$. By Theorem~\ref{thm_forcing_arcs}, if $\alpha$ is contracted by $\Theta$ and $\alpha$ is a subarc of $\beta$, then $\beta$ is contracted by $\Theta$ as well.
Fix a sequence $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ where $\Theta_n$ is a lattice congruence of the weak order on $\mathfrak{S}_n$ for each $n\geq 0$. We let $Z_n=\mathfrak{S}_n/\Theta_n$ be the set of equivalence classes modulo $\Theta_n$, and set $Z_{\infty}^{\mathbf{\Theta}}=\{Z_n\}_{n\geq 0}$.
As we consider lattice congruences of the weak order for varying $n$, we may say that $\alpha$ is an \emph{arc on $[n]$} to mean that it is an arc for $\mathfrak{S}_n$. An arc $\alpha=(i,j,\epsilon)$ on $[n]$ is a \emph{translate} of an arc $\beta=(k,l,\epsilon^{\pr})$ on $[m]$ if $j-i=l-k$ and $\epsilon=\epsilon^{\pr}$. The family $\{\Theta_n\}_{n\geq 0}$ is called \emph{translational} if whenever $\Theta_n$ contracts an arc $\alpha$ and $\beta$ is an arc on $[m]$ that is a translate of $\alpha$, the congruence $\Theta_m$ contracts $\beta$.
The following is equivalent to \cite[Theorem 1.2, Proposition 7.1]{reading:2005lattice}.
\begin{theorem}\label{thm_translational_subalg}
If $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ is a translational family, then the map
$$c:\Kbb[Z_{\infty}^{\mathbf{\Theta}}]\ra\Kbb[\mathfrak{S}_{\infty}]$$
embeds $\Kbb[Z_{\infty}^{\mathbf{\Theta}}]$ as a subalgebra of $\Kbb[\mathfrak{S}_{\infty}]$.
\end{theorem}
We proved (Theorem~\ref{main}) that the map $\Psi_G:\mathfrak{S}_n\ra L_G$ is a lattice map if and only if $G$ is a filled graph. We determine when a sequence of filled graphs determines a translational family of lattice congruences of the weak order. As before, we will write $(i,j,+)$ to represent the arc $(i,j,(+,\ldots,+))$. An arc of the form $(i,j,+)$ is called a \emph{positive arc}. For nonnegative integers $k,n$, let $H_{k,n}$ be the graph with vertex set $[n]$ such that $\{i,j\}$ is an edge whenever $1\leq i<j\leq n$ and $j-i\leq k$. Clearly, if $k\geq n-1$, then $H_{k,n}$ is the complete graph on $[n]$.
\begin{proposition}\label{prop_translational_char}
A sequence of filled graphs $\{G_n\}_{n\geq 0}$ determines a translational family $\{\Theta_n\}_{n\geq 0}$ if and only if there exists some $k\in\{0,1,2,\ldots\}\cup\{+\infty\}$ such that $G_n=H_{k,n}$ for all $n$.
\end{proposition}
\begin{proof}
Let $\{G_n\}_{n\geq 0}$ be a sequence of filled graphs, and let $\Theta_n$ be the lattice congruence induced by $\mathfrak{S}_n\ra L_{G_n}$.
Suppose the family $\{\Theta_n\}_{n\geq 0}$ is translational. If $\{i,j\}$ is not an edge of $G_n$, then $\Theta_n$ contracts the arc $(i,j,+)$. Being a translational family means that any arc of the form $(i^{\pr},j^{\pr},+)$ is contracted by $\Theta_m$ where $1\leq i^{\pr}<j^{\pr}\leq m$ and $j-i=j^{\pr}-i^{\pr}$. This in turn means that $\{i^{\pr},j^{\pr}\}$ is not an edge of $G_m$. Hence, there must exist some set $S\subseteq\Nbb$ such that for all $i,j,n$ such that $1\leq i<j\leq n$, we have $j-i\in S$ if and only if $\{i,j\}$ is an edge of $G_n$. As the graphs $G_n$ are filled, the set $S$ must either be of the form $S=[k]$ for some $k\in\{0,1,2,\ldots\}$ or $S=\Nbb$.
Conversely, suppose there exists $k\in\{0,1,2,\ldots\}\cup\{+\infty\}$ such that $G_n=H_{k,n}$ for all~$n$. Then $\Theta_n$ is the lattice congruence generated by $\{\Theta^{(i,j,+)}:\ j-i=k+1\}$. Since the generating set is closed under translation, the family $\{\Theta_n\}_{n\geq 0}$ must be translational.
\end{proof}
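The translation invariance underlying Proposition~\ref{prop_translational_char} is easy to confirm computationally: in $H_{k,n}$, adjacency depends only on the difference $j-i$, so contracted positive arcs translate between ranks. The following Python sketch (ours) checks this for small parameters.

```python
def is_edge(k, n, i, j):
    """Is {i,j} an edge of H_{k,n}?"""
    return 1 <= i < j <= n and j - i <= k

def H_edges(k, n):
    """The full edge set of H_{k,n}."""
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1) if is_edge(k, n, i, j)}

# Adjacency (hence the set of contracted positive arcs) is translation invariant:
for k in range(4):
    for n in range(1, 8):
        for m in range(1, 8):
            for i in range(1, n):
                for j in range(i + 1, n + 1):
                    for t in range(1 - i, m - j + 1):
                        assert is_edge(k, n, i, j) == is_edge(k, m, i + t, j + t)
print("translation check passed")
```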
\begin{remark}
The lattice congruence $\Theta_n$ in Proposition~\ref{prop_translational_char} resembles the \emph{metasylvester congruence} $\equiv_n^k$, which is the lattice congruence of the weak order on $\mathfrak{S}_n$ generated by relations of the form
$$UacV_1b_1\cdots V_kb_kW\equiv_n^k UcaV_1b_1\cdots V_kb_kW,$$
where $a<b_i<c$ holds for all $i\in[k]$. In other words, two adjacent letters $a,c$ in a permutation can be swapped if at least $k$ letters to their right have values between $a$ and $c$. Via the dictionary between arcs and join-irreducible lattice congruences, the metasylvester congruence is the most refined congruence that contracts every arc of the form $(a,c,\epsilon)$ where the number of $+$ entries in $\epsilon$ is at least $k$. In contrast, the congruence $\Theta_n$ corresponding to the graph $H_{k,n}$ is generated by $\Theta^{(i,j,+)}$ where $j-i=k+1$, meaning that $\epsilon_1=\cdots=\epsilon_k=+$.
It is straightforward to check that the family $\{\equiv_n^k\}_{n\geq 1}$ is both translational and insertional; cf. Section~\ref{subsec_insertional}. Hence, the embedding
$$c:\Kbb[\mathfrak{S}_{\infty}/\equiv^k]\hookra\Kbb[\mathfrak{S}_{\infty}]$$
realizes $\Kbb[\mathfrak{S}_{\infty}/\equiv^k]$ as both a subalgebra and a sub-coalgebra of the Malvenuto-Reutenauer Hopf algebra. It follows that $\Kbb[\mathfrak{S}_{\infty}/\equiv^k]$ inherits the structure of a bialgebra. It is known that an antipode is inherited as well, giving it the structure of a sub-Hopf algebra. Pilaud \cite{pilaud:2018brick} interpreted this Hopf algebra in terms of the vertices of a family of \emph{brick polytopes}, which are a different class of generalized permutahedra from those we consider in this paper.
\end{remark}
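The metasylvester relation can be tested by brute force for small $n$. The sketch below is our own implementation of the generating relation (union-find over $\mathfrak{S}_n$); for $k=1$ the relation reduces, on permutations, to the sylvester congruence, so the class count is a Catalan number.

```python
from itertools import permutations

def metasylvester_classes(n, k):
    """Number of classes of S_n under the metasylvester congruence (brute force).

    Two permutations are related if they differ by swapping adjacent letters a, c
    (a < c) having at least k letters to their right with values strictly between."""
    perms = list(permutations(range(1, n + 1)))
    index = {p: i for i, p in enumerate(perms)}
    parent = list(range(len(perms)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for p in perms:
        for i in range(n - 1):
            a, c = sorted((p[i], p[i + 1]))
            between = sum(1 for x in p[i + 2:] if a < x < c)
            if between >= k:  # the swap is permitted; merge the two classes
                q = p[:i] + (p[i + 1], p[i]) + p[i + 2:]
                parent[find(index[p])] = find(index[q])
    return len({find(i) for i in range(len(perms))})

print(metasylvester_classes(3, 1))  # 5, the Catalan number C_3 (sylvester case)
```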
\subsection{Tubing algebras}\label{subsec_hopf_algebra}
We begin this section by recalling the tubing algebra defined by Ronco \cite{ronco:2012tamari}.
For $I\subseteq\Nbb,\ n\geq 0$, let $I+n:=\{i+n\ |\ i\in I\}$. In particular, $[m]+n=\{n+1,n+2,\ldots,n+m\}$. If $\Xcal$ is a tubing, we let $\Xcal+n:=\{I+n\ |\ I\in\Xcal\}$.
Consider a family of graphs $\Gcal=\bigsqcup_{n\geq 0}\Gcal_n$ where $\Gcal_n$ is a finite collection of graphs with vertex set $[n]$. We allow $\Gcal_n$ to contain multiple copies of the same graph, and for the purposes of defining the tubing algebra, it will be important to be able to distinguish between multiple copies of the same graph. This could be done by defining $\Gcal$ as a sequence of graphs rather than as a set, but we prefer to describe $\Gcal$ as a set. Define an operation $\circ$ on $\Gcal$ to be \emph{admissible} if
\begin{itemize}
\item $(G\circ G^{\pr})\circ G^{\pr\pr}=G\circ(G^{\pr}\circ G^{\pr\pr})$, and
\item for $G\in\Gcal_n,\ G^{\pr}\in\Gcal_m$:
\begin{itemize}
\item $G\circ G^{\pr}$ is in $\Gcal_{n+m}$,
\item $G=(G\circ G^{\pr})|_{[n]}$, and
\item $(G^{\pr}+n)=(G\circ G^{\pr})|_{[m]+n}$.
\end{itemize}
\end{itemize}
If $\circ$ is admissible, we call the pair $(\Gcal,\circ)$ an \emph{admissible family}. We remark that our definition of admissibility is stronger than that of \cite[Definition 3.4]{ronco:2012tamari}, but this is the appropriate condition to define an associative algebra of maximal tubings; cf. \cite[Theorem 3.10]{ronco:2012tamari}.
Let $\MTub(\Gcal)$ be the set of all maximal tubings of these graphs:
$$\MTub(\Gcal)=\bigsqcup_{G\in\Gcal}\MTub(G).$$
We let $\Kbb[\Gcal]=\Kbb[\MTub(\Gcal)]$ be the $\Kbb$-vector space for which $\MTub(\Gcal)$ indexes a basis. We will consider the distinguished basis $\{\Pbb_{\Xcal}:\ \Xcal\in\MTub(\Gcal)\}$ for $\Kbb[\Gcal]$. The vector space $\Kbb[\Gcal]$ is graded so that an element $\Pbb_{\Xcal}$ is of degree $n$ if $\Xcal$ is a tubing of a graph $G$ with $n$ vertices. Since each $\Gcal_n$ is finite, each graded component of $\Kbb[\Gcal]$ is finite-dimensional.
\begin{definition}\label{def:tubing_mult}
Let $G\in\Gcal_n$ and $G^{\pr}\in\Gcal_m$ be given. For maximal tubings $\Xcal\in\MTub(G)$ and $\Ycal\in\MTub(G^{\pr})$, define
$$\Pbb_{\Xcal}\cdot\Pbb_{\Ycal}=\sum\Pbb_{\Zcal}$$
where the sum is over all maximal tubings $\Zcal$ of $G\circ G^{\pr}$ such that $\Xcal=\Zcal|_{[n]}$ and $(\Ycal+n)=\Zcal|_{[m]+n}$.
\end{definition}
\begin{theorem}[{\cite[Theorem 3.10]{ronco:2012tamari}}]\label{thm_admissible_associative}
If $\Gcal$ is a family of graphs with an admissible operation $\circ$ as above, then the binary operation in Definition~\ref{def:tubing_mult} is associative.
\end{theorem}
To prove Theorem~\ref{thm_admissible_associative}, one may show directly that if $G\in\Gcal_n,\ G^{\pr}\in\Gcal_m,$ and $G^{\pr\pr}\in\Gcal_r$ are graphs with maximal tubings $\Xcal,\ \Ycal,$ and $\Zcal$, respectively, then
$$(\Pbb_{\Xcal}\Pbb_{\Ycal})\Pbb_{\Zcal}=\sum\Pbb_{\Wcal}=\Pbb_{\Xcal}(\Pbb_{\Ycal}\Pbb_{\Zcal})$$
where the sum is taken over $\Wcal\in\MTub(G\circ G^{\pr}\circ G^{\pr\pr})$ such that
\[ \Xcal=\Wcal|_{[n]},\ \Ycal+n=\Wcal|_{[m]+n},\ \Zcal+(n+m)=\Wcal|_{[r]+m+n}. \]
\begin{example}\label{ex_MR_alg}
Consider the family of complete graphs $\Gcal=\{K_n\}_{n\geq 0}$ where we define $K_n\circ K_m=K_{n+m}$. If $\Xcal$ is any maximal tubing of $K_n$, its corresponding $K_n$-tree $\tau(\Xcal)$ is a chain. Letting $\Xcal\in\MTub(K_n),\ \Ycal\in\MTub(K_m)$, the elements $\Pbb_{\Zcal}$ in the support of $\Pbb_{\Xcal}\cdot\Pbb_{\Ycal}$ are indexed by precisely those tubings of $K_{n+m}$ for which $\tau(\Zcal)$ is a linear extension of $\tau(\Xcal)\sqcup \tau(\Ycal)$. But this is the shuffle product of $\tau(\Xcal)$ and $\tau(\Ycal)$ when viewed as permutations. Hence, the natural map $\Kbb[\mathfrak{S}_{\infty}]\ra\Kbb[\Gcal]$ is an isomorphism of algebras from the Malvenuto-Reutenauer algebra to the tubing algebra on the family of complete graphs. A similar result about the coalgebra structure of $\Kbb[\mathfrak{S}_{\infty}]$ will be given in Example~\ref{ex_MR_coalg}.
\end{example}
\begin{remark}\label{rem_MR_decorated}
In many instances, it is useful to consider a generalization of the Malvenuto-Reutenauer algebra, which is indexed by \emph{decorated permutations}; see \cite{novelli2010free} or \cite{pilaud2018hopf}. A decorated permutation is a pair $(w,G)$ consisting of a permutation $w$ and an element $G$ called the decoration. If $\Gcal=\sqcup_{n=0}^{\infty}\Gcal_n$ is a graded set with an admissible operation $\circ$, one may define an algebra with a basis
\[ \bigsqcup_{n=0}^{\infty}\{\Fbb_{(w,G)}:\ w\in\mathfrak{S}_n,\ G\in\Gcal_n\} \]
in much the same way as the tubing algebra, where $\Fbb_{(u,G)}\cdot\Fbb_{(v,G^{\pr})}$ is the sum of $\Fbb_{(w,G\circ G^{\pr})}$ for which $w$ is a shuffle of $u$ and a shift of $v$. Likewise, the coalgebra structure can be extended to the decorated setting.
For this paper, we have chosen to focus on the undecorated setting, though we expect many of our results to hold for decorated permutations as well.
\end{remark}
\begin{figure}
\centering
\includegraphics[scale=.8]{G2prod}
\caption{\label{fig:G2prod}This is the product of basis elements indexed by two $G$-trees (Example~\ref{ex_admissible}); for clarity, we remove the letter $\Pbb$ from this expression.}
\end{figure}
\begin{example}\label{ex_admissible}
Let $G_n$ be the complete bipartite graph on $[n]$ where $i$ and $j$ are adjacent if $|i-j|$ is odd. It is straightforward to check that $\{G_n\}_{n\geq 0}$ is an admissible family with $G_n\circ G_m=G_{n+m}$. The product of the basis elements indexed by the two $G$-trees for $G=G_2$ is shown in Figure~\ref{fig:G2prod}.
Similarly, the sequences of path graphs, complete graphs, and edge-free graphs are admissible, so their tubings form the basis of an associative algebra. These algebras are the Loday-Ronco algebra, the Malvenuto-Reutenauer algebra, and the polynomial ring in one variable, respectively (cf. \cite{forcey.springfield:2010geometric}).
On the other hand, while the sequence of cycle graphs $C_n$ is not an admissible family, \cite{forcey.springfield:2010geometric} constructs a different binary operation to make the vector space $\Kbb[\Gcal]$ into an associative algebra. We leave the details of that construction to their paper.
\end{example}
For the remainder of this section, we make the assumption that $|\Gcal_n|=1$ for all $n\geq 0$. For clarity, we may refer to such a collection $\Gcal$ as a \emph{1-parameter family}. In this situation, there is at most one admissible operation $\circ$ defined by the fact that $G\circ G^{\pr}$ is in $\Gcal_{n+m}$ whenever $G\in\Gcal_n$ and $G^{\pr}\in\Gcal_m$. Hence, we simply say that the family $\Gcal$ is admissible if the operation $\circ$ is.
Our first main result in this section is a characterization of admissible families. For $A\subseteq\Nbb:=\{1,2,3,\ldots\}$, let $\Gcal(A)=\{G_n^A\}_{n\geq 0}$ be the family of graphs such that $V(G_n^A)=[n]$ and there is an edge between $i$ and $j$ if and only if $|j-i|\in A$.
\begin{proposition}\label{prop_admissible_characterization}
A 1-parameter family $\Gcal$ is admissible if and only if there exists $A\subseteq\Nbb$ such that $\Gcal=\Gcal(A)$.
\end{proposition}
\begin{proof}
For a given $A\subseteq\Nbb$, the family $\Gcal(A)$ is admissible: it is immediate from the definition that the restriction of $G_{n+m}^A$ to $[n]$ equals $G_n^A$, and the restriction of $G_{n+m}^A$ to $[m]+n$ equals $G_m^A+n$, as desired.
Now suppose $\Gcal=\{G_n\}_{n\geq 0}$ is an admissible family, and let $A=\{k\in\Nbb:\ \{1,k+1\}\in E(G_{k+1})\}$. We claim that $\Gcal=\Gcal(A)$. To this end, let $n\geq 1$ and $1\leq i<j\leq n$ be given, and set $k=j-i$. We may decompose $G_n$ as $G_n=G_j\circ G_{n-j}$, so the edge $\{i,j\}$ is in $G_n$ if and only if it is in $G_j$. Furthermore, $G_j=G_{i-1}\circ G_{j-i+1}$, so $G_j$ has the edge $\{i,j\}$ exactly when $G_{j-i+1}+(i-1)$ does. By definition of $A$, this occurs exactly when $k\in A$. It follows that $G_n=G_n^A$.
\end{proof}
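The two restriction identities used in the first paragraph of the proof are easy to confirm computationally. The following Python sketch (ours) checks, for a sample choice of $A$, that $G_{n+m}^A$ restricts to $G_n^A$ on $[n]$ and to a shifted copy of $G_m^A$ on $[m]+n$.

```python
def G_A_edges(A, n):
    """Edges of G_n^A: pairs {i,j} with 1 <= i < j <= n and j - i in A."""
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1) if (j - i) in A}

def check_admissible(A, n, m):
    """G_{n+m}^A restricts to G_n^A on [n] and to a shifted G_m^A on [m]+n."""
    big = G_A_edges(A, n + m)
    left = {(i, j) for (i, j) in big if j <= n}          # edges inside [n]
    right = {(i - n, j - n) for (i, j) in big if i > n}  # edges inside [m]+n, shifted back
    return left == G_A_edges(A, n) and right == G_A_edges(A, m)

assert all(check_admissible({1, 3}, n, m) for n in range(5) for m in range(5))
print("admissibility check passed")
```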
As a corollary, we may deduce the first part of Theorem~\ref{thm_main}.
\begin{proof}[Proof of Theorem~\ref{thm_main}(\ref{thm_main_1})]
If $\Gcal$ is an admissible 1-parameter family of graphs, then $\Gcal=\Gcal(A)$ for some subset $A\subseteq\Nbb$. If each graph $G_n^A$ is filled, then $j\in A$ implies $i\in A$ whenever $i<j$. This is equivalent to the condition that there exists some $k\in\{0,1,2,\ldots\}\cup\{+\infty\}$ such that $A=\{i\in\Nbb:\ i\leq k\}$. But this means $G_n^A=H_{k,n}$ for all $n$. By Proposition~\ref{prop_translational_char}, the sequence of lattice congruences $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ corresponding to the filled graphs $\{G_n^A\}_{n\geq 0}$ forms a translational family.
\end{proof}
Let $H,G$ be graphs on $[n]$ such that $E(H)\subseteq E(G)$. From the definition of the graph associahedron, the polytope $P_H$ is a Minkowski summand of $P_G$, so the normal fan of $P_H$ coarsens the normal fan of $P_G$. This in turn induces a surjective map $\Psi_H^G:\MTub(G)\ra\MTub(H)$.
For the remainder of the section, we fix subsets $A\subseteq B\subseteq\Nbb$. The graph $G_n^A$ is a subgraph of $G_n^B$ for all $n\geq 0$, which determines a surjective map $\MTub(G_n^B)\thra\MTub(G_n^A)$. For notational convenience, we write $\Psi_n$ in place of the map $\Psi_{G_n^A}^{G_n^B}$ for all $n\geq 0$.
\begin{lemma}\label{lem_psi_restriction}
For $\Wcal\in\MTub(G_{n+m}^B)$:
\begin{enumerate}
\item\label{lem_psi_restriction_1} $\Psi_{n+m}(\Wcal)|_{[n]}=\Psi_n(\Wcal|_{[n]})$,
\item\label{lem_psi_restriction_2} $\std(\Psi_{n+m}(\Wcal)|_{[m]+n})=\Psi_m(\std(\Wcal|_{[m]+n}))$.
\end{enumerate}
\end{lemma}
\begin{proof}
The standardization map in (\ref{lem_psi_restriction_2}) has the effect of shifting the vertex set from $[m]+n$ to $[m]$. Besides this point, the two parts are symmetric, so we only prove the first.
Let $\Wcal$ be a maximal tubing of $G_{n+m}^B$ and set $\Zcal=\Psi_{n+m}(\Wcal)$. We wish to show that $\Zcal|_{[n]}$ is equal to $\Psi_n(\Wcal|_{[n]})$. Since they are both maximal tubings of $G_n^A$, it suffices to show that their $G$-trees share a common linear extension.
Let $u=u_1\cdots u_{n+m}$ be a permutation of $[n+m]$ that is a linear extension of $\tau(\Wcal)$. Then $u$ is also a linear extension of $\tau(\Zcal)$, so $u|_{[n]}$ is a linear extension of $\tau(\Zcal)|_{[n]}$. On the other hand, $u|_{[n]}$ is a linear extension of $\tau(\Wcal|_{[n]})$, so it is also a linear extension of $\tau(\Psi_n(\Wcal|_{[n]}))$.
\end{proof}
Now we return to the embedding $c:\Kbb[Z_{\infty}]\hookra\Kbb[\mathfrak{S}_{\infty}]$ from Section~\ref{subsec_MR}. The maps $\{\Psi_n\}_{n\geq 0}$ give rise to an embedding of vector spaces $c:\Kbb[\Gcal(A)]\hookra\Kbb[\Gcal(B)]$ where
$$c(\Pbb_{\Xcal})=\sum_{\Ycal\in\Psi_n^{-1}(\Xcal)}\Pbb_{\Ycal}$$
for $\Xcal\in\MTub(G_n^A)$.
\begin{theorem}\label{thm_graph_MR_subalg}
The embedding
$$c:\Kbb[\Gcal(A)]\hookra\Kbb[\Gcal(B)]$$
is a map of algebras.
\end{theorem}
\begin{proof}
Let $\Xcal\in\MTub(G_n^A)$ and $\Ycal\in\MTub(G_m^A)$ be given. Then $c(\Pbb_{\Xcal}\cdot\Pbb_{\Ycal})=\sum c(\Pbb_{\Zcal})$, where the sum is over $\Zcal\in\MTub(G_{n+m}^A)$ such that $\Xcal=\Zcal|_{[n]}$ and $\Ycal+n=\Zcal|_{[m]+n}$. We have
$$\sum_{\Zcal} c(\Pbb_{\Zcal})=\sum_{\Zcal}\sum_{\Wcal\in\Psi_{n+m}^{-1}(\Zcal)}\Pbb_{\Wcal}$$
On the other hand,
\begin{align*}
c(\Pbb_{\Xcal})\cdot c(\Pbb_{\Ycal}) &=(\sum_{\Wcal^{\pr}\in\Psi_n^{-1}(\Xcal)}\Pbb_{\Wcal^{\pr}})\cdot (\sum_{\Wcal^{\pr\pr}\in\Psi_m^{-1}(\Ycal)}\Pbb_{\Wcal^{\pr\pr}})\\
&=\sum_{\substack{\Wcal^{\pr}\in\Psi_n^{-1}(\Xcal)\\\Wcal^{\pr\pr}\in\Psi_m^{-1}(\Ycal)}}\Pbb_{\Wcal^{\pr}}\cdot\Pbb_{\Wcal^{\pr\pr}}
\end{align*}
We show that $c(\Pbb_{\Xcal}\cdot \Pbb_{\Ycal})=c(\Pbb_{\Xcal})\cdot c(\Pbb_{\Ycal})$. To this end, fix $\Pbb_{\Zcal}$ in the expansion of $\Pbb_{\Xcal}\cdot \Pbb_{\Ycal}$, and let $\Wcal\in\Psi_{n+m}^{-1}(\Zcal)$. Set $\Wcal^{\pr}=\Wcal|_{[n]}$ and $\Wcal^{\pr\pr}+n=\Wcal|_{[m]+n}$ so that $\Wcal^{\pr}\in\MTub(G_n^B)$ and $\Wcal^{\pr\pr}\in\MTub(G_m^B)$. Clearly, $\Pbb_{\Wcal}$ is in the expansion of $\Pbb_{\Wcal^{\pr}}\cdot \Pbb_{\Wcal^{\pr\pr}}$. But,
\begin{align*}
&\Psi_n(\Wcal^{\pr}) =\Psi_{n+m}(\Wcal)|_{[n]}=\Zcal|_{[n]}=\Xcal,\ \hspace{2mm}\mbox{and}\\
&\Psi_m(\Wcal^{\pr\pr})=\std(\Psi_{n+m}(\Wcal)|_{[m]+n})=\std(\Zcal|_{[m]+n})=\Ycal,
\end{align*}
so $\Pbb_{\Wcal}$ is in the expansion of $c(\Pbb_{\Xcal})\cdot c(\Pbb_{\Ycal})$.
Conversely, suppose $\Wcal^{\pr}\in\Psi_n^{-1}(\Xcal)$ and $\Wcal^{\pr\pr}\in\Psi_m^{-1}(\Ycal)$ are given, and let $\Pbb_{\Wcal}$ be an element in the expansion of $\Pbb_{\Wcal^{\pr}}\cdot\Pbb_{\Wcal^{\pr\pr}}$. Set $\Zcal=\Psi_{n+m}(\Wcal)$. Then
\begin{align*}
&\Zcal|_{[n]}=\Psi_{n+m}(\Wcal)|_{[n]}=\Psi_n(\Wcal^{\pr})=\Xcal,\ \hspace{2mm}\mbox{and}\\
&\std(\Zcal|_{[m]+n})=\std(\Psi_{n+m}(\Wcal)|_{[m]+n})=\Psi_m(\Wcal^{\pr\pr})=\Ycal,
\end{align*}
so $\Pbb_{\Zcal}$ is in the expansion of $\Pbb_{\Xcal}\cdot \Pbb_{\Ycal}$. Both $c(\Pbb_{\Xcal}\cdot \Pbb_{\Ycal})$ and $c(\Pbb_{\Xcal})\cdot c(\Pbb_{\Ycal})$ are multiplicity-free sums of basis elements with the same support, so they are equal.
\end{proof}
\begin{corollary}
If $\Gcal$ is an admissible 1-parameter family, the tubing algebra $\Kbb[\Gcal]$ is a subalgebra of the Malvenuto-Reutenauer algebra.
\end{corollary}
\subsection{Tubing coalgebras}\label{subsec_tubing_coalgebra}
We next define a comultiplication on $\Kbb[\Gcal]$. We will assume throughout that $\Gcal$ is a 1-parameter family, though it should be possible to extend the construction to more general families of graphs by defining a ``selection'' operation as in \cite{pilaud2018hopf}.
Say that $\Gcal$ is \emph{restriction-compatible} if for any $G\in\Gcal$ and any subset of vertices $I\subseteq V(G)$,
\begin{itemize}
\item $\std(G|_I)$ is a subgraph of the graph $G^{\pr}\in\Gcal$ where $V(G^{\pr})=V(\std(G|_I))$, and
\item $\std(G/I)$ is a subgraph of the graph $G^{\pr\pr}\in\Gcal$ where $V(G^{\pr\pr})=V(\std(G/I))$.
\end{itemize}
We note that the second property actually implies the first since $\std(G|_I)$ is a subgraph of $\std(G/(V\setm I))$.
\begin{example}\label{ex_res_comp}
Path graphs, complete graphs, and edge-free graphs are all restriction-compatible in addition to being admissible (Example~\ref{ex_admissible}). In these cases, the quotient graphs $G/I,\ I\subseteq V(G)$ are again path graphs, complete graphs, and edge-free graphs, respectively. Similarly, the sequence of cycle graphs $C_n$ whose vertices are labeled in cyclic order are also restriction-compatible since the quotient graphs are all cycles. On the other hand, the family of complete bipartite graphs in Example~\ref{ex_admissible} is not restriction-compatible.
\end{example}
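Assuming the reconnected-complement convention for the quotient $G/I$ (two remaining vertices are adjacent whenever some path between them has all internal vertices in $I$; this reading of the notation is our assumption), the claim of Example~\ref{ex_res_comp} that quotients of cycles are again cycles can be confirmed computationally. The sketch below is ours.

```python
def cycle(n):
    """Edge set of the cycle graph C_n with vertices labeled in cyclic order."""
    return {(i, i + 1) for i in range(1, n)} | ({(1, n)} if n >= 3 else set())

def quotient(vertices, edges, I):
    """G/I on V minus I: x ~ y iff some x--y path has all internal vertices in I."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    out = set()
    for x in vertices:
        if x in I:
            continue
        seen, stack = {x}, [x]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w in seen:
                    continue
                seen.add(w)
                if w in I:
                    stack.append(w)  # keep walking inside I
                else:
                    out.add((min(x, w), max(x, w)))
    return out

def std_graph(vertices, I, edges):
    """Relabel the vertices of V minus I order-preservingly by 1, 2, ..."""
    relabel = {v: r + 1 for r, v in enumerate(sorted(v for v in vertices if v not in I))}
    return {(relabel[a], relabel[b]) for a, b in edges}

assert std_graph(range(1, 7), {2}, quotient(range(1, 7), cycle(6), {2})) == cycle(5)
assert std_graph(range(1, 6), {1, 3}, quotient(range(1, 6), cycle(5), {1, 3})) == cycle(3)
print("cycle quotient check passed")
```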
We will not attempt to completely describe all restriction-compatible families of graphs, but we may describe those families that are both restriction-compatible and admissible.
\begin{proposition}
If $\Gcal$ is a 1-parameter family of graphs that is both restriction-compatible and admissible, then $\Gcal$ must be the family of path graphs, the family of complete graphs, or the family of edge-free graphs.
\end{proposition}
\begin{proof}
To be an admissible family, $\Gcal$ must be equal to $\Gcal(A)$ for some set $A\subseteq\Nbb$. We wish to show that restriction-compatibility forces either $A=\{1\},\ A=\Nbb$, or $A=\emptyset$. Restriction-compatibility of these cases was observed in Example~\ref{ex_res_comp}. To prove that these are the only examples, it is enough to show that if there exists $k\in A,\ k\geq 2$, then $A=\Nbb$.
Suppose such $k$ exists, and let $j\in\Nbb$ with $j<k$. Let $H=(G_{k+1})|_{[j]\cup\{k+1\}}$. Since $\{1,k+1\}\in E(G_{k+1})$, the graph $\std(H)$ is a subgraph of $G_{j+1}$ containing the edge $\{1,j+1\}$. This implies $j\in A$.
On the other hand, suppose $j\in\Nbb$ with $j>k$ and set $n=(j+1)k$. Let $I\subseteq[n]$ such that
\begin{enumerate}
\item $|I|=j-1$,
\item $I$ does not contain any multiples of $k$, and
\item the smallest element of $I$ is greater than $k$.
\end{enumerate}
Such a collection exists since $k\geq 2$. Now let $J=[n]\setm (I\cup\{k,n\})$. Then the graph $\std(G_n/J)$ is a subgraph of $G_{j+1}$ containing the edge $\{1,j+1\}$. Hence, $j\in A$ holds.
\end{proof}
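The second half of the proof can be checked on concrete parameters. Assuming again the path-through-$J$ convention for adjacency in the quotient, the following Python sketch (ours) takes $k=2$, $j=3$, $n=(j+1)k=8$, and $I=\{5,7\}$, and verifies that $k$ and $n$ (which standardize to $1$ and $j+1$) become adjacent in $G_n/J$ via the path through the multiples of $k$.

```python
def connected_through(edges, x, y, inner):
    """Is there an x--y path whose internal vertices all lie in `inner`?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {x}, [x]
    while stack:
        v = stack.pop()
        for w in adj.get(v, ()):
            if w == y:
                return True
            if w not in seen and w in inner:
                seen.add(w)
                stack.append(w)
    return False

k, j = 2, 3
n = (j + 1) * k                  # n = 8
A = {k}                          # suppose only the difference k is known to lie in A
edges = {(a, b) for a in range(1, n + 1) for b in range(a + 1, n + 1) if b - a in A}
I = {5, 7}                       # |I| = j-1, no multiples of k, smallest element > k
J = set(range(1, n + 1)) - I - {k, n}
assert connected_through(edges, k, n, J)  # the path 2-4-6-8 runs through J
print("quotient edge check passed")
```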
If $H$ is a subgraph of $G$ with the same vertex set $[n]$ and $\Xcal$ is in $\MTub(H)$, we let $c_H^G(\Pbb_{\Xcal})=\sum\Pbb_{\Ycal}$ where the sum ranges over $\Ycal\in\MTub(G)$ such that $\Psi_H^G(\Ycal)=\Xcal$. Suppose $\Gcal$ is a restriction-compatible family. We define
$$\Delta_{\Gcal}=\Delta:\Kbb[\Gcal]\ra\Kbb[\Gcal]\otimes\Kbb[\Gcal]$$
as follows. If $\Xcal\in\MTub(G)$, let
$$\Delta(\Pbb_{\Xcal})=\sum c_{\std(G|_I)}^{G^{\pr}}(\Pbb_{\std(\Xcal|_I)})\otimes c_{\std(G/I)}^{G^{\pr\pr}}(\Pbb_{\std(\Xcal/I)}),$$
where the sum is over ideals $I$ of $\Xcal$, and $G^{\pr},G^{\pr\pr}\in\Gcal$ such that $|I|=|V(G^{\pr})|$ and $|V(G)\setm I|=|V(G^{\pr\pr})|$.
\begin{figure}
\centering
\includegraphics[scale=.75]{ex_coprod}
\caption{\label{fig:coprod}This is the comultiplication of a basis element indexed by a $G$-tree for the family of cycle graphs (Example~\ref{ex_cycle_coalg}); for clarity, we remove the letter $\Pbb$ from this expression.}
\end{figure}
\begin{example}\label{ex_MR_coalg}
We again consider the case $\Gcal=\{K_n\}_{n\geq 0}$ from Example~\ref{ex_MR_alg}. Every induced subgraph of $K_n$ is a complete graph, as is the quotient $K_n/I$ for any $I\subseteq[n]$. Thus, for $\Xcal\in\MTub(K_n)$, the formula for $\Delta$ simplifies to
$$\Delta(\Pbb_{\Xcal})=\sum\Pbb_{\std(\Xcal|_{I})}\otimes \Pbb_{\std(\Xcal/I)},$$
where the sum ranges over the ideals of $\Xcal$. Since $\tau(\Xcal)$ is a chain $u_1<\cdots<u_n$, its order ideals are of the form $\{u_1,\ldots,u_i\}$ for $i=0,1,\ldots,n$. Under the bijection between $\MTub(K_n)$ and $\mathfrak{S}_n$, this expression becomes
$$\Delta(\Fbb_u)=\sum\Fbb_{\std(u_1\cdots u_i)}\otimes \Fbb_{\std(u_{i+1}\cdots u_n)}.$$
Thus, $\Kbb[\Gcal]$ has the same coalgebra structure as $\Kbb[\mathfrak{S}_{\infty}]$.
\end{example}
\begin{example}\label{ex_cycle_coalg}
The set $\{C_n\}_{n\geq 0}$ of cyclically ordered cycle graphs is another restriction-compatible family. In Figure~\ref{fig:coprod} we show the comultiplication applied to a $C_4$-tree, or equivalently, a maximal tubing $\Xcal$ of $C_4$. The sum is split into six terms, one for each choice of ideal of $\Xcal$. We observe that for the two ideals $I$ such that $G|_I$ is not a cycle graph, the element $c_{\std(G|_I)}^{G^{\pr}}(\Pbb_{\std(\Xcal|_I)})$ has multiple summands.
For example, the fourth term corresponds to the ideal $I=\{1,3\}$.
Observe that $\std(G|_I)$ is the edge-free graph on $[2]$, $\std(\Xcal|_I)=\{\{1\}, \{2\}\}$, and the corresponding $G$-forest $T$ has $1$ and $2$ incomparable.
Since $C_2$ is also the complete graph on $[2]$, each element of the $\Psi_{\std(G|_I)}^{G^{\pr}}$-fiber of $\std(\Xcal|_I)$ is just a linear extension of $T$.
\end{example}
\begin{theorem}
If $\Gcal$ is a restriction-compatible family, then the map
$$c:\Kbb[\Gcal]\hookra\Kbb[\mathfrak{S}_{\infty}]$$
commutes with $\Delta$. In particular, $\Delta_{\Gcal}$ is coassociative.
\end{theorem}
\begin{proof}
Fix a maximal tubing $\Xcal\in\MTub(G_n)$. We show that $\Delta(c(\Pbb_{\Xcal}))=(c\otimes c)(\Delta_{\Gcal}(\Pbb_{\Xcal}))$.
The element $c(\Pbb_{\Xcal})$ is supported by the permutations of $[n]$ that are linear extensions of the tree poset $\tau(\Xcal)$. Let $\Lcal(P)$ be the set of linear extensions of a poset $P$. Then,
\begin{align*}
\Delta(c(\Pbb_{\Xcal})) &= \sum_{u\in\Lcal(\tau(\Xcal))}\Delta(\Fbb_u)\\
&= \sum_{i=0}^n\sum_{u\in\Lcal(\tau(\Xcal))}\Fbb_{\std(u_1\cdots u_i)}\otimes \Fbb_{\std(u_{i+1}\cdots u_n)}
\end{align*}
If $u=u_1\cdots u_n$ is a linear extension of $\tau(\Xcal)$, then the subset $\{u_1,\ldots,u_i\}$ is an ideal, and the complement $\{u_{i+1},\ldots,u_n\}$ is an order filter. If $I$ is an order ideal, then $\tau(\Xcal)|_I=\tau(\Xcal|_I)$ and $\tau(\Xcal)|_{[n]\setm I}=\tau(\Xcal/I)$. Putting these together, we have
\begin{align*}
\Delta(c(\Pbb_{\Xcal})) &= \sum_I(\sum_{u\in\Lcal(\tau(\Xcal)|_I)} \Fbb_{\std(u)}) \otimes (\sum_{w\in\Lcal(\tau(\Xcal)|_{[n]\setm I})}\Fbb_{\std(w)})\\
&= \sum_I(\sum_{u\in\Lcal(\tau(\Xcal|_I))} \Fbb_{\std(u)}) \otimes (\sum_{w\in\Lcal(\tau(\Xcal/I))}\Fbb_{\std(w)})\\
&= \sum_I c(\Pbb_{\std(\Xcal|_I)})\otimes c(\Pbb_{\std(\Xcal/I)}),
\end{align*}
where the sum ranges over ideals $I$ of $\tau(\Xcal)$. If $K\subseteq H\subseteq G$ is a sequence of subgraphs with a common vertex set $[n]$, then the map $c_K^G$ factors as $c_K^G=c_H^G\circ c_K^H$. Since $\Gcal$ is a restriction-compatible family,
$$\sum_I c(\Pbb_{\std(\Xcal|_I)}) \otimes c(\Pbb_{\std(\Xcal/I)}) = \sum_I c_{G^{\pr}}^{K_{|I|}}c_{\std(G|_I)}^{G^{\pr}}(\Pbb_{\std(\Xcal|_I)})\otimes c_{G^{\pr\pr}}^{K_{|V\setm I|}}c_{\std(G/I)}^{G^{\pr\pr}}(\Pbb_{\std(\Xcal/I)}),$$
where $G^{\pr},G^{\pr\pr}\in\Gcal$ such that $|V(G^{\pr})|=|I|$ and $|V(G^{\pr\pr})|=|V\setm I|$. The latter sum simplifies to
$$(c\otimes c)\left(\sum_I c_{\std(G|_I)}^{G^{\pr}}(\Pbb_{\std(\Xcal|_I)})\otimes c_{\std(G/I)}^{G^{\pr\pr}}(\Pbb_{\std(\Xcal/I)})\right)=(c\otimes c)(\Delta_{\Gcal}(\Pbb_{\Xcal})),$$
as desired.
\end{proof}
\subsection{Insertional families of lattice congruences}\label{subsec_insertional}
If $\alpha=(i,j,\epsilon)$ is an arc on $[n]$, we define the \emph{deletion} $\alpha\setm k$ to be the arc on $[n-1]$ where
$$\alpha\setm k=\begin{cases}(i-1,j-1,\epsilon)\ \mbox{if }k<i\\(i,j,\epsilon)\ \mbox{if }k>j\\(i,j-1,\epsilon^{\pr})\ \mbox{if }i\leq k\leq j\end{cases},$$
where $\epsilon^{\pr}_l=\epsilon_l$ when $l<k-i$ and $\epsilon^{\pr}_l=\epsilon_{l+1}$ when $l\geq k-i$. That is, $\epsilon^{\pr}$ is obtained from $\epsilon$ by deleting some $+$ or $-$ entry. Reversing this operation, we say the arc $\beta$ is obtained from $\alpha$ by \emph{inserting} $k$ if $\alpha=\beta\setm k$.
A sequence of lattice congruences $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ is an \emph{insertional family} if for any arc $\alpha$ contracted by $\Theta_n$, any arc $\beta$ obtained by inserting some $k\in[n+1]$ is contracted by $\Theta_{n+1}$. The analogue of Theorem~\ref{thm_translational_subalg} proved in \cite[Theorem 1.3, Proposition 8.1]{reading:2005lattice} is as follows.
\begin{theorem}
If $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ is an insertional family, then the map
$$c:\Kbb[Z_{\infty}^{\mathbf{\Theta}}]\ra\Kbb[\mathfrak{S}_{\infty}]$$
embeds $\Kbb[Z_{\infty}^{\mathbf{\Theta}}]$ as a sub-coalgebra of $\Kbb[\mathfrak{S}_{\infty}]$.
\end{theorem}
We now prove the second part of Theorem~\ref{thm_main}.
\begin{proof}[Proof of Theorem~\ref{thm_main}(\ref{thm_main_2})]
Let $\Gcal=\{G_n\}_{n\geq 0}$ be a 1-parameter family of filled graphs, and let $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ be the corresponding sequence of lattice congruences. We must prove that $\Gcal$ is restriction-compatible if and only if $\mathbf{\Theta}$ is insertional.
Suppose first that $\mathbf{\Theta}$ is an insertional family of lattice congruences. To prove that $\Gcal$ is restriction-compatible, it suffices to show that $\std(G_n/\{i_1,\ldots,i_l\})$ is a subgraph of $G_{n-l}$ for $1\leq i_1<\cdots<i_l\leq n$. Indeed, it is enough to prove this statement for $l=1$ since if $H$ and $G$ are graphs on $[n]$ such that $E(H)\subseteq E(G)$, the quotient $H/i$ is a subgraph of $G/i$ for any $i\in[n]$. Hence, the statement for $l=1$ gives a sequence of inclusions:
$$E(\std(G_n/\{i_1,\ldots,i_l\}))\subseteq\cdots\subseteq E(\std(G_{n-l+1}/\{i_1\}))\subseteq E(G_{n-l}).$$
For $k\in[n]$, we show that $\std(G_n/k)$ is a subgraph of $G_{n-1}$. Suppose $\{i,j\}$ is not an edge of $G_{n-1}$. Then $\Theta_{n-1}$ contracts the arc $\alpha=(i,j,+)$. Let $\beta^+=(i^{\pr},j^{\pr},+)$ be the arc obtained from $\alpha$ by inserting $k$ such that its sign vector $\epsilon$ is $(+,\ldots,+)$. If $i<k<j+1$, then there is another arc $\beta^-=(i^{\pr},j^{\pr},\epsilon^{\pr})$ such that $\epsilon^{\pr}_{k-i^{\pr}}=-$. Since $\mathbf{\Theta}$ is insertional, both $\beta^+$ and $\beta^-$ are contracted by $\Theta_n$.
We claim that $\{i,j\}$ is not an edge of $\std(G_n/k)$. If, to the contrary, it is an edge of $\std(G_n/k)$, then either $\{i^{\pr},j^{\pr}\}$ is an edge of $G_n$, or $\{i^{\pr},k\}$ and $\{k,j^{\pr}\}$ are both edges of $G_n$. In the former case, the arc $\beta^+$ is not contracted by $\Theta_n$, a contradiction. On the other hand, suppose $\{i^{\pr},j^{\pr}\}$ is not an edge, but $\{i^{\pr},k\}$ and $\{k,j^{\pr}\}$ both are. Then $i^{\pr}<k<j^{\pr}$ holds since $G_n$ is filled. But this means $i^{\pr}=i$ and $j^{\pr}=j+1$, so the arc $\beta^-$ is well-defined, and it is contracted by $\Theta_n$. Since $\Theta_n$ is generated by positive arcs, either $(i^{\pr},k,+)$ or $(k,j^{\pr},+)$ must be contracted by $\Theta_n$. But this contradicts the assumption that $\{i^{\pr},k\}$ and $\{k,j^{\pr}\}$ are edges of $G_n$.
Now assume that $\Gcal$ is a restriction-compatible family. Let $\alpha=(i,j,\epsilon)$ be an arc contracted by $\Theta_{n-1}$, and pick $k\in[n]$. We claim that any arc $\beta$ obtained by inserting $k$ into $\alpha$ is contracted by $\Theta_n$. This will prove that $\mathbf{\Theta}$ is an insertional family.
Since $\Theta_{n-1}$ is generated by positive arcs, there exists a positive subarc $\alpha^{\pr}=(i^{\pr},j^{\pr},+)$ of $\alpha$ that is contracted by $\Theta_{n-1}$. As a result, the pair $\{i^{\pr},j^{\pr}\}$ is not an edge of $G_{n-1}$. Moreover, any arc $\beta$ of $[n]$ with $\beta\setm k=\alpha$ contains a subarc $\beta^{\pr}$ such that $\beta^{\pr}\setm k=\alpha^{\pr}$. Hence, to show that $\beta$ is contracted by $\Theta_n$, it is enough to show that $\beta^{\pr}$ is contracted by $\Theta_n$.
If $\beta^{\pr}$ is a positive arc, then it follows that $\beta^{\pr}$ is contracted by $\Theta_n$ since $E(\std(G_n\setm k))\subseteq E(G_{n-1})$. If $\beta^{\pr}$ is not a positive arc, then $i^{\pr}<k\leq j^{\pr}$ and $\beta^{\pr}=(i^{\pr},j^{\pr}+1,\epsilon^{\pr})$ where $\epsilon^{\pr}_{k-i^{\pr}}$ is the only negative entry in $\epsilon^{\pr}$. In this case, since $E(\std(G_n/k))\subseteq E(G_{n-1})$, either $\{i^{\pr},k\}$ or $\{k,j^{\pr}+1\}$ is not an edge, which means that some subarc of $\beta^{\pr}$ is contracted by $\Theta_n$. It follows that $\beta^{\pr}$ is contracted by $\Theta_n$ as well.
\end{proof}
\section{Open problems}\label{sec:other}
\subsection{Lattices of maximal tubings}\label{subsec:tubing_lattice}
Not every poset of maximal tubings is a lattice. For example, the two indicated atoms of the poset of maximal tubings shown in Figure~\ref{fig_nl} have two minimal upper bounds, so the poset is not a lattice.
\begin{figure}
\centering
\includegraphics{G3102}
\caption{\label{fig_nl}A poset of maximal tubings that is not a lattice}
\end{figure}
Corollary~\ref{g-perm lattice cong} characterizes graphs $G$ for which $L_G$ is a meet-semilattice quotient of the weak order. A more fundamental problem is to characterize all graphs such that $L_G$ is a lattice. To this end, we make the simple observation that an interval $L^{\pr}$ of a lattice $L$ is a sublattice of $L$. In particular if $G^{\pr}$ is any graph obtained by contracting or deleting vertices of $G$ such that $L_{\std(G^{\pr})}$ is not a lattice, then $L_G$ is not a lattice either. Continuing to borrow from matroid terminology, we say that $G^{\pr}$ is a \emph{minor} of $G$ if it is the standardization of a sequence of contractions and deletions.
\begin{problem}
Give an explicit list of minors such that $L_G$ is a lattice whenever $G$ does not contain a minor from the list.
\end{problem}
By exhaustive search, we found that when $G$ is a connected graph with four vertices, the poset $L_G$ is not a lattice if and only if $\{1,3\}$ and $\{2,4\}$ are edges but $\{2,3\}$ is not an edge in $G$. These are the seven graphs shown in Figure~\ref{fig_nlex}.
\begin{figure}
\centering
\includegraphics{nlex}
\caption{\label{fig_nlex}Graphs with four vertices such that $L_G$ is not a lattice}
\end{figure}
\subsection{Cyclohedra}\label{subsec:cycles}
Let $C_n$ be the $n$-cycle graph, with vertices labeled $1,2,\ldots,n$ in cyclic order. The graph associahedron $P_{C_n}$ is known as a \emph{cyclohedron}. The cyclohedron is combinatorially equivalent to the \emph{Type $B_{n-1}$ associahedron} \cite{simion:2003typeB}. Its facial structure is usually described in terms of Type $B_n$ Coxeter-Catalan combinatorial objects, e.g. centrally symmetric triangulations of polygons. The graph associahedron $P_{C_n}$ does not have the same normal fan as the Type $B_{n-1}$ associahedron, however. This geometric distinction is relevant in many of its applications. The graph associahedron $P_{C_n}$ is used to study the self-linking of knots \cite{bott.taubes:1994self} or to tile the moduli space $\ov{Z}^n$ in \cite{devadoss:2002space}, whereas the Type $B_n$ associahedron arises in the theory of cluster algebras \cite{fomin.zelevinsky:2003clusterII}.
From the Coxeter-Catalan point of view, the vertices of the Type $B_n$ associahedron can be partially ordered in several ways, which are called Cambrian lattices \cite{reading:2006cambrian,thomas:2006tamari}. A \emph{Cambrian lattice} is a certain lattice quotient of the weak order of a finite Coxeter system. We remark that the poset of maximal tubings $L_{C_n}$ is not isomorphic to a Type $B_{n-1}$ Cambrian lattice for $n\geq 3$, despite the fact that they arise as orientations of the same undirected graph. Indeed, $L_{C_3}=L_{K_3}$ is the weak order of Type $A_2$, which is not isomorphic to any Cambrian lattice of Type $B_2$.
Cambrian lattices have a remarkable structure: they are all semidistributive lattices \cite{reading:2006cambrian}. A lattice is \emph{semidistributive} if for any three elements $x,y,z$:
\begin{itemize}
\item if $x\wedge z=y\wedge z$, then $(x\vee y)\wedge z=x\wedge z$ and
\item if $x\vee z=y\vee z$, then $(x\wedge y)\vee z=x\vee z$.
\end{itemize}
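To make the definition concrete, the following brute-force check (an illustrative sketch, not part of the paper's computations) verifies semidistributivity for a small finite lattice given by its order relation; a chain passes, while the diamond $M_3$ fails:

```python
from itertools import product

def glb(elems, leq, x, y):
    """Greatest lower bound (meet) of x and y in a finite lattice."""
    lb = [z for z in elems if leq(z, x) and leq(z, y)]
    for z in lb:
        if all(leq(w, z) for w in lb):
            return z

def lub(elems, leq, x, y):
    """Least upper bound (join) of x and y in a finite lattice."""
    ub = [z for z in elems if leq(x, z) and leq(y, z)]
    for z in ub:
        if all(leq(z, w) for w in ub):
            return z

def is_semidistributive(elems, leq):
    meet = lambda a, b: glb(elems, leq, a, b)
    join = lambda a, b: lub(elems, leq, a, b)
    for x, y, z in product(elems, repeat=3):
        # meet-semidistributivity: x/\z = y/\z implies (x\/y)/\z = x/\z
        if meet(x, z) == meet(y, z) and meet(join(x, y), z) != meet(x, z):
            return False
        # join-semidistributivity: x\/z = y\/z implies (x/\y)\/z = x\/z
        if join(x, z) == join(y, z) and join(meet(x, y), z) != join(x, z):
            return False
    return True

chain = ([0, 1, 2], lambda a, b: a <= b)  # a chain is distributive, hence semidistributive
m3 = (list("0abc1"), lambda a, b: a == b or a == "0" or b == "1")  # the diamond M3
```

Here $M_3$ fails the join condition with $x=a$, $y=b$, $z=c$: $a\vee c=b\vee c=1$ but $(a\wedge b)\vee c=c$.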
The weak order is known to be semidistributive, so when $G$ is filled, the poset $L_G$ inherits semidistributivity as a lattice quotient of the weak order. We do not know of a way to represent $L_{C_n}$ as a lattice quotient of the weak order for $n\geq 4$. In particular, the canonical map $\Psi_{C_n}:\mathfrak{S}_n\ra L_{C_n}$ is not a lattice map as $C_n$ is not filled for $n\geq 4$. However, we have verified by computer calculation that $L_{C_n}$ is a semidistributive lattice for $n\leq 6$. This has led us to the following question.
\begin{question}
Is $L_{C_n}$ a semidistributive lattice for each $n\geq 1$?
\end{question}
We remark that the poset $L_G$ need not be semidistributive even when it is a lattice. For example, one may check that the star graph $G$ with $E(G)=\{\{1,2\},\{1,3\},\{1,4\}\}$ has a lattice of maximal tubings that is not semidistributive.
\subsection{Facial weak order}\label{subsec:facial_weak}
For $n\geq 0$, let $\Pi_n$ be the set of \emph{ordered set partitions} $(B_1,\ldots,B_l)$ of $[n]$. In \cite{chapoton:2000algebres}, Chapoton defined a Hopf algebra $\Kbb[\Pi_{\infty}]=\bigoplus\Kbb[\Pi_n]$ on the set of ordered set partitions. Identifying maximally refined ordered set partitions $(B_1,\ldots,B_n)$ with permutations, the natural inclusion $\Kbb[\mathfrak{S}_{\infty}]\ra\Kbb[\Pi_{\infty}]$ is a Hopf algebra map.
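For a sense of scale, the cardinalities $|\Pi_n|$ are the Fubini numbers, which satisfy a simple recursion obtained by conditioning on the first block $B_1$ (an illustrative sketch, not part of the Hopf-algebraic construction):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_ordered_set_partitions(n):
    """|Pi_n|: choose the k elements of the first block B_1, then order the rest."""
    if n == 0:
        return 1
    return sum(comb(n, k) * num_ordered_set_partitions(n - k) for k in range(1, n + 1))
```

This gives $1, 1, 3, 13, 75, 541, \ldots$ for $n = 0, 1, \ldots, 5$.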
Chapoton's construction led to the development of the \emph{facial weak order} by Palacios and Ronco \cite{palacios:2006weak}, which is a partial ordering on $\Pi_n$ distinct from the usual refinement order. Under this order, the product of two ordered set partitions is a sum of elements in an interval of the facial weak order. Dermenjian, Hohlweg, and Pilaud \cite{dermenjian:2018facial} proved that the facial weak order on $\Pi_n$ is a lattice for all $n\geq 1$. Furthermore, they showed that any lattice congruence of the weak order may be ``lifted'' to a lattice congruence of the facial weak order. This suggests the following question:
\begin{question}
Does a translational (resp. insertional) family $\mathbf{\Theta}=\{\Theta_n\}_{n\geq 0}$ of lattice congruences of the weak order lift to a family $\hat{\mathbf{\Theta}}=\{\hat{\Theta}_n\}_{n\geq 0}$ of congruences of the facial weak order such that $\Kbb[\Pi_{\infty}/\hat{\mathbf{\Theta}}]$ is a subalgebra (resp. sub-coalgebra) of $\Kbb[\Pi_{\infty}]$?
\end{question}
\section*{Acknowledgements}
The second author was supported by NSF/DMS-1440140 while in residence at the Mathematical Sciences Research Institute in Fall 2017.
We thank Vincent Pilaud and Ricky Ini Liu for helpful suggestions.
\section{Approach rationale}
Safe Interactive Model-Based Learning (SiMBL) aims to control a deterministic dynamical system:
\begin{equation} \label{eq:ode}
x(t+1) = x(t)+ dt\ f(x(t), u(t)),\quad y(t) = x(t),
\end{equation}
where $x$ is the state and $y$ is the measurement, here assumed equal to the state. The system (\ref{eq:ode}) is sampled with a known, constant sampling time $dt$ and is subject to closed and bounded, possibly non-convex, operational constraints on the state and input:
\begin{eqnarray}
x(t)\in\mathbb{X}\subseteq \mathbb{R}^{n_x}, \
u(t)\in\mathbb{U}\subset \mathbb{R}^{n_u}, \quad \forall t>0. \label{eq:constraints}
\end{eqnarray}
The stability of (\ref{eq:ode}) is studied using discrete time systems analysis. In particular, tools from discrete-time \emph{control Lyapunov functions} \citep{Blanchini, Khalil_book} will be used to compute policies that can keep the system \emph{safe}.
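As a concrete instance of (\ref{eq:ode}), the following sketch implements the Euler-discretized step for a damped pendulum; the parameter values are illustrative assumptions, not those of the experiments:

```python
import numpy as np

g, l, m, b = 9.81, 1.0, 1.0, 0.1   # gravity, length, mass, damping (hypothetical)
dt = 0.05                          # known, constant sampling time

def f(x, u):
    """Continuous-time dynamics f(x, u); x = (angle, angular velocity)."""
    theta, omega = x
    return np.array([omega, (u - b * omega - m * g * l * np.sin(theta)) / (m * l**2)])

def step(x, u):
    """Discrete-time update x(t+1) = x(t) + dt * f(x(t), u(t)), as in eq. (1)."""
    return x + dt * f(x, u)
```

The origin is an equilibrium for $u=0$, which is the target state used throughout the paper.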
\paragraph{Safety.}
In this work, \emph{safety} is defined as the capability of a system to remain within a subset $\mathbb{X}_s \subseteq \mathbb{X}$ of the operational constraints and to return asymptotically to the equilibrium state from anywhere in $\mathbb{X}_s$. A feedback control policy, $u=K(x)\in\mathbb{U}$, is certified as \emph{safe} if it can provide safety with \emph{high probability}. In this work, safety is verified with a statistical method that extends \cite{bobiti_samplingdriven_nodate}.
\paragraph{Safe learning.} \begin{wrapfigure}{r}{7.5cm}
\centering
\includegraphics[width=0.9\linewidth, trim={0. 0. 0. 0.}, clip]{figures/SiMBL_Diagram_big.pdf}
\caption{{\bf Safe interactive Model-Based Learning (SiMBL) rationale.} Approach is centered around an uncertain RNN forward model for which we compute a safe set and a control policy using principles from robust control. This allows for safe exploration though MPC and iterative refinement of the model and the safe set. An initial safe policy is assumed known. } \label{fig:simbldiagram}
\vspace{-0.6cm}
\end{wrapfigure} The proposed framework aims at learning a policy, $K(x)$, and Lyapunov function, $V(x)$,
by means of simulated trajectories from an uncertain forward model and an initial policy $K_0$,
used to collect data.
The model, the policy, and the Lyapunov function are \emph{iteratively refined} while \emph{safely}
collecting more data
through a \emph{Safe Model Predictive Controller} (Safe-MPC).
Figure~\ref{fig:simbldiagram} illustrates the approach.
\paragraph{Summary of contribution.} This work presents the algorithms for:
1) Iteratively learning a novel Bayesian RNN model with a large posterior over unseen states and inputs;
2) Learning a safe set and the associated controller with
neural networks from the model trajectories;
3) Safe exploration with MPC.
For 1) and 2), we propose to retrain the model from scratch using a consistency prior that incorporates knowledge of the previous uncertainty, and then to recompute the safe set. The growth of the safe set as more data becomes available, and the safety of the exploration strategy, are demonstrated on an inverted pendulum simulation with limited control torque and stability region. Their final integration for continuous model and controller refinement with data from safe exploration (see Figure \ref{fig:simbldiagram}) is left for future work.
\section{The Bayesian recurrent forward model}
A discrete-time stochastic forward model of system \eqref{eq:ode} is formulated as a Bayesian RNN. A grey-box approach is used, where available prior knowledge is integrated into the network in a differentiable way (for instance, the known relation between an observation and its derivative). The model provides an estimate of the next states distribution that is large (up to a defined value) where there is no available data. This is inspired by recent work on Noise Contrastive Priors (NCP) \citep{hafner_reliable_2018}. We extend the NCP approach to RNNs, and propose the Noise Contrastive Prior Bayesian RNN (NCP-BRNN), with full state information, which follows the discrete-time update:
\begin{align} \label{eq:forward}
\hat{x}(t+1) = \hat{x}(t) + dt\ d\hat{x}(t),& \quad
d\hat{x}(t) = \mu\left(\hat{x}(t), u(t); \theta_\mu\right) + \hat{w}(t), \\
\hat{w}(t) \sim q(\hat{x}(t), u(t);\theta_\Sigma), &\quad q(\hat{x}(t), u(t);\theta_\Sigma) = \mathcal{N}\left(0,\ \Sigma\left(\hat{x}(t), u(t); \theta_\Sigma\right)\right), \\
\hat{y}(t) \sim \mathcal{N}\left(\hat{x}(t), \sigma_y^2\right), \label{eq:measure} & \quad
\hat{x}(0) \sim \mathcal{N}(x(0), \sigma_y^2), \\
\Sigma(\cdot) = \sigma_w \text{sigm}(\Sigma_{net}(\cdot)), & \quad \mu\left(\cdot\right)=\text{GreyBox}_{net}(\cdot),
\end{align}
where $\hat{x}(t), \hat{y}(t)$ denote the state and measurement estimated from the model at time $t$, and $d\hat{x}(t)$ is drawn from the modelled distribution, where $\mu$ and $\Sigma$ are computed by neural networks sharing some initial layers. In particular, $\mu$ combines an MLP with a physics prior, while the final activation of $\Sigma$ is a sigmoid which is then scaled by the hyperparameter $\sigma_w$, namely, a finite maximum variance. The next state distribution depends on the current state estimate $\hat{x}$, the input $u$, and a set of \emph{unknown} constant parameters $\theta$, which are to be learned from the data. The estimated state $\hat{x}(t)$ is for simplicity assumed to have the same physical meaning as the true system state $x(t)$. The system state is measured with a Gaussian uncertainty with standard deviation $\sigma_y$, which is also learned from data. During control, the measurement noise is assumed to be negligible ($\sigma_y\approx 0$). Therefore, the control algorithms will need to be robust with respect to the model uncertainty. Extensions to partial state information and output noise robust control are also possible but are left for future work.
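A minimal numerical sketch of the one-step update (\ref{eq:forward}), with toy single-layer stand-ins for $\text{GreyBox}_{net}$ and $\Sigma_{net}$; the weights, sizes, and values below are illustrative assumptions, not the architecture used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, dt, sigma_w = 2, 1, 0.05, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical single-layer stand-ins for GreyBox_net and Sigma_net.
W_mu = rng.normal(scale=0.1, size=(n_x, n_x + n_u))
W_sigma = rng.normal(scale=0.1, size=(n_x, n_x + n_u))

def model_step(x, u):
    """One NCP-BRNN update: x+ = x + dt*(mu(x,u) + w), w ~ N(0, Sigma(x,u)^2)."""
    z = np.concatenate([x, u])
    mu = W_mu @ z                              # mean network
    Sigma = sigma_w * sigmoid(W_sigma @ z)     # diagonal std. dev., bounded by sigma_w
    w = rng.normal(0.0, Sigma)
    return x + dt * (mu + w), Sigma
```

The sigmoid output activation guarantees that the predicted standard deviation stays strictly between $0$ and $\sigma_w$ everywhere, including on unseen inputs.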
\paragraph{Towards reliable uncertainty estimates with RNNs}
The \emph{fundamental assumption} for model-based safe learning algorithms is that the model predictions \emph{contain} the actual system state transitions with high probability \citep{berkenkamp_safe_2017}. This is difficult to meet in practice for most neural network models. To mitigate this risk, we train our Bayesian RNNs on sequences and include a Noise-Contrastive Prior (NCP) term \citep{hafner_reliable_2018}. In the present work, the uncertainty is modelled as a point-wise Gaussian with mean and standard deviation that depend on the current state as well as on the input. The learned 1-step standard deviation, $\Sigma$, is assumed to be a diagonal matrix. This assumption is limiting but it is common in variational neural networks for practicality reasons \citep{zhao_infovae_2017, chen_variational_2016}. The NCP concept is illustrated in Figure \ref{fig:NCP}. More complex uncertainty representations will be considered in future work.
The cost function used to train the model is:
\begin{align}\label{eq:model_loss}
\mathcal{L}(\theta_\mu,\theta_\Sigma) = & -\frac{1}{T}\sum_{t=0}^T
\mathbf{E}_{p_{\text{train}}(x(0),u(t))}\left[
\mathbf{E}_{q(\hat{x}(t), {u}(t);\theta_\Sigma)}\left[\ln p(\hat{y}(t)|y(t),u(t);\theta_\mu;\theta_\Sigma)\right]\right] \nonumber \\ & + D_{\text{KL}}\left[q\left(\tilde{x}(t), \tilde{u}(t);\theta_\Sigma\right)\ ||\ \mathcal{N}(0,\sigma_w^2)\right] \nonumber \\ & + \mathbf{E}_{p_{\text{train}}(x(0),u(t))}\left[\text{ReLU}\left[\Sigma\left(\hat{x}(t),u(t);\theta_{\Sigma}\right) - \Sigma\left(\hat{x}(t),u(t);\theta_{\Sigma_\text{prev}}\right)\right]\right]
\end{align}
where the first term is the expected negative log likelihood over the uncertainty distribution, evaluated over the training data. The second term is the KL-divergence, which is evaluated in closed form over predictions $\tilde{x}$ generated from a set of background initial states and input sequences, $\tilde{x}(0)$ and $\tilde{u}(t)$. These are sampled from a uniform distribution for the first model and then, once a previous model is available and new data is collected, they are obtained using rejection sampling with PyMC \citep{Salvatier2016} with acceptance condition: $\Sigma(\tilde{x}(0),\tilde{u}(t);\theta_{\Sigma_\text{prev}})\geq 0.5\ \sigma_w$. If a previous model is available, then the final term is used: an \emph{uncertainty consistency prior} that forces the uncertainty estimates over the training data not to increase with respect to the previous model.
The loss (\ref{eq:model_loss}) is optimised using stochastic backpropagation through truncated sequences. In order to have further consistency between model updates, if a previous model is available, we train from scratch but stop optimising once the final loss of the previous model is reached.
\begin{wrapfigure}{l}{5.5cm}
\centering
\vspace{0.cm}
\includegraphics[width=1.05\linewidth]{figures/pendulum/NCP_example.png}
\caption{{\bf Variational neural networks with Noise Contrastive Priors (NCP).} Predicting sine-wave data (red-black) with confidence bounds (blue area) using NAIS-Net \citep{Ciccone2018NAISNetSD} and NCP \citep{hafner_reliable_2018}. }\label{fig:NCP}
\vspace{-1.3cm}
\end{wrapfigure}
\section{The robust control problem} We approximate a chance-constrained stochastic control problem with a min-max robust control problem over a convex uncertainty set. This non-convex min-max control problem is then also approximated by computing the loss only at the vertices of the uncertainty set. To compensate for this approximation (inspired by variational inference), the centre of the set is sampled from the uncertainty distribution itself (Figure \ref{fig:variational_sets}).
The procedure is detailed below.
\paragraph{Lyapunov-Net.} The considered Lyapunov function is:
\begin{equation}\label{eq:lyap1}
V(x) = x^T\left(\epsilon I + V_{net}(x)^T V_{net}(x)\right)x + \psi(x),
\end{equation}
where $V_{net}(x)$ is a feedforward network that produces a $n_V\times n_x$ matrix, where $n_V$ and $\epsilon>0$ are hyperparameters. The network parameters have to be trained and they are omitted from the notation. The term $\psi(x)\geq 0$ represents the prior knowledge of the state constraints. In this work we use:
\begin{equation}
\psi(x) = \text{ReLU}(\phi(x)-1),
\end{equation}
where $\phi(x)\geq 0$ is the Minkowski functional\footnote{Minkowski functionals measure the distance from the set center and they are positive definite.} of a user-defined \emph{usual region of operation}, namely:
\begin{equation}
\mathbb{X}_\phi=\{x \in \mathbb{X}: \phi(x)\leq 1\}.
\end{equation}
Possible choices for the Minkowski functional include quadratic functions, norms or semi-norms \citep{Blanchini,Horn:2012:MA:2422911}.
Since $V(x)$ must be positive definite, the hyperparameter $\epsilon>0$ is introduced~\footnote{The trainable part of the function $V(x)$ is chosen to be piece-wise quadratic but this is not the only possible choice. In fact one can use any positive definite and radially unbounded function. For the same problem multiple Lyapunov functions can exist. See also \cite{Blanchini}.}. While other forms are possible as in \cite{Blanchini}, with \eqref{eq:lyap1} the activation function does not need to be invertible. The study of the generality of the proposed function is left for future consideration.
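The following sketch illustrates the structure of (\ref{eq:lyap1}), with a fixed stand-in matrix for the output of $V_{net}(x)$ and a quadratic Minkowski functional for the usual region of operation; sizes, radius, and weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_V, eps, r = 2, 3, 1e-3, 1.5   # sizes, positivity margin, region radius (toy)
A = rng.normal(size=(n_V, n_x))      # stand-in for the n_V x n_x output of V_net(x)

def phi(x):
    """Quadratic Minkowski functional of the ball ||x|| <= r."""
    return float(x @ x) / r**2

def V(x):
    P = eps * np.eye(n_x) + A.T @ A       # positive definite by construction
    psi = max(phi(x) - 1.0, 0.0)          # ReLU prior penalising constraint violation
    return float(x @ P @ x) + psi
```

By construction $V(0)=0$ and $V(x)>0$ for $x\neq 0$, with the $\psi$ term activating only outside $\mathbb{X}_\phi$.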
\paragraph{Safe set definition.}
Denote the candidate \emph{safe level set} of $V$ as:
\begin{equation}
\mathbb{X}_s = \{x \in \mathbb{X}: V(x)\leq l_s\},
\end{equation}
where $l_s$ is the safe level. If, for $x \in \mathbb{X}_s$, the function $V(x)$ satisfies the Lyapunov inequality over the system closed-loop trajectory with a control policy $K$, namely,
\begin{equation}\label{eq:lyap2}
u(t)=K\left(x\left(t\right)\right)\Rightarrow V(x(t+1))-V(x(t))\leq 0,\ \forall x(t)\in \mathbb{X}_s,
\end{equation}
then set $\mathbb{X}_s$ is \emph{safe}, i.e., it satisfies the conditions of \emph{positive-invariance} \citep{Blanchini,Kerrigan:2000}. Note that the policy $K(x)$ can be either a neural network or a model-based controller, for instance a Linear Quadratic Regulator (LQR, see \cite{KalmanLQR}) or a Model Predictive Controller (MPC, see \cite{Maciejowski_book,rawlingsMPC,Cannon_book,Rakovic2019}). A stronger condition to eq. (\ref{eq:lyap2}) is often used in the context of optimal control:
\begin{equation}\label{eq:lyap2_lqr}
u(t)=K\left(x\left(t\right)\right)\Rightarrow V(x(t+1))-V(x(t))\leq -\ell(x(t),u(t)),\ \forall x(t)\in \mathbb{X}_s
\end{equation}
where $\ell(x(t),u(t))$ is a positive semi-definite stage loss.
In this paper, we focus on training policies with the quadratic loss used in LQR and MPC, where the origin is the target equilibrium, namely:
\begin{equation}
\ell(x,u)= x^T Q x + u^T R u, \quad Q\succeq 0,\ R\succ 0.
\end{equation}
\paragraph{From chance constrained to min-max control}
Consider the problem of finding a controller $K$ and a function $V$ such that $u(t)=K\left(x\left(t\right)\right)$ and:
\begin{equation}
\label{eq:stochastic}
\mathcal{P}\big[V(\hat{x}(t+1))-V(x(t))\leq -\ell(x(t),u(t))\big]\geq 1-\epsilon_p,
\end{equation}
where $\mathcal{P}$ represents a probability and $0<\epsilon_p\ll 1$. This is a \emph{chance-constrained} non-convex optimal control problem \citep{Cannon_book}. We truncate the distributions and approximate (\ref{eq:stochastic}):
\begin{equation}
\max_{\hat{x}(t+1) \in\ \mathbb{W}(x(t),u(t),\theta)}\big[V\left(\hat{x}(t+1)\right)\big]-V(x(t)) \leq - \ell\left(x(t), K\left(x(t)\right)\right), \label{eq:minmax_def}
\end{equation}
which is deterministic. A strategy to jointly learn $(V,\ K)$ fulfilling (\ref{eq:minmax_def}) is presented next.
\section{Learning the controller and the safe set}\label{sec:safeset}
We wish to build a controller $K$, a function $V$, and a safe level $l_s$ given the state transition probability model, $(\mu,\Sigma)$, such that the condition in (\ref{eq:lyap2_lqr}) is satisfied with high probability for the physical system generating the data.
Denote the one-step prediction from the model in (\ref{eq:forward}), in closed loop with $K$, as:
$$\hat{x}^{+} = x + dt\ d\hat{x},\text{ with }u=K(x),$$
where $\hat{x}^{+}$ represents the next state prediction and the time index $t$ is omitted.
\paragraph{Approximating the high-confidence prediction set.} A polytopic approximation of a \emph{high confidence region} of the estimated uncertain set $\hat{x}^{+} \in \mathbb{W}(x,u,\theta)$ is obtained from the parameters of $\Sigma$ and used for training $(V,\ K)$. In this work, the uncertain set is taken as a hyper-diamond centered at the mean prediction $x + dt\ \mu$, scaled by the (diagonal) standard deviation matrix, $\Sigma$:
\begin{equation}
\mathbb{W}_1(x,u,\theta)=\left\{x^{+}: x^{+}=x + dt\ d\hat{x}, \ \left\| (\Sigma(x,u;\theta_\Sigma))^{-1}\hat{w}\right\|_1\leq \bar{\sigma} \right\},
\end{equation}
where $\bar{\sigma}>0$ is a hyper-parameter. This choice of set is inspired by the Unscented Kalman filter \citep{wan2000}.
Since $\Sigma$ is diagonal, the vertices are given by the columns of the matrix resulting from multiplying $\Sigma$ with a mask $M$ such that:
\begin{align}
\text{vert}[\mathbb{W}_1(x,u,\theta)]= \text{cols}[\Sigma(x,u;\theta_\Sigma)\ M], \quad
M= \bar{\sigma}\ [I,\ -I].
\end{align}
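A minimal sketch of this vertex enumeration, assuming the diagonal of $\Sigma$ is given as a vector (names are illustrative):

```python
import numpy as np

def w1_vertices(x, mu, sigma_diag, dt, sigma_bar):
    """Vertices of W1: next-state candidates x + dt*(mu + w), w a column of Sigma*M."""
    n = len(x)
    M = sigma_bar * np.hstack([np.eye(n), -np.eye(n)])   # mask M = sigma_bar*[I, -I]
    W = np.diag(sigma_diag) @ M                          # vertex noise realisations
    return x[:, None] + dt * (mu[:, None] + W)           # shape (n, 2n)
```

Each state coordinate is perturbed by at most $dt\ \bar{\sigma}\ \Sigma_{ii}$ in either direction, giving $2 n_x$ vertices.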
\paragraph{Learning the safe set.} Assume that a controller $K$ is given. Then, we wish to learn a $V$ of the form of (\ref{eq:lyap1}), such that the corresponding safe set $\mathbb{X}_s$ is as big as possible, ideally as big as the state constraints $\mathbb{X}$. In order to do so, the parameters of $V_{net}$ are trained using a grid of initial states, a forward model to simulate the next state under the policy $K$, and an appropriate cost function. The cost for $V_{net}$ and $l_s$ is inspired by \citep{richards_lyapunov_2018}. It consists of a combination of two objectives: the first one penalises the deviation from the Lyapunov stability condition; the second one is a classification penalty that separates the stable points from the unstable ones by means of the decision boundary, $V(x)=l_s$. The combined robust Lyapunov function cost is:
\begin{equation}\label{eq:lyapunov_loss}
\min_{V_{net},\ l_s} \mathbf{E}_{[x(t)\in\mathbb{X}_{grid}]} \big[J\left(x\left(t\right)\right)\big],
\end{equation}
\begin{align}\label{eq:cost}
J(x) &= \mathcal{I}_{\mathbb{X}_s}(x)\ J_{s}(x)+ \text{sign}\big[\nabla V(x)\big]\
\left[l_s - V(x) \right],
\end{align}
\begin{align}
\mathcal{I}_{\mathbb{X}_s}(x) &= 0.5 \left(\text{sign}\left[l_s - V(x) \right]+1\right), \quad
J_{s}(x) = \frac{1}{\rho V(x)}\ \text{ReLU}\left[ \nabla V(x) \right], \\
\nabla V(x) &= \max_{\hat{x}^{+} \in\ \mathbb{W}(x,K\left(x\right),\theta)}\big[V\left(\hat{x}^{+}\right)\big]-V(x) + \ell\left(x, K\left(x\right)\right), \label{eq:minmax}
\end{align}
where $\rho>0$ trades off stability for volume. The robust Lyapunov decrease in (\ref{eq:minmax}) is evaluated by using sampling to account for uncertainty over the confidence interval $\mathbb{W}$. The centre of the set is sampled, as opposed to simply setting $\mathbb{W}=\mathbb{W}_1$, which did not seem to produce valid results. Let us omit $\theta$ for ease of notation. We substitute $\nabla V(x)$ with $\mathbf{E}_{\mathbb{W}}\big[\nabla V(x)\big]$, which we define as:
\begin{align}
\mathbf{E}_{\hat{w} \sim \mathcal{N}\left(0,\ \Sigma\left(x, K(x)\right)\right)}\left\{\max_{\hat{x}^{+} \in\ \mathbb{W}_1(x,K\left(x\right),\theta) + \hat{w}dt}\big[V\left(\hat{x}^{+}\right)\big]\right\}-V(x) + \ell\left(x, K\left(x\right)\right). \label{eq:expected:minmax}
\end{align}
Equations (\ref{eq:minmax}) and (\ref{eq:expected:minmax}) require a maximisation of the non-convex function $V(x)$ over the convex set $\mathbb{W}$. For the considered case, a sampling technique or another optimisation (similar to adversarial learning) could be used for a better approximation of the max operator. The maximum over $\mathbb{W}$ is instead approximated by the maximum over its vertices:
\begin{equation}
\nabla V(x) \approx \max_{\hat{x}^{+} \in\ \text{vert}[\mathbb{W}_1(x,K\left(x\right),\theta)] + \hat{w}dt}\big[V\left(\hat{x}^{+}\right)\big]-V(x) + \ell\left(x, K\left(x\right)\right). \label{eq:minmax_approx}
\end{equation}
This consists of a simple enumeration followed by a max over tensors that can be easily handled. Finally, during training (\ref{eq:expected:minmax}) is implemented in a variational inference fashion by evaluating (\ref{eq:minmax_approx}) at each epoch over a different sample of $\hat{w}$. This entails a variational posterior over the center of the uncertainty interval. The approach is depicted in Figure \ref{fig:variational_sets}.
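Putting the pieces together, the following toy sketch evaluates the sampled-centre vertex approximation (\ref{eq:minmax_approx}); the dynamics, policy, and Lyapunov candidate below are placeholders for the learned networks, and all values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, sigma_bar, n_x = 0.05, 2.0, 2
Q, R = np.eye(n_x), 0.1 * np.eye(1)

V = lambda x: float(x @ x)                      # toy Lyapunov candidate
K = lambda x: -np.array([x.sum()])              # toy policy
mu = lambda x, u: -x + np.array([0.0, u[0]])    # toy mean dynamics
Sigma = lambda x, u: 0.05 * np.ones(n_x)        # toy diagonal std. dev.

def robust_decrease(x):
    """Sampled-centre vertex approximation of nabla V(x)."""
    u = K(x)
    s = Sigma(x, u)
    w_hat = rng.normal(0.0, s)                              # sampled set centre
    M = sigma_bar * np.hstack([np.eye(n_x), -np.eye(n_x)])
    verts = x[:, None] + dt * ((mu(x, u) + w_hat)[:, None] + np.diag(s) @ M)
    stage = float(x @ Q @ x + u @ R @ u)
    return max(V(v) for v in verts.T) - V(x) + stage
```

During training, this quantity is recomputed at each epoch with a fresh draw of $\hat{w}$, which realises the variational posterior over the centre of the uncertainty interval.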
The proposed cost is inspired by \cite{richards_lyapunov_2018}, with the difference that here there is no need for \emph{labelling} the states as safe by means of a multi-step simulation. Moreover, in this work we train the Lyapunov function and controller \emph{together}, while in \citep{richards_lyapunov_2018} the latter was given.
\paragraph{Learning the safe policy.}
We alternate the minimisation of the Lyapunov loss (\ref{eq:lyapunov_loss}) and the solution of the \emph{variational robust control problem}:
\begin{align}\label{eq:policy_cost}
\min_{u=K(x)}\mathbf{E}_{[x\in\mathbb{X}_{grid}]} \left[\mathcal{I}_{\mathbb{X}_s}(x) L_c(x,u)\right], \quad \text{s.t.: } K(0)=0,
\end{align}
\begin{align}
L_c(x,u) = \ell(x,u) + &\mathbf{E}_{\mathbb{W}}\left\{\max_{\hat{x}^{+} \in\ \mathbb{W}(x,u,\theta)} \left[V(\hat{x}^{+}) - \gamma \log(l_s-V(\hat{x}^{+}))\right]\right\}, \label{eq:robust_control_loss}
\end{align}
\begin{wrapfigure}{l}{5.5cm}
\vspace{-0.4cm}
\centering
\includegraphics[trim={120 330 150 270}, clip, scale=0.55]{figures/rendering_3.pdf}
\caption{{\bf Approximating the non-convex maximisation.} Centre of the uncertain set is sampled and Lyapunov function is evaluated at its vertices. }
\label{fig:variational_sets}
\vspace{-0.3cm}
\end{wrapfigure}
subject to the forward model (\ref{eq:forward}). In this work, (\ref{eq:policy_cost}) is solved using backpropagation through the policy, the model and $V$. The safety constraint, $\hat{x}^{+}\in\mathbb{X}_s$, namely, $V(\hat{x}^{+})\leq l_s$ is relaxed through a log-barrier \citep{Boyd:2004:CO:993483}. If a neural policy $K(x)$ solves (\ref{eq:policy_cost}) and satisfies the safety constraint, $\forall x\in\mathbb{X}_s$, then it is a candidate robust controller for keeping the system within the safe set $\mathbb{X}_s$. Note that the expectation in (\ref{eq:robust_control_loss}) is once again treated as a variational approximation of the expectation over the center of the uncertainty interval.
Obtaining an exact solution to the control problem for all points is computationally impractical. In order to provide statistical guarantees of safety, probabilistic verification is used after $V$ and $K$ have been trained. This refines the safe level set $l_s$ and, if successful, provides a probabilistic safety certificate. If the verification is unsuccessful, then the learned $(\mathbb{X}_s,\ K)$ are not safe. The data collection continues with the previous safe controller until suitable $V$, $l_s$, and $K$ are found. Note that the number of training points used for the safe set and controller is in general lower than the ones used for verification. The alternate learning procedure for $\mathbb{X}_s$ and $K$ is summarised in Algorithm \ref{alg:alternateDescent}. The use of 1-step predictions makes the procedure highly scalable through parallelisation on GPU.
\begin{wrapfigure}{r}{6.5cm}
\noindent\begin{minipage}{0.45\columnwidth}
\vspace{-0.1cm}
\begin{algorithm}[H]
\DontPrintSemicolon
\KwInput{$K_0$, $\mathbb{X}_{\text{grid}}$, $\theta_\mu$, $\theta_\Sigma$, $\sigma_w\geq0$, $\bar{\sigma}>0$, $\epsilon>0$}
\KwOutput{$(V_{net},\ l_s,\ K)$}
\small
\caption{Alternate descent for safe set}
\label{alg:alternateDescent}
\For{$i=0...N$}{
\For{$j=0...N_v$}{
$(V_{net},\ l_s)\leftarrow$ Adam step on (\ref{eq:cost}) \;
}
\For{$j=0...N_k$}{
$K \leftarrow$ Adam step on (\ref{eq:policy_cost}) \;
}
}
\end{algorithm}
\end{minipage}
\vspace{-0.4cm}
\end{wrapfigure}
\paragraph{Probabilistic safety verification.}
A probabilistic verification is used to numerically prove the physical system stability with high probability. The resulting certificate is of the form (\ref{eq:stochastic}), where the $\epsilon_p$ decreases with increasing number of samples. Following the work of \cite{bobiti_samplingdriven_nodate}, the simulation is evaluated at a large set of points within the estimated safe set $\mathbb{X}_s$. Monte Carlo rejection sampling is performed with PyMC \citep{Salvatier2016}.
\begin{wrapfigure}{r}{7.5cm}
\vspace{-0.4cm}
\noindent\begin{minipage}{0.53\columnwidth}
\begin{algorithm}[H]
\DontPrintSemicolon
\KwInput{$N$, $V$, $K$, $\theta_\mu$, $\theta_\Sigma$, $\sigma_w\geq0$, $\bar{\sigma}>0$, $\delta>0$}
\KwOutput{$(\text{SAFE},\ l_u,\ l_l)$}
\small
\caption{Probabilistic safety verification}
\label{alg:verification}
$\text{SAFE}\leftarrow \text{False}$ \;
\For{$l_u = 1,1-\delta,1-2\delta,...,0$ }{
\For{$l_l = 0, \delta, 2\delta,...,l_u$}{
draw $N$ uniform $x$-samples s.t.:\\\qquad $l_l\ l_s\leq V(x)\leq l_u\ l_s$ \;
draw $N$ $w$-samples from $\mathcal{U}(-\bar{\sigma},\bar{\sigma})$ \;
\If{$V(\hat{x}^{+})-V(x)\leq 0,\forall x,\forall w$}{
draw $N$ uniform $x$-samples s.t.:\\\qquad $V(x)\leq l_l\ l_s$ \;
\If{$V(\hat{x}^{+})\leq l_u\ l_s,\forall x,\forall w$}{
$\text{SAFE}\leftarrow \text{True}$ \;
\textbf{return} \text{SAFE}, $l_u$, $l_l$\;
}
}
}
}
Verification failed.\;
\end{algorithm}
\end{minipage}
\vspace{-0.5cm}
\end{wrapfigure}
In practical applications, several factors limit the convergence of the trajectory to a neighborhood of the target (the \emph{ultimate bound}, \cite{Blanchini}): for instance, structural bias in the policy, discount factors in RL methods, or persistent uncertainty in the model, the state estimates, and the physical system itself. Therefore, we extended the verification algorithm of \citep{bobiti_samplingdriven_nodate} to estimate the ultimate bound as well as the invariant set, as outlined in Algorithm \ref{alg:verification}. Given a maximum and minimum level, $l_l$, $l_u$, we first sample initial states uniformly within these two levels and check for a robust decrease of $V$ over the next state distribution. If this is verified, then we sample uniformly from inside the minimum level set $l_l$ (where $V$ may not decrease) and check that $V$ does not exceed the maximum level $l_u$ over the next state distribution. The distribution is evaluated by means of uniform samples of $w$, independent of the current state, within $(-\bar{\sigma},\bar{\sigma})$. These are then scaled using $\Sigma$ from the model. We search for $l_l$, $l_u$ with a step $\delta$.
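A simplified sketch of the level search in Algorithm \ref{alg:verification}; in this toy version the $w$-samples are folded into a deterministic step function, and all names and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def verify(V, step, l_s, n=2000, delta=0.25, x_max=2.0, n_x=2):
    """Search for levels (l_u, l_l) such that V decreases on the annulus
    l_l*l_s <= V <= l_u*l_s and {V <= l_l*l_s} maps into {V <= l_u*l_s}."""
    for l_u in np.arange(1.0, 0.0, -delta):
        for l_l in np.arange(0.0, l_u + 1e-9, delta):
            X = rng.uniform(-x_max, x_max, size=(n, n_x))
            annulus = [x for x in X if l_l * l_s <= V(x) <= l_u * l_s]
            if all(V(step(x)) - V(x) <= 0.0 for x in annulus):
                inner = [x for x in X if V(x) <= l_l * l_s]
                if all(V(step(x)) <= l_u * l_s for x in inner):
                    return True, float(l_u), float(l_l)
    return False, None, None
```

For a contractive toy system such as $x^{+}=0.9x$ with $V(x)=\|x\|^2$, the search succeeds at the outermost level.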
Note that, in Algorithm \ref{alg:verification}, the uncertainty of the surrogate model is taken into account by sampling a single uncertainty realisation for the entire set of initial states. The values of $w$ are then scaled using $\Sigma$ in the forward model. This step is computationally convenient but breaks the assumption that variables are drawn from a uniform distribution; we leave addressing this to future work. In this paper, independent Gaussian uncertainty models are used and stability is verified directly on the environment. Note that probabilistic verification is expensive but necessary, as pathological cases could result in the training loss (\ref{eq:lyapunov_loss}) for the safe set converging to a local minimum with a very small set. If this is the case, then usually the forward model is not accurate enough or the uncertainty hyperparameter $\sigma_w$ is too large. Note that Algorithm \ref{alg:verification} is highly parallelizable.
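As a concrete illustration, the two-level search of Algorithm \ref{alg:verification} can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: the names \texttt{step}, \texttt{V}, and \texttt{sample\_x} are illustrative placeholders, and a toy contractive system stands in for the environment.

```python
import numpy as np

def verify_levels(step, V, sample_x, N=200, l_s=1.0, delta=0.25,
                  sigma_bar=0.05, seed=0):
    """Two-level sampling verification (sketch of the verification algorithm).

    step(x, w): next states for states x and uncertainty samples w.
    V: candidate Lyapunov function, vectorised over states.
    sample_x(lo, hi, n): n uniform states with lo <= V(x) <= hi.
    """
    rng = np.random.default_rng(seed)
    for l_u in np.arange(1.0, -delta / 2, -delta):          # max level, descending
        for l_l in np.arange(0.0, l_u + delta / 2, delta):  # min level, ascending
            # ring l_l*l_s <= V(x) <= l_u*l_s: require robust decrease of V
            xs = sample_x(l_l * l_s, l_u * l_s, N)
            ws = rng.uniform(-sigma_bar, sigma_bar, size=xs.shape)
            if np.all(V(step(xs, ws)) - V(xs) <= 0):
                # inner set V(x) <= l_l*l_s: V must stay below l_u*l_s
                xs_in = sample_x(0.0, l_l * l_s, N)
                ws_in = rng.uniform(-sigma_bar, sigma_bar, size=xs_in.shape)
                if np.all(V(step(xs_in, ws_in)) <= l_u * l_s):
                    return True, l_u, l_l                   # SAFE with these levels
    return False, None, None
```

For a contractive toy system such as $x^+ = 0.5x + w$ with $V(x)=x^2$, the search terminates at the first maximum level with a small ultimate bound.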
\section{Safe exploration}\label{sec:exploration}
Once a verified safe set is found, the environment can be controlled by means of a 1-step MPC with probabilistic stability (see Appendix). Consider the constraint $V(x)\leq l^{\star}_s=l_u\ l_s$, where $V$ and $l_s$ come from Algorithm \ref{alg:alternateDescent} and $l_u$ from Algorithm \ref{alg:verification}. The Safe-MPC exploration strategy is as follows:
\paragraph{Safe-MPC for exploration.} For collecting new data, solve the following MPC problem:
\begin{eqnarray}\label{eq:MPCexploration1}
u^\star= \arg\min_{u\in\mathbb{U}}\bigg\{ \beta\ell(x,u) -\alpha\ \ell_{expl}(x,u) + \max_{\hat{x}^{+} \in\ \mathbb{W}(x,u,\theta)}\left[\beta V(\hat{x}^{+}) - \gamma \log(l^{\star}_s-V(\hat{x}^{+}))\right]\bigg\},
\end{eqnarray}
where $\alpha\leq\gamma$ is the \emph{exploration} hyperparameter, $\beta\in[0,1]$ is the \emph{regulation} or \emph{exploitation} parameter and $\ell_{expl}(x,u)$ is the info-gain from the model, similar to \citep{hafner_reliable_2018}:
\begin{equation}
\ell_{expl}(x,u) = \sum_{i=1,\dots, N_x}\frac{\left(\Sigma_{ii}\left({x}, u, p, \theta\right)\right)^2}{N_x\ \sigma_{y_{ii}}^2}.
\end{equation}
The full derivation of the problem and a probabilistic safety result are discussed in Appendix.
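For a diagonal $\Sigma$, the info-gain term reduces to an average of normalised predictive variances. A minimal NumPy sketch, with illustrative argument names:

```python
import numpy as np

def info_gain(Sigma_diag, sigma_y):
    """l_expl for diagonal Sigma: sum_i Sigma_ii^2 / (N_x * sigma_{y,ii}^2),
    i.e. the squared predictive std-devs normalised by the output noise."""
    n_x = len(Sigma_diag)
    return float(np.sum(Sigma_diag**2 / (n_x * sigma_y**2)))
```

Larger predictive uncertainty yields a larger info-gain, steering the exploration term of (\ref{eq:MPCexploration1}) towards poorly modelled regions.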
\paragraph{Alternate min-max optimization.}
Problem (\ref{eq:MPCexploration1}) is approximated using alternate descent. In particular, the maximization in the loss function over the uncertain future state $\hat{x}^{+}$ with respect to $\hat{w}$, given the current control candidate $u$, is alternated with the minimization with respect to $u$, given the current candidate $\hat{x}^{+}$. Adam \citep{kingma_adam:_2014} is used for both steps.
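The alternation can be illustrated on a scalar toy problem. The sketch below uses plain gradient steps with finite-difference gradients instead of Adam and backpropagation, and all names are illustrative; the adversarial variable $w$ is kept in its box by projection.

```python
import numpy as np

def alternate_minmax(loss, u0, w0, w_bound, steps=200, lr=0.05, eps=1e-5):
    """Approximate min_u max_w loss(u, w) by alternating an ascent step on w
    (projected onto [-w_bound, w_bound]) with a descent step on u."""
    u, w = float(u0), float(w0)
    for _ in range(steps):
        # maximisation over the uncertain variable, u fixed
        gw = (loss(u, w + eps) - loss(u, w - eps)) / (2 * eps)
        w = float(np.clip(w + lr * gw, -w_bound, w_bound))
        # minimisation over the control, w fixed
        gu = (loss(u + eps, w) - loss(u - eps, w)) / (2 * eps)
        u -= lr * gu
    return u, w

# toy problem: drive the next state x + u + w of x = 1 towards zero
loss = lambda u, w: (1.0 + u + w) ** 2
u_star, w_star = alternate_minmax(loss, 0.0, 0.0, w_bound=0.1)
```

The control settles so that the worst admissible $w$ can no longer increase the loss, mirroring the inner maximisation over $\hat{x}^{+}$ in (\ref{eq:MPCexploration1}).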
\section{Inverted pendulum example}
The approach is demonstrated on an inverted pendulum, where the input is the angular torque and the states/outputs are the angular position and velocity of the pendulum. The aim is to collect data safely around the unstable equilibrium point (the origin). The system has a torque constraint that limits the controllable region. In particular, if the initial angle is greater than 60 degrees, then the torque is not sufficient to swing up the pendulum. In order to compare to the LQR, we choose a linear policy with a $\tanh$ activation, meeting the torque constraints while preserving differentiability.
\paragraph{Safe set with known environment, comparison to LQR.}
We first test the safe-net algorithm on the nominal pendulum model and compare the policy and the safe set with those from a standard LQR policy. Figure \ref{fig:pendulum_lyap_nom} shows the safe set at different stages of the algorithm, approaching the LQR.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-1.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=1$\newline $K=[-10, -0.05]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-30.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=30$\newline $K=[-9.24,\ -1.56]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-50.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=50$\newline $K=[-8.49,\ -2.25]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-60.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=60$\newline $K=[-8.1, -2.7]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 20}, clip, scale=0.245]{figures/pendulum/nom_LQR.png}
\caption*{\centering{\hspace*{1.5em} \tiny LQR\newline $K=[-7.26,\ -2.55]$}}
\end{subfigure}
\caption{{\bf Inverted Pendulum. Safe set and controller with proposed method for known environment model.} The initial set ($i=0$) is based on a unit circle plus the constraint $|\alpha|\leq 0.3$. Contours show the function levels. The control gain gets closer to the LQR solution as iterations progress until circa $i=50$, where the minimum of the Lyapunov loss (\ref{eq:lyapunov_loss}) is achieved. The set and controller at iteration $50$ are closest to the LQR solution, which is optimal around the equilibrium in the unconstrained case. In order to maximise the chances of verification, the optimal parameters are selected with early stopping, namely when the Lyapunov loss reaches its minimum, resulting in $K=[-8.52,\ -2.2]$. }
\label{fig:pendulum_lyap_nom}
\end{figure}
\vspace{-0.5cm}
\paragraph{Safe set with Bayesian model.} In order to test the proposed algorithms, the forward model is fitted on sequences of length $10$ for increasing amounts of data ($10k$ to $100k$ points). Data is collected in closed loop with the initial controller, $K_0=[-10,\ 0]$, with different initial states. In particular, we perturb the initial state and control values with random noise with standard deviations starting from, respectively, $0.1$ and $0.01$ and doubling every $10k$ points. The only prior used is that the velocity is the derivative of the angular position (normalized to $2\pi$ and $\pi$). The uncertainty bound was fixed to $\sigma_w=0.01$. The architecture was cross-validated from $60k$ datapoints with a $70$-$30$ split. The model with the best validation predictions as well as the largest safe set was used to generate the results in Figure \ref{fig:pendulum_lyap}.
\begin{wrapfigure}{l}{7.5cm}
\vspace{-0.3cm}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[trim={52 35 0 0}, clip, scale=0.25]{figures/pendulum/verification_samples_nominal.png}
\caption{\scriptsize environment}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[trim={52 35 0 0}, clip, scale=0.25]{figures/pendulum/verification_samples_80k.png}
\caption{\scriptsize NCP-BRNN ($80k$ points).}
\end{subfigure}
\caption{{\bf Inverted pendulum verification}. Nominal and robust safe sets are verified on the pendulum simulation using $5k$ samples. We search for the largest stability region and the smallest ultimate bound of the solution. If a simulation is not available, then a two-level sampling on BRNN is performed.}
\label{fig:verification}
\vspace{-0.9cm}
\end{wrapfigure}
The results demonstrate that the size of the safe set can improve with more data, provided that the model uncertainty decreases and the predictions have comparable accuracy. This motivates exploration.
\vspace{-0.1cm}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.19\textwidth}
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_10k.png}\vspace{-0.1cm}
\caption*{$10k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_30k.png}\vspace{-0.1cm}
\caption*{$30k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_50k.png}\vspace{-0.1cm}
\caption*{$50k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_70k.png}\vspace{-0.1cm}
\caption*{$70k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 50 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_90k.png}\vspace{-0.1cm}
\caption*{$90k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.1965\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_10k.png}\vspace{-0.1cm}
\caption*{$10k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_30k.png}\vspace{-0.1cm}
\caption*{$30k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_50k.png}\vspace{-0.1cm}
\caption*{$50k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_80k.png}\vspace{-0.1cm}
\caption*{$80k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_nominal.png}\vspace{-0.1cm}
\caption*{environment}
\end{subfigure}
\caption{{\bf Inverted pendulum safe set with Bayesian model.} Surrogates are obtained with increasing amounts of data. The initial state and input perturbations from the safe policy are drawn from Gaussians with standard deviations that double every $10k$ points. {\bf Top}: Mean predictions and uncertainty contours for the NCP-BRNN model. After $90k$ points no further improvement is noticed. {\bf Bottom}: Comparison of safe sets with surrogates and environment. Reducing the model uncertainty while maintaining a similar prediction accuracy leads to an increase in the safe set. After $90k$ points no further benefits are noticed on the set, which is consistent with the uncertainty estimates. }
\label{fig:pendulum_lyap}
\end{figure}
\paragraph{Verification on the environment.}
The candidate Lyapunov function, safe level set, and robust control policy are formally verified through probabilistic sampling of the system state, according to Algorithm \ref{alg:verification}, where the simulation is used directly. The results for $5k$ samples are shown in Figure \ref{fig:verification}. In particular, the computed level sets verify at the first attempt and no further search for sub-levels or ultimate bounds is needed.
\begin{wrapfigure}{r}{7.5cm}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[trim={40 23.5 0 20}, clip, scale=0.3]{figures/pendulum/random_input.png}
\caption*{\centering{\hspace*{1em} \scriptsize Semi-random exploration\newline \hspace*{2em} $50$ trials of $1k$ steps\newline $\text{vol}=0.06$}}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[trim={44.5 26 0 23.5}, clip, scale=0.3]{figures/pendulum/exploration.png}
\caption*{\centering{\hspace*{1em} \scriptsize Safe-MPC exploration\newline \hspace*{2em} $1$ trial of $50k$ steps\newline $\text{vol}=0.04$}}
\end{subfigure}
\caption{{\bf Safe exploration}. Comparison of a naive semi-random exploration strategy with the proposed Safe-MPC for exploration. The proposed algorithm has an efficient space coverage with safety guarantees. }
\label{fig:pendulum_explore}
\end{wrapfigure}
\paragraph{Safe exploration.}
Safe exploration is performed using the min-max approach in Section \ref{sec:exploration}. For comparison, a semi-random exploration strategy is also used: while inside the safe set, the action magnitude is set to the maximum torque with its sign drawn uniformly at random; once $V(x)\geq 0.99\ l_s$, the safe policy $K$ is used instead. This does not provide any formal guarantee of safety, as the value of $V(x)$ could exceed the safe level, especially for very fast systems and large input signals. This is repeated for several trials in order to estimate the maximum reachable set within the safe set. The results are shown in Figure \ref{fig:pendulum_explore}, where the semi-random strategy is used as a baseline and is compared to a single trial of the proposed safe-exploration algorithm. The area covered by our algorithm in a \emph{single trial} of $50k$ steps is about $67\%$ of that of the semi-random baseline over $50$ trials of $1k$ steps. Extending the length of the trials did not significantly improve the baseline results. Despite being more conservative, our algorithm continues to explore safely indefinitely.
\section{Conclusions}
Preliminary results show that SiMBL produces a Lyapunov function and a safe set, represented by neural networks, that are comparable with those of standard optimal control (LQR) and can account for state-dependent additive model uncertainty. A Bayesian RNN surrogate with NCP was proposed and trained for an inverted pendulum simulation. An alternate descent method was presented to jointly learn a Lyapunov function, a safe level set, and a stabilising control policy for the surrogate model with back-propagation. We demonstrated that adding data points to the training set can increase the safe-set size, provided that the model improves and its uncertainty decreases. To this end, an uncertainty prior from the previous model was added to the framework. The safe set was then formally verified through a novel probabilistic algorithm for ultimate bounds and used for safe data collection (exploration). A one-step safe MPC was proposed where the Lyapunov function provides the terminal cost and constraint to mimic an infinite horizon with high probability of recursive feasibility. Results show that the proposed safe-exploration strategy has better coverage than a naive policy which switches between random inputs and the safe policy.
\bibliographystyle{apa-good}
\section{Approach rationale}
Safe Interactive Model-Based Learning (SiMBL) aims to control a deterministic dynamical system:
\begin{equation} \label{eq:ode}
x(t+1) = x(t)+ dt\ f(x(t), u(t)),\quad y(t) = x(t),
\end{equation}
where $x$ is the state and $y$ are the measurements, here assumed equivalent. The system (\ref{eq:ode}) is sampled with a known constant sampling time $dt$ and is subject to closed and bounded, possibly non-convex, operational constraints on the state and input:
\begin{eqnarray}
x(t)\in\mathbb{X}\subseteq \mathbb{R}^{n_x}, \
u(t)\in\mathbb{U}\subset \mathbb{R}^{n_u}, \quad \forall t>0. \label{eq:constraints}
\end{eqnarray}
The stability of (\ref{eq:ode}) is studied using discrete time systems analysis. In particular, tools from discrete-time \emph{control Lyapunov functions} \citep{Blanchini, Khalil_book} will be used to compute policies that can keep the system \emph{safe}.
\paragraph{Safety.}
In this work, \emph{safety} is defined as the capability of a system to remain within a subset $\mathbb{X}_s \subseteq \mathbb{X}$ of the operational constraints and to return asymptotically to the equilibrium state from anywhere in $\mathbb{X}_s$. A feedback control policy, $u=K(x)\in\mathbb{U}$, is certified as \emph{safe} if it can provide safety with \emph{high probability}. In this work, safety is verified with a statistical method that extends \cite{bobiti_samplingdriven_nodate}.
\paragraph{Safe learning.} \begin{wrapfigure}{r}{7.5cm}
\centering
\includegraphics[width=0.9\linewidth, trim={0. 0. 0. 0.}, clip]{figures/SiMBL_Diagram_big.pdf}
\caption{{\bf Safe interactive Model-Based Learning (SiMBL) rationale.} The approach is centered around an uncertain RNN forward model for which we compute a safe set and a control policy using principles from robust control. This allows for safe exploration through MPC and iterative refinement of the model and the safe set. An initial safe policy is assumed known. } \label{fig:simbldiagram}
\vspace{-0.6cm}
\end{wrapfigure} The proposed framework aims at learning a policy, $K(x)$, and Lyapunov function, $V(x)$,
by means of simulated trajectories from an uncertain forward model and an initial policy $K_0$,
used to collect data.
The model, the policy, and the Lyapunov function are \emph{iteratively refined} while \emph{safely}
collecting more data
through a \emph{Safe Model Predictive Controller} (Safe-MPC).
Figure~\ref{fig:simbldiagram} illustrates the approach.
\paragraph{Summary of contribution.} This work presents the algorithms for:
1) Iteratively learning a novel Bayesian RNN model with a large posterior over unseen states and inputs;
2) Learning a safe set and the associated controller with
neural networks from the model trajectories;
3) Safe exploration with MPC.
For 1) and 2), we propose to retrain the model from scratch using a consistency prior that carries over knowledge
of the previous uncertainty, and then to recompute the safe set.
The growth of the safe set as more data becomes available, and the safety of the exploration strategy, are
demonstrated on an inverted pendulum simulation with limited control
torque and stability region. Their final integration for continuous model and controller refinement with data from safe exploration (see Figure \ref{fig:simbldiagram}) is left for future work.
\section{The Bayesian recurrent forward model}
A discrete-time stochastic forward model of system \eqref{eq:ode} is formulated as a Bayesian RNN. A grey-box approach is used, where available prior knowledge is integrated into the network in a differentiable way (for instance, the known relation between an observation and its derivative). The model provides an estimate of the next-state distribution whose uncertainty is large (up to a defined value) where no data is available. This is inspired by recent work on Noise Contrastive Priors (NCP) \citep{hafner_reliable_2018}. We extend the NCP approach to RNNs, and propose the Noise Contrastive Prior Bayesian RNN (NCP-BRNN), with full state information, which follows the discrete-time update:
\begin{align} \label{eq:forward}
\hat{x}(t+1) = \hat{x}(t) + dt\ d\hat{x}(t),& \quad
d\hat{x}(t) = \mu\left(\hat{x}(t), u(t); \theta_\mu\right) + \hat{w}(t), \\
\hat{w}(t) \sim q(\hat{x}(t), u(t);\theta_\Sigma), &\quad q(\hat{x}(t), u(t);\theta_\Sigma) = \mathcal{N}\left(0,\ \Sigma\left(\hat{x}(t), u(t); \theta_\Sigma\right)\right), \\
\hat{y}(t) \sim \mathcal{N}\left(\hat{x}(t), \sigma_y^2\right), \label{eq:measure} & \quad
\hat{x}(0) \sim \mathcal{N}(x(0), \sigma_y^2), \\
\Sigma(\cdot) = \sigma_w \text{sigm}(\Sigma_{net}(\cdot)), & \quad \mu\left(\cdot\right)=\text{GreyBox}_{net}(\cdot),
\end{align}
where $\hat{x}(t), \hat{y}(t)$ denote the state and measurement estimated from the model at time $t$, and $d\hat{x}(t)$ is drawn from the distribution model, where $\mu$ and $\Sigma$ are computed from neural networks sharing some initial layers. In particular, $\mu$ combines an MLP with a physics prior, while the final activation of $\Sigma$ is a sigmoid which is then scaled by the hyperparameter $\sigma_w$, namely, a finite maximum variance. The next-state distribution depends on the current state estimate $\hat{x}$, the input $u$, and a set of \emph{unknown} constant parameters $\theta$, which are to be learned from the data. The estimated state $\hat{x}(t)$ is, for simplicity, assumed to have the same physical meaning as the true system state $x(t)$. The system state is measured with a Gaussian uncertainty with standard deviation $\sigma_y$, which is also learned from data. During control, the measurement noise is assumed to be negligible ($\sigma_y\approx 0$); therefore, the control algorithms need to be robust with respect to the model uncertainty. Extensions to partial state information and output-noise robust control are also possible but are left for future work.
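A one-step state update of the model \eqref{eq:forward} can be sketched as follows. This is only a structural sketch: the grey-box mean and the bounded-variance network are replaced by fixed illustrative functions for a pendulum-like system, so the structure (not any learned weights) matches the update above.

```python
import numpy as np

dt, sigma_w = 0.05, 0.01
rng = np.random.default_rng(0)

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def mu(x, u):
    # grey-box prior: the velocity is the derivative of the angle; the
    # acceleration term stands in for the learned MLP part
    angle, vel = x
    return np.array([vel, np.sin(angle) + u])

def Sigma(x, u):
    # diagonal std-devs: sigmoid output scaled by sigma_w, so the
    # 1-step uncertainty is bounded even far from the data
    z = np.array([x[0] + u, x[1] - u])     # illustrative features
    return sigma_w * sigm(z)

def step(x, u):
    w = rng.normal(0.0, Sigma(x, u))       # hat{w} ~ N(0, Sigma(x, u))
    return x + dt * (mu(x, u) + w)
```

The sigmoid scaling is the key design choice: it caps the predictive standard deviation at $\sigma_w$, which keeps the uncertainty set bounded for the robust control machinery that follows.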
\paragraph{Towards reliable uncertainty estimates with RNNs}
The \emph{fundamental assumption} for model-based safe learning algorithms is that the model predictions \emph{contain} the actual system state transitions with high probability \citep{berkenkamp_safe_2017}. This is difficult to meet in practice for most neural network models. To mitigate this risk, we train our Bayesian RNNs on sequences and include a Noise-Contrastive Prior (NCP) term \citep{hafner_reliable_2018}. In the present work, the uncertainty is modelled as a point-wise Gaussian with mean and standard deviation that depend on the current state as well as on the input. The learned 1-step standard deviation, $\Sigma$, is assumed to be a diagonal matrix. This assumption is limiting, but it is common in variational neural networks for practicality reasons \citep{zhao_infovae_2017, chen_variational_2016}. The NCP concept is illustrated in Figure \ref{fig:NCP}. More complex uncertainty representations will be considered in future work.
The cost function used to train the model is:
\begin{align}\label{eq:model_loss}
\mathcal{L}(\theta_\mu,\theta_\Sigma) = & -\frac{1}{T}\sum_{t=0}^T
\mathbf{E}_{p_{\text{train}}(x(0),u(t))}\left[
\mathbf{E}_{q(\hat{x}(t), {u}(t);\theta_\Sigma)}\left[\ln p(\hat{y}(t)|y(t),u(t);\theta_\mu;\theta_\Sigma)\right]\right] \nonumber \\ & + D_{\text{KL}}\left[q\left(\tilde{x}(t), \tilde{u}(t);\theta_\Sigma\right)\ ||\ \mathcal{N}(0,\sigma_w^2)\right] \nonumber \\ & + \mathbf{E}_{p_{\text{train}}(x(0),u(t))}\left[\text{ReLU}\left[\Sigma\left(\hat{x}(t),u(t);\theta_{\Sigma}\right) - \Sigma\left(\hat{x}(t),u(t);\theta_{\Sigma_\text{prev}}\right)\right]\right]
\end{align}
where the first term is the expected negative log likelihood over the uncertainty distribution, evaluated over the training data. The second term is the KL-divergence, which is evaluated in closed form over predictions $\tilde{x}$ generated from a set of background initial states and input sequences, $\tilde{x}(0)$ and $\tilde{u}(t)$. These are sampled from a uniform distribution for the first model and then, once a previous model is available and new data is collected, they are obtained using rejection sampling with PyMC \citep{Salvatier2016} with acceptance condition: $\Sigma(\tilde{x}(0),\tilde{u}(t);\theta_{\Sigma_\text{prev}})\geq 0.5\ \sigma_w$. If a previous model is available, then the final term is used, which is an \emph{uncertainty consistency prior} forcing the uncertainty estimates over the training data not to increase with respect to the previous model.
The loss (\ref{eq:model_loss}) is optimised using stochastic backpropagation through truncated sequences. In order to have further consistency between model updates, if a previous model is available, we train from scratch but stop optimising once the final loss of the previous model is reached.
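Two of the three terms in \eqref{eq:model_loss} have simple closed forms for a diagonal Gaussian. The sketch below computes the per-sample KL-divergence to the zero-mean prior $\mathcal{N}(0,\sigma_w^2)$ and the uncertainty consistency penalty; the names and the value of $\sigma_w$ are illustrative.

```python
import numpy as np

sigma_w = 0.1  # maximum std-dev hyperparameter (illustrative value)

def kl_to_prior(sig):
    """Closed-form KL( N(0, diag(sig^2)) || N(0, sigma_w^2 I) ), summed over
    dimensions: log(sigma_w / sig) + sig^2 / (2 sigma_w^2) - 1/2."""
    return np.sum(np.log(sigma_w / sig) + sig**2 / (2 * sigma_w**2) - 0.5,
                  axis=-1)

def consistency_penalty(sig_new, sig_prev):
    """Uncertainty consistency prior: ReLU of any increase of the std-dev
    estimates over the previous model, evaluated on the training data."""
    return float(np.sum(np.maximum(sig_new - sig_prev, 0.0)))
```

The KL term vanishes exactly when the predicted standard deviations reach the cap $\sigma_w$, which is what pushes the uncertainty towards its maximum away from the data.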
\begin{wrapfigure}{l}{5.5cm}
\centering
\vspace{0.cm}
\includegraphics[width=1.05\linewidth]{figures/pendulum/NCP_example.png}
\caption{{\bf Variational neural networks with Noise Contrastive Priors (NCP).} Predicting sine-wave data (red-black) with confidence bounds (blue area) using NAIS-Net \citep{Ciccone2018NAISNetSD} and NCP \citep{hafner_reliable_2018}. }\label{fig:NCP}
\vspace{-1.3cm}
\end{wrapfigure}
\section{The robust control problem} We approximate a chance-constrained stochastic control problem with a min-max robust control problem over a convex uncertainty set. This non-convex min-max control problem is then further approximated by computing the loss only at the vertices of the uncertainty set. To compensate for this approximation, inspired by variational inference, the centre of the set is sampled from the uncertainty distribution itself (Figure \ref{fig:variational_sets}).
The procedure is detailed below.
\paragraph{Lyapunov-Net.} The considered Lyapunov function is:
\begin{equation}\label{eq:lyap1}
V(x) = x^T\left(\epsilon I + V_{net}(x)^T V_{net}(x)\right)x + \psi(x),
\end{equation}
where $V_{net}(x)$ is a feedforward network that produces a $n_V\times n_x$ matrix, where $n_V$ and $\epsilon>0$ are hyperparameters. The network parameters have to be trained and they are omitted from the notation. The term $\psi(x)\geq 0$ represents the prior knowledge of the state constraints. In this work we use:
\begin{equation}
\psi(x) = \text{ReLU}(\phi(x)-1),
\end{equation}
where $\phi(x)\geq 0$ is the Minkowski functional\footnote{Minkowski functionals measure the distance from the set center and they are positive definite.} of a user-defined \emph{usual region of operation}, namely:
\begin{equation}
\mathbb{X}_\phi=\{x \in \mathbb{X}: \phi(x)\leq 1\}.
\end{equation}
Possible choices for the Minkowski functional include quadratic functions, norms or semi-norms \citep{Blanchini,Horn:2012:MA:2422911}.
Since $V(x)$ must be positive definite, the hyperparameter $\epsilon>0$ is introduced~\footnote{The trainable part of the function $V(x)$ is chosen to be piece-wise quadratic but this is not the only possible choice. In fact one can use any positive definite and radially unbounded function. For the same problem multiple Lyapunov functions can exist. See also \cite{Blanchini}.}. While other forms are possible as in \cite{Blanchini}, with \eqref{eq:lyap1} the activation function does not need to be invertible. The study of the generality of the proposed function is left for future consideration.
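The candidate \eqref{eq:lyap1} is straightforward to evaluate. In the sketch below, $V_{net}$ is an arbitrary function returning an $n_V\times n_x$ matrix and $\phi$ is taken as a quadratic Minkowski functional $x^T P_\phi x$; both choices are illustrative stand-ins for the trained networks.

```python
import numpy as np

eps = 1e-3  # guarantees positive definiteness of the quadratic part

def lyapunov(x, Vnet, P_phi):
    """V(x) = x^T (eps I + Vnet(x)^T Vnet(x)) x + ReLU(phi(x) - 1)."""
    A = Vnet(x)                                   # (n_V, n_x) matrix
    quad = x @ (eps * np.eye(len(x)) + A.T @ A) @ x
    phi = x @ P_phi @ x                           # quadratic Minkowski functional
    return quad + max(phi - 1.0, 0.0)             # psi = ReLU(phi - 1)
```

By construction $V(0)=0$ and $V(x)>0$ for $x\neq 0$, regardless of the (possibly untrained) network output, which is exactly the role of the $\epsilon I$ term.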
\paragraph{Safe set definition.}
Denote the candidate \emph{safe level set} of $V$ as:
\begin{equation}
\mathbb{X}_s = \{x \in \mathbb{X}: V(x)\leq l_s\},
\end{equation}
where $l_s$ is the safe level. If, for $x \in \mathbb{X}_s$, the function $V(x)$ satisfies the Lyapunov inequality over the system closed-loop trajectory with a control policy $K$, namely,
\begin{equation}\label{eq:lyap2}
u(t)=K\left(x\left(t\right)\right)\Rightarrow V(x(t+1))-V(x(t))\leq 0,\ \forall x(t)\in \mathbb{X}_s,
\end{equation}
then the set $\mathbb{X}_s$ is \emph{safe}, i.e., it satisfies the conditions of \emph{positive-invariance} \citep{Blanchini,Kerrigan:2000}. Note that the policy $K(x)$ can be either a neural network or a model-based controller, for instance a Linear Quadratic Regulator (LQR, see \cite{KalmanLQR}) or a Model Predictive Controller (MPC, see \cite{Maciejowski_book,rawlingsMPC,Cannon_book,Rakovic2019}). A condition stronger than eq. (\ref{eq:lyap2}) is often used in the context of optimal control:
\begin{equation}\label{eq:lyap2_lqr}
u(t)=K\left(x\left(t\right)\right)\Rightarrow V(x(t+1))-V(x(t))\leq -\ell(x(t),u(t)),\ \forall x(t)\in \mathbb{X}_s
\end{equation}
where $\ell(x(t),u(t))$ is a positive semi-definite stage loss.
In this paper, we focus on training policies with the quadratic loss used in LQR and MPC, where the origin is the target equilibrium, namely:
\begin{equation}
\ell(x,u)= x^T Q x + u^T R u, \quad Q\succeq 0,\ R\succ 0.
\end{equation}
\paragraph{From chance constrained to min-max control}
Consider the problem of finding a controller $K$ and a function $V$ such that $u(t)=K\left(x\left(t\right)\right)$ and:
\begin{equation}
\label{eq:stochastic}
\mathcal{P}\big[V(\hat{x}(t+1))-V(x(t))\leq -\ell(x(t),u(t))\big]\geq 1-\epsilon_p,
\end{equation}
where $\mathcal{P}$ represents a probability and $0<\epsilon_p\ll 1$. This is a \emph{chance-constrained} non-convex optimal control problem \citep{Cannon_book}. We truncate the distributions and approximate (\ref{eq:stochastic}) by:
\begin{equation}
\max_{\hat{x}(t+1) \in\ \mathbb{W}(x(t),u(t),\theta)}\big[V\left(\hat{x}(t+1)\right)\big]-V(x(t)) \leq - \ell\left(x(t), K\left(x(t)\right)\right), \label{eq:minmax_def}
\end{equation}
which is deterministic. A strategy to jointly learn $(V,\ K)$ fulfilling (\ref{eq:minmax_def}) is presented next.
\section{Learning the controller and the safe set}\label{sec:safeset}
We wish to build a controller $K$, a function $V$, and a safe level $l_s$ given the state transition probability model, $(\mu,\Sigma)$, such that the condition in (\ref{eq:lyap2_lqr}) is satisfied with high probability for the physical system generating the data.
Denote the one-step prediction from the model in (\ref{eq:forward}), in closed loop with $K$, as:
$$\hat{x}^{+} = x + dt\ d\hat{x}, \text{ with } u=K(x),$$
where $\hat{x}^{+}$ represents the next state prediction and the time index $t$ is omitted.
\paragraph{Approximating the high-confidence prediction set.} A polytopic approximation of a \emph{high confidence region} of the estimated uncertain set $\hat{x}^{+} \in \mathbb{W}(x,u,\theta)$ is obtained from the parameters of $\Sigma$ and used for training $(V,\ K)$. In this work, the uncertain set is taken as a hyper-diamond centered at $x$, scaled by the (diagonal) standard deviation matrix, $\Sigma$:
\begin{equation}
\mathbb{W}_1(x,u,\theta)=\left\{x^{+}: x^{+}=x + dt\ d\hat{x}, \ \left\| (\Sigma(x,u;\theta_\Sigma))^{-1}\hat{w}\right\|_1\leq \bar{\sigma} \right\},
\end{equation}
where $\bar{\sigma}>0$ is a hyper-parameter. This choice of set is inspired by the Unscented Kalman filter \citep{wan2000}.
Since $\Sigma$ is diagonal, the vertices are given by the columns of the matrix resulting from multiplying $\Sigma$ with a mask $M$ such that:
\begin{align}
\text{vert}[\mathbb{W}_1(x,u,\theta)]= \text{cols}[\Sigma(x,u;\theta_\Sigma)\ M], \quad
M= \bar{\sigma}\ [I,\ -I].
\end{align}
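Enumerating the vertices is a one-liner for diagonal $\Sigma$. A minimal sketch with illustrative argument names:

```python
import numpy as np

def vertex_offsets(Sigma_diag, sigma_bar):
    """Columns of Sigma @ M with M = sigma_bar [I, -I]: the 2 n_x vertices
    of the l1-ball { w : || Sigma^{-1} w ||_1 <= sigma_bar }."""
    n = len(Sigma_diag)
    M = sigma_bar * np.hstack([np.eye(n), -np.eye(n)])
    return np.diag(Sigma_diag) @ M                # shape (n, 2n)
```

Each column lies exactly on the boundary of the scaled $\ell_1$-ball, so the max of $V$ over these $2 n_x$ points is cheap to evaluate as a tensor operation during training.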
\paragraph{Learning the safe set.} Assume that a controller $K$ is given. Then, we wish to learn a $V$ of the form of (\ref{eq:lyap1}) such that the corresponding safe set $\mathbb{X}_s$ is as large as possible, ideally as large as the state constraints $\mathbb{X}$. In order to do so, the parameters of $V_{net}$ are trained using a grid of initial states, a forward model to simulate the next state under the policy $K$, and an appropriate cost function. The cost for $V_{net}$ and $l_s$ is inspired by \citep{richards_lyapunov_2018}. It consists of a combination of two objectives: the first penalises the deviation from the Lyapunov stability condition; the second is a classification penalty that separates the stable points from the unstable ones by means of the decision boundary, $V(x)=l_s$. The combined robust Lyapunov function cost is:
\begin{equation}\label{eq:lyapunov_loss}
\min_{V_{net},\ l_s} \mathbf{E}_{[x(t)\in\mathbb{X}_{grid}]} \big[J\left(x\left(t\right)\right)\big],
\end{equation}
\begin{align}\label{eq:cost}
J(x) &= \mathcal{I}_{\mathbb{X}_s}(x)\ J_{s}(x)+ \text{sign}\big[\nabla V(x)\big]\
\left[l_s - V(x) \right],
\end{align}
\begin{align}
\mathcal{I}_{\mathbb{X}_s}(x) &= 0.5 \left(\text{sign}\left[l_s - V(x) \right]+1\right), \quad
J_{s}(x) = \frac{1}{\rho V(x)}\ \text{ReLU}\left[ \nabla V(x) \right], \\
\nabla V(x) &= \max_{\hat{x}^{+} \in\ \mathbb{W}(x,K\left(x\right),\theta)}\big[V\left(\hat{x}^{+}\right)\big]-V(x) + \ell\left(x, K\left(x\right)\right), \label{eq:minmax}
\end{align}
where $\rho>0$ trades off stability for volume. The robust Lyapunov decrease in (\ref{eq:minmax}) is evaluated by using sampling to account for uncertainty over the confidence interval $\mathbb{W}$. Sampling of the set centre is performed as opposed to simply setting $\mathbb{W}=\mathbb{W}_1$, which did not seem to produce valid results. Let us omit $\theta$ for ease of notation. We substitute $\nabla V(x)$ with $\mathbf{E}_{\mathbb{W}}\big[\nabla V(x)\big]$, which we define as:
\begin{align}
\mathbf{E}_{\hat{w} \sim \mathcal{N}\left(0,\ \Sigma\left(x, K(x)\right)\right)}\left\{\max_{\hat{x}^{+} \in\ \mathbb{W}_1(x,K\left(x\right),\theta) + \hat{w}dt}\big[V\left(\hat{x}^{+}\right)\big]\right\}-V(x) + \ell\left(x, K\left(x\right)\right). \label{eq:expected:minmax}
\end{align}
Equations (\ref{eq:minmax}) and (\ref{eq:expected:minmax}) require a maximisation of the non-convex function $V(x)$ over the convex set $\mathbb{W}$. For the considered case, a sampling technique or another optimisation (similar to adversarial learning) could be used for a better approximation of the max operator. The maximum over $\mathbb{W}$ is instead approximated by the maximum over its vertices:
\begin{equation}
\nabla V(x) \approx \max_{\hat{x}^{+} \in\ \text{vert}[\mathbb{W}_1(x,K\left(x\right),\theta)] + \hat{w}dt}\big[V\left(\hat{x}^{+}\right)\big]-V(x) + \ell\left(x, K\left(x\right)\right). \label{eq:minmax_approx}
\end{equation}
This consists of a simple enumeration followed by a max over tensors that can be easily handled. Finally, during training (\ref{eq:expected:minmax}) is implemented in a variational inference fashion by evaluating (\ref{eq:minmax_approx}) at each epoch over a different sample of $\hat{w}$. This entails a variational posterior over the centre of the uncertainty interval. The approach is depicted in Figure \ref{fig:variational_sets}.
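As a concrete illustration, the vertex-based approximation (\ref{eq:minmax_approx}) can be sketched as follows. The interval-model interface \texttt{interval} (returning the bounds of $\mathbb{W}_1$) and the diagonal of $\Sigma$ are hypothetical placeholders for the surrogate described above, and the $dt$ factor is absorbed into the sampling scale:

```python
import numpy as np

def robust_decrease(V, x, u, interval, sigma, ell, rng):
    """Approximation of the robust Lyapunov decrease (eq. minmax_approx).

    interval(x, u) -> (lo, hi): bounds of the prediction box W_1
    (hypothetical model interface); sigma: diagonal of Sigma(x, K(x))
    used to draw the variational centre sample.
    """
    lo, hi = interval(x, u)
    w_hat = rng.normal(0.0, np.sqrt(np.asarray(sigma, dtype=float)))
    n = lo.size
    vmax = -np.inf
    # enumerate the 2^n vertices of the shifted uncertainty box
    for bits in range(2 ** n):
        vertex = np.array([hi[i] if (bits >> i) & 1 else lo[i] for i in range(n)])
        vmax = max(vmax, V(vertex + w_hat))
    return vmax - V(x) + ell(x, u)
```

For the low state dimensions considered here the $2^n$ enumeration reduces to a max over a small tensor and parallelises trivially.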
The proposed cost is inspired by \cite{richards_lyapunov_2018}, with the difference that here there is no need for \emph{labelling} the states as safe by means of a multi-step simulation. Moreover, in this work we train the Lyapunov function and controller \emph{together}, while in \citep{richards_lyapunov_2018} the latter was given.
\paragraph{Learning the safe policy.}
We alternate the minimisation of the Lyapunov loss (\ref{eq:lyapunov_loss}) and the solution of the \emph{variational robust control problem}:
\begin{align}\label{eq:policy_cost}
\min_{u=K(x)}\mathbf{E}_{[x\in\mathbb{X}_{grid}]} \left[\mathcal{I}_{\mathbb{X}_s}(x) L_c(x,u)\right], \quad \text{s.t.: } K(0)=0,
\end{align}
\begin{align}
L_c(x,u) = \ell(x,u) + &\mathbf{E}_{\mathbb{W}}\left\{\max_{\hat{x}^{+} \in\ \mathbb{W}(x,u,\theta)} \left[V(\hat{x}^{+}) - \gamma \log(l_s-V(\hat{x}^{+}))\right]\right\}, \label{eq:robust_control_loss}
\end{align}
\begin{wrapfigure}{l}{5.5cm}
\vspace{-0.4cm}
\centering
\includegraphics[trim={120, 330, 150, 270}, clip, scale=0.55]{figures/rendering_3.pdf}
\caption{{\bf Approximating the non-convex maximisation.} Centre of the uncertain set is sampled and Lyapunov function is evaluated at its vertices. }
\label{fig:variational_sets}
\vspace{-0.3cm}
\end{wrapfigure}
subject to the forward model (\ref{eq:forward}). In this work, (\ref{eq:policy_cost}) is solved using backpropagation through the policy, the model and $V$. The safety constraint, $\hat{x}^{+}\in\mathbb{X}_s$, namely $V(\hat{x}^{+})\leq l_s$, is relaxed through a log-barrier \citep{Boyd:2004:CO:993483}. If a neural policy $K(x)$ solves (\ref{eq:policy_cost}) and satisfies the safety constraint, $\forall x\in\mathbb{X}_s$, then it is a candidate robust controller for keeping the system within the safe set $\mathbb{X}_s$. Note that the expectation in (\ref{eq:robust_control_loss}) is once again treated as a variational approximation of the expectation over the centre of the uncertainty interval.
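A minimal sketch of the log-barrier-relaxed loss (\ref{eq:robust_control_loss}) follows; the set of candidate successors (e.g. box vertices around a sampled centre) is passed in explicitly, which is an assumption about the model interface rather than the paper's exact implementation:

```python
import numpy as np

def control_loss(V, x, u, next_states, ell, l_s, gamma):
    """Log-barrier-relaxed robust control loss L_c.

    next_states: candidate successors drawn from the uncertainty set
    W(x, u). The barrier term -gamma*log(l_s - V(x+)) diverges as V(x+)
    approaches l_s, pushing the predicted state away from the boundary
    of the safe level set.
    """
    worst = -np.inf
    for xp in next_states:
        v = V(xp)
        if v >= l_s:          # safety constraint violated: barrier undefined
            return np.inf
        worst = max(worst, v - gamma * np.log(l_s - v))
    return ell(x, u) + worst
```

Returning an infinite cost on violation mirrors the behaviour of the barrier in the exact interior-point formulation.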
Obtaining an exact solution to the control problem for all points is computationally impractical. In order to provide statistical guarantees of safety, probabilistic verification is used after $V$ and $K$ have been trained. This refines the safe level set $l_s$ and, if successful, provides a probabilistic safety certificate. If the verification is unsuccessful, then the learned $(\mathbb{X}_s,\ K)$ are not safe. The data collection continues with the previous safe controller until suitable $V$, $l_s$, and $K$ are found. Note that the number of training points used for the safe set and controller is in general smaller than the number used for verification. The alternate learning procedure for $\mathbb{X}_s$ and $K$ is summarised in Algorithm \ref{alg:alternateDescent}. The use of 1-step predictions makes the procedure highly scalable through parallelisation on GPU.
\begin{wrapfigure}{r}{6.5cm}
\noindent\begin{minipage}{0.45\columnwidth}
\vspace{-0.1cm}
\begin{algorithm}[H]
\DontPrintSemicolon
\KwInput{$K_0$, $\mathbb{X}_{\text{grid}}$, $\theta_\mu$, $\theta_\Sigma$, $\sigma_w\geq0$, $\bar{\sigma}>0$, $\epsilon>0$}
\KwOutput{$(V_{net},\ l_s,\ K)$}
\small
\caption{Alternate descent for safe set}
\label{alg:alternateDescent}
\For{$i=0...N$}{
\For{$j=0...N_v$}{
$(V_{net},\ l_s)\leftarrow$ Adam step on (\ref{eq:cost}) \;
}
\For{$j=0...N_k$}{
$K \leftarrow$ Adam step on (\ref{eq:policy_cost}) \;
}
}
\end{algorithm}
\end{minipage}
\vspace{-0.4cm}
\end{wrapfigure}
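The alternation in Algorithm \ref{alg:alternateDescent} can be sketched as a skeleton in which two closures stand in for the Adam updates on (\ref{eq:cost}) and (\ref{eq:policy_cost}); the closure interface is an illustrative assumption:

```python
def alternate_descent(lyap_step, policy_step, n_outer, n_v, n_k):
    """Skeleton of the alternate descent for the safe set.

    lyap_step and policy_step are closures performing one optimiser
    update (Adam in the paper) on the Lyapunov loss and the robust
    control loss, each returning the current loss value.
    """
    history = []
    for _ in range(n_outer):
        for _ in range(n_v):
            loss_v = lyap_step()    # update (V_net, l_s)
        for _ in range(n_k):
            loss_k = policy_step()  # update K
        history.append((loss_v, loss_k))
    return history
```

Keeping the two updates in separate inner loops lets the Lyapunov function adapt to the current policy before the policy is moved, which stabilises the joint training.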
\paragraph{Probabilistic safety verification.}
A probabilistic verification is used to numerically prove the physical system stability with high probability. The resulting certificate is of the form (\ref{eq:stochastic}), where the $\epsilon_P$ decreases with increasing number of samples. Following the work of \cite{bobiti_samplingdriven_nodate}, the simulation is evaluated at a large set of points within the estimated safe set $\mathbb{X}_s$. Monte Carlo rejection sampling is performed with PyMC \citep{Salvatier2016}.
\begin{wrapfigure}{r}{7.5cm}
\vspace{-0.4cm}
\noindent\begin{minipage}{0.53\columnwidth}
\begin{algorithm}[H]
\DontPrintSemicolon
\KwInput{$N$, $V$, $K$, $\theta_\mu$, $\theta_\Sigma$, $\sigma_w\geq0$, $\bar{\sigma}>0$, $\delta>0$}
\KwOutput{$(\text{SAFE},\ l_u,\ l_l)$}
\small
\caption{Probabilistic safety verification}
\label{alg:verification}
$\text{SAFE}\leftarrow \text{False}$ \;
\For{$l_u = 1,1-\delta,1-2\delta,...,0$ }{
\For{$l_l = 0, \delta, 2\delta,...,l_u$}{
draw $N$ uniform $x$-samples s.t.:\\\qquad $l_l\ l_s\leq V(x)\leq l_u\ l_s$ \;
draw $N$ $w$-samples from $\mathcal{U}(-\bar{\sigma},\bar{\sigma})$ \;
\If{$V(\hat{x}^{+})-V(x)\leq 0,\forall x,\forall w$}{
draw $N$ uniform $x$-samples s.t.:\\\qquad $V(x)\leq l_l\ l_s$ \;
\If{$V(\hat{x}^{+})\leq l_u\ l_s,\forall x,\forall w$}{
$\text{SAFE}\leftarrow \text{True}$ \;
\textbf{return} \text{SAFE}, $l_u$, $l_l$\;
}
}
}
}
Verification failed.\;
\end{algorithm}
\end{minipage}
\vspace{-0.5cm}
\end{wrapfigure}
In practical applications, several factors limit the convergence of the trajectory to a neighborhood of the target (the \emph{ultimate bound}, \cite{Blanchini}): for instance, the structural bias of the policy, discount factors in RL methods, or persistent uncertainty in the model, the state estimates, and the physical system itself. Therefore, we extended the verification algorithm of \citep{bobiti_samplingdriven_nodate} to estimate the ultimate bound as well as the invariant set, as outlined in Algorithm \ref{alg:verification}. Given a minimum and a maximum level, $l_l$, $l_u$, we first sample initial states uniformly within these two levels and check for a robust decrease of $V$ over the next-state distribution. If this is verified, then we sample uniformly from inside the minimum level set $l_l$ (where $V$ may not decrease) and check that $V$ does not exceed the maximum level $l_u$ over the next-state distribution. The distribution is evaluated by means of uniform samples of $w$, independent of the current state, within $(-\bar{\sigma},\bar{\sigma})$. These are then scaled using $\Sigma$ from the model. We search for $l_l$, $l_u$ with a step $\delta$.
Note that, in Algorithm \ref{alg:verification}, the uncertainty of the surrogate model is taken into account by sampling a single uncertainty realisation for the entire set of initial states. The values of $w$ are then scaled using $\Sigma$ in the forward model. This step is computationally convenient but breaks the assumption that variables are drawn from a uniform distribution; we leave a more rigorous treatment to future work. In this paper, independent Gaussian uncertainty models are used and stability is verified directly on the environment. Note that probabilistic verification is expensive but necessary, as in pathological cases the training loss (\ref{eq:lyapunov_loss}) for the safe set could converge to a local minimum with a very small set. If this is the case, then usually the forward model is not accurate enough or the uncertainty hyperparameter $\sigma_w$ is too large. Note that Algorithm \ref{alg:verification} is highly parallelisable.
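The two-level search of Algorithm \ref{alg:verification} can be sketched as follows, with $V$ normalised so that $l_s=1$; the one-step simulator \texttt{step} and the two samplers are hypothetical helpers standing in for the rejection sampling described above:

```python
import numpy as np

def verify(V, step, sample_states, sample_noise, N, delta):
    """Sketch of the probabilistic safety verification (l_s = 1).

    Searches for the largest outer level l_u and smallest ultimate bound
    l_l such that V decreases robustly on the annulus l_l <= V(x) <= l_u
    and the inner sub-level set is invariant up to l_u.
    """
    l_u = 1.0
    while l_u >= 0.0:
        l_l = 0.0
        while l_l <= l_u:
            xs = sample_states(N, l_l, l_u)           # l_l <= V(x) <= l_u
            ws = sample_noise(N)
            if all(V(step(x, w)) - V(x) <= 0 for x, w in zip(xs, ws)):
                xs_in = sample_states(N, 0.0, l_l)    # V(x) <= l_l
                if all(V(step(x, w)) <= l_u for x, w in zip(xs_in, ws)):
                    return True, l_u, l_l
            l_l += delta
        l_u -= delta
    return False, None, None
```

Both inner checks are embarrassingly parallel over the $N$ samples, which is what makes the search over levels tractable.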
\section{Safe exploration}\label{sec:exploration}
Once a verified safe set is found, the environment can be controlled by means of a 1-step MPC with probabilistic stability (see Appendix). Consider the constraint $V(x)\leq l^{\star}_s=l_u\ l_s$, where $V$ and $l_s$ come from Algorithm \ref{alg:alternateDescent} and $l_u$ from Algorithm \ref{alg:verification}. The Safe-MPC exploration strategy is as follows:
\paragraph{Safe-MPC for exploration.} For collecting new data, solve the following MPC problem:
\begin{eqnarray}\label{eq:MPCexploration1}
u^\star= \arg\min_{u\in\mathbb{U}}\bigg\{ \beta\ell(x,u) -\alpha\ \ell_{expl}(x,u) + \max_{\hat{x}^{+} \in\ \mathbb{W}(x,u,\theta)}\left[\beta V(\hat{x}^{+}) - \gamma \log(l^{\star}_s-V(\hat{x}^{+}))\right]\bigg\},
\end{eqnarray}
where $\alpha\leq\gamma$ is the \emph{exploration} hyperparameter, $\beta\in[0,1]$ is the \emph{regulation} or \emph{exploitation} parameter and $\ell_{expl}(x,u)$ is the info-gain from the model, similar to \citep{hafner_reliable_2018}:
\begin{equation}
\ell_{expl}(x,u) = \sum_{i=1,\dots, N_x}\frac{\left(\Sigma_{ii}\left({x}, u, p, \theta\right)\right)^2}{N_x\ \sigma_{y_{ii}}^2}.
\end{equation}
The full derivation of the problem and a probabilistic safety result are discussed in Appendix.
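The info-gain term has a direct implementation; the diagonal of the predictive covariance and the output noise scales are assumed to come from the surrogate model interface:

```python
import numpy as np

def info_gain(Sigma_diag, sigma_y):
    """Exploration bonus l_expl: sum of squared predictive variances,
    normalised by the output noise and the state dimension. Large where
    the surrogate model is uncertain relative to the measurement noise."""
    Sigma_diag = np.asarray(Sigma_diag, dtype=float)
    sigma_y = np.asarray(sigma_y, dtype=float)
    return float(np.sum(Sigma_diag ** 2 / (Sigma_diag.size * sigma_y ** 2)))
```

The normalisation by $\sigma_{y_{ii}}^2$ makes the bonus dimensionless, so states are compared on model uncertainty rather than on raw output scale.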
\paragraph{Alternate min-max optimization.}
Problem (\ref{eq:MPCexploration1}) is approximated using alternate descent. In particular, the maximization in the loss function over the uncertain future state $\hat{x}^{+}$ with respect to $\hat{w}$, given the current control candidate $u$, is alternated with the minimization with respect to $u$, given the current candidate $\hat{x}^{+}$. Adam \citep{kingma_adam:_2014} is used for both steps.
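One round of this alternation can be sketched as below; central finite differences stand in for the autodiff gradients that Adam would consume in the actual implementation, and plain gradient steps stand in for Adam itself:

```python
import numpy as np

def min_max_step(loss, u, w, lr_u, lr_w, eps=1e-5):
    """One round of the alternate ascent/descent approximating the
    min-max problem: ascend the loss in the disturbance w for the
    current control u, then descend in u for the updated w."""
    def grad(f, z):
        g = np.zeros_like(z)
        for i in range(z.size):
            dz = np.zeros_like(z)
            dz[i] = eps
            g[i] = (f(z + dz) - f(z - dz)) / (2 * eps)
        return g
    w = w + lr_w * grad(lambda w_: loss(u, w_), w)   # inner maximisation
    u = u - lr_u * grad(lambda u_: loss(u_, w), u)   # outer minimisation
    return u, w
```

On a saddle-shaped loss, repeating this step drives $(u, w)$ toward the saddle point, which is the behaviour the alternation relies on.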
\section{Inverted pendulum example}
The approach is demonstrated on an inverted pendulum, where the input is the angular torque and the states/outputs are the angular position and velocity of the pendulum. The aim is to collect data safely around the unstable equilibrium point (the origin). The system has a torque constraint that limits the controllable region. In particular, if the initial angle is greater than 60 degrees, then the torque is not sufficient to swing up the pendulum. In order to compare to the LQR, we choose a linear policy with a $\tanh$ activation, meeting the torque constraints while preserving differentiability.
\paragraph{Safe set with known environment, comparison to LQR.}
We first test the safe-net algorithm on the nominal pendulum model and compare the policy and the safe set with those from a standard LQR policy. Figure \ref{fig:pendulum_lyap_nom} shows the safe set at different stages of the algorithm, approaching the LQR.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-1.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=1$\newline $K=[-10, -0.05]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-30.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=30$\newline $K=[-9.24,\ -1.56]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-50.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=50$\newline $K=[-8.49,\ -2.25]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 0}, clip, scale=0.245]{figures/pendulum/nom_iter-60.png}
\caption*{\centering{\hspace*{1.5em} \tiny $i=60$\newline $K=[-8.1, -2.7]$}}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={45 23.5 0 20}, clip, scale=0.245]{figures/pendulum/nom_LQR.png}
\caption*{\centering{\hspace*{1.5em} \tiny LQR\newline $K=[-7.26,\ -2.55]$}}
\end{subfigure}
\caption{{\bf Inverted Pendulum. Safe set and controller with proposed method for known environment model.} Initial set ($i=0$) is based on a unit circle plus the constraint $|\alpha|\leq 0.3$. Contours show the function levels. The control gain gets closer to the LQR solution as iterations progress until circa $i=50$, where the minimum of the Lyapunov loss (\ref{eq:lyapunov_loss}) is achieved. The set and controller at iteration $50$ are closest to the LQR solution, which is optimal around the equilibrium in the unconstrained case. In order to maximise the chances of verification, the optimal parameters are selected with early stopping, namely when the Lyapunov loss reaches its minimum, resulting in $K=[-8.52,\ -2.2]$. }
\label{fig:pendulum_lyap_nom}
\end{figure}
\vspace{-0.5cm}
\paragraph{Safe set with Bayesian model.} In order to test the proposed algorithms, the forward model is fitted on sequences of length $10$ for an increasing amount of data points ($10k$ to $100k$). Data is collected in closed loop with the initial controller, $K_0=[-10,\ 0]$, with different initial states. In particular, we perturb the initial state and control values with random noise whose standard deviations start from, respectively, $0.1$ and $0.01$ and double every $10k$ points. The only prior used is that the velocity is the derivative of the angular position (normalised to $2\pi$ and $\pi$). The uncertainty bound was fixed to $\sigma_w=0.01$. The architecture was cross-validated from $60k$ datapoints with a $70$-$30$ split. The model with the best validation predictions as well as the largest safe set was used to generate the results in Figure \ref{fig:pendulum_lyap}.
\begin{wrapfigure}{l}{7.5cm}
\vspace{-0.3cm}
\centering
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[trim={52 35 0 0}, clip, scale=0.25]{figures/pendulum/verification_samples_nominal.png}
\caption{\scriptsize environment}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\centering
\includegraphics[trim={52 35 0 0}, clip, scale=0.25]{figures/pendulum/verification_samples_80k.png}
\caption{\scriptsize NCP-BRNN ($80k$ points).}
\end{subfigure}
\caption{{\bf Inverted pendulum verification}. Nominal and robust safe sets are verified on the pendulum simulation using $5k$ samples. We search for the largest stability region and the smallest ultimate bound of the solution. If a simulation is not available, then a two-level sampling on BRNN is performed.}
\label{fig:verification}
\vspace{-0.9cm}
\end{wrapfigure}
The results demonstrate that the size of the safe set can improve with more data, provided that the model uncertainty decreases and the predictions have comparable accuracy. This motivates exploration.
\vspace{-0.1cm}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.19\textwidth}
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_10k.png}\vspace{-0.1cm}
\caption*{$10k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_30k.png}\vspace{-0.1cm}
\caption*{$30k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_50k.png}\vspace{-0.1cm}
\caption*{$50k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 101 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_70k.png}\vspace{-0.1cm}
\caption*{$70k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.19\textwidth}
\centering
\includegraphics[trim={52 35 50 45}, clip, scale=0.224]{figures/pendulum/uncertainty_init_90k.png}\vspace{-0.1cm}
\caption*{$90k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.1965\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_10k.png}\vspace{-0.1cm}
\caption*{$10k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_30k.png}\vspace{-0.1cm}
\caption*{$30k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_50k.png}\vspace{-0.1cm}
\caption*{$50k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_80k.png}\vspace{-0.1cm}
\caption*{$80k$ points}
\end{subfigure}
\begin{subfigure}[b]{0.195\textwidth}
\centering
\includegraphics[trim={52 35 65 45}, clip, scale=0.21]{figures/pendulum/verification_trajectory_nominal.png}\vspace{-0.1cm}
\caption*{environment}
\end{subfigure}
\caption{{\bf Inverted pendulum safe set with Bayesian model.} Surrogates are obtained with increasing amounts of data. The initial state and input perturbation from the safe policy are drawn from Gaussians whose standard deviation doubles every $10k$ points. {\bf Top}: Mean predictions and uncertainty contours for the NCP-BRNN model. After $90k$ points no further improvement is noticed. {\bf Bottom}: Comparison of safe sets with surrogates and environment. Reducing the model uncertainty while maintaining a similar prediction accuracy leads to an increase in the safe set. After $90k$ points no further benefits are noticed on the set, which is consistent with the uncertainty estimates. }
\label{fig:pendulum_lyap}
\end{figure}
\paragraph{Verification on the environment.}
The candidate Lyapunov function, safe level set, and robust control policy are formally verified through probabilistic sampling of the system state, according to Algorithm \ref{alg:verification}, where the simulation is used directly. The results for $5k$ samples are shown in Figure \ref{fig:verification}. In particular, the computed level sets verify at the first attempt and no further search for sub-levels or ultimate bounds is needed.
\begin{wrapfigure}{r}{7.5cm}
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[trim={40 23.5 0 20}, clip, scale=0.3]{figures/pendulum/random_input.png}
\caption*{\centering{\hspace*{1em} \scriptsize Semi-random exploration\newline \hspace*{2em} $50$ trials of $1k$ steps\newline $\text{vol}=0.06$}}
\end{subfigure}
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[trim={44.5 26 0 23.5}, clip, scale=0.3]{figures/pendulum/exploration.png}
\caption*{\centering{\hspace*{1em} \scriptsize Safe-MPC exploration\newline \hspace*{2em} $1$ trial of $50k$ steps\newline $\text{vol}=0.04$}}
\end{subfigure}
\caption{{\bf Safe exploration}. Comparison of a naive semi-random exploration strategy with the proposed Safe-MPC for exploration. The proposed algorithm has an efficient space coverage with safety guarantees. }
\label{fig:pendulum_explore}
\end{wrapfigure}
\paragraph{Safe exploration.}
Safe exploration is performed using the min-max approach in Section \ref{sec:exploration}. For comparison, a semi-random exploration strategy is also used: while inside the safe set, the action magnitude is set to the maximum torque with its sign given by a uniform random variable; once $V(x)\geq 0.99\ l_s$, the safe policy $K$ is used. This does not provide any formal guarantee of safety, as the value of $V(x)$ could exceed the safe level, especially for very fast systems and large input signals. This is repeated for several trials in order to estimate the maximum reachable set within the safe set. The results are shown in Figure \ref{fig:pendulum_explore}, where the semi-random strategy is used as a baseline and is compared to a single trial of the proposed safe-exploration algorithm. The area covered by our algorithm in a \emph{single trial} of $50k$ steps is about $67\%$ of that of the semi-random baseline over $50$ trials of $1k$ steps. Extending the length of the trials did not significantly improve the baseline results. Despite being more conservative, our algorithm continues to explore safely indefinitely.
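The semi-random baseline described above amounts to a two-branch policy; a minimal sketch, with the function names chosen here for illustration:

```python
import numpy as np

def semi_random_action(V, x, K, u_max, l_s, rng, threshold=0.99):
    """Semi-random exploration baseline: inside the safe set, apply the
    maximum torque with a random sign; once V(x) >= threshold*l_s, fall
    back to the safe policy K. Carries no formal safety guarantee."""
    if V(x) >= threshold * l_s:
        return K(x)
    return u_max * (1.0 if rng.random() < 0.5 else -1.0)
```

The lack of guarantees comes from the switch being triggered only after $V$ is already near the level set: a large input applied just inside the threshold can still carry the state across it in one step.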
\section{Conclusions}
Preliminary results show that SiMBL produces a Lyapunov function and a safe set, parametrised by neural networks, that are comparable with those of standard optimal control (LQR) and can account for state-dependent additive model uncertainty. A Bayesian RNN surrogate with NCP was proposed and trained for an inverted pendulum simulation. An alternate descent method was presented to jointly learn a Lyapunov function, a safe level set, and a stabilising control policy for the surrogate model with back-propagation. We demonstrated that adding data points to the training set can increase the safe-set size provided that the model improves and its uncertainty decreases. To this end, an uncertainty prior from the previous model was added to the framework. The safe set was then formally verified through a novel probabilistic algorithm for ultimate bounds and used for safe data collection (exploration). A one-step safe MPC was proposed where the Lyapunov function provides the terminal cost and constraint to mimic an infinite horizon with high probability of recursive feasibility. Results show that the proposed safe-exploration strategy has better coverage than a naive policy which switches between random inputs and the safe policy.
\bibliographystyle{apa-good}
\section*{Introduction}
Recently there has been considerable progress in understanding the notion of
ellipticity on noncompact manifolds and manifolds with singularities. For a
wide class of manifolds, ellipticity conditions for operators were established
and the corresponding finiteness theorems\footnote{Stating that an elliptic
operator is Fredholm in certain function spaces.} were proved; the
corresponding operator $C^*$-algebras were constructed. Hence the study of
\textit{topological} aspects of the theory of elliptic operators becomes
topical. Here one mainly speaks of the classification problem and the index
problem. Note that Gelfand's homotopy classification problem for elliptic
operators can naturally be restated in modern language as the problem of
computing the $K$-groups of symbol algebras, which prove to be noncommutative in most cases. Thus Gelfand's problem naturally fits into the framework of current problems of Connes's noncommutative geometry~\cite{Con1}.
\subsubsection*{Aim of the paper}
This paper deals with elliptic theory on manifolds with corners.
Operators on manifolds with corners have been actively studied, and a number of
important interesting results emerged recently. For example, the $C^*$-closure
of symbol algebras was studied in~\cite{MeNi2}, and a spectral sequence
converging to the $K$-theory of the $C^*$-algebra of symbols was constructed.
Monthubert~\cite{Mon3} obtained a description of the operator algebra in the
spirit of noncommutative geometry in terms of a special groupoid that can be
associated with a manifold with corners (see also \cite{LeMo1}).
Bunke~\cite{Bun2} constructed index theory of Atiyah--Patodi--Singer type for
Dirac operators and studied cohomological obstructions to elliptic problems
(see also \cite{Loy1,LaMo3}); Monthubert and Nistor~\cite{MoNi1} produced a
formula for the boundary map in the $K$-theory of symbol algebras in
topological terms. Krainer~\cite{Kra1} studied boundary value problems in this
setting.
Although these results permitted finding the group classifying the homotopy
classes of elliptic operators in a number of special cases (e.g.,
see~\cite{MePi2} or \cite{Nis1}), the homotopy classification problem remained
open.
We solve Gelfand's problem for \emph{manifolds with corners}. Our goal is
to obtain a simple explicit formula for the classifying group in terms of
Atiyah's $K$-homology functor \cite{Ati4}.
\subsubsection*{Elliptic operators and $K$-homology}
Note that the idea of classifying elliptic operators by the $K$-homology
functor has long been known. For the reader's convenience, we recall it
using operators on a smooth compact manifold $M$ as an example.
The commutator of an elliptic zero-order\footnote{Working solely with
zero-order operators does not result in loss of generality, since order
reduction (say, multiplication by an appropriate power of the Laplace
operator) is always available.} operator $D$ on $M$ with the operator of
multiplication by a continuous function $f\in C(M)$ is compact
\begin{equation}\label{compa1}
[D,f]\in \mathcal{K}.
\end{equation}
By one definition, the \emph{contravariant $K$-theory} $K^0(C(M))$ of the
algebra $C(M)$ just consists of Fredholm operators for which the
commutators~\eqref{compa1} are compact. Thus $D$ determines an element of the
group $K^0(C(M))$, which is isomorphic to the $K$-homology group of $M$:
$$
K^0(C(M))\simeq K_0(M)
$$
by the Atiyah--Brown--Douglas--Fillmore--Kasparov theorem. Thus, assigning
the corresponding class in the $K$-homology to each elliptic operator, we
obtain a mapping
$$
\Ell(M)\longrightarrow K_0(M),
$$
where $\Ell(M)$ is the group of elliptic operators in sections of bundles
on $M$ modulo stable homotopy and $K_0(M)$ is the even $K$-homology group
of ${M}$.
Kasparov showed this mapping to be an isomorphism. In other words, the
$K$-homology group of a smooth manifold classifies elliptic operators on
this manifold modulo stable homotopy.
This approach to classification also proved fruitful in the case of compact
stratified manifolds with singularities. Namely, it was shown
in~\cite{R:NaSaSt3} that in this case the even $K$-homology group of the
underlying compact topological space classifies elliptic operators on this
manifold.
However, no classification results were known for manifolds with corners of
codimension $\ge 2$. The classification in the form of the $K$-homology of the
manifold with corners, which suggests itself, is too meagre to be true: one can
always smooth the corners, and we see that the $K$-homology of the manifold
with corners is too coarse an invariant, for it does not take into account the
structure of a manifold with corners.
Moreover, even the space whose $K$-homology would classify elliptic operators was unknown.
\subsubsection*{Main result}
We establish the isomorphism
\begin{equation}
\label{funda1} \Ell(M)\simeq K_0({M}^\#),
\end{equation}
where $M$ is a manifold with corners and ${M}^\#$ is the dual manifold (see
\cite{Part1}) which is a stratified manifold with singularities. More
precisely, the isomorphism \eqref{funda1} will be established under the
following assumption concerning the combinatorial structure of the faces of
our manifold:
\begin{center}
\textit{The normal bundles of all faces of $M$ are trivial.}
\end{center}
If this assumption fails, then, generally speaking, the isomorphism
\eqref{funda1} does not hold. In this case, one should abandon the search for a
classifying \emph{space} and seek some \textit{algebra} whose $K$-cohomology
classifies elliptic operators. This algebra proves to be noncommutative, and
one needs to use ideas on noncommutative geometry. These results will be
considered elsewhere.
Note an interesting special case: if a manifold with corners is a
polyhedron with a given triangulation of the boundary, then the dual
stratified space is also a polyhedron, namely, the one used in the
classical proof of Poincar\'e duality in cohomology! For example, if $M$ is
a cube, then ${M}^\#$ is an octahedron. Thus the construction of the dual
manifold in the first part~\cite{Part1} of the present paper generalizes
the \textit{Poincar\'e dual polyhedron} to the case of noncontractible
faces.
\subsubsection*{Manifolds with corners and manifolds with multicylindrical ends}
Note that there is a different perspective on the theory of operators on
manifolds with corners. An application of a logarithmic change of variables
in a neighborhood of the boundary taking the boundary to infinity (see
Fig.~\ref{ris4}, where this is shown for manifolds with boundary) results
in the class of so-called \emph{manifolds with multicylindrical ends}.
These two pictures give the same operator algebras. Thus the results of the
present paper also provide classification on manifolds with
multicylindrical ends.
\begin{figure}
\begin{center}
\includegraphics[height=5cm]{etmcii.eps}
\caption{Transition from a neighborhood of the
boundary to an infinite cylinder.}\label{ris4}
\end{center}
\end{figure}
\subsubsection*{Outline of the paper}
This is the second of the two parts of the paper. In the first
part~\cite{Part1}, the dual manifold of a manifold with corners was
constructed and calculus of pseudodifferential operators ($\psi$DO) on
manifolds with corners was developed in the $C^*$-algebraic context.
The present part has the following structure.
In the first section, we recall some information from~\cite{Part1}. In
Sec.~2, we state the classification theorem. The proof occupies the next
three sections. Note that the general scheme of the proof is the same as
in~\cite{R:NaSaSt3}, and we proceed by induction, passing from a smooth
manifold to increasingly complex manifolds with singularities. In Sec.~6 we
discuss the relationship with some results due to Monthubert and Nistor. As
a consequence of the classification theorem, we obtain a $K$-homology
criterion for the vanishing of the index and a formula for the $K$-group of
the $C^*$-algebra of $\psi$DO with zero interior symbol (this algebra
corresponds to the $C^*$-algebra of the groupoid constructed by
Monthubert). In the appendix, we prove a higher analog of the relative
index theorem, which naturally arises when we obtain the classification of
operators.
\section{Manifolds with Corners and Dual Stratified Manifolds}\label{pargeom}
\subsubsection*{Manifolds with corners and faces}
Here we recall some information given in~\cite{Part1}.
Consider a manifold $M$ with corners of depth $k$. It has a natural
stratification
$$
M=\bigcup_{j=0}^{k} M_j,
$$
where the stratum $M_0=M^\circ$ is just the interior of $M$ and each stratum
$M_j$ is the union of connected components, open faces $M_{j\alpha}$ of
codimension $j$ in $M$.
Each face $F=M_{ja}$ in the stratum $M_j$ is isomorphic to the interior of a
manifold $\overline{F}=\overline{M}_{ja}$, which will be called a closed face of $M$.
Faces of codimension one are called \emph{hyperfaces}.
\subsubsection*{Main assumption}
The main results of the paper will be obtained under the following
assumption.
\begin{assumption}\label{assa1}
The normal bundle $N_+F$ of each face $F$ is trivial.
\end{assumption}
In this case, the local defining functions $\rho_1,\ldots,\rho_j$ of $F$ are
globally defined as functions on the normal bundle $N_+F$.
\begin{remark*}
Assumption \ref{assa1} holds if all hyperfaces are \emph{embedded}, i.e., if
there exists a global defining function for each hyperface $F\subset M$.
However, it also holds for some manifolds with nonembedded hyperfaces, e.g., for the raindrop. The simplest example of a manifold with corners that does not satisfy Assumption~\ref{assa1} is the Klein bottle with a raindrop instead of the circle as the base.
\end{remark*}
\subsubsection*{The dual space}
The dual space $M^\#$ of a manifold $M$ with corners was introduced
in~\cite{Part1}. If the original manifold is represented as the union
$$
M=\bigcup_{j\ge 0} M_j,\qquad M_j=\bigcup_{\alpha} M_{j\alpha},
$$
then ${M}^\#$ is the union of \emph{dual faces},
$$
{M}^\#=\bigcup_{j\ge 0} {M}^\#_j,\qquad {M}^\#_j=\bigcup_{\alpha}
{M}^\#_{j\alpha},
$$
each of which is isomorphic to the interior of a simplex
$$
{M}^\#_{j\alpha}\simeq \Delta^\circ_{j-1}.
$$
Here, by definition, ${M}^\#_0:=M_0$ is the interior of $M$. Thus to each
face $F$ of codimension $j$ in $M$ there corresponds a simplex $F^\#$ of
dimension $j-1$ in the dual space.
\subsubsection*{The fibration structure on $M^\#$}
It was proved in~\cite{Part1} that a neighborhood ${U}^\#$ of the stratum
${F}^\#$ is homeomorphic to the product of ${F}^\#$ by the cone
$$
K_{\Omega}=[0,1)\times \Omega/\{0\}\times \Omega
$$
whose base $\Omega$ is the dual space ${\overline{F}}^\#$ of the closed
face $\overline{F}$ (which is well defined, since $\overline{F}$ itself is
a manifold with corners). As a result, we find that ${M}^\#$ is a
stratified manifold with singularities.
\section{Classification Theorem}\label{maina}
Let $M$ be a manifold with corners satisfying Assumption \ref{assa1}, and
let $\Psi(M)$ be the $C^*$-algebra of zero-order $\psi$DO in the space
$L^2(M)$ (see~\cite{Part1}). The notion of a $\psi$DO acting on sections of
finite-dimensional vector bundles on $M$ is introduced in the usual way.
There is a natural equivalence relation, \textit{stable homotopy}, on the
set of elliptic operators. Recall the definition.
\begin{definition}
Two elliptic operators
$$
D:L^2(M,E)\to L^2(M,F)\;\;\text{ and }\;\; D':L^2(M,E')\to L^2(M,F')
$$
on $M$ are said to be \textit{stably homotopic} if there exists a
continuous homotopy
$$
D\oplus 1_{E_0}\sim f^*\bigl(D'\oplus 1_{F_0}\bigr)e^*
$$
of elliptic operators, where $E_0,F_0\in{\rm Vect}(M)$ are vector bundles and
$$
e:E\oplus E_0\longrightarrow E'\oplus F_0,\qquad f:F'\oplus
F_0\longrightarrow F\oplus E_0
$$
are bundle isomorphisms.
\end{definition}
Here ellipticity is understood as the invertibility of all components of
the symbol of the operator, and only homotopies of $\psi$DO preserving
ellipticity are considered.
\subsubsection*{Even groups $\Ell_0({M})$}
Stable homotopy is an equivalence relation on the set of elliptic $\psi$DO
acting in sections of vector bundles. By $\Ell_0({M})$ we denote the
corresponding quotient set. It is a group with respect to the direct sum of
operators, and the inverse in this group is given by the coset of the
almost inverse operator (i.e., an inverse modulo compact operators).
\subsubsection*{Odd groups $\Ell_1({M})$}
Odd elliptic theory $\Ell_1({M})$ is defined in a similar way as the group
of stable homotopy classes of elliptic self-adjoint operators.
Stabilization is defined in terms of the operators $\pm Id$.
\begin{remark}
An equivalent definition of the odd $\Ell$-group can be given in terms of
smooth operator families on ${M}$ with parameter space $\mathbb{S}^1$
modulo constant families.
\end{remark}
We shall compute the groups $\Ell_*(M)$ for $*=0$ and $*=1$, i.e., find
the classification of elliptic operators modulo stable homotopy. Our
approach is based on the following fact (see the definition of $\psi$DO
in~\cite{Part1}):
\begin{center}
\emph{$\psi$DO on $M$ can be viewed as local operators in the sense of
Atiyah on the dual manifold ${M}^\#$.}
\end{center}
Thus an elliptic $\psi$DO defines a Fredholm module on the space
$L^2(M)$ viewed as a $C({M}^\#)$-module. (For Fredholm modules and
$K$-theory, see~\cite{HiRo1} or \cite{Bla1}).
\subsubsection*{Classification of elliptic operators}
The following theorem is the main result of this paper.
\begin{theorem}\label{th1}
The mapping that takes each elliptic $\psi$DO to the corresponding Fredholm
module defines the group isomorphism
\begin{equation}\label{iso1}
\Ell_*(M)\stackrel\simeq\longrightarrow K_*({M}^\#).
\end{equation}
\end{theorem}
We shall obtain this theorem as a special case of the following more
general theorem.
\subsubsection*{Classification of partially elliptic operators \rom(cf.~\cite{R:NaSaSt3}\rom).}
Let $\Ell_*\left(M_{\ge j}\right)$ be the group generated by operators
whose symbols are invertible on the main stratum and all faces of
codimension $\ge j$. Thus we consider operators satisfying the ellipticity
condition on part of the faces.
The corresponding dual space
$$
{M}^\#_{\ge j}:={M}^\#\setminus \bigcup^{j-1}_{j'=1}{M}^\#_{j'}
$$
is obtained from ${M}^\#$ by
deleting all simplices of dimension $\le j-2$.
\begin{lemma}\label{lem1a}
An operator $D$ such that $[D]\in \Ell_*\left(M_{\ge j}\right)$ defines a
Fredholm module over the algebra $C_0({M}^\#_{\ge j})$ of functions on the
dual space.
\end{lemma}
\begin{proof}
We must verify the defining property of a Fredholm module: the
expression
$$
f(DD^*-1)
$$
is compact for all $f\in C_0\left({M}^\#_{\ge j}\right)$ (here we assume
that $D$ is normalized by the condition $\sigma^*_s(D)=\sigma_s(D)^{-1}$
for $s\ge j$). The compactness follows from the fact that, by construction,
on each face $F\subset M$ either the corresponding symbol of our operator
is invertible or the functions in the algebra
$C_0\left({M}^\#_{\ge j}\right)$ are zero.
\end{proof}
By Lemma~\ref{lem1a}, the mapping
\begin{equation}\label{iso2}
\Ell_*\left(M_{\ge j}\right)\stackrel{\varphi_j}\longrightarrow
K_*\left({M}^\#_{\ge j}\right)
\end{equation}
that takes partially elliptic operators to the corresponding Fredholm
modules is well defined (cf.~\cite{R:NaSaSt3}).
\begin{theorem}\label{th2}
For each $1\le j\le k+1$, the mapping \eqref{iso2} is an isomorphism.
\end{theorem}
Theorem \ref{th1} is the special case of Theorem \ref{th2} for $j=1$.
\section{Beginning of Proof of the Classification Theorem}
We prove Theorem~\ref{th2} by induction on $j$ decreasing from $k+1$ (where
$k$ is the depth of $M$) to~$1$.
\subsubsection*{Base of the induction}
For $j=k+1$, the group $\Ell_*(M_{\ge j})$ classifies elliptic interior
symbols and hence is isomorphic to $K_c^*(T^*M)$. Moreover, the mapping
taking the symbol to the corresponding operator determines an isomorphism
$K_c^*(T^*M)\simeq K_*(M_0)$ (e.g., see~\cite{Kas3}). On the other hand,
the right-hand side of \eqref{iso2} in this case is just the group
$K_*(M_0)$. Thus the theorem holds for $j=k+1$.
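Schematically, the base case can be summarized by the following chain of
isomorphisms (our summary of the argument above; the middle isomorphism is the
one from~\cite{Kas3}):

```latex
% Base case j = k+1 (our summary): there are no faces of codimension
% >= k+1, so ellipticity is just invertibility of the interior symbol,
$$
\Ell_*\bigl(M_{\ge k+1}\bigr)\;\simeq\; K_c^{*}(T^*M)\;\simeq\; K_{*}(M_0)
\;=\; K_{*}\bigl({M}^\#_{\ge k+1}\bigr),
$$
% where the last equality uses M^#_{>= k+1} = M^#_0 = M_0.
```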
\subsubsection*{Inductive step}
To justify the inductive step, we need to study exact sequences in
$K$-homology and $K$-theory permitting one to relate the maps $\varphi_j$
in \eqref{iso2} for two values of the subscript, $j$ and $j+1$.
\subsection{Exact sequence in $K$-homology (see~\cite{R:NaSaSt3})}\label{geom}
Consider the embedding
\begin{equation}\label{pair1}
{M}^\#_{\ge j}\supset{M}^\#_{j} .
\end{equation}
The complement ${M}^\#_{\ge j}\setminus{M}^\#_{j}$ is obviously equal to
${M}^\#_{\ge j+1}$, and we have the exact sequence of the
pair~\eqref{pair1} in $K$-homology
\begin{equation}\label{seq1}
\ldots\rightarrow K_*({M}^\#_{j})\rightarrow K_*({M}^\#_{\ge j})\rightarrow
K_*({M}^\#_{\ge j+1})\stackrel\partial\rightarrow
K_{*+1}({M}^\#_{j})\rightarrow\ldots.
\end{equation}
All maps but the boundary map $\partial$ in this sequence correspond to a
change of module structure on the corresponding Fredholm modules. The
boundary map $\partial$ can be reduced to a form convenient for
computations by the following standard method.
Let ${U}^\#\subset {M}^\#_{\ge j}$ be the open neighborhood of the stratum
${M}^\#_{j}$ constructed\footnote{The neighborhood $U$ is defined as the
union of neighborhoods of all simplices $F^\#\subset M^\#_j$. By
construction, these neighborhoods are disjoint.} in~\cite[Sec.~1.2]{Part1}.
We have the homeomorphism
\begin{equation}
{U}^\#\simeq {M}^\#_{j}\times K_{{\overline{M}_j}^\#},
\end{equation}
where $\Omega_j:={\overline{M}_j}^\#$ and the cone $K_{\Omega_j}$ is the
disjoint union of the cones corresponding to the connected components of the
base $\Omega_j$.
Then we have the mappings
$$
{M}^\#_{j}\times (0,1)\stackrel{\pi}{\longleftarrow}{M}^\#_{j}\times
(0,1)\times \Omega_j\stackrel{\simeq}\leftarrow {U}^\#\setminus
{M}^\#_{j}\stackrel{l}\longrightarrow {M}^\#_{\ge j+1}
$$
(by $l$ we denote the embedding of an open manifold, and $\pi$ is the
projection onto the first two factors), which permit us to represent the
boundary map $\partial$ in \eqref{seq1} as the composition
\begin{equation}\label{Kgranica}
K_*({M}^\#_{\ge j+1})\stackrel{l^*}\rightarrow K_*({U}^\#\setminus
{M}^\#_{j})\stackrel{\pi_*}\longrightarrow K_*((0,1)\times
{M}^\#_{j})\stackrel\beta\simeq K_{*+1}({M}^\#_{j})
\end{equation}
of the restriction $l^*$ of operators to an open set, the push-forward
$\pi_*$, and the periodicity isomorphism $\beta$. This representation
follows from the fact that $\partial$ is natural.
\subsection{Exact sequence related to elliptic operators (see~\cite{MeNi2})}
Let $M$ be a manifold with corners of depth $k>0$, and let $j$, $1\le j\le
k$, be some number. We denote the algebra formed by the symbols
$(\sigma_0,\sigma_{j},\sigma_{j+1},\ldots,\sigma_k)$ of all $\psi$DO on $M$
by
$$
\Sigma_j=\im (\sigma_0,\sigma_{j},\sigma_{j+1},\ldots,\sigma_k).
$$
Then we have the short exact sequence of $C^*$-algebras
\begin{equation}\label{seq2}
0\to J\to \Sigma_j\to \Sigma_{j+1}\to 0.
\end{equation}
Here the ideal $J$ consists of the symbols
$(\sigma_0,\sigma_{j},\sigma_{j+1},\ldots,\sigma_k)$ in which all
components but $\sigma_{j}$ are zero. From the compactness criterion for
$\psi$DO and compatibility conditions for symbols (see~\cite{Part1}), we
see that under these conditions the symbol $\sigma_{j}$ is a tuple of
compact-valued families decaying at infinity, so that one has the
isomorphism
$$
J\simeq \bigoplus_{F\subset M_j} C_0(\mathbb{R}^{j},\mathcal{K}(L^2(F))),
$$
where the sum is taken over faces $F$ of codimension $j$ in $M$.
By virtue of this isomorphism, we can write out the exact sequence in
$K$-theory corresponding to the short sequence \eqref{seq2} in the form
\begin{equation}\label{seq3}
\ldots\to K_*(J)\to K_*(\Sigma_j)\to
K_*(\Sigma_{j+1})\stackrel{\delta}\longrightarrow K_{*+1}(J)\to\ldots.
\end{equation}
Clearly,
$$
K_*(J)\simeq K_*(C_0(\mathbb{R}^j))\oplus K_*(C_0(\mathbb{R}^j))\oplus\ldots=\mathbb{Z}^l,
$$
where $l$ is the number of connected components in $M_j$. In terms of this
isomorphism, the boundary map $\delta$ can be represented (for $*=1$) in
the following simple form. An arbitrary class
$$
[\sigma]\in K_1(\Sigma_{j+1})
$$
is realized by an invertible symbol
$$
\sigma=(\sigma_0,\sigma_{j+1},\sigma_{j+2},\ldots,\sigma_k).
$$
(From now on, for brevity we carry out the computations only for $K$-theory
elements representable by scalar operators; the consideration of the matrix
case differs only in the awkwardness of formulas.) Take an arbitrary symbol
$\sigma_j$ compatible with $\sigma$. The symbol $\sigma_{j}$ defines an elliptic
family with parameters in $\mathbb{R}^{j}$, and the index of that family is
a well-defined element of the $K$-group with compact supports of the
parameter space. One has
$$
\delta[\sigma]=\ind \sigma_j\in \bigoplus_{F\subset
M_j}K_0(C_0(\mathbb{R}^j)).
$$
There is a similar expression for the boundary map for the case $*=0$. (To
obtain it, one can pass to the suspension.)
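For completeness, let us verify the computation $K_*(J)\simeq\mathbb{Z}^l$
used above (a standard Bott periodicity argument, written out by us):

```latex
% Each face F of codimension j contributes a summand
% C_0(R^j, K(L^2(F))) to J. By stability of K-theory under tensoring
% with compact operators and by Bott periodicity,
$$
K_i\bigl(C_0(\mathbb{R}^{j},\mathcal{K}(L^2(F)))\bigr)
\;\simeq\; K_i\bigl(C_0(\mathbb{R}^{j})\bigr)
\;\simeq\; K_{i+j}(\mathbb{C})
\;\simeq\;
\begin{cases}
\mathbb{Z}, & i+j \text{ even},\\
0, & i+j \text{ odd},
\end{cases}
$$
% so in the relevant degree each of the l components of M_j
% contributes one copy of Z.
```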
\subsection{Comparison of exact sequences}
Let us show that the sequences \eqref{seq1} and \eqref{seq3} can be
combined into the commutative diagram
\begin{equation}\label{maindiagram}
\begin{array}{rccccl}
\ldots\to K_*(J)\quad& \to K_*(\Sigma_j)\to & K_*(\Sigma_{j+1})&
\stackrel{\delta}\longrightarrow & K_{*+1}(J)& \to\ldots\\
\downarrow\varphi_0\quad & \downarrow\varphi_j & \downarrow\varphi_{j+1} & & \downarrow\varphi_0 \\
\ldots\rightarrow K_{*+1}({M}^\#_{j})& \to K_{*+1}({M}^\#_{\ge j
})\to & K_{*+1}({M}^\#_{\ge j+1})& \stackrel{\partial}\longrightarrow &
K_{*}({M}^\#_{j})& \to\ldots
\end{array}
\end{equation}
(The construction of this diagram and the verification of its commutativity
will be finished in Sec.~\ref{compare1}.)
\subsubsection*{First, we define the vertical maps in the diagram.}
Without loss of generality, we can assume that $M$ has no connected
components with empty boundary. (On such components, the classification is
classical.) Then for all $j$ we have the isomorphism
\cite{Sav8}\footnote{This isomorphism generalizes the well-known expansion
$K^1(S^*M)\simeq \Ell(M)\oplus K^1(M)$ on a smooth closed manifold $M$ on
which there exists a nonzero vector field. Elimination of closed components
permits us to claim that there exists a nonzero vector field in our
situation.}
\begin{equation}\label{dek1}
K_*(\Sigma_j)\simeq \Ell_{*+1}(M_{\ge j})\oplus K_*(C(M)).
\end{equation}
Hence we define the maps $\varphi_j$, $j\ge 1$, in diagram
\eqref{maindiagram} as the composition
$$
K_*(\Sigma_j)\longrightarrow \Ell_{*+1}(M_{\ge j})\longrightarrow
K_{*+1}({M}^\#_{\ge j })
$$
of the projection onto the $\Ell$-group and the quantization \eqref{iso2}.
Thus these maps are induced by quantization, which takes symbols to
operators.
It remains to define the map $\varphi_0$. Just as the other vertical arrows
in the diagram, it is defined by quantization, namely, by quantization of
symbols $\sigma=\sigma_{j}$ in the ideal $J$. The quantization of elements
of the ideal differs from the quantization of general elements of the
algebra $\Sigma_j$ only in that the operator is considered in the $L^2$
space in a small neighborhood $U$ of the stratum $M_{j}$ in $M$ constructed
in~\cite[Lemma~1.9]{Part1}. We denote the operator by $\widehat{\sigma}_j$.
Let us define a module structure on $L^2(U)$. To this end, recall that
$U$ can also be considered as a subset of the positive quadrant $N'_+M_j$ of the logarithmic
normal bundle. Thus, this space is naturally a
$C_0({M}^\#_{j})$-module. (Elements of $C_0({M}^\#_{j})$ act on $N'_+M_j$ as
operators of multiplication by radially constant functions $f(y)$,
in logarithmic coordinates $y=-\ln
\rho$.) The verification of the locality of the operator with respect to this
module structure (i.e., proving that the operator $\widehat{\sigma}_j$
commutes with operators of multiplication by functions modulo compact operators)
is immediate, and hence for the element $[\sigma_j]\in K_{*+1}(J)$ we define the element
\begin{equation}\label{var1}
\varphi_0[\sigma_j]:=[\widehat{\sigma}_j]\in K_*({M}^\#_j).
\end{equation}
\subsubsection*{Diagram \eqref{maindiagram} commutes}
The commutativity of the middle square of the diagram follows directly from
definitions.
\begin{lemma}
The left square of diagram \eqref{maindiagram} commutes.
\end{lemma}
\begin{proof}
Indeed, consider the composition of mappings passing through the right
upper corner of the square
$$
\begin{array}{rccccl}
K_*(J)& \to & K_*(\Sigma_j)\\
\downarrow\varphi_0 & & \downarrow\varphi_j \\
K_{*+1}({M}^\#_{j})& \to & K_{*+1}({M}^\#_{\ge j }).
\end{array}
$$
It takes an elliptic symbol $\sigma_j$ to the operator
$\widehat{\sigma}_j$, which acts outside a neighborhood $U$ of the stratum
${M}_j$ as the identity operator (modulo compact operators) in the space
$L^2(M)$ with the natural structure of a $C_0({M}^\#_{\ge j
})$-module. Now if we restrict the operator to a neighborhood of ${M}_j$
(the element in $K$-homology remains unchanged, since the corresponding
Fredholm module changes by a degenerate module) and then use a homotopy to
reduce the module structure to the composition
$$
C_0({M}^\#_{\ge j })\stackrel{i^*}\to C_0({M}^\#_j)\stackrel{\pi^*}\to
C({U}^\#)\to \mathcal{B}(L^2(U)),
$$
where $\pi:{U}^\#\to {M}^\#_j$ is a projection and $i:{M}^\#_j\subset
{M}^\#_{\ge j}$ is an embedding, then we obtain the Fredholm module
assigned to the symbol $\sigma_j$ by the composition of maps passing
through the left bottom corner of the square. The commutativity of the square
is thereby established.
\end{proof}
Verification of the commutativity of the square containing the boundary
maps is rather cumbersome, and so we carry it out in a separate section.
\section{Boundary and Coboundary Maps}
In this section, we establish the commutativity of the square
\begin{equation}\label{kvad}
\begin{array}{ccc}
K_*(\Sigma_{j+1})&
\stackrel{\delta}\longrightarrow & K_{*+1}(J)\\
\downarrow\varphi_{j+1} & & \downarrow\varphi_0 \\
K_{*+1}({M}^\#_{\ge j+1})& \stackrel{\partial}\longrightarrow &
K_{*}({M}^\#_{j})
\end{array}
\end{equation}
containing the boundary maps in diagram \eqref{maindiagram}. The scheme of
proof is as follows. We
\begin{enumerate}
\item Compute the composition $\varphi_0\circ\delta$.
\item Compute the composition $\partial\circ \varphi_{j+1}$.
\item Compare the resulting expressions.
\end{enumerate}
\subsection{Composition $\varphi_0\circ\delta$}
Let $[\sigma]\in K_*(\Sigma_{j+1})$ be the element defined by some symbol
$\sigma=(\sigma_0,\sigma_{j+1},\ldots,\sigma_k)$. Take a symbol $\sigma_j$
on $M_j$ compatible with $\sigma$ and denote by
\begin{equation}\label{sij}
\widehat{\sigma}_j:L^2(NM_j)\to L^2(NM_j)
\end{equation}
the corresponding translation-invariant infinitesimal operator. (It is
conjugate to $\sigma_j$ by the Fourier transform.)
Representing the space $N'_+M_j$ as the product $ N'_+M_j\simeq M_j\times
{\mathbb{R}_+^j}$, we see that $L^2(NM_j)$ is a
$C_0(\bigsqcup\mathbb{R}^j_+)$-module. Here $\bigsqcup\mathbb{R}^j_+$ is
the disjoint union of as many open quadrants as there are faces of
codimension $j$ in $M$. The operator $\widehat{\sigma}_j$ is local with
respect to this module structure. We denote the corresponding element of
the $K$-homology group by
\begin{equation}\label{var2}
[\widehat{\sigma}_j]\in K_{*+1}(\bigsqcup\mathbb{R}^j_+).
\end{equation}
\begin{lemma}\label{l1}
The element $[\sigma]\in K_*(\Sigma_{j+1})$ satisfies the chain of
relations
\begin{equation}\label{lem2}
\varphi_0\delta[\sigma]=\varphi_0(\ind
\sigma_j)=\beta[\widehat{\sigma}_j]\in K_*({M}^\#_{j}),
\end{equation}
where
$\beta:K_{*+1}(\bigsqcup\mathbb{R}_+^j)=K_{*+1}({M}^\#_{j}\times\mathbb{R}_+)
\longrightarrow K_*({M}^\#_{j})$ is the Bott periodicity isomorphism, and
the index is understood as the index
$$
\ind \sigma_j\in K^{*+1}(\bigsqcup\mathbb{R}^j)\simeq K_{*+1}(J)
$$
of the elliptic operator-valued symbol $\sigma_j$.
\end{lemma}
\begin{proof}
For brevity, we assume that $M_j$ consists of exactly one stratum. In this
case, we have $M_j^\#\times\mathbb{R}_+\simeq \mathbb{R}_+^j$.
The first relation in \eqref{lem2} follows from definitions (since the
boundary map in $K$-theory of algebras is the index map).
1. Let us establish the second relation $\varphi_0(\ind
\sigma_j)=\beta[\widehat{\sigma}_j]$. The proof is based on the diagram
\begin{equation}\label{triang1}
\xymatrix{
K_{*+1}(\Sigma_{j+1})\ar[dr]
\ar[r]^{\ind\sigma_j}& K_*(J)\ar[rd]^{\varphi_0}\ar[d]_q\\
&
K_*({M}^\#_j\times \mathbb{R}_+)\ar[r]^\beta &
K_{*+1}({M}^\#_j),
}
\end{equation}
where the map $K_{*+1}(\Sigma_{j+1})\to K_*({M}^\#_j\times \mathbb{R}_+)$
is induced by the map that takes the symbol $\sigma$ to the operator
$\widehat{\sigma}_j$ in \eqref{var2}. Finally, the group $K_*(J)\simeq
K^*(\mathbb{R}^j)$ is interpreted as the $K$-group of the cotangent bundle
to $\mathbb{R}^j_+$, and the map $q$ is induced by standard quantization
(to a symbol on the cosphere bundle, one assigns a pseudodifferential operator).
2. We claim that diagram~\eqref{triang1} commutes. Indeed, let us verify
the commutativity of the left triangle, i.e., the relation
\begin{equation}\label{luka3}
[\widehat{\sigma}_j]=q[\ind \sigma_j].
\end{equation}
Note that the operator $\widehat{\sigma}_j$ is given over the product
$NM_j=M_j\times \mathbb{R}^j$. Moreover, it can be viewed as a $\psi$DO on
$\mathbb{R}^j$ with operator-valued symbol $\sigma_j=\sigma_j(\xi),$
$\xi\in \mathbb{R}^j$. This symbol is independent of the physical variables
$x\in \mathbb{R}^j$. Without loss of generality, it can be assumed to be
smooth with respect to the parameter $\xi$ (since $\sigma_j(\xi)$, just as
any $\psi$DO with a parameter, can be arbitrarily closely approximated by a
smooth $\psi$DO with a parameter; see~\cite{Part1}). Hence $\sigma_j(\xi)$
is an operator-valued symbol in the sense of \cite{Luk1}; i.e., it has
compact variation with respect to $\xi$, and all of its derivatives, starting
from the first, decay at infinity. Now relation \eqref{luka3} follows by
analogy with the generalized Luke theorem in~\cite{R:NaSaSt3}.
The commutativity of the right triangle follows (see Corollary~\ref{aux1}
in the appendix) from the higher relative index theorem.
3. The commutativity of diagram~\eqref{triang1} implies the second relation
\eqref{lem2}. (The right-hand side is obtained if from the left top corner
of the diagram we go directly to the group $K_*(M_j^\#\times\mathbb{R}_+)$
and then apply the periodicity isomorphism $\beta$.)
\end{proof}
\subsection{Composition $\partial\circ\varphi_{j+1}$}
\subsubsection*{Space $N'_+M_j$ as a manifold with corners}
The image of the positive quadrant $N'_+M_j$ under the inverse of the
logarithmic map is the set $M_j\times [0,1)^j\subset M_j\times
\overline{\mathbb{R}}^j_+=N_+M_j$. Hence we treat $N'_+M_j$ as the interior
of a manifold with corners, denoted by $\overline{N'_+M_j}$. We denote the
corresponding dual space by ${\overline{N'_+M_j}}^\#$. On the complement
${\overline{N'_+M_j}}^\#\setminus {M}^\#_j$, there is a well-defined
projection
\begin{equation}\label{proek}
\begin{array}{ccc}
{\overline{N'_+M_j}}^\# \setminus {M}^\#_j& \stackrel{\pi}\longrightarrow &
{M}^\#_j \times\mathbb{R}_+\\
\displaystyle(y,x,\omega) &\mapsto &\left(\displaystyle\frac{y}{|y|},
\frac{|x|+1}{|y|}\right),
\end{array}
\end{equation}
whose fiber is the space ${\overline{M}_j}^\#$.
\subsubsection*{Reduction into a neighborhood of the edge}
We have the diagram of embeddings
\begin{equation}
\label{emb}
\begin{array}{rcl}
{M}^\#_j\subset& {U}^\# & \subset {M}^\#_{\ge j}\\
& \bigcap &\\
& {\overline{N'_+M}_j}^\#. &
\end{array}
\end{equation}
Let $[\sigma]\in K_{*+1}(\Sigma_{j+1})$ be the element determined by the
symbol $\sigma$ as above. By passing to the corresponding operators, we
obtain the element
$$
\varphi_{j+1}[\sigma]\in K_*({M}^\#_{\ge j+1}).
$$
On the other hand, the infinitesimal operator $\widehat{\sigma}_j$
compatible with $\sigma$ (see~\eqref{sij}) defines the element
$$
[\widehat{\sigma}_j]'\in K_*({\overline{N'_+M_j}}^\#\setminus {M}^\#_j).
$$
This element is well defined, since the components of its symbol are
elliptic on the corresponding strata. We use primes to distinguish this
element from the element \eqref{var2}: although they are determined by one
and the same operator, the module structures on the $L^2$-spaces are different.
The naturality of the boundary map in $K$-homology results in the following
lemma.
\begin{lemma}\label{l2}
One has
$$
\partial \varphi_{j+1}[\sigma]=\partial'[\widehat{\sigma}_j]',
$$
where $\partial':K_*({\overline{N'_+M_j}}^\#\setminus {M}^\#_j)\to
K_{*+1}({M}^\#_j)$ is the boundary map for the pair
${M}^\#_j\subset{\overline{N'_+M_j}}^\#$.
\end{lemma}
\begin{proof}
Let $D$ be some operator on $M$ with symbol $\sigma$.
1. The infinitesimal operator $\widehat{\sigma}_j$ is obtained from $D$ by
localization at the stratum $M_j$. Hence the restrictions $D|_U$ and
$\widehat{\sigma}_j|_U$ of these operators to a small neighborhood $U$ of $M_j$
are connected by a linear homotopy; i.e., one has
\begin{equation}\label{easy1}
[D|_{U}]=[\widehat{\sigma}_j|_U]\in K_*({U}^\#).
\end{equation}
2. By applying the naturality of the boundary map in $K$-homology to the
embedding diagram \eqref{emb}, we obtain
$$
\partial \varphi_{j+1}[\sigma]\equiv\partial [D]=\partial''[D|_{U}],
$$
where $\partial''$ is the boundary map for the pair ${M}^\#_j\subset
{U}^\#$. Now if on the right-hand side of the last relation we replace the
element $[D|_{U}]$ according to \eqref{easy1} and once more use the
naturality of the boundary map, then we obtain the desired relation
$$
\partial \varphi_{j+1}[\sigma]=\partial''[\widehat{\sigma}_j|_U]=
\partial'[\widehat{\sigma}_j]'.
$$
\end{proof}
Thus in what follows, when computing the composition $\partial\circ
\varphi_{j+1}$, we can (and will) work with the operator
$\widehat{\sigma}_j$ on $N'_+M_j$.
\subsubsection*{Homotopy of the module structure}
By \eqref{Kgranica}, the boundary map $\partial'$ in Lemma~\ref{l2} can be
represented as the composition
\begin{equation}\label{kompa1}
K_*({\overline{N'_+M}_j}^\#\setminus {M}^\#_j)
\stackrel{\pi_*}\longrightarrow
K_*({M}^\#_j\times\mathbb{R}_+)\stackrel\beta\rightarrow K_{*+1}({M}^\#_j)
\end{equation}
of the push-forward with respect to the projection $\pi$ and the
periodicity isomorphism.
Unfortunately, although the classes $[\widehat{\sigma}_j]$ and
$\pi_*[\widehat{\sigma}_j]'$ are determined by the same operator
$\widehat{\sigma}_j$, they have different module structures on the space
$L^2(NM_j)$: in the first case, the structure is independent of the
coordinate $x$, while in the second case it depends on $x$
(see~\eqref{kompa1}).
Let us make a homotopy of module structures. To this end, we define a
homotopy
$$
\pi^\varepsilon:{\overline{N'_+M}_j}^\#\setminus
{M}^\#_j\longrightarrow {M}^\#_j\times\mathbb{R}_+
$$
of projections by the formula (cf.~\eqref{proek})
$$
\pi^\varepsilon(y,x,\omega):=\left(\displaystyle\frac{y}{|y|},
\frac{\varepsilon|x|+1}{|y|}\right).
$$
This formula defines a continuous family of maps for $\varepsilon>0$.
However, the family is not continuous as $\varepsilon\to 0$.\footnote{And
hence the map $\pi^0_*$ is not defined on the $K$-group.} Nevertheless,
continuity takes place for the Fredholm modules, as shown by the following
lemma.
\begin{lemma}\label{l3}
The family $\pi^\varepsilon_*(\widehat{\sigma}_j)'$ of Fredholm modules
obtained by the change of module structure defines a homotopy in the sense
of $KK$-theory, and one has
\begin{equation}
\lim_{\varepsilon\to 0}\pi_*^\varepsilon (\widehat{\sigma}_j)'=\pi_*^0
(\widehat{\sigma}_j)',
\end{equation}
whence it follows that $\pi_*[\widehat{\sigma}_j]'=[\pi_*^0
(\widehat{\sigma}_j)]'\in K_*(M_j^\#\times\mathbb{R}_+)$.
\end{lemma}
\begin{proof}
For brevity, we assume that $M_j$ consists of a single face, i.e., is
connected. Then the homotopy in the sense of $KK$-theory means (e.g., see
\cite{Bla1}) that for each function $f\in C_0(\mathbb{R}^j_+)$ the family
$$
g^\varepsilon=(\pi^\varepsilon)^*(f):L^2(N'_+M_j)\longrightarrow
L^2(N'_+M_j)
$$
of operators of multiplication by the functions $(\pi^\varepsilon)^*(f)$ is
strongly $*$-continuous and that the operator families
$$
[g^\varepsilon,\widehat{\sigma}_j],\quad
g^\varepsilon(\widehat{\sigma}_j\widehat{\sigma}_j^{*}-1)
$$
in $L^2(NM_j)$ are continuous families of compact operators as
$\varepsilon\to 0$.
It suffices to prove all these facts for (a dense set of) smooth functions
$f$. If $f$ is smooth, then one should smooth the family $g^\varepsilon$ and
use the composition formulas, which provide the desired compactness and
continuity.
\end{proof}
\subsection{Comparison of the compositions $\varphi_0\circ\delta$ and
$\partial\circ\varphi_{j+1}$}\label{compare1}
Now let us use Lemmas \ref{l1}--\ref{l3}.
We obtain the chain of relations
\begin{multline*}
\partial\varphi_{j+1}[\sigma]\stackrel{\text{Lemma \ref{l2}}}=
\partial'[\widehat{\sigma}_j]'\stackrel{\text{formula \eqref{kompa1}}}
=\beta \pi^1_*[\widehat{\sigma}_j]' \stackrel{\text{Lemma \ref{l3}}}=\beta
[\pi^0_*\widehat{\sigma}_j]'=\\
=\beta[\widehat{\sigma}_j] \stackrel{\text{Lemma
\ref{l1}}}=\varphi_0\delta[\sigma].
\end{multline*}
The unlabeled equality at the end of the first row holds because the
corresponding Fredholm modules coincide identically.
Thus the square \eqref{kvad} commutes, and we have established the
commutativity of diagram \eqref{maindiagram}.
\section{End of Proof of the Classification Theorem}
By virtue of the isomorphism \eqref{dek1}, we can single out and cancel the
summand $K_*(C(M))$ in diagram \eqref{maindiagram} in the terms
$K_*(\Sigma_j)$ and $K_*(\Sigma_{j+1})$. We obtain the diagram
\begin{equation}\label{maindiagram1}
\begin{array}{rccccl}
\ldots\to K_*(J)& \to \Ell_{*+1}(M_{\ge j})\to & \Ell_{*+1}(M_{\ge j+1})&
\stackrel{\delta}\rightarrow & K_{*+1}(J)& \to\ldots\\
\downarrow\varphi_0\quad & \downarrow\varphi_j & \downarrow\varphi_{j+1} & & \downarrow\varphi_0 \\
\ldots\rightarrow K_{*+1}({M}^\#_{j})& \to K_{*+1}({M}^\#_{\ge j
})\to & K_{*+1}({M}^\#_{\ge j+1})& \stackrel{\partial}\rightarrow &
K_{*}({M}^\#_{j})& \to\ldots.
\end{array}
\end{equation}
The map $\varphi_{j+1}$ is an isomorphism by the inductive assumption. The
map $\varphi_0$ is also an isomorphism (see Corollary~\ref{aux1} in the
Appendix). Since the diagram commutes, we can apply the $5$-lemma and
obtain the desired justification of the induction step in Theorem
\ref{th2}: if the map $\varphi_{j+1}$ is an isomorphism,
then so is the map $\varphi_{j}$.
The proof of Theorem \ref{th2} is complete.
\section{Application to the Monthubert--Nistor index}
Let us discuss the relationship with the problems considered by Monthubert
and Nistor \cite{MoNi1}. In the notation of the present paper, for the case
of manifolds with embedded corners they considered the short exact sequence
\begin{equation}\label{shorts}
0\to J\longrightarrow \Psi(M) \stackrel{\sigma_0}\longrightarrow C(S^*M) \to 0,
\end{equation}
where $\sigma_0$ is the interior symbol map, and the ideal $J$ consists of
operators with zero interior symbol. They studied the boundary map
corresponding to this sequence:
$$
\delta:K_*(C(S^*M))\longrightarrow K_{*+1}(J).
$$
For a closed manifold, $J$ is the ideal of compact operators (hence
$K_*(J)\simeq \mathbb{Z}$) and the boundary map coincides with the analytic
index. Moreover, Monthubert and Nistor showed that in the general case this map
has an important topological meaning: it gives the obstruction to the existence
of an invertible operator with a given interior symbol. For these reasons,
Monthubert and Nistor call this map the \emph{analytic index of manifolds with
corners}.
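In formulas, the closed case mentioned above reads as follows (our
restatement; here $\widehat{\sigma}_0$ denotes any operator with interior
symbol $\sigma_0$):

```latex
% Closed manifold: J = K(L^2(M)), and the boundary map of \eqref{shorts}
% is the classical Fredholm index of a quantization of the symbol:
$$
\delta\colon K_1\bigl(C(S^*M)\bigr)\longrightarrow
K_0\bigl(\mathcal{K}(L^2(M))\bigr)\simeq \mathbb{Z},
\qquad
\delta[\sigma_0] \;=\; \ind \widehat{\sigma}_0 .
$$
```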
We claim that the classification theorem readily implies a $K$-homology
criterion for the vanishing of the analytic index. Indeed, consider the diagram
\begin{equation}
\label{mont1}
\begin{array}{cccl}
K_{*+1}({M}^\#) & \longrightarrow&
K_{*+1}(M_0)&\stackrel\partial\longrightarrow
K_{*}({M}^\#\setminus M_0)\\
\varphi_1\uparrow& &\uparrow\varphi_{k+1}\\
K_*(\Psi(M))&\longrightarrow& K_*(C(S^*M))&\stackrel\delta\longrightarrow
K_{*+1}(J),
\end{array}
\end{equation}
where the lower row is the sequence induced by the short exact sequence
\eqref{shorts} and the upper row is the exact sequence of the pair
${M}^\#\setminus M_0\subset {M}^\#$ in $K$-homology. The maps $\varphi_1$ and
$\varphi_{k+1}$ are induced by quantization of elliptic symbols on $M^\#$ and
$M_0$, respectively (cf.~\eqref{maindiagram}). The diagram is obviously
commutative.
From the exactness of the sequences and the obvious commutativity of the
diagram, we obtain the following assertion. Let us assume for simplicity that
$M$ has no connected components with empty boundary.
\begin{proposition}
The analytic index $\delta(x)\in K_{*+1}(J)$ of $x\in K_*(C(S^*M))$ vanishes if
and only if $\partial\varphi_{k+1}(x)=0$.
\end{proposition}
\begin{proof}
1. There are splittings (cf.~\eqref{dek1})
$$
K_*(\Psi(M))\simeq \widetilde{\Ell}_{*+1}(M)\oplus K_{*}(C(M)),\quad
K_*(C(S^*M))\simeq \Ell_{*+1}(M_0)\oplus K_{*}(C(M)),
$$
where $\widetilde{\Ell}$ is the \emph{reduced} $\Ell$-group generated by
operators of index zero. Moreover, the direct summands $K_{*}(C(M))$ can be
cancelled in~\eqref{mont1}. This does not affect the boundary map. Hence, we
obtain the commutative diagram
\begin{equation}
\label{mont2}
\begin{array}{cccl}
\widetilde{K}_{*+1}({M}^\#) & \longrightarrow&
K_{*+1}(M_0)&\stackrel\partial\longrightarrow
\widetilde{K}_{*}({M}^\#\setminus M_0)\\
\varphi_1\uparrow& &\uparrow\varphi_{k+1}\\
K_*(\Psi(M))/K_*(C(M))&\longrightarrow&
K_*(C(S^*M))/K_*(C(M))&\stackrel\delta\longrightarrow K_{*+1}(J),
\end{array}
\end{equation}
where $\widetilde{K}_*$ is the reduced $K$-homology group generated by
operators of index zero.
2. By the classification theorem, the quantization maps $\varphi$ in
\eqref{mont2} induce isomorphisms. Hence, the commutativity of the diagram
readily shows that vanishing of $\delta$ is equivalent to the vanishing of the
boundary map $\partial$ in $K$-homology.
\end{proof}
The reader can readily rewrite this formula in a more explicit form as a
condition on the interior symbol $\sigma_0$. There is also a cohomological
form of this condition. Needless to say, the cohomological formula is only
valid modulo torsion.
\begin{remark*}
One actually has the group isomorphism
$$
K_*(J)\stackrel\simeq\longrightarrow \widetilde{K}_*({M}^\#\setminus M_0)
$$
determined by quantization of operators with zero interior symbol. (One can
readily obtain this isomorphism by reproducing the proof of our classification
theorem. In the proof, only the inductive assumption is changed: now for
$j=k+1$ we claim that $0=0$.)
\end{remark*}
\section{Introduction}
Gabidulin codes \cite{Gabidulin_TheoryOfCodes_1985} can be seen as the analogs of Reed--Solomon (RS) codes in rank metric.
There are several efficient decoding algorithms up to half the minimum rank distance.
However, in contrast to RS codes, there is no polynomial-time decoding algorithm beyond half the minimum distance.
For RS codes, it can be shown that the number of codewords in a ball around \emph{any} received word is always polynomial in the length when the radius of the ball is at most the Johnson radius.
The Guruswami--Sudan algorithm provides an efficient polynomial-time list decoding algorithm of RS codes up to the Johnson radius.
For Gabidulin codes, no polynomial-time list decoding algorithm is known; it is not even known whether such an algorithm can exist.
An exponential lower bound on the number of codewords in a ball of radius $\tau$ around the received word would prohibit polynomial-time list decoding since the
list size can be exponential, whereas a polynomial upper bound would show that it might be possible.
Faure \cite{Faure2006Average} and Augot and Loidreau \cite{AugotLoidreau-JohnsonRankMetric} made the first investigations of this problem.
In this paper, we provide a lower and an upper bound on the list size. The lower bound shows that the list size can be exponential in the length when the radius is at least the Johnson radius and therefore in this region, no polynomial-time list decoding is possible.
The upper bound uses the properties of subspaces and gives a good estimate of the number of codewords in such a ball, but is exponential in the length and therefore does not provide an answer to polynomial-time list decodability in the region up to the Johnson bound.
\section{Preliminaries}
\subsection{Definitions and Notations}
Let $q$ be a power of a prime, let $\mathbb F_{q}$ denote the finite field of order $q$ and let $\mathbb F_{q^m}$ be the extension field of degree $m$ over $\mathbb F_{q}$.
We denote $x^{[i]}= x^{q^i}$ for any integer $i$. A \emph{linearized polynomial},
introduced by Ore \cite{Ore_OnASpecialClassOfPolynomials_1933},
over $\mathbb F_{q^m}$ then has the form
$f(x) = \sum_{i=0}^{d_f} f_i x^{[i]}$,
with $f_i \in \mathbb{F}_{q^m}$.
If the coefficient $f_{d_f}\neq 0$, we call $d_f \overset{\defi}{=} \deg_q f(x)$ the \textit{q-degree} of $f(x)$.
For all $\alpha_1,\alpha_2 \in \mathbb{F}_{q}$ and all $a,b \in \mathbb{F}_{q^m}$, the following holds:
$f(\alpha_1 a+\alpha_2 b) = \alpha_1 f(a)+\alpha_2 f(b)$.
The (usual) addition and the non-commutative composition $f(g(x))$ (also called \emph{symbolic product}) convert the set of linearized polynomials into a non-commutative ring with the identity element $x^{[0]}=x$. In the following, all polynomials are linearized polynomials.
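For intuition, the $\mathbb F_{q}$-linearity stated above can be verified exhaustively in a small case. The sketch below (ours, not part of the paper) takes $q=2$, $m=3$ and represents $\mathbb F_{2^3}$ by integers $0,\dots,7$ with multiplication modulo the primitive polynomial $x^3+x+1$; it checks additivity of the linearized polynomial $f(x)=x^{[1]}+x^{[0]}=x^2+x$.

```python
# Illustrative sketch: GF(2^3) as ints, multiplication mod x^3 + x + 1.
MOD = 0b1011  # x^3 + x + 1, irreducible over GF(2)

def gf8_mul(a: int, b: int) -> int:
    """Carry-less multiplication followed by reduction mod x^3 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0b1000:      # degree reached 3: reduce
            a ^= MOD
        b >>= 1
    return p

def lin_poly(a: int) -> int:
    """Evaluate the linearized polynomial f(x) = x^[1] + x^[0] = x^2 + x."""
    return gf8_mul(a, a) ^ a  # addition in GF(2^3) is XOR

# f(a + b) = f(a) + f(b) for all a, b in GF(8): q-linearity over GF(2)
assert all(lin_poly(a ^ b) == lin_poly(a) ^ lin_poly(b)
           for a in range(8) for b in range(8))
```

In characteristic $2$ the Frobenius map $a \mapsto a^2$ is additive, so every $\mathbb F_2$-linear combination of $q$-monomials is $\mathbb F_2$-linear, exactly as the check confirms.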
Given a basis of $\mathbb F_{q^m}$ over $\mathbb F_{q}$, there is a one-to-one correspondence between each vector $\mathbf x \in \mathbb{F}_{q^m}^n$ and a matrix $\mathbf X \in \mathbb F_{q}^{m \times n}$.
Let $\rank(\mathbf x)$ denote the (usual) rank of $\mathbf X$ over $\mathbb{F}_{q}$ and let
$\mathcal R (\mathbf x) = \mathcal R(\mathbf X)$ denote the row space of $\mathbf X$ over $\mathbb F_{q}$.
The kernel of a matrix is denoted by $\ker(\mathbf x) = \ker(\mathbf X)$ and the image by $\im(\mathbf x) = \im(\mathbf X)$.
For an $m \times n$ matrix, if $\dim \ker(\mathbf x) = t$, then $\dim \im(\mathbf x)=\rank(\mathbf x) = n-t$.
Throughout this paper, we use the vector and matrix notations interchangeably, whichever is more convenient.
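To make the rank and kernel statements concrete, here is a small illustrative routine (ours, not from the paper) computing the rank of a binary matrix over $\mathbb F_2$ by Gaussian elimination; for the $3\times 4$ example, the rank is $2$, so the kernel has dimension $4-2=2$.

```python
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2), via Gaussian elimination."""
    A = (M.copy() % 2).astype(np.uint8)
    rank = 0
    rows, cols = A.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue                      # no pivot in this column
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]           # row elimination over GF(2)
        rank += 1
    return rank

# Third row equals the XOR of the first two, so the rank over GF(2) is 2
X = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 1]], dtype=np.uint8)
assert gf2_rank(X) == 2
```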
The \textit{minimum rank distance} $d$ of a code $\mathcal C$ is defined by
\begin{equation*}
d = \min \lbrace \rank(\mathbf c) \; | \; \mathbf c \in \mathcal C, \mathbf c \neq \mathbf 0 \rbrace.
\end{equation*}
A \emph{Gabidulin code} can be defined by the evaluation of degree-restricted linearized polynomials as follows.
\begin{definition}[Gabidulin Code \cite{Gabidulin_TheoryOfCodes_1985}]
A linear $\Gab{n}{k}$ Gabidulin code of length $n$ and dimension $k$ over $\mathbb{F}_{q^m}$ for $n \leq m$ is the set of all codewords, which are the evaluation of a $q$-degree restricted linearized polynomial $f(x)$:
\begin{equation*}
\Gab{n}{k} \overset{\defi}{=} \lbrace \c = (f(\alpha_0), f(\alpha_1),\dots, f(\alpha_{n-1})) \ \big| \ \deg_q f(x) < k\rbrace,
\end{equation*}
where the fixed elements $\alpha_0, \dots, \alpha_{n-1} \in \mathbb F_{q^m}$ are linearly independent over $\mathbb F_{q}$.
\end{definition}
Gabidulin codes are \textit{Maximum Rank Distance} (MRD) codes, i.e., they fulfill the rank metric Singleton bound with equality and $d=n-k+1$ \cite{Gabidulin_TheoryOfCodes_1985}.
The number of $s$-dimensional subspaces of an $n$-dimensional vector space over $\mathbb F_{q}$
is the Gaussian binomial, calculated by
\begin{equation*}
\quadbinom{n}{s} \overset{\defi}{=} \prod\limits_{i=0}^{s-1} \frac{q^n-q^i}{q^s-q^i},
\end{equation*}
with the upper and lower bounds
$q^{s(n-s)}\leq \quadbinomsmall{n}{s} \leq 4 q^{s(n-s)}$.
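The Gaussian binomial and the bounds above can be checked with a few lines (an illustrative sketch; the function name is ours).

```python
from math import prod

def gaussian_binomial(n: int, s: int, q: int) -> int:
    """Number of s-dimensional subspaces of an n-dimensional space over F_q."""
    if s < 0 or s > n:
        return 0
    return (prod(q**n - q**i for i in range(s))
            // prod(q**s - q**i for i in range(s)))

# [4 2]_2 = 35 subspaces of dimension 2 in F_2^4
assert gaussian_binomial(4, 2, 2) == 35

# the bounds q^{s(n-s)} <= [n s]_q <= 4 q^{s(n-s)}, checked for q = 2
for n in range(1, 9):
    for s in range(n + 1):
        g = gaussian_binomial(n, s, 2)
        assert 2**(s*(n-s)) <= g <= 4 * 2**(s*(n-s))
```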
Moreover, $\mathcal B_{\tau}(\a)$ denotes a ball of radius $\tau$ in rank metric around a word $\a\in \mathbb F_{q^m}^n$. The volume of $\mathcal B_{\tau}(\a)$ is independent of its center and is simply the number of $m \times n$ matrices of rank less than or equal to $\tau$.
\subsection{Problem Statement}
\begin{problem}[Maximum List Size]
Let the Gabidulin code $\Gab{n}{k}$ over $\mathbb F_{q^m}$ with $n \leq m$ and $d=n-k+1$ be given. Let $\tau < d$.
Find a lower and upper bound on the maximum number of codewords $\ell$ in the ball of
rank radius $\tau$ around $\r= (r_0 \ r_1 \ \dots \ r_{n-1}) \in \mathbb F_{q^m}^n$. Hence, find a bound on
\begin{equation*}
\ell \overset{\defi}{=} \max_{\r \in \mathbb F_{q^m}^n}\left(\left|\mathcal B_{\tau}(\r) \cap \mathcal G\right|\right).
\end{equation*}
\end{problem}
For an upper bound, we have to show that the bound holds for \emph{any} received word $\r$,
whereas for a lower bound it is sufficient to show that there exists one $\r$ for which
this bound on the list size is valid.
Let $\mathcal L = \{ \c_1,\c_2,\dots,\c_{\ell} \}$ with $|\mathcal L|=\ell$ denote the list of all codewords in the ball
of radius $\tau$ around $\r$, i.e., $\c_i \in \Gab{n}{k}$ and $\rank(\r-\c_i) \leq \tau$, for $i=1,\dots,\ell$.
\section{A Lower Bound on the List Size}
Faure showed with a probabilistic approach in \cite{Faure2006Average} that the maximum list size of Gabidulin codes with $n=m$
is exponential in $n$ for $\tau \geq n- \sqrt{n(n-d)}$. Our bound slightly improves this value and uses a different proof strategy,
based on evaluating linearized polynomials. This approach is inspired by the approaches of Justesen and H{\o}holdt \cite{Justesen2001Bounds} and of Ben-Sasson, Kopparty, and Radhakrishnan \cite{BenSasson2010Subspace} for bounding the list size of RS codes.
\begin{theorem}[Lower Bound on the List Size]
Let the Gabidulin code $\Gab{n}{k}$ over $\mathbb F_{q^m}$ with $n \leq m$ and $d=n-k+1$ be given. Let $\tau < d$.
Then, there exists a word $\r \in \mathbb F_{q^m}^n$ such that
\begin{equation}\label{eq:listsize_polys}
\ell \geq\left|\mathcal B_{\tau}(\r) \cap \mathcal G\right|\geq \frac{\quadbinomsmall{n}{n-\tau}}{(q^m)^{n-\tau-k}}
\geq q^m q^{\tau(m+n) -\tau^2-md},
\end{equation}
and for the special case of $n=m$:
$\ell \geq q^n q^{2n\tau - \tau^2 - nd}$.
\end{theorem}
\begin{proof}
Since $\tau < d = n-k+1$, also $ k-1 < n-\tau $ holds.
Consider all monic linearized polynomials of $q$-degree exactly $n-\tau$ with a root space of dimension
$n-\tau$, where all roots are in $\mathbb F_{q^n}$. There are exactly (see e.g. \cite[Theorem 11.52]{Berlekamp1984Algebraic})
$\quadbinomsmall{n}{n-\tau}$
such polynomials. Now, let us consider a subset of these polynomials, denoted by $\mathcal P$: all polynomials where
the $q$-monomials of
$q$-degree greater than or equal to $k$ have the same coefficients.
By the pigeonhole principle (Dirichlet's principle), there exist coefficients such that the number of
such polynomials is
\begin{equation*}
|\mathcal P| \geq\frac{\quadbinomsmall{n}{n-\tau}}{(q^m)^{n-\tau-k}},
\end{equation*}
since there are $(q^m)^{n-\tau-k}$ possibilities to choose the
highest $n-\tau -(k-1)$ coefficients of a \emph{monic} linearized polynomial over $\mathbb F_{q^m}$.
Note that
the difference between any two polynomials in $\mathcal P$ is a linearized polynomial of $q$-degree
strictly less than $k$ and therefore the evaluation polynomial of a codeword of $\Gab{n}{k}$.
Let $\r$ be the evaluation of $f(x) \in \mathcal P$ at a basis $\mathcal A = \{\alpha_0,\alpha_1,\dots,\alpha_{n-1}\}$ of $\mathbb F_{q^n}$ over $\mathbb F_{q}$:
\begin{equation*}
\r = (r_0 \ r_1 \ \dots \ r_{n-1}) = (f(\alpha_0) \ f(\alpha_1) \ \dots \ f(\alpha_{n-1})).
\end{equation*}
Further, let also $g(x) \in \mathcal P$, then
$f(x)-g(x)$ has $q$-degree less than $k$. Let
$\c$ denote the evaluation of $f(x)-g(x)$ at $\mathcal A$.
Then, $\r-\c$ is the evaluation of $f(x)-f(x)+g(x) = g(x) \in \mathcal P$, whose
root space has dimension $n-\tau$ and all roots are in $\mathbb F_{q^n}$.
Thus, $\dim \ker(\r-\c) = n-\tau$ and $\dim \im (\r-\c) = \rk(\r-\c) = \tau$.
Therefore, for \emph{any} $g(x) \in \mathcal P$, the evaluation of $f(x)-g(x)$
is a codeword from $\Gab{n}{k}$ and has rank distance $\tau$ from $\r$.
This provides the following lower bound on the maximum list size:
\begin{equation*}
\ell \geq \frac{\quadbinomsmall{n}{n-\tau}}{(q^m)^{n-\tau-k}}
\geq \frac{q^{(n-\tau)\tau}}{(q^m)^{n-\tau-k}}
= q^m q^{\tau(m+n) -\tau^2-md},
\end{equation*}
and for $n=m$ the special case follows.
\end{proof}
This lower bound is valid for any $\tau < d$, but we want to know the smallest value of $\tau$ for which this expression is \emph{exponential} in $n$.
For $n=m$, we can rewrite \eqref{eq:listsize_polys} by
\begin{equation*}
\ell \geq q^{n(1-\epsilon)} \cdot q^{2n\tau - \tau^2 - nd+n\epsilon},
\end{equation*}
where the first part is exponential in $n$ for any $0 \leq \epsilon < 1$.
The second exponent is positive for
\begin{equation}\label{eq:tau_lessjohnson}
\tau \geq n- \sqrt{n(n-d+\epsilon)} \overset{\defi}{=} \tau_{LB}.
\end{equation}
Therefore, our lower bound \eqref{eq:listsize_polys}
shows that the maximum list size is exponential in $n$ for $\tau \geq \tau_{LB}$.
For $\epsilon = 0$, the value $\tau_{LB}$ gives exactly the Johnson radius
for the Hamming metric.
This reveals a difference between the known limits to list decoding of Gabidulin and RS codes.
For RS codes, polynomial-time list decoding up to the Johnson radius is guaranteed
by the Guruswami--Sudan algorithm. However, it is not proven that the Johnson radius is tight for RS codes, i.e., it is not known if the list size is polynomial between the Johnson radius and the known exponential lower bounds (see e.g. \cite{Justesen2001Bounds,BenSasson2010Subspace}).
For Gabidulin codes, we have shown that the maximum list size is exponential for $\tau \geq\tau_{LB}$, which is asymptotically equal to the
Hamming metric Johnson radius.
\begin{example}
For the Gabidulin code $\mathcal G (n=12,k=6)$ with $d=7$, the Bounded Minimum Distance decoding radius is $\tau_{BMD} = \left\lfloor (d-1)/2\right\rfloor= 3$,
the lower bound by Faure (equivalent to the Hamming metric Johnson radius) is $\tau_J = \lceil 4.2 \rceil = 5$ and
\eqref{eq:tau_lessjohnson} with $\epsilon = 0.9$ gives $\tau_{LB} = \lceil 3.58 \rceil = 4$. This means for this
code of rate $k/n=1/2$, no polynomial time list-decoding beyond $\tau_{BMD}$ is possible.
\end{example}
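The radii quoted in the example above can be reproduced directly (an illustrative sketch; variable names are ours).

```python
from math import ceil, floor, sqrt

# Radii for the example code G(n = 12, k = 6), so d = n - k + 1 = 7
n, k = 12, 6
d = n - k + 1
tau_bmd = floor((d - 1) / 2)                 # BMD decoding radius
tau_j = ceil(n - sqrt(n * (n - d)))          # Hamming-metric Johnson radius
eps = 0.9
tau_lb = ceil(n - sqrt(n * (n - d + eps)))   # radius tau_LB for epsilon = 0.9

assert (tau_bmd, tau_j, tau_lb) == (3, 5, 4)
```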
\section{An Upper Bound on the List Size}
The following lemma shows that the row spaces of $\r-\c_i$ and $\r-\c_j$, $\c_i,\c_j \in \mathcal L$, $i \neq j$, have no $(2\tau-d+1)$-dimensional subspace in common.
\begin{lemma}\label{lem:subspace2taud1}
Let $\tau < d$ and let $\r \in \mathbb F_{q^m}^n$. Let $\c_i$, for $i = 1,\dots,\ell$, be codewords of
the Gabidulin code $\Gab{n}{k}$ with minimum rank distance $d$ and let $\rk(\r-\c_i) \leq \tau$ hold for all $i=1,\dots,\ell$. Let $\rk(\r-\c_i) =t_i\leq \tau$ and $\rk(\r-\c_j) =t_j\leq \tau$, $i \neq j$.
Then, the row spaces of $(\r-\c_i)$ and $(\r-\c_j)$ have no subspace of dimension at least $t_i +t_j -d+1$ in common, for $\left\lfloor (d-1)/2\right\rfloor < t_i,t_j \leq \tau$.
\end{lemma}
\begin{proof}
Assume, there exist $(\r-\c_i)$ and $(\r-\c_j)$, with $\rank(\r-\c_i)= t_i$, $\rank(\r-\c_j)=t_j$, $i \neq j$, such that their row spaces contain the same
subspace of dimension at least $(t_i+t_j -d+1)$. Then,
\begin{align*}
\dim(\mathcal R(\c_i-\c_j)) &=
\dim(\mathcal R(\r-\c_i-\r+\c_j))
\leq \dim\left(\mathcal R
\left(
\begin{matrix}
\r-\c_i\\
\r-\c_j
\end{matrix}\right)\right)\\
& \leq t_i+t_j -(t_i+t_j-d+1) = d-1,
\end{align*}
which is a contradiction to $\rk(\c_i-\c_j) \geq d$.
\end{proof}
In particular, this means that if $\rk(\r-\c_i) = \rk(\r-\c_j) = t \leq \tau$, the two row spaces have no subspace of dimension at least
$2t-d+1$ in common.
Based on this lemma, we obtain the following upper bound on the maximum list size.
\begin{theorem}[Upper Bound on the List Size]
Let the Gabidulin code $\Gab{n}{k}$ over $\mathbb F_{q^m}$ with $n \leq m$ and $d=n-k+1$ be given. Let $\tau < d$.
Then, for any word $\r \in \mathbb F_{q^m}^n$ and hence, for the maximum list size, the following holds
\begin{align}
\ell &=\max_{\r \in \mathbb F_{q^m}^n}\left(\left|\mathcal B_{\tau}(\r) \cap \mathcal G\right|\right)
\leq \sum\limits_{t=\left\lfloor (d-1)/2\right\rfloor+1}^{\tau} \frac{\quadbinomsmall{n}{2 t + 1-d}}{\quadbinomsmall{t}{2 t + 1-d}}\label{eq:listsize_polys_upper}\\
&\leq 4 \sum\limits_{t=\left\lfloor (d-1)/2\right\rfloor+1}^{\tau} q^{(2t-d+1)(n-t)}
\leq 4\left(\tau-\left\lfloor \tfrac{d-1}{2}\right\rfloor\right)\cdot q^{(2\tau-d+1)(n-\left\lfloor (d-1)/2\right\rfloor-1)}.\label{eq:listsize_polys_upper2}
\end{align}
\end{theorem}
\begin{proof}
We consider all words in $\mathbb F_{q^m}^n$ with $n\leq m$; these words can therefore be seen as $m \times n$ matrices over $\mathbb F_q$ whose row spaces lie in an $n$-dimensional space.
For any $t$ with $\left\lfloor (d-1)/2\right\rfloor < t \leq \tau$,
there are $\quadbinomsmall{n}{2 t-d+1}$ subspaces of dimension
$2 t-d+1$.
Let $\r$ be any fixed word in $\mathbb F_{q^m}^n$ and all codewords in $\mathcal L$ have $\rk(\r-\c_i) \leq \tau$.
Each $\r-\c_i$, for $i=1,\dots,\ell$, of rank $t \leq \tau$ contains $\quadbinomsmall{t}{2 t -d+ 1}$ subspaces of dimension $2t-d+1$.
Due to Lemma~\ref{lem:subspace2taud1}, different $\r-\c_i$
have no $(2 t -d+ 1)$-dimensional subspace in common and therefore
there are at most $\quadbinomsmall{n}{2 t -d+ 1}/\quadbinomsmall{t}{2 t -d+ 1}$ possible codewords in
rank distance $t$ to the word $\r$. We sum this up for $t$ from $\left\lfloor (d-1)/2\right\rfloor +1$ up to $\tau$ and we obtain \eqref{eq:listsize_polys_upper}.
With the bounds for the Gaussian binomial and since $\left\lfloor (d-1)/2\right\rfloor+1\leq t \leq \tau$, the upper bound from \eqref{eq:listsize_polys_upper2} follows.
\end{proof}
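As a numerical sanity check (not part of the proof), the chain of bounds \eqref{eq:listsize_polys_upper}--\eqref{eq:listsize_polys_upper2} can be evaluated with exact integer arithmetic, e.g., for the earlier example code with $n=m=12$, $k=6$, $q=2$ and $\tau=4$ (function and variable names are ours).

```python
from fractions import Fraction
from math import prod

def gb(n: int, s: int, q: int) -> int:
    """Gaussian binomial [n s]_q (number of s-dim subspaces of F_q^n)."""
    if s < 0 or s > n:
        return 0
    return (prod(q**n - q**i for i in range(s))
            // prod(q**s - q**i for i in range(s)))

# Example code: n = m = 12, k = 6, hence d = 7; take tau = 4
n, k, tau, q = 12, 6, 4, 2
d = n - k + 1
t0 = (d - 1) // 2 + 1   # first term of the sum

bound_sum = sum(Fraction(gb(n, 2*t - d + 1, q), gb(t, 2*t - d + 1, q))
                for t in range(t0, tau + 1))
bound_mid = 4 * sum(q**((2*t - d + 1) * (n - t)) for t in range(t0, tau + 1))
bound_top = (4 * (tau - (d - 1) // 2)
             * q**((2*tau - d + 1) * (n - (d - 1)//2 - 1)))

assert bound_sum <= bound_mid <= bound_top
```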
Note that for the special case $\tau = d/2$ and even minimum distance $d$, the upper bound from \eqref{eq:listsize_polys_upper} is the bound from \cite[Equation~(4)]{AugotLoidreau-JohnsonRankMetric}, which is
\begin{align*}
\ell &\leq (q^n-1) \frac{q^{n-d/2}-q^{n-d}}{q^n-2q^{n-d/2}+q^{n-d}}
=(q^n-1) \frac{q^{-\tau}-q^{-2\tau}}{1-2q^{-\tau}+q^{-2\tau}}
=\frac{q^n-1}{q^\tau-1} =
\frac{\quadbinomsmall{n}{1}}{\quadbinomsmall{\tau}{1}}.
\end{align*}
Thus, we have proved an upper bound on the maximum list size of a Gabidulin code.
Unfortunately, this upper bound is exponential in $n$ for any $\tau > \left\lfloor (d-1)/2\right\rfloor$ and therefore
does not settle whether polynomial-time list decoding is possible in the region up to the Johnson bound.
\section{Introduction}
Galactic globular clusters (GCs) are key to improving our understanding of the formation and evolution of our Galaxy. In most GCs, stars are formed over a very short time span and with very similar compositions in most chemical elements \citep{Bastian18}, and they are all at the same distance from our planet. These qualities make Galactic GCs apt for the study of stellar evolution, as are their stars, which are in a range of different evolutionary phases across a single system. Studying the extratidal region\footnote{In this manuscript, we consider a star to be extratidal when it is located beyond the observational tidal radius ($r_t$). Generally, $r_t$ is estimated as the radius at which the surface density profile of the cluster population drops to zero density \citep[either with a King, Wilson, or LIMEPY model, see e.g.,][and references therein]{deboer19}.}, that is, the region outside the $r_t$ of a given cluster, can give a better insight into the different internal processes acting on the cluster itself, including stellar evolution, gas expulsion, and two-body relaxation \citep{Geyer2000}, as well as the forces acting on the cluster which may strip stars from it. These stars can be considered as ``potential escapers'' as they are still bound to the cluster if they are inside the Jacobi radius ($r_J$), corresponding to the boundary distance from the cluster center at which a star is still bound. For more details, see, for example, \citet{Fukushige2000, Baumgardt10, Kupper10, julio11, Claydon10} and references therein. Disk or bulge shocking, tidal disruptions, relaxation, and dynamical friction may produce these potential escapers, together or independently, which may lead to their leaving the influence of the gravitational potential of the cluster and forming tidal tails and halos around these systems \citep{Leon00, Odenkirchen2001, Moreno14, Hozumi14, Balbinot18}.
However, these effects have been proven to have a minor impact in the morphology of tidal tails \citep[see, e.g.,][]{BaumgardMakino}.
\\
Observational evidence of extratidal stars around GCs has been found in the form of extended tidal tails \citep[see e.g.,][]{Grillmair2006, ostholt10, sollima11, balbinot11, myeong17, camila17, Bonaca20} and the asymmetric distribution of stars in the immediate outskirts of some clusters \citep[e.g., ][]{jose16, julio17, kundu19a}, as well as through the chemo-dynamic detection of stars that have shown CN anomalies on the outskirts of some GCs \citep{Hankw20} and as debris stars throughout the inner and outer stellar halo \citep{costa2008, majewski12, jose15a, jose15,Fernandez-Trincado2016b,jose19c,jose19halo}. However, it is still not clear why some GCs show signs of extratidal material, but scarcely any extratidal stars or tidal tails, while others lack any evidence for such structures \citep[see, e.g.,][]{piatti20}.
\\
A complete census of potential extratidal structures in the outskirts of GCs could help to provide a better understanding of the nature of the inner and outer stellar halo. In general, the light-element abundances of cluster stars are key to revealing the origin of halo field stars with unique chemical signatures throughout the MW \citep[this is the so-called "chemical-tagging" method, see e.g.,][]{Martell10,jose19c,jose19halo,jose2020a,jose2020b}. In this sense, the \textit{Gaia} data release 2 \citep[][hereafter, DR2]{gaiadr2} astrometry allows us to achieve a homogeneous exploration in the immediate vicinity around GCs to probe the existence or absence of potential extratidal stars for future spectroscopic follow-ups.
\\
In this work, we take advantage of the \textit{Gaia} DR2 mission to examine the outermost regions of three GCs buried in different Galactic environments, that is, NGC 6397, NGC 2808 and NGC 6266 (M 62). The exquisite data from {\textit{Gaia}} DR2 allows us to homogeneously improve and increase the number of dimensions of the parameter space to select potential cluster members in the outermost regions of these GCs, which were chosen for a number of reasons. First, NGC 6397 has been dynamically classified as a main-disk GC \citep{Massari19} even though it has a mean metallicity of [Fe/H]= -1.88 \citep{Correnti2016, Meszaros2020}, which is largely offset from the metallicity of disk field stars.
\\
NGC 6266 has been classified as a main-bulge cluster \citep{Massari19}, with a mean metallicity of -1.29 dex \citep{Correnti2018}, which is similar to that of Si-/N-/Al-rich stars that were recently found in the bulge region \citep{jose2020a,jose2020b}, which could be part of an ancient GC population that formed the bulge \citep[although their origin is still under debate, see e.g.,][]{bekki19}. NGC 2808 is known to be a massive cluster, hosting several stellar populations \citep{Piotto07, Milone15a, Latour19} and has been recently associated with the accreted Gaia-Sausage-Enceladus \citep{Belokurov2018, Haywood2018, Helmi2018, Koppelman2018, Myeong2018} dwarf galaxy, which is supposed to dominate the inner halo stellar population.
\\
With regard to the analysis of possible extratidal stars, these three clusters are easily accessible in terms of distance:\ NGC 6397 is the second-closest GC, located at a distance of 2.3 kpc \citep[2010 edition of ][hereafter, H96]{Harris96}, only after NGC 6121 (M4, at 2.2 kpc). Both NGC 6266 (6.8 kpc) and NGC 2808 (9.6 kpc) are closer than 10 kpc (H96). All these clusters are located in regions with high stellar density and relatively high reddening values of E(B-V)= 0.22, 0.21, and 0.47 mag for NGC 6397 \citep{Correnti2018}, NGC 2808 \citep{Correnti2016} and NGC 6266 (H96), respectively. These conditions could affect the reliability of potential extratidal star detections if only photometry is considered, but more reliable candidates can be obtained using proper motions (PMs) from \textit{Gaia} DR2 \citep[see, e.g.,][]{Piatti20a}.
\\
It is worth mentioning that some evidence of extratidal material has been claimed in the literature specifically for NGC 6266 based on near-infrared photometry, without taking into account PMs \citep{Chun15}, and using RR Lyrae stars \citep{dante18}, while evidence for extratidal stars around NGC 2808 was found by \citet{julio17} based on deep photometry, without including PMs. Previously, \citet{Leon00} detected tidal tails for NGC 6397.
\\
Our previous works suggest that several GCs, when explored in detail, show some evidence for extratidal stars \citep{jose15, jose15a, jose16, camila17, dante18, kundu19, kundu19a, Piatti20a}. Here, we exploit the superb {\textit{Gaia}} DR2 dataset to address the existence of extratidal features around three GCs, using both photometry and astrometry while keeping their intrinsic limitations in mind. The candidate extratidal stars identified in this study could be investigated in follow-up spectroscopic campaigns, such as the SDSS-V Pioneering Panoptic Spectroscopy survey \citep[see, e.g.,][]{Kollmeier2017}, to fully characterize them, both chemically and dynamically.
\\
This paper is organised as follows: Section 2 discusses the criteria used to select the extratidal stars based on their position, PMs, and color-magnitude diagrams (CMDs) of the clusters. In Section 3, we carry out a backwards integration of the orbits of the clusters and derive updated orbital parameters and membership values to the disk, bulge, and halo Galaxy components. Finally, in Section 4, we discuss our results and present our main conclusions. \\
\section{Selecting extratidal star candidates}
\label{sec2}
In this work, we study the outer region of the Galactic GCs NGC 6397, NGC 6266, and NGC 2808 using the {\textit{Gaia}} DR2 catalog. We adopted the same procedure followed by \citet{kundu19} to clean the sample, thereby eliminating any contamination due to data processing artifacts or spurious measurements, as suggested by the {\textit{Gaia}} collaboration \citep[for details refers to Section 2 in][]{kundu19}.
\\
We began our analysis by selecting the $r_t$ values of the clusters to adopt in this study. In order to do so, we first selected the stars with PMs within three sigma of the mean value \citep[as listed in][]{GC} in a region covering two degrees around the cluster center. Figure~\ref{fig:TR} shows the spatial distribution of the selected stars along with the values for the $r_t$ from \citet{mackey05} and \citet{Moreno14}. We can see from Figure~\ref{fig:TR} that for NGC 6397, the $r_t$ from \citet{mackey05} is underestimated; hence, we adopted the value for $r_t$ reported in \citet{Moreno14} for this analysis. For the other two clusters, the $r_t$ values provided by \citet{mackey05} seem to bound all the cluster stars and, therefore, we chose these values. The adopted values for $r_t$ are the following: 15.55 arcmin for NGC 2808, 44.53 arcmin for NGC 6397, and 8.97 arcmin for NGC 6266.
\\
Once we adopted a $r_t$ for each cluster, we estimated the mean and the intrinsic dispersion of the PM distribution of each cluster. It is worth mentioning that the uncertainties listed in \citet{GC} for the mean PMs take into account the statistical and systematic uncertainties, but they do not represent the intrinsic dispersion of the PM distribution, which can be up to ten times larger. Therefore, to consider the intrinsic dispersion in the PM distribution, we selected {\textit{Gaia}} data for stars up to the $r_t$ for the three clusters. A Gaussian mixture model consisting of two Gaussians (one for the cluster and one for the field stellar populations) was fitted to $\mu_{\alpha}\cos{\delta}$ and $\mu_{\delta}$ independently, without the need to take into account correlated errors between these two quantities. Gaussian mixture models for the PM distribution of cluster and field stars have been used to measure mean proper motions of globular clusters \citep[see e.g.,][]{dana10,Baumgardt2019}, including the errors on the measurements. Modeling the exact shape of the PM distribution of each cluster, convolved with the error function, is beyond the scope of this work; we only need an estimate of the intrinsic dispersion in order to select extratidal star candidates. The results from the fit were adopted as the mean PM (center of the Gaussian) and the intrinsic dispersion (one sigma) for the distribution of $\mu_{\alpha}\cos{\delta}$ and $\mu_{\delta}$ of the cluster population. Our PM values match the reported values in \citet{GC}, within the errors. A proper fit of the PM distribution of the cluster stars including PMs in both directions (RA and Dec at the same time) is beyond the scope of this paper, although it has been used in the literature to estimate membership probabilities and to uncover, in combination with parallaxes and CMDs, the tidal tails of several clusters; see, for example, \citet{Sollima20}.
\\
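The two-Gaussian fit described above can be sketched in a few lines of plain EM. The data, initialization heuristics, and all numbers below are purely illustrative and do not reproduce the actual pipeline; the narrow component plays the role of the cluster and the broad one the field.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 1D "proper motion" sample (mas/yr; purely illustrative):
# a tight cluster component plus a broad field component.
pm = np.concatenate([rng.normal(-17.6, 0.3, 500),    # cluster
                     rng.normal(-5.0, 5.0, 2000)])   # field

def em_two_gaussians(x, n_iter=100):
    """Fit a two-component 1D Gaussian mixture with plain EM."""
    # crude initialization: a narrow component at a low percentile,
    # a broad one near the median (heuristic, not from the paper)
    mu = np.array([np.percentile(x, 5), np.percentile(x, 50)])
    sig = np.array([1.0, x.std()])
    w = np.array([0.2, 0.8])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each star
        pdf = (w / (sig * np.sqrt(2 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and intrinsic dispersions
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return mu, sig, w

mu, sig, w = em_two_gaussians(pm)
cluster = np.argmin(sig)   # the narrow component traces the cluster
```

On this synthetic sample, the recovered mean of the narrow component is close to the injected cluster PM, and its one-sigma width estimates the intrinsic dispersion.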
\begin{figure}
\subfloat {\includegraphics[width = 3in]{TR_6397.eps}}\\
\subfloat {\includegraphics[width = 3in]{TR_2808.eps}}\\
\subfloat {\includegraphics[width = 3in]{TR_6266.eps}}
\caption{Black dots are the {\textit{Gaia}} DR2 stars whose PM is similar to the mean PM of the cluster within 3 sigma \citep[as listed in][]{GC}. The solid and dashed red circles correspond to the $r_t$ from \citet{Moreno14} and \citet{mackey05}, respectively.}
\label{fig:TR}
\end{figure}
Once we had both the $r_t$ and the PMs for the clusters, we selected the extratidal stars based on three main criteria: the position of the stars with respect to the cluster centers, the PMs of the clusters and the stars, and the position of the stars on the CMDs. First, we selected the stars which lie on an annular disk centered on the cluster, with inner and outer radii of one and five times the $r_t$ of each cluster. Next, to remove field stars based on PMs, we selected only those stars whose PMs match the PM of the cluster, within the combined error bar of the pair GC-star, that is, \{($\mu_{\alpha}\cos{\delta}\pm\sigma$), ($\mu_{\delta}\pm\sigma$)\}$^{star}$ $\lesssim$ \{($\mu_{\alpha}\cos{\delta}\pm\sigma$), ($\mu_{\delta}\pm\sigma$)\}$^{cluster}$, with $\sigma^{star}$ as the error in the PM of the star and $\sigma^{cluster}$ the dispersion in the PM distribution of the cluster. Finally, we selected stars based on the PARSEC isochrones\footnote{http://stev.oapd.inaf.it/cmd} \citep{Bressan12,Marigo17} for the clusters: we kept only those stars that lie within 0.01 in magnitude and color of the isochrone.
\\
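The three selection cuts can be summarized in a short sketch; the arrays, thresholds, and function names below are hypothetical and do not reproduce the actual selection code.

```python
import numpy as np

def select_extratidal(r, pm, pm_err, color, mag, iso,
                      rt, pm_cl, sig_cl, cmd_tol=0.01):
    """Apply the three cuts: annulus, PM compatibility, CMD proximity."""
    # cut 1: annular disk between r_t and 5 r_t
    in_annulus = (r > rt) & (r < 5 * rt)
    # cut 2: star PM compatible with the cluster PM within combined errors
    pm_match = np.all(np.abs(pm - pm_cl) <= pm_err + sig_cl, axis=1)
    # cut 3: distance in the CMD to the nearest point of the isochrone
    d_cmd = np.min(np.hypot(color[:, None] - iso[:, 0],
                            mag[:, None] - iso[:, 1]), axis=1)
    return in_annulus & pm_match & (d_cmd < cmd_tol)
```

With synthetic inputs, the function returns a boolean mask selecting only the stars that pass all three cuts simultaneously.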
The number of stars remaining after each cleaning step and each selection criterion is provided in Table~\ref{Tabla1}: the first five lines list the cleaning criteria, followed by the selection criteria. The de-reddened CMDs of the clusters (member stars as black dots) along with the isochrones (blue) and selected stars (red for stars within the $r_J$ and yellow for stars out of the $r_J$) are shown in Figure~\ref{fig:f1}. All {\textit{Gaia}} magnitudes were de-reddened using the individual E(B$-$V) values from \citet{Schlafly11}\footnote{\url{https://irsa.ipac.caltech.edu/applications/DUST/}} dust maps and the {\textit{Gaia}} extinction coefficients provided by \citet{gaiadr2}. \citet{Alonso12} showed that NGC 6266 suffers from considerable differential reddening and, therefore, we used their reddening map to de-redden the cluster region. However, these data are available only for the stars inside the $r_t$; hence, we used \citet{Schlafly11} maps to de-redden the extratidal stars around the cluster. The stars which lie within the $r_t$ of the clusters and have PMs similar to the clusters were selected as cluster stars for the CMDs. Figure~\ref{fig:f2} shows the spatial distribution of the selected stars (white dots), along with the $r_t$ (white solid circle is from \citealt{mackey05} and the cyan solid circle is from \citealt{Moreno14}) and $r_J$ \citep[taken from][shown as a white dashed circle]{deboer19} of each cluster. The dotted-dashed line in Figure~\ref{fig:f2} shows the direction of the mean PM of the cluster and the dotted line points towards the Galactic center. These plots were made using the kernel density estimator (KDE) routine in AstroML \citep{VanderPlas12}, using a grid of 400 pixels in each direction. The bandwidth of the Gaussian KDE used was 24.0 arcmin for NGC 6397, 8.4 arcmin for NGC 2808 and 4.2 arcmin for NGC 6266.
The contours mark the levels with more than [3,5,9], [10,35,75], and [10,50,110] stars per square degree for NGC 6397, NGC 2808 and NGC 6266, respectively. The adopted parameters for the selection process and our results for each cluster are discussed in the following subsections.
\\
\begin{table}
\centering
\caption{Number of selected stars passing each high-quality criterion and different selection cuts.}
\label{Tabla1}
\begin{adjustbox}{width=1.0\columnwidth,center}
\begin{tabular}{|lr|}
\hline
Criteria & Number of stars \\
\hline
\hline
NGC~6397&\\
\hline
\hline
\texttt{1.) ASTROMETRIC\_GOF\_AL $<$ 3} & 8,445,370 \\
\texttt{2.) ASTROMETRIC\_EXCESS\_NOISE\_SIG $\leq$ 2.} & 8,200,243 \\
\texttt{3.) $-$0.23 $\leq$ MEAN\_VARPI\_FACTOR\_AL $\leq$ 0.32} & 8,195,763 \\
\texttt{4.) VISIBILITY\_PERIODS\_USED > 8} & 7,768,205 \\
\texttt{5.) G $<$ 19 mag} & 3,507,704 \\
\texttt{6.) Between $r_{t}$ and $5\times$ $r_{t}$} & 1,789,728\\
\texttt{7.) With similar PM as the cluster} & 434 \\
\texttt{8.) Stars in the cluster CMD} & {\bf 120} \\
\hline
\hline
NGC~2808& \\
\hline
\hline
\texttt{1.) ASTROMETRIC\_GOF\_AL $<$ 3} & 286,944 \\
\texttt{2.) ASTROMETRIC\_EXCESS\_NOISE\_SIG $\leq$ 2.} & 280,162 \\
\texttt{3.) $-$0.23 $\leq$ MEAN\_VARPI\_FACTOR\_AL $\leq$ 0.32} & 275,781 \\
\texttt{4.) VISIBILITY\_PERIODS\_USED > 8} & 273,149 \\
\texttt{5.) G $<$ 19 mag} & 125,699 \\
\texttt{6.) Between $r_{t}$ and $5\times$ $r_{t}$} & 101,782\\
\texttt{7.) With similar PM as the cluster} & 424 \\
\texttt{8.) Stars in the cluster CMD} & {\bf 126} \\
\hline
\hline
NGC~6266 (with normal cuts)&\\
\hline
\hline
\texttt{1.) ASTROMETRIC\_GOF\_AL $<$ 3} & 7,008,462 \\
\texttt{2.) ASTROMETRIC\_EXCESS\_NOISE\_SIG $\leq$ 2.} & 6,667,043 \\
\texttt{3.) $-$0.23 $\leq$ MEAN\_VARPI\_FACTOR\_AL $\leq$ 0.32} & 6,156,844 \\
\texttt{4.) VISIBILITY\_PERIODS\_USED > 8} & 3,636,806 \\
\texttt{5.) G $<$ 19 mag} & 1,228,027 \\
\texttt{6.) Between $r_{t}$ and $5\times$ $r_{t}$} & 159,567\\
\texttt{7.) With similar PM as the cluster} & 6,784 \\
\texttt{8.) Stars in the cluster CMD} & {\bf 2,155} \\
\hline
\hline
\hline
NGC~6266 (with stricter cuts)&\\
\hline
\hline
\texttt{1.) Between $r_{t}$ and $5\times$ $r_{t}$} & 159,567\\
\texttt{2.) With PM similar to that of the cluster} & 1,729 \\
\texttt{3.) Stars in the cluster CMD} & {\bf 107} \\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\begin{figure}
\subfloat {\includegraphics[width = 3in]{6397HR.eps}}\\
\subfloat {\includegraphics[width = 3in]{2808HR.eps}}\\
\subfloat {\includegraphics[width = 3in]{6266HR.eps}}
\caption{De-reddened CMDs of the clusters in {\textit{Gaia}} DR2 bands. Cluster stars (stars within the $r_t$ of the cluster) are shown with black dots, and selected extratidal stars within and outside the cluster $r_J$ are shown in red and yellow, respectively. For NGC 6266, OGLE RR Lyrae stars are shown in pink (see Section~\ref{RRL}).}
\label{fig:f1}
\end{figure}
\begin{figure}
\subfloat {\includegraphics[width = 3in]{Figure2d.eps}}\\
\subfloat {\includegraphics[width = 3in]{Figure2e.eps}}\\
\subfloat {\includegraphics[width = 3in]{Figure2f.eps}}
\caption{Stellar density maps built from the extratidal star candidates that occupy the CMDs in Figure \ref{fig:f1}. The white and cyan circles centered on each cluster indicate the $r_{t}$ from \citet{mackey05} and \citet{Moreno14}, respectively, while the dashed circle indicates the $r_{J}$ from \citet{deboer19}. The white lines indicate the directions of the cluster PM (dash-dot), the Galactic center (dotted), and the cluster orbit (solid) computed with the \texttt{GravPot16} code. Diamond symbols indicate the RR Lyrae stars.}
\label{fig:f2}
\end{figure}
\subsection{NGC 6397}
The mean PM of the cluster, as determined by the Gaussian fitting model, is: $\mu_{\alpha}\cos{\delta} = 3.302\pm0.540$ mas yr$^{-1}$; $\mu_{\delta} = -17.600\pm0.631$ mas yr$^{-1}$, where 0.540 mas yr$^{-1}$ and 0.631 mas yr$^{-1}$ are the associated dispersions in $\mu_{\alpha}\cos{\delta}$ and $\mu_{\delta}$, respectively. The input parameters used for the isochrone were taken from \citet{Correnti2018}, as they describe the CMD better than the parameters listed in H96: an age of 12.6 Gyr, [Fe/H] $= -1.88$, and a distance modulus of 12.1 mag. Applying the different cuts explained earlier, we found 120 extratidal stars around the cluster, 85 of which lie outside the $r_J$ \citep[75.6 arcmin, ][]{deboer19}. The top panel of Figure~\ref{fig:f1} shows the CMD of the cluster along with the selected extratidal stars, and the top panel of Figure~\ref{fig:f2} shows the density map of these 120 extratidal stars (white dots).
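The mean PMs and dispersions quoted in this section come from a Gaussian fit to the PM distribution. As a minimal sketch of such a fit (synthetic data only; the actual fitting code is not shown in this paper, and the variable names are illustrative), the maximum-likelihood fit of \texttt{scipy.stats.norm} recovers the mean and dispersion of one PM component:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Synthetic mu_alpha*cos(delta) sample centred on the NGC 6397 solution (mas/yr).
pm_ra = rng.normal(loc=3.302, scale=0.540, size=20000)

# Maximum-likelihood Gaussian fit: returns (mean, dispersion).
mu, sigma = norm.fit(pm_ra)
print(mu, sigma)  # ~3.30, ~0.54
```

In practice the fit is applied to both PM components of the stars inside the cluster area, and the recovered mean and dispersion define the PM selection window.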
\\
\subsection{NGC 2808}
The mean PM and one-sigma dispersion of the cluster, as determined by the Gaussian fitting model, are: $\mu_{\alpha}\cos{\delta} = 1.087\pm0.620$ mas yr$^{-1}$; $\mu_{\delta} = 0.248\pm0.503$ mas yr$^{-1}$. The parameters used to download the isochrone are taken from \citet{Correnti2016}: an age of 11.2 Gyr, [Fe/H] $= -1.23$, and a distance modulus of 15.09 mag. NGC 2808 is the most distant of the three clusters; hence, we also used the individual stellar parallaxes provided by {\textit{Gaia}} DR2 to reject the obvious foreground stars. In particular, we rejected all the stars whose parallax is larger than 0.5 mas (i.e., stars at distances smaller than 2 kpc). In the final selection, there are 126 extratidal stars, and their position on the CMD of the cluster and their spatial distribution are shown in the middle panels of Figures~\ref{fig:f1} and \ref{fig:f2}, respectively. Out of these 126 stars, 32 extratidal stars lie outside the $r_J$ of the cluster \citep[63.5 arcmin, ][]{deboer19}.
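The parallax rejection follows from $d\,[\mathrm{kpc}] = 1/\varpi\,[\mathrm{mas}]$, so $\varpi > 0.5$ mas corresponds to $d < 2$ kpc. A toy sketch of this cut (the parallax values are hypothetical):

```python
import numpy as np

parallax_mas = np.array([0.8, 0.45, 0.6, 0.1, -0.2])  # toy Gaia DR2 parallaxes

# Reject obvious foreground stars: parallax > 0.5 mas means distance < 1/0.5 = 2 kpc.
keep = parallax_mas <= 0.5
print(keep)  # [False  True False  True  True]
```

Negative or near-zero parallaxes carry no useful distance information and are simply retained by this cut.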
\\
\subsection{NGC 6266}
The mean PM of the cluster, as determined by the Gaussian fitting model, is: $\mu_{\alpha}\cos{\delta} = -5.047\pm0.674$ mas yr$^{-1}$; $\mu_{\delta} = -3.021\pm0.566$ mas yr$^{-1}$. The isochrone for the cluster was downloaded adopting [Fe/H] $= -1.02$, an age of 11.78 Gyr \citep{Forbes10}, and a distance modulus of 14.2 mag, a value differing from that in H96 (15.64 mag) chosen to fit the isochrone to the horizontal-branch level of the cluster. Based on the PM and CMD cuts, 2155 extratidal stars were selected. This cluster is located in a high-density region and our selection suffers from a high fraction of contaminants (see next section). The CMD and spatial map for the extratidal stars found after the stricter cuts (see Section~\ref{sec_6266}) are shown in the bottom panels of Figures~\ref{fig:f1} and \ref{fig:f2}, respectively.
\\
The tables containing the final set of selected candidates are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via \url{http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/}.
\subsection{Significance of extratidal stars}
\label{sec_sig}
For each cluster, the significance of the potential extratidal star candidates was examined. For this purpose, we made use of the updated version of the Besan\c{c}on Galaxy model\footnote{\url{www.model.obs-besancon.fr}} \citep[hereafter BGM,][]{Robin2003} to get a rough estimate of the expected Galactic contamination in the regions examined in this study.
\\
The BGM makes use of a population synthesis approach that simulates observations of the sky with their errors and biases. It is based on a scenario for Galaxy formation and evolution that reflects our present knowledge of the Milky Way (MW). Four stellar populations are considered in the model: a thin disk, a thick disk, a bar, and a halo, each with a specific density distribution. Our simulations were done using the revised scheme of the BGM \citep{Czekaj2014}, in which the stellar content of each population is modelled through an initial mass function and a star formation history, following evolutionary tracks \citep[revised in][]{Lagarde2017}. The resulting astrophysical parameters are used to compute observational properties using atmosphere models and a 3D extinction map computed from \citet{Marshall2006} and \citet{Lallement2019}. The model includes the simulation of binarity, with unresolved pairs merged assuming the 0.4 arcsec spatial resolution of {\textit{Gaia}} DR2. A dynamical model is used to compute radial velocities and PMs, as described in \cite{Robin2017}.
\\
We roughly estimated the number of MW stars (false positives) passing our criteria and compared it with the number of potential extratidal star candidates given in Section~\ref{sec2}. In this way, we get 48 field stars for NGC 2808, 24 for NGC 6397, and 5184 for NGC 6266. The numbers of extratidal stars found in the previous sections thus correspond to significances of $\sim$19.6 sigma for NGC 6397 and $\sim$11.3 sigma for NGC 2808; the counts for these two clusters are therefore statistically significant. The adopted procedure fails to give a reliable contamination fraction for NGC 6266 because this cluster has a mean PM similar to that of the field stars. We therefore need additional criteria to deal with such clusters.
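The quoted significances follow from comparing the candidate counts with the BGM field prediction, treating the field count as a Poisson background (the same statistic as in the footnote of Table~\ref{tab:extra-tidal}); the function name below is illustrative:

```python
import math

def significance(n_candidates, n_field):
    """Excess of candidates over the expected field count,
    in units of the Poisson noise of the field prediction."""
    return (n_candidates - n_field) / math.sqrt(n_field)

# BGM field predictions vs. selected candidates (values from the text):
print(round(significance(120, 24), 1))  # NGC 6397 -> 19.6
print(round(significance(126, 48), 1))  # NGC 2808 -> 11.3
```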
\subsection{Stricter cuts for NGC 6266}
\label{sec_6266}
To refine our selection of extratidal stars, we imposed stricter cuts on the PMs and on the difference in color and magnitude with respect to the isochrone of NGC 6266. We kept only those stars whose PM interval, $\mu \pm 0.5\,\sigma_{\rm star}$, overlaps the cluster interval, $\mu \pm 0.5\,\sigma_{\rm cluster}$, that is, $|\mu^{\rm star} - \mu^{\rm cluster}| \lesssim 0.5\,(\sigma_{\rm star} + \sigma_{\rm cluster})$ for both $\mu_{\alpha}\cos{\delta}$ and $\mu_{\delta}$, where $\sigma_{\rm star}$ is the error in the PM measurement of the star and $\sigma_{\rm cluster}$ is the dispersion of the PM distribution of NGC 6266. We found 12693 stars matching this criterion. Next, we selected the stars lying in the annulus centered on the cluster with inner and outer radii equal to the $r_t$ and 5$r_t$ of the cluster, respectively, obtaining 1729 stars. Of these, only 107 candidates lie within 0.001 mag of the isochrone in color and magnitude. We applied the same criteria to the Galactic stars simulated with the BGM and found 30 field stars; hence, we obtain a $\sim$11.50 sigma detection for this cluster. The positions of the extratidal stars on the cluster CMD and their spatial distribution are shown in the bottom panels of Figure~\ref{fig:f1} and Figure~\ref{fig:f2}, respectively. Out of these 107 stars, 50 are outside the $r_J$ \citep[24.1 arcmin, ][]{deboer19}.
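One reading of the PM criterion above is that the star's interval $\mu \pm 0.5\sigma$ must overlap the cluster's in both components, i.e. $|\mu^{\rm star}-\mu^{\rm cluster}| \lesssim 0.5(\sigma_{\rm star}+\sigma_{\rm cluster})$. A sketch under that assumption (function and variable names hypothetical, toy star values):

```python
import numpy as np

def pm_match(pm_star, err_star, pm_cl, disp_cl, k=0.5):
    """True where the star's PM interval (pm +/- k*err) overlaps the
    cluster's (mean +/- k*dispersion) in BOTH components; k = 0.5 here."""
    pm_star, err_star = np.atleast_2d(pm_star), np.atleast_2d(err_star)
    return np.all(np.abs(pm_star - pm_cl) <= k * (err_star + disp_cl), axis=1)

# NGC 6266 mean PM and dispersion (mas/yr), from the Gaussian fit:
pm_cl   = np.array([-5.047, -3.021])
disp_cl = np.array([0.674, 0.566])

stars = np.array([[-5.0, -3.1], [-4.0, -3.0]])   # toy PMs
errs  = np.array([[0.1, 0.1], [0.1, 0.1]])       # toy PM errors

print(pm_match(stars, errs, pm_cl, disp_cl))  # [ True False]
```

Changing `k` loosens or tightens the match in the obvious way.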
\\
\subsection{Background contamination based on \textit{Gaia} DR2 data}
We also examined the \textit{Gaia} DR2 sources in adjacent fields to estimate the number of field stars around each cluster. Our motivation is to have an independent, data-driven estimate of the contamination from field MW stars among the extratidal stars we found. Four random fields, each with an area equal to that of the 5 $r_t$ search region and separated by at least 3.0 deg from the cluster center, in four different regions around each cluster, were searched for extratidal stars using the same criteria as in Section 2. The adjacent fields have different stellar densities than the cluster region; therefore, to estimate the expected contamination around the cluster, we scaled the number of selected field stars by the relative stellar density. Table~\ref{tab:extra-tidal} lists the coordinates of the four fields around each cluster, the number of field stars recovered after applying our selection criteria, the total number of stars in each area, the scaled number of field stars, and the significance of our detections.
\\
According to our estimates, each cluster has at least two surrounding regions for which the significance values indicate that the number of selected extratidal stars is significant over the number of field stars. For NGC 6397, in the field centered at (RA, Dec) = (270, -54) deg, we found more stars than the number of extratidal stars around the cluster itself. This region is located on the trailing side of the cluster, aligned with its orbit. Looking at the density map (Figure~\ref{fig:f2}) and its extratidal stars, overdensities (yellow regions) can be seen along both the trailing and leading sides of the orbit. Hence, the overdensity of field stars in this particular region can instead be attributed to extended tidal debris from NGC 6397 along its trailing side. This kind of alignment is typical of bulge or disk shocking \citep{Montuori07}.
For NGC~2808, our analysis shows a low contamination level in three of the four regions analyzed, but very high contamination towards the eastern region of the cluster. The density map (Figure~\ref{fig:f2}) of the cluster also shows an overdensity of extratidal stars in that direction; hence, this high value of contamination may again be due to extended tidal debris from the cluster. In the case of NGC~6266, we found two regions with a fraction of selected field stars larger than our selection of extratidal stars around the cluster: the field centered at (RA, Dec) = (255, -25) deg, on the trailing side, and the field centered at (RA, Dec) = (250, -30) deg, on the eastern side of the cluster. According to the density map for this cluster (bottom panel in Figure~\ref{fig:f2}), there is a significant overdensity of extratidal stars along its past orbit, but no such overdensity in the eastern direction. Hence, the large number of field stars on the trailing side of the cluster may be due to more extended tidal debris, as in the case of NGC 2808, whereas for the eastern region it may be due to a high number of MW field stars aligned with the direction towards the Galactic center. In summary, depending on the direction and the field chosen, the number of extratidal stars we recover is significant over the number of field stars in some fields for all three clusters. In other regions, the number of field stars is higher, which could itself be due to extended tidal debris lying further away than the area studied in this work.
\\
\begin{table*}
\centering
\caption{\textit{Gaia} DR2 field stars that pass our selection criteria in 4 different regions around each cluster.}
\label{tab:extra-tidal}
\begin{adjustbox}{width=2.0\columnwidth,center}
\begin{tabular}{|ccccc|}
\hline
Center (RA, Dec) & Number of field stars & Total number of stars in the region & Scaled number of field stars$^!$ & Significance$^\#$ \\
\hline
\hline
NGC~6397 & 120 & 1789728 & &\\
\hline
265, -35 & 15 & 2497841 & 11 & 32.9\\
265, -70 & 34 & 617114 & 99 & 14.7\\
270, -54 & 82 & 610438 & 241 & High contamination\\
250, -54 & 16 & 3789764 & 8 & 39.6\\
\hline
\hline
NGC~2808 & 126 & 101782 & &\\
\hline
138, -60 & 99 & 183933 & 58 & 8.9\\
138, -70 & 75 & 67880 & 113 & 1.2\\
143, -65 & 77 & 202611 & 39 & 13.9\\
133, -65 & 106 & 62596 & 173 & High contamination\\
\hline
\hline
NGC~6266 & 107 & 159567 & &\\
\hline
255, -25 & 93 & 103460 & 143 & High contamination\\
255, -35 & 51 & 125075 & 65 & 5.2\\
250, -30 & 72 & 75295 & 152 & High contamination\\
260, -30 & 77 & 144520 & 85 & 2.4\\
\hline
\end{tabular}
\end{adjustbox}
\tablefoot{!-$\text{Scaled number of field stars} (N_{\rm field})=\text{Number of field stars in the region}\times\dfrac{\text{Total number of stars around the cluster}}{\text{Total number of stars in the region}}$ ; \#-$\text{Significance}=(N_{\rm extra-tidal} - N_{\rm field})/\sqrt{N_{\rm field}}$, where $N_{\rm extra-tidal}$ is number of extratidal stars around the cluster.}
\end{table*}
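The density scaling and the significance defined in the footnote of Table~\ref{tab:extra-tidal} can be reproduced directly; as a sketch (function names illustrative), for the first NGC 6397 comparison field:

```python
import math

def scaled_field(n_field, n_total_cluster_region, n_total_field_region):
    """Field count rescaled by the relative stellar density of the two
    regions (the '!' footnote of the comparison-field table)."""
    return n_field * n_total_cluster_region / n_total_field_region

def significance(n_extratidal, n_field):
    """The '#' footnote: candidate excess in units of Poisson noise."""
    return (n_extratidal - n_field) / math.sqrt(n_field)

# First NGC 6397 comparison field, (RA, Dec) = (265, -35):
n_scaled = round(scaled_field(15, 1789728, 2497841))
print(n_scaled)                               # 11
print(round(significance(120, n_scaled), 1))  # 32.9
```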
We conclude that the sets of potential extratidal star candidates around NGC 6397 and NGC 2808 include a low number of contaminants from the Galactic population and, therefore, our detections of extratidal stars around these clusters represent real overdensities over the field population. However, the expected number of MW stars toward NGC 6266 is larger, given its location towards the dense region of the Galactic bulge, suggesting that our potential extratidal stars are a mixture of truly extratidal cluster stars and stars from this Galactic population. Future spectroscopic follow-up observations will help identify the true members of NGC 6266 through the radial velocities and spectroscopic metallicities of the individual candidates.
\subsection{Extra-tidal RR Lyrae variable stars around NGC 6266}
\label{RRL}
The field around NGC 6266 suffers from very high contamination from field stars; hence, we decided to further probe the extratidal region of the cluster using RR Lyrae stars from the Optical Gravitational Lensing Experiment (OGLE) database. RR Lyrae variable stars are excellent standard candles and tracers of extratidal debris around GCs. Recently, \citet{ogle2019} published their new catalog\footnote{\url{ftp://ftp.astrouw.edu.pl/ogle/ogle4/OCVS/blg/rrlyr/}} of 78350 RR Lyrae variable stars, including some variables in the region around NGC 6266. We applied the same criteria as discussed above, but selected ab-type RR Lyrae stars whose PM matches the cluster mean PM within three sigma. We relaxed this cut to include more candidates, since possible contaminants can instead be removed on the basis of the CMD: RR Lyrae stars should be located at the horizontal-branch level of the cluster's CMD. After applying the PM, $r_t$, and CMD criteria, we are left with 11 extratidal RR Lyrae variable stars. The position of these extratidal RR Lyrae stars on the extinction-corrected CMD is shown in pink in Figure~\ref{fig:f1}, and their spatial distribution is shown in Figure~\ref{fig:f2} (diamond symbols). Overall, 9 out of the 11 extratidal RR Lyrae stars lie outside the $r_J$ of the cluster. The presence of these extratidal RR Lyrae variable stars and their spatial distribution is another, independent piece of evidence for the existence of extratidal stars around NGC 6266. It is worth mentioning that these RR Lyrae stars are located much further from the cluster center than the excess of RR Lyrae reported in \citet{dante18}.
\section{Orbits of the clusters}
\label{sec3}
We successfully identified potential extratidal candidates around NGC~6397, NGC~2808, and NGC~6266. To place their extratidal features in context, we computed the orbit of each cluster. For this purpose, we used the {\texttt{GravPot16}} code\footnote{https://gravpot.utinam.cnrs.fr}, which employs an (as far as possible) physical and realistic ``boxy/peanut'' bar model of the Galaxy along with other stellar components \citep[see, e.g.,][]{jose2020}. We carried out a backward integration (over 3 Gyr) of an ensemble of one million orbits per cluster, adopting the same Galactic configurations and Monte Carlo approach as described in \citet{jose2020}.
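The \texttt{GravPot16} machinery itself is beyond the scope of a short snippet, but the backward orbit integration it performs can be illustrated with a minimal leapfrog integrator in a toy logarithmic (flat rotation curve) potential; the potential, its amplitude, and the step size below are illustrative assumptions, not the \texttt{GravPot16} model:

```python
import numpy as np

V0 = 220.0  # km/s, flat rotation-curve amplitude (toy value)

def accel(pos):
    """Acceleration in a logarithmic potential Phi = V0^2 ln(r) (toy model)."""
    r2 = np.dot(pos, pos)
    return -V0**2 * pos / r2

def integrate(pos, vel, dt, n_steps):
    """Leapfrog (kick-drift-kick); a negative dt integrates backwards in time."""
    pos, vel = pos.copy(), vel.copy()
    for _ in range(n_steps):
        vel += 0.5 * dt * accel(pos)
        pos += dt * vel
        vel += 0.5 * dt * accel(pos)
    return pos, vel

# Circular orbit at 8 kpc (speed V0): the radius should stay ~constant
# even when integrated backwards (positions in kpc, velocities in km/s).
p, v = integrate(np.array([8.0, 0.0]), np.array([0.0, V0]), dt=-1e-4, n_steps=20000)
print(np.linalg.norm(p))  # ~8.0
```

A Monte Carlo ensemble, as used here, simply repeats such an integration with initial conditions drawn from the observational uncertainties of each cluster.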
\\
Table~\ref{Table2} lists the input parameters used in our analysis along with the results. This analysis reveals that the three clusters have highly eccentric orbits, $e = 0.59$--$0.94$, with perigalactic and apogalactic distances in the ranges $r_{\rm peri} \sim 0.38$--$1.98$ kpc and $r_{\rm apo} \sim 2.88$--$14.48$ kpc, respectively. The clusters exhibit vertical excursions above the Galactic plane no larger than 3.73 kpc, indicating that they may still be interacting with the disk and suggesting that there could be signatures of extratidal material around these clusters, as already noted for other GCs near the Galactic plane such as NGC~6535 and NGC~6254 \citep[see, e.g.,][]{Leon00}.
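The quoted eccentricities are consistent with the standard definition based on the peri- and apogalactic distances, $e = (r_{\rm apo} - r_{\rm peri})/(r_{\rm apo} + r_{\rm peri})$, as can be checked against the median values in Table~\ref{Table2}:

```python
def eccentricity(r_peri, r_apo):
    """Orbital eccentricity from peri- and apogalactic distances."""
    return (r_apo - r_peri) / (r_apo + r_peri)

# Median values from the orbital-parameters table (kpc):
print(round(eccentricity(0.42, 14.48), 2))  # NGC 2808 -> 0.94
print(round(eccentricity(1.98, 7.77), 2))   # NGC 6397 -> 0.59
```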
\\
Given that the angular momentum is not a conserved quantity in a model with non-axisymmetric components (e.g., a bar structure), we list both the minimum and maximum $z$-component of the angular momentum ($L_z$) in Table~\ref{Table2}. Our simulations reveal that all three clusters have prograde orbits ($L_{z,\rm min}, L_{z,\rm max} < 0$ in our convention) with respect to the direction of Galactic rotation.
\\
Following the same methodology as in \citet{jose2020}, we classify the GCs into specific Galactic components. This classification is based on the location of each cluster in the plane of characteristic orbital energy, $(E_{\rm max}+E_{\rm min})/2$, versus orbital Jacobi constant ($E_J$). The plane is divided into three regions corresponding to the disk, stellar halo, and bulge/bar populations \citep[see Fig.~3 in][]{jose2020}, and the position of a cluster in this diagram gives its membership probability for each Galactic component. Table~\ref{Table2} reveals that NGC 6397 shows a high probability ($>90$\%) of belonging to the thick-disk component, whereas NGC 2808 has a very high probability ($>99$\%) of belonging to the inner-halo component, and NGC 6266 lies on the boundary between two Galactic components, with similar probabilities of being part of the bulge/bar ($<48$\%) and the inner disk ($\sim$51\%) for the three pattern speeds of the bar. However, \citet{Perez-Villegas2020}, adopting a simpler Galactic model for the MW, recently classified NGC~6266 with a high probability ($>97$\%) of belonging to the bulge/bar component, indicating that its orbital classification depends strongly on the choice of gravitational potential assumed for the MW and on the adopted observational parameters.
\\
It is worth mentioning that our orbital classification can be compared to that of other Galactic GCs, including both those formed in situ and those formed in different progenitors and accreted later \citep[see, e.g.,][]{Massari19}. Considering their origin can help us assess the level to which the possible extratidal star candidates could be contributing to the stellar populations of the inner Galaxy. Based on $(E_{\rm max}+E_{\rm min})/2$ and $E_J$, as envisioned by \citet{Moreno2015} and \citet{jose2020}, and on the orbital elements of known GCs and their associated origins according to \citet{Massari19}, we can say that NGC~2808 has an orbital energy consistent with GCs associated with the Gaia-Enceladus-Sausage \citep{Massari19}. This association agrees with our classification of an inner-halo cluster related to merger events experienced by the MW in early epochs. Therefore, the evidence of extratidal material beyond its tidal radius can give us insights into the formation of the inner stellar halo. In the $(E_{\rm max}+E_{\rm min})/2$ versus $E_J$ diagram, NGC~6397 occupies the locus dominated by GCs in the main disk and NGC~6266 lies in the group of GCs associated with the main bulge; therefore, any further evidence for extratidal material around these two clusters could provide important clues for disentangling the origin of the chemically anomalous stars identified recently towards the bulge and inner disk \citep[see, e.g.,][]{jose16, Schiavon17, Fernandez-Trincado2017c, jose19a, jose19h, jose19c, jose19halo}.
\begin{sidewaystable*}
\begin{tiny}
\setlength{\tabcolsep}{0.5mm}
\caption{\textit{Lines 1--5:} Basic parameters of the selected GCs. \textit{Lines 6--17:} Main orbital parameters of the GCs analyzed in this study. The numbers inside parentheses indicate the sensitivity of the orbital elements to the angular velocity of the bar ($\Omega_{\rm bar}$), computed as the standard deviation of each orbital element over three different bar pattern speeds, $\Omega_{\rm bar} = 33$, 43, and 53 km s$^{-1}$ kpc$^{-1}$. \textit{Lines 18--22:} Membership probabilities for the different bar pattern speeds adopted.
}
\centering
\begin{tabular}{|l | c | c | c | c | c | c | c | c | c |}
\hline
Cluster Ids& RA & Dec & $\mu_{\alpha}\cos(\delta)$ & $\mu_{\delta}$ & $RV$ & d & $\Delta \mu_{\alpha}\cos(\delta)$ & $\Delta \mu_{\delta}$ & $ \Delta RV$ \\
& (deg.) & (deg.) & (mas yr$^{-1}$) & (mas yr$^{-1}$) & (km s$^{-1}$) & (kpc) & (mas yr$^{-1}$) & (mas yr$^{-1}$) & (km s$^{-1}$) \\
\hline
\hline
NGC~2808 & 138.01 & $-$64.86 & 1.01 & 0.27 & 103.57 & 9.60 & 0.05 & 0.05 & 0.27 \\
$\dagger$ & & & 0.58 & 2.06 & 101.60 & 9.60 & 0.45 & 0.46 & 0.70 \\
$\dagger\dagger$ & & & 1.02 & 0.28 & 103.57 & 10.21 & 0.01 & 0.01 & 0.27 \\
NGC~6397 & 265.17 & $-$53.67 & 3.28 & $-$17.60 & 18.51 & 2.30 & 0.04 & 0.04 & 0.08 \\
$\dagger$ & & & 3.69 & $-$14.88 & 18.80 & 2.30 & 0.29 & 0.26 & 0.10 \\
$\dagger\dagger$ & & & 3.3 & $-$17.60 & 18.51 & 2.44 & 0.01 & 0.01 & 0.08 \\
NGC~6266 & 255.30 & $-$30.11 & $-$5.05 & $-$2.95 & $-$73.98 & 6.80 & 0.06 & 0.06 & 0.67 \\
$ \dagger$ & & & $-$3.50 & $-$0.82 & $-$70.10 & 6.80 & 0.37 & 0.37 & 1.40 \\
$\dagger\dagger$ & & & $-$4.99 & $-$2.95 & $-$73.98 & 6.41 & 0.02 & 0.02 & 0.67 \\
$*$ & 255.31 & $-$30.11 & $-$5.06 & $-$2.98 & $-$73.49 & 6.41 & 0.07 & 0.07 & 0.70 \\
\hline
\hline
Cluster Ids & r$_{\rm peri}$ & r$_{\rm apo}$ & Eccentricity ($e$) & Z$_{\rm max}$ & L$_{\rm z,min}$ & L$_{\rm z,max}$ & $E_{J}$ & $E_{\rm char}$ & Orbit \\
 & (kpc) & (kpc) & & (kpc) & (10$^{1}$ km s$^{-1}$ kpc) & (10$^{1}$ km s$^{-1}$ kpc) & (10$^{5}$ km$^{2}$ s$^{-2}$) & (10$^{5}$ km$^{2}$ s$^{-2}$) & \\
\hline
\hline
NGC~2808 & 0.42 $\pm$ 0.08 ( 0.04) & 14.48 $\pm$ 0.11 ( 0.03) & 0.94 $\pm$ 0.01 ( 0.005) & 3.00 $\pm$ 0.10 ( 0.07) & $-$36.0 $\pm$ 1.5 ( 1.25) & $-$16.0 $\pm$ 5.0 ( 2.94) & $-$1.77 $\pm$ 0.010 (0.03) & $-$1.66 $\pm$ 0.02 ( 0.01) & Prograde \\
$\dagger$ & 2.27 & 10.74 & 0.649 & 2.39 & ... & ... & ... & ... & ... \\
$\dagger\dagger$ & 0.97 $\pm$ 0.02 & 14.76 $\pm$ 0.13 & ... & ... & ... & ... & ... & ... & ... \\
\hline
NGC~6397 & 1.98 $\pm$ 0.07 ( 0.51) & 7.77 $\pm$ 0.04 ( 0.59) & 0.59 $\pm$ 0.01 ( 0.10) & 3.73 $\pm$ 0.07 ( 0.12) & $-$102.0 $\pm$ 1.5 (11.81) & $-$56.0 $\pm$ 1.5 (15.11) & $-$2.28 $\pm$ 0.004 (0.06) & $-$1.97 $\pm$ 0.002 (0.05) & Prograde \\
$\dagger$ & 2.53 & 5.12 & 0.34 & 1.46 & ... & ... & ... & ... & ... \\
$\dagger\dagger$ & 2.63 $\pm$ 0.03 & 6.23 $\pm$ 0.02 & ... & ... & ... & ... & ... & ... & ... \\
\hline
NGC~6266 & 0.38 $\pm$ 0.17 ( 0.08) & 2.88 $\pm$ 0.14 ( 0.04) & 0.76 $\pm$ 0.09 ( 0.04) & 1.01 $\pm$ 0.14 ( 0.04) & $-$43.0 $\pm$ 2.0 ( 1.25) & $-$9.0 $\pm$ 4.0 ( 2.16) & $-$2.62 $\pm$ 0.01 (0.02) & $-$2.48 $\pm$ 0.02 (0.01) & Prograde \\
$\dagger$ & 1.52 & 2.63 & 0.28 & 0.83 & ... & ... & ... & ... & ... \\
$\dagger\dagger$ & 0.83 $\pm$ 0.07 & 2.36 $\pm$ 0.09 & ... & ... & ... & ... & ... & ... & ... \\
$ *$ & 0.35 $\pm$ 0.16 & 2.82 $\pm$ 0.16 & 0.79 $\pm$ 0.09 & 1.10 $\pm$ 0.14 & ... & ... & ... & ... & ... \\
\hline
\hline
$ \Omega_{\rm bar} = $ & $33$ km s$^{-1}$ kpc$^{-1}$ & $33$ km s$^{-1}$ kpc$^{-1}$ & $33$ km s$^{-1}$ kpc$^{-1}$ & $43$ km s$^{-1}$ kpc$^{-1}$ & $43$ km s$^{-1}$ kpc$^{-1}$ & $43$ km s$^{-1}$ kpc$^{-1}$ & $53$ km s$^{-1}$ kpc$^{-1}$ & $53$ km s$^{-1}$ kpc$^{-1}$ & $53$ km s$^{-1}$ kpc$^{-1}$ \\
Cluster Ids & Bulge/Bar & Disk & Stellar Halo & Bulge/Bar & Disk & Stellar Halo & Bulge/Bar & Disk & Stellar Halo \\
& \% & \% & \% & \% & \% & \% & \% & \% & \% \\
\hline
NGC~2808 & 0.00 & 0.65 & 99.35 & 0.00 & 1.30 & 98.70 & 0.00 & 2.62 & 97.38 \\
NGC~6397 & 0.01 & 90.62 & 9.36 & 0.03 & 95.39 & 4.57 & 0.24 & 97.23 & 2.52 \\
NGC~6266 & 45.07 & 54.60 & 0.33 & 47.19 & 52.52 & 0.29 & 48.25 & 51.49 & 0.26 \\
\hline
\end{tabular}
\label{Table2}
\tablefoot{$\dagger$\citet{Moreno14}; $\dagger\dagger$\citet{Baumgardt2019}; $*$\citet{Perez-Villegas2020}; input parameters employed in this study were taken from \citet{GC}. \citet{Moreno14} and \citet{Baumgardt2019} used $\Omega_{\rm bar}= 55$ km s$^{-1}$ kpc$^{-1}$ and \citet{Perez-Villegas2020} used $\Omega_{\rm bar}=45$ km s$^{-1}$ kpc$^{-1}$.}
\end{tiny}
\end{sidewaystable*}
\section{Discussion and conclusions}
\label{sec5}
We examined the outermost regions of the Galactic GCs NGC~6397, NGC~2808, and NGC~6266 in our search for evidence of extratidal features in the \textit{Gaia} DR2 database. We identified potential extratidal star candidates toward NGC~2808 and NGC~6397, while some possible extratidal signatures seem to be present around NGC~6266. The high reddening ($E(B-V)=0.47$ mag, H96) and high field density, along with a PM comparable to that of foreground and background Galactic stars, make it difficult to identify extratidal features around this cluster with a high level of confidence. This study yields 120, 126, and 107 extratidal candidates associated with NGC~6397, NGC~2808, and NGC~6266, respectively. These extratidal stars are statistically significant over the field stars in their regions of the sky. Our results for each cluster are summarized as follows:
\\
{\bf NGC~6397:} Our result seems to be in good agreement with \citet{Leon00}, who reported tidal tails for this cluster, although dust extinction prevented those authors from further exploring their distribution and extent. The cluster has an eccentric ($e \sim 0.59$) prograde orbit with vertical excursions above the Galactic plane no larger than 3.73 kpc, a very likely orbit confined to the disk population, crossing the Galactic plane every $\sim$0.12 Gyr. The extratidal star candidates could thus be the effect of the shocks experienced by the cluster in the disk on a short timescale. The extratidal stars of the cluster are asymmetrically distributed around its orbit (see top panel in Figure~\ref{fig:f2}), forming a spiral-like structure from the north-east to the south-west. These spiral arms can be seen in both the leading and trailing regions around the cluster, resembling the S-shaped structure that is considered a characteristic feature of tidal disruption \citep{ray17,ray19}. Moreover, we found a high density of stars satisfying our selection criteria along the direction of the past orbit of the cluster. The shape of the extratidal distribution in the cluster's vicinity points to tidal disruption, while the cluster's orbit and the high density of stars in the direction opposite to the cluster motion indicate that a few of the stars may result from disk shocking. Hence, the features and overdensities around the cluster can be the result of a combined effect of tidal disruption and disk shocking.
\\
{\bf NGC~2808:} Most of the stars selected in our study clearly lie on or near the prominent sub-giant branch and horizontal branch of the cluster. Our dynamical analysis shows that this cluster lies on a halo-like orbit; therefore, any signature of extratidal features could help improve our understanding of the origin of the stellar properties and content of the inner halo of the MW. We also find that NGC~2808 crosses the disk every $\sim$0.20 Gyr, which could explain the asymmetric distribution of the potential extratidal candidates (seen in Figure~\ref{fig:f2}), which exhibit a high stellar density in the trailing region of the cluster with some misalignment with respect to its orbit, in good agreement with previous works \citep[see, e.g.,][]{julio17}. Recently, \citet{Sollima20} studied the presence of tidal tails around several GCs, including NGC 2808, and did not find any coherent tidal tail structure for this cluster beyond 1.5 times the $r_J$. Our study covers up to 1.4 times the $r_J$ (a radius of 1.5 degrees from the cluster center) and, therefore, our findings cannot be directly compared to the lack of detections in \citet{Sollima20}.
\\
{\bf NGC~6266:} Most of the 107 extratidal candidates identified around NGC~6266 follow the red giant branch (RGB) of the cluster. The contamination analysis of the region reveals that the cluster may have tidal tails on its northern and eastern sides. A similar distribution of stars was also found by \citet{Chun15}. The eastern overdensity found in our analysis is in the trailing part of the cluster. Our dynamical study also reveals that NGC~6266 crosses the Galactic plane every $\sim$0.04 Gyr in the inner Galaxy. The extratidal stars are symmetrically distributed around the cluster, resembling the shape of an extended stellar envelope \citep{kuzma16, kuzma17}. This extended stellar halo is in agreement with the results of \citet{Gieles11}, which place this cluster in the expansion-dominated phase, in which internal relaxation is the main mechanism producing extratidal stars. Based on the orbit and the contamination analysis, some of the extratidal stars at larger projected distances from the cluster center can be the result of a recent disk shock. Although a high fraction of contaminants is expected in our final sample, this is the best sample of possible extratidal features identified so far, and it provides motivation for a future spectroscopic follow-up study to confirm or refute cluster membership, in particular towards the inner Galaxy, where pieces are still missing in our understanding of the origin of some unusual stars in the inner stellar halo at the metallicity of NGC~6266.
\\
\citet{Ernst13} used both observations and simulations to conclude that most of the GCs in the MW under-fill their Roche lobes, with a mean ratio $r_t/r_J$ of 0.48. This is also the case for NGC 6397, NGC 2808, and NGC 6266, which have $r_t/r_J$ of 0.59, 0.25, and 0.38, respectively. Therefore, a star beyond the adopted $r_t$ may still be bound to the cluster, and its presence does not necessarily mean that the cluster is under disruption. To isolate the extratidal stars that are fully detached from the cluster, and hence a possible disruption process, we used $r_J$. According to \citet{Kupper10}, most stars lying beyond 50\% of the $r_J$ of a given cluster are energetically unbound, while beyond 70\% of the $r_J$ almost all stars are detached from the cluster. {Hence, stars lying outside the $r_t$ but inside the $r_J$ can be termed potential escapers, and stars situated outside the $r_J$ are fully detached from their clusters.} The density plots (Figure~\ref{fig:f2}) show that the selected extratidal stars are located both inside and outside the $r_J$ for all the clusters. For NGC 6397, out of the 120 extratidal stars identified in this work, 84 are outside its $r_J$; hence, about 70\% of the candidates are fully unbound from the cluster. Similarly, for NGC 2808 and NGC 6266, 25.4\% and 72.9\% of the stars lie outside the $r_J$, respectively. The stars outside the $r_J$ are fully detached from the cluster, while the stars inside the $r_J$ but outside the $r_t$ of each cluster have a higher probability of being pulled beyond the $r_J$ by the gravitational field of the Galaxy.
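The classification implied above (bound region inside $r_t$, potential escapers between $r_t$ and $r_J$, detached stars beyond $r_J$) can be sketched as a simple function of the projected distance from the cluster center; the NGC~6397 numbers below use $r_J = 75.6$ arcmin and the $r_t/r_J = 0.59$ ratio quoted in the text, while the test radii are hypothetical:

```python
def classify(r, r_t, r_j):
    """Classify a projected distance r from the cluster center following
    the scheme in the text: bound region, potential escaper, or detached."""
    if r <= r_t:
        return "within tidal radius"
    if r <= r_j:
        return "potential escaper"   # outside r_t but still inside r_J
    return "detached"                # beyond r_J: fully unbound

# NGC 6397: r_J = 75.6 arcmin and r_t = 0.59 * r_J.
r_j = 75.6
r_t = 0.59 * r_j
print(classify(30.0, r_t, r_j))   # within tidal radius
print(classify(60.0, r_t, r_j))   # potential escaper
print(classify(100.0, r_t, r_j))  # detached
```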
\\
Based on the distribution of extratidal stars, we found that NGC 6397 and NGC 2808 most likely suffer from disk shocks and tidal disruption. This is not completely consistent with the position of these clusters in the survivability diagram of \citet{gnedin97}: NGC 6397 lies outside the survivability boundary, in the region where the ``lucky survivor'' GCs reside, while NGC 2808 is in the middle of the diagram, being able to survive another Hubble time. If internal relaxation were the main mechanism of disruption, we would expect to see an extended stellar envelope around these clusters, which, in the case of NGC 2808, we may not have recovered because of our cuts in magnitude. For NGC 6397, however, we searched for stars up to the main sequence and our results do not support a scenario in which this cluster would be surrounded by a stellar envelope. In the case of NGC 6266, the distribution of extratidal stars is homogeneous and resembles an extended stellar halo around the cluster. This is in agreement with the position of the cluster in the diagram of \citet{gnedin97}, in which this cluster is at the edge of the survivability boundary, being affected by internal relaxation and bulge and disk shocks. Radial velocity measurements are therefore needed to confirm the extratidal star candidates found around these clusters and to determine the disruption mechanisms that are producing the overdensities recovered in this work.
\\
The three selected clusters lie in regions of the Galaxy characterized by different environments and different extinction values. Our analysis shows that if the cluster stars are well separated from the field stars in the PM plane, then, using basic photometric data with only a small dependence on the parallaxes, we are able to extract possible extratidal stars for all the clusters. Our techniques provide the best sample of possible extratidal stars obtainable from basic photometric and astrometric observations. The use of PMs and CMDs minimizes the level of foreground and background contamination in regions where accurate distances to the stars are not available. We cross-matched our sample of extratidal stars with the galaxy and quasar catalog of \citet{Coryn19}. We did not find any match, indicating that our data are free from obvious background contamination by such sources. Hence, in a future work, we plan to apply the same techniques to most of the Galactic GCs in order to study the 3D spatial distribution of clusters that present evidence of extratidal stars, with the aim of shedding light on the gravitational potential of our Galaxy.
\begin{acknowledgements}
We thank the anonymous referee for a useful report that helped to improve this paper. We are grateful to Julio Carballo-Bello for useful discussions about the disruption processes in these three globular clusters. R.~K. and D.~M. are very grateful for the hospitality of the Vatican Observatory, where this collaboration was started. J.~G.~F-T is supported by FONDECYT No. 3180210. D.~M. gratefully acknowledges support provided by the BASAL Center for Astrophysics and Associated Technologies (CATA) through grant AFB 170002, and from project FONDECYT No. 1170121. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The purpose of this paper
is to prove the following theorem
and explain its context.
\begin{theorem}\label{twoconj}
Suppose $\mathcal{R}$ is a dense subring of $\mathbb{R}$,
$A$ is a primitive matrix over $\mathcal{R}$ and
$B$ is a matrix over $\mathcal{R}$ which is shift equivalent
over $\mathcal{R}$ to $A$.
Then $B$ is strong shift equivalent over $\mathcal{R}$ to a primitive matrix.
\end{theorem}
We begin with the context. By a ring, we mean a ring with 1;
by a semiring, we mean a semiring containing $\{0,1\}$.
A primitive matrix is a square matrix
which is nonnegative (meaning
entrywise nonnegative) such that for some $k>0$ its $k$th power
is a positive matrix.
Definitions and more background for shift equivalence (SE) and
strong shift equivalence (SSE) are given in Section
\ref{sesect}.
We recall the Spectral Conjecture
for primitive matrices from \cite{BH91}.
In the statement,
$\Delta = (d_1, \dots ,d_k)$ is a
$k$-tuple of nonzero complex numbers.
$\Delta$ is the
{\it nonzero spectrum} of a matrix $A$ if $A$ has
characteristic polynomial of the form
$\chi_A(t)=t^m\prod_{1\leq i \leq k}(t-d_i)$.
$\Delta $ has a {\it Perron value} if there exists $i$
such that $d_i>|d_j|$ when $j\neq i$.
The
{\it trace} of $\Delta$ is
$\textnormal{tr}(\Delta ) =
d_1 + \cdots + d_k$.
$\Delta^n$ denotes $((d_1)^n, \dots ,(d_k)^n)$,
the tuple of $n$th powers;
and the {\it $n$th net trace} of $\Delta$ is
\[
\textnormal{tr}_n(\Delta ) = \sum_{d|n}\mu (n/d) \textnormal{tr}(\Delta ^d)
\]
in which $\mu$ is the M\"{o}bius function
($\mu (1) =1$; $\mu (n)= (-1)^r$ if $n$ is the product of $r$
distinct primes; $\mu (n) = 0$ if $n$ is divisible by the square
of a prime).
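As a quick arithmetic illustration of the net trace formula (our own sketch, with hypothetical helper names), the following Python snippet computes $\textnormal{tr}_n(\Delta)$ for $\Delta = (2)$, the nonzero spectrum of the $1\times 1$ primitive matrix $(2)$; the resulting values count the points of least period $n$ in the full 2-shift.

```python
def mobius(n):
    """Mobius function: 1 at n=1, (-1)^r for squarefree n with r prime
    factors, 0 if n is divisible by the square of a prime."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def tr(delta, power=1):
    """Trace of Delta^power: the sum of the power-th powers of the entries."""
    return sum(d ** power for d in delta)

def net_trace(delta, n):
    """n-th net trace: sum over divisors d of n of mobius(n/d) * tr(Delta^d)."""
    return sum(mobius(n // d) * tr(delta, d)
               for d in range(1, n + 1) if n % d == 0)

print([net_trace((2,), n) for n in range(1, 5)])   # [2, 2, 6, 12]
```

All of these values are nonnegative, as condition (3) of the Spectral Conjecture requires for $\mathcal{R}=\mathbb{Z}$.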
\begin{spec} \cite{BH91}
Let $\mathcal{R}$ be a subring of $\mathbb{R}$. Then $\Delta$ is the nonzero spectrum
of some primitive matrix over $\mathcal{R}$ if and only if the following
conditions hold:
\begin{enumerate}
\item
$\Delta$ has a Perron value.
\item The coefficients of the polynomial
$\prod_{i=1}^k(t-d_i)$ lie in $\mathcal{R}$.
\item
If $\mathcal{R} = \mathbb{Z}$, then for all positive integers $n$,
$\textnormal{tr}_n(\Delta )\geq 0 $; \\
if $\mathcal{R} \neq \mathbb{Z}$, then for all positive integers $n$ and $k$,
\\ (i) $\textnormal{tr}(\Delta^n) \geq 0 $ and
(ii) $\textnormal{tr}(\Delta^n) > 0 $ implies
$\textnormal{tr}(\Delta^{nk}) > 0 $.
\end{enumerate}
\end{spec}
It is not difficult to check that the nonzero spectrum
of a primitive matrix satisfies the three conditions \cite{BH91}.
(We remark that, following \cite{JohnsonLaffeyLoewy1996},
it is known that the nonzero spectra
of symmetric primitive matrices
cannot have such a
simple characterization.)
To understand the possible spectra of nonnegative
matrices is a classical problem of linear algebra
(for early background see e.g. \cite{BH91}) on which
interesting progress continues (see e.g.
\cite{EllardSmigoc2013,
Laffey2012BHTheorem,
LLS2009, LLS13} and their references). Understanding
the nonzero spectra of primitive matrices is a variant of
this problem and also an approach to it: to know the minimal size
of a primitive matrix with a prescribed nonzero spectrum is to solve the
classical problem (for details, see \cite{BH91}); and it is in
the primitive case that the Perron-Frobenius constraints
manifest most simply.
Finally, as the spectra of matrices over various subrings
of $\mathbb{R}$ appear in applications,
in which the nonzero part of the spectrum is sometimes
the relevant part \cite{Boyle91matrices, BH91}, it is natural to consider
the nonzero spectra of matrices over arbitrary subrings of
$\mathbb{R}$.
The Spectral Conjecture has been proved in enough cases that
it seems almost certain to be true in general.
For example, it is true under any of the following conditions:
\begin{itemize}
\item
The Perron value of $\Delta$ is in $\mathcal{R}$ (this always
holds when $\mathcal{R}=\mathbb{R}$) or is a quadratic
integer over $\mathcal{R}$ \cite{BH91}.
\item
$\textnormal{tr}(\Delta )> 0$ \cite[Appendix 4]{BH91}.
\item
$\mathcal{R} =\mathbb{Z}\textnormal{ or } \mathbb Q$ \cite{S8}.
\end{itemize}
The general proofs in
\cite{BH91} do not give even remotely effective general bounds on the
size of a primitive matrix realizing a given nonzero spectrum.
The methods used in \cite{S8} for the case
$\mathcal{R}=\mathbb{Z}$ are much more tractable but still very complicated.
However, there is now an elegant construction of Tom Laffey
\cite{Laffey2012BHTheorem}
which proves the conjecture for $\mathcal{R}=\mathbb{R}$ in the
central special case of positive
trace, and in some other cases; where it applies,
the construction provides meaningful
bounds on the size of the realizing matrix in terms
of the spectral gap.
The nonzero spectrum of a matrix is a \lq\lq stable\rq\rq\ or
\lq\lq eventual\rq\rq\ invariant of a matrix. For a matrix over a field,
an obvious finer invariant is the isomorphism class
of the nonnilpotent part of its action as a linear transformation.
The classification of matrices over a field by this invariant
is the same as the classification up to shift equivalence
over the field; for matrices over general rings,
from the module viewpoint (see Sec.~\ref{sesect}),
shift equivalence
is the natural generalization
of the isomorphism class of this nonnilpotent linear transformation.
For some rings, an even finer invariant
is the strong shift equivalence class. The
Generalized Spectral Conjecture of Boyle and Handelman
(in both forms below) heuristically is
saying that only the obvious necessary spectral
conditions constrain the eventual algebra of a primitive
matrix over a subring of $\mathbb{R}$, regardless of the subring
under consideration.
\begin{wgenspec}
Suppose $\mathcal{R}$ is a subring of $\mathbb{R}$ and $A$ is a square matrix over
$\mathcal{R} $ whose nonzero spectrum satisfies the three necessary
conditions of the Spectral Conjecture. Then $A$ is SE over
$\mathcal{R}$ to a primitive matrix.
\end{wgenspec}
\begin{sgenspec}
Suppose $\mathcal{R}$ is a subring of $\mathbb{R}$ and $A$ is a square matrix over
$\mathcal{R} $ whose nonzero spectrum satisfies the three necessary
conditions of the Spectral Conjecture. Then $A$ is SSE over
$\mathcal{R}$ to a primitive matrix.
\end{sgenspec}
The weak form was stated in \cite[p.253]{BH91} and
\cite[p.124]{BH93}.
The strong form was stated in
\cite[Sec. 8.4]{Boyle91matrices}, along with an
explicit admission that the authors of the conjecture did not
know if the conjectures were equivalent (not knowing if
shift equivalence over a ring implies strong shift equivalence
over it).
Following \cite{BoSc1}
(see Theorem \ref{sseclassif}), we know now that the strong form
of the Generalized Spectral Conjecture was not a vacuous generalization:
there are subrings of $\mathbb{R}$ over which SE does not imply
SSE (Example \ref{badrealring}).
The results of \cite{BoSc1} also provide enough structure that
we can prove Theorem \ref{twoconj}, which shows
that the two forms
of the Generalized Spectral Conjecture are equivalent.
{\bf Note!} In contrast to the
statement of the Generalized Spectral Conjecture for
{\it primitive} matrices,
it is {\it not} the case that
the existence of a strong shift equivalence
over $\mathcal{R}$ from a matrix $A$ over $\mathcal{R}$
to a nonnegative matrix can in general be characterized
by a spectral condition on $A$.
There are dense subrings of $\mathbb{R}$
over which there are nilpotent matrices which are not SSE
to nonnegative matrices
(Remark \ref {nilpotentnonneg}).
There is some motivation from
symbolic dynamics for pursuing the zero trace case of the GSC.
The Kim-Roush and Wagoner primitive
matrix counterexamples \cite{S11, Wagoner2000} to
Williams'
conjecture SE-$\mathbb{Z}_+$ $\implies $ SSE-$\mathbb{Z}_+$ rely absolutely
on certain zero-positive patterns of traces of powers of the
given matrix. We still do not know whether the refinement of
SE-$\mathbb{Z}_+$ by SSE-$\mathbb{Z}_+$
is algorithmically undecidable or (at another extreme) if it allows some finite
description involving such sign patterns. We are looking
for any related insight.
\section{Shift equivalence and strong shift equivalence} \label{sesect}
Suppose $\mathcal{R}$ is a subset of a semiring and $\mathcal{R}$ contains $\{0,1\}$.
(For example, $\mathcal{R}$ could be $\mathbb{Z},\mathbb{Z}_+,\{0,1\},\mathbb{R},\mathbb{R}_+, \ \dots \ $)
Square matrices $A,B$ over $\mathcal{R}$ (not necessarily of the same
size) are
{\it elementary strong shift equivalent over $\mathcal{R}$} (ESSE-$\mathcal{R}$) if there exist
matrices $U,V$ over $\mathcal{R}$ such that $A=UV$ and $B=VU$.
Matrices $A,B$ are
{\it strong shift equivalent over $\mathcal{R}$} (SSE-$\mathcal{R}$) if there
are a positive integer $\ell$
(the {\it lag} of the given SSE) and
matrices $A=A_0,A_1,\dots ,A_{\ell}=B$ such that $A_{i-1}$ and $A_i$
are ESSE-$\mathcal{R}$, for $1\leq i \leq \ell$. For matrices over a
subring of $\mathbb{R}$, the relation ESSE-$\mathcal{R}$ is never transitive.
For example, if matrices
$A,B$ are ESSE over $\mathbb{R}$, $j>1$ and $A^j\neq 0$, then $B^{j-1}\neq
0$; but if $A$ is the $n\times n$ matrix such that
$A(i,i+1)=1$ for $1\leq i <n$ and $A(i,j)=0$ otherwise,
then $A$ is SSE-$\mathcal{R}$ to $(0)$.
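For a concrete instance of these definitions (an illustrative sketch of ours, not from the text), the $2\times 2$ nilpotent Jordan block is ESSE to the $1\times 1$ zero matrix:

```python
import numpy as np

# A is the 2x2 nilpotent matrix with A(1,2) = 1; B is the 1x1 zero matrix.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0]])

# The ESSE is witnessed by U (2x1) and V (1x2):
U = np.array([[1],
              [0]])
V = np.array([[0, 1]])

assert (U @ V == A).all()   # A = UV
assert (V @ U == B).all()   # B = VU
```

Iterating such reductions strips an $n\times n$ nilpotent Jordan block down to $(0)$ one row and column at a time, which gives the SSE to $(0)$ claimed above.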
Over any ring $\mathcal{R}$, the relation SSE-$\mathcal{R}$ on square matrices is
generated by similarity over $\mathcal{R}$ ($U^{-1}AU \sim A$) and
nilpotent extensions,
$
\left(\begin{smallmatrix} A&X\\0&0
\end{smallmatrix}\right) \sim A
\sim
\left(\begin{smallmatrix} 0&X\\0&A
\end{smallmatrix}\right) $
\cite{MallerShub1985}.
Square matrices $A,B$ over $\mathcal{R}$ are
{\it shift equivalent over $\mathcal{R}$} (SE-$\mathcal{R}$)
if there exist a positive integer $\ell$ and
matrices $U,V$ over $\mathcal{R}$ such that the following hold:
\begin{align*}
A^{\ell} &= UV \qquad B^{\ell}=VU \\
AU &= UB \qquad BV = VA \ .
\end{align*}
Here $\ell$ is the {\it lag} of the given SE.
It is always the case that SSE over $\mathcal{R}$ implies SE over $\mathcal{R}$:
from a given lag $\ell$ SSE one easily creates a lag $\ell$ SE \cite{Williams73}.
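The lag-1 case of this implication can be checked directly: if $A=UV$ and $B=VU$, then the same $U,V$ witness a lag-1 SE, since $(UV)U=U(VU)$ and $(VU)V=V(UV)$. A numerical sanity check with arbitrary integer factors (our sketch, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.integers(0, 5, size=(3, 2))
V = rng.integers(0, 5, size=(2, 3))
A, B = U @ V, V @ U              # A and B are ESSE by construction

# The four lag-1 shift equivalence equations hold for the same U, V:
assert (A == U @ V).all() and (B == V @ U).all()
assert (A @ U == U @ B).all()    # AU = UB
assert (B @ V == V @ A).all()    # BV = VA
```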
For certain semirings $\mathcal{R}$, including above all $\mathcal{R}=\mathbb{Z}_+$, the relations of
SSE and SE over $\mathcal{R}$ are significant for symbolic dynamics.
The relations were introduced by Williams for the cases
$\mathcal{R}=\mathbb{Z}_+$ and $\mathcal{R}=\{0,1\}$ to study the classification of
shifts of finite type.
Matrices over $\mathbb Z_+$ are SSE over $\mathbb{Z}_+$ if and only if
they define topologically conjugate shifts of finite type.
However, the relation SSE-$\mathbb{Z}_+$ to this day remains
mysterious and is not even known to be decidable.
In contrast, SE-$\mathbb{Z}_+$
is a tractable, decidable, useful and very strong invariant of SSE-$\mathbb{Z}_+$.
Suppose now $\mathcal{R}$ is a ring,
and $A$ is $n\times n$ over $\mathcal{R}$.
To see the shift equivalence relation SE-$\mathcal{R}$ more conceptually,
recall that the direct limit $G_A$ of $\mathcal{R}^n$ under the
$\mathcal{R}$-module homomorphism $x\mapsto Ax$
is the set of equivalence classes $[x,k]$ for $x\in \mathcal{R}^n, k\in \mathbb{Z}_+$
under the equivalence relation $[x,k]\sim [y,j]$ if there
exists $\ell >0$ such that
$A^{j+\ell}x =A^{k+\ell}y $. $G_A$ has a well defined
group structure ($[x,k]+[y,j]=[A^kx +A^jy,j+k]$) and is an
$\mathcal{R}$-module ($r: [x,k]\mapsto [xr,k]$). $A$ induces an
$\mathcal{R}$-module isomorphism $\hat A: [x,k]\mapsto [Ax,k]$
with inverse $[x,k]\mapsto [x,k+1]$. $G_A$ becomes an
$\mathcal{R} [t]$ module
(also an $\mathcal{R} [t,t^{-1}]$ module) with $t: [x,k]\mapsto [x,k+1]$.
$A$ and $B$ are SE-$\mathcal{R}$ if and only if
these $\mathcal{R} [t]$-modules are
isomorphic (equivalently, if and only if they
are isomorphic as
$\mathcal{R} [t,t^{-1}]$ modules).
If the square matrix $A$ is $n\times n$, then
$I-tA$ defines a homomorphism $\mathcal{R}^n \to \mathcal{R}^n$ by the
usual multiplication $v\mapsto (I-tA)v$, and $\text{cok}(I-tA)$
is an $\mathcal{R}[t]$-module which is isomorphic to the
$\mathcal{R}[t]$-module $G_A$. For more detail and references on
these relations (by no means original to us)
see \cite{BoSc1,LindMarcus1995}.
Williams introduced SE and SSE in the 1973 paper \cite{Williams73}.
For any principal ideal domain $\mathcal{R}$,
Effros showed SE-$\mathcal{R}$ implies SSE-$\mathcal{R}$
in the 1981 monograph \cite{Effros1981}
(see \cite{Williams1992}
for Williams' proof for the case $\mathcal{R} = \mathbb{Z}$).
In the 1993 paper \cite{BH93},
Boyle and Handelman extended this result to the case that
$\mathcal{R}$ is a Dedekind domain (or, a little more generally, a Pr{\"u}fer domain).
Otherwise, the relationship of SE and SSE of matrices
over a ring remained open until the recent
paper \cite{BoSc1}, which
explains the relationship in general as follows.
\begin{theorem} \label{sseclassif} \cite{BoSc1}
Suppose $A,B$ are SE over a ring $\mathcal{R}$.
\begin{enumerate}
\item
There is a nilpotent matrix $N$ over $\mathcal{R}$
such that $B$ is SSE over $\mathcal{R}$ to the
matrix
$A\oplus N=
\begin{pmatrix} A & 0 \\ 0 & N
\end{pmatrix} $.
\item
The map $[I-tN] \to [A \oplus N]_{SSE}$ induces a bijection from
$\textnormal{NK}_{1}(\mathcal R)$ to the set of SSE classes of matrices
over $\mathcal{R}$ which are in the SE-$\mathcal{R}$ class of $A$.
\end{enumerate}
\end{theorem}
We will say just a little now about
$\textnormal{NK}_1(\mathcal{R})$, a group of great importance in algebraic
$K$-theory;
for more background, we have found \cite{Ranickibook,Rosenberg1994, WeibelBook}
very helpful.
$\textnormal{NK}_1(\mathcal{R})$ is the kernel of the map $K_1(\mathcal{R} [t]) \to K_1(\mathcal{R} )$
induced by the ring homomorphism $\mathcal{R} [t] \to \mathcal{R} $ which
sends $t$ to $0$. The finite matrix $I-tN$ corresponds to
the matrix $I-(tN)_{\infty}$ in the group $\textnormal{GL}(\mathcal{R} [t])$
(with $I$ denoting the $\mathbb N \times \mathbb N$ identity matrix and
$(tN)_{\infty}$ the
$\mathbb N \times \mathbb N$ matrix which agrees with $tN$ in an upper left
corner and is otherwise zero). Every class of $\textnormal{NK}_1(\mathcal{R}
)$ contains a matrix of the form
$I-(tN)_{\infty}$ with $N$ nilpotent over $\mathcal{R}$.
$\textnormal{NK}_1(\mathcal{R})$ is trivial for many rings
(e.g., any field, or more generally any left regular Noetherian
ring) but not for all rings. If
$\textnormal{NK}_1(\mathcal{R})$ is not trivial, then it is not finitely generated
as a group. From the established theory, it is easy
to give an example of
a subring $\mathcal{R}$ of $\mathbb{R}$ for which $\textnormal{NK}_1(\mathcal{R})$ is not trivial
(Example \ref{badrealring}).
\section{Proof of Theorem \ref{twoconj}}
\begin{proof}[Proof of Theorem \ref{twoconj}]
Given a square matrix $M$ over $\mathbb{R}$, let $\lambda_M$ denote its
spectral radius and define the matrix $|M|$ by $|M|(i,j)= |M(i,j)|$.
By Theorem \ref{sseclassif},
let $N$ be a nilpotent matrix
such that $B$ is SSE over $\mathcal{R}$ to
the matrix
$\begin{pmatrix} A&0\\0&N
\end{pmatrix} $ .
Suppose $M$ is a matrix
SSE over $\mathcal{R}$ to $N$ and $M$ also satisfies the following
conditions:
\begin{enumerate} \label{conditions}
\item
$\lambda_{|3M|}<\lambda_A $
\item
For all positive integers $n$,
$\text{trace}(|3M|^n)\leq
\text{trace}(A^n)
$ .
\item
For all positive integers $n$ and $k$,
if $\text{tr}(|3M|^n)<\text{tr}(A^n)$,
then
$\text{tr}(|3M|^{nk})<\text{tr}(A^{nk})$.
\end{enumerate}
Then by the Submatrix Theorem (Theorem 3.1 of \cite{BH91}),
there is a primitive
matrix $C$ SSE over $\mathcal{R}$ to $A$ such that
$|3M|$ is a proper principal submatrix of $C$.
Without loss of generality, let this submatrix
occupy the upper left corner of $C$.
Define $M_0$ to be the matrix of size matching
$C$ which is $M$ in its upper left corner and which
is zero in other entries. Then $B$ is SSE over $\mathcal{R}$
to the matrix
$\begin{pmatrix} C&0\\0&M_0
\end{pmatrix}$. Choose $\epsilon \in \mathcal{R}$ such that
$1/3 < \epsilon < 2/3$ and compute
\begin{align*}
\begin{pmatrix} I &-\epsilon I\\0&I\end{pmatrix}
\begin{pmatrix} C &0\\0&M_0\end{pmatrix}
\begin{pmatrix} I &\epsilon I\\0&I\end{pmatrix}
&=
\begin{pmatrix} C &\epsilon ( C - M_0 )\\0&M_0\end{pmatrix}
\\
\begin{pmatrix} I &0\\ I&I\end{pmatrix}
\begin{pmatrix} C &\epsilon (C - M_0 )\\0&M_0\end{pmatrix}
\begin{pmatrix} I &0\\- I&I\end{pmatrix}
&=
\begin{pmatrix} (1-\epsilon )C +\epsilon M_0 &\epsilon
(C - M_0)
\\(1-\epsilon )(C-M_0 )&\
\epsilon C +(1-\epsilon )M_0 \end{pmatrix} := G\ .
\end{align*}
The matrix $G$ is SSE over $\mathcal{R}$ to $B$, and it is nonnegative.
The diagonal blocks have positive entries wherever $C$ does;
because $C$
is primitive, there is a $j>0$ such that $C^j>0$, and
therefore the
diagonal blocks of $G^j$ are also positive.
Because neither offdiagonal block of $G$ is the zero block,
it follows that $G$ is primitive.
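The two conjugations above are easy to verify numerically. The following sketch (with arbitrary illustrative matrices, not the actual $C$, $M_0$ of the proof) checks the displayed block identity for $G$, and its entrywise nonnegativity when $C$ contains $|3M|$ as a principal submatrix and $1/3<\epsilon<2/3$:

```python
import numpy as np

M = np.array([[0.1, -0.2],
              [0.05, 0.0]])          # a stand-in for the small matrix M
C = np.ones((3, 3))
C[:2, :2] = np.abs(3 * M)            # C contains |3M| as a principal submatrix
M0 = np.zeros((3, 3))
M0[:2, :2] = M                       # M padded with zeros, as in the proof
eps = 0.5                            # any eps with 1/3 < eps < 2/3

I3, Z = np.eye(3), np.zeros((3, 3))
D    = np.block([[C, Z], [Z, M0]])
E    = np.block([[I3, -eps * I3], [Z, I3]])
Einv = np.block([[I3, eps * I3], [Z, I3]])
F    = np.block([[I3, Z], [I3, I3]])
Finv = np.block([[I3, Z], [-I3, I3]])

G = F @ (E @ D @ Einv) @ Finv        # the two conjugations, composed
G_formula = np.block([[(1 - eps) * C + eps * M0, eps * (C - M0)],
                      [(1 - eps) * (C - M0), eps * C + (1 - eps) * M0]])

assert np.allclose(G, G_formula)     # matches the displayed block form
assert (G_formula >= -1e-12).all()   # and G is entrywise nonnegative
```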
So, it suffices to find $M$ SSE over $\mathcal{R}$ to $N$ satisfying the
conditions
(1)-(3) above.
Choose $K$ such that $\text{tr}(A^k)>0 $ for all $k> K$.
Let $n$ be the integer such that
$N$ is $n\times n$, and let $J$
be the integer provided by Proposition \ref{nilpotentproposition}
given $n$ and $K$. Given this $J$,
choose $\epsilon >0$ such that for any
$J\times J$ matrix $M$ with $||M||_{\infty}< \epsilon$,
we have $\lambda_{3|M|}< \lambda_A$ and for
$k>K$ we also have $\text{tr}(|3M|^k)< \text{tr}(A^k)$.
Now let $\delta>0$ be as provided by Proposition \ref{nilpotentproposition}
for this $\epsilon $.
If we can now find an $n\times n$ nilpotent matrix $N'$ which is
SSE over $\mathcal{R}$ to $N$ and satisfies $||N'||_{\infty}<\delta$, then
we can apply Proposition \ref{nilpotentproposition} to this
$N'$ to produce a matrix $M$ SSE over $\mathcal{R}$ to $N$ and
with $||M||_{\infty}<\epsilon $ and with $\text{tr}(|M|^k)=0$ for $1\leq k \leq K$.
This matrix $M$ will satisfy the conditions (1)-(3).
Pick $\gamma >0$ such that
$||\gamma N||_{\infty} < \delta$.
There is a matrix $U$ in $\text{SL}(n,\mathbb{R} )$ such that
$U^{-1}NU = \gamma N$. The matrix $U$ is a product of basic elementary
matrices over $\mathbb{R}$, and these can be approximated arbitrarily
closely by basic elementary matrices over $\mathcal{R}$. Consequently there
is a matrix $V$ in $\text{SL}(n,\mathcal{R} )$ such that
$||V^{-1}NV||_{\infty} < \delta $. Choose
$N' = V^{-1}NV$.
\end{proof}
To prove Proposition
\ref{nilpotentproposition}, on which the proof of Theorem \ref{twoconj}
depends, we use a correspondence proved in \cite{BoSc1}.
We need some definitions.
Given a finite matrix $A$, let $A_{\infty}$ denote the
$\mathbb{N} \times \mathbb{N}$ matrix which has $A$ as its upper left
corner and is otherwise zero. In expressions involving $\mathbb{N} \times \mathbb{N}$ matrices,
$I$ denotes the infinite identity matrix.
Given a ring $R$, $\text{El}(R)$ is the group of
$\mathbb{N}\times \mathbb{N}$ matrices over $R$, equal to the infinite
identity matrix except in finitely many entries, which are
products of basic elementary matrices (these basic
matrices are by definition equal to $I$ except
perhaps in a single offdiagonal entry).
For finite matrices $A,B$,
the matrices $I-A_{\infty}$ and $I-B_{\infty}$ are
$\text{El}(R[t])$ equivalent if there are matrices
$U,V$ in $\text{El}(R[t])$ such that
$U(I-A_{\infty})V=I-B_{\infty}$.
\begin{definition} \label{asharpdefinition}
Given a finite matrix $A$ over $t\mathcal{R} [t]$, choose $n\in \mathbb{N}$, $k\in \mathbb{N}$,
and $n\times n$ matrices $A_1, \dots ,A_k$ over
$\mathcal{R}$ such that
\[
A_{\infty} = \sum_{i=1}^k t^i(A_i)_{\infty}
\]
and define a finite matrix $\mathcal{A^{\sharp}}= \mathcal A^{\sharp (k,n)}$ over $\mathcal{R}$ by the
following block form,
in which every block is $n\times n$:
\[
\mathcal{A^{\sharp}} =
\begin{pmatrix}
A_1 & A_2 &A_3& \dots &A_{k-2}&A_{k-1} & A_k \\
I & 0 &0 & \dots & 0 & 0 & 0 \\
0 & I &0 & \dots & 0 & 0 & 0 \\
0 & 0 & I & \dots & 0 & 0 & 0 \\
\dots &\dots &\dots &\dots &\dots &\dots &\dots \\
0 & 0 & 0 & \dots &I & 0 & 0 \\
0 & 0 & 0 & \dots &0 & I & 0
\end{pmatrix} \ .
\]
\end{definition}
In the definition, there is some freedom in
the choice of $\mathcal{A^{\sharp}}$: $k$ can be increased by
using zero matrices, and $n$ can be increased
by filling additional entries of the $A_i$
with zero. These choices do not affect the
SSE-$\mathcal{R}$ class of $\mathcal{A^{\sharp}}$.
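One way to see that these choices are harmless is the standard companion-matrix identity $\det(I - t\,\mathcal{A}^{\sharp}) = \det(I - A(t))$ for $A(t)=\sum_{i=1}^k t^iA_i$ (our gloss on the construction, not stated in the text), which is insensitive to padding by zero blocks. A numerical spot check on a hypothetical degree-2 example:

```python
import numpy as np

def sharp(blocks):
    """Block companion matrix A^sharp for A(t) = sum_i t^i * blocks[i-1]."""
    n, k = blocks[0].shape[0], len(blocks)
    S = np.zeros((n * k, n * k))
    S[:n, :] = np.hstack(blocks)                  # top block row: A_1 ... A_k
    for i in range(1, k):                         # subdiagonal identity blocks
        S[i * n:(i + 1) * n, (i - 1) * n:i * n] = np.eye(n)
    return S

A1 = np.array([[0.2, 0.1], [0.0, 0.3]])
A2 = np.array([[0.0, 0.4], [0.5, 0.1]])
S = sharp([A1, A2])

for t in [0.0, 0.3, -0.7, 1.1]:
    lhs = np.linalg.det(np.eye(4) - t * S)
    rhs = np.linalg.det(np.eye(2) - t * A1 - t**2 * A2)
    assert abs(lhs - rhs) < 1e-9
```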
\begin{theorem}\label{finecentral}\cite{BoSc1}
Let $\mathcal{R}$ be a ring. Then there is a
bijection between the following sets:
\begin{itemize}
\item the set of
$\textnormal{El}(\mathcal{R} [t])$ equivalence classes of
$\mathbb{N} \times \mathbb{N}$ matrices $I-A_{\infty}$ such that $A$
is a finite matrix over $t\mathcal{R} [t]$
\item
the set of SSE-$\mathcal{R}$ classes of square matrices
over $\mathcal{R}$.
\end{itemize}
The bijection from $\textnormal{El}(\mathcal{R} [t])$ equivalence classes
to SSE-$\mathcal{R}$ classes is induced by
the map
$ I-A_{\infty} \mapsto \mathcal{A^{\sharp}}$. The inverse map (from
the set of SSE-$\mathcal{R}$ classes)
is induced by the map sending $A$ over $\mathcal{R}$ to
the $\mathbb{N} \times \mathbb{N} $ matrix
$ I-tA$.
\end{theorem}
By the degree of a matrix with polynomial entries we mean the maximum
degree of
its entries. If $M$ is a matrix over $\mathbb{R} [t]$,
with entries $M(i,j)= \sum_{k} m_{ijk}t^k$,
then we define
$||M||= \max_{k>0} \max_{i,j} |m_{ijk}|$. If $M$ is a matrix over $\mathbb{R}$,
with $M(i,j)= m_{ij}$,
then $||M||_{\infty} $ is the usual sup norm,
$||M||_{\infty}= \max_{i,j} |m_{ij}|$.
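The distinction between the two norms matters below: $||\cdot||$ ignores constant terms, while $||\cdot||_{\infty}$ does not. A small sketch (our own illustrative helper names):

```python
import numpy as np

def poly_norm(coeffs):
    """||M|| for M over R[t]: max |m_ijk| over k > 0, where coeffs[k] is the
    coefficient matrix of t^k.  The constant term coeffs[0] is excluded."""
    return max(np.abs(c).max() for c in coeffs[1:])

def sup_norm(M):
    """||M||_inf for a real matrix: max |m_ij| over all entries."""
    return np.abs(np.asarray(M)).max()

M0 = np.array([[5.0, 0.0], [0.0, 5.0]])    # coefficient of t^0 (ignored by ||.||)
M1 = np.array([[0.1, -0.3], [0.2, 0.0]])   # coefficient of t^1
assert poly_norm([M0, M1]) == 0.3          # the constant term does not contribute
assert sup_norm(M0) == 5.0
```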
\begin{lemma}\label{nilpotentlemma}
Suppose $\mathcal{R}$ is a dense subring of $\mathbb{R}$
and $A$ is an $n\times n$ matrix
of degree $d$
over $t^k\mathcal{R} [t]$, with entries $a_{ij} = \sum_{1\leq r \leq d} a_{ij}^{(r)} t^r $.
Suppose $\sum_{i=1}^na^{(k)}_{ii}=0$ and
$||A|| \leq \frac 1{4n^2}$.
Then there is
an $n\times n$
matrix $B$ over $t^{k+1}\mathcal{R} [t]$ such that
$I-A_{\infty}$ is
$\textnormal{El}(\mathcal{R} [t])$ equivalent to
$I-B_{\infty}$ and
the following hold:
\begin{enumerate}
\item
$\textnormal{degree}(B) \ \leq \
\textnormal{degree}(A) + 3k
$.
\item $||B||\ \leq \ 4n^3||A||$ .
\end{enumerate}
\end{lemma}
\begin{proof}
For finite square matrices $I-C$ and $I-D$, we
use $I-C \sim I-D$ to denote elementary equivalence
over $\mathcal{R} [t]$ of
$I-C_{\infty}$ and
$I-D_{\infty}$. We have
\begin{align*}
I-A \ &=\
\begin{pmatrix}
1-a_{11} & -a_{12} & \cdots & -a_{1n} \\
-a_{21} & 1-a_{22} & \cdots & -a_{2n} \\
\vdots & & \ddots & \vdots \\
-a_{n1} & -a_{n2} & \cdots & 1-a_{nn} \end{pmatrix} \\
&\sim \
\begin{pmatrix} 1-a_{11} & -a_{12} & \cdots & -a_{1n} & a^{(k)}_{11}t^k \\
-a_{21} & 1-a_{22} & \cdots & -a_{2n}& a^{(k)}_{22}t^k \\
\vdots & & \ddots & \vdots & \vdots \\
-a_{n1} & -a_{n2} & \cdots & 1-a_{nn} & a^{(k)}_{nn}t^k \\
0 & 0 & \cdots & 0 & 1\end{pmatrix} := I-A_1 \ .
\end{align*}
In order, apply the following elementary operations:
\begin{enumerate}
\item
For $1\leq j\leq n$, add column $n+1$ to column $j$ of $I-A_1$, to produce
a matrix $I-A_2$.
Then $\textnormal{degree} (A_2)= \textnormal{degree} (A)$;
the diagonal entries of $A_2$ lie in $t^{k+1}\mathcal{R} [t]$;
and
$||A_2||\leq 2||A_1||=2||A||$.
Every entry in row $n+1$ of $I-A_2$ equals 1.
(By definition these entries have no impact on $||A_2||$.)
\item
For $1\leq i \leq n$, add (-1)(row $i$) of $(I-A_2)$
to row $n+1$ to form
$I-A_3$. Then the entries of
$A_3$ lie in $t^k \mathcal{R} [t]$, and the diagonal entries of
$A_3$ still lie in $t^{k+1} \mathcal{R} [t]$, since
$\sum_{i=1}^na^{(k)}_{ii}=0$ .
We have $||A_3||\leq n||A_2||\leq 2n||A||<1$
and
$\textnormal{degree} (A_3)\leq \textnormal{degree} (A)$ .
\item
For $1\leq i \leq n$, add ($-a^{(k)}_{ii}t^k$)(row $n+1$) of $(I-A_3)$
to row $i$ to form $I-A_4$.
In block form,
\[
I-A_4 \ = \
\begin{pmatrix} I-A_5 & 0 \\ x & 1
\end{pmatrix}
\]
in which $A_5$ is $n\times n$ and
$x=(x_1\cdots x_n)$. Adding multiples of column $n+1$ to
columns $1, \dots , n$ to clear out $x$, we see
$I-A_5\sim I-A$. We have
$\textnormal{degree} (A_5)\leq \textnormal{degree} (A)+k $ and
\begin{align*}
||A_5||\ &
\leq \ || A_3|| + (|| A||)( ||A_3|| ) \\
&\leq \ 2||A_3||\ \leq \ 4n ||A||\ < \ 1 \ .
\end{align*}
In $A_5$, the diagonal terms lie in $t^{k+1}\mathcal{R} [t]$ and
the offdiagonal terms
lie in $t^{k}\mathcal{R} [t]$.
In the next two steps,
we apply elementary
operations to clear the degree $k$ terms outside the diagonal.
We use part of a clearing algorithm from \cite{S12}.
\item
Let $b_{ij}$ be the coefficient of $t^k$ in $A_5(i,j)$.
For $2\leq j \leq n$, add $(-b_{1j}t^k)(\text{row }j)$ to
row 1. Continuing in order for rows $i=2, \dots , n-1$:
for $i+1\leq j \leq n$, add
$(-b_{ij}t^k)(\text{row }j)$ to
row $i$. Let $(I-A_6)$ be the resulting matrix.
The entries of $A_6$ on and above the diagonal lie in
$t^{k+1}\mathcal{R} [t]$. We have
\[
\textnormal{degree} (A_6)\ \leq \ \textnormal{degree} (A_5)+k
\ \leq \ \textnormal{degree} (A)+2k
\]
and
\begin{align*}
||A_6||\ & \leq \ ||A_5|| + (n-1)||A_5||^2 \\
& \leq \ n||A_5||
\ \leq \ 4n^2 ||A|| \leq 1 \ .
\end{align*}
\item
Let $c_{ij}$ denote the coefficient of $t^k$ in
$A_6(i,j)$. For $2\leq j \leq n$, add $(-c_{j1}t^k)(\text{column }j)$
of $A_6$ to column 1. Continuing in order for columns $i=2, \dots , n-1$:
for $i+1\leq j \leq n$, add
$(-c_{ji}t^k)(\text{column }j)$ to
column $i$. For the resulting matrix $(I-B)$,
the entries of $B$ lie in $t^{k+1}\mathcal{R} [t]$, with
\[
\textnormal{degree} (B)
\ \leq \ \textnormal{degree} (A_6)+k \
\leq \ \textnormal{degree} (A)+3k
\]
and
\begin{align*}
||B||\ & \ \leq \ ||A_6||+ (n-1) ||A_6||^2 \\
&\ \leq \ n ||A_6||\ \leq \ 4n^3
||A|| \ .
\end{align*}
\end{enumerate}
\end{proof}
\begin{proposition} \label{nilpotentproposition}
Suppose $\mathcal{R}$ is a dense subring of $\mathbb{R}$,
$n\in \mathbb{N}$ and $K\in \mathbb{N}$.
Then there is a $J$ in $\mathbb{N}$ such that for any $\epsilon >0$
there exists $\delta>0$ such that the following holds: \\
if $N$ is a nilpotent $n\times n$ matrix over $\mathcal{R}$ and
$||N||_{\infty}<\delta$, then there is a $J\times J$ matrix $M$
over $\mathcal{R}$ such that
\begin{enumerate}
\item
$M$ is SSE over $\mathcal{R}$ to $N$,
\item
$\textnormal{tr}\, (|M|^k) =0$ for $1\leq k \leq K$, and
\item
$||M||_{\infty}< \epsilon $ .
\end{enumerate}
\end{proposition}
\begin{proof}
Because $N$ is nilpotent, $\text{tr}(N^k)=0$ for all
positive integers $k$.
Set $B_0=tN$. We define matrices $B_1, \dots ,B_K$ recursively,
letting $I-B_{k+1}$ be the matrix $I-B$ provided by
Lemma \ref{nilpotentlemma} from input $I-A=I-B_{k}$.
The conditions of the lemma are satisfied recursively, because
the (zero) trace of the $k$th power of the nilpotent matrix $(B_k)^{\sharp}$
must be (in the terminology of the lemma) $\sum_i a_{ii}^{(k)}$.
The matrix $B_{K}$ is $n\times n$
with entries of degree at most
\[
d:= 1 +3(1) + 3(2) + \dots + 3(K)
= 1 + 3K(K+1)/2 \ .
\]
Let
$(B_K)_i$ be the matrices, $1\leq i \leq d$, such that
$B_K=
\sum_{i=1}^d (B_K)_i t^i$ .
Define $M$ to be the matrix
$(B_K)^{\sharp}$, an $nd\times nd$ matrix over $\mathcal{R}$
which is SSE over $\mathcal{R}$ to $N$. Set $J=
nd $.
It is now clear from condition $(2)$
of Lemma \ref{nilpotentlemma} and induction
that given $\epsilon > 0$, there is a $\delta >0$
such that $||N||<\delta $ implies
$||B_K||< \epsilon$. (We are not trying to
optimize estimates.) With $K>1$ (without loss of
generality), we have $|| B_K||= || (B_K)^{\sharp}||_{\infty}$.
This finishes the proof.
\end{proof}
\begin{example} \label{badrealring}
{\normalfont
There are subrings of $\mathbb{R}$ with nontrivial $\textnormal{NK}_1$.
For example, let $\mathcal{R}=\mathbb Q [t^2,t^3,z,z^{-1}]$. By the Bass-Heller-Swan Theorem (see \cite{Rosenberg1994}, 3.2.22) for any ring $\mathcal{S}$, there is a splitting $K_{1}(\mathcal{S}[z,z^{-1}]) \cong K_{1}(\mathcal{S}) \oplus K_{0}(\mathcal{S}) \oplus \textnormal{NK}_{1}(\mathcal{S}) \oplus \textnormal{NK}_{1}(\mathcal{S})$, which implies $\textnormal{NK}_{1}(\mathcal{S}[z,z^{-1}])$ always contains a copy of $\textnormal{NK}_{0}(\mathcal{S})$. An elementary argument (see for example exercise 3.2.24 in \cite{Rosenberg1994}) shows that $\textnormal{NK}_{0}(\mathbb{Q}[t^{2},t^{3}]) \ne 0$, so $\textnormal{NK}_{1}(\mathbb{Q}[t^{2},t^{3},z,z^{-1}])$ is non-zero. Since
$\mathbb{Q}[t^{2},t^{3},z,z^{-1}]$ can be realized as a subring of $\mathbb{R}$
(by an embedding
sending $t,z$ to
algebraically independent
transcendentals in $\mathbb{R}$)
this provides an example of a subring
$\mathcal{R}$ of $\mathbb{R}$ for which $\textnormal{NK}_{1}(\mathcal{R})$ is not zero, and
therefore shift equivalence over $\mathcal{R}$
does not imply strong shift equivalence
over $\mathcal{R}$. \\
It is possible
to produce explicit
examples
by tracking through the
exact sequences behind the argument of the last paragraph.
This is done in \cite{SchmiedingExamples},
and for $\mathcal{R} = \mathbb{Q}[t^{2},t^{3},z,z^{-1}]$
yields the following matrix
over $\mathcal{R} [s]$,
\[
I-M=\begin{pmatrix} 1-(1-z^{-1})s^{4}t^{4} & (z-1)(s^{2}t^{2}-s^{3}t^{3})
\\ (1-z^{-1})(s^{2}t^{2})(1+st+s^{2}t^{2}+s^{3}t^{3}) &
1+(z-1)(s^{4}t^{4}) \end{pmatrix} \ ,
\]
which is nontrivial as an element
of $\textnormal{NK}_1(\mathcal{R} )$.
Writing $M$ as
\[
M=\begin{pmatrix}
(1-z^{-1})s^{4}t^{4} & (1-z)(s^{2}t^{2}-s^{3}t^{3}) \\
(z^{-1}-1)(s^{2}t^{2})(1+st+s^{2}t^{2}+s^{3}t^{3}) &
(1-z)(s^{4}t^{4}) \end{pmatrix}
= \sum_{i=1}^5 s^iM_i
\]
with the
$M_i$ over $\mathcal{R}$, we obtain (see \cite{BoSc1}) a nilpotent matrix $N$ over $\mathcal{R}$,
\begin{align*}
&N\ =\
\begin{pmatrix}
M_1 & M_2 & M_3 & M_4 & M_5 \\
I & 0 & 0 & 0 & 0 \\
0 & I & 0 & 0 & 0 \\
0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & I & 0
\end{pmatrix} \ =\ \\
&
\begin{pmatrix}
0&0 & 0&(1-z)t^2 & 0&(1-z)(-t^3) & (1-z^{-1})t^4&0 & 0&0\\
0&0 & (z^{-1}-1)t^2&0 & (z^{-1}-1)t^3&0 & (z^{-1}-1)t^4&(1-z)t^4 & (z^{-1}-1)t^5&0\\
1&0 & 0&0 & 0&0 & 0&0 &0&0 \\
0&1 & 0&0 & 0&0 & 0&0 &0&0 \\
0&0 & 1&0 & 0&0 & 0&0 &0&0 \\
0&0 & 0&1 & 0&0 & 0&0 &0&0 \\
0&0 & 0&0 & 1&0 & 0&0 &0&0 \\
0&0 & 0&0 & 0&1 & 0&0 &0&0 \\
0&0 & 0&0 & 0&0 & 1&0 &0&0 \\
0&0 & 0&0 & 0&0 & 0&1 &0&0
\end{pmatrix}
\end{align*}
which is nontrivial as an element of $\textnormal{Nil}_{0}(\mathcal{R})$, as
is the matrix
$N'$ obtained
by
removing the last row and the last column from $N$.
The matrix $N'$ is $9\times 9$. We don't have a smaller example, and
we don't have a decent example of two positive matrices which are
shift equivalent but not strong shift equivalent over a subring of
$\mathbb R$.
}
\end{example}
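Nilpotency of $N$ is an identity in $z$ and $t$ (it amounts to $\det(I-M)=1$ over $\mathcal{R}[s]$), so it persists under any specialization of the variables. The following sketch, with the sample values $z=2$, $t=1/3$ chosen by us, rebuilds the displayed $10\times 10$ matrix in exact rational arithmetic and checks $N^{10}=0$.

```python
from fractions import Fraction

# Specialize z and t to sample rationals; nilpotency of the polynomial
# matrix implies nilpotency of every specialization.
z, t = Fraction(2), Fraction(1, 3)

N = [[Fraction(0)] * 10 for _ in range(10)]
# Top two rows, copied from the block row (M_1 M_2 M_3 M_4 M_5).
N[0][3] = (1 - z) * t**2
N[0][5] = -(1 - z) * t**3
N[0][6] = (1 - 1/z) * t**4
N[1][2] = (1/z - 1) * t**2
N[1][4] = (1/z - 1) * t**3
N[1][6] = (1/z - 1) * t**4
N[1][7] = (1 - z) * t**4
N[1][8] = (1/z - 1) * t**5
# Identity blocks on the subdiagonal.
for i in range(2, 10):
    N[i][i - 2] = Fraction(1)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(10)) for j in range(10)]
            for i in range(10)]

P = N
for _ in range(9):
    P = matmul(P, N)           # P = N^10
assert all(P[i][j] == 0 for i in range(10) for j in range(10))
```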
\begin{remark} \label{nilpotentnonneg}
{\normalfont
Suppose $\mathcal{R}$ is a subring of $\mathbb{R}$ and $N$ is a nonnegative
nilpotent matrix over $\mathcal{R}$. Then there is a permutation
matrix $P$ such that $P^{-1}NP$ is triangular with zero diagonal.
Using elementary SSEs of the block form
\[
\begin{pmatrix} X&Y\\0&0
\end{pmatrix}
=
\begin{pmatrix} I\\0
\end{pmatrix}
\begin{pmatrix} X&Y
\end{pmatrix}
\qquad \text{and} \qquad
\begin{pmatrix} X
\end{pmatrix}
=
\begin{pmatrix} X&Y
\end{pmatrix}
\begin{pmatrix} I\\0
\end{pmatrix}
\]
we see that $P^{-1}NP$ (and hence $N$)
is SSE over $\mathcal{R}$ to $[0]$.
By Theorem \ref{sseclassif}, with $A=0$, it follows that
a nilpotent matrix $N$ is
SSE over $\mathcal{R}$ to a nonnegative matrix if and only if $[I-tN_{\infty} ]$
is trivial in $\textnormal{NK}_1(\mathcal{R} )$.
Therefore, there exist
nilpotent matrices over $\mathcal{R}$ which cannot be SSE over
$\mathcal{R}$ to a nonnegative matrix if and only if
$\textnormal{NK}_1(\mathcal{R} )$ is nontrivial.
The matrix $N$ in Example \ref{badrealring} is one such example.
}
\end{remark}
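The peeling argument behind the remark is effective: since a nonnegative nilpotent matrix has an acyclic digraph, at every step some row vanishes on the indices not yet ordered. A minimal sketch (the sample matrix and the function name are ours):

```python
from fractions import Fraction

def strict_order(N):
    """Return an ordering of indices so that conjugating by the corresponding
    permutation makes the nonnegative nilpotent matrix N strictly lower
    triangular.  Nilpotency forces the digraph of N to be acyclic, so some
    row is zero on the still-unordered indices at every step."""
    remaining = set(range(len(N)))
    order = []
    while remaining:
        i = next(i for i in sorted(remaining)
                 if all(N[i][j] == 0 for j in remaining))
        order.append(i)
        remaining.remove(i)
    return order

# Sample nonnegative nilpotent matrix over a subring of R.
N = [[0, 0, Fraction(2)],
     [1, 0, 3],
     [0, 0, 0]]
order = strict_order(N)
B = [[N[order[a]][order[b]] for b in range(3)] for a in range(3)]
# B is strictly lower triangular: zero diagonal and zero above it.
assert all(B[a][b] == 0 for a in range(3) for b in range(3) if b >= a)
```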
\section{Reflections on the Generalized Spectral Conjecture}
Is the Generalized Spectral Conjecture true?
For $\mathcal{R}=\mathbb{Z}$, the Spectral Conjecture is true \cite{S8}.
The GSC is true for $\mathcal{R}=\mathbb{Z}$ for a given
$\Delta$ if every entry of $\Delta$ is a rational
integer \cite{BH93}. There is not much more
direct evidence for the GSC for $\mathcal{R}=\mathbb{Z}$,
but we know of no results which
cast doubt.
From here, suppose $\mathcal{R}$ is a dense subring of $\mathbb{R}$.
As noted earlier, the Spectral Conjecture is almost surely true.
Theorem \ref{twoconj} removes the possibility that
the very subtle algebraic invariants
following from Theorem
\ref{sseclassif} could be an obstruction to the GSC.
The GSC was proved in \cite{BH93} in the following cases:
\begin{enumerate}
\item
when the nonzero spectrum is contained in $\mathcal{R}$, and
$\mathcal{R}$ is a Dedekind domain with a nontrivial unit;
\item \label{postrpro}
when the nonzero spectrum has positive trace and
either (i) the spectrum is real or (ii)
the minimal and characteristic polynomials
of the given matrix are equal up to a power of the
indeterminate.
\end{enumerate}
The following Proposition (almost explicit in
\cite[Appendix 4]{BH91}) is more evidence for the
GSC in the positive trace case.
\begin{proposition} \label{realreduction}
Suppose the Generalized Spectral Conjecture holds for
matrices of positive trace for the ring $\mathbb{R}$. Then it
holds for matrices of positive trace for every dense subring
$\mathcal{R}$ of $\mathbb{R}$.
\end{proposition}
\begin{proof}
Let $A$ be a square matrix over $\mathcal{R}$ of positive trace
which over $\mathbb{R}$
is SSE to a primitive real matrix $B$.
We need to show that $A$ is SSE over $\mathcal{R}$ to a primitive
matrix.
By \cite{S32} (or the alternate exposition
\cite[Appendix B]{BKR2013}), because $B$ is primitive with positive
trace, there is a positive matrix $B_1$ SSE over $\mathbb{R}$
(in fact over $\mathbb{R}_+$) to
$B$. And then, by arguments in \cite{S32},
for some $m$ there are $m\times m$ matrices
$A_2,B_2$, obtained through
row splittings of $A$ and $B_1$ respectively,
such that $A$ is SSE over $\mathcal{R}$ to $A_2$;
$B_1$ is SSE over $\mathbb{R}$
(in fact over $\mathbb{R}_+$)
to the positive
matrix $B_2$; and there is a matrix $U$
in $\textnormal{SL}(m, \mathbb{R})$ such that $U^{-1}A_2U =B_2$.
Because $\textnormal{SL}(m, \mathcal{R})$ is dense in $\textnormal{SL}(m, \mathbb{R})$,
and $B_2$ is positive, there is a $V$ in
$\textnormal{SL}(m, \mathcal{R})$ such that $V^{-1}A_2V$ is positive.
This matrix $(V^{-1}A_2)(V)$ is SSE over $\mathcal{R}$ to
the matrix $(V)(V^{-1}A_2)=A_2$, which in turn is SSE over
$\mathcal{R}$ to $A$.
\end{proof}
After more than 20 years, the GSC
remains open even in the case $\mathcal{R} = \mathbb R$.
Still, the GSC seems correct.
What we lack is a proof.
\bibliographystyle{plain}
\section{Appell and Jonqui\`{e}re Integrals}\label{sec1}
The polylogarithm $\Li_{s}(x)$ is defined by the power series
\begin{equation}\label{sec1-eq1}
\Li_{s}(x)=\sum_{n=1}^{\infty}\frac{x^n}{n^{s}}.
\end{equation}
The definition is valid for all complex values $s$ and all
complex values of $x$ such that $|x|<1$. The series is convergent
for $x=1$ only when $\re(s)>1$.
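For $|x|$ bounded away from $1$ the defining series converges geometrically, so a direct partial sum already reproduces classical special values such as $\Li_{1}(x)=-\log(1-x)$; a minimal sketch:

```python
import math

def polylog(s, x, terms=200):
    """Partial sum of the defining series; the neglected tail is of order
    |x|**terms, so this is only meant for |x| well inside the unit disk."""
    return sum(x**n / n**s for n in range(1, terms + 1))

# Classical special values: Li_1(x) = -log(1-x), Li_2(1/2) = pi^2/12 - (log 2)^2/2.
assert abs(polylog(1, 0.5) + math.log(0.5)) < 1e-12
assert abs(polylog(2, 0.5) - (math.pi**2 / 12 - math.log(2)**2 / 2)) < 1e-12
```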
Using the identity
\begin{equation}\label{sec1-eq2}
\frac{1}{n^s}=\frac{1}{\Gamma(s)}\int_{0}^{\infty}e^{-nt}t^{s-1}\,dt,
\end{equation}
equation (\ref{sec1-eq1}) can be rewritten as
\begin{equation}\label{sec1-eq3}
\Li_{s}(x)=\frac{x}{\Gamma(s)}\int_{0}^{\infty}\frac{t^{s-1}}{e^{t}-x}\,dt.
\end{equation}
The integral in (\ref{sec1-eq3}) is called Appell's integral or
Jonqui\`{e}re's integral. It defines $\Li_{s}(x)$ not only in the
unit circle but also in the whole slit plane $\mathbb{C}\setminus
[1, \infty)$, provided that $\re(s)>0$.
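The integral representation can be checked numerically against the series; the sketch below evaluates (\ref{sec1-eq3}) for $s=2$, $x=1/2$ by composite Simpson quadrature on a truncated range (the truncation point $T=40$ and the step count are ad hoc choices):

```python
import math

def appell_integral(s, x, T=40.0, steps=8000):
    """Composite Simpson evaluation of (x/Gamma(s)) * int_0^T t^(s-1)/(e^t - x) dt.
    Used here for s > 1, where the integrand vanishes at t = 0; the tail
    beyond T is exponentially small."""
    f = lambda t: t**(s - 1) / (math.exp(t) - x)
    h = T / steps
    acc = f(0.0) + f(T)
    for k in range(1, steps):
        acc += (4 if k % 2 else 2) * f(k * h)
    return x / math.gamma(s) * acc * h / 3

# Compare with the defining series at s = 2, x = 1/2.
series = sum(0.5**n / n**2 for n in range(1, 201))
assert abs(appell_integral(2, 0.5) - series) < 1e-6
```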
To obtain a formula valid for practically every complex number
$s$, we use Hankel's device which consists in replacing the real
integral by a contour integral. The contour, denoted by
$\mathcal{C}$ and called the Hankel contour, consists of the
three parts $\mathcal{C}=\mathcal{C}_{+}\cup \mathcal{C}_{\epsilon}\cup \mathcal{C}_{-}$: a path which
extends from $+\infty$ to $\epsilon$ along the positive real axis, around the origin counter
clockwise on a circle of center the origin and of radius
$\epsilon$, and back from $\epsilon$ to $+\infty$, where $\epsilon$ is an
arbitrarily small positive number. The integral (\ref{sec1-eq3})
becomes
\begin{equation}\label{sec1-eq4}
\Li_{s}(x)=e^{-i\pi s}\frac{\Gamma(1-s)}{2\pi
i}\int_{\mathcal{C}}\frac{xt^{s-1}}{e^{t}-x}\,dt.
\end{equation}
Equation (\ref{sec1-eq4}) now defines $\Li_{s}(x)$ for any $x$ in
the cut plane and any $s$ not a positive integer.
\section{A New Expansion of $\Li_{s}(x)$}\label{sec2}
The integral in (\ref{sec1-eq3}) can be rewritten as
\begin{equation}\label{sec2-eq1}
\Li_{s}(x)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{xte^{-t}}{1-xe^{-t}}t^{s-2}\,dt.
\end{equation}
By observing that
\begin{equation}\label{sec2-eq2}
\frac{d}{dt}\bigg (\frac{-xte^{-t}}{1-xe^{-t}}\bigg )=\bigg
(\frac{t}{(1-xe^{-t})^2}-\frac{1}{1-xe^{-t}}\bigg)xe^{-t},
\end{equation}
we may integrate by parts (\ref{sec2-eq1}) to obtain
\begin{equation}\Li_{s}(x)=\frac{1}{(s-1)\Gamma(s)}
\int_{0}^{\infty}\bigg
(\frac{t}{(1-xe^{-t})^2}-\frac{1}{1-xe^{-t}}\bigg)xe^{-t}t^{s-1}\,dt,\label{sec2-eq3}
\end{equation}
If we define the new variable $X$ by
\begin{eqnarray}\label{sec2-eq4}
X&=&1-xe^{-t},\\
t&=&-\log(1-X)+\log{x}\label{sec2-eq5},
\end{eqnarray}
where $\log(1-X)$ and $\log{x}$ are both real when $X<1$ and $x>0$
respectively, the function between the parenthesis inside the
integral (\ref{sec2-eq3}) becomes
\begin{eqnarray}\nonumber
\frac{t}{(1-xe^{-t})^2}-\frac{1}{1-xe^{-t}}&=&-\frac{\log(1-X)}{X^2}-\frac{1}{X}+\frac{\log{x}}{X^2}\\
&=&\sum_{n=1}^{\infty}\frac{X^{n-1}}{n+1}+\frac{\log{x}}{X^2},\label{sec2-eq6}
\end{eqnarray}
provided of course that the infinite series on the right hand
side of (\ref{sec2-eq6}) is convergent. The radius of convergence
of the series is $1$, so we require that $|X|=|1-xe^{-t}|<1$. When
$|1-x|<1$, the condition $|1-xe^{-t}|<1$ is trivially satisfied
for all $t>0$.
The set $|1-x|<1$ is a disk of center $1$ and radius 1. If we want
$x$ to avoid the cut $[1, \infty)$, then it is judicious to
restrict $x$ to the set $D$ defined by
\begin{equation}\label{sec2-eq6bis}
D=\{x\in \mathbb{C}: |x|<1 \quad\text{and}\quad
|1-x|<1\}.
\end{equation}
In this paper, we will further restrict $x$ to the real interval
$(0,1)$ which is a subset of $D$ since the consideration of
complex values of $x$ does not offer any advantages to our
analysis.
Finally, if we go back to the original variables, (\ref{sec2-eq3})
simplifies to
\begin{eqnarray}\nonumber
(s-1)\Li_{s}(x) &=&\frac{x}{\Gamma(s)}
\int_{0}^{\infty}\sum_{n=1}^{\infty}\frac{(1-xe^{-t})^{n-1}}{n+1}e^{-t}t^{s-1}\,dt\\
&&\qquad
+x\frac{\log{x}}{\Gamma(s)}\int_{0}^{\infty}\frac{e^{-t}}{(1-xe^{-t})^2}t^{s-1}\,dt.\label{sec2-eq7}
\end{eqnarray}
Now, can we interchange the sum and the integral in
(\ref{sec2-eq7})? The answer is affirmative if we can show that
the series
\begin{equation}\label{sec2-eq8}
\sum_{n=1}^{\infty}\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{(1-xe^{-t})^{n-1}}{n+1}xe^{-t}t^{s-1}\,dt
\end{equation}
converges absolutely and uniformly for all $t>0$. For this
purpose, we define
\begin{equation}\label{sec2-eq8bis}
\sigma_n(s,x)=\sum_{k=0}^{n-1}(-1)^{k}{n-1 \choose
k}x^{k+1}(k+1)^{-s},
\end{equation}
which, using (\ref{sec1-eq2}), can be rewritten as
\begin{equation}\label{sec2-eq9}
\sigma_n(s,x)=\frac{1}{\Gamma(s)}
\int_{0}^{\infty}(1-xe^{-t})^{n-1}xe^{-t}t^{s-1}\,dt
\end{equation}
when $\re(s)>0$.
The series (\ref{sec2-eq8}) can then be rewritten as the function
$Z(s,x)$ defined by
\begin{definition}\label{sec2-def1}
For $x\in (0,1)$ and for $s\in \mathbb{C}$, we define the function
\begin{equation}\label{sec2-eq10}
Z(s,x)\triangleq \sum_{n=1}^{\infty}\frac{\sigma_n(s,x)}{n+1}.
\end{equation}
\end{definition}
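Because the alternating binomial sum defining $\sigma_{n}(s,x)$ suffers massive cancellation, floating-point evaluation is useless even for moderate $n$; the sketch below uses exact rational arithmetic. It checks the partial sums of $Z(1,x)$ at $x=1/2$ against the value $\log 2$: one checks that $\sigma_{n}(1,x)=\frac{1-(1-x)^{n}}{n}$, so the tail of $Z$ beyond $N$ is $\frac{1}{N+1}$ up to a geometrically small error.

```python
import math
from fractions import Fraction

def sigma(n, s, x):
    """sigma_n(s,x) as an alternating binomial sum; exact rational arithmetic
    avoids the catastrophic cancellation of floating point."""
    return sum((-1)**k * math.comb(n - 1, k) * x**(k + 1)
               * Fraction(1, (k + 1)**s) for k in range(n))

# Partial sum of Z(1, 1/2); adding back the dominant tail 1/(N+1)
# reproduces log 2 to machine precision.
N, x = 100, Fraction(1, 2)
Z = sum(sigma(n, 1, x) / (n + 1) for n in range(1, N + 1))
assert abs(float(Z) + 1/(N + 1) - math.log(2)) < 1e-12
```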
Uniform and absolute convergence of (\ref{sec2-eq8}) or
(\ref{sec2-eq10}) is a direct consequence of the asymptotic
estimate of $\sigma_n(s,x)$ in Proposition~\ref{sec3-prop4} to be
proved later in Section~\ref{sec3}. However, we will prove the
result using simpler but characteristic estimates when $\re(s)>0$.
To prove absolute and uniform convergence, it suffices to prove
absolute and uniform convergence for the dominant series
\begin{equation}\label{sec2-eq11}
\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{(1-xe^{-t})^{n-1}}{n+1}e^{-t}t^{\sigma-1}\,dt.
\end{equation}
A straightforward calculation of the derivative shows that the
function $(1-xe^{-t})^{n-1}e^{-t/2}$ is maximized when
$e^{-t}=\frac{1}{x(2n-1)}$ and that the maximum value is equal to
\begin{equation}\label{sec2-eq12}
K=(1-\frac{1}{2n-1})^{n-1}\frac{1}{\sqrt{x(2n-1)}}.
\end{equation}
Hence, for $n\ge 2$,
\begin{eqnarray}
\int_{0}^{\infty}(1-xe^{-t})^{n-1}xe^{-t}t^{\sigma-1}\,dt&=&
\int_{0}^{\infty}(1-xe^{-t})^{n-1}e^{-t/2}
(xe^{-t/2}t^{\sigma-1})\,dt\nonumber\\
&\le &K \int_{0}^{\infty}xe^{-t/2}t^{\sigma-1}\,dt\nonumber\\
&=&\left(1-\frac{1}{2n-1}\right)^{n-1}\frac{x\Gamma(\sigma)}{\sqrt{x(2n-1)}(1/2)^{\sigma}}\nonumber\\
&\le&K^{\prime}\frac{\sqrt{x}}{\sqrt{2n-1}}.\label{sec2-eq13}
\end{eqnarray}
The last inequality implies that each term of the dominating
series is bounded by $K^{\prime}\sqrt{x}/(n+1) \sqrt{2n-1}$.
Therefore, by the comparison test the series (\ref{sec2-eq7})
is absolutely and uniformly convergent, and can be rewritten as
\begin{equation}
(s-1)\Li_{s}(x)
=Z(s,x)+\log{x}\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{xe^{-t}}{(1-xe^{-t})^2}t^{s-1}\,dt.\label{sec2-eq14}
\end{equation}
The last equation is our new expansion of $\Li_{s}(x)$ when
$\re(s)>0$. To extend the definition of (\ref{sec2-eq14}) to all
complex numbers $s$, $s\ne 1,2, \cdots$, we can still use Hankel's
contour defined previously to obtain our first result:
\begin{proposition}\label{sec2-prop1}
For $x\in (0,1)$ and for $s\notin \{1,2,\cdots\}$,
\begin{equation}\label{sec2-eq15}
(s-1)\Li_{s}(x) =Z(s,x)+e^{-i\pi s}\log{x}\frac{\Gamma(1-s)}{2\pi
i}\int_{\mathcal{C}}\frac{xe^{-t}}{(1-xe^{-t})^2}t^{s-1}\,dt.
\end{equation}
\end{proposition}
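Equation (\ref{sec2-eq14}) can be spot-checked numerically for $\re(s)>0$. For $s=2$ one checks, by integrating $(1-u)^{n-1}$, that $\sigma_{n}(2,x)=\frac{1}{n}\sum_{j=1}^{n}\frac{1-(1-x)^{j}}{j}$, which makes a large number of terms of $Z$ cheap to accumulate; the quadrature parameters below are ad hoc:

```python
import math

# Spot-check of the expansion of (s-1) Li_s(x) at s = 2, x = 1/2.
s, x = 2, 0.5
N = 10**6
Z = 0.0
partial = 0.0
for n in range(1, N + 1):
    partial += (1 - (1 - x)**n) / n      # running sum defining sigma_n(2,x)
    Z += partial / (n * (n + 1))         # sigma_n / (n+1)

# The integral term, by composite Simpson on [0, 40].
f = lambda t: x * math.exp(-t) / (1 - x * math.exp(-t))**2 * t**(s - 1)
steps, T = 8000, 40.0
h = T / steps
acc = f(0.0) + f(T)
for k in range(1, steps):
    acc += (4 if k % 2 else 2) * f(k * h)
integral = acc * h / 3 / math.gamma(s)

lhs = (s - 1) * sum(x**n / n**s for n in range(1, 201))   # (s-1) Li_s(x)
assert abs(lhs - (Z + math.log(x) * integral)) < 1e-3
```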
\section{Some properties of the functions $\sigma_n(s,x)$ and $Z(s,x)$}\label{sec3}
\subsection{The function $\sigma_n(s,x)$}\label{sec3.1}
The function $\sigma_n(s,x)$ has been defined in
section~\ref{sec2} by the following alternating sum which is valid
for any $s\in \mathbb{C}$ and any $x\in \mathbb{C}$:
\begin{equation}\label{sec3-eq1}
\sigma_n(s,x)=\sum_{k=0}^{n-1}(-1)^{k}{n-1 \choose
k}x^{k+1}(k+1)^{-s}.
\end{equation}
As a function of $x$, $\sigma_n(s,x)$ is a polynomial in $x$ of
degree $n$ for any fixed $s$; however, as a function of $s$, it
is entire.
We can express the polynomials $\sigma_n(s,x)$ in terms of the
generalized Stirling numbers of the second kind:
\begin{proposition}\label{sec3-prop2}
\begin{equation}
\sigma_{n}(s,x) =
\frac{-1}{n}\sum_{j=1}^{n}(-1)^{j}j!\stirs{1-s}{j}{n\choose
j}x^{j}(1-x)^{n-j},\label{sec3-eq5}
\end{equation}
where $\stirs{\alpha}{j}$ are the generalized Stirling
numbers of the second kind defined by
\begin{equation}
\stirs{\alpha}{j}=\frac{1}{j!}\sum_{m=1}^{j}(-1)^{j-m}{j\choose
m}m^{\alpha}.\label{sec3-eq6}
\end{equation}
\end{proposition}
\begin{proof}
We have by definition
\begin{equation}
\sigma_{n}(s,x)=\sum_{k=0}^{n-1}(-1)^{k}{n-1\choose
k}x^{k+1}(k+1)^{-s}\label{sec3-eq7}.
\end{equation}
By making the change of variable $k=m-1$ and using the identity
${n-1\choose m-1}=\frac{m}{n}{n\choose m}$, the sum can
be put into the form
\begin{equation}
\sigma_{n}(s,x) = \frac{-1}{n}\sum_{m=1}^{n}{n\choose
m}(-1)^{m}x^{m}m^{1-s}.\label{sec3-eq8}
\end{equation}
In \cite{boyadzhiev}, the following identity was proved for every
integer $n$ and every complex numbers $\alpha, x$, $\alpha\ne 0$
\begin{equation}
\sum_{m=1}^{n}{n\choose m}x^{m}m^{\alpha}= \sum_{j=1}^{n}{n\choose
j}j!\stirs{\alpha}{j}x^{j}(1+x)^{n-j},\label{sec3-eq9}
\end{equation}
where $\stirs{\alpha}{j}$ are the generalized Stirling numbers of
the second kind. Applying the identity to (\ref{sec3-eq8}), we get
\begin{equation}
\sigma_{n}(s,x) =
\frac{-1}{n}\sum_{j=1}^{n}(-1)^{j}j!\stirs{1-s}{j}{n\choose
j}x^{j}(1-x)^{n-j}.\label{sec3-eq10}
\end{equation}
\end{proof}
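The identity (\ref{sec3-eq9}) is easy to test numerically; the sketch below uses the definition of $\stirs{\alpha}{j}$ with the alternating sign $(-1)^{j-m}$ (for $\alpha$ a positive integer this reduces to the classical Stirling number of the second kind):

```python
import math

def gen_stirling(alpha, j):
    """Generalized Stirling number of the second kind, with the alternating
    sign (-1)^(j-m); for integer alpha this is the classical S(alpha, j)."""
    return sum((-1)**(j - m) * math.comb(j, m) * m**alpha
               for m in range(1, j + 1)) / math.factorial(j)

# Check the identity for a sample non-integral alpha.
n, alpha, x = 6, 2.5, 0.7
lhs = sum(math.comb(n, m) * x**m * m**alpha for m in range(1, n + 1))
rhs = sum(math.comb(n, j) * math.factorial(j) * gen_stirling(alpha, j)
          * x**j * (1 + x)**(n - j) for j in range(1, n + 1))
assert abs(lhs - rhs) < 1e-9
```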
\begin{remark}
According to proposition~\ref{sec3-prop2}, $\sigma_{n}(s,x)$ can
be viewed as a Bernstein polynomial $B_{n}(f)(x)$ if one sets the
function $f$ to be such that
\begin{equation}
f\left(\frac{j}{n}\right)=
\frac{-1}{n}(-1)^{j}j!\stirs{1-s}{j}.\label{sec3-eq11}
\end{equation}
\end{remark}
\begin{remark}Obviously, one has
\begin{eqnarray}
\sigma_{n}(s,0)&=& 0 \label{sec3-eq12}\\
\sigma_{n}(s,1) &=& (-1)^{n-1}(n-1)!\stirs{1-s}{n}.\label{sec3-eq13}
\end{eqnarray}
\end{remark}
The next proposition gives the asymptotic estimates of
$\sigma_n(s,x)$ when $n$ is large:
\begin{proposition}\label{sec3-prop3}
For $\re(s)>0$, $x\in (0,1)$ and $n$ large enough
\begin{equation}\label{sec3-eq14}
\sigma_n(s,x)\sim \frac{1}{n(\log{n})^{1-s}\Gamma(s)}.
\end{equation}
\end{proposition}
\begin{proof}
When $\re(s)>0$, we know from the previous section that
$\sigma_n(s,x)$ can be written as an integral:
\begin{equation}\label{sec3-eq14bis}
\sigma_n(s,x)=\frac{1}{\Gamma(s)}
\int_{0}^{\infty}(1-xe^{-t})^{n-1}xe^{-t}t^{s-1}\,dt.
\end{equation}
Thus, the problem is reduced to finding the
asymptotic estimates of the following integral
\begin{equation}\label{sec3-eq15}
I(n,s)=\int_{0}^{\infty}(1-xe^{-t})^{n-1}xe^{-t}t^{s-1}\,dt.
\end{equation}
We prove the proposition by the method of Laplace. Putting
$u=xe^{-t}$, the integral becomes
\begin{equation}\label{sec3-eq16}
I(n,s)=\int_{0}^{x}e^{(n-1)\log(1-u)}(-\log{u}+\log{x})^{s-1}\,du.
\end{equation}
Define $h(u)=-\log(1-u)$, then $h^{\prime}(u)=\frac{1}{1-u}$ and
$h(0)=0$. To put the integral in a Laplace integral format, we let
$w=h(u)$. Since $h(u)$ is increasing on $(0,1)$ and
$h^{\prime}>0$, then
\begin{equation}\label{sec3-eq17}
\int_{0}^{x}e^{-(n-1)(-\log(1-u))}(-\log{u}+\log{x})^{s-1}\,du=\int_{0}^{-\log(1-x)}f(w)e^{-(n-1)w}\,dw,
\end{equation}
where
\begin{equation}\label{sec3-eq18}
f(w)=\frac{(-\log{u}+\log{x})^{s-1}}{h^{\prime}(u)}=e^{-w}\left(-\log(1-e^{-w})+\log{x}\right)^{s-1}
\end{equation}
Now, writing $-\log(1-e^{-w})=-\log{w}+\log\frac{w}{1-e^{-w}}$ and
using the generating function of the Bernoulli numbers, we find
\begin{eqnarray}\nonumber
-\log(1-e^{-w})&=&-\log{w}+\frac{w}{2}-\frac{w^{2}}{24}+\cdots\\
&=&-\log{w}+O(1),\quad\text{as}\quad w\to 0,\label{sec3-eq19}
\end{eqnarray}
and since $e^{-w}=O(1)$ as $w\to 0$, then we have
\begin{equation}\label{sec3-eq20}
f(w)=\left(-\log{w}\right)^{s-1}+O(\frac{1}{\log{w}})\quad\text{as}\quad
w\to 0.
\end{equation}
There are two cases to consider: $x\ge 1-e^{-1}$ and $x<
1-e^{-1}$. Let's first suppose that $x\ge 1-e^{-1}$, then
the integral (\ref{sec3-eq17}) can be split into two parts:
\begin{equation}\label{sec3-eq21}
I(n,s)=\int_{0}^{1-\epsilon}f(w)e^{-(n-1)w}\,dw+
\int_{1-\epsilon}^{-\log(1-x)}f(w)e^{-(n-1)w}\,dw,
\end{equation}
where $\epsilon$ is an arbitrarily small positive number.
By the general properties of Laplace integrals, and the fact that
$f(w)=O(e^{-w})$ for large $w$, the second integral is
exponentially small and verifies
\begin{equation}\label{sec3-eq22}
\int_{1-\epsilon}^{-\log(1-x)}f(w)e^{-(n-1)w}\,dw=O\left(e^{-\delta(1-\epsilon)n}\right),
\end{equation}
where $\delta$ is an appropriate positive number.
Finally, replacing $f(w)$ by the estimate (\ref{sec3-eq20}), we
get
\begin{equation}\label{sec3-eq23}
I(n,s)=\int_{0}^{1-\epsilon}\left(-\log{w}\right)^{s-1}e^{-(n-1)w}\,dw+
O\left(\frac{1}{(n-1)\log(n-1)}\right).
\end{equation}
To obtain an expansion of the first integral, we use the following
theorem which has been proved in \cite{wong:integrals}:
\begin{theorem}[\cite{wong:integrals}~Theorem~2, p. 70]\label{sec3-thm1}
Let $\mu$ and $\lambda$ be any complex numbers with $\re(\mu)>-1$
and $\re(\lambda)>0$ and let $c=|c|e^{i\gamma}$, $0<|c|<1$, then
\begin{equation}\label{sec3-eq24}
\int_{0}^{c}t^{\lambda-1}\left(-\log{t}\right)^{\mu}e^{-zt}\,dt\sim
z^{-\lambda}\left(\log{z}\right)^{\mu}\sum_{r=0}^{\infty}(-1)^r{\mu\choose
r}\Gamma^{(r)}(\lambda)\left(\log{z}\right)^{-r},
\end{equation}
uniformly in $\arg(z)$ as $z\to \infty$ in
$|\arg(ze^{i\gamma})|\le \pi/2 -\Delta$ where $\Delta$ is a small
positive number and the path of integration is a straight line
joining $t=0$ to $t=c$.
\end{theorem}
In our case, $\lambda=1$, $\mu=s-1$ and $z=n-1$, therefore the
final result is
\begin{equation}\label{sec3-eq25}
I(n,s)\sim \frac{\left(\log{n}\right)^{s-1}}{n},
\end{equation}
for $n$ large enough.
The case $x< 1-e^{-1}$ can be dealt with in a similar fashion. In
this case, the integral (\ref{sec3-eq17}) is equal to
\begin{equation}\label{sec3-eq26}
I(n,s)=\int_{0}^{c}f(w)e^{-(n-1)w}\,dw,
\end{equation}
with $c=-\log(1-x)<1$. Again, using the estimate of $f(w)$ as
$w\to 0$ and Theorem~\ref{sec3-thm1}, we get
\begin{equation}\label{sec3-eq27}
I(n,s)\sim \frac{\left(\log{n}\right)^{s-1}}{n}.
\end{equation}
This completes the proof of the proposition.
\end{proof}
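The estimate of Proposition~\ref{sec3-prop3} can also be observed numerically. The convergence is only logarithmic, so the check below merely asks that the normalized ratio be within ten percent of $1$ at $n=500$; high-precision decimal arithmetic absorbs the roughly ninety digits of cancellation in the binomial sum:

```python
import math
from decimal import Decimal, getcontext

# Check sigma_n(s,x) ~ (log n)^(s-1) / (n Gamma(s)) at s = 2, x = 1/2.
getcontext().prec = 200
n, s = 500, 2
x = Decimal(1) / 2
sigma = sum((-1)**k * math.comb(n - 1, k) * x**(k + 1) / (k + 1)**s
            for k in range(n))
ratio = float(sigma) * n * math.gamma(s) / math.log(n)**(s - 1)
assert abs(ratio - 1) < 0.1   # convergence is only logarithmic in n
```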
\begin{proposition}\label{sec3-prop4}
Let $x$ be in $(0,1)$.
\begin{description}
\item[1.] For $s=-k$, $k$ a positive integer,
$\sigma_{n}(-k,x)$ is a polynomial in $x$ of degree $n$. Moreover, $|\sigma_{n}(-k,x)|$ is bounded
above by a fixed constant for all $n$.
\item[2.] For $s\notin \{0,-1,-2,\cdots\}$ and $n$ large
enough
\begin{equation}\label{sec3-eq28}
\sigma_{n}(s,x)\thicksim
\frac{1}{n(\log{n})^{1-s}\Gamma(s)}.
\end{equation}
\end{description}
\end{proposition}
\begin{proof}
By looking at the definition of $\sigma_{n}(s,x)$, we can see
that when $s=-k$, $k$ a positive integer, $\sigma_{n}(-k,x)$ is a
polynomial in $x$ of degree $n$. Now, using
Proposition~\ref{sec3-prop2} these polynomials can be rewritten as
\begin{equation}
\sigma_{n}(-k,x) =
\frac{-1}{n}\sum_{j=1}^{n}(-1)^{j}j!\stirs{k+1}{j}{n\choose
j}x^{j}(1-x)^{n-j}. \label{sec3-eq29}
\end{equation}
The last equation shows that the coefficients of
$\sigma_{n}(s,x)$ are multiples of Stirling numbers of the second
kind. It is well known that $\stirs{k+1}{j}$ is zero for $j>k+1$.
Thus, $|\sigma_{n}(-k,x)|$ is bounded above by a fixed constant for
every $n>k+1$ and for every bounded $x$. This proves the first
statement.
The second statement of the proposition has been proved in
Proposition~\ref{sec3-prop3} when $\re(s)>0$. For the remaining
values of $s$, we can either use the method of Laplace as in our
proof of Proposition~\ref{sec3-prop3} or use an elegant result of
Flajolet et al. \cite{flajolet:rice} regarding asymptotic
expansions of sums of the form (\ref{sec3-eq8}).
The function that defines the alternating sums $\sigma_{n}(s,x)$
is $x^{z}z^{1-s}$. Since $x^{z}z^{1-s}$ has a non-integral
algebraic singularity at $s_0=0$, the proof of
\cite[Theorem~3]{flajolet:rice} remains valid in its entirety, the
only changes that are needed concern the change of variables
immediately after equation~(15) in \cite[p. 119]{flajolet:rice}.
Instead of the change of variable $\zeta=s\log{n}$ carried out in
\cite{flajolet:rice}, one needs the change of variable
$\zeta=s\log(nx)$. Consequently, when $s$ is nonintegral,
$\sigma_{n}(s,x)$ has the following asymptotics when $n$ is large
\begin{equation}\label{sec3-eq30}
\sigma_{n}(s,x)\thicksim
\frac{(\log{n}+\log{x})^{s-1}}{n\Gamma(s)}\thicksim
\frac{1}{n(\log{n})^{1-s}\Gamma(s)}+\log{x}\frac{s-1}{n(\log{n})^{2-s}\Gamma(s)}.
\end{equation}
When $s=k\in \{1,2,\cdots\}$, the following expansion applies
\begin{equation}\label{sec3-eq31}
\sigma_{n}(s,x)\thicksim
\frac{(\log{n})^{k-1}}{n(k-1)!}+\log{x}\frac{(\log{n})^{k-2}}{n(k-2)!}.
\end{equation}
The asymptotic estimates (\ref{sec3-eq30}) and (\ref{sec3-eq31})
are valid for $n$ large enough and for all $s\notin
\{0,-1,-2,\cdots\}$. These estimates are in accordance with the
estimates of Proposition~\ref{sec3-prop3}.
\end{proof}
\subsection{The Function $Z(s,x)$}\label{sec4}
The function $Z(s,x)$ is a function of two variables defined by
\begin{eqnarray}\label{sec4-eq1}
Z(s,x)\triangleq \sum_{n=1}^{\infty}\frac{\sigma_n(s,x)}{n+1},
\end{eqnarray}
where $s\in\mathbb{C}$, $x\in(0,1)$, and
\begin{equation}\label{sec4-eq2}
\sigma_n(s,x)=\sum_{k=0}^{n-1}(-1)^{k}{n-1 \choose
k}x^{k+1}(k+1)^{-s}.
\end{equation}
Clearly, $Z(s,x)$ is an infinite series of functions that should
be uniformly convergent in order to have a useful function. If the
variable $x$ is allowed to lie in the unit interval $0<x<1$ and
the variable $s$ is fixed but allowed to be anywhere in
$\mathbb{C}$, then the series, viewed as a function of $s$, is
uniformly convergent by the Weierstrass M-test. Indeed, we have
\begin{proposition}\label{sec4-prop1}
Suppose that $0<x<1$, then
\begin{description}
\item[1.] $Z(s,x)$ is an entire function of $s$.
\item[2.] $\lim_{x\to 1}Z(s,x) =(s-1)\zeta(s)$.
\end{description}
\end{proposition}
\begin{proof}
By Proposition~\ref{sec3-prop4}, the asymptotic estimates of
$\sigma_n(s,x)$ are valid for $n$ large enough and for all
$s\notin \{0,-1,-2,\cdots\}$. Moreover, for $s=-k$, $k$ a positive
integer, $|\sigma_{n}(-k,x)|$ is bounded above by a fixed constant
for all $n$ and all $0<x<1$.
We first prove that $Z(s,x)$ is well-defined and does not have any
singularity when $\Re(s)>0$. Indeed, by the logarithmic test of
series our series is dominated by a uniformly convergent series
for all finite $s$ such that $\Re(s)>0$ and $0<x<1$. Moreover,
$Z(s,x)$ is finite for finite $s$ and finite $x$; therefore,
$Z(s,x)$ does not have any singularity when $\Re(s)>0$. To extend
$Z(s,x)$ outside the domain $\Re(s)>0$, we use Weierstrass theorem
of the uniqueness of analytic continuation and repeat the same
process for $\Re(s)>-k$, $k\in \mathbb{N}$. The final result
yields a well-defined function $Z(s,x)$ with no finite singularity
for all $s\in\mathbb{C}$; hence, $Z(s,x)$ is an entire function
of $s$.
To prove the second property, we take the limit $x\to 1$ in the
identity defining $\sigma_{n}(s,x)$. It yields the series
representation of the Riemann zeta function \cite{lazhar,lazhar2}:
\begin{eqnarray}\label{sec4-eq3}
(s-1)\zeta(s)=\sum_{n=1}^{\infty}\frac{S_n(s)}{n+1},
\end{eqnarray}
where
\begin{equation}\label{sec4-eq4}
S_n(s)=\sum_{k=0}^{n-1}(-1)^{k}{n-1 \choose k}(k+1)^{-s}.
\end{equation}
This finishes the proof of the proposition.
\end{proof}
\begin{remark}
It is the second statement of Proposition~\ref{sec4-prop1} that
makes $Z(s,x)$ more interesting than $\Li_{s}(x)$ in the analysis
of the present paper. The reason is simple: $Z(s,x)$ is
convergent when $x\to 1$ for all values of $s$, while $\Li_{s}(x)$
is convergent as $x$ tends to $1$ only when $\re(s)>1$. In
particular, $\Li_{s}(x)$ is divergent when $x\to 1$ for any
nontrivial zero of the Riemann zeta function.
\end{remark}
\section{An Expansion of $Z(s,x)$}\label{sec5}
In this section, we suppose again that $0<x<1$. To obtain an
expansion of
\begin{equation}\label{sec5-eq1}
Z(s,x)=(s-1)\Li_{s}(x) -\log{x}e^{-i\pi s}\frac{\Gamma(1-s)}{2\pi
i}\int_{\mathcal{C}}\frac{xe^{-t}}{(1-xe^{-t})^2}t^{s-1}\,dt,
\end{equation}
we replace $\Li_{s}(x)$ by its integral expression
(\ref{sec1-eq4}) and we expand the contour integrals:
\begin{align}
Z(s,x)&=(s-1)e^{-i\pi s}\frac{\Gamma(1-s)}{2\pi
i}\int_{\mathcal{C}}\frac{xt^{s-1}}{e^{t}-x}\,dt\nonumber\\
& \qquad\qquad -\log{x}e^{-i\pi s}\frac{\Gamma(1-s)}{2\pi
i}\int_{\mathcal{C}}\frac{xe^{-t}}{(1-xe^{-t})^2}t^{s-1}\,dt.\label{sec5-eq2}
\end{align}
We may first suppose that $\re(s)<0$. To carry out the
integration, we use Riemann's trick of folding back the contour
and integrating the function over the entire plane outside the
contour. Of course, in the integration process the poles must be
avoided. We leave out the details which can be found in
\cite{jonquiere:1889}.
The first integrand has simple poles at the points $t=\log{x}+2\pi
ni$, $n=0, \pm 1, \pm 2,\cdots $ and the second integrand has
double poles at the points $t=\log{x}+2\pi ni$, $n=0, \pm 1, \pm
2,\cdots $. By Cauchy's theorem, (\ref{sec5-eq2}) becomes
\begin{equation}\label{sec5-eq3}
Z(s,x)= \Gamma(1-s)\sum_{n=-\infty}^{\infty}-e^{-i\pi
s}\underset{t=\log{x}+2\pi
ni}{\text{Res}}\bigg((s-1)\frac{xt^{s-1}}{e^{t}-x}
-\log{x}\frac{xe^{t}t^{s-1}}{(e^{t}-x)^2}\bigg).
\end{equation}
Regarding the first residue evaluation without the factor $s-1$,
we have
\begin{equation}\label{sec5-eq4}
\underset{t=\log{x}+2\pi
ni}{\text{Res}}\frac{xt^{s-1}}{e^{t}-x}=\lim_{t\to\log{x}+2\pi
ni}\frac{(t-\log{x}-2\pi ni)t^{s-1}}{e^{t}-x}=(\log{x}+2\pi
ni)^{s-1}.
\end{equation}
To evaluate the second residue, we appeal to the following lemma
\begin{lemma}\label{sec5-lem1}
Let $f(z)$ and $g(z)$ be two analytic functions. Let $z=a$ be a
simple zero of $g(z)$ and suppose that $f(a)\ne 0$. Then,
\begin{equation}\label{sec5-eq5}
\underset{z=a}{\text{Res}}\bigg\{\frac{f(z)}{g(z)^2}\bigg \}
=\frac{f^{\prime}(a)g^{\prime}(a)-f(a)g^{\prime\prime}(a)}{g^{\prime}(a)^{3}}.
\end{equation}
\end{lemma}
\begin{proof}
Expand the numerator and denominator of $\frac{f(z)}{g(z)^2}$ into
Taylor series around $z=a$ and identify the coefficient of
$\frac{1}{z-a}$.
\end{proof}
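The lemma is easily sanity-checked on a concrete pair, say $f(z)=e^{z}$ and $g(z)=\sin z$ at the simple zero $a=0$, where the formula gives residue $1$; the contour integral below recovers the same value independently (the trapezoid rule on a circle is spectrally accurate):

```python
import cmath
import math

# Lemma formula at f(z) = e^z, g(z) = sin z, a = 0:
#   (f'(0) g'(0) - f(0) g''(0)) / g'(0)**3 = (1*1 - 1*0)/1 = 1.
lemma_value = (1.0 * 1.0 - 1.0 * 0.0) / 1.0**3

# Independent check: residue of e^z / sin(z)^2 at 0 via the contour
# integral (1/2 pi i) over |z| = 1 (only the pole at 0 lies inside).
M = 4096
res = sum(cmath.exp(z) / cmath.sin(z)**2 * z
          for z in (cmath.exp(2j * math.pi * k / M) for k in range(M))) / M
assert abs(res - lemma_value) < 1e-8
```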
In our case, we have $g(t)=e^{t}-x$, $f(t)=xe^{t}t^{s-1}$ and
$a=\log{x}+2\pi ni$, so that $g^{\prime}(a)=g^{\prime\prime}(a)=e^{a}=x$,
$f(a)=x^{2}a^{s-1}$ and
$f^{\prime}(a)=x^{2}a^{s-1}+x^{2}(s-1)a^{s-2}$. Lemma~\ref{sec5-lem1} gives
\begin{eqnarray}\label{sec5-eq6}
\underset{t=a}{\text{Res}}\bigg\{\frac{xe^{t}t^{s-1}}{(e^{t}-x)^2}\bigg
\} =\frac{x^{3}(s-1)a^{s-2}}{x^{3}}=(s-1)(\log{x}+2\pi ni)^{s-2}.
\end{eqnarray}
Finally, using the fact that
\begin{eqnarray}\label{sec5-eq6bis}
-e^{-i\pi s}(\log{x}+2\pi ni)^{s-1}&=&(-\log{x}+2\pi ni)^{s-1},\\
-e^{-i\pi s}(\log{x}+2\pi ni)^{s-2}&=&-(-\log{x}+2\pi
ni)^{s-2},\label{sec5-eq6bisbis}
\end{eqnarray}
the expansion for $Z(s,x)$ can be summarized in the following
theorem
\begin{theorem}\label{sec5-thm1}
For $0<x<1$ and $s\notin \{1, 2, 3,\cdots\}$,
\begin{align}
Z(s,x)=
(s-1)\Gamma(1-s)&\sum_{n=-\infty}^{\infty}\bigg[(-\log{x}+2\pi
ni)^{s-1}\nonumber\\
& +\log{x}\,(-\log{x}+2\pi ni)^{s-2}\bigg].\label{sec5-eq7}
\end{align}
\end{theorem}
To further simplify (\ref{sec5-eq7}), we substitute the following
two identities from \cite[eq. (8), p. 29]{erdelyi:1953}
\begin{align}\label{sec5-eq8}
\sum_{n=-\infty}^{\infty}(-\log{x}-2\pi
ni)^{s-1}&=(-\log{x})^{s-1}\nonumber\\
&+\frac{1}{\Gamma(1-s)}\sum_{n=0}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(s-n),\\
\sum_{n=-\infty}^{\infty}(-\log{x}-2\pi
ni)^{s-2}&=(-\log{x})^{s-2}\nonumber\\
&
-\frac{1}{(s-1)\Gamma(1-s)}\sum_{n=0}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(s-n-1)\label{sec5-eq9}
\end{align}
into the equation of Theorem~\ref{sec5-thm1} (the summation runs
over all $n\in\mathbb{Z}$, so the sign of $2\pi ni$ is immaterial).
Since $\log{x}\,(-\log{x})^{s-2}=-(-\log{x})^{s-1}$, the two
$(-\log{x})^{s-1}$ contributions cancel and the expansion of
$Z(s,x)$ simplifies to
\begin{align}
Z(s,x)=&(s-1)\sum_{n=0}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(s-n)\nonumber\\
&-\log{x}\sum_{n=0}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(s-n-1).\label{sec5-eq21}
\end{align}
We have proved the identity when $\re(s)<0$. By the principle of
analytic continuation, it remains valid for every $s$ which is not
a positive integer. Put in a more
convenient form, expansion (\ref{sec5-eq21}) is summarized in the
following theorem:
\begin{theorem}\label{sec5-thm2}
For $0<x<1$ and $s\notin \{1, 2, 3,\cdots\}$,
\begin{align}
Z(s,x)=&(s-1)\zeta(s)-\log{x}\,\zeta(s-1)\nonumber\\
&+\sum_{n=1}^{\infty}\frac{(\log{x})^{n}}{n!}\bigg[(s-1)\zeta(s-n)-
\log{x}\,\zeta(s-n-1)\bigg].\label{sec5-eq22}
\end{align}
\end{theorem}
When $s=1$, Definition~\ref{sec2-def1} together with
$\sigma_{n}(1,x)=\frac{1-(1-x)^{n}}{n}$ provides
\begin{align}
Z(1,x)&=\sum_{n=1}^{\infty}\frac{1}{n(n+1)}-\sum_{n=1}^{\infty}\frac{(1-x)^n}{n(n+1)}\nonumber\\
&=-\frac{x\log{x}}{1-x}, \label{sec5-eq22bis}
\end{align}
and when $s$ is a positive integer different from 1, a similar
expansion is still valid as a consequence of the previous theorem:
\begin{corollary}\label{sec5-cor1}
For $0<x<1$ and $s=k\in \{2, 3,\cdots\}$,
\begin{align}
Z(k,x)=&(k-1)\zeta(k)+\frac{(\log{x})^{k-1}}{(k-1)!}\nonumber\\
&+(k-1)\sum_{\substack{n=1\\n\ne
k-1}}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(k-n)\nonumber\\
&-\log{x}\sum_{\substack{n=0\\n\ne
k-2}}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(k-n-1).\label{sec5-eq23}
\end{align}
\end{corollary}
\begin{proof}
Let $s=k+\epsilon$, $k\in \{1, 2, 3,\cdots\}$ and $\epsilon$ small
positive real number. Theorem~\ref{sec5-thm1} gives the expansion
\begin{align}
Z(k+\epsilon,x)=&(k-1+\epsilon)\zeta(k+\epsilon)+(k-1+\epsilon)\Gamma(1-k-\epsilon)(1-x^2)(-\log{x})^{k-1
+\epsilon}\nonumber\\
&-x^2\log{x}\zeta(k-1+\epsilon)+(k-1+\epsilon)\zeta(1+\epsilon)\frac{(\log{x})^{k-1}}{(k-1)!}\nonumber\\
&+(k-1+\epsilon)\sum_{\substack{n=1\\n\ne
k-1}}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(k-n+\epsilon)\nonumber\\
&-x^2\log{x}\zeta(1+\epsilon)\frac{(\log{x})^{k-2}}{(k-2)!}\nonumber\\
&-x^2\log{x}\sum_{\substack{n=1\\n\ne
k-2}}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(k-n-1+\epsilon).\label{sec5-eq24}
\end{align}
Now, we replace $\Gamma(1-k-\epsilon)$ and $\zeta(1+\epsilon)$ by
their well-known Laurent expansions
\begin{eqnarray}\label{sec5-eq25}
\Gamma(1-k-\epsilon)&=&-\frac{(-1)^{k-1}}{(k-1)!}\frac{1}{\epsilon}+\frac{(-1)^{k-1}}{(k-1)!}(H_{k-1}-\gamma)+
O(\epsilon)\\
\zeta(1+\epsilon)&=&\frac{1}{\epsilon}+\gamma+O(\epsilon),\label{sec5-eq26}
\end{eqnarray}
where
\begin{eqnarray}\label{sec5-eq27}
H_{k-1}&=& 1+\frac{1}{2}+\frac{1}{3}+\cdots+ \frac{1}{k-1},\\
\gamma&=& 0.577215665\ldots\label{sec5-eq28}
\end{eqnarray}
are the $(k-1)$-th harmonic number and the Euler--Mascheroni
constant, respectively. Substituting these expansions into
(\ref{sec5-eq24}), we obtain
\begin{align}
Z(k+\epsilon,x)=&(k-1)\zeta(k)+(1-x^2)\frac{(\log{x})^{k-1}}{(k-1)!}\bigg[H_{k-1}-\frac{(-\log{x})^{\epsilon}-1}{\epsilon}\bigg]\nonumber\\
&+(k-1)\sum_{\substack{n=1\\n\ne
k-1}}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(k-n)\nonumber\\
&+x^2\sum_{\substack{n=1\\n\ne
k-2}}^{\infty}\frac{(\log{x})^{n+1}}{n!}\zeta(k-n-1)+O(\epsilon).\label{sec5-eq29}
\end{align}
Finally, letting $\epsilon$ tend to zero and recalling that
\begin{equation}\label{sec5-eq30}
\lim_{\epsilon\to
0}\frac{(-\log{x})^{\epsilon}-1}{\epsilon}=\log(-\log{x}),
\end{equation}
we get
\begin{align}
Z(k,x)=&(k-1)\zeta(k)+(1-x^2)\frac{(\log{x})^{k-1}}{(k-1)!}\bigg[H_{k-1}-\log(-\log{x})\bigg]\nonumber\\
&+(k-1)\sum_{\substack{n=1\\n\ne
k-1}}^{\infty}\frac{(\log{x})^{n}}{n!}\zeta(k-n)\nonumber\\
&+x^2\sum_{\substack{n=1\\n\ne
k-2}}^{\infty}\frac{(\log{x})^{n+1}}{n!}\zeta(k-n-1).\label{sec5-eq31}
\end{align}
\end{proof}
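As an informal numerical sanity check (a Python sketch, not part of the argument), the Laurent expansions (\ref{sec5-eq25})--(\ref{sec5-eq26}) used in the proof can be compared against direct evaluation; the crude \texttt{zeta} below is a truncated sum with an Euler--Maclaurin tail correction:

```python
import math

# Informal check of the Laurent expansions (sec5-eq25)-(sec5-eq26).
EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def zeta(s, N=10**6):
    """Crude real-argument zeta: truncated sum plus Euler-Maclaurin tail."""
    return sum(n**-s for n in range(1, N + 1)) + N**(1 - s)/(s - 1) - 0.5*N**-s

def gamma_laurent(k, eps):
    """First two terms of Gamma(1-k-eps) as given in (sec5-eq25)."""
    H = sum(1.0/j for j in range(1, k))              # harmonic number H_{k-1}
    c = (-1)**(k - 1) / math.factorial(k - 1)
    return -c/eps + c*(H - EULER_GAMMA)

k, eps = 3, 1e-3
assert abs(math.gamma(1 - k - eps) - gamma_laurent(k, eps)) < 1e-2
assert abs(zeta(1 + 1e-2) - (1/1e-2 + EULER_GAMMA)) < 1e-2  # zeta(1+eps) ~ 1/eps + gamma
```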
The expansion of $Z(s,x)$ in Theorem~\ref{sec5-thm1} or
Corollary~\ref{sec5-cor1} is the counterpart of Lindel{\"o}f's
series expansion of the polylogarithm \cite{Lindelof:1905}. The
applications of the expansion of $Z(s,x)$ are numerous. One
important application regarding the zeros of the Riemann zeta
function is the subject of the next section.
\section{A Necessary Condition for a Nontrivial Zero of the Riemann Zeta Function}\label{sec6}
Suppose that $s$ is a nontrivial zero of $\zeta(s)$. We
necessarily have $0<\re(s)<1$. The expansion of
Theorem~\ref{sec5-thm2} becomes
\begin{align}
Z(s,x)=&(s-1)\Gamma(1-s)(1-x^2)(-\log{x})^{s-1}
-x^2\log{x}\zeta(s-1)\nonumber\\
&+\sum_{n=1}^{\infty}\frac{(\log{x})^{n}}{n!}\bigg[(s-1)\zeta(s-n)-
x^2\log{x}\zeta(s-n-1)\bigg].\label{sec6-eq1}
\end{align}
We immediately obtain
\begin{align}
\lim_{x\to
1}\frac{Z(s,x)}{(1-x^2)(-\log{x})^{s-1}}=&(s-1)\Gamma(1-s).\label{sec6-eq2}
\end{align}
Thus, we have proved the following important theorem:
\begin{theorem}\label{sec6-thm1}
Let $s$ be a nontrivial zero of $\zeta(s)$ and let $Z(s,x)$ be the
function defined by (\ref{sec4-eq2}), then
\begin{equation}
\lim_{x\to
1}\frac{Z(s,x)}{(1-x)(-\log{x})^{s-1}}=2(s-1)\Gamma(1-s).\label{sec6-eq3}
\end{equation}
\end{theorem}
We also have an immediate corollary:
\begin{corollary}\label{sec6-cor1}
Let $s$ be a nontrivial zero of $\zeta(s)$ and let $Z(s,x)$ be the
function defined by (\ref{sec4-eq2}), then
\begin{equation}
\lim_{x\to
1}\frac{Z(s,x)}{(1-x)^s}=2(s-1)\Gamma(1-s).\label{sec6-eq5}
\end{equation}
\end{corollary}
\begin{proof}
The proof follows from the fact that
\begin{equation}
\lim_{x\to
1}\frac{(1-x)^{s-1}}{(-\log{x})^{s-1}}=1,\label{sec6-eq6}
\end{equation}
and Theorem~\ref{sec6-thm1}.
\end{proof}
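The limit (\ref{sec6-eq6}) can also be checked numerically. The following Python sketch uses an illustrative complex value of $s$ on the critical line (the numerical value is an assumption for illustration only, not a computed zero):

```python
import cmath
import math

# Check that (1-x)^(s-1) / (-log x)^(s-1) -> 1 as x -> 1  (sec6-eq6),
# for an illustrative s on the critical line (assumed value).
s = complex(0.5, 14.134725)

def power_ratio(x):
    # (1-x)^(s-1) / (-log x)^(s-1), via principal-branch logarithms
    return cmath.exp((s - 1) * (math.log(1 - x) - math.log(-math.log(x))))

assert abs(power_ratio(0.999) - 1) < 1e-2
assert abs(power_ratio(0.9999) - 1) < 1e-3
```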
Recalling that $\lim_{x\to 1}Z(s,x)=(s-1)\zeta(s)$, we conclude
that Theorem~\ref{sec6-thm1} and Corollary~\ref{sec6-cor1} can have
important computational and theoretical applications in the theory
of the Riemann zeta function. For example, when $s$ is a nontrivial
zero, we can write
\begin{equation}
Z(s,x)\approx 2(s-1)\Gamma(1-s)(1-x)^s\label{sec6-eq7}
\end{equation}
when $x$ is near 1. Equation~(\ref{sec6-eq7}) measures the rate at
which $Z(s,x)$ approaches the zero value $(s-1)\zeta(s)=0$ as $x$
approaches one: the rate of approach to zero equals
$2(s-1)\Gamma(1-s)(1-x)^s$.
\section{Introduction}
One way to study the climate is through computational numerical models. Models that take the entire atmosphere of the planet as their system of study are called General Circulation Models (GCM). These are useful for analyzing and forecasting the global behavior and tendency of the atmosphere. Typical horizontal resolutions are around 100 km~\cite{salathe2010regional,dosio2015dynamical}. On the other hand, there are Regional Climate Models (RCM), which study the portion of the atmosphere above a limited domain or region of the Earth's surface. Being a limited-area model, an RCM has lateral boundaries, at which the information generated by a GCM is used as a boundary condition. The advantage of an RCM is that grids of high horizontal resolution can be used to faithfully represent the fine features of the terrain topography. A high-resolution RCM is useful in adaptation and environmental impact studies~\cite{wang2004regional,lee2014potential}.
When an RCM is used, one speaks of its added value. This refers to the gain in information about the climate variables when using a horizontal grid with a resolution of tens of kilometers or less. This information gain is represented by the spatial and/or temporal variation of the meteorological fields when the resolution is increased beyond that of the supplied data~\cite{lee2014potential}. In simulations where the RCM increases the resolution of reanalysis data, added value has been reported in the surface variables (temperature and wind speed at 10 m) in regions characterized by small-scale orographic details, such as mountainous regions~\cite{di2012potential}. With a better representation of the mountain slopes, the vertical motion of air shows an additional increase that is not observed in GCMs. The greater the vertical motion, the greater the dynamic precipitation, produced by the slow ascent of humid air masses~\cite{wallace2006atmospheric}, which in turn results in a drier and warmer atmosphere~\cite{jones1995simulation,caldwell2010california,emanuel1999development}.
RCMs have been used to study the sensitivity of the models to resolution. It has been found that with a high resolution in the RCM one obtains less light precipitation and more of the heavy type over mountainous regions such as the Alps in Europe~\cite{giorgi1996investigation}, the mountain ranges of the state of California~\cite{leung2003sensitivity} and the mountains of the state of Washington~\cite{salathe2010regional}. Other studies have sought mechanisms or explanations for local phenomena, such as the midsummer drought (can\'icula) in the Mesoamerican region~\cite{small2007central,magana1999midsummer}. In the South American region it has been found that the effectiveness of an RCM depends on which part of the Andes is studied, with sensitivity analyses being key to determining the differences between the information provided by an RCM and a GCM~\cite{de2011assessing}. RCMs have also been used in studies of wind energy assessment and wind variability~\cite{garcia2013relationship}.
In the present study the RCM called RegCM~\cite{giorgi2012regcm4} is used to analyze the horizontal wind circulation in the lowest part of the planetary boundary layer, that is, a few meters above the terrain surface. The area of interest is the Guatemalan territory, over which domains of varying extent and resolution are used. The purpose of this work is to obtain the added value of an RCM for the wind circulation patterns and the effect that the topography may have on them. In this case, the added value lies in the variations of the velocity field at a scale smaller than that of the initial data. Since the value of a field in a grid cell represents the average of that field over the area it covers, a higher resolution provides average values over smaller areas, thereby yielding additional information about the wind flow pattern.
\section{Materials and Methods}
The results presented in this study were obtained with the RCM called RegCM version 4.5, developed at the {\it Abdus Salam International Centre for Theoretical Physics} (ICTP). The non-hydrostatic core is used, which solves the fluid dynamics equations in the three spatial dimensions. The terrain topography is incorporated into the calculations through the dimensionless vertical coordinate $\sigma$, defined in terms of the atmospheric pressure and the terrain elevation~\cite{giorgi1996investigation}. The Earth's surface is represented by the constant value $\sigma=1$, while the upper boundary of the atmosphere takes the value $\sigma=0$. RegCM uses the {\it Community Climate Model} (CCM3) radiation model of the {\it National Center for Atmospheric Research} (NCAR), which accounts for the effect of the presence of gases such as O$_3$, H$_2$O, CO$_2$ and O$_2$ in the atmosphere~\cite{kiehl1998national}. The interaction between the lower atmosphere, the vegetation and the soil moisture content is modeled using the {\it Biosphere-atmosphere transfer scheme} (BATS)~\cite{dickinson1993biosphere}. Convective precipitation over land is computed with the scheme of~\cite{grell1993prognostic} or that of Emanuel and \v{Z}ivkovi\'{c}-Rothman (1999).
The results presented were obtained by performing five different runs with RegCM. For the first four (V0-V3) the domain under consideration is a square region centered on the republic of Guatemala. As shown in Table~\ref{tab:runs}, these use a resolution $\Delta s$ that starts at 60 km and is refined down to 2 km. The parameter $\Delta s$ represents the separation between grid points in both horizontal dimensions. The horizontal domain is 1,800 km per side for runs V0, V1 and V2. For run V3 the domain is 960 km per side. The reduction of the domain size is due to the limitation imposed by the computational resources, which is reflected in the number of points used to set up the horizontal grid. Using more than $120\times120$ points would make the time needed to complete a one-year simulation exceed two weeks of uninterrupted execution. Figure~\ref{fig:vmag} shows the area covered by the domain of 960 km per side. Run V4 uses a resolution $\Delta s=2$~km and covers a square area of 120 km per side in the central region of Guatemala, where the Acatenango, Fuego, Agua and Pacaya volcanoes are located. Figure~\ref{fig:vAcate} shows the domain used in this run.
\begin{table}
\begin{center}
\begin{tabular}{cccc}
\hline
run & no. of points & $\Delta s$ [km] & domain [km] \\
\hline
V0 & 30 $\times$ 30 & 60 & 1,800 \\
V1 & 60 $\times$ 60 & 30 & 1,800 \\
V2 & 120 $\times$ 120 & 15 & 1,800 \\
V3 & 120 $\times$ 120 & 8 & 960 \\
V4 & 60 $\times$ 60 & 2 & 120 \\
\hline
\end{tabular}
\caption{Description of the computational domain.}
\label{tab:runs}
\end{center}
\end{table}
Runs V0 to V3 cover a simulation period from January 1, 2016 to December 31 of the same year, while in run V4 the numerical solution is computed only for the months of November and December 2016.
The simulation results are stored on disk in the netCDF binary format~\cite{rew1990netcdf}. For the runs covering all of 2016, the numerical solution is stored when the simulation reaches 0, 6, 12 and 18 h, i.e. four records per day. For run V4, which covers the last two months of 2016, the simulation output interval was 30 min, i.e. 48 records per day. The purpose of storing the solution more frequently is to obtain a better representation in time of climate processes with a diurnal period.
As in every regional climate model, RegCM needs initial and boundary conditions in order to solve the partial differential equations of fluid dynamics. In both cases the ERA-Interim global atmospheric reanalysis data were used, produced by the {\it European Centre for Medium-Range Weather Forecasts} (ECMWF)~\cite{dee2011era}. The ERA-Interim dataset provides the values of the dynamical variables (temperature, wind velocity, humidity, etc.) as functions of the coordinates longitude, latitude, height and time. The resolution in latitude and longitude is 1.5$^\circ$, which is equivalent to a distance separation of 167 km (at the equator). The resolution used in all the runs is higher than that of ERA-Interim, which implies that the global data are interpolated onto the finer grid used in the simulations with RegCM.
The lateral boundary conditions are imposed at six-hour intervals. This is the information external to the domain that dictates the behavior of the numerical solution over the region under analysis. The simulation result is the combination of the local climate dynamics (a product of the numerical solution of the equations) and the influence of the global climate patterns fed into the domain through the lateral boundaries.
Runs V0, V1 and V2 use the ERA-Interim data to impose the lateral boundary conditions. RegCM allows the output of one simulation to be used as a boundary condition, provided that the domain is entirely contained within the domain of the solution that will provide said boundary condition. In this case the domain of V3 is contained, or nested, in that of V2, so the solution of run V2 was used as boundary data for V3. The same technique was employed to provide boundary conditions for run V4. The advantage of using a previously computed solution instead of the ERA-Interim data as a boundary condition is the incorporation of the local dynamical patterns that develop thanks to a better representation of both the terrain topography and the wind circulation.
All runs used 18 vertical levels, with a top atmospheric pressure of 50 mb at the upper boundary. In this article only the horizontal components of the wind velocity at the vertical level closest to the ground are analyzed. This means that the height for which the results are shown is approximately 20 m above the ground.
The analysis of the climate variables requires computing multiple operations and arithmetic averages, both in time and in space. For this purpose the software called {\it netCDF Operator} (NCO)~\cite{zender2008analysis} was used, which consists of a series of Linux commands for manipulating files in the netCDF binary format.
\section{Results}
\subsection{Comparison with station data}
Figure~\ref{fig:LaAurora} compares the results obtained from runs V0 to V3 with the data collected by a meteorological station.
The variable shown is the magnitude of the horizontal wind velocity throughout 2016 at the station's location. The data used were collected by the La Aurora station, which belongs to the network of meteorological stations of the Instituto Nacional de Sismología Vulcanología Meteorología e Hidrología (Insivumeh). The plot shows the weekly averages over the whole year. To quantify the closeness of each run to the collected data, the L1 and L2 norms of the differences between the station data and the runs were computed. The L1 norm yields 387, 285, 213 and 211 for the differences with runs V0, V1, V2 and V3, respectively. Similarly, the results for the L2 norm are 56.9, 41.9, 31.0 and 32.0 (both norms were truncated to three significant figures).
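As an illustration of how the misfit norms quoted above are computed (a minimal Python sketch with made-up toy values, not the actual station series):

```python
# Toy weekly series (made-up values); the real computation uses the
# Insivumeh station record and the corresponding RegCM weekly averages.
station = [10.0, 12.0, 9.0]
run_v3  = [11.0, 14.0, 9.5]

diff = [a - b for a, b in zip(station, run_v3)]
l1 = sum(abs(d) for d in diff)            # L1 norm of the differences
l2 = sum(d*d for d in diff) ** 0.5        # L2 norm of the differences

assert l1 == 3.5
assert abs(l2 - 5.25**0.5) < 1e-12
```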
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pics/LaAurora}
\caption{Comparison of the wind speed magnitude for different resolutions with meteorological station data. The data were obtained from the Insivumeh website for the La Aurora station. Each point represents a weekly average.}
\label{fig:LaAurora}
\end{figure}
\subsection{Average velocity}
\begin{figure}
\includegraphics[width=\columnwidth]{pics/uAvgInSpace}
\includegraphics[width=\columnwidth]{pics/vAvgInSpace}
\caption{Zonal component $u$ and meridional component $v$ of the wind averaged over the whole domain for different resolutions $\Delta s$.}
\label{fig:uvAvg}
\end{figure}
Figure~\ref{fig:uvAvg} shows the spatial averages $\bar{u}$ and $\bar{v}$ of the zonal component $u$ and meridional component $v$ of the wind as functions of time. Positive zonal values denote eastward direction and negative ones westward, while positive meridional values denote northward direction and negative ones southward. The average is computed over the domain of run V3 (the same one shown in Figure~\ref{fig:vmag}). Each variable depends on four coordinates: longitude, latitude, height and time. The average of a variable over a spatial region is computed as
\begin{equation}
\bar{u}(\sigma_k, t_n) = \frac{1}{N_xN_y} \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} u(x_i,y_j,\sigma_k,t_n),
\end{equation}
where $x_i$ and $y_j$ are the longitude and latitude coordinates of the $ij$-th grid cell, $\sigma_k$ is the height of the $k$-th vertical level, $t_n$ is the time along the year, and $N_x$, $N_y$ are the numbers of points along the longitude and latitude coordinates. The curves in the plot correspond to the different values of the resolution $\Delta s$.
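The spatial average above can be sketched in plain Python as follows (toy values; the actual analysis was carried out with NCO on the netCDF output):

```python
# u[i][j]: zonal wind at grid cell (i, j), for one fixed vertical level
# and time step (toy 2x2 field with made-up values).
def spatial_average(u):
    nx, ny = len(u), len(u[0])
    return sum(u[i][j] for i in range(nx) for j in range(ny)) / (nx * ny)

u = [[1.0, 2.0],
     [3.0, 4.0]]
assert spatial_average(u) == 2.5
```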
Figure~\ref{fig:vmag} plots the average over all of 2016 of the magnitude of the horizontal velocity. The different panels show the same variable for different values of the spatial resolution $\Delta s$. The magnitude of the horizontal velocity vector $V$ in terms of $u$ and $v$ is computed as $V=\sqrt{u^2+v^2}$. The time average is computed with the value of $V$ at each point of the domain according to
\begin{equation}
\langle V(x_i,y_j,\sigma_k) \rangle=\frac{1}{N_t} \sum_{n=1}^{N_t} V(x_i, y_j, \sigma_k, t_n),
\label{eq:promedio}
\end{equation}
where $N_t$ is the total number of instants in the temporal dimension. All the cases of interest shown in this article refer to the wind flow in the vicinity of the Earth's surface, so $\sigma_k$ corresponds to the lowest vertical level.
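Equation~(\ref{eq:promedio}) can be sketched as follows (a minimal pure-Python illustration with made-up toy data; the real fields come from the netCDF output):

```python
import math

# u_t, v_t: lists of 2-D wind fields, one entry per stored time step.
def speed_time_average(u_t, v_t):
    nt, nx, ny = len(u_t), len(u_t[0]), len(u_t[0][0])
    avg = [[0.0] * ny for _ in range(nx)]
    for n in range(nt):
        for i in range(nx):
            for j in range(ny):
                avg[i][j] += math.hypot(u_t[n][i][j], v_t[n][i][j]) / nt
    return avg

# Two time steps on a 1x1 grid: speeds 5 and 1, so the average is 3.
assert speed_time_average([[[3.0]], [[0.0]]], [[[4.0]], [[1.0]]]) == [[3.0]]
```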
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{pics/vMagAll}
\caption{Wind speed magnitude averaged over all of 2016. The resolutions used are 60, 30, 15 and 8 km.}
\label{fig:vmag}
\end{figure*}
Figure~\ref{fig:vmagang3} again shows the magnitude of the horizontal wind velocity averaged over 2016, this time with the direction of the velocity averaged over the same time interval added to the plot. The average direction of the velocity vector ${\bf V}$ is computed as $\langle \theta \rangle = \tan^{-1}(\langle v \rangle/\langle u \rangle)$ for each grid cell. The time-averaged values of $u$ and $v$ are obtained with a formula similar to Eq.~(\ref{eq:promedio}). In this plot a zoom has been applied to show more detail over the territory of Guatemala. The results correspond to the resolution $\Delta s=8$ km.
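A note on implementation: computing $\langle \theta \rangle$ with a plain arctangent of $\langle v \rangle/\langle u \rangle$ loses the quadrant for westward winds; a two-argument arctangent avoids this, as the following hedged Python sketch shows:

```python
import math

# Mean wind direction per grid cell from the time-averaged components.
# math.atan2 keeps the correct quadrant, which atan(v/u) alone would lose.
def mean_direction(u_mean, v_mean):
    return math.atan2(v_mean, u_mean)

# A southwestward mean wind (u < 0, v < 0) lies in the third quadrant.
assert abs(mean_direction(-1.0, -1.0) - (-3 * math.pi / 4)) < 1e-12
```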
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pics/vMagAng3}
\caption{Wind magnitude and direction averaged over the whole year 2016. The resolution used is 8 km.}
\label{fig:vmagang3}
\end{figure}
Figure~\ref{fig:vAcate} shows the wind magnitude and direction averaged only over the months of November and December 2016. This run uses the finest resolution, with $\Delta s=$ 2 km. This represents a factor of 30 increase in resolution compared with the value of run V0, where $\Delta s=60$~km. With a resolution of 2 km it is possible to achieve a good representation of the orography in the central region of Guatemala. The plot identifies the location of the volcanoes of the central region: Acatenango, Fuego, Agua and Pacaya. It also shows some level contours with the elevation marked in meters above sea level. Although a 2 km resolution grid can represent the salient features of the orography, it is still not enough to resolve fine details, such as the separation between the summits of the Fuego and Acatenango volcanoes, which is 3 km. Both formations appear as a single volcano. For this run, the climate variables were stored at 30-minute time intervals. This yields 48 records per day, which gives a higher temporal resolution for the study of climate phenomena with a period of one day, as presented below.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pics/vMagAngAcate}
\caption{Wind magnitude and direction averaged over the months of November and December 2016. The spatial resolution is 2 km. The terrain elevation is shown by means of level contours labeled by their height in meters above sea level.}
\label{fig:vAcate}
\end{figure}
\subsection{Wind drainage}
Figure~\ref{fig:vadrain} consists of two parts. The lower panel shows the ground elevation profile for a north-south cut at a constant longitude of $90.6628^\circ$W. To place the latitude scale in geographic context, it is worth mentioning some known points with their respective latitudes. For example: the city of Escuintla is at $14.3^\circ$N, Guatemala City is at $14.6^\circ$N, the minimum that appears at approximately $14.85^\circ$N is the Motagua river basin, and the territorial boundary between Baja Verapaz and Quiché lies at a latitude of around $15.2^\circ$N. The upper panel of the figure shows the meridional component $v$ of the wind as a function of latitude and time. Northward wind is represented with light grays and southward wind with dark grays. Time is in units of days starting from December 1, 2016.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pics/vaDrainagePalin}
\caption{Upper panel: meridional component $v$ of the wind velocity for a north-south section at a fixed longitude of $90.6628^\circ$W. This section corresponds to the dotted vertical line shown in Figure~\ref{fig:vAcate}. Lower panel: ground elevation profile along the aforementioned dotted line.}
\label{fig:vadrain}
\end{figure}
\section{Discussion}
In order to compare the simulation results with real measurements, Figure~\ref{fig:LaAurora} plots the weekly average of the wind speed magnitude obtained with RegCM and measured by the Insivumeh La Aurora meteorological station. One can see that there is agreement with the general trend; however, the model overestimates the wind speed for all resolutions. This could be a systematic error either in the simulation or in the measuring instrument. Without information about the calibration of the instrument it is difficult to draw conclusions about the origin of the difference. While RegCM does overestimate the wind speed, it is also apparent that as the resolution increases the differences between the simulation and the station data become smaller. This is verified by the L1 and L2 norms, whose values decrease as $\Delta s$ decreases, except for L2 at the highest resolution. These results indicate that the calculation of the wind speed can improve with increasing resolution, but there is a limit beyond which the RCM no longer improves the estimate.
It can be observed in Figure~\ref{fig:uvAvg} that the zonal component $u$ is predominantly negative, which means that the wind blows westward; the average of $u$ over 2016 is $-12.1$ km/h.
Similarly, the annual average of the meridional component $v$ is $-5.0$ km/h, which indicates that the predominant direction is southward.
In the meridional component it is observed that the wind blows southward during January, February and from October to December. During the rest of the year the wind blows northward.
This pattern in the wind direction matches what is observed empirically throughout the year.
The average velocity shown in Figure~\ref{fig:uvAvg} exhibits a tendency to approach zero as the resolution is increased.
This effect is more noticeable for $\bar{u}$ than for $\bar{v}$, which is understandable, since it is observed empirically that the wind tends to blow from north to south steadily during most of the year. The effect is accentuated for the zonal component in the months of June, July and August, where $|\bar{u}|$ reaches its maximum values in the westward direction, especially for the resolution $\Delta s=60$~km. This behavior is explained by taking into account that a smaller $\Delta s$ (i.e. higher resolution) implies a more faithful representation of the orography (mountains) and irregularities of the terrain, features that constitute an obstacle to the free flow of the wind. As a consequence, the wind tends to be slowed down, with the obvious implication that $|\bar{u}|$ and $|\bar{v}|$ tend to be smaller. The difference is minimal at the peaks of maximum velocity, which are influenced less by the terrain and more by synoptic circulation patterns such as cyclones and/or high- and low-pressure systems originating beyond the domain under consideration. The influence of the terrain elevation on the wind flow is also verified by analyzing the upper layers of the atmosphere, where the velocity remains practically invariant when the grid resolution is changed.
In Figure~\ref{fig:vmag} one can observe the increase in detail as the resolution increases. Panel A uses $\Delta s=60$ km. It is evident that at this level of detail there is not enough information about the Guatemalan territory. A single grid cell is enough to cover departments such as Sacatepéquez, Sololá or Totonicapán. All the detail of the orography and its influence on the local wind circulation is absent. Panels B and C use resolutions of 30 and 15 km, respectively. Regions of high and low velocity begin to be distinguished in more detail. The salient feature is a high-velocity band that coincides geographically with the Sierra Madre, which crosses the territory of Guatemala from west to east, following the profile of the southern coast. In panel D the resolution used is 8 km. It can be observed that the regions with the highest velocities correspond to the mountain slopes of the Sierra Madre and the southern part of the department of Jutiapa. The results shown in Figure~\ref{fig:vmag} illustrate the advantage of using a regional climate model, namely the ability to observe the influence of mesoscale geographic features on the average behavior of the wind flow. The information about the wind flow is completed in Figure~\ref{fig:vmagang3}, to which the average wind direction has also been added. The high-velocity band that coincides with the Sierra Madre has a predominantly southward direction.
It is possible to achieve even greater detail in the simulation of the atmospheric dynamics whenever there are terrain features that emerge as the resolution increases. Figure~\ref{fig:vAcate} shows such a situation. Here $\Delta s=2$ km has been used, which makes the most prominent volcanic formations of the central region of Guatemala evident. It can be seen that the average direction in November and December is southward. The maximum average velocity is reached in the region located between the Agua and Pacaya volcanoes, which corresponds to the municipality of Palín in the department of Escuintla. The plot shows that in this region the average direction is practically perpendicular to the terrain level contours. This peculiarity is explained as a wind drainage coming from the Guatemala City area, which lies at a higher elevation. The wind descending from the central valley tends to accelerate as it goes down the elevation gradient that passes through Palín. A similar effect, but of smaller magnitude, is observed between the Acatenango and Agua volcanoes, where the average wind direction is also perpendicular to the terrain level contours. Although two months of simulation are not enough time to consider these results a stable climatological feature, they are indicative of the potential of these regions for wind energy extraction. It is no coincidence that a wind farm already exists in the vicinity of Palín.
Using a 2 km resolution and records of the wind velocity every half hour, it is possible to observe diurnal thermal circulation phenomena; such an effect is shown in Figure~\ref{fig:vadrain}. During a fair-weather day, the sun heats the mountain slopes, raising their temperature. The air tends to ascend along the slope, creating an anabatic wind. During the night the terrain cools and the circulation reverses, that is, the air descends along the slope, establishing a katabatic wind~\cite{wallace2006atmospheric}. The latitudes 14.8$^\circ$N and 14.9$^\circ$N are a perfect example of anabatic and katabatic circulation.
These latitudes correspond to points on the south and north sides (at the same longitude) of the Motagua river basin, respectively.
One can see that at the same time the wind blows southward on the south side of the basin, it blows northward on the north side.
In other words, the wind ascends on both slopes. During the night the direction of the circulation reverses.
It is noticeable that the time window of anabatic circulation is shorter than the katabatic one.
A similar pattern can be observed at latitude 14.2$^\circ$N, where the fair-weather days last until December 9, after which strong wind from the north is observed.
In this case, in the first days of the month a southward velocity is not reached; the circulation pattern is rather an increase and decrease of the northward velocity, which also has a diurnal pattern. In the days around December 20 the pattern breaks due to the arrival of a cold front that makes the wind blow southward.
In conclusion, an RCM has been used to perform high-resolution runs over the Guatemalan territory. The added value that can be extracted is the variation of the velocity field at a scale smaller than that of the data providing the initial and boundary conditions. In this case the study has focused on the wind circulation patterns in the vicinity of the Earth's surface. Contrary to what happens at higher altitudes of the atmosphere, where the wind is geostrophic, the wind circulation in the atmospheric boundary layer is influenced by the orography, the soil texture and the presence of cities. It has been observed that the detailed representation of the topography by means of high-resolution grids markedly influences the mesoscale wind circulation. Although the increase in resolution has improved the accuracy of the wind speed, the norms of the differences between the station data (at one point) and the simulations indicate that there is a resolution beyond which the calculations no longer improve.
Uno de los factores importantes del estudio y análisis de los patrones de circulación del viento es la creciente demanda de producción de energía limpia. Poder identificar zonas con un flujo fuerte y constante de viento es clave para el aprovechamiento de la energía eólica.
En trabajos futuros se analizarán las variables de precipitación y temperatura, las cuales tienen un efecto visible e inmediato sobre las diferentes actividades humanas.
\section{Agradecimientos}
A Vittorio M. Canuto del NASA Goddard Institute for Space Studies por haber presentado la ciencia del clima como un campo emocionante para investigar.
\bibliographystyle{apacite}
\section{Introduction}
The Ising model conveys, in its simplicity, a richness of
physical information which makes it relevant as a model for
critical phenomena in different instances (ferromagnetic materials,
lattice gas, binary alloys).
The model is also paradigmatic of a common situation in
statistical physics: although it is one of the simplest models,
its thermodynamic limit can be computed analytically only for particular
classes of lattices (in one or two dimensions). In two
dimensions, the Ising model on a square lattice has been solved
in the continuum limit with cylindrical and toroidal boundary
conditions \cite{onsa,kast}.
Analytic solutions have also been found for the
two-dimensional model on triangular and honeycomb
lattices \cite{varios}. In
general, however, the introduction of more specific boundary
conditions precludes the resolution of the model in closed
analytic form.
On the other hand, when resorting to a numerical simulation of
the observables one may take advantage of finite size effects to
infer the critical behavior in the thermodynamic limit \cite{barber}.
It happens, though, that finite size effects depend in general on
the boundary conditions, in a way that may not be crucial but
cannot be predicted. There
are, again, only a few situations in which the asymptotic
dependence on the spatial dimensions of the lattice has been
rigorously studied. One of these cases corresponds to the
analysis by Ferdinand and Fisher of the two-dimensional Ising
model on large toroidal lattices \cite{fisher}.
The conclusions reached there
support, in essence, the assumptions made in discussing finite
size effects and, more precisely, the hypothesis of finite size
scaling \cite{fss}. Some open questions are raised, however, regarding
the approach to the critical coupling, which is drastically
influenced by the shape of the torus.
The purpose of the present paper is to investigate finite size
effects and critical
properties of the Ising model on a class of two-dimensional
lattices with spherical topology. Our choice for the elements of
this class is not arbitrary, but is rather dictated by a
prescription which makes possible increasing the size of the
lattice without changing the local geometry. We propose to
consider, in fact, a type of honeycomb lattices folded on the
tetrahedron, which are built by assembling triangular blocks of
the kind shown in Figure~1 as the faces of the polyhedron. One
may construct a whole family of these lattices with increasing
size, in such a way that the member of the $N^{\rm th}$ generation
$\Delta_n$
has a number of lattice points equal to $n = 12 N^{2}$. The
coordination number is constant in each lattice. Moreover, from
the point of view of the simplicial geometry, the curvature is
always concentrated at the same faces, which are those formed by
the three-fold rings around the four vertices.
In principle, this makes the problem
of taking the thermodynamic limit along our sequence of lattices
well-defined. In ref. \cite{jose} clear evidence was given of
critical behavior in the ferromagnetic regime, as well as
evidence supporting the hypothesis of finite size scaling
applied to the model on the curved surface.
The lattices we are considering may be understood as the result
of applying nontrivial boundary conditions for the honeycomb
lattice on the plane, though they have the effect of introducing
curvature in the model. We undertake the
investigation of the influence of these boundary conditions on
finite size effects and, more significantly from the physical
point of view, on the critical properties of the model.
Regarding the first point we will see that a discrepancy
arises between the scaling of {\em pseudocritical} coupling
constants for finite lattices and the true scaling of the
correlation length. The second issue may probably be addressed
in the continuum. In fact, investigations of the effect of
boundary conditions in conformal field theories have been
carried out before \cite{cardy1}. Anyhow, the inclusion of a conical
singularity requires the kind of boundary condition which may
call for a nonlocal operator in the theory \cite{pol}, so the analysis
of our model in the continuum does not appear quite straightforward.
The content of the paper is distributed as follows.
In Section 2 we review the
dimer approach applied to the computation of the partition
functions and correlation functions of the model. Section 3 is
devoted to the finite size analysis of data obtained with the
above method, clarifying the issue concerning the $\nu$ critical
exponent. In Section 4 we give technical details of the Monte
Carlo simulations carried out to take measurements on some of the larger
lattices. Section 5 contains the results for the critical
exponents $\alpha, \beta, \gamma$ obtained after combining data
from the dimer approach and the Monte Carlo simulations. Finally
we draw our conclusions and outline further directions of our work.
\section{The dimer approach to the Ising model}
We review in this section the dimer formulation of the
two-dimensional Ising model \cite{mont,mcoy-wu}. This approach
presents the advantage of being applicable to lattices with an
irregular coordination. It allows one to write partition functions and
correlation functions in closed compact form, in terms
essentially of determinants of some coordination matrices for
the lattice. This is something that cannot be achieved for our
curved lattices by any other standard resolution method of the
two-dimensional Ising model. There is no obvious way, for
instance, as to how to apply the transfer matrix method to write
down the partition function for a lattice with the topology of the
sphere, not even to produce a numerical computation
of the same. Within the dimer approach one may, in principle,
compute the partition function for any of the hexagonal lattices
inscribed on the tetrahedron. Although we have not been able to
infer from this construction the thermodynamic limit along the
sequence of growing lattices, the method proves very efficient
for calculating observables like the specific heat or the
correlation length with arbitrary precision. One can easily
progress to lattices with more than 1000 points, with the
possibility of applying a finite size analysis to study the
critical behavior of the model.
The dimer formulation of the Ising model makes use first of the
equivalence between the partition function of the model and the
dimer partition function of certain decorated lattice built from
the original one \cite{kast}. One may apply afterward powerful
techniques
developed to perform the sum over dimer configurations. Let us
review the former correspondence for the hexagonal lattice,
while allowing for some kind of frustration which keeps constant
the coordination number over the lattice. Given the collection
of spins $\{\sigma_{i} \}$, with $i$ running over all the
lattice points, the partition function ${\cal Z}$ is defined as
the sum over all possible configurations
\begin{equation}
{\cal Z} = \sum_{\sigma_{i} = \pm 1} \mbox{\large $e^{- J H}$}
\label{1}
\end{equation}
where the Hamiltonian $H$ is given by the sum over all the
lattice links $\langle i,j \rangle$
\begin{equation}
H = - \sum_{<i,j>} \sigma_{i} \sigma_{j}
\end{equation}
The factor $1/kT$ is absorbed for simplicity in the definition of the
nearest--neighbor coupling $J$.
It is well known that (\ref{1}) can be cast as a sum over all
the closed loops over the lattice. Calling this collection $\{
l_{i} \}$ and $\{ n_{i} \}$ the respective numbers of links of
the paths, we have actually
\begin{equation}
{\cal Z} = (\cosh J)^l \sum_{ \{ l_{i} \} } (\tanh J)^{ n_i }
\label{2}
\end{equation}
$l$ being the total number of links of the lattice. One can
draw a correspondence between each closed path and a dimer
configuration in the appropriate decorated lattice. This is
formed in our model by inserting a triangle in place of each
of the points of the original lattice. To each of these one may
assign four different states, depending on whether the point is
traversed or not by a closed path and on what direction, in the
first instance. These states are labeled in Figure 2. In a
similar fashion, there are four different possible
configurations of dimers on each triangle and adjacent links of
the decorated lattice, which bear a one-to-one correspondence
with the above states. These dimer configurations are labeled in
Figure 3. One may easily convince oneself that, establishing as a rule
the equivalence between respective states in Figures 2 and 3,
a unique closed path can be reconstructed starting from a
given dimer configuration in the decorated lattice, and
vice versa. Furthermore, if a weight equal to $ z = \tanh J $
is given to each dimer on a triangle link and equal to 1
for dimers joining
neighboring triangles, it is clear that the dimer partition
function reproduces the sum in the Ising partition function
(\ref{2}).
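The loop expansion can be checked numerically on a graph small enough to enumerate. The sketch below, purely illustrative and not part of the dimer machinery proper, compares a brute-force evaluation of ${\cal Z}$ on a single triangle against the sum over closed loops; the overall factor $2^n$ over spin configurations, often absorbed into the normalization, is written explicitly here.

```python
import itertools
import math

# Triangle: 3 spins, 3 links; H = -sum of sigma_i sigma_j over links.
links = [(0, 1), (1, 2), (2, 0)]
J = 0.4

# Brute-force sum over the 2^3 spin configurations of e^{-J H}.
Z_brute = 0.0
for spins in itertools.product((-1, 1), repeat=3):
    E = sum(spins[i] * spins[j] for i, j in links)
    Z_brute += math.exp(J * E)

# Loop expansion: Z = 2^n (cosh J)^l * sum over closed loops of (tanh J)^{#links}.
# On a triangle the only closed loops are the empty one and the full triangle.
t = math.tanh(J)
Z_loops = 2 ** 3 * math.cosh(J) ** 3 * (1 + t ** 3)
```

Both evaluations agree to machine precision, as they must for any coupling.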
There exist, in turn, powerful techniques developed for the
computation of dimer partition functions, which rely mainly on
the relation between these and the Pfaffians of appropriate
coordination matrices on the decorated lattice. We sketch here
this relation, which has been worked out quite rigorously in
ref. \cite{mcoy-wu}. The first step is to establish an order relation
among the points of the decorated lattice. Once this is done,
one may assign a matrix element $a_{p_{1}p_{2}}$ for points
numbered $p_{1}$ and $p_{2}$ such that $a_{p_{1}p_{2}} = \tanh J$ if
the points belong to the same triangle, $a_{p_{1}p_{2}} = 1$ if
they are nearest neighbors belonging to different triangles, and
$a_{p_{1}p_{2}} = 0$ otherwise. The sum over all dimer
configurations weighted as proposed before amounts to performing
the sum over all permutations $\{p_{1}, p_{2}, \ldots p_{K} \}$
\footnote{$K$ is the total number of points in the decorated lattice.}
\begin{equation}
\sum a_{p_{1}p_{2}} a_{p_{3}p_{4}} \ldots a_{p_{K-1}p_{K}}
\end{equation}
restricted by $p_{1} < p_{3} < \ldots p_{K-1}$ and
$p_{1} < p_{2}, p_{3} < p_{4}, \ldots p_{K-1} < p_{K}$.
While there is no known algorithm to compute efficiently a sum
of the above kind, one may think of allowing for an
antisymmetric matrix $A = \{ a_{ij} \}$, so that the dimer partition
function (a sum of all positive terms) may become proportional
to
\begin{equation}
\sum (-1)^{P} a_{p_{1}p_{2}} a_{p_{3}p_{4}} \ldots a_{p_{K-1}p_{K}}
\label{7}
\end{equation}
with $p_{1} < p_{3} < \ldots p_{K-1}$,
$p_{1} < p_{2}, p_{3} < p_{4}, \ldots p_{K-1} < p_{K}$, as
before, and $(-1)^{P}$ being the signature of the permutation.
The expression (\ref{7}) reproduces the definition of the
Pfaffian of the matrix $A$, which may be subsequently
computed as the square root of its determinant.
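For a $4\times 4$ antisymmetric matrix the Pfaffian reduces to the three-term sum over perfect matchings, ${\rm Pf}(A) = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$, and the identity ${\rm Pf}(A)^2 = \det A$ can be checked directly. The following sketch, with a randomly chosen antisymmetric matrix used purely for illustration, does so.

```python
import random

random.seed(1)

# Random 4x4 antisymmetric matrix.
a = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(i + 1, 4):
        a[i][j] = random.uniform(-1, 1)
        a[j][i] = -a[i][j]

# Pfaffian of a 4x4 antisymmetric matrix: sum over perfect matchings
# with the signature of the corresponding permutation.
pf = a[0][1] * a[2][3] - a[0][2] * a[1][3] + a[0][3] * a[1][2]

def det(m):
    """Determinant by Laplace expansion along the first row (fine for 4x4)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total
```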
The remarkable conclusion which follows from the work of ref.
\cite{kast} is that, for planar lattices, it is always possible
to choose the sign of the nearest-neighbor matrix elements
$a_{p_{1}p_{2}}$ so that all the terms in the sum (\ref{7})
have the same sign. Since the matrix $A$ becomes
antisymmetric, it is customary to fix pictorially the sign of
each $a_{p_{1}p_{2}}$ by giving an orientation to every link of the
decorated lattice ---$a_{p_{1}p_{2}}$ is positive, for instance,
if the arrow goes from $p_{1}$ to $p_{2}$---. We may enunciate the
Kasteleyn theorem by saying that in any planar lattice there is
always a system of arrows such that the dimer partition function
can be computed as the Pfaffian of the corresponding
antisymmetric coordination matrix.
The lattices we consider here fall into the category of planar
lattices since they have the topology of the sphere. As
we are interested in dimers mainly for computational purposes,
we simply give the recipes which have to be followed to form
the appropriate system of arrows on a planar lattice. Once
superposed on the plane, the lattice is made of so-called
elementary polygons, which are the closed cycles that do not
contain points in their interior. On the other hand, a polygon
is said to be clockwise odd if the number of arrows pointing in
the clockwise direction in the polygon is odd. The basic results
which hold for planar lattices are that (a) it is always
possible to choose a system of arrows such that all the
elementary polygons are clockwise odd, and (b) with this choice
and taking $a_{p_{1}p_{2}}$ as positive when the arrow goes
from $p_{1}$ to $p_{2}$,
all the terms in the expansion of the Pfaffian (\ref{7}) have
the same sign.
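The smallest planar example already shows why rule (a) matters. For a single square (one elementary polygon with two dimer coverings), orienting all four edges cyclically gives an even number of clockwise arrows and the Pfaffian terms cancel; flipping one arrow makes the polygon clockwise odd and the Pfaffian counts the coverings correctly. The toy check below is not the decorated tetrahedral lattice, just this minimal illustration.

```python
def pf4(a):
    # Pfaffian of a 4x4 antisymmetric matrix.
    return a[0][1] * a[2][3] - a[0][2] * a[1][3] + a[0][3] * a[1][2]

def antisym(arrows, n=4):
    # Signed coordination matrix from a list of directed edges:
    # arrow i -> j gives a_ij = +1, a_ji = -1.
    a = [[0.0] * n for _ in range(n)]
    for i, j in arrows:
        a[i][j] = 1.0
        a[j][i] = -1.0
    return a

# Square 0-1-2-3. All arrows cyclic: the face has 4 clockwise arrows (even).
bad = antisym([(0, 1), (1, 2), (2, 3), (3, 0)])
# Flip one arrow: 3 clockwise arrows (odd) -- a valid Kasteleyn orientation.
good = antisym([(0, 1), (1, 2), (2, 3), (0, 3)])

n_dimers_bad = pf4(bad)    # matching terms cancel: not the dimer count
n_dimers_good = pf4(good)  # equals 2, the dimer coverings of the square
```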
In our case, a possible system of arrows realizing the property
(a) for a decorated lattice inscribed on the tetrahedron is
shown in Figure 4, where all the arrows for the triangle links
(not drawn) are supposed to point in the clockwise direction.
The advantage of this choice of arrows is
that it keeps a regular pattern in the bulk, while only a few
arrows on the boundary links have to be flipped to make all
elementary polygons clockwise odd. In general, progressing to
the next member of our family of lattices on the tetrahedron
just amounts to adding a column of (decorated) hexagons at each side of
Figure 4, expanding appropriately the vertical dimension.
The system of arrows proposed may be extended in a
straightforward way to larger lattices.
According to the above discussion, we set always the absolute value of
matrix elements $a_{p_{1}p_{2}}$ equal to $\tanh J$ for
points in the same triangle, and equal to 1 for
nearest neighbors on different triangles. The partition function
for any honeycomb lattice on the tetrahedron can be represented,
therefore, in terms of the respective matrix $A$ by
\begin{equation}
{\cal Z} = (\cosh J)^{l} (\det A)^{1/2}
\label{12}
\end{equation}
We have made use of the representation (\ref{12}) to perform the
numerical computation of the maximum of the specific heat (in
the ferromagnetic regime) for lattices up to $\Delta_{1452}$. We
have been able to measure that quantity with a relative error of
less than $10^{-7}$ in most of the cases. Correspondingly, a
precise determination of the coupling constant at which the maximum is
attained in each lattice has been also possible (see Table 1).
The values of
these pseudocritical coupling constants are fundamental
ingredients for the finite size analysis to be accomplished in
the next section. We have also computed the values of the
specific heat of the curved lattices at the critical coupling
constant of the planar honeycomb lattice (see Table 2). These are also
relevant under the hypothesis of finite size scaling since, as
we will see, the sequence of pseudocritical temperatures
converges in the thermodynamic limit to the critical temperature
of the planar hexagonal lattice.
We conclude this section with an outline of how the two-point
correlation functions can be obtained within the dimer
approach \cite{mont,mcoy-wu}.
Given two arbitrary spins $\sigma_{p}$ and $\sigma_{q}$ in the
lattice, the average
\begin{equation}
\left\langle \sigma_p \sigma_q \right\rangle =
\frac{1}{\cal Z}
\sum_{\sigma_i = \pm 1} \sigma_p \sigma_q e^{- J H}
\end{equation}
may be computed with the following trick. One chooses a path
${\cal C}$ from $\sigma_{p}$ to $\sigma_q$ on the lattice,
which will comprise a number of consecutive spins $\{
\sigma_{p_{1}}, \sigma_{p_{2}}, \ldots \sigma_{p_m} \}$. The
two-point function may also be expressed as
\begin{equation}
\left\langle \sigma_p \sigma_q \right\rangle =
{1 \over {\cal Z} }
\sum_{\sigma_i = \pm 1} \sigma_p \sigma_{p_1}
\sigma_{p_1} \ldots \sigma_{p_{m-1}} \sigma_{p_m}
\sigma_q e^{- J H}
\end{equation}
Now we have that $\sigma_p \sigma_{p_1}$, $\sigma_{p_1}
\sigma_{p_2}$, \ldots $\sigma_{p_m} \sigma_q$ are pairs of
nearest-neighbor spins. Therefore, we find
\begin{equation}
\left\langle \sigma_p \sigma_q \right\rangle =
{1 \over {\cal Z} }
\sum_{\sigma_{i} = \pm 1}
\prod_{<i,j> \not\in {\cal C}} (\cosh J + \sigma_i \sigma_j
\sinh J)
\prod_{<k,l> \in {\cal C}} (\sinh J + \sigma_k
\sigma_l \cosh J)
\label{17}
\end{equation}
where the first product extends to all the links which do not
belong to ${\cal C}$, and the second product runs over the links
that do belong to the path.
{}From (\ref{17}) we arrive finally at the expression
\begin{equation}
\left\langle \sigma_p \sigma_q \right\rangle =
{1 \over {\cal Z} }
(\cosh J)^{l} (\tanh J)^{m+1}
\sum_{\{ l_i \} } (\tanh J)^{n_i - r_i} \frac{1}{(\tanh J)^{r_i}}
\label{18}
\end{equation}
where the sum, as in expression (\ref{2}), is over all the
closed loops on the lattice, but with the difference now that
the number $r_{i}$ of links in each loop belonging to ${\cal C}$
have to be weighted with $(\tanh J)^{-1}$ rather than with
$\tanh J$. It becomes obvious that all the machinery
of the dimer formulation can be applied again to transform
the sum in (\ref{18}) into a suitable dimer partition function
on the decorated lattice, so that
\begin{equation}
\left\langle \sigma_p \sigma_q \right\rangle =
{1 \over {\cal Z} }
(\cosh J)^{l} (\tanh J)^{m+1} (\det A^\prime)^{1/2}
\label{19}
\end{equation}
The appropriate coordination matrix $A^\prime = \{ a_{ij}^\prime \}$ has
to keep track of the different weight that the $m+1$ links in ${\cal C}$
carry in the sum over closed loops.
\section{Finite size scaling and critical exponents}
\subsection{Finite size scaling}
It is well known that singularities in the free energy (i.e. phase
transitions) can only occur in the thermodynamic limit. For finite
volumes the free energy is an analytic function of the temperature and
any other parameter in the Hamiltonian. The thermodynamic singularities
are thus smoothed out around the transition point. A trace of the
existence of such non-analyticities is the presence of some peaks in the
specific heat $CV$ or magnetic susceptibility $\chi$ curves.
The dependence on the linear size of the system $L$ of the location of
the maxima of those peaks and their height permits the description of
the thermodynamic limit from finite--size data \cite{barber}.
In second order phase transitions this round--off is due to the fact
that the correlation length $\xi$ is limited by the size of the system.
This fact defines a pseudocritical coupling $J^\star(L)$ such that
\begin{equation}
\label{def_j_star}
\xi(J^\star(L)) \sim L
\end{equation}
At this point the surface contribution to the free energy is not
negligible compared to the bulk one. In the vicinity of
a second order transition point $J_c$ the correlation length diverges
with a power--law given by
\begin{equation}
\label{def_nu}
\xi(J) \sim (J - J_c)^{-\nu}
\end{equation}
{}From \reff{def_j_star} and \reff{def_nu} one can derive
the dependence of the pseudocritical coupling on the lattice size
\begin{equation}
\label{scaling_j_star}
| J^\star(L) - J_c | \sim L^{-1/\nu}
\end{equation}
Unfortunately, in practical situations it is a very involved task
to compute the quantity $J^\star(L)$. It is easier to look
at the position and height of the peaks mentioned above.
If some quantity $P$ diverges near the critical point as
\footnote{Hereafter we will denote quantities computed in a finite
volume with a subscript $L$ meaning the linear size of the system
(i.e. $P_L$). Whenever no subscript is present, the thermodynamic
limit is assumed (i.e. $P = \lim_{L\rightarrow \infty} P_L$).}
\begin{equation}
\label{def_rho}
P(J) \sim | J - J_c|^{-\rho} ; \qquad \rho > 0
\end{equation}
then it can be shown \cite{barber} that for a finite volume it attains
a maximum value $P_{max}(J_L)$ at a point $J_L(P_{max})$ given by
\begin{subeqnarray}
\slabel{scaling_j_p}
|J_L(P_{max}) - J_c| & \sim & L^{-\theta_P} \\
\slabel{scaling_p_max}
P_{max}(J_L) & \sim & L^{\rho/\nu}
\end{subeqnarray}
when $L$ is large enough. In most systems it is found that
\begin{equation}
\label{igualdad}
\theta_P = {1 \over \nu}
\end{equation}
but this is not a general result. There are some examples where this
property does not hold: the spherical model, the ideal Bose gas
\cite{modelos_raros} and the one--dimensional $q = \infty$ clock model
\cite{clock}. In the present paper we will face another situation in which
the relation (\ref{igualdad}) is violated.
On the other hand, if $\theta_P \geq 1/\nu$ then the behavior of
this quantity at finite volume evaluated at the critical point
$P_L(J_c)$ is the same as in \reff{scaling_p_max}
\begin{equation}
\label{scaling_p_c}
P_L(J_c) \sim L^{\rho/\nu}
\end{equation}
Using \reff{scaling_j_p} and \reff{scaling_p_max} the critical
coupling $J_c$ and the critical exponents ratio $\rho/\nu$ can be
derived from finite--size data. When $J_c$ is explicitly known and
$\theta_P \geq 1/\nu$ then \reff{scaling_p_c} can be used
instead of \reff{scaling_p_max}. In this paper we are mainly
concerned with the analysis of the susceptibility and the specific
heat. So, we will obtain estimates of $J_c$ and the
ratios $\gamma/\nu$ and $\alpha/\nu$.
The rest of the critical
exponents may be derived using the scaling relations \cite{itzykson}:
\begin{subeqnarray}
\slabel{beta}
{\beta \over \nu} &=& 1 - {\gamma \over 2\nu} \\
\slabel{one_nu}
{1 \over \nu} &=& 1 + {\alpha \over 2\nu} \\
\slabel{eta}
\eta &=& 2 - {\gamma \over \nu} \\
\slabel{delta}
\delta &=& {4 \over \eta} - 1
\end{subeqnarray}
However, in this paper we will check numerically scaling relations
\reff{beta} and \reff{one_nu}.
We will obtain independent estimates of $\nu$
and $\beta/\nu$ respectively in terms of the analysis of the
correlation
length (see below) and the magnetization at the critical point (see
Section~5).
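For reference, inserting the exactly known two-dimensional Ising values $\alpha = 0$, $\gamma = 7/4$, $\nu = 1$ into the scaling relations \reff{beta}--\reff{delta} reproduces the familiar set of exponents. The short check below, in exact rational arithmetic, is merely illustrative.

```python
from fractions import Fraction as F

# Exact 2D Ising exponents (assumed inputs for this check).
alpha, gamma, nu = F(0), F(7, 4), F(1)

beta_over_nu = 1 - gamma / (2 * nu)   # beta/nu = 1 - gamma/(2 nu)
one_over_nu = 1 + alpha / (2 * nu)    # 1/nu = 1 + alpha/(2 nu)
eta = 2 - gamma / nu                  # eta = 2 - gamma/nu
delta = 4 / eta - 1                   # delta = 4/eta - 1
```

The relations close consistently: $\beta/\nu = 1/8$, $1/\nu = 1$, $\eta = 1/4$ and $\delta = 15$.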
\subsection{Critical point and exponent $\theta_{CV}$}
In Section 2 we showed that for lattices up to $\Delta_{1452}$
we were able to obtain very accurate estimates of the internal
energy $E$ and the specific heat. These quantities are defined
hereafter as follows
\begin{eqnarray}
\label{energia}
E_N &=& {2 \over 3V} \left\langle H \right\rangle \\
\label{calor_e}
CV_N &=& {3V \over 2} \sigma^2 \left( {2 \over 3V } H \right)
\end{eqnarray}
where the factor $3V/2$ is equal to the number of links in a lattice of
$V$ sites and $\sigma(\cdot)$ is the standard deviation.
Thus, the values of $CV_{max}$ and $J_L(CV_{max})$ can be computed with
high precision. In what
follows we will identify the lattice linear size $L$ with the index
$N$ characterizing the fullerene lattice (see Section 1). This choice is
consistent as the volume increases as $V = 12\ N^2$.
Data will be fitted according to Equation~\reff{scaling_j_p}
\begin{equation}
\label{fit_j_c}
J_N(CV_{max}) = J_c + A N^{-\theta_{CV}}
\end{equation}
using a least $\chi^2$ method. Here the input errors are given by the
precision of the computer in calculating $J_N(CV_{max})$. In order to
obtain a more reliable
result, we will sequentially remove the point with smallest $N$.
One can eventually observe
a monotonic trend toward some value, which will be identified with the
thermodynamic limit.
Our best result is
\begin{equation}
J_c = 0.65850 \pm 0.00002
\end{equation}
for $5 \leq N \leq 11$. In this case $\chi^2 = 0.07$ with 2 degrees of
freedom.
Throughout this paper all the errors associated with our final
results will be
equal to 2 standard deviations ---i.e.~95\% of confidence level---.
The latter result is compatible with the critical point of the
Ising model on a toroidal honeycomb lattice
\begin{equation}
\label{j_c_toro}
J_c^{\rm torus} = {1 \over 2} \log (2 + \sqrt{3}) = 0.65848
\end{equation}
This result can be easily derived using duality \cite{itzykson,baxter}.
Thus, our data strongly support that both critical points coincide.
A good estimate of $\theta_{CV}$ is obtained by repeating the fit with
$J_c = J_c^{\rm torus}$. The result is
\begin{equation}
\theta_{CV} = 1.745 \pm 0.015
\end{equation}
for $6 \leq N \leq 11$ and with $\chi^2 = 1.2$.
This is in clear disagreement with the result expected for a
lattice on a torus ($\theta_{CV} = 1$ \cite{fisher}). This fact makes
necessary a direct determination of the $\nu$ critical exponent, in
order to see if the above measurement bears any relation to it (see
below).
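With $J_c$ held fixed, the fit \reff{fit_j_c} linearizes, since $\log|J_N(CV_{max}) - J_c| = \log A - \theta_{CV}\log N$. The sketch below runs this linearized least-squares fit on synthetic data (the values of $A$ and $\theta_{CV}$ are hypothetical stand-ins for the dimer results), recovering the input exponent.

```python
import math

# Synthetic pseudocritical couplings J_N = J_c + A * N^{-theta}
# (illustrative parameters; the real data come from the dimer computation).
J_c, A, theta_true = 0.65848, 0.05, 1.745
Ns = list(range(6, 12))
J_N = [J_c + A * N ** (-theta_true) for N in Ns]

# Linear least squares on log|J_N - J_c| versus log N:
# the slope is -theta_CV, the intercept log A.
xs = [math.log(N) for N in Ns]
ys = [math.log(abs(j - J_c)) for j in J_N]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
         sum((x - xbar) ** 2 for x in xs))
theta_fit = -slope
```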
We will skip here the analysis of $CV_{max}$, as it is rather subtle
to distinguish between logarithmic and power--law behavior,
especially when the power is rather small.
This will be carried out in Section 5.
\subsection{Correlation length and exponent $\nu$}
An independent way to compute the critical exponent $\nu$ is to
study the correlation length near the critical point. This quantity is
defined in terms of the connected two--point function
\begin{equation}
\label{correlation_length}
\left\langle \sigma_{\bf 0} \cdot \sigma_{\bf r} \right\rangle^c =
\left\langle \sigma_{\bf 0} \cdot \sigma_{\bf r} \right\rangle -
\left\langle \sigma_{\bf 0} \right\rangle
\left\langle \sigma_{\bf r} \right\rangle
\sim e^{-r/\xi}
\end{equation}
when $r$ is large enough. The connected two--point correlators are
equal to the usual ones $\left\langle \sigma_{\bf 0} \cdot
\sigma_{\bf r} \right\rangle$ for $J < J_c$ (unbroken phase).
This feature allows
their exact computation using the machinery developed in Section 2.
(For finite lattices odd quantities such as the magnetization are
always equal to zero, even in the broken phase).
A major problem is how to recover $\xi_N(J)$ from the
finite--volume correlators $\left\langle \sigma_0 \cdot \sigma_{\bf r}
\right\rangle_N$. For a torus of linear size $L$ these functions are
expected to behave as $\sim \cosh((x-L/2)/\xi_L)$
when $x \gg 1$. But this question is not clear for
the truncated tetrahedron. On the other hand, it is well known
\cite{mcoy-wu} that the correlation length does depend on the
direction along which the spins $\sigma_{\bf r}$ are disposed. However, the
same critical behavior is expected for all the possible directions.
This study has been carried out on the lattice $\Delta_{972}$, which
is the largest one allowed by our computer facilities. We believe that
this one is large enough to see the thermodynamic limit.
We have chosen couples of spins
along the diagonal in the representation on Figure 4 of the
tetrahedron unfolded on the plane.
Our choice for
$\sigma_{\bf r}$ allowed us to introduce an increasing distance
between spins running from 3,6,$\cdots$ up to 27 (in units of the
lattice spacing). At
$r = 27$ both spins are located at antipodal points. If we go on along
the diagonal we finally arrive at $\sigma_0$. For that reason we
expect that the correlators behave for large $r$ as a symmetrized
version of equation \reff{correlation_length}.
In order to improve our results,
we include in our ansatz the correct leading term for
the square lattice on the torus \cite{mcoy-wu}
\begin{equation}
\left\langle \sigma_{\bf 0} \sigma_{\bf r} \right\rangle =
\frac{f}{\sqrt{r}} e^{-r/\xi} (1 + {\cal O}(1/r) )
\label{ansatz_correlation}
\end{equation}
suitably symmetrized around $r = 27$.
We have analyzed the cases $J$ = 0.58, 0.59, 0.60, 0.61 and 0.62
(see Table~3 and Figure~5). We have obtained extremely good fits
for all these cases,
giving differences of order $\sim 10^{-5}$. Although the lattice
$\Delta_{972}$, as any of the lattices inscribed on the
tetrahedron, is not homogeneous, it is remarkable that the
values of the two-point functions at each different $J$ fit,
to a high degree of precision, to the correct leading
behavior for the Ising model on a square lattice on the torus.
The deviation that we have found from the dependence
(\ref{ansatz_correlation}) appears to
be even smaller than for similar measurements carried out for
the lattice on a torus.
The estimated values of the correlation
length are shown in Table~3 and will be used
in the computation of the $\nu$ critical exponent.
We unsuccessfully tried to fit data for $J >
0.62$ to \reff{ansatz_correlation}. The reason is that very close to
the critical point we have to take into account in Equation
\reff{ansatz_correlation} the ${\cal O}(1/r)$ (or even higher) terms.
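Since the amplitude $f$ enters the ansatz linearly, the symmetrized fit can be done with a one-parameter scan over $\xi$, solving for $f$ by least squares at each trial value. The sketch below runs this procedure on synthetic correlators generated from the ansatz itself, with hypothetical parameter values, so the fit recovers the input by construction.

```python
import math

HALF = 27            # antipodal distance along the diagonal on Delta_972
xi_true, f_true = 5.0, 0.8   # hypothetical parameters for illustration

def model(r, f, xi):
    # Leading behavior f/sqrt(r) e^{-r/xi}, symmetrized around r = 27.
    s = 2 * HALF - r
    return f * (math.exp(-r / xi) / math.sqrt(r) +
                math.exp(-s / xi) / math.sqrt(s))

rs = list(range(3, 28, 3))   # distances 3, 6, ..., 27 as in the text
data = [model(r, f_true, xi_true) for r in rs]

# Scan xi on a grid; f is obtained by linear least squares at each xi.
best_params, best_resid = None, float("inf")
for k in range(1, 2001):
    xi = k * 0.01
    basis = [model(r, 1.0, xi) for r in rs]
    f = sum(b * d for b, d in zip(basis, data)) / sum(b * b for b in basis)
    resid = sum((f * b - d) ** 2 for b, d in zip(basis, data))
    if resid < best_resid:
        best_params, best_resid = (xi, f), resid
xi_fit, f_fit = best_params
```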
Given the values of $\xi^{-1}_{N=9}(J)$ in Table~3, we tried to fit
them according to \reff{def_nu}
\begin{equation}
\xi^{-1}_{N=9}(J) = A | J - J_c |^\nu
\end{equation}
We obtain a value equal to $\nu = 1.01 \pm 0.04$ with
$\chi^2 = 0.36$. However, if we drop the point with
$J=0.62$ (the closest to $J_c$) the result is
\begin{equation}
\nu = 1.00 \pm 0.06
\end{equation}
with a remarkably small value of $\chi^2 \sim 7 \cdot 10^{-5}$.
The conclusions of the analysis of the data coming from the dimer
computations can be summarized in the following points: (a) the critical
point is compatible with $J_c^{\rm torus}$; (b) the critical exponent
is $\nu = 1.00 \pm 0.06$; (c) the finite--size exponent
$\theta_{CV} = 1.745 \pm 0.015$ is significantly different from $1/\nu$.
In summary, our results suggest that the critical properties of the
Ising
model on the truncated tetrahedron are the same as on the torus.
However, we find a very clear difference in the finite--size behavior
of these models, since the scaling of the pseudocritical coupling
constants (determined from the maxima of the specific heat)
does not match the scaling behavior of the correlation
length. On the tetrahedron the thermodynamic limit is
reached much faster than on the
torus, at least as far as the specific heat is concerned.
\section{Technical aspects of the Monte Carlo simulations}
We have performed several Monte Carlo (MC) runs for different lattice
volumes $V = 12 N^2$ and coupling constants $J$. The relevant
information about the simulations can be found in Table~4.
We have used a Metropolis algorithm with the R250
pseudorandom--number generator \cite{random} ---initialized with the
RANMAR subroutine---. The period of this generator is
$2^{250}-1$.
Recently, it has been claimed \cite{random} that the combination of the
Metropolis algorithm with the R250 generator gives better results than
other more involved procedures. We have compared those values obtained
both by the dimer approach and by direct MC simulation. They were
consistent within statistical errors.
In all cases we have measured the internal energy density and the
magnetization defined as
\begin{equation}
\label{magnetizacion}
M_N = \left\langle \left| {1 \over V } \sum_i \sigma_i
\right| \right\rangle
\end{equation}
We have also measured the specific heat and the magnetic
pure--phase susceptibility,
\begin{equation}
\label{suscep}
\chi_N = V \sigma^2 \left( \left| {1 \over V } \sum_i \sigma_i
\right| \right)
\end{equation}
In all cases we discarded the first $10^5$ MC steps for thermalization.
Then we measured each observable typically once every 100
MC steps. In this way we obtained statistically independent data, as
can be checked by computing the corresponding autocorrelation times
\cite{autocor}.
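The measurement loop can be sketched in a few lines. For brevity the sketch below runs a plain Metropolis algorithm on a small periodic square lattice with Python's stock generator, standing in for the tetrahedral honeycomb and the R250 generator used in the actual runs; the coupling is chosen deep in the ordered phase so the ordered start stays magnetized.

```python
import math
import random

random.seed(7)
L = 8          # small periodic square lattice as a stand-in
J = 1.0        # deep in the ordered phase
spins = [[1] * L for _ in range(L)]  # ordered start

def local_field(x, y):
    # Sum of the four neighbors (periodic boundary conditions).
    return (spins[(x + 1) % L][y] + spins[(x - 1) % L][y] +
            spins[x][(y + 1) % L] + spins[x][(y - 1) % L])

for sweep in range(200):
    for x in range(L):
        for y in range(L):
            # Change of sum_<ij> s_i s_j if spin (x, y) is flipped.
            d = -2 * spins[x][y] * local_field(x, y)
            # Metropolis acceptance for the weight exp(J * sum s_i s_j).
            if d >= 0 or random.random() < math.exp(J * d):
                spins[x][y] *= -1

V = L * L
# Magnetization per site, as defined above (absolute value of the mean spin).
M = abs(sum(sum(row) for row in spins)) / V
```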
In order to calculate the maximum value of the specific heat and the
susceptibility we have used the Spectral Density Method
\cite{sweferr}. At a given coupling $J$ we can obtain the histograms
$N(E,M;J)$, which keep track of the number of configurations with
magnetization $M$ and internal energy density $E$. This information
is enough to compute the expectation value of any function
$f(M,E)$ at any other coupling $J^\prime$. In our case the magnetic
field is zero and in the equations
(\ref{energia},\ref{calor_e},\ref{magnetizacion},\ref{suscep})
the observables do not depend on
$E$ and $M$ simultaneously. For those reasons we could use the following
formulae
\begin{eqnarray}
\langle f(E) \rangle (J^\prime) &=& \frac{\sum_{E} f(E) N_{0}(E;J)
\exp\{(J^\prime - J)E \} }{ \sum_{E} N_{0}(E;J)
\exp\{(J^\prime - J)E \} } \\
\langle f(M) \rangle ( J^\prime ) &=& \frac{\sum_{E} N_{f}(E;J)
\exp\{(J^\prime - J)E \} }{\sum_{E} N_{0}(E;J)
\exp\{(J^\prime - J)E \}}
\end{eqnarray}
where the one--dimensional histograms are defined as follows
\begin{eqnarray}
N_{0} (E;J) &=& \sum_{M} N(E,M;J) \\
N_{f} (E;J) &=& \sum_{M} f(M) N(E,M;J)
\end{eqnarray}
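As an illustration, the reweighting formulae above can be sketched in a few lines of code. This is a minimal sketch of our own (not the original program), working directly with per-configuration energies rather than binned histograms:

```python
import numpy as np

def reweight(E, f_vals, J, J_prime):
    # Single-histogram (spectral density) reweighting:
    #   <f>(J') = sum f(E) exp{(J'-J)E} / sum exp{(J'-J)E},
    # with sums over configurations sampled at coupling J.
    E = np.asarray(E, dtype=float)
    f_vals = np.asarray(f_vals, dtype=float)
    w = np.exp((J_prime - J) * (E - E.max()))  # shift exponent for stability
    return np.sum(f_vals * w) / np.sum(w)
```

As discussed below, the result is only reliable for $J^\prime$ close to the coupling $J$ at which the simulation was performed.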
The Spectral Density Method gives the correct answer for couplings
$J^\prime$ close to the coupling $J$ where the simulation was performed.
A criterion for the applicability of such a method is the following
\cite{sweferr}
\begin{equation}
\label{criterio}
|J^\prime - J| \sim {1 \over \sigma(E) V}
\end{equation}
In most cases a single simulation at $J=J_c$ is enough to determine
the maximum of $CV$ and $\chi$. However, for the smaller lattices an
additional run had to be performed in order to obtain a reliable
estimate of such quantities.
We have divided the entire sample into typically 30--120 subsamples,
each containing $\sim 1000$ measurements. For each subsample we
computed
every quantity (including $CV_{max}$, $J_N(CV_{max})$, $\ldots$). With
these estimates we calculated the statistical errors using the
jack--knife method \cite{okawa}. In this way, the effect of
correlation among data was taken into account.
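As a sketch, the jack--knife error of a quantity estimated on $n$ subsamples can be computed as follows (illustrative code of our own, not the original analysis program):

```python
import numpy as np

def jackknife_error(estimates):
    # Jack-knife error bar from per-subsample estimates of an observable,
    # e.g. CV_max computed independently on each of the 30-120 subsamples.
    x = np.asarray(estimates, dtype=float)
    n = len(x)
    loo = (x.sum() - x) / (n - 1)          # leave-one-out estimates
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
```

For the sample mean this reduces to the usual standard error, but applied to derived quantities such as $CV_{max}$ it also accounts for correlations among data within each subsample.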
We have done all the MC simulations on a VAX 9000 machine with a
vector processor. The program was not fully vectorizable, as the
lattice could not be split into two disjoint sublattices such that
every element of one sublattice is surrounded by elements belonging
to the other one\footnote{This feature has to do with the
onset of frustration in the antiferromagnetic regime.}.
However, we could divide the whole lattice into
three subsets. The elements of the first two are arranged on two
disjoint triangular sublattices, so their update could be
fully vectorized.
On the other hand, the rest of the spins lie on two lines
joining pairs of vertices of the tetrahedron,
and their number depends explicitly on the planar representation of the
lattice. For these spins the update is clearly not vectorizable. However,
their
effect on the CPU time is not very important for the larger lattices,
since their number grows only as $\sim V^{1/2}$.
\section{Results of the Monte Carlo simulations}
\subsection{Position of the critical point}
Here we will repeat the analysis of Section 3, but with all the data of
Table 1. For data coming from the dimer analysis the input error will be
taken
as the precision of the subroutines used. For those coming from the MC
simulations the error is given by the jack--knife method described in
the preceding section. Data for
$N > 11$ possess large error bars compared with
the rest. For that reason the fits presented in Section 3 cannot have
large variations. Our final results for the specific heat are
\begin{equation}
J_c = 0.65850 \pm 0.00002
\end{equation}
for $5 \leq N \leq 21$ and with $\chi^2 = 0.67$. And
\begin{equation}
\theta_{CV} = 1.745 \pm 0.015
\end{equation}
for $6 \leq N \leq 21$ and with $\chi^2 = 1.9$ (See Figure~7).
If we repeat the same procedure with the susceptibility we
obtain an estimate for $J_c$ compatible with the latter one,
but with a larger error bar. If we fix this quantity to
$J_c^{\rm torus}$ we arrive at the following estimate for
the exponent $\theta_\chi$
\begin{equation}
\theta_\chi = 1.01 \pm 0.02
\end{equation}
for $9 \leq N \leq 21$ and $\chi^2 = 0.8$ (See Figure~7).
We observe that $\theta_\chi$ is close to the value $1/\nu = 1$, in
agreement with the Ising model on the torus. We conclude that
\begin{equation}
{1 \over \nu} = \theta_\chi < \theta_{CV}
\end{equation}
and this feature implies that one can obtain the critical exponents
using either eq.~\reff{scaling_p_max} or eq.~\reff{scaling_p_c}. As we
have
identified the critical coupling of our system, it seems more natural
to
base our conclusions on eq.~\reff{scaling_p_c}. In any case, the values
obtained from eq.~\reff{scaling_p_max} are always consistent with those
presented in this paper within statistical errors.
\subsection{Exponent ratios $\gamma/\nu$ and $\beta/\nu$}
The value of $\gamma/\nu$ can be derived using the value of the
critical magnetic susceptibility $\chi_N(J_c)$. We have fitted our data
to
\begin{equation}
\chi_N(J_c) = A N^{\gamma/\nu}
\end{equation}
and our best result is
\begin{equation}
{\gamma \over \nu} = 1.73 \pm 0.02
\end{equation}
for values of $N$ ranging from 9 to 21 and with $\chi^2 = 0.79$.
This result is very close to the usual Ising model one,
$\gamma/\nu = 7/4 = 1.75$. If we fix $\gamma/\nu$ to this value we
obtain a $\chi^2$ value of $\sim 2.8$ which shows that the fit is
reasonably good (See Figure~8).
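A fit of this form can be performed as a linear regression in log--log space. The following is a minimal sketch of our own (an unweighted fit; a proper analysis should weight the points by their errors):

```python
import numpy as np

def power_law_fit(N, chi):
    # Fit chi_N(J_c) = A * N**p by least squares on log(chi) vs log(N);
    # the slope p estimates the exponent ratio gamma/nu.
    p, log_A = np.polyfit(np.log(N), np.log(chi), 1)
    return np.exp(log_A), p
```

The same regression applied to $M_N(J_c)$ yields $-\beta/\nu$ as the slope.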
To determine the value of $\beta/\nu$ in an independent way (i.e.,
without using the scaling relation \reff{beta}) we focus on the
magnetization at the critical point. In the thermodynamic limit
the magnetization near $J_c$ behaves as
\begin{equation}
\label{def_beta}
M(J) \sim |J - J_c|^\beta
\end{equation}
Using arguments similar to those in Section~3 it can be predicted that
\begin{equation}
\label{scaling_mc}
M_N(J_c) \sim N^{- \beta/\nu}
\end{equation}
The result of performing such a fit is the following
\begin{equation}
{\beta \over \nu} = 0.126 \pm 0.004
\end{equation}
using data with $7 \leq N \leq 21$ and with $\chi^2 = 0.33$ (See
Figure~9). This
value is also very close to the Ising one $\beta/\nu = 1/8 = 0.125$.
This result supports the validity of the scaling relation \reff{beta}
in this model.
This feature can be used in order to obtain a more accurate estimate of
those exponents. We can try to fit $\chi_N(J_c)$ and $M_N(J_c)$
simultaneously using explicitly the relation \reff{beta}. The result is
\begin{equation}
\begin{array}{l}
\gamma/ \nu = 1.748 \pm 0.008 \\
\beta / \nu = 0.126 \pm 0.004
\end{array}
\end{equation}
Thus, our results strongly suggest that the ratio $\gamma/\nu = 7/4$ as
in the Ising model on a torus. Notice that the error in that ratio is
less than 0.6\%.
Using the relations \reff{eta} and
\reff{delta} we can derive the value of the exponents $\eta$ and
$\delta$.
\begin{subeqnarray}
\eta &=& 2 - {\gamma \over \nu} = 0.252 \pm 0.008 \\
\delta &=& {4 \over \eta} - 1 = 14.8 \pm 0.6
\end{subeqnarray}
\subsection{Exponent ratio $\alpha/\nu$}
We can repeat the same procedure for the specific heat to obtain the
exponent ratio $\alpha/\nu$. If we try to fit the data to the function
\begin{equation}
\label{fit_cv}
CV_N(J_c) = A + B N^{\alpha/\nu}
\end{equation}
we do not obtain a satisfactory result. The best one gives a ratio
$\alpha/\nu \sim 0.060$ with $\chi^2 \sim 9$ and $9 \leq N \leq 21$
(See Figure~8).
On the other hand, motivated in part by the preceding results, we
can try to fit the data to a logarithmic function.
\begin{equation}
CV_N(J_c) = A + B \log N
\end{equation}
In this case, the
fit is successful giving a $\chi^2 = 1.9$ with $7 \leq N \leq 21$ (See
Figure~10). This immediately implies that
\begin{equation}
\alpha = 0
\end{equation}
and using equation \reff{one_nu}
\begin{equation}
\nu = 1
\end{equation}
which is in agreement with the result from direct measurements of the
correlation length displayed in Section 3.
Thus, both exponents take the same values as in the
Ising model on the torus.
On the other hand, we have also verified that the scaling
relation \reff{one_nu} does hold in this model.
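The logarithmic fit used above is again a simple linear regression, now in $\log N$; an illustrative sketch of our own:

```python
import numpy as np

def log_fit(N, CV):
    # Fit CV_N(J_c) = A + B * log(N); a good fit of this form signals a
    # logarithmic divergence of the specific heat, i.e. alpha = 0.
    B, A = np.polyfit(np.log(N), CV, 1)
    return A, B
```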
\section{Conclusions and outlook}
In this paper we have presented the first in-depth study of the
critical properties of an Ising model on a lattice with the
topology of the sphere. In particular we have chosen the family
of honeycomb lattices that can be constructed on the tetrahedron.
Our results can be summarized as follows
\begin{itemize}
\item
The dimer approach is very useful and competitive for lattices up to
$\sim 10^3$ points. It provides very accurate data for the internal
energy, specific heat and two--point correlators.
\item
The critical properties of the Ising model on the tetrahedron are just
the same as on the torus. In particular we have checked that $J_c$ is
the same, as well as the critical exponents $\nu$, $\alpha$, $\gamma$
and $\beta$.
\item
We also have checked two scaling relations \reff{beta} and
\reff{one_nu} among these exponents. Using the two other
equations we obtained the last two critical exponents $\delta$
and $\eta$. They are in agreement with those corresponding to
the Ising model on a torus.
\item
But the finite--size scaling properties of those two systems are not
the same. In our case the position of the maxima of
the specific heat scales near
$J_c$ with a critical exponent that does not bear any relation to the
critical behavior of the correlation length. That is, the
thermodynamic limit is achieved
faster on the tetrahedron. However, the behavior of the susceptibility
is just the same in both systems.
\end{itemize}
Our results suggest that the same analysis carried out in other types of
lattices with the same topology will yield the same conclusions: the
critical behavior does not change, although variations in their
finite--size properties are expected. All of them belong to
the same universality class of the Ising model on a torus.
On the other hand, the Ising model on the tetrahedron can be
viewed
as the Ising model with some non--standard boundary conditions. These
have the advantage that the critical behavior is reached earlier
than with periodic boundary conditions.
Ferdinand and Fisher \cite{fisher,barber} studied how the exponent
$\theta_{CV}$ varies for the Ising model defined on a square lattice on
a $m \times n$ torus. They concluded that $J_l(CV_{max})$ behaves as
\begin{equation}
\label{fisher_equation}
{J_l(CV_{max}) \over J_c^{\rm torus} } = 1 + {b(\eta) \over l} +
{\rm o}(l^{-1})
\end{equation}
where $l = (m^{-2} + n^{-2})^{-1}$ measures the linear size of the
torus and $\eta = m/n$ its shape. They also showed that $b(\eta)$ is not
a monotonic function of $\eta$. In particular $b(\eta) > 0$ in the
range $\eta \in (\eta_0^{-1},\eta_0)$ with $\eta_0 = 3.139278$, and
$b(\eta) < 0$ in $\eta \in (0,\eta_0^{-1}) \cup (\eta_0,\infty)$.
Exactly at $\eta = \eta_0$ and $\eta_0^{-1}$ the function $b(\eta)$
vanishes\footnote{The same occurs for $\eta = 0$, $\infty$
\cite{fisher}}, so at these points the leading term in
\reff{fisher_equation} vanishes and its behavior is controlled by
the subleading term. In this model it can be written as
\begin{equation}
\left.
{J_l(CV_{max}) \over J_c^{\rm torus} } \right|_{\eta = \eta_0}
= 1 - {c \log l \over l^2} + \cdots
\end{equation}
The behavior of $b(\eta)$ as a function of the shape of the torus is
explained as a highly non--trivial interplay between the different
terms which appear in the expression of the partition function. We
believe that the same feature is present in the Ising model on the
truncated tetrahedron. In this case, the shape of the lattice is fixed,
but the chosen boundary conditions are the basic ingredient which makes
the leading term in \reff{fisher_equation} vanish.
We should mention that our conclusions may not apply to the
antiferromagnetic regime. The reason is that for such boundary
conditions, the lattice is not bipartite. Thus, the phenomenon of
frustration may occur in that regime. This feature is absent in the
Ising model on the torus. In this case, the lattice is bipartite and
for sufficiently low temperatures we find a N\'eel ground state. This
question on the tetrahedron is currently under investigation.
Finally, we would like to say a few words about the continuum theory
which is attained in the thermodynamic limit. The evidence we have
found of scaling suggests a description in terms of the fields and
weights of a conformal field theory. This cannot be a trivial example
of field theory on the sphere (or on the plane), since four curvature
singularities arise which cannot be removed by conformal transformations.
Under the assumption of conformal invariance, though, we could still
stick three of the vertices together (at a point we may take as infinity)
by means of $SL(2,C)$ transformations, leaving alone a singularity
in the bulk. This picture is close in spirit to the Coulomb
gas representation of conformal field theories, but with the difference
now that not all the curvature is pinched at the point at infinity.
A conical singularity on the plane may have sensible effects on the
correlation functions of the theory. We recall here another example with
nontrivial boundary conditions, namely that of a conformal field theory
on the semiplane. In this case correlators which are taken at a finite
distance from the boundary do not measure the conformal weights of the
theory on the plane \cite{cardy}. In our model, it may not be necessary
to compute
correlators infinitely far away from the singularity to measure bulk
conformal weights, but again some dependence on the location of the points
should be expected. This point deserves further clarification,
though its investigation on the lattice would require more powerful
computer facilities than those used in the present work.
\section*{Acknowledgements}
We thank Juan Jes\'us Ruiz--Lorenzo and Miguel Angel
Mart\'{\i}n--Delgado for helpful discussions. We acknowledge the
financial support of the CICyT.
\newpage
\section{Introduction}
Recent advances in deep learning (DL) architectures have improved accuracy in many domains, but
they typically come at the cost of much higher model sizes. Natural language processing (NLP) is now
awash with multi-billion parameter models such as BERT-Large~\cite{devlinBert},
GPT-3~\cite{brown2020language}, and Megatron-LM~\cite{shoeybi2019megatron}.
Interest in such large models is also growing in computer vision (e.g.,~\cite{vit}) and for tasks bridging
NLP and relational data~\cite{tabert}. Unfortunately, GPU memory capacity growth has not kept pace,
creating a new bottleneck in DL~\cite{sohonilowmem}.
A common way of addressing this problem is ``model parallelism''~\cite{modelparallelism}.
Model-parallel execution first \textit{partitions} the model into disjoint subsets known as ``shards'' and places
them on multiple devices (GPUs), as Figure~\ref{fig:panel}A illustrates. During execution, intermediates
between model shards are exchanged between GPUs to emulate single-device training.
Unfortunately, the majority of DL models that necessitate model parallelism (e.g., Transformers)
use a sequential architecture that impedes parallel shard execution across forward and backward
passes. This inter-shard ordering constraint results in massive GPU
under-utilization and poor speedups. Moreover, this approach forces DL users to acquire multiple GPUs
to even attempt using such models, raising costs.
Two recent works that improve upon regular model parallelism are FlexFlow~\cite{flexflow}, which
hybridizes model and data parallelism, and ``pipeline parallelism'' such as in GPipe~\cite{GPipe, harlapPipeline, narayanan2021efficient}.
But FlexFlow was mainly designed to optimize parallelization strategies and reduce compute costs, not
tackle memory limits. Model parallelism's role in FlexFlow is primarily a dimension of an optimization
space where execution speed is the ultimate goal. In this sense, FlexFlow's goals are complementary
to our focus of efficiently training larger-than-GPU-memory models.
Pipeline parallelism, on the other hand, was designed to achieve higher speedups on chain neural
architectures, which does help in the large model setting. Pipelining exploits data access patterns
in mini-batch stochastic gradient descent (SGD), staging out successive mini-batches across devices
and overlapping compute on different shards so as to reduce idling. Unfortunately, this approach still
suffers from idle periods, also called ``bubbles'', due to shared dependencies between the backward
and forward passes of different mini-batches. The issue is illustrated in Figure \ref{fig:panel}B.
\begin{figure*}[th!]
\centering
\includegraphics[keepaspectratio=true, width=\linewidth]{panel}
\caption{A) Traditional model parallel execution across three GPUs. The model is divided into three shards, denoted $M_{0}$, $M_{1}$, and $M_{2}$. As data
passes through the model, only one shard, and only one GPU, is active at any given point in time.
B) The same model as before placed in a pipeline parallel configuration. $F_{i, j}$ refers to shard $j$'s execution period during minibatch $i$'s forward pass,
and $B_{i, j}$ refers to the same period during the backward pass. Pipelined execution overlaps several different model parallel execution sequences on different minibatches
to reduce device idle times.}
\label{fig:panel}
\end{figure*}
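The size of these bubbles can be quantified. Under the idealized uniform-stage model used in the GPipe literature (an assumption we introduce here for illustration, not a result of this paper), a $K$-stage pipeline processing $M$ microbatches idles for a fraction $(K-1)/(M+K-1)$ of each pass:

```python
def pipeline_bubble_fraction(num_stages, num_microbatches):
    # Idle ("bubble") fraction of an idealized K-stage pipeline with M
    # microbatches and uniform per-stage cost: each stage is idle for
    # (K - 1) of the (M + K - 1) time slots in a pass.
    K, M = num_stages, num_microbatches
    return (K - 1) / (M + K - 1)
```

The bubble shrinks as $M$ grows, but it never vanishes for $K > 1$, which is the inefficiency \textsc{Hydra}~targets.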
\eat {
\begin{figure}[h]
\centering
\includegraphics[keepaspectratio=true, width=0.95\linewidth]{model_parallel_execution}
\caption{Traditional model parallel execution on a feedforward architecture.
}
\label{fig:mp_execution}
\end{figure}
}
In this paper, we present \textsc{Hydra}, a new system for larger-than-memory DL model training
that \textit{decouples scalability from parallelism} to tackle both challenges from first principles.
We identify analogies between challenges in larger-than-GPU-memory DL training and classic
relational database management system (RDBMS) design. This allows us, for the first time, to
leverage and adapt classic techniques from the RDBMS world to large-model DL training,
focusing on sequential neural architectures such as Transformers.
We build a layered optimization stack, starting with \textit{spilling}, a technique used in
RDBMSs to promote/demote data shards across levels of a memory hierarchy. This is central
to how RDBMSs scale queries to larger-than-DRAM data. We bring that approach to DL model
shards, allowing us to train arbitrarily large models on even just one GPU.
Next, we aim to optimize multi-model execution, which is ubiquitous in model selection scenarios
such as hyperparameter tuning, architecture tuning, AutoML heuristics, etc. We propose
a novel hybrid of task parallelism and model parallelism in this context we call ``shard alternator
parallelism'' (SHARP) to raise throughput and GPU utilization. It is inspired by the notion of
\textit{multi-query optimization} (MQO)~\cite{sellis} in the RDBMS world.
We formalize the scheduling problem of SHARP and propose a new greedy scheduling
heuristic we call ``Sharded-LRTF'' to further optimize resource utilization. The deterministic
nature of selection in Sharded-LRTF enables some precognition in scheduling, which naturally
leads to our next optimization technique.
A common trick in RDBMSs to hide data movement latency during spilling is ``double buffering'':
split DRAM into two buffers so that computation happens over one and future data is pre-loaded
to the other. We bring this idea to DL systems by weaving it into Sharded-LRTF, allowing
\textsc{Hydra}~to double-buffer model shards in GPU memory and hide the latency of model spilling.
\begin{figure}[h]
\centering
\includegraphics[keepaspectratio=true, width=0.95\linewidth]{optimization_stack}
\caption{\textsc{Hydra}'s layered optimization stack combining scalability (spilling), hybrid parallelism
(SHARP), efficient scheduling (Sharded-LRTF), and latency hiding (double buffering).}
\label{fig:opt_stack}
\end{figure}
Overall, \textsc{Hydra}~features a carefully thought out system design combining the right mix of both
classical RDBMS-inspired techniques adapted to DL and novel techniques to solve our core problem.
Our implementation on top of PyTorch also minimizes overheads across our layered optimization stack
without needing to modify the internal code of PyTorch, helping ease practical adoption.
We evaluate \textsc{Hydra}~empirically on two standard large-model benchmark workloads and datasets:
hyperparameter tuning for BERT-style models on the WikiText-2 dataset~\cite{wikitext2} and neural
architecture evaluation for Vision Transformer~\cite{vit} on the CIFAR-10 dataset.
\textsc{Hydra}~substantially outperforms prior art on an 8-GPU machine, e.g., 7.5x speedups over standard
PyTorch model parallelism, 4x over both DeepSpeed and FlexFlow, and 50\% faster than GPipe.
\textsc{Hydra}~also reports the highest GPU utilization rates and offers near-linear speedups. It also scales
well for various model sizes.
In summary, this paper makes the following contributions:
\begin{packeditems}
\item {To the best of our knowledge, this is the first paper to decouple scalability from parallelism
for large-model DL training to study both from first principles.}
\item {Inspired by RDBMSs, we present a suite of scaling and efficiency techniques rooted in
automated model sharding and spilling.}
\item {To optimize multi-model execution, we present a novel hybrid form of DL execution called
SHARP that blends task parallelism and model parallelism, mitigating the key cons of both.}
\item{We cast SHARP as a form of MQO and build a simple scheduler featuring an efficient greedy
heuristic called Sharded-LRTF. We further hide shard movement latency with double buffering in
Sharded-LRTF.}
\item{We implement all of our ideas in a system we call \textsc{Hydra}, on top of PyTorch without altering
its internal code. A thorough empirical evaluation with real large-model workloads shows that
\textsc{Hydra}~significantly outperforms prior industrial and open source tools.}
\end{packeditems}
\begin{figure*}[th!]
\centering
\includegraphics[keepaspectratio=true, width=0.9\linewidth]{system_overview}
\caption{Overall system architecture of \textsc{Hydra}. It takes as input a user-specified set of models.
It partitions the models automatically and assigns shards to GPUs according to its Scheduler.
DRAM is used to temporarily store inactive model shards and inter-shard outputs.}
\label{fig:system_overview}
\end{figure*}
\section{Related Work}
\noindent
\textbf{Hybrid Parallelism and Pipeline Parallelism.}
Hybrid parallel approaches such as FlexFlow~\cite{flexflow} exploit mixed model-data parallelism
within a DL model. But they do not tackle our core issue of model scalability and they often force
users to manually split models across devices. Some even call such a framework ``memory
oblivious''~\cite{memflow}. Our experiments show \textsc{Hydra}~is significantly faster than a careful manual
model splitting in FlexFlow. Note that such manual splitting is not practical for most DL users; \textsc{Hydra}~automates that stage away.
Pipeline parallelism~\cite{GPipe, harlapPipeline, narayanan2021efficient} reduces idle times by staging
multiple mini-batch computations across devices. But it conflates parallelism with scalability, forcing users
to get many GPUs. By decoupling scalability from parallelism, \textsc{Hydra}~can scale to large models with
even just one GPU. Finally, none of the prior art exploit the higher degree of parallelism inherent in
multi-model training such as model selection workloads. We devise new hybrid parallelism techniques
from first principles for this setting. That said, our techniques are \textit{complementary} to both FlexFlow
and pipeline parallelism under certain conditions; we leave it to future work to unify them all.
\textbf{Reducing model memory footprints} has received much attention in DL
systems~\cite{chen2016training, gruslys2016memoryefficient, Kumar:EfficientRematerialization, Jain:Checkmate, TASO}.
That goal is \textit{orthogonal} to ours. ZeRO and DeepSpeed~\cite{zeroDeep} propose a
data parallelism technique for reducing memory footprints by sharing model state across data-parallel instances; this does not, however, address our
core challenge of scalability and multi-model training. Other work on machine teaching~\cite{machineteaching} and data distillation~\cite{distillation} aims to minimize the memory footprints of data,
but these techniques address a different aspect of memory in DL systems.
While all these memory-reduction techniques address an orthogonal challenge, they could eventually
be infused into \textsc{Hydra}~in the future to reduce the number of shards created.
\noindent
\textbf{Multi-query optimizations for DL systems} are techniques to optimize ML systems by exploiting multi-model
execution, e.g., systems such as Cerebro~\cite{cerebro:kumar}, ModelBatch~\cite{modelbatch}, and ASHA~\cite{asha}.
Cerebro proposes a hybrid parallelism scheme named MOP combining task parallelism and data parallelism,
akin to (but different from) SHARP's hybrid model-task parallelism. ModelBatch raises GPU utilization by altering
the DL tool's internal execution kernels. None of them tackle larger-than-GPU-memory models, which is our focus.
Other examples of MQO for DL systems are Krypton~\cite{krypton} and HummingBird~\cite{hummingbird} but
they focus on inference, not training.
We presented an early version of this work at a non-full length (2 page) venue~\cite{mp:nagrecha}.
In that article, we explained the basic ideas of model sharding and spilling, outlined how task parallelism
can be exploited for multi-model training, and proposed a vision for our system. This paper realizes that
vision to build \textsc{Hydra}, fleshes out SHARP, dives deeper into our scheduling formulation, and proposes the
Sharded-LRTF and double buffering optimizations. This paper also presents a thorough empirical evaluation
on real DL workloads.
\section{Overview of \textsc{Hydra}}
\textsc{Hydra}~is designed to be a lightweight wrapper around the popular DL tool PyTorch.\footnote{It is straightforward
engineering effort to add support for TensorFlow too but we skip it in our current version for tractability.}
We do not need any internal code of the DL tool to be altered, which can help ease practical adoption.
Figure~\ref{fig:system_overview} illustrates the overall architecture of \textsc{Hydra}~and how it handles DL models.
\eat {
\begin{figure}[h!]
\centering
\includegraphics[keepaspectratio=true, width=0.9\linewidth]{system_stack}
\caption{\textsc{Hydra}~in the typical DL application stack. \textsc{Hydra}~executes large-model training for DL application users by automatically using available system resources efficiently.
}
\label{fig:system_stack}
\end{figure}
}
\textsc{Hydra}~takes PyTorch models and dataloaders as input. The standard PyTorch APIs can be used for model definitions ---
no special annotation effort is required. Neural architectures are automatically inferred by \textsc{Hydra}, which then ascertains
the memory capacity of the available GPUs and partitions the model(s)
into a queue of shards, with each shard-task respecting memory constraints. Section 4.1 explains this sharding process in more detail.
Model shards are then placed onto DRAM. \textsc{Hydra}~then begins sharded execution, applying SHARP to simultaneously employ
all GPUs by executing different shards of different models in parallel. The scheduling algorithm selects which model
will be trained next, and the shard at the front of its queue is then promoted into a buffer space in GPU memory.
As soon as the GPU is free, the shard then begins training, already prepared and ready in memory.
Upon completion, the shard is returned to DRAM and placed at the back of the model's queue. Intermediate outputs
are either held in memory or returned to DRAM depending on whether the next buffered shard can use them or not.
As the queue loops, it is refreshed with new mini-batches from the dataloader so as to complete epochs.
The entire process relies on the memory-independence between shards introduced by spilling. Existing
model-parallel approaches are forced to maintain all shards across multiple GPUs' memory, limiting
their flexibility. But model spilling offers us more scheduling flexibility.
\eat {
\section{Techniques in \textsc{Hydra}}
We now dive into the techniques in \textsc{Hydra}~to achieve seamless scalability and resource-efficient parallelism for training multiple large DL models in one go.
Our techniques are inspired by a suite of classical ideas in RDBMSs, viz., spilling, sharding, multi-query optimization, and double buffering~\cite{cowbook,},
but our work is among the first to study them in the context of DL training. While the individual techniques may not be highly novel in the context of data management systems, the way we
identify the right set of techniques, adapt them for DL, and synthesize them in \textsc{Hydra}~is novel. This enables \textsc{Hydra}~to offer state-of-the-art results in this
important DL systems setting.
\subsection{Model Spilling}
}
\section{Model Spilling}
We observe that it is overkill to maintain all model shards across multiple GPUs at all times, as is done in
regular model parallelism or pipeline parallelism.
This is because sequential dependencies across layers in a neural architecture mean that only one device
(or a few) is really fully ``active'' with heavy computations at any point in time. Other devices act as mere
repositories for ``inactive'' model shards.
\begin{figure*}
\centering
\includegraphics[keepaspectratio=true,width=\linewidth]{model_spilling}
\vspace{-6mm}
\caption{Illustration of model spilling as a temporal schematic. \textsc{Hydra}~places inactive shards at a lower level of the memory hierarchy (DRAM here), from which they are re-activated later.}
\label{fig:model_spilling}
\end{figure*}
Exploiting the above observation, we propose the following for inactive shards in \textsc{Hydra}: \textit{spill them to DRAM}.\footnote{One can also easily spill further to disk if really needed.}
Only active shards are promoted to a GPU's memory; the rest ``wait'' in DRAM. This is akin to sharding a table
to stage reads between disk and DRAM in RDBMSs, except we are at a level higher in the memory hierarchy
and focus on DL models instead. All this means \textsc{Hydra}~can scale to virtually arbitrarily large DL models
\textit{on a single device}, e.g., even a trillion-parameter model can be trained with just one GPU out of the box.
This can already make a big difference for DL users with limited resources, e.g., for academic DL researchers
or NLP applications in small enterprises.
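The spilling discipline can be sketched abstractly as a two-level buffer manager. The following is a simplified illustration in plain Python; the class name and \texttt{promote\_next} API are ours, not \textsc{Hydra}'s actual interface:

```python
class SpillManager:
    # Two-level memory hierarchy: inactive shards wait in DRAM, and only
    # up to `capacity` shards are resident in GPU memory at once.
    def __init__(self, shards, capacity=1):
        self.dram = list(shards)   # inactive shards spilled to DRAM
        self.gpu = []              # shards resident on the GPU
        self.capacity = capacity

    def promote_next(self):
        # Demote the oldest resident shard before promoting the next one.
        if len(self.gpu) >= self.capacity:
            self.dram.append(self.gpu.pop(0))
        shard = self.dram.pop(0)
        self.gpu.append(shard)
        return shard
```

Cycling \texttt{promote\_next} over a model's shard queue emulates a full forward pass while keeping GPU residency bounded by \texttt{capacity}.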
\begin{algorithm}[tb]
\caption{Dynamic model partitioning algorithm.}
\label{alg:partitioning}
\begin{algorithmic}
\STATE {\bfseries Input:} Model as a sequence of layers $L$ with size $m$; data mini-batch $B$; GPU $G$
\STATE {\bfseries Output:} Array of partition indices $A$
\STATE Append 0 to $A$
\FOR{$i=0$ {\bfseries to} $m-1$}
\STATE Place $L[i]$ and $B$ on $G$
\STATE $B'$ $\leftarrow$ Forward pass through $L[i]$ with $B$
\STATE $T$ $\leftarrow$ New tensor with same shape as $B'$
\STATE Backpropagate $T$ through $L[i]$ without freeing its memory
\IF{$G$ out of memory}
\STATE Append $i$ to $A$
\FOR{$j=0$ {\bfseries to} $i-1$}
\STATE Release all memory consumed by $L[j]$
\ENDFOR
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Automated Model Partitioning}
\label{sec:partition}
Both traditional model parallelism and our model spilling depend on some sort of ``cut point'' to split the neural
computational graph because shards must consist of disjoint subsets of layer groups.
Prior art~\cite{narayanan2021efficient} uses basic heuristics, albeit restricted to a specific class of models.
Unfortunately, their approach is not general enough for our purpose.
While one could use sophisticated graph partitioning algorithms for ``optimal'' partitioning, we find that is
not worthwhile for two reasons. First, this stage is anyway only a tiny part of the overall runtime, which
is dominated by the actual training runs. Second, due to the marginal utility of over-optimizing here,
it will just make system engineering needlessly complex.
We prefer simplicity that still offers generality and good efficiency. Thus, we use a \textit{dynamic greedy
approach} based on toy ``pilot runs.'' Algorithm~\ref{alg:partitioning} presents it succinctly. The basic idea
is to pack as much of a model as possible onto a GPU. If the set of GPUs is heterogeneous, we use the
smallest-memory GPU to ensure cross-device compatibility of shards. We treat a DL model as an ordered
list of layer indices, with the layers being ``cut points'' in the graph to enable smooth partitioning.
\textsc{Hydra}~then iterates through these indices to run ``toy'' passes with a \textit{single mini-batch once}.
If the run is successful, the Partitioner raises the shard size by appending the next set of layers.
If the GPU throws an out-of-memory error, we remove the set of layers appended last. Thus, in this
dynamic way, we find a near-maximal set of layers that fits in GPU memory; this set is then cut off
from the model as its own shard. The Partitioner continues this process for the remaining layers
until the model is fully partitioned. We record runtime statistics for later use by our Scheduler.
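In code, the greedy cut-point search reduces to simple memory accounting once the pilot passes have measured per-layer memory costs. A minimal Python sketch (all names are hypothetical; \texttt{layer\_costs} stands in for the pilot forward/backward measurements):

```python
def greedy_partition(layer_costs, budget):
    """Return partition start indices A: layers[A[k]:A[k+1]] form shard k.

    layer_costs[i] is the pilot-run memory cost of layer i on the
    smallest-memory GPU; budget is that GPU's usable memory.
    """
    cuts = [0]                            # partitioning starts at index 0
    used = 0
    for i, cost in enumerate(layer_costs):
        if used + cost > budget:          # the pilot run would go out of memory
            if used == 0:
                raise MemoryError(f"layer {i} alone exceeds the GPU budget")
            cuts.append(i)                # cut the model: new shard starts at i
            used = 0                      # release memory of the finished shard
        used += cost
    return cuts
```

For example, with layer costs [4, 4, 4, 4] and a budget of 8, the cuts are [0, 2], i.e., two shards of two layers each.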
Our above approach is general, flexible, and easy to implement, but it does have two (minor) cons.
First, the last shard of a model may be tiny, perhaps even a single layer. But this is not a
showstopper in \textsc{Hydra}~because it has little effect on overall efficiency.
Second, since our pilot runs use the smallest-memory GPU, the actual training later may under-utilize
the memory of larger GPUs (if applicable). Nevertheless, Section 6 shows that our approach substantially
boosts GPU utilization relative to all prior art anyway.
\section{SHARP}
We now present one of our key novel techniques: Shard Alternator Parallelism (SHARP), a hybrid of classical
model parallelism and task parallelism to improve resource efficiency. We define our basic unit of computation,
\textit{shard unit}, as follows: the subset of computations of a forward or backward pass on a model's
shard. Thus, a full forward or backward pass of a model is a \textit{sequence} of shard units.\footnote{In
recent ML literature, this unit is also called a ``microbatch''~\cite{GPipe}. We prefer to use the more
standard terminology of ``unit'' from the operations research and systems literatures instead because
the term ``microbatch'' may cause confusion on whether the mini-batch \textit{data} is split further, which
is not the case. A shard unit splits the \textit{computations} (not data) of a forward/backward pass of a whole
mini-batch.} Overall, the scheduling goal is to execute all shard units of all models given by the user for
all epochs.
Figure~\ref{fig:sharp} illustrates the basic idea of SHARP contrasted with both task- and model parallelism.
After a model's shards are created (Section \ref{sec:partition}), shard units are naturally set. The key difference in
SHARP is that a given model's shard units do not necessarily run \textit{immediately} after one another, i.e.,
they may be staggered over time. This is the key reason for SHARP's higher efficiency--it breaks tasks
down and reassembles them in a ``better'' way.
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio=true, width=\linewidth]{shard_parallel_schedule}
\caption{Illustration of SHARP and contrasting with regular task parallelism and model parallelism for training
3 models A, B, and C, each with 2 shards.
}
\label{fig:sharp}
\end{figure}
While the basic idea of SHARP is simple (but novel), realizing it in a working system poses two technical
challenges: (1) the sheer number of shard units, and (2) the latency of swapping shards between
device memory and DRAM. First, note that the number of shard units to be handled is
multiplicative in 4 quantities: number of models given by the user, number of shards per model,
number of mini-batches per epoch, and number of epochs per model training run. In realistic DL
scenarios, one can easily hit \textit{tens of millions of shard units}!
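To make the multiplicative blow-up concrete, the count can be written in code as follows (the factor of 2 covers the forward and backward passes; the field names are hypothetical):

```python
def total_shard_units(models):
    """Count shard units: 2 (forward + backward) x shards x mini-batches
    x epochs, summed over all models in the workload."""
    return 2 * sum(m["shards"] * m["minibatches_per_epoch"] * m["epochs"]
                   for m in models)

# e.g., 16 models, 8 shards each, 10,000 mini-batches/epoch, 5 epochs:
# 2 * 16 * 8 * 10,000 * 5 = 12.8 million shard units
```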
\eat{
The sheer number of microbatches causes a different issue, however. The number of inter-task transitions in a task parallel system are determined by the total number of tasks. Equation \ref{eqn:data_swaps} shows how many tasks exist in a shard parallel setup. For every model $m$ of the total set $M$, the number of microbatches can be computed as a product of the number of shards $m_{k}$, number of mini-batches $B / {m_{b}}$, and epochs that have been assigned for that model. A typical train job may see hundreds of thousands of minibatches sent through models, and a larger-than-memory model may require dozens of shards. This translates to millions of transfers between DRAM and GPU memory, slowing down \textsc{Hydra}'s training procedures.
\vspace{-4mm}
\begin{align}
\label{eqn:data_swaps}
\begin{split}
s = 2 \times \sum_{m=0}^{M} {m_{k}} \times ({B / {m_{b}}}) \times m_{e} \\
\end{split}
\vspace{-8mm}
\end{align}
}
\subsection{\textbf{Automated Shard Orchestration}}
To realize SHARP in a working system, we must handle 3 kinds of ``data'' before, during, and after a shard unit:
(1) the training data mini-batch, (2) model parameters, and (3) intermediate data/outputs of a shard unit.
Thankfully, DL tools like PyTorch offer APIs that enable data to be transferred from GPU
memory to DRAM and vice versa. We use those APIs in \textsc{Hydra}~under the hood to automate shard
orchestration.
Each model is defined as a ``queue'' of shards in DRAM, ordered according to the neural computational
graph. Each shard maintains the prepared data that will need to be used with the ``next'' shard. This data
could include the training data mini-batch, intermediate data exchanged \textit{between} shards, and/or
gradients sent backward. The shard at the front of the queue is transferred to GPU memory along with
its associated data to begin running that shard unit.
After shard unit execution completes, the inputs it used are
discarded. The shard parameters (possibly updated) are returned to DRAM. In addition, the shard's
intermediate outputs, say, a gradient vector or a loss value, are also written to DRAM and attached to
the model. They will be used as inputs for the model's next shard. The last shard
of a model concludes a full mini-batch training pass; after that, the old mini-batch is discarded and the
next mini-batch of the prepared data will be used.
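The queue protocol above can be sketched as follows, with \texttt{to\_gpu}/\texttt{to\_dram} standing in for PyTorch's device-transfer APIs (e.g., \texttt{.to('cuda')} / \texttt{.to('cpu')}); all names are hypothetical:

```python
def run_minibatch_pass(shard_queue, minibatch, to_gpu, to_dram, execute):
    """One full training pass of a model over one mini-batch, shard by shard.

    to_gpu/to_dram stand in for device transfers; execute runs one shard
    unit (the computations of one shard on the carried-over data).
    """
    carried = minibatch                  # data prepared for the "next" shard
    for shard in shard_queue:            # ordered by the computational graph
        on_gpu = to_gpu(shard)           # front of the queue -> GPU memory
        out = execute(on_gpu, to_gpu(carried))
        to_dram(on_gpu)                  # (possibly updated) params -> DRAM
        carried = to_dram(out)           # intermediates attached to the model
    return carried                       # output of the last shard unit
```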
\subsection{\textbf{Double Buffering}}
A common trick used in RDBMSs, e.g., for external merge sort, is double buffering~\cite{cowbook}.
The basic idea is this: the processor's memory (higher in the memory hierarchy) is split into two
regions: one for active processing and the other as a ``loading zone'' for the next task. We bring
this trick to the DL systems world for the first time. \textsc{Hydra}~uses it to mask shard loading
latencies. We reserve a ``buffer space'' in GPU memory during model partitioning (Section \ref{sec:partition}) to
guarantee that this much buffer memory will be available during training. The buffer size is
15\% of GPU memory by default, but users can adjust it if they like.
When our Scheduler picks the next shard to be run, we transfer it to that GPU's buffer space \textit{even
as the previous shard unit is running there}. We load only what will fit into the buffer space. Usually, this
is feasible because a model shard alone is often not \textit{that} big--it is the combination with training
data and massive intermediate outputs that causes memory
pressure in large-model DL.
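The trick can be sketched with a background loader thread (\texttt{load} and \texttt{run} are hypothetical callbacks; in \textsc{Hydra}~they correspond to the shard transfer into the buffer space and the shard unit execution, respectively):

```python
import threading

def double_buffered_run(shards, load, run):
    """Run shard units back to back, prefetching each next shard into the
    buffer space while the current shard unit executes."""
    if not shards:
        return
    current = load(shards[0])            # first shard loads with no overlap
    for nxt in shards[1:]:
        buf = {}
        loader = threading.Thread(
            target=lambda s=nxt: buf.update(shard=load(s)))
        loader.start()                   # fill the buffer space...
        run(current)                     # ...while the active region computes
        loader.join()
        current = buf["shard"]           # promote buffer -> active region
    run(current)
```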
Interestingly, our double-buffered DL training in \textsc{Hydra}~also offers a serendipitous new bonus:
we can avoid spilling (to DRAM) altogether in some cases. When a model's shard unit is active,
if its next shard is double-buffered on another GPU, we send intermediates with a fast GPU-to-GPU
transfer instead of a slower GPU-to-DRAM-to-GPU route. Going further, if the next shard is
double-buffered on the same GPU, intermediates need not move out of the GPU at all, substantially
reducing latency.
Overall, while we focus on the GPU memory-DRAM dichotomy, our above techniques are general
enough to be applicable across the entire memory hierarchy: between DRAM and local disk, local
and remote disk, etc.
\begin{table}[tb!]
\caption{Notation for our scheduling formalization.}
\label{table:tradeoffs_notation}
\begin{tabular}{c p{5cm}}
\toprule
Symbol & Description \\
\midrule
$T$ & List of models to be trained.\\
\midrule
$P$ & List of devices (GPUs) available for training.\\
\midrule
$M_{i} \in {\rm \mathbb{Z}^+} $ & $M_{i}$ is the total shard unit count for model $T_{i} \in T$. Note that this covers all mini-batches (and potentially epochs). \\
\midrule
$S_{i} \in {\rm \mathbb{R}}^{M_{i}}$ & $S_{i}$ is a variable-length list of shard unit runtimes for model $T_i$. The runtime of shard unit $j$ is denoted as $S_{i, j}$.\\
\midrule
$X_{i} \in {\rm \mathbb{R}}^{|P| \times M_{i} }$ & $X_{i}$ is a variable-shape matrix of start times of shard units of model $T_i$ across workers. The start time of shard unit $j$ on worker $p$
is denoted as $X_{i, p, j}$.\\
\midrule
$Y \in {\rm \{0, 1\}^{ |P| \times L \times L} }$ & $L$ is the total number of shard units across all models, i.e., $L = \sum_i M_i$,
indexed cumulatively by the index of model $i$ and its shard unit $j$ (denoted $i\_j$). $Y_{p, i\_j, i'\_j'} = 1 \Leftrightarrow X_{i, p, j} < X_{i', p, j'}$.\\
\midrule
$U$ & An extremely large constant used to enforce Boolean logic (``big-M''). \\
\midrule
$C$ & Makespan, i.e., the completion time of the entire workload. \\
\bottomrule
\end{tabular}
\vspace{-4mm}
\label{tb:milp_table}
\end{table}
\subsection{Scheduling Formalization of SHARP}
\label{sec:orchestration}
The sheer number and variable runtimes of shard units across models necessitates a rigorous automated Scheduler.
We immediately face three technical challenges. First, different models may train for different numbers of epochs,
say, due to convergence-based SGD stopping criteria or early stopping in AutoML heuristics. Second, it is possible that
different devices have different compute capacities. Third, devices may disappear over time, say, due to faults, or
get added, say, due to elasticity. Due to all these reasons, we choose a \textit{dynamic scheduling} approach to place
shard units on devices as and when a device becomes available over time. This design decision tackles all three
challenges above in a unified way and also simplifies system implementation.
Specifically, we formalize our problem for a given set of epochs per model at a time. This may
mean one epoch at a time or a pre-fixed number of epochs the user gives per model. Regardless,
we treat each model to be trained as a \textit{queue of shard units} unifying reasoning of division
within a mini-batch, across mini-batches within an epoch, and across epochs.
\subsubsection{\textbf{Formal Problem Statement as MILP}}
\label{sec:scheduler}
The scheduling problem is as follows. At a given point in time, when a device (GPU) becomes available, a shard
unit must be selected from one of the models' queues to be placed upon that device. Shard units become \textit{eligible}
for scheduling if they have no pending dependencies, i.e., they are at the front of their queue and no other shard unit
of that same model is still running on another device. The Scheduler's job then is to pick a shard unit from the set of
eligible shard units. Double-buffered training is already factored into this formulation: the Scheduler is actually
picking shard units for double-buffering, and they get promoted from the buffer to compute.
All shard unit runtimes are given as input. Recall from Section \ref{sec:partition} that the partitioner records this data during its pilot run.
These runtimes may not be fully accurate due to variable system factors during execution, but are sufficient for effective scheduling.
We now present the formal scheduling problem as an MILP. Table~\ref{tb:milp_table} explains our notation.
\vspace{-4mm}
\begin{align}
\label{eqn:milp_obj}
\texttt{Objective:} ~\quad \displaystyle\min_{X,Y} {C}
\end{align}
\vspace{0mm}
\texttt{Constraints:}
\begin{align*}
\begin{split}
&~\forall t, t' \in [1,\dots,|T|] \hspace{5mm} \forall p, p' \in P \\
(a) &~\displaystyle {~\forall j \in [2,\dots,M_{t}] ~X_{t, p, j} \geq X_{t, p', j - 1} + S_{t, j - 1}}\\
(b) &~\displaystyle{~\forall j \in [1,\dots,M_{t}] \hspace{5mm} \forall j' \in [1,\dots,M_{t'}] } \\
& \indent ~\displaystyle{~X_{t, p, j} \geq X_{t', p, j'} + S_{t',j'} - (U \times Y_{p, t\_j, t'\_j'} ) } \\
(c) & ~\displaystyle{~\forall j \in [1,\dots,M_{t}] \hspace{5mm} \forall j' \in [1,\dots,M_{t'}] }\\
& \indent ~\displaystyle{~X_{t, p, j} \leq X_{t', p, j'} - S_{t,j} + (U \times (1 - Y_{p, t\_j, t'\_j'} ))} \\
(d) &~\displaystyle{~\forall j \in [1,\dots,M_{t}]}\\
&\indent~\displaystyle{X_{t, p, j} \geq 0}\\
(e) &~\displaystyle{~\forall j \in [1,\dots,M_{t}]}\\
&\indent~\displaystyle{C \geq X_{t, p, j} + S_{t, j}}
\end{split}
\end{align*}
The objective is to pick a shard unit that can minimize makespan (completion time of the whole workload at
this granularity). Constraints (a) simply enforce the \textit{sequential ordering of shard units} within a model.
Note that this set per model here is unified within a mini-batch, across mini-batches within an epoch, and
potentially across epochs too--they are all sequentially dependent.
Constraints (b) and (c) enforce \textit{model training isolation}, i.e., only one shard unit can run on a device
at a time. Constraint (d) is just non-negativity of start times, while Constraint (e) defines the makespan.
Using an MILP solver such as Gurobi~\cite{gurobi} enables us to produce an ``optimal'' schedule in this
context. But the above task is a variant of a general job-shop scheduling problem described
in~\cite{ullman:scheduling}, and it is known to be NP-complete. Given that the number of shard units can
span thousands to tens of millions, solving it optimally will likely be impractically slow. Thus, we look for
fast and easy-to-implement scheduling algorithms that can still offer near-optimal makespans.
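To make the constraints concrete, the following is a brute-force validity checker for a candidate schedule (a checker, not a solver; all names are hypothetical):

```python
def is_valid_schedule(starts, runtimes):
    """Check constraints (a)-(d): starts[t][j] = (device, start time) of
    shard unit j of model t; runtimes[t][j] = S_{t,j}."""
    busy = {}  # device -> list of (start, end) intervals
    for t, units in enumerate(starts):
        for j, (p, x) in enumerate(units):
            if x < 0:                                   # (d) non-negativity
                return False
            if j > 0 and x < units[j - 1][1] + runtimes[t][j - 1]:
                return False                            # (a) sequential order
            busy.setdefault(p, []).append((x, x + runtimes[t][j]))
    for intervals in busy.values():                     # (b),(c) isolation
        intervals.sort()
        for (_, e1), (s2, _) in zip(intervals, intervals[1:]):
            if s2 < e1:
                return False
    return True
```

The makespan $C$ of a valid schedule is then simply the maximum end time over all intervals.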
\subsubsection{\textbf{Intuitions on Scheduling Effectiveness}}
We observe that there are 2 primary cases encountered by a scheduler in our setting:
\begin{packedenums}
\item The number of models is equal to or greater than the number of available devices.
\item The number of models is less than the number of available devices.
\end{packedenums}
In case (1), there will always be at least one eligible shard unit for each device at every scheduling decision.
Any shard-parallel scheduling algorithm that accounts for all devices can easily keep all devices busy most
of the time, i.e., \textit{busy waiting} is unlikely.
In case (2), all models can be trained simultaneously. Since each model's shard unit uses at most one device
in SHARP, and since there are more devices than models, there is no contention for resources here. In this
case, regular task parallelism-style scheduling suffices and the makespan will just be the runtime of the
longest ``task.''
\begin{table*}[th]
\centering
\vspace{4mm}
\caption{Details of end-to-end workloads. *Architectures similar to BERT-Large and ViT, scaled up for demonstration.}
\begin{tabular}{c c c c c c } \hline
Dataset&Model Architectures&Model Sizes&Batch Size&Learning Rate&Epochs\\
\midrule
WikiText 2&BERT-Large*&1B&{8, 16, 32}&$10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}$&{4}\\
\midrule
CIFAR-10&ViT*&{300M, 600M, 800M, 1B, 1.5B, 2B}&{64, 128}&$10^{-3}$&5\\
\end{tabular}
\label{tb:end-to-end}
\end{table*}
In both the cases above, even basic randomized scheduling might yield reasonable makespans.
However, what it will not take into account is that case (1) is not static. Over time, as models finish their
training, our setting may ``degrade'' from case (1) to case (2). Thus, two different schedulers that operate
on a workload in case (1) may differ in their effectiveness based on how gracefully they degrade to case (2).
As noted before, the makespan in the case (2) scenario is determined solely by the longest-running remaining
model. This gives us an intuition for a simple scheduler that can often do better than randomized:
\textit{minimize the maximum remaining time among the remaining models}.
In most realistic DL model selection workloads, overall completion time will be dominated by case (1). Thus, the
marginal utility of pursuing an overly optimized Scheduler is low given the likely higher implementation
complexity. However, if degradation to case (2) occurs earlier on, and if there are substantial differences in
task runtimes post-degradation, the overall completion times can differ more significantly based on the
scheduling. Such degradation can arise in model selection workloads that use early stopping for
underperforming models, e.g., Hyperband~\cite{hyperband}, or by manual user intervention.
Thus, we aim for a scheduling algorithm that can address such cases too in a unified way.
We propose a simple and practical greedy heuristic we call Sharded Longest Remaining Time First
(Sharded-LRTF), based on our above intuitions. Algorithm~\ref{alg:shardedlrtf} presents it. Sharded-LRTF selects the model training task
with the \textit{longest total remaining train time}
at every possible scheduling decision time. Since a new scheduling choice must be made at the completion of every shard unit, the selection will update
its choice of longest task at a very fine-grained level. Note that the
selection procedure runs efficiently with \textit{linear time complexity}.\footnote{In fact, an alternate data
structure to record shard references can enable even constant-time selection.}
\begin{algorithm}[tb]
\caption{The Sharded-LRTF scheduling algorithm.}
\label{alg:shardedlrtf}
\begin{algorithmic}
\STATE {\bfseries Struct} \{
\STATE \ \ \ \ Remaining epochs $e$\\
\STATE \ \ \ \ Minibatches per epoch $b$\\
\STATE \ \ \ \ Remaining minibatches in current epoch $ce$\\
\STATE \ \ \ \ Minibatch training time $t$\\
\STATE \ \ \ \ Remaining train time in current minibatch $cm$
\STATE \}
\STATE{ \bfseries Input:} Idle Models [$M$]
\STATE{\bfseries Output:} Model $MaxModel$
\STATE $MaxTrainTime = 0$
\FOR{Index $i$, Model $m$ in $[M]$}
\STATE $ModelTrainTime$ =$((m_{e} - 1) \times m_{b} + m_{ce} - 1 )\times m_{t} + m_{cm} $
\IF{$ModelTrainTime > MaxTrainTime$}
\STATE $MaxTrainTime = ModelTrainTime$
\STATE $MaxModel = m$\
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
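The selection procedure of Algorithm~\ref{alg:shardedlrtf} is essentially a linear-time \texttt{max} in Python (the dict fields mirror the struct above; names are hypothetical):

```python
def sharded_lrtf(idle_models):
    """Pick the idle model with the longest total remaining train time."""
    def remaining_time(m):
        # full mini-batches left across remaining epochs, plus the partial one
        return ((m["e"] - 1) * m["b"] + m["ce"] - 1) * m["t"] + m["cm"]
    return max(idle_models, key=remaining_time)   # linear-time selection
```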
\begin{figure}[t]
\centering
\includegraphics[keepaspectratio=true,width=\linewidth]{simulation_comparisons_keynote}
\caption{Comparison of various scheduling algorithms. Makespans are normalized to Optimal.}
\label{fig:sched_proc}
\vspace{-2mm}
\end{figure}
\subsubsection{\textbf{Simulation Comparison of Scheduling Algorithms}}
\label{sec:scheduler_alg}
To quantitatively understand the effectiveness of Sharded-LRTF, we compare it using simulations against
a basic randomized schedule and a Gurobi-output ``optimal'' schedule. For tractability, we set a timeout of
100s for Gurobi. We show both a homogeneous setting, wherein all neural architectures are identical, and a
heterogeneous setting, wherein they differ significantly.
We assume all GPU devices are identical for simplicity, but that is also common in practice. Per-epoch
runtimes of a model in the homogeneous setting are all fixed to 2 hours each, with 2000 shard units each.
For the heterogeneous setting, per-epoch model runtimes are set between 30 minutes and 4 hours;
the number of shard units is set between 100 and 10,000. We randomly sample an initial set
and report the average and standard deviations of 3 runs on the fixed set. Variance occurs
due to non-deterministic scheduling behaviors from random selection and Gurobi timeout.
Figure~\ref{fig:sched_proc} shows the results.
\begin{figure*}[pt!]
\centering
\includegraphics[width=\linewidth]{e2ebarchart}
\vspace{-4mm}
\caption{End-to-end workload results: Runtimes and Total GPU Utilization. Speedups are relative to the baseline PyTorch Distributed.
}
\label{fig:e2e_results}
\end{figure*}
The MILP ``optimal'' has a higher makespan in some cases because Gurobi did not converge to the
global optimum in the given time budget. The randomized approach matches it or performs worse in many
cases. But Sharded-LRTF matches or significantly outperforms the other approaches,
especially in the heterogeneous setting.
Also note that the runtime of Sharded-LRTF is on the order of tens of milliseconds, making it practical
to use in \textsc{Hydra}~repeatedly for scheduling shard units on devices dynamically. Note that the
actual mini-batch training computations on the device dominate overall runtimes.
\section{Experiments}
We now compare \textsc{Hydra}~against state-of-the-art open source and industrial tools for large-model DL
training: PyTorch Distributed, Microsoft's DeepSpeed, FlexFlow from Stanford/CMU, and Google's GPipe idea.
FlexFlow requires some manual guidance by editing the system-generated parallelism strategy file to ensure
memory errors do not occur.
We also show multiple variants with DeepSpeed, including superimposing a hybrid task parallelism (note that
regular task parallelism is not applicable) and a hybrid data parallelism offered by DeepSpeed. We then dive into
how \textsc{Hydra}~performs when various workload and system parameters are varied.
\textbf{Workload Details.}
We use two popular DL model selection scenarios: hyperparameter evaluation and neural architecture evaluation.
Table~\ref{tb:end-to-end} lists details. For hyperparameter evaluation, we focus on masked-language modeling with the
Transformer architecture BERT-Large~\cite{devlinBert}, trained on the benchmark WikiText-2 dataset. The neural architecture is fixed and we vary
batch size and learning rate as key hyperparameters to create a total of 12 models to train, each with 1B parameters. For
neural architecture evaluation, we focus on an emerging computer vision task with variants of the Vision Transformer (ViT)
model~\cite{vit}. We use the benchmark CIFAR-10 dataset. We create models with sizes between 300M and
2B parameters. We also vary batch sizes, leading to a total of 12 models again.
\textbf{Machine Setup.}
We focus on the single-node multi-GPU setting, anecdotally the most common among DL practitioners (although
nothing in \textsc{Hydra}~prevents generalizing to multi-node clusters in the future). We use 8 GPUs, each a
GTX 1080 Ti with 11GB of memory. The machine has 512GB of DRAM and a 10-core 2.20GHz Intel Xeon CPU,
and it runs Ubuntu 18.04.
\subsection{End-to-End Workloads}
Figure~\ref{fig:e2e_results} presents overall runtimes and GPU utilization results. We find that the baseline off-the-shelf
PyTorch Distributed and DeepSpeed model parallelism report massive resource under-utilization. Thus, their runtimes
are the highest. The basic hybrids with data- or task-parallelism do provide higher utilization and
some modest speedups, but the fundamental limitations of model parallelism persist with such approaches, such that
they still fall substantially short of ideal linear speedup (8x in this case). GPipe-style pipeline parallelism is much better,
with about a 5x speedup against regular model parallelism. But \textsc{Hydra}~is the most efficient approach overall, reaching about
7.5x, close to the physical upper bound. The average GPU utilization of \textsc{Hydra}~is also the highest at over 80\%.
\subsection{Drill Down Analysis of \textsc{Hydra}}
We now dive into how \textsc{Hydra}~performs when varying key parameters from both ML and system standpoints.
For reference, we also show the standard model parallelism of PyTorch Distributed alongside.
\subsubsection{\textbf{Impact of Model Scale.}}
We vary the scale of the models to see the impact on relative performance of \textsc{Hydra}. We fix the number of GPUs
at 8 and the number of models to 12. Figure~\ref{fig:runtime_vs_scale} shows the results. We see that \textsc{Hydra}'s
speedups over regular model parallelism \textit{are fairly consistent} even as the model scale grows. This is because
our partitioning approach (Section \ref{sec:partition}) and the dynamic Sharded-LRTF algorithm (Section \ref{sec:orchestration}) together ensure that
shard unit times are similar even as model scale grows; basically, it just leads to more shard units to run. Our SHARP
and double-buffering techniques further ensure that having more shard units does \textit{not} cause relatively more
resource idling on average.
\begin{figure}[h]
\centering
\includegraphics[keepaspectratio=true,width=\linewidth]{runtime_vs_scale}
\caption{Impact of model scale. Runtimes normalized to the first instance of regular model parallelism for clarity.}
\label{fig:runtime_vs_scale}
\vspace{-4mm}
\end{figure}
\subsubsection{\textbf{Impact of Number of GPUs.}}
We now study how varying the number of GPUs affects \textsc{Hydra}'s speedup behavior. We fix the workload to 4
Transformer models, each with 250M parameters. We choose only 4 models to showcase both regimes: when
the number of devices is less than the number of models and vice versa. Figure~\ref{fig:runtime_vs_cluster} shows the results.
We see that \textsc{Hydra}~exhibits a roughly linear speedup when there are more models than devices. And when
that flips, since \textsc{Hydra}~runs out of models to schedule, the speedup flattens as the degree of parallelism
is limited. This flattening in the fewer-models regime is inherited from task parallelism by SHARP. We believe
further hybridization of SHARP with data parallel training can help boost speedups and resource utilization
in this regime; due to its complexity, we leave it to future work.
\begin{figure}[h]
\centering
\includegraphics[keepaspectratio=true,width=\linewidth]{runtime_vs_cluster}
\caption{Varying the number of GPUs. Speedups are normalized to standard model parallelism.
}
\label{fig:runtime_vs_cluster}
\vspace{-4mm}
\end{figure}
\subsubsection{\textbf{Impact of Number of Models.}}
We now vary the number of models that are trained together. The number of GPUs is set to 8; all models are
uniformly large, at 250M parameters (same Transformer workload as before). Figure~\ref{fig:runtime_vs_models} shows
the results. We see that \textsc{Hydra}~exhibits close to 8x speedups when the number of models is 8 or more; below
that, the speedup is capped at close to the actual number of models. As before, this too is due to
SHARP inheriting the degree of parallelism from task parallelism. The GPU utilization numbers vary proportionally
to the speedups seen.
\begin{figure}[h]
\centering
\includegraphics[keepaspectratio=true,width=\linewidth]{runtime_vs_models}
\caption{Varying the number of models trained together. Speedups are normalized to standard model parallelism.
}
\label{fig:runtime_vs_models}
\end{figure}
\subsubsection{\textbf{Ablation Tests.}}
In this experiment, we explore the effect of system components on framework performance. The number of devices is fixed to 8, with 16 Transformer models. All optimization levels include model spilling as a baseline, as this technique is critical to \textsc{Hydra}'s basic operations. Table \ref{tb:ablation} demonstrates the results.
\begin{table}[ht]
\centering
\vspace{4mm}
\scalebox{0.6}{
\begin{tabular}{c c c} \hline
Optimization Level & Runtime (hrs) & Runtime relative to \textsc{Hydra}~\\
\midrule
\textsc{Hydra}~without SHARP or double-buffering & 24.14 & 13.05X \\
\midrule
\textsc{Hydra}~without double-buffering & 4.25 & 2.3X\\
\midrule
\textsc{Hydra} & 1.85 & 1X \\
\end{tabular}
}
\caption{Runtimes and slowdowns of \textsc{Hydra}~when our two key optimizations are disabled one by one.}
\label{tb:ablation}
\end{table}
Pure model spilling dramatically slows down model training. This is only to be expected, given that it introduces a dependency on DRAM. SHARP's throughput improvements dominate the slowdowns of model spilling, but it is important to note that SHARP's speedups are workload-dependent. Double-buffering largely eliminates the cost of model spilling, enabling further speedups.
\eat {
\section{Current Limitations and Future Opportunities}
\vspace{2mm}
\noindent
\textbf{Non-Sequential Neural Architectures.} As mentioned earlier, \textsc{Hydra}~focuses on neural computational graphs that can
be represented as sequences of layers or groups of layers (prior work on pipeline parallelism also assumes this).
The most popular classes of GPU-memory-bottlenecked models in DL practice today, viz., Transformers, as well as most
convolutional neural networks and multi-layer perceptrons do satisfy the assumption. Some not-fully-sequential models such as
Inception, ResNets, and DenseNets are easily handled because residual or skip connections can be linearized with a single
``super-vertex'' in the graph specification given to \textsc{Hydra}. As long as the user defines the graph in that way using the
DL tool's API, \textsc{Hydra}~works out of the box for such models too.
But for recurrent neural networks (RNNs) and graph neural networks (GNNs), \textsc{Hydra}~would need to be extended to account
for non-trivial dependencies across shard units of a model. Backpropagation through time, maintaining memory cells, and
cross-layer global connections all require non-trivial extra implementation machinery and modifications to our Scheduler.
We leave such extensions to future work.
\vspace{2mm}
\noindent
\textbf{Large Model Inference.}
This paper focused primarily on training of large models. But a trained model is then used for inference in an application.
If one wants to use a GPU for inference, the same GPU memory bottleneck exists. Fortunately, \textsc{Hydra}'s model spilling,
automated partitioning, and automated shard orchestration all suffice already for out-of-the-box large model inference too.
While we have not implemented an inference API as of this writing, it is a straightforward addition we plan to do soon.
\vspace{2mm}
\noindent
\textbf{DL Tool Generality.} \textsc{Hydra}~is currently implemented as a wrapper around PyTorch. But all of our techniques described
in Section 4 are generic enough to be used with, say, TensorFlow or MXNet as well. We just chose to prioritize the depth of our
system over generality of DL tools as of this writing. But nothing in \textsc{Hydra}~prevents support for additional DL tools in the future.
\vspace{2mm}
\noindent
\textbf{CUDA-level Optimization.} We designed \textsc{Hydra}~to operate on top of PyTorch to ensure backward compatibility as PyTorch
evolves. This means we did not exploit any low-level optimizations for GPU-to-GPU transfers. One could technically imagine
using multiprocessing in CUDA to reduce this latency, including for our double-buffering technique. But all this will require us to
write new kernels in CUDA for memory management and hook them into the DL tool. We leave such ideas to future work.
\vspace{2mm}
\noindent
\textbf{More Hybrid Parallelism.}
When there are fewer models than devices, \textsc{Hydra}~may under-utilize the devices due to the limitation it inherits from task
parallelism (so will all alternative approaches). But one could do better by hybridizing data parallelism and pipeline parallelism
with \textsc{Hydra}~to raise overall resource utilization. This is feasible because both of those approaches are technically
\textit{complementary} to SHARP and \textsc{Hydra}'s other techniques. We leave such sophisticated hybrid-of-hybrids parallelization
to future work. But we note that in cases where there are more models than devices, SHARP is already rather close to optimal
utilization, as our empirical results show.
\vspace{2mm}
\noindent
\textbf{Static Partitioning.} Our current pilot run-based dynamic partitioning is an overhead, albeit not that high compared to
actual training runtimes. But in the future, it may be worthwhile to adopt a more static off-the-shelf model size estimator such
as DNNMem~\cite{DNNMem} to reduce the runtime of the partitioning phase even further.
}
\section{Limitations and Future Work}
\textsc{Hydra}~currently has two key workload limitations, which we had assumed to ensure tractability. We aim to relax these
in future work and generalize to more DL workloads.
\noindent
\textbf{Non-Sequential Architectures.} Just like pipeline-parallel systems, \textsc{Hydra}~focuses on models that
are largely a sequence of layers (some branching is fine). Thankfully, the most critical GPU-memory-bottlenecked
models in modern DL, viz., Transformers, satisfy this assumption. But note that \textsc{Hydra}~also supports large CNNs and MLPs
out of the box today. Recurrent neural networks (RNNs) and graph neural networks (GNNs) have more complex dependencies
across shard units, and we leave it to future work to study those model families in depth.
\noindent
\textbf{Data Parallelism.}
When there are more models than devices, \textsc{Hydra}~already achieves near-optimal speedups. But if there are fewer models
than devices, \textsc{Hydra}~may under-utilize the devices due to a limitation inherited from task parallelism. Hybridizing
data parallelism with SHARP can raise resource utilization in this regime. We leave such sophisticated
hybrid-of-hybrids parallelization to future work.
\section{Conclusion}
Training larger-than-GPU-memory DL models is an increasingly critical need for DL users. Yet the existing
landscape of ``model parallelism'' tools offers subpar scalability and parallelism, while often
massively underutilizing GPUs. We present \textsc{Hydra}, a new system for large-model DL training inspired
by the design and implementation of RDBMSs. We identify a judicious mix of data systems techniques--some novel
and some classical RDBMS ideas adapted to DL (such as sharding, spilling, and double buffering)--to enable large-model
training even on a single GPU. By further exploiting the high degree of parallelism in multi-model training, we
devise a novel hybrid parallel execution technique inspired by multi-query optimization.
Our work offers practical benefits to DL practitioners, opens new lines of investigation in model parallelism
research, and shows that the DL systems world can benefit from adapting data systems techniques from the
RDBMS world to enable more seamless scalability and parallelism for DL workloads.
\eat {
As for future work, we would like to improve the efficiency of \textsc{Hydra}~in cases where there are fewer models
than devices. This will require further hybridization of our ideas with data parallelism and pipeline parallelism.
We also plan to expand support in \textsc{Hydra}~for more complex non-sequential neural architectures such as
RNNs and GNNs. Finally, we aim to expand support in \textsc{Hydra}~for other popular DL tools and emerging
DL-oriented compute hardware beyond GPUs.
}
\def\section{\@startsection{section}{0}{\z@}{5.5ex plus .5ex minus
1.5ex}{2.3ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{1}{\z@}{3.5ex plus .5ex minus
1.5ex}{1.3ex plus .2ex}{\normalsize\bf}}
\def\subsubsection{\@startsection{subsubsection}{2}{\z@}{-3.5ex plus
-1ex minus -.2ex}{2.3ex plus .2ex}{\normalsize\sl}}
\renewcommand{\@makecaption}[2]{%
\vskip 10pt
\setbox\@tempboxa\hbox{\small #1: #2}
\ifdim \wd\@tempboxa >\hsize
\small #1: #2\par
\else
\hbox to\hsize{\hfil\box\@tempboxa\hfil}
\fi}
\def\citenum#1{{\def\@cite##1##2{##1}\cite{#1}}}
\newcount\@tempcntc
\def\@citex[#1]#2{\if@filesw\immediate\write\@auxout{\string\citation{#2}}\fi
\@tempcnta\z@\@tempcntb\m@ne\def\@citea{}\@cite{\@for\@citeb:=#2\do
{\@ifundefined
{b@\@citeb}{\@citeo\@tempcntb\m@ne\@citea\def\@citea{,}{\bf ?}\@warning
{Citation `\@citeb' on page \thepage \space undefined}}%
{\setbox\z@\hbox{\global\@tempcntc0\csname b@\@citeb\endcsname\relax}%
\ifnum\@tempcntc=\z@ \@citeo\@tempcntb\m@ne
\@citea\def\@citea{,}\hbox{\csname b@\@citeb\endcsname}%
\else
\advance\@tempcntb\@ne
\ifnum\@tempcntb=\@tempcntc
\else\advance\@tempcntb\m@ne\@citeo
\@tempcnta\@tempcntc\@tempcntb\@tempcntc\fi\fi}}\@citeo}{#1}}
\def\@citeo{\ifnum\@tempcnta>\@tempcntb\else\@citea\def\@citea{,}%
\ifnum\@tempcnta=\@tempcntb\the\@tempcnta\else
{\advance\@tempcnta\@ne\ifnum\@tempcnta=\@tempcntb \else\def\@citea{--}\fi
\advance\@tempcnta\m@ne\the\@tempcnta\@citea\the\@tempcntb}\fi\fi}
\makeatother
\section{Introduction}
In the studies that are now being done to prepare for physics at the LHC,
many new approaches have been proposed to the old problem of generating
parton showers. The workhorse event generators
PYTHIA~\cite{PYTHIA} and HERWIG~\cite{HERWIG} generate parton
showers by successive radiations from individual partons. The `splitting
functions' that define the radiation pattern are taken to be the
kernels in the Altarelli-Parisi equation~\cite{AP,Dokshitzer}.
This guarantees that the
radiation pattern is correct in the region in which two partons become
collinear. Marchesini and Webber pointed out that it is also important
to include color interference between emissions from different
partons~\cite{MW}.
In the workhorse generators, this is implemented by angular ordering
of emissions.
The program ARIADNE, by Andersson, Gustafson, L\"onnblad, and
Pettersson, took a different approach,
implementing color coherence
by considering the QCD dipole to be the basic object that
radiates a parton~\cite{colordipoles,ARIADNE}. The
basic branching process in a parton shower is then a
splitting in which two partons forming a color dipole radiate a third
parton.
This approach has been
taken up recently by a number of authors. It is the basis for the
VINCIA shower by Giele, Kosower, and Skands~\cite{VINCIA} and the parton
shower implementation in SHERPA by Krauss and Winter~\cite{SHERPAdipoles}.
We are also developing a parton shower based on this approach~\cite{hydra}.
In the years between ARIADNE and the newer works, the term `dipole' has been
applied in QCD to a different strategy based on $1\to 2$ splittings with
recoil taken up by a third particle~\cite{CataniSeymour}. To avoid confusion,
we will follow \cite{VINCIA} in calling the initial two-parton state an
`antenna' and a branching process with $2\to 3$ splittings an `antenna
shower'.
Central to the antenna shower is the $2\to 3$ splitting function, the function
that gives the relative branching probabilities as a function of the final
momenta. The original ARIADNE program used an {\it ad hoc} proposal
satisfying the basic consistency requirements.
It would be better to have a prescription that can be directly
derived from QCD.
Splitting to three partons has been studied in great detail in the QCD
literature,
but not for this application. Collinear systems of three partons are a
part of the infrared structure of QCD at next-to-next-to-leading order,
and calculations that reach this level need an explicit prescription for
treating this set of infrared singularities. Kosower~\cite{Kosower}
defined the `antenna function' as a basic starting point for the
analysis of this problem. Many authors have computed
antenna functions~\cite{Campbell,Catani,Duhr}.
Quite recently, Gehrmann-De Ridder, Gehrmann, and Glover have built a complete
formalism of `antenna subtraction' for NNLO calculations~\cite{Gehrmann}.
The kernel in their theory can be
interpreted as a $2\to 3$ splitting function, and it has been used
to perform $2\to 3$ splitting in the VINCIA shower~\cite{VINCIA}.
In this paper, we will take a much more direct route to the construction
of $2\to 3$ splitting functions. We will compute these functions by
writing local operators that create two-parton final states and computing
their 3-parton matrix elements. These calculations are very straightforward.
They can be used to treat individually all possible sets of polarized
initial and final partons.
This paper is organized as follows: In Section 2, we will present our
complete set of spin-dependent
$2\to 3$ splitting functions. In Section 3,
we will give the derivation for the cases with total spin zero. In
Sections 4 and 5, we will give the derivation for the cases with nonzero total
spin.
All of these derivations will be done in the kinematics of final-state
radiation. This is the easiest situation to visualize and understand.
However, the same splitting functions can be used, after crossing, to describe
parton emissions that involve initial-state particles. We will explain
how to use our expressions for initial-state showers in Section 6.
The $1\to 2$ Altarelli-Parisi
splitting functions are universal in the sense that they result from a
well-defined singular limit of QCD amplitudes. For $2\to 3$ splitting
functions there is no such universality. The collinear and soft limits
must agree with the known universal values, but away from these limits
there is no unique prescription. Earlier in this introduction, we
made reference to a number of previous proposals for the spin-averaged
antenna splitting functions. All of these,
including the ARIADNE splitting functions, have
the correct soft and collinear limits and so satisfy the basic requirements.
In Section 7, we
will give a detailed comparison of the
$2\to 3$ splitting functions obtained using our method
to previous proposals for these splitting functions.
\section{Proposal for the $2\to 3$ splitting functions}
We begin by defining variables for $2\to 3$ splitting. There are
three cases of splittings that are needed for antenna showers: the
final-final (FF) splitting, in which a third particle is created by
coherent radiation from a two-particle system in the final state;
the initial-final (IF) splitting, in which a third particle is created by
coherent radiation from an initial- and a final-state particle; and
initial-initial (II) splitting, in which a third particle is created by
coherent radiation from two initial-state particles. It is easiest
to understand the kinematics of antenna splitting for the FF case.
In this section, we will explain this kinematics and give a precise
prescription for the splitting functions. In Section 6, we will extend
our prescription to the IF and II cases, in such a way that the same
splitting functions can be used in those cases.
Consider, then, a two-parton final-state system
$(A,B)$ that splits to a 3-parton system $(a, c, b)$, conserving
momentum, as shown in Fig.~\ref{fig:FFkinematics}(a).
Let $s_{ij} = (k_i + k_j)^2$, and let $Q = k_A + k_B = k_a + k_b + k_c$.
\begin{figure}
\begin{center}
\includegraphics[height=2.0in]{FFkinematics.pdf}
\caption{(a) Kinematics of $2\to 3$ splitting in the final state (FF) case.
(b)
Phase space for $2\to 3$ splitting in the FF case. The
six regions corresponding to different orderings of
$s_{ab}$, $s_{ac}$, $s_{bc}$ are shown. The region that should
be well described by an antenna splitting $AB\to acb$ is shaded.}
\label{fig:FFkinematics}
\end{center}
\end{figure}
The fractional invariant
masses in the final state are
\begin{equation}
y_{ab} = {s_{ab}\over s_{AB}} \ , \quad
y_{ac} = {s_{ac}\over s_{AB}} \ , \quad
y_{bc} = {s_{bc}\over s_{AB}} \ .
\eeq{ydefs}
The momentum fractions of the three particles in the $(AB)$ frame are
\begin{equation}
z_{a} = {2 Q \cdot k_a\over s_{AB}} \ , \quad
z_{b} = {2 Q \cdot k_b\over s_{AB}} \ , \quad
z_{c} = {2 Q\cdot k_c\over s_{AB}} \ .
\eeq{zdefs}
These obey the identities
\begin{equation}
y_{ab} = (1 - z_c)\ , \quad y_{ac} = (1 - z_b)\ , \quad
y_{bc} = (1 - z_a)\ .
\eeq{yzids}
and
\begin{equation}
y_{ab} + y_{ac} + y_{bc} = 1 \ , \quad z_a + z_b + z_c = 2 \ .
\eeq{yzmoreids}
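The identities \leqn{yzids} and \leqn{yzmoreids} follow directly from momentum conservation. As a quick numerical cross-check (ours, with arbitrary helper names; not part of the derivation), the short Python script below builds a random massless three-parton final state in the $(AB)$ rest frame and verifies them:

```python
import math, random

def minkowski(p, q):
    # Minkowski product p.q with metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def random_massless(E):
    # massless 4-vector of energy E in a random direction
    ct = random.uniform(-1, 1)
    ph = random.uniform(0, 2*math.pi)
    st = math.sqrt(1 - ct*ct)
    return [E, E*st*math.cos(ph), E*st*math.sin(ph), E*ct]

random.seed(1)
ka = random_massless(random.uniform(0.5, 2.0))
kb = random_massless(random.uniform(0.5, 2.0))
# choose k_c to balance three-momentum, so Q = k_a + k_b + k_c is at rest
pc = [-(ka[i] + kb[i]) for i in (1, 2, 3)]
kc = [math.sqrt(sum(x*x for x in pc))] + pc
Q  = [ka[i] + kb[i] + kc[i] for i in range(4)]

sAB = minkowski(Q, Q)
pairs = {'ab': (ka, kb), 'ac': (ka, kc), 'bc': (kb, kc)}
y = {ij: 2*minkowski(ki, kj)/sAB for ij, (ki, kj) in pairs.items()}
z = {i: 2*minkowski(Q, k)/sAB for i, k in {'a': ka, 'b': kb, 'c': kc}.items()}

assert abs(y['ab'] - (1 - z['c'])) < 1e-12   # eq. (yzids)
assert abs(y['ac'] - (1 - z['b'])) < 1e-12
assert abs(y['bc'] - (1 - z['a'])) < 1e-12
assert abs(sum(y.values()) - 1) < 1e-12      # eq. (yzmoreids)
assert abs(sum(z.values()) - 2) < 1e-12
```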
The FF phase space covers the triangle $z_a \leq 1$, $z_b \leq 1$,
$z_a + z_b \geq 1$. We can divide this phase space into
six triangles, each of which has a different ordering of the
three quantities $y_{ab}$, $y_{ac}$, $y_{bc}$, as
shown in Fig.~\ref{fig:FFkinematics}(b). An
antenna shower should give an accurate description
of the dynamics in the two regions $y_{ac} < y_{bc} < y_{ab}$,
$y_{bc} < y_{ac} < y_{ab}$ that are shaded in the figure.
A general problem in the generation of QCD radiation is that of
possible double-counting. Consider, for example, the process
$e^+e^- \to q g g \bar q$. In some part of the phase space, the first
$g$ can be considered to be
radiated from the antenna of the $q$ and the second $g$; in
another, the second $g$ can be considered to be radiated from the
first $g$ and the $\bar q$. These regions should be disjoint in the
full 4-body phase space. The complete solution to the problem is
beyond the scope of this paper. In simple terms, though, we can
make the separation by choosing the radiated gluon to be softer
than the gluon from which it radiates. This corresponds to
integrating each antenna only over the shaded region
in Fig.~\ref{fig:FFkinematics}(b). A similar approximate
solution to the double-counting problem will apply
in the other kinematic regions discussed in Section 6. A more detailed
discussion of this issue can be found in \cite{VINCIA,hydra}.
Radiation from different QCD antennae is strictly independent
and non-interfering
only in the limit of a large number of colors in QCD, $N_c \gg 1$. Keeping
only terms leading in $N_c$ is known to be a good approximation to full
QCD in many circumstances. In particular, parton shower algorithms are
correct only to leading order in $N_c$. In this paper, we will explicitly
work only to the leading order for large $N_c$.
In the limit of large $N_c$,
the rate for a $2\to 3$ splitting is given by a formula of the form
\begin{equation}
N_c {\alpha_s\over 4\pi} \int dz_a\, dz_b\ {\cal S}(z_a,z_b,z_c) \ .
\eeq{splitformal}
For example, in $e^+e^-\to q_- g_+ \bar q_+$,
\begin{equation}
{1\over \sigma_0} {d \sigma\over d z_a d z_b} = N_c
{\alpha_s\over 4\pi} { z_a^2\over (1-z_a)
(1-z_b) } \ ,
\eeq{exampleqgq}
where $(a,c,b)$ are the $(q,g,\bar q)$, respectively, $-$ and $+$ denote
left- and right-handed helicity, and
$\sigma_0$ is the cross section for
$e^+e^-\to q_- \bar q_+$~\cite{correctoneoverN}.
Eq. \leqn{splitformal} will be our basic formula of reference.
Using this notation, we can write the various $2\to 3$ splitting functions
as
\begin{equation}
{\cal S} = {{\cal N}(z_a,z_b,z_c) \over y_{ab} y_{ac} y_{bc}} \ ,
\eeq{Sproposal}
where the numerator is a simple function of the $z_i$. For example, for
the splitting $q_- \bar q_+ \to q_- g_+ \bar q_+$ given above,
\begin{equation}
{\cal N} = y_{ab} z_a^2 = (1 - z_c) z_a^2 \ .
\eeq{Nexample}
In Table~\ref{tab:allN}, we give our proposal for
the numerator functions for all possible
cases of massless quark and gluon splittings.
The expressions are all monomials in the $y_{ij}$ and $z_j$.
In the FF kinematics, all
of the $y_{ij}$ and $z_i$ are positive and so
${\cal S}(z_a,z_b,z_c)$ in \leqn{Sproposal}
is always positive. In IF and II kinematics, some $y_{ij}$ and $z_i$
will become negative. In most cases, the correct prescription is to
take ${\cal S}(z_a,z_b,z_c)$ to be the absolute value of the
expression in Table~\ref{tab:allN}. However, there is a line within the
IF region where $z_a$ or $z_b$ crosses from positive to negative values.
A few entries in the Table change sign across this line.
We recommend that those entries be set to zero when $z_a$ or $z_b$ are
negative. We will give a detailed discussion of these points in
Section 6.
The splitting functions ${\cal S}$ must give the correct universal
behavior in the soft and collinear limits. In the soft limit, $z_c \to 0$,
the numerators must go to 1 if the flavor and helicity of the final
partons $a$ and $b$ match those of the initial partons $A$ and $B$;
otherwise, the numerators must go to 0. It is easy to check that this
test is satisfied.
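This soft-limit test can also be run mechanically. As an illustration (ours; the entry labels are ad hoc), the following sympy snippet substitutes $z_a, z_b \to 1$ (equivalently $z_c \to 0$) into a few numerator entries of Table~\ref{tab:allN} and checks that they tend to 1 or 0 as required:

```python
import sympy as sp

za, zb = sp.symbols('z_a z_b', positive=True)
yab, yac = za + zb - 1, 1 - zb   # identities (yzids): y_ab = 1 - z_c, etc.

# sample numerator entries from Table 1; labels are (h_a, h_c, h_b)
entries = {
    'g+g+ -> +++': sp.Integer(1),
    'g+g+ -> +-+': yab**4,        # a, b match A, B: numerator -> 1
    'g+g+ -> ++-': yac**4,        # helicity of b flips: numerator -> 0
    'q-qbar+ -> -++': yab*za**2,  # a, b match A, B: numerator -> 1
}
# the soft limit z_c -> 0 corresponds to z_a -> 1, z_b -> 1
soft = {k: e.subs({za: 1, zb: 1}) for k, e in entries.items()}
assert soft['g+g+ -> +++'] == 1
assert soft['g+g+ -> +-+'] == 1
assert soft['g+g+ -> ++-'] == 0
assert soft['q-qbar+ -> -++'] == 1
```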
\begin{table}
\begin{center}
\begin{tabular}{c|cccccccc}
& $+++$ & $++-$ & $+-+$ & $-++$ & $--+$ & $-+-$ & $+--$ & $---$ \\ \hline
$ g_+ g_+ \to ggg$ &
1 & $ y_{ac}^4 $& $ y_{ab}^4$ & $ y_{bc}^4$
& 0 & 0 & 0 & 0 \\
$ g_- g_+ \to ggg $&
0 & 0 & $ y_{bc}^4$ & $z_a^4$ &
$ z_b^4$ &$ y_{ac}^4$ & 0& 0 \\
$ g_+ g_+ \to \bar q q g$ &
- & - & $ y_{ab}^3 y_{bc}$ & $y_{ab} y_{bc}^3$
& - & 0 & 0 & - \\
$ g_- g_+ \to \bar q q g$ &
- & - & $ y_{ab} y_{bc}^3 z_b^2 $ & $ z_a^2 z_b^2 y_{ab} y_{bc}$
& - & 0 & 0 & - \\
$ q_- \bar q_+ \to q g \bar q$ &
- & - & - & $ y_{ab} z_a^2 $ & $ y_{ab} z_b^2 $ & - & -
& - \\
$ q_- \bar q_- \to q g \bar q$ &
- & - & - & - & - & $y_{ab}^3$ &
- & $y_{ab}$ \\
$ q_- g_- \to qgg$ &
- & - & - & 0 & $ y_{ac}^4$ & $ y_{ab}^3 z_b$ & - & $z_a$ \\
$ q_- g_+ \to qgg$ &
- & - & - & $ z_a^3 $ & $ y_{ab} z_b^3$ & $y_{ac}^4$
& - & 0 \\
$ q_- g_- \to q \bar q q$&
- & - & - & - & $ y_{ab} y_{ac}^3$ & $y_{ab}^2 y_{ac} z_b$
& - & - \\
$ q_- g_+ \to q \bar q q$&
- & - & - & - & $ z_a y_{ab} y_{ac} z_b^2 $ & $ z_a y_{ab} y_{ac}^3$
& - & - \\
\end{tabular}
\caption{Numerator functions ${\cal N}(z_a,z_b,z_c)$ for the
spin-dependent
$2\to 3$ splitting functions $AB \to acb$: $ {\cal S } =
{\cal N}/(y_{ab} y_{ac} y_{bc})$. Each line gives a choice of $AB$.
The labels denote the polarization of the
three final particles with the radiated particle $c$ in the center:
$(h_a, h_c, h_b)$. The empty columns are forbidden by quark chiral
symmetry. By the P and C invariance of QCD, the same expressions
apply after exchanging $- \leftrightarrow +$, $q\leftrightarrow \bar q$,
or $ABacb \leftrightarrow BAbca$.}
\label{tab:allN}
\end{center}
\end{table}
In the collinear limits, we will insist that each antenna
has the collinear behavior required in QCD. One often hears the
following statement about soft and collinear limits:
In dipole splitting ($1\to 2$ emission), each dipole has the
correct collinear behavior but the correct soft behavior is obtained
by combining neighboring
dipoles. In antenna splitting ($2\to 3$ emission), each
antenna has the correct soft limit but the correct collinear behavior
is obtained by combining neighboring antennae. However, in the
large $N_c$ limit, which we take to guide our intuition, different antennae
are independent radiators with different, non-interfering, colors flowing
in them. From the viewpoint of this limit,
each antenna, separately, must give both
the correct pattern of soft radiation and the correct pattern of
collinear radiation.
This philosophy differs from that of the ARIADNE
group~\cite{colordipoles,ARIADNE} and of \cite{SHERPAdipoles}. We will
discuss this point further when we compare with their results in
Section 7.
The collinear radiation from a given hard gluon is
then the sum of two contributions, one from each of the two
antennae to which that hard gluon belongs. In the large $N_c$ limit,
these correspond to radiation from the color and anticolor lines of
the gluon. A single antenna, which has one of these contributions,
then has $\frac{1}{2}$ of the standard collinear emission rate. This factor
of $\frac{1}{2}$ enters the check we will perform in a moment. The
factor comes entirely from bookkeeping and is independent of the
question of double-counting discussed briefly earlier in this Section.
We now discuss the check of collinear limits.
Consider the limit in which $c$ becomes collinear with $a$. In
this limit,
\begin{equation}
z_c \to z\ ,\qquad z_a \to (1-z)\ , \quad
z_b \to 1\ , \quad y_{ac}\to 0 \ .
\eeq{collinear}
The $2\to 3$ splitting function must reduce to
\begin{equation}
{\cal S} \to {1\over y_{ac}} P(z) \ ,
\eeq{Scollinear}
where $P(z)$ is the relevant spin-dependent
Altarelli-Parisi splitting function.
These were presented in the original Altarelli-Parisi paper~\cite{AP} and
are reviewed in Table~\ref{tab:Altarelli}. The functions are
normalized as in \leqn{splitformal}, and as described in the
previous paragraph: We take the large $N_c$
limit and divide by 2 where necessary to give the contribution from one
QCD antenna. The denominator of
\leqn{Sproposal} tends to $y_{ac} z(1-z)$ in this limit. Then it is
easy to check that the numerators match correctly in all cases. The limit
in which $c$ becomes collinear with $b$ can be checked in the same way.
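One instance of this check can be made explicit with a short symbolic computation (ours, using sympy; the parametrization by $y_{ac}=\epsilon$ is our own bookkeeping device). For $q_-\bar q_+ \to q_- g_+ \bar q_+$, the coefficient of the $1/y_{ac}$ pole reduces to the Altarelli-Parisi function $(1-z)^2/z$ of Table~\ref{tab:Altarelli}:

```python
import sympy as sp

z, eps = sp.symbols('z epsilon', positive=True)
# collinear limit c || a: y_ac = eps -> 0, with z the momentum fraction of c
za = 1 - z
zb = 1 - eps          # y_ac = 1 - z_b = eps
zc = 2 - za - zb
yab, yac, ybc = 1 - zc, 1 - zb, 1 - za

N = yab * za**2       # Table 1 entry for q_- qbar_+ -> q_- g_+ qbar_+
S = N / (yab * yac * ybc)
limit = sp.limit(eps * S, eps, 0)          # coefficient of the 1/y_ac pole
P = (1 - z)**2 / z    # Table 2: q_- -> g_+ q_-, column (+,-)
assert sp.simplify(limit - P) == 0
```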
\begin{table}
\begin{center}
\begin{tabular}{c | cccc}
& $++$ & $-+$ & $+-$ & $--$ \\ \hline
$ g_+ \to gg$\ :
& $ 1 / z(1-z)$ & $(1-z)^3/ z$ & $z^3 / (1-z)$ & 0 \\
$ g_+ \to q \bar q$\ :
& - & $ (1-z)^2$ & $ z^2 $ & - \\
$ q_- \to gq$\ :
& - & - & $(1-z)^2/z$ & $1/z$ \\
$ q_- \to qg$\ :
& - & $z^2/(1-z)$ & - & $1/(1-z)$ \\
\end{tabular}
\caption{Spin-dependent Altarelli-Parisi splitting functions $P(z)$
for splittings $B \to cb$. The
labels denote the polarization of the two final particles with the
radiated particle first: $(h_c,h_b)$. The empty columns are forbidden
by quark chiral
symmetry. By the P and C invariance of QCD,
the same expressions
apply after exchanging $- \leftrightarrow +$ or $q\leftrightarrow \bar q$.}
\label{tab:Altarelli}
\end{center}
\end{table}
When the collinear limits and the soft limit are all
nonzero, there is a unique monomial
of the $y$'s and $z$'s that gives all limits correctly. In the other
cases, there is some ambiguity. In all cases, it would be desirable
if the results in Table~\ref{tab:allN} could be derived directly by
simple Feynman diagram computations. In the next few sections, we will
present those derivations.
\section{Spin-0 case}
To compute the $2\to 3$ splitting functions, we will use the following method:
Write an operator that, at the leading order, creates a 2-parton state
with definite helicity. Then, compute the 3-particle matrix element.
This realizes in a very simple way the splitting process illustrated in
Fig.~\ref{fig:FFkinematics}.
To create massless quarks and antiquarks of definite helicity, we will use
the appropriate chiral fermion fields. To create gluons of definite
helicity, we will use the operators
\begin{equation}
\sigma \cdot F = \frac{1}{2} \sigma^m \bar \sigma^n F_{mn} \ , \qquad
\bar \sigma \cdot F = \frac{1}{2} \bar \sigma^m \sigma^n F_{mn} \ ,
\eeq{sigmaFs}
where $\sigma^m$, $\bar \sigma^m$ are the $2\times 2$ matrix entries of the
Dirac matrices in a chiral basis and $F_{mn}$ is the gluon field strength
tensor. At leading order, $\sigma \cdot F$ creates a $+$ helicity gluon,
and $\bar \sigma \cdot F$ creates a $-$ helicity gluon.
The 2-parton state $g_+g_+$ in the first line of Table~\ref{tab:allN} can
be created from the spin-0 operator
\begin{equation}
{\cal O} = \frac{1}{2} {\mbox{\rm tr}} [ (\sigma\cdot F)^2 ] \ .
\eeq{firstop}
We can then compute the splitting function for this polarized initial state
explicitly from the definition
\begin{equation}
{\cal S}(z_a, z_c, z_b) = Q^2 \left| {{\cal M}({\cal O} \to acb)
\over {\cal M}({\cal O} \to AB) } \right|^2
\eeq{Sdefin}
In the next few sections, we will compute all of the splitting functions
in Table~\ref{tab:allN} using this formula, with a different choice of
the operator ${\cal O}$ for each line of the table.
To evaluate \leqn{Sdefin}, we need to compute the matrix elements
of ${\cal O}$, with total momentum $Q$ injected, to 3-gluon final states.
The result can be expressed in terms of color-ordered amplitudes. We
identify the color-ordered amplitude that multiplies the color structure
${\mbox{\rm tr}} [T^a T^c T^b]$ with the splitting function. To carry out these
computations, we will use the spinor product formalism. That is,
instead of working with 4-vectors, we will use as our basic objects
the spinor products
\begin{equation}
\spa ij = \bar u_-(i) u_+(j) \ , \qquad \spb ij = \bar u_+(i) u_-(j)\ .
\eeq{spinorproducts}
These objects obey
\begin{equation}
|\spa ij|^2 = |\spb ij|^2 = s_{ij} \ .
\eeq{spsquare}
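The relation \leqn{spsquare} is easy to verify numerically with an explicit spinor construction. The sketch below (ours; it uses a standard light-cone parametrization of the holomorphic spinor, valid for $E + p_z > 0$, with arbitrary helper names) checks $|\spa ij|^2 = s_{ij}$ for a random pair of massless momenta:

```python
import cmath, math, random

def lam(p):
    # holomorphic two-component spinor for massless p = (E, px, py, pz),
    # in a light-cone parametrization (requires E + pz > 0)
    E, px, py, pz = p
    pp = E + pz
    return (cmath.sqrt(pp), (px + 1j*py) / cmath.sqrt(pp))

def ang(p, q):
    # the spinor product <pq>
    lp, lq = lam(p), lam(q)
    return lp[0]*lq[1] - lp[1]*lq[0]

def sdot(p, q):
    # s_pq = (p + q)^2 = 2 p.q for massless p, q
    return 2*(p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3])

def rnd():
    # random massless momentum, kept away from the -z axis so E + pz > 0
    ct = random.uniform(-0.9, 0.9)
    ph = random.uniform(0, 2*math.pi)
    E = random.uniform(0.5, 2.0)
    st = math.sqrt(1 - ct*ct)
    return (E, E*st*math.cos(ph), E*st*math.sin(ph), E*ct)

random.seed(7)
p1, p2 = rnd(), rnd()
assert abs(abs(ang(p1, p2))**2 - sdot(p1, p2)) < 1e-12
```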
Methods for QCD
computations with spinor products and color-ordering are explained in
\cite{ManganoParke,Dixon}. In this notation,
the matrix element for ${\cal O}$ to create a
$g_+g_+$ final state is
\begin{equation}
\bra{g_+g_+}{\cal O} \ket{0} = {\spb AB}^2 \ .
\eeq{firstOnorm}
\begin{figure}
\begin{center}
\includegraphics[height=1.2in]{gggFeynman.pdf}
\caption{Feynman diagrams for the computation of the $gg\to ggg$ splitting
functions.}
\label{fig:ggg}
\end{center}
\end{figure}
The three-gluon matrix elements of the operator \leqn{firstop} are given
by the diagrams in Fig.~\ref{fig:ggg}. These diagrams have already been
analyzed by Dixon, Glover, and Khoze as a part of their analysis of
the coupling of the Higgs boson to multi-gluon states~\cite{LanceGlover}.
They find
\begin{Eqnarray}
{\cal A}({\cal O} \to g_+g_+g_+) &= & {s_{AB}^2\over \spa ac \spa cb \spa ba}
\nonumber \\
{\cal A}({\cal O} \to g_+g_+g_-) &= &
{{ \spb ac}^4\over \spb ac \spb cb \spb ba}
\nonumber \\
{\cal A}({\cal O} \to g_+g_-g_+) &= &
{{ \spb ab}^4\over \spb ac \spb cb \spb ba}
\nonumber \\
{\cal A}({\cal O} \to g_-g_+g_+) &= &
{{ \spb bc}^4\over \spb ac \spb cb \spb ba}
\eeqa{Aforggtogggzero}
and zero for the other four cases. After squaring, using \leqn{spsquare},
and dividing by the square of \leqn{firstOnorm}, we obtain the first line of
Table~\ref{tab:allN}.
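The squaring step can be checked symbolically. The sympy snippet below (ours, not from \cite{LanceGlover}) applies the definition \leqn{Sdefin} to the squared amplitudes of \leqn{Aforggtogggzero}, using $|\spa ij|^2 = |\spb ij|^2 = s_{ij}$ and the norm \leqn{firstOnorm}, and recovers the first line of the table:

```python
import sympy as sp

sab, sac, sbc = sp.symbols('s_ab s_ac s_bc', positive=True)
sAB = sab + sac + sbc
yab, yac, ybc = sab/sAB, sac/sAB, sbc/sAB

norm2 = sAB**2                       # |[AB]^2|^2, from eq. (firstOnorm)
amp2 = {                             # |A|^2 from eq. (Aforggtogggzero)
    '+++': sAB**4 / (sac*sbc*sab),
    '++-': sac**4 / (sac*sbc*sab),
    '+-+': sab**4 / (sac*sbc*sab),
    '-++': sbc**4 / (sac*sbc*sab),
}
num = {'+++': 1, '++-': yac**4, '+-+': yab**4, '-++': ybc**4}
for h in amp2:
    S = sAB * amp2[h] / norm2        # eq. (Sdefin) with Q^2 = s_AB
    assert sp.simplify(S - num[h]/(yab*yac*ybc)) == 0
```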
One of the major points of \cite{LanceGlover} is that the results
\leqn{Aforggtogggzero} belong to series of Maximally Helicity Violating
(MHV) amplitudes that have a simple form for any number of gluons emitted.
Actually, all of the amplitudes that we will compute in this paper are
similarly simple and belong to MHV series. The use of MHV amplitudes
to study antenna splitting is explored for higher-order processes
in \cite{Duhr}.
In principle, the initial state $g_+g_+$ could also have been created by
an operator of spin 2, or some higher spin. This would have led to a more
complicated expression for the $2\to 3$ splitting function, with, however,
the same soft and collinear limits. This illustrates the ambiguity in the
definitions of $2\to 3$ splitting functions referred to in the introduction.
The simplest results are obtained using the operator of minimal spin, and
we will make that choice in all of the examples to follow.
The diagram shown in Fig.~\ref{fig:qqg} gives the splitting of the
two-gluon initial state to $\bar q q g$. We find
\begin{Eqnarray}
{\cal A}({\cal O} \to \bar q_+ q_- g_+) &= & { {\spb ab}^2\over \spb ac }
\nonumber \\
{\cal A}({\cal O} \to \bar q_- q_+ g_+) &= & { {\spb cb}^2\over \spb ac }
\eeqa{Aforggtoqqgzero}
There is no splitting to a final $g_-$. This gives the result in the
third line of the table.
\begin{figure}
\begin{center}
\includegraphics[height=1.0in]{qqgFeynman.pdf}
\caption{Feynman diagram for the computation of the $gg \to \bar q q g$
splitting functions.}
\label{fig:qqg}
\end{center}
\end{figure}
The initial state $q_- \bar q_- $ can also be created by a spin 0 operator
\begin{equation}
{\cal O} = \bar q_L q_R \ .
\eeq{qqspinzero}
The matrix element for this operator to create a $q_- \bar q_-$ final state
is
\begin{equation}
\bra{q_-\bar q_-}{\cal O} \ket{0} = \spa AB \ .
\eeq{secondOnorm}
A straightforward calculation gives
\begin{Eqnarray}
{\cal A}({\cal O} \to q_- g_+ \bar q_-) &= &
{ {\spa ab}^2\over \spa ac \spa cb }
\nonumber \\
{\cal A}({\cal O} \to q_- g_- \bar q_-) &= & { s_{AB}\over \spb ac \spb cb }
\eeqa{Aforqqtoqgqzero}
These give the results shown in the sixth line of the table.
\section{Spin-1 and spin-2 case}
In \cite{colordipoles}, the $2\to 3$ splitting function for $q\bar q \to
q g \bar q$ was derived from the cross section for $e^+e^-\to q g \bar q$.
From the point of view of the previous section, this corresponds to creating
the 2- and 3-parton final states using the operator
\begin{equation}
{\cal O} = \bar q_L \gamma^m q_L \ .
\eeq{spinoneoperator}
To obtain a definite matrix element, we must contract this operator with a
polarization vector. A convenient choice is to introduce two new massless
vectors 1 and 2, such that $k_1 + k_2 = k_A + k_B$, and to choose the
polarization vector to be $\epsilon^\mu = \apb 1 {\gamma^\mu} 2 $.
This is effectively the procedure of decaying the massive vector that
couples to the operator \leqn{spinoneoperator} into a pair of massless
vectors to facilitate the analysis; this is a standard method in spinor
product calculations~\cite{KleissStirling}. We then recast
\begin{equation}
{\cal O} = \frac{1}{2} \bar q_L \gamma^m q_L \ \apb 1 {\gamma_m} 2 \ .
\eeq{spinonetwo}
The matrix element of \leqn{spinonetwo} to a $q_- \bar q_+$ state is
\begin{equation}
\bra{q_-\bar q_+}{\cal O} \ket{0} = - \spa 1A \spb 2B \ .
\eeq{thirdOnorm}
The direction of the 1-2 system
chooses the helicity of the final partons. In this
case, there is only one choice, and so the amplitude vanishes when $1$ is
parallel to $A$ or $2$ is parallel to $B$. This will not always be
true in our later examples. But we will always be able to choose
the desired helicity of $A$ and $B$ by choosing
$1$ parallel to $B$ and $2$ parallel to $A$.
The matrix elements for the operator \leqn{spinonetwo}
to create 3-parton final states are
\begin{Eqnarray}
{\cal A}({\cal O} \to q_- g_+ \bar q_+) &= & { {\spa 1a}^2\spb 12
\over \spa ac \spa cb }
\nonumber \\
{\cal A}({\cal O} \to q_- g_- \bar q_+) &= & { {\spb 2b}^2 \spa 12
\over \spb ac \spb cb } \ .
\eeqa{Aforqqtoqgqone}
To compute the results in the fifth line of the table, we must essentially
divide \leqn{Aforqqtoqgqone} by \leqn{thirdOnorm} and square the result.
To do this, we need a prescription for treating the expressions $\spa 1a$
and $\spb 2b$ in the numerators. The problem of relating
the vectors $a$, $b$, $c$ to $A$ and $B$ in an antenna splitting was
discussed at length by Kosower in \cite{KosowerII}; that paper
gives a general treatment in terms of {\it reconstruction functions}
to provide expressions that can be smoothly integrated in higher-order
QCD calculations. This discussion is generalized to
the initial-state channels in \cite{Daleo}.
Here, we will take a more {\it ad hoc} approach
that leads to the simplest formulae with correct singular limits.
Formulae for $\spa 1a$
and $\spb 2b$ that are simple and become
exact in the collinear and soft limits are found by approximating
$a$ collinear with
$A$ and $b$ collinear with $B$. Then identifying $1$ with $B$ and $2$ with
$A$ gives
\begin{equation}
|\spa 1a|^2 = s_{Ba} \to z_a s_{AB}\ , \quad
|\spa 1b|^2 \to 0 \ , \quad |\spa 2a|^2 \to 0 \ , \quad
|\spa 2b|^2 = s_{Ab} \to z_b s_{AB}\ ,
\eeq{onetworeduce}
and similarly for the conjugate products.
Using this prescription, one obtains the fifth line of the table. This is
a more formal version of the argument for these entries already given
in Section~2.
In our calculations, we will encounter two more numerator objects that
require reconstruction, namely, $\spa 1c$ and $\spa 2c$.
The prescription
above gives
\begin{equation}
|\spa 1c|^2 = s_{Bc} \to (y_{bc}/z_b) s_{AB}\ , \quad
|\spa 2c|^2 = s_{Ac} \to (y_{ac}/z_a) s_{AB}\ .
\eeq{onetworeducetwo}
However, it is potentially dangerous to write factors of $z_a$, $z_b$ in the
denominator. We will see in Section 6 that such factors would create
unphysical singularities when continued to the IF kinematics.
These unphysical singularities are avoided in the general formalism
used in \cite{KosowerII}, but at the price of introducing much more
complicated formulae. Fortunately,
we will see that $\spa 1c$ arises only in situations where there is no
collinear singularity with $c$ parallel to $b$. In such cases, the remaining
universal singular terms---the collinear singularity with $c$ parallel to
$a$ and the soft singularity---correspond to kinematic limits with $z_b \to 1$.
A similar consideration applies to $\spa 2c$. Thus, we choose, instead
of using \leqn{onetworeducetwo}, to evaluate these quantities as
\begin{equation}
|\spa 1c|^2 = s_{Bc} \to y_{bc} s_{AB}\ , \quad
|\spa 2c|^2 = s_{Ac} \to y_{ac} s_{AB}\ .
\eeq{onetworeducethree}
This gives an incorrect shape in a region where $a$ and $b$ are
collinear, but, hopefully, we will not use the $AB \to acb$ splitting function
to evaluate the rate to fill this region of phase space.
Another choice for evaluating $\spa 1c$ and $\spa 2c$ is to
replace both expressions by $z_c$.
However, the spinor product $\spa 1c$ vanishes in the $bc$ collinear
limit but not in the $ac$ collinear limit, and conversely for
$\spa 2c$, so this choice does not give the universal singularities
correctly.
We now apply this formalism to compute the second and fourth lines
of Table~\ref{tab:allN}, associated with the $g_-g_+$ antenna. This
antenna is created by the spin-2 operator ${\mbox{\rm tr}} [ \gamma^m (\bar\sigma \cdot F)
\gamma^n ( \sigma \cdot F) ] $. To make a definite calculation, we
need a spin-2 polarization vector. An appropriate choice can be found by
introducing the massless vectors 1 and 2 as above and writing
\begin{equation}
\epsilon^{mn} = \apb 1 {\gamma^m} 2 \ \apb 1 {\gamma^n} 2 \ .
\eeq{spintwoepsilon}
This effectively decays the massive spin-2 particle into two massless spinors.
This
method was introduced in \cite{LePeskin} to compute the relevant amplitudes
for the emission of massive gravitons at high-energy colliders.
With this prescription, we generate the $g_-g_+$ antenna using the operator
\begin{equation}
{\cal O} = {1\over 4}{\mbox{\rm tr}} [ \gamma^m (\bar\sigma \cdot F)
\gamma^n (\sigma \cdot F) ] \apb 1 {\gamma_m} 2 \apb 1 {\gamma_n} 2
\eeq{spintwo}
The matrix element of this operator that creates the 2-parton dipole is
\begin{equation}
\bra{g_- g_+}{\cal O} \ket{0} = {\spa 1A}^2 {\spb 2B}^2 \ .
\eeq{fourthOnorm}
To obtain the correct initial polarizations, we take $1 = B$, $2 = A$ as
before.
The matrix elements to the possible 3-parton final states are
\begin{Eqnarray}
{\cal A}({\cal O} \to g_+g_+g_+) &= & 0 \nonumber \\
{\cal A}({\cal O} \to g_+g_+g_-) &= & { {\spa 1b}^4{\spb 12}^2
\over \spa ab \spa ac \spa cb } \nonumber \\
{\cal A}({\cal O} \to g_+g_-g_+) &= & { {\spa 1c}^4{\spb 12}^2
\over \spa ab \spa ac \spa cb } \nonumber \\
{\cal A}({\cal O} \to g_-g_+g_+) &= & { {\spa 1a}^4{\spb 12}^2
\over \spa ab \spa ac \spa cb } \ ,
\eeqa{Aforgggtwo}
and the conjugates with $1 \leftrightarrow 2$ for the other four combinations.
Applying the reductions \leqn{onetworeduce}, \leqn{onetworeducethree}, we find
the results given in the second line of the table.
The nonzero matrix elements of this operator to $\bar q q g$ final states are
\begin{Eqnarray}
{\cal A}({\cal O} \to \bar q_+ q_- g_+)&=& { {\spa 1c}^2{\spb 2b}^2
\over \spb ac } \nonumber \\
{\cal A}({\cal O} \to \bar q_- q_+ g_+) &=& {{\spa 1a}^2 {\spb 2b}^2
\over \spb ac } \ .
\eeqa{Aforqqgtwo}
The same reduction process gives the results in the fourth line of the
table.
\section{Spin-$\frac{1}{2}$ and spin-$\frac{3}{2}$ cases}
\begin{figure}
\begin{center}
\includegraphics[height=1.2in]{qggFeynman.pdf}
\caption{Feynman diagrams for the computation of the $qg\to qgg$ splitting
functions.}
\label{fig:qgg}
\end{center}
\end{figure}
The cases of quark-gluon antennae can be treated in the same way. There is
one additional subtlety. In QCD, quarks are color triplets and gluons
are color octets, so a quark-gluon operator carries net color. This
means that the matrix element for gluon emission from a
quark-gluon operator is not gauge-invariant unless we allow the gluon also
to be emitted from the initial state. This makes it unclear how to
define a quark-gluon antenna.
We resolve this problem with the following prescription: We consider the
quarks to be color octet particles like the gluons. Then, as in the
previous sections, we extract the color-ordered contribution corresponding
to emission from the antenna. In the limit of large $N_c$, the various
antennae in a process radiate independently. The diagrams contributing
to a quark-gluon antenna in this prescription are shown in Fig.~\ref{fig:qgg}.
The third diagram, with an intermediate quark line, does not appear
in QCD. However, it does nicely provide the missing piece that makes this
sum of diagrams gauge-invariant without radiation from the initial state.
This solution is the same as that found in the earlier work of
Gehrmann-De Ridder, Gehrmann, and Glover~\cite{Gehrmann}.
Those authors computed the
quark-gluon antennae by factorizing the amplitudes for
the decay of a neutralino into a massless gluino plus $gg$ or $q\bar q$.
In their calculation, the off-shell color octet fermion is the
gluino.
With this understanding, we proceed as in the previous Section.
We can generate the $q_-g_-$ antenna using the operator $\bar q_L
(\bar\sigma\cdot
F)$. The polarization spinor can be built by introducing massless
spinors 1 and 2 as above and taking $\ket{2}$ to be this spinor. Then
\begin{equation}
{\cal O} = -i \bar q_L (\bar \sigma \cdot F) \ket{2} \ .
\eeq{spinhalf}
The matrix element of this operator that creates the 2-parton dipole is
\begin{equation}
\bra{q_- g_-}{\cal O} \ket{0} = {\spa AB} {\spb B2} \ .
\eeq{fifthOnorm}
To obtain the correct initial polarizations, we take $1 = B$, $2 = A$.
The matrix elements to the possible 3-parton final states are
\begin{Eqnarray}
{\cal A}({\cal O} \to q_-g_+g_+) &= & 0 \nonumber \\
{\cal A}({\cal O} \to q_- g_-g_+) &= & { {\spa ac}^3{\spa 2c}
\over \spa ab \spa ac \spa cb } \nonumber \\
{\cal A}({\cal O} \to q_-g_+g_-) &= & { {\spa ab}^3{\spa 2b}
\over \spa ab \spa ac \spa cb } \nonumber \\
{\cal A}({\cal O} \to q_-g_-g_-) &= & { s_{AB} \spa 12 \spb 1a
\over \spb ab \spb ac \spb cb } \ .
\eeqa{Aforqgghalf}
Applying the reductions \leqn{onetworeduce}, \leqn{onetworeducethree}, we find
the results given in the seventh line of the table.
The nonzero matrix elements of this operator to $ q \bar q q$ final
states are
\begin{Eqnarray}
{\cal A}({\cal O} \to q_- \bar q_- q_+)&=& {{\spa ac} {\spa 2c}
\over \spa cb } \nonumber \\
{\cal A}({\cal O} \to q_- \bar q_+ q_-) &=& - { {\spa ab}{\spa 2b}
\over \spa cb } \ .
\eeqa{Aforqqqhalf}
The same reduction process gives the results in the ninth line of the
table.
We generate the $q_-g_+$ antenna using the spin-$\frac{3}{2}$
operator $\bar q_L \gamma^m (\sigma\cdot
F)$. This is essentially the supersymmetry current of the system of
gluons and color octet fermions.
The polarization spinor can be built by introducing massless
spinors 1 and 2 as above:
\begin{equation}
{\cal O} = i \bar q_L \gamma^m (\sigma \cdot F) \, 2] \apb 1{ \gamma_m}2
\ .
\eeq{spinthalf}
The matrix element of this operator that creates the 2-parton dipole is
\begin{equation}
\bra{q_- g_+}{\cal O} \ket{0} = {\spa 1A} {\spb 2B}^2 \ .
\eeq{sixthOnorm}
To obtain the correct initial polarizations, we again take $1 = B$, $2 = A$.
The matrix elements to the possible 3-parton final states are
\begin{Eqnarray}
{\cal A}({\cal O} \to q_-g_+g_+) &= & { {\spa 1a}^3{\spb 12}^2
\over \spa ab \spa ac \spa cb } \nonumber \\
{\cal A}({\cal O} \to q_- g_-g_+) &= & { \spa ab {\spb 2b}^3{\spa 12}
\over \spb ab \spb ac \spb cb } \nonumber \\
{\cal A}({\cal O} \to q_-g_+g_-) &= & { \spa ac {\spb 2c}^3{\spa 12}
\over \spb ab \spb ac \spb cb } \nonumber \\
{\cal A}({\cal O} \to q_-g_-g_-) &= & 0 \ .
\eeqa{Aforqggthalf}
Applying the reductions \leqn{onetworeduce}, \leqn{onetworeducethree}, we find
the results given in the eighth line of the table.
The nonzero matrix elements of this operator to $ q \bar q q$ final
states are
\begin{Eqnarray}
{\cal A}({\cal O} \to q_- \bar q_- q_+)&=& {{\spa 1a} {\spb 2b}^2
\over \spb cb } \nonumber \\
{\cal A}({\cal O} \to q_- \bar q_+ q_-) &=& - { {\spa 1a} {\spb 2c}^2
\over \spb cb } \ .
\eeqa{Aforqqqthalf}
The same reduction process gives the results in the tenth line of the
table.
\section{Initial-state showers}
The Feynman diagram computations that we have done to find the antenna
splitting functions for FF splittings can also be applied, by crossing,
to IF and II splittings. The expressions in Table~\ref{tab:allN} are
given in terms of invariant quantities that are unchanged under
crossing. Thus, we can use the expressions in this table directly in
other channels. At worst, a change of the overall sign is required in
some cases. In this section, we will clarify this statement by analyzing
the kinematics of IF and II splittings in the same variables as those used
in Section 2 for FF splittings. In all cases, the kinematics is worked out
for massless partons only. The kinematic discussion in this section
is similar to that presented in \cite{Daleo}.
To begin, we will formalize some of the results quoted in Section 2
for the FF region. The cross section for a process $X \to acb$ is
\begin{equation}
\sigma(X\to acb) = {1\over \Phi_X} {s\over 128 \pi^3} \int dz_a dz_b
| {\cal M}(X\to acb)|^2 \ ,
\eeq{basicFF}
where $\Phi_X$ is the flux factor. Polarization and color indices have
been suppressed. The left-hand side has been integrated over the
orientation of the final state system but is otherwise exact. To
write an expression involving the antenna splitting function, we
approximate
\begin{equation}
{\cal M}(X\to acb) \approx {\cal M}(X \to AB) \cdot g T \cdot {{\cal M}({\cal O} \to acb)
\over {\cal M}({\cal O} \to AB) } \ ,
\eeq{MdecompFF}
where ${\cal O}$ is the operator used in Sections 3--5 to represent the state
$AB$. The factor $gT$ is the QCD coupling and color matrix; after squaring
and summing over colors, this becomes $4\pi \alpha_s N_c$. The splitting
function is defined by \leqn{Sdefin},
\begin{equation}
{\cal S}(z_a, z_c, z_b) = s_{AB} \left| {{\cal M}({\cal O} \to acb)
\over {\cal M}({\cal O} \to AB) } \right|^2 \ .
\eeq{Sdefintwo}
Then
\begin{equation}
\sigma(X\to acb) \approx \sigma(X\to AB)\cdot
{\alpha_s N_c\over 4\pi}\int dz_a dz_b {\cal S}(z_a, z_c, z_b) \ .
\eeq{finalFF}
It is important to note that, in this formula or in \leqn{MdecompFF},
the vectors $k_A$ and
$k_B$ are introduced as part of the approximation. They can be defined
in any way that is consistent with the requirements that $k_A$ and $k_B$
are lightlike, $k_A+k_B = Q$, and $k_A$ and $k_B$ become parallel to
$k_a$ and $k_b$, respectively, in the soft and collinear limits.
The logic of this derivation extends straightforwardly to the IF and II
regions. The major change is that, in these cases, we
need to introduce initial hadrons from which
the initial partons are extracted.
Consider first the IF case. The cross section for a proton of momentum
$P$ to scatter from
a color-singlet system $X$ transferring momentum $Q$ to create a 2-parton
system $cb$ is
\begin{equation}
\sigma (p X \to cb) = \int dx_a f(x_a)\ {1\over \Phi_{aX}} {1\over 16\pi}
\int d \cos\theta_* \ | {\cal M}(aX\to cb)|^2 \ ,
\eeq{basicIF}
where $\cos\theta_*$ is the scattering angle in the $cb$ center of mass
system.
We will approximate this formula using the expression
analogous to \leqn{MdecompFF}
\begin{equation}
{\cal M}(aX\to cb) \approx {\cal M}(AX \to B) \cdot g T \cdot {{\cal M}(a{\cal O} \to cb)
\over {\cal M}(A{\cal O} \to B) } \ .
\eeq{MdecompIF}
Then the splitting function is defined by
the same expression ${\cal S}$ as in \leqn{Sdefintwo}, but now
analytically continued into the new kinematic region. If a fermion line
is crossed from the final to the initial state, an extra factor (-1) should
be included. In addition, $s_{AB}$ in \leqn{Sdefintwo} is negative
in this region, giving an extra minus sign.
The decomposition of the amplitude is illustrated in
Fig.~\ref{fig:IFkinematics}(a). The kinematics can be described by
variables $y_{ij}$ and $z_i$ obeying the relations \leqn{ydefs} to
\leqn{yzmoreids}. However, the vectors $k_A$, $k_a$ now have
negative timelike
component, and the vector $Q = k_A + k_B = k_a + k_b + k_c$ is spacelike,
$Q^2 = s_{AB} < 0$. The phase space for this region covers the quadrilateral
shown in Fig.~\ref{fig:IFkinematics}(b). The region of integration is
infinite, since $z_a$ can become very large, but the integral is cut off
at large $z_a$ by the parton distribution function.
The line $z_a > 1$, $z_b = 1$
corresponds to the region of initial state radiation, $c$ parallel to $a$.
The line $z_a = 1$, $0 <z_b < 1$
corresponds to the region of final state radiation, $c$ parallel to $b$.
The line $z_a + z_b = 1$ corresponds to $b$ parallel to $a$, that is,
$b$ as initial state radiation from the primary $a$. An
antenna shower should give an accurate description
of the dynamics in the two regions $|y_{ac}| < |y_{bc}| < 1$,
$|y_{bc}| < |y_{ac}| < |y_{ab}|$ that are shaded in the figure. The new
constraint $|y_{bc}| < 1$ is just $|s_{bc}| < |Q^2|$,
which is stronger than the constraint that this invariant is less
than $|s_{ab}|$.
\begin{figure}
\begin{center}
\includegraphics[height=2.0in]{IFkinematics.pdf}
\caption{(a) Kinematics of $2\to 3$ splitting in the initial-final (IF) case.
(b) Phase space for $2\to 3$ splitting in the IF case. The
eight regions corresponding to different orderings of
$|s_{ab}|$, $|s_{ac}|$, $s_{bc}$, $|Q^2|$ are shown. The region that should
be well described by an antenna splitting $AB\to acb$ is shaded.}
\label{fig:IFkinematics}
\end{center}
\end{figure}
To decompose \leqn{basicIF} into an appropriate form, we choose $p_A$ and
$p_B$ and then change variables. Let $p_A$ be chosen in the direction of
$p_a$, so that $p_a = z_a p_A$, $z_a > 1$.
Then $p_B = Q - p_A$. We have
\begin{equation}
p_a = x_a P \ , \quad p_A = x_A P \ , \quad \mbox{so} \
\quad x_a = z_a x_A \ ,
\eeq{xzforIF}
with $x_A$ having the definite value $x_A = -Q^2/2P\cdot Q$ associated with
scattering a massless particle from a local current. For the reaction
$a Q \to bc$, $s+t+u = Q^2$, so $t+u = Q^2 - s = Q^2 z_a$. Then
\begin{equation}
t = Q^2 (1-z_b) = \frac{1}{2} Q^2 z_a (1-\cos\theta_*)\ .
\eeq{zbzarel}
Using these formulae, we can change variables from
$(x_a, \cos\theta_*)$ to $(z_a, z_b)$. The Jacobian of this transformation
is
\begin{equation}
J = { \partial(x_a, \cos\theta_*)\over \partial(z_a, z_b)} = {2 x_A \over z_a}\ .
\eeq{findJIF}
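This Jacobian can be checked symbolically. The following short sympy sketch is our own cross-check, not part of the original derivation; it encodes only $x_a = z_a x_A$ and solves \leqn{zbzarel} for $\cos\theta_*$:

```python
import sympy as sp

z_a, z_b, x_A = sp.symbols('z_a z_b x_A', positive=True)

# x_a = z_a x_A; cos(theta_*) solved from t = Q^2 (1-z_b) = (1/2) Q^2 z_a (1-cos)
x_a = z_a*x_A
cos_th = 1 - 2*(1 - z_b)/z_a

# Jacobian of (x_a, cos_th) with respect to (z_a, z_b)
J = sp.Matrix([[sp.diff(x_a,    z_a), sp.diff(x_a,    z_b)],
               [sp.diff(cos_th, z_a), sp.diff(cos_th, z_b)]]).det()

assert sp.simplify(J - 2*x_A/z_a) == 0   # reproduces eq. (findJIF)
```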
Thus,
\begin{Eqnarray}
\sigma (p X \to cb) &=& \int {dz_a\over z_a^2}
\int dz_b \int dx_A x_A f(z_a x_A) \
\delta(x_A + Q^2/2P\cdot Q) \nonumber \\
& & \hskip 1.6in \cdot {1\over \Phi_{AX}} {1\over 8\pi}
| {\cal M}(aX\to cb)|^2 \ .
\eeqa{exactIF}
This is an exact rewriting of \leqn{basicIF}. Now apply the
approximation \leqn{MdecompIF} and group terms to form
\begin{equation}
\sigma(AX \to B) = {1\over \Phi_{AX}} 2\pi \delta(Q^2 + x_A 2P\cdot Q)
| {\cal M}(AX\to B)|^2 \ .
\eeq{sigmaIFform}
Then
\begin{equation}
\sigma (p X \to cb) \approx \int {dz_a\over z_a^2}
\int dz_b \int dx_A f(z_a x_A)
\sigma(AX \to B)\cdot
{\alpha_s N_c\over 4\pi}\, {\cal S}(z_a, z_c, z_b) \ .
\eeq{finalIF}
As an example, consider using this formula to describe initial-state
gluon radiation in deep inelastic scattering from a quark. The total
gluon emission is given by the sum of the two spin-dependent
splitting functions
in the fifth line of Table~\ref{tab:allN},
equal to
\begin{equation}
\sum {\cal S} = - {z_a^2 + z_b^2 \over y_{ac} y_{cb}} \ .
\eeq{sumSIF}
The extra minus sign comes from the sign of $s_{AB}$ in
\leqn{Sdefintwo}.
In the region
of initial state radiation, $z_a = 1/w$, $z_b \approx 1$,
$y_{ac} = -(1-z_b)$, $y_{bc} = (1 - 1/w)$. Then, setting
\begin{equation}
\int dz_b {1\over 1-z_b} = \log{ Q^2\over \mu^2} \ ,
\eeq{getQsq}
we obtain
\begin{equation}
\sigma (p X \to cb) \approx \int dx_A \, \int {dw\over w}
f({x_A\over w}) \,
\sigma(AX \to B)\cdot
{\alpha_s N_c\over 4\pi} {1 + w^2\over (1-w)}\log{Q^2\over\mu^2} \ ,
\eeq{testfinalIF}
which is correct.
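The algebra of this reduction is easy to verify. Taking the substitutions quoted above at face value (including $y_{ac} = -(1-z_b)$), and with the orientation sign from $dz_a = -dw/w^2$ absorbed into an overall minus sign, a sympy sketch of our own confirms that the $w$ dependence collapses to the Altarelli-Parisi kernel appearing in \leqn{testfinalIF}:

```python
import sympy as sp

w, z_b = sp.symbols('w z_b', positive=True)

# collinear-region substitutions quoted in the text
z_a  = 1/w
y_ac = -(1 - z_b)
y_bc = 1 - 1/w

# spin-summed splitting function, eq. (sumSIF)
S = -(z_a**2 + z_b**2)/(y_ac*y_bc)

# strip the 1/(1-z_b) pole (the z_b integral turns it into log(Q^2/mu^2)),
# flip the orientation sign from dz_a = -dw/w^2, and set z_b -> 1
kernel = sp.simplify(sp.cancel(-S*(1 - z_b)).subs(z_b, 1))

assert sp.simplify(kernel - (1 + w**2)/(w*(1 - w))) == 0
```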
This is an appropriate point to discuss again the signs of
the expressions in Table~\ref{tab:allN}. The antenna splitting functions
are probabilities; thus, they should be positive. However, we define the
splitting functions in the IF and II regions as analytic continuations
of the values in the FF region, so their positivity must be checked
explicitly.
As we move from the
FF region to the IF region with
$A$ and $a$ in the initial state, $y_{bc}$ becomes negative while
all other $y_{ab}$, $y_{ac}$, $z_a$, $z_b$ remain positive.
The factor $z_c$ can be negative, but $z_c$ does not appear
in the Table.
With
the minus sign from $s_{AB}$ in \leqn{Sdefintwo}, the denominator
of ${\cal S}(z_a,z_b,z_c)$ is positive, and so we need only check the
numerator functions given in the Table.
The numerator
functions for $gg\to ggg$, $q\bar q \to q g \bar q$, and
$qg\to qq\bar q$ remain positive, while
the numerator functions for $gg \to \bar q q g$ become negative.
In this last case, a fermion not present in the 2-parton system
is crossed from the final to the
initial state, so we must supply an extra factor $(-1)$. Then all of
the expressions are positive, as required. However, if we then
cross from the region $z_b > 0$ to the region $z_b < 0$, one
$qg\to qgg$ amplitude and one $qg \to q\bar q q$ amplitude change sign.
This sign change is unphysical; presumably, it is due to the
simple method of reconstruction in \leqn{onetworeduce} and
\leqn{onetworeducethree}. We recommend setting these two
amplitudes to zero for $z_b < 0$.
The region $z_b < 0$ is outside the shaded region in
Fig.~\ref{fig:IFkinematics}(b) where we will generally use
the parton shower approximation, so most likely this
difficulty is not important in practice.
Similarly, for the FI region where $b$ and $B$ are taken to be in the
initial state, $y_{ac} < 0$. Then
the numerators that go negative as we cross into the
region are those in the $qg \to q\bar q q$ cases where a fermion is
crossed into the initial state. Now there are four amplitudes, one
each in the $qg \to qgg$ cases and both of those in $q_-g_+\to
q\bar q q$, that become negative when $z_a < 0$. Again, we recommend
that these amplitudes be set to zero in this region of unphysical
behavior.
In the II region, both $y_{ac}$ and $y_{bc}$ are negative. The
denominator of ${\cal S}(z_a,z_b,z_c)$ is positive. The
numerator terms that are negative because of the sign changes
are compensated by
minus signs from crossing. There are no unphysical sign changes.
The correct result is always obtained by taking the absolute
value of the numerator expression from Table~\ref{tab:allN}.
We now discuss the kinematics of the II case.
We begin from the formula for two protons of momentum
$P_A$, $P_B$ to produce a color-singlet system of momentum $Q$ plus a
massless parton $c$,
\begin{equation}
\sigma(pp \to cX) = \int dx_a \int dx_b f(x_a) f(x_b) \ {1\over 2s_{ab}}
{1\over 16\pi} \int d\cos\theta_* {2p_*\over \sqrt{s_{ab}}}\,
|{\cal M}(ab\to cX)|^2 \ ,
\eeq{basicII}
where $\cos\theta_*$ and $p_*$ are the scattering angle and the momentum
in the $cX$ center of mass frame.
\begin{figure}
\begin{center}
\includegraphics[height=2.0in]{IIkinematics.pdf}
\caption{(a) Kinematics of $2\to 3$ splitting in the initial state (II) case.
(b) Phase space for $2\to 3$ splitting in the II case. The
six regions corresponding to different orderings of
$|s_{ac}|$, $|s_{bc}|$, $|Q^2|$ are shown. The region that should
be well described by an antenna splitting $AB\to acb$ is shaded.}
\label{fig:IIkinematics}
\end{center}
\end{figure}
The decomposition of the amplitude is illustrated in
Fig.~\ref{fig:IIkinematics}(a). The kinematics can again be described by
variables $y_{ij}$ and $z_i$ obeying the relations \leqn{ydefs} to
\leqn{yzmoreids}. Now the vectors $k_A$, $k_a$, $k_B$, $k_b$ have
negative timelike
component, and the vector $Q = k_A + k_B = k_a + k_b + k_c$ is also
negative timelike, with $Q^2 > 0$.
The phase space for this region covers the quadrant
shown in Fig.~\ref{fig:IIkinematics}(b), with $z_a, z_b > 1$. Again,
the region of integration is
infinite, but the integral is cut off by the behavior of the parton
distribution functions.
The line $z_a > 1$, $z_b = 1$
corresponds to the region of initial state radiation with $c$ parallel to $a$.
The line $z_a = 1$, $z_b > 1$
corresponds to the region of initial state radiation with
$c$ parallel to $b$.
An antenna shower should give an accurate description
of the dynamics in the two regions $|y_{ac}| < |y_{bc}| < 1$,
$|y_{bc}| < |y_{ac}| < 1$ that are shaded in the figure. Again, the
limit 1 here corresponds to
constraints $|s_{ac}|, |s_{bc}| < |Q^2|$,
which are stronger than the constraints that these two invariants are less
than $|s_{ab}|$.
In the $ab\to cX$ process, the system $X$ must recoil with some nonzero
transverse momentum. Thus, it is not possible to choose $k_A$ and $k_B$
to be parallel to $k_a$, $k_b$. The invariants for the $ab\to cX$ scattering
process satisfy $s + t + u = Q^2$. Since $t = Q^2 (1-z_b)$,
$u = Q^2(1- z_a)$, this means that $s = Q^2 (z_a + z_b -1)$. Alternatively,
$s = x_a x_b \cdot 2P_A \cdot P_B$. We would like to choose the
longitudinal fractions of $A$ and $B$,
$x_A$ and $x_B$, to satisfy the relation
\begin{equation}
x_A x_B \cdot 2P_A\cdot P_B = Q^2 \ .
\eeq{xAxBrel}
To make this possible, we must write
\begin{equation}
x_a = z_a x_A {\cal C} \ , \qquad x_b = z_b x_B {\cal C} \ ,
\eeq{Cdefin}
with \cite{Daleoremark}
\begin{equation}
{\cal C}^2 = {z_a + z_b - 1 \over z_a z_b}\ .
\eeq{Cfind}
The function ${\cal C}(z_a,z_b)$
approaches 1 when {\it either} $z_a$ or $z_b$ goes to 1;
that is ${\cal C} \approx 1$ in both collinear regions.
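Both the form of ${\cal C}^2$ and its collinear limits can be confirmed with a short symbolic check (ours, not part of the original derivation; `PP` is our shorthand for $2P_A\cdot P_B$):

```python
import sympy as sp

z_a, z_b, PP, x_A, x_B, C2 = sp.symbols('z_a z_b PP x_A x_B C2', positive=True)

# Impose s = x_a x_b PP = Q^2 (z_a + z_b - 1) with x_a = z_a x_A C,
# x_b = z_b x_B C, and eliminate Q^2 using eq. (xAxBrel): Q^2 = x_A x_B PP.
sol = sp.solve(sp.Eq(z_a*x_A * z_b*x_B * C2 * PP,
                     x_A*x_B*PP*(z_a + z_b - 1)), C2)[0]
assert sp.simplify(sol - (z_a + z_b - 1)/(z_a*z_b)) == 0   # eq. (Cfind)

# C -> 1 in both collinear limits, z_a -> 1 or z_b -> 1
assert sp.simplify(sol.subs(z_a, 1)) == 1
assert sp.simplify(sol.subs(z_b, 1)) == 1
```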
Also, $t + u = Q^2( 2 - z_a - z_b) = Q^2 z_c$, so
\begin{equation}
t = Q^2 (1 - z_b) = \frac{1}{2} Q^2 z_c (1 - \cos\theta_*)\ .
\eeq{tforII}
We can now use \leqn{Cdefin} and \leqn{tforII} to
change variables from $(x_a, x_b,\cos\theta_*)$ to $(x_A,z_a, z_b)$,
holding $x_B$ fixed at the value $x_B = Q^2/x_A 2P_A\cdot P_B$.
The Jacobian of this transformation
is
\begin{equation}
J = { \partial(x_a, x_b, \cos\theta_*)\over \partial(x_A, z_a, z_b)}
= {2x_B\over z_c} = x_B {Q^2\over s_{ab}}{ \sqrt{s_{ab}}\over p_*} \ .
\eeq{findJII}
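The $3\times 3$ Jacobian can again be checked symbolically. In the following sympy sketch (our cross-check), `k` is our shorthand for $Q^2/(2P_A\cdot P_B)$, so that $x_B = k/x_A$; the determinant is evaluated at an arbitrary rational point:

```python
import sympy as sp

z_a, z_b, x_A, k = sp.symbols('z_a z_b x_A k', positive=True)

C   = sp.sqrt((z_a + z_b - 1)/(z_a*z_b))   # eq. (Cfind)
z_c = 2 - z_a - z_b
x_B = k/x_A                                # fixed by Q^2 = x_A x_B 2 P_A.P_B

x_a    = z_a*x_A*C                         # eq. (Cdefin)
x_b    = z_b*x_B*C
cos_th = 1 - 2*(1 - z_b)/z_c               # from eq. (tforII)

J = sp.Matrix([[sp.diff(f, v) for v in (x_A, z_a, z_b)]
               for f in (x_a, x_b, cos_th)]).det()

# claimed result, eq. (findJII): J = 2 x_B / z_c
vals = {z_a: sp.Rational(3, 2), z_b: sp.Rational(5, 4),
        x_A: sp.Rational(1, 3), k: 2}
assert abs(float((J - 2*x_B/z_c).subs(vals))) < 1e-10
```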
Then
\begin{Eqnarray}
\sigma (pp \to cX) &=& \int {dz_a\over z_a^2} {dz_b\over z_b^2}{1\over
{\cal C}^4}
\ \int dx_A dx_B\, f(z_a x_A {\cal C}) f(z_b x_B {\cal C})
\ x_B \delta(x_B - Q^2/x_A 2P_A\cdot P_B) \nonumber \\
& & \hskip 1.6in \cdot {1\over s_{AB}} {1\over 8\pi}
| {\cal M}(ab\to cX)|^2 \ .
\eeqa{exactII}
This is an exact rewriting of \leqn{basicII}. Now apply the
approximation analogous to \leqn{MdecompFF} or \leqn{MdecompIF}
and group terms to form
\begin{equation}
\sigma(AB \to X) = {1\over 2 s_{AB}} 2\pi
\delta(Q^2 - x_A x_B 2P_A\cdot P_B)
| {\cal M}(AB\to X)|^2 \ .
\eeq{sigmaIIform}
This gives, finally,
\begin{equation}
\sigma (pp \to cX) \approx \int {dz_a\over z_a^2} {dz_b\over z_b^2}{1\over
{\cal C}^4}
\ \int dx_A dx_B\, f(z_a x_A {\cal C}) f(z_b x_B {\cal C})
\sigma(AB \to X)\cdot
{\alpha_s N_c\over 4\pi}\, {\cal S}(z_a, z_c, z_b) \ .
\eeq{finalII}
To test this formula, consider the case of $q \bar q$ annihilation with
the emission of a gluon collinear with the quark $a$. The sum of
spin-dependent splitting functions for this case is again \leqn{sumSIF}.
In the collinear region of interest, $z_a = 1/w$, $z_b\approx 1$.
Repeating the step that led to \leqn{testfinalIF}, we find
\begin{equation}
\sigma (p p \to cX) \approx \int dx_A dx_B\, \int {dw\over w}
f({x_A\over w}) f(x_B) \,
\sigma(AB \to X)\cdot
{\alpha_s N_c\over 4\pi} {1 + w^2\over (1-w)}\log{Q^2\over\mu^2} \ ,
\eeq{testfinalII}
which is the correct limit.
\section{Comparison to previous results}
In the Introduction, we made reference to a number of previous
definitions of the antenna splitting
functions.
We noted that these definitions agree, as they must, in the singular
soft and collinear limits. However, these prescriptions differ
widely away from the boundaries of phase space. In this section, we
will compare our prescription to those of ARIADNE~\cite{colordipoles,ARIADNE}
and Gehrmann-De Ridder, {\it et al.}~\cite{Gehrmann}.
We will make this comparison over the natural
phase space discussed in the previous section: the entire $(z_a,z_b)$ plane
above the line $z_a + z_b = 1$. In order to describe antenna showers
for initial- as well as final-state emissions, the splitting functions
should extend into the region $z_a, z_b > 1$. Depending on the details of how
the shower is constructed, their use might be restricted to a polygon around
$z_a = z_b = 1$, or the expressions might be used for arbitrarily large
values of $z_a$ and $z_b$.
We note again that the IF regions include the lines $z_a = 0$ and
$z_b = 0$. Expressions
for the splitting functions that are well-behaved near $z_a = z_b = 1$
can possibly have a singularity on this line, though such a singularity in
the middle of the phase space would be unphysical. We used this criterion
in Section 4 to exclude factors of $1/z_a$ and $1/z_b$ from appearing
in \leqn{onetworeducethree}. The antenna functions of Duhr and
Maltoni~\cite{Duhr} are typically singular along this line and so cannot
be used in parton shower models in all regions.
The ARIADNE and Gehrmann-De Ridder
antenna functions give expressions summed over final
polarizations.
To compare our splitting functions to these, we must sum over a row
in Table \ref{tab:allN}. Our summed expressions are independent of the
initial polarization in the soft and collinear limits, but they depend on
the polarizations of $A$ and $B$ in the interior of the $(z_a,z_b)$ space.
The comparison to our expressions thus also reveals where this dependence
on polarization is an important effect.
The first antenna splitting functions were put forward
by the ARIADNE group \cite{colordipoles}. Their
approach started from the spin-averaged cross section
for the simple splitting process $q\bar{q}\rightarrow q
g\bar{q}$ in $e^+e^-$ annihilation. They then guessed the expressions
for the $qg\rightarrow qgg$ and $gg\rightarrow ggg$ splittings,
so that these would have a similar form to the $q\bar{q}\rightarrow q
g\bar{q}$ case,
\begin{equation}
{\cal S} = { z_a^{n_a} + z_b^{n_b} \over y_{ac} y_{bc}} \ ,
\eeq{ARIADNESproposal}
where $n_a, n_b$ = 2 for emission from a quark and 3 for emission from a
gluon.
Our philosophy, explained in Section 2, is that each individual antenna
should reproduce the collinear limit predicted by QCD. These expressions
are symmetric under interchange of identical particles, while
\leqn{ARIADNESproposal} does not have this property, so we would obtain
the complete splitting function by symmetrizing \leqn{ARIADNESproposal}.
This gives
\begin{Eqnarray}
q\bar{q}\:\:{\rm antenna\!:}&\:\:{\cal S}
&= {z_a^2+z_b^2 \over y_{ac}y_{bc}} \ ,\nonumber \\
gg\:\:{\rm antenna\!:}&\:\:{\cal S}
&= {z_a^3+z_b^3\over y_{ac} y_{bc}} + {z_a^3+z_c^3\over y_{ab} y_{bc}}
+ {z_b^3+z_c^3\over y_{ab}y_{ac}} \ ,\nonumber \\
qg\:\:{\rm antenna\!:}&\:\:{\cal S}
&= {z_a^2+z_b^3\over y_{ac}y_{bc}} + {z_a^2+z_c^3\over y_{ab} y_{bc}}\ .
\eeqa{ARIADNEant}
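That the symmetrization has been carried out correctly can be verified formally; the following sympy check (ours) treats the $z_i$ and $y_{ij}$ as independent symbols and applies the relabelings $a\leftrightarrow b$ and $a\leftrightarrow c$ to the $gg$ antenna expression:

```python
import sympy as sp

z_a, z_b, z_c = sp.symbols('z_a z_b z_c', positive=True)
y_ab, y_ac, y_bc = sp.symbols('y_ab y_ac y_bc', positive=True)

# gg antenna from eq. (ARIADNEant)
S_gg = ((z_a**3 + z_b**3)/(y_ac*y_bc) + (z_a**3 + z_c**3)/(y_ab*y_bc)
        + (z_b**3 + z_c**3)/(y_ab*y_ac))

# relabeling a <-> b: z_a <-> z_b and y_ac <-> y_bc (y_ab invariant)
swap_ab = {z_a: z_b, z_b: z_a, y_ac: y_bc, y_bc: y_ac}
assert sp.simplify(S_gg.subs(swap_ab, simultaneous=True) - S_gg) == 0

# relabeling a <-> c: z_a <-> z_c and y_ab <-> y_bc (y_ac invariant)
swap_ac = {z_a: z_c, z_c: z_a, y_ab: y_bc, y_bc: y_ab}
assert sp.simplify(S_gg.subs(swap_ac, simultaneous=True) - S_gg) == 0
```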
The summed terms are each positive in the FF kinematic region. To obtain
the ARIADNE splitting functions
in the other regions, we analytically continue these
formulae into the regions where $z_a$ or $z_b$ is greater than 1.
The analytic continuation of the ARIADNE and, below, the
Gehrmann-De Ridder results brings in the issue of the positivity
of these expressions, similar to the positivity issue for our
splitting functions discussed in Section 6.
For the ARIADNE and Gehrmann-De Ridder antenna functions, the expressions
given are summed over spins, and the individual pieces are not independent
of one another. So, if they become negative, that is a problem for
the complete, spin-summed, expression. For the Gehrmann-De Ridder
functions, it can be seen that this happens
only in the regions $z_a < 0$ and $z_b < 0$, so this is not a serious
problem. However, the ARIADNE functions involve $z_c^3$, which is
negative in the whole region $z_a + z_b > 2$. This problem cannot
be resolved by replacing $z_c$ with $|z_c|$, since this leads to
expressions that do not agree with the Altarelli-Parisi factorization
along the lines separating the IF regions from the II region. Fortunately,
the ARIADNE functions do not actually become negative until
$z_a$ or $z_b$ becomes very large ($z_a$ or $z_b \sim 12$).
However, the idea that
the ARIADNE functions are sums of positive and negative terms in the
initial-state regions goes against the intuition used to propose these
expressions.
We are now in a position to compare the ARIADNE function to our proposal.
For the $q\bar q$ antenna, the expression above coincides with the
sum of row 5 of Table
\ref{tab:allN}. For the $gg$ and $gq$ cases, the ratios of the above
ARIADNE
functions to those defined in Table \ref{tab:allN} are illustrated in
Figs.~\ref{fig:ARIADNEggg}, \ref{fig:ARIADNE6},
and \ref{fig:ARIADNEqgg}. The notation in the figures is the following:
Each figure represents the ratio of the ARIADNE splitting function to
our results
for a specific initial set of polarized partons, summed over final
state polarizations. The ratio goes to 1 on
the lines $z_a = 1$ and $z_b = 1$, which correspond to the collinear
limits. Away from these lines, the contours on which the ratios are
1.2, 1.5, 2.0, 3.0, and 5.0 (toward the $+$ symbol),
and the inverses of these numbers (toward the $-$ symbol) are shown.
The $qg$ antenna functions are asymmetric between partons $a$ and $b$.
The IF region in the lower right is that in which the quark is in the
initial state and the gluon is in the final state. The IF region
in the upper left is that in which the gluon is in the initial state and
the quark remains in the final state.
\begin{figure}
\begin{center}
{
\includegraphics[width=7.2cm]{ARIADNE1.pdf}
}
\hspace{0cm}
{
\includegraphics[width=7.2cm]{ARIADNE2.pdf}
}
\caption{Visualization of the ratio of the ARIADNE antenna
function to our antenna functions for the
processes $gg\rightarrow ggg$. The figures on the left and
right are the comparison of the ARIADNE
antenna function to our spin-summed antenna functions from
row 1 and row 2 in Table \ref
{tab:allN}, respectively. The boundaries of phase space for
the different kinematic regions are marked in blue. The contours
are plotted at ratios of 1.2, 1.5, 2.0, 3.0, and 5.0, with $+$
indicating a region in which the ratio is greater than 1.}
\label{fig:ARIADNEggg}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.2cm]{ARIADNE6.pdf}
\caption{Visualization of the ratio of the ARIADNE antenna
function to our antenna function for the
process $q_-\bar{q}_-\rightarrow qg\bar{q}$. Our antenna
function for the process $q_-\bar{q}_+\rightarrow
qg\bar{q}$ coincides with the ARIADNE result and so is not
included.
The notation is as in Fig.~\ref{fig:ARIADNEggg}.}
\label{fig:ARIADNE6}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
{
\includegraphics[width=7.2cm]{ARIADNE7.pdf}
}
\hspace{0cm}
{
\includegraphics[width=7.2cm]{ARIADNE8.pdf}
}
\caption{Visualization of the ratio of the ARIADNE antenna
function to our antenna functions for the
processes $qg\rightarrow qgg$. The figures on the left and
right are the comparison of the ARIADNE
antenna function to our spin-summed antenna functions from
row 7 and row 8 in Table \ref{tab:allN}, respectively.
The notation is as in Fig.~\ref{fig:ARIADNEggg}.}
\label{fig:ARIADNEqgg}
\end{center}
\end{figure}
The ARIADNE authors gave a different interpretation to the formulae
\leqn{ARIADNEant}. They took the philosophy that the collinear
limit need not result from a single antenna but rather should be
the result of summing over the possible antennae that would lead to
a specific final state. A three gluon final state could result from
any pair of the gluons radiating the third and so should be the sum
of three antennae. Then the second line of
\leqn{ARIADNEant} would be interpreted as the sum over these three antennae.
This is a reasonable point of view for the FF kinematics considered in
\cite{colordipoles}. However, in the IF and II regions, at least one
of the $z_i$ will be negative and so some of the
terms in the last two lines of \leqn{ARIADNEant} will become negative.
Such terms cannot be interpreted as independent radiators, each emitting
a gluon with positive probability. It is tempting to revise the formula
in \leqn{ARIADNEant} by taking the absolute
values of the negative terms. However, one can readily check that no such
prescription gives the correct Altarelli-Parisi limit along the lines
$z_a = 1$ and $z_b =1$ at the boundaries of the IF and II regions.
Thus, we believe, the ARIADNE formulae can be used in the IF and II regions
only by using the formulae \leqn{ARIADNEant} as written and accepting
that some negative signs will appear~\cite{thankJan}.
Gehrmann-De Ridder, Gehrmann and Glover \cite{Gehrmann} studied $2\to 3$
splitting from Feynman diagrams to develop an antenna subtraction program
for NNLO calculations. In doing
so, they were able to extract
unpolarized antenna functions for the processes
$gg\rightarrow ggg$, $qg\rightarrow
qgg$ and $qg\rightarrow q\bar{q}q$. To calculate the gluon-gluon
antenna function, they used the effective
Higgs coupling to gluons
\begin{equation}
{\cal L} = - {\lambda \over 4} h F^{\mu\nu}F_{\mu\nu}.
\eeq{GehrmannHgg}
This is essentially the same procedure that we used in Section 3, and it
yields the same result as the sum of row 1 in
Table \ref{tab:allN}. In our language, their antenna function
for the gluon-gluon dipole is~\cite{factorthird}
\begin{equation}
{\cal S} = {y_{ac}^2+y_{bc}^2+y_{ab}^2+y_{ac}^2y_{bc}^2
+y_{ab}^2y_{bc}^2+y_{ab}^2y_{ac}^2 \over y_{ab}y_{ac}y_{bc}} + 4\ .
\eeq{Gehrmannggant}
The comparison of this antenna function to the sum of row 2 of
Table \ref{tab:allN} is illustrated in Fig.~\ref{fig:Gehrmann2}.
This splitting function for $gg \rightarrow ggg$ is, however,
not precisely the form used in the VINCIA parton shower~\cite{VINCIA}.
They use the `global' form of the Gehrmann-De Ridder antenna function,
which in our language is
\begin{equation}
{\cal S} = \frac{1}{2} \left[ \frac{2 y_{ab}^2+y_{ab}^2y_{ac}^2
+y_{ab}^2y_{bc}^2}{y_{ab}y_{ac}y_{bc}}+\frac{8}{3} \right].
\eeq{VINCIAimp}
To implement this antenna function, a similar procedure is used as with the
ARIADNE antenna
functions. That is, emissions from overlapping
antennae are summed. When the three antennae contributing
to $gg\to ggg$ are summed together, one recovers the result
\leqn{Gehrmannggant}. This prescription
works well in the FF kinematics. However, as
in the ARIADNE case, it might require negative contributions in
splitting functions for
some antennae in the IF and II kinematics.
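The statement that the three overlapping antennae sum to the Gehrmann-De Ridder result \leqn{Gehrmannggant} can be checked numerically. The sketch below (Python) permutes the dipole label of the global form \leqn{VINCIAimp} over the three invariant pairs; this assignment is our reading of the prescription, and the kinematic point used is arbitrary.

```python
# Full Gehrmann-De Ridder gg -> ggg antenna function, eq. (Gehrmannggant).
def s_full(yab, yac, ybc):
    num = (yab**2 + yac**2 + ybc**2
           + yac**2 * ybc**2 + yab**2 * ybc**2 + yab**2 * yac**2)
    return num / (yab * yac * ybc) + 4.0

# 'Global' form, eq. (VINCIAimp), for one antenna whose dipole invariant
# is ydip; y1, y2 are the two remaining invariants.
def s_global(ydip, y1, y2):
    denom = ydip * y1 * y2  # symmetric product: same for all three antennae
    return 0.5 * ((2.0 * ydip**2 + ydip**2 * y1**2 + ydip**2 * y2**2) / denom
                  + 8.0 / 3.0)

# Sum over the three possible dipole assignments.
def antenna_sum(yab, yac, ybc):
    return (s_global(yab, yac, ybc)
            + s_global(yac, yab, ybc)
            + s_global(ybc, yab, yac))
```

At any point with nonzero invariants the two expressions agree to machine precision; indeed the identity holds algebraically, term by term.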
To construct the antenna functions involving quarks, Gehrmann-De Ridder,
{\it et al.}, calculated
the decay of a neutralino $\chi$ to a
gluon and a gluino $\psi$ through the effective operator
\begin{equation}
{\cal L}=i\eta \bar{\psi} \sigma^{\mu\nu}\chi F_{\mu\nu}+ {\mbox{\rm h.c.}}
\eeq{Gehrmannqg}
In principle, our results should agree for the case of a spin $\frac{1}{2}$
initial state. However,
our choices \leqn{onetworeduce} and \leqn{onetworeducethree}
for handling ambiguous momentum products produce some differences.
In our language, their antenna functions involving quarks are
\begin{Eqnarray}
qg\rightarrow qgg:&\:\:{\cal S} &= {2y_{ab}^2+2y_{ac}^2
+y_{ab}y_{bc}^2+y_{ac}y_{bc}^2+2y_{ac}^2y_{ab}^2
\over y_{ab}y_{ac}y_{bc}}+2+2y_{ac}+2y_{ab} \ ,\nonumber \\
qg\rightarrow q\bar{q}q:&\:\:{\cal S}
&= {(y_{ac}+y_{ab})^2y_{ac}y_{ab}-2y_{ac}^2y_{ab}^2\over y_{ab}y_{ac}y_{bc}}
+y_{ab}+y_{ac} \ .
\eeqa{Gehrmannqgant}
The comparison to our antenna functions is illustrated in
Figs.~\ref{fig:Gehrmannqgg} and \ref{fig:Gehrmannqqq}. For
$qg\to qgg$, our result for the spin $\frac{1}{2}$ case is indeed very close to
the above expression in the FF region.
For $qg\to q\bar q q$, our prescription
\leqn{onetworeducethree} gives us an extra factor of $z_a$ near $z_a = 0$.
\begin{figure}
\begin{center}
{
\includegraphics[width=7.2cm]{Gehrmann2.pdf}
}
\caption{Visualization of the ratio of the Gehrmann-De Ridder antenna
function to our antenna function for the
process $g_-g_+\rightarrow ggg$. The antenna
function for the process $g_+g_+\rightarrow
ggg$ coincides with the Gehrmann-De Ridder result and so is not
included.
The notation is as in Fig.~\ref{fig:ARIADNEggg}.}
\label{fig:Gehrmann2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
{
\includegraphics[width=7.2cm]{Gehrmann7.pdf}
}
\hspace{0cm}
{
\includegraphics[width=7.2cm]{Gehrmann8.pdf}
}
\caption{Visualization of the ratio of the Gehrmann-De Ridder antenna
functions to our antenna functions for the
processes $qg\rightarrow qgg$. The figures on the left and right are
the comparison of the
Gehrmann-De Ridder
antenna function to our spin-summed antenna functions from
row 7 and row 8 in Table \ref{tab:allN}, respectively.
The notation is as in Fig.~\ref{fig:ARIADNEggg}.}
\label{fig:Gehrmannqgg}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
{
\includegraphics[width=7.2cm]{Gehrmann9.pdf}
}
\hspace{0cm}
{
\includegraphics[width=7.2cm]{Gehrmann10.pdf}
}
\caption{Visualization of the ratio of the Gehrmann-De Ridder antenna
functions to our antenna functions for the
processes $qg\rightarrow q\bar{q}q$. The figures on the left and right are
the comparison of the
Gehrmann-De Ridder
antenna function to our spin-summed antenna functions from
row 9 and row 10 in Table \ref{tab:allN}, respectively.
The notation is as in Fig.~\ref{fig:ARIADNEggg}.}
\label{fig:Gehrmannqqq}
\end{center}
\end{figure}
In summary, we have shown that the antenna splitting functions represented
by \leqn{Sproposal} and Table~\ref{tab:allN} give a physically sensible
prescription for the construction of antenna showers. These splitting
functions can be used with the formulae \leqn{finalFF},
\leqn{finalIF}, \leqn{finalII} to generate antenna splittings in all
three relevant kinematic regions. We hope that this formalism will
provide a firm foundation for the construction of new parton showers
based on the antenna concept.
\bigskip \bigskip \begin{center} \begin{large} \noindent {\bf Acknowledgments} \end{large} \end{center}
We thank Darren Forde, Tanju Gleisberg, Peter Skands,
and Jan Winter for instructive discussions and Claude Duhr and
Fabio Maltoni for a useful correspondence.
This work was aided by our participation in the Northwest Terascale
Workshop on Parton Showers and Event Structure at the LHC at the
University of Oregon. We thank
the participants and, especially, the organizer, Davison Soper.
The work was supported
by the US Department of Energy under contract DE--AC02--76SF00515.
\subsection{Generation of equations}\label{sec:gener-gas-equat-1}
In order to generate gas equations (GE), we need to define the EVM gas
model, which is obtained by encoding the specification of the gas
consumption for each EVM instruction as provided in \cite{yellow}. The
EVM gas model is complex and unconventional, it has two components,
one which is related to the memory consumption, and another one that
depends on the bytecode executed. The first component is computed
separately as will be explained below. In this section we focus on
computing the gas attributed to the opcodes. For this purpose, we
provide a function $C_{opcode}:s \mapsto g$ which, for an EVM opcode,
takes a stack $s$ and returns the gas $g$ associated with it. We
distinguish three types of instructions:
(1) Most bytecode instructions have a \emph{fixed} constant gas consumption
that we encode precisely in the cost model $C_{opcode}$, i.e., $g$
is a constant.
(2) Bytecode instructions that have different \emph{constant} gas
consumption $g_1$ or $g_2$ depending on some given condition. This
is the case of \texttt{SSTORE} that costs $g_1=20000$ if
the storage value is set from zero to non-zero (first assignment),
and $g_2=5000$ otherwise. But it is also the case for \texttt{CALL}
and \texttt{SELFDESTRUCT}. In these cases we use $g=max(g_1,g_2)$ in
$C_{opcode}$.
(3) Bytecode instructions with a non-constant (\emph{parametric})
gas consumption that depends on the value of some stack
location. For instance, the gas consumption of \texttt{EXP} is
defined as $10+10\cdot(1+\lfloor \log_{256}(\mu_{s}[1])\rfloor)$ if
$\mu_{s}[1]\neq 0$ where $\mu_s[0]$ is the top of the
stack. Therefore, we have to define $g$ in $C_{opcode}$ as a
parametric function that uses the involved location. Other bytecode
instructions with parametric cost are \texttt{CALLDATACOPY},
\texttt{CODECOPY}, \texttt{RETURNDATACOPY}, \texttt{CALL},
\texttt{SHA3}, \texttt{LOG*}, and \texttt{EXTCODECOPY}.
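The three cases of $C_{opcode}$ can be sketched as follows (Python). The gas constants shown are those from the gas schedule of \cite{yellow} for the opcodes named above; the remainder of the schedule is elided.

```python
from math import floor, log

def c_opcode(opcode, stack=None):
    """Sketch of the three-tier cost model C_opcode."""
    # (1) Fixed constant gas consumption, e.g. ADD.
    if opcode == "ADD":
        return 3
    # (2) Condition-dependent constant cost: take the maximum, as for
    #     SSTORE (20000 on a zero-to-nonzero write, 5000 otherwise).
    if opcode == "SSTORE":
        return max(20000, 5000)
    # (3) Parametric cost depending on a stack location, e.g. EXP:
    #     10 + 10*(1 + floor(log_256(mu_s[1]))) when the exponent is nonzero.
    if opcode == "EXP":
        exponent = stack[1]  # mu_s[1]: the element below the top of the stack
        if exponent == 0:
            return 10
        return 10 + 10 * (1 + floor(log(exponent, 256)))
    raise NotImplementedError(opcode)
```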
Given the RBR annotated with the nop information, the size relations,
and the cost model $C_{opcode}$, we can generate GE that define the
gas consumption of the corresponding code applying the classical
approach to cost analysis \cite{DBLP:journals/cacm/Wegbreit75} which
consists of the following basic steps:
(i) Each rule is transformed into a corresponding cost equation that
defines its cost. Example~\ref{size} also displays the GE obtained
for the rules \emph{jump1619} and \emph{block1731}. (ii) The nop
instructions determine the gas that the rule consumes according to the
gas cost model $C_{opcode}$ explained above. (iii) Calls to other
rules are replaced by calls to the corresponding cost equations. See
for instance the call to \emph{block1619} from rule \emph{block1731}
that is transformed into a call to the cost function \emph{block1619}
in Ex.~\ref{size}. (iv) Size relations are attached to rules to define
their applicability conditions and how the sizes of data change when
the equation is applied. See for instance the size relations attached
to \emph{jump1619} that have been explained in Ex.~\ref{size}.
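Steps (i)--(iii) can be sketched as follows (Python). The rule representation is an illustrative stand-in for the RBR, not \tname{Traductor}'s actual data structures; the opcode costs used are the schedule values for those opcodes.

```python
# A gas equation for a rule: a constant part, obtained by adding up the
# costs of the rule's nop-annotated opcodes (step ii), plus symbolic
# calls to the equations of the invoked rules (step iii).
def gas_equation(rule_nops, rule_calls, cost_model):
    constant = sum(cost_model[op] for op in rule_nops)
    return constant, list(rule_calls)

# Gas schedule for the opcodes used below.
COSTS = {"JUMPDEST": 1, "PUSH": 3, "SWAP": 3, "ADD": 3, "GT": 3}

# E.g. a rule annotated with these nops and one call yields the
# equation rule(x) = 13 + jump1619(x).
const, calls = gas_equation(
    ["JUMPDEST", "PUSH", "SWAP", "ADD", "GT"], ["jump1619"], COSTS)
```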
As said before, the gas model includes a cost that comes from the
memory consumption, which is as follows. Let $C_{mem}(a)$ be the memory
cost function for a given memory slot $a$, defined as
$C_{mem}(a) = G_{memory}\cdot a + \left\lfloor{\frac{a^2}{512}}\right\rfloor$,
where $G_{memory}=3$.
Given an EVM instruction, $\mu_i$ and $\mu'_i$
denote the \emph{highest memory slot} accessed in the local memory
before and after, respectively, the execution of that instruction. The
memory gas cost of every instruction is the difference
$C_{mem}(\mu'_i)-C_{mem}(\mu_i)$.
Besides \code{MLOAD} or \code{MSTORE},
instructions like \code{SHA3} or \code{CALL}, among others, make
use of the local memory, and hence can increase the memory gas cost.
In order to estimate this cost for all EVM instructions in the
code of the function, we first make the following observations:
(1) Computing the sum of all the
memory gas costs amounts to
computing the memory cost function for the
highest memory slot accessed by the instructions of the function
under analysis. This is because, as seen, $\mu_i$ and $\mu'_i$ refer
to this position in each operation and hence we pay for all the
memory up to this point. (2) This is not a standard memory
consumption analysis in which one obtains the total amount of memory
allocated by the function. Instead, in this case, we infer the
actual value of the highest slot accessed by any operation executed
in the function.
\vspace{-0.1cm}
\begin{example}
Let us show how we obtain the memory gas cost for
\emph{block1647}. In this case, the two instructions in this block
that cost memory are underlined in Fig.~\ref{fig:nop} and correspond
to the \code{MSTORE} and \code{SHA3} bytecodes. In this block, both
bytecodes operate on slot 0 of the memory, and they cost 3 units of
gas because they only activate up to slot 1 of the
memory.
\end{example}
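The memory gas of this example can be reproduced directly (Python sketch; we take $\mu = 0$ for memory that has not been touched yet):

```python
# Memory cost function C_mem for highest accessed slot a, with G_memory = 3.
def c_mem(a):
    return 3 * a + a * a // 512

# Memory gas of one instruction: C_mem at the highest slot active after
# its execution minus C_mem at the highest slot active before.
def mem_gas(mu_before, mu_after):
    return c_mem(mu_after) - c_mem(mu_before)
```

The \code{MSTORE} in \emph{block1647} activates memory up to slot 1 and costs \code{mem\_gas(0, 1) = 3}; the subsequent \code{SHA3} reads the same region, activates nothing new, and costs \code{mem\_gas(1, 1) = 0}, which gives the 3 units of the example.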
\subsection{\tname{EthIR*}: from CFG to an annotated rule-based representation}\label{sec:ethirstar:-from-cfg}
\tname{EthIR*}, an extension of \tname{EthIR}~\cite{AlbertGLRS18}, is the next
component of our analyzer. \tname{EthIR} provides a rule-based representation
(RBR) for the CFG obtained from \tname{Oyente*}. Intuitively, for each
block in the CFG it generates a corresponding rule that contains a
high-level representation of all bytecode instructions in the block
(e.g., load and store operations are represented as assignments) and
that has as parameters an explicit representation of the stack, local,
state, and blockchain variables (details of the transformation are
in~\cite{AlbertGLRS18}). Conditional branching in the CFG is
represented by means of guards in the rules. \tname{EthIR*} provides three
extensions to the original version of \tname{EthIR}~\cite{AlbertGLRS18}:
(1) The first extension is related to the way function calls are
handled in the EVM, where instead of an explicit \texttt{CALL} opcode,
as we have seen before, a call to an internal function is transformed
into a \texttt{PUSH} of the return address in the stack followed by a
\texttt{JUMP} to the address where the code of the function
starts.
If the same function is called from different points of the program,
the resulting CFG shares, for all these calls, the same subgraph (the
one representing the code of the function), which ends with different
jumping addresses. As described in~\cite{madmax}, there is
a need to clone parts of the CFG to explicitly link the \texttt{PUSH}
of the return address with the final \texttt{JUMP} to this
address.
This cloning in our implementation is done at the level of the RBR as
follows: Since the jumping addresses are known thanks to the symbolic
execution applied by \tname{Oyente}, we can find the connection between the
\texttt{PUSH} and the \texttt{JUMP} and clone the involved part of the
RBR (between the rule of the \texttt{PUSH} and of the \texttt{JUMP})
using different rule names for each cloning.
(2) The second extension is a flow analysis intended to reduce the
number of parameters of the rules of the RBR. This is crucial for
efficiency as the number of involved parameters is a bottleneck for
the successive analysis steps that we are applying. Basically, before
starting the translation phase, we compute the inverse connected
component for each block of the CFG, i.e., the set of its predecessor
blocks. During the generation of each rule, we identify the local,
state or blockchain variables that are used in the body of the
rule. Then, these variables have to be passed as arguments only to
those rules built from the blocks of its inverse connected component.
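The inverse connected components can be obtained with a reverse-reachability pass over the CFG. A minimal sketch follows (Python; the edge-list encoding of the CFG is illustrative):

```python
from collections import defaultdict, deque

# For each block, compute the set of blocks that can reach it
# (its transitive predecessors): the 'inverse connected component'.
def inverse_components(edges):
    rev = defaultdict(list)
    for src, dst in edges:
        rev[dst].append(src)
    nodes = {n for e in edges for n in e}
    comps = {}
    for node in nodes:
        seen, work = set(), deque(rev[node])
        while work:
            p = work.popleft()
            if p not in seen:
                seen.add(p)
                work.extend(rev[p])
        comps[node] = seen
    return comps
```

For a chain \code{a -> b -> c}, the component of \code{c} is \code{\{a, b\}}, so a local variable used only in \code{c} needs to be threaded as a parameter only through \code{a} and \code{b}.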
(3) When we find a store on an unknown memory location
``?'', we have to ``forget'' all the memory from that point on, since
the writing may affect any memory location, and it is not sound
anymore to assume the previous information. In the RBR, we achieve
this deletion by assigning fresh variables (thus unknown values) to
the memory locations at this point.
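This deletion can be sketched as follows (Python; the dictionary encoding of the abstract memory is illustrative):

```python
import itertools

_fresh = itertools.count()

# A store to an unknown location '?' may have written anywhere, so every
# tracked memory location is remapped to a fresh, unconstrained variable.
def forget_memory(memory):
    return {loc: "fresh_%d" % next(_fresh) for loc in memory}
```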
Optionally, \tname{EthIR} provides in the RBR the original bytecode
instructions (from which the higher-level ones are obtained) by simply
wrapping them within a nop functor (see Fig. \ref{fig:nop}). Although
nop annotations will be ignored by the size analysis,
they are needed later
to assign a precise gas consumption to every rule.
\vspace*{-0.5cm}
\begin{figure}[ht]
\begin{center}
{\scriptsize
\(
\begin{array}{|l|}
\hline
\begin{array}{lll}
{\it block1647}(\sot{10}, \svot,\lvot, \blot) \Rightarrow \\
\qquad nop({\tiny JUMPDEST}), \svar{11} = \svar{9}, \svar{9} = \svar{10}, \svar{10} =
\svar{11}, nop(SWAP), \svar{11} = 0, nop(PUSH),\\
\qquad \underline{\lvar{2} = \svar{10}}, nop(MSTORE),
\svar{10} = 32, nop(PUSH),
\svar{11} = 0, nop(PUSH), \underline{\svar{10} = {\it sha3}(\svar{11},
\svar{10})},\\
\qquad nop(SHA3), \svar{9} = \svar{10}+\svar{9}, nop(ADD), gl = \svar{9},
\svar{9} = fresh_0, nop(SLOAD), \svar{10} = \svar{6},\\
\qquad nop(DUP4), {\it call}({\it jump1647}(\sot{10},\svot,\lvot, \blot)),
nop(GT),nop(ISZERO), nop(ISZERO),\\
\qquad nop(PUSH),nop(JUMPI)\\
\end{array}\\
\hline
\end{array}
\)
}
\end{center}
\vspace*{-0.25cm}\caption{Selected rule including nop functions needed
for gas analysis}\vspace*{-0.7cm}
\label{fig:nop}
\end{figure}
\begin{example} Figure~\ref{fig:nop} shows the RBR for
\emph{block1647}. Bytecode instructions that load or
store information are transformed into assignments on the involved
variables. For arithmetic operations, operations on bits, sha,
etc., the variables they operate on are made explicit. Since stack
variables are always consecutive we denote by $\sot{n}$ the
decreasing sequence of all $\svar{i}$ from $n$ down to $0$.
$\lvot$ includes $\lvar{2}$ and $\lvar{0}$, which is the subset of
the local variables that are needed in this rule or in further
calls (second extension of \tname{EthIR*}). The unknown location ``?''
has become a fresh variable \emph{fresh$_{0}$} in
\emph{block1647}. For state variables, $\svot$ includes the needed
ones
$\gvar{11},\gvar{8},\gvar{7},\gvar{6},\gvar{5},\gvar{3},\gvar{2},\gvar{1},\gvar{0}$
($\gvar{i}$ is the $i$-th state variable). Finally, $\blot$
includes the needed blockchain state variables \texttt{\small
address}, \code{balance} and \code{timestamp}.
\end{example}
\section{Experimental Evaluation}\label{experiments}
This section presents the results of our evaluation
of \tname{Traductor}. In Sec.~\ref{accuracy}, we evaluate the accuracy of
the gas bounds inferred by \tname{Traductor} on the \textsf{EthereumPot} by comparing
them with the bounds computed by the \textsf{Solidity}
compiler.
In Sec.~\ref{statistics}, we evaluate the efficiency and effectiveness
of our tool by analyzing more than 29,000 Ethereum smart contracts. To
obtain these contracts, we pulled from
\textsf{etherscan.io}~\cite{etherscanSourceCodes} all Ethereum
contracts whose source code was available on January 2018. \tname{Traductor} is
available at \url{https://costa.fdi.ucm.es/gastap}.
\subsection{Gas Bounds for \textsf{EthereumPot} Case Study}\label{accuracy}
Table~\ref{fig:experiments} shows in column \textbf{solc} the gas
bound provided by the \textsf{Solidity} compiler
\textbf{solc}~\cite{solidity}, and in the next two columns the bounds
produced by \tname{Traductor} for opcode gas and memory gas, respectively,
for all public functions in the contract.
If we add the gas and memory bounds, it can be observed that, for
those functions with constant gas consumption, we are as
accurate as \textbf{solc}. Hence, we do not lose precision due to the
use of static analysis.
For the 6 functions for which \textbf{solc} fails to infer constant gas
consumption, it returns $\infty$. For opcode gas, we are able to infer
precise \emph{parametric} bounds for five of them: \code{rewardWinner}
is linear on the size of the first and third state variables ($g1$ and
$g3$ represent resp. the sizes of the arrays \code{addresses} and
\code{slots} in Fig.~\ref{fig:solevm}), \code{getSlots} and
\code{findWinner} on the third, \code{getPlayers} on the first, and
\code{__callback} besides depends on the value of \code{result}
(second function parameter) and \code{proof} (last parameter). It is
important to note that, although the \textsf{Solidity} source code of
some functions (\emph{e.g.}\xspace, of \code{getSlots} and \code{getPlayers}) does
not contain loops, they are generated by the compiler and are only
visible at the EVM level. This also happens, for example, when a
function takes a \emph{string} or \emph{bytes} variable as argument.
This shows the need for developing the gas analyzer at the EVM
level.
For \code{joinPot} we cannot ensure that the gas consumption is
finite without embedding information about the blockchain in the
analyzer. This is because \code{joinPot} has a loop:
\code{for (uint i = msg.value; i >= minBetSize; i-= minBetSize)}
\code{\{tickets++;\}}, where \code{minBetSize} is a state variable that is initialized in the
definition line as \code{uint minBetSize = 0.01ether}, and
\code{ether} is the value of the \emph{Ether} at the time of
executing the instruction. This code has indeed several problems. The
first one is that the initialization of the state variable
\code{minBetSize} to the value \code{0.01ether} does not appear in
the EVM code available in the blockchain. This is because this
instruction is executed only once when the contract is created. So our
analyzer cannot find this instruction and the value of
\code{minBetSize} is unknown (and hence no bound can be found).
Besides, the loop does not terminate if \code{minBetSize} is
not strictly greater than zero (which could indeed happen if
\code{ether} took zero or a negative value). If we add the
initialization instruction, and embed in the analyzer the invariant
that \code{ether}$> 0$ (hence \code{minBetSize} becomes $ > 0$), then
we are able to infer a bound for \code{joinPot}.
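The loop's iteration count, and its dependence on \code{minBetSize}, can be mirrored directly (Python sketch):

```python
# Mirror of: for (uint i = msg.value; i >= minBetSize; i -= minBetSize)
#                { tickets++; }
def join_pot_iterations(msg_value, min_bet_size):
    if min_bet_size <= 0:
        # the Solidity loop would not terminate in this case
        raise ValueError("minBetSize must be strictly positive")
    tickets, i = 0, msg_value
    while i >= min_bet_size:
        tickets += 1
        i -= min_bet_size
    return tickets
```

With \code{minBetSize} $= 0.01$ ether ($10^{16}$ wei), a bet of $0.05$ ether buys 5 tickets; the iteration count is linear in \code{msg.value}/\code{minBetSize}, which is why the invariant \code{ether}~$>0$ is needed to bound the loop.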
\begin{table}[t]
\scriptsize
\begin{center}
\begin{tabular}{l|c|c|c|}
\hline
\multicolumn{1}{|l|}{\bf function}& \textbf{solc}
&\textbf{opcode bound \tname{Traductor}}&\textbf{memory bound \tname{Traductor}} \\
\hline
\multicolumn{1}{|l|}{\code{totalBet}}& 790 & 775&15\\ \hline
\multicolumn{1}{|l|}{\code{locked}}& 706 & 691&15\\ \hline
\multicolumn{1}{|l|}{\code{getEndTime}}& 534 & 519&15\\ \hline
\multicolumn{1}{|l|}{\code{slots}}& 837 & 822&15\\ \hline
\multicolumn{1}{|l|}{\code{rewardWinner}}& $\infty$ &
80391+5057$\cdot$nat(g3)+5057$\cdot$nat(g1)&18\\ \hline
\multicolumn{1}{|l|}{\code{Kill}}& 30883 & 30874 & 9\\ \hline
\multicolumn{1}{|l|}{\code{amountWon}}& 438 & 423&15\\ \hline
\multicolumn{1}{|l|}{\code{getPlayers}}& $\infty$ &
1373+292$\cdot$nat(g1-1/32)&\\
\multicolumn{1}{|l|}{} &
&+75$\cdot$nat(g1+31/32)&
6$\cdot$nat(g1)+24+$\left\lfloor{\frac{(6\cdot nat(g1)+24)^2}{512}}\right\rfloor$\\ \hline
\multicolumn{1}{|l|}{\code{getSlots}}& $\infty$ &
1507+250$\cdot$nat(g3-1/32)&\\
\multicolumn{1}{|l|}{} & &+75$\cdot$nat(g3+31/32)& 6$\cdot$nat(g3)+24+$\left\lfloor{\frac{(6\cdot nat(g3)+24)^2}{512}}\right\rfloor$\\ \hline
\multicolumn{1}{|l|}{\code{winnerAddress}}& 750 & 735&15\\ \hline
\multicolumn{1}{|l|}{\code{\_\_callback}}& $\infty$ &
229380+3$\cdot$(nat(proof)/32)&
\\
\multicolumn{1}{|l|}{}&&+103$\cdot$nat(result/32)&\\
\multicolumn{1}{|l|}{}&&+50$\cdot$nat((32-nat(result)))& max\_error\\
\multicolumn{1}{|l|}{}&&+5836$\cdot$nat(g3)+5057$\cdot$nat(g1) &\\ \hline
\multicolumn{1}{|l|}{\code{owner}}& 662 & 647 &15\\ \hline
\multicolumn{1}{|l|}{\code{endTime}}& 460 & 445 &15\\ \hline
\multicolumn{1}{|l|}{\code{potTime}}& 746 & 731 &15\\ \hline
\multicolumn{1}{|l|}{\code{potSize}}& 570 & 555 &15\\ \hline
\multicolumn{1}{|l|}{\code{joinPot}}& $\infty$ & no\_rf &9\\ \hline
\multicolumn{1}{|l|}{\code{addresses}}& 1116 & 1101&15\\ \hline
\multicolumn{1}{|l|}{\code{findWinner}}& $\infty$ &
1555+779$\cdot$nat(g3) &1
\\ \hline
\multicolumn{1}{|l|}{\code{random\_number}}& 548 & 533&15\\
\hline
\end{tabular}
\caption{Gas bounds for \texttt{EthereumPot}. Function
\texttt{nat} defined as \texttt{nat(l)=max(0,l).}} \vspace{-0.6cm}
\label{fig:experiments}
\end{center}
\end{table}
For \code{__callback} we guarantee that the memory gas is
\emph{finite}, but we cannot obtain an upper bound for it: \tname{Traductor}
yields a \emph{maximization error}, which is a consequence of the
information loss due to the soundness requirement described in
extension 3 of Section~\ref{sec:ethirstar:-from-cfg}. Intuitively,
maximization errors may occur when the analyzer needs to
compose the cost of the different fragments of the code. For the
composition, it needs to maximize (\emph{i.e.}\xspace, find the maximal value) the
cost of inner components in their calling contexts (see \cite{pubs}
for details). If the maximization process
involves memory locations that have been ``forgotten'' by \tname{EthIR*}
(variables ``?''),
the upper bound cannot be inferred. Still, if there is no ranking
function error, we know that all loops terminate, thus the memory gas
consumption is finite.
Finally, this transaction is always called with a constant gas limit
of 400,000. This contrasts with the non-constant gas bound obtained
using \tname{Traductor}. Note that if the gas spent (without including the
\emph{refunds}) goes beyond the gas limit, the transaction ends with an
out-of-gas exception. Since the sizes of $g3$ and $g1$ equal the
number of players, from our bound we can conclude that from 16
players on, the contract is at risk of running out of gas and getting
stuck, as the 400,000 gas limit cannot be changed. Thus, using \tname{Traductor} we can
prevent an out-of-gas vulnerability: the contract should not allow
more than 15 players, or the gas limit must be increased from that
number on.
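The 16-player threshold follows from the \code{\_\_callback} bound of Table~\ref{fig:experiments} by elementary arithmetic. The sketch below keeps only the player-dependent terms (with $g1 = g3 = n$); the dropped \code{result}/\code{proof} terms are non-negative, so this is a lower bound on the gas:

```python
# Player-dependent part of the __callback opcode bound (g1 = g3 = n),
# dropping the non-negative result/proof terms: a lower bound on the gas.
def callback_gas_lower_bound(n):
    return 229380 + 5836 * n + 5057 * n

GAS_LIMIT = 400_000
```

At 15 players the bound, 392,775, still fits within the limit, while at 16 players it reaches 403,668 and already exceeds 400,000.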
\subsection{Statistics for Analyzed Contracts}\label{statistics}
Our experimental setup consists of 29,061 contracts
taken from the blockchain as follows. We pulled all Ethereum contracts
from the blockchain as of January 2018, and removed duplicates. This
resulted in 10,796 files (each file often contains several contracts).
We have excluded the files where the decompilation phase fails in any
of the contracts it includes, since in that case we do not get any
information on the whole file. This failure is due to \tname{Oyente} in 1,230
files (11.39\% of the total) and to \tname{EthIR} in 829 files (7.67\% of the
total). The failures of \tname{EthIR} are
mainly due to the cloning mechanism in involved CFGs for which we fail
to find the relation between the jump instruction and the return
address.
After removing these files, our experimental evaluation has been
carried out on the remaining 8,737
files, containing 29,061 contracts. In total we have analyzed 258,541
public functions (and all auxiliary functions that are used from
them). Experiments have been performed on an Intel Core i7-7700T at
2.9GHz x 8 and 7.7GB of Memory, running Ubuntu 16.04. \tname{Traductor}
accepts smart contracts written in versions of \textsf{Solidity} up to
0.4.25 or bytecode for the Ethereum Virtual Machine v1.8.18. The
statistics that we have obtained in number of functions are summarized
in Table~\ref{fig:large-experiments}, and the time taken by the
analyzer in Table~\ref{fig:large-experiments-time}. The results for
the opcode and memory gas consumption are presented separately.
\begin{table}[t]
{\small
\setlength{\tabcolsep}{1.5pt}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
{\bf Type of result}&{\bf \#opc}&{\bf \%opc} &{\bf \#mem}&{\bf \%mem}\\ \hline
Constant gas bound & 223,294 & 86.37\% &225,860& 87.36\%\\
Parametric gas bound & 14,167 & 5.48\% &13,312&5.15\%\\
Time out & 13,140 & 5.08\%& 13,539 &5.24\%\\
Finite gas bound (maximization error)& 7,095& 2.74\% & 5,830&2.25\%\\
Termination unknown (ranking function error) & 716 & 0.28\% &0&0\%\\
Complex control flow (cover point error) & 129 & 0.05\% &0& 0\%\\\hline
Total number of functions & 258,541 & 100\% &258,541 &100\%\\\hline
\end{tabular}
\end{center}
\caption{Statistics of gas usage on the analyzed 29,061 smart contracts from
Ethereum blockchain}\vspace{-0.6cm}
\label{fig:large-experiments}
}
\end{table}
Let us first discuss the results in Table~\ref{fig:large-experiments}
which aim at showing the effectiveness of \tname{Traductor}.
Columns \textbf{\#opc} and \textbf{\#mem} contain number of analyzed
functions for opcode and memory gas, resp., and columns preceded by
\textbf{\%} the percentage they represent. For the analyzed contracts,
we can see that a large number of functions, 86.37\% (resp. 87.36\%),
have a constant opcode (resp. memory) gas consumption. This is as
expected because of the nature of smart contracts, as well as because
of the Ethereum safety recommendations mentioned in
Section~\ref{intro}.
Still, there is a relevant number of functions, 5.48\% (resp. 5.15\%),
for which we obtain an opcode (resp. memory) gas bound that is not
constant (and which are hence potentially vulnerable).
Additionally, 5.08\% of the analyzed functions for opcodes and 5.24\%
for memory reach the timeout (set to 1 minute) due to the further
complexity of solving the equations.
As the number of analyzed contracts is very large, a manual inspection
of all of them is not possible. Having inspected many of them, and
thanks to the information provided by the \tname{Pubs} solver used by
\tname{Traductor}, we are able to classify the types of errors that have led
to a ``\emph{don't-know}'' answer, and which in turn explain the
sources of incompleteness of our analysis:
(i) \emph{Maximization error}:
In many cases, a \emph{maximization error} is a consequence of loss of
information by the size analysis or by
the decompilation when the values of memory locations are lost. As
mentioned, even if we do not produce the gas formula, we know that
the gas consumption is \emph{finite} (otherwise the system flags a
ranking function error described below).
(ii) \emph{Ranking function error:} The solver needs to find ranking
functions to bound the maximum number of iterations of all loops the
analyzed code might perform. If \tname{Traductor} fails at this step, it
outputs a \emph{ranking function error}.
%
Section~\ref{experiments} has described a scenario where we have
stumbled across this kind of error. We note that the number of these
failures for \textbf{mem} is lower than for \textbf{opcode} because
when the cost accumulated in a loop is 0, \tname{Pubs} does not look for a
ranking function.
(iii) \emph{Cover point error:} The equations are transformed into
direct recursive form to be solved \cite{pubs}. If the
transformation is not feasible, a \emph{cover point error} is
thrown. This might happen when we have mutually recursive functions,
but it also happens for nested loops as in non-structured languages.
This is because they contain jump instructions from the inner loop
to the outer, and vice versa, and become mutually recursive. A loop
extraction transformation would solve this problem, and we leave its
implementation for future work.
\begin{table}[t]
{\small
\setlength{\tabcolsep}{1.5pt}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
{\bf Phase} & {\bf T$_{opcode}$} (s) & {\bf T$_{\mathit{mem}}$} (s) &{\bf T$_{\mathit{total}}$} (s)& {\bf \%opc} & {\bf \%mem} &{\bf \%total}\\ \hline
CFG generation (\tname{Oyente*}) & --- & --- & 17,075.55&---&---&1.384\% \\
RBR generation (\tname{EthIR*}) & --- & --- & 81.37 &---&---&0.006\% \\
Size analysis (\tname{Saco}) &---&---&105,732 &---&---&8.57\%\\
Generation of gas equations & 141,576 & 125,760 &267,336 & 11.48\% &10.2\%&21.68\%\\
Solving gas equation (\tname{Pubs}) & 395,429 & 447,502 &842,931 & 32.06\% & 36.3\%&68.36\%\\ \hline
Total time \tname{Traductor} & && 1,233,155.92 &&& 100\% \\ \hline
\end{tabular}
\end{center}}
\caption{Timing breakdown for \tname{Traductor} on the analyzed 29,061 smart contracts}\vspace{-.6cm}
\label{fig:large-experiments-time}
\end{table}
As regards the efficiency of \tname{Traductor}, the total analysis time for
all functions is 1,233,155.92 sec (342.54 hours). Columns \textbf{T} and
\textbf{\%} show, resp., the time in seconds for each phase and the percentage
of the total for each type of gas bound. The first three rows are common for the
inference of the opcode and memory bounds, while equation generation
and solving is separated for opcode and memory. Most of the time is spent
in solving the GE (68.36\%), which includes some timeouts.
The time taken by
\tname{EthIR} is negligible, as it is a syntactic transformation process,
while all other parts require semantic reasoning.
All in all, we argue that the statistics from our experimental
evaluation show the accuracy, effectiveness and efficiency of our
tool.
Also, the sources of incompleteness point out directions for further
improvements of the tool.
\section{Introduction}\label{intro}
In the Ethereum consensus protocol, every operation on a replicated
blockchain state, which can be performed in a transactional manner
by executing a \emph{smart contract} code, costs a certain amount of
\emph{gas}~\cite{yellow}, a monetary value in \emph{Ether}, Ethereum's
currency, paid by a transaction-proposing party.
Computations (performed by invoking smart contracts) that require
\emph{more computational or storage resources}, cost more gas than
those that require fewer resources. As regards storage, the EVM has
three areas where it can store items: the \emph{storage} is where all
\emph{contract state} variables reside, every contract has its own
storage and it is persistent between external function calls
(transactions) and quite expensive to use; the \emph{memory} is used
to hold temporary values, and it is erased between transactions and is
cheaper to use; the \emph{stack} is used to carry out operations
and it is free to use, but can only hold a limited
amount of values.
The rationale behind the resource-aware smart contract semantics,
instrumented with gas consumption, is three-fold.
First, paying for gas at the moment of proposing the transaction does
not allow the emitter to waste other parties' (aka \emph{miners})
computational power by requiring them to perform a lot of worthless
intensive work.
Second, gas fees disincentivize users to consume too much of
replicated \emph{storage}, which is a valuable resource in a
blockchain-based consensus system.
Finally, such a semantics puts a cap on the number of computations
that a transaction can execute, hence prevents attacks based on
non-terminating executions (which could otherwise, \emph{e.g.}\xspace, make all
miners loop forever).
In general, the gas-aware operational semantics of EVM has introduced novel
challenges \emph{wrt.}\xspace sound static reasoning about resource consumption,
correctness, and security of replicated computations:
(1)\label{ch:a} While the EVM specification~\cite{yellow} provides the
precise gas consumption of the low-level operations, most of the smart
contracts are written in high-level languages, such as
\plname{Solidity}~\cite{solidity} or \plname{Vyper}~\cite{vyper}.
%
The translation of the high-level language constructs to the low-level
ones makes static estimation of runtime gas bounds challenging (as we
will see throughout this paper), and is implemented in an
\emph{ad-hoc} way by state-of-the-art compilers, which are only able
to give constant gas bounds, returning $\infty$ otherwise.
(2)\label{ch:b} As noted in~\cite{madmax}, the Ethereum safety
recommendations~\cite{SafetyWiki} discourage writing smart contracts
whose gas consumption depends on the size of the data they store
(i.e., the \emph{contract state}), on the size of their functions'
inputs, or on the current state of the
blockchain. However, according to our experiments, almost 10\% of the
functions we have analyzed do.
%
The inability to estimate those dependencies, and the lack of analysis
tools, leads to design mistakes, which make a contract unsafe to run
or prone to exploits.
%
For instance, a contract whose state size exceeds a certain limit can
be made forever \emph{stuck}, unable to perform any operation
within a reasonable gas bound. Such vulnerabilities have been
recognized before, but only discovered by means of unsound,
pattern-based analysis~\cite{madmax}.
In this paper, we address these challenges in a principled way by
developing \tname{Traductor}, a \emph{Gas-Aware Smart contracT Analysis
Platform}, which is, to the best of our knowledge, the first
automatic gas analyzer for smart contracts.
\tname{Traductor} takes as input a smart contract provided in \plname{Solidity} source
code~\cite{solidity}, or in low-level (possibly
decompiled~\cite{porosity}) EVM code, and automatically infers an
upper bound on the gas consumption for each of its public functions.
The upper bounds that \tname{Traductor}
infers are given in terms of the sizes of the input parameters of the
functions, the contract state, and/or the blockchain data that the gas
consumption depends upon (e.g., the \emph{Ether} value).
The inference of gas requires complex transformation and analysis
processes on the code that include: (1) construction of the
control-flow graphs (CFGs), (2) decompilation from low-level code to a
higher-level representation, (3) inference of size relations, (4)
generation of gas equations, and (5) solving the equations into
closed-form gas bounds. Therefore, building an automatic gas analyzer
from EVM code requires a daunting implementation effort that has been
possible thanks to the availability of a number of existing
open-source tools that we have succeeded in extending and putting together in
the \tname{Traductor} system. In particular, an extension of the tool
\tname{Oyente}~\cite{oyente} is used for (1), an improved representation of
\tname{EthIR} \cite{AlbertGLRS18} is used for (2), an adaptation of the size
analyzer of \tname{Saco} \cite{AlbertAFGGMPR14} is used to infer the size
relations, and the \tname{Pubs} \cite{pubs} solver for (5).
The most challenging aspect in the design of \tname{Traductor} has been the
approximation of the EVM gas model (which is formally specified in
\cite{yellow}) that is required to produce the gas equations in step
(4). This is because the EVM gas model is highly complex and
unconventional. The gas consumption of each instruction has two parts:
(i) the \emph{memory gas cost}: if the instruction accesses a location
in memory which is beyond the previously accessed locations (known as
\emph{active} memory \cite{yellow}), it pays an amount of gas proportional to the
distance of the accessed location. (ii) The second part, the
\emph{opcode gas cost}, is related to the bytecode instruction itself.
This component is also complex to infer because it is not always a
constant amount; in some cases it may depend on the current global
and local state.
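The memory-cost component just described can be sketched as follows. This is a simplified model, not part of \tname{Traductor}; it follows the Yellow Paper's quadratic active-memory formula $C_{mem}(a)=3a+\lfloor a^2/512\rfloor$ (with $a$ the highest accessed memory word) and assumes a 32-byte access:

```python
def c_mem(words: int) -> int:
    # Yellow Paper memory cost for `words` active 32-byte words.
    return 3 * words + words * words // 512

def memory_expansion_cost(active_words: int, accessed_byte: int) -> int:
    # Assumes a 32-byte access starting at `accessed_byte`.
    needed = (accessed_byte + 32 + 31) // 32  # word covering the access
    if needed <= active_words:
        return 0  # within active memory: no extra charge
    return c_mem(needed) - c_mem(active_words)

assert memory_expansion_cost(0, 0) == 3   # first word costs 3 gas
assert memory_expansion_cost(1, 0) == 0   # already active: free
```

The quadratic term is what makes this component non-constant: the charge for one and the same instruction depends on how much memory was touched before it.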
\tname{Traductor} has a wide range of applications for contract developers,
attackers and owners, including the detection of vulnerabilities,
debugging and verification/certification of gas usage. As contract
developers and owners, having a precise resource analyzer allows
answering the following query about a specific smart contract: ``what
is the amount of gas necessary to \emph{safely} (i.e., without an
out-of-gas exception) reach a certain execution point in the contract
code, or to execute a function?''
This can be used for debugging, verifying/certifying a safe amount of
gas for running, as well as ensuring progress conditions.
Besides, \tname{Traductor} allows us to calculate the safe amount of gas
that one should provide to an external data source (e.g., contracts
using Oraclize~\cite{oraclize}) in order to enable a successful
callback.
As an attacker,
one might estimate how much \emph{Ether}
(in gas) an adversary has to pour into a contract in order to execute
a DoS attack.
We note that such an attack may, however, be economically impractical.
Finally, our experimental evaluation shows that
\tname{Traductor} is an effective and efficient tool: we have analyzed more
than 29,000 real smart contracts pulled from \textsf{etherscan.io}~\cite{etherscanSourceCodes},
that in total contain 258,541 public functions, and inferred gas
bounds for 91.85\% of them in 342.54 hours. \tname{Traductor} can be used from
a web interface at \url{https://costa.fdi.ucm.es/gastap}.
\subsection{\tname{Oyente*}: from EVM to a complete CFG}\label{sec:oyente}
The first component of our tool, \tname{Oyente*}, is an extension of the
open-source tool \tname{Oyente}~\cite{oyente}, a symbolic
execution tool developed to analyze Ethereum smart contracts and find
potential security bugs.
As \tname{Oyente}'s aim is symbolic execution rather than generating a
complete CFG, some extensions are needed to this end. The \tname{EthIR}
framework~\cite{AlbertGLRS18} had already extended \tname{Oyente} for two
purposes: (1) to recover the list of addresses for unconditional
blocks with more than one possible jump address (as \tname{Oyente} originally
only kept the last processed one), and (2) to add more explicit
information to the CFG: jump operations are decorated with the jumping
address discovered by \tname{Oyente}, and other operations,
like store or load, are also decorated with the address on which they operate:
the number of the state variable for operations on
storage, and the memory location for operations on memory if \tname{Oyente} is able to discover it (or with ``?''
otherwise).
However, \tname{EthIR}'s extension still produced incomplete CFGs. \tname{Oyente*}
further extends it to handle a more subtle source of incompleteness in
the generated CFG that comes directly from the fact that \tname{Oyente} is
a symbolic execution engine. For symbolic execution, a bound on the
number of times a loop is iterated is given. Hence it may easily
happen that some (feasible) paths are not reached in the exploration
within this bound and they are lost.
To solve this problem, we have
modified \tname{Oyente} to remove the execution bound (as well as other
checks that were only used for their particular applications), and
have added information to the path under analysis. Namely, every time
a new jump is found, we check if the jumping point is already present
in the path. In such a case, an edge to that point is added and the
exploration of the trace is
stopped. As a side effect, we not only produce a complete CFG, but
also avoid much exploration that is useless for our purposes, which results in an important
efficiency gain.
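The loop-closing rule just described can be sketched in a few lines of Python. This is a simplified model with hypothetical names, not \tname{Oyente*}'s actual code: when a jump target already occurs on the current path, a back-edge is recorded and the trace is not explored further.

```python
def explore(block, path, cfg, successors):
    # Depth-first exploration that closes loops instead of unrolling them.
    for target in successors(block):
        cfg.setdefault(block, set()).add(target)
        if target in path:
            continue  # back-edge recorded; stop exploring this trace
        explore(target, path + [target], cfg, successors)
    return cfg

# Toy CFG: 0 -> 1 -> 2 -> 1 (a loop) and 2 -> 3 (the exit)
succ = {0: [1], 1: [2], 2: [1, 3], 3: []}.__getitem__
cfg = explore(0, [0], {}, succ)
assert cfg[2] == {1, 3}   # both the back-edge and the exit edge are kept
```

Every edge is added to the CFG before the visited-on-path check, so the result is complete while each cycle is traversed at most once per path.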
When applying \tname{Oyente*}, our extended/modified version of \tname{Oyente},
we obtain a \emph{complete} CFG, with the additional annotations
already provided by~\cite{AlbertGLRS18}.
\section{Related Work and Conclusions}
Analysis of Ethereum smart contracts for possible safety violations
and security vulnerabilities is a popular topic that has received
a lot of attention recently, with numerous tools developed, leveraging
techniques based on symbolic
execution~\cite{Luu-al:CCS16,GrossmanAGMRSZ18,Nikolic-al:Maian,KruppR18,Kalra-al:NDSS18,TsankovDDGBV18},
SMT solving~\cite{Marescotti-al:ISoLA18,Kolluri-al:laws}, and
certified
programming~\cite{Bhargavan-al:PLAS16,Grishchenko-al:POST18,Amani-al:CPP18},
with only a small fraction of them focusing on analyzing gas
consumption.
The \tname{GASPER} tool identifies gas-costly programming
patterns~\cite{ChenLLZ17}, which can be optimized to consume less gas. To
do so, it relies on matching specific control-flow patterns, SMT
solvers and symbolic computation, which makes its analysis neither
sound nor complete.
In a similar vein, the recent work by Grech~\emph{et~al.}\xspace~\cite{madmax}
identifies a number of classes of gas-focused vulnerabilities, and
provides \tname{MadMax}, a static analysis that also works on
decompiled EVM bytecode, combining techniques from data-flow analysis
with context-sensitive flow analysis and modeling of the memory
layout.
In its techniques, \tname{MadMax} differs from \tname{Traductor}, as it
focuses on identifying control- and data-flow patterns inherent for
the gas-related vulnerabilities, thus working as a bug-finder rather
than a complexity analyzer. Since deriving accurate worst-case
complexity bounds is not a goal of either \tname{GASPER} or
\tname{MadMax}, they are unsuitable for tackling
challenge~\ref{ch:a}, which we have posed in the introduction.
In concurrent work, Marescotti~\emph{et~al.}\xspace identified three cases in which
computing gas consumption can help make Ethereum more efficient:
(a) preventing errors that cause contracts to get stuck with an
\emph{out-of-gas} exception, (b) placing the right price on the gas
unit, and (c)~recognizing semantically-equivalent smart
contracts~\cite{Marescotti-al:ISoLA18}.
They propose a methodology, based on the notion of the
so-called \emph{gas consumption paths} (GCPs) to estimate the
worst-case gas consumption using techniques from symbolic bounded
model checking~\cite{Biere-al:TACAS99}. Their approach is based on
symbolically enumerating all execution paths and unwinding loops to a
limit.
Instead, using resource analysis, \tname{Traductor} infers the maximal number
of iterations for loops and generates accurate gas bounds which are
valid for any possible execution of the function and not only for the
unwound paths.
Besides, the approach by Marescotti~\emph{et~al.}\xspace has not been implemented in
the context of EVM and has not been evaluated on real-world smart
contracts as ours has.
\vspace{-0.3cm}
\paragraph{Conclusions.~}
Automated static reasoning about resource consumption is
critical for developing safe and secure blockchain-based replicated
computations, managing billions of dollars worth of virtual currency.
In this work, we employed state-of-the art techniques in resource
analysis, showing that such reasoning is feasible for Ethereum, where
it can be used at scale not only for detecting vulnerabilities, but
also for verification/certification of existing smart contracts.
\subsection{PUBS solver: from equations to closed-form bounds}\label{sec:pubsc-solv-from}
The last step of the gas bounds inference is the generation of a
\emph{closed-form gas upper bound}, i.e., a solution for the GE as a
non-recursive expression. As the GEs we have generated have the
standard form of cost relation systems, they can be solved using
off-the-shelf solvers, such as \tname{Pubs} \cite{pubs} or \tname{Cofloco}
\cite{cofloco}, without requiring any modification. These systems are
able to find polynomial, logarithmic and exponential solutions for
cost relations in a fully automatic way. The gas bounds computed for
all public functions of \textsf{EthereumPot} using \tname{Pubs} can be found in
Table~\ref{fig:experiments}; note that they are parametric on
different state variables, input and blockchain data.
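For illustration only (a schematic system, not output of the tool), a loop that pays $c_1$ gas before entering, $c_2$ per iteration and $c_3$ on exit gives rise to cost relations of the form

```latex
\begin{align*}
C(n) &= c_1 + L(n) &&\\
L(n) &= c_3          && \{n \le 0\}\\
L(n) &= c_2 + L(n-1) && \{n > 0\}
\end{align*}
```

which a solver such as \tname{Pubs} turns into the closed-form upper bound $C(n) \le c_1 + c_3 + c_2 \cdot \max(n, 0)$.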
\section{Description of \tname{Traductor} Components}
\begin{figure}\vspace{-1cm}
\includegraphics[scale=0.4]{diagram.pdf} \vspace{-0.65cm}
\caption{Architecture of \tname{Traductor} (CFG: control flow graph; RBR: rule-based representation;
SR: size-relations; GE: gas equations)} \label{tool}
\end{figure}
Figure~\ref{tool} depicts the architecture of \tname{Traductor}.
In order to describe all components of our tool,
we use as
running example a simplified version (without calls to the external service Oraclize and the authenticity proof verifier)
of the
\code{EthereumPot} contract~\cite{etherpot} that implements a simple
lottery.
During a game, players
call a method \code{joinPot} to buy lottery
tickets; each player's address is appended to an array
\code{addresses} of current players, and the number of tickets is
appended to an array \code{slots}, both having variable length.
After some time has elapsed, anyone
can call \code{rewardWinner} which calls the \code{Oraclize}
service to obtain a random number for the winning ticket. If all goes
according to plan, the \code{Oraclize} service then responds by
calling the \code{__callback} method with this random number and the authenticity proof as arguments.
A new instance of the game is then started, and the winner
is allowed to withdraw her balance using a \code{withdraw} method. In
Fig.~\ref{fig:solevm}, we display an excerpt of the \textsf{Solidity} code
(including the public function \code{findWinner}) and a
fragment of the EVM code produced by the compiler. The
\textsf{Solidity} source code is shown for
readability, as \tname{Traductor} analyzes the EVM code directly (if it
receives the source, it first compiles it to obtain the EVM code).
\begin{figure}[t]
\begin{minipage}{0.74\textwidth}
\begin{center}
\input pot-ex.tex
\end{center}
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{0.23\textwidth}
\input pot-ex-evm.tex
\end{minipage} \vspace{-0.4cm}
\caption{Excerpt of \textsf{Solidity} code for \code{EthereumPot} contract (left), and \hspace{2cm} $\mbox{ }$ \hspace*{1cm} fragment of EVM code for
function \code{__callback} (right)} \vspace*{-0.1cm} \vspace{-0.1cm}
\label{fig:solevm}
\end{figure}
\input{oyente}
\input{ethir}
\subsection{SACO: size relations for EVM smart contracts} \label{size}
In the next step, we generate \emph{size relations} (SR) from the RBR
using the \tname{Saco} tool~\cite{AlbertAFGGMPR14}. SR are equations and
inequations that state how the sizes of data change in the rules \cite{DBLP:conf/popl/CousotH78}.
This information is obtained by analyzing how each instruction of the
rules modifies the sizes of the data it uses, and propagating this
information as usual in dataflow analysis. SR are needed to build
the gas equations and then generate gas bounds in the last step of the
process. The size analysis of \tname{Saco} has been slightly modified to
ignore the $nop$ instructions. Besides, before sending the rules to \tname{Saco}, we
replace the instructions that cannot be handled (e.g., bit-wise
operations, hashes) by assignments with fresh variables (to represent
an unknown value).
Apart from this, we are able to adjust our representation to
make use of the approach followed by \tname{Saco}, which is based on
abstracting data (structures) to their sizes. For integer variables,
the size abstraction corresponds to their value and thus it works
directly.
However, a
language-specific aspect of this step is the handling of data
structures like array, string or bytes (an array of byte). In the
case of array variables, \tname{Saco}'s size analysis works directly, as in EVM the
slot assigned to the variable indeed contains its length (and the
address where the array content starts is obtained with the hash of the slot
address).
\vspace{-0.1cm}
\begin{example}\label{ex:size}
Consider the following SR (those in brackets) generated for the rules
\emph{jump1619} and
\emph{block1731}:\\
${\it jump1619}(\sot{10},\svot,\lvot, \blot) = {\it
block1633}(\sot{8},\svot,\lvot, \blot) \{\svar{10}<\svar{9}\}$\\
${\it block1731}(\sot{8}, \svot,\lvot, \blot) = 41 + {\it
block1619}(\spvar{8},\sot{7},\svot,\lvot, \blot)
\{\spvar{8}=1+\svar{8}\}$\\
The size relations for the \emph{jump1619} function involve the
\code{slots} array length ($g_3$ stored in $s_9$) and the local
variable \code{i} (in $s_8$ and copied to $s_{10}$). It corresponds
to the guard of the \code{for} loop in function \code{findWinner}
that compares \code{i} and \code{slots.length} and either exits the
loop or iterates (and hence consume different amount of gas). The
size relation on $s_8$ for \emph{block1731} corresponds to the size
increase in the loop counter.
\end{example}
\vspace{-0.1cm}
However, for bytes and
string variables it is more challenging, as the way they are stored depends on
their actual sizes. Roughly, if they are short (at most $31$ bytes
long), their data is stored in the same slot together with their
length. Otherwise, the slot contains the length (and the address where
the string or bytes content starts is obtained as for arrays). Our
approach to handle this issue is as follows.
In the presence of bytes or string, we can find in the rules
of the RBR a particular sequence of instructions (which is always the
same) that starts by pushing the contents of the string or bytes variable on the top of the stack, obtains its length, and leaves it stored on the top of the
stack (at the same position).
Therefore, to avoid losing information, since \tname{Saco} abstracts the
data structures to their sizes, every time we find this pattern of
instructions applied to a string or bytes variable, we just remove it
from the RBR (keeping the nops to account for their gas). Importantly, since the top of the stack then
indeed holds the size,
under \tname{Saco}'s abstraction it is equal to the string or bytes variable.
More precisely, assuming that we have placed the contents of the string
or bytes variable on the top of the stack, which is $\svar{i}$, the
transformation applied is the following:
$$
{\footnotesize\begin{array}{c}
\begin{array}{|l|}
\hline
\svar{i+1} = 1, nop(PUSH1), \svar{i+2} = \svar{i}, nop(DUP2),
\svar{i+3} = 1, nop(PUSH1),\\
\svar{i+2} = and(\svar{i+3}, \svar{i+2}), nop(AND),
\svar{i+2}= eq(\svar{i+2}, 0), nop(ISZERO), \\
\svar{i+3} = 256, nop(PUSH2), \svar{i+2} = \svar{i+3}*\svar{i+2}, nop(MUL), \svar{i+1} = \svar{i+2}-\svar{i+1}, ~~~\\
nop(SUB) \svar{i} = and(\svar{i+1}, \svar{i}), nop(AND), \svar{i+1} = 2, nop(PUSH1), \\
\svar{i+2} = \svar{i}, \svar{i} = \svar{i+1}, \svar{i+1} = \svar{i+2}, nop(SWAP1),
\svar{i} = \svar{i+1}/\svar{i}, nop(DIV) \\
\hline
\end{array} \\
\Downarrow \\
\begin{array}{|l|}
\hline
nop(PUSH1), nop(DUP2),nop(PUSH1), nop(AND), nop(ISZERO), nop(PUSH2), \\
nop(MUL), nop(SUB), nop(AND), nop(PUSH1), nop(SWAP1), nop(DIV) \\
\hline
\end{array}
\end{array}
}$$
Since the involved instructions include
bit-wise operations, among others, and, as said, the value of the stack variable
becomes unknown, without this transformation the relation between the stack variable and
the length of the string or bytes would be lost and, as a result, the tool
might fail to provide a bound on the gas consumption. This transformation is
applied when possible and is needed, e.g., to infer bounds
for the functions \code{getPlayers} and \code{getSlots} (see
Table~\ref{fig:large-experiments}).
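The removed instruction sequence computes the length encoded in the slot word. A minimal Python model of that computation (our reading of the sequence, relying on \plname{Solidity}'s storage encoding, in which a short string of length $\ell\le 31$ stores $2\ell$ in the low byte of its slot and a long one stores $2\ell+1$):

```python
def length_from_slot(word: int) -> int:
    # Mirrors the sequence PUSH1 1, DUP2, PUSH1 1, AND, ISZERO,
    # PUSH2 256, MUL, SUB, AND, PUSH1 2, SWAP1, DIV:
    #   mask = 256 * iszero(word & 1) - 1   (mod 2^256)
    #   length = (word & mask) / 2
    mask = (256 * (0 if word & 1 else 1) - 1) % 2**256
    return (word & mask) // 2

# short string "abc": data in the high bytes, 2*3 = 6 in the low byte
assert length_from_slot((0x616263 << 232) | 6) == 3
# long string of length 40: the slot stores 2*40 + 1
assert length_from_slot(2 * 40 + 1) == 40
```

For short strings the mask keeps only the low byte; for long strings it keeps the whole word, and the final division by two recovers the length in both cases.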
More or less attached to the name of Kurt Mahler (1903-1988), there are at least two celebrated unsolved problems:
\begin{itemize}
\item Lehmer's problem on algebraic numbers
\item The lower bound for the volume product of convex bodies.
\end{itemize}
The celebrity of these two problems comes from the fact that they are both very natural and easy to state, yet still unsolved, and that for almost a century many mathematicians have given partial results, equivalent statements and generalizations. Interesting attempts to resolve these problems still appear every now and then, producing connections between these questions and many areas of modern mathematics.
Although we shall be interested here in the second one, for the curiosity of the reader we summarize the first one. Let $\alpha$ be an algebraic integer and $P$ be the minimal polynomial of $\alpha$, that is the polynomial with integer coefficients with the smallest degree $d$ such that the coefficient of $x^d$ is $1$ and $P(\alpha)=0$. Let $\alpha_1=\alpha$ and $\alpha_2,\dots, \alpha_d\in \mathbb C$ be the other roots of $P$. The {\it Mahler measure} of $\alpha$ is the number $M(\alpha):= \prod_{k=1}^d \max(|\alpha_k |,1)$. By a classical result of Kronecker, if
$M(\alpha)=1$, then $\alpha$ is a root of unity. But how close can $M(\alpha)$ be to $1$ when $\alpha$ is not a root of unity? Is there a constant $c>1$, independent of the degree of $\alpha$, such that $M(\alpha)>1$
implies that $M(\alpha)>c$? Derrick Henry Lehmer conjectured in 1933 \cite{Leh} that the answer to this question is positive (we note that many estimates were given for $c$ depending on the degree of $\alpha$) and Mahler contributed to it, at least, by defining the measure to which his name was given \cite{Sm, VG}.
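As a quick numerical aside (ours, not from the text): the classical candidate for the constant $c$ comes from Lehmer's degree-$10$ polynomial $x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1$. It is well known to be a Salem polynomial, with a single real root $\lambda>1$ and all other roots of modulus at most $1$, so $M(\alpha)=\lambda$ and $\lambda$ can be found by bisection:

```python
def p(x):
    # Lehmer's polynomial, evaluated by Horner's rule.
    v = 0.0
    for c in [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]:
        v = v * x + c
    return v

lo, hi = 1.0, 2.0          # p(1) < 0 < p(2): one sign change on (1, 2)
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
mahler = (lo + hi) / 2     # Lehmer's number, approximately 1.17628
```

No algebraic number with $1 < M(\alpha) < 1.17628\ldots$ is currently known.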
We shall be mainly concerned here with the second problem: Let $K$ be a convex symmetric body in $\mathbb R^n$, which is the unit ball of an $n$-dimensional normed space $E$. Let $K^*$ be the polar body of $K$, which is the unit ball of $E^*$, the dual of $E$. What are the bounds for the {\it volume product} $\mathcal P(E)= \mathcal P(K):=\vol(K)\vol(K^*)$? It appears that the best upper bounds have been known for a long time (Blaschke 1917 for $n=2,3$, \cite{Bl1}, Santal\'o 1949, $n\ge 4$ \cite{San2}), but finding the exact lower bounds is still an open conjecture, although the asymptotic behavior of $\min\{\mathcal P(E); E \mbox{ $n$-dimensional normed space}\}$ was discovered almost 40 years ago by Bourgain and V. Milman \cite{BM}. This problem has a lot of generalizations and specializations. One can ask a series of very natural questions, including: What happens if $K$ is no longer centrally symmetric? What happens for special classes of convex bodies? Is there a functional version of the volume product? What are the possible applications and connections inside and outside convex geometry? We must also note that many new methods, in particular from functional analysis, harmonic analysis, topology, differential geometry and probability, were used to prove properties of the volume product and to attack different cases of this question. We shall try here to explain some of them and summarize the others.
The paper is structured in the following way. We introduce the volume product and prove its basic properties in Section \ref{sec1980}. In Section \ref{secshadow}, we describe the methods of shadow systems, which turn out to be essential in the study of the bounds for the volume product. In Section \ref{BS}, we discuss the upper bound for the volume product - the Blaschke-Santal\'o inequality. We present different proofs, including a proof via Steiner symmetrizations and a harmonic analysis approach; we also discuss a number of extensions of this inequality. In Section \ref{MC}, we discuss the conjectured lower bound - the Mahler conjecture. We present a solution in a number of partial cases, including the case of zonoids, of unconditional bodies and of dimension $2$, and a very recent solution for symmetric $3$-dimensional bodies. We also present here an approach to stability results for both upper and lower bounds. Section \ref{AE} is dedicated to the asymptotic lower estimates for the volume product, in particular to the Bourgain-Milman inequality. Here, we extend our presentation to the harmonic analytic and complex analytic approaches to the volume product. Section~\ref{func} is dedicated to the functional inequalities related to the volume product, with a special connection to transport inequalities. In Section \ref{secMB}, we discuss the generalization of the volume product to the case of many functions and bodies. Finally, in Section \ref{linsec}, we present a sample of connections of the bounds for the volume product to other inequalities, including the slicing conjecture, Viterbo's conjecture, applications to the geometry of numbers and isosystolic inequalities.
We refer the reader to \cite{AGM1, AGM2, Ga, Ga2, Gru, Kol, Pi, Sc, Tom} for much additional information on convex bodies, volume and mixed volume, duality and other core objects in analysis, geometry and convexity used in this survey.
\vspace{.1in}
\noindent
{\bf Acknowledgments.}
We are grateful to Richard Gardner, Dmitry Faifman, Dylan Langharst, Erwin Lutwak and Shlomo Reisner
for many corrections, valuable discussions and suggestions.
\subsection{Notations and results before 1980} \label{sec1980}
A convex body $K$ in $\mathbb R^n$ is a convex compact subset of $\mathbb R^n$ with nonempty interior denoted $\inte(K)$. We say that $L\subset \mathbb R^n$ is centrally symmetric if $L=-L$. Let $K$ be a convex body in $\mathbb R^n$ such that $0\in \inte(K)$; for $x\in \mathbb R^n$, we define $$\|x\|_K=\inf\{t>0; \,\, tx\in K\}$$
to be the {\it gauge} of $K$; in particular, when $K$ is a convex symmetric body, $x\mapsto \|x\|_K$ is the norm on $\mathbb R^n$ for which $K$ is the closed unit ball. We endow $\mathbb R^n$ with its natural scalar product, denoted $\langle\ , \ \rangle$, and the associated Euclidean norm, denoted $|\cdot |$; the Euclidean ball of radius one is denoted $B_2^n$. The canonical Lebesgue measure of a Borel set $A\subset \mathbb R^n$ is denoted by $\vol(A)$.
Let $K$ be a convex body. If $0\in \inte(K)$, the {\it polar body} $K^*$ is defined by
\begin{equation}\label{dualz}
K^*=\{y\in \mathbb R^n; \langle x,y\rangle \le 1 \hbox{ for all } x\in K\}.
\end{equation}
Then $K^*$ is also a convex body such that $0\in \inte(K^*)$ and if $T:\mathbb R^n\to \mathbb R^n$ is a linear isomorphism, one has $$\big(T(K)\big)^*= (T^*)^{-1}(K^*),$$ where $T^*$ is the adjoint of $T$. More generally, for a convex body $K$ and $z\in \inte(K)$, one defines {\it the polar body $K^z$ of $K$ with respect to $z$} by
$$K^z=(K-z)^* +z.$$
The celebrated {\it bipolar theorem} asserts that if $0\in \inte(K)$, then $$(K^*)^*=K \hbox{ and consequently that } (K^z)^z=K$$ for any convex body $K$ and any $z\in \inte(K)$.
Let $h_K(y)=\max_ {x\in K}\langle x,y\rangle$ be the {\it support function} of $K$. Note that $K^*=\{h_K\le 1\},$ i.e. $h_K(x)=\|x\|_{K^*}$, when $0\in \inte(K).$ Moreover,
if $ z\in \inte(K)$, $$K^z= z+\{y \in \mathbb R^n; h_K(y)-\langle z,y\rangle \le 1\}$$ and thus
$$\vol(K^z)=\ \int_{K^*} \frac{1}{(1- \langle z,y\rangle)^{n+1}} dy.
$$
It follows that the map $z\mapsto\vol(K^z)$ is a strictly convex positive $C^{\infty}$ function on $\inte(K)$.
A small effort is enough to prove that $\vol(K^z)\to +\infty$ when $z$ approaches the boundary of $K$.
Consider $\theta \in S^{n-1}$;
by Brunn's theorem, the function $f_\theta:[-h_{K^*}(-\theta), h_{K^*}(\theta)]\to [0, \infty)$ defined by
$f_{\theta}(t):=\vol_{n-1} ( \{y\in K^*; \langle y,\theta\rangle=t\})$ satisfies that $f_\theta^{1/(n-1)}$ is concave. Hence, one has $f_\theta(t)\ge f_\theta(0) (1-h_{K^*}^{-1}(\theta)t )^{n-1}$ for $0\le t\le h_{K^*}(\theta)$. Let
$
r_K(\theta) =\max\{a\ge 0: a\theta \in K\}
$
be the radial function of $K$. Then $r_K(\theta)=h_{K^*}^{-1}(\theta)$
and letting $z=s\theta$ for $0\le s<r_K(\theta)$, we get
$$
\vol(K^z)
=\int_{K^*} \frac{1}{(1- \langle z,y\rangle)^{n+1}} dy
=\int_{-h_{K^*}(-\theta) }^{ h_{K^*}(\theta)} \frac{f_{\theta}(t)}{(1-st)^{n+1} }dt$$
$$\ge f_{\theta}(0)\int_{0}^{ h_{K^*}(\theta) }\frac{ (1-th^{-1}_{K^*}(\theta) )^{n-1} }{ (1-st)^{n+1} } dt
= \frac{f_{\theta}(0)}{n(r_{K}(\theta)-s) }
\ge\frac{ \min\limits_{\theta\in S^{n-1}} f_{\theta}(0)}{n(r_K(\theta)-s) }\to +\infty,$$
when $s\to r_{K}(\theta)$, that is $z\to \partial K$.
Consequently, the function $z\mapsto \vol(K^z)$ reaches its minimum on $\inte(K)$ at a unique point $s(K)$, called the {\it Santal\'o point} of $K$. Computing its differential, we see that $s(K)$ is characterized by the fact that the centroid (center of mass) of $K^{s(K)}$ is $s(K)$ (see \cite{MSW}). For $t>0$ big enough,
the sets $\{z\in \inte(K); \vol(K^z) \le t\}$, called {\it Santal\'o regions of $K$}, were studied in \cite{MW1} (see also \cite{MW2}).
\begin{deff}
The Santal\'o point of a convex body $K$, denoted $s(K)$, is the unique point in $\inte(K)$ such that
$$\vol(K^{s(K)})=\min_{z\in \inte(K)} \vol(K^z).$$
The volume product of $K$ is
$$\mathcal P(K):= \vol(K) \vol(K^{s(K)}).$$
\end{deff}
\noindent We mention the following facts:
\begin{itemize}
\item If $K$ is centrally symmetric, then so is $K^*$, and one has $s(K)=0= s(K^*)$ and $\mathcal P(K)=\mathcal P(K^*)$. One has always $\mathcal P( K^{s(K) } )\le \mathcal P(K)$ and when $K$ is not centrally symmetric, it may happen that $\mathcal P( K^{s(K) } )<\mathcal P(K)$.
\item It is easy to see that $s(K)$ is the unique point of $\inte(K)$ such that $0$ is the center of mass of $\big(K-s(K)\big)^*$ or $s(K)$ is the center of mass of $K^{s(K)}$.
\item The map $K\mapsto \mathcal P(K)$ is affine invariant, that is if $A:\mathbb R^n\to \mathbb R^n$ is a one-to-one affine transform, then $\mathcal P(AK)=\mathcal P(K)$. This indicates that if $E$ is an $n$-dimensional normed space with a closed unit ball $B_E$ and if $\phi:E\to\mathbb R^n$ is a one-to-one linear mapping, then
$\mathcal P(E):=\mathcal P(\phi(B_E))$ does not depend on $\phi$. In particular, this property makes $\mathcal P(E)$ an important tool in the local theory of normed spaces (see \cite{Pi, LMi, Tom}).
\item Let $K$ be a convex body such that $0\in \inte(K)$ and let $E$ be a linear subspace of $\mathbb R^n$. Then, $K\cap E$ is a convex body in $E$, endowed with the scalar product inherited from the Euclidean structure of $\mathbb R^n$, and $(K\cap E)^*$ (with polarity inside $E$) can be identified with $P_E (K^*)$, where $P_E$ is the orthogonal projection from $\mathbb R^n$ onto $E$. Consequently, when $K$ is centrally symmetric, $\mathcal P(K\cap E)=\vol_E(K\cap E)\vol_E(P_E (K^*))$, where $\vol_E$ denotes the Lebesgue measure on $E$.
\end{itemize}
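The affine invariance in the list above can be checked numerically in the plane (our toy illustration, not part of the text): for a polygon with vertices listed in counterclockwise order and $0$ in its interior, each vertex $v_i$ yields a facet $\langle v_i, y\rangle \le 1$ of the polar, whose vertices solve consecutive pairs of the corresponding equalities.

```python
def polar_vertices(verts):
    # Solve <v_i, y> = 1 and <v_{i+1}, y> = 1 for consecutive vertices.
    out = []
    m = len(verts)
    for i in range(m):
        (a, b), (c, d) = verts[i], verts[(i + 1) % m]
        det = a * d - b * c
        out.append(((d - b) / det, (a - c) / det))
    return out

def area(poly):  # shoelace formula
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                   - poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
image = [(2 * x + y, y) for (x, y) in square]   # a linear image of the square

for K in (square, image):
    # P(K) = vol(K) * vol(K^*) = 4^2/2! = 8 for the square, and the
    # same value for its linear image, by affine invariance.
    assert abs(area(K) * area(polar_vertices(K)) - 8) < 1e-9
```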
In view of these facts, a natural question is to compute, for fixed $n$, an upper and a lower bound of $\mathcal P(K)$ for all convex bodies $K$ in $\mathbb R^n$. The existence of these bounds follows from the affine invariance, which allows one to consider the bounds of $K\mapsto \mathcal P(K)$ on a compact subset of the set of all convex bodies endowed with the Hausdorff metric. Indeed, if $B_2^n$ is the Euclidean ball, by John's theorem, one may reduce to studying $\mathcal P(K)$ for $B_2^n\subset K\subset n B_2^n$ in the general case or
$B_2^n\subset K\subset \sqrt{n} B_2^n$ when $K$ is centrally symmetric, which already gives rough but concrete estimates for these bounds.
It seems that the first one who dealt with this problem was Wilhelm Blaschke (1885-1962), who proved that for $n=2$ and $n=3$, $\mathcal P(K) \le \mathcal P({\mathcal E})$, where ${\mathcal E}$ is any ellipsoid \cite{Bl1}, \cite{Bl2}. Then, Mahler gave exact lower bounds for $\mathcal P(K)$ for $n=2$, both in the general case and in the centrally symmetric case \cite{Ma1, Ma2}. In 1949, Luis Santal\'o (1911-2001) \cite{San2} extended the results of Blaschke to all $n$ with the same tools. The case of equality in the upper bound was only proved much later, in 1978, by Petty \cite{Pe4} (the argument given in \cite{San2} was not quite valid).
\begin{thm} {\bf (Blaschke-Santal\'o-Petty)} If $K\subset \mathbb R^n$ is a convex body, then
\begin{equation}\label{BSE}
\mathcal P(K)\le \mathcal P(B_2^n),
\end{equation}
with
equality if and only if $K$ is an ellipsoid.
\end{thm}
Bambah \cite{B} gave rough lower bounds for $\mathcal P(K)$. Guggenheimer \cite{Gu1, Gu2} believed at some point that he had a complete proof of the exact lower bound
$\mathcal P(K)\ge \mathcal P([-1,1]^n)=\frac{4^n}{n!}$ for $K$ centrally symmetric, but his proof turned out to be incorrect.
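The conjectured value $\mathcal P([-1,1]^n)=4^n/n!$ admits a quick Monte-Carlo sanity check (our illustration, not from the text): the polar of the cube $[-1,1]^n$ is the cross-polytope $B_1^n=\{x: \sum_i |x_i|\le 1\}$, whose volume can be estimated by sampling inside the cube.

```python
import random
from math import factorial

random.seed(0)

def cross_polytope_volume(n, samples=200_000):
    # Estimate vol(B_1^n) as a fraction of the cube's volume 2^n.
    hits = sum(
        1 for _ in range(samples)
        if sum(abs(random.uniform(-1, 1)) for _ in range(n)) <= 1
    )
    return 2**n * hits / samples

for n in (2, 3):
    product = 2**n * cross_polytope_volume(n)   # vol(K) * vol(K^*)
    assert abs(product - 4**n / factorial(n)) < 0.15
```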
This was the situation in the 80's, when new insights on the problem were given by Saint-Raymond \cite{SR1}, Reisner \cite{Re1,Re2}, Gordon and Reisner
\cite{GR} and Bourgain-Milman \cite{BM}.
We conclude this section with an important tool in this theory:
\subsection{Shadow systems}\label{secshadow}
\begin{deff}\label{defshadow} A {\bf shadow system} along a direction $\theta \in S^{n-1}$ is a family of convex sets $K_t \subset \mathbb R^n$
which are defined by
$K_t = \conv( \{x+ta(x)\theta; x\in M\} )$
where $M$ is a bounded subset in $\mathbb R^n$, $a:M\to \mathbb R$ is a bounded function and $t\in I$, an interval of $\mathbb R$ (and where $\conv(A)$ denotes the closed convex hull of a set $A \subset \mathbb R^n)$.
\end{deff}
It may be observed that the classical Steiner symmetrization can be seen as a shadow system for which the volume of $K_t$ remains constant (see Remark~\ref{rk:steiner-shadow} below). The notion of shadow system was introduced by Rogers and Shephard \cite{RS,Sh2} and can be explained via an idea of Shephard in \cite{Sh2},
who pointed out that a shadow system of convex bodies in $\mathbb R^n$ can be seen as a family of
projections of an $(n+1)$-dimensional convex set onto some $n$-dimensional subspace of $\mathbb R^{n+1}$. Namely, let $e_1, e_2, \dots, e_{n+1}$ be an
orthonormal basis of $\mathbb R^{n+1}$. Let $M$ be, as before, a bounded subset of $\mathbb R^n$ (i.e. $M$ is contained in the linear span of $e_1, e_2, \dots, e_{n}$), let $a:M\to \mathbb R$ be a bounded function, $\theta\in S^{n-1}$ and $I$ an interval of $\mathbb R$. We define the $(n+1)$-dimensional convex set $\tilde{K}\subset \mathbb R^{n+1}$
by
$$
\tilde{K}= \conv\{x + a(x)e_{n+1}: x \in M\}.
$$
For each $t \in I$, let $P_t:\mathbb R^{n+1}\to \mathbb R^n$ be the projection from $\mathbb R^{n+1}$ onto $\mathbb R^n$ along the
direction $e_{n+1} - t \theta$. Then, $$P_t(\tilde{K})=
\conv( \{x+ta(x)\theta; x\in M\} )= K_t.$$ This interpretation permits one to see that
$t\mapsto \vol(K_t)$ is a convex function \cite{RS}. This result
is a powerful tool for obtaining geometric inequalities of isoperimetric type.
The following theorem connects shadow systems with the volume product. It was proved by Campi and Gronchi \cite{CG1} when the bodies $K_t$ are centrally symmetric and by Meyer and Reisner in the general case \cite{MR3} (see also \cite{FMZ}).
\begin{thm}\label{shadow}
Let $(K_t)_{t\in(a,b)}$ be a shadow system of convex bodies in $\mathbb R^n$. Then the function $t\mapsto \vol \big((K_t)^{s(K_t)}\big)^{-1}$ is convex on $(a,b)$.
\end{thm}
With the previous notations, if the $K_t$ are centrally symmetric, then $s(K_t)=0$ and thus
$$
(K_t)^{s(K_t)}=K_t^*=(P_t(\tilde{K}))^*.
$$
As discussed above, the polar of the orthogonal projection of a convex body onto a subspace $E$ is the section of its polar by $E$. But here $P_t$ is not an orthogonal projection, and we get $$(P_t(\tilde{K}))^*=P_{e_{n+1}^\perp}(\tilde{K}^*\cap (e_{n+1} - t \theta)^\bot),$$ where $P_{e_{n+1}^\perp}$ is the orthogonal projection from $\mathbb R^{n+1}$ onto $\mathbb R^n=e_{n+1}^\bot$.
In that context, Campi-Gronchi's theorem appears as another formulation, with a new proof, of Busemann's theorem \cite{Bu} about the central hyperplane sections of a centrally symmetric body (see also \cite{MR5}).
This point of view was put forward in \cite{CFPP}, where such properties were also generalized to more general measures than Lebesgue measure.
An important complement to Theorem \ref{shadow}, proved
in \cite[Proposition 7]{MR3} (see also \cite{MR4}), treats
the case when both $\vol(K_t)$ and $\vol((K_t)^{s(K_t)})^{-1}$ are affine functions of $t\in (a,b)$. In this case, all
the bodies $K_t$ are affine images of each other under special affine transformations. This
has been useful in identifying the cases of equality in inequalities involving volume
products, as well as in the proofs of the results of \cite{MR4} and \cite{FMZ}. The proof of this complement, which involves some ODE, was extended in \cite[Proposition 6.1]{MY} to a more general setting.
\section{The Blaschke-Santal\'o inequality}\label{BS}
The original proofs of the Blaschke-Santal\'o inequality \cite{Bl1, San1, San2, Leic1} were based on the affine isoperimetric inequality. We now show how this inequality implies the Blaschke-Santal\'o inequality and how, conversely, the Blaschke-Santal\'o inequality implies it. We refer to Section 10 in \cite{Sc} and to \cite{SW, Leic2} for details about the tools used in this section. If $M$ is a smooth convex body with positive curvature everywhere, its affine surface area $\mathcal{A}(M) $ is defined by
$$\mathcal{A}(M)= \int_{S^{n-1}} f_M(u)^{\frac{n}{n+1}} du,$$
where $f_M: S^{n-1}\to \mathbb R_+$ is the curvature function, i.e. the density of the
measure $\sigma_M$ on $S^{n-1}$ with respect to the Haar measure on $S^{n-1}$, where, for a Borel subset $A$ of $S^{n-1}$, $\sigma_M(A)$ is the $(n-1)$-Hausdorff measure of the set of all points of $\partial M$ whose unit normal to $M$ belongs to $A$.
The {\it affine isoperimetric inequality} says that at a fixed volume, ellipsoids have the largest affine surface area among all convex bodies with
positive continuous curvature. In other words,
\begin{equation}\label{affine} \mathcal{A}(M)^{n+1} \le n^{n+1} v_n^2\vol(M)^{n-1},
\end{equation}
where $v_n=\vol(B_2^n)$. Let $L$ be another convex body; by H\"older's inequality, one has
\begin{align}\label{Holder}\mathcal{A}(M)^{n+1}&\le \left(\int_{S^{n-1}}
h_L(u)f_M(u) du\right)^n
\int_{S^{n-1}} h_L^{-n}(u)du \nonumber\\&=n^{n+1}V(M[n-1],L)^n \vol(L^*),
\end{align}
where $V(M[n-1],L)=V(M[n-1],L[1])=\frac{1}{n} \int_{S^{n-1}} h_L(u)f_M(u) du$ is a mixed volume of $M$ and $L$, which can also be defined by the formula, for $t\ge 0$, $$\vol(M+tL)=\sum_{k=0}^n \binom{n}{k}V(M[n-k],L[k])t^k.$$
We refer to \cite{Sc} for precise definitions and properties of mixed volumes. Using inequality (\ref{affine}) and Minkowski's first inequality
$$
V(M[n-1],L)^n\ge \vol(M)^{n-1} \vol(L),
$$ one gets
\begin{equation}\label{facile}\mathcal{A}(M)^{n+1}\le n^{n+1}v_n^2\vol(M)^{n-1}\le n^{n+1}v_n^2 V(M[n-1],L)^n \vol(L)^{-1}.
\end{equation}
Now, given a convex body $K$, let $s=s(K)$ be its Santal\'o point; then the centroid of $(K-s)^*$ lies at the origin, so that
$$
\int_{S^{n-1}} uh_{K-s}(u)^{-(n+1)}du=0,
$$
and thus by Minkowski's existence theorem (see Section 8.2 in \cite{Sc}), there exists a convex body $M$
such that $f_M= h_{K-s}^{-(n+1)}$. Setting $L=K-s$, there is equality in (\ref{Holder}), so that by (\ref{facile})
\begin{align*}
n^{n+1}V(M[n-1],K-s)^n \vol((K-s)^*)&=\mathcal{A}(M)^{n+1} \\&\le n^{n+1}v_n^2 V(M[n-1],K-s)^n \vol(K-s)^{-1},
\end{align*}
which gives the Blaschke-Santal\'o inequality \eqref{BSE}.
Conversely, let $M$ be a convex body with positive curvature, and suppose that $0$ is the Santal\'o point of $M$ and that the Blaschke-Santal\'o inequality holds for $M$. By (\ref{Holder}) with $L=M$, we get
$$ \mathcal{A}(M)^{n+1}\le n^{n+1}V(M[n-1],M)^n \vol(M^*)
\le n^{n+1}v_n^2 \vol(M)^{n-1},$$
which is the affine isoperimetric inequality.
For examples of other results of this type and relations between affine surface area and the volume product, see Petty \cite{Pe3, Pe4}, Lutwak \cite{Lu1}, Li, Sch\"utt and Werner \cite{LSW}, Nasz\'odi, Nazarov and Ryabogin \cite{NNR} and Hug \cite{Hu}, who gave a proof of the affine isoperimetric inequality using Steiner symmetrization and studied the cases of equality.
\subsection{A proof of the Blaschke-Santal\'o inequality in the centrally symmetric case}\label{steiner-mp} In \cite{MP1, MP2}, Meyer and Pajor used the classical Steiner symmetrization for giving a proof which we sketch in this section. Various symmetrizations of sections appeared also in \cite{SR1} and \cite{Ba1}.
\vskip 2mm
\noindent{\it Step 1:} We prove first that if $H$ is a linear hyperplane of $\mathbb R^n$ and $S_H K$ is the {\it Steiner symmetral of $K$ with
respect to $H$}, defined below, then
$$
\vol(K^*) \le \vol \big( (S_HK)^*\big).
$$
To simplify notation, suppose that $H= \mathbb R^{n-1}\times \{0\}$; as before, let $P_H:\mathbb R^n\to H$ be the orthogonal projection onto $H$. Then $K$ may be described as follows:
$$K=\{x+se_n;\ x\in P_HK, s\in I(x)\} $$ where for $x\in P_HK$, $I(x)=\{s\in\mathbb R; x+se_n\in K\}$ is a closed interval.
The Steiner symmetral of $K$ with respect to $H$, defined as
$$S_HK=\left\{x+se_n; x\in P_HK,s\in \frac{I(x)-I(x)}{2}\right\} $$
is a convex body symmetric with respect to $H$, such that
$$\vol(K)=\vol(S_H K).$$
For $t\in \mathbb R$, let $K^*(t):=\{y\in H; y+te_n \in K^*\}$ be the section of $K^*$ by the hyperplane $\{ x_n =t \}$ and $J=\{t\in \mathbb R; K^*(t)\not=\emptyset\}$. Then
$$K^* =\{ y+t e_n; t\in J, y\in K^*(t) \}.$$
For every $x\in P_HK$, $t\in J$, $y_1\in K^*(t) $, $y_2\in K^*(-t) $ and $s_1,s_2\in I(x)$, one has:
$$
\langle x,y_1\rangle+s_1 t \le 1\hbox{ and } \langle x,y_2\rangle -s_2t \le 1.
$$
Adding these two inequalities, we get that for every $x\in P_HK$, $s=\frac{s_1-s_2}{2}\in \frac{1}{2} (I(x)-I(x))$, $t\in J$ and $y=\frac{y_1+y_2}{2}\in
\frac{1}{2}\big(K^*(t)+ K^*(-t)\big)$, one has
$$\langle x,y\rangle+st \le 1.$$
Thus for every $t\in J$,
\begin{equation}\label{steiner-inclusion}
\frac{1}{2}\big( K^*(t) +K^*(-t) \big)\subset (S_H K)^*(t).
\end{equation}
By the symmetry of $K$, one has $K^*(-t) = -K^*(t)$. It follows from Brunn-Minkowski's theorem that
$\vol_{n-1}\big( (S_H K)^*(t)\big) \ge \vol_{n-1}\big(K^*(t) \big)$ and, integrating in $t$, we get that
$$\vol\big((S_HK)^*\big)= \int \vol_{n-1}\big( (S_H K)^*(t)\big) dt \ge
\int \vol_{n-1}(K^*(t) ) dt=\vol(K^*).$$
We thus get that $\mathcal P(S_H K)\ge \mathcal P(K)$.
\noindent {\it Step 2:} It is well known that there exists a sequence of hyperplanes $(H_m)_m$ such that if $K_0=K$ and, for $m\ge 1$, $K_m:=S_{H_m}K_{m-1}$, then the sequence $(K_m)_m$ converges to the Euclidean ball $R_KB_2^n$ with the same volume as $K$ (see for example Section 10.3 in \cite{Sc}). Thus $$\mathcal P(K)\le \mathcal P(K_1)\le \mathcal P(K_2)\le\dots\le \mathcal P(R_KB_2^n)=\mathcal P(B_2^n).$$
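The two-dimensional quantities appearing in this proof are easy to evaluate numerically. The sketch below (an illustration in our notation, not taken from the cited sources) computes $\mathcal P(K)$ for origin-symmetric polygons, using the shoelace formula for $\vol(K)$ and the polar-coordinate formula $\vol(K^*)=\frac12\int_0^{2\pi} h_K(u)^{-2}\,du$, where $h_K(u)=\max_i\langle v_i,u\rangle$ is the support function of the polygon with vertices $v_i$.

```python
import math

def area(vertices):
    # Shoelace formula for a convex polygon with vertices in cyclic order.
    n = len(vertices)
    return 0.5 * abs(sum(vertices[i][0]*vertices[(i+1) % n][1]
                         - vertices[(i+1) % n][0]*vertices[i][1]
                         for i in range(n)))

def polar_area(vertices, steps=100000):
    # vol(K*) = (1/2) * int_0^{2pi} h_K(u)^{-2} du, with support function
    # h_K(u) = max_i <v_i, u>; midpoint rule on [0, 2pi].
    total = 0.0
    for k in range(steps):
        t = 2*math.pi*(k + 0.5)/steps
        c, s = math.cos(t), math.sin(t)
        h = max(v[0]*c + v[1]*s for v in vertices)
        total += 1.0/(h*h)
    return 0.5 * total * 2*math.pi/steps

def volume_product(vertices):
    return area(vertices) * polar_area(vertices)

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
hexagon = [(math.cos(k*math.pi/3), math.sin(k*math.pi/3)) for k in range(6)]

print(volume_product(square))   # ~8 = 4^2/2!, the conjectured planar minimum
print(volume_product(hexagon))  # ~9, strictly between 8 and P(B_2^2) = pi^2
```

The square realizes the conjectured planar minimum $8$, the regular hexagon gives $9$, and both values lie below $\mathcal P(B_2^2)=\pi^2\approx 9.87$, consistently with the monotonicity proved in Step 1.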
The case of equality in Blaschke-Santal\'o inequality was first proved by Petty \cite{Pe4}, using sharp differential geometry arguments. When $K$ is centrally symmetric, a new elementary proof was given by Saint Raymond \cite{SR1}, using Minkowski symmetrization of the hyperplane sections (see also \cite{Ba1}). These ideas were generalized by Meyer-Pajor \cite{MP2} to give an elementary proof for the general case, and a somewhat stronger result, also based on Steiner's symmetrizations:
\begin{thm}{\bf (Meyer-Pajor \cite{MP2}) } For every convex body $K$ and every hyperplane $H$ separating $\mathbb R^n$ into two closed half-spaces $H_+$ and $H_-$, denoting $\lambda=\frac{\vol( H_+\cap K)}{\vol(K)}$, there exists $z\in H$ such that
$\vol(K)\vol(K^z)\le \frac{\mathcal P({\mathcal E})}{ 4\lambda(1-\lambda)}.$
\end{thm}
\begin{remark}\label{rk:steiner-shadow} Notice
that the Steiner's symmetral of a convex body $K$ with respect to a direction $u\in S^{n-1}$ can be written as a part of a shadow system in the following way: for $y\in P_{u^\perp}K$, let $I(y)=\{s\in\mathbb R; y+su\in K\}$. For $t\in[-1,1]$, let
\[
K_t=\left\{y+su; s\in \frac{1+t}{2}I(y)-\frac{1-t}{2}I(y)\right\}.
\]
Then $K_1=K$, $K_0=S_{u^\perp}K$ and, for every $t\in[-1,1]$, $K_{-t}$ is the reflection of $K_t$ with respect to the hyperplane $u^\perp$. This implies that the function $t\mapsto\P(K_t)$ is even. Moreover, since $\vol(K_t)=\vol(K)$ is constant for $t\in[-1,1]$, using Theorem \ref{shadow}, the function $t\mapsto\P(K_t)^{-1}$ is convex. It follows that $\P(K_t)$ is maximized at $t=0$, which proves that the volume product does not decrease under Steiner symmetrization for any convex body, recovering Meyer-Pajor's result \cite{MP2}. Using again an appropriate sequence of Steiner symmetrizations, this gives the general Blaschke-Santal\'o inequality. The preceding observation is due to \cite{MR3}.
\end{remark}
\subsection{A harmonic analysis proof of the Blaschke-Santal\'o inequality} We follow the work of Bianchi and Kelly \cite{BK}.
Harmonic analysis plays an essential role in the study of duality and the volume product. The main idea was discovered by Nazarov \cite{Na}, who used it to provide a proof of the Bourgain-Milman inequality \cite{BM}; it was adapted by Bianchi and Kelly to give a very elegant proof of the Blaschke-Santal\'o inequality. We refer to \cite{StWe} and \cite{Ho} for basic facts in harmonic analysis.
Let $K$ be a convex symmetric body in $\mathbb R^n$ and let $F \in L_2(\mathbb R^n)$ be such that its Fourier transform satisfies $\widehat{F}=0$ a.e. on $\mathbb R^n\setminus K$. Then
$F$ is the restriction to $\mathbb R^n$ of the entire function, still denoted $F$, defined by:
$F(z)=\int_{K} e^{2\pi i \langle z, \xi \rangle} \widehat{F}(\xi) d\xi \hbox{ for }z \in {\mathbb C}^n, $
which satisfies the following inequality giving a first hint on why the theory is so useful:
$$
|F(i y)|\!
=\!\left | \int_K e^{-2\pi \langle y, \xi \rangle} \widehat{F}(\xi) d\xi\right |
\le \int_K e^{2\pi \sup\limits_ {\xi \in K} |\langle \xi, y \rangle|} | \widehat{F}(\xi)| d\xi = e^{2 \pi \|y\|_{K^*} }\!\!\! \int_K | \widehat{F}(\xi)| d\xi.$$
Thus for some $C>0$, one has $|F(i y)| \le C e^{2\pi \|y\|_{K^*} }$ for all $y\in \mathbb R^n$, i.e. $F$ is {\it of exponential type $K^*$}. This fact is the elementary part of the following classical theorem:
\begin{thm}\label{PW} {\bf (Paley-Wiener)}
Let $F \in L_2(\mathbb R^n)$ and $K$ be a convex symmetric body. Then the following are equivalent:
\begin{itemize}
\item $F$ is the restriction to $\mathbb R^n$ of an entire function of exponential type $K^*$.
\item The support of $\widehat{F}$ is contained in $K$.
\end{itemize}
\end{thm}
We are now ready to present the
\vskip 2mm\noindent
{\bf Proof of the Blaschke-Santal\'o inequality adapted from Bianchi and Kelly \cite{BK}.}
Let $F=\frac{1}{\vol( K)} \widehat{ {\bf 1}_K}$, where ${\bf 1}_K$ is the characteristic function of $K$:
$$
{\bf 1}_K(x)=\begin{cases}
1, \mbox{ for } x \in K\\
0, \mbox{ for } x \not \in K.
\end{cases}
$$Then $F(0)=1$, $F\in L_2(\mathbb R^n)$, $F$ is continuous and even, and can be extended as an entire function on $ {\mathbb C}^n$, still denoted $F$,
as
$$
F(z_1,\dots, z_n)=\frac{1}{\vol( K)} \int_{K}e^{2i\pi(\sum_{k=1}^n z_k x_k)} dx_1\dots dx_n.
$$
For $\theta \in S^{n-1}$ and $z\in {\mathbb C}$, let $F_\theta(z)=F(z\theta)$. Then, by the easy part of the Paley-Wiener theorem, $F_\theta $ is an even entire function of exponential type $[-\|\theta\|_{K^*}^{-1}, \|\theta\|_{K^*}^{-1}]$. Moreover, since $F_\theta$ is entire and even, there exists an entire function $H_\theta:{\mathbb C}\to {\mathbb C}$ such that $H_\theta(z^2)= F_\theta(z)=F(z\theta)$. Finally, we define $R_\theta: {\mathbb C}^n \to{\mathbb C}$ as a radial extension of $F_\theta$ by
$$R_\theta(z_1,\dots, z_n)=H_\theta(z_1^2+\dots +z_n^2).$$
Note that $z\mapsto R_\theta(z)$ is an entire function which satisfies $R_\theta(0)=F(0)=1$. Moreover, since $F_\theta$ is of exponential type $[-\|\theta\|_{K^*}^{-1}, \|\theta\|_{K^*}^{-1}]$, $R_\theta $ is of exponential type $\|\theta\|_{K^*}^{-1}B_2^n$. It follows, from the Paley-Wiener theorem, that the support of the restriction to $\mathbb R^n$ of $\widehat{R_\theta}$ is contained in $(\|\theta\|_{K^*}^{-1}B_2^n)^*=\|\theta\|_{K^*}B_2^n.$ Since $R_\theta\in L_2(\mathbb R^n)$, one may write, using the Plancherel equality and the Cauchy-Schwarz inequality with $v_n=\vol(B_2^n)$,
\begin{align} \label{CSfourier}
\int_{\mathbb R^n} |R_{\theta}(x) |^2dx=&\int_{\|\theta\|_{K^*} B_2^n} |\widehat{R_{\theta}}(x) |^2dx\ge \frac{|\int_{\|\theta\|_{K^*} B_2^n} \widehat{R_{\theta}}(x) dx|^2}{v_n \|\theta\|_{K^*}^n}\\
=&\frac{|\int_{\mathbb R^n} \widehat{R_{\theta}}(x) dx|^2}{v_n \|\theta\|_{K^*}^n}
=\frac{|R_{\theta}(0)|^2}{ v_n\|\theta\|_{K^*}^n }= \frac{1}{v_n\|\theta\|_{K^*}^n }. \nonumber
\end{align}
If $|x|=r=|re_1|$, one has
\begin{equation}\label{rth}
R_\theta(x)=F(|x|\theta)=F(r\theta)=R_\theta(re_1),
\end{equation}
so that $R_\theta$ is rotation invariant on $\mathbb R^n$.
Thus
\begin{align*}
\frac{1}{\vol(K) }=& \int_{\mathbb R^n} |{\widehat F}(x)|^2 dx=
\int_{\mathbb R^n} | F(x)|^2 dx=
\int_{S^{n-1}}\int_0^{+\infty} |F(r\theta)|^2 r^{n-1}dr d\theta\\
=&\int_{S^{n-1}}\int_0^{+\infty} |R_\theta(re_1)|^2 r^{n-1}dr d\theta
=\frac{1}{nv_n} \int_{S^{n-1}}\int_{\mathbb R^n} |R_\theta(x)|^2dx\,d\theta.
\end{align*}
Applying (\ref{CSfourier}) for each $\theta$, it follows that
$$\frac{1}{\vol(K) }
= \frac{1}{nv_n} \int_{S^{n-1}} \int_{\mathbb R^n} |R_\theta(x)|^2 dxd\theta\ge \frac{1}{ nv_n^2} \int_{S^{n-1}} \|\theta\|_{K^*}^{-n} d\theta=\frac{\vol(K^*)}{\vol(B_2^n)^2}, $$
which is the Blaschke-Santal\'o inequality.
\vskip 1mm\noindent
Bianchi and Kelly \cite{BK} also provided a proof of the equality case. Indeed,
assume that we have equality in the Blaschke-Santal\'o inequality; then we must have equality in (\ref{CSfourier}) for every $\theta\in S^{n-1}$. Thus, for every $\theta\in S^{n-1}$, there is $c_{\theta} \in \mathbb R$ such that $\widehat{R_{\theta}}=c_{\theta} {\bf 1}_{\|\theta\|_{K^*} B_2^n}$ on $\mathbb R^n$ and
$$R_{\theta}(x)= \int_{\mathbb R^n} e^{-2i\pi \langle x, y \rangle} \widehat{R_{\theta}}(y) dy=c_\theta \int_{\|\theta\|_{K^*} B_2^n} e^{-2i\pi \langle x, y \rangle} dy.$$
Moreover, since $R_{\theta}(0)= 1$, one gets $c_{\theta} \vol(\|\theta\|_{K^*} B_2^n)=1$. Next, by (\ref{rth}), we get
\begin{equation}\label{fur}\frac{1}{\vol( K)} \int_{K}e^{-2i\pi r\langle \theta, y \rangle} dy=F(r\theta)=\frac{1}{\vol(\|\theta\|_{K^*} B_2^n)} \int_{\|\theta\|_{K^*} B_2^n} e^{-2i\pi r \langle \theta, y \rangle} dy.
\end{equation}
If $M$ is a convex body, $\theta\in S^{n-1}$ and $t\in \mathbb R$, let $A_{M,\theta}(t)=\vol_{n-1}\big(M \cap (\theta^\perp +t \theta)\big)$. One has
$$
\widehat{A_{K, \theta}}(r)=\int_\mathbb R e^{-2i\pi r t } A_{K, \theta}(t) dt = \int_{K}e^{-2i\pi r\langle \theta, y \rangle} dy.
$$
Inverting the Fourier transform, it follows from (\ref{fur}) that for all $t\in \mathbb R$ and $\theta \in S^{n-1},$
\begin{equation}\label{koldob}
\frac{1}{\vol( K)} A_{K, \theta}(t) = \frac{A_{\|\theta\|_{K^*} B_2^n, \theta}(t)}{\vol(\|\theta\|_{K^*} B_2^n)} = \frac{A_{B_2^n, \theta}\left(t\|\theta\|_{K^*}^{-1}\right)}{\|\theta\|_{K^*}\vol( B_2^n)}.
\end{equation}
Now, for $\theta \in S^{n-1},$ one has by (\ref{koldob})
\begin{align*}
\int_K \langle x,\theta\rangle^2dx=\int_{\mathbb R} t^2 A_{K, \theta}(t) dt
&= \frac{\vol( K)}{\|\theta\|_{K^*}\vol( B_2^n)}\int_{\mathbb R} t^2A_{B_2^n, \theta}\left(t\|\theta\|_{K^*}^{-1}\right)dt\\
&= \frac{\vol( K)}{\vol( B_2^n)}\|\theta\|_{K^*}^2 \int_{\mathbb R} u^2A_{B_2^n,\theta}(u) du
\end{align*}
and since by rotation invariance $A_{B_2^n,\theta}(u) $ does not depend on $\theta\in S^{n-1}$, one gets for some $c>0$ and all $\theta
\in S^{n-1}$,
$$\|\theta\|_{K^*}=c\left(\int_K \langle x,\theta\rangle^2dx\right)^{1/2}, $$ which proves that $K^*$, and thus $K$, is an ellipsoid (the last arguments
are inspired by \cite{MR0}).
\subsection{Further results and generalizations} Let us present a few results which may be considered as offspring of the Blaschke-Santal\'o inequality.
\subsubsection{Stability} K. B\"or\"oczky \cite{Bor} established a stability version of the Blaschke-Santal\'o inequality, later improved by K. Ball and K. B\"or\"oczky in \cite{BB}. Let $d_{BM}(K,L)$ be the Banach-Mazur distance between two convex bodies $K$ and $L$ in $\mathbb R^n$:
$$
d_{BM}(K,L)=\inf\{ d>0: K-x\subseteq T(L-y)\subseteq d (K-x), \mbox{ for some } T \in GL(n) \mbox{ and }x,y \in \mathbb R^n \}.
$$
The following stability theorem was proved in \cite{BB}:
\begin{thm} If $K$ is a convex body in $\mathbb R^n,$ $n\ge 3$, such that for some $\varepsilon>0$ one has
$$
(1+\varepsilon)\P(K)\ge \P(B_2^n),
$$
then
$$
\log\big(d_{BM}(K, B_2^n)\big) \le c_n\varepsilon^{\frac{1}{3(n+1)}}|\log \varepsilon|^\frac{2}{3(n+1)},
$$
where $c_n$ is a constant depending only on $n$.
\end{thm}
If in the above theorem we assume that $K$ is symmetric, then the exponent of $\varepsilon$ can be improved to $\frac{2}{3(n+1)}$.
\subsubsection{Local and restricted maxima} After having proved that convex bodies with maximal volume product are ellipsoids, one may ask about the local maxima of the volume product, in the sense of Hausdorff distance. Using Theorem \ref{shadow}, it was proved in \cite{MR4} that any local maximum is an ellipsoid, which gives another proof of Blaschke-Santal\'o's inequality.
One may also investigate maxima among certain classes of bodies not containing ellipsoids. For instance, in $\mathbb R^2$, among polygons with at most $m$ vertices, $m\ge 4$, the maxima are the affine images of regular polygons with $m$ vertices \cite{MR3}. For $n\ge 3$, the much more complicated situation was investigated in \cite{AFZ} using shadow systems. In particular, it was proved in \cite{AFZ} that a polytope with maximal volume product among polytopes with at most $m$ vertices is simplicial (all its facets are simplices) and has exactly $m$ vertices. It was also proved that, among polytopes with at most $n+2$ vertices, the volume product is maximized by $\conv(\Delta_{\lceil{\frac{n}{2}}\rceil},\Delta_{\lfloor{\frac{n}{2}}\rfloor})$, where $\Delta_{\lceil{\frac{n}{2}}\rceil}$ and $\Delta_{\lfloor{\frac{n}{2}}\rfloor}$ are simplices living in complementary subspaces of dimensions $\lceil{\frac{n}{2}}\rceil$ and $\lfloor{\frac{n}{2}}\rfloor$ respectively (by definition, for $\alpha\not\in\mathbb Z$, $\lfloor{\alpha}\rfloor$ is the integer part of $\alpha$ and $\lceil{\alpha}\rceil=\lfloor{\alpha}\rfloor+1$, and for
$\alpha\in\mathbb Z$, $\lceil{\alpha}\rceil=\lfloor{\alpha}\rfloor=\alpha$). It is conjectured in \cite{AFZ} that, for $1\le k\le n$, among polytopes with at most $n+k$
vertices, the convex hull of $k$ simplices living in complementary subspaces of dimensions $\lceil{\frac{n}{k}}\rceil$ or $\lfloor{\frac{n}{k}}\rfloor$ have maximal volume product.
Among unit balls of finite dimensional Lipschitz-free spaces, which are polytopes with at most $(n+1)^2$ extreme points, some preliminary results were established in \cite{AFGZ} and it was shown that the maximizers of the volume product are simplicial polytopes.
\subsubsection{ $L_p$-centroid inequalities} In a series of works by Lutwak, Yang and Zhang \cite{LuZ, LuYZ, LuYZ2}, the Blaschke-Santal\'o inequality appears as a special case of a family of isoperimetric inequalities involving the so-called $L_p$-centroid bodies and $L_p$-projection bodies. More precisely, for a compact star-shaped body $K$ in $\mathbb R^n$ and $p\in [1,\infty]$, the polar $L_p$-centroid body $\Gamma^*_pK$ is defined via its norm:
$$
\|x\|_{\Gamma_p^*K}^p=\frac{1}{c_{n,p}\vol(K)}
\int_{K}|\langle x, y \rangle|^p dy,
$$
where the normalization constant $c_{n,p}$ is chosen so that $\Gamma^*_pB_2^n=B_2^n.$ It was proved in \cite{LuZ} that for all $p\in [1,\infty]$
\begin{equation}\label{LZ}
\vol(K)\vol(\Gamma_p^*K)\le \vol(B_2^n)^2,
\end{equation}
with equality if and only if $K$ is an ellipsoid centered at the origin. It turns out that if $K$ is a centrally symmetric convex body then $\Gamma_\infty^*K=K^*$ and thus the symmetric case of the Blaschke-Santal\'o inequality follows from (\ref{LZ}) when $p=\infty$. A stronger version of (\ref{LZ}) was proved in \cite{LuYZ}:
$$
\vol(\Gamma_pK)\ge \vol(K),
$$
for any star body in $\mathbb R^n$ and $p\in [1,\infty]$. For $p=1$, this inequality links the theory to the Busemann-Petty centroid inequality \cite{Pe1} (see also \cite{Ga,Sc}).
If $K$ and $L$ are compact subsets of $\mathbb R^n$, then for $p \ge 1$, it was proved in \cite[Corollary 6.3]{LuYZ2} that for some $ c(p,n)>0$, one has
$$\int_{K\times L}
|\langle x , y\rangle |^p dx dy \ge c(p,n)\big(\vol(K)\vol(L)\big)^{\frac{n+p}{n}}$$
with equality if and only if $K$ and $L$ are, up to sets of measure 0, dilates of polar-reciprocal, origin-centered ellipsoids.
When $p\to +\infty$, one gets
the following version of the symmetric Blaschke-Santal\'o inequality from \cite{LuYZ2}:
If $K,L$ are compact subsets of $\mathbb R^n$, then
$$
\vol(B_2^n)^2 \max\limits_{x\in K,y \in L}
|\langle x, y\rangle|^n \ge \vol (K)\vol(L).
$$
\subsubsection{Connection to affine quermassintegrals} Affine quermassintegrals were defined by Lutwak \cite{Lu0}. For $1\le k\le n$, the $k$-th affine quermassintegral of a convex body $K$ is:
$$
\Phi_k(K)=\frac{v_n}{v_k}\left(\int_{Gr(k,n)}\vol_k(P_FK)^{-n}\sigma_{n,k}(dF)\right)^{-1/n},
$$
where $Gr(k,n)$ is the Grassmann manifold of $k$-dimensional linear subspaces $F$ of $\mathbb R^n$, $\sigma_{n,k}$ is the Haar probability measure on $Gr(k,n)$ and $P_F$ is the orthogonal projection onto $F$. It was proved by Grinberg \cite{Gri} that $\Phi_k(K)$ is invariant under volume preserving affine transformations. Let $R_K>0$ satisfy $\vol(R_KB_2^n)=\vol(K)$. Lutwak \cite{LuH} conjectured that for any convex body $K$ in $\mathbb R^n$ and any $k=1,\dots, n-1$, one has
\begin{equation}\label{eMY}
\Phi_k(K) \ge \Phi_k(R_KB_2^n)
\end{equation}
with equality if and only if $K$ is an ellipsoid. This conjecture was open for quite a long time. Lutwak proved that, for $k=1$, it follows directly from the Blaschke-Santal\'o inequality (and that the case $k=n-1$ is connected to an inequality of Petty \cite{Pe2, Pe3}). Recently, E. Milman and Yehudayoff \cite{MY} proved that this conjecture is true. As one of the steps in the proof, they showed that $\Phi_k(K) \ge \Phi_k(S_HK)$, generalizing the previous result of \cite{MP1}. In addition, a simplified proof of the Petty projection inequality was presented in \cite{MY}. These interesting results suggest that (\ref{eMY}) can be viewed as a generalization of the Blaschke-Santal\'o inequality.
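In the plane, $\Phi_1$ is explicit enough for a direct numerical check of (\ref{eMY}). The sketch below (an illustration in our notation, not from the cited sources) parametrizes $Gr(1,2)$ by an angle and uses the fact that, for a symmetric body, $\vol_1(P_\theta K)=2h_K(\theta)$: an ellipse with semi-axes $2$ and $1/2$ has area $\pi$, so its $\Phi_1$ must equal $\Phi_1(B_2^2)=\pi$ (equality case, by affine invariance), while the square $[-1,1]^2$ lies strictly above the value $\pi R_K=2\sqrt\pi$ of the ball of the same area.

```python
import math

def phi1(width, steps=100000):
    # Phi_1 in the plane: (v_2/v_1) * ( average of width(theta)^(-2) )^(-1/2),
    # the average being taken over a uniform line direction theta in [0, pi).
    s = sum(width(math.pi*(k + 0.5)/steps)**-2 for k in range(steps))/steps
    return (math.pi/2) * s**-0.5

def ellipse_width(a, b):
    # Length of the projection of the ellipse with semi-axes a, b onto
    # the direction of angle t.
    return lambda t: 2*math.sqrt((a*math.cos(t))**2 + (b*math.sin(t))**2)

square_width = lambda t: 2*(abs(math.cos(t)) + abs(math.sin(t)))  # [-1,1]^2

print(phi1(ellipse_width(2.0, 0.5)))  # ~pi: an ellipse of area pi gives equality
print(phi1(square_width))             # ~3.94 > Phi_1(R_K B_2^2) = 2*sqrt(pi) ~ 3.54
```

The closed form for the square, $\Phi_1([-1,1]^2)=\frac\pi2\sqrt{2\pi}$, follows from $\int_0^{\pi/2}(\cos\theta+\sin\theta)^{-2}d\theta=1$ and matches the quadrature.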
\subsubsection{A conjecture of K. Ball} Keith Ball \cite{Ba0} conjectured that if $K$ is a convex symmetric body in $\mathbb R^n$ then
\begin{equation}\label{conjB}
\int_K\int_{K^*}\langle x,y\rangle^2dxdy \le \int_{B_2^n}\int_{B_2^n}\langle x,y\rangle^2dxdy=\frac{n}{(n+2)^2} \vol(B_2^n)^2.
\end{equation}
He also proved a kind of reverse inequality:
$$
\frac{n \big(\vol(K)\vol(K^*)\big)^{\frac{n+2}{n}}}{(n+2)^2 \vol(B_2^n)^{\frac{4}{n}}} \le \int_K\int_{K^*}\langle x,y\rangle^2dxdy,
$$
which shows that inequality (\ref{conjB}) is stronger than the Blaschke-Santal\'o inequality. In \cite{Ba0, Ba1}, (\ref{conjB}) was proved for unconditional bodies. Generalizations are considered in \cite{KaSa} and \cite{Fa} (see Section~\ref{Funk} for the latter).
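Both sides of (\ref{conjB}) are easy to estimate in the plane. The sketch below (a seeded Monte Carlo illustration, not from \cite{Ba0}) checks the equality case $K=K^*=B_2^2$, where the double integral equals $\frac{2}{16}\pi^2=\pi^2/8$, against the pair $K=[-1,1]^2$, $K^*=B_1^2$, where a direct computation of the diagonal second moments gives the value $2\cdot\frac43\cdot\frac13=\frac89<\pi^2/8$.

```python
import math, random

def scalar_sq_integral(in_K, vol_K, in_L, vol_L, samples=200000, seed=7):
    # Monte Carlo estimate of int_K int_L <x,y>^2 dx dy in the plane.
    # Points are drawn by rejection sampling from [-1,1]^2, which must
    # contain both bodies; the sample mean is rescaled by the exact volumes.
    rng = random.Random(seed)
    def sample(inside):
        while True:
            p = (rng.uniform(-1, 1), rng.uniform(-1, 1))
            if inside(p):
                return p
    acc = 0.0
    for _ in range(samples):
        x, y = sample(in_K), sample(in_L)
        acc += (x[0]*y[0] + x[1]*y[1])**2
    return vol_K * vol_L * acc / samples

disc = lambda p: p[0]**2 + p[1]**2 <= 1        # B_2^2
cube = lambda p: True                          # K = [-1,1]^2, the sampling box
cross = lambda p: abs(p[0]) + abs(p[1]) <= 1   # K* = B_1^2

print(scalar_sq_integral(disc, math.pi, disc, math.pi))  # ~ pi^2/8 (equality case)
print(scalar_sq_integral(cube, 4.0, cross, 2.0))         # ~ 8/9 < pi^2/8
```

The cube/cross-polytope pair thus satisfies (\ref{conjB}) with room to spare, as expected for an unconditional body.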
\subsubsection{Stochastic and log-concave measures extensions}\label{section:stoch}
Following the ideas initiated in \cite{PP}, the authors of \cite{CFPP} pursued a probabilistic approach to the Blaschke-Santal\'o inequality for symmetric bodies and established the following result.
\begin{thm}\label{cfpp}
For $N,n \ge 1$, let $(\Omega,\mathcal{B}, P)$ be a probability space and
\begin{itemize}
\item $X_1,\ldots, X_N:\Omega\to \mathbb R^n$ be independent random vectors, whose laws have densities with respect to Lebesgue measure which are bounded by one.
\item $Z_1,\dots, Z_N: \Omega\to \mathbb R^n$ be independent random vectors uniformly distributed in $rB_2^n$ with $\vol(rB_2^n)=1$.
\item $\mu$ be the rotation invariant measure on $\mathbb R^n$ with density $e^{\varphi(|x|)}$, $x\in \mathbb R^n$ with respect to Lebesgue measure, where $\varphi:\mathbb R_+\to\mathbb R_+$ is a non-increasing function.
\item $C_{X,N}(\omega)=\conv(\pm X_1(\omega),\ldots,\pm X_N(\omega))$ and $C_{Z,N}(\omega)=\conv(\pm Z_1(\omega),\ldots,\pm Z_N(\omega))$ for $\omega\in \Omega$.
\end{itemize}
Then for all $t\ge 0$, one has
$P(\{\omega\in \Omega; \mu(C_{X,N}(\omega)^*)\ge t\})\le
P(\{\omega\in \Omega; \mu(C_{Z,N}(\omega)^*)\ge t\})$.
\end{thm}
It follows of course that the same comparison holds in expectation. The tools used there are shadow systems as in the work of Campi and
Gronchi~\cite{CG1}, together with the rearrangement inequalities of Rogers \cite{R} and Brascamp-Lieb-Luttinger \cite{BLL}.
Applying Theorem \ref{cfpp} to $X_1, \dots, X_N$ uniformly distributed on a convex body $K$ and using that when $N\to +\infty$, the sequence of random polytopes $
P_{K,N}:=\conv(\pm X_1,\ldots, \pm X_N)$
converges almost surely to $K$ in the Hausdorff metric, we deduce that for measures $\mu$ as in Theorem \ref{cfpp}, one has
\[
\mu(K^*)\le\mu((R_KB_2^n)^*)=\mu\left(\frac{B_2^n}{R_K}\right), \quad\hbox{where $R_K=\left(\frac{\vol(K)}{\vol(B_2^n)}\right)^\frac{1}{n}$}.
\]
Since clearly $\mu(K)\le\mu(R_K B_2^n)$, we deduce that $\mu(K)\mu(K^*)\le \mu(R_K B_2^n)\mu(B_2^n/R_K)$.
If, moreover, $t\mapsto \varphi(e^t)$ is concave, then $t\mapsto\mu(e^tB_2^n)$ is log-concave (see \cite{CFM}). It follows that for such measures $\mu$ and for any symmetric convex body $K$, one has
\begin{equation}\label{bs-for-mu}
\mu(K)\mu(K^*)\le\mu(B_2^n)^2.
\end{equation}
It was proved in \cite{CR} that under those hypotheses, $t\mapsto \mu(e^tK)$ is log-concave (extending the same property for Gaussian measures established in \cite{CFM}).
It was asked in \cite{Co} whether \eqref{bs-for-mu} holds for all symmetric log-concave measures $\mu$.
We shall prove \eqref{bs-for-mu} when moreover
$\mu$ has an unconditional density $f$ with respect to the Lebesgue measure (a function $f:\mathbb R^n\to\mathbb R$
is said to be {\it unconditional} if for some basis $e_1,\dots, e_n$ of $\mathbb R^n$, one has for all $(\varepsilon_1,\dots, \varepsilon_n)\in\{-1;1\}^n$ and $(x_1,\dots,x_n)\in\mathbb R^n$, $f(\sum_{i=1}^n x_ie_i)=f(\sum_{i=1}^n \varepsilon_ix_ie_i)$).
\begin{thm}
If $\mu$ is a measure on $\mathbb R^n$ with an unconditional log-concave density with respect to the Lebesgue measure and $K$ is a symmetric convex body in $\mathbb R^n$, then $\mu(K)\mu(K^*)\le\mu(B_2^n)^2.$
\end{thm}
\begin{proof} We apply first a linear transform making the density of $\mu$ unconditional with respect to the canonical basis of $\mathbb R^n$.
Let $H$ be a coordinate hyperplane and let $S_HK$ be the Steiner symmetral of $K$ with respect to $H$. Using \eqref{steiner-inclusion} as in the proof of Meyer-Pajor \cite{MP1} (see Section \ref{steiner-mp} above), we get $\mu(K^*)\le\mu((S_HK)^*)$. Moreover, it is easy to see that $\mu(K)\le\mu(S_HK)$. Thus, denoting by $L$ the convex body obtained from $K$ after $n$ successive Steiner symmetrizations with respect to the coordinate hyperplanes, we get $\mu(K)\mu(K^*)\le\mu(L)\mu(L^*)$ and are reduced to the case when the measure and the body are unconditional. Using the classical Prékopa-Leindler inequality (see for example \cite[page 3]{Pi}), it was shown in \cite{FM1} that then $\mu(L)\mu(L^*)\le\mu(B_2^n)^2$.
\end{proof}
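This statement can be checked numerically for the standard Gaussian measure on $\mathbb R^2$, which is unconditional and log-concave. The sketch below (an illustration, not from \cite{FM1}) takes $K=[-1,1]^2$, so that $K^*=B_1^2$, and evaluates $\mu(K)\mu(K^*)\approx 0.126<\mu(B_2^2)^2=(1-e^{-1/2})^2\approx 0.155$, using $\mu(B_2^2)=1-e^{-1/2}$ for the Gaussian measure of the unit disc.

```python
import math

phi = lambda x: math.exp(-x*x/2)/math.sqrt(2*math.pi)  # N(0,1) density
P_abs = lambda a: math.erf(a/math.sqrt(2))             # P(|X| <= a), X ~ N(0,1)

def mu_cross_polytope(steps=100000):
    # mu(B_1^2) = P(|X|+|Y| <= 1) = int_{-1}^1 phi(x) P(|Y| <= 1-|x|) dx,
    # evaluated by a midpoint rule on [-1, 1].
    s = 0.0
    for k in range(steps):
        x = -1 + 2*(k + 0.5)/steps
        s += phi(x) * P_abs(1 - abs(x))
    return s * 2/steps

mu_K = P_abs(1.0)**2                  # mu([-1,1]^2)
mu_Kstar = mu_cross_polytope()        # mu(B_1^2), the polar of the square
mu_ball_sq = (1 - math.exp(-0.5))**2  # mu(B_2^2)^2

print(mu_K * mu_Kstar, "<=", mu_ball_sq)  # ~0.126 <= ~0.155
```

As a cross-check, rotating coordinates by $45$ degrees shows that $\mu(B_1^2)=\mathrm{erf}(1/2)^2$, a value the quadrature reproduces.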
\subsubsection{Blaschke-Santal\'o type inequality on the sphere}
Another inequality of Blaschke-Santal\'o type was established by Gao, Hug and Schneider \cite{GHS} on the sphere. We define the polar of $A\subset S^{n-1}$ by
\[
A^\circ:=\{y\in S^{n-1}; \langle x,y\rangle\le 0, \mbox{ for all } x\in A\}.
\]
If $\pos(A):=\{t x; x\in A,\ t\ge0\}$, then $A^\circ=(\pos(A))^*\cap S^{n-1}$. Let $\sigma$ be the Haar probability measure on $S^{n-1}$. A {\it spherical cap} is the non-empty intersection of $S^{n-1}$ with a halfspace. This work was further generalized by Hu and Li \cite{HuLi} who proved a number of Blaschke-Santal\'o type
inequalities in the sphere and hyperbolic space.
\begin{thm}\cite{GHS}
Let $A$ be a non-empty measurable subset of $S^{n-1}$ and $C$ be a spherical cap such that $\sigma(A)=\sigma(C)$. Then $\sigma(A^\circ)\le\sigma(C^\circ)$. If moreover $A$ is closed and $\sigma(A)<1/2$, there is equality if and only if $A$ is a spherical cap.
\end{thm}
Two proofs were given in \cite{GHS}. One of them uses a special type of symmetrization called two-point symmetrization and, for the equality case, the results of \cite{AF}. Hack and Pivovarov \cite{HP} gave a stochastic extension of this theorem in the spirit of Theorem \ref{cfpp}.
\section{Mahler conjecture. Special cases}\label{MC}
The problem of the lower bound of $\mathcal P(K)$ is not yet solved, although significant progress has been made in recent years.
The first results are due to Mahler for $n=2$, who proved that $\mathcal P(K)\ge \mathcal P(\Delta_2)=\frac{27}{4}$, where $\Delta_2$ is a triangle, and, in the centrally symmetric case, that $\mathcal P(K)\ge \mathcal P([-1,1]^2)=8$ (see also \cite{To}). For the proofs, he used polygons and could thus not settle the cases of equality. Observe that he continued to be interested in this problem \cite{Ma3,Ma4}.
The cases of equality in dimension $2$ were obtained by Meyer \cite{Me2} for general bodies and by Reisner \cite{Re1} (see also \cite{SR1, Me1, To}) for centrally symmetric bodies. What happens in dimension $n\ge 3$? There are two conjectures, the first one formulated explicitly by Mahler \cite{Ma1}, but not the second one.
\begin{conj}\label{mahler} For every convex body $K$ in $\mathbb R^n$, one has
$$\mathcal P(K)\ge \mathcal P(\Delta_n)=\frac{(n+1)^{n+1}}{(n!)^2},$$ where $\Delta_n$ is a simplex in $\mathbb R^n$, with equality if and only if $K=\Delta_n$.
\end{conj}
\begin{conj}\label{conjcube} For every centrally symmetric convex body $K$ in dimension $n$, one has $$\mathcal P(K)\ge \mathcal P(B_\infty^n)=\frac{4^n}{n!},$$
where $B_\infty^n= [-1,1]^n$ is a cube, with equality if and only if $K$ is a Hanner polytope (see Definition \ref{hanner} below).
\end{conj}
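Both conjectured two-dimensional values can be verified numerically. The sketch below (the helper routines and names are ours, not from the text) computes the polar of a convex polygon containing the origin and checks $\mathcal P(\Delta_2)=27/4$ (for the regular triangle, whose Santal\'o point is its centroid) and $\mathcal P([-1,1]^2)=4^2/2!=8$:

```python
import math

# For a convex polygon with vertices in counterclockwise order containing 0,
# the polar {x : <x, v_i> <= 1 for all i} is again a polygon, whose vertices
# solve <w, v_i> = <w, v_{i+1}> = 1 for consecutive vertices v_i, v_{i+1}.
def polar_polygon(verts):
    k = len(verts)
    out = []
    for i in range(k):
        (a, b), (c, d) = verts[i], verts[(i + 1) % k]
        det = a * d - b * c
        out.append(((d - b) / det, (a - c) / det))
    return out

def area(verts):  # shoelace formula
    k = len(verts)
    return abs(sum(verts[i][0] * verts[(i + 1) % k][1]
                   - verts[(i + 1) % k][0] * verts[i][1]
                   for i in range(k))) / 2

def volume_product(verts):
    return area(verts) * area(polar_polygon(verts))

# regular triangle with centroid (= Santalo point) at the origin:
triangle = [(math.cos(math.pi/2 + 2*math.pi*k/3),
             math.sin(math.pi/2 + 2*math.pi*k/3)) for k in range(3)]
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]

assert abs(volume_product(triangle) - 27/4) < 1e-9   # P(Delta_2) = 27/4
assert abs(volume_product(square) - 8.0) < 1e-9      # P(B_inf^2) = 4^2/2! = 8
```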
\subsection{The conjectured minimum in the symmetric case is not unique} To understand Conjecture \ref{conjcube} and different phenomena related to it, we define Hanner polytopes \cite{Ha}, and first the $\ell_1$-sum $E\oplus_1 F$ and $\ell_{\infty}$-sum $E\oplus_{\infty} F$ of two normed spaces $E$ and $F$.
\begin{deff}
Let $(E, \|\cdot\|_E)$ and $(F,\|\cdot\|_F)$ be two normed spaces. Then on $E\times F$, we define two norms:
the norm of the $\ell_{\infty}$-sum $E\oplus_{\infty} F$ of $E$ and $F$ and of their $\ell_1$-sum $E\oplus_1 F$ by
\begin{itemize}
\item $\| (x,y) \|_{\infty}=\max(\| x \|_E, \| y \|_F)$.
\item $ \|(x,y) \|_{1}= \| x \|_E + \| y \|_F$.
\end{itemize}
\end{deff}
We note that if $E$ and $F$ are normed spaces, then the unit ball of their $\ell_{\infty}$-sum is the Minkowski sum of the unit balls of $E$ and $F$ in $E\times F$, and the unit ball of their $\ell_{1}$-sum is their convex hull. Analogously, if we consider two convex bodies $K\subset \mathbb R^{n_1}$ and $L\subset \mathbb R^{n_2}$, we define two convex bodies in $\mathbb R^{n_1+n_2}$:
\begin{itemize}
\item $K\oplus_\infty L=K\times\{0\}+\{0\}\times L=\{x_1+x_2: x_1 \in K, x_2\in L\}$, their $\ell_\infty$-sum.
\item $K \oplus_1 L=\conv(K\times\{0\},\{0\}\times L)$, their $\ell_1$-sum.
\end{itemize}
One major property of $\ell_{1}$ and $\ell_{\infty}$-sums is that
\begin{equation}\label{oneinf}
(K\oplus_\infty L)^*=K^*\oplus_1 L^*.
\end{equation}
Now we are ready to define Hanner polytopes.
\begin{deff}\label{hanner} In dimension $1$, Hanner polytopes are the symmetric segments. Suppose that Hanner polytopes have been defined in all dimensions $m\le n-1$. A Hanner polytope in dimension $n$ is the unit ball of an $n$-dimensional normed space $H$ which is the $\ell_{\infty}$-sum or the $\ell_1$-sum of a $k$-dimensional subspace $E$ and an $(n-k)$-dimensional subspace $F$ of $H$, $1\le k\le n-1$, whose unit balls are Hanner polytopes.
\end{deff}
Let us now discuss the basic properties of Hanner polytopes:
\begin{itemize}
\item In $\mathbb R^2$, there is a unique (up to isomorphism) Hanner polytope, which is the square.
\item In $\mathbb R^3$, there
are exactly $2$ (up to isomorphism) Hanner polytopes, which are the cube
and the centrally symmetric octahedron.
\item In $\mathbb R^4$, there are, up to isomorphism, $4$ different classes of Hanner polytopes, including two which are not isomorphic to the cube or the cross-polytope. And in $\mathbb R^n$, their number increases quickly with $n$.
\item
The normed spaces whose unit balls $K$ are Hanner polytopes are, up to isometry, exactly those which satisfy the $3$-$2$-intersection property: for any three vectors $u_1, u_2, u_3$, if $(K+u_i)\cap (K+u_j) \not=\emptyset$ for all $1 \le i<j \le 3$, then the intersection of the three translates is non-empty \cite{HL}.
\item A Hanner polytope is unconditional (see Definition \ref{uncond} below).
\item If $K$ is a Hanner polytope, then so is $K^*$. This follows from (\ref{oneinf}).
\item If $K\subset \mathbb R^{n_1}$ and $L\subset \mathbb R^{n_2}$ are two convex bodies, then
$$\mathcal P(K\oplus_{\infty} L)= \mathcal P(K\oplus_1 L)=\frac{n_1!n_2!}{ (n_1+n_2)! }\mathcal P(K)\mathcal P(L).$$
\item Using induction, it follows that the volume product of a Hanner polytope in $\mathbb R^n$ is $\frac{4^n}{n!}$.
\end{itemize}
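Assuming the product formula of the last two items, the value $4^n/n!$ can be checked by a short recursion (illustrative code; the function name is ours):

```python
from math import factorial

# Volume product of an n-dimensional Hanner polytope, computed through the
# product formula P(K + L) = n1! n2!/(n1+n2)! * P(K) * P(L); any recursive
# splitting gives the same value 4^n/n!.
def hanner_product(n, split=lambda n: n // 2):
    if n == 1:
        return 4.0  # symmetric segment [-1,1]: vol = 2, vol of polar = 2
    k = split(n)
    return (factorial(k) * factorial(n - k) / factorial(n)
            * hanner_product(k, split) * hanner_product(n - k, split))

for n in range(1, 9):
    assert abs(hanner_product(n) - 4 ** n / factorial(n)) < 1e-9
# a different splitting strategy gives the same answer:
assert abs(hanner_product(6, split=lambda n: 1) - 4 ** 6 / factorial(6)) < 1e-9
```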
In some sense, Conjecture \ref{mahler} seems easier than Conjecture \ref{conjcube} because, up to an isomorphism, there is only one proposed minimum. But polarity is taken with respect to the Santal\'o point of a convex body $K$, which is not always well located, so that one has to prove that for every $z\in \inte(K)$, $\vol(K)\vol(K^z)\ge \mathcal P(\Delta_n)$. Observe however that if $K$ has minimal volume product among all convex bodies, then its Santal\'o point is also its center of gravity.
\subsection{The planar case} First, note that the conjecture holds, with the case of equality, for $n=2$ (Mahler \cite{Ma1}; Meyer \cite{Me2} for another proof and the case of equality). Let us sketch a proof of the planar case and use this opportunity to give an example of how the method of shadow systems and Theorem \ref{shadow} can be used; note that the method in this case can be traced back to the original proof from \cite{Ma1} and is almost identical for the general and the symmetric case. We concentrate on the general case.
\begin{proof}(Lower bound in $\mathbb R^2$) It is enough to show that $\mathcal P(T)\ge \mathcal P(\Delta_2)$ for all convex polygons $T \subset \mathbb R^2$.
The main idea is to remove vertices of $T$. We use induction on the number $k$ of vertices. Let $T$ be a polygon with $k\ge 4$ vertices. Suppose that
$T=\conv(v_1,v_2,v_3,\dots,v_k)$, with $v_1,v_2,v_3,...,v_k,$ written in the clockwise order.
We shall prove that $\mathcal P(T)\ge \mathcal P(Q)$ for a polygon $Q$ with only $k-1$ vertices. For $i\not= j$, let $\ell_{i,j}$ be the line through $v_i$ and $v_j$. Let $\theta \in S^1$ be parallel to the line $\ell_{1,k-1}$, and define $T_t=\conv(v_1,v_2,\dots,v_{k-1},v_k+t\theta)$ (i.e. we move $v_k$ on a line parallel to $\ell_{1,k-1}$).
The line $\{v_k+t\theta; t\in \mathbb R\}$ meets $\ell_{k-1,k}$ at $v'_k$ when $t=a$ and $\ell_{1,2}$ at $v'_1$ when $t=b$. Since $T_0=T$, one may assume that $a<0<b$. It is easy to see that, for $t\in [a,b]$, $t\mapsto T_t$ is a shadow system with $\vol(T_t)=\vol(T)$. By Theorem \ref{shadow}, $t\mapsto \mathcal P(T_t)^{-1}$ is convex on the interval $[a,b]$ and thus is maximal at its end points. Thus $\mathcal P(T)\ge \min(\mathcal P(T_a),\mathcal P(T_b))$ where
$T_a=\conv(v_1, \dots,v_{k-2}, v'_k)$ and $T_b= \conv(v'_1,v_2,\dots, v_{k-1})$ are polygons with only $k-1$ vertices.
\end{proof}
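As an illustration of the role of the number of vertices, one can compute the volume product of regular $N$-gons (for even $N$ they are centrally symmetric): a direct calculation gives $\mathcal P=N^2\sin^2(\pi/N)$, which increases from $8$ for the square up to $\pi^2$ for the disk as $N\to\infty$. The following sketch checks this numerically via an explicit polar-polygon computation (the routine name is ours):

```python
import math

# Volume product of the regular N-gon (circumradius 1, vertices in CCW order),
# computed by an explicit polar-polygon routine; illustrative code only.
def regular_vp(N):
    verts = [(math.cos(2*math.pi*k/N), math.sin(2*math.pi*k/N)) for k in range(N)]
    def area(vs):  # shoelace formula
        m = len(vs)
        return abs(sum(vs[i][0]*vs[(i+1) % m][1] - vs[(i+1) % m][0]*vs[i][1]
                       for i in range(m))) / 2
    polar = []
    for i in range(N):
        (a, b), (c, d) = verts[i], verts[(i+1) % N]
        det = a*d - b*c
        # polar vertex solving <w, v_i> = <w, v_{i+1}> = 1:
        polar.append(((d - b)/det, (a - c)/det))
    return area(verts) * area(polar)

assert abs(regular_vp(4) - 8.0) < 1e-9   # the square: the conjectured minimum
assert abs(regular_vp(6) - 9.0) < 1e-9   # regular hexagon
assert regular_vp(4) < regular_vp(6) < regular_vp(12) < math.pi ** 2
```

So, along this family, fewer vertices means a smaller volume product, consistent with the vertex-removal strategy of the proof above.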
\begin{remark} The above method was used to prove a number of partial cases of Mahler's conjectures (see \cite{MR2, FMZ, AFZ, AFGZ, Sar}). Unfortunately, there seems to be no way to generalize this approach to dimension $3$ and higher. One of the reasons is that a vertex $v$ of a polytope $P$ may be a vertex of many non-simplicial faces, and it is unclear how to ``move'' $v$ without breaking the combinatorial structure of $P$; and when the combinatorial structure of $P$ is broken, it is difficult to compute volumes.
\end{remark}
\begin{remark}
In \cite{Reb}, Rebollo Bueno also established stochastic versions of the planar case of Mahler's conjectures. With the notation of section \ref{section:stoch}, he proved that for any centrally symmetric convex body $K$ in the plane and any $r\ge1$,
\[
\mathbb E(\vol(P_{K,N}^*)^{-r})\le \mathbb E(\vol(P_{Q,N}^*)^{-r}),
\]
where $Q$ is a square with $\vol(Q)=\vol(K)$. For $r=1$ and $N\to+\infty$, this gives back the planar case of Mahler's conjecture. The same type of result is also established in \cite{Reb} for general convex bodies in the plane.
\end{remark}
\subsection{The case of zonoids} The conjecture holds for zonoids and polars of zonoids, with the case of equality for cubes (Reisner \cite{Re1, Re2}; a second proof was given by Gordon, Meyer and Reisner \cite{GMR}). We recall that a {\it zonoid} in $\mathbb R^n$ is a Hausdorff limit of {\it zonotopes}, that is, of finite sums of segments. Since a segment is symmetric with respect to its midpoint, any zonotope, and thus any zonoid, is centrally symmetric. From now on, when speaking of a zonoid $Z$, we shall suppose that $Z=-Z$.
Also, the polar bodies of zonoids can be seen as the unit balls of finite dimensional subspaces of $L_1([0,1], dx)$. Observe that every convex centrally symmetric body in $\mathbb R^2$ is a zonoid.
We refer to \cite{Bo, GW, Sc}
for basic properties of zonoids.
\begin{proof}(The lower bound of volume product for zonoids \cite{GMR})
For a zonoid $Z\subset \mathbb R^n$ , there exists a measure $\mu$ on $S^{n-1}$ such that $h_Z(x)=\frac{1}{2}\int_{S^{n-1}} |\langle x, u\rangle| d\mu(u)$ for all $x\in \mathbb R^n$.
Since $\vol(Z)=\frac{1}{n} \int_{S^{n-1}} \vol_{n-1}(P_{u^\perp}Z) d\mu(u)$, one has
\begin{align*}
\vol(Z^*)\int_{S^{n-1}}\vol_{n-1}(P_{u^\perp}Z) d\mu(u) &=n\vol(Z)\vol(Z^*)=
(n+1) \vol(Z)\int_{Z^*} h_Z(x) dx\\
&=
\frac{n+1}{2} \vol(Z)\int_{S^{n-1}} \left(\int_{Z^*} |\langle x, u\rangle| dx\right) d\mu(u).
\end{align*}
It follows that for some $u\in {S^{n-1}}$, one has
$$\vol(Z^*)\vol_{n-1}(P_{u^\perp}Z) \le \frac{n+1}{2} \vol(Z) \int_{Z^*} | \langle x, u\rangle | dx.$$
Now $\int_{Z^*} | \langle x, u\rangle | dx= 2\int_0^{\infty} tf(t) dt$, where $f(t)=\vol\big(Z^*\cap(u^{\perp} +tu)\big)$ is the volume in $u^{\perp} $ of the sections of
$Z^*$ with hyperplanes parallel to $u^{\perp}$. Note that $f(0)=\vol(Z^*\cap u^{\perp})$ and $2\int_0^{\infty} f(t) dt=\vol(Z^*)$.
By the Brunn-Minkowski theorem, the function $f^{\frac{1}{n-1}}$ is concave on its support. By a classical estimate (see for instance \cite{MiP}),
$$
\int_0^{\infty} tf(t) dt \le \frac{n}{n+1} \frac{(\int_0^{\infty} f(t) dt)^2 }{f(0)},$$
with equality if and only if $f(t)=f(0) (1-ct)^{n-1}_+$, for some $c>0$ and all $t\ge 0$.
This gives $$\int_{Z^*} | \langle x, u\rangle | dx\le 2\frac{n}{n+1} \frac{4^{-1 } \vol(Z^*)^2}{ \vol_{n-1}(Z^*\cap u^{\perp}) }=
\frac{n}{2(n+1) }
\frac{\vol(Z^*)^2 }{ \vol_{n-1}(Z^*\cap u^{\perp} ) }, $$
and thus
$$\vol(Z^*)\vol_{n-1}(P_{u^\perp}Z)\le \frac{n+1}{2} \vol(Z)
\frac{n}{2(n+1) } \frac{\vol(Z^*)^2}{ \vol_{n-1}(Z^*\cap u^{\perp}) }, $$
so that $$\vol(Z) \vol(Z^*)\ge \frac{4}{n}\vol_{n-1}(P_{u^\perp}Z)\vol_{n-1}(Z^*\cap u^{\perp}),$$
which allows one to conclude by induction, with the case of equality, since $P_{u^\perp}Z$ is a zonoid in dimension $n-1$ and $(P_{u^\perp}Z)^*= Z^*\cap u^{\perp}$: the induction hypothesis gives $\mathcal P(Z)\ge \frac{4}{n}\cdot \frac{4^{n-1}}{(n-1)!}=\frac{4^n}{n!}$.
\end{proof}
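The one-dimensional estimate used in the middle of this proof can be tested numerically with a crude midpoint rule (illustrative code; equality holds exactly for the profiles $f(t)=f(0)(1-ct)_+^{n-1}$):

```python
# Midpoint-rule check of: if f^{1/(n-1)} is concave on its support, then
# int_0^inf t f(t) dt <= n/(n+1) * (int_0^inf f)^2 / f(0).  Illustration only.
def integrate(g, a=0.0, b=1.0, N=100000):
    h = (b - a) / N
    return h * sum(g(a + (k + 0.5) * h) for k in range(N))

n = 3
tests = [lambda t: (1 - t) ** 2,                  # equality case, c = 1
         lambda t: (1 - t ** 2) ** 2,             # strict inequality
         lambda t: max(1 - 2 * t, 0.0) ** 2]      # equality case, c = 2
for f in tests:
    lhs = integrate(lambda t: t * f(t))
    rhs = n / (n + 1) * integrate(f) ** 2 / f(0)
    assert lhs <= rhs + 1e-6
```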
\begin{remark} Campi and Gronchi \cite{CG2} presented a very interesting inequality on the volume of $L_p$-zonotopes, which gives, in particular, another proof of the above result. It is interesting to note that the proof in \cite{CG2} is based on the shadow systems technique. Another proof using shadow systems was presented by Saroglou in \cite{Sar}.
\end{remark}
\begin{remark} Marc Meckes \cite{Mec} gave another proof of Mahler's conjecture for zonoids, based on the notion of {\it magnitude} introduced by Leinster \cite{Lei}, which is a numerical isometric invariant of metric spaces. He studied the magnitude of convex bodies in hypermetric normed spaces (which include $\ell_p^n$, $p\in [1,2]$) and proved a new upper bound for the magnitude on such spaces using the Holmes-Thompson intrinsic volumes of their unit balls.
\end{remark}
\subsection{The case of unconditional bodies}
\begin{deff}\label{uncond} Let $K$ be a convex body in $\mathbb R^n$. We say that $K$ is {\em unconditional} if for some basis $e_1, \dots, e_n$ of $\mathbb R^n$ one has $x_1e_1+\dots + x_n e_n\in K$ if and only if $|x_1|e_1+\dots + |x_n| e_n \in K$. We say that $K$ is {\em almost unconditional} if for some basis $e_1, \dots, e_n$ of $\mathbb R^n$ and every $1\le i\le n$, one has $P_i K=K\cap H_i$, where $H_i$ is the linear span of $\{e_j; j\not=i\}$ and $P_i$ is the linear projection from $\mathbb R^n$ onto $H_i$ parallel to $e_i$.
\end{deff}
If $K$ is unconditional, after a linear transformation which does not change $\mathcal P(K)$,
we may suppose that $(e_1, \dots, e_n)$ is the canonical basis of $\mathbb R^n$. Unconditional bodies are almost unconditional and centrally symmetric. Observe also that
if $K$ is unconditional (resp. almost unconditional) with respect to some basis, then $K^*$ is also
unconditional (resp. almost unconditional) with respect to the dual basis.
We follow the proof of the inequality $\mathcal P(K)\ge \mathcal P(B_\infty^n)$ given in \cite{Me1} (the first proof was given in \cite{SR1}). We do not prove the case of equality (Hanner polytopes), which is more involved.
\begin{proof}
We use induction on $n$. It is trivial for $n=1$. We suppose that $e_1, \dots, e_n$ is the canonical basis of $\mathbb R^n$.
Let $K_+=K\cap \mathbb R_+^n$, ${K^*}_+= K^*\cap \mathbb R_+^n$. Then
$\mathcal P(K)=4^n \vol(K_+)\vol(K^*_+)$.
For $x\in \mathbb R^n_+$, one has
$$ x\in K_+\hbox{ if and only if }\langle x,y\rangle \le 1\hbox{ for any }y\in K^*_+,$$
$$ y\in K^*_+\hbox{ if and only if }\langle x,y\rangle \le 1\hbox{ for any }x\in K_+.$$
For $1\le i \le n$, $K_i:= K\cap\{x_i=0\}$ is an unconditional body in $\mathbb R^{n-1}$ and $(K_i)^*= (K^*)_i$. Let $(K_i)_+ =K_i\cap (\mathbb R^+)^n$. For $x=(x_1,\dots, x_n)\in K_+$, let $C_i(x)$ be the convex hull of $\{x\}$ with $(K_i)_+$. Since $C_i(x)$ is a cone with apex $x$ and basis $(K_i)_+$, one has
$$\vol \big(C_i(x)\big)=
\frac {x_i}{n}\vol_{n-1}\big((K_i)_+\big).$$
Thus
\begin{equation}\label{meyer1}
\vol(K_+) \ge \vol \big(\cup_{i=1}^n C_i(x)\big)= \sum_{i=1}^n \vol \big(C_i(x)\big)=\frac{1}{ n} \sum_{i=1}^n x_i\vol_{n-1}\big( (K_i)_+\big).
\end{equation}
Let $a:=\frac{1}{n\vol(K_+) }\Big(\vol_{n-1}\big( (K_1)_+\big),\dots, \vol_{n-1}\big( (K_n)_+\big)\Big)$ in $\mathbb R^n$. By (\ref{meyer1})
one has $\langle a,x\rangle \le 1$ for all $x\in K_+$, that is
$a\in K^*_+$.
Also, $a^*:=\frac{1}{n\vol(K^*_+) } \Big(\vol_{n-1} \big( (K^*_1)_+\big),\dots, \vol_{n-1} \big( (K^*_n)_+ \big)\Big)\in K_+$. Thus $\langle a,a^*\rangle\le 1$, that is
$$\frac{\sum_{i=1}^n \vol_{n-1}\big((K_i)_+\big)\vol_{n-1}\big((K^*_i)_+\big)}{n^2 \vol(K_+)\vol(K^*_+)} \le 1, $$
so that
$$\mathcal P(K)=4^n \vol(K_+)\vol(K^*_+)\ge \frac{4^n}{ n^2} \sum_{i=1}^n \vol_{n-1}\big( (K_i)_+\big)\vol_{n-1}\big( (K^*_i)_+\big).$$
For $1\le i \le n$, one has $\vol_{n-1}(K_i)= 2^{n-1}\vol_{n-1}\big( (K_i)_+\big)$ and $\vol_{n-1}(K^*_i)= 2^{n-1}\vol_{n-1}\big( (K^*_i)_+\big)$. Since the $K_i$ are also unconditional, the induction hypothesis gives $\mathcal P(K_i)\ge \frac{ 4^{n-1}}{(n-1)!}$, $1\le i\le n$. Thus
$$\mathcal P(K)\ge \frac{4}{ n^2} \sum_{i=1}^n \vol_{n-1}(K_i)\vol_{n-1}(K^*_i)\ge \frac{4}{ n^2} \cdot n \cdot\frac{4^{n-1} }{(n-1)!}= \frac{4^n}{n!}.
$$
\end{proof}
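As a sanity check of the statement (not of its proof), one can evaluate the volume product on the unconditional family $B_p^n$, using $\vol(B_p^n\cap\mathbb R_+^n)=\Gamma(1+1/p)^n/\Gamma(1+n/p)$ and $(B_p^n)^*=B_q^n$ with $\frac1p+\frac1q=1$. A sketch, writing $s=1/p$ (the function name is ours):

```python
from math import gamma, factorial

# Volume product of B_p^n, parametrized by s = 1/p in [0,1], using
# vol(B_p^n intersected with R_+^n) = Gamma(1+1/p)^n / Gamma(1+n/p)
# and the fact that the polar of B_p^n is B_q^n with 1/q = 1 - 1/p.
def vp_ball(n, s):
    pos = lambda u: gamma(1 + u) ** n / gamma(1 + n * u)
    return 4 ** n * pos(s) * pos(1 - s)

n = 3
for s in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert vp_ball(n, s) >= 4 ** n / factorial(n) - 1e-12
# equality at the cube (s = 0) and the cross-polytope (s = 1):
assert abs(vp_ball(3, 0.0) - 4 ** 3 / factorial(3)) < 1e-12
assert abs(vp_ball(3, 1.0) - 4 ** 3 / factorial(3)) < 1e-12
```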
\begin{remark} A small modification of this proof allows one to treat the case of almost unconditional centrally symmetric bodies. Note that every centrally symmetric body in $\mathbb R^2$ is almost unconditional.
\end{remark}
\subsection{The 3-dimensional symmetric case}
The symmetric case in $\mathbb R^3$ was solved by Iriyeh and Shibata \cite{IS1} in 2017, with a quite involved proof of about sixty pages. We would like here to highlight the main ideas and to connect them with the unconditional case presented above. We will use the shorter proof given in \cite{FHMRZ}.
A symmetric body $K \subset \mathbb R^n$, $n\ge 3$, is in general not almost unconditional, and thus not unconditional. However, every planar convex body is almost unconditional in a suitable basis. For $n=3$, the goal is to show that a $3$-dimensional symmetric convex body $K$ still shares some of the core properties of unconditional bodies.
This is done with the help of the following equipartition result:
\begin{thm}\label{thm:equipart}
Let $K \subset \mathbb R^3$ be a symmetric convex body. Then there exist 3 planes $H_1,H_2,H_3$ passing through the origin such that:
\begin{itemize}
\item they split $K$ into $8$ pieces of equal volume, and
\item for each $i=1,2,3$, the section $K \cap H_i$ is split into $4$ parts of equal area by the other two planes.
\end{itemize}
\end{thm}
Note that Theorem \ref{thm:equipart} belongs to the very rich theory of equipartitions. For example, a celebrated result of Hadwiger \cite{Hadwiger}, answering a question of Gr\"unbaum \cite{Grunbaum}, shows that for any absolutely continuous finite measure in $\mathbb R^3$, there exist three planes such that each octant carries $1/8$ of the total mass. To prove Theorem \ref{thm:equipart}, one can use a result of Klartag (Theorem 2.1 of \cite{Kl1}); we refer to \cite{FHMRZ} for details.
Our goal is to create an analog of formula (\ref{meyer1}). Consider a sufficiently regular oriented hypersurface $A \subset \mathbb R^n$ and define the vector
$$
\overrightarrow{V}(A)=\int_A \overrightarrow{n_A}(x)dx,
$$
where $ \overrightarrow{n_A}(x)$ is the unit normal to $A$ at $x$ defined by its orientation. Next, for a convex body $K\subset \mathbb R^n$ with $0\in \inte(K)$, the orientation of a subset $A \subset \partial K$ is given by the outer normal $ \overrightarrow{n_K}$ to $K$. If
$\mathcal C(A):=\{rx;\ 0\le r\le1, x\in A\}$, then
$$
\vol (\mathcal C(A))=\frac{1}{n}\int_A \langle x, \overrightarrow{n_K}(x)\rangle dx.
$$
The following is a key proposition for our proof.
\begin{proposition}\label{Meyergen}
Let $K\subset \mathbb R^n$ be a convex body with $0\in\inte(K)$ and let $A$ be a Borel subset of $\partial K$ with $\vol(\mathcal C(A)) \not=0$. Then for all $x\in K$,
$$
\frac{1}{n} \langle x, \overrightarrow{V}(A)\rangle \leq \vol(\mathcal C(A)) {\rm{\ and\ thus \ }} \frac{\overrightarrow{V}(A)}{n\vol(\mathcal C(A))}\in K^*.
$$
\end{proposition}
\begin{proof} For all $x\in K$, we have $\langle x, \overrightarrow{n_K}(z)\rangle \le \langle z, \overrightarrow{n_K}(z)\rangle$ for every $z\in\partial K$. Thus for all $x\in K$,
$$
\langle x, \overrightarrow{V}(A)\rangle = \int_A \langle x, \overrightarrow{n_K}(z)\rangle d z\leq \int_A \langle z, \overrightarrow{n_K}(z)\rangle dz =n\vol(\mathcal C(A)).
$$
\end{proof}
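As a toy illustration of Proposition \ref{Meyergen} in the plane, take for $K$ a regular hexagon and for $A$ a union of two boundary edges; the vector $\overrightarrow{V}(A)/(n\vol(\mathcal C(A)))$ must then satisfy all the constraints defining $K^*$. The code and the choice of $A$ are ours:

```python
import math

# K = regular hexagon (circumradius 1), vertices in counterclockwise order.
verts = [(math.cos(2*math.pi*k/6), math.sin(2*math.pi*k/6)) for k in range(6)]

def edge_data(i):
    # For edge [v_i, v_{i+1}] of a CCW polygon: the integral of the outer
    # normal is (edge length) * (unit outer normal) = rotate(edge, -90 deg),
    # and C(edge) = conv{0, v_i, v_{i+1}} has area |v_i x v_{i+1}| / 2.
    (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % 6]
    normal_int = (y2 - y1, -(x2 - x1))
    cone_area = abs(x1 * y2 - x2 * y1) / 2
    return normal_int, cone_area

# A = union of edges 0 and 1:
V = [0.0, 0.0]
volC = 0.0
for i in (0, 1):
    (nx, ny), a = edge_data(i)
    V[0] += nx; V[1] += ny; volC += a

y = (V[0] / (2 * volC), V[1] / (2 * volC))   # n = 2 in the proposition
# membership in K^*: <y, v> <= 1 for every vertex v of K
assert all(y[0] * vx + y[1] * vy <= 1 + 1e-9 for (vx, vy) in verts)
```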
\begin{corollary}\label{corme}
Let $K$ be a convex body in $\mathbb R^n$ with $0\in\inte(K)$. If $A \subset \partial K$ and $B\subset \partial K^*$ are Borel subsets such that $\vol(\mathcal C(A))>0$ and $\vol(\mathcal C(B))>0$, then
$$
\langle \overrightarrow{V}(A), \overrightarrow{V}(B) \rangle \le n^2\vol(\mathcal C(A))\vol(\mathcal C(B)).
$$
\end{corollary}
\begin{proof} We apply Proposition \ref{Meyergen} to get $\frac{\overrightarrow{V}(A)}{n\vol(\mathcal C(A))}\in K^*$ and $\frac{\overrightarrow{V}(B)}{n\vol(\mathcal C(B))}\in K$, and take the scalar product of these two vectors.
\end{proof}
\noindent {\bf Proof of Conjecture \ref{conjcube} for $n=3$:} Since the volume product is continuous, it is enough to prove the conjecture for a centrally symmetric, smooth, strictly convex body $K$ (see \cite{Sc}, Section 3.4).
From the linear invariance of the volume product, we may assume that the equipartition property obtained in Theorem \ref{thm:equipart} is satisfied by the coordinates planes given by the canonical orthonormal basis $(e_1,e_2,e_3)$.
As in the unconditional case, we divide $\mathbb R^3$ and the body $K$ into the octants defined by this basis, which define cones as in Corollary \ref{corme}.
The main issue is that, in sharp contrast with the unconditional case, the dual cone to the cone defined as the intersection of $K$ with an octant is not the intersection of $K^*$ with this octant. We will need a bit of combinatorics to work around this issue.
For $\varepsilon\in\{-1;1\}^3$, let the $\varepsilon$-octant be $\{x\in \mathbb R^3; \varepsilon_i x_i\ge 0 \mbox{ for } i=1,2,3\}$ and for $L\subset\mathbb R^3$, let $L_{\varepsilon}$ be the intersection of $L$ with the $\varepsilon$-octant:
$
L_{\varepsilon}=\{x\in L; \varepsilon_i x_i\ge0;\ i=1,2,3\}.
$
Let $N(\varepsilon):=\{\varepsilon'\in \{-1,1\}^3: \sum_{i=1}^3 |\varepsilon_i-\varepsilon'_i |=2\}$. Then $\varepsilon' \in N(\varepsilon)$ if and only if $[\varepsilon, \varepsilon']$ is an edge of $[-1,1]^3$.
If $K_{\varepsilon} \cap K_{\varepsilon'}$ is a hypersurface, we define $K_{\varepsilon} \overrightarrow{\cap} K_{\varepsilon'}$ to be oriented according to the outer normals of $\partial K_{\varepsilon}$, and we write $\overrightarrow{V}(\partial K_{\varepsilon})$ for the integral of the outer normal over the part of $\partial K_{\varepsilon}$ lying on $\partial K$. Since the integral of the outer normal over the whole closed surface $\partial K_{\varepsilon}$ vanishes, Stokes' theorem gives
$$
\overrightarrow{V}(\partial K_{\varepsilon} )=
-\sum_{\varepsilon' \in N(\varepsilon)} \overrightarrow{V}(K_{\varepsilon} \overrightarrow{\cap} K_{\varepsilon'}).
$$
Using the equipartition of the areas of $K\cap e_i^{\perp}$,
we get
$$
\overrightarrow{V}(\partial K_{\varepsilon})=-\sum_{\varepsilon' \in N(\varepsilon)} \overrightarrow{V}(K_{\varepsilon} \overrightarrow{\cap} K_{\varepsilon'} )=\sum_{i=1}^3 \frac{\vol(K\cap
e_i^{\perp})} {4}
\varepsilon_{i} \overrightarrow{e_i}.
$$
Let us look at the dual.
Since $K$ is strictly convex and smooth, there exists a diffeomorphism $\varphi:\partial K\to \partial K^*$ such that $\langle \varphi(x),x\rangle=1$ for all $x\in \partial K$.
We extend $\varphi$ to $\mathbb R^3$ by homogeneity of degree one: $\varphi(\lambda x)=\lambda \varphi(x)$ for $\lambda\geq 0$.
Then
$$K^*=\bigcup_{\varepsilon}\varphi (K_{\varepsilon})\mbox{ and }\vol(K^*)=\sum_{\varepsilon}\vol\big(\varphi (K_{\varepsilon})\big). $$
From the equipartition of volumes, one has
$$
\vol(K)\vol(K^*) =\sum_{\varepsilon } \vol(K)\vol(\varphi (K_{\varepsilon}))=8\sum_{\varepsilon} \vol(K_{\varepsilon})\vol\big(\varphi (K_{\varepsilon})\big).
$$
From Corollary \ref{corme}, we deduce that for $\varepsilon\in\{-1,1\}^3$
\begin{eqnarray*}
\vol(K_{\varepsilon})\vol\big(\varphi (K_{\varepsilon})\big) \ge \frac{1}{9} \langle \overrightarrow{V}(\partial K_{\varepsilon}), \overrightarrow{V}\big(\varphi\big(\partial K_{\varepsilon})\big)\rangle.
\end{eqnarray*}
Thus
\begin{align}
\vol(K)\vol(K^*)&\ge\frac{8}{9}\sum_{\varepsilon} \langle \overrightarrow{V}( \partial K_{\varepsilon}),\overrightarrow{V}\big( \varphi(\partial K_{\varepsilon})\big)\rangle \nonumber \\
&=
\frac{8}{9} \sum_\varepsilon\Big\langle \sum_{i=1}^3 \frac{ \vol(K\cap e_i^{\perp})}{4} \varepsilon_i \overrightarrow{e_i}, \overrightarrow{V}\big( \varphi(\partial K_{\varepsilon})\big)\Big\rangle \nonumber\\
&= \frac{8}{9}\sum_{i=1}^3 \frac{ \vol( K\cap {e_i}^{\perp})}{4} \Big\langle \overrightarrow{e_i}, \sum_\varepsilon \varepsilon_i \overrightarrow{V}\big( \varphi(\partial K_{\varepsilon})\big)\Big\rangle. \nonumber
\end{align}
Now we use Stokes' theorem for $\varphi(\partial K)$ to get
$$
\overrightarrow{V}\big(\varphi(\partial K_{\varepsilon})\big)=-\sum_{\varepsilon' \in N(\varepsilon)}\overrightarrow{V}\big(\varphi(K_{\varepsilon}\overrightarrow{\cap} K_{\varepsilon'})\big).
$$
The next step requires a careful computation of the sums, following the orientations of all the surfaces, which produces many cancellations.
Combining the corresponding parts of $K$ and $\varphi(K)$, one gets
$$
\vol(K)\vol(K^*)\ge \frac{4}{9}\sum_{i=1}^3 \vol_{n-1}(K\cap e_i ^{\perp})\langle \overrightarrow{e_i}, \overrightarrow{V} \big(\varphi(K\cap e_i^\perp)\big)\rangle
$$
(see \cite{FHMRZ} for the precise computations). Let $P_i$ be the orthogonal projection onto ${e_i}^{\perp}$.
Then $P_i: \varphi(K\cap {e_i}^{\perp})\to P_i (K^*)$ is a bijection. Using Cauchy's formula for the volume of projections, we get
\begin{align*}\langle \overrightarrow{e_i}, V \big(\varphi(K\cap {e_i}^{\perp})\big)\rangle & = \int\limits_{\varphi(K\cap {e_i}^{\perp})} \langle \overrightarrow{n_{\varphi(K\cap {e_i}^{\perp})}}(x), \overrightarrow{e_i} \rangle dx \\ &=\vol_{n-1} \big(P_i( \varphi(K\cap {e_i}^{\perp}))\big)
= \vol_{n-1}\big(P_i(K^*)\big).
\end{align*}
Finally,
\begin{align*}
\vol(K)\vol(K^*)&\ge \frac{4}{9}\sum_{i=1}^3 \vol_{n-1}(K\cap e^\perp_{i}) \vol_{n-1}\big(P_i(K^*)\big) \\&= \frac{4}{9}\sum_{i=1}^3 \vol_{n-1}(K\cap e^\perp_{i})\vol_{n-1}\big((K\cap e^\perp_{i})^*\big)\\ &\ge \frac{4}{9}\times 3\times \frac{4^2}{2!}=\frac{4^3}{3!}.
\end{align*}
\begin{flushright}
$ \Box $ \\
\end{flushright}
\subsection{Further special cases where the conjectures hold}
Let us list here a number of other special cases in which the conjectured inequality was proved:
\begin{itemize}
\item Symmetric polytopes in $\mathbb R^n$ with $2n+2$ vertices for $n\le 9$ (Lopez and Reisner \cite{LR}) and for any $n$ (Karasev \cite{Ka}).
\item For $p\ge 1$, hyperplane sections through $0$ of $B_p^n=\{(x_1, \dots, x_n)\in\mathbb R^n; \sum_{i=1}^n |x_i|^p\le 1\}$ (Karasev
\cite{Ka}). Karasev's proof of these results is, so far, one of the few concrete applications of symplectic geometry, through the billiard approach, to special cases of Mahler's conjecture.
\item Bodies of revolution \cite{MR1}.
\item Some bodies with many symmetries: Barthe and Fradelizi \cite{BF} established that a convex body $K$ which is symmetric with respect to a family of hyperplanes whose intersection is reduced to one point satisfies Conjecture \ref{mahler}. More generally, it is proved in \cite{BF} that if $K$ is invariant under the reflections fixing $P_1\times \cdots\times P_k$, where, for $1\le i\le k$, $P_i$ is a regular polytope or a Euclidean ball in a subspace $E_i$ and $\mathbb R^n=E_1\oplus\cdots\oplus E_k$, then
$\mathcal P(K)\ge\mathcal P(P_1\times \cdots\times P_k)$.
\item
Iriyeh and Shibata established similar results in \cite{IS2, IS3}. They determined the exact lower bound of the volume product of convex bodies invariant by some group of symmetries (many classical symmetry groups in dimension 3 \cite{IS2} and for the special orthogonal group of the simplex and of the cube \cite{IS3}).
\item Polytopes in $\mathbb R^n$ with no more than $n+3$ vertices \cite{MR2}.
\item Almost unconditional symmetric bodies (Saint Raymond \cite{SR1}), with the case of equality for Hanner polytopes (Meyer \cite{Me1}, Reisner \cite{Re3}).
Also, the following result on unconditional sums of convex bodies is proved in \cite{SR1}:
For $1\le i\le m$, let $K_i \subset \mathbb R^{d_i}$ be convex symmetric bodies and let $L\subset \mathbb R^m$ be an unconditional body with respect to the canonical basis $e_1, \dots, e_m$. We
define {\it the unconditional sum of $K_1,\dots, K_m$ with respect to $L$} by
$$\hskip 15mm
K_1\oplus_L\dots \oplus_L K_m = \{(x_1,\dots,x_m)\in \mathbb R^{d_1}\times\dots\times \mathbb R^{d_m}; \| x_1\|_{K_1}e_1+\dots +\| x_m\|_{K_m}e_m \in L\}.
$$
Clearly $K_1\oplus_L\dots \oplus_L K_m$ is a symmetric convex body in $\mathbb R^{d_1+\dots+d_m}$. Moreover, it is easy to see that
$\big(K_1\oplus_L\dots \oplus_L K_m \big)^*= K_1^*\oplus_{L^*}\dots \oplus_{L^*} K_m^*$ and, denoting $L_+=L\cap \mathbb R^m_+$
and $L^*_+=L^*\cap \mathbb R^m_+$, one has
$$
\mathcal P(K_1\oplus_L\dots \oplus_L K_m)= \Big(\int_{(t_1,\dots, t_m) \in L_+}\prod_{i=1}^m d_i t_i^{d_i -1} dt_1\dots dt_m\Big)\Big(\int_{(t_1,\dots, t_m) \in L^*_+} \prod_{i=1}^m d_i t_i^{d_i -1} dt_1\dots dt_m \Big) \prod_{i=1}^m \mathcal P(K_i)
$$
and
$$\hskip 10mm \Big(\int_{(t_1,\dots, t_m) \in L_+}\prod_{i=1}^m d_i t_i^{d_i-1} dt_1\dots dt_m\Big) \Big(\int_{(t_1,\dots, t_m) \in L^*_+} \prod_{i=1}^m d_i t_i^{d_i -1} dt_1\dots dt_m \Big)\ge \frac{d_1 !\times \dots\times d_m !}{(d_1+\dots +d_m)!}\ .$$
Observe that it follows from \cite{Me1} or \cite {Re3} that there is equality in the last inequality if and only if $L$ is a Hanner polytope.
Finally, if $\mathcal P(K_i)\ge 4^{d_i}/d_i!$ for every $1\le i\le m$, then
$$\mathcal P(K_1\oplus_L\dots \oplus_L K_m)\ge \frac{4^{d_1+\dots+d_m}}
{(d_1+\dots+d_m)!}. $$
\item Although their volumes have been computed (see \cite{SR2}), it is not known whether the unit balls of classical ideals of operators satisfy Conjecture \ref{conjcube}.
\item An interpretation of Conjecture \ref{conjcube} in terms of wavelets was given in \cite{Ba2}.
\item Connections of Mahler's conjecture and the Blaschke-Santal\'o inequality to the maximum and minimum of $\lambda_1(K)\lambda_1(K^*)$, where $K$ is a convex body and $\lambda_1(K)$ is the first eigenvalue of the Laplacian on the interior of $K$ with Dirichlet condition $u = 0$ on $\partial K$, were given in \cite{BuF}.
\end{itemize}
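The integral inequality in Saint Raymond's unconditional-sum result above can be checked in closed form in the Hanner equality case. With the integrand normalized as $\prod_i d_i t_i^{d_i-1}$, Dirichlet's integral $\int_{\{t\ge0,\,\sum_i t_i\le 1\}}\prod_i t_i^{d_i-1}\,dt=\prod_i\Gamma(d_i)/\Gamma(1+\sum_i d_i)$ shows that for $L=B_\infty^m$ (so that $L^*=B_1^m$) the product of the two integrals is exactly $d_1!\cdots d_m!/(d_1+\dots+d_m)!$. An illustrative check (function names are ours):

```python
from math import gamma, factorial

# For L = B_inf^m: L_+ = [0,1]^m and L*_+ is the standard simplex;
# the two weighted integrals (integrand prod_i d_i t_i^{d_i-1}) are explicit.
def J_cube(d):
    # int over [0,1]^m of prod d_i t_i^{d_i-1} dt = prod (d_i / d_i) = 1
    return 1.0

def J_simplex(d):
    # int over the standard simplex, by Dirichlet's integral
    prod = 1.0
    for di in d:
        prod *= di * gamma(di)
    return prod / gamma(1 + sum(d))

for d in [(1, 1), (2, 1), (3, 2, 4)]:
    bound = 1.0
    for di in d:
        bound *= factorial(di)
    bound /= factorial(sum(d))
    assert abs(J_cube(d) * J_simplex(d) - bound) < 1e-9
```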
\subsection{Local minimizers and stability results}
One may investigate the properties of the local minimizers of $\mathcal P(K)$. A natural open question is whether such a minimizer must be a polytope. A number of results in this direction were proved by studying convex bodies with positive curvature. Stancu \cite{St} proved that if $K$ is a convex body which is smooth enough and has a strictly positive Gauss curvature everywhere, then the volume product of $K$ cannot be a local minimum.
She deduced this from the fact that, for some $\delta(K)>0$, one has
\[
\vol(K_\delta)\vol((K_\delta)^*)\ge
\vol(K)\vol(K^*)\ge \vol(K^\delta)\vol((K^\delta)^*),
\]
for any $\delta\in(0,\delta(K))$, where $K_\delta$ and $K^\delta$ stand for the convex floating body and the illumination body associated to $K$ with parameter $\delta$.
A stronger result for local minimizers was proved in \cite{RSW}: if $K$ is a convex body which is a local minimizer of the volume product, then $K$ has no point of positive curvature on its boundary. The study of local minimizers was continued in \cite{HHL}, where the authors computed the first and second derivatives of the volume product in terms of the support function.
Those results may be seen as a hint toward the conjecture that a minimizer must be a polytope. We also note that \cite{GM} extended them to the functional case (see Section \ref{func} below).
It is known that the conjectured global minimizers, that is Hanner polytopes in the centrally symmetric case and simplices in the general case, are actually local minimizers. This question originates from the blog of Tao \cite{T1, T2}, where a number of ideas that may lead to a better understanding of the volume product were discussed. Nazarov, Petrov, Ryabogin and Zvavitch \cite{NPRZ} were able to show that the cube and the cross-polytope are local minimizers. Kim and Reisner \cite{KiR} generalized this result to the case of non-symmetric bodies proving that the simplex is a local minimizer.
The most general result in the symmetric case was obtained by Kim \cite{Ki} who considered the case of
Hanner polytopes. More precisely, let
$$d_{BM}(K,L)=\inf\{d:\; d>0, \mbox{ there exists } T \in GL(n) \mbox{ such that
} K\subseteq TL\subseteq d K\}$$ be the Banach-Mazur
multiplicative distance between two symmetric convex bodies $K, L \subset \mathbb R^n$. Then
\begin{thm}\label{JKim}
There exist constants $\delta(n), c(n)>0$ depending only on $n$ such that if $K$ is a symmetric convex body in $\mathbb R^n$ with $$\min \{d_{BM}(K,H): H \mbox{ is a Hanner polytope in } \mathbb R^n \} = 1+ \delta,$$ for some $0<\delta \le \delta (n)$, then
$$
\mathcal P(K)\ge(1+c(n)\delta)\cdot\mathcal P(B_\infty^n).
$$
\end{thm}
The above theorem was used in \cite{KiZ} to show the stability of the volume product around the class of unconditional bodies. The question of stability for minima and maxima was also treated in various cases \cite{BMMR, BH, KiZ, Bor, FHMRZ}. A general approach to global stability of the volume product was considered in \cite{FHMRZ}, where the following natural lemma was proved:
\begin{lemma}\label{lem:metric}
Let $({\mathcal A}_1,d_1)$ be a compact metric space, $({\mathcal A}_2,d_2)$ be a metric space, $f:{\mathcal A}_1\to {\mathcal A}_2$ be a continuous function and $D$ be a closed subset of ${\mathcal A}_2$. Then,
\noindent(1) For any $\beta>0$, there exists $\alpha>0$, such that $d_1(x,f^{-1}(D))\ge\beta$ implies $d_2(f(x),D)\ge\alpha$.
\noindent(2) If for some $c_1,c_2>0$, $d_1(x,f^{-1}(D))<c_1$ implies $d_2(f(x), D)\ge c_2d_1(x,f^{-1}(D)),$ then for some $C>0$, one has
$d_1(x,f^{-1}(D)) \le C\, d_2(f(x), D)$ for every $x \in {\mathcal A}_1.$
\end{lemma}
Together with a local minima result (for example Theorem \ref{JKim}), Lemma \ref{lem:metric} gives almost immediately a stability result for known bounds of the volume product. Let us illustrate this technique in the case of symmetric convex bodies in $\mathbb R^3$.
\begin{thm}\label{thm:stability_BM}
There exists an absolute constant $C>0$, such that for every symmetric convex body $K \subset \mathbb R^3$ and $\delta>0$ satisfying
$\mathcal P(K) \leq (1+ \delta)\mathcal P(B_\infty^3)$, one has
$$
\min\{ d_{BM}(K, B_\infty^3), d_{BM}(K, B_1^3)\} \le 1+C\delta.
$$
\end{thm}
\begin{proof} Using the linear invariance of the volume product and John's theorem, we reduce to the case $B_2^3\subseteq K \subseteq \sqrt{3} B_2^3$. Our metric space ${\mathcal A}_1$ will be the set of such bodies with the Hausdorff metric $d_H$. Let ${\mathcal A}_2=\mathbb R$. Then $f:{\mathcal A}_1 \to {\mathcal A}_2$, defined by $f(K)=\mathcal P(K)$, is continuous on ${\mathcal A}_1$ (see for example \cite{FMZ}). Finally, let $D=\{\mathcal P(B_\infty^3)\}$. From the description of the equality cases (i.e. that $K$ or $K^*$ must be a parallelepiped) proved in \cite{IS1, FHMRZ} we get
\begin{align*}
f^{-1}(D)&=\{K\in {\mathcal A}_1; \mathcal P(K)=\mathcal P(B_\infty^3)\}\\ &=\{K\in {\mathcal A}_1; K=S B_\infty^3\ \hbox{or}\ K=\sqrt{3}SB_1^3, \mbox{ for some } S\in {\rm SO}(3) \}.
\end{align*}
Note that $B_\infty^3$ is in John position (see for example \cite{AGM1}) and thus if $ B_2^3 \subset T B_\infty^3 \subset \sqrt{3}B_2^3$ for some $T \in GL(3)$, then $T \in SO(3)$.
Next, we show that the assumptions in the second part of Lemma \ref{lem:metric} are satisfied. Since $d_{BM}(K^*, L^*)= d_{BM}(K, L)$, we may restate the $\mathbb R^3$ version of Theorem \ref{JKim} in the following form: there are absolute constants $c_1, c_2 >0$ such that for every symmetric convex body $K$ in $\mathbb R^3$ satisfying
$\min\{ d_{BM}(K, B_\infty^3), d_{BM}(K, B_1^3)\}:=1+d \le 1+c_1,$
one has
$
\mathcal P(K) \ge \mathcal P(B_\infty^3)+ c_2 d.
$
To finish checking the assumption, note that for all $K,L$ convex bodies such that $B_2^3\subseteq K,L \subseteq \sqrt{3} B_2^3$, one has:
\begin{eqnarray}\label{eq:dist}
d_{BM}(K, L)-1 \le \min_{T\in GL(3)} d_H (TK, L) \le \sqrt{3}(d_{BM}(K, L) -1 ).
\end{eqnarray}
Applying Lemma \ref{lem:metric}, we deduce that there exists $c>0$ such that if $B_2^3\subseteq K \subseteq \sqrt{3} B_2^3$, then
\[
\min_{S\in SO(3)}\min(d_H(K,SB_\infty^3), d_H(K,S\sqrt{3}B_1^3))\le c|\mathcal P(K)-\mathcal P(B_\infty^3)|.
\]
Using (\ref{eq:dist}) we conclude the proof.
\end{proof}
\section{Asymptotic estimates and Bourgain-Milman's theorem} \label{AE}
If Conjecture \ref{conjcube} holds true for centrally symmetric bodies $K$, then one has
$$ \frac{4} {n!^{\frac{1}{n} } } \le \mathcal P(K)^{\frac{1}{ n} }\le \frac{\pi} { \Gamma(1+\frac{n}{2} )^{\frac{2}{n}} }\ ,$$
so that
\begin{equation}\label{BM}
\frac{4e+o(1)}{n}\le \mathcal P(K)^{\frac{1}{n}} \le \frac{2e\pi+o(1)}{n}.
\end{equation}
Similarly, the truth of Conjecture \ref{mahler} would imply that for any convex body $K$, one has
\[\P(K)^\frac{1}{n}\ge \mathcal P(\Delta_n)^\frac{1}{n}\ge \frac{e^2+o(1)}{n}.
\]
Thus the function $K\mapsto n\mathcal P(K)^{\frac{1}{ n} }$ would vary between two positive constants.
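As a quick numerical sanity check of these asymptotics (a Python sketch, not part of the argument; the value $n=2000$ is just an arbitrary large test dimension):

```python
import math

def lower(n):
    # n * 4 / (n!)^{1/n}: the conjectured minimum P(B_inf^n)^{1/n}, normalized by n
    return n * 4.0 / math.exp(math.lgamma(n + 1) / n)

def upper(n):
    # n * pi / Gamma(1+n/2)^{2/n}: the Blaschke-Santalo bound, normalized by n
    return n * math.pi / math.exp(2.0 * math.lgamma(1 + n / 2) / n)

n = 2000
print(lower(n), 4 * math.e)            # both close to 4e
print(upper(n), 2 * math.e * math.pi)  # both close to 2*e*pi
```

Using `lgamma` avoids overflowing the factorial and the Gamma function for large $n$.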
This last fact was actually proved by Bourgain and Milman \cite{BM} in 1986. Indeed, the upper bound is ensured by the Blaschke-Santal\'o inequality. For the lower bound, the first important step was done by Gordon and Reisner \cite{GR}, who proved that
$$\mathcal P(K)^{\frac{1}{ n} }\ge \frac{c}{n\log(n)}\ .$$
Then, Bourgain and Milman \cite{BM} proved that
\begin{equation}\label{BourgMil}
\mathcal P(K)^{\frac{1}{n}}\ge \frac{c}{n}.
\end{equation}
For the original proof of (\ref{BourgMil}) and other proofs of the same type, see \cite{BM, LMi, Pi}. The constant $c$ obtained in those proofs was not explicit and, in any case, quite small. After having given a low-technology proof of the Gordon-Reisner result \cite{Ku1}, G. Kuperberg \cite{Ku2} gave another proof of \eqref{BourgMil} based on differential geometry, and got the explicit constant $c=\pi e$ in \eqref{BourgMil} in the symmetric case, which is not far from the best possible bound $4e$ and is the best constant known for now. The best constant in the general (i.e.\ not necessarily symmetric) case may be obtained using the Rogers-Shephard inequality, see the end of this section. Using Fourier transform techniques, other proofs were given by Nazarov \cite{Na} (see also
Blocki \cite{Blo1, Blo2}, Berndtsson \cite{Be1, Be2} and Mastrantonis and Rubinstein \cite{MaR}). Giannopoulos, Paouris and Vritsiou also gave a proof using classical techniques of the local theory of Banach spaces \cite{GPV}.
The isomorphic version of the lower bound in (\ref{BM}) is ``the best possible step'' one can make before actually proving (or disproving) the Mahler conjecture. Indeed, assume we can achieve an asymptotic behavior better than $\mathcal P(K) \ge c^n \mathcal P(B_\infty^n)$, $0<c<1$, i.e.\ we have
\begin{equation}\label{BMimprove}
\alpha(n)\mathcal P(B_\infty^n) \le \mathcal P(K), \mbox{ and } \lim\limits_{n\to\infty}\alpha(n)/c^n=\infty,
\end{equation}
but there is a dimension, say $l$, such that the Mahler conjecture is false in $\mathbb R^l$, i.e. there exists a convex symmetric body $K \subset \mathbb R^l$ such that $\mathcal P(K) < \mathcal P(B_\infty^l)$, that is,
$$
\mathcal P(K) \le c_2 \mathcal P(B_\infty^l), \mbox{ for some } 0<c_2<1.
$$
Let $K'$ be the direct sum of $m$ copies of $K$, $K'=K\oplus \dots \oplus K\subset \mathbb R^{n}$, $n=ml$. Using the direct sum formula and inequality (\ref{BMimprove}), we get
$$
\alpha(lm)\mathcal P(B_\infty^{l m})\le\mathcal P(K')=\mathcal P(K\oplus \dots \oplus K) \le c_2^m \mathcal P(B_\infty^{l m})=(c_2^{1/l})^{lm} \mathcal P(B_\infty^{l m}).
$$
This yields $\alpha(n)\le c^n$ for $n=ml$ and $c=c_2^{1/l}$, which, for $m$ big enough, contradicts $\lim\limits_{n\to\infty}\alpha(n)/c^n=\infty$.
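The bookkeeping in this contradiction can be traced numerically; in the sketch below the values of $l$, $c_2$ and the choice of $\alpha$ are illustrative, not quantities from the text.

```python
l, c2 = 3, 0.9               # hypothetical counterexample dimension and deficit
c = c2 ** (1.0 / l)          # chosen so that c2**m == c**(l*m)
alpha = lambda n: n * c ** n # any alpha with alpha(n)/c**n -> infinity

for m in (1, 10, 100):
    n = l * m
    # the chain alpha(n)*P(B) <= P(K') <= c2**m * P(B) would force alpha(n) <= c**n,
    # which fails as soon as n >= 2:
    print(n, alpha(n) <= c ** n)
```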
We note that (\ref{BourgMil}) for general convex bodies follows (with a constant divided by two) from the symmetric case. Indeed, let $L$ be a convex body in $\mathbb R^n$ and let $z\in \inte(L)$. Let $K=L-z$. Then by the Rogers-Shephard inequality \cite{RS}, $\vol(\frac{K-K}{2})\le 2^{-n}\binom{2n}{n} \vol(K)\le 2^n \vol(K)$ and
\[
\vol\left(\left(\frac{K-K}{2}\right)^*\right)=\frac{1}{n}\int_{S^{n-1}}\!\!\left(\frac{h_K(u)+h_{-K}(u)}{2}\right)^{-n}\!\!\!d\sigma(u)\le \frac{1}{n}\int_{S^{n-1}}\!\!h_K(u)^{-n}d\sigma(u)=\vol(K^*).
\]
It follows that
$$
\vol(K) \vol(K^*)\ge 2^{-n}\mathcal P\left(\frac{K-K}{2}\right).
$$
Since this holds for every $z\in\inte(L)$, it follows that
$\mathcal P(L)\ge 2^{-n} \mathcal P\left(\frac{L-L}{2}\right)$.
From this relation and Kuperberg's best bound $c=\pi e$ in \eqref{BourgMil} for symmetric bodies, it follows that for general convex bodies, \eqref{BourgMil} holds with $c=\pi e/2$.
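The two elementary numerical facts used above, that the Rogers-Shephard factor $2^{-n}\binom{2n}{n}$ is at most $2^n$ and that the extra factor $2^{-n}$ turns the constant $\pi e$ into $\pi e/2$, can be checked directly (a Python sketch):

```python
import math

# Rogers-Shephard factor: 2^{-n} * C(2n, n) <= 2^n for every n
for n in range(1, 40):
    assert math.comb(2 * n, n) / 2 ** n <= 2 ** n

# halving the constant: 2^{-n} * (pi*e/n)^n == ((pi*e/2)/n)^n
n = 7
lhs = 2.0 ** (-n) * (math.pi * math.e / n) ** n
rhs = ((math.pi * math.e / 2) / n) ** n
print(lhs, rhs)  # equal up to rounding
```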
\subsection{Approach via Milman's quotient of subspace's theorem}
The next lemma is a consequence of the Rogers-Shephard inequality \cite{RS}.
\begin{lemma}\label{RS}
Let $K$ be a convex symmetric body in $\mathbb R^n$, let $E$ be an $m$-dimensional subspace of $\mathbb R^n$ and $E^{\perp}$ be its orthogonal complement. Then
$$ \binom{n}{m}^{-2}\mathcal P(K\cap E) \mathcal P(K\cap E^{\perp})\le \mathcal P(K)\le \mathcal P(K\cap E) \mathcal P(K\cap E^{\perp}).$$
\end{lemma}
The following result is the {\it quotient of subspace theorem} of V. Milman (\cite{Mi}, see \cite{Gor} for a simple proof).
\begin{thm}\label{Milman} Let $K$ be a convex symmetric body in $\mathbb R^n$, with $n$ a multiple of $4$. Then, there exist a constant $c>0$, independent of $n$, an $\frac{n}{2}$-dimensional subspace $E$ of $\mathbb R^n$, an $\frac{n}{4}$-dimensional subspace $F$ of $E$ and an ellipsoid ${\mathcal E}\subset F$ such that
$$ {\mathcal E}\subset P_F(E\cap K) \subset c {\mathcal E}$$
where, as before, $P_F$ is the orthogonal projection onto $F$.
\end{thm}
\noindent
{\it The proof of Bourgain-Milman's theorem by Pisier \cite{Pi}.}
For a convex symmetric body $K\subset \mathbb R^n$, with $n$ a multiple of $4$, let $a_n(K)=n\mathcal P(K)^{\frac{1}{n}}$. Let $E$ and $F$ be the subspaces of $\mathbb R^n$
chosen in Theorem \ref{Milman}. By Lemma \ref{RS},
for some constant $d>0$ independent of $n$, one has
$$d\sqrt{a_{\frac{ n}{2}}( K\cap E) a_{\frac{ n}{2}}( K\cap E^{\perp})} \le a_n(K)\le \sqrt{a_{\frac{ n}{2}}( K\cap E) a_{\frac{ n}{2}}( K\cap E^{\perp})} $$
and
$$
d\sqrt{a_{\frac{n}{4}}\big( P_F(K\cap E)\big)\, a_{\frac{n}{4}}( K\cap E\cap F^{\perp})} \le a_{\frac{n}{2}}( K\cap E)
\le \sqrt{a_{\frac{n}{4}}\big( P_F(K\cap E)\big)\, a_{\frac{n}{4}}( K\cap E\cap F^{\perp})}.
$$
Next, from Theorem \ref{Milman}, for some absolute constants $c',d'>0$, one has
$$
c'\le a_{\frac{ n}{4}}( P_F(K\cap E))\le d'.
$$
It follows that for some universal constant $c>0$, one has
\begin{equation}\label{fund}
a_n(K)\ge
c \big(a_{\frac{ n}{2}}( K\cap E^{\perp}) \big)^{\frac{1}{2}} \big(a_{\frac{n}{4} }( K\cap E\cap F^{\perp}) \big)^{\frac{1}{4}}.
\end{equation}
Define now, for every $n\ge 1$,
$$
a_n=\min\{a_m(L); 1\le m\le n, \mbox{ $L$ convex symmetric body in $ \mathbb R^m$} \}.
$$
Observing that $a_n>0$, one gets from (\ref{fund}) that
\begin{equation}\label{fin}
a_n\ge c \big(a_n\big)^{\frac{1}{2}} \big(a_n\big)^{\frac{1}{4}}.
\end{equation}
Thus $a_n\ge c^4$.
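The self-improving inequality \eqref{fin} can be read as a fixed-point statement: any positive lower bound on $a_n$ improves under the map $a\mapsto c\,a^{3/4}$ until it stabilizes at $c^4$. A numerical sketch (the value of $c$ is illustrative):

```python
c = 0.3      # illustrative constant, not the one produced by the proof
a = 1e-9     # any positive starting lower bound
for _ in range(200):
    a = c * a ** 0.75   # one pass of the inequality improves the bound
print(a, c ** 4)         # the iteration stabilizes at c^4
```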
\subsection{Complex analysis approach}
Let us very briefly discuss an approach via complex and harmonic analysis which was initiated by Nazarov \cite{Na}. We will follow here a work of Berndtsson \cite{Be2}, which is done via functional inequalities (central for the next section).
We will consider a special case of the Bergman spaces. Let $\psi: {\mathbb C}^n \to \mathbb R\cup\{+\infty\}$ be a convex function and $\Omega=\{(x,y)\in \mathbb R^{2n}: \psi(x+iy)<\infty\}.$ The {\it Bergman space} $A^2(e^{-\psi})$ is the Hilbert space of holomorphic functions $f$ on $\Omega$ such that
$$\|f\|^2=\int_{\Omega} |f(x+iy)|^2 e^{-\psi(x+iy)} dxdy <\infty.
$$
The (diagonal) Bergman kernel $B$ for $A^2(e^{-\psi})$ is defined as
$$
B(z)=\sup_{f \in A^2( e^{-\psi})} \frac{|f(z)|^2}{\|f\|^2}.
$$
Next, consider an even convex function $\phi:\mathbb R^n \to\mathbb R\cup \{+\infty\}$ such that $ e^{-\phi(x)}$ is integrable over $\mathbb R^n$. For $\alpha \in {\mathbb C}$, consider the Bergman kernel $B_\alpha(z)$ corresponding to the function $\psi(z)=\phi({\rm Re}(z))+ \phi({\rm Re}(\alpha z))$. The main theorem in \cite{Be2} is the claim that
\begin{equation}\label{berndtsson}
B_i(0) \le c^n B_1(0),
\end{equation}
where $c$
is an absolute constant (precisely computed in \cite{Be2}). We note that $B_1$ is the Bergman kernel for $\psi(x+iy)=2\phi(x)$, i.e.\ independent of ${\rm Im}(z)$, and that $B_i$ is the Bergman kernel for $\psi(x+iy)=\phi(x)+\phi(y)$. It is essential to understand that the Bergman spaces corresponding to those densities are different; thus the connection is not immediate. For example, the function $f=1$ belongs to the second space but does not belong to the space corresponding to $\psi(x+iy)=2\phi(x)$. Using $f=1$ we get
$$
B_i(0)\ge \frac{1}{\int_{\mathbb R^n} e^{-\phi(x)} dx \int_{\mathbb R^n} e^{-\phi(y)}dy}\ .
$$
Together with (\ref{berndtsson}) this gives
\begin{equation}\label{berndtssonM}
B_1(0) \ge \frac{c^{-n}}{\left(\int_{\mathbb R^n} e^{-\phi(x)} dx \right)^2}\ ,
\end{equation}
which is an essential estimate for proving the Bourgain-Milman inequality.
The proof of (\ref{berndtsson}) in \cite{Be2} is based on a very nice and tricky approach of ``linking'' $B_i$ and $B_1$ via $B_\alpha$. Indeed, it turns out that $b(\alpha):=\log B_\alpha(0)$ is subharmonic in ${\mathbb C}$ (see \cite{Be0, Be2}), and moreover $b(\alpha) \le C+n \log|\alpha|^2$, which can be seen from the change of variables
\begin{align*}
\|f\|_\alpha^2&=\int_{{\mathbb C}^n}|f(z)|^2 e^{-(\phi({\rm Re}(z))+ \phi({\rm Re}(\alpha z)))} dz \\ &=|\alpha|^{-2n} \int_{{\mathbb C}^n}|f(z/\alpha)|^2 e^{-(\phi({\rm Re}(z/\alpha))+ \phi({\rm Re}(z)))} dz.
\end{align*}
Thus $B_\alpha(0)=|\alpha|^{2n}B_{1/\alpha}(0).$ Moreover, $B_{1/\alpha}(0)$ is bounded as $\alpha \to \infty.$ Thus one can apply the Poisson representation formula in the upper half plane to the function $b(\alpha)-n\log|\alpha|^2$ to get
$$
\log B_i(0)=b(i)\le \frac{1}{\pi}\int_{-\infty}^\infty \frac{b(s) - n \log(s^2)}{1+s^2} ds=\frac{2}{\pi}\int_{0}^\infty \frac{b(s) - n \log(s^2)}{1+s^2} ds.
$$
Using that $s\mapsto \phi(s x)$ is non-decreasing on $(0, 1]$ (since $\phi$ is even and convex), one has, for $s\in (0, 1]$,
$$
\|f\|_s^2=\int_{{\mathbb C}^n}|f(z)|^2 e^{-(\phi({\rm Re}(z))+ \phi({\rm Re}(s z)))} dz \ge \|f\|_1^2,
$$
and hence $b(s) \le b(1)$. If $s \ge 1$,
$$
\|f\|_s^2 =|s|^{-2n} \int_{{\mathbb C}^n}|f(z/s)|^2 e^{-(\phi({\rm Re}(z/s))+ \phi({\rm Re}(z)))} dz \ge s^{-2n}\|f\|_1^2,
$$
thus $b(s)\le b(1)+n\log s^2$. Putting those estimates together completes the proof of
(\ref{berndtsson}).
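Carrying out the Poisson integral with these two crude envelopes gives $b(i)\le b(1)+\frac{4G}{\pi}n$, where $G$ is Catalan's constant, i.e.\ \eqref{berndtsson} with $c=e^{4G/\pi}\approx 3.21$; this is only what the estimates above yield, not necessarily the optimized constant of \cite{Be2}. A numerical sketch:

```python
import math

# Catalan's constant G = sum (-1)^k/(2k+1)^2 = -int_0^1 log(s)/(1+s^2) ds;
# with b(s) <= b(1) on (0,1] and b(s) <= b(1) + n*log(s^2) for s >= 1, the
# Poisson integral reduces to b(1) + (2/pi)*n*int_0^1 -log(s^2)/(1+s^2) ds
# = b(1) + (4G/pi)*n.
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(200000))
c_sketch = math.exp(4 * G / math.pi)
print(G, c_sketch)   # G ~ 0.91597, c ~ 3.21
```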
The next step is to adapt the Paley-Wiener space associated to a convex body (discussed in Theorem \ref{PW}) to the case of a convex function. For a convex function $\varphi: \mathbb R^n \to \mathbb R \cup \{+\infty\}$, we denote by $PW(e^\varphi)$ the space of holomorphic functions $f$ of the form
$$f(z)=\int_{\mathbb R^n} e^{\langle z, \xi \rangle} \tilde{f}(\xi) d\xi, \hbox{ where }z \in {\mathbb C}^n, $$
for which
$$
\|f\|_{PW}^2 =\int_{\mathbb R^n} |\tilde{f}(\xi)|^2 e^{\varphi(\xi)} d\xi < \infty,
$$
for some function $\tilde{f}$ for which the two formulas above make sense.
The classical Paley-Wiener space discussed in Theorem \ref{PW} then corresponds to the case when $\varphi(x)=0$ for $x\in K$ and $\varphi(x)=+\infty$ for $x \not \in K.$ For a convex function $\psi$ on $\mathbb R^n$, let us consider its logarithmic Laplace transform given by
$$
\Lambda \psi (\xi)= \log \int_{\mathbb R^n} e^{2\langle x, \xi \rangle} e^{-\psi}dx.
$$
The second key ingredient in Berndtsson's proof is the fact that the spaces $PW(e^{\Lambda \psi})$ and $A^2(e^{-\psi})$ coincide and that \begin{equation}\label{normequality}\|f\|_{A^2}^2=(2\pi)^{n}\|f\|_{PW(e^{\Lambda \psi})}^2.
\end{equation}
This fact originates from the observation that any $f\in PW(e^{\Lambda \psi})$ is the Fourier-Laplace transform of $\tilde{f}$ and $e^{\langle x, t \rangle}\tilde{f}(t)$ belongs to $L_2(\mathbb R^n)$ for all $x$ such that $\psi(x)<\infty$. Then, we apply Parseval's formula to get
$$
\int_{\mathbb R^n}|f(x+iy)|^2 dy=(2\pi)^n \int_{\mathbb R^n} e^{2\langle x, t \rangle}|\tilde{f}(t)|^2dt.
$$
Multiplying the above equality by $e^{-\psi(x)}$ and integrating with respect to $x$, we get
$$
\int_{\mathbb R^n}\int_{\mathbb R^n}|f(x+iy)|^2e^{-\psi(x)}dxdy=(2\pi)^n \int_{\mathbb R^n} |\tilde{f}(t)|^2 e^{\Lambda \psi(t)}dt.
$$
Thus $f \in A^2(e^{-\psi})$ and the $A^2$ norm coincides with a multiple of the norm in $PW(e^{\Lambda \psi})$. This shows that the Paley-Wiener space embeds, isometrically up to this constant, into the corresponding Bergman space, and the rest follows from the observation that it is dense.
One can compute the Bergman kernel for $PW(e^{\Lambda \psi})$ and use (\ref{normequality}) to show that the Bergman kernel for $A^2(e^{-\psi})$ is equal to
\begin{equation}\label{PWestimate}
(2\pi)^{-n}\int_{\mathbb R^n} e^{2\langle x, t \rangle - \Lambda \psi(t)}dt.
\end{equation}
We will use (\ref{PWestimate}) to give an estimate from above of the value of the Bergman kernel at zero. The Legendre transform $\L\psi$ of a function $\psi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$ is defined by
\begin{equation}\label{legan}
\L\psi(y)=\sup_{x\in \mathbb R^n}(\langle x,y\rangle -\psi(x)),\quad \mbox{ for $y\in \mathbb R^n$}.
\end{equation}
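A discrete one-dimensional Legendre transform makes the definition concrete (a sketch; the grid and the test points are arbitrary choices):

```python
import math

def legendre(phi, y, grid):
    # sup over a finite grid of x*y - phi(x): a 1-D discretization of the definition
    return max(x * y - phi(x) for x in grid)

grid = [i / 1000.0 for i in range(-5000, 5001)]   # step 0.001 on [-5, 5]
# L(x^2/2)(y) = y^2/2: the quadratic potential is self-dual
print(legendre(lambda x: x * x / 2, 1.3, grid))   # ~ 0.845 = 1.3**2/2
# L(|x|)(y) = 0 for |y| <= 1 (and +infinity outside, invisible on a finite grid)
print(legendre(abs, 0.7, grid))
```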
Consider the Bergman space $A^2(e^{-2\phi(x)}),$ where $\phi:\mathbb R^n \to \mathbb R \cup \{\infty\}$ is convex and even (as in (\ref{berndtssonM})). Then,
\begin{equation}\label{eqmain}
B(0) \le \pi^{-n}\frac{\int_{\mathbb R^n} e^{-\L\phi(y)}dy}{\int_{\mathbb R^n}e^{-\phi(x)}dx}.
\end{equation}
Indeed, using (\ref{PWestimate}) we get
\begin{equation}\label{eqb0}
B(0) \le (2\pi)^{-n}\int_{\mathbb R^n} e^{ - \Lambda (2\phi(t))}dt.
\end{equation}
Note that, for any $y\in \mathbb R^n$, one has
\begin{align*}
e^{ \Lambda (2\phi(t))}&=2^{-n}\int_{\mathbb R^n}e^{\langle t,u\rangle - 2 \phi(u/2) }du = 2^{-n} e^{\langle t,y\rangle} \int_{\mathbb R^n}e^{\langle t,v\rangle - 2 \phi(v/2+y/2) }dv\\
&\ge 2^{-n} e^{\langle t,y\rangle-\phi(y)} \int_{\mathbb R^n}e^{\langle t,v\rangle - \phi(v) }dv,
\end{align*}
where in the last inequality we used the convexity of $\phi$. Using that $\phi$ is even, we get that
$$\int_{\mathbb R^n}e^{\langle t,v\rangle - \phi(v) }dv \ge \int_{\mathbb R^n}e^{- \phi(v) }dv$$
and
$$
e^{ \Lambda (2\phi(t))}\ge 2^{-n} e^{\langle t,y\rangle-\phi(y)} \int_{\mathbb R^n}e^{- \phi(v) }dv.
$$
Taking the supremum over all $y\in \mathbb R^n$, we get
$$
e^{ \Lambda (2\phi(t))}\ge 2^{-n} e^{\L\phi(t)} \int_{\mathbb R^n}e^{- \phi(v) }dv.
$$
Together with (\ref{eqb0}), this gives (\ref{eqmain}). Combining (\ref{eqmain}) with (\ref{berndtssonM}), we get the following theorem:
\begin{thm}\label{fbm}{\bf (Functional version of the Bourgain-Milman inequality)} Let $\phi:\mathbb R^n \to \mathbb R \cup \{+\infty\}$ be even and convex; then for some $c>0$ independent of $n$, one has
$$
\int_{\mathbb R^n} e^{ - \phi(x)} dx \int_{\mathbb R^n} e^{ - \L\phi(x)} dx\ge c^n.
$$
\end{thm}
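In dimension one the two-sided picture can be checked numerically: $\phi(x)=|x|$ gives product $4$ (the conjectured minimum $4^n$ discussed in Section \ref{func}), while the self-dual $\phi(x)=x^2/2$ gives the Santal\'o maximum $2\pi$. A Python sketch using a midpoint rule:

```python
import math

h, N = 1e-4, 400000
xs = [-20 + (i + 0.5) * h for i in range(N)]   # midpoint grid on [-20, 20]

# phi(x) = |x|: integral of e^{-|x|} is 2, and L(phi) = 0 on [-1,1], +inf outside
I1 = h * sum(math.exp(-abs(x)) for x in xs)
I2 = h * sum(1.0 for x in xs if abs(x) <= 1)   # integral of e^{-L(phi)}
print(I1 * I2)      # ~ 4

# phi(x) = x^2/2 is self-dual: the product is the Santalo maximum 2*pi
I3 = h * sum(math.exp(-x * x / 2) for x in xs)
print(I3 * I3)      # ~ 2*pi
```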
\begin{remark} Theorem \ref{fbm} was first proved, via the Bourgain-Milman inequality for symmetric convex bodies, in \cite{AKM}, and then generalized to non-even functions in \cite{FM3}. It implies the classical Bourgain-Milman inequality for convex bodies, as we shall see in the next section (Remark \ref{bodyfun} below).
\end{remark}
\section{Functional inequalities and link with transport inequalities}\label{func}
We dedicate this section to the study of functional inequalities related to volume product.
\subsection{Upper bounds}
The following general form of the functional Blaschke-Santal\'o inequality was proved by Ball \cite{Ba0} for $f$ even, by Fradelizi and Meyer \cite{FM4} for $f$ log-concave and by Lehec \cite{Le1} in the general case.
\begin{thm}\label{thm:bs-func-gen}
Let $f: \mathbb R^n \to \mathbb R_+$ be Lebesgue integrable. There exists $z \in \mathbb R^n$ such that for any $\rho: \mathbb R_+ \to \mathbb R_+$ and any $g:\mathbb R^n \to \mathbb R_+$ measurable satisfying $$f(x+z)g(y) \leq \rho(\langle x,y \rangle)^2\mbox{ for all $x,y\in \mathbb R^n$ satisfying $\langle x,y \rangle >0$} ,$$
one has
$$
\int f(x)\,dx \int g(y)\,dy \leq \left(\int \rho(|x|^2)\,dx\right)^2 .
$$
If $f$ is even, one can take $z=0$.
\end{thm}
Applying this result to $\rho={\bf 1}_{[0,1]}$ and $f={\bf 1}_K$, one recovers the Blaschke-Santal\'o inequality for convex sets.
Applying it to $\rho(t)=e^{-t/2}$ gives a proof of the following functional Blaschke-Santal\'o inequality for the Legendre transform, due to Artstein, Klartag and Milman \cite{AKM} (see \cite{Le2} for another proof).
\begin{thm}\label{thm:bs-func} Let $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$ satisfy $0<\int e^{-\varphi}<+\infty$. Denoting $\varphi_y(x):=\varphi(x+y)$ for $x,y\in \mathbb R^n$, there exists $z\in\mathbb R^n$ such that
$$
\int_{\mathbb R^n}e^{-\varphi(x)}dx\int_{\mathbb R^n}e^{-\L(\varphi_z)(y)}dy\le\left(\int_{\mathbb R^n}e^{-\frac{|x|^2}{2}}dx\right)^2=(2\pi)^n,
$$
with equality if and only if $\varphi_z(x)=|Ax|^2$ for some invertible linear map $A$ and some $z\in \mathbb R^n$.
\end{thm}
\begin{remark}\label{rk:lehec-centered}
In \cite{Le2}, Lehec deduced from Theorem \ref{thm:bs-func} that if the "barycenter" $b(\varphi):=\int xe^{-\varphi(x)}dx/\int e^{-\varphi}$ satisfies $b(\varphi)=0$, then
$$
\int e^{-\varphi}\int e^{-\L\varphi}\le(2\pi)^n.
$$
Indeed, for any $z$, one has $\L(\varphi_z)(y)=\L\varphi(y)-\langle y,z\rangle$. It follows that $\L((\L\varphi)_z)(y)=\L\L\varphi(y)-\langle y,z\rangle\le\varphi(y)-\langle y,z\rangle$. Using Jensen's inequality and $b(\varphi)=0$, we get
\[
\int e^{-\L((\L\varphi)_z)}\ge\int e^{-\varphi(y)+\langle y,z\rangle}dy\ge e^{\langle b(\varphi),z\rangle}\int e^{-\varphi}=\int e^{-\varphi}.
\]
Applying Theorem \ref{thm:bs-func} to $\L\varphi$, there exists thus a $z$ such that
\[
\int e^{-\varphi}\int e^{-\L\varphi}\le \int e^{-\L((\L\varphi)_z)}\int e^{-\L\varphi}\le(2\pi)^n.
\]
\end{remark}
As Lehec observed also, this gives a new proof of the result of Lutwak \cite{Lu91}:
\begin{proposition}\label{propLut} For a star-shaped body $K\subset\mathbb R^n$ (i.e.\ $tx\in K$ for all $(x,t) \in K\times[0,1]$) with barycenter at $0$, one has $$\vol(K)\vol(K^*)\le\vol(B_2^n)^2.$$
\end{proposition}
\begin{proof} Let $\varphi(x)=\frac{\|x\|_K^2}{2}$. Then since
\[
\int_{\mathbb R^n}xe^{-\frac{\|x\|_K^2}{2}}dx=\int_{\mathbb R^n}x\int_{\|x\|_K}^{+\infty}te^{-\frac{t^2}{2}}dtdx=\int_0^{+\infty}t^{n+2}e^{-\frac{t^2}{2}}dt\int_K xdx=0,
\]
one has $b(\varphi)=0$. Moreover, for any $y\in\mathbb R^n$, one has $\L\varphi(y)=\sup_x\langle x,y\rangle -\frac{\|x\|_K^2}{2}=\frac{\|y\|_{K^*}^2}{2}$ and
$\int_{\mathbb R^n}e^{-\frac{\|x\|_K^2}{2} }dx=
2^{\frac{n}{2}} \Gamma(\frac{n}{2}+1)\vol(K).$
By Remark \ref{rk:lehec-centered}, it follows that $2^n\Gamma(\frac{n}{2}+1)^2\vol(K)\vol(K^*)\le(2\pi)^n$, that is $\vol(K)\vol(K^*)\le \pi^n/\Gamma(\frac{n}{2}+1)^2=\vol(B_2^n)^2$.
\end{proof}
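For $n=1$ and $K=[-1,1]$ the normalization used at the end of the proof can be verified directly (a sketch):

```python
import math

# n = 1, K = [-1,1]: ||x||_K = |x|, vol(K) = 2, and the identity
# int e^{-||x||_K^2/2} dx = 2^{n/2} * Gamma(n/2+1) * vol(K)
# reads sqrt(2*pi) = sqrt(2) * Gamma(3/2) * 2.
n = 1
lhs = math.sqrt(2 * math.pi)                    # int_R e^{-x^2/2} dx
rhs = 2 ** (n / 2) * math.gamma(n / 2 + 1) * 2  # vol([-1,1]) = 2
print(lhs, rhs)
```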
Before giving sketches of various proofs of Theorems \ref{thm:bs-func-gen} and \ref{thm:bs-func}, we need a lemma:
\begin{lemma}\label{lem:PLmult}
Let $\alpha,\beta, \gamma:\mathbb R_+\to\mathbb R_+$ be measurable functions such that for every $s,t>0$ one has $\alpha(s)\beta(t)\le \gamma(\sqrt{st})^2$. Then
$
\int_{\mathbb R_+}\alpha(t)dt\int_{\mathbb R_+}\beta(t)dt\le\left(\int_{\mathbb R_+}\gamma(t)dt\right)^2.
$
\end{lemma}
\begin{proof}
Define $f,g,h:\mathbb R\to\mathbb R$ by $f(x)=\alpha(e^x)e^x$, $g(x)=\beta(e^x)e^x$ and $h(x)=\gamma(e^x)e^x$. Then $f(x)g(y)\le h(\frac{x+y}{2})^2$ for all $x,y\in\mathbb R$. By the Prékopa-Leindler inequality (see \cite{Pi}, p.~3) we get
$
\int_{\mathbb R}f(x)dx\int_{\mathbb R}g(x)dx\le\left(\int_{\mathbb R}h(x)dx\right)^2.
$
We conclude with a change of variables.
\end{proof}
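The lemma can be sanity-checked numerically in its equality case $\alpha=\beta=\gamma=e^{-t^2/2}$, for which the hypothesis is the AM-GM inequality $e^{-(s^2+t^2)/2}\le e^{-st}$ and both sides of the conclusion equal $\pi/2$ (a sketch):

```python
import math

alpha = beta = gamma = lambda t: math.exp(-t * t / 2)
# hypothesis alpha(s)*beta(t) <= gamma(sqrt(st))^2, i.e. e^{-(s^2+t^2)/2} <= e^{-st}
for s in (0.1, 1.0, 3.7):
    for t in (0.2, 1.0, 5.0):
        assert alpha(s) * beta(t) <= gamma(math.sqrt(s * t)) ** 2 + 1e-15

h = 1e-4
I = h * sum(alpha((i + 0.5) * h) for i in range(200000))  # int_0^20 e^{-t^2/2} dt
print(I * I, math.pi / 2)   # both ~ pi/2
```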
\vskip 2mm
\noindent{\bf Proofs of Theorem \ref{thm:bs-func-gen}:} \\
1) In the case when $f$ is even and $\rho$ is decreasing, this proof is due to Ball \cite{Ba0}. For $s,t\in\mathbb R_+$, let $K_s=\{f\ge s\}$ and $L_t=\{g\ge t\}$. The hypothesis on $f$ and $g$ implies that $L_t\subset \rho^{-1}(\sqrt{st})K_s^*$. Since $f$ is even, $K_s$ is symmetric. We deduce from Blaschke-Santal\'o inequality that for every $s,t\in\mathbb R_+$, if $\alpha(s)=\vol(K_s)$ and $\beta(t)=\vol(L_t)$, one has
$$
\alpha(s)\beta(t)=\vol(K_s)\vol(L_t) \le(\rho^{-1}(\sqrt{st}))^n\vol(K_s)\vol(K_s^*)\le(\rho^{-1}(\sqrt{st}))^n\vol(B_2^n)^2.
$$
Denoting $\gamma(t)=(\rho^{-1}(t))^{n/2}\vol(B_2^n)$, we apply Lemma \ref{lem:PLmult} and
use the layer-cake formulas $\int_{\mathbb R^n} f(x) dx=\int_0^{+\infty}\alpha(s) ds$, $\int_{\mathbb R^n} g(y) dy=\int_0^{+\infty}\beta(t) dt$ and $\int_{\mathbb R^n}\rho(|x|^2)dx=\int_0^{+\infty}\gamma(t) dt$ to conclude.
\vskip 1mm
\noindent
2) In the case when $f$ is not supposed to be even, but is log-concave, the proof of Theorem \ref{thm:bs-func-gen} given in \cite{FM4} uses the so-called Ball body $K_f(z)$ associated to a log-concave function $f$, which is defined by
$$
K_f(z)=\left\{x\in\mathbb R^n; \int_0^{+\infty} r^{n-1}f(z+rx)dr\ge1\right\}.
$$
It follows from Ball's results \cite{Ba00} that $ K_f(z)$ is convex
and that its radial function is $r_{K_f(z)}(x)= \left(\int_0^{+\infty} r^{n-1}f(z+rx)dr\right)^{\frac{1}{n}}$ for $x\in\mathbb R^n\setminus\{0\}$.
If $x,y\in\mathbb R^n$ satisfy $\langle x,y\rangle>0$, define for $r\ge0$, $\alpha(r)=r^{n-1}f(z+rx)$, $\beta(r)=r^{n-1}g(ry)$ and $\gamma(r)=r^{n-1}\rho(r^2\langle x,y\rangle)$. It follows from Lemma
\ref{lem:PLmult} that
$$
\int_0^{+\infty} r^{n-1}f(z+rx)dr\int_0^{+\infty} r^{n-1}g(ry)dr\le\left(\int_0^{+\infty}r^{n-1}\rho(r^2\langle x,y\rangle)dr\right)^2.
$$
This means that
$$
\langle x,y\rangle\le \frac{ c_n(\rho)}{r_{K_f(z)}(x)r_{K_g(0)}(y)} ,
\mbox{ where }c_n(\rho):=\left(\int_0^{+\infty}r^{n-1}\rho(r^2)dr\right)^{2/n},
$$
or in other words $K_g(0)\subset c_n(\rho) K_f(z)^*$. Moreover, one has
$$
\int_{\mathbb R^n} f(x) dx=n\vol\big(K_f(z)\big)\mbox{
for every }z\in \supp(f).
$$
Using Brouwer's fixed point theorem, it was proved in \cite{FM4} that for some $z\in \mathbb R^n$, the center of mass of $K_f(z)$ is at the origin. The result then follows from the Blaschke-Santal\'o inequality.
This method was also used in \cite{BBF} to prove stability versions of the functional forms of Blaschke-Santal\'o inequality.\\
\noindent{\bf Proofs of Theorem \ref{thm:bs-func}:}
\noindent 1) The proof given in \cite{AKM} attaches to $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$, {\it supposed here to be even}, the functions $f_m(x)=\left(1-\frac{\varphi(x)}{m}\right)_+^m$, for $m\ge1$ and the convex bodies
$$
K_m(f_m):=\{(x,y)\in\mathbb R^{n+m};|y|\le f_m(\sqrt{m}x)^{1/m}\}.
$$
When $m\to+\infty$, $f_m\to e^{-\varphi}$ and
$$
m^\frac{n}{2}\frac{\vol\big(K_m(f_m)\big)}{\vol(B_2^m)}=\int_{\mathbb R^n}f_m(x)dx\to \int_{\mathbb R^n}e^{-\varphi(x)}dx.
$$
Moreover $K_m(f_m)^*=K_m(\L_mf_m)$, where
$
\L_m(f_m)(y)=\inf_{x}\frac{\left(1-\frac{\langle x,y\rangle}{m}\right)^m_+}{f_m(x)}.
$
Also, when $m\to+\infty$
$$\L_m(f_m)(y)\to e^{-\L\varphi(y)}\mbox{ and }
m^\frac{n}{2}\frac{\vol\big(K_m(\L_mf_m)\big)}{\vol(B_2^m)}=\int_{\mathbb R^n}\L_mf_m(x)dx\to \int_{\mathbb R^n}e^{-\L\varphi(x)}dx.
$$
One then applies the Blaschke-Santal\'o inequality to the bodies $K_m(f_m)$.\\
\noindent
2) Lehec's proof \cite{Le2} of Theorem \ref{thm:bs-func} uses induction on the dimension. For $n=1$, choose $z\in \mathbb R$ such that $\int_z^{+\infty}e^{-\varphi(t)}dt=\int_{\mathbb R}e^{-\varphi(t)}dt/2$. For all $s,t\ge0$, one has $\varphi_z(s)+\L(\varphi_z)(t)\ge st$. Thus, the functions $\alpha(s)=e^{-\varphi_z(s)}$, $\beta(t)=e^{-\L(\varphi_z)(t)}$ and $\gamma(u)=e^{-u^2/2}$ satisfy $\alpha(s)\beta(t)\le \gamma(\sqrt{st})^2$, for every $s,t\ge0$. It follows from Lemma \ref{lem:PLmult} that
\begin{equation}\label{pl-func-dim1}
\int_0^{+\infty}e^{-\varphi_z(t)}dt\int_0^{+\infty}e^{-\L(\varphi_z(t))}dt=\int_{\mathbb R_+}\alpha(t)dt\int_{\mathbb R_+}\beta(t)dt\le\left(\int_{\mathbb R_+}\gamma(u)du\right)^2=\frac{\pi}{2}.
\end{equation}
This inequality also holds on $\mathbb R_{-}$; adding the two inequalities, we get the result.
Now suppose that the result holds for $n$, and let us do the induction step. Let $\varphi:\mathbb R^{n+1}\to\mathbb R\cup\{+\infty\}$. If $X\in\mathbb R^{n+1}$, we denote $X=(x,s)\in\mathbb R^n\times \mathbb R$. Let
$$\mathcal P(\varphi):=\min_z\int_{\mathbb R^{n+1}}e^{-\varphi(X)}dX\int_{\mathbb R^{n+1}} e^{-\L(\varphi_z)(X)}dX.$$
For any invertible affine map $A$, one has $\mathcal P(\varphi\circ A)=\mathcal P(\varphi)$. Translating $\varphi$ in the $e_{n+1}$ direction, we may assume that $$\int_{s>0}\int e^{-\varphi(x,s)}dxds=\int_{s<0}\int e^{-\varphi(x,s)}dxds.$$ Define $b_+(\varphi)$ and $b_-(\varphi)$ in $\mathbb R^{n+1}$ by
\[
b_+(\varphi)=\frac{\int_{s>0}\int(x,s)e^{-\varphi(x,s)}dxds}{\int_{s>0}\int e^{-\varphi}dxds}\quad \hbox{and}\quad b_-(\varphi)=\frac{\int_{s<0}\int(x,s)e^{-\varphi(x,s)}dxds}{\int_{s<0}\int e^{-\varphi}dxds}.
\]
Since $\langle b_+(\varphi),e_{n+1}\rangle>0$ and $\langle b_-(\varphi),e_{n+1}\rangle<0$, the point $\{z\}:=[b_{-}(\varphi),b_+(\varphi)]\cap e_{n+1}^\bot$ is well defined. By translating $\varphi$ in the remaining directions, we may assume that $z=0$.
Let $A$ be the linear invertible map defined by $Ax=x$ for $x\in\mathbb R^n$ and $Ae_{n+1}=b_+(\varphi)$. Then $b_+(\varphi\circ A)=A^{-1}b_+(\varphi)=e_{n+1}$. Changing $\varphi$ into $\varphi\circ A$, we may assume that $b_+(\varphi)=e_{n+1}$. We define $\Phi, \Psi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$ by
\[
e^{-\Phi(x)}=\int_0^{+\infty}e^{-\varphi(x,t)}dt\quad \hbox{and}\quad
e^{-\Psi(x)}=\int_0^{+\infty}e^{-\L\varphi(x,t)}dt.
\]
Since $b_+(\varphi)=e_{n+1}$, we get
$\int_{\mathbb R^n}xe^{-\Phi(x)}dx=\int_{t>0}\int_{\mathbb R^n}xe^{-\varphi(x,t)}dxdt=0.$
Hence $b(\Phi)=0$. From the induction hypothesis and Remark \ref{rk:lehec-centered}, it follows that
\begin{equation}\label{induction-func}
\int_{\mathbb R^n}e^{-\Phi(x)}dx\int_{\mathbb R^n}e^{-\L\Phi(y)}dy\le (2\pi)^n.
\end{equation}
For every $x,y\in\mathbb R^n$ and $s,t\in\mathbb R$, let $\varphi^x(s)=\varphi(x,s)$ and $(\L\varphi)^y(t)=\L\varphi(y,t)$. Applying again Lemma \ref{lem:PLmult} as in \eqref{pl-func-dim1}, we get
\[
\int_0^{+\infty}e^{-\varphi^x(s)}ds\int_0^{+\infty}e^{-\L(\varphi^x)(t)}dt\le\frac{\pi}{2}.
\]
Since $\varphi^x(s)+(\L\varphi)^y(t)\ge\langle x,y\rangle +st$, one has $(\L\varphi)^y(t)-\langle x,y\rangle\ge \L(\varphi^x)(t)$. Thus for $x,y\in\mathbb R^n$,
\[
e^{-\Phi(x)-\Psi(y)}=\int_0^{+\infty}e^{-\varphi^x(s)}ds\int_0^{+\infty}e^{-(\L\varphi)^y(t)}dt\le \frac{\pi}{2}e^{-\langle x,y\rangle}.
\]
This implies that $e^{-\Psi(y)}\le\frac{\pi}{2}e^{-\L\Phi(y)}$. Using \eqref{induction-func}, we get
\[
\int_{\mathbb R^n}e^{-\Phi(x)}dx\int_{\mathbb R^n}e^{-\Psi(y)}dy\le \frac{\pi}{2}(2\pi)^n,
\]
that is
\[
\int_{0}^{+\infty}\int_{\mathbb R^n}e^{-\varphi(x,s)}dxds\int_{0}^{+\infty}\int_{\mathbb R^n}e^{-\L\varphi(y,t)}dydt\le \frac{\pi}{2}(2\pi)^n.
\]
Adding this to the analogous bound for $s<0$ and using $\int_{s>0}\int e^{-\varphi(x,s)}dxds=\int_{s<0}\int e^{-\varphi(x,s)}dxds$, we conclude.
\begin{remark} Various $L_p$-versions of the functional Blaschke-Santal\'o inequality have been given (see for instance \cite{HJM}). Also, Blaschke-Santal\'o type inequalities were established in the study of extremal general affine surface areas \cite{GHSW, Y, Hoe}. A consequence of the Blaschke-Santal\'o inequality was recently given in \cite{VY}.
\end{remark}
\subsection{Lower bounds of the volume product of log-concave functions}
Let $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$ be convex. The {\it domain of $\varphi$} is $\mathrm{dom}(\varphi):=\{x\in\mathbb R^n; \varphi(x)<+\infty\}$. If $0<\int e^{-\varphi}<+\infty$, we define
the {\it functional volume product of $\varphi$} as
$$\mathcal P(\varphi)=\min_z\int_{\mathbb R^n}e^{-\varphi(x)}dx\int_{\mathbb R^n}e^{-\L(\varphi_z)(y)}dy.$$
If $\varphi$ is even, this minimum is attained at $z=0$.
The following conjectures were proposed in \cite{FM3}.
\begin{conj}\label{mahler-func-conj} Let $n\ge1$ and let $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$ be a convex function such that $0<\int e^{-\varphi}<+\infty$. Then
\[
\int_{\mathbb R^n}e^{-\varphi(x)}dx\int_{\mathbb R^n}e^{-\L\varphi(y)}dy\ge e^n,
\]
with equality if and only if there is a constant $c>0$ and an invertible linear map $T$ such that
\[e^{-\varphi(Tx)}=c\prod_{i=1}^ne^{-x_i}{\bf 1}_{[-1,+\infty)}(x_i).
\]
\end{conj}
\begin{conj}\label{mahler-func-conj-even} Let $n\ge1$ and let $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$ be an even convex function such that $0<\int e^{-\varphi}<+\infty$. Then
\[
\int_{\mathbb R^n}e^{-\varphi(x)}dx\int_{\mathbb R^n}e^{-\L\varphi(y)}dy\ge 4^n,
\]
with equality if and only if there exist a constant $c>0$, two complementary subspaces $F_1$ and $F_2$ and two Hanner polytopes $K_1\subset F_1$ and $K_2\subset F_2$ such that for all $(x_1,x_2)\in F_1\times F_2$,
$$e^{-\varphi(x_1+x_2)}=ce^{-||x_1||_{K_1} }{\bf 1}_{K_ 2}(x_2).$$
\end{conj}
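The one-dimensional equality case of Conjecture \ref{mahler-func-conj} can be checked numerically: for $e^{-\varphi(x)}=e^{-x}{\bf 1}_{[-1,+\infty)}(x)$ one has $\int e^{-\varphi}=e$, $\L\varphi(y)=1-y$ for $y\le1$ (and $+\infty$ otherwise), so $\int e^{-\L\varphi}=1$ and the product is exactly $e=e^1$. A sketch:

```python
import math

h, N = 1e-4, 600000
xs = [-30 + (i + 0.5) * h for i in range(N)]          # midpoint grid on [-30, 30]
f = lambda x: math.exp(-x) if x >= -1 else 0.0        # e^{-phi}
g = lambda y: math.exp(y - 1) if y <= 1 else 0.0      # e^{-L(phi)}
I1 = h * sum(f(x) for x in xs)
I2 = h * sum(g(y) for y in xs)
print(I1 * I2)   # ~ e
```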
\begin{remark}
With a different duality for a convex function $\varphi$, another Blaschke-Santal\'o and inverse Santal\'o inequality were obtained in \cite{AS, FS}.
Another extension of Blaschke-Santal\'o inequality and of its functional form was considered in \cite{HoS}, where duality is combined with the study of inequalities related to monotone non-trivial Minkowski endomorphisms.
\end{remark}
Partial results toward the proofs of these conjectures are gathered in the following theorem.
\begin{thm} Let $n\ge1$ and $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$ be a convex function such that $0<\int e^{-\varphi}<+\infty$. Then
\begin{enumerate}
\item Conjecture \ref{mahler-func-conj} holds for $n=1$. It also holds for all $n\ge 1$ if there exists an invertible affine map $T$ such that $\mathrm{dom}(\varphi\circ T)=\mathbb R_+^n$ and $\varphi\circ T$ is non-decreasing on $\mathbb R_+^n$, in the sense that if $x_i\le y_i$ for all $1\le i\le n$, then $(\varphi\circ T)(x_1,\dots, x_n)\le(\varphi\circ T)(y_1,\dots,y_n)$.
\item Conjecture \ref{mahler-func-conj-even} holds if $n=1$ or $n=2$. It also holds for all $n\ge 1$ if $\varphi$ is unconditional, in the sense that there exists an invertible linear map $T$ such that $(\varphi\circ T)(x_1,\dots,x_n)=(\varphi\circ T)(|x_1|,\dots,|x_n|)$ for all $(x_1,\dots,x_n)\in \mathbb R^n$.
\end{enumerate}
\end{thm}
\noindent(1) For $n=1$, Conjecture \ref{mahler-func-conj} was proved in two different ways in \cite{FM1, FM3}. The case of convex functions that are non-decreasing on the positive octant was also proved in \cite{FM3}.
\noindent(2) For unconditional convex functions on $\mathbb R^n$, Conjecture \ref{mahler-func-conj-even} was established in two different ways in \cite{FM2, FM3}, with the equality case settled in \cite{FGMR}. In particular, this covers the general case $n=1$. For $n=2$, it was proved in \cite{FN}.
\begin{remark}\label{bodyfun}
There is a strong link between Conjectures \ref{mahler} and \ref{conjcube} for convex bodies and their functional counterparts, Conjectures \ref{mahler-func-conj} and \ref{mahler-func-conj-even}. Indeed, as observed in \cite{FM3}, given a symmetric convex body $K$ in $\mathbb R^n$, if $\varphi_K(x)=\|x\|_K$, we get $e^{-\L\varphi_K}={\bf 1}_{K^*}$ and, integrating on level sets,
$\P(\varphi_K)=n!\,\P(K)$. Therefore, if Conjecture \ref{mahler-func-conj-even} holds for $\varphi_K$, then Conjecture \ref{conjcube} holds for $K$. Conversely, if Conjecture \ref{conjcube} holds in $\mathbb R^n$ for every dimension $n$ then, given an even convex function $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$, we can apply it
in dimension $n+m$ to the convex sets
\[
K_m(\varphi)=\left\{(x,y)\in\mathbb R^n\times\mathbb R^m; \|y\|_\infty\le \left( 1-\frac{\varphi(mx)}{m}\right)_+\right\}.
\]
Using
$$
\vol_{n+m}(K_m(\varphi))=\frac{2^m}{m^n}\int_{\mathbb R^n}\left( 1-\frac{\varphi(x)}{m}\right)_+^mdx
$$
and
\[
K_m(\varphi)^*=\left\{(x,y)\in\mathbb R^n\times\mathbb R^m; \|y\|_1\le \inf_{\varphi(x')\le m} \frac{(1-\langle x,x'\rangle)_+}{1-\frac{\varphi(x')}{m}}\right\},
\]
it is proved in \cite{FM3} that, when $m\to+\infty$, the inequality $\P(K_m(\varphi))\ge\frac{4^{n+m}}{(n+m)!}$ gives $\P(\varphi)\ge 4^n$. In a similar way, if Conjecture \ref{mahler-func-conj} holds in dimension $n+1$, given a convex body $K$ in $\mathbb R^n$ with Santal\'o point at the origin, we apply it
to $\varphi:\mathbb R^n\times\mathbb R\to\mathbb R\cup\{+\infty\}$ defined by
\[
e^{-\varphi(x,t)}={\bf 1}_{[-n-1,+\infty)}(t){\bf 1}_{(t+n+1)K}(x)e^{-t}.
\]
Then the Legendre transform of $\varphi$ is
\[
e^{-\L\varphi(y,s)}={\bf 1}_{(-\infty,1]}(s){\bf 1}_{(1-s)K^*}(y)e^{(n+1)(s-1)},
\]
and
\[
\P(\varphi)=\frac{(n!)^2e^{n+1}}{(n+1)^{n+1}}|K||K^*|.
\]
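Both formulas follow from direct computations, which we sketch here for the reader's convenience. Integrating first in $x$ and then in $t$,
\[
\int_{\mathbb R^{n+1}}e^{-\varphi(x,t)}\,dx\,dt=\vol(K)\int_{-n-1}^{+\infty}e^{-t}(t+n+1)^n\,dt=n!\,e^{n+1}\vol(K),
\]
while the substitution $u=1-s$ yields
\[
\int_{\mathbb R^{n+1}}e^{-\L\varphi(y,s)}\,dy\,ds=\vol(K^*)\int_0^{+\infty}e^{-(n+1)u}u^n\,du=\frac{n!}{(n+1)^{n+1}}\vol(K^*),
\]
and the product of the two integrals gives the announced value of $\P(\varphi)$.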
This proves that if Conjecture \ref{mahler-func-conj} holds for $\varphi$, then
\[
\P(K)\ge\P(\Delta_n)=\frac{(n+1)^{n+1}}{(n!)^2},
\]
which is Conjecture \ref{mahler} for $K$. Lastly, as shown in \cite{FM3}, one can adapt the arguments used for even functions to prove that, given a convex function $\varphi:\mathbb R^n\to\mathbb R\cup\{+\infty\}$, Conjecture \ref{mahler} applied to a well chosen sequence of bodies $\Delta_m(\varphi)$ in dimension $n+m$ gives Conjecture \ref{mahler-func-conj} for $\varphi$ when $m\to+\infty$.
\end{remark}
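For the reader's convenience, let us also sketch the computation behind the identity $\P(\varphi_K)=n!\,\P(K)$ used in Remark \ref{bodyfun}. Since $\L\varphi_K(y)=\sup_x\left(\langle x,y\rangle-\|x\|_K\right)$ equals $0$ when $y\in K^*$ and $+\infty$ otherwise, one has $\int_{\mathbb R^n}e^{-\L\varphi_K}=\vol(K^*)$, while integration on the level sets of $\|\cdot\|_K$ gives
\[
\int_{\mathbb R^n}e^{-\|x\|_K}\,dx=n\vol(K)\int_0^{+\infty}e^{-t}t^{n-1}\,dt=n!\,\vol(K),
\]
so that $\P(\varphi_K)=n!\,\vol(K)\vol(K^*)=n!\,\P(K)$.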
It was also proved in \cite{GM} that if $\mathcal P(\varphi)$ is minimal, then the Hessian of $\varphi$ is nowhere positive definite.
Asymptotic estimates hold too: it was proved in \cite{KM} that for some constant
$c>0$, one has $\mathcal P(\varphi)\ge c^n$ for all $n\ge 1$ and all even convex functions $\varphi$. This was generalized to all convex functions in \cite{FM3}.
\subsection{Volume product and transport inequalities}
Maurey \cite{Mau} introduced the following property $(\tau)$:
Let $\mu$ be a measure on $\mathbb R^n$ and $c:\mathbb R^n\times\mathbb R^n\to\mathbb R_+$ be a lower semi-continuous function (called a {\it cost function}); we say that the couple $(\mu,c)$ satisfies {\it property $(\tau)$} if for any continuous and bounded function $f:\mathbb R^n\to\mathbb R$, defining
$$Q_cf(y)=\inf_x\left(f(x)+c(x,y)\right)\mbox{ for $y\in \mathbb R^n$}, $$
one has
\[
\int_{\mathbb R^n}e^{-f(x)}d\mu(x)\int_{\mathbb R^n}e^{Q_cf(y)}d\mu(y)\le 1.
\]
Maurey \cite{Mau} showed that if $\gamma_n$ is the standard Gaussian probability measure on $\mathbb R^n$,
with density $(2\pi)^{-n/2}e^{-|x|^2/2}$,
and $c_2(x,y)=\frac{1}{2}|x-y|^2$, then, as a consequence of the Pr\'ekopa-Leindler inequality, $(\gamma_n,\frac{c_2}{2})$ satisfies property $(\tau)$.
In \cite{AKM}, it was pointed out that the functional form of the Blaschke-Santal\'o inequality for the Legendre transform (Theorem~\ref{thm:bs-func}) is equivalent to an improved property $(\tau)$ for even functions: we say that the pair $(\gamma_n,c_2)$ satisfies the {\it even property $(\tau)$} if for any even function $f$, one has
\begin{equation}\label{tau-prop}
\int_{\mathbb R^n}e^{-f(x)}d\gamma_n(x)\int_{\mathbb R^n}e^{Q_{c_2}f(y)}d\gamma_n(y)\le 1.
\end{equation}
This equivalence follows from the change of function: $\varphi(x)=f(x)+\frac{|x|^2}{2}$ and the fact that
\[
-\L\varphi(y)=\inf_x\left(f(x)+\frac{|x|^2}{2}-\langle x,y\rangle\right)=Q_{c_2}f(y)-\frac{|y|^2}{2}.
\]
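To make the equivalence explicit, note that with $\varphi=f+\frac{|\cdot|^2}{2}$ one has $e^{-\L\varphi(y)}=e^{Q_{c_2}f(y)-|y|^2/2}$, hence
\[
\int_{\mathbb R^n}e^{-f}\,d\gamma_n\int_{\mathbb R^n}e^{Q_{c_2}f}\,d\gamma_n=(2\pi)^{-n}\int_{\mathbb R^n}e^{-\varphi(x)}\,dx\int_{\mathbb R^n}e^{-\L\varphi(y)}\,dy,
\]
so that \eqref{tau-prop} for $f$ is exactly the functional Blaschke-Santal\'o inequality $\int e^{-\varphi}\int e^{-\L\varphi}\le(2\pi)^n$ for $\varphi$.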
A direct proof of \eqref{tau-prop} was then given by Lehec in \cite{Le}. Moreover, it follows from Remark~\ref{rk:lehec-centered} above, due to Lehec \cite{Le2}, that \eqref{tau-prop} also holds as soon as $\int_{\mathbb R^n}xe^{-f(x)}d\gamma_n(x)=0$.
Moreover, as shown for example in Proposition 8.2 of \cite{GL}, there is a general equivalence between property $(\tau)$ and symmetrized forms of transport-entropy inequalities. These transport-entropy inequalities were introduced by Talagrand \cite{Ta}, who showed that, for every probability measure $\nu$ on $\mathbb R^n$, one has
\begin{equation}\label{tala}
W_2^2(\nu,\gamma_n)\le 2 H(\nu|\gamma_n),
\end{equation}
where $W_2$ is the {\it Kantorovich-Wasserstein distance} defined by
\[
W_2^2(\nu,\gamma_n)=\inf\left\{\int_{\mathbb R^n\times\mathbb R^n}|x-y|^2d\pi(x,y); \pi\in\Pi(\nu,\gamma_n) \right\},
\]
where $\Pi(\nu,\gamma_n)$ is the set of probability measures on $\mathbb R^n\times\mathbb R^n$ whose first marginal is $\nu$ and second marginal is $\gamma_n$ and $H$ is the {\it relative entropy } defined for $d\nu=fd\gamma_n$ by
\[
H(\nu|\gamma_n)=\int_{\mathbb R^n}f\log f\,d\gamma_n.
\]
Using this type of equivalence between property $(\tau)$ and transport-entropy inequalities, Fathi \cite{Fat} proved the following symmetrized form of Talagrand's transport-entropy inequality: if $\nu_1$ (or $\nu_2$) is centered, in the sense that $\int xd\nu_1(x)=0$, then
\begin{equation}\label{tal-fathi}
W_2^2(\nu_1,\nu_2)\le 2 (H(\nu_1|\gamma_n)+H(\nu_2|\gamma_n)).
\end{equation}
He actually showed that \eqref{tal-fathi} is equivalent to the functional form of the Blaschke-Santal\'o inequality (Theorem~\ref{thm:bs-func}).
Applying \eqref{tal-fathi} to $\nu_1=\gamma_n$, one recovers Talagrand's inequality \eqref{tala}. In his proof, Fathi used a reverse logarithmic Sobolev inequality for log-concave functions established in \cite{AKSW} under some regularity assumptions, removed later with a simplified proof in \cite{CFGLSW}.
In a similar way, Gozlan \cite{Go} gave equivalent transport-entropy forms of Conjectures 3 and 4 and of Bourgain-Milman's asymptotic inequality. This work was pursued in \cite{FGZ}, where new proofs of the one-dimensional case of Conjectures 3 and 4 are also provided.
\section{Generalization to many functions and bodies}\label{secMB}
The following intriguing conjecture was proposed by Kolesnikov and Werner \cite{KoW}.
\begin{conj}\label{KW} Let $\rho:\mathbb R\to \mathbb R^+$ be decreasing and, for $m\ge 2$, let $f_i:\mathbb R^n\to \mathbb R,$ $i=1, \dots, m$, be even Lebesgue integrable functions satisfying
$$
\prod_{i=1}^m f_i(x_i) \le \rho\left(\sum\limits_{1\le i<j\le m} \langle x_i, x_j\rangle \right) \mbox{ for all } x_1, \dots, x_m \in \mathbb R^n.
$$
Then
$$
\prod_{i=1}^m \int_{\mathbb R^n} f_i(x_i) dx_i \le \left(\int_{\mathbb R^n}\rho^{\frac{1}{m}}\left(\frac{m(m-1)}{2} |u|^2\right) du\right)^m.
$$
\end{conj}
Conjecture \ref{KW} was proved by Kolesnikov and Werner when the functions $f_i$ are unconditional.
Observe that Conjecture \ref{KW} is a functional form of a new conjectured Blaschke-Santal\'o inequality involving more than two convex bodies.
Indeed, for $1\le i\le m$, let $K_i$ be star bodies, $f_i(x)=v_n(2\pi)^{-n/2} e^{-\|x\|_{K_i}^2/2}$ and $\rho(t)=e^{-t/(m-1)}$.
Since $\vol(K_i)=v_n(2\pi)^{-n/2} \int_{\mathbb R^n} e^{-\|x\|_{K_i}^2/2}dx, $
we get from Conjecture \ref{KW}:
\begin{conj}\label{KWBS} Let $m \ge 2$ and let $K_1, \dots, K_m$ be symmetric convex bodies in $\mathbb R^n$ such that
\begin{equation}\label{polKW}
\sum\limits_{1\le i<j\le m} \langle x_i, x_j\rangle \le \frac{m-1}{2}\sum_{i=1}^m\|x_i\|^2_{K_i}, \mbox{ for all } x_1, \dots, x_m \in \mathbb R^n,
\end{equation}
then
$
\prod_{i=1}^m \vol(K_i) \le \vol(B_2^n)^m.
$
\end{conj}
Conjecture \ref{KWBS} has been confirmed in \cite{KoW} for unconditional bodies; for $m\ge 3$, it was moreover shown that equality holds if and only if $K_i=B_2^n$ for $i=1,\dots,m.$
This direction was further developed by Kalantzopoulos and Saroglou \cite{KaSa} who generalized the polarity condition (\ref{polKW}). For $2\le p\le m$ and $ x_1, \dots, x_m \in \mathbb R^n,$ let
$$
{\mathcal S}_p(x_1,\dots, x_m)=\binom{m}{p}^{-1}\sum_{l=1}^n s_p(x_1(l), \dots, x_m(l)),
$$
where $x_i=\sum_{l=1}^n x_i(l) e_l$ and $s_p$ is the elementary symmetric polynomial in $m$ variables of degree $p$. The case $p=2$ corresponds to the sum of scalar products, i.e.
$${\mathcal S}_2(x_1,\dots, x_m)=\frac{2}{m(m-1)}\sum_{1\le i <j \le m} \langle x_i, x_j \rangle.$$ In \cite{KaSa} the following {\it $p$-Santal\'o conjecture} was proposed:
\begin{conj}\label{conKS} Let $2\le p\le m$ be two integers. If $K_1,\dots,K_m$ are symmetric convex bodies in $\mathbb R^n$, such that
$$
{\mathcal S}_p(x_1,\dots, x_m) \le 1, \mbox{ for all } x_i\in K_i,
$$
then $\prod_{i=1}^m \vol(K_i) \le \vol(B_p^n)^m,$ where $B_p^n$ is the unit ball of the $\ell^n_p$-norm.
\end{conj}
Kalantzopoulos and Saroglou \cite{KaSa} confirmed Conjecture \ref{conKS} when $p=m$, and for unconditional convex bodies for all $p=2,\dots, m$. Moreover, when $p=2$, it is enough to assume that only $K_3,\dots, K_m$ are unconditional. In all of these known cases, the conjectured inequality is sharp, with equality for $K_1=\dots=K_m=B_p^n$. A functional analog of Conjecture \ref{conKS} was also proposed in \cite{KaSa}.
\section{Links to other inequalities}\label{linsec}
In this section we present just a sample of connections of the volume product to other inequalities in convex geometry. As before, we refer to the books \cite{AGM1, AGM2, Ga, Ga2, Gru, Kol, Pi, Sc, Tom} and especially to the amazing diagram of connections of different open problems in convex geometry constructed by Richard Gardner in \cite[Figure 1]{Ga2}.
\subsection{Slicing Conjecture} Klartag \cite{Kl2} found a connection between a sharp version of Bourgain's slicing conjecture and Mahler's conjecture for general convex bodies (Conjecture~\ref{mahler}). The covariance matrix of a convex body $K$ in $\mathbb R^n$ is the $n\times n$ matrix ${\rm Cov}(K)$ defined by
$$
{\rm Cov}(K)_{i,j}=\frac{\int_K x_ix_j\,dx}{\vol(K)}- \frac{\int_K x_i\,dx}{\vol(K)} \frac{\int_K x_j\,dx}{\vol(K)}.
$$
The isotropic constant $L_K$ is defined by $L_K^{2n}=\det({\rm Cov}(K))\,\vol(K)^{-2}.$
It is well-known that $L_K$ is bounded from below by an absolute positive constant, the minimum being attained for ellipsoids. Bourgain's slicing problem asks whether for some universal constant $C>0$, one has $L_K \le C $ for every convex body $K$. The name {\it slicing conjecture} comes from the following very interesting equivalent reformulation: is it true that, for some universal $c>0$, every convex body of volume one in $\mathbb R^n$ has a hyperplane section with $(n-1)$-dimensional volume greater than $c$? (see \cite{MiP} for other equivalent statements). The boundedness of $L_K$ by an absolute constant is still an open question. Bourgain \cite{Bou} proved that $L_K\le Cn^{1/4}$ up to a logarithmic factor, which was later removed by Klartag \cite{Kl3}.
Chen \cite{Ch} proved that $L_K\le C_\varepsilon n^{\varepsilon}$ for every $\varepsilon>0$. Then, Klartag and Lehec \cite{KlL} established
a polylogarithmic bound $L_K\le C\log^5n$, which was then further improved to $L_K\le C\log^{2.2}n$ by Jambulapati, Lee and Vempala \cite{JLV}. A {\it strong version of the slicing conjecture} asks the following:
\begin{conj}\label{strongBourgain}For any convex body $K$ in $\mathbb R^n$ one has
\begin{equation}\label{bourgainstron}
L_K \le L_{\Delta_n}=\frac{(n!)^{1/n}}{(n+1)^{\frac{n+1}{2n}}\sqrt{n+2}}.
\end{equation}
\end{conj}
Klartag \cite{Kl2} proved that if $K$ is a local minimizer of the volume product among all convex bodies in $\mathbb R^n$, endowed with the Hausdorff distance, then
$${\rm Cov}(K^*) \ge (n+2)^{-2}{\rm Cov}(K)^{-1}.$$
Taking the determinant and raising to the power $1/n$, one gets
\begin{equation}\label{Klartagmin}
\frac{1}{n+2} \le L_K L_{K^*} \mathcal P(K)^{1/n}.
\end{equation}
Combining (\ref{Klartagmin}) and (\ref{bourgainstron}), one gets
$$
\frac{1}{n+2} \le L_K L_{K^*} \mathcal P(K)^{1/n} \le \frac{(n!)^{2/n}}{(n+1)^{\frac{n+1}{n}}(n+2)}\mathcal P(K)^{1/n}.
$$
Hence $\mathcal P(K)\ge\frac{(n+1)^{n+1}}{(n!)^2}=\mathcal P(\Delta_n)$ for any local minimizer $K$, which proves the following theorem:
\begin{thm} (Klartag)
The strong version of Bourgain’s slicing conjecture given in Conjecture \ref{strongBourgain}
implies Conjecture \ref{mahler} (Mahler’s conjecture) for general convex bodies.
\end{thm}
In connection with his proof of the Bourgain-Milman inequality, Kuperberg asked in \cite{Ku2} whether the quantity
$$\frac{1}{\vol(K)\vol(K^*)} \int_K\int_{K^*}\langle x,y\rangle^2dxdy
$$
is maximized by ellipsoids in the class of symmetric convex bodies $K\subset\mathbb R^n$. Alonso-Guti\'errez \cite{AG} proved that this conjecture implies both the Blaschke-Santal\'o inequality and the hyperplane conjecture, and that it holds true for $B_p^n$, the unit ball of $\ell_p^n$, for $p\ge 1$. The connection to the hyperplane conjecture was also studied in \cite{Gi}. Kuperberg himself did not have much hope for his conjecture, and
Klartag \cite{Kl2} showed that it is indeed false in high dimension, even in the case of unconditional bodies.
\subsection{Symplectic geometry and Viterbo's conjecture} Artstein-Avidan, Karasev and Ostrover in \cite{AKO} discovered an amazing connection between the volume product and symplectic geometry. Let $(X,\omega)$ be a symplectic manifold: $X$ is a smooth manifold with a closed non-degenerate two-form $
\omega$. For instance, $(\mathbb R^{2n}, \omega_{st})$, where $\mathbb R^{2n}=\mathbb R^n_p\times \mathbb R^n_q$ and $\omega_{st}=\sum d p_i \wedge d q_i$. A core fact in symplectic geometry states that symplectic manifolds have no local invariants (except the dimension), which makes their structure very different from that of Riemannian manifolds. The first example of a global symplectic invariant was introduced by Gromov \cite{Gro} through his celebrated ``non-squeezing theorem''. Gromov's work inspired the introduction of global symplectic invariants, called symplectic capacities, which may be seen as a way to measure the symplectic size of sets in $\mathbb R^{2n}$. More precisely, a {\it symplectic capacity} $c$ on $(\mathbb R^{2n}, \omega_{st})$ is a
mapping
$c: {\mathcal S}(\mathbb R^{2n}) \to \mathbb R_+$,
where ${\mathcal S}(\mathbb R^{2n})$
is the set of all subsets of $\mathbb R^{2n}$,
which satisfies the following conditions:
\begin{itemize}
\item Monotonicity: $c(U)\le c(V),$ for all $U\subset V.$
\item Conformality: $c(\phi(U))=|\alpha|c(U)$, for every diffeomorphism $\phi$ such that $\phi^* \omega_{st}=\alpha \omega_{st}.$
\item Normalization: $c(B^{2n}_2)= c(B^2_2 \times \mathbb R^{2(n-1)})= \pi.$
\end{itemize}
The following is the conjecture of Viterbo \cite{Vi} for symplectic capacities of convex bodies.
\begin{conj}\label{vitc} For any symplectic capacity $c$ and any convex body $\Sigma$ in $\mathbb R^{2n}$, one has
$$
\frac{c(\Sigma)}{c(B_2^{2n})} \le \left(\frac{\vol_{2n}(\Sigma)}{\vol_{2n}(B_2^{2n})}\right)^\frac{1}{n}.
$$
\end{conj}
Conjecture \ref{vitc} is of isoperimetric type: indeed, it claims that among all convex bodies in $\mathbb R^{2n}$ of a given fixed volume, the Euclidean ball has the maximal symplectic capacity. It is open even for $n=2$, but it holds for certain classes of convex bodies, including ellipsoids \cite{H}, and it holds in general up to a universal multiplicative constant \cite{AMO}.
The following was proved in \cite{AKO}.
\begin{thm}
Conjecture \ref{vitc} implies Conjecture 2.
\end{thm}
More precisely, it was proved in \cite{AKO} that for any symmetric convex body $K\subset \mathbb R^n$, $c_{HZ}(K\times K^*)=4$, where $c_{HZ}$ denotes the Hofer-Zehnder capacity, one of the important symplectic capacities. This fact, together with Conjecture \ref{vitc} and the normalization property of $c_{HZ}$, immediately gives an affirmative answer to Conjecture 2:
$$
\frac{4^n}{\pi^n} = \left(\frac{c_{HZ}(K \times K^*)}{c_{HZ}(B_2^{2n})}\right)^n \le \frac{\vol_{2n}(K \times K^*)}{\vol_{2n}(B_2^{2n})} = \frac{n!\vol_{2n}(K \times K^*)}{\pi^n}.
$$
We refer to \cite{AKO} and \cite{O} for more details on these connections. The connections of Conjecture 2 with symplectic geometry were further developed in \cite{ABKaS, BeKa, Ka, KaS}. In \cite{Ru}, Viterbo's conjecture was connected with
Minkowski versions of worm problems, inspired by the well-known Moser worm
problem from geometry. For the special case of Lagrangian products, this relation provides further links to systolic Minkowski billiard inequalities and Mahler's conjecture.
\subsection{Funk geometry}\label{Funk} A very interesting connection of the volume product with Funk geometry was recently discovered by Faifman \cite{Fa}. We refer to \cite{PT} for a detailed introduction to Finsler manifolds and Funk geometry, and only recall a few of the most basic ideas here.
A non-reversible Finsler manifold $(M,F)$ is a smooth manifold $M$ equipped with a smooth function $F$ on the tangent bundle of $M$ which, when restricted to any tangent space, is the gauge of some convex body. The crucial difference with Riemannian geometry is the lack of an inner product. The tangent unit ball at a point $x\in M$ is denoted by $B_xM$ and consists of all vectors $v$ in the tangent space $T_xM$ such that $F(x,v)\le 1.$
For a convex body $K$ in a fixed affine space, the Funk metric on the interior of $K$ is given by $B_xK =K$, i.e., at any point $x$ in the interior of $K$, the body $K$ with origin at $x$ is the unit ball. The corresponding distance can be defined in the following way: consider $x, y\in\inte(K)$ and let $R(x,y)$ be the ray starting at $x$ and passing through $y$. Let $a(x,y)=R(x,y)\cap \partial K$; then the Funk metric, defined for $x\not=y\in \inte(K)$, is
$$
d_K^F(x,y)=\log\frac{|x-a(x,y)|}{|y-a(x,y)|},
$$
and $d_K^F(x,x)=0.$ The Funk metric is projective, i.e. straight segments are geodesics. The outward ball of radius $r>0$ and center $z \in \inte(K)$ is
$$
B^F_K(z,r)=\{x\in \inte(K): d_K^F(z,x) \le r\}=(1-e^{-r})(K-z)+z.
$$
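This closed formula for the outward ball can be checked directly from the definition: writing $x=z+t(a-z)$ with $a=a(z,x)\in\partial K$ and $t\in[0,1)$, one gets
\[
d_K^F(z,x)=\log\frac{|z-a|}{|x-a|}=\log\frac{1}{1-t},
\]
so that $d_K^F(z,x)\le r$ if and only if $t\le 1-e^{-r}$, that is, $x\in z+(1-e^{-r})(K-z)$.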
The Holmes-Thompson volume of $A \subset \inte(K)$ is defined as
$$
\vol_K^F(A)=\frac{1}{v_n}\int_A \vol(K^x) dx.
$$
Asymptotically as $r\to 0$, the volume of $B^F_K(z,r)$ behaves as $v_n^{-1}\vol_{2n}(K\times K^z) r^n$. It was also shown in \cite{BBV} that for a strictly convex and smooth body $K$, when $r\to+\infty$, the volume of $B^F_K(z,r)$ behaves as $c_ne^{\frac{n-1}{2}r}\mathcal{A}(K,z)$, where $c_n>0$ depends only on $n$ and
$\mathcal{A}(K,z)$ is the centro-affine
surface area of $K$ defined by:
$$ \mathcal{A}(K,z)= \int_{\partial K} \frac{\kappa^{1/2}_K(x)}{\langle x-z, n_K(x)\rangle^{(n-1)/2}} dx,
$$
where $\kappa_K(x)$ is the Gauss curvature of $\partial K$ at the point $x$ and $n_K(x)$ is the outer normal vector; note that $\mathcal{A}(K,0)=\mathcal{A}(K)$.
The following duality relation for $\vol_K^F$, for centrally symmetric $K$, is proved in \cite{Fa}:
$$
\vol_K^F(B^F_K(0,r))=\vol_{K^*}^F(B^F_{K^*}(0,r)).
$$
The existence of an analog of the Santal\'o point $s(K)$ of a convex body $K$ in the Funk geometry was proved in \cite{FaVW}:
For any $r>0$, there is a unique point $s_r(K)\in\inte(K)$ that minimizes the Funk volume of $B_K^F(q,r)$. One has $s_r(K)=0$ for symmetric $K$ and $s_r(K)\to s(K)$ as $r\to 0$. Let
$$
M_r(K)=v_n\vol_K^F(B_K^F(s_r(K),r)).
$$
The following conjecture was proposed in \cite{Fa}:
\begin{conj}\label{conF} For all $r>0$,
$M_r(K)$ is maximal when $K$ is an ellipsoid.
\end{conj}
The limiting cases of Conjecture
\ref{conF} are the Blaschke-Santal\'o inequality as $r\to 0$ and the centro-affine isoperimetric inequality as $r\to \infty$. Faifman \cite{Fa} was able to show that Conjecture
\ref{conF} holds for unconditional bodies $K$. The proof relies on a generalization of the conjecture of K. Ball (see inequality (\ref{conjB})), namely
\begin{equation}
\int_K\int_{K^*}\langle x,y\rangle^{2j}dxdy \le \int_{B_2^n}\int_{B_2^n}\langle x,y\rangle^{2j}dxdy,
\end{equation}
for all $j\in {\mathbb N}$, which Faifman was able to confirm for $K$ unconditional.
A lower bound for the quantity $M_r(K)$ was proposed in \cite{FaVW}:
\begin{conj}\label{conFaVW} For $r>0$,
$M_r(K)$
is minimized by simplices in general and by Hanner polytopes for symmetric bodies $K$.
\end{conj}
The limiting case of Conjecture \ref{conFaVW} for symmetric $K$ as $r\to 0$ is Conjecture 2, while the limit $r\to +\infty$ gives a conjecture of Kalai \cite{K} on the minimization of the flag number of $K$. Conjecture \ref{conFaVW} is proved in \cite{FaVW} for unconditional bodies; it follows from an interesting new inequality discovered in \cite{FaVW} and proved there for unconditional bodies:
\begin{equation}\label{eq:lowerH}
\int_H\int_{H^*}\langle x,y\rangle^{2j}dxdy \le \int_K\int_{K^*}\langle x,y\rangle^{2j}dxdy,
\end{equation}
where $H$ is a Hanner polytope in $\mathbb R^n$ and $j\in {\mathbb N}.$ The proof of (\ref{eq:lowerH}) in \cite{FaVW} is based on the functional inverse Santal\'o inequality \cite{FM2}.
\subsection{Geometry of numbers and isosystolic inequalities} The volume product is a standard tool in the geometry of numbers. The connection goes back to the theorem of Mahler \cite{Ma2} (see \cite{BM}, \cite{Gru}, Chapter 3 or \cite{Ev}) on the bound of the successive minima of a convex body and its dual.
Let us here present yet another connection of volume product with the geometry of numbers and the systolic geometry discovered by \'Alvarez Paiva, Balacheff and Tzanev \cite{APBT}.
Minkowski's first theorem in the geometry of numbers states that if $K$ is a symmetric convex body in $\mathbb R^n$ with $\vol(K) \ge 2^n$, then $K$ contains at least one non-zero integer point (in $\mathbb Z^n$). The symmetry assumption is needed, as there are non-symmetric convex bodies $K$ of large volume containing the origin and no other integer point. Such bodies must be ``flat'' \cite{KL}, and \'Alvarez Paiva and Balacheff \cite{APB} conjectured that the volume of their polars $K^*$ cannot be too small:
\begin{conj}\label{APconj} Let $K\subset \mathbb R^n$ be a convex body such that $\inte(K)\cap\mathbb Z^n=\{0\}$. Then $\vol(K^*) \ge (n + 1)/n!$, with equality if and only if
$K$ is a simplex with vertices in $\mathbb Z^n$ and no other integer points than its vertices and $0$.\end{conj}
In \cite{APBT}, Conjecture \ref{APconj} was proved in $\mathbb R^2$, and an isomorphic bound for $\vol(K^*)$ was given in all dimensions: for some absolute constant $c>0$, one has $\vol(K^*) \ge c^n(n + 1)/n!$ for any convex body $K$ in $\mathbb R^n$ such that $\inte(K)$ contains no integer point other than the origin. The proof of this fact in \cite{APBT} uses the Bourgain-Milman inequality, and it is shown that this isomorphic version of Conjecture \ref{APconj} is actually equivalent to it.
Conjecture \ref{APconj} can be further generalized to a conjecture in systolic geometry. We refer to \cite{APBT} for exact statements and definitions. We mention here a version of the conjecture in the language of Finsler geometry (see Section \ref{Funk}). The Holmes-Thompson volume of a Finsler manifold $(M,F)$ is defined as
$$
\vol_{HT}(M,F)=\frac{1}{v_n}\int_M \vol((B_xM)^*) dx.
$$
\begin{conj}\label{syst} For any Finsler metric $F$ on $\mathbb{RP}^n$, there exists a closed non-contractible geodesic with length bounded by $\frac{(n!v_n)^{1/n}}{2}\vol_{HT}(\mathbb{RP}^n, F)^{1/n}$.
\end{conj}
Recall that a set is said to be contractible if it can be continuously deformed to one of its points. For $n=2$, Conjecture \ref{syst} follows from the works of Ivanov \cite{Iv1, Iv2}. The following theorem was proved in \cite{APBT}:
\begin{thm}\label{systM}
Conjecture \ref{syst} implies Conjecture 2 for centrally symmetric bodies.
\end{thm}
The proof of Theorem \ref{systM} uses the Finsler metric on a convex symmetric body $K$ which coincides at each point with the norm corresponding to $K$. By identifying the points $x$ and $-x$ in $\partial K$, we obtain a length space (a space in which the intrinsic metric coincides with the original metric) on $\mathbb{RP}^n$. We denote this Finsler space by $(\mathbb{RP}^n,d_K)$. It turns out that one has
$$
\vol_{HT}(\mathbb{RP}^n,d_K)=\frac{1}{v_n}\mathcal P(K)
$$
and that the length of the systoles (the shortest non-contractible geodesics) in $(\mathbb{RP}^n,d_K)$ is equal to $2$. Combining these computations, one sees that Conjecture \ref{syst} implies Conjecture 2 for symmetric convex bodies in $\mathbb R^n$.
\section{INTRODUCTION}\label{intro}
In recent years there have been tremendous advances in the area of wireless networking, sensing, computing, and control.
These advances are revolutionizing the role and importance of wireless control networks in various areas, such as Cyber-Physical Systems \cite{2012:Sztipanovits} and Internet of Things (IoT) applications \cite{2016:Bello_Zeadally}.
Wireless control networks consist of various sensor nodes which sample and transmit data over a wireless channel to controllers, which are in charge of deciding the necessary actions based on the received data.
Wireless networks play an important role in the rapid expansion of the areas of embedded computing, advanced control, cloud computing, and emerging applications in autonomous vehicles \cite{2015:Alam_Besselink_Martensson_Johansson, 2016:Demir_Ergen}, coordination of UAVs \cite{2016:Nowzari_Pappas}, and sensor networks \cite{2013:Aziz_Ivanovich}.
A recent survey which analyzes the importance of wireless networks in emerging control systems can be found in \cite{2018:Park_Ergen_Fischione_Lu_Johansson}.
By their nature, wireless networks do not have a fixed infrastructure and do not use centralized methods for organization.
This flexibility enables their use even when a fixed infrastructure is unavailable and makes them attractive for numerous applications (ranging from military, civil, industrial or environmental monitoring in hostile environments).
The absence of cables for data communication motivates the removal of the power supply from the nodes in order to achieve even more flexibility.
Therefore, nodes need to rely on (i) battery storage, and/or (ii) energy harvesting techniques for their operation.
Prolonging the lifetime of nodes and enhancing network flexibility through efficient battery management and/or energy harvesting techniques has received a lot of attention in recent years in the area of networked control systems \cite{2011:Park_Fischione_Bonivento_Johansson_Vincentelli, 2010:Ploennigs_Vasyutynskyy_Kabitzsch, 2019:Knorn_Dey_Ahlen_Quevedo, 2020:Ma_Lan_Hassan_Hu_Das, 2013:Aziz_Ivanovich}.
Battery-driven operation is of particular importance in cases where wireless networks are deployed in remote and inaccessible places (with no fixed infrastructure).
However, modern wireless networks may consist of thousands of nodes and replacing/recharging batteries may be a costly, time-consuming or even infeasible task.
Therefore, researchers have proposed several battery-driven energy conservation strategies to ensure energy efficient network operation.
In \cite{2010:Quevedo_Ahlen_Ostergaard} the authors present a power and coding control algorithm for state estimation with wireless sensors.
An online scheduling scheme was developed in \cite{2014:Han_Cheng_Chen_Shi} under communication energy constraints for remote state estimation.
In \cite{2014:Gatsis_Ribeiro_Pappas} the authors study the control scheme of a linear plant when state information is being transmitted from a sensor to the controller over a wireless fading channel.
Additionally, the graph routing problem optimized for maximizing network lifetime is analyzed in \cite{2016:Wu_Gunatilaka_Saifullah_Sha_Tiwari_Lu_Chen}.
Finally, self-triggered control determines its next execution time according to the triggering rule and previously received data \cite{2009:Wang_Lemmon}, allowing for a better allocation of network resources.
Note, however, that battery-driven operation is subject to limited energy supply constraints.
Energy constraints are widely regarded as a fundamental limitation of wireless devices.
For this reason, many researchers aimed to develop alternative energy provision mechanisms by utilizing ambient energy.
A sensor network can achieve near-perpetual operation by utilizing energy harvesting strategies to collect ambient energy \cite{2020:Ma_Lan_Hassan_Hu_Das}.
However, limitations on energy harvesting opportunities have led researchers to also work on the direction of efficient energy management in wireless sensor networks with energy harvesting capabilities.
Specifically, power control policies for maximizing throughput or minimizing mean delay or transmission completion time were presented in \cite{2010:Sharma_Mukherji_Joseph_Gupta, 2012:Tutuncuoglu_Yener}, whereas power control algorithms for maximizing the mutual information of a wireless link were discussed in \cite{2012:Ho_Zhang}.
Jointly controlling data queue and battery buffer in order to maximize the long-term average sensing rate was presented in \cite{2012:Mao_Koksal_Shroff}, while in \cite{2013:Nayyar_Basar_Teneketzis_Veeravalli} a sensor with energy harvesting capabilities that sends its measurements toward a remote estimator is considered.
The design of optimal sensor transmission power control schemes was presented in \cite{2017:Li_Zhang_Quevedo_Lau_Dey_Shi}, and in \cite{2017:Du_Yang_Shen_Kwak} the distortion minimization problem by optimizing sleep-wake scheduling and transmit power was considered.
In \cite{2019:Knorn_Dey_Ahlen_Quevedo}, the authors presented optimal transmission scheduling schemes in the presence of energy-leaking sensor batteries.
The authors of \cite{2019:Bhat_Motani_Lim} improved throughput performance by considering circuit power and non-ideal batteries.
In \cite{2020:Vasconcelos_Gagrani_Nayyar_Mitra} an energy-harvesting scheduler is considered which makes independent observations and in \cite{2020:Nobar_Mansourkiaie_Ahmed} an optimization framework to minimize the summation of packet dropping probability is presented.
In \cite{2021:Seifullaev_Knorn_Ahlen} the authors construct and analyze a simplified sensor battery model that is used to predict sensor battery dynamics.
However, in practice, the amount of harvested energy may be random and sometimes insufficient.
Thus, energy efficient algorithms for wireless sensor networks still require further development.
In today's emerging technologies, the limited battery or harvested energy associated with wireless networks is a major bottleneck.
This bottleneck may trammel the two main advantages of battery powered or energy harvesting wireless networks, which are (i) autonomy of the operating nodes, and (ii) network flexibility.
Efficient usage of the limited energy resources is critically important to support these advantages and prolong the lifetime of the nodes.
For this reason, there is an increasing demand for large scale network coordination algorithms which operate over battery powered networks or networks with energy harvesting capabilities, and whose efficient operation (i) reduces energy consumption, and (ii) achieves a lifetime of several years using nodes that carry merely hundreds of joules of stored/harvested energy.
In this paper, we focus on distributed control and coordination over wireless network nodes that are battery powered or utilize energy harvesting techniques.
We focus on the average consensus problem, in which a group of nodes reaches agreement on a common value that is equal to the average of the initial states of the nodes \cite{2018:BOOK_Hadj, 2013:Themis_Hadj_Johansson}.
Calculating the average of their initial states allows nodes to coordinate their actions via a common decision and is useful in many applications such as load balancing, voting schemes, quantized privacy protocols (can be used as the basis for various encoding schemes), and distributed optimization.
However, in practical scenarios there exist constraints on the bandwidth of communication links and the capacity of physical memories.
This means that network links can only allow messages of limited length (i.e., \textit{quantized}) to be stored and transmitted between nodes \cite{2016:Chamie_Basar, 2011:Cai_Ishii, 2020:Rikos_Quant_Cons, 2021:Rikos_Hadj_Splitting}.
Furthermore, the desire to achieve more efficient usage of network resources has increased the interest in event-triggered algorithms for distributed coordination and, more generally, distributed control \cite{2013:Dimarogonas_Johansson, 2016:nowzari_cortes, 2018:Manitara_Hadj}.
In most existing applications of wireless networks which consider a limited source of energy, their operation is not designed to guarantee efficient communication and energy preservation.
Specifically, (i) nodes may operate in a way that is not ``event-based'' (i.e., nodes transmit their state at each iteration, which leads to increased energy consumption), or (ii) nodes do not have a distributed strategy to determine whether convergence has been achieved (thus they continue transmitting even after convergence, which leads to increased energy and bandwidth consumption), or (iii) nodes transmit real-valued states (thus not guaranteeing efficient communication).
This paper aims to fill this gap by proposing a distributed coordination algorithm that fulfills all above requirements.
Specifically, we focus on three main strategies for slowing down the depletion of energy in wireless networks: (i) efficient communication, (ii) event-triggered operation, and (iii) transmission stopping.
To the authors' knowledge, only \cite{2021:Rikos_TaskScheduling} presents an algorithm which allows nodes in a data center to coordinate and perform task allocation by exchanging quantized messages and eventually stopping their operation according to a distributed mechanism.
However, the distributed stopping mechanism in \cite{2021:Rikos_TaskScheduling} requires knowledge of the digraph diameter which is a global parameter.
Thus, the design of distributed coordination algorithms which (i) operate in an event-based fashion, (ii) consider efficient (quantized) communication, (iii) converge to the \textit{exact} quantized average of the initial states without any quantization error, and (iv) utilize a distributed stopping mechanism for ceasing transmissions without knowledge of global network parameters, is still an open question.
\subsection{Main Contributions}\label{Contributions}
In this paper, we present a novel distributed average consensus algorithm for battery powered or energy harvesting wireless networks, that combines the desirable features mentioned above.
More specifically, average consensus is reached in finite time; the processing, storing, and exchange of information between neighboring nodes are subject to uniform quantization; and the control actuation at each node is ``event-driven''.
The main contributions of our paper are the following.
\begin{itemize}
\item We present a novel distributed algorithm that is able to calculate the \textit{exact} average of the initial values in the form of a quantized fraction (i.e., as the ratio of two integer values) introducing no quantization error\footnote{Note that most algorithms in the available literature (see \cite{2007:Basar, 2009:Nedic, 2011:Cai_Ishii}) are able to converge to the ceiling or the floor of the initial average thus introducing a quantization error.}.
\item We show that our algorithm converges to the desired result after a finite number of iterations, and we provide a polynomial upper bound on the number of time steps needed for convergence\footnote{The operation of our algorithm in this paper is analyzed over static directed graphs.
However, it can be extended also to dynamic networks which is a more suitable scenario for controlling UAV swarms and autonomous vehicles.}.
\item We show that our algorithm utilizes its distributed stopping capability and transmissions are ceased for every node once the exact quantized average of the initial states is calculated.
\item We calculate an upper bound on the number of transmissions and computations each node performs during the operation of our algorithm.
\item We analyze the consumption of available resources by calculating an upper bound on the required memory and the required energy for each node during the operation of our algorithm.
\item We demonstrate the operation of our algorithm via examples while we analyze its potential advantages and its transmission stopping capabilities.
\end{itemize}
The operation of the proposed event-triggered algorithm essentially involves directed transmissions and broadcast transmissions from every node according to multiple event-triggered conditions.
Specifically, every node broadcasts to every out-neighbor the value of its initial quantized state.
As these broadcasts propagate through the network, every node eventually learns the maximum initial state.
Then, the nodes which have an initial state less than the maximum value transmit their quantized initial state directly to one neighboring node.
The nodes that receive multiple directed messages from neighboring nodes sum the values, update their state and broadcast the updated state according to a set of event triggered conditions.
The operation of the algorithm ensures that every initial quantized state is summed in a single node in the network.
Then this node broadcasts its updated state (which is equal to the exact average of the initial quantized states) and every node in the network learns and updates its own state to be equal to the exact average.
Once every node learns the state that is equal to the exact average, convergence has been achieved and transmissions cease.
Following \cite{2007:Basar, 2011:Cai_Ishii}, we assume that the nodes' states are integer-valued (which comprises a uniform class of quantization effects).
Note that most work dealing with quantization has concentrated on the scenario where the nodes can store and process real-valued states but can transmit only quantized values through limited rate channels (see, \cite{2016:Chamie_Basar}).
However, by contrast, our assumption is also suited to the case where the states are stored in digital memories of finite capacity (as in \cite{2009:Nedic, 2007:Basar, 2011:Cai_Ishii}), as long as the initial values are also quantized.
\subsection{Paper Organization}\label{Organization}
The remainder of this paper is organized as follows.
In Section~\ref{preliminaries}, we introduce the notation used throughout the paper.
In Section~\ref{probForm} we formulate the finite transmission quantized average consensus problem.
In Section~\ref{MaxAlgorithm}, we present a deterministic event-triggered distributed algorithm, which (i) allows the nodes to reach consensus to the \textit{exact} quantized average of the initial values after a finite number of steps, and (ii) allows them to cease transmissions once quantized average consensus is reached.
Furthermore, we analyze the algorithm's operation via an example, and we calculate a worst case upper bound on the number of time steps required for convergence.
In Section~\ref{bound_trans_comp}, we present a deterministic upper bound on the number of transmissions and computations each node performs during the operation of the algorithm.
In Section~\ref{energy_constr}, we analyze the consumption of resources by calculating an upper bound on the required memory and the required energy of each node for the execution of the proposed algorithm.
In Section~\ref{results}, we present simulation results and comparisons.
We conclude in Section~\ref{future} with a brief summary and remarks about future work.
\section{NOTATION AND BACKGROUND}\label{preliminaries}
The sets of real, rational, integer and natural numbers are denoted by $ \mathbb{R}, \mathbb{Q}, \mathbb{Z}$ and $\mathbb{N}$, respectively.
The symbol $\mathbb{Z}_+$ denotes the set of nonnegative integers.
\subsection{Graph-Theoretic Notions}
Consider a network of $n$ ($n \geq 2$) nodes communicating only with their immediate neighbors.
The communication topology can be captured by a directed graph (digraph), called \textit{communication digraph}.
A digraph is defined as $\mathcal{G}_d = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{v_1, v_2, \dots, v_n\}$ is the set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V} - \{ (v_j, v_j) \ | \ v_j \in \mathcal{V} \}$ is the set of edges (self-edges excluded).
A directed edge from node $v_i$ to node $v_j$ is denoted by $m_{ji} \triangleq (v_j, v_i) \in \mathcal{E}$, and captures the fact that node $v_j$ can receive information from node $v_i$ (but not the other way around).
We assume that the given digraph $\mathcal{G}_d = (\mathcal{V}, \mathcal{E})$ is \textit{strongly connected} (i.e., for each pair of nodes $v_j, v_i \in \mathcal{V}$, $v_j \neq v_i$, there exists a directed \textit{path}\footnote{A directed \textit{path} from $v_i$ to $v_j$ exists if we can find a sequence of vertices $v_i \equiv v_{l_0},v_{l_1}, \dots, v_{l_t} \equiv v_j$ such that $(v_{l_{\tau+1}},v_{l_{\tau}}) \in \mathcal{E}$ for $ \tau = 0, 1, \dots , t-1$.} from $v_i$ to $v_j$), which is the necessary (and sufficient) requirement for average consensus to be possible.
The subset of nodes that can directly transmit information to node $v_j$ is called the set of in-neighbors of $v_j$ and is represented by $\mathcal{N}_j^- = \{ v_i \in \mathcal{V} \; | \; (v_j,v_i)\in \mathcal{E}\}$, while the subset of nodes that can directly receive information from node $v_j$ is called the set of out-neighbors of $v_j$ and is represented by $\mathcal{N}_j^+ = \{ v_l \in \mathcal{V} \; | \; (v_l,v_j)\in \mathcal{E}\}$.
The cardinality of $\mathcal{N}_j^-$ is called the \textit{in-degree} of $v_j$ and is denoted by $\mathcal{D}_j^- = | \mathcal{N}_j^- |$, while the cardinality of $\mathcal{N}_j^+$ is called the \textit{out-degree} of $v_j$ and is denoted by $\mathcal{D}_j^+ = | \mathcal{N}_j^+ |$.
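For concreteness, the neighbor sets under the edge convention above (an edge $(v_j, v_i)$ means that $v_i$ can transmit to $v_j$) can be computed as in the following sketch; the code and names are illustrative and not part of the paper's formalism.

```python
def neighbors(nodes, edges):
    """In-/out-neighbor sets; an edge (vj, vi) means vi transmits to vj."""
    N_in = {v: set() for v in nodes}    # N_j^-: nodes v_j can receive from
    N_out = {v: set() for v in nodes}   # N_j^+: nodes v_j can transmit to
    for (vj, vi) in edges:
        N_in[vj].add(vi)
        N_out[vi].add(vj)
    return N_in, N_out

# The digraph used in the example later in the paper:
# E = {m_31, m_41, m_12, m_13, m_43, m_24}
nodes = [1, 2, 3, 4]
edges = [(3, 1), (4, 1), (1, 2), (1, 3), (4, 3), (2, 4)]
N_in, N_out = neighbors(nodes, edges)
# In-degree D_j^- = len(N_in[j]); out-degree D_j^+ = len(N_out[j]).
```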
\subsection{Node Operation}
With respect to quantization of information flow, we have that at time step $k \in \mathbb{Z}_+$, each node $v_j \in \mathcal{V}$ maintains the transmission variables $S\_br_j \in \mathbb{N}$ and $M\_tr_j \in \mathbb{N}$, the state variables $y^s_j[k] \in \mathbb{Z}$, $z^s_j[k] \in \mathbb{Z}_+$ and $q_j^s[k] = \frac{y_j^s[k]}{z_j^s[k]}$, and the mass variables $y_j[k] \in \mathbb{Z}$ and $z_j[k] \in \mathbb{Z}_+$.
Note here that for every node $v_j$ the transmission variables $S\_br_j$, $M\_tr_j$ are used to decide whether it will broadcast its state variables or transmit its mass variables, the state variables $y^s_j[k], z^s_j[k], q_j^s[k]$ are used to store the received messages and calculate the quantized average of the initial values, and the mass variables $y_j[k], z_j[k]$ are used to communicate with other nodes by either transmitting or receiving messages.
Furthermore, we assume that each node is aware of its out-neighbors and can directly transmit messages to each out-neighbor; however, it cannot necessarily receive messages (at least not directly) from them.
In the proposed distributed protocol, each node $v_j$ assigns a \textit{unique order} in the set $\{0,1,..., \mathcal{D}_j^+ -1\}$ to each of its outgoing edges $m_{lj}$, where $v_l \in \mathcal{N}^+_j$.
More specifically, the order of link $(v_l,v_j)$ for node $v_j$ is denoted by $P_{lj}$ (such that $\{P_{lj} \; | \; v_l \in \mathcal{N}^+_j\} = \{0,1,..., \mathcal{D}_j^+ -1\}$).
This unique predetermined order is used during the execution of the proposed distributed algorithm as a way of allowing node $v_j$ to transmit messages to its out-neighbors in a \textit{round-robin}\footnote{When executing the proposed protocol, each node $v_j$ transmits to its out-neighbors, one at a time, by following a predetermined order. The next time it transmits to an out-neighbor, it continues from the outgoing edge it stopped the previous time and cycles through the edges in a round-robin fashion according to the predetermined ordering.} fashion.
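A minimal sketch of this round-robin rule (class and method names are ours, not the paper's): node $v_j$ stores the order $P_{lj}$ of its outgoing edges and, each time it must perform a directed transmission, picks the next out-neighbor cyclically.

```python
class RoundRobin:
    """Cyclic selection of out-neighbors following a predetermined order."""

    def __init__(self, order):
        # order: dict mapping out-neighbor -> unique rank in {0, ..., D_j^+ - 1}
        self.targets = sorted(order, key=order.get)
        self.next_idx = 0

    def next_target(self):
        # Continue from where the previous transmission stopped.
        v = self.targets[self.next_idx]
        self.next_idx = (self.next_idx + 1) % len(self.targets)
        return v

# Node v_1 in the example of Section IV: P_41 = 0, P_31 = 1.
rr = RoundRobin({'v4': 0, 'v3': 1})
# Successive unicasts then go to v4, v3, v4, v3, ...
```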
\section{PROBLEM FORMULATION}\label{probForm}
Consider a strongly connected digraph $\mathcal{G}_d = (\mathcal{V}, \mathcal{E})$, where each node $v_j \in \mathcal{V}$ has an initial (i.e., for $k=0$) quantized value $y_j[0]$ (for simplicity, we take $y_j[0] \in \mathbb{Z}$).
In this paper, we develop a distributed algorithm that allows nodes to address the problem presented below, while processing and transmitting \textit{quantized} information via available communication links.
Each node $v_j$ obtains, after a finite number of steps, a fraction $q^s_j$ which is equal to the \textit{exact} average $q$ of the initial values of the nodes (i.e., there is no quantization error), where
\begin{equation}
q = \frac{\sum_{l=1}^{n}{y_l[0]}}{n} .
\end{equation}
Specifically, we argue that there exists $k_0$ so that for every $k \geq k_0$ we have
\begin{equation}\label{alpha_z_y}
y^s_j[k] = \frac{\sum_{l=1}^{n}{y_l[0]}}{\alpha} \ \ \text{and} \ \ z^s_j[k] = \frac{n}{\alpha} ,
\end{equation}
where $\alpha \in \mathbb{N}$. This means that
\begin{equation}\label{alpha_q}
q^s_j[k] = \frac{(\sum_{l=1}^{n}{y_l[0]}) / \alpha}{n / \alpha} = q ,
\end{equation}
for every $v_j \in \mathcal{V}$ (i.e., for $k \geq k_0$ every node $v_j$ has calculated $q$ as the ratio of two integer values).
Furthermore, every node $v_j$ stops performing transmissions towards its out-neighbors $v_l \in \mathcal{N}^+_j$ once its state variables $y^s_j$, $z^s_j$, $q^s_j$ fulfill \eqref{alpha_z_y} and \eqref{alpha_q}.
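As a quick sanity check of the two displayed relations above (a sketch using Python's exact rational arithmetic; the variable names and the initial states, borrowed from the example later in the paper, are illustrative), the ratio $y^s_j[k]/z^s_j[k]$ recovers the exact average $q$ for any admissible $\alpha$:

```python
from fractions import Fraction

# Illustrative initial states; the exact average is q = 22/4 = 11/2.
y0 = [2, 4, 7, 9]
n = len(y0)
q = Fraction(sum(y0), n)              # exact average, no quantization error

for alpha in (1, 2):                  # alpha = 1: full; alpha = 2: partial
    ys = Fraction(sum(y0), alpha)     # scaled sum of initial values
    zs = Fraction(n, alpha)           # scaled node count
    assert ys / zs == q               # q_j^s = q for every such alpha
```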
\section{EVENT-TRIGGERED QUANTIZED AVERAGE CONSENSUS ALGORITHM WITH FINITE TRANSMISSION CAPABILITIES}\label{MaxAlgorithm}
In this section we present a distributed algorithm which achieves \textit{exact} quantized average consensus in a finite number of time steps.
Also, once average consensus is reached, all transmissions are ceased.
The main idea is to maintain a separate mechanism for broadcasting the state variables and the mass variables of each node (as long as they satisfy certain event trigger conditions).
This way, nodes learn the average, but also have a way to decide whether or not to transmit.
\subsection{Finite Transmission Event-Triggered Algorithm}
The details of the distributed algorithm with transmission stopping capabilities can be seen in Algorithm~\ref{algorithm_max}.
\noindent
\vspace{-0.5cm}
\begin{varalgorithm}{1}
\caption{Finite Transmission Event-Triggered Quantized Average Consensus}
\textbf{Input:} A strongly connected digraph $\mathcal{G}_d = (\mathcal{V}, \mathcal{E})$ with $n=|\mathcal{V}|$ nodes and $m=|\mathcal{E}|$ edges. Each node $v_j\in \mathcal{V}$ has an initial state $y_j[0] \in \mathbb{Z}$.
\\
\textbf{Initialization:} Each node $v_j \in \mathcal{V}$ does the following:
\begin{list4}
\item[1)] Assigns to each outgoing edge $m_{lj}$, where $v_l \in \mathcal{N}^+_j$, a \textit{unique order} $P_{lj}$ in the set $\{0,1,..., \mathcal{D}_j^+ -1\}$.
\item[2)] Sets $z_j[0] = 1$, $z^s_j[0] = 1$, $y^s_j[0] = y_j[0]$, $q^s_j[0] = y^s_j[0] / z^s_j[0]$ and $S\_br_j = 0$, $M\_tr_j = 0$.
\item[3)] Broadcasts $z^s_j[0]$, $y^s_j[0]$ to every $v_l \in \mathcal{N}_j^+$.
\end{list4}
\textbf{Iteration:} For $k=0,1,2,\dots$, each node $v_j \in \mathcal{V}$ does the following:
\begin{list4}
\item[1)] Receives $y^s_i[k]$, $z^s_i[k]$ from every $v_i \in \mathcal{N}_j^-$ (if no message is received it sets $y^s_i[k] = 0$, $z^s_i[k] = 0$).
\item[2)] Receives $y_i[k]$, $z_i[k]$ from each $v_i \in \mathcal{N}_j^-$ and sets
$$
y_j[k+1] = y_j[k] + \sum_{v_i \in \mathcal{N}_j^-} w_{ji}[k]y_i[k] ,
$$ \vspace{-.3cm}
$$
z_j[k+1] = z_j[k] + \sum_{v_i \in \mathcal{N}_j^-} w_{ji}[k]z_i[k] ,
$$
where $w_{ji}[k]=1$ if a message with $y_i[k]$, $z_i[k]$ is received from in-neighbor $v_i$, otherwise $w_{ji}[k]=0$.
\item[3)] \textbf{If} $w_{ji}[k] \neq 0$ or $z^s_i[k] \neq 0$ for some $v_i \in \mathcal{N}_j^-$ \textbf{then}
\begin{list4}
\item[3a)] Calls Algorithm~\ref{algorithm_max_1a}.
\item[3b)] \textbf{If} $M\_tr_j = 1$ \textbf{then} chooses $v_l \in \mathcal{N}_j^+$ according to $P_{lj}$ (in a round-robin fashion) and transmits $y_j[k]$, $z_j[k]$.
Then, sets $y_j[k] = 0$, $z_j[k] = 0$, $M\_tr_j = 0$.
\item[3c)] \textbf{If} $S\_br_j = 1$ \textbf{then} broadcasts $z^s_j[k+1]$, $y^s_j[k+1]$ to every $v_l \in \mathcal{N}_j^+$.
Then, sets $S\_br_j = 0$.
\end{list4}
\item[4)] Repeats (increases $k$ to $k + 1$ and goes back to Step~$1$).
\end{list4}
\textbf{Output:} \eqref{alpha_q} holds for every $v_j \in \mathcal{V}$.
\label{algorithm_max}
\end{varalgorithm}
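The mass-variable update of Iteration step~2 admits a direct sketch (function and variable names are ours; this is only the summation step, not the full protocol):

```python
def mass_update(y_j, z_j, received):
    """Iteration step 2: add the mass pairs that actually arrived.

    received: list of (y_i, z_i) pairs for which w_ji[k] = 1; absent
    messages simply do not appear in the list (w_ji[k] = 0).
    """
    for (y_i, z_i) in received:
        y_j += y_i
        z_j += z_i
    return y_j, z_j

# E.g., in the example of Section IV, v_4 holds (9, 1) and receives the
# mass pair (2, 1) from v_1, yielding (11, 2).
print(mass_update(9, 1, [(2, 1)]))  # -> (11, 2)
```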
\noindent
\vspace{-0.5cm}
\begin{varalgorithm}{1.A}
\caption{Event-Triggered Conditions for Algorithm~\ref{algorithm_max} (for each node $v_j$)}
\textbf{Input}
\\ $y^s_j[k]$, $z^s_j[k]$, $q^s_j[k]$, $y_j[k+1]$, $z_j[k+1]$, $S\_br_j$, $M\_tr_j$ and the received $y^s_i[k]$, $z^s_i[k]$ from every $v_i \in \mathcal{N}_j^-$.
\\
\textbf{Execution}
\begin{list4}
\item[1)] \underline{Event Trigger Conditions~$1$:} \textbf{If}
\\ Condition~$(i)$: $z^s_i[k] > z^s_j[k]$, or
\\ Condition~$(ii)$: $z^s_i[k] = z^s_j[k]$ and $y^s_i[k] > y^s_j[k]$,
\\ \textbf{then} sets
$$
z^s_j[k+1] = \max_{v_i \in \mathcal{N}_j^-} z^s_i[k] , \ \ \text{and}
$$
$$
y^s_j[k+1] = \max_{v_i \in \{v_{i'} \in \mathcal{N}_j^- | z^s_{i'}[k] = z^s_j[k+1]\}} y^s_i[k] ,
$$
and sets $q^s_j[k+1] = \frac{y^s_j[k+1]}{z^s_j[k+1]}$, and $S\_br_j = 1$.
\item[2)] \underline{Event Trigger Conditions~$2$:} \textbf{If}
\\ Condition~$(i)$: $z_j[k+1] > z^s_j[k+1]$, or
\\ Condition~$(ii)$: $z_j[k+1] = z^s_j[k+1]$ and $y_j[k+1] > y^s_j[k+1]$,
\\ \textbf{then} sets $z^s_j[k+1] = z_j[k+1]$, $y^s_j[k+1] = y_j[k+1]$ and sets $q^s_j[k+1] = \frac{y^s_j[k+1]}{z^s_j[k+1]}$ and $S\_br_j = 1$.
\item[3)] \underline{Event Trigger Conditions~$3$:} \textbf{If}
\\ Condition~$(i)$: $0 < z_j[k+1] < z^s_j[k+1]$ or
\\ Condition~$(ii)$: $z_j[k+1] = z^s_j[k+1]$ and $y_j[k+1] < y^s_j[k+1]$,
\\ \textbf{then} sets $M\_tr_j = 1$.
\end{list4}
\textbf{Output}
\\ $y^s_j[k]$, $z^s_j[k]$, $q^s_j[k]$, $S\_br_j$, $M\_tr_j$.
\label{algorithm_max_1a}
\end{varalgorithm}
\vspace{.1cm}
The intuition behind Algorithm~\ref{algorithm_max} is the following.
Let us first consider the notion of ``leading mass''.
During time step $k$, the pair of mass variables with the largest $z[k]$ value is referred to as the ``leading mass'' (pair).
In case there are multiple nodes with pairs of mass variables that have the largest $z[k]$, then the ``leading mass'' is the pair (or pairs) of mass variables that has (or have) the largest $y[k]$ value among the pairs of mass variables with the largest $z[k]$.
Note that a formal definition of the ``leading mass'' is presented in Subsection~\ref{sec:Conv_analysis}.
During the Initialization process, each node $v_j$ considers its set of stored mass variables to be the ``leading mass''.
For this reason, it sets its state variables to be equal to the stored mass variables, and then broadcasts the values of its state variables.
During the Iteration process, each node $v_j$ (i) receives any (possibly) transmitted set of state variables from its in-neighbors and, (ii) receives and stores any (possibly) transmitted set of mass variables from its in-neighbors.
If it received a set of state variables and/or a set of mass variables from its in-neighbors, then it executes Algorithm~\ref{algorithm_max_1a}.
During the execution of Algorithm~\ref{algorithm_max_1a}, each node checks (i) Event Trigger Conditions~$1$, (ii) Event Trigger Conditions~$2$, and (iii) Event Trigger Conditions~$3$.
In Event Trigger Conditions~$1$, it checks whether the received set of state variables is equal to the ``leading mass'' (in case it receives messages from multiple in-neighbors it checks which set of state variables is the ``leading mass'').
If Event Trigger Conditions~$1$ hold, it sets its state variables to be equal to the received set of state variables which is the ``leading mass'' and decides to broadcast its updated state variables (i.e., sets its transmission variable $S\_br_j = 1$).
In Event Trigger Conditions~$2$, it checks whether the set of mass variables it stored is the ``leading mass''.
If Event Trigger Conditions~$2$ hold, it sets its state variables to be equal to the stored set of mass variables and decides to broadcast its updated state variables (i.e., sets its transmission variable $S\_br_j = 1$).
In Event Trigger Conditions~$3$, it checks whether the set of mass variables it stored is not the ``leading mass''.
Specifically, it checks whether its state variables are equal to the ``leading mass''.
If Event Trigger Conditions~$3$ hold, this means that the mass variables of another node in the network form the ``leading mass'' (and the state variables of node $v_j$ became equal to the ``leading mass'' through Event Trigger Conditions~$1$).
This means that the stored mass variables are not the ``leading mass'' and thus $v_j$ decides to transmit its stored mass variables (i.e., sets its transmission variable $M\_tr_j = 1$).
Once Algorithm~\ref{algorithm_max_1a} is executed, $v_j$ returns to the Iteration process of Algorithm~\ref{algorithm_max} and broadcasts its state variables and/or transmits its mass variables according to its transmission variables.
Then, it repeats the procedure.
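The checks performed by Algorithm~\ref{algorithm_max_1a}, as described above, can be sketched as follows; this is a simplified single-node view with assumed names, and the bookkeeping that ties the flags to the transmission steps of Algorithm~\ref{algorithm_max} is omitted.

```python
def conditions(ys, zs, y_mass, z_mass, received_states):
    """One execution of the event-trigger checks at node v_j.

    received_states: list of (y_i^s, z_i^s) pairs received from in-neighbors.
    Returns the updated state pair and the transmission flags (S_br, M_tr).
    """
    S_br, M_tr = 0, 0
    # Conditions 1: some received state pair dominates the local one
    # (lexicographic comparison on (z^s, y^s)).
    if any(zi > zs or (zi == zs and yi > ys) for (yi, zi) in received_states):
        zs = max(zi for (yi, zi) in received_states)
        ys = max(yi for (yi, zi) in received_states if zi == zs)
        S_br = 1
    # Conditions 2: the stored mass pair dominates the state pair.
    if z_mass > zs or (z_mass == zs and y_mass > ys):
        ys, zs, S_br = y_mass, z_mass, 1
    # Conditions 3: the stored mass pair trails the "leading mass".
    if (0 < z_mass < zs) or (z_mass == zs and y_mass < ys):
        M_tr = 1
    return ys, zs, S_br, M_tr

# Node v_1 at k = 0 in the example: state (2, 1), mass (2, 1), receives the
# states (4, 1) and (7, 1); it adopts (7, 1), broadcasts (S_br = 1), and
# marks its mass for a directed transmission (M_tr = 1).
assert conditions(2, 1, 2, 1, [(4, 1), (7, 1)]) == (7, 1, 1, 1)
```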
\begin{remark}
Notice here that each node $v_j$, during time step $k$, is able to perform two types of transmissions towards its out-neighbors $v_l \in \mathcal{N}_j^+$.
It can broadcast (to all of its out-neighbors) its state variables $y^s_j[k]$ and $z^s_j[k]$ (if Event Trigger Conditions~$1$ and/or Event Trigger Conditions~$2$ hold) and it can transmit its mass variables $y_j[k]$ and $z_j[k]$ to a single out-neighbor, chosen according to the predetermined order $P_{lj}$ (if Event Trigger Conditions~$3$ hold).
This may seem like a departure from the literature on average consensus, which assumes only a broadcast primitive (see \cite{2010:christoforos, 2011:Christoforos-Themis, 2011:Franceschelli} and references therein), and the literature on quantized average consensus, which assumes only a unicast primitive (i.e., directed transmissions) \cite{2007:Basar, 2011:Cai_Ishii, 2020:Rikos_Quant_Cons, 2018:RikosHadj_CDC}, or a broadcast primitive \cite{2016:Chamie_Basar}.
However, with broadcast as sole primitive and without additional assumptions, achieving the exact average (i.e., avoiding an error introduced due to quantization) would be impossible (see \cite{2015:Hendrickx_Tsitsiklis}) while with unicast as a sole primitive exhibiting distributed stopping capabilities in order to terminate transmissions also appears difficult (e.g., see \cite{2020:Rikos_Quant_Cons} where the number of transmissions at each time step monotonically decreases but it never becomes equal to zero).
For this reason, each node $v_j$ is allowed to perform both types of transmissions (broadcast and unicast) and, as we will also see later, this achieves both exact average and transmission termination during the operation of Algorithm~\ref{algorithm_max}.
This is a direct result of Event Trigger Conditions~$1$, $2$ and $3$, which characterize the operation of our algorithm and effectively imply that no transmission is performed when none of the condition sets checked via Algorithm~\ref{algorithm_max_1a} holds.
It is also important to note that Algorithm~\ref{algorithm_max} can be applied to the standard average consensus problem, where the initial value of each node and the transmitted messages are real values.
In this case, our proposed protocol allows deterministic convergence to the {\em exact} value after a finite number of time steps.
This is an important aspect as most finite time algorithms are only able to calculate the average of the initial values within an error bound (e.g., see \cite{2016:Manitara_Hadj} and references therein) which is a direct consequence of their asymptotic convergence. $\hfill \blacksquare$
\end{remark}
During the operation of Algorithm~\ref{algorithm_max}, nodes are able to reach quantized average consensus after a finite number of steps.
Depending on the graph structure and the initial mass variables of each node, we have the following two possible scenarios:
\begin{enumerate}
\item[A.] Full Mass Summation (i.e., there exists $k'_0 \in \mathbb{Z}_+$ such that $y_j[k'_0] = \sum_{l=1}^{n}{y_l[0]} \ \ \text{and} \ \ z_j[k'_0] = n$, for some node $v_j \in \mathcal{V}$, and $y_i[k'_0] = 0 \ \ \text{and} \ \ z_i[k'_0] = 0$, for each $ v_i \in \mathcal{V} - \{ v_j \}$).
In this scenario (\ref{alpha_z_y}) and (\ref{alpha_q}) hold (eventually, for some $k_0 > k_0'$) for each node $v_j$ for the case where $\alpha = 1$.
\item[B.] Partial Mass Summation (i.e., there exists $k'_0 \in \mathbb{Z}_+$ so that for every $k \geq k'_0$ there exists a set $\mathcal{V}^p[k] \subseteq \mathcal{V}$ in which we have $y_j[k] = y_i[k]$ and $z_j[k] = z_i[k]$, $\forall v_j, v_i \in \mathcal{V}^p[k]$ and $y_l[k] = 0 \ \ \text{and} \ \ z_l[k] = 0$, for each $ v_l \in \mathcal{V} - \mathcal{V}^p[k]$).
In this scenario, (\ref{alpha_z_y}) and (\ref{alpha_q}) hold (eventually, for some $k_0 > k_0'$) for each node $v_j$ for the case where $\alpha = | \mathcal{V}^p[k] |$.
\end{enumerate}
We now illustrate the event-triggered behavior of the proposed distributed algorithm via an example where we have ``Partial Mass Summation''.
\begin{example}
Consider a strongly connected digraph $\mathcal{G}_d = (\mathcal{V}, \mathcal{E})$, shown in Fig.~\ref{max_example}, with $\mathcal{V} = \{ v_1, v_2, v_3, v_4 \}$ and $\mathcal{E} = \{ m_{31}, m_{41}, m_{12}, m_{13}, m_{43}, m_{24} \}$ where each node has an initial quantized value $y_1[0] = 2$, $y_2[0] = 4$, $y_3[0] = 7$ and $y_4[0] = 9$ respectively.
The average of the initial values is equal to $q = \frac{22}{4}$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.3\columnwidth]{Figures/max_example}
\caption{Example of digraph for partial mass summation when using Algorithm~\ref{algorithm_max}.}
\label{max_example}
\end{center}
\end{figure}
Each node $v_j \in \mathcal{V}$ follows the Initialization steps in Algorithm~\ref{algorithm_max}.
This means that it assigns to each of its outgoing edges $m_{lj}$, where $v_l \in \mathcal{N}^+_j$, a unique order $P_{lj}$ in the set $\{0,1,..., \mathcal{D}_j^+ -1\}$.
Assume that the unique orders assigned by each node are the following:
$
v_1: P_{41} = 0, P_{31} = 1; \ v_2: P_{12} = 0; \ v_3: P_{13} = 0, P_{43} = 1; \ v_4: P_{24} = 0.
$
For example, node $v_1$ will first transmit towards node $v_4$ and then transmit towards node $v_3$.
Furthermore, according to the Initialization steps, each node $v_j$ sets its transmission variables $S\_br_j = 0$, $M\_tr_j = 0$ and then it broadcasts its state variables $z^s_j[0]$ and $y^s_j[0]$ to every out-neighbor $v_l \in \mathcal{N}_j^+$.
The initial mass and state variables, at time step $k=0$, for nodes $v_1, v_2, v_3, v_4$ are shown in Table~\ref{table_max1}.
During the operation of Algorithm~\ref{algorithm_max}, at time step $k=0$ each node $v_j$ will receive the state variables $z^s_i[0]$ and $y^s_i[0]$ from every in-neighbor $v_i \in \mathcal{N}_j^-$ (however, it will not receive any set of mass variables $z_i[0]$ and $y_i[0]$ from any in-neighbor $v_i \in \mathcal{N}_j^-$).
According to the Event Trigger Conditions~$1$, nodes $v_1$ and $v_2$ will update their state variables and then set $S\_br_1 = 1$, $S\_br_2 = 1$ (nodes $v_3$ and $v_4$ will maintain $S\_br_3 = 0$ and $S\_br_4 = 0$ since Event Trigger Conditions~$1$ do not hold for them).
Furthermore, Event Trigger Conditions~$2$ do not hold for any node but Event Trigger Conditions~$3$ hold for nodes $v_1$ and $v_2$ who set $M\_tr_1 = 1$ and $M\_tr_2 = 1$ (nodes $v_3$ and $v_4$ will maintain $M\_tr_3 = 0$ and $M\_tr_4 = 0$ since Event Trigger Conditions~$3$ do not hold for them).
Then nodes $v_1$ and $v_2$ will broadcast their state variables to every out-neighbor (since $S\_br_1 = 1$, $S\_br_2 = 1$) and then they will transmit their mass variables according to their unique predetermined order (since $M\_tr_1 = 1$ and $M\_tr_2 = 1$).
Furthermore, they will set $S\_br_1 = 0$, $M\_tr_1 = 0$ and $S\_br_2 = 0$, $M\_tr_2 = 0$ since they transmitted their state and mass variables.
The mass and state variables, at time step $k=1$, for nodes $v_1, v_2, v_3, v_4$ are shown in Table~\ref{table_max1}.
During time step $k=1$, each node $v_j$ will receive the state variables and the mass variables from every in-neighbor.
Specifically, node $v_1$ will receive the mass variables of node $v_2$ and node $v_4$ will receive the mass variables of node $v_1$.
Following Event Trigger Conditions~$1$, $v_1$ will update its state variables and set $S\_br_1 = 1$.
Then, following Event Trigger Conditions~$2$, node $v_4$ will update its mass variables and set $S\_br_4 = 1$.
Following Event Trigger Conditions~$3$, node $v_1$ will set $M\_tr_1 = 1$.
Nodes $v_1$ and $v_4$ will broadcast their state variables to every out-neighbor (since $S\_br_1 = 1$, $S\_br_4 = 1$) and node $v_1$ will transmit its mass variables according to its unique predetermined order towards node $v_3$ (since $M\_tr_1 = 1$).
Then, nodes $v_1$, $v_4$ will set $S\_br_1 = 0$, $S\_br_4 = 0$ and node $v_1$ will set $M\_tr_1 = 0$.
The mass and state variables, at time step $k=2$, for nodes $v_1, v_2, v_3, v_4$ are shown in Table~\ref{table_max1}.
Repeating the steps above, in Table~\ref{table_max1} we can see the mass and state variables, at time step $k=3$.
In Table~\ref{table_max1} we can see that for the set $\mathcal{V}^p[3] = \{v_3, v_4\}$ we have $y_3[3] = y_4[3]$ and $z_3[3] = z_4[3]$ while, for the set $ \mathcal{V} - \mathcal{V}^p[3] = \{v_1, v_2\}$ we have $y_1[3] = y_2[3] = 0$ and $z_1[3] = z_2[3] = 0$.
This means that we have a ``Partial Mass Summation'' scenario.
In this case, we will see that Event Trigger Conditions~$2$ will not hold again for any node for time steps $k > 3$ (i.e., no node will transmit again its mass variables).
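The ``Partial Mass Summation'' state reached at time step $k=3$ can be checked numerically with a short sketch (exact rational arithmetic; all names are ours and not part of the protocol):

```python
from fractions import Fraction

# Masses at k = 3 in the example: v_3, v_4 hold (11, 2); v_1, v_2 hold (0, 0),
# i.e., partial mass summation with alpha = |V^p[3]| = 2.
masses = {'v1': (0, 0), 'v2': (0, 0), 'v3': (11, 2), 'v4': (11, 2)}
pairs = {p for p in masses.values() if p[1] > 0}

assert len(pairs) == 1                     # all nonzero holders agree: V^p[3]
y, z = pairs.pop()
assert Fraction(y, z) == Fraction(22, 4)   # q_j^s = 11/2 = q, no error
# Total mass is conserved: sums match sum(y_l[0]) = 22 and n = 4.
assert sum(y for (y, z) in masses.values()) == 22
assert sum(z for (y, z) in masses.values()) == 4
```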
\begin{table}[t]
\begin{center}
\captionof{table}{Mass and State Variables for Fig.~\ref{max_example}.}
\label{table_max1}
{\small
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Node &\multicolumn{5}{c|}{Mass and State Variables for $k=0$}\\
&$y_j[0]$&$z_j[0]$&$y^s_j[0]$&$z^s_j[0]$&$q^s_j[0]$\\
\cline{1-6}
& & & & & \\
$v_1$ & 2 & 1 & 2 & 1 & 2 / 1\\
$v_2$ & 4 & 1 & 4 & 1 & 4 / 1\\
$v_3$ & 7 & 1 & 7 & 1 & 7 / 1\\
$v_4$ & 9 & 1 & 9 & 1 & 9 / 1\\
\hline
Node &\multicolumn{5}{c|}{Mass and State Variables for $k=1$}\\
&$y_j[1]$&$z_j[1]$&$y^s_j[1]$&$z^s_j[1]$&$q^s_j[1]$\\
\cline{1-6}
& & & & & \\
$v_1$ & 2 & 1 & 7 & 1 & 7 / 1\\
$v_2$ & 4 & 1 & 9 & 1 & 9 / 1\\
$v_3$ & 7 & 1 & 7 & 1 & 7 / 1\\
$v_4$ & 9 & 1 & 9 & 1 & 9 / 1\\
\hline
Node &\multicolumn{5}{c|}{Mass and State Variables for $k=2$}\\
&$y_j[2]$&$z_j[2]$&$y^s_j[2]$&$z^s_j[2]$&$q^s_j[2]$\\
\cline{1-6}
& & & & & \\
$v_1$ & 4 & 1 & 9 & 1 & 9 / 1\\
$v_2$ & 0 & 0 & 9 & 1 & 9 / 1\\
$v_3$ & 7 & 1 & 7 & 1 & 7 / 1\\
$v_4$ & 11 & 2 & 11 & 2 & 11 / 2\\
\hline
Node &\multicolumn{5}{c|}{Mass and State Variables for $k=3$}\\
&$y_j[3]$&$z_j[3]$&$y^s_j[3]$&$z^s_j[3]$&$q^s_j[3]$\\
\cline{1-6}
& & & & & \\
$v_1$ & 0 & 0 & 9 & 1 & 9 / 1\\
$v_2$ & 0 & 0 & 11 & 2 & 11 / 2\\
$v_3$ & 11 & 2 & 11 & 2 & 11 / 2\\
$v_4$ & 11 & 2 & 11 & 2 & 11 / 2\\
\hline
Node &\multicolumn{5}{c|}{Mass and State Variables for $k=4$}\\
&$y_j[4]$&$z_j[4]$&$y^s_j[4]$&$z^s_j[4]$&$q^s_j[4]$\\
\cline{1-6}
& & & & & \\
$v_1$ & 0 & 0 & 11 & 2 & 11 / 2\\
$v_2$ & 0 & 0 & 11 & 2 & 11 / 2\\
$v_3$ & 11 & 2 & 11 & 2 & 11 / 2\\
$v_4$ & 11 & 2 & 11 & 2 & 11 / 2\\
\hline
\end{tabular}
}
\end{center}
\end{table}
\vspace{0.2cm}
During time step $k=4$, each node will receive the state variables and the mass variables from every in-neighbor.
Node $v_1$ will update its state variables and set $S\_br_1 = 1$ (according to Event Trigger Conditions~$1$).
Then, no transmissions of mass variables will be performed since Event Trigger Conditions~$3$ do not hold for any node; thus, for every node $v_j$ we have $M\_tr_j = 0$.
Node $v_1$ will broadcast its state variables to every out-neighbor (since $S\_br_1 = 1$) and every node $v_j$ will set $S\_br_j = 0$, $M\_tr_j = 0$.
However, since the state variables of nodes $v_3$, $v_4$ are the same as the updated state variables of node $v_1$, the mass and state variables of every node at time step $k=5$ remain the same as at time step $k=4$ (shown in Table~\ref{table_max1}).
In Table~\ref{table_max1}, we can see that (\ref{alpha_z_y}) and (\ref{alpha_q}) hold for every node for $\alpha = 2$ (i.e., every node has reached quantized average consensus).
Notice that no set of event trigger conditions holds for any node during time step $k = 5$.
This means that no node will perform any transmissions of its state or mass variables for time steps $k \geq 5$. \hspace*{\fill} $\square$
\end{example}
\begin{remark}
It is interesting to consider here the case where node $v_1$ sets its priorities as $P_{31} = 0$ and $P_{41} = 1$, during the Initialization of Algorithm~\ref{algorithm_max}.
In this case, we would observe a ``Full Mass Summation'' scenario for node $v_4$ instead of ``Partial Mass Summation'' (i.e., \eqref{alpha_z_y} and \eqref{alpha_q} hold for every node for $\alpha = 1$). $\hfill \blacksquare$
\end{remark}
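As a sanity check on the example above, the consensus value reached in Table~\ref{table_max1} is exactly the average of the initial values; a minimal Python check (values taken from the example):

```python
from fractions import Fraction

# Initial values y_j[0] of the four nodes in the example (each z_j[0] = 1).
y0 = [2, 4, 7, 9]
n = len(y0)

# The quantized average consensus value is sum(y0)/n, matching the
# final state q^s_j[5] = 11/2 reported in the table.
q = Fraction(sum(y0), n)
print(q)  # 11/2
```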
\subsection{Deterministic Convergence Analysis}\label{sec:Conv_analysis}
We now analyze the functionality of Algorithm~\ref{algorithm_max} and prove that it allows all nodes to reach quantized average consensus after a finite number of steps.
Furthermore, we will also show that once quantized average consensus is reached, transmissions from each node cease.
We first consider the following setup and then state Lemma~\ref{before_second_lemma} and Lemma~\ref{second_lemma} which are necessary for our subsequent development.
{\it Setup:} Consider a strongly connected digraph $\mathcal{G}_d = (\mathcal{V}, \mathcal{E})$ with $n=|\mathcal{V}|$ nodes and $m=|\mathcal{E}|$ edges.
During the execution of Algorithm~\ref{algorithm_max}, at time step $k_0$, there is at least one node $v_{j'} \in \mathcal{V}$, for which
\begin{equation}\label{great_z_prop1_det}
z_{j'}[k_0] \geq z_i[k_0], \ \forall v_i \in \mathcal{V}.
\end{equation}
Then, among the nodes $v_{j'}$ for which (\ref{great_z_prop1_det}) holds, there is at least one node $v_j$ for which
\begin{equation}\label{great_z_prop2_det}
y_j[k_0] \geq y_{l}[k_0] , \ \text{where} \ \ v_j, v_{l} \in \{ v_{j'} \in \mathcal{V} \ | \ (\ref{great_z_prop1_det}) \ \text{holds} \}.
\end{equation}
For notational convenience we will call the pair of mass variables of node $v_j$ for which (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}) hold as the ``leading mass'' (or ``leading masses'' if multiple nodes hold such a pair of values) and the pairs of mass variables of a node $v_l$ for which $z_l[k_0]>0$ but (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}) do not hold as the ``follower mass'' (or ``follower masses'').
Furthermore, if two (or more) masses reach a node simultaneously then we say that they ``merge'', i.e., the receiving node ``merges'' the mass variables it receives by summing their numerators and their denominators (according to Step~$2$ of the Iteration of Algorithm~\ref{algorithm_max}).
This way a set of mass variables with a greater denominator is created.
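The merge operation can be sketched as follows (an illustrative Python fragment, not part of the algorithm's pseudocode): the receiving node sums the numerators $y$ and the denominators $z$ of all mass variables that arrive simultaneously.

```python
def merge_masses(pairs):
    """Merge mass variables (y_i, z_i) that reach a node simultaneously:
    numerators and denominators are summed separately."""
    y = sum(y_i for y_i, _ in pairs)
    z = sum(z_i for _, z_i in pairs)
    return y, z

# Two masses merging, as happens at node v_4 in the example above:
print(merge_masses([(2, 1), (9, 1)]))  # (11, 2)
```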
\begin{lemma}\label{before_second_lemma}
If, during time step $k_0$ of Algorithm~\ref{algorithm_max}, the mass variables of node $v_j$ fulfill (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}), then the state variables of every node $v_i \in \mathcal{V}$ satisfy
\begin{eqnarray}\label{first_z}
z_i^s[k_0] \leq z_j[k_0] ,
\end{eqnarray}
or
\begin{eqnarray}\label{first_zy}
z_i^s[k_0] = z_j[k_0] \ \ \text{and} \ \ y_i^s[k_0] \leq y_j[k_0].
\end{eqnarray}
\end{lemma}
\begin{pf}
Let us consider the variable
$
z^{(m)}[k] = \max_{v_l \in \mathcal{V}} z_l[k] .
$
From Iteration Step~$2$ of Algorithm~\ref{algorithm_max} we have that $z^{(m)}[k]$ is non-decreasing (i.e., $z^{(m)}[k+1] \geq z^{(m)}[k]$, for every $k$).
Furthermore, since the mass variables of node $v_j$ fulfill (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}), then, during time step $k$, it holds that
$
z_j[k] = z^{(m)}[k].
$
In addition, for every $k$, during Execution Steps~$1$ and $2$ of Algorithm~\ref{algorithm_max_1a}, for every node $v_i \in \mathcal{V}$, we have that $z_i^s[k] \leq z^{(m)}[k]$.
This is a direct result of $z^{(m)}[k]$ being non-decreasing and Event Trigger Conditions~$1$ and $2$ where, at time step $k$, the state variables $z_i^s[k]$, $y_i^s[k]$ of a node $v_i$ are updated to be (i) either equal to $z_i[k]$, $y_i[k]$ if $z_i^s[k] > z_i[k]$ or $z_i^s[k] = z_i[k]$, $y_i^s[k] > y_i[k]$ or (ii) equal to $z_{i'}^s[k]$, $y_{i'}^s[k]$, $v_{i'} \in \mathcal{N}_i^-$ if $z_{i'}^s[k] > z_{i}^s[k]$ or $z_{i'}^s[k] = z_{i}^s[k]$, $y_{i'}^s[k] > y_{i}^s[k]$.
As a result, at time step $k$, the state variables of every node $v_i \in \mathcal{V}$ satisfy
$
z_i^s[k] \leq z_j[k] .
$
Finally, from Execution Steps~$1$ and $2$ of Algorithm~\ref{algorithm_max_1a}, for every $k$, we have that if $z_i^s[k] = z_j[k]$ holds for some node $v_i$, then $y_i^s[k] \leq y_j[k]$.
[Note here that if $z_i^s[k] = z_j[k]$ and $y_i^s[k] > y_j[k]$, then the mass variables of $v_j$ do not fulfill (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}) which is a contradiction.]
As a result we have that if the mass variables of node $v_j$ fulfill (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}), then the state variables of every node $v_i \in \mathcal{V}$ satisfy (\ref{first_z}) or (\ref{first_zy}). \hspace*{\fill} $\square$
\end{pf}
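The ordering used in (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}) is lexicographic: first maximize the denominator $z$, then the numerator $y$. A hypothetical helper illustrating the selection of leading masses:

```python
def leading_masses(masses):
    """Return the indices of nodes holding a leading mass: among the
    nonzero masses, maximize z first and then y (lexicographic order).
    `masses` maps a node index j to its pair (y_j, z_j)."""
    nonzero = {j: (z, y) for j, (y, z) in masses.items() if z > 0}
    best = max(nonzero.values())
    return sorted(j for j, v in nonzero.items() if v == best)

# Masses (y_j, z_j) at k = 3 in the example: v_3 and v_4 lead jointly.
print(leading_masses({1: (0, 0), 2: (0, 0), 3: (11, 2), 4: (11, 2)}))  # [3, 4]
```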
\begin{lemma}\label{second_lemma}
If, during time step $k_0$ of Algorithm~\ref{algorithm_max}, the mass variables of {\em each} node $v_j$ with nonzero mass variables fulfill (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}), then we have only leading masses and no follower masses.
This means that the Event Trigger Conditions~$3$ will never hold again for future time steps $k \geq k_0$.
As a result, the transmissions that (may) take place will be only via broadcasting (from Event Trigger Conditions~$1$ and $2$) for at most $n-1$ time steps and then they will cease.
\end{lemma}
\begin{pf}
Let us assume that during time step $k_0$ two (or more) sets of mass variables merge at nodes $v_j$, $v_i$, so that their mass variables simultaneously become leading masses (more generally, we could have more than two leading masses) and all other nodes have zero mass variables.
Since the mass variables of nodes $v_j$, $v_i$ become leading masses during time step $k_0$, there exists a set $\mathcal{V}^p[k_0] \subseteq \mathcal{V}$ for which we have $y_j[k_0] = y_i[k_0]$ and $z_j[k_0] = z_i[k_0]$, $\forall v_j, v_i \in \mathcal{V}^p[k_0]$, and $y_l[k_0] = 0$, $z_l[k_0] = 0$, for each $v_l \in \mathcal{V} - \mathcal{V}^p[k_0]$.
Once this merge occurs then we have that, for both $v_j$ and $v_i$, the Event Trigger Conditions $1$ and the Event Trigger Conditions $3$ do not hold, but Event Trigger Conditions $2$ do hold.
This means that $v_j$ and $v_i$ do not transmit their mass variables but rather they broadcast their new state variables to their out-neighbors.
Then, their out-neighbors, $v_{l_j}$ and $v_{l_i}$ respectively, will update their state variables and broadcast their new state variables towards their out-neighbors.
The updating and broadcasting of state variables will continue, until all nodes obtain state variables equal to $z^s_j[k_0]$ and $y^s_j[k_0]$.
Note that during this update and broadcasting of state variables, no node transmits its mass variables.
After at most $n-1$ steps, all nodes will be aware of the values $z^s_j[k_0]$ and $y^s_j[k_0]$, and at that point all transmissions will cease. \hspace*{\fill} $\square$
\end{pf}
\begin{theorem}
\label{PROP1_max}
The execution of Algorithm~\ref{algorithm_max} allows each node $v_j \in \mathcal{V}$ to reach quantized average consensus after a finite number of steps $k_0$ upper bounded by $n^2 + (n-1)m^2$, where $n$ is the number of nodes and $m$ is the number of edges in the network.
Furthermore, each node stops transmitting towards its out-neighbors once quantized average consensus is reached.
\end{theorem}
\begin{pf}
Before starting the analysis of Algorithm~\ref{algorithm_max}, it is important to note that the leading mass will not fulfill the Event Trigger Conditions~$3$ in Execution Step~$3$ of Algorithm~\ref{algorithm_max_1a}.
This means that the corresponding node (say $v_j$) will not transmit its mass variables to its out-neighbors $v_l \in \mathcal{N}_j^+$ according to its predetermined priority.
In this proof we will show that there exists $k_0 \in \mathbb{Z}_+$ for which the mass variables of {\em every} node $v_j$ (for which $z_j[k] > 0$) fulfill (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}), for every $k \geq k_0$.
This means that for $k \geq k_0$ we have only leading masses.
Furthermore, from Lemma~\ref{second_lemma}, we have that there exists $k_1 > k_0$, where for every $k \geq k_1$ the state variables of every node $v_j \in \mathcal{V}$ fulfill (\ref{alpha_z_y}) and (\ref{alpha_q}) for $\alpha \in \mathbb{Z}_+$ (i.e., every node has reached quantized consensus) and thus transmissions cease.
During the Initialization steps of Algorithm~\ref{algorithm_max}, we have that each node will broadcast its state variables to every out-neighbor.
Then, during Iteration Step~$1$, each node will receive and update its state variables.
When checking Event Trigger Conditions~$1$ it will decide to broadcast towards its out-neighbors the updated values (of the state variables).
This means that after $n$ iterations (assuming that no other mass variables merge during these $n$ time steps), the state variables of each node $v_i \in \mathcal{V}$ satisfy $z^s_i[n] = z_{j_1}[0]$ and $y^s_i[n] = y_{j_1}[0]$,
where the mass variables of node $v_{j_1}$ are the leading mass.
As a result, after $n$ iterations, we have that the Event Trigger Conditions~$3$ will hold for every node $v_i \in \mathcal{V} - \{ v_{j_1} \}$.
Thus, every node (except node $v_{j_1}$ which is the leading mass) will transmit its mass variables toward its out-neighbors according to its unique priority.
Note here that the number of iterations required for the follower mass to reach every node $v_i \in \mathcal{V}$ is bounded by $m^2$, where $m = | \mathcal{E} |$ is the number of edges of the given digraph $\mathcal{G}_d$ (in this case Proposition~$3$ in \cite{2014:RikosHadj} provides a bound for the follower mass to travel via each edge in the graph and thus necessarily also reach every other node).
Let us assume now that, after executing Algorithm~\ref{algorithm_max} for an additional $m^2$ steps, we have that the mass variables $z_{i_1}[0]$, $y_{i_1}[0]$ and $z_{i_2}[0]$, $y_{i_2}[0]$ of nodes $v_{i_1}$ and $v_{i_2}$, respectively, meet (and merge) at node $v_{j_2}$, and after this merge they become the leading mass.
This means that node $v_{j_2}$ will not transmit its mass variables during time step $n + m^2$ (because Event Trigger Conditions~$3$ do not hold) but it will broadcast its state variables to every out-neighbor (because Event Trigger Conditions~$2$ hold).
Thus, after an additional $n$ iterations, the state variables of each node $v_i \in \mathcal{V}$ satisfy $z^s_i[2n + m^2] = z_{j_2}[n + m^2]$ and $y^s_i[2n + m^2] = y_{j_2}[n + m^2]$, where the mass variables of node $v_{j_2}$ are now the leading mass.
This means that Event Trigger Conditions~$3$ will hold for every node $v_i \in \mathcal{V} - \{ v_{j_2} \}$.
Thus, every node (except node $v_{j_2}$ which is now the leading mass) will transmit its mass variables toward its out-neighbors according to its unique priority.
Note here that also node $v_{j_1}$ will transmit its mass variables toward its out-neighbors (since the state variables of $v_{j_1}$ are equal to the mass variables of the leading mass $v_{j_2}$, this means that Event Trigger Conditions~$3$ will also hold for $v_{j_1}$).
Let us assume now that, after executing Algorithm~\ref{algorithm_max} for an additional $m^2$ steps, the mass variables $z_{i_3}[0]$, $y_{i_3}[0]$ and $z_{i_4}[0]$, $y_{i_4}[0]$ of nodes $v_{i_3}$ and $v_{i_4}$, respectively, meet (and merge) at node $v_{j_3}$.
After this merge they become the leading mass.
Again, this means that during time step $2n + 2m^2$, node $v_{j_3}$ will not transmit its mass variables (because Event Trigger Conditions~$3$ do not hold) but it will broadcast its state variables to every out-neighbor (because Event Trigger Conditions~$2$ hold).
After an additional $n$ iterations, the state variables of each node $v_i \in \mathcal{V}$ satisfy $z^s_i[3n + 2m^2] = z_{j_3}[2n + 2m^2]$ and $y^s_i[3n + 2m^2] = y_{j_3}[2n + 2m^2]$, where the mass variables of node $v_{j_3}$ are now the new leading mass.
By continuing this analysis, we can see that every $n + m^2$ time steps at least two follower masses merge and become the leading mass.
Since, during the Initialization steps of Algorithm~\ref{algorithm_max}, there are $n$ initial mass variables, after $(n - 1)(n + m^2)$ time steps {\it all} initial mass variables will have merged into one mass (obviously, the mass into which every initial mass has merged is the leading mass).
Thus, at time step $(n-1)n + (n-1)m^2$ we have only leading masses and no follower masses.
From Lemma~\ref{second_lemma}, we have that after $n$ additional time steps every node will have state variables equal to the leading mass (i.e., $z^s_i[n^2 + (n-1)m^2] = n$ and $y^s_i[n^2 + (n-1)m^2] = \sum_{l=1}^{n}{y_l[0]}$, for every $v_i \in \mathcal{V}$).
As a result, each node $v_j \in \mathcal{V}$ will reach quantized average consensus after a finite number of steps $k_0$ upper bounded by $n^2 + (n-1)m^2$, and then transmissions will cease.
Note that so far we considered the scenario where there is only one leading mass during every time step $k$ and it merges with only one nonzero mass variable every $n + m^2$ time steps.
In other scenarios, we can consider multiple leading masses (i.e., the nonzero mass variables fulfill (\ref{great_z_prop1_det}) and (\ref{great_z_prop2_det}) for more than one node) which will speed up convergence since the follower masses will merge more frequently. \hspace*{\fill} $\square$
\end{pf}
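The counting argument of Theorem~\ref{PROP1_max} can be summarized numerically: at most $n-1$ mergings, each occurring within $n + m^2$ time steps, plus $n$ final broadcast steps (a sketch):

```python
def convergence_bound(n, m):
    """Upper bound on the convergence time of the algorithm:
    (n-1) mergings, each within n + m**2 steps, plus n broadcast steps."""
    return (n - 1) * (n + m * m) + n  # equals n**2 + (n-1)*m**2

# E.g., for a digraph with n = 4 nodes and m = 7 edges:
print(convergence_bound(4, 7))  # 163
```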
The proof of Theorem~\ref{PROP1_max} analyzes the operation of Algorithm~\ref{algorithm_max} over a static and strongly connected digraph.
It shows that the convergence time of the algorithm is upper bounded by $n^2 + (n-1)m^2$ (where $n$ is the number of nodes and $m$ is the number of edges in the network).
Note that the finite time \textit{deterministic} convergence of the algorithm is achievable due to the unique order $P_{lj}$ that each node $v_j$ assigns to its out-neighbors during the initialization steps.
However, in most applications nowadays, such as control and coordination of autonomous vehicles or UAV swarms, we have that the structure of the network is time-varying.
Analyzing the operation of Algorithm~\ref{algorithm_max} over time-varying digraphs is outside the scope of this paper and an extension of the unique order transmission strategy over time-varying networks is currently an open question.
An extension of Algorithm~\ref{algorithm_max} over time-varying digraphs can be done through the analysis in \cite{2021:Rikos_Hadj_Splitting}.
Specifically, \cite{2021:Rikos_Hadj_Splitting} presents an algorithm in which each node performs randomized transmissions towards its out-neighbors (i.e., it chooses an out-neighbor to perform a transmission according to a nonzero probability) and is shown to operate both in static and time-varying digraphs allowing convergence with high probability.
Thus, an important future research direction is to extend the operation of Algorithm~\ref{algorithm_max} by applying a randomized transmission strategy.
This extension will enhance the algorithm's convergence speed (see Fig.~$3$ and Fig.~$4$ in \cite{2021:Rikos_Hadj_Splitting}) and will allow convergence over time-varying digraphs; however, it will eliminate its \textit{deterministic} convergence characteristics, allowing instead finite time convergence with high probability.
\section{BOUNDING NUMBER OF TRANSMISSIONS AND COMPUTATIONS}\label{bound_trans_comp}
In this section we calculate an upper bound on the number of transmissions and the number of computations each node $v_j$ performs during Algorithm~\ref{algorithm_max}.
\begin{theorem}\label{bound_trans}
During the operation of Algorithm~\ref{algorithm_max}, each node $v_j$ will perform at most $n + (n-1)m$ transmissions (where $n$ is the number of nodes and $m$ is the number of edges in the network) before quantized average consensus is reached and transmissions cease.
\end{theorem}
\begin{pf}
During the operation of Algorithm~\ref{algorithm_max} every node $v_j$ performs (i) broadcast transmissions and (ii) directed transmissions.
We provide an upper bound for each transmission type separately.
\noindent
Broadcast Transmissions:
It is easy to see that during Algorithm~\ref{algorithm_max} there are at most $n-1$ updates of the leading mass (see Theorem~\ref{PROP1_max}) which trigger broadcast transmissions.
Considering also the broadcast transmission performed during the initialization procedure, we have that each node $v_j$ will perform at most $n$ broadcast transmissions towards its out-neighbors during Algorithm~\ref{algorithm_max}.
\noindent
Directed Transmissions (or Unicast Transmissions):
Our analysis is based on Proposition~$3$ in \cite{2014:RikosHadj} which provides a bound for the follower mass to travel via each edge in the digraph and thus necessarily also reach every other node.
Specifically, considering a strongly connected digraph $\mathcal{G}_d = (\mathcal{V}, \mathcal{E})$ (with $n=|\mathcal{V}|$ nodes and $m=|\mathcal{E}|$ edges), we have that the number of iterations required for a follower mass to reach every node $v_i \in \mathcal{V}$ is bounded by $m^2$.
This result is derived from the fact that digraph $\mathcal{G}_d$ consists of $C_\beta$ cycles, where $C_\beta \in \{ 1, 2, \ldots, m \}$ (see Proposition~$3$ in \cite{2014:RikosHadj}), and each cycle is traversed at most $m$ times by a follower mass until it reaches the node (say $v_i$) whose mass variables are the leading mass.
Let us now consider that each node $v_j$ performs one directed transmission every time a follower mass traverses the cycle $C_{\beta_0}$ to which node $v_j$ belongs.
This means that node $v_j$ will perform at most $m$ directed transmissions until this specific follower mass merges with the leading mass.
Furthermore, we have that initially there are at most $n$ mass variables and during the operation of Algorithm~\ref{algorithm_max} there are at most $n-1$ mergings.
As a result, $n-1$ follower masses will traverse the digraph in order to merge with the leading mass.
This means that each node $v_j$ will perform at most $(n-1)m$ directed transmissions during Algorithm~\ref{algorithm_max}.
Combining the results for (i) broadcast transmissions and (ii) directed transmissions, we have that during the operation of Algorithm~\ref{algorithm_max} each node $v_j$ will perform at most $n + (n-1)m$ transmissions before quantized average consensus is reached and transmissions cease. \hspace*{\fill} $\square$
\end{pf}
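The bound of Theorem~\ref{bound_trans} is simply the sum of the two transmission types (a sketch):

```python
def max_transmissions(n, m):
    """Per-node transmission bound: n broadcast transmissions plus
    (n-1)*m directed (unicast) transmissions."""
    return n + (n - 1) * m

# E.g., for n = 4 nodes and m = 7 edges:
print(max_transmissions(4, 7))  # 25
```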
\begin{theorem}\label{bound_comp}
During Algorithm~\ref{algorithm_max}, each node $v_j$ will perform at most $1 + (n-1) (m + 1 + \mathcal{D}_{max}^-)$ computations (where $n$ is the number of nodes, $m$ is the number of edges and $\mathcal{D}_{max}^- = \max_{v_j \in \mathcal{V}} \mathcal{D}_j^-$) before quantized average consensus is reached and transmissions cease.
\end{theorem}
\begin{pf}
During the operation of Algorithm~\ref{algorithm_max}, we consider that a node $v_j$ performs computations if it executes (i) the Initialization steps or (ii) any of the Iteration Steps~$1$, $2$, $3$.
The Initialization steps are executed only once.
The Iteration steps are executed at time step $k$ only if a node $v_j$ receives a set of state variables $z^s_i[k]$, $y^s_i[k]$ during Iteration Step~$1$ or a set of mass variables $z_i[k]$, $y_i[k]$ during Iteration Step~$2$.
Thus, the upper bound on computations is calculated according to the number of messages each node $v_j$ receives during Algorithm~\ref{algorithm_max}.
Computations due to Received Mass Variables:
We consider the cases of computations due to received mass variables that (i) fulfill Event Trigger Conditions~$2$ of Algorithm~\ref{algorithm_max_1a} and (ii) fulfill Event Trigger Conditions~$3$ of Algorithm~\ref{algorithm_max_1a}.
\\ \noindent
For the first case we have that there are at most $n-1$ mergings of mass variables (see Theorem~\ref{PROP1_max}).
This means that the number of computations due to received mass variables that fulfill Event Trigger Conditions~$2$ of Algorithm~\ref{algorithm_max_1a} is upper bounded by $n-1$.
For the second case we have that each node $v_j$ will perform a directed transmission at most $(n-1)m$ times (see Theorem~\ref{bound_trans}).
This directed transmission is the result of receiving a set of mass variables which fulfills Event Trigger Conditions~$3$ of Algorithm~\ref{algorithm_max_1a}.
This means that the number of computations due to received mass variables that fulfill Event Trigger Conditions~$3$ of Algorithm~\ref{algorithm_max_1a} is upper bounded by $(n-1)m$.
As a result, node $v_j$ will perform computations for received mass variables (by checking Event Trigger Conditions~$2$ or $3$) at most $(n-1)(m+1)$ times.
Computations due to Received State Variables:
From Theorem~\ref{PROP1_max} we have that during Algorithm~\ref{algorithm_max} there are at most $n-1$ updates of the leading mass which trigger broadcast transmissions among nodes in the network.
This means that node $v_j$ will receive a set of state variables $z^s_i[k]$, $y^s_i[k]$ from its in-neighbors for at most $(n-1)\mathcal{D}_{max}^-$ times, where $\mathcal{D}_{max}^- = \max_{v_j \in \mathcal{V}} \mathcal{D}_j^-$.
As a result, node $v_j$ will perform computations for received state variables (by checking Event Trigger Conditions~$1$) at most $(n-1)\mathcal{D}_{max}^-$ times.
We now consider the computations during the Initialization steps and combine them with the results of computations for (i) received mass variables and (ii) received state variables.
As a result, we have that during the operation of Algorithm~\ref{algorithm_max}, each node $v_j$ will perform at most $1 + (n-1) (m + 1 + \mathcal{D}_{max}^-)$ computations (where $\mathcal{D}_{max}^- = \max_{v_j \in \mathcal{V}} \mathcal{D}_j^-$) before quantized average consensus is reached, and computations along with transmissions cease. \hspace*{\fill} $\square$
\end{pf}
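The bound of Theorem~\ref{bound_comp} combines the three sources of computations (a sketch):

```python
def max_computations(n, m, d_max_in):
    """Per-node computation bound: 1 initialization, (n-1)*(m+1) checks
    triggered by received mass variables, and (n-1)*d_max_in checks
    triggered by received state variables."""
    return 1 + (n - 1) * (m + 1 + d_max_in)

# E.g., for n = 4 nodes, m = 7 edges, maximum in-degree 2:
print(max_computations(4, 7, 2))  # 31
```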
The result of Theorem~\ref{bound_comp} depends on the number of incoming messages since, from the operation of Algorithm~\ref{algorithm_max}, each node performs a computation only after a transmission has been received.
Furthermore, if no messages are received (i.e., no mass or state variables arrive) then, during the operation of Algorithm~\ref{algorithm_max}, each node will not execute Iteration Steps~$1$, $2$ and $3$ and thus it will remain in hibernation mode (i.e., waiting to receive messages without performing any computations).
As a result, since the number of transmissions that each node performs during the operation of Algorithm~\ref{algorithm_max} is upper bounded (see Theorem~\ref{bound_trans}) then the number of computations is also upper bounded and the bound depends on the number of incoming messages.
\section{MEMORY AND ENERGY REQUIREMENTS FOR ACHIEVING QUANTIZED AVERAGE CONSENSUS}\label{energy_constr}
In this section we focus on the consumption of available resources from each node in the network.
Specifically, we calculate an upper bound on the memory and energy each node $v_j$ requires during Algorithm~\ref{algorithm_max}.
\subsection{Required Memory}
We first characterize the memory requirements of each node $v_j$ during the operation of Algorithm~\ref{algorithm_max}.
\begin{prop}\label{Memory_prop}
During the operation of Algorithm~\ref{algorithm_max}, the memory requirement of each node $v_j$ is (i) $7 + 4 \mathcal{D}_{j}^-$ locations for integer values, and (ii) $ 2 + (3 + 2\mathcal{D}_{j}^-) \lceil \log_{2} n \rceil + (3 + 2\mathcal{D}_{j}^-) \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil $ bits for binary numbers.
\end{prop}
\begin{pf}
During the operation of Algorithm~\ref{algorithm_max}, each node $v_j$ needs to store (i) $2$ transmission variables $S\_br_j$, $M\_tr_j$, (ii) $2$ mass variables $y_j[k]$, $z_j[k]$, (iii) $3$ state variables $y^s_j[k]$, $z^s_j[k]$, $q^s_j[k]$, and (iv) $4$ mass and state variables $y_i[k]$, $z_i[k]$, $y^s_i[k]$, $z^s_i[k]$ for each $v_i \in \mathcal{N}_j^-$ (i.e., $2$ mass variables and $2$ state variables that node $v_j$ may receive from each in-neighbor respectively).
This means that the memory requirement of node $v_j$ is $7 + 4 \mathcal{D}_{j}^-$ locations for integer values (decimal numbers).
In order to calculate the memory requirements for binary numbers we need to calculate the required bits for each one of the $7 + 4 \mathcal{D}_{j}^-$ integer values each node $v_j$ stores during the operation of the algorithm. Specifically, we have that node $v_j$ requires (i) $2$ bits for the binary transmission variables $S\_br_j$, $M\_tr_j$, (ii) $\lceil \log_{2} n \rceil$ and $\lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil$ bits for the mass variables $z_j[k]$, $y_j[k]$ respectively, (iii) $\lceil \log_{2} n \rceil$, $\lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil$ and $\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil$ bits for the state variables $z^s_j[k]$, $y^s_j[k]$, $q^s_j[k]$ respectively, and (iv) $2 ( \lceil \log_{2} n \rceil ) \mathcal{D}_{j}^-$ and $2 ( \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil ) \mathcal{D}_{j}^-$ bits for the mass and state variables $y_i[k]$, $z_i[k]$, $y^s_i[k]$, $z^s_i[k]$ for each $v_i \in \mathcal{N}_j^-$.
Combining these $4$ cases, we have that the memory requirement of each node $v_j$ is $ 2 + (3 + 2\mathcal{D}_{j}^-) \lceil \log_{2} n \rceil + (3 + 2\mathcal{D}_{j}^-) \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil $ bits for binary numbers. \hspace*{\fill} $\square$
\end{pf}
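The bit count of Proposition~\ref{Memory_prop} can be evaluated as follows (illustrative; `y0` denotes the vector of initial values $y_j[0]$):

```python
import math

def memory_bits(n, y0, d_in):
    """Bits stored by a node with in-degree d_in: 2 transmission bits,
    plus (3 + 2*d_in) copies of ceil(log2 n) bits for denominators and
    (3 + 2*d_in) copies of ceil(log2 sum|y_j[0]|) bits for numerators."""
    bits_n = math.ceil(math.log2(n))
    bits_y = math.ceil(math.log2(sum(abs(y) for y in y0)))
    return 2 + (3 + 2 * d_in) * bits_n + (3 + 2 * d_in) * bits_y

# E.g., n = 4, initial values [2, 4, 7, 9] (sum 22), in-degree 2:
print(memory_bits(4, [2, 4, 7, 9], 2))  # 2 + 7*2 + 7*5 = 51
```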
It is important to note here that during the operation of Algorithm~\ref{algorithm_max} the memory requirements of each node $v_j$ are greater than most algorithms in the current literature (e.g., \cite{2016:Chamie_Basar, 2011:Cai_Ishii} and references therein).
In most algorithms each node needs to store at most $2 + 2 \mathcal{D}_{j}^-$ integer values (i.e., at most $2$ integer values for the node's state and at most $2$ integer values for the state of each in-neighbor).
However, in Algorithm~\ref{algorithm_max} each node $v_j$ needs to store $7 + 4 \mathcal{D}_{j}^-$ integer values in order to establish finite time convergence and transmission stopping.
Specifically, each node $v_j$ needs to store (i) $2 + 2 \mathcal{D}_{j}^-$ integer values for mass variables, in order to establish finite time convergence, and (ii) $3 + 2 \mathcal{D}_{j}^-$ integer values for state variables along with $2$ integer values for transmission variables, in order to establish transmission stopping.
This increase in the required memory of each node leads to preservation of the number of transmissions and the utilized bandwidth (since each node ceases transmissions once it achieves convergence to the quantized average).
Considering that the energy cost for performing transmissions is much higher than the energy cost for performing computations \cite{2002:Chandrakasan}, this aspect of Algorithm~\ref{algorithm_max} is of particular importance since it leads to energy savings during the operation of each node.
As a result, this characteristic makes Algorithm~\ref{algorithm_max} suitable for battery powered networks (as will be seen in the following section).
\subsection{Required Energy}
We now discuss the energy requirements of each node $v_j$ during the operation of Algorithm~\ref{algorithm_max}.
Energy is consumed mainly during (i) communication, (ii) processing, and (iii) sensing.
Therefore, before analyzing the operation of each node, we introduce the energy model from \cite{2002:Chandrakasan} which we will use to calculate the required energy per operation.
\begin{list4}
\item[1. Communication:] For each node $v_j$, the energy required for transmitting to node $v_l$ is
\begin{equation}\label{eq_trans}
p_{\text{trans}} = (\alpha_{11} + \alpha_{2} d(v_j, v_l)^n) r ,
\end{equation}
where $r$ is the rate (bits/sec), $d(v_j, v_l)$ is the distance between nodes $v_j$, $v_l$, $n$ is the path loss index, and $\alpha_{11}$, $\alpha_{2}$ are constants (typically $45$ nJ/bit and $135$ nJ/bit, respectively).
\item[2. Processing:] For each node $v_j$, the energy required for aggregating $n_{\text{agg}}$ data streams into one stream is
\begin{equation}\label{eq_process}
p_{\text{comp}} = \alpha_4 n_{\text{agg}} r ,
\end{equation}
where $r$ is the rate (bits/sec) and $\alpha_4$ is a constant (typically $5$ nJ/bit).
\item[3. Sensing:] For each node $v_j$, the energy required to sense a bit is constant and equal to $\alpha_3$. The sensing power is
\begin{equation}\label{eq_sense}
p_{\text{sense}} = \alpha_3 r ,
\end{equation}
for a sensing rate of $r$ bits/sec. A typical value of $\alpha_3$ is $50$ nJ/bit.
\end{list4}
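The three power models in \eqref{eq_trans}, \eqref{eq_process} and \eqref{eq_sense} can be collected into small helpers (a sketch; the defaults are the typical nJ/bit values quoted above, expressed in joules, and the path-loss index default of $2$ is an illustrative assumption):

```python
def p_sense(r, a3=50e-9):
    """Sensing power (J/s) for a sensing rate of r bits/s."""
    return a3 * r

def p_comp(n_agg, r, a4=5e-9):
    """Processing power for aggregating n_agg data streams at r bits/s."""
    return a4 * n_agg * r

def p_trans(d, r, n_path=2, a11=45e-9, a2=135e-9):
    """Transmission power over distance d with path-loss index n_path."""
    return (a11 + a2 * d ** n_path) * r
```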
For convenience we assume that, during the operation of our algorithm, each of the above operations lasts $1$ second.
We next analyze the required energy for each of the above operations separately.
Then, the energy requirements of each node $v_j$ during the operation of Algorithm~\ref{algorithm_max} is the sum of these three results.
\begin{lemma}\label{energy_receive}
During Algorithm~\ref{algorithm_max}, each node $v_j$ requires
\begin{align}
& p^j_{\text{sense}} = & \nonumber \\
& \alpha_3 (m + 1 + \mathcal{D}_{max}^-) (n-1) (\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil) & \label{energy_receive_result}
\end{align}
nJ of energy for its sensing operation (i.e., for receiving values from its in-neighbors), where $n$ is the number of nodes, $m$ is the number of edges in the network, $\mathcal{D}_{max}^- = \max_{v_j \in \mathcal{V}} \mathcal{D}_j^-$, and $\alpha_3$ is decided by the specifications of the receiver node $v_j$ (typical value of $\alpha_3$ is $50$ nJ/bit).
\end{lemma}
\begin{pf}
In order to calculate the required energy for each node $v_j$, we consider the analysis of Theorem~\ref{bound_comp} and Proposition~\ref{Memory_prop}.
Specifically, we combine (i) the number of times each node $v_j$ received a set of mass and state variables (see Theorem~\ref{bound_comp}), (ii) the number of bits the received sets of mass and state variables consist of (see Proposition~\ref{Memory_prop}), and (iii) the required energy for each node $v_j$ to sense a bit (shown in \eqref{eq_sense}).
From Theorem~\ref{bound_comp}, we have that each node $v_j$ will receive a set of state variables $z^s_i[k]$, $y^s_i[k]$ from its in-neighbors at most $(n-1)\mathcal{D}_{max}^-$ times.
Furthermore, each node $v_j$ will receive a set of mass variables $z_i[k]$, $y_i[k]$ from its in-neighbors at most $(n-1)(m+1)$ times.
From Proposition~\ref{Memory_prop}, we have that each set of mass or state variables consists of at most $\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil$ bits.
This means that for each node $v_j$, during the operation of Algorithm~\ref{algorithm_max}, the upper bound regarding the number of received bits is equal to $[(n-1)\mathcal{D}_{max}^- + (n-1)(m+1)] (\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil)$.
Combining the upper bound regarding the number of received bits with the required energy to sense a bit in \eqref{eq_sense}, we have that each node $v_j$ requires $p^j_{\text{sense}}$ energy as in \eqref{energy_receive_result} for its sensing operation. \hspace*{\fill} $\square$
\end{pf}
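The counting in the proof above can be sketched directly: each node receives at most $(n-1)\mathcal{D}_{max}^-$ sets of state variables and $(n-1)(m+1)$ sets of mass variables, each of at most $\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil$ bits, at $\alpha_3$ nJ per bit (the function name and example values below are ours):

```python
from math import ceil, log2

def sensing_energy_bound(n, m, d_max, sum_abs_y0, alpha3=50):
    """Worst-case sensing energy (nJ) per node: receptions x bits x nJ/bit."""
    bits_per_set = ceil(log2(n)) + ceil(log2(sum_abs_y0))
    receptions = (n - 1) * d_max + (n - 1) * (m + 1)  # state + mass variables
    return alpha3 * receptions * bits_per_set

# Hypothetical example: n = 4 nodes, m = 5 edges, D_max = 2, sum |y_j[0]| = 8.
assert sensing_energy_bound(4, 5, 2, 8) == 6000
```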
\begin{lemma}\label{energy_process}
During Algorithm~\ref{algorithm_max}, each node $v_j$ requires
\begin{align}
& p^j_{\text{comp}} = & \nonumber \\
& \alpha_4 [1 + 2(\mathcal{D}_{max}^-)^2] (n-1) (\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil) & \label{energy_process_result}
\end{align}
nJ of energy for its processing operation (i.e., for aggregating multiple streams into one stream), where $n$ is the number of nodes, $m$ is the number of edges in the network, $\mathcal{D}_{max}^- = \max_{v_j \in \mathcal{V}} \mathcal{D}_j^-$, and $\alpha_4$ is decided by the specifications of the processing node $v_j$ (typical value of $\alpha_4$ is $5$ nJ/bit).
\end{lemma}
\begin{pf}
In order to calculate the required energy for each node $v_j$, we consider the analysis of Theorem~\ref{bound_comp} and Proposition~\ref{Memory_prop} where we combine (i) the number of times each node $v_j$ received a set of mass and state variables, (ii) the number of bits the received sets of mass and state variables consist of, and (iii) the required energy for each node $v_j$ to aggregate multiple data streams into one stream (shown in \eqref{eq_process}).
From Theorem~\ref{bound_comp}, we have that each node $v_j$ will receive a set of state variables $z^s_i[k]$, $y^s_i[k]$ from its in-neighbors at most $(n-1)\mathcal{D}_{max}^-$ times.
This means that node $v_j$ will have to aggregate $2 \mathcal{D}_{max}^-$ streams into two streams at most $(n-1)\mathcal{D}_{max}^-$ times.
Specifically, $v_j$ will have to aggregate $\mathcal{D}_{max}^-$ streams of $z^s_i[k]$, $v_i \in \mathcal{N}_j^-$, values into one stream $z^s_j[k+1]$ and $\mathcal{D}_{max}^-$ streams of $y^s_i[k]$, $v_i \in \mathcal{N}_j^-$, values into one stream $y^s_j[k+1]$.
As a result, the required energy for node $v_j$ to process the received state variables is
\begin{equation}\label{process_state}
p_1^j = \alpha_4 2(\mathcal{D}_{max}^-)^2 (n-1) (\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil) \ \text{nJ} .
\end{equation}
Furthermore, each node $v_j$ will receive a set of mass variables $z_i[k]$, $y_i[k]$ from its in-neighbors at most $(n-1)(m+1)$ times.
However, the maximum number of aggregations of received mass variables $z_i[k]$, $y_i[k]$ is upper bounded by $n-1$ (i.e., there are at most $n-1$ aggregations of mass variables during the operation of Algorithm~\ref{algorithm_max}).
This means that the required energy for node $v_j$ to process the received mass variables is
\begin{equation}\label{process_mass}
p_2^j = \alpha_4 (n-1) (\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil) \ \text{nJ} .
\end{equation}
As a result, combining \eqref{process_state} and \eqref{process_mass} we have that each node $v_j$ requires $p^j_{\text{comp}}$ energy as in \eqref{energy_process_result} for its processing operation. \hspace*{\fill} $\square$
\end{pf}
\begin{lemma}\label{energy_transmit}
During Algorithm~\ref{algorithm_max}, each node $v_j$ requires
\begin{align}
& p^j_{\text{trans}} = & (n-1) (\alpha_{11} + \alpha_{2} d(v_j)^\eta) (m + 1) A & \label{energy_transmit_result}
\end{align}
nJ of energy for its transmission operation (i.e., for performing transmissions towards its out-neighbors), where
$$
A = \lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil ,
$$
and $n$ is the number of nodes, $m$ is the number of edges in the network, $d(v_j)$ is the distance between node $v_j$ and every $v_l \in \mathcal{N}_j^+$, $\eta$ is the path loss index, and $\alpha_{11}$, $\alpha_{2}$ are constants (typically $45$ nJ/bit and $135$ nJ/bit, respectively).
[For notational simplicity, we assume that $d(v_j)^\eta = d(v_j, v_l)^\eta = d(v_j, v_{l'})^\eta$, for every $v_l, v_{l'} \in \mathcal{N}_j^+$.]
\end{lemma}
\begin{pf}
In order to calculate the required energy for each node $v_j$, we combine (i) the number of times each node $v_j$ performs a transmission of a set of mass and state variables (see Theorem~\ref{bound_trans}), (ii) the number of bits involved in the transmitted sets of mass and state variables (see Proposition~\ref{Memory_prop}), and (iii) the required energy for each node $v_j$ to perform a transmission towards its out-neighbors (shown in \eqref{eq_trans}).
From Theorem~\ref{bound_trans}, we have that each node $v_j$ will transmit a set of state variables $z^s_j[k]$, $y^s_j[k]$ to its out-neighbors at most $(n-1)$ times.
Furthermore, each node $v_j$ will transmit a set of mass variables $z_j[k]$, $y_j[k]$ towards its out-neighbors at most $(n-1) m$ times.
From Proposition~\ref{Memory_prop}, we have that each set of mass or state variables consists of at most $\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil$ bits.
This means that for each node $v_j$, during the operation of Algorithm~\ref{algorithm_max}, the upper bound regarding the number of transmitted bits is equal to $[(n-1)(m+1)] (\lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil)$.
Combining the upper bound regarding the number of transmitted bits with the required energy to transmit a bit in \eqref{eq_trans}, we have that each node $v_j$ requires $p^j_{\text{trans}}$ energy as in \eqref{energy_transmit_result} for its transmission operation. \hspace*{\fill} $\square$
\end{pf}
As a result, if we combine the above results, we obtain the total energy requirements of each node $v_j$ during the operation of Algorithm~\ref{algorithm_max}, which is equal to
\begin{align}
& p^j_{\text{total}} = & \nonumber \\
& (n-1) (\alpha_3 (m + 1 + \mathcal{D}_{max}^-) + \alpha_4 [1 + 2(\mathcal{D}_{max}^-)^2]) A \ + & \nonumber \\
& (n-1)(\alpha_{11} + \alpha_{2} d(v_j)^\eta) (m + 1) A \ \text{nJ}, & \label{energy_energy_result}
\end{align}
where
$
A = \lceil \log_{2} n \rceil + \lceil \log_{2} \sum_{j=1}^n | y_j[0] | \rceil ,
$
$n$ is the number of nodes, $m$ is the number of edges, $\mathcal{D}_{max}^- = \max_{v_j \in \mathcal{V}} \mathcal{D}_j^-$, $\alpha_3$ is decided by the specifications of the receiver node $v_j$ (typical value of $\alpha_3$ is $50$ nJ/bit), $\alpha_4$ is decided by the specifications of the processing node $v_j$ (typical value of $\alpha_4$ is $5$ nJ/bit), and $\alpha_{11}$, $\alpha_{2}$ are constants (typically $45$ nJ/bit and $135$ nJ/bit, respectively).
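The total bound combines the three lemmas; a compact sketch (ours), with the sensing, processing, and transmission terms grouped as in the displayed formula:

```python
from math import ceil, log2

def total_energy_bound(n, m, d_max, sum_abs_y0, dist, path_loss,
                       a3=50, a4=5, a11=45, a2=135):
    """Worst-case per-node energy (nJ): sensing + processing + transmission."""
    A = ceil(log2(n)) + ceil(log2(sum_abs_y0))
    sense = a3 * (m + 1 + d_max)
    comp = a4 * (1 + 2 * d_max ** 2)
    trans = (a11 + a2 * dist ** path_loss) * (m + 1)
    return (n - 1) * (sense + comp + trans) * A

# Hypothetical example: n = 4, m = 5, D_max = 2, sum |y_j[0]| = 8, unit distance.
assert total_energy_bound(4, 5, 2, 8, 1, 2) == 22875
```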
\begin{remark}
Note here that in \eqref{energy_energy_result}, we calculate the total required energy of each node in the worst-case scenario.
The results in Section~\ref{bound_trans_comp} and Section~\ref{energy_constr} can also be extended for the cases where we want to (i) calculate the minimum required energy per node during the operation of Algorithm~\ref{algorithm_max}, and (ii) tune Algorithm~\ref{algorithm_max} to perform under an energy budget.
In the first case, the analysis can consider the topology of the network, the distribution of the initial states of the nodes and the unique order each node assigns to its outgoing edges.
In the second case, by utilizing the analysis of the first case, we can show how we can tune Algorithm~\ref{algorithm_max} to perform under an energy budget.
Calculating the minimum required energy per node and adjusting the algorithm to perform under an energy budget are interesting directions for future research (but outside the scope of this paper). $\hfill \blacksquare$
\end{remark}
\section{SIMULATION RESULTS}\label{results}
In this section, we illustrate the behavior of Algorithm~\ref{algorithm_max} and the advantages of its event-triggered operation.
Specifically, for $1000$ randomly generated digraphs of $20$ nodes with the same (randomly chosen) integer initial values, whose average is $q = 214 / 20 = 10.7$, we show in Fig.~\ref{fig_10_7} (i) the average value of each node state variable at each time step, (ii) the average number of transmissions accumulated until each time step, and (iii) the average number of nodes performing transmissions at each time step.
In Table~\ref{table_max_min_10_7}, we show the minimum, maximum and average values of the (i) total transmissions, and (ii) total required number of time steps for convergence, during the execution of Algorithm~\ref{algorithm_max} over these $1000$ randomly generated digraphs of $20$ nodes.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{Figures/fig_10_7}
\caption{Execution of Algorithm~\ref{algorithm_max} averaged over $1000$ random digraphs of $20$ nodes.
\textit{Top figure:} Average values of node state variables plotted against the number of iterations. \textit{Middle figure:} Average accumulated number of transmissions plotted against the number of iterations. \textit{Bottom figure:} Average number of nodes performing transmissions plotted against the number of iterations.\vspace{-0.2cm}}
\label{fig_10_7}
\end{center}
\end{figure}
In Fig.~\ref{fig_10_7} it is interesting to notice the drop in the average number of nodes which perform transmissions at each time step (see bottom figure).
Specifically, at time step $k=1$, an average of $57$ transmissions is performed, because during initialization each node $v_j$ transmits its state variables and then Event Trigger Conditions~$1$, $2$ and $3$ hold for some nodes in the digraph.
However, the average number of transmissions at time step $k=10$ drops to $1.485$ and becomes almost equal to $1$ for time steps $k \geq 20$, which means that, on average, only one node performs transmissions after approximately $20$ time steps.
Furthermore, we can see that for $k \geq 210$ the average number of transmissions becomes equal to zero (meaning that no node performs transmissions any more) since Event Trigger Conditions~$1$, $2$ and $3$ do not hold for any node.
This means that every node has reached a common value, equal to $10.7$, which is equal to the average of the initial values (see top figure).
As a result, from Fig.~\ref{fig_10_7} we have that Algorithm~\ref{algorithm_max} allows the nodes to reach quantized average consensus after at most $210$ time steps and an average of $240.5$ transmissions.
From Table~\ref{table_max_min_10_7}, notice that the minimum number of transmissions is $103$ and the maximum number of transmissions is $368$ with the average being $240.547$.
Furthermore, it is interesting to note that the minimum number of time steps for convergence is $5$, the maximum is $209$, with the average being $103.875$.
Both results show that in practical scenarios (implemented over random directed graphs) the total number of transmissions and the total number of required time steps for convergence are much lower than the worst case upper bounds calculated in Section~\ref{bound_trans_comp} and Section~\ref{sec:Conv_analysis}, respectively.
\begin{table}[t]
\begin{center}
\captionof{table}{Minimum, Maximum, and Average Number of (i) Total Transmissions and (ii) Time Steps for Convergence, of Algorithm~\ref{algorithm_max} over $1000$ random digraphs of $20$ nodes.}
\label{table_max_min_10_7}
{\small
\begin{tabular}{|c||c|c|c|}
\hline
&\multicolumn{3}{c|}{$\#$ of Transmissions and Time Steps}\\
&Min. & Max. & Average \\
\cline{1-4}
$\#$ of Transmissions & $103$ & $368$ & $240.547$\\
$\#$ of Time Steps & $5$ & $209$ & $103.875$\\
\hline
\end{tabular}
}
\end{center}
\end{table}
\vspace{0.2cm}
\section{CONCLUSIONS}\label{future}
In this work, we analyzed the quantized average consensus problem over wireless networks with nodes that are battery powered or utilize energy harvesting techniques.
Quantized average consensus plays a key role in a number of applications, which aim at more efficient usage of network resources.
We solved the quantized average consensus problem using a novel event-triggered distributed algorithm, which calculates the exact average (i.e., avoids the error introduced due to quantization) after a finite number of iterations, which we explicitly bounded.
Furthermore, we showed that once the quantized average is calculated, transmissions are ceased from each node in the network.
Then, we presented upper bounds on the number of transmissions and computations each node performs during the operation of the algorithm and used them to bound the memory and energy requirements of each node.
Finally, we concluded with simulations which demonstrated the performance and the advantages of our algorithm.
Note here that, to the best of our knowledge, this is the first {\em deterministic} algorithm that converges to the exact quantized average of the initial values after a finite number of time steps without any specific requirements on the network describing the underlying communication topology (see \cite{2016:Chamie_Basar}), while achieving more efficient usage of available network resources due to its event-triggered operation and its transmission-stopping capabilities.
In the future, we plan to extend Algorithm~\ref{algorithm_max} to cases where (i) it performs under an energy budget, (ii) we have time-varying communication topologies, with bounded or unbounded transmission delays, and (iii) nodes aim to preserve the privacy of their initial states.
\vspace{-0.3cm}
\bibliographystyle{plain}
\section{Introduction}
\subsection{Complex reflection groups}
Let ${V}$ be a complex vector space of dimension $n$. A {\it pseudo-reflection} is a non-trivial element $r$ of $\GL({V})$ that acts trivially on a hyperplane, called the reflecting hyperplane of $r$. Let $W$ be a finite subgroup of $\GL({V})$ generated by pseudo-reflections. The pair $({V}, W)$ is called a {\it complex reflection group} and ${{V}}$ is called the {\it reflection representation} of $W$. We assume that ${{V}}$ is irreducible as a representation of $W$.
\subsection{} Denote by $\mathcal{A}$ the set of reflecting hyperplanes of $({V} , W)$ and set $N := | \mathcal{A} |$. Similarly, denote by $\mathcal{R}$ the set of pseudo-reflections of $({V} , W)$ and set $N^\ast:= |\mathcal{R}|$.
\subsection{}\label{c-fn} Let $z = \sum_{r\in \mathcal{R}} (1-r)$, a central element of $\mathbb{C} [W]$. For any $U\in \irr{W}$, we set $c_U$ to be the integer by which $z$ acts on $U$. We define the {\it generalised Coxeter number} $h$ to be the integer $c_{{V}}$. An elementary calculation shows that \begin{equation} \label{coxeternumber} h = \frac{N+N^\ast}{n}.\end{equation}
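As a concrete sanity check of this definition and of \eqref{coxeternumber}, one can verify numerically that $z$ acts on ${V}$ by the scalar $h$ for $W = \mathfrak{S}_3$ (type $A_2$). The sketch below is ours and not part of the development; the matrices are written by hand in the basis $e_1 - e_2$, $e_2 - e_3$ of the reflection representation:

```python
import numpy as np

# The three transpositions of S_3 acting on the 2-dimensional reflection
# representation, in the (hand-computed) basis v1 = e1 - e2, v2 = e2 - e3.
r12 = np.array([[-1, 1], [0, 1]])   # (1 2): v1 -> -v1,      v2 -> v1 + v2
r23 = np.array([[1, 0], [1, -1]])   # (2 3): v1 -> v1 + v2,  v2 -> -v2
r13 = np.array([[0, -1], [-1, 0]])  # (1 3): v1 -> -v2,      v2 -> -v1
reflections = [r12, r23, r13]

# z = sum_r (1 - r) acts on the irreducible V by a scalar (Schur's lemma);
# that scalar is c_V = h.
z = sum(np.eye(2, dtype=int) - r for r in reflections)
assert np.array_equal(z, 3 * np.eye(2, dtype=int))  # c_V = h = 3

# Cross-check with h = (N + N*)/n: here N = N* = 3 and n = 2.
assert (3 + 3) // 2 == 3
```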
\subsection{Invariant theory} Let $P$ denote the ring of polynomial functions on ${V}$. This carries a homogeneous action of $W$ and we set $(P_+^W)$ to be the ideal of $P$ generated by $W$-invariant polynomials with zero constant term. The coinvariant algebra $P^{\mathsf{co} W} := P/(P_+^W)$ carries the regular representation of $W$. Given $U\in\irr{W}$, the {\it exponents} of $U$ $$e_1(U)\leq \ldots \leq e_{\dim U}(U)$$ are the homogeneous embedding degrees of $U$ in $P^{\mathsf{co} W}$. These may be recorded in the {\it fake degree} of $U$ $$f_U(q) = \sum_{i=1}^{\dim U} q^{e_i(U)}.$$ Set $d_i = e_i({V}) +1$ for $i=1, \ldots , n$: these are the {\it degrees} of a minimal set of homogeneous elements generating $P^W$.
\subsection{}
There is a permutation $\Psi \in \Perm (\irr{W})$ such that the fake degrees have a palindromic property
\begin{equation}\label{palin}f_U(q) = q^{c_U} f_{\Psi(U^*)} (q^{-1}).\end{equation}
For complex reflection groups this was first observed by Malle, \cite[Section 6B]{Mal}, and then explained in a case-free manner by Opdam, \cite[Proposition 7.4]{Opd}.
\subsection{$q$-Fuss-Catalan numbers} \label{fuss}
For any positive integer $i$, set $[i]_q := 1+ q+ \cdots + q^{i-1}$. For a non-negative integer $m$, we define the $m$th {\it $q$-Fuss-Catalan number} of $({V} , W)$ to be \begin{equation} \label{thenumbers} C^{(m)}_W(q) = \prod_{i=1}^n \frac{[mh+1+e_i(\Psi^m({V}^*)^*)]_q}{[d_i]_q}.\end{equation}
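In the well-generated case discussed below the numerator exponents reduce to $mh + d_i$, and \eqref{thenumbers} can then be evaluated by polynomial division. The following sketch (ours; helper names are arbitrary) computes $C^{(m)}_W(q)$ for type $A_2$, where the degrees are $2, 3$ and $h = 3$:

```python
def q_int(i):
    """Coefficient list of the q-integer [i]_q = 1 + q + ... + q^(i-1)."""
    return [1] * i

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_divmod(num, den):
    """Long division of coefficient lists (den is monic here)."""
    num, quot = num[:], [0] * (len(num) - len(den) + 1)
    for k in range(len(quot) - 1, -1, -1):
        quot[k] = num[k + len(den) - 1] // den[-1]
        for j, dj in enumerate(den):
            num[k + j] -= quot[k] * dj
    return quot, num  # num now holds the remainder

def q_fuss_catalan(degrees, h, m):
    """prod_i [m*h + d_i]_q / prod_i [d_i]_q for a well-generated group."""
    num, den = [1], [1]
    for d in degrees:
        num = poly_mul(num, q_int(m * h + d))
        den = poly_mul(den, q_int(d))
    quot, rem = poly_divmod(num, den)
    assert not any(rem)  # the quotient is a genuine polynomial
    return quot

# Type A_2 (W = S_3): degrees 2, 3 and h = 3.
assert q_fuss_catalan([2, 3], 3, 1) == [1, 0, 1, 1, 1, 0, 1]
```

For $m=1$ this returns the coefficient list of $1 + q^2 + q^3 + q^4 + q^6$, which specialises to the Catalan number $5$ at $q = 1$.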
\begin{theorem} \label{mainthm} The rational function $C_W^{(m)}(q)$ belongs to $\mathbb{N}[q]$. Assuming Hypothesis~\ref{hecke} holds, two reasons for this are: \begin{enumerate} \item $C_W^{(m)}(q)$ is the Hilbert series of $(P/(\Theta))^W$ where $\Theta$ is a homogeneous system of parameters of degree $mh+1$ carrying the $W$-representation $\Psi^m({V}^*)$;
\item $C_W^{(m)}(q)$ is the graded character of the finite dimensional irreducible representation $eL_{m+1/h}({\sf triv})$ of the spherical rational Cherednik algebra $U_{m+1/h }(W)$.
\end{enumerate}
\end{theorem}
\subsection{$(q,t)$-Catalan numbers} \label{q,t}
The description of $C_W^{(1)}(q)$ in the second part of the theorem allows us to define a $(q,t)$-Catalan number for all $W$ as follows. The rational Cherednik algebra representation $L_{1+1/h}({\sf triv})$ contains a unique copy of $\wedge^n {V}^*\in \irr{W}$: using this element to generate $L_{1+1/h}({\sf triv})$ one can then construct a filtration whose associated graded module carries the $(q,t)$-Catalan number by definition. By \cite[Theorem 5.11]{GoSt} this agrees with the definition of Garsia-Haiman in the symmetric group case, and should agree with the conjectural construction in \cite{stump}.
\subsection{Cyclic sieving phenomena} Let $d$ be a {\it regular} number for $(V,W)$ and let $\zeta = \exp(2\pi \sqrt{-1}/d)$, see for example \cite[2.2]{BeRe}. As pointed out in \cite[(5.1)]{OrSo}, for any $U\in \irr{W}$ there exists a permutation $\sigma \in \mathfrak{S}_n$ such that $$e_i(U) + e_{\sigma (i)}(U^*) \equiv 0 \quad \text{mod }d.$$ Combining this with the duality encoded in \eqref{palin} one shows by induction on $m$ that there exists a permutation $\rho\in\mathfrak{S}_n$ such that for all $i$ $$d_{\rho(i)} \equiv mh+1+ e_i(\Psi^m(V^*)^*) \quad \text{mod }d.$$ It follows from \cite[Proposition 3.2]{BeRe} that $C_W^{(m)}(\zeta^t)$ is a positive rational number for any $t\in \mathbb{Z}$; by the theorem it is also an algebraic integer. Hence $C_W^{(m)}(\zeta^t)$ is a positive integer for all $m$ and all $t$, and therefore a candidate for a cyclic sieving phenomenon.
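For example, for $W = \mathfrak{S}_3$ (type $A_2$) one has $C_W^{(1)}(q) = 1 + q^2 + q^3 + q^4 + q^6$, and $d = 2, 3$ are regular numbers; the following sketch (ours) checks numerically that the evaluations at the corresponding roots of unity are positive integers:

```python
import cmath

# Exponent -> coefficient for C^{(1)}_{S_3}(q) = 1 + q^2 + q^3 + q^4 + q^6.
coeffs = {0: 1, 2: 1, 3: 1, 4: 1, 6: 1}

def eval_at_root(d, t=1):
    """Evaluate the polynomial at zeta_d^t, where zeta_d = exp(2*pi*i/d)."""
    zeta = cmath.exp(2j * cmath.pi * t / d)
    return sum(c * zeta ** e for e, c in coeffs.items())

# d = 2 and d = 3 are regular for A_2; both evaluations are positive integers.
for d, expected in [(2, 3), (3, 2)]:
    assert abs(eval_at_root(d) - expected) < 1e-9
```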
\subsection{Well-generated case} The pair $({V} , W)$ is {\it
well-generated} if $W$ can be generated by $n$ pseudo-reflections. It
is observed in \cite[Section 5]{OrSo} that in this case $h=d_n$. It
can be shown using \eqref{palin} and \cite[Proposition 5.2]{OrSo} that $e_i(V) + e_{n-i}(V^*)= h = e_i(\Psi^m(V^*)^*) + e_{n-i}(V^*)$, so that in this case the formula for the $q$-Fuss-Catalan number simplifies, $$C^{(m)}_W(q) = \prod_{i=1}^n \frac{[mh+d_i]_q}{[d_i]_q}.$$ This is the standard definition of $q$-Fuss-Catalan numbers for well-generated groups which is used throughout the literature.
\subsection{}
In fact, a case-by-case observation made by Malle \cite[Corollary
4.9]{Mal} shows that $\Psi ({V}^\ast) = {V}^\ast$ if and only if
$({V}, W)$ is well-generated. Thus, the first part of the theorem
above confirms \cite[Conjecture 4.3(i)]{BeRe} for those $W$ for which
Hypothesis~\ref{hecke} holds.
\subsection{Galois twists} We prove an analogue of the theorem above
for rational Cherednik algebras at any parameter $p/h$ where $p$ is a
positive integer coprime to $h$. The formulation of this theorem uses
certain twists of ${V}$ by Galois automorphisms of $\mathbb{C}$, as well as the permutation $\Psi$ of $\irr{W}$. See Theorem \ref{theorem1} for the precise statement.
\subsection{}
In particular, for well-generated groups the graded character of $eL_{p/h}({\sf triv})$ is
\begin{equation} \label{twistcat}\prod_{i=1}^n \frac{[p+e_i({}^g V)]_q}{[d_i]_q},\end{equation}
where $g$ is an automorphism of $\mathbb{C}$ which maps $e^{2 \pi \sqrt{-1}/h}$ to $e^{2 \pi \sqrt{-1}p/h}$. This generalises the formula for the symmetric group $\mathfrak{S}_n$ $$\frac{1}{[n+p]_q} \left[ \begin{matrix} n+p \\ n \end{matrix}
\right]_q.$$
Moreover, if $p = mh-1$ then \eqref{twistcat} confirms
\cite[Conjecture 4.3(ii)]{BeRe} on ``positive" $q$-Fuss-Catalan
numbers for those $W$ for which Hypothesis~\ref{hecke} holds.
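At $q = 1$ the displayed formula for $\mathfrak{S}_n$ specialises to the rational Catalan number $\frac{1}{n+p}\binom{n+p}{n}$, which is an integer whenever $\gcd(n, p) = 1$. A short numerical check (ours):

```python
from math import comb, gcd

def rational_catalan(n, p):
    """q = 1 specialisation of (1/[n+p]_q) qbinom(n+p, n)_q for gcd(n, p) = 1."""
    assert gcd(n, p) == 1
    num = comb(n + p, n)
    assert num % (n + p) == 0  # integrality for coprime n and p
    return num // (n + p)

assert rational_catalan(3, 4) == 5
assert rational_catalan(3, 5) == 7
```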
\subsection{Layout of the paper} We give the proof of Theorem \ref{mainthm} and its Galois twist version in the next section, together with details for the other results mentioned here. Our key tools are the equivalences of highest weight categories discovered by Rouquier in \cite{Rou} and Opdam's study of the monodromy of the Knizhnik-Zamolodchikov connection in \cite{Opd}. In the third section we give data for exponents of $\Psi^m({V}^*)^*$ in the case where $({V}, W)$ is not well-generated, thus giving explicit formulae for the associated $q$-Fuss-Catalan numbers. These data were gathered with the help of the Chevie package in GAP.
\section{Proofs}
\subsection{Rational Cherednik algebras}
Let $R=\mathbb{C}[[{\bf k}]]$ be the ring of
formal power series in the indeterminate ${\bf k}$, and let $Q = \mathbb{C}(({\bf k}))$ be its field of fractions. For any $\mathbb{C}$-vector space $M$, we write $M_R$ for the extension $R\otimes_{\mathbb{C}} M$. Let $S$ be the symmetric algebra $\textsf{Sym}(V)$ of $V$. Let $k$ be a rational number which we will call the {\it parameter}.
\subsection{} The
\emph{rational Cherednik algebra} $\mathbb{H}_{R, {k}}({V} ,W)$ is the quotient of the $R$-algebra $T({V} \oplus {V}^*)_R \rtimes W$ by relations $xy=yx$ if
$x,y \in {V}$ or $x,y \in {V}^*$, and
\begin{equation}
yx-xy=\langle x,y \rangle+ ({\bf k} + k)\sum_{H \in \mathcal{A}} \frac{\langle x,\alpha_H^\vee
\rangle \langle \alpha_H, y \rangle}{\langle \alpha_H, \alpha_H^\vee \rangle} \sum_{w\in W_H} (1-\det(w)^{-1})w
\end{equation} if $x \in {V}^*$ and $y \in {V}$. Both $S_{R}$ and $P_{R}$ are subalgebras of $\mathbb{H}_{R, k}({V} , W)$. Here and throughout we will drop as many parts of the notation as we can: for instance we will write $\mathbb{H}_{R}$ if both $k$ and the pair $({V} , W)$ are clear from the context. We write $\mathbb{H}_{\mathbb{C}, k}({V} , W)$ or $\mathbb{H}_{\mathbb{C}}$ for the specialisation of $\mathbb{H}_{R, k}$ to $\mathbb{C}$ and similarly $\mathbb{H}_{Q, k}({V} , W)$ or $\mathbb{H}_Q$.
\subsection{Category $\mathcal{O}$}
\emph{Category $\mathcal{O}_{R, k}$} is the full subcategory of the category of
finitely generated $\mathbb{H}_{R}$-modules consisting of those modules on which the operators in $V\subset S_{R} \subset \mathbb{H}_{R}$ act locally nilpotently. It is a highest weight category, \cite[Definition 4.11]{Rou},
with standard objects $\Delta_{R,k}(U) :=\text{Ind}^{\mathbb{H}_{R}}_{S_{R} \rtimes W} U_{R}$ labelled by $U\in \irr{W}$ and partial order $U <_{k} U'$ if
$k (c_{U'}-c_{U}) \in \mathbb{Z}_{>0}$ in the notation of \ref{c-fn}. We let $\mathcal{O}_{R, k}^{\Delta}$ denote the full subcategory of $\mathcal{O}_{R, k}$ whose objects admit a filtration by standard objects.
There are analogous definitions for $\mathcal{O}_{\mathbb{C},k}$ and $\mathcal{O}_{Q,k}$, and base-change functors from $\mathcal{O}_{R, k}$ to both categories.
\subsection{Hecke algebras and the KZ functor}
\label{hecke}
For each hyperplane $H \in \mathcal{A}$ let $e_H$ be the order
of the subgroup $W_H$ of $W$ that fixes $H$
pointwise. Let ${\fh^{\text{reg}}} = {V} \setminus \bigcup_{H\in \mathcal{A}} H$, choose $x_0\in {\fh^{\text{reg}}}$, and let $B_W = \pi_1 ({\fh^{\text{reg}}}/W , x_0) $, the braid group of $W$. Let $\mathbf{H}_{R, k}$ be the
\emph{Hecke algebra} of $W$ over $R$, \cite[Section 4.C]{BrMaRo}, the quotient of the group
algebra $R[B_W]$ by relations
\begin{equation}
\label{Heckerelation}
(T_H - e^{2\pi i ({\bf k} + k)})\prod_{j=1}^{e_H-1} (T_H - \zeta_H^j) = 0
\end{equation} where $\zeta_H = e^{2 \pi i/e_H}$ and $T_H$ runs over a set of
generators of $B_W$ corresponding to a minimal set of reflecting hyperplanes.
\begin{hypothesis}
The algebra ${\bf H}_{R, k}$ is free over $R$ of rank $|W|$.
\end{hypothesis}
This is currently known to hold for all real $W$, all $W$ in the
infinite family $G(r,p,n)$, and most of the exceptional groups; see \cite{MaMi} for a recent report.
\subsection{} Following \cite[Section 5]{GGOR} there is an exact functor $${\sf KZ}_{R, k} : \mathcal{O}_{R, k} \longrightarrow {\bf H}_{R, k}\md $$ such that for $M,N \in \mathcal{O}_{R,k}^{\Delta}$ the natural map $\Hom_{\mathcal{O}_{R}} (M,N) \longrightarrow \Hom_{{\bf H}_{R}}({\sf KZ}_{R}(M) , {\sf KZ}_{R}(N))$ is an isomorphism. Similarly, there are functors ${\sf KZ}_{Q}: \mathcal{O}_{Q} \longrightarrow {\bf H}_{Q}\md$ and ${\sf KZ}_k: \mathcal{O}_{k} \longrightarrow {\bf H}_{k}\md$. The first is even an equivalence of categories, \cite[Corollary 2.20 and Theorem 5.14]{GGOR}, and so gives a bijection $$\tau_k: \irr{W} = \irr {\mathcal{O}_{Q,k}} \stackrel{\sim}\longrightarrow \irr {{\bf H}_{Q,k}}.$$
If $e^{2 \pi i k}\neq \zeta_H^j$ for all hyperplanes $H$ and integers $1\leq j \leq e_H-1$, then ${\sf KZ}_{R, k}$ is ``$1$-faithful'' in the sense that if $M,N \in \mathcal{O}_{R,k}^{\Delta}$ then $\text{Ext}^1_{\mathcal{O}_R}(M,N) \longrightarrow \text{Ext}^1_{{\bf H}_R}({\sf KZ}_R(M),{\sf KZ}_R(N))$ is an isomorphism, \cite[Theorem 5.3]{Rou}.
\subsection{Equivalences} \label{equivalences}
Let $g\in\text{Aut}(\mathbb{C}/\mathbb{Q})$ and
let $k$ and $k'$ be parameters such that
\begin{equation} \label{g related}
g(e^{2 \pi i k})=e^{2 \pi i k'}.
\end{equation} Then $g$ extends to an automorphism of $R$ which fixes
$\mathbf{k}$, and the isomorphism $\gamma: R[B_W] \longrightarrow R[B_W]^g$ defined by $\gamma(\sum c_b b) = \sum g(c_b) b$
descends to an isomorphism of ${\bf H}_{R,k}$ onto
${\bf H}_{R, k'}^g$. Let $\mathcal{O}_{R,k'}^g$ denote the category whose objects and morphisms are the same as those for $\mathcal{O}_{R,k'}$ as sets, but such that the $R$-linear structure on morphisms is twisted by $g$. Then ${\sf KZ}_{R,k'}$ induces an $R$-linear functor from $\mathcal{O}_{R,k}^g$ to ${\bf H}_{R,k}^g\md$.
Changing base to $Q$ this defines a permutation $\phi_{k,k'}^g\in\text{Perm}(\irr{W})$ via the isomorphism in ${\bf H}_{Q,k'}^g\md$ \begin{equation} \label{perm} {}^{\gamma}{\sf KZ}_{Q,k}(\Delta_{Q,k}(U)) \cong {\sf KZ}_{Q,k'}(\Delta_{Q,k'}(\phi_{k,k'}^g(U))).\end{equation}
\subsection{} The following theorem is at the heart of our work.
\begin{theorem}[{\cite[Theorems 4.49 and 5.5]{Rou}}] \label{equiv theorem}
Keep the above notation and assume that ${\sf KZ}_{R,k}$ and ${\sf KZ}_{R,k'}$ are both $1$-faithful. If $U <_{k} U'$ if and only if $\phi_{k,k'}^g(U)<_{k'} \phi_{k,k'}^g(U')$, then there is an equivalence
$S_{k,k'}:\mathcal{O}_{R,k} \stackrel{\sim}\longrightarrow \mathcal{O}_{R,k'}^g$
of highest weight categories such that
$S_{k,k'}(\Delta_{R,k}(U))=\Delta_{R,k'}(\phi_{k,k'}^g(U))$ for all $U\in\irr{W}$.
\end{theorem}
The equivalence of categories in this theorem can be specialised to produce an equivalence $\mathcal{O}_{\mathbb{C}, k}\longrightarrow \mathcal{O}_{\mathbb{C},k'}^g$ with the same properties as above.
\subsection{Local data} \label{local} By construction, the permutation $\phi_{k,k'}^g$ preserves the dimension of a representation $U\in\irr{W}$. It is an important result of Opdam that $\phi_{k,k'}^g$ also respects the {\it local data} of $U$, that is the set of integers $\{ n_{H,j}^U \}$ defined by $$\text{Res}_{W_H}^W U \cong \bigoplus_{0\leq j\leq e_H-1} n_{H,j}^U \text{det}^{-j}.$$ To be explicit, \cite[(3.8)]{Opd} shows that the action of $T_H$ on ${\sf KZ}_{Q,k}(\Delta_{Q,k}(U))$ diagonalises to $$M_H(k) = \text{diag}( \zeta_H^0 e^{2\pi i ({\bf k} + k)}\id_{n_{H,0}^U}, \zeta_H^1 e^{2\pi i {\bf k}}\id_{n_{H,1}^U}, \cdots , \zeta_H^{e_H-1} e^{2\pi i {\bf k}}\id_{n_{H,e_H-1}^U}).$$ It then follows from the definition in \eqref{perm} that $n_{H,j}^{\phi_{k,k'}^g(U)} = n_{H,j}^{{}^{g}U}$ where ${}^gU$ is the representation of $W$ obtained by applying $g$ to the entries in a matrix representation of $U$.
There are two useful consequences. First, $\phi_{k,k'}^g(\textsf{triv}) = \textsf{triv}$ for any automorphism $g$. Second, recall the element $z\in \mathbb{Z}[W]$ introduced in \ref{c-fn}. It may be written as $z = N+N^{\ast} - \sum_{H\in \mathcal{A}} \sum_{w\in W_H}w$ and from this it follows that $c_U = N+N^{\ast} - (\dim U)^{-1}\sum_{H\in\mathcal{A}} e_Hn_{H,0}^U.$ Since twisting by $g$ does not change the dimension of the trivial eigenspace, we have $n_{H,0}^{\phi_{k,k'}^g(U)} = n_{H,0}^{{}^gU} = n_{H,0}^U$. We deduce that $c_{\phi_{k,k'}^g(U)}=c_U
$ for all $U\in \irr{W}$.
\subsection{}
We can now check the hypothesis of the above theorem in the case we will require.
\begin{corollary} \label{Galois equiv}
Let $g$ be an automorphism of $R$ as in \ref{equivalences}, with $g(e^{2\pi i k}) = e^{2\pi ir k}$ where $r\in \mathbb{R}_{> 0}$.
If ${\sf KZ_{R,k}}$ is $1$-faithful, then there is an equivalence
$\mathcal{O}_{\mathbb{C},k} \longrightarrow \mathcal{O}_{\mathbb{C},rk}^g$ of highest weight covers of ${\bf H}_{\mathbb{C},k} \cong {\bf H}_{\mathbb{C},rk}^g$, mapping $\Delta_{k}(U)$ to $\Delta_{rk}(\phi_{k,k'}^g(U))$ for all $U \in \irr{W}$.
\end{corollary}
\begin{proof}
Observe first that ${\sf KZ}_{R,rk}$ is $1$-faithful by \eqref{g
related} since ${\sf KZ}_{R,k}$ is $1$-faithful. By \eqref{g related}
both $e^{2 \pi i k}$ and $e^{2\pi i rk}$ are roots of unity of the
same order, so $k(c_{U'} - c_{U})\in \mathbb{Z}$ if and only if $rk
(c_{\phi_{k,k'}^g(U')}- c_{\phi_{k,k'}^g(U)} ) = rk(c_{U'} - c_{U})\in
\mathbb{Z}$. Since $r$ is positive, we deduce that $U<_k U'$ if and only if
$\phi_{k,k'}^g(U)<_{rk} \phi_{k,k'}^g(U')$. The corollary follows from
the statement following Theorem \ref{equiv theorem}. \end{proof}
\subsection{Catalan numbers}
Let $L_{k}(\textsf{triv})$ denote the simple quotient of $\Delta_k(\textsf{triv})$. The following lemma is proved by the same argument as \cite[Proposition 2.1]{BEG}.
\begin{lemma} \label{basic} $L_{-\frac{1}{h}}(\textsf{triv}) = \textsf{triv}$.\end{lemma}
\subsection{} We now prove our main result, answering a question of the second author, \cite[Section 8]{Gri}, and giving a general and case-free construction of the Koszul resolutions produced in \cite{BEG, Gor, Gri, Val}.
\begin{theorem} \label{theorem1}
Let $r$ be a positive integer coprime to $h$ and suppose $g\in\text{Aut}(\mathbb{C}/\mathbb{Q})$ sends $e^{-2 \pi i /h}$ to $e^{-2 \pi i r/h}$. Then there is an exact sequence in $\mathcal{O}_{\mathbb{C},-\frac{r}{h}}$
$$0\rightarrow \Delta_{-\frac{r}{h}}(\wedge^n\phi_{k,k'}^g({V}^*)) \rightarrow \cdots \rightarrow \Delta_{-\frac{r}{h}}(\wedge^2\phi_{k,k'}^g({V}^*)) \rightarrow \Delta_{-\frac{r}{h}}(\phi_{k,k'}^g({V}^*)) \rightarrow \Delta_{-\frac{r}{h}}(\textsf{triv}) \rightarrow L_{-\frac{r}{h}}(\textsf{triv}) \rightarrow 0$$
with $L_{-\frac{r}{h}}(\textsf{triv})$ finite dimensional. \end{theorem}
\begin{proof}
The rank one case is easy and left to the reader; we assume that the rank is at least two. In the case $k=-1/h$ we have by Lemma \ref{basic} an exact sequence $\Delta_{-\frac{1}{h}}({V}^*) \longrightarrow \Delta_{-\frac{1}{h}}(\textsf{triv}) \longrightarrow L_{-\frac{1}{h}}(\textsf{triv}) \rightarrow 0.$ Since $L_{-\frac{1}{h}}(\textsf{triv})$ is finite dimensional, it is elementary that this extends to a resolution in $\mathcal{O}_{\mathbb{C},-\frac{1}{h}}$ \begin{equation} \label{resol} 0\rightarrow \Delta_{-\frac{1}{h}}(\wedge^n{V}^*) \rightarrow \cdots \rightarrow \Delta_{-\frac{1}{h}}(\wedge^2{V}^*)\rightarrow \Delta_{-\frac{1}{h}}({V}^*) \rightarrow \Delta_{-\frac{1}{h}}(\textsf{triv}) \rightarrow L_{-\frac{1}{h}}(\textsf{triv}) \rightarrow 0,\end{equation} see \cite[Lemma 3.1]{Gri}.
We wish to apply Corollary~\ref{Galois equiv} to this resolution, so we need to know that for all complex reflection groups of rank
at least $2$ we have inequalities $e^{-2 \pi i/h} \neq \zeta_H^j$
for all hyperplanes $H$ and $1\leq j\leq e_H-1$. To see this, observe first that $
nh=\sum_{H \in \mathcal{A}} e_H$. Since $W$ is acting irreducibly on ${V}$ there are at least $n$ summands on the right hand side
accounted for by a $W$-orbit of hyperplanes $H$ maximizing $e_H$. Thus
$h \geq e_H$ for all $H$. If we suppose that $h=e_H$ for some $H$, then we must have $\mathcal{A} = W\cdot H$ and $N=n$. Choosing linear
forms defining the hyperplanes gives a basis of ${V}^*$, and restricting any invariant polynomial to these hyperplanes shows that the degrees $d_i$ of the homogeneous generators of $P^W$ are all divisible by $e_H$. By Molien's theorem, however, $\sum d_i = N^* + n = ne_H$ and so $d_i = e_H$ for each $i$. This implies that $W$ is a product of $n$ cyclic groups; but since there is assumed to be only one orbit of hyperplanes, we deduce that $n=1$. Thus, since we assume the rank is greater than $1$, we have $h>e_H$ for all $H$, and this implies the required inequalities.
Applying the equivalence of Corollary~\ref{Galois equiv} to \eqref{resol} produces an exact sequence in $\mathcal{O}_{\mathbb{C},-\frac{r}{h}}^g$
\begin{eqnarray} \label{resol1} 0\rightarrow \Delta_{-\frac{r}{h}}(\phi_{k,k'}^g(\wedge^n{V}^*)) \rightarrow \cdots &\rightarrow &\Delta_{-\frac{r}{h}}(\phi_{k,k'}^g(\wedge^2 {V}^*)) \rightarrow \Delta_{-\frac{r}{h}}(\phi_{k,k'}^g({V}^*)) \notag \\ && \rightarrow \Delta_{-\frac{r}{h}}(\phi_{k,k'}^g(\textsf{triv})) \rightarrow L_{-\frac{r}{h}}(\phi_{k,k'}^g(\textsf{triv})) \rightarrow 0.\end{eqnarray}
By \cite[Corollary 4.14]{GGOR} $L_{-\frac{r}{h}}(\phi_{k,k'}^g(\textsf{triv}))$ is finite dimensional and by \ref{local} $\phi_{k,k'}^g(\textsf{triv}) = \textsf{triv}$. It follows that the image of the generating weight space $\phi_{k,k'}^g(V^*)\subset \Delta_{-\frac{r}{h}}(\phi_{k,k'}^g(V^*))$ in $\Delta_{-\frac{r}{h}}(\textsf{triv})\cong P$ is the linear span of a regular sequence $\Theta$. Thus \eqref{resol1} is just a Koszul resolution when restricted to $P\rtimes W$ and it follows that the generating weight spaces $\phi_{k,k'}^g(\wedge^iV^*)\subset \Delta_{-\frac{r}{h}}(\phi_{k,k'}^g(\wedge^iV^*))$ must be isomorphic to $\wedge^i\phi_{k,k'}^g(V^*)$. Considering the sequence in $\mathcal{O}_{\mathbb{C}, -\frac{r}{h}}$ instead of $\mathcal{O}_{\mathbb{C}, -\frac{r}{h}}^g$ completes the proof of the theorem.
\end{proof}
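To illustrate the counting $nh=\sum_{H\in\mathcal{A}}e_H$ and the inequality $h>e_H$ used in the proof, consider (this worked example is ours, not taken from the text) the Weyl group $W=B_2=G(2,1,2)$: here $n=2$, $h=4$, and $\mathcal{A}$ consists of four hyperplanes, each with $e_H=2$, falling into two $W$-orbits of size two. Indeed $\sum_{H\in\mathcal{A}}e_H = 4\cdot 2 = 8 = nh$, a $W$-orbit of hyperplanes supplies $2\geq n$ summands, and $h=4>2=e_H$ for every $H$, so $e^{-2\pi i/h}\neq \zeta_H^j$ for $1\leq j\leq e_H-1$ as required.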
\subsection{} We apply this theorem first to the case $r = mh+1$ for some positive integer $m$. In this case we may take $g$ to be the identity. It is not true, however, that $\phi_{k,k'}^{\text{id}}$ is the trivial permutation of $\irr{W}$! This is a consequence of the fact that the ${\sf KZ}$ functor varies in the parameter ${\bf k}$ rather than in its exponential $e^{2\pi i {\bf k}}$: this phenomenon has been studied and applied by Opdam, \cite{Opd, Opd1}.
Let $\Psi = \phi_{-\frac{1}{h}, -1-\frac{1}{h}}^{\text{id}} \in
\text{Perm}(\irr{W})$, so that in this case $\Psi$ is the permutation
on $\irr{W}$ induced by the equivalence ${\sf KZ}^{-1}_{K,k-1}\circ
{\sf KZ}_{K,k} : \mathcal{O}_{K,k} \longrightarrow
\mathcal{O}_{K,k-1}$ applied at $k=-\frac{1}{h}$. Then $\Psi^m$ equals $\phi_{-\frac{1}{h}, -m - \frac{1}{h}}^{\text{id}}$ for general $m$. It follows from \cite[Proposition 7.4]{Opd} that $\Psi$ satisfies \eqref{palin}.
\subsection{Proof of Theorem \ref{mainthm}}
Part (1) is now a straightforward application of the standard invariant theory arguments in \cite[Proposition 4.2]{BeRe}. To apply these we need to know the degree of the image of the generating set $\Theta \subset \Delta_{-m-\frac{1}{h}}(\Psi^m(V^*))$ in $\Delta_{-m-\frac{1}{h}}(\textsf{triv}) \cong P$ used in Theorem \ref{theorem1}: this is just $(m+\frac{1}{h})c_{\Psi^m(V^*)} = (m+\frac{1}{h})c_{V^*} = mh+1$. It now follows from \cite[(4.3)]{BeRe} that the graded $W$-character of $(P/(\Theta))^W$ is given by the formula for $C^{(m)}_W(q)$ in \ref{fuss}. (Although the formula in \cite{BeRe} is stated only for Galois conjugates ${}^{\sigma}V$ of $V$, this hypothesis is used to know that $(P\otimes \wedge^{\bullet} ({}^{\sigma}V))$ is free over $P^W$; but by \cite[Theorem 3.1]{OrSo} freeness holds for all representations $U\in \irr{W}$ satisfying $\sum_{i} e_i(U) = e_1(\wedge^{\text{top}} U)$. This equality holds for $U = \Psi^m({V}^*)$ since, by \cite[Lemma 2.1]{Opd}, $\sum_i e_i(U) = \sum_{H\in \mathcal{A}} \sum_{j=1}^{e_H-1} jn_{H,j}^U$ for any $U\in\irr{W}$, so by \ref{local} $\sum_i e_i(\Psi^m {V}^*) = \sum_i e_i({V}^*) = e_1(\wedge^{\text{top}} {V}^*) = e_1 (\wedge^{\text{top}} (\Psi^m {V}^*)).$)
Part (2) follows since for any pair of dual bases $\{x_i\}, \{y_i\}$ of ${V}^*$ and ${V}$ the element ${\bf h} = \frac{1}{2}\sum_{i=1}^n (x_iy_i + y_ix_i)\in \mathbb{H}_{\mathbb{C},-m-\frac{1}{h}}$ acts as a grading operator where non-zero elements of ${V}^*\subset P$ have degree $1$, see for instance \cite[Section 3.1]{GGOR}. \hfill $\Box$
\subsection{(q,t)-Catalan numbers} To deduce the existence of $(q,t)$-Catalan numbers as in \ref{q,t} we need to show that $\wedge^n {V}^*$ appears with multiplicity one in $L_{-1-\frac{1}{h}}(\textsf{triv})$. Now a reflection $s_H$ acts on $\wedge^{\sf top}U$ by the scalar $\zeta_H^{- \sum_j n_{H,j}^U}$, so it follows from \ref{local} that $\wedge^n {V}^* \cong \wedge^n \Psi ({V}^*)$. The multiplicity one result is \cite[Theorem 3.2]{Gri} if we show that $e_i(\Psi ({V}^*)) + d_{n-i} = h+1$ for all $i$. But this is an immediate consequence of \eqref{palin}.
\subsection{} We also remark that the above observation produces a $W$-stable quotient of the diagonal coinvariant ring ${\sf Sym}({V} \oplus {V}^*)^{{\sf co} W}$ with pleasant properties, see \cite[Theorem 3.2]{Gri}.
\subsection{Calculating $\phi_{k,k'}^g$} \label{calculate} Let
$E={\mathbb C}[{\bf v}^{\pm 1}]$ be the ring of Laurent polynomials in the variable ${\bf v}$ and let $K= \mathbb{C} ( {\bf v})$. We may define a Hecke algebra ${\bf H}_{E}$ as a quotient of $E[B_W]$ by the relations \eqref{Heckerelation}, where we replace $e^{2\pi i ({\bf k}+k)}$ by ${\bf v}^{\ell}$ for some positive integer $\ell$.
By \cite[Theorem 6.7]{Opd}, for $\ell$ large enough depending on $W$, $K$ is a splitting field for $\mathbf{H}_{E}$. (In fact by \cite[Corollary 4.8]{Mal} we may take $\ell$ to be the number of roots of unity belonging to the field of definition of the $W$-representation $V$.) Fix such an $\ell$ and let $\chi_U$ be the character of ${\sf KZ}_{K,k}(\Delta_{K,k}(U))$, so that $\chi_U (b) \in K$ for all $b\in B_W$. By definition we then have $$\chi_{\phi_{k,k'}^g(U)}(b)\vert_{{\bf v} = e^{2 \pi i ({\bf k}+k')/\ell}}=g\left(\chi_{U}(b) \vert_{{\bf v} = e^{2 \pi i ({\bf k}+k)/\ell}} \right).$$ This may be rewritten as \begin{equation} \label{whatweuse}\chi_{\phi_{k,k'}^g(U)}(b) ({\bf v}) =(g \chi_{U}(b))(\eta{\bf v})\end{equation} where $\eta = e^{-2 \pi i k'/\ell} g \left(e^{2\pi i k/\ell}\right)$ is an $\ell$th root of unity and $g$ acts on $\chi_U(b) \in K$ fixing ${\bf v}$. This last formula gives an effective method for calculating $\phi^g_{k,k'}$ in examples.
\subsection{} \label{dual} We can now also illustrate Theorem \ref{theorem1} by applying it in the case $k=-\frac{1}{h}$ and $r = mh-1$ with $g$ being complex conjugation. In this situation we have $\eta = e^{2\pi i m/\ell}$, so it follows from \eqref{whatweuse} that $\phi^g_{-\frac{1}{h}, -m+\frac{1}{h}} (U) = \phi_{-\frac{1}{h},-m-\frac{1}{h}}^{\text{id}}(U^*) = \Psi^m(U^*)$. Considerations similar to the proof of Theorem \ref{mainthm} then show that in this case $(P/(\Theta))^W$ has Hilbert series $$\prod_{i=1}^n \frac{[mh-1+e_i(\Psi^m(V)^*)]_q}{[d_i]_q}.$$
\subsection{Well-generated case} \label{wgc} With the exception of confirming Hypothesis \ref{hecke}, our arguments so far have been entirely case-free. Arguing case-by-case, however, \cite[Corollary 4.9]{Mal} shows that $\mathbb{C}({\bf v}^{\ell})$ is a splitting field for the reflection representation of ${\bf H}_E$ and its dual precisely when $(V,W)$ is well-generated. It follows that $\chi_{{V}^*}(b)$ is a function in ${\bf v}^{\ell}$ and so \eqref{whatweuse} implies that $\phi^g_{k,k'}({V}^*) = {}^g ({V}^*)$. Thus, in this case, Theorem \ref{mainthm} confirms \cite[Conjecture 4.3(i)]{BeRe} and Theorem \ref{theorem1} combined with the analysis in \ref{dual} confirms \cite[Conjecture 4.3(ii)]{BeRe}.
\subsection{Remarks on several parameters}
In general rational Cherednik algebras depend on parameters $\{ k_{H,j} : H\in \mathcal{A}/W, 0\leq j \leq e_H-1 \}$; we have considered only the case where $k_{H,0} = k$ and $k_{H,j} = 0$ for $j\neq 0$. Nevertheless, the techniques we use here to analyse the equivalences of \cite{Rou} extend to the general case.
Explicitly, there is a permutation $\phi^g_{(k_{H,j}), (k'_{H,j})} \in \text{Perm}(\irr{W})$ attached to a potential shift from $\mathcal{O}_{\mathbb{C}, (k_{H,j})}$ to $\mathcal{O}_{\mathbb{C}, (k'_{H,j})}$ and to apply \cite[Theorems 4.49 and 5.5]{Rou} to deduce an equivalence, one must check the condition $U <_{(k_{H,j})} U'$ if and only if $\phi_{(k_{H,j}),(k'_{H,j})}^g(U)<_{(k'_{H,j})} \phi_{(k_{H,j}),(k'_{H,j})}^g(U')$. By definition, $U <_{(k_{H,j})} U'$ if $\sum_{H\in \mathcal{A}} \sum_{j=0}^{e_H-1} e_H k_{H,j}\left((\dim U)^{-1}n_{H,-j}^U - (\dim U')^{-1}n_{H,-j}^{U'}\right)\in \mathbb{Z}_{>0}$, so the condition can be calculated from the local data. As in \ref{local} we have then $n_{H,j}^{\phi^g_{(k_{H,j}), (k'_{H,j})}(U)} = n_{H,j}^{{}^gU}$ and so we must check $U <_{(k_{H,j})} U'$ if and only if ${}^gU<_{(k'_{H,j})}{}^gU'$.
In particular, if $(k'_{H,j})$ is obtained from $(k_{H,j})$ by the addition of integers then we may take $g = \text{id}$ and we see that we can apply \cite[Theorem 5.5]{Rou} as stated; more generally, in \cite[Proposition 5.14]{Rou} we should use the formalism here.
\section{Data}
Here we record the details necessary to calculate the Fuss-Catalan numbers defined by \eqref{thenumbers} for all irreducible complex reflection groups that are not well-generated. For the imprimitive groups $G(de,e,n)$ these data can be found in \cite[Section 8]{Gri}; for the exceptional groups we used \ref{calculate} combined with the detailed data of characters and Schur elements for Hecke algebras available in the Chevie program in GAP. (Note that the degrees of $W$ can be read off from the exponents of ${V} = \Psi^0({V}^*)^*$.)
\medskip
{\footnotesize
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Group & Generalised & Order of $\Psi$ & Exponents of $\Psi^m({V}^*)^*$ \\
& Coxeter number & acting on $V^*$ & \\
\hline \hline
$G(de,e,n)$ & $de(n-1)+d$ & $e$ & $d(e-m)-1, d(2e-m)-1 , \ldots , d((n-1)e-m) - 1, $ \\
$e>1, d>1$ & & & $d(m(n-1) + n) - 1$ ($0\leq m \leq e-1$) \\
\hline
$G_7$ & 18 & 2 & 11, 11 ($m=0$) \\
& & & 5, 17 ($m=1$) \\
\hline
$G_{11}$ & 36 & 2 & 23, 23 ($m=0$) \\
& & & 11, 35 ($m=1$) \\
\hline
$G_{12}$ & 12 & 2 & 5, 7 ($m=0$) \\ & & & 1, 11 ($m=1$) \\
\hline
$G_{13}$ & 18 & 4 & 7, 11 ($m=0$) \\ &&& 5, 13 ($m=1$) \\ &&& 7,11 ($m=2$) \\ &&& 1,17 ($m=3$) \\
\hline
$G_{15}$ & 30 & 4 & 11, 23 ($m=0$) \\ &&& 17,17 ($m=1$) \\ &&& 11,23 ($m=2$) \\ &&& 5, 29 ($m=3$) \\
\hline
$G_{19}$ & 90 & 2 & 59, 59 ($m=0$) \\ &&& 29, 89 ($m=1$) \\
\hline
$G_{22}$ & 30 & 2 & 11, 19 ($m=0$) \\ &&& 1, 29 ($m=1$) \\
\hline
$G_{31}$ & 30 & 2 & 7,11,19,23 ($m=0$) \\ &&& 1,13,17,29 ($m=1$) \\
\hline
\end{tabular}
\end{center}}
\makeatletter
\def\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus
-.2ex}{2.3ex plus .2ex}{\large\sc}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex minus
-.2ex}{1.5ex plus .2ex}{\normalsize\sc}}
\makeatother
\makeatletter
\@addtoreset{equation}{section}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\makeatother
\newcommand{\chap}[1]{{\clearpage}%
\begin{center}%
{\noindent\underline{\large\sc #1}}{\addcontentsline{toc}{section}{#1}}%
\end{center}%
{\vspace*{0.3cm}}}
\newcommand{\subs}[1]{{\vspace*{0.2cm}}%
{\noindent\underline{\small\sc
#1}}{\addcontentsline{toc}{subsubsection}{#1}}%
{\vspace*{0.2cm}}}
\newcommand{\be}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\bea}{\begin{eqnarray}}
\newcommand{\eea}{\end{eqnarray}}
\newcommand{\trac}[2]{{\textstyle\frac{#1}{#2}}}
\newcommand{\ex}[1]{\mbox{e}^{\,\textstyle#1}}
\newcommand{\CC}{\Bbb{C}}
\newcommand{\HH}{\Bbb{H}}
\newcommand{\PP}{\Bbb{P}}
\newcommand{\RR}{\Bbb{R}}
\newcommand{\ZZ}{\Bbb{Z}}
\newcommand{\II}{\Bbb{I}}
\newcommand{\EE}{\Bbb{E}}
\newcommand{\TT}{\Bbb{T}}
\newcommand{\DD}{\mathrm{I}\!\mathrm{D}}
\renewcommand{\d}{\delta}
\newcommand{\symx}{{\mathhexbox@\msafam@73}}
\newsymbol\smallsmile 1360
\newsymbol\smallfrown 1361
\newcommand{\ad}{\mathop{\mbox{ad}}\nolimits}
\newcommand{\tr}{\mathop{\mbox{tr}}\nolimits}
\newcommand{\Tr}{\mathop{\mbox{Tr}}\nolimits}
\newcommand{\Det}{\mathop{\mbox{Det}}\nolimits}
\renewcommand{\det}{\mathop{\mbox{det}}\nolimits}
\newcommand{\rk}{\mathop{\mbox{rk}}\nolimits}
\newcommand{\del}{\partial}
\newcommand{\diag}{\mathop{\mbox{diag}}\nolimits}
\newcommand{\ra}{\rightarrow}
\newcommand{\Ra}{\Rightarrow}
\newcommand{\LRa}{\Leftrightarrow}
\newcommand{\lra}{\leftrightarrow}
\newcommand{\ot}{\otimes}
\renewcommand{\ss}{\subset}
\newcommand{\nul}{\noindent\underline}
\newcommand{\non}{\nonumber\\}
\newcommand{\mat}[4]{\left(\begin{array}{cc}#1\\#3\end{array}\right)}
\renewcommand{\lg}{\frak{g}}
\newcommand{\G}[3]{\Gamma^{#1}_{\;{#2}{#3}}}
\newcommand{\nam}{\nabla_{\mu}}
\newcommand{\nan}{\nabla_{\nu}}
\newcommand{\dx}{\dot{x}}
\newcommand{\dxl}{\dot{x}^{\la}}
\newcommand{\dxm}{\dot{x}^{\mu}}
\newcommand{\dxn}{\dot{x}^{\nu}}
\newcommand{\ddx}{\ddot{x}}
\newcommand{\ddxm}{\ddot{x}^{\mu}}
\newcommand{\ddxn}{\ddot{x}^{\nu}}
\newcommand{\dxi}{\dot{\xi}}
\newcommand{\ddxi}{\ddot{\xi}}
\newcommand{\lsf}{\ell_s^{\mathrm{eff}}}
\newcommand{\lpf}{\ell_p^{\mathrm{eff}}}
\newcommand{\sqg}{\sqrt{g^{11}}}
\begin{document}
\vspace*{2cm}
\begin{center}
{\Large\sc Geometry of Schr\"odinger Space-Times,\\[.2in]
Global Coordinates, and Harmonic Trapping}
\end{center}
\vspace{1cm}
\begin{center}
{\large\sc Matthias Blau, Jelle Hartong, Blaise Rollier}\\[.3cm]
{\it Institut f\"ur Theoretische Physik\\ Universit\"at Bern\\
Sidlerstrasse 5 \\ 3012 Bern, Switzerland}
\end{center}
\vspace{1cm}
We study various geometrical aspects of Schr\"odinger space-times with
dynamical exponent $z>1$ and compare them with the properties of AdS
($z=1$). The Schr\"odinger metrics are singular for $1<z<2$ while the
usual Poincar\'e coordinates are incomplete for $z\geq 2$. For $z=2$
we obtain a global coordinate system and we explain the relations
among its geodesic completeness, the choice of global time, and the
harmonic trapping of non-relativistic CFTs. For $z>2$, we show that the
Schr\"odinger space-times admit no global timelike Killing vectors.
\newpage
\section{Introduction}
Recently,
initiated by \cite{son,balamac},
there has been some interest in extending the AdS/CFT
correspondence to non-relativistic
field theories in $d$ spatial dimensions that exhibit an anisotropic
scale invariance $(t,x^i) \ra (\lambda^z t, \lambda x^i)$ parametrised by
the dynamical critical exponent $z\geq 1$, and
corresponding to a dispersion relation of the form $\omega \sim k^z$.
While there is a plethora of non-relativistic symmetry algebras, some
of them are subalgebras of the relativistic conformal (or AdS isometry) algebra.
Systems exhibiting such a symmetry therefore potentially have bulk gravitational
duals that can be realised as suitable deformations of AdS.
The simplest of these are the Lifshitz
and Schr\"odinger space-times $\mathsf{Lif}_z$ \cite{lif}
and $\mathsf{Sch}_z$ \cite{son,balamac},
whose metrics in Poincar\'e-like coordinates take the form
\be
\label{metrics}
\begin{aligned}
\mathsf{Lif}_z:
ds^2 &= -\frac{dt^2}{r^{2z}} + \frac{1}{r^2}\left(dr^2 + d\vec{x}^2 \right)\\
\mathsf{Sch}_z:
ds^2 &= -\frac{dt^2}{r^{2z}} + \frac{1}{r^2}\left(-2dtd\xi + dr^2 + d\vec{x}^2\right)
\end{aligned}
\ee
where $d\vec{x}^2 = (dx^1)^2 + \ldots + (dx^d)^2$. Subsequently, various
geometrical aspects of such a non-relativistic correspondence
were investigated e.g.\ in
\cite{gold}-\cite{sakura}.\footnote{For an
updated account of these developments, and references to the CFT side
of the story, see also \cite{hartnoll}.} Nevertheless it is probably
fair to say that the holographic dictionary and
the issue of holographic renormalisation in these space-times are not yet nearly as
well understood as in the AdS case.
In the usual AdS/CFT correspondence, while for most practical intents
and purposes it is sufficient to work in Euclidean signature (an
option not readily available for the $\mathsf{Sch}_z$ metrics)
or perhaps on the Minkowskian Poincar\'e patch, certain conceptual
issues of the correspondence are greatly clarified by formulating
the Lorentzian correspondence in global coordinates (see e.g.\
\cite{bkl,bklt,marolf}). For these reasons, and in order to highlight
the analogies and differences between AdS ($z=1$) and $z>1$,
it is important to gain a better understanding of the global geometry
of the Lifshitz and Schr\"odinger space-times.
For example, while $\mathsf{Lif}_z$ and $\mathsf{Sch}_z$
are geodesically complete at $r\ra 0$ for
all $z\geq 1$, for $z>1$ the detailed behaviour of geodesics near
$r=0$ differs somewhat from the AdS case ($r=0$ is ``harder to
reach''), and this may well have implications for holography and,
in particular, for an appropriate notion of ``boundary'' in this
context.
At the ``other end'' $r\ra\infty$, for all $z\geq 1$ the above
Poincar\'e-like coordinate system \eqref{metrics} is incomplete in
the sense that e.g. timelike geodesics can reach $r=\infty$ in
finite proper time. The implications of this run-away behaviour of
the geodesics, i.e.\ whether this indicates a genuine pathology of
the space-time (geodesic incompleteness, singularity) or a mere
coordinate singularity, requiring one to extend the space-time
beyond $r=\infty$, depend on the behaviour of the geometry
as $r\ra\infty$. For example, it is of course well known
that in the $z=1$ AdS case the above Poincar\'e coordinates cover
only one-half of the complete (non-singular and maximally symmetric)
AdS space-time. On the other hand it has already been noted in
\cite{lif,hartnoll} that for all $z>1$ the Lifshitz geometries are
singular as $r\ra\infty$ in the sense of pp-curvature singularities
(infinite tidal forces) and are thus geodesically incomplete. For a discussion
of the possible implications of this for the
$\mathsf{Lif}_z$ / CFT correspondence see \cite{hartnoll}.
The situation is somewhat more interesting for the Schr\"odinger metrics
$\mathsf{Sch}_z$. Our starting point is the observation that
in this case qualitatively (for the precise statement see \eqref{tidals} below)
the tidal forces of causal geodesics behave as
\be
\label{tidal1}
\mathsf{Sch}_z: \text{Tidal Forces} \propto (z-1)r^{4-2z}\;\;.
\ee
In particular, while these space-times are geodesically incomplete for $1<
z < 2$, there are no infinite tidal forces not only for the AdS case $z=1$
but also for all $z \geq 2$, and there are freely falling
observers that reach $r=\infty$ in finite proper time without encountering
any singularity. One thus needs to provide them with a map and
extend the space-time beyond $r=\infty$.
In this note we will address this issue and obtain a global,
geodesically complete, coordinate system for $z=2$. We also show
that the Schr\"odinger space-times for $z>2$ admit no global timelike
Killing vector fields, so that a global metric will necessarily be
time-dependent.
Taking our clue from global AdS, where global time corresponds to the generator
$P_0 + K_0$ of the isometry algebra ($K_0$ is a special conformal transformation),
we observe that only the $z=2$ Schr\"odinger algebra has a potential counterpart of
this generator, namely $H+C$, where $H$ is the generator of $t$-translations (in the above
Poincar\'e-like coordinates) and $C$ is the special conformal generator of the $z=2$ algebra.
By considering the combination $H+\omega^2 C$ we are led to the metric
\be
\label{igma}
ds^2 =-\frac{dT^2}{R^4} + \frac{1}{R^2}(-2dT dV -\omega^2(R^2 + \vec{X}^2)dT^2 + dR^2 +
d\vec{X}^2)\;\;,
\ee
which has a number of remarkable properties. First of all,
this coordinate system, in which the metric simply has the form of a plane wave deformation of the
Poincar\'e-like metric \eqref{metrics}, is indeed geodesically complete for $\omega>0$ and in this
sense provides global coordinates for the $\mathsf{Sch}_{z=2}$ space-time
(for $\omega=0$ the metric reduces to the incomplete Poincar\'e-patch metric
\eqref{metrics}).
Moreover, this metric is closely related to the harmonic
trapping of non-relativistic CFTs that plays an important role in the
non-relativistic operator-state correspondence \cite{nison} and whose
holographic implementation was investigated in \cite{gold,bafu}.
Our derivation of the above metric shows that precisely for $z=2$
(and for AdS $z=1$) the plane wave
deformation \eqref{igma} of the $\mathsf{Sch}_z$
Poincar\'e metric \eqref{metrics} that accomplishes this trapping
is just a coordinate transformation, namely the one that relates the
Poincar\'e time Hamiltonian $H=\del_t$ to the trapped Hamiltonian
$H+ \omega^2 C = \del_T$.
The geodesic completeness of this coordinate system
can be physically understood in terms of the
trapping of geodesics induced by the harmonic oscillator term
$\omega^2 (R^2 + \vec{X}^2)$ in the metric. Moreover, the
spatial harmonic oscillator provides an IR cut-off that is the
counterpart of
the topological spatial IR cut-off (space is a sphere)
provided by AdS global coordinates.
To set the stage, in section 2 we briefly recall some elementary
aspects of the geometry (isometries, geodesics) of the $\mathsf{Lif}_z$
and $\mathsf{Sch}_z$ metrics in the Poincar\'e-like
coordinates \eqref{metrics}. In section 3.1, we motivate the introduction of $H+C$
as the generator of global time by analogy with AdS, and we show that
the Schr\"odinger space-times for $z\neq 1,2$ have no global timelike Killing vectors.
We obtain the
desired coordinate transformation and the metric in global coordinates in
section 3.2, and in section 3.3 we establish the geodesic completeness
and discuss the other results mentioned above. In section
3.4 we briefly look at some related issues for pure AdS ($z=1$) and
make some comments on the case $z>2$.
Finally, in section 4, we analyse the Klein-Gordon equation in global coordinates
and compare with the Poincar\'e-patch analysis of \cite{son,balamac}
and the Hamiltonian analysis of \cite{bafu}.
\section{Schr\"odinger and Lifshitz Space-Times in Poincar\'e Coordinates}
In this section, to motivate our investigation, and as a preparation
for the considerations of section 3, we briefly summarise some basic
facts about the geometry of the Schr\"odinger and Lifshitz space-times,
whose metrics in Poincar\'e-like coordinates (that we will henceforth
simply refer to as Poincar\'e coordinates) have been
given in \eqref{metrics}. Obviously for $z=1$ these reduce to the $(d+2)$-
(respectively $(d+3)$-) dimensional AdS Poincar\'e metric,
and we will consider the range $z\geq 1$.
In addition to the manifest
translational isometries in $t$ and $\vec{x}$ and spatial rotations
these space-times have the characteristic anisotropic dilatation symmetry
\be
\label{dil}
\begin{aligned}
\mathsf{Lif}_z:\;
& (r,\vec{x},t) \ra (\lambda r, \lambda \vec{x}, \lambda^z t)\\
\mathsf{Sch}_z: \;
& (r,\vec{x},t,\xi) \ra (\lambda r, \lambda \vec{x}, \lambda^z t, \lambda^{2-z}\xi)\;\;.
\end{aligned}
\ee
These comprise the so-called Lifshitz algebra \cite{lif}. The larger
Schr\"odinger isometry algebra of $\mathsf{Sch}_z$ contains, in addition,
Galilean boosts and null translations in $\xi$, the latter playing
the role of the central extension or mass operator of the Galilean
algebra. Moreover, for $z=2$
there is one extra special conformal
generator $C$ which will turn out to play an important role in the
considerations of section 3.
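As a quick consistency check (the computation is ours, for the reader's convenience), each term of the $\mathsf{Sch}_z$ metric in \eqref{metrics} is separately invariant under the dilatation \eqref{dil},
\be
\frac{dt^2}{r^{2z}} \ra \frac{\lambda^{2z}\,dt^2}{\lambda^{2z}r^{2z}}\;,\qquad
\frac{dt\,d\xi}{r^{2}} \ra \frac{\lambda^{z}\lambda^{2-z}\,dt\,d\xi}{\lambda^{2}r^{2}}\;,\qquad
\frac{dr^2+d\vec{x}^2}{r^2} \ra \frac{\lambda^{2}(dr^2+d\vec{x}^2)}{\lambda^{2}r^{2}}\;\;,
\ee
and dropping the $\xi$-term gives the corresponding statement for $\mathsf{Lif}_z$.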
One can use the conserved momenta $E,\vec{P}$ (and $P_\xi$) corresponding
to the manifest $t$, $\vec{x}$ (and $\xi$) translational isometries of
the metrics \eqref{metrics} to reduce the geodesic equations to a single
radial (effective potential) equation
\be
\label{veff}
\begin{aligned}
\mathsf{Lif}_z:
k &= \frac{\dot{r}^2}{r^2} + r^2 \vec{P}^2 - r^{2z} E^2\\
\mathsf{Sch}_z:
k &= \frac{\dot{r}^2}{r^2} + r^2 (\vec{P}^2 - 2 E P_\xi) + r^{4-2z}
P_\xi^2
\end{aligned}
\ee
($k=0,\mp 1$ for null, timelike and spacelike
geodesics). We will first compare and contrast the qualitative
behaviour of causal AdS geodesics ($z=1$) as $r\ra 0$ with that for $z>1$,
and then consider the (for our purposes more crucial) behaviour as
$r\ra\infty$.
In the AdS case $\mathsf{Lif}_{z=1}$ it follows from \eqref{veff}
that timelike geodesics require $E^2 - \vec{P}^2 \equiv M^2 >0$,
and that these have a minimal radius $r_{\mathrm{min}} = 1/M$, while
null geodesics ($\dot{r} = \pm M r^2$) can reach $r=0$ at infinite
values of the affine parameter. Since $\dot{t} = E r^2$, it also
follows that $r(t) = \pm (M/E)\,t$, so that lightrays can reach the
boundary $r=0$ and bounce back again to a stationary observer in
finite coordinate time $t$.
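For completeness, here is the short derivation of these statements from \eqref{veff} (this computation is ours): for $z=1$ and $k=-1$ one finds
\be
\dot{r}^2 = r^2\left(M^2 r^2 - 1\right)\;,\qquad M^2 \equiv E^2 - \vec{P}^2\;,
\ee
so reality of $\dot{r}$ requires $M^2>0$ and the turning point $\dot{r}=0$ sits at $r_{\mathrm{min}}=1/M$, while for $k=0$ one has $\dot{r}=\pm M r^2$, whose infalling solution $r(\tau) = r_0/(1+Mr_0\tau)$ reaches $r=0$ only as $\tau\ra\infty$.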
The behaviour of $\mathsf{Lif}_{z>1}$ causal geodesics is qualitatively
similar to the AdS case, with one perhaps crucial difference: namely,
timelike geodesics still have a minimal radius $r_{\mathrm{min}}>0$,
but here so do null geodesics unless $\vec{P}=0$ (since $r^2\vec{P}^2$
dominates over $r^{2z}E^2$ as $r\ra 0$ unless $\vec{P}=0$). Thus
for $z>1$ only purely radial null geodesics reach $r=0$, and up to
a reparametrisation $r^z \ra r$ these are identical to null geodesics
in AdS${}_2$.
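Explicitly (our check): a radial null geodesic of $\mathsf{Lif}_z$ obeys $\dot{r}=\pm E r^{z+1}$ by \eqref{veff}, and in terms of $\rho = r^z$ this becomes $\dot{\rho} = z r^{z-1}\dot{r} = \pm zE\rho^2$, which is the radial null geodesic equation of AdS${}_2$ with rescaled energy $zE$.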
The $\mathsf{Sch}_{z>1}$ space-times exhibit a somewhat stronger
deviation from the AdS behaviour, since here neither timelike nor
null geodesics ever reach $r=0$. This is due to the fact that for
$z>1$ the dominant term in the effective potential is the positive
term $r^{4-2z}P_{\xi}^2$ unless $P_{\xi}=0$, and that there are
neither timelike geodesics, nor null geodesics with $\dot{r}\neq
0$, for $P_{\xi}=0$.
Now let us look at the behaviour as $r\ra\infty$.
It is easy to see that for all $z\geq 1$ the Poincar\'e
coordinate system \eqref{metrics}
is incomplete. Indeed, it follows immediately from \eqref{veff} that
for $z\geq 1$ the leading large $r$ behaviour of null (and timelike) geodesics
as functions of the affine parameter $\tau$ is
\be
\begin{aligned}
\mathsf{Lif}_z:
r(\tau)& \propto |\tau - \tau_0|^{-1/z}\\
\mathsf{Sch}_z:
r(\tau)& \propto |\tau - \tau_0|^{-1} \quad \forall z\geq 1
\end{aligned}
\ee
so that $r\ra\infty$ for $\tau \ra\tau_0$. Generically, i.e.\ unless
some of the constants of motion are set to zero, all other coordinates also approach
infinity (for $\mathsf{Sch}_z$ at exactly the same rate as $r(\tau)$).
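These scalings follow by keeping only the dominant large-$r$ term in \eqref{veff} (derivation ours): for $\mathsf{Lif}_z$ one has $\dot{r}^2 \simeq E^2 r^{2z+2}$ (for $z=1$ the coefficient is $E^2-\vec{P}^2$), i.e.\ $\frac{d}{d\tau}r^{-z} = \mp zE$, while for $\mathsf{Sch}_z$, assuming $2EP_\xi > \vec{P}^2$ as required for a geodesic to escape to large $r$, one has $\dot{r}^2 \simeq (2EP_\xi - \vec{P}^2)\,r^4$, i.e.\ $\frac{d}{d\tau}r^{-1} = \mp(2EP_\xi - \vec{P}^2)^{1/2}$; integrating gives the quoted powers of $|\tau-\tau_0|$.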
In order to assess the implications of this, one needs to look more
closely at the geometry of the space-time at $r\ra\infty$. The AdS case
$z=1$ is of course well understood: $r=\infty$ is only a coordinate singularity,
Poincar\'e coordinates cover only one-half of the complete AdS space-time,
and it is possible to introduce global coordinates that cover the entire space-time.
It is also easy to see that (for any $z$) all scalar curvature
invariants are constant (and in particular finite at $r=\infty$). This is a consequence
of the homogeneity of the Lifshitz and Schr\"odinger space-times, in particular
the dilatation isometry \eqref{dil},
since any scalar curvature invariant can only be a function of $r$,
upon which dilatation-invariance implies that the invariant is
actually constant.
However, as is well known, e.g.\ in the context
of pp-waves, all of whose scalar curvature invariants are identically
zero, this does not by itself imply that the space-time should necessarily
be considered to be non-singular: freely falling observers may
nevertheless experience infinite tidal forces (and therefore a
physical singularity), in the form of divergent parallel propagated
orthonormal frame components of the Riemann tensor.
The explicit calculation of the tidal forces is pretty straightforward
in the case of Lifshitz metrics, and it has already been noted
in \cite{lif} that they are singular in this sense for $r\ra\infty$.
One finds that for null geodesics
(and up to a cosmological constant term for timelike geodesics)
the tidal forces are proportional to
\be
\label{tidall}
\mathsf{Lif}_z: \text{Tidal Forces} \propto (z-1)r^{2z}\;\;.
\ee
This is in complete agreement with the result reported in \cite[footnote
5]{hartnoll} and shows that the $\mathsf{Lif}_z$ space-times are
geodesically incomplete and singular at $r\ra\infty$ for all $z>1$.
For the Schr\"odinger metrics the corresponding calculation is slightly more
involved but
the result is also somewhat more interesting and
qualitatively quite different from \eqref{tidall}.
The relevant parallel propagated orthonormal frame components
$R^{(\tau)}_{\;(\alpha)(\tau)(\beta)}$ of the curvature tensor are
\be
\label{tidals}
\mathsf{Sch}_z: \text{Tidal Forces:} \left\{\begin{array}{ll}
R^{(\tau)}_{\;(i)(\tau)(j)} &= -(1+ P_\xi^2 (z-1) r(\tau)^{4-2z})\d_{ij}\\
R^{(\tau)}_{\;(\xi)(\tau)(\xi)} &= -(1+ 2P_\xi^2 z(z-1) r(\tau)^{4-2z}\sin(\tau)^2)\\
R^{(\tau)}_{\;(r)(\tau)(r)} &= -(1+ 2P_\xi^2 z(z-1) r(\tau)^{4-2z}\cos(\tau)^2)
\end{array}\right\}
\propto (z-1)r^{4-2z}
\ee
with $(\tau)$ referring to the tangent
of the timelike geodesic, i.e.\ $e_{(\tau)}^{\alpha} = \dot{x}^{\alpha}$.
While, as expected, $z=1$ is non-singular, this result, that can also
be deduced from the calculation of geodesic deviation in general Siklos
space-times in \cite{podolski} and \cite{cgr}, shows some perhaps surprising
features.
Namely, while a static (non-geodesic) observer may have been
inclined to believe that the metric is asymptotically AdS for
any $z>1$ as $r\ra\infty$, since the $r^{-2z}dt^2$ term appears to be subleading,
this is an illusion caused by that observer's acceleration.
Indeed \eqref{tidals} shows that there is a singularity in the form of
infinite tidal forces at $r=\infty$ for $1< z < 2$, experienced by all
timelike and null geodesics (since for these $P_{\xi}\neq 0$), and the
situation is therefore very much like that of the Lifshitz space-times
for $z>1$. For $z\geq 2$, however, the tidal forces again remain finite
as $r\ra\infty$.\footnote{For $z>2$, the dangerous region may appear to
be $r\ra 0$, but this is deceptive since (as discussed above) timelike
and null geodesics have a non-zero minimal radius $r_{\mathrm{min}}>0$.}
\textit{Mutatis mutandis} the above result is also valid for the tidal
forces experienced by null geodesics; the first (cosmological constant)
term does not contribute in that case. Thus for $z\geq 2$ causal
geodesics reach $r=\infty$ at finite values of the affine parameter
without encountering any singularity.
\section{Global Coordinates for $z=2$ Schr\"odinger Space-Times}
The above analysis points to the
necessity of constructing suitable coordinates that cover the space-time
region beyond $r=\infty$. In this section we will obtain a global,
geodesically complete, coordinate system for $\mathsf{Sch}_{z=2}$ and
describe some of its properties. We also make some comments on the
(qualitatively quite different) case $z>2$.
\subsection{Towards Global Coordinates}
We begin by recalling the situation for AdS, i.e.\ $\mathsf{Sch}_{z=1}$. In this case,
it is well known how to construct global coordinates $(T,R,\text{angles})$ in terms of
which the $(d+3)$-dimensional AdS metric takes the form
\be
\label{adsg}
\mathsf{AdS:}\; ds^2 = -(1+R^2)dT^2 + (1+R^2)^{-1}dR^2 + R^2 d\Omega_{d+1}^2\;\;.
\ee
The most straightforward way to find these global coordinates is to make use of the embedding of
the unit curvature radius $(d+3)$-dimensional AdS space-time
into $\RR^{2,d+2}$ with coordinates $Z^A, A=0,\ldots, d+3$ and metric
\be
ds^2 = -(dZ^0)^2 + (dZ^1)^2 + \ldots + (dZ^{d+2})^2 - (dZ^{d+3})^2
\ee
as the (universal covering space of the) hyperboloid
\be
\label{adsem}
-(Z^0)^2 + (Z^1)^2 + \ldots + (Z^{d+2})^2 - (Z^{d+3})^2 = -1\;\;.
\ee
Writing this as
\be
(Z^0)^2 + (Z^{d+3})^2 = 1 + (Z^1)^2 + \ldots + (Z^{d+2})^2 \equiv 1+R^2
\ee
suggests the parametrisation
\be
Z^0 = (1+R^2)^{1/2} \sin T\qquad Z^{d+3} = (1+R^2)^{1/2} \cos T \;\;,
\ee
which identifies $\del_T$ with the generator of rotations
$M_{0,d+3}$ in the timelike $(Z^0,Z^{d+3})$-plane.
This indeed gives rise on the nose to the global metric \eqref{adsg}.
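As a quick numerical illustration of this construction (our own sketch, not part of the text, specialised to $d=1$), one can check that the parametrisation satisfies the hyperboloid constraint \eqref{adsem} and reproduces the $g_{TT} = -(1+R^2)$ component of \eqref{adsg}:

```python
import math

def embed(T, R, n):
    """Embed global AdS coordinates into R^{2,d+2} (here d = 1).

    The spatial Z^1, ..., Z^{d+2} are written as R times a unit vector n,
    so the angular part of the global metric is built in by construction.
    """
    n1, n2, n3 = n
    Z0 = math.sqrt(1 + R**2) * math.sin(T)
    Z4 = math.sqrt(1 + R**2) * math.cos(T)
    return (Z0, R * n1, R * n2, R * n3, Z4)

def hyperboloid(Z):
    """Quadratic form of the flat R^{2,3} metric, evaluated on a point."""
    Z0, Z1, Z2, Z3, Z4 = Z
    return -Z0**2 + Z1**2 + Z2**2 + Z3**2 - Z4**2

T, R = 0.37, 1.9
n = (0.6, 0.8, 0.0)                     # unit vector on the sphere
Z = embed(T, R, n)
assert abs(hyperboloid(Z) + 1) < 1e-12  # constraint -1 holds for any (T, R)

# g_TT from the flat ambient metric, by central differences in T:
eps = 1e-6
Zp, Zm = embed(T + eps, R, n), embed(T - eps, R, n)
dZ = [(a - b) / (2 * eps) for a, b in zip(Zp, Zm)]
gTT = -dZ[0]**2 + dZ[1]**2 + dZ[2]**2 + dZ[3]**2 - dZ[4]**2
assert abs(gTT + (1 + R**2)) < 1e-6
```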
Since the $\mathsf{Sch}_z$ metric \eqref{metrics} differs from the AdS
Poincar\'e metric only by the characteristic first term $-dt^2/r^{2z}$,
when seeking global coordinates for $\mathsf{Sch}_z$, one's first thought
may perhaps be to simply employ the usual transformation from AdS Poincar\'e
coordinates
\be
\label{adsp}
ds^2 = \frac{-dt^2 + d\vec{y}^2 + dr^2}{r^2}\;\;,
\ee
to global coordinates. However, since e.g.\ the relation between global and Poincar\'e time
is
\be
\label{adsT}
\tan T = \frac{2t}{1+ r^2 + \vec{y}^2 - t^2}\;\;,
\ee
this results in a fairly complicated metric that explicitly depends on all of the coordinates. In
particular, none of the Schr\"odinger isometries of the metric will be manifest and, regardless of
whether or not this procedure leads to a geodesically complete coordinate system for $z\geq 2$, it appears
to provide no additional insight into the geometry of Schr\"odinger space-times.
Another possibility is to try to find a deformation of the AdS embedding \eqref{adsem} that
breaks the conformal algebra down to its Schr\"odinger sub-algebra.
Unfortunately, it is not hard to see that such an embedding of
$\mathsf{Sch}_z$ into one dimension higher does not exist: any
hypersurface invariant under the Schr\"odinger algebra turns out to be
automatically invariant under the entire conformal algebra, and leads
to the standard AdS hyperboloid.
However, there is yet another aspect of the AdS construction that
does turn out to generalise to the Schr\"odinger case, and does
provide global coordinates, but only for $z=2$. Namely,
under the usual
identification of the generators $M_{AB}$ of $\frak{so}(d+2,2)$ with the generators
$(P_{\mu},K_{\mu},M_{\mu\nu},D)$ of the relativistic conformal algebra $\frak{conf}(d+1,1)$, such that
e.g.\
\be
P_0 = \del_t \quad,\quad K_0 = t(r \del_{r} + y^{i}\del_{y^i})+ \trac{1}{2}(t^2 + r^2 + \vec{y}^2)\del_t
\ee
in standard Poincar\'e coordinates \eqref{adsp},
the definition of AdS global time is equivalent to the identification
\be
\label{adstpk}
\del_T = P_0 + K_0\;\;.
\ee
Thus global AdS time ``diagonalises'' the modified Hamiltonian operator $P_0 +
K_0$. In the Schr\"odinger algebra, the role of the
Hamiltonian is played by the lightcone Hamiltonian $H \equiv P_+$,
and the Poincar\'e coordinates \eqref{metrics} are such that
this Hamiltonian is diagonalised, $H = \del_t$. Now, generically the
Schr\"odinger algebra does not possess any counterpart of the special
conformal generators $K_{\mu}$. Precisely for $z=2$, however (for which
the Schr\"odinger algebra can be characterised as the subalgebra of
$\frak{conf}(d+1,1)$ that commutes with the lightcone momentum $P_{-}$),
there is one extra special conformal generator, namely $C\equiv K_-$.
In Poincar\'e coordinates $C$ takes the form
\be
\label{C}
C = t(t\del_t + r\del_r + x^i\del_{x^i}) + \trac{1}{2}(r^2 + \vec{x}^2)\del_{\xi}
\ee
with
\be
{}[H,C] = D\qquad [D,C]=2C \qquad [D,H] = -2H\;\;,
\ee
where
\be
D=2t\del_t + r\del_r + x^i\del_{x^i}
\ee
is the generator of dilatations \eqref{dil} for $z=2$. Thus for $z=2$, there is a natural candidate counterpart
of the AdS global Hamiltonian $P_0 + K_0$, namely
\be
\label{hg}
P_+ + K_- = H + C\;\;.
\ee
As a first check on this we can calculate the norm of this Killing vector in the
Poincar\'e-coordinates of \eqref{metrics},
\be
||H+C||^2 = -1-\frac{\vec x^2}{r^2}-\frac{(1+t^2)^2}{r^4} \leq -1\;\;.
\ee
Here the constant term $-1$ arises from the cross-term between $\del_t$ and the
$r^2$-term in $C$ \eqref{C}.
Thus unlike $H=\del_t$, whose norm goes to zero as $r\ra\infty$, this Killing vector
is everywhere timelike in the Poincar\'e patch and thus has a chance of providing a
well-defined notion of time also beyond the Poincar\'e patch.
We will show below that diagonalising this generator of the isometry algebra
indeed leads to global time (and other global coordinates) for $z=2$.
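The closed-form expression for $||H+C||^2$ can be checked mechanically. The following sketch (our own illustration, specialised to $d=1$, i.e.\ a single transverse coordinate $x$) evaluates the quadratic form of the $z=2$ metric on the components of $H+C$ read off from \eqref{C}:

```python
def killing_HC(t, r, x, xi):
    """Components of H + C = del_t + t(t del_t + r del_r + x del_x)
    + (1/2)(r^2 + x^2) del_xi in Poincare coordinates, for d = 1."""
    return (1 + t**2, t * r, t * x, 0.5 * (r**2 + x**2))

def norm_sq(t, r, x, xi, v):
    """Norm^2 of v = (v_t, v_r, v_x, v_xi) in the z = 2 Schrodinger
    metric ds^2 = -dt^2/r^4 + (-2 dt dxi + dr^2 + dx^2)/r^2."""
    vt, vr, vx, vxi = v
    return -vt**2 / r**4 + (-2 * vt * vxi + vr**2 + vx**2) / r**2

t, r, x, xi = 0.8, 1.7, -0.4, 2.2
v = killing_HC(t, r, x, xi)
closed_form = -1 - x**2 / r**2 - (1 + t**2)**2 / r**4
assert abs(norm_sq(t, r, x, xi, v) - closed_form) < 1e-12
assert closed_form <= -1   # everywhere timelike in the Poincare patch
```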
Before turning to that, let us briefly look at the situation for
$z\neq 1,2$. In that case, $C$ is absent but one could e.g.\
consider a linear combination $H + aD + bP_-$, $P_-=\del_\xi$, of
the Killing vectors that are invariant under spatial rotations.
Such Killing vectors necessarily become spacelike somewhere inside the
Poincar\'e patch if $a\neq 0$, while the norm of $H + bP_-$ still
goes to zero as $r\ra\infty$ (in any case,
replacing $H$ by $H+ bP_-$ just
amounts to passing from $(t,\xi)$ to some linear combinations of $t$ and
$\xi$). Including the remaining Killing vectors (rotations, translations,
boosts) in this analysis does not improve the situation. We can
therefore conclude that, unlike for $z=2$,
the Schr\"odinger space-times $\mathsf{Sch}_z$
for $z>2$ have no global timelike Killing vector fields. We will
come back to this result in section 3.4.
\subsection{Global Coordinates for $z=2$}
Since the $z=2$ algebra
has the central element $P_-=\del_\xi$, we seek new coordinates
\be
(t,r,\vec{x},\xi) \mapsto (T,R, \vec{X},V)
\ee
in which $H+C$ \eqref{hg} and $P_-$ are simultaneously diagonal,
\be
H+C = \del_T\quad,\quad P_- = \del_V\;\;.
\ee
This is accomplished by the coordinate transformation
\be
\label{ct1}
\begin{aligned}
t&=\tan T\quad,\quad r=\frac{R}{\cos T}\quad,\quad \vec{x}= \frac{\vec{X}}{\cos T}\\
\xi &= V + \frac{1}{2}\left(R^2 + \vec{X}^2\right) \tan T
\end{aligned}
\ee
(chosen to also keep the metric as diagonal as possible -
no off-diagonal terms in the new radial coordinate $R$ - see also section 3.4 for
further comments on this transformation), and in these coordinates
the metric reads
\be
\label{gm1}
\begin{aligned}
\mathsf{Sch}_{z=2}:\;
ds^2 &=-\left( \frac{1}{R^4} + (1 + \frac{\vec{X}^2}{R^2})\right)dT^2 + \frac{1}{R^2}(-2dT dV + dR^2 +
d\vec{X}^2)\\
&=-\frac{dT^2}{R^4} + \frac{1}{R^2}(-2dT dV -(R^2 + \vec{X}^2)dT^2 + dR^2 +
d\vec{X}^2)\;\;.
\end{aligned}
\ee
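That the transformation \eqref{ct1} really maps the $z=2$ Poincar\'e metric \eqref{metrics} to \eqref{gm1} can be verified numerically, by comparing the two quadratic forms on a pushed-forward tangent vector. The following sketch (ours, specialised to $d=1$) does this with central differences:

```python
import math

def to_poincare(T, R, X, V):
    """The coordinate transformation (ct1), specialised to d = 1."""
    t = math.tan(T)
    r = R / math.cos(T)
    x = X / math.cos(T)
    xi = V + 0.5 * (R**2 + X**2) * math.tan(T)
    return (t, r, x, xi)

def ds2_poincare(p, u):
    """z = 2 Poincare metric on a tangent vector u = (dt, dr, dx, dxi)."""
    t, r, x, xi = p
    dt, dr, dx, dxi = u
    return -dt**2 / r**4 + (-2 * dt * dxi + dr**2 + dx**2) / r**2

def ds2_global(p, u):
    """Global metric (gm1) on a tangent vector u = (dT, dR, dX, dV)."""
    T, R, X, V = p
    dT, dR, dX, dV = u
    return (-dT**2 / R**4
            + (-2 * dT * dV - (R**2 + X**2) * dT**2 + dR**2 + dX**2) / R**2)

def pushforward(p, u, eps=1e-6):
    """Directional derivative of to_poincare at p along u."""
    plus = to_poincare(*(pi + eps * ui for pi, ui in zip(p, u)))
    minus = to_poincare(*(pi - eps * ui for pi, ui in zip(p, u)))
    return [(a - b) / (2 * eps) for a, b in zip(plus, minus)]

p = (0.3, 0.7, 0.4, 0.2)     # sample point (T, R, X, V)
u = (1.0, -0.5, 0.8, 0.3)    # arbitrary tangent vector
lhs = ds2_global(p, u)
rhs = ds2_poincare(to_poincare(*p), pushforward(p, u))
assert abs(lhs - rhs) < 1e-6
```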
This metric has several noteworthy properties. First of all, it is indeed
geodesically complete, i.e.\ all geodesics can be extended to infinite
values of their affine parameter. We postpone a detailed proof of this
assertion to section 3.3, but already here draw attention to the fact
that a crucial role in establishing this is played by the
harmonic oscillator potential $R^2 + \vec{X}^2$ induced by the
isotropic plane wave metric $-2dT dV -(R^2 + \vec{X}^2)dT^2 + dR^2 +
d\vec{X}^2$ in \eqref{gm1}
which replaces
the flat Poincar\'e coordinate metric $-2dtd\xi + dr^2 + d\vec{x}^2$.
In section 3.3 we will also discuss other
issues related to this term and its interpretation in terms of the harmonic
trapping \cite{nison} of non-relativistic conformal field theories.
It is perhaps quite surprising that the above transformation between
Poincar\'e and global coordinates is so much simpler than its AdS
counterpart. For instance, instead of the AdS relation \eqref{adsT}
one has the simple relation $t=\tan T$ \eqref{ct1} between Poincar\'e
and global time for $z=2$. The success of the simple coordinate
transformation \eqref{cta} can be attributed to the
fact (which we will briefly recall in section 3.4) that it is precisely
the coordinate transformation that exhibits the conformal flatness of
isotropic plane waves.
One remarkable (and related) feature of the global metric \eqref{gm1}
is that it differs from the Poincar\'e-metric \eqref{metrics} only by a
single term, namely the plane wave harmonic oscillator frequency term
$\sim (dT)^2$. This is in marked contrast to the global AdS metric
\eqref{adsg} which appears to bear no resemblance whatsoever to the
Poincar\'e metric. This statement can even be sharpened somewhat by
introducing a real (and without loss of generality positive) parameter
$\omega$ into the coordinate transformation \eqref{ct1}, via
\be
\label{cta}
\begin{aligned}
t&=\omega^{-1}\tan \omega T\quad,\quad r=\frac{R}{\cos \omega T}\quad,\quad \vec{x}= \frac{\vec{X}}{\cos \omega T}\\
\xi &= V + \frac{\omega }{2}\left(R^2 + \vec{X}^2\right) \tan \omega T\;\;.
\end{aligned}
\ee
In terms of these coordinates the metric now takes the form
\be
\label{gma}
\mathsf{Sch}_{z=2}:\;
ds^2 =-\frac{dT^2}{R^4} + \frac{1}{R^2}(-2dT dV -\omega^2(R^2 + \vec{X}^2)dT^2 + dR^2 +
d\vec{X}^2)\;\;.
\ee
Thus this metric interpolates between the Poincar\'e metric for $\omega =0$
(for which \eqref{cta} obligingly reduces to the identity transformation)
and the global metric for ${\omega}=1$. The metric is actually geodesically
complete for any ${\omega} > 0$ since
\eqref{gma} can be obtained from \eqref{gm1} by the scaling
$(R,T,\vec{X},V)\ra (\sqrt{{\omega}}R,{\omega}T,\sqrt{{\omega}}\vec{X},V)$. This happens to
look very much like the
$z=2$ dilatation symmetry \eqref{dil} in Poincar\'e coordinates, but acting
on global coordinates this is not an isometry but rather the transformation
that turns \eqref{gm1} into \eqref{gma}.
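This scaling argument can be made concrete with a short numerical check (our illustration, $d=1$): evaluating \eqref{gm1} on the scaled point and tangent vector reproduces \eqref{gma} exactly:

```python
import math

def ds2_gm1(p, u):
    """Global metric (gm1) (omega = 1), for d = 1."""
    T, R, X, V = p
    dT, dR, dX, dV = u
    return (-dT**2 / R**4
            + (-2 * dT * dV - (R**2 + X**2) * dT**2 + dR**2 + dX**2) / R**2)

def ds2_gma(p, u, w):
    """One-parameter family (gma) with frequency omega = w."""
    T, R, X, V = p
    dT, dR, dX, dV = u
    return (-dT**2 / R**4
            + (-2 * dT * dV - w**2 * (R**2 + X**2) * dT**2
               + dR**2 + dX**2) / R**2)

def scale(p, u, w):
    """Linear map (T,R,X,V) -> (wT, sqrt(w)R, sqrt(w)X, V),
    acting on the point and on the tangent vector."""
    T, R, X, V = p
    dT, dR, dX, dV = u
    s = math.sqrt(w)
    return (w * T, s * R, s * X, V), (w * dT, s * dR, s * dX, dV)

w = 2.5
p = (0.4, 1.1, -0.6, 0.9)
u = (0.7, 0.2, -1.3, 0.5)
q, v = scale(p, u, w)
assert abs(ds2_gma(p, u, w) - ds2_gm1(q, v)) < 1e-12
```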
To better understand what is going on here, note that the coordinate
transformation \eqref{cta} diagonalises not $H+C$ but $H+{\omega}^2 C$,
so that it is not too surprising that one finds the Poincar\'e
metric for ${\omega}=0$ and the global metric \eqref{gm1} for ${\omega}=1$. Thus
any non-trivial linear combination of $H$ and $C$ (with a relative
positive coefficient) gives rise to a global Hamiltonian.
While this explains the form \eqref{gma} of the $z=2$ global metric,
this raises the
question whether one cannot adopt a similar procedure in the AdS case,
diagonalising not $P_0 + K_0$ (which, as we know, gives rise to global coordinates)
but $P_0 + {\lambda}^2 K_0$ and finding a metric
depending on ${\lambda}$ that interpolates between the Poincar\'e metric
for ${\lambda}=0$ and the global
metric for ${\lambda}=1$. At first this may perhaps appear unlikely precisely because the global
metric \eqref{adsg} is so unlike the Poincar\'e metric, but nevertheless
this is indeed possible. For notational simplicity, we exhibit this
1-parameter family of interpolating metrics only in the $(3+1)$-dimensional
($d=1$) case,
\be
\label{adsb}
\mathsf{AdS}:\; ds^2 = -({\lambda}^2 + R^2) dT^2 + ({\lambda}^2 + R^2)^{-1} dR^2 + R^2
\left(d\Theta^2 + \cos^2(\frac{{\lambda}^2\pi}{2} - {\lambda} \Theta)d\Phi^2\right)\;\;.
\ee
The explicit coordinate
transformation, which we will not give here in detail (instead of
\eqref{adsT} one now has
\be
\label{adsTa}
\tan {\lambda}T = \frac{2{\lambda}t}{1+ {\lambda}^2(r^2 + \vec{y}^2 - t^2)}
\ee
etc.) shows that $({\lambda}T,{\lambda}\Theta,{\lambda}\Phi)$ are standard angles. Thus,
on the one hand the above metric reduces to the global metric
\eqref{adsg} for ${\lambda}=1$,
while on the other hand for ${\lambda}\ra 0$ the time-coordinate
becomes non-compact and the spatial sphere decompactifies to Euclidean space, so
that one obtains the standard Poincar\'e metric \eqref{adsp} with $R=1/r$ and $\{y^i\}=\{\Theta,\Phi\}$.
\subsection{The Global Metric: Harmonic Trapping and Geodesic Completeness}
There is one aspect of the global $\mathsf{Sch}_{z=2}$ metric
constructed above that merits particular attention, and that we already alluded
to above, namely its
relation to the harmonic trapping of non-relativistic CFTs \cite{nison}
and its geometric realisation \cite{gold,bafu}. Recall that we were led
to the metric \eqref{gm1} by analogy with the AdS case and by the
realisation that there is an essentially unique counterpart of the
global AdS Hamiltonian $P_0 + K_0$ in the $z=2$ Schr\"odinger
algebra, namely the generator $P_+ + K_- \equiv H+C$.
Non-relativistic CFT, on the other hand, provides an a priori
completely different rationale for studying the modified Hamiltonian
$H \ra H+C$, because the non-relativistic operator-state correspondence
\cite{nison} relates primary operators of the Schr\"odinger algebra
(those that commute with $C$ and Galilean boosts) with energy
eigenstates of $H+C$. Since essentially the effect of $C$ is to add
a harmonic potential to the Hamiltonian, this corresponds to putting
the system into a harmonic trap.
In \cite{bafu}, the question of how this trapping could
be realised holographically via a deformation of the (Poincar\'e patch)
Schr\"odinger metric \eqref{metrics} was investigated. The deformation that was found
to accomplish a harmonic trapping both in the spatial directions of
the CFT and in the holographic radial coordinate $r$ turns out (when
specialised to $z=2$)\footnote{The emphasis in \cite{bafu} (and also in
\cite{gold}) was on $z=1$ and the attempt to find a pure AdS DLCQ dual
realisation of systems with $z=2$ Schr\"odinger symmetry.} to agree
precisely with the global metric \eqref{gma}. Our derivation of this
metric shows that precisely for $z=2$ (and for $z=1$, see section 3.4)
the required deformation of the metric that accomplishes this trapping is
actually a pure gauge deformation, namely a coordinate transformation
that relates the Poincar\'e time generator $H=\del_t$ to the trapping
(or global) time generator $H+{\omega}^2 C= \del_T$.
The plane wave deformation \eqref{gma} of the Poincar\'e metric \eqref{metrics}
achieves this trapping deformation of the Hamiltonian for the same
reason that the massless Klein-Gordon equation for a scalar field
$\Phi$ in a pp-wave metric background
\be
ds^2 = - 2dtd\xi - U(\vec{x},t) dt^2 + d\vec{x}^2
\ee
reduces
to the Schr\"odinger equation with a potential $V = mU/2$ in a sector with
fixed lightcone momentum, playing the role of the mass,
\be
\Box \Phi = 0 \;,\; i\del_{\xi}\Phi = m\Phi
\quad \Ra\quad i\del_t \Phi = - \frac{1}{2m} \Delta \Phi +
\frac{m}{2}U\Phi\;\;.
\ee
For an isotropic harmonic oscillator potential, this is precisely
the plane wave metric that appears in \eqref{gma}, and we will also
encounter this trapping of the scalar field in the analysis of the
Klein-Gordon equation in the metric \eqref{gma} in the next section.
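Since the ground state $\psi = \ex{-iE_0 t}\ex{-m\omega x^2/2}$, $E_0 = \omega/2$, of the trapped Schr\"odinger problem is known in closed form, this reduction can be checked directly: $\Phi = \ex{-im\xi}\psi$ should be annihilated by the wave operator $\Box = -2\del_t\del_\xi + U\del_\xi^2 + \del_x^2$ of the pp-wave metric (note that $\sqrt{-g}=1$ and that $U$ is $\xi$-independent). The following finite-difference sketch (ours, for $d=1$ and $U = \omega^2 x^2$) confirms this:

```python
import cmath, math

m, w = 1.2, 0.9
E0 = w / 2   # trapped harmonic oscillator ground state energy

def Phi(t, xi, x):
    """Massless scalar e^{-im xi} psi(t,x), built from the trapped
    Schrodinger ground state psi = e^{-i E0 t} e^{-m w x^2 / 2}."""
    return (cmath.exp(-1j * m * xi) * cmath.exp(-1j * E0 * t)
            * math.exp(-m * w * x**2 / 2))

def d2(f, p, i, j, eps=1e-4):
    """Second partial d^2 f / dp_i dp_j by central differences."""
    def shift(q, k, s):
        q2 = list(q); q2[k] += s; return q2
    if i == j:
        return (f(*shift(p, i, eps)) - 2 * f(*p)
                + f(*shift(p, i, -eps))) / eps**2
    pp, pm = shift(p, i, eps), shift(p, i, -eps)
    return (f(*shift(pp, j, eps)) - f(*shift(pp, j, -eps))
            - f(*shift(pm, j, eps)) + f(*shift(pm, j, -eps))) / (4 * eps**2)

p = [0.3, 0.7, 0.5]        # sample point (t, xi, x)
U = w**2 * p[2]**2         # isotropic profile U = w^2 x^2
box = -2 * d2(Phi, p, 0, 1) + U * d2(Phi, p, 1, 1) + d2(Phi, p, 2, 2)
assert abs(box) < 1e-5     # Box Phi = 0 up to finite-difference error
```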
We will now show that
the completeness of the coordinate system
\eqref{cta}
is a consequence of the harmonic trapping of geodesics induced by this coordinate
transformation. To study the effect of the harmonic oscillator term
${\omega}^2(R^2 + \vec{X}^2)$
in the metric on the behaviour of geodesics, let us compare
the $z=2$ Poincar\'e radial effective potential equation \eqref{veff}
\be
\label{vp}
k = \frac{\dot{r}^2}{r^2} + r^2 (\vec{P}^2 - 2 E P_\xi) + P_\xi^2
\ee
with the corresponding equation one obtains from the global metric
\eqref{gma}, namely
\be
\label{vg}
k = \frac{\dot{R}^2}{R^2} + R^2 (P^2 - 2 E P_V) + P_V^2 + {\omega}^2P_V^2 R^4\;\;.
\ee
A minor difference between
\eqref{vp} and \eqref{vg} is the fact that the constant of
motion denoted by $P^2$ in \eqref{vg} arises not, like the $\vec{P}^2$-term in
\eqref{vp}, as a consequence of translation invariance (which \eqref{gma} does not
manifest), but rather as the conserved energy
\be
P^2 \equiv \left(\frac{1}{R^2} \frac{d}{d\tau} \vec{X}\right)^2
+ {\omega}^2 P_V^2 \vec{X}^2
\ee
associated to the transverse harmonic
oscillator equations
\be
\frac{1}{R^2} \frac{d}{d\tau} \left(\frac{1}{R^2} \frac{d}{d\tau}
\vec{X}\right) = - {\omega}^2 P_V^2 \vec{X}\;\;.
\ee
The main (and crucial) difference between \eqref{vp} and \eqref{vg}, however,
lies in the last term ${\omega}^2 P_V^2 R^4$ in \eqref{vg}. This term is negligible for
$R\ra 0$, where \eqref{vg} reduces to \eqref{vp} which, as we already discussed, is
well-behaved as $r\ra 0$. On the other hand, since this term dominates the
large $R$ behaviour, it prevents any geodesic (for any $k$) from reaching $R=\infty$
(even for infinite values of the affine parameter $\tau$) unless ${\omega}=0$ (the
Poincar\'e-patch metric, which as we know is incomplete at large radius) or $P_V=0$.
When $P_V=0$, the right-hand-side of \eqref{vg} is a sum of squares, and thus only
spacelike geodesics $k=+1$ are possible. When $P^2\neq 0$, there is again
a maximal radius, $R_{\mathrm{max}}= 1/P$, and when $P_V=P=0$, one has $\dot{R}=\pm
R$, and thus these are the only geodesics that can reach $R=\infty$, but they only do
so for $|\tau|\ra\infty$.
The motion in the $\vec{X}$-direction is bounded by the harmonic oscillator potential,
and that in the remaining $(T,V)$-directions is determined by that of $R$ and
$\vec{X}$ and remains at finite values of the coordinates for all finite $\tau$.
This establishes that, as claimed in section 3.2, the $\mathsf{Sch}_{z=2}$
metric written in the coordinates (\ref{gm1},\ref{gma}) is geodesically
complete.
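The turning-point structure described above is easy to exhibit numerically. Writing \eqref{vg} as $\dot{R}^2/R^2 = f(R)$ with $f(R) = k - P_V^2 + R^2(2EP_V - P^2) - \omega^2 P_V^2 R^4$, the following sketch (our illustration, with arbitrary sample values of the constants of motion) locates $\dot R = 0$ by bisection and confirms that a timelike geodesic with $P_V \neq 0$ is confined to $R \leq R_{\mathrm{max}}$:

```python
def f(R, k, E, P, PV, w):
    """Rdot^2 / R^2 from the global radial equation (vg):
       k = Rdot^2/R^2 + R^2 (P^2 - 2 E PV) + PV^2 + w^2 PV^2 R^4."""
    return k - PV**2 + R**2 * (2 * E * PV - P**2) - w**2 * PV**2 * R**4

# timelike geodesic (k = -1) with nonzero particle number PV and w > 0
k, E, P, PV, w = -1.0, 3.0, 0.5, 1.0, 1.0
assert f(1.0, k, E, P, PV, w) > 0      # motion is allowed at R = 1

# bisect for the turning point where Rdot = 0
lo, hi = 1.0, 100.0
assert f(hi, k, E, P, PV, w) < 0       # quartic trapping term wins at large R
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid, k, E, P, PV, w) > 0:
        lo = mid
    else:
        hi = mid
R_max = 0.5 * (lo + hi)
assert abs(f(R_max, k, E, P, PV, w)) < 1e-9
assert all(f(R, k, E, P, PV, w) < 0 for R in (2 * R_max, 10 * R_max))
```

For these sample values $f$ is also negative near $R=0$, so the geodesic in addition has a minimal radius, in line with the earlier discussion.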
We close this section with one more remark on the significance of the trapping
exhibited by the global metric and the comparison with the global AdS metric
\eqref{adsg}. As is well known the slices of constant $R$ there have the topology
$\RR\times S^{d+1}$.
\be
\begin{aligned}
\mathsf{AdS:}\; ds^2|_{R=const} &= -(1+R^2)dT^2 + R^2 d\Omega_{d+1}^2\\
&\stackrel{R\ra \infty}{\longrightarrow}
R^2(-dT^2 + d\Omega_{d+1}^2)
\end{aligned}
\ee
Thus the spatial part of the induced
boundary metric has topology $S^{d+1}$, with finite volume,
and in particular provides a topological IR cut-off for the boundary theory.
Without committing ourselves to a particular notion
of boundary in the Schr\"odinger case, roughly speaking the dual CFT should be
considered to live on the slices of constant $R$ (the holographic coordinate) and $V$
(dual to the particle number). The metric induced on these slices can, via some
constant rescalings of the coordinates, be written as
\be
\label{rvc}
\begin{aligned}
\mathsf{Sch}_{z=2}:\;ds^2|_{R,V = const.} &\sim -(1 + {\omega}^2|\vec{X}|^2) dT^2 + d\vec{X}^2 \\
&= -(1 +{\omega}^2\rho^2 ) dT^2 + d\rho^2 + \rho^2 d\Omega_{d-1}^2\;\;.
\end{aligned}
\ee
Thus in the Schr\"odinger case there is no topological cut-off,
but the trapping in the spatial directions $\vec{X}$ can be thought
of as providing an IR cut-off through the harmonic potential.
In particular, the induced metric \eqref{rvc} has the standard
form of the Newtonian limit of a relativistic metric, here in a spherically
symmetric gravitational
harmonic oscillator potential $\frac{1}{2}{\omega}^2\rho^2$.
This Newtonian limit aspect of the metric
of course fits in well with the non-relativistic symmetries and potential dual
dynamics.
\subsection{Some Comments on $z=1$ and $z>2$}
While we have motivated the coordinate transformation \eqref{cta} through
the special (conformal) symmetries that the $z=2$ Schr\"odinger algebra
and metric possess, we can apply it to the $\mathsf{Sch}_z$ metric for
any $z$. If one does that, one finds the metric
\be
\label{gmz}
\mathsf{Sch}_z: ds^2 = -(\cos^2\omega T)^{z-2}\frac{dT^2}{R^{2z}} +
\frac{1}{R^2}(-2dT dV - {\omega}^2(R^2 + \vec{X}^2)dT^2 + dR^2 +
d\vec{X}^2)\;\;.
\ee
Note that under this transformation the $r^{-2z}$ and $r^{-2}$ terms
of the Poincar\'e metric \eqref{metrics} do not mix and transform
separately into the corresponding terms in the metric \eqref{gmz}.
Evidently this reduces to \eqref{gma} for $z=2$.
The $\mathsf{Sch}_z$
Poincar\'e metric reduces to the AdS metric either for $z=1$ or if we set
the coefficient of the $r^{-2z}$-term to zero. Thus by the
same token, choosing $z=1$ or setting the coefficient of the first term to zero
in \eqref{gmz} (the two choices are related by a simple shift of $V$), we obtain a
1-parameter family of AdS metrics, namely
\be
\label{newads}
\mathsf{AdS}:\;
ds^2 = \frac{1}{R^2}(-2dT dV - {\omega}^2(R^2 + \vec{X}^2)dT^2 + dR^2 +
d\vec{X}^2)\;\;.
\ee
This is AdS in trapping coordinates, and that
this AdS plane wave is indeed just pure AdS in disguise was already noted in
\cite{cgr}.\footnote{To see the relation between \eqref{ct1} and
the coordinate transformation given in \cite{cgr}, note that the Schwarzian derivative
$\{\tan t,t\}=2$ is constant.} Insight into this equivalence is provided by the
observation that the coordinate transformation \eqref{cta} is such that
\be
-2dtd\xi + d\vec{x}^2 = \left(\cos^2 {\omega} T\right)^{-1}\left(
-2dT dV - {\omega}^2 \vec{X}^2 dT^2 + d\vec{X}^2\right)\;\;,
\ee
thus exhibiting the conformal flatness of the isotropic plane wave metric
appearing on the right hand side.
Translated into
plane wave lightcone Hamiltonians, it thus
conformally relates free particle (untrapped) and
isotropic harmonic oscillator (trapped) dynamics.
Lifting this transformation to the AdS Poincar\'e-patch by
adding the $r$-transformation in \eqref{cta}
then provides a direct means of establishing \eqref{newads},
while the reasoning of section 3.1
provides the additional insight that this coordinate system
diagonalises the action
of the modified lightcone Hamiltonian $P_+ + {\omega}^2K_-$ and the lightcone momentum $P_-$.
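The conformal-flatness identity above can likewise be verified numerically. This sketch (ours, with a single transverse coordinate $X$) pushes a tangent vector through the $(t,x,\xi)$-part of \eqref{cta} and compares the two sides:

```python
import math

w = 1.3   # any omega > 0

def flat_map(T, X, V):
    """(t, x, xi) as functions of (T, X, V): the r-independent part of (cta)."""
    c = math.cos(w * T)
    return (math.tan(w * T) / w,
            X / c,
            V + 0.5 * w * X**2 * math.tan(w * T))

def push(p, u, eps=1e-6):
    """Directional derivative of flat_map at p along u."""
    plus = flat_map(*(a + eps * b for a, b in zip(p, u)))
    minus = flat_map(*(a - eps * b for a, b in zip(p, u)))
    return [(a - b) / (2 * eps) for a, b in zip(plus, minus)]

p = (0.2, 0.9, -0.5)    # sample point (T, X, V)
u = (1.0, 0.4, 0.7)     # tangent vector (dT, dX, dV)
dt, dx, dxi = push(p, u)
dT, dX, dV = u
lhs = -2 * dt * dxi + dx**2
rhs = (-2 * dT * dV - w**2 * p[1]**2 * dT**2 + dX**2) / math.cos(w * p[0])**2
assert abs(lhs - rhs) < 1e-6
```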
The metric \eqref{newads} captures the $R\ra\infty$ (bulk) behaviour of the global
$\mathsf{Sch}_{z=2}$ metric \eqref{gma}, and, in spite of its
similarity to the Poincar\'e metric, for ${\omega}>0$ this is a geodesically
complete form of the AdS metric, the trapping harmonic oscillator
potential preventing, as above, the geodesics from running off to
infinity for finite values of the affine parameter.
Let us conclude this section with some comments on the metric
\eqref{gmz} for $z>2$. Recall from section 2 that also for $z>2$
the Poincar\'e metric is incomplete but non-singular as $r\ra\infty$.
It remains to be seen if \eqref{gmz} provides a geodesically
complete form of the metric also in this case. A new (and perhaps
at first disturbing) feature of \eqref{gmz} for $z\neq 2$ is its
$T$-dependence. However this is an unavoidable feature of global
coordinates for $z>2$. Indeed, in section 3.1 we established
the result that the Schr\"odinger space-times with $z>2$ have no global timelike
Killing vector fields. Conversely this implies, just as in the case
of de Sitter space, that the metric in global coordinates will
necessarily be time-dependent. While this argument does not prove
that \eqref{gmz} provides a geodesically complete coordinate system
for the $z>2$ metrics, it shows that the time-dependence of \eqref{gmz}
is no reason to dismiss it. It may in any case be worth understanding
if the dependence of the metric on global time is reflected in some
manner in non-relativistic scale-invariant field theories
with $z>2$ Schr\"odinger symmetry.
\section{Scalar Fields in Global Coordinates}
In order to further study the effect of the trapping term $\omega^2(\vec{X}^2 + R^2)$
of the global metric \eqref{gma},
we will look at scalar fields in this section and
compare with what is known about scalar fields in the Poincar\'e patch
\cite{son,balamac}, as well as with the analysis of \cite{bafu}.
Thus, consider the Klein--Gordon equation for a massive complex scalar
field $\phi$ of mass $m_0$,
\begin{equation}
\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_\nu\Phi\right)-
m_0^2\Phi=0\,,
\end{equation} in the global $\mathsf{Sch}_{z=2}$ metric \eqref{gma}.
We will consider modes $\Phi_{E,m}$ with a definite
(trapping) energy $E>0$ and mass (particle number) $m>0$, i.e.\
eigenfunctions of $H+\omega^2C=\partial_T$ and $P_-= \del_V$ of the form
\begin{equation}
\label{eq:modes}
\Phi_{E,m}(R,\vec{X},T,V)=\ex{-iET}\ex{-im V}\phi(R,\vec X)\,.
\end{equation}
Introducing spherical coordinates $\{\rho,\text{angles}\}$
in the $\vec X$-plane, schematically expanding $\phi(R,\vec{X}) =\sum Y_L \varphi_L(\rho)\phi_L(R)$
using spherical harmonics $Y_L$ (eigenfunctions of the Laplacian
on $S^{d-1}$ with eigenvalue $-L(L+d-2)$), one finds that
the solutions to the separated $\rho$-equation
that are regular at the origin and well-behaved for $\rho\ra\infty$
are given by the functions
\begin{equation}
\label{rholag}
\varphi_{L,n}(\rho)=\ex{-\tfrac{1}{2}\omega
m\rho^2}\rho^L L_n^{L-1+d/2}(\omega m\rho^2)\;\;,
\end{equation}
where the $L_n^{L-1+d/2}$ are generalised Laguerre polynomials.
The resulting $\phi_{L,n}(R)$-equation
\begin{equation}
\label{dphi}
\phi''_{L,n}-\frac{d+1}{R}\phi'_{L,n}+\left(2Em -4m\omega(n+\frac{L}{2} + \frac{d}{4}) - \omega^2 m^2 R^2 -
\frac{m^2 + m_0^2}{R^2}\right) \phi_{L,n} = 0
\end{equation}
can then be reduced to a standard confluent hypergeometric
differential equation
\be
u F''(u) + ( 1 + \frac{\Delta_+ - \Delta_-}{2} - u) F' -
(n + \frac{L}{2} + \frac{d}{4} -\frac{E}{2\omega})F = 0
\ee
with the ansatz
\be
\phi_{L,n}(R) = \ex{-\frac{1}{2}u} u^{\Delta_+/2}F(u) \;\;,
\ee
where $u = \omega m R^2$ and
\be
\Delta_{\pm}= \frac{d+2}{2} \pm \frac{1}{2}\sqrt{(d+2)^2 + 4 (m^2 + m_0^2)}\;\;.
\ee
The leading asymptotic behaviour of the two linearly independent
solutions $\phi^\pm$ as $R\ra\infty$ is
\be
\phi_{L,n}^\pm \sim \ex{\pm \tfrac{1}{2}\omega m R^2}\;\;.
\ee
This is the analogue of the
behaviour of the Bessel functions $I_{\nu}$ and $K_{\nu}$ encountered
in the standard AdS/CFT correspondence and in the $z=2$ Poincar\'e-patch
analysis of \cite{son,balamac}.
The leading behaviour of the solution near $R=0$ can also be deduced from the exact solution,
but can more readily be read off directly from
\eqref{dphi} by neglecting the constant terms
and, in particular, the trapping term $\omega^2 m^2 R^2$.
The behaviour of the solutions,
\begin{equation}\label{eq:modesnearboundary}
\phi_{L,n}\sim R^{\Delta_\pm}\,,
\end{equation}
thus necessarily becomes identical to that found in \cite{son,balamac}
in terms of plane wave Fourier modes $\phi_{\vec{k}}$, namely
\begin{equation}
\phi_{\vec k}\sim r^{\Delta_\pm}\,.
\end{equation}
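The exponents $\Delta_\pm$ can be cross-checked against the indicial equation of \eqref{dphi} near $R=0$: keeping only the $\phi''$, $\phi'/R$ and $\phi/R^2$ terms and inserting $\phi \sim R^{\Delta}$ gives $\Delta(\Delta-1)-(d+1)\Delta-(m^2+m_0^2)=0$, whose roots are precisely $\Delta_\pm$. A short numerical check (ours, with arbitrary sample values):

```python
import math

d, m, m0 = 2, 1.3, 0.7
M = m**2 + m0**2
disc = math.sqrt((d + 2)**2 + 4 * M)
Dp = 0.5 * (d + 2) + 0.5 * disc   # Delta_+
Dm = 0.5 * (d + 2) - 0.5 * disc   # Delta_-

# both exponents solve the indicial equation near R = 0
for D in (Dp, Dm):
    assert abs(D * (D - 1) - (d + 1) * D - M) < 1e-9

# and they sum to d + 2, i.e. Delta_+ + Delta_- = d + 2
assert abs(Dp + Dm - (d + 2)) < 1e-12
```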
We see that, as in the case of geodesics, the harmonic trapping term has little influence
on the dynamics near $R=0$ but strongly modifies the behaviour as $R\ra\infty$. In particular,
the solution associated with $\phi^-$ has the characteristic harmonic oscillator fall-off behaviour
\be
\Phi_{E,m}^- \sim \ex{-\tfrac{1}{2}\omega m(R^2 + \vec{X}^2)}\;\;.
\ee
To compare with the Hamiltonian analysis of \cite{bafu}, we just note that separating out the $V$-dependence
as in \eqref{eq:modes} turns the Klein-Gordon equation into a Schr\"odinger equation and that
after the unitary transformation that eliminates the first-order derivatives from
\eqref{dphi} and its counterpart for $\varphi_{L,n}(\rho)$
the corresponding Hamiltonian agrees with the radial Hamiltonians written down in \cite{bafu}.
\subsection*{Acknowledgements}
This work has been supported by the Swiss National Science Foundation and
the ``Innovations- und Kooperationsprojekt C-13'' of the Schweizerische
Universit\"atskonferenz SUK/CUS. We are grateful to Patrick Meessen and
Martin O'Loughlin for useful discussions.
\renewcommand{\Large}{\normalsize}
\part[Results of top quark production in heavy ion collisions at $\sqrt{s_{NN}} = $ 5.02 TeV and 8.16 TeV with CMS\\ \phantom{x}\hspace{4ex}\it{Luis F. Alcerro on behalf of the CMS Collaboration}]{}
\section{Introduction}
The top quark was discovered in 1995 by the D0 \cite{D0:1995jca} and CDF \cite{CDF:1995wbb} Collaborations at the Tevatron. With a mass of roughly $173$ GeV, it is mainly produced at the LHC in $t\overline{t}$ pairs via gluon fusion and decays almost exclusively into a b quark and a W boson, which in turn decays hadronically (leptonically) with a branching fraction of $\sim 66 \%$ ($\sim 33 \%$). We can then classify the $t\overline{t}$ decay modes according to the decay products of the W bosons, namely:
\begin{itemize}
\item $\ell +\text{jets}$ (semileptonic): \[t\overline{t}\rightarrow bb'W(\rightarrow \ell \nu)W'(\rightarrow q \overline{q}') \]
\item Dilepton (leptonic):
\[t\overline{t} \rightarrow bb'W(\rightarrow \ell \nu)W'(\rightarrow \ell'\nu')\]
\item All jets (hadronic):
\[t\overline{t} \rightarrow bb'W(\rightarrow q\overline{q}')W'(\rightarrow q''\overline{q}''') \]
\end{itemize}
The semileptonic channel is characterized by its high branching ratio and the dilepton channel by its purity. The all-jets channel suffers from the largest background contamination and is the most challenging one. \\
The study of the top quark is key to understanding various questions of QCD. In proton-proton collisions, top quark production is relevant to constrain the proton PDF as well as to determine critical SM parameters, such as the $|V_{tb}|$ element of the CKM matrix. Moreover, heavy-ion analyses profit from proton-proton measurements at the same center-of-mass energies. In proton-nucleus and nucleus-nucleus collisions, top quark production could serve as a probe of nuclear PDFs, and it paves the way for using the top quark to unveil the time structure of the QGP. \\
With a short lifetime of roughly $10^{-24}$ seconds, the top quark does not hadronize and decays before QCD mechanisms start acting. Unlike other jet quenching probes (e.g.\ dijets, $Z/\gamma$ + jets), which are produced simultaneously with the collision, the top quark can decay before or within the QGP, depending on its momentum. Taking ``snapshots'' at different times (or momenta) one could resolve the QGP time evolution. For this purpose, the semileptonic $t\overline{t}$ channel represents a golden channel due to its high branching fraction and signal/background discrimination. \\
A recent study shows the potential of hadronically decaying W bosons in top quark pair events to provide insights into the time structure of the QGP \cite{Apolinario:2017sob}. This is a consequence of a ``time delay'' between the moment of the collision and the moment when the $q\overline{q}$ products of the W boson start feeling the strong interaction of the QGP. The $q\overline{q}$ pair propagates for a certain decoherence time $\tau_d$ before it starts interacting with the medium, so the $t \rightarrow b + W \rightarrow q \overline{q}$ decay does not see the full QGP, only the portion after \[\tau_{tot} = \gamma_{t, top} \tau_{top} + \gamma_{t,W} \tau_W + \tau_d ,\]
with $\gamma_{t,X} = (p_{t,X}^2/m_X^2 +1)^{1/2}$. In Fig. \ref{pheno_1} we can appreciate the significant contribution of the decoherence time to the total delay time.
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{tau_d.png}
\includegraphics[width=0.47\textwidth]{tau_m.png}
\caption{Left: Total delay time $\tau_{tot}$ and the contributions from each component. Right: maximum medium quenching end-time, $\tau_m$, that
can be distinguished as a function of integrated luminosity. Plots taken from \cite{Apolinario:2017sob}. }
\label{pheno_1}
\end{center}
\end{figure}
Using this approach, Fig.~\ref{pheno_1} (right) gives an idea of the current prospects for using the top quark as a probe to resolve the time dimension of the QGP in the context of future colliders and higher luminosities. Scenarios with shorter QGP lifetimes are potentially reachable during the High-Luminosity LHC era, while future colliders could resolve the full QGP evolution. \\
On the experimental side, the CMS Collaboration \cite{CMS:2008xjf} has observed top quark pair ($t\overline{t}$) production in proton-proton collisions at center-of-mass energies of 5 \cite{CMS:2017zpm}, 7, 8 \cite{CMS:2016yys,CMS:2016csa} and 13 TeV \cite{CMS:2017xrt, CMS:2016hbk}, testing different regions in Bjorken-$x$ of the proton PDF. Top quark pair production has also been measured by CMS in proton-lead (pPb) collisions at 8.16 TeV \cite{CMS:2017hnw}, and there is evidence in lead-lead (PbPb) collisions at 5.02 TeV \cite{CMS:2020aem}.
\section{$t\overline{t}$ in $pp$ at 5.02 TeV}
The first measurement of the total $t\overline{t}$ cross section, $\sigma_{t\overline{t}}$, at 5.02 TeV was performed by the CMS Collaboration using data recorded in November 2015 \cite{CMS:2017zpm}, corresponding to an integrated luminosity of 27.4 pb$^{-1}$. For this measurement the $\ell$ + jets and dilepton channels were analyzed. The former is characterized by a final state with at least two b jets, two light jets, one lepton and momentum imbalance due to the undetected neutrino, while the latter involves two b jets, two high-energy leptons and momentum imbalance. \\
Control samples in data are used to estimate the multijet background in the $\ell$ + jets channel and the Drell-Yan background (referred to as $Z/\gamma^*$) in the dilepton channel. All other contributions in both channels are estimated from simulations. \\
In the $\ell$ + jets channel, events are classified into three b jet multiplicity categories: 0b, 1b, and $\geq 2$b (see Fig. \ref{pp_1}). The cross section is extracted by means of likelihood fits to the angular distance between the two light jets ($j,j'$) produced in the hadronic W boson decay, $\Delta R(j, j')$. In the dilepton analysis, the cross section is extracted with an event counting technique. Fig. \ref{pp_2} shows the jet multiplicity and missing transverse momentum distributions for events passing the dilepton criteria.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{CMS-TOP-16-023_Figure_001.png}
\caption{Distributions of the invariant mass $M(j,j')$ and minimum angular distance $\Delta R_{min}(j,j')$ of the light jets $j,j'$. Events are classified into 0 b (left), 1 b (center) and $\geq 2$ b tagged jets categories. Plots taken from \cite{CMS:2017zpm}.}
\label{pp_1}
\end{center}
\end{figure}
Cross sections measured in both channels are combined to determine an overall $t\overline{t}$ cross section:
\[\sigma_{t\overline{t}} = 69.5 \pm 6.1 \text{ }(stat)\pm 5.6 \text{ }(syst) \pm 1.6 \text{ }(lumi) \text{ pb.}\]
This result is in agreement with theory and has been used in a QCD analysis showing a moderate reduction of the uncertainty in the gluon PDF of the proton, as shown in Fig. \ref{pp_3}.\\ It is worth mentioning that this result has recently been updated with an increase in integrated luminosity of more than an order of magnitude compared to the data set previously mentioned \cite{CMS:2021gwv}. This analysis considers the dilepton channel only, obtaining a cross section of $60.7\pm 5.0(stat) \pm 2.8(syst)\pm 1.1 (lumi)$ pb. In combination with the $\ell+$jets result from 2015 data \cite{CMS:2017zpm}, the updated cross section measurement is
\[\sigma_{t\overline{t}} = 63.0 \pm 4.1 \text{ }(stat)\pm 3.0 \text{ }(syst+lumi) \text{ pb.}\]
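For orientation only, a simple inverse-variance-weighted average, assuming uncorrelated total uncertainties, which is a simplification of the actual CMS combination procedure, already reproduces the ballpark of such a combination:

```python
import math

def combine(measurements):
    """Inverse-variance-weighted average of (value, total_error) pairs,
    assuming uncorrelated uncertainties (a simplifying assumption)."""
    weights = [1.0 / err ** 2 for _, err in measurements]
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    error = 1.0 / math.sqrt(sum(weights))
    return value, error

# Total errors obtained by adding the quoted components in quadrature
sigma_2015 = (69.5, math.hypot(6.1, 5.6, 1.6))   # ell+jets and dilepton, 2015 data
sigma_updated = (60.7, math.hypot(5.0, 2.8, 1.1))  # dilepton, updated data set
```

This naive average lands close to the quoted combined value; the official result differs slightly because correlated systematic and luminosity uncertainties are treated properly there.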
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{CMS-TOP-16-023_Figure_003-a.png}
\includegraphics[width=0.47\textwidth]{CMS-TOP-16-023_Figure_004-a.png}
\caption{Distributions of jet multiplicity (left) and missing transverse momentum (right) for events passing the dilepton criteria. Plots taken from \cite{CMS:2017zpm}.}
\label{pp_2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{CMS-TOP-16-023_Figure_005.png}
\includegraphics[width=0.51\textwidth]{CMS-TOP-16-023_Figure_006.png}
\caption{Left: Inclusive $\sigma_{t\overline{t}}$ measurements at $\sqrt{s}=5,7,8,13$ TeV compared to theoretical predictions. Right: The relative uncertainties in the gluon distribution function of the proton as a function
of $x$ from a QCD analysis. Plots taken from \cite{CMS:2017zpm}.}
\label{pp_3}
\end{center}
\end{figure}
\section{$t\overline{t}$ in $pPb$ at 8.16 TeV}
The CMS Collaboration performed the first observation of the top quark in proton-nucleus collisions using proton-lead data at $\sqrt{s_{\rm NN}}=8.16$ TeV taken in 2016, corresponding to an integrated luminosity of 174 nb$^{-1}$ \cite{CMS:2017hnw}. For this analysis, due to its high branching fraction and moderate background contamination, only the $\ell$ + jets channel is considered. Events are required to contain exactly one muon or electron with $p_T>30$ GeV and $|\eta|<2.1$ (excluding, for electrons, the transition region $1.444<|\eta|<1.566$ between the barrel and endcap), isolated from hadronic activity. Events are also required to have at least four anti-$k_T$ jets with cone size 0.4, $p_T>25$ GeV and $|\eta|<2.5$. Jets coming from b quarks are identified based on the presence of a secondary vertex from B-hadron decays. \\
The main sources of background are QCD multijet and W + jets events (collectively labeled as ``non-top'' background), which are taken from simulation. Since the presence of two b jets is very uncommon in the non-top background, the number of jets passing a threshold of a b jet identification discriminant is used to categorize each event candidate into no (0b), exactly one (1b), or at least two (2b) tagged-jet categories. The invariant mass of the two light-flavor jets ($m_{jj'}$) produced from the decay of the W boson is used as input for a likelihood fit for the cross section extraction (see Fig. \ref{ppb_1}). As a further examination of the hypothesis that the selected data are consistent with the production of top quarks, a proxy of the top quark mass, $m_{\text{top}}$, is constructed
as the invariant mass of candidates formed by pairing the W candidate with a b-tagged jet, $t \rightarrow b j j^{'}$. Figure \ref{ppb_2} shows the distribution of $m_{\text{top}}$ reconstructed for events in the 0, 1, and 2 b-tagged jet categories.\\
The combined fit to both channels ($e/\mu$ + jets) and all three b-tagged jet categories yields the cross section:
\[\sigma_{t\overline{t}} = 45 \pm 8 \text{ nb,}\]
compatible with theory and proton-proton scaled data, as shown in Fig. \ref{ppb_3}.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{CMS-HIN-17-002_Figure_001.png}
\caption{Invariant mass distributions of the W candidate, $m_{jj'}$, in the 0 (left), 1 (center), and 2 (right) b-tagged jet categories. Plots taken from \cite{CMS:2017hnw}.}
\label{ppb_1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{CMS-HIN-17-002_Figure_002.png}
\caption{Distributions of $m_{\text{top}}$, in the 0 (left), 1 (center), and 2 (right) b-tagged jet categories. Plots taken from \cite{CMS:2017hnw}. }
\label{ppb_2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{CMS-HIN-17-002_Figure_003.png}
\caption{Total $t\overline{t}$ cross sections in $e$+jets, $\mu$+jets and combined $\ell$+jets channels compared to theory and pp scaled data. Plots taken from \cite{CMS:2017hnw}.}
\label{ppb_3}
\end{center}
\end{figure}
\section{$t\overline{t}$ in PbPb at 5.02 TeV}
The first evidence of top quark production in nucleus-nucleus collisions was reported by the CMS Collaboration using data recorded in 2018, corresponding to an integrated luminosity of 1.7 nb$^{-1}$ of lead-lead collisions at a center-of-mass energy of $\sqrt{s_{\rm NN}}=5.02$ TeV \cite{CMS:2020aem}. The purity of the dilepton channel is exploited in this analysis, with and without the inclusion of information from b tagged jets. \\
The data is filtered to contain two opposite sign (OS) leptons with $p_T>25$ (20) GeV and $|\eta|<2.1$ (2.4) for electrons (muons) with no nearby hadronic activity. The presence of b-tagged jets is further exploited in a second method to enhance the signal. Jets are tagged using information of secondary vertices. \\
The main sources of background are Drell-Yan (referred to as ``$Z/\gamma^*$'') and W + jets and QCD multijet events (referred to as ``nonprompt''). Drell-Yan processes are modeled from Monte Carlo simulation with corrections obtained from data, while the nonprompt background is derived directly from control regions in the data. \\
In both methods, a boosted decision tree (BDT) is trained on genuine high-$p_T$ leptons to discriminate between signal and background processes (see Figs. \ref{pbpb_1} and \ref{pbpb_2}). In order to minimize the effects of the imprecise knowledge of jet properties in the heavy-ion environment, the BDTs use kinematic properties only. Likelihood fits to binned BDT distributions are performed separately for the two methods to extract the cross section, obtaining:
\[\sigma_{t\overline{t}} = 2.03^{+0.71}_{-0.64} \text{ }\mu \text{b}\]
with the $2\ell_{\text{OS}} +$ b-jets method and
\[\sigma_{t\overline{t}} = 2.54^{+0.84}_{-0.74} \text{ }\mu \text{b}\]
with $2\ell_{\text{OS}}$. The results are compatible with theoretical calculations as well as with pp data at 5.02 TeV scaled by the number of binary nucleon-nucleon collisions in PbPb, as shown in Fig. \ref{pbpb_3}.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{CMS-HIN-19-001_Figure_002.png}
\caption{BDT discriminator distributions in the $e^+e^-$ (left), $\mu^+ \mu^-$ (middle), and $e^\pm \mu^\mp$ (right) final states. Plots taken from \cite{CMS:2020aem}.}
\label{pbpb_1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{CMS-HIN-19-001_Figure_003.png}
\caption{BDT discriminator distributions in the $e^+e^-$ (left), $\mu^+ \mu^-$ (middle), and $e^\pm \mu^\mp$ (right) final states separately for the 0b-, 1b-, and 2b-tagged jet multiplicity categories. Plots taken from \cite{CMS:2020aem}.}
\label{pbpb_2}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{CMS-HIN-19-001_Figure_004.png}
\caption{Inclusive $t\overline{t}$ cross sections measured with two methods in the combined $e^+e^-$, $\mu^+ \mu^-$, and $e^\pm \mu^\mp$ final states in PbPb collisions at $\sqrt{s_{\rm NN}}= 5.02$ TeV, and pp results at the same energy. Plots taken from \cite{CMS:2020aem}.}
\label{pbpb_3}
\end{center}
\end{figure}
\section{Summary}
The CMS experiment has demonstrated the capability to perform top quark studies in different collision systems and at different energies, obtaining results in agreement with simulations. In particular, the evidence of top quark production in nucleus-nucleus collisions paves the way to explore the time evolution of the QGP using the top quark at higher LHC luminosities and future colliders.
\section*{Acknowledgements}
This work is supported in whole by the Nuclear Physics (NP) program of the U.S. Department of Energy (DOE) under award number \href{https://pamspublic.science.energy.gov/WebPAMSExternal/Interface/Common/ViewPublicAbstract.aspx?rv=00d4fe0f-48a0-4d4a-baf1-c70867d9e499&rtc=24&PRoleId=10}{DE-FG02-96ER40981}.
\nocite{*}
\bibliographystyle{auto_generated.bst}
\part[Light hadron and photon production in $p{\rm Pb}$ collisions at LHCb\\ \phantom{x}\hspace{4ex}\it{Thomas Boettcher on behalf of the LHCb Collaboration}]{}
\section{Introduction}
The LHCb detector is a general purpose detector at forward rapidity at
the LHC~\cite{LHCb:2014set}. Because of its forward acceptance, LHCb
is also able to study the structure of colliding hadrons in a
kinematic regime complementary to that probed by central region
detectors. LHCb's kinematic coverage is shown in
Fig.~\ref{fig:coverage}. Proton-lead collisions at LHCb probe
interactions between low- and high-$x$ partons. Hadron and photon
production in proton-lead collisions at LHCb are sensitive to nuclear
partons with momentum fractions $x$ as small as $10^{-6}$. As a
result, LHCb data can constrain nuclear parton distribution functions
(nPDFs) in the relatively unexplored low-$x$
regime~\cite{Helenius:2014qla}. Data in this regime can also constrain
models of parton saturation, such as the color glass condensate (CGC)
effective field theory~\cite{Lappi:2013zma}. Furthermore, by
alternating the directions of the proton and lead beams, LHCb is able
to collect data at large backward rapidities. The LHCb forward and
backward data cover a large range in $x$ and can be used to constrain
models of particle production in nuclear collisions and to study the
onset of low-$x$ nuclear effects.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{figs/coverage_pPb.pdf}
\caption{LHCb coverage in the $x$-$Q^2$ plane for $p{\rm Pb}$
collisions at $\sqrt{s_{\rm NN}}=8.16\,{\rm TeV}$.}
\label{fig:coverage}
\end{figure}
\section{$D^0$ meson production}
LHCb has measured $D^0$ meson production in $p{\rm Pb}$ collisions at
nucleon-nucleon center-of-mass energy $\sqrt{s_{\rm NN}}=5.02\,{\rm
TeV}$~\cite{LHCb:2017yua}. The measured nuclear modification factor
is shown in Fig.~\ref{fig:dprod}. The measurement was performed in
both the proton-going (forward) and lead-going (backward) directions
and is the first measurement of its kind at zero transverse
momentum. As a result, the LHCb $D^0$ production measurement is
particularly sensitive to nPDFs at low $x$ and low $Q^2$ and is
potentially sensitive to saturation effects. Results are compared to
perturbative QCD (pQCD) calculations using the
EPS09~\cite{Eskola:2009uj} and nCTEQ15~\cite{Kovarik:2015cma} nPDF
sets. The LHCb results agree with the nPDF predictions in both the
forward and backward directions. Additionally, the forward result is
also compared to a CGC calculation, showing good
agreement~\cite{Ducloue:2015gfa}. The forward measurement in
particular is much more precise than the nPDF prediction, suggesting
that this measurement can help constrain nPDFs at low $x$.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figs/dprod_Fig5a.pdf}
\includegraphics[width=0.48\textwidth]{figs/dprod_Fig5b.pdf}
\caption{LHCb measurement of the $D^0$ nuclear modification factor
in $p{\rm Pb}$ collisions at $\sqrt{s_{\rm NN}}=5.02\,{\rm
TeV}$~\cite{LHCb:2017yua} at (left) forward and (right) backward
rapidity. The measurement is performed for center of mass rapidity
$2.5<|y_{\rm CM}|<4.0$ for $p_{\rm T}<6\,{\rm GeV}/c$ and
$2.5<|y_{\rm CM}|<3.5$ for \mbox{$6<p_{\rm T}<10\,{\rm GeV}/c$}.}
\label{fig:dprod}
\end{figure}
The impact of LHCb's measurement of $D^0$ production on nPDFs has been
studied in Refs.~\cite{Kusina:2017gkz} and \cite{Eskola:2019bgf}. Both
analyses incorporate LHCb data using a Hessian reweighting
technique~\cite{Eskola:2019dui}. Incorporating the LHCb $D^0$
production data results in much smaller gluon nPDF uncertainties,
especially at low $x$. These analyses demonstrate LHCb's unique
ability to study nuclear structure at low $x$.
\section{Charged hadron production}
Measurements of $D^0$ production are not sensitive to nPDFs at
momentum scales $Q$ below the $D^0$ mass. Measurements of inclusive
charged particle production can be used to access lower scales. LHCb
has measured the nuclear modification factor of charged particles in
$p{\rm Pb}$ collisions at $\sqrt{s_{\rm NN}}=5.02\,{\rm
TeV}$~\cite{LHCb:2021vww}. Results are shown in
Fig.~\ref{fig:chg}. The measurement is performed as a function of both
$p_{\rm T}$ and center-of-mass pseudorapidity $\eta$. Total
uncertainties of less than $5\%$ are achieved in most bins.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/chg_Fig2-crop.pdf}
\caption{LHCb measurement of the charged-particle nuclear
modification factor in $p{\rm Pb}$ collisions at $\sqrt{s_{\rm
NN}}=5.02\,{\rm TeV}$~\cite{LHCb:2021vww}.}
\label{fig:chg}
\end{figure}
Results are compared to next-to-leading order (NLO) pQCD calculations
using the EPPS16 nPDF set~\cite{Eskola:2016oht} and DSS14
fragmentation functions~\cite{deFlorian:2014xna}. In addition, the
forward data are compared to CGC calculations~\cite{Lappi:2013zma}. At
forward $\eta$, the data agrees with the pQCD prediction and is much
more precise than the theoretical calculation. The data also shows
greater suppression at low $p_{\rm T}$ than predicted by the CGC
calculation. At backward $\eta$, the data shows a much larger
enhancement than predicted by the nPDF calculation. The backward data
are also compared to a pQCD calculation that includes fully coherent
energy loss in the nucleus~\cite{Kang:2013ufa}. This model
successfully describes a similar enhancement observed in $p{\rm Au}$
collisions at $\sqrt{s_{\rm NN}}=200\,{\rm GeV}$ by the PHENIX
collaboration~\cite{PHENIX:2019gix}, but fails to describe the LHCb
data. The failure of these calculations to describe the LHCb data
suggests contributions from other nuclear effects, such as the Cronin
effect, radial flow, or final state
recombination~\cite{Hwa:2004zd}. Measurements of the nuclear
modification factors of identified particles are needed to determine
the origin of the enhancement.
The LHCb charged particle production data are compared to measurements
from ALICE~\cite{ALICE:2018vuu} and CMS~\cite{CMS:2016xef} using
approximations of $x$ and $Q^2$ given by
\begin{align}
Q^2_{\rm exp}&=m^2+p_{\rm T}^2,\\
x_{\rm exp}&=\frac{Q_{\rm exp}}{\sqrt{s_{\rm NN}}}e^{-\eta_{\rm CM}},
\end{align}
where $m=256~{\rm MeV}/c^2$ is the average mass of charged particles
in $p{\rm Pb}$ collisions generated by EPOS-LHC. The comparison is
shown in Fig.~\ref{fig:chgcomp}. The results show continuous evolution
in $x_{\rm exp}$ at various values of $Q^2_{\rm exp}$ across multiple
experiments.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/chg_Fig3-crop.pdf}
\caption{Comparison of LHCb~\cite{LHCb:2021vww},
ALICE~\cite{ALICE:2018vuu}, and CMS~\cite{CMS:2016xef}
charged-particle nuclear modification factors as a function of
$x_{\rm exp}$ for various values of $Q^2_{\rm exp}$.}
\label{fig:chgcomp}
\end{figure}
\section{Neutral pion and direct photon production}
Inclusive charged particles and $\pi^0$s share similar production
processes and probe similar kinematic regimes. Additionally, $\pi^0$s
are reconstructed using their decays to two photons, so many of the
systematic uncertainties will be independent from systematic
uncertainties on charged particle production. As a result, a
measurement of $\pi^0$ production at LHCb would provide a
complementary probe of nPDFs at low $x$.
Identified particle measurements could also help explain the origin of
the charged-particle excess observed by LHCb in $p{\rm Pb}$ collisions
at $\sqrt{s_{\rm NN}}=5.02\,{\rm TeV}$. Radial flow, for example,
would produce a larger charged particle enhancement than $\pi^0$
enhancement due to the small $\pi^0$
mass~\cite{Ayala:2006bc}. Alternatively, enhanced nuclear parton
densities would produce similar charged particle and $\pi^0$
enhancements~\cite{Helenius:2014qla}.
A measurement of the $\pi^0$ nuclear modification factor in $p{\rm
Pb}$ collisions at $\sqrt{s_{\rm NN}}=8.16\,{\rm TeV}$ is in
progress at LHCb. Photon pairs from high-$p_{\rm T}$ forward $\pi^0$
decays are often reconstructed as single calorimeter clusters. To
avoid these so-called ``merged $\pi^0$s'', $\pi^0$s are reconstructed
using photons that convert to electron-positron pairs in the detector
material. Converted photons are combined with photons reconstructed in
the ECAL to produce $\pi^0$ candidates. Yields are extracted using
fits to the diphoton mass spectrum. An example fit is shown in
Fig.~\ref{fig:fit}. Backgrounds include combinatorial background, as
well as bremsstrahlung background consisting of converted photons
combined with the bremsstrahlung radiation from one of the conversion
electrons.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{figs/int_fit_pp.pdf}
\caption{Fit of the diphoton mass spectrum measured at LHCb in $pp$
collisions at $\sqrt{s}=13\,{\rm TeV}$. The total fit is given by
the solid blue line. The combinatorial background is shown in
light gray and the bremsstrahlung background is shown in dark
gray.}
\label{fig:fit}
\end{figure}
Fig.~\ref{fig:rpafake} shows NLO pQCD predictions for the $\pi^0$
nuclear modification factor~\cite{Helenius:2014qla}, as well as
expected LHCb uncertainties. These uncertainties are dominated by the
photon detection efficiency determination. The LHCb measurement is
expected to be much more precise than the nPDF calculation in the
forward region and could provide strong constraints at low $x$ in
future global nPDF fits.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/rpa_fake_backward.pdf}
\includegraphics[width=\textwidth]{figs/rpa_fake_forward.pdf}
\caption{Next-to-leading-order pQCD calculations of the $\pi^0$
nuclear modification factor~\cite{Helenius:2014qla} and expected
LHCb uncertainties at (top) backward and (bottom) forward rapidity
and $\sqrt{s_{\rm NN}}=8.16\,{\rm TeV}$.}
\label{fig:rpafake}
\end{figure}
Measuring the $\pi^0$ $p_{\rm T}$ spectrum is also necessary for
direct photon searches. Direct photons are photons that are not
produced in meson decays. They may be produced promptly in hard QCD
processes or via fragmentation in the parton shower. Direct photon
production at forward rapidity is an ideal probe of gluon
saturation. Furthermore, if a collision produces quark gluon plasma
(QGP), the QGP will radiate low-$p_{\rm T}$ thermal photons. Because
most photons produced in hadron collisions come from $\pi^0$ decays,
precise knowledge of the $\pi^0$ $p_{\rm T}$ spectrum is needed in
order to subtract backgrounds from decay photons. As a result, a
measurement of $\pi^0$ production at LHCb will provide the basis for
future direct photon searches.
\section{Conclusions}
The LHCb detector's unique angular acceptance allows it to study
nuclear effects in unexplored kinematic regimes. LHCb charm production
data has already been used to constrain nPDFs, especially at
low-$x$. Measurements of light hadron and photon production can
provide additional constraints at low $Q^2$, potentially probing the
effects of parton saturation. Additionally, LHCb sees a large
enhancement of charged-particle production in $p{\rm Pb}$ collisions
at backward rapidity that is not adequately explained by available
theoretical calculations. A measurement of $\pi^0$ production at LHCb
is underway and could clarify the origin of this enhancement.
Comments: Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1 2021.
\section*{Acknowledgements}
The author is supported by the U.S. National Science Foundation.
\iffalse
\part[Novel Effects in QCD: Intrinsic Heavy Quarks, Color Transparency, and the Implications of Diffractive Contributions to Deep Inelastic Scattering for the Momentum Sum Rules\\ \phantom{x}\hspace{4ex}\it{Stanley J. Brodsky}]{}
\section{Intrinsic Heavy Quarks}
Quantum Chromodynamics (QCD), the underlying theory of strong interactions, with quarks and gluons as the fundamental degrees of freedom, predicts that the heavy quarks in the nucleon sea have both perturbative ``extrinsic'' and nonperturbative ``intrinsic'' origins. The extrinsic sea arises from gluon splitting, which is triggered by a probe in the reaction; it can be calculated order-by-order in perturbation theory. In contrast, the intrinsic sea is encoded in the nonperturbative wave functions of the nucleon eigenstate.
The existence of nonperturbative intrinsic charm (IC) was originally proposed in the BHPS model~\cite{Brodsky:1980pb} and developed further in subsequent papers~\cite{Brodsky:1984nx,Harris:1995jx,Franz:2000ee}. The intrinsic contribution to the heavy quark distributions of hadrons at high $x$ corresponds to Fock states such as $|uud Q \bar Q\rangle$, where the heavy quark pair is multiply connected to two or more valence quarks of the proton. It is maximal at minimal off-shellness; i.e., when the constituents all have the same rapidity $y_I$, and thus
$x_i \propto \sqrt{m_i^2+ { \vec k_{\perp i}}^2 }$. Here $x= {k^+\over P^+} = {k^0 + k^3\over P^0 + P^3}$ is the frame-independent light-front momentum fraction carried by the heavy quark in a hadron with momentum $P^\mu$.
In the case of deep inelastic lepton-proton scattering, the LF momentum fraction variable $x$ in the proton structure functions can be identified with the Bjorken variable
$x = {Q^2\over 2 p \cdot q}.$
These heavy quark contributions to the nucleon's PDF thus peak at large $x_{bj}$ and have important implications for LHC and EIC collider phenomenology, including Higgs and heavy hadron production at high $x_F$~\cite{Royon:2015eya}.
They also open up new opportunities to study heavy quark phenomena in fixed target experiments such as the proposed AFTER~\cite{Brodsky:2015fna} fixed target facility at CERN. Other applications are presented in Refs.~\cite{Brodsky:2020zdq,Bednyakov:2017vck,Brodsky:2016fyh}.
The existence of intrinsic heavy quarks also illuminates fundamental aspects of nonperturbative QCD.
In light-front (LF) Hamiltonian theory, the intrinsic heavy quarks of the proton are associated with non-valence Fock states
such as $|uud Q \bar Q\rangle$ in the hadronic eigenstate of the LF Hamiltonian; this implies that the heavy quarks are multiply connected to the valence quarks. The probability for the heavy-quark Fock states scales as $1/m^2_Q$ in non-Abelian QCD. Since the LF wavefunction is maximal at minimum off-shell invariant mass, i.e., at equal rapidity, the intrinsic heavy quarks carry large momentum fractions $x_Q$. A key characteristic is the different momentum and spin distributions of the intrinsic $Q$ and $\bar Q$ in the nucleon, for example the charm-anticharm asymmetry, since the comoving quarks are sensitive to the global quantum numbers of the nucleon~\cite{Brodsky:2015fna}. Furthermore, since all of the intrinsic quarks in the $|u[ud] Q \bar Q\rangle$ Fock state have rapidities similar to those of the valence quarks, they can re-interact, leading to significant $Q$ vs $\bar Q$ asymmetries. The concept of intrinsic heavy quarks was also proposed in the context of meson-baryon fluctuation models~\cite{Navarra:1995rq,Pumplin:2005yf}, where intrinsic charm was identified with the two-body state $\bar{D}^0(u\bar{c})\Lambda^+_c(udc)$ in the proton. This identification predicts large asymmetries in the charm versus anticharm momentum and spin distributions. Since these heavy quark distributions depend on the correlations determined by the valence quark distributions, they are referred to as {\it intrinsic} contributions to the hadron's fundamental structure. A specific analysis of the intrinsic charm content of the deuteron is given in Ref.~\cite{Brodsky:2018zdh}.
In contrast, the contributions to the heavy quark PDFs arising from gluon splitting are symmetric in $Q$ vs $\bar Q$. The contributions generated by DGLAP evolution at low $x$ can be considered {\it extrinsic} contributions, since they only depend on the gluon distribution. The gluon splitting contribution to the heavy-quark degrees of freedom is perturbatively calculable using DGLAP evolution. To first approximation, the perturbative extrinsic heavy quark distribution falls as $(1-x)$ times the gluon distribution and is limited to low $x_{bj}$.
Thus, unlike the conventional $\log m^2_Q$ dependence of the low $x$ extrinsic gluon-splitting contributions, the probabilities for the intrinsic heavy quark Fock states at high $x$ scale as $1\over m_Q^2$ in non-Abelian QCD, and the relative probability of intrinsic bottom to charm is of order ${m^2_c\over m^2_b} \sim {1\over 10}.$
In contrast, the probability for a higher Fock state containing heavy leptons in a QED atom scales as $1\over m_\ell^4$, corresponding to the twist-8 Euler-Heisenberg light-by-light self-energy insertion. Detailed derivations based on the OPE have been given in Refs.~\cite{Brodsky:1984nx,Franz:2000ee}.
In an important recent development~\cite{Sufian:2020coz}, the difference of the charm and anticharm quark distributions in the proton, $\Delta c(x) = c(x) -\bar c(x)$, has been computed from first principles in QCD using lattice gauge theory. A key theoretical tool is the computation of the charm and anticharm quark contribution
to the electromagnetic form factor of the proton, which would vanish if $c(x) =\bar c(x)$. The exclusive-inclusive connection, together with the LFHQCD formalism, predicts the asymmetry of structure functions $c(x)- \bar c(x)$, which is odd under charm-anticharm interchange.
The predicted $c(x)- \bar c(x)$ distribution is large and nonzero at $x \sim 0.4$, consistent with the expectations of intrinsic charm.
The $c(x)$ vs.\ $\bar c(x)$ asymmetry can also be understood physically by identifying the $|uud c\bar c\rangle$ Fock state with the $|\Lambda_{udc} D_{u\bar c}\rangle$ off-shell excitation of the proton.
See Fig.~\ref{fig:ccbardis}. A related application of lattice gauge theory to the nonperturbative strange-quark sea is given in Ref.~\cite{Sufian:2018cpj}.
\begin{figure}[htp]
\begin{center}
\setlength\belowcaptionskip{-2pt}
\includegraphics[width=3.2in, height=2.3in]{ccbarplotjpsilog.pdf}
\caption{The difference of charm and anticharm structure functions $x[c(x)-\bar{c}(x)]$ obtained from the LFHQCD formalism using the lattice QCD input of charm electromagnetic form factors $G^c_{E,M}(Q^2)$.
The outer cyan band indicates an estimate of the systematic uncertainty in the $x[c(x)-\bar{c}(x)]$ distribution obtained from a variation of the hadron scale $\kappa_c$ by 5\%. From Ref.~\cite{Sufian:2020coz}.\label{fig:ccbardis}}
\end{center}
\end{figure}
There have been many phenomenological calculations involving a non-zero IC component, which can explain anomalies in the experimental data and predict novel signatures of IC in upcoming experiments~\cite{Brodsky:2015fna}. A recent measurement by LHCb is shown in Fig.~\ref{Bledslides1LHCb.pdf}.
The observed spectrum exhibits a sizable enhancement at forward $Z$ rapidities, consistent with the effect expected if the proton contains the $ |uud \bar c c>$ Fock state predicted by LFQCD~\cite{LHCb:2021stx}.
\begin{figure}
\begin{center}
\includegraphics[height= 10cm,width=15cm]{Bledslides1LHCb.pdf}
\end{center}
\caption{The charm distribution in the proton determined from LHCb measurements of
$Z$ bosons produced in association with charm at forward rapidity~\cite{LHCb:2021stx}.}
\label{Bledslides1LHCb.pdf}
\end{figure}
Thus QCD predicts two separate and distinct contributions to the heavy quark distributions $q(x,Q^2)$ of the nucleons at low and high $x$.
Here $x= {k^+\over P^+} = {k^0 + k^3\over P^0 + P^3}$ is the frame-independent light-front momentum fraction carried by the heavy quark in a hadron with momentum $P^\mu$.
In the case of deep inelastic lepton-proton scattering, the LF momentum fraction variable $x$ in the proton structure functions can be identified with the Bjorken variable
$x = {Q^2\over 2 p \cdot q}.$
At small $x$, heavy-quark pairs are dominantly produced via the standard gluon-splitting subprocess $g \to Q \bar Q$.
The presence of the heavy quarks in the nucleon from this contribution is a result of the QCD DGLAP evolution of the light quark and gluon
PDFs.
\section{Color Transparency}
One of the most striking properties of QCD phenomenology is ``color transparency"~\cite{Brodsky:1988xz}, the reduced absorption of a hadron as it propagates through nuclear matter, if it is produced at high transverse momentum in a hard exclusive process, such as elastic lepton-proton scattering. The nuclear absorption reflects the size of the color dipole moment of the propagating hadron; i.e., the separation between its colored constituents.
The key quantity which measures the transverse size of a scattered hadron in a given Fock state is~\cite{sjbGdT}
$$\vec a_\perp = \sum_{i=1}^{n-1} x_i \vec b_{\perp i}.$$
The LF QCD formula for form factors can then be written compactly in impact space as
$$F(Q^2) = \int^1_0 dx\, d^2 \vec a_\perp\, e^{i \vec q_\perp\cdot \vec a_\perp}\, q(x, \vec a_\perp),$$
and thus
$$a^2_\perp(Q^2) = -4\,{{d \over dQ^2} F(Q^2) \over F(Q^2)}$$
measures the slope of the hadron form factor; the minus sign makes $a^2_\perp$ positive, since $F(Q^2)$ falls with $Q^2$.
We can use LF holography to predict for the valence Fock state
$$a^2_\perp = 4 {\tau-1\over Q^2}$$
which shows that, as expected, the hadronic size decreases with the momentum transfer $Q^2$, and that the size of the hadron increases with its twist $\tau$.
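The twist dependence can be checked in one line. Using only the leading-twist counting rule $F(Q^2) \propto (Q^2)^{1-\tau}$ at large $Q^2$ (a sketch, with the sign convention $a^2_\perp = -4\, {d\over dQ^2}\log F$ so that $a^2_\perp$ is positive for a falling form factor):
$$a^2_\perp(Q^2) \ = \ -4\, {d \over dQ^2} \log F(Q^2) \ = \ -4\, {d \over dQ^2}\left[(1-\tau)\log Q^2\right] \ = \ {4(\tau-1)\over Q^2}.$$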
\begin{figure}
\begin{center}
\includegraphics[height= 10cm,width=15cm]{Bledslides10.pdf}
\end{center}
\caption{Predictions from LF holography for the effective transverse size of hadrons.}
\label{Bledslides10.pdf}
\end{figure}
A key prediction is that the size $a_\perp$ is smaller for mesons ($\tau =2$) than for baryons with $\tau=3,4$, corresponding to the quark-diquark Fock states with $L=0$ and $L=1$, respectively.
In fact, the proton is predicted to have ``two-stage" color transparency: onset at $Q^2 > 14~GeV^2$ for the $ |[ud] u>$ twist-3 Fock state with orbital angular momentum $L=0$,
and a later onset at $Q^2 > 16~GeV^2$ for its $L=1$ twist-4 component.
LF holography predicts equal quark probability for the $L=0$ and $L=1$ Fock states.
\begin{figure}
\begin{center}
\includegraphics[height= 10cm,width=15cm]{Bledslides9.pdf}
\end{center}
\caption{Two-stage color transparency and transmission probability of the proton in a nuclear medium from LF holography.}
\label{Bledslides9.pdf}
\end{figure}
Color transparency is thus predicted to occur at a significantly higher $Q^2$ for baryons $(Q^2 > 14~GeV^2)$ than for mesons $(Q^2 > 4~GeV^2)$.
This is consistent with a recent test of color transparency at JLab which has confirmed color transparency for the $\pi$ and $\rho$~\cite{HallC:2020ijh}; however, the measurements
in this experiment are limited to values of $Q^2$ below the range where proton color transparency is predicted to occur.
Remarkably, color transparency for the production of an intact deuteron nucleus in $e A \to d + X_{(A-2)}$ quasi-exclusive reactions should be observed at $Q^2 > 50~GeV^2$. This can be tested in $e d \to e d $ elastic scattering in a nuclear background.
It has been speculated~\cite{Caplow-Munro:2021xwi} that the ``Feynman mechanism", in which the behavior of the struck quark at $x \sim 1$ in the proton LFWF plays a key role for hard exclusive hadronic processes,
does not predict color transparency. However, LF wavefunctions are functions of the invariant mass squared
$\sum_i {\vec k^2_{\perp i }+ m^2_i \over x_i},$
so that their behavior at large $k_\perp$ and large $x$ is correlated. Thus color transparency occurs for scattering amplitudes involving both the large transverse momentum and large $x$ domains. The three-dimensional LF spatial symmetry of LFWFs also leads to the
exclusive-inclusive connection, relating the counting rules for the behavior of form factors at large $Q^2$ and structure functions at $x_{bj} \to 1$.
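These counting rules can be stated compactly (a standard sketch; $\tau$ is the twist, i.e.\ the number of constituents of the leading Fock state):
$$F(Q^2) \ \sim \ \Big({1\over Q^2}\Big)^{\tau-1}, \qquad q(x) \ \sim \ (1-x)^{2\tau-3} \quad {\rm as}\ x \to 1,$$
so the large-$Q^2$ fall-off of the form factor and the $x \to 1$ behavior of the structure function are controlled by the same twist $\tau$.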
\section{Is the Momentum Sum Rule Valid for Nuclear Structure Functions? }
Sum rules for DIS processes are analyzed using the operator product expansion of the forward virtual Compton amplitude, assuming it depends in the limit $Q^2 \to \infty$ on matrix elements of local operators such as the energy-momentum tensor. The moments of the structure function and other distributions can then be evaluated as overlaps of the target hadron's light-front wavefunction, as in the Drell-Yan-West formulae for hadronic form factors~\cite{Brodsky:1980zm,Liuti:2013cna,Mondal:2015uha,Lorce:2011dv}.
The real phase of the resulting DIS amplitude and its OPE matrix elements reflects the real phase of the stable target hadron's wavefunction.
The ``handbag" approximation to deeply virtual Compton scattering also defines the ``static" contribution~\cite{Brodsky:2008xe,Brodsky:2009dv} to the measured parton distribution functions (PDF), transverse momentum distributions, etc. The resulting momentum, spin and other sum rules reflect the properties of the hadron's light-front wavefunction.
However, final-state interactions, which occur {\it after} the lepton scatters on the quark, can give non-trivial contributions to deep inelastic scattering processes at leading twist and thus survive at high $Q^2$ and high $W^2 = (q+p)^2.$ For example, the pseudo-$T$-odd Sivers effect~\cite{Brodsky:2002cx} is directly sensitive to the rescattering of the struck quark.
Similarly, diffractive deep inelastic scattering involves the exchange of a gluon after the quark has been struck by the lepton~\cite{Brodsky:2002ue}. In each case the corresponding DVCS amplitude is not given by the handbag diagram since interactions between the two currents are essential.
These ``lensing" corrections survive when both $W^2$ and $Q^2$ are large since the vector gluon couplings grow with energy. Part of the phase can be associated with a Wilson line as an augmented LFWF~\cite{Brodsky:2010vs}, which does not affect the moments.
The cross section for deep inelastic lepton-proton scattering (DIS), $\ell p \to \ell' p' X$,
includes a diffractive deep inelastic scattering (DDIS) contribution
in which the proton remains intact with a large longitudinal momentum fraction $x_F>0.9$
and small transverse momentum. The DDIS events, which can be identified with Pomeron exchange in the
$t$-channel, account for approximately 10\%
of all of the DIS events.
Diffractive DIS contributes at leading twist (Bjorken scaling) and is the essential component of the two-step amplitude which causes shadowing and antishadowing of the nuclear PDF~\cite{Brodsky:2021jmj}. It is important to analyze whether the momentum and other sum rules derived from the OPE in terms of local operators remain valid when these dynamical rescattering corrections to the nuclear PDF are included. The OPE is derived assuming that the LF time separation between the virtual photons in the forward virtual Compton amplitude
$\gamma^* A \to \gamma^* A$ scales as $1/Q^2$.
However, the propagation of the vector system $V$ produced by the diffractive DIS interaction on the front face and its inelastic interaction with the nucleons in the nuclear interior $V + N_b \to X$ are characterized by a longer LF time which scales as $ {1/W^2}$. Thus the leading-twist multi-nucleon processes that produce shadowing and antishadowing in a nucleus are evidently not present in the $Q^2 \to \infty$ OPE analysis.
Thus, when one measures DIS, one automatically includes the leading-twist Bjorken-scaling DDIS events as a contribution to the DIS cross section, whether or not the final-state proton is explicitly detected. In such events, the missing momentum fraction could be misidentified with the light-front momentum fraction carried by sea quarks or gluons in the proton's Fock structure. The underlying QCD Pomeron-exchange amplitude which produces the DDIS events thus does not obey the operator product expansion nor satisfy momentum sum rules -- the quark and gluon distributions measured in DIS experiments will be misidentified, unless the measurements explicitly exclude the DDIS events~\cite{Brodsky:2019jla,Brodsky:2021bkt}.
The Glauber propagation of the vector system $V$ produced by the diffractive DIS interaction on the nuclear front face and its subsequent inelastic interaction with the nucleons in the nuclear interior $V + N_b \to X$ occurs after the lepton interacts with the struck quark.
Because of the rescattering dynamics, the DDIS amplitude acquires a complex phase from Pomeron and Regge exchange; thus final-state rescattering corrections lead to nontrivial ``dynamical" contributions to the measured PDFs; i.e., they involve the physics aspects of the scattering process itself~\cite{Brodsky:2013oya}. The $ I = 1$ Reggeon contribution to diffractive DIS on the front-face nucleon leads to flavor-dependent antishadowing~\cite{Brodsky:1989qz,Brodsky:2004qa}. This could explain why the NuTeV charged-current measurement of $\nu A \to \mu X$ scattering does not appear to show antishadowing,
in contrast to deep inelastic electron-nucleus scattering~\cite{Schienbein:2007fs}.
Again, the corresponding DVCS amplitude is not given by the handbag diagram since interactions between the two currents are essential to explain the physical phenomena.
It should be emphasized that shadowing in deep inelastic lepton scattering on a nucleus involves nucleons at or near the front surface; i.e., the nucleons facing the incoming lepton beam. This geometrical orientation is not built into the frame-independent nuclear LFWFs used to evaluate the matrix elements of local currents. Thus the dynamical phenomena of leading-twist shadowing and antishadowing appear to invalidate the sum rules for nuclear PDFs. The same complications occur in the leading-twist analysis of deeply virtual Compton scattering $\gamma^* A \to \gamma^* A$ on a nuclear target.
\section*{Acknowledgements}
Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1, 2021. I thank Christophe Royon for inviting me to make a contribution to this meeting.
I am also grateful to my collaborators on our recent work, especially Guy de T\'eramond, Alexandre Deur, Guenter Dosch, Sabbir Sufian, Simonetta Liuti, Ivan Schmidt, and Valery Lyubovitskij. This work is supported by the Department of Energy, Contract DE--AC02--76SF00515. SLAC-PUB-17651
\iffalse
\part[Measurement of $W$ and $Z$ boson production in association with jets at ATLAS\\ \phantom{x}\hspace{4ex}\it{Laura Fabbri on behalf of the ATLAS Collaboration}]{}
\section{Introduction}
From 2015 to 2018 the ATLAS experiment \cite{Atlas} collected nearly $140$ fb$^{-1}$ of ``good for physics'' data, most of it with a pile-up of up to 40 simultaneous interactions per bunch-crossing.
The physics program at the LHC is among the most ambitious and successful in high-energy physics. The huge dataset available and the well-understood detector performance allow for precision measurements, access to rare processes, an extensive research program on weak interactions and physics at or above the TeV scale, and studies of new states of matter.
The present article focuses on the latest ATLAS results on Standard Model physics.
The Standard Model is an extremely predictive theory successfully verified by experiments for about fifty years. Since the discovery of the Higgs boson, ten years ago, the two main goals are to test and validate the model in a new energy regime, improving the accuracy of parameter measurements, and to search for new physics, both directly and indirectly, trying to access new physics effects in the collected events.
One of the more promising signals for obtaining such results is the production of a vector boson in association with jets ($V$+jets).
Indeed, this signal has two advantages: a large production cross section and a clear experimental signature which can be precisely measured thanks to the easily identifiable decays of the $Z$ boson to charged leptonic final states. That makes $Z$+jets a ``standard candle'' for testing the Standard Model.
Furthermore, studying $Z$+jets is very important to improve our ability to select specific signals. In fact, such processes constitute a non-negligible background in studies of the Higgs boson \cite{higgs1,higgs2}
and in searches for new phenomena \cite{newP1,newP2,newP3}, which often exploit the presence of high-$ p_\mathrm{T}$ jets to enrich a data sample with potential signal. The extrapolation of $W/Z$+jets backgrounds from control regions to signal regions and the modelling of the final discriminant distribution largely benefit from reliable predictions. Improving our predictions is mandatory for the success of these analyses.
Additionally, this measurement constitutes a powerful test of perturbative quantum chromodynamics (pQCD) \cite{QCD1,QCD2}
and, in the case of high-energy jets, it allows one to probe the interplay of QCD with higher-order electroweak (EW) processes \cite{EW1,EW2,EW3,EW4}.
Furthermore, looking at $Z+b$-jets, where the jets originate from $b$-quarks in proton-proton ($pp$) collisions, provides a test of the production mechanism of heavy-flavoured partons. In particular, current predictions for $Z+b$-jets production are known at next-to-leading-order (NLO) accuracy in pQCD, and they can be derived in either a 4-flavour number scheme (4FNS) or a 5-flavour number scheme (5FNS) \cite{FNS1,FNS2,FNS3,FNS4}. In the 4FNS, $b$-quarks do not contribute to the parton distribution functions (PDFs) of the proton and, in QCD, they only appear in a massive final state due to gluon splitting ($g \rightarrow b\bar b$). In the 5FNS, on the contrary, a $b$-quark density is allowed in the initial state via a $b$-quark PDF, with the $b$-quark typically being treated as massless. The measurement of this cross-section is therefore a useful tool to constrain the $b$-quark PDF inside the proton.
\section{Differential cross-sections for $Z+b$-jets}
\begin{figure}[b]
\begin{center}
\epsfig{figure=bjets_fig_07a.png,height=0.45\textwidth}
\caption{Measured cross-section for $Z + \geq 1$ $b$-jet. The data are compared with the predictions from Sherpa 5FNS (NLO), Alpgen + Py6 4FNS (LO), Sherpa Fusing 4FNS+5FNS (NLO), Sherpa Zbb 4FNS (NLO), MGaMC + Py8 5FNS (LO), MGaMC + Py8 Zbb 4FNS (NLO) and MGaMC + Py8 5FNS (NLO). The yellow band corresponds to the statistical uncertainty of the data, and the green band to statistical and systematic uncertainties of the data, added in quadrature. The error bars on the Sherpa 5FNS (NLO) predictions correspond to the statistical and theoretical uncertainties added in quadrature. Only statistical uncertainties are shown for the other predictions. More details can be found in \cite{Z+bjets}.}
\label{inclusiveXS}
\end{center}
\end{figure}
To measure the $Z+b$-jets production cross-section a sample of $35.6$ fb$^{-1}$ of $pp$ collision data collected by the ATLAS experiment at $\sqrt{s}$= 13 TeV in 2015 and 2016 has been used.
Events are selected considering the decay of the $Z$ boson into muon or electron pairs passing specific kinematic criteria and containing at least one $b$-jet (jets passing the $b$-tagging algorithm at 70\% efficiency \cite{btag,btag2}). The background is dominated by two main contributions: the $t\bar t$ process and a $Z$ boson associated with light-jets or $c$-jets misidentified as $b$-jets.
Integrated and differential cross sections as a function of several kinematic observables are compared with a variety of Monte Carlo generators.
In general, the 5FNS calculations at NLO accuracy predict the inclusive cross sections well, while the 4FNS LO calculations largely underestimate the data, as reported in Figure \ref{inclusiveXS}.
Figure \ref{Z+b} shows the cross-section distribution as a function of the $ p_\mathrm{T}$ of the leading $b$-jet (left) and of the distance in the pseudorapidity-azimuthal plane ($\Delta R = \sqrt{(\Delta \eta)^2 +(\Delta \phi)^2}$) between the $Z$-boson candidate and the leading $b$-jet (center) in events with at least one $b$-jet. The measured cross-section as a function of the invariant mass of the two leading $b$-jets is presented in Figure \ref{Z+b} (right).
Experimental data are compared with the predictions of different Monte Carlo generators.
The best agreement with the data is provided by the NLO Sherpa 2.2.1 5FNS predictions; the 5FNS LO calculation from MadGraph5\_aMC@NLO
interfaced to Pythia8 (MG5aMC+Py8 in the following) describes the data better than the NLO calculation due to the larger number of partons in the matrix element. In general, all the 4FNS predictions underestimate the cross section.
Figure \ref{Z+b} (center) shows that the 5FNS generators provide a good prediction also for this distribution, which is more sensitive to the presence of additional QCD radiation, soft corrections and boosted particles decaying into a $Z$ boson and $b$-quarks.
\begin{figure}
\centering
\begin{minipage}{.32\textwidth}
\centering
\epsfig{figure=bjets_fig_08b.png,height=1.4\textwidth}
\end{minipage}
\begin{minipage}{.32\textwidth}
\centering
\epsfig{figure=bjets_fig_11.png, height=1.4\textwidth}
\end{minipage}
\begin{minipage}{.32\textwidth}
\centering
\epsfig{figure=bjets_fig_13b.png, height=1.4\textwidth}
\end{minipage}
\caption{Measured cross-section as a function of leading $b$-jet $p_\mathrm{T}$ (left) and $\Delta R$ between the $Z$-boson candidate and the leading $b$-jet (center) in events with at least one $b$-jet. Measured cross-sections as a function of invariant mass of the two leading $b$-jets (right). The data are compared with different Monte Carlo generator predictions. The error bars correspond to the statistical uncertainty, and the hatched bands to the data statistical and systematic uncertainties added in quadrature. More details can be found in \cite{Z+bjets}. }
\label{Z+b}
\end{figure}
The NLO 5FNS predictions from Sherpa 2.2.1 are in good agreement with the data in the low range of the di-$b$-jet invariant mass distribution ($m_{bb}$), as reported in Figure \ref{Z+b} (right). None of the considered calculations provides a reasonable description of the data for $m_{bb} > 300$ GeV.
\section{Z+jets at high $ p_\mathrm{T}$}
In the calculation of $Z$+jet production at NLO, real and virtual QCD and EW effects play a non-negligible role, including topologies corresponding to dijet events, where a real $Z$ boson is emitted from an incoming or outgoing quark leg \cite{EW1,EW2,EW3,EW4}. These effects lead to enhancements in production that increase with the transverse momentum of the jets.
To test these predictions, the measurement of the production cross-section of a $Z$ boson in association with high-$ p_\mathrm{T}$ jets, where the additional contributions are more evident, was performed \cite{Z+jets}.
Moreover, since QCD processes are sensitive to the kinematics between the $Z$ boson and the closest jet, two topologies of events are identified: soft real emission of a $Z$ boson from a jet ({\it ``collinear''}), characterised by a small angular separation between the $Z$ boson and the jet ($\Delta R(Z,j)<1.4$), and hard $Z$ boson production ({\it ``back-to-back''}) with $\Delta R(Z,j)>2$.
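The two selections amount to a simple cut on $\Delta R(Z,j)$; a minimal sketch in Python (illustrative helper names, not analysis code; note that the azimuthal difference must be wrapped to $[-\pi,\pi]$ before the quadrature sum):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in the pseudorapidity-azimuth plane, with delta-phi wrapped to [-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def topology(eta_z, phi_z, eta_j, phi_j):
    """Classify a Z+jet event by the analysis cuts (collinear < 1.4, back-to-back > 2.0)."""
    dr = delta_r(eta_z, phi_z, eta_j, phi_j)
    if dr < 1.4:
        return "collinear"
    if dr > 2.0:
        return "back-to-back"
    return "neither"

print(topology(0.5, 0.1, 0.7, 0.4))  # small separation -> collinear
print(topology(0.0, 0.0, 0.3, 3.0))  # large azimuthal separation -> back-to-back
```

Events falling between the two thresholds belong to neither topology and are not classified.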
Measurements are performed in events containing a Z boson candidate reconstructed in the muon and electron decay channels and jets with $ p_\mathrm{T} >100$ GeV.
Only the QCD component of the $Z$+jets production is considered in the analysis, while the EW contribution is treated as background, as are $t \bar t$ and diboson production, the dominant backgrounds.
Integrated and differential cross sections are measured in a fiducial phase space and compared with state-of-the-art Monte Carlo generator predictions. Predictions from Sherpa 2.2.1 and LO MG5\_aMC+Py8 CKKWL overestimate the measured cross sections (see Figure \ref{Z+jet-fig} (left)). MG5\_aMC+Py8 FxFx and Sherpa 2.2.11 provide a good description of the data for the full set of observables considered in the analysis, for both the collinear and the back-to-back topologies (see Figure \ref{Z+jet-fig} (right)). The improvement obtained by these two predictions with respect to the previous versions of the Monte Carlo generators can be explained by the higher number of partons in the matrix element.
In addition, Sherpa 2.2.11 contains NLO virtual EW corrections and a treatment of unordered emissions in the parton shower.
\begin{figure}
\centering
\begin{minipage}{.48\textwidth}
\centering
\epsfig{figure=Coll_fig_04,height=0.8\textwidth}
\end{minipage}
\quad
\begin{minipage}{.48\textwidth}
\centering
\epsfig{figure=Coll_fig_07b,height=0.8\textwidth}
\end{minipage}
\caption{Summary of integrated cross-section results (left). Differential cross-sections for $Z$+jets at high $ p_\mathrm{T}$ as a function of the ratio of $Z$ and jet transverse momentum $r_{Z, j}$ (right).
The measured cross sections are shown with black points and the error bars represent the total uncertainty.
Data are compared with several predictions. The uncertainties on predictions are given by the quadratic sum of the uncertainties from the variations of PDF, QCD scale and, for Sherpa v.2.2.11, virtual EW contributions. More details in \cite{Z+jets}.}
\label{Z+jet-fig}
\end{figure}
\section{Conclusion}
The statistics collected by the ATLAS experiment during the LHC Run 2 allow extremely precise measurements that better probe MC generator performance. Perturbative QCD and the quark PDFs are tested in the presence of $b$-jets. Data confirm that the 5FNS calculation at NLO better describes the inclusive and differential cross-sections.
All Sherpa predictions provide a good modelling of the shapes, while other predictions, more sensitive to gluon splitting, show various discrepancies. None of the generators correctly describes the $m_{bb} > 300$ GeV region.
For the first time, collinear and back-to-back $Z$ emission are disentangled and the kinematics between the $Z$ boson and the closest jet is studied.
\\
\\
Comments: Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1 2021.
\iffalse
\part[Glueballs. Updates from Lattice and Holography\\ \phantom{x}\hspace{4ex}\it{Dmitry Melnikov}]{}
\vspace{1cm}
In~\cite{TOTEM:2020zzr} the D0 and TOTEM collaborations announced a $3.4\sigma$ discrepancy between the $pp$ and $p\bar p$ cross sections, compatible with the $t$-channel exchange of a colorless $C$-odd particle, the odderon~\cite{Lukaszuk:1973nt}. Further work is under way to improve the statistics and establish more properties of this particle.
The principal candidate for the odderon is a $C$-odd three-gluon bound state -- a glueball state $1^{--}$ in the $J^{PC}$ classification, and its Regge trajectory. Apart from this progress in hadron scattering, interest in purely gluonic states has recently reappeared in the context of models of physics beyond the Standard Model, e.g.~\cite{Kang:2008ea,Juknevich:2009ji,Juknevich:2009gg}. Despite the fact that the existence of such bound states was conjectured long ago, their experimental detection, let alone the study of their physical properties, has turned out to be a very complicated task. The main problem is strong mixing of these purely gluonic, apparently short-lived states with heavy and excited states of ordinary mesons. Most of the presently available information about glueballs comes from lattice simulations.
Lattice studies focus on the lowest states, kinematically stable in the pure glue theory. The work of Morningstar and Peardon~\cite{Morningstar:1999rf} found 12 such lightest states with spins varying from $J=0$ to $J=3$ in the pure glue $SU(3)$ Yang-Mills theory.\footnote{See~\cite{Michael:1988jr,Michael:1989ry,Bali:1993fb} for yet earlier work on the lattice spectrum.} The ensuing lattice studies improved the original predictions of the glueball masses~\cite{Chen:2005mg,Meyer:2004gx,Athenodorou:2020ani}, observing additional (excited) states, studied the effects of introducing quarks~\cite{Bali:2000vr,Hart:2001fp,Hart:2006ps,Richards:2010ck,Gregory:2012hu,Sun:2017ipk,Brett:2019tzr}, investigated the dependence of the gauge group and its rank~\cite{Teper:1998kw,Lucini:2004my,Lucini:2010nv,Holligan:2019lma,Bennett:2020hqd,Bennett:2020qtj,Athenodorou:2021qvs} and attempted to estimate the decay constants of a few lightest states~\cite{Chen:2005mg}, cf.~\cite{Sexton:1994wg,Sexton:1996ed,Yamanaka:2019yek,Llanes-Estrada:2021evz}.
One fact about glueballs, corroborated by the lattice analysis, is the relatively weak dependence of their masses on the number of colors,\footnote{In the $Sp(N_c)$ case the leading correction to the $N_c\to\infty $ value is $O(1/N_c)$.}
\begin{equation}\bea\ds
\label{MassNc}
m \ \simeq \ m_\infty + \frac{c}{N_c^2}\,.
\ea\end{equation}
In table~\ref{tab:SUN} we show the $SU(N_c)$~\cite{Lucini:2004my} and $Sp(N_c)$~\cite{Bennett:2020qtj} lattice results for the mass of the lightest $0^{++}$ glueball and its ratio with the next-in-mass $2^{++}$ state. For $SU(N_c)$ the reported variation of the masses is within 14--16\%, while the variation of the ratios is even smaller, about 10\%. In the $Sp(N_c)$ case the variation of the ratio is smaller still, about 5\%.
\begin{table}[h]
\centering
\begin{tabular}{||c|cccccc||}
\hline \Trule\Brule $G$ & $SU(2)$ & $SU(3)$ & $SU(4)$ & $SU(6)$ & $SU(8)$ & $SU(\infty)$ \\
\hline\Trule\Brule $m_{0^{++}}$ & 3.78 & 3.55 & 3.36 & 3.25 & 3.55 & 3.31 \\
\Trule\Brule$m_{2^{++}}/m_{0^{++}}$ & 1.44 & 1.35 & 1.45 & 1.46 & 1.32 & 1.46 \\
\hline\hline \Trule\Brule $G$ & $Sp(1)$ & $Sp(2)$ & $Sp(3)$ & $Sp(4)$ & -- & $Sp(\infty)$ \\
\hline \Trule\Brule $m_{2^{++}}/m_{0^{++}}$ & 1.41 & 1.41 & 1.48 & 1.41 & -- & 1.47 \\ \hline
\end{tabular}
\caption{Lattice masses (in appropriate units) of the $0^{++}$ and $2^{++}$ glueballs in the pure glue $SU(N_c)$~\cite{Lucini:2004my} and $Sp(N_c)$ theories~\cite{Bennett:2020qtj}.}
\label{tab:SUN}
\end{table}
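The $1/N_c^2$ behavior of equation~(\ref{MassNc}) can be illustrated with the $SU(N_c)$ entries of table~\ref{tab:SUN}: a least-squares fit of $m = m_\infty + c/N_c^2$ in pure Python (an illustrative sketch only; the quoted lattice errors are ignored, so the fitted values are indicative):

```python
# Least-squares fit of m(N_c) = m_inf + c / N_c^2 to the SU(N_c) 0++ masses
# from the table (units of the string tension); lattice errors are ignored.
data = {2: 3.78, 3: 3.55, 4: 3.36, 6: 3.25, 8: 3.55}

xs = [1.0 / n**2 for n in data]   # expansion parameter 1/N_c^2
ys = list(data.values())
npts = len(xs)
mx = sum(xs) / npts
my = sum(ys) / npts
c = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx)**2 for x in xs)
m_inf = my - c * mx

print(f"m_inf ~ {m_inf:.2f}, c ~ {c:.2f}")  # m_inf comes out close to the SU(infinity) entry 3.31
```

The extrapolated intercept lands near the tabulated $SU(\infty)$ value, consistent with the weak $N_c$-dependence discussed above.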
The second observation is the reasonably small effect of quark mixing in the unquenched version of the Yang-Mills theory. The phenomenological OZI rule~\cite{Okubo:1963fa,Zweig:1964jf,Iizuka:1966fk} does not favor quark mixing with purely gluonic states. Lattice simulations largely confirm this effect, as can be seen from the data in table~\ref{tab:unquenched}. The match is better for the lighter states (5\% for $0^{++}$) and the ratio $m_{2^{++}}/m_{0^{++}}\sim 1.46$ obtained for QCD with three flavors is compatible with the $SU(N_c)$ and $Sp(N_c)$ results (the universality of this ratio was acknowledged in~\cite{Bennett:2020hqd}). Mass ratios appear to be even more stable with respect to model variation, which might indicate that not only $m_\infty$, but also $c$ in equation~(\ref{MassNc}) is a universal quantity.
\begin{table}[h]
\centering
\begin{tabular}{||c|cccc||ccc||}
\hline \Trule\Brule $J^{PC}$ & $0^{++}$ & $2^{++}$ & $1^{+-}$ & $0^{-+}$ & $m_{2^{++}}/m_{0^{++}}$ & $m_{1^{+-}}/m_{2^{++}}$ & $m_{0^{-+}}/m_{2^{++}}$ \\
\hline \Trule\Brule YM & 1710 & 2390 & 2980 & 3640 & 1.40 & 1.25 & 1.52 \\
\Trule\Brule QCD$_3$ & 1795 & 2620 & 3270 & 4490 & 1.46 & 1.25 & 1.71 \\ \hline
\end{tabular}
\caption{Comparison of the lattice masses of glueballs (GeV) in the quenched Yang-Mills and QCD with three flavors~\cite{Gregory:2012hu}. The ratio of the masses of $m_{2^{++}}$ and $m_{0^{++}}$ for the two cases are $1.40$ and $1.46$ respectively.}
\label{tab:unquenched}
\end{table}
The above features of the glueball spectrum make these data interesting from the point of view of holographic models. The classical gravity limit of holography~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} can only capture the limit of a large number of colors $N_c\to\infty$ and a large coupling constant $\lambda=g_{YM}^2N_c\to\infty$ of the Yang-Mills theory. Without implementing quantum corrections to gravity one can only hope to access some universal properties of hadrons. Lattice simulations indicate that the glueball spectrum is one such property.
Sparing most of the details, the holographic approach in the classical gravity limit consists of choosing a background (a solution of an appropriate supergravity theory) that is expected to be equivalent to the gauge theory of interest and calculating the spectrum of the linearized equations of motion over the background solution. If the background is truly equivalent to the sought gauge theory in the corresponding limit, then the gravity spectrum coincides with the spectrum of the gauge theory. Higher-order correlation functions are also available through calculations beyond linear order.
An early take on glueballs in holography~\cite{Csaki:1998qr,Brower:2000rp} invoked the so-called Witten model~\cite{Witten:1998zw}. The background of this model is obtained from a near-horizon limit of the world-volume theory of D4 branes compactified on a circle. This background preserves no supersymmetry and gives a qualitative idea of the gauge theory spectrum, but it also exhibits features incompatible with the Yang-Mills theory, such as mass degeneracy between different glueball families, which points to symmetries absent in Yang-Mills. It also shows a larger spacing of the mass eigenvalues, as demonstrated in table~\ref{tab:holomodels}. In Witten's model the ratio $m_{2^{++}}/m_{0^{++}}\sim 1.74$ and the discrepancy grows for heavier states.
\begin{table}[h]
\centering
\begin{tabular}{||c|ccccc||}
\hline \Trule\Brule $J^{\,PC}$ & $SU(\infty)$ & $Sp(\infty)$ & BMT & BBC$_D$ & BBC$_N$ \\
\hline
\Trule\Brule ${0^{++}}$ & 1 & 1 & 1 & 1 & 1 \\
\Trule\Brule ${2^{++}}$ & 1.49 & 1.47 & 1.74 & 1.48 & 1.56\\
\Trule\Brule ${0^{-+}}$ & 1.53 & 1.54 & 2.09 & -- & --\\
\Trule\Brule $1^{+-}$ & 1.88 & -- & 2.70 & -- & -- \\
\Trule\Brule $1^{--}$ & 2.32 & -- & 3.37 & -- & -- \\
\Trule\Brule $0^{+-}$ & 3.01 & -- & -- & -- & -- \\
\hline\hline
\Trule\Brule $0^{++\ast}$ & 1.89 & 1.94 & 2.53 & 1.63 & 1.83 \\
\Trule\Brule $2^{++\ast}$ & 2.11 & -- & 2.76 & 2.15 & 2.49 \\
\hline
\end{tabular}
\caption{Spectra of the lightest glueballs in the Witten's (BMT~\cite{Brower:2000rp}) and hard wall models (BBC~\cite{Boschi-Filho:2005xct}) compared to the state of the art lattice extrapolations of $m_\infty$ for $SU(N)$~\cite{Athenodorou:2021qvs} and $Sp(N)$~\cite{Bennett:2020qtj}. The masses are normalized to the mass of the $0^{++}$ state. Models BBC$_N$ and BBC$_D$ used different boundary conditions (Dirichlet or Neumann) in the calculation of the spectrum.}
\label{tab:holomodels}
\end{table}
A simpler approach to the properties of holographic hadrons can be taken via the class of bottom-up models. The so-called hard wall model~\cite{Erlich:2005qh,Polchinski:2001tt,Polchinski:2002jw} is its simplest representative. For the purpose of this note, Light-Front Holography models~\cite{Brodsky:2006uqa} can also be attributed to this class. In the hard wall model the background is five-dimensional anti de Sitter space, whose group of symmetries is precisely the conformal group in 3+1 dimensions, and a cutoff is introduced to break this symmetry explicitly. The gauge theory dual to such a background is expected to be an approximately conformal (3+1)-dimensional gauge theory at high energies, with an IR scale defining the masses, as in Yang-Mills or QCD. One then considers wave equations for matter of different spins in the $AdS_5$ background. This problem leads to the spectrum of light states shown in table~\ref{tab:holomodels}, as calculated by~\cite{Boschi-Filho:2005xct}.
Remarkably, the simplest hard wall model predicts the value $m_{2^{++}}/m_{0^{++}}\sim 1.48$, closest to the extrapolated lattice ratio of $m_\infty$. This ``magic number'' can be expressed as the ratio of the first non-trivial zeros $x_{2,1}$ and $x_{4,1}$ of the Bessel functions $J_2(x)$ and $J_4(x)$,
\begin{equation}\bea\ds
\text{hard wall:}\quad \frac{m_{2^{++}}}{m_{0^{++}}} \ = \ \frac{x_{4,1}}{x_{2,1}} \ \simeq \ 1.47759\,.
\ea\end{equation}
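This ratio of Bessel zeros is easy to verify numerically; the following is a small cross-check of the quoted value (an illustration only, not part of the original analysis), using SciPy's \texttt{jn\_zeros}:

```python
# Numerical check of the hard wall ratio m_{2++}/m_{0++} = x_{4,1}/x_{2,1},
# where x_{n,1} is the first non-trivial zero of the Bessel function J_n.
from scipy.special import jn_zeros

x21 = jn_zeros(2, 1)[0]  # first zero of J_2, ~5.13562
x41 = jn_zeros(4, 1)[0]  # first zero of J_4, ~7.58834
ratio = x41 / x21
print(f"{ratio:.5f}")  # 1.47759
```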
At the same time, it is not possible to implement states with non-trivial parity and charge conjugation, since this simple model has no natural way to realize the corresponding symmetries. In contrast, top-down holographic constructions inherit these symmetries from string theory, as in the example of Witten's model.
The Klebanov-Strassler model~\cite{Klebanov:2000hb} is another example of a top-down holographic model, which possesses rich physics. It is a model of D3 and D5 branes in type IIB string theory compactified on a six-dimensional cone (conifold). In the low-energy limit the compactification gives rise to a type IIB supergravity background on a space which is a warped product of $AdS_5$ and a pair of spheres $S^3\times S^2$~\cite{Klebanov:1998hh,Klebanov:1999rd,Klebanov:2000nc}. Smoothing out the singularity of the cone (deformation of the conifold) introduces a scale that breaks (3+1)-dimensional conformal invariance and provides an interesting example of a holographic model with a logarithmic running of the coupling constant. The background has $SU(2)\times SU(2)$ global symmetries, so its spectrum is organized in the irreducible representations of this group.
The Klebanov-Strassler background preserves ${\cal N}=1$ supersymmetry, so the dual gauge theory is a non-conformal ${\cal N}=1$ supersymmetric Yang-Mills theory with additional matter fields transforming under the global symmetry group, subject to a unique superpotential. In the IR the theory flows to a strongly coupled fixed point (it shows an interesting RG flow known as ``cascade'' of Seiberg dualities~\cite{Klebanov:2000hb,Strassler:2005qs}) which was initially expected to be in the universality class of the pure supersymmetric Yang-Mills theory. It was then understood that the IR Klebanov-Strassler theory has light operators and massless states due to spontaneous breaking of the baryon $U(1)$ symmetry by the baryonic operators~\cite{Gubser:2004qj,Gubser:2004tf,Benna:2006ib}.
Despite being a modification of pure ${\cal N}=1$ Yang-Mills, the Klebanov-Strassler gauge theory inherits many properties of its parent. Apart from the aforementioned logarithmic running of the coupling, the holographic theory exhibits the same mechanism of breaking of the $U(1)$ R-symmetry~\cite{Klebanov:2002gr}. The singlet sector mostly contains the super Yang-Mills operators~\cite{Apreda:2003gc,Cassani:2010na}. In view of the universality of the spectrum, one can hope that the $SU(2)\times SU(2)$-singlet states reproduce the appropriate limit of the low-energy spectrum of the supersymmetric, or even pure bosonic, Yang-Mills theory, since little is known about the glueballs of the supersymmetric theory. Let us summarize the results of the study of the spectrum of singlet states in the Klebanov-Strassler theory.
The spectrum of singlet fluctuations of the Klebanov-Strassler background was studied in a series of papers including~\cite{Amador:2004pz,Berg:2005pd,Dymarsky:2006hn,Berg:2006xy,Dymarsky:2007zs,Benna:2007mb,Dymarsky:2008wd,Gordeli:2009nw,Gordeli:2013jea,Melnikov:2020cxe}. (See also~\cite{Caceres:2005yx,Elander:2009bm,Bianchi:2010cy,Elander:2014ola,Elander:2017cle,Elander:2017hyr} for studies of non-singlet states or deformations of the Klebanov-Strassler theory.) The structure of the spectrum in the supergravity limit was explained in~\cite{Gordeli:2009nw, Gordeli:2013jea,Melnikov:2020cxe}: it contains two massless scalar supermultiplets and thirteen massive supermultiplets classified as one graviton, two gravitino, four vector and six scalar supermultiplets, according to the highest spin of the contained fields (with the exception of the scalar multiplets, which contain scalars and spin-half fermions). The massless states are not part of the Yang-Mills spectrum, and neither is the associated (light) massive vector supermultiplet, containing a $0^{+-}$ scalar~\cite{Benna:2007mb} and a $1^{+-}$ vector~\cite{Dymarsky:2008wd}. In the scalar sector there is one multiplet containing non-Yang-Mills degrees of freedom. The remaining eleven massive multiplets can be compared with the Yang-Mills spectrum. This is done in table~\ref{tab:KSsinglets} and illustrated in figure~\ref{fig:fullspec}.
\begin{table}[h]
\centering
\begin{tabular}{||c|c|c|cc|c||}
\hline \Trule\Brule & $J^{\,PC}$ & Multiplet & $m^{\rm GS}$ & $m^{\ast}$ & Ref.\\
\hline
\Trule\Brule 1 & $1^{++},2^{++}$ & graviton & 1 & 1.51 & \cite{Dymarsky:2006hn}\\ \hline
\Trule\Brule 2 & $1^{+-},1^{--}$ & gravitino & 1.30 & 1.85 & \cite{Dymarsky:2008wd} \\
\Trule\Brule 3 & $1^{+-},1^{--}$ & gravitino & 1.64 & 2.15 & \cite{Dymarsky:2008wd} \\ \hline
\Trule\Brule 4 & $0^{--},1^{--}$ & vector & 1.47 & 2.00 & \cite{Benna:2007mb} \\
\Trule\Brule 5 & $0^{+-},1^{+-}$ & vector & 2.01 & 2.55 & \cite{Benna:2007mb} \\ \hline
\Trule\Brule 6 & $0^{++},1^{++}$ & vector & 1.99 & 2.53 & \cite{Gordeli:2009nw} \\ \hline\hline
\Trule\Brule 7 & $0^{++},0^{-+}$ & scalar & 0.421 & 0.894 & \cite{Berg:2006xy} \\ \hline
\Trule\Brule 8 & $0^{++},0^{-+}$ & scalar & 0.640 & 1.25 & \cite{Berg:2006xy} \\
\Trule\Brule 9 & $0^{++},0^{-+}$ & scalar & 1.11 & 1.58 & \cite{Berg:2006xy} \\ \hline\hline
\Trule\Brule 10 & $0^{++},0^{-+}$ & scalar & 1.36 & 1.84 & \cite{Berg:2006xy} \\
\Trule\Brule 11 & $0^{++},0^{-+}$ & scalar & 2.32 & 2.87 & \cite{Berg:2006xy} \\ \hline
\end{tabular} \quad
\begin{tabular}{||c|cc||}
\hline \Trule\Brule $J^{\,PC}$ & $m^{\rm GS}$ & $m^{\ast}$ \\
\hline
\Trule\Brule $2^{++}$ & 1 & 1.43 \\ \hline
\Trule\Brule $1^{+-}$ & 1.25 & 1.62 \\
\Trule\Brule $1^{--}$ & 1.58 & $\geq 1.88$ \\ \hline
\Trule\Brule & & \\
\Trule\Brule $0^{+-}$ & $\geq2.01$ & -- \\ \hline
\Trule\Brule & & \\ \hline\hline
\Trule\Brule & & \\ \hline
\Trule\Brule $0^{++}$ & 0.668 & 1.27 \\
\Trule\Brule $0^{-+}$ & 1.02 & 1.53 \\ \hline\hline
\Trule\Brule & & \\
\Trule\Brule $0^{++}$ & -- & -- \\ \hline
\end{tabular}
\caption{Spectrum of the lightest supermultiplets of the supersymmetric Yang-Mills sector of the Klebanov-Strassler theory (left table). The bosonic members of the multiplets are indicated. The masses of the ground and first excited states, in units of the mass of the ground state of $2^{++}$, are shown, as well as references to the works from which the spectra can be extracted. For the scalar sector, the assignment was made based on the analysis of~\cite{Melnikov:2020cxe,RodriguesFilho:2020rae}. The masses are compared with the masses recently obtained for the $SU(\infty)$ bosonic Yang-Mills on the lattice~\cite{Athenodorou:2021qvs} (right table). Lower bounds indicate states with known masses but with difficulties in confirming the $J^{PC}$ numbers in the continuum limit.}
\label{tab:KSsinglets}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{fullspec.pdf} \quad
\includegraphics[width=0.45\linewidth]{KSvsSUi.pdf}
\caption{Full light $SU(2)\times SU(2)$-singlet spectrum of the bosonic states in the Klebanov-Strassler theory (left). The ground states of each supermultiplet are shown in full color, while the excited states are faded. All mass values are computed in units of the ground state of $2^{++}$. The discrepancy between the $0^{++}$~\cite{Berg:2006xy} and $0^{-+}$~\cite{Melnikov:2020cxe} sectors is visible. The right plot compares the masses of the ground states in the supersymmetric Yang-Mills sector (filled rectangles) and the ground states in $SU(N)$ bosonic Yang-Mills, extrapolated to $N=\infty$ (empty rectangles). In the $0^{++}/0^{-+}$ sector the results of~\cite{Berg:2006xy} are shown in full color, while those of~\cite{Melnikov:2020cxe} are faded. States that do not have pairs correspond to hybrid glueballs with no analog in the bosonic theory, except for the heaviest $0^{++}$, which has not been approached on the lattice.}
\label{fig:fullspec}
\end{figure}
In table~\ref{tab:KSsinglets} we normalize all the masses to the mass of the ground state of the graviton multiplet, that is, the $2^{++}$ glueball. This state is particularly easy to treat in the supergravity analysis, so it serves as a natural reference for the remaining states. The $C$-odd sector is also relatively simple, although it involves some mixing between states with similar quantum numbers. The masses of the $1^{+-}$, $1^{--}$ and $0^{+-}$ glueballs can be compared with those of the $SU(N)$ bosonic Yang-Mills extrapolated to $N=\infty$~\cite{Athenodorou:2021qvs}. One observes a very reasonable match for the masses of the ground states. For the excited states the correspondence is not as good: the holographic supersymmetric masses of excited states tend to grow more rapidly. Note that the masses of the $1^{--\ast}$ and $0^{+-}$ states appear with lower bounds in the $SU(\infty)$ part of the table. The continuum limit of the lattice results does not permit an unambiguous determination of the $J^{PC}$ numbers for the states with the indicated masses, although different arguments support the above identification~\cite{Athenodorou:2021qvs}. In the case of $0^{+-}$ we can view the match with the holographic results as an additional argument in favor of the identification.
One vector multiplet in the $C$-odd sector contains no bosonic states, so there are no lattice results to compare with. The same is true for the only vector multiplet in the $C$-even sector. The latter multiplet is also relatively simple to analyze.
The most complex sector is the scalar one, originally studied in~\cite{Berg:2005pd,Berg:2006xy}. There are six scalar-pseudoscalar pairs with similar quantum numbers that mix together. This mix contains the bosonic $0^{++}$ and $0^{-+}$ modes and scalars dual to the gluino bilinear. The latter fermionic operator is expected to produce the lightest state in the spectrum of the supersymmetric Yang-Mills. Higher gluon operators are also predicted by holography.
The spectrum of $0^{++}$ found in~\cite{Berg:2006xy} does not identify the operator origin of the mass eigenvalues. Identifying the six ground states and the corresponding excited states is a complicated task. In~\cite{Melnikov:2020cxe} the spectrum of $0^{-+}$ was calculated as a consistency check, since the two spectra are expected to be related by supersymmetry, yet the separation of the full scalar spectrum into six families was not completed. It was argued that the following ordering of states is consistent with the two approaches and with the lattice results~\cite{RodriguesFilho:2020rae},
\begin{equation}\bea\ds
m_{\lambda\lambda} < m_{0^{++}} < m_{\lambda\lambda}^* < m_{0^{-+}} < m_{0^{++}}^* < \cdots\,,
\ea\end{equation}
where $m_{\lambda\lambda}$ is the mass of the ground state of the gluino bilinear, $m_{\lambda\lambda}^\ast$ is its first excited state, and $m_{0^{++}}$ and $m_{0^{-+}}$ are the masses of the states in the spectra of the two-gluon operators ${\rm tr}\, F_{\mu\nu}F^{\mu\nu}$ and ${\rm tr}\, F_{\mu\nu}\tilde{F}^{\mu\nu}$, studied on the lattice. Here we have also conjectured the masses of the ground states of the remaining two multiplets, including that of the heaviest four-gluon operator ${\rm tr}\, (F_{\mu\nu}F^{\mu\nu})^2$, to be seen on the lattice. This is done by applying quadratic fits to the values of the mass squared. For multiplets other than the scalar ones the quadratic fits work quite well. The results of the fitting of the pseudoscalar sector~\cite{Melnikov:2020cxe} are shown in figure~\ref{fig:fits}. The following fits work very well for the heavy part of the spectrum, while for the light states there are a few noticeable deviations.
\begin{eqnarray}
m_{\lambda\lambda}^2 \ = \ 0.223 +0.499n + 0.257 n^2 \,, && m^{\rm GS} \ = \ 0.472\,, \label{mll}\\
m_{0^{++}}^2 \ = \ 0.480 + 0.854 n + 0.260 n^2\,, & & m^{\rm GS} \ = \ 0.693\,,\\
m_{0^{-+}}^2 \ = \ 1.22 + 1.26 n + 0.257 n^2\,, & & m^{\rm GS} \ = \ 1.11 \,, \\
m_{10}^2 \ = \ 1.97 + 1.15 n + 0.267 n^2\,, & & m^{\rm GS} \ = \ 1.40 \,, \\
m_{AB}^2 \ = \ 4.24 + 2.14 n + 0.261 n^2\,, & & m^{\rm GS} \ = \ 2.06 \,, \\
m_{11}^2 \ = \ 5.92 + 2.49 n + 0.260 n^2\,, & & m^{\rm GS} \ = \ 2.43 \,, \label{m11}
\end{eqnarray}
where all values are given in units of the mass of the ground state of $2^{++}$. Here $m_{10}$ and $m_{11}$ are the pseudoscalar predictions for the masses of the ground states in entries 10 and 11 of table~\ref{tab:KSsinglets}. The eigenvalue $m_{AB}$ is expected to be absent from the spectrum of the pure supersymmetric Yang-Mills. We note that the fits slightly improve the convergence to the $SU(\infty)$ lattice values.
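The quoted ground state masses follow from evaluating the fits at $n=0$: $m^{\rm GS}$ is the square root of each intercept. A minimal sketch of this arithmetic (an illustration only; the labels are shorthand for the six families above, and agreement with the quoted values is limited by the rounding of the fit coefficients):

```python
# Ground state masses from the quadratic fits m^2(n) = a + b*n + c*n^2,
# evaluated at n = 0, in units of the 2++ ground state mass.
from math import sqrt

fits = {  # label: (a, b, c) as in eqs. (mll)-(m11)
    "gluino bilinear": (0.223, 0.499, 0.257),
    "0++":             (0.480, 0.854, 0.260),
    "0-+":             (1.22,  1.26,  0.257),
    "entry 10":        (1.97,  1.15,  0.267),
    "AB":              (4.24,  2.14,  0.261),
    "entry 11":        (5.92,  2.49,  0.260),
}
for label, (a, b, c) in fits.items():
    # m^2(0) = a, so the ground state mass is sqrt(a)
    print(f"{label}: m_GS = {sqrt(a):.3f}")
```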
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{fitslight.pdf}
\quad
\includegraphics[width=0.45\linewidth]{fitsall.pdf}
\caption{Quadratic fits of the $0^{-+}$ spectrum of $m^2$ found in~\cite{Melnikov:2020cxe}. The black dots (as well as the grids) indicate the mass eigenvalues and the curves are the fits~(\ref{mll})-(\ref{m11}). The left plot shows the behavior of the fits for $m^2\leq 10$. The intercepts correspond to the ground states. The right plot illustrates the quality of the fits for $m^2\geq 10$. All mass values are shown in units of the ground state mass of $2^{++}$.}
\label{fig:fits}
\end{figure}
It is necessary to mention that the spectrum of pseudoscalars computed in~\cite{Melnikov:2020cxe}, although found to be consistent with the spectrum of $0^{++}$ of~\cite{Berg:2006xy}, contains a subset of eigenvalues exhibiting a discrepancy, particularly significant at low mass. For example, the predictions for the masses of the three lightest scalar ground states were found to be $m_{\lambda\lambda}\simeq 0.511\,m_{2^{++}}$, $m_{0^{++}}\simeq 0.701\,m_{2^{++}}$ and $m_{0^{-+}}\simeq 1.15\,m_{2^{++}}$, cf.~(\ref{mll})-(\ref{m11}), to be compared with entries 7, 8 and 9 in table~\ref{tab:KSsinglets}. See also figure~\ref{fig:fullspec}. The spectrum of~\cite{Berg:2006xy} was independently confirmed in~\cite{Elander:2014ola}, so the status of the pseudoscalar calculation remains unclear.
As a final comment, we recall that no states of spin $J\geq 2$ are accessible to the holographic analysis in the supergravity limit, except for the $2^{++}$.
\bigskip
In conclusion, we would like to discuss the relevance of the results presented in this note. Recent progress in lattice calculations has largely confirmed the expectations of the universality of glueball masses and made it possible to extrapolate the results to the limit of a large number of colors. The comparison of the lattice results for $SU(\infty)$ with the Yang-Mills subsector of the Klebanov-Strassler theory demonstrated consistency at the level expected from a holographic model. For the five lightest states with $J<2$ of the bosonic Yang-Mills theory computed on the lattice we observed the same hierarchy of the spectrum in the Klebanov-Strassler theory and a sub-ten-percent match for the masses, as summarized in table~\ref{tab:KSratios} for both the $SU(\infty)$ and $SU(3)$ theories. With the support of the universality of the spectrum, these results give an independent check of the consistency of the lattice approach.
\begin{table}[h]
\centering
\begin{tabular}{||c|c|ccccc||}
\hline \Trule\Brule $J^{\,PC}$ & $2^{++}$ & $0^{++}$ & $0^{-+}$ & $1^{+-}$ & $1^{--}$ & $0^{+-}$\\
\hline
\Trule\Brule Holography/$SU(\infty)$ & 1 & 0.959 & 1.081 & 1.037 & 1.038 & 0.999 \\
\Trule\Brule Holography/$SU(3)$ & 1 & 0.920 & 1.027 & 1.048 & 0.967 & 1.057 \\ \hline
\end{tabular}
\caption{Comparison of the holographic (Klebanov-Strassler) and lattice predictions for the masses of the lightest glueball ground states in the $SU(\infty)$ and $SU(3)$ Yang-Mills theories~\cite{Athenodorou:2021qvs}.}
\label{tab:KSratios}
\end{table}
More importantly, the glueball spectrum turns out to be a rare guide for independent testing of the holographic correspondence. It shows that, despite the unnaturalness of the classical holographic limit, first-principles lattice calculations can be used to access physical observables in such a regime.
Despite the very reasonable match in table~\ref{tab:KSratios}, and possible room for improvement of the convergence, the Klebanov-Strassler theory is not expected to give precisely the same results as the $SU(\infty)$ Yang-Mills. The results summarized in figure~\ref{fig:fullspec} should rather serve as an approximation to the spectrum of the supersymmetric $SU(\infty)$ Yang-Mills deformed by the presence of some matter. The consistency with the bosonic theory suggests that the results presented here can give a similar, or even better, match with the spectrum of the supersymmetric theory. The supersymmetric case gives a larger basis for comparison, with a multitude of additional states, but it unfortunately makes the lattice computation much more complicated. We hope that such a comparison will be possible in the future.
We close the discussion with a prediction for the odderon, from the lattice and from holography. The odderon-to-pomeron mass ratio is predicted to be close to
\begin{equation}\bea\ds
\frac{m_{1^{--}}}{m_{2^{++}}} \ \simeq \ 1.58 ~\text{(lattice)} \qquad \text{or} \qquad \frac{m_{1^{--}}}{m_{2^{++}}} \ \simeq \ 1.64 ~\text{(holography)}\,.
\ea\end{equation}
\paragraph{Acknowledgments.} These notes include the results of the collaboration with A.~Dymarsky and C.~Rodrigues Filho, whom the author would like to thank for discussions. This work was done with support of the Simons Foundation, award \#884966, via the Association International Institute of Physics (AIIF) and of the grant of the agency CNPq of the Brazilian Ministry of Science, Technology and Innovation \#433935/2018-9.
The material for these notes was originally prepared for the talk given at the Low X 2021 conference at Elba. While the talk was being prepared, a very timely paper~\cite{Athenodorou:2021qvs} appeared with new results on the $SU(\infty)$ spectrum on the lattice, which made it possible to carry out a more detailed comparison of the spectrum in these notes, beyond the $SU(3)$ and recent $Sp(\infty)$~\cite{Bennett:2020qtj} results discussed in the talk.
\iffalse
\part[TOTEM results\\ \phantom{x}\hspace{4ex}\it{Frigyes Nemes}]{}
\section{INTRODUCTION}
The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment has been designed to measure the total proton-proton (${\rm pp}$) cross-section, elastic scattering and
diffractive processes at the LHC~\cite{Anelli:2008zza}, see e.g. Fig.~\ref{elasticscatteringsummary}.
The experimental apparatus of TOTEM is composed of three subdetectors: the Roman Pots (RP), the T1 and the T2 inelastic forward telescopes. The detectors are placed symmetrically on both sides of the Interaction
Point 5 (IP5), which is shared with the CMS experiment.
The RPs are moveable beam-pipe insertions, hosting edgeless silicon detectors to detect leading protons scattered at very small angles.
In order to maximize the acceptance of the experiment for elastically scattered protons, the RPs are able to approach the beam center to a transverse
distance as small as 1 mm. The alignment of the RPs is optimized by reconstructing common tracks going through the overlap between the vertical and
horizontal RPs as well as by studying elastic events~\cite{TOTEM:2013vay}.
The T1 telescope is based on cathode strip chambers placed at $\pm$9~m and covers the pseudorapidity range $3.1 \le |\eta| \le 4.7$; the T2
telescope is based on gas electron multiplier chambers placed at $\pm$13.5~m and covers the pseudorapidity range $5.3 \le |\eta| \le 6.5$.
The pseudorapidity coverage of the two telescopes at $\sqrt{s} = 2.76,~7,~8$ and 13 TeV allows the detection of about 96~\%, 95~\%, 94~\% and 92~\%, respectively, of the inelastic ${\rm pp}$ collisions, including collisions producing diffractive mass above about 2.1 GeV, 3.4 GeV, 3.6 GeV and 4.6 GeV, respectively~\cite{Antchev:2013haa, Antchev:2013paa, Antchev:2017dia}.
Before the LHC long shutdown one (LS1) the RPs, used for the measurements, were located at distances of 215--220 m from IP5~\cite{Anelli:2008zza}. The actual layout, i.e., after the LHC LS1, is different in RP
location and quantity. The RP stations previously installed at $\pm$147~m, from IP5, have been relocated to $\pm$210~m. Moreover, newly designed horizontal RPs have been installed
between the two units of the $\pm220$~m station~\cite{TOTEM:2013iga,Albrow:2014lrm}.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{sigma_tot_el_inel_vs_s_new_STAR.pdf}\vspace{-3mm}
\caption{A compilation of total, inelastic and elastic pp cross-section measurements, see Ref.~\cite{Tanabashi2018pdg, Antchev:2017yns,STAR:2020phn} and references therein. The red points indicate the TOTEM total cross-section measurements at $\sqrt{s}=2.76, 7, 8$ and 13~TeV using the luminosity independent
method.}
\label{elasticscatteringsummary}
\end{figure}
\section{ELASTIC SCATTERING AND TOTAL CROSS-SECTION $\sigma_{\rm tot}$
MEASUREMENTS}
For each tagged elastic event the four-momentum transfer squared~$t$ is reconstructed using the LHC optical functions, characterized with the so-called
betatron amplitude at IP5~$\beta^{*}$~\cite{Anelli:2008zza}. The TOTEM experiment developed a novel experimental method to estimate the optical functions at the RP locations,
using the measured elastically scattered protons~\cite{Antchev:2014voa,Burkhardt:2012zza}.
The total inelastic rate $N_{\rm inel}$, measured by the T1 and T2 telescopes, and the total nuclear elastic rate $N_{\rm el}$
with its extrapolation to zero four-momentum transfer squared~$t=0$ are combined with the optical theorem to obtain the total cross-section
in a way independent of the luminosity $\mathcal{L}$,
\begin{equation}
\sigma_{\rm tot}=\frac{16\pi}{1+\rho^{2}}\cdot\left.\frac{{\rm d}N_{\rm el}}{{\rm d}t}\right|_{t=0}\cdot(N_{\rm el}+N_{\rm inel})^{-1}.
\end{equation}
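As an illustration of how this combination works numerically, a minimal sketch is given below. The event counts are hypothetical (not TOTEM data), and the factor $(\hbar c)^2 \simeq 0.3894$~mb\,GeV$^2$ converts the GeV$^{-2}$ result of the optical-theorem formula into millibarns:

```python
import math

HBARC2 = 0.3894  # (hbar c)^2 in mb * GeV^2

def sigma_tot(dNel_dt0, N_el, N_inel, rho):
    """Luminosity-independent total cross-section via the optical theorem.

    dNel_dt0 : elastic rate dN_el/dt extrapolated to t = 0, in events/GeV^2
    N_el, N_inel : total elastic and inelastic event counts
    rho : ratio of real to imaginary part of the forward elastic amplitude
    Returns sigma_tot in mb.
    """
    return 16.0 * math.pi / (1.0 + rho**2) * dNel_dt0 / (N_el + N_inel) * HBARC2

# Hypothetical numbers, for illustration only (not TOTEM data):
print(f"{sigma_tot(1.8e7, 1.0e6, 2.5e6, 0.10):.1f} mb")
```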
The measured elastic $N_{\rm el}$ and inelastic rates $N_{\rm inel}$ allow for the determination of the elastic and inelastic cross-sections as well.
The TOTEM experiment determined the total pp cross-section at $\sqrt{s}=7$~TeV using the luminosity independent method~\cite{Antchev:2013iaa}, which was shown to be consistent with the total cross-sections
measured in independent ways, see Table~\ref{totalcrosssections}. The elastic and inelastic cross-sections were found to be $\sigma_{\rm el}=(25.1 \pm 1.1)$~mb and $\sigma_{\rm inel}=(72.9 \pm 1.5)$~mb, respectively.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|} \hline
Method & $\mathcal{L}$ independent~\cite{Antchev:2013iaa} & $N_{\rm inel}$ rate-indep.~\cite{Antchev:2011vs} &$N_{\rm inel}$ rate-indep.~\cite{Antchev:2013gaa} &$\rho$ indep.~\cite{Antchev:2013iaa}\\ \hline
$\sigma_{\rm tot}$ [mb] & 98.0 $\pm$ 2.5 & 98.3 $\pm$ 2.8 & 98.6 $\pm$ 2.2 & 99.1 $\pm$ 4.3 \\ \hline
\end{tabular}
\caption{The total cross-section $\sigma_{\rm tot}$ results measured by the TOTEM experiment at $\sqrt{s}=7$~TeV with three different methods and two different data sets.}
\label{totalcrosssections}
\end{table}
The luminosity-independent measurements were repeated at $\sqrt{s}=2.76$, $8$ and $13$~TeV. At $\sqrt{s}=2.76$~TeV, the total cross-section was found to be $\sigma_{\rm tot}=(84.7 \pm 3.3)$~mb, while the elastic and inelastic cross-sections were $\sigma_{\rm el}=(21.8 \pm 1.4)$~mb and $\sigma_{\rm inel}=(62.8 \pm 2.9)$~mb, respectively~\cite{Antchev:2017dia}. At $\sqrt{s}=8$~TeV, the total, elastic and inelastic cross-sections of $\sigma_{\rm tot}=(101.7\pm2.9)$~mb, $\sigma_{\rm el}=(27.1\pm1.4)$~mb and $\sigma_{\rm inel}=(74.7\pm1.7)$~mb, respectively, were obtained~\cite{Antchev:2013paa}. Finally at $\sqrt{s}=13$~TeV, the total, elastic and inelastic cross-sections were found to be $\sigma_{\rm tot}=(110.6~\pm~3.4$)~mb, $\sigma_{\rm el}=(31.0~\pm~1.7)$~mb and $\sigma_{\rm inel}=(79.5~\pm~1.8)$~mb, respectively~\cite{Antchev:2017dia}.
In 2016 TOTEM took data during a special run with $\beta^{*}= 2500$~m optics at 13~TeV collision energy, which made it possible to probe sufficiently low $|t|$-values to be sensitive to the Coulomb amplitude, allowing a first total ${\rm pp}$ cross-section measurement at the LHC with Coulomb normalization, $\sigma_{\rm tot}=(110.3\pm3.5$)~mb~\cite{Antchev:2017yns}. Combining the two uncorrelated TOTEM measurements at 13 TeV, luminosity independent and Coulomb normalized, yields $\sigma_{\rm tot}=(110.5 \pm 2.4)$~mb. Fig.~\ref{elasticscatteringsummary} shows a compilation of all the results together with other LHC measurements. The observed cross-sections are in agreement with the extrapolation of low-energy data to the LHC as well as with cosmic ray results.
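The quoted combination follows from a standard inverse-variance weighted average of the two uncorrelated measurements; a sketch of the arithmetic (an illustration, not the experiment's own combination code):

```python
# Inverse-variance weighted average of uncorrelated measurements (value, error).
from math import sqrt

def combine(values_errors):
    weights = [1.0 / e**2 for _, e in values_errors]
    mean = sum(w * v for w, (v, _) in zip(weights, values_errors)) / sum(weights)
    err = 1.0 / sqrt(sum(weights))
    return mean, err

# Luminosity-independent and Coulomb-normalized results at 13 TeV (mb):
mean, err = combine([(110.6, 3.4), (110.3, 3.5)])
print(f"{mean:.1f} +- {err:.1f} mb")  # 110.5 +- 2.4 mb
```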
Thanks to a high statistics $\beta^{*}=90$~m data set at $\sqrt{s}=8$~TeV energy, the TOTEM experiment excluded a purely exponential elastic pp differential cross-section~\cite{Antchev:2015zza}. The
significance of the exclusion is greater than 7$\sigma$ in the $|t|$ range from 0.027 to 0.2 GeV$^{2}$. Using refined parametrizations for the extrapolation
to the optical point, $t=0$, yields total cross-section values $\sigma_{\rm tot}=(101.5\pm2.1)$~mb and $\sigma_{\rm tot}=(101.9\pm2.1)$~mb, compatible with the previous measurements. The deviation from the purely exponential elastic pp differential cross-section has been confirmed
at 13 TeV~\cite{Antchev:2018edk}.
The TOTEM experiment performed its first measurement of elastic scattering in the Coulomb-nuclear interference (CNI) region~\cite{Antchev:2016vpy}.
The data have been collected at $\sqrt{s}=8$~TeV with a special beam optics of $\beta^{*}=1000$~m in 2012.
The $\rho$ parameter was extracted for the first time at the LHC via the Coulomb-nuclear interference, and
was found to be~$\rho = 0.12\pm0.03$. Taking the Coulomb-nuclear interference into account in the extrapolation
to the optical point, $t=0$, yields total cross-section values of $\sigma_{\rm tot}=(102.9\pm2.3)$~mb and $\sigma_{\rm tot}=(103.0\pm2.3)$~mb for the central and peripheral phase descriptions, respectively, compatible with the previous measurements.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{rho.png}
\caption{Predictions of COMPETE models for pp interactions. Each model is represented by one line (see legend). The red points represent the reference TOTEM measurements.}
\label{fig_4}
\end{figure}
The special run with $\beta^{*}= 2500$~m optics at 13~TeV collision energy with higher statistics also allowed for a precise measurement of $\rho$, yielding $\rho = 0.09 \pm 0.01$ and $\rho = 0.10 \pm 0.01$, depending on different physics assumptions and mathematical modelling.
This $\rho$ result, combined with all the TOTEM $\sigma_{\rm tot}$ measurements, indicates that it is not sufficient to include only photon and colourless C-even 2-gluon compound exchange, the so-called Pomeron, in the $t$-channel to properly describe elastic ${\rm pp}$ scattering.
A significantly better description is obtained both in the Regge-like frameworks and QCD by adding colourless C-odd 3-gluon compound exchange in the $t$-channel~\cite{Antchev:2017yns}, the so-called Odderon. On the contrary, if shown that the C-odd 3-gluon compound $t$-channel exchange is not of importance for the description of elastic ${\rm pp}$ scattering at low $|t|$, the $\rho$ value determined by TOTEM would represent a first evidence of a slowing down of the total cross-section growth at higher energies.
The $\rho$ and $\sigma_{\rm tot}$ results are incompatible with models with Pomeron exchange only and provide evidence of odderon exchange effects with significance between 3.4$\sigma$ and 4.6$\sigma$, see Fig.~\ref{fig_4}~\cite{Antchev:2017yns} and Ref.~\cite{TOTEM:2020zzr_nemes}.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{final_result.png}
\caption{(color) Differential elastic cross-section ${\rm d}\sigma/{\rm d}t$ at $\sqrt{s} = 13$~TeV. The statistical and $|t|$-dependent correlated systematic uncertainty envelope is shown
as a yellow band.}
\label{finalresult90m}
\end{figure}
At 13 TeV the differential cross-section has been measured in the $|t|$ range [0.04; 4]~GeV$^{2}$ using a very-high-statistics sample (more than $10^9$ elastic events) taken in 2015 with a dedicated data acquisition system allowing an increase of the data taking rate by an order of magnitude. This sample allowed for a precise measurement of the non-exponential part that contains a diffractive minimum (``dip'') and a secondary maximum (``bump''), see Fig.~\ref{finalresult90m}~\cite{Antchev:2018edk}. The dip position at 13 TeV was found to be $|t_{\rm dip}|=(0.47\pm0.004^{\rm stat}\pm0.01^{\rm syst})$~GeV$^{2}$ and the ratio of ${\rm d}\sigma/{\rm d}t$ at the bump
and at the dip $1.77\pm0.01^{\rm stat}$. Using $\beta^*$ = 11 m optics data taken in 2013, the dip and bump could also be observed at $\sqrt{s}=~$2.76~TeV; the position of the dip is $|t_{\rm dip}|=(0.61\pm0.03)$~GeV$^{2}$ and the bump-dip cross-section ratio $1.7\pm0.2$, as shown in Fig.~\ref{totemd0}~\cite{TOTEM:2018psk}. These new results confirm the ${\rm d}\sigma/{\rm d}t$ feature of dip and bump at the TeV scale already observed at 7 TeV with a dip position of $|t_{\rm dip}|=(0.53\pm0.01^{\rm stat}\pm0.01^{\rm syst})$~GeV$^{2}$
and a bump-dip cross-section ratio of $1.7\pm0.1$~\cite{Antchev:2011zz, Antchev:2017yns}. The result is confirmed at 8~TeV as well~\cite{TOTEM:2021imi}. The series of TOTEM elastic ${\rm pp}$ measurements show that the dip is a permanent feature of the ${\rm pp}$ differential cross-section at TeV scale.
This is expressed by a bump-to-dip cross section ratio R significantly larger than 1, see Fig.~\ref{elcross}~\cite{TOTEM:2020zzr_nemes}. However, for $\rm p\bar{p}$ at TeV scale, this R-value is close to 1, i.e. there is no dip and no bump in the differential cross section.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{totem_d0.png}
\caption{(color) The differential cross sections ${\rm d}\sigma/{\rm d}t$ at $\sqrt{s}=2.76$~TeV measured by the TOTEM experiment and the elastic $\rm p\bar{p}$ measurement of the D0 experiment at 1.96 TeV~\cite{Abazov:2012qb}. The
green dashed line indicates the normalization uncertainty of the D0 measurement.}
\label{totemd0}
\end{figure}
When the 2.76 TeV ${\rm d}\sigma/{\rm d}t$ measurement of TOTEM is compared directly to the proton-antiproton (${\rm p\bar{p}}$) measurement of the D0 experiment at $\sqrt{s} = 1.96$~TeV, a significant difference can be observed, see Fig.~\ref{totemd0}. Under the assumption that possible effects due to the energy difference between TOTEM and D0 can be neglected, the result provides evidence for a colourless C-odd 3-gluon compound exchange in the $t$-channel of
${\rm pp}$ and ${\rm p\bar{p}}$ elastic scattering, cf. also~\cite{TOTEM:2020zzr_nemes}. This conclusion has also been acknowledged by Ref.~\cite{Leader:2021zkf}.
\section{CONCLUSIONS}
The TOTEM experiment has measured elastic ${\rm pp}$ scattering at $\sqrt{s}=2.76, 7, 8$ and 13~TeV. The
total, elastic and inelastic ${\rm pp}$ cross-sections have been derived for all energies using
the luminosity independent method and the optical theorem. At $\sqrt{s}=8$~TeV, TOTEM has also excluded a purely
exponential nuclear ${\rm pp}$ differential cross-section at low $|t|$. This deviation has been confirmed at 13 TeV. At 13 TeV, the $\rho$ parameter has been precisely measured and the total ${\rm pp}$ cross-section using the Coulomb amplitude has been derived for the first time at the LHC.
The $\rho$ measurement combined with all the TOTEM $\sigma_{\rm tot}$ measurements indicate the necessity to add the exchange of a colourless C-odd 3-gluon compound in the $t$-channel of elastic ${\rm pp}$ scattering.
At $\sqrt{s} = 2.76$~TeV, a diffractive minimum (``dip'') and a secondary maximum (``bump'') have been observed; when compared to the ${\rm p\bar{p}}$ measurement of the D0 experiment at
$\sqrt{s} = 1.96$~TeV, a significant difference can be observed. Under the assumption that possible effects due to the energy difference between TOTEM and D0 can be neglected, the result provides evidence for the exchange of a colourless C-odd 3-gluon compound in the $t$-channel of ${\rm pp}$ and ${\rm p\bar{p}}$ elastic scattering.
At 13 TeV, the differential cross-section has been measured in the [0.04 GeV$^{2}$; 4 GeV$^{2}$] $|t|$-range allowing for the precise measurement of the dip. The series of TOTEM elastic ${\rm pp}$ measurements show that the dip is a permanent feature of the ${\rm pp}$
differential cross-section at the TeV scale.\newline\vspace{2mm}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.8\textwidth]{c_logxy.pdf}
\caption{The ratio $R$ of the cross sections at the bump and dip as a function of $\sqrt{s}$ for ${\rm pp}$ and ${\rm p\bar{p}}$. The pp data are fitted to the function noted in the legend.}
\label{elcross}
\end{center}
\end{figure}
Comments: Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1 2021.
\iffalse
\part[$\rho$ photoproduction in ALICE\\ \phantom{x}\hspace{4ex}\it{Spencer R. Klein on behalf of the ALICE Collaboration}]{}
\section{Introduction}
Ultra-peripheral collisions (UPCs) at heavy-ion colliders are a prolific source of photonuclear interactions; they are the energy frontier for photon-mediated reactions \cite{Baltz:2007kq,Bertulani:2005ru,Klein:2020fmr,Contreras:2015dqa}. In UPCs, the ions do not interact hadronically, so the product of the photon-mediated interaction is visible. In a simple approximation, the impact parameter $b$ must be more than twice the nuclear radius, $R_A$. UPCs include both two-photon interactions and photoproduction. UPCs at the Large Hadron Collider (LHC) represent the energy frontier for photon-mediated interactions. In pp collisions, photon-proton center of mass energies up to about 3 TeV are accessible, while in pPb collisions, the maximum energy is about 700 GeV. The photons have a small $p_T$, roughly $p_Z/\gamma$, where $\gamma$ is the ion Lorentz boost, so it is possible to use the $p_T$ distribution to probe the size of the nuclei. Unfortunately, the photon $p_T$ is a conjugate variable to the impact parameter, so if restrictions are imposed on $b$ (such as $b>2R_A$), the mean $p_T$ will increase; this increase is not calculable without some assumptions \cite{Klein:2020jom}.
In vector meson photoproduction, an incident photon fluctuates to a quark-antiquark dipole which then scatters elastically from a target nucleus, emerging as a real vector meson. During the elastic scattering, the vector meson retains the same quantum numbers (including helicity) as the incident photon.
The $\rho^0$ is of special interest as the most copiously photoproduced vector meson \cite{STAR:2002caw}. It is the lightest vector meson, corresponding to the largest dipole, so is the most subject to nuclear effects. In addition to $\rho$ photoproduction, the photon can fluctuate directly to a $\pi^+\pi^-$ pair, which then scatters, emerging as a real pion pair \cite{Bauer:1977iq}. These two possibilities are indistinguishable, so interfere with each other, enhancing the spectrum below the $\rho^0$ mass, and suppressing it at higher masses. Higher-mass, excited $\rho$ states are also possible, and can lead to higher mass $\pi^+\pi^-$ pairs.
One complication in PbPb UPCs is that the coupling constant $Z\alpha\approx 0.6$ is large, so for a collision with moderate impact parameters, it is very possible to exchange more than one photon, complicating the reaction \cite{Baltz:2002pp,Baur:2003ar}. A second photon is likely to excite one of the nuclei, while a third photon may excite the other nucleus, leading to mutual Coulomb dissociation \cite{Baltz:1996as}. The excitation may be collective, such as a Giant Dipole Resonance (GDR) or higher excitation, or an excitation of a single nucleon to a $\Delta$ or higher resonance, or, for higher energy photons, a more complex hadronic interaction. Production of an additional vector meson is also possible. Most of these reactions involve nuclear dissociation, leading to the emission of one or more neutrons, or, less frequently, one or more protons. It is also possible to produce a vector meson and excite the nucleus via one-photon exchange leading to incoherent photoproduction.
In these reactions, the photons are emitted independently \cite{Baur:2003ar}, connected only by a common impact parameter. The impact parameter affects the photon flux and maximum energy. And, since the photons are polarized with their electric field vectors parallel to the impact parameter vector, the photons share the same polarization.
These additional reaction products complicate the analysis of UPC photoproduction, since one can no longer focus exclusively on exclusive reactions. In most cases, the additional reaction only leads to the production of neutrons, but sometimes $\pi^{\pm}$ may be created. The cross-section for having multiple reactions may be computed in impact parameter ($b$) space. For example, the cross-section to produce a $\rho$ with multiple Coulomb excitation is
\begin{equation}
\sigma = \int {\rm d}^2b P_{\rho} (b) P_{X1}(b) P_{X2}(b)
\label{eq:mult}
\end{equation}
where $P_{\rho} (b)$, $P_{X1}(b)$, and $P_{X2}(b)$ are the probabilities to produce a $\rho$ and to excite the first and second nuclei, respectively. These probabilities are given by the product of the differential photon flux ${\rm d}^2N_\gamma/{\rm d}b^2$ and the $\gamma A$ cross-sections. For nuclear excitations, the $\gamma A$ cross-sections must include a wide range of reactions, including collective nuclear excitations like the GDR, nucleon excitations, such as the $\Delta^+$ resonance, and partonic excitations from high-energy photons, spanning a wide photon energy range from about 10 MeV (in the target frame) up to the kinematic limit. These cross-sections are usually determined using tabulations of data from multiple sources \cite{Baltz:1996as}. When the cross-sections are large, it may be necessary to include a unitarity correction, since multiple photons may combine to excite a single target to a higher level.
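As a rough illustration (not part of any published calculation), the factorization of Eq.~\ref{eq:mult} can be evaluated numerically. The probability profiles below are hypothetical $1/b^2$-like toys, not the tabulated $\gamma A$ cross-sections used in real calculations:

```python
import math

# Toy numerical evaluation of sigma = int d^2b P_rho(b) P_X1(b) P_X2(b).
# The probability profiles are HYPOTHETICAL illustrations; real calculations
# use photon fluxes and tabulated gamma-A excitation cross-sections.

R_A = 7.0          # nuclear radius, fm (roughly Pb)
B_MIN = 2 * R_A    # UPC requirement: no hadronic overlap

def p_rho(b):
    # toy rho-photoproduction probability, ~1/b^2 from the photon flux
    return min(1.0, 5.0 / b**2)

def p_exc(b):
    # toy single-nucleus Coulomb-excitation probability, also ~1/b^2
    return min(1.0, 30.0 / b**2)

def integrate(weight):
    """Trapezoidal integral of 2*pi*b*weight(b) db from B_MIN to 200 fm."""
    n, b_max = 20000, 200.0
    h = (b_max - B_MIN) / n
    total = 0.0
    for i in range(n + 1):
        b = B_MIN + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * 2 * math.pi * b * weight(b)
    return total * h

sigma_all = integrate(p_rho)                                # rho, excitation ignored
sigma_XnXn = integrate(lambda b: p_rho(b) * p_exc(b) ** 2)  # rho + mutual excitation

# requiring the extra photons selects smaller mean impact parameters
b_mean_all = integrate(lambda b: b * p_rho(b)) / sigma_all
b_mean_XnXn = integrate(lambda b: b * p_rho(b) * p_exc(b) ** 2) / sigma_XnXn
print(b_mean_XnXn < b_mean_all)  # True
```

Even with these toy profiles, the extra $1/b^2$ factors from the additional photons visibly pull the mean impact parameter down, which is the qualitative point made below.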
For photons below a cutoff energy (when $k< \gamma \hbar c/b$), the photon flux has a $1/b^2$ dependence, so, the more photons that are exchanged, the smaller $\langle b\rangle$ \cite{Baur:2003ar}. So, one can use the number of exchanged photons to preferentially select different ranges of impact parameter. They also all share the same photon polarization, so polarization correlations are expected.
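For orientation, the cutoff $k \sim \gamma\hbar c/b$ can be evaluated with illustrative beam parameters (roughly Pb-Pb at $\sqrt{s_{NN}}=5.02$~TeV; the numbers below are assumptions for the estimate, not quoted values):

```python
# Order-of-magnitude estimate of the cutoff photon energy k ~ gamma*hbar*c/b.
# Beam parameters are ILLUSTRATIVE, roughly Pb-Pb at sqrt(s_NN) = 5.02 TeV.
hbar_c = 197.327     # MeV fm
gamma_lab = 2700.0   # Lorentz factor of the Pb beams in the lab frame
b_min = 14.2         # fm, roughly 2*R_A for lead

k_cut = gamma_lab * hbar_c / b_min   # MeV, in the lab frame
print(k_cut / 1000.0)  # a few tens of GeV
```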
\section{Detector and Data Analysis}
The data used here were collected using the ALICE detector, which comprises a large central detector and a forward muon spectrometer \cite{ALICE:2008ngc}. For the analyses discussed here, the most important components are an inner silicon detector and large time projection chamber, in a 0.5 T solenoidal magnet.
The events analyzed here were collected with a special trigger optimized for ultra-peripheral collisions \cite{ALICE:2020ugp,ALICE:2021jnv}. It required two pairs of signals in the silicon detector, with azimuthal angular separation greater than 153 degrees. Each signal pair, consisting of hits in different silicon layers, was consistent with one track. The azimuthal angle requirement was to select pairs where the tracks were roughly back-to-back.
The other trigger requirements provided vetoes to reject events that contained additional particles. The four veto detectors and their pseudorapidity ($\eta$) coverage are: V0A ($2.8 < \eta < 5.1$), V0C ($-3.7 < \eta < -1.7$), ADA ($4.7 < \eta < 6.3$) and ADC ($-6.9 < \eta < -4.9$).
Data from the zero degree calorimeters (ZDCs) were used in the analysis, but not in the trigger.
The analysis selected events with exactly two good oppositely charged tracks, consistent with a vertex in the interaction region. The tracks were required to have specific energy loss (${\rm d}E/{\rm d}x$) in the TPC consistent with being $\pi^\pm$.
\begin{figure}
\begin{center}
\epsfig{figure=rhomass.pdf,height=0.35\textwidth}
\epsfig{figure=rhopt.pdf,height=0.35\textwidth}
\caption{The mass (left) and $p_T$ spectra for selected pairs. A cut of $p_T<200$ MeV/c is applied to the mass spectrum, to emphasize coherent production, while the $p_T$ spectrum includes a wide range of masses. For $p_T < 200$ MeV/c, the like-sign background is orders of magnitude below the coherent production signal. From \cite{ALICE:2020ugp}.}
\label{fig:first}
\end{center}
\end{figure}
\section{$\rho$ production results}
These cuts left a clean signal. Figure \ref{fig:first} shows the dipion invariant mass and $p_T$ spectra. The two peaks in the $p_T$ spectrum below 200 MeV/c correspond to the first two diffractive maxima, clearly showing the diffractive nature of the production. The like-sign pairs, which are a proxy for most backgrounds (notably grazing hadronic collisions), are far below the oppositely charged pairs, showing that there is little background in the events. Two remaining backgrounds, from photoproduction of the $\omega$ (followed by $\omega\rightarrow\pi^+\pi^-\pi^0$) and from $\gamma\gamma\rightarrow\mu^+\mu^-$, are also small and mostly concentrated at low $M_{\pi\pi}$. The latter is a background since we cannot effectively distinguish $\mu^\pm$ from $\pi^\pm$.
The $M_{\pi\pi}$ mass spectrum can be fit with a relativistic Breit-Wigner distribution for the $\rho^0$ plus a constant term for direct $\pi^+\pi^-$, and an interference term between the two. $\gamma\gamma\rightarrow\mu^+\mu^-$ is included with a template based on STARlight \cite{Klein:2016yzr}, while $\omega\rightarrow\pi^+\pi^-\pi^0$ is removed by only fitting in the region $M_{\pi\pi}>0.62$ GeV.
\begin{figure}
\begin{center}
\epsfig{figure=Rhodiagrams.pdf,height=0.3\textwidth}
\epsfig{figure=impactparameters.png,height=0.3\textwidth}
\caption{(left) Four diagrams that contribute to $\rho$ photoproduction: (a) coherent production, plus coherent production with an additional (b) one or (c) two photons exchanged, and (d) incoherent photoproduction. The products from reactions (b) and (d) are not completely distinguishable. (right) The impact parameter distributions for different nuclear excitations: no nuclear excitation (0n0n, diagram (a)), single nuclear excitation (0nXn, diagram (b)) and mutual Coulomb excitation (XnXn, diagram (c)). Nuclear excitation preferentially selects events with smaller impact parameters. Incoherent photoproduction corresponds to single-photon exchange, so it has a similar impact parameter distribution to 0n0n. The right panel is from Ref. \cite{Klein:2020fmr}.
}
\label{fig:diagrams}
\end{center}
\end{figure}
The produced $\rho$ and direct $\pi^+\pi^-$ may be accompanied by neutrons, which can occur when the two nuclei exchange additional photons, as in Figs. \ref{fig:diagrams}(b) and (c), or from
an incoherent photoproduction reaction (Figs. \ref{fig:diagrams}(d)) involving a single photon. The photons are expected to be emitted independently, sharing only a common impact parameter \cite{Gupta:1955zza,Baur:2003ar}.
Figure \ref{fig:dsdt} shows the cross-sections for all events, and for the same events divided into three classes. The top-left panel shows the total measured $\rho$ cross-section, compared to five models. STARlight is based on parameterized HERA $\gamma {\rm p}$ data, with a Glauber-like eikonal formalism to handle nuclear targets \cite{Klein:1999qj}. The GKZ predictions are based on a modified vector-meson-dominance model, using a Glauber-Gribov formalism for nuclear targets \cite{Guzey:2013jaa}. The Glauber-Gribov approach allows for a dipole to interact multiple times as it traverses a target. Each individual interaction can be inelastic, with the intermediate states (between interactions) allowed to include high-mass fluctuations. The CCKT predictions are based on a calculation of dipoles passing through a nuclear target, which is modeled in terms of gluon density as a function of impact parameter \cite{Cepila:2016uku}. The gluon density includes gluonic hot-spots. Finally, the GMMNS model is another dipole based calculation that includes an implementation of gluon saturation \cite{Goncalves:2017wgg}. Most of the models do a reasonable job of matching the data, although STARlight is a bit on the low side, and the CCKT (nuclear) model is a bit high.
Per Eq. \ref{eq:mult}, the cross-sections for additional photon exchange may be easily calculated given a $\sigma_{\gamma p}$.
The STARlight neutron calculation is done within the STARlight code \cite{Klein:2016yzr}, while the CCKT simulation used the $\rm n_O^On$ afterburner \cite{Broz:2019kpl}. To the extent that these calculations are based on the same parameterized photoexcitation data, they should give the same relative cross-sections for $\rho^0$ production accompanied by neutron emission. However, the relative cross-sections do not agree perfectly. For example, in the two upper panels (total coherent $\rho$ cross-section and $\rho$ without neutron emission),
the CCKT (nuclear) cross-section is well above the other calculations, while in the lower panels, where neutron emission is required, it is relatively lower. A similar trend is evident for the CCKT curve. It may be that $\rm n_O^On$ predicts lower excitation probabilities than STARlight.
Experimentally, neutron emission, expected in most photonuclear reactions, is easy to detect using the ALICE ZDCs. However, there is a complication. Some of the photoexcitation occurs at high energies, and the reactions can lead to emission of one or more $\pi^\pm$ or heavier particles. If these particles hit any of the detectors used as event vetoes (the ADA, ADC, V0A and V0C), the event is rejected, causing a loss of $\rho$ signal. The loss is substantial: $26\pm4$\% for events with neutrons in one ZDC, and $43\pm5$\% for events with neutrons in both ZDCs. This loss is estimated using control triggers which do not include the veto, and appropriate corrections are applied.
\begin{figure}
\begin{center}
\epsfig{figure=dsdy1.pdf,height=0.35\textwidth}
\epsfig{figure=dsdy2.pdf,height=0.35\textwidth}
\epsfig{figure=dsdy3.pdf,height=0.35\textwidth}
\epsfig{figure=dsdy4.pdf,height=0.35\textwidth}
\caption{${\rm d}\sigma/{\rm d}y$ for $\rho$ photoproduction for (top left) all events, and three different classes of neutron emission: (top right) no neutrons, (bottom left) neutrons in one ZDC only, and (bottom right) neutrons in both ZDCs. Each panel is compared with several different theoretical calculations. From
\cite{ALICE:2020ugp}.
}
\label{fig:dsdt}
\end{center}
\end{figure}
\section{$\rho$ photoproduction in XeXe collisions}
The $A$ dependence of $\rho$ photoproduction can provide an important clue about the presence of saturation or other high-density nuclear phenomena. In 2017, the LHC collided xenon nuclei at $\sqrt{s_{NN}}=5.44$ TeV. ALICE used the same UPC trigger as for lead-lead running and measured the cross-section using similar methods \cite{ALICE:2021jnv}. Figure \ref{fig:highmass} shows the $\rho$ photoproduction cross-section as a function of mass number. At mid-rapidity,
\begin{equation}
\frac{{\rm d}\sigma}{{\rm d}y} = 131.5 \pm 5.6 ({\rm stat.})^{+17.5}_{-16.9} ({\rm syst.})\ {\rm mb}.
\end{equation}
This is slightly below the STARlight predictions, slightly below the lower bound of the GMMNS prediction, and below the GKZ band. However, none of these deviations are very significant.
The cross-section scales as $A^\alpha$, with $\alpha=0.96\pm0.02$, dominated by systematic uncertainty. This shows the presence of substantial nuclear effects. Without nuclear effects, the coherent cross-section would scale as $A^{4/3}$. This is the product of two scaling relations: the forward cross-section scales as $A^2$, while the $p_T$ range over which coherent production is possible scales as $A^{-2/3}$, leaving the $4/3$ exponent. On the other hand, it is also considerably above the prediction of a black disk model, in which the cross-section scales as the frontal area of the nucleus, $A^{2/3}$.
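The scaling relations above can be made concrete with a quick numerical comparison, using the mass numbers of the collided isotopes ($A=129$ for Xe, $A=208$ for Pb; the arithmetic is illustrative, not a fit):

```python
# Worked comparison of the measured A-scaling with the two limiting cases
# quoted in the text.
A_Xe, A_Pb = 129, 208
ratio = A_Pb / A_Xe

measured   = ratio ** 0.96      # alpha = 0.96 +- 0.02 (ALICE)
full_coh   = ratio ** (4 / 3)   # full coherence, no nuclear effects
black_disk = ratio ** (2 / 3)   # cross-section ~ frontal area of the nucleus

# the data sit between the black-disk and no-nuclear-effects limits
print(black_disk < measured < full_coh)  # True
```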
\section{A high-mass state}
A number of excited, higher-mass $\rho$ states can decay to $\pi^+\pi^-$. Figure \ref{fig:highmass} (right) shows the $\pi^+\pi^-$ mass spectrum for events with $p_T < 200$ MeV/c in lead-lead collisions. The expected tail of the $\rho^0$ is visible, with a broad resonance on top of it. The spectrum is fit using an exponential for the $\rho^0$ plus the direct $\pi^+\pi^-$ tail, plus a Gaussian for the resonance. The null (no-resonance) hypothesis is rejected with 4.5 $\sigma$ significance. The resonance best-fit parameters are mass $M=1725\pm 17$ MeV and width $\Gamma=143\pm21$ MeV. The resonance is similar to that seen by STAR in gold-gold UPCs, with $M=1653\pm 10$ MeV and width $\Gamma=164\pm15$ MeV \cite{Klein:2016dtn}. STAR pointed out that the peak might be compatible with photoproduction of the $\rho_3 (1690)$. The ZEUS Collaboration also saw resonances in exclusive $\pi^+\pi^-$ electroproduction ($Q^2>2$ GeV$^2$), with masses of $1350 \pm 20 ^{+20}_{-30}$ MeV and $1780 \pm 20 ^{+15}_{-20}$ MeV \cite{ZEUS:2011tzw}.
\begin{figure}
\begin{center}
\epsfig{figure=rhocrosssectionvsA.pdf,height=0.3\textwidth}
\epsfig{figure=MassHighState.pdf,height=0.3\textwidth}
\caption{(left) $\gamma A\rightarrow \rho A$ cross-sections vs. mass number $A$ for pp, XeXe and PbPb collisions, for 65 GeV photons. (right) $M_{\pi\pi}$ for exclusive production for events with pair $p_T<200$ MeV/c. A broad resonance is visible over the high-mass tail of the $\rho^0$ plus direct $\pi^+\pi^-$. From \cite{ALICE:2020ugp}.
}
\label{fig:highmass}
\end{center}
\end{figure}
\section{Future plans}
During LHC Run 3 and Run 4, ALICE will benefit from several upgrades that improve charged-particle reconstruction, raise ALICE's rate capability and remove trigger bottlenecks for UPC data collection. The TPC endcaps have been replaced with GEM-based readouts to allow continuous (rather than gated) TPC readout, and the new ITS2 silicon tracker will use monolithic active pixel sensors to greatly improve vertexing, especially for open-charm hadrons.
For UPCs, the biggest improvement will be a new streaming readout which will do away with triggering. All data will flow to the data acquisition system, where it can be studied with high-level event selection algorithms \cite{Antonioli:2013ppp}; for lead-lead running, all data will be saved. This will give an enormous boost to UPC data collection, since triggering is usually the limiting factor for UPC studies. During Run 3 and Run 4, a total of 13 nb$^{-1}$ of lead-lead data should be collected. This is equivalent to 5.5 billion $\rho^0\rightarrow\pi^+\pi^-$ within the ALICE acceptance, along with 210 million $\rho'\rightarrow\pi^+\pi^-\pi^+\pi^-$. The $J/\psi$ sample should include 1.1 million $J/\psi\rightarrow\mu^+\mu^-$ in the central detector, a similar number of $e^+e^-$ plus about 600,000 $\mu^+\mu^-$ in the forward spectrometer \cite{Citron:2018lsq}. For the $\psi'$, the rates are about 35,000 and 19,000 respectively. Photoproduction of $\Upsilon(1S)\rightarrow\mu^+\mu^-$ should also be visible, with 2,800 events expected in the central detector, and 880 in the forward muon spectrometer. This should be enough for detailed studies of the spectroscopy of the light vector mesons, including of the substructure of the heavier mesons. It should also be possible to measure the production characteristics of heavy quarkonium, comparing the effect of shadowing on mesons of different masses. The removal of the trigger bias and the improved vertex measurements will also facilitate the study of photoproduction of open charm.
In addition to lead-lead running a short (0.5 nb$^{-1}$?) oxygen-oxygen run has been proposed for Run 3 \cite{ALICE:2021wim,Brewer:2021kiv}. This offers two unique opportunities for UPCs.
The first is to study incoherent photoproduction of the $\rho$ on oxygen targets. Incoherent photoproduction is of great interest because, in the Good-Walker paradigm, it is related to event-by-event fluctuations in the nuclear configuration -- the phase space that includes both the individual nucleon positions and, more importantly, the presence of gluonic `hot spot' fluctuations \cite{Klein:2019qfb}. The $p_T$ spectrum for incoherent production is loosely tied to the length scale for these fluctuations, so it is desirable to measure over as wide a range in $p_T$ as possible.
It is difficult to study incoherent photoproduction in lead-lead collisions because of the large background from coherent production. In oxygen-oxygen running, the ratio of incoherent to coherent production is expected to be larger than in lead-lead. As Fig. \ref{fig:oxygen} shows, the predicted coherent peak is still larger than the incoherent, but by less than in lead-lead collisions. The other difference with lead-lead collisions is that oxygen is only charge eight, so most reactions only involve single photon exchange. Multi-photon exchange, as discussed above, is almost absent, so the presence of neutrons is a clear sign of nuclear breakup. Although not all nuclear dissociation will produce neutrons, most will do so. Photonic excitation is also possible, but oxygen is a very stable, doubly magic nucleus, with a lowest lying excited state at 6.05 MeV \cite{IAEA}, so nucleon emission is likely to be predominant.
The second opportunity is to study the competition between photoproduction and double-diffractive interactions. Both of these reactions can lead to $\pi^+\pi^-$ final states. Photoproduction dominates in heavy-ion collisions, while double-diffractive interactions dominate in pp collisions. With medium-charge nuclei, the amplitudes should be similar, so interference may be possible among final states with the same spin/parity.
\begin{figure}
\begin{center}
\epsfig{figure=oxygen.png,height=0.3\textwidth}
\caption{Simulated $p_T$ spectrum for coherent and incoherent $\rho$ photoproduction in oxygen-oxygen collisions at $\sqrt{s_{NN}}=6.37$ TeV. From \cite{ALICE:2021wim}.
}
\label{fig:oxygen}
\end{center}
\end{figure}
\section{Conclusions}
The $\rho^0$ is copiously photoproduced in ultra-peripheral collisions of heavy ions. The cross-section for coherent $\rho$ photoproduction is quite well reproduced in models that use Glauber or dipole calculations to predict the cross-sections. The cross-section scales with the mass number $A$ as $A^{0.96\pm 0.02}$, showing that nuclear effects substantially moderate the $A^{4/3}$ dependence expected for full coherence, without nuclear suppression.
The cross-section for coherent $\rho$ photoproduction accompanied by neutron emission is consistent with a model whereby the neutron production comes through the exchange of one or more additional photons, which are independent of the $\rho$ production, except for sharing a common impact parameter.
We have also observed a heavy state, with a mass of $1725\pm17$ MeV, decaying to $\pi^+\pi^-$. The mass and cross-section may be consistent with those expected for the $\rho_3(1690)$.
Looking ahead, ALICE expects a rich bounty of UPC results during LHC Run 3 and Run 4. The new flow-through data acquisition system will eliminate the bottleneck formerly imposed by the requirements of a low-multiplicity UPC trigger. Run 3 has been proposed to include a short oxygen-oxygen run, which should offer the opportunity to study incoherent $\rho$ photoproduction on an intermediate mass target.
\section*{Acknowledgements}
This work is supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract number DE-AC02-05CH11231.
\iffalse
\part[Overview of ATLAS Forward Proton detectors for LHC Run 3 and plans for the HL-LHC\\ \phantom{x}\hspace{4ex}\it{Maciej P. Lewicki on behalf of the ATLAS Collaboration}]{}
\section{Introduction}
The predictions of forward proton scattering arise in a diverse range of physics, including hard \cite{jl2,jl3} and nonperturbative \cite{jl4} QCD, interactions at the electroweak scale \cite{jl7,jl8,jl9,jl10}, and searches for physics beyond the Standard Model \cite{jl5,jl6,jl11,jl12,jl13,jl14}.
Such events, usually called diffractive, involve an exchange of a colourless object between interacting protons, one or both of which may remain intact.
Moreover, a \textit{rapidity gap} will be present -- an absence of particles produced in the kinematic vicinity of the intact proton.
Historically, the rapidity gap has been the standard experimental signature of a diffractive event; however, it is frequently outside the acceptance of the detector, or is destroyed by background, i.e.\ particles coming from \textit{pile-up} -- independent collisions happening in the same bunch crossing.
An alternative method of identifying diffractive events is a direct measurement (\textit{tagging}) of the scattered proton, which requires additional devices called \textit{forward detectors} far downstream from the interaction point.
\begin{figure}[h]
\hspace*{-1.7cm}\includegraphics[width=1.1\textwidth]{figures/forward_detectors}
\caption{A diagram of Forward Detectors in the ATLAS experiment showing their placement with respect to beam lines and optical instrumentation: dipoles (D1, D2), quadrupoles (Q1--6) and collimators (TCL4--6).}
\label{fig:fwd_det}
\end{figure}
\section{Forward Spectrometers in ATLAS}
The ATLAS \cite{atlas_maciej} forward spectrometers are a set of instruments housed in Roman Pot devices that register protons scattered at very small angles. A proton scattered at the interaction point (IP) is deflected outside the beam envelope by dipole and quadrupole magnets of the LHC~\cite{evans}. Its momentum can be determined by measuring points on its trajectory~\cite{tdr,optics}. The schematic layout of the forward spectrometers in the ATLAS experiment with respect to the beam lines, optics instrumentation and other forward detectors is shown in Figure \ref{fig:fwd_det}.
\paragraph{Absolute Luminosity For ATLAS (ALFA)} performs measurements of soft diffraction and elastic scattering. It also provides an important input for Monte-Carlo generators, in particular, for modelling cosmic ray showers and simulation of the pile-up background.
The ALFA spectrometer system consists of four vacuum-sealed spectrometers housed in Roman Pots, which are inserted vertically (top and bottom) onto the beam line.
The NEAR and FAR stations are placed on each side of the ATLAS Interaction Point at 237 and 241/245\footnote{The FAR station was initially installed at 241 m (Run 1) and then moved to 245 m (Run 2) to improve the reconstruction of proton kinematics.} m respectively, with the tracker's edge approaching to within 2 mm of the beam during normal operation.
Each station houses a multi-layer scintillating-fibre (SciFi) tracker consisting of two main detectors (10 layers of 64 fibres each) used for tracking, and 4 outer layers for precise alignment. The achieved tracking resolution is approximately $\sigma=30$ $\mu$m in both the vertical and horizontal directions.
The read-out is performed by Multi-Anode Photo-Multipliers, and dedicated scintillators provide the triggering capability. The ALFA detectors require special running conditions of low pile-up as well as high $\beta^*$ optics.
\paragraph{ATLAS Forward Protons (AFP)} spectrometer system consists of four Roman Pot stations. Their placement with respect to the beam lines is shown schematically in Figure~\ref{fig:sketch}. ``NEAR'' and ``FAR'' devices are placed at 205 m and 217 m on both sides of the IP and are inserted horizontally towards the beam.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{figures/AFP_sketch}\\[-0.9cm]
\caption{A schematic diagram of the ATLAS Forward Proton detectors.}
\label{fig:sketch}
\end{figure}
Each station houses four planes of 3D silicon pixel sensors~\cite{sit1,sit2,sit3,sit4} forming the silicon tracker (SiT), which measures the trajectories of the scattered protons. The sensors have 336$\times$80 pixels with
50$\times$250 $\mu$m$^2$ area each, providing the combined spatial resolution of reconstructed proton tracks of 6 $\mu$m and 30 $\mu$m in $x$ and $y$ directions, respectively \cite{sit5}. Optimal resolution in $x$ coordinate is achieved with the sensors tilted by 14 degrees about the $x$-axis.
\begin{figure}[h]
\centering
\hspace*{-0.7cm}
\includegraphics[height=0.3\linewidth]{figures/ksi_pT_data2.pdf}~~~~~~
\includegraphics[height=0.3\linewidth]{figures/fig_02}~~~~~~
\includegraphics[height=0.3\linewidth]{figures/resolution}
\caption{Left: example distribution of reconstructed track positions ($x$ and $y$, transverse to the beam); the beam spot is approximately at (0,10) mm and deviations from this position are related to proton energy loss, as well as its transverse momentum (taken from \cite{proc_lhcp}).
Middle: Simulated geometric acceptance of the AFP detector as a function of the proton relative energy loss and its transverse momentum (from \cite{tdr}).
Right: AFP reconstruction resolution in dependence on proton relative energy loss $\xi$ calculated accounting also for multiple scattering and unknown position of the collision vertex \cite{tdr}.
}
\label{fig:xy}
\end{figure}
The reconstruction of the positions of protons traversing the AFP detectors (see Figure \ref{fig:xy}, left panel), in a known magnetic field, allows the estimation of the proton energy and transverse momentum \cite{unfold}. The main observable measured by the AFP is the proton fractional energy loss, defined as $\xi=1-E_{\text{proton}}/E_{\text{beam}}$. The precision of unfolding the proton kinematics from the positions measured in the AFP is directly affected by its spatial resolution. Figure \ref{fig:xy} (right panel) illustrates how the resolution changes with the proton relative energy loss ($\xi$). Additional effects that degrade the resolution of the proton energy reconstruction include the unknown position of the primary vertex and multiple scattering.
The typical acceptance in $\xi$ and $p_\text{T}$ is illustrated in the middle panel of Figure \ref{fig:xy}.
\paragraph{Time-Of-Flight (ToF) detectors} are additional equipment present in Roman Pots at FAR stations. For processes in which protons are reconstructed on both sides of the IP, this allows rejection of background from pile-up by using the difference between the A and C-side ToF measurements to reconstruct the primary vertex position.
The ToF detectors are based on Cherenkov radiation in quartz crystals, which leads to an excellent timing resolution. The performance of the ToF devices was measured with the data gathered in 2017 \cite{tof1,tof2}; the obtained time resolution reaches 20 $\pm$ 4 ps and 26 $\pm$ 5 ps for sides A and C, respectively. The achieved time resolution translates into a determination of the primary vertex $z$-position with an accuracy of 5.5 $\pm$ 2.7 mm. Such a level of precision allows for a substantial reduction of background in `double-tag' events, as shown in Figure \ref{fig:tofres} (right). However, the observed efficiency of the ToF reconstruction was very low ($\approx$7\%) due to fast PMT degradation during data taking. Recently, new PMTs were installed and preliminary tests show readiness for the upcoming data-taking campaigns of Run 3.
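As a back-of-envelope cross-check, the quoted per-side time resolutions can be propagated to a vertex $z$-resolution, assuming $z_{\text{ToF}} = c\,(t_A - t_C)/2$ with uncorrelated A- and C-side measurements (an assumption of this sketch):

```python
import math

# Propagate the per-side ToF resolutions to the vertex z-resolution,
# assuming z = c*(t_A - t_C)/2 and uncorrelated sides.
c_mm_per_ps = 0.2998           # speed of light in mm/ps
sigma_A, sigma_C = 20.0, 26.0  # measured per-side time resolutions, ps

sigma_dt = math.hypot(sigma_A, sigma_C)  # resolution of t_A - t_C
sigma_z = 0.5 * c_mm_per_ps * sigma_dt   # vertex z-resolution, mm
print(round(sigma_z, 1))  # ~5 mm, consistent with the quoted 5.5 +- 2.7 mm
```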
\begin{figure}[h]
\centering
\includegraphics[height=0.36\linewidth]{figures/tof_resolution}~~~~~~
\includegraphics[height=0.36\linewidth]{figures/tof_pileup}\\
\caption{Left: Example distribution of $z_{\text{ATLAS}}-z_{\text{ToF}}$ measured in events with ToF signals on both sides of the interaction region, where $z_{\text{ATLAS}}$ denotes the $z$-position of the vertex reconstructed as primary by ATLAS (taken from \cite{tof0}).
Right: A simulation of the fraction of pile-up events present in a data sample with a double AFP tag, shown as a function of the mean pile-up for various timing resolutions (taken from Ref. \cite{tdr}).}
\label{fig:tofres}
\end{figure}
\FloatBarrier
\paragraph{AFP global alignment} is performed by comparing the proton relative energy loss measured in the AFP, $\xi_{\text{AFP}}$, with the corresponding value calculated from the kinematics of the produced lepton pair, $\xi_{ll}$:
\begin{equation}
\xi_{\text{AFP}}=1-\frac{E_{\text{proton}}}{E_{\text{beam}}},~~~~~~~\xi_{ll}^{\text{A/C}}=\frac{m_{ll} e^{(+/-) y_{ll}}}{\sqrt{s}}
\label{eq:xill}
\end{equation}
The AFP alignment parameters are adjusted such that the maximum of the distribution of $\xi_{\text{AFP}}-\xi_{\mu\mu}$ is at zero. Figure \ref{fig:alignment} illustrates the differences between the track $x$ position projected from the lepton kinematics and the one measured by the AFP, before and after the alignment correction. A valuable advantage of this method is its low and well-modelled background, which allows an alignment precision currently quoted at 300 $\mu$m. Continued studies of data and simulation suggest that this value can be further improved.
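As an illustration only (not the ATLAS implementation), the alignment idea can be sketched in a few lines: generate toy dimuon events, compute $\xi_{\mu\mu}$ via Eq.~(\ref{eq:xill}), and read the correction off the peak of the $\xi_{\text{AFP}}-\xi_{\mu\mu}$ distribution. In reality the correction is applied to the measured $x$ coordinate; here, for brevity, a made-up miscalibration is applied directly in $\xi$.

```python
import math, random

SQRT_S = 13000.0  # GeV, assumed Run-2 centre-of-mass energy

def xi_ll(m_ll, y_ll, side):
    """Fractional energy loss from dilepton kinematics, cf. Eq. (xill)."""
    sign = +1.0 if side == "A" else -1.0
    return m_ll * math.exp(sign * y_ll) / SQRT_S

# Toy events: true xi drawn uniformly, AFP value shifted by a hypothetical
# miscalibration plus resolution smearing.
random.seed(1)
TRUE_OFFSET = 0.004
diffs = []
for _ in range(20000):
    xi_true = random.uniform(0.02, 0.12)
    m_ll = xi_true * SQRT_S          # central dimuon at y = 0 for simplicity
    xi_afp = xi_true + TRUE_OFFSET + random.gauss(0.0, 0.001)
    diffs.append(xi_afp - xi_ll(m_ll, 0.0, "A"))

# Locate the maximum of the xi_AFP - xi_mumu histogram.
nbins, lo, hi = 80, -0.01, 0.01
counts = [0] * nbins
for d in diffs:
    if lo <= d < hi:
        counts[int((d - lo) / (hi - lo) * nbins)] += 1
peak_bin = max(range(nbins), key=counts.__getitem__)
offset = lo + (peak_bin + 0.5) * (hi - lo) / nbins
print(f"estimated alignment offset in xi: {offset:.4f}")
```

The shift of the peak from zero recovers the injected offset; in the real procedure the analogous shift fixes the global alignment constant.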
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{figures/before_alignment_wide}
\includegraphics[width=0.4\linewidth]{figures/after_alignment_wide}\\
\includegraphics[width=0.4\linewidth]{figures/before_alignment_mag}
\includegraphics[width=0.4\linewidth]{figures/after_alignment_mag}\\
\caption{The distribution of differences between measured track position $x_{\text{AFP}}$ and position expected based on dilepton system kinematics $x_{\mu\mu}$, compared with the background model based on event-mixing. The figures on the right show the same as those on the left after correcting the $x$ coordinates of all events by the alignment constant (taken from Ref. \cite{afp_align}).}
\label{fig:alignment}
\end{figure}
\paragraph{AFP track reconstruction efficiency} is calculated using a so-called `tag-and-probe' method: a track reconstructed in one station of an arm (\textit{tag}) is used to test whether a track is also reconstructed in the other station of the same arm (\textit{probe}), and the efficiency is the fraction of tagged events in which the probe track is found. Track reconstruction efficiencies measured during the 2017 data-taking campaigns are shown in Figure \ref{fig:eff}.
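The tag-and-probe ratio can be written down in a few lines; the sketch below is illustrative only (the event counts are made up) and attaches the usual binomial uncertainty to the ratio.

```python
import math

def tap_efficiency(n_tag, n_both):
    """Tag-and-probe efficiency: fraction of tagged events in which the
    probe station also reconstructs a track, with binomial uncertainty."""
    eff = n_both / n_tag
    err = math.sqrt(eff * (1.0 - eff) / n_tag)
    return eff, err

# Hypothetical counts for one FAR station:
eff, err = tap_efficiency(n_tag=50000, n_both=48400)
print(f"efficiency = {eff:.3f} +/- {err:.3f}")
```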
\begin{figure}[h]
\includegraphics[width=1\linewidth]{figures/highmu_efficiencies}
\caption{Track reconstruction efficiencies recorded in high-luminosity runs during 2017 data-taking (taken from Ref. \cite{afp_align}).}
\label{fig:eff}
\end{figure}
The NEAR stations record efficiencies above 98\% for all studied datasets. A possible effect contributing to the lower efficiencies of the FAR stations (95\% -- 98\%) is radiation degradation of the silicon tracker, as the FAR stations are inserted slightly closer ($<$1 mm) to the beam and are more exposed to the beam halo. However, the lower efficiencies observed in the FAR stations are also a natural consequence of the `tag-and-probe' method used in this analysis, as the downstream stations are additionally affected by showers created by interactions with detector material in the upstream station. The existence of showers is evident in the long non-Poissonian tail of the hit multiplicity per plane, which grows for each consecutive pixel layer. Additionally, each consecutive plane registers on average a higher number of hits (pixels recording a signal above threshold) and higher charge deposits, as expected in the presence of cascades created by interactions with the SiT and the Roman Pot floor. Both effects are illustrated in Figure \ref{fig:charge}.
\begin{figure}[h]
\centering
\includegraphics[height=0.35\linewidth]{figures/fig4a.pdf}
\includegraphics[height=0.35\linewidth]{figures/magn_charges_331020_afp_trigger_noisy.pdf}\\
\caption{Distribution of the number of hits per event (left) and of the charge recorded in each of the silicon planes of the C-NEAR station (right). Secondary interactions with detector material result in a higher number of hits and a higher recorded charge for consecutive SiT planes (taken from Ref. \cite{proc_lhcp}).}
\label{fig:charge}
\end{figure}
\FloatBarrier
\section{Status and plans for Run 3}
No major changes between Run 2 and Run 3 are planned in terms of detector technology, although numerous hardware and software updates were implemented.
The AFP tracker system was equipped with newly produced tracking modules and new heat exchangers were installed to improve cooling capabilities.
Due to repeated failures of the MCP-PMTs in vacuum, the AFP ToF detector was redesigned with the MCP-PMTs placed outside the detector's secondary vacuum. New R2D2-based MCP-PMT back-end electronics were developed, successfully tested and used in the construction of the new ToF device. Additionally, a set of new, glue-less LQBars was installed, as well as a picoTDC, single-channel pre-amplifiers, and modified trigger and pulser modules.
The updated systems were exposed to proton beams at DESY and the CERN SPS and successfully passed performance tests.
The design of the ALFA trackers also remained unchanged; minor hardware updates include improvements to the cooling system and an exchange of the readout electronics (due to radiation damage).
All subsystems of both AFP and ALFA spectrometers are installed in LHC tunnels and are fully prepared for data-taking campaigns starting in 2022.
Similarly, good progress is observed in the development of the software and simulation necessary for physics analyses of forward protons in ATLAS. Advances in studies of tracker performance and beam optics, as well as improvements in the precision of the detector alignment, allow delivering a high-accuracy proton physics object to ATLAS. The properties of forward protons reconstructed with AFP and ALFA are used in several analyses across different working groups in ATLAS. A dedicated task force (Proton Combined Performance group) leads efforts to improve the understanding of the proton object, including the assessment of tracker efficiency and its susceptibility to running conditions, leading to a possible reduction of systematic uncertainties. Additionally, ongoing work on the implementation of a full \textsc{Geant4} simulation will allow a better understanding of possible effects related to detector geometry, alignment and interactions with detector material. Progress in the areas listed above contributes to advances in the physics analyses, which are discussed in more detail in Section \ref{sec:analyses}.
\begin{table}[h]
{\renewcommand{\arraystretch}{1.4}
\small
\centering
\begin{tabular}{m{4cm}ll}
& \textbf{Run 2} & \textbf{Run 3 plans (requests)} \\
beam and optics & $\sqrt{s}=13$ TeV, ~~$\beta^*=0.3$ m, 0.4 m & $\sqrt{s}=13$ TeV, ~~$0.2 < \beta^* < 1.1$ m \\
AFP setup & one-arm (2016), two-arms (2017)& two-arms + TOF \\
Standard runs & $\langle\mu\rangle$$\approx$35, int. lumi. 46.9 fb$^{-1}$ & $\langle\mu\rangle$$<$60, $\mathcal{O}(500~\text{fb}^{-1})$ \\
Special runs at $\mu$$\approx$0 \newline (soft diffraction) & int. lumi.: $\approx$ 100 nb$^{-1}$ & $\mathcal{O}(100~\text{nb}^{-1})$ \\
Special runs at 0.3$\lesssim$$\mu$$\lesssim$1 \newline (low $p_\text{T}$ jets) & int. lumi.: $\approx$ 1.15 pb$^{-1}$ & $\mathcal{O}(1~\text{pb}^{-1})$ \\
Special runs at $\mu$$\approx$2 \newline (EW, hard diffr., SD $t\bar{t}$) & int. lumi.: $\approx$ 150 pb$^{-1}$ & $\mathcal{O}(100~\text{pb}^{-1})$ \\
\end{tabular}\\
}
\caption{Comparison of the most important properties of data taken by the AFP in Run 2 and the requests for Run 3 data taking.}
\label{tab:data_afp}
\end{table}
\begin{table}[h]
{\renewcommand{\arraystretch}{1.4}
\small
\centering
\begin{tabular}{m{3cm}lm{5cm}}
& \textbf{Run 2} & \textbf{Run 3 plans (requests)} \\
collision energy & $\sqrt{s}$=13 TeV & $\sqrt{s}\geq$ 13.5 TeV \\
beam conditions & $\beta^*$=90 m, 2.5 km & $\beta^*$= 3, 6 km and/or \newline $\beta^*_x$= 3 km, $\beta^*_y$= 6 km \\
& $\langle\mu\rangle$$\approx$35, $\langle\mu\rangle$$\approx$0 & only at $\langle\mu\rangle$$\approx$0 \\
\end{tabular}\\
}
\caption{Comparison of the most important properties of data taken by ALFA in Run 2 and possible plans for Run 3.}
\label{tab:data_alfa}
\end{table}
\section{Recent ATLAS physics results with forward proton tag}
\label{sec:analyses}
\subsection{First physics analysis with AFP proton tag}
The analysis of the rich data collected by the ATLAS Forward Proton detector is ongoing, and results on semi-exclusive dilepton production in association with forward proton scattering were recently published \cite{dileptons}, delivering cross-section measurements for the $p \,p \rightarrow p\,(\gamma\gamma\rightarrow l\bar{l}) \,p \,p^*$ processes. The modelling of photon fusion in proton-proton interactions is poorly constrained, particularly at high $\gamma\gamma$ invariant masses. A direct proton measurement allows for a strong background suppression by means of kinematic matching between the tagged forward protons and the leptons measured in the central detector. The fractional proton energy loss measured in the AFP, $\xi_{\text{AFP}}$, can be compared with the value $\xi_{l\bar{l}}$ derived from the dilepton-system kinematics as defined in Eq. (\ref{eq:xill}). With the criterion $|\xi_{\text{AFP}}-\xi_{l\bar{l}}|<0.005$, a total of 57 and 123 candidates were observed in the $ee+p$ and $\mu\mu+p$ final states, respectively. With a background rejection of $\approx$85\% and a signal acceptance of $\approx$95\%, this corresponds to a statistical significance of over 5$\sigma$ in each production channel, thus providing direct evidence of forward proton scattering in association with electron and muon pairs produced via photon fusion. Table \ref{tab:dilepton} summarizes the obtained cross sections for these processes in the detector fiducial region, compared with the relevant theoretical predictions. Figure \ref{fig:dilepton_xi} shows the distribution of the measured $\xi_{\text{AFP}}-\xi_{l\bar{l}}$ for both sides A and C, together with predictions for the various production channels.
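The effect of the kinematic-matching cut can be illustrated with a toy sketch (assumed resolution and flat toy distributions, not the analysis code): for signal, $\xi_{\text{AFP}}$ tracks $\xi_{l\bar{l}}$ up to resolution, while for pile-up protons the two quantities are uncorrelated, so the window $|\xi_{\text{AFP}}-\xi_{l\bar{l}}|<0.005$ retains most of the signal and removes most of the background.

```python
import random

random.seed(2)
CUT = 0.005  # matching window used in the analysis

def pass_cut(xi_afp, xi_ll):
    return abs(xi_afp - xi_ll) < CUT

# Toy signal: the AFP proton matches the dilepton system up to an
# assumed xi resolution of 0.002.
sig_kept = sum(
    pass_cut(xi + random.gauss(0.0, 0.002), xi)
    for xi in (random.uniform(0.02, 0.12) for _ in range(10000))
)
# Toy background: pile-up proton uncorrelated with the dilepton system.
bkg_kept = sum(
    pass_cut(random.uniform(0.02, 0.12), random.uniform(0.02, 0.12))
    for _ in range(10000)
)
print(f"signal kept: {sig_kept/10000:.2f}, background kept: {bkg_kept/10000:.2f}")
```

The exact acceptance and rejection depend on the assumed resolution and $\xi$ spectra; the published figures ($\approx$95\% and $\approx$85\%) come from the full analysis.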
\begin{table}[h]
\centering
\begin{tabular}{lrr}
$\sigma_{\text{\textsc{Herwig+Lpair}}} \times S_{\text{surv}}$ & $\sigma_{ee+p}^{\text{fid.}}$ (fb) & $\sigma_{\mu\mu +p}^{\text{fid.}}$ (fb) \\
\hline
$S_{\text{surv}}=1$ & 15.5 $\pm$ 1.2 & 13.5 $\pm$ 1.1 \\
$S_{\text{surv}}$ using {\footnotesize$^{\text{EPJC 76 (2016) 9}}_{\text{PLB 741 (2015) 66}}$} & 10.9 $\pm$ 0.8 & 9.4 $\pm$ 0.7 \\
\textsc{Superchic} & 12.2 $\pm$ 0.9 & 10.4 $\pm$ 0.7 \\
\textbf{Measurement} & \textbf{11.0 $\pm$ 2.9} & \textbf{7.2 $\pm$ 1.8} \\
\hline
\end{tabular}\\
\caption{Summary of model predictions for the cross-sections of semi-exclusive dilepton production compared with the AFP measurements.}
\label{tab:dilepton}
\end{table}
\begin{figure}[h]
{\centering
\includegraphics[width=0.7\linewidth]{figures/pp-ppll}\\
}
\caption{Distributions of $\xi_{\text{AFP}}-\xi_{l\bar{l}}$ with $\xi_{l\bar{l}}$ and $\xi_{\text{AFP}}$ both in the range [0.02, 0.12]. Black points with error bars show the data with their statistical uncertainties, and the coloured stacks represent model predictions for the various processes contributing to the measured signal ($p^*$ denotes a dissociated proton), with the hatched area indicating their combined uncertainty (plots from Ref. \cite{dileptons}).}
\label{fig:dilepton_xi}
\end{figure}
\subsection{Single diffraction results with ALFA proton tag}
In a recent publication \cite{alfa_sd}, a dedicated sample of low-luminosity (mean pile-up $\langle\mu\rangle$$<$$0.08$) proton-proton collision data at $\sqrt{s}$ = 8 TeV is used to study the dynamics of the inclusive single-diffractive dissociation process $p\, p \rightarrow X\,p$. Improving on previous related analyses, besides the measurements of charged particles from the dissociated system $X$ performed by the central ATLAS detector components, the ALFA forward spectrometer provides the reconstruction of the final-state intact protons.
The differential cross sections are measured as a function of the fractional proton
energy loss ($-4.0$ $<$ $\log_{10}$$\xi$ $<$ $-1.6$), the squared four-momentum transfer ($0.016$ $<$ $|t|$ $<$ $0.43~\text{GeV}^2$), and the size of the rapidity gap $\Delta\eta$.
The total cross section integrated across the fiducial range is shown in Table \ref{tab:xsec_alfa} with additional information on predictions of relevant theoretical models.
\begin{table}[h]
\centering
\begin{tabular}{lll}
Distribution & $\sigma_{\text{SD}}^{\text{fiducial}(\xi,t)}$ [mb] & $\sigma_{\text{SD}}^{t-\text{extrap}}$ [mb] \\ \hline
\textsc{Pythia8} A2 (Schuler-Sj\"ostrand) & 3.69 & 4.35\\
\textsc{Pythia8} A3 (Donnachie-Landshoff) & 2.52 & 2.98\\
\textsc{Herwig7} & 4.96 & 6.11\\
\textbf{Measurement} & \textbf{1.59 $\pm$ 0.13} & \textbf{1.88 $\pm$ 0.15} \\
\hline
\end{tabular}\\
\caption{Summary of model predictions for the single soft-diffractive cross-section compared with the ALFA measurement.}
\label{tab:xsec_alfa}
\end{table}
As shown in Figure \ref{fig:alfa_sd} the data are consistent with an exponential $t$ dependence, $\text{d}\sigma/\text{d}t \propto e^{Bt}$ with slope parameter $B$ = 7.65 $\pm$ 0.34 GeV$^{-2}$.
Interpreted in the framework of triple Regge phenomenology, the $\xi$ dependence leads to a Pomeron intercept of $\alpha(0) = 1.07 \pm 0.09$.
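The exponential behaviour can be extracted with a simple log-linear least-squares fit. The sketch below uses toy data generated with an assumed slope of the same size as the ALFA result and recovers $B$ from $\ln(\text{d}\sigma/\text{d}t) = \ln A + Bt$ (with $t<0$); it is an illustration, not the analysis fit.

```python
import math, random

random.seed(3)
B_TRUE = 7.65  # GeV^-2, slope of the order of the ALFA measurement

# Toy measurements of dsigma/dt across the fiducial |t| range, 2% scatter.
t_vals = [-(0.016 + i * (0.43 - 0.016) / 19) for i in range(20)]
y_vals = [10.0 * math.exp(B_TRUE * t) * (1.0 + random.gauss(0.0, 0.02))
          for t in t_vals]

# Linear regression of log(dsigma/dt) = log(A) + B * t.
n = len(t_vals)
logs = [math.log(y) for y in y_vals]
mt = sum(t_vals) / n
ml = sum(logs) / n
B_fit = (sum((t - mt) * (l - ml) for t, l in zip(t_vals, logs))
         / sum((t - mt) ** 2 for t in t_vals))
print(f"fitted slope B = {B_fit:.2f} GeV^-2")
```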
\begin{figure}[h]
{\centering
\includegraphics[width=0.5\linewidth]{figures/alfa_dsigdt}
\\}
\caption{The differential cross section as a function of $|t|$, with statistical and total uncertainties represented by the inner and outer error bars, respectively. The red line shows the fitted exponential function (plot taken from Ref. \cite{alfa_sd}).}
\label{fig:alfa_sd}
\end{figure}
\section{Perspectives for the AFP at the HL-LHC}
Predictions of forward proton scattering appear in a diverse range of topics, providing a strong motivation for the continuation of the experimental efforts.
Diffractive processes were identified as a potential mechanism of top-quark \cite{jl4} as well as exclusive Higgs-boson production \cite{jl5, jl6}.
Additionally, measurements of intact protons may provide a valuable input to the studies of two-photon processes in the context of SM electroweak interactions \cite{jl9, jl10}, but also in the high-mass sector beyond the SM \cite{jl11}.
Extending the potential for new-physics discoveries, the capability of direct proton tagging may be an important tool in searches for sleptons and dark matter \cite{jl13,jl14}, as well as axion-like particles \cite{jl12}.
An important input to the understanding of the QCD sector of the SM and beyond may also be provided by novel studies of exclusive diffractive events \cite{jl2,jl3}.
While rich physics opportunities open up with the capability of forward proton tagging, a number of challenges remain that keep the presence of the AFP detectors in ATLAS operation after Long Shutdown 3 under discussion. The HL-LHC will feature a novel beamline that includes crab cavities, collimators and magnets at new positions and with different settings. The beam-instrumentation properties planned for Run 4 place the Point 1 optics at a disadvantage compared to the conditions in Runs 2 and 3. The RP acceptance might be limited by the placement of beam-optics devices (collimators, cavities) but, more importantly, the beam-crossing orientation of $\phi=180^\circ$ renders the trajectories of diffractive protons closer to the beam, which in turn impairs the resolution of the energy measurement.
The second major challenge is the rejection of background under conditions of mean pile-up of $\langle\mu\rangle$ $\approx$ 200. A distinction of individual vertices would only be reliable with a sub-10 ps precision of the ToF detectors (silicon/LGAD/Cherenkov technology) and additional timing devices in the central ATLAS detector. Similarly, novel solutions will be required in the area of data acquisition and analysis due to the high event rates and consequently large data volumes. Parallel to the technological challenges, an important practical issue is the acquisition and sustainability of manpower and resources over a time period of more than 20 years.
\section{Summary}
The ATLAS Forward Proton spectrometers, ALFA and AFP, provide the capability of tagging forward protons and measuring their kinematics, thus delivering important data for the studies of diffractive physics.
Both detector systems recorded rich datasets under standard and special, low-luminosity conditions during LHC Run 2.
The analyses of the collected data are ongoing, and active efforts are directed towards improving the data quality, including the accuracy of the detector alignment and the estimation of the tracker efficiencies.
The first experimental results on dilepton production with an AFP tag were recently published, and many more measurements of diffractive and exclusive events from Run 2 data will come in the near future.
Similarly, the first results on single diffraction with a tag in ALFA were published in 2020, and more analyses of diffractive and elastic processes are ongoing.
Both the AFP and ALFA spectrometers underwent hardware improvements and, after successful tests with proton beams at DESY and the SPS, were installed in the LHC tunnel and are ready for further tests in preparation for the Run 3 data-taking campaigns. The collection of physics data is expected to begin in spring 2022, and over the course of the next four years the AFP is expected to collect an order of magnitude more data for studies of diffractive physics.
The continuation of the forward-physics programme in the HL-LHC era is currently under discussion within ATLAS. While a wide range of physics topics would benefit from forward proton tagging, a number of experimental challenges remain. The constraints on the preferred detector location and technology are being discussed, and the corresponding feasibility studies are performed in parallel. Were the AFP to take data in Run 4, an optimization of the beam optics would have to be considered in order to enhance the spectrometer acceptance.
\section*{Acknowledgements}
This work was partially supported by the Polish National Science Centre grant: 2019/34/E/ST2/00393.
\iffalse
\part[Odderon observation: explanations and answers to questions/objections regarding the PRL publication\\ \phantom{x}\hspace{4ex}\it{Kenneth \"Osterberg on behalf of the D0 and TOTEM collaborations}]{}
\section{Introduction}
The D0 and TOTEM collaborations have recently published the observation of the odderon~\cite{Odderon-discovery}. The observation is based on combining two evidences for the odderon in complementary $|t|$-ranges using complementary TOTEM data sets: (1) a comparison of the proton-proton ($pp$) and proton-antiproton ($p\bar{p}$) elastic differential cross sections ($d\sigma_{el}/dt$) in the $|t|$-range of the diffractive minimum (``dip'') and the secondary maximum (``bump'') of the $pp$ $d\sigma_{el}/dt$ at $\sqrt{s}$ = 1.96 TeV~\cite{Odderon-discovery} and (2) the total cross section ($\sigma_{tot}$) and $\rho$ measurements at \mbox{very low $|t|$ in $pp$ collisions at the LHC~\cite{TOTEM-rho-13TeV}.} The methods, assumptions and choices used in the analyses have raised questions that are answered in detail, with supplementary material provided, in this proceedings contribution. Furthermore, the objections raised against the odderon interpretation of the evidences are shown not to be valid.
The explanations and answers are organized as follows. First the comparison of the $pp$ and $p\bar{p}$ $d\sigma_{el}/dt$ is briefly presented, then explanations regarding the $pp$ and $p\bar{p}$ comparison are provided and questions and objections raised are answered. Next the odderon evidence from the TOTEM $\rho$ and $\sigma_{tot}$ measurements is introduced and afterwards replies to the questions and objections raised regarding the analysis and interpretation are given. Finally the combination of the odderon signatures is discussed and responses to issues raised concerning the combination are provided.
\section{The comparison of elastic $pp$ and $p\bar{p}$ cross sections}
Each $pp$ $d\sigma_{el}/dt$ measurement at TeV energy scale shows a characteristic dip, followed by a bump, as illustrated by Fig.~\ref{fig:dsigmadt} (left), whereas the $p\bar{p}$ $d\sigma_{el}/dt$ at TeV energy scale only exhibits a flat behaviour in the region of the expected positions of the dip and bump. This difference in the $pp$ and $p\bar{p}$ $d\sigma_{el}/dt$ would naturally occur for $t$-channel odderon exchange, since at the dip the dominant pomeron exchange is largely suppressed, and the odderon amplitude can play a significant role. Contrary to the pomeron amplitude, the odderon amplitude has a different sign for $pp$ and $p\bar{p}$.
To quantify the difference, eight characteristic points in the region of the dip and the bump, shown in Fig.~\ref{fig:dsigmadt} (right), of the TOTEM 2.76, 7, 8, and 13 TeV $pp$ $d\sigma_{el}/dt$ are extrapolated using a data-driven approach to obtain the 1.96 TeV $pp$ $d\sigma_{el}/dt$. The observed difference of 3.4$\sigma$ significance between the extrapolated $pp$ and the D0 $p\bar{p}$ $d\sigma_{el}/dt$ at 1.96 TeV in the region of the dip and the bump of the $pp$ $d\sigma_{el}/dt$, as shown by Fig.~\ref{fig:dsigmadt_comp}, is interpreted as evidence for odderon exchange. Note that the comparison is made in a common $t$-range (0.50 $\leq |t| \leq$ 0.96 GeV$^2$) of the $pp$ and $p\bar{p}$ $d\sigma_{el}/dt$.
\begin{figure}
\begin{minipage}{0.55\linewidth}
\centerline{\includegraphics[width=0.925\linewidth]{fig_1_no_D0_new}}
\end{minipage}
\hfill
\begin{minipage}{0.45\linewidth}
\centerline{\includegraphics[width=0.925\linewidth]{fig_3_new}}
\end{minipage}
\hfill
\caption[]{Left: The TOTEM $pp$ elastic cross sections at 2.76, 7, 8, and 13 TeV (full circles), and
the extrapolation to 1.96 TeV (empty circles). Right: Schematic definition of the characteristic points in the TOTEM differential cross-section data. $A$ represents the vertical bump-to-dip distance.}
\label{fig:dsigmadt}
\end{figure}
\subsection{Questions and objections raised regarding the analysis and the interpretation}
A first objection that has been raised is a possible model dependence introduced by the formulas, $|t| = a \log (\sqrt{s} {\rm [TeV]}) + b$ and $d\sigma /dt = c \sqrt{s} {\rm [TeV]} + d$, used to extrapolate the TOTEM measured $|t|$ and differential cross section ($d\sigma /dt$) values at 2.76, 7, 8 and 13 TeV to 1.96 TeV to obtain the characteristic points of the $pp$ $d\sigma /dt$ at 1.96 TeV, see Fig.~\ref{fig:dsigmadt_extra}. Firstly, it should be noted that the $\sqrt{s}$ range of the extrapolation from 2.76 TeV is small, only about 8 \%, compared to the $\sqrt{s}$ range over which the validity of the formulas is tested with the fits. Secondly, for each characteristic point, the closest measured point in terms of $d\sigma /dt$ is used as measured, and if two adjacent points have approximately equal $d\sigma /dt$, the two bins are merged, avoiding any model-dependent extrapolation between bins. Thirdly, having 3-4 data points limits the extrapolation formulas to ones with at most two parameters. Alternative functional forms with other $\log \sqrt{s}$ or $\sqrt{s}$ powers yield extrapolated values at 1.96 TeV well within the uncertainties of the extrapolated values given by the fits using the above $\sqrt{s}$ dependence for $|t|$ and $d\sigma /dt$. Fourthly, it is not obvious that the same functional form would give good fits for all characteristic points, both in $|t|$ and in $d\sigma /dt$ (the majority of $\chi^2$ values $\sim$1 per degree of freedom (d.o.f.)); this probably reflects some general energy-independent properties of elastic scattering, see e.g. Refs.~\cite{Martynov_Nicolescu, Durham}. So if there is any model dependence at all, it is largely contained in the quoted uncertainties, in particular due to the short extrapolation range and the generality of the functional form used for extrapolating the characteristic points. Note also that the shape and hierarchy of the extrapolated $pp$ $d\sigma /dt$ w.r.t. the measured $pp$ $d\sigma /dt$ is preserved, as shown by Fig.~\ref{fig:dsigmadt} (left), i.e. a constant bump-to-dip $d\sigma /dt$ ratio with energy, a decreasing $|t|$ of the diffractive cone, dip and bump position with energy, and decreasing values of the $d\sigma /dt$ in the dip-bump region with energy. Extrapolating the measured cross sections is more robust than fitting the $pp$ $d\sigma /dt$ at each energy and extrapolating the fit parameters, which tend to compensate each other and whose correlations might be different at different energies.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=0.60\linewidth]{fig_6_new}}
\caption{Comparison between the D0 $p\bar{p}$ measurement at 1.96 TeV and the extrapolated TOTEM $pp$ cross section (with its 1$\sigma$ uncertainty band), rescaled to match the D0 optical point. Note that the uncertainties at different $|t|$ values in the 1$\sigma$ uncertainty band are strongly correlated.}
\label{fig:dsigmadt_comp}
\end{center}
\end{figure}
A similar objection has been raised concerning a possible model dependence introduced by the formula, $\sigma_{tot} = b_1 \log^2 (\sqrt{s} {\rm [TeV]}) + b_2$, used to extrapolate the TOTEM measured total cross section ($\sigma_{tot}$) values at 2.76, 7, 8 and 13 TeV to 1.96 TeV as shown in Fig.~\ref{fig:sigmatot_extra}, obtaining $\sigma_{tot} (pp)$ = 82.7 $\pm$ 3.7 mb at 1.96 TeV. Here the argumentation is similar to the one for the $|t|$ and $d\sigma/dt$ values. Firstly, the $\sqrt{s}$ range of the extrapolation is small, only about 8 \%, compared to the $\sqrt{s}$ range over which the validity of the formula is tested with the fit. Secondly, having four data points limits the extrapolation formulas to ones with at most three parameters. Alternative functional forms such as $\log^2\sqrt{s} + \log\sqrt{s} + C$, $s + \sqrt{s} + C$ or $s^{1/4} + C$ gave extrapolated values at 1.96 TeV well within the quoted $\sigma_{tot} (pp)$ uncertainty. Thirdly, the fit to the TOTEM $\sigma_{tot}$ measurements gives a $\chi^2$ per d.o.f. smaller than 1. So, in conclusion, if there is any model dependence, it is well within the quoted uncertainty. Note that 1.96 TeV is in a boundary region for $\sigma_{tot}$, dominated by a $\log \sqrt{s}$ dependence at lower energies and a $\log^2 \sqrt{s}$ dependence at higher energies. Therefore the extrapolation of the TOTEM $\sigma_{tot}$ measurements is only valid for $\sqrt{s} \ge $ 1 TeV, which is sufficient for the purpose above.
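Since the two-parameter form is linear in $x=\log^2\sqrt{s}$, the extrapolation reduces to a straight-line fit. The sketch below uses approximate central values of the published TOTEM measurements (uncertainties omitted for brevity, so this is illustrative rather than a reproduction of the quoted fit) and lands close to the quoted 82.7 mb.

```python
import math

# Approximate TOTEM total cross sections (sqrt(s) in TeV, sigma_tot in mb).
data = [(2.76, 84.7), (7.0, 98.6), (8.0, 101.7), (13.0, 110.6)]

# sigma_tot = b1 * log^2(sqrt(s)) + b2 is linear in x = log^2(sqrt(s)).
xs = [math.log(rs) ** 2 for rs, _ in data]
ys = [sig for _, sig in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      / sum((x - mx) ** 2 for x in xs))
b2 = my - b1 * mx
sigma_196 = b1 * math.log(1.96) ** 2 + b2
print(f"extrapolated sigma_tot(1.96 TeV) = {sigma_196:.1f} mb")
```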
Also a somewhat similar objection has been raised concerning the interpolation of the characteristic points of the $pp$ $d\sigma_{el} /dt$ at 1.96 TeV to the $|t|$ values of the measured D0 $p\bar{p}$ $d\sigma_{el} /dt$ in the range 0.50 $\le |t| \le$ 0.96 GeV$^2$ using the double exponential:
\begin{eqnarray}
h (t) & = & a_1 e^{-a_2 |t|^2 - a_3 |t|} + a_4 e^{-a_5 |t|^3 -a_6 |t|^2 - a_7 |t|} \, \, ,
\label{eq:double_exp}
\end{eqnarray}
where the first exponential describes the diffractive cone (with a steeper slope towards the dip) and the second exponential the asymmetric bump structure and subsequent falloff. The fit to the characteristic points of the $pp$ $d\sigma_{el} /dt$ at 1.96 TeV using Eq.~\ref{eq:double_exp} gives a $\chi^2$ per d.o.f. smaller than 1. The same functional form describes well the measured $pp$ $d\sigma_{el} /dt$ in the dip and bump region at 2.76, 7, 8 and 13 TeV, as illustrated in Fig.~\ref{fig:dsigmadt} (left), with an $a_4$ term, i.e. a bump term, significantly different from zero. This gives confidence that Eq.~\ref{eq:double_exp} can be safely used for the interpolation, given that the functional form corresponds to a very distinct shape of the $d\sigma_{el} /dt$. The interpolation uncertainty is evaluated using a MC simulation where the cross section values of the eight uncorrelated characteristic points at 1.96 TeV are varied within their Gaussian uncertainties and new fits given by Eq.~\ref{eq:double_exp} are performed. This provides a $pp$ cross section value at each $|t|$ value that was checked to correspond to a Gaussian distribution with the quoted uncertainty. All of this suggests that the model dependence due to the interpolation must be well within the quoted uncertainty.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=0.85\linewidth]{fig_4}}
\caption{Characteristic points in (a) $|t|$ and (b) $d\sigma / dt$ from TOTEM measurements at 2.76, 7, 8 and 13 TeV (circles) as functions of $\sqrt{s}$, extrapolated to 1.96 TeV (stars). Filled symbols are from measured points; open symbols are from extrapolations or definitions of the characteristic points.}
\label{fig:dsigmadt_extra}
\end{center}
\end{figure}
Another objection is the assumption that the optical points (OP) (${d\sigma_{el}/dt}\left \vert_{t = 0} \right.$) of $pp$ and $p\bar{p}$ are equal. The basis is the Pomeranchuk theorem~\cite{Pomeranchuk}, stating that the ratio of the $pp$ and $p\bar{p}$ $\sigma_{tot}$ is 1 when $\sqrt{s}$ approaches infinity. Using the optical theorem, this leads to the ratio of the OPs of $pp$ and $p\bar{p}$ being 1 when $\sqrt{s}$ approaches infinity. This does not imply that they are necessarily equal; however, any possible difference between them must be due to the C-odd amplitude, which in the TeV range is due to the odderon, since secondary reggeons can safely be ignored due to the decrease of their amplitude with $\sqrt{s}$, whereas the odderon amplitude is expected to increase with $\sqrt{s}$~\cite{Reggeons}. Therefore the assumption of equal $pp$ and $p\bar{p}$ OP is valid as long as the maximal possible odderon effect on $\sigma_{tot}$, and hence on the OP, is included as a systematic uncertainty on the OP.
The assumption of equal $pp$ and $p\bar{p}$ OP can be tested by comparing the extrapolated ${d\sigma_{el}/dt}\left \vert_{t = 0} \right.$ = 357 $\pm$ 26 mb/GeV$^2$ at 1.96 TeV with the extrapolation of the D0 $d\sigma^{p\bar{p}}_{el}/dt$ measurement to $|t| = 0$, which gives ${d\sigma_{el}/dt}\left \vert_{t = 0} \right.$ = 341 $\pm$ 49 mb/GeV$^2$. As can be noted, they agree numerically well within the uncertainties; in fact, the $p\bar{p}$ OP and its uncertainty encompass the $pp$ OP and its uncertainty.
Since the $pp$ and $p\bar{p}$ OP measurements measure the same physics quantity under the assumption of equal $pp$ and $p\bar{p}$ OP, one can estimate a weighted average from them and conclude that the precision on the common OP is determined by the measurement with the better precision, i.e. the $pp$ OP. Therefore the uncertainty on the $p\bar{p}$ OP can be ignored, since the uncertainty of the combination of two independent measurements of the same quantity can never be larger than the smaller of the two uncertainties. This procedure remains valid even if the $pp$ and $p\bar{p}$ OP corresponded to two different physics quantities with a known difference, as long as the difference is included in the overall uncertainty. The maximal possible difference due to odderon exchange on the OP is estimated from the maximal odderon model to be 2.9 \% at 1.96 TeV, which is added in quadrature to the uncertainty of the experimental $pp$ OP to give an overall 7.4 \% relative uncertainty on the common OP. Effects on the OP from secondary reggeons and from differences between the $pp$ and $p\bar{p}$ $\rho$ values are negligible.
Also the ability to extrapolate the D0 $d\sigma^{p\bar{p}}_{el}/dt$ to the OP has been questioned, since the measurement only covers $|t|$-values down to 0.26 GeV$^2$, in particular since the $B$-slope measurement in $p\bar{p}$ at 0.546 TeV seems to indicate that the $B$-slope is 10-15 \% steeper at low $|t|$-values ($\lesssim$ 0.15 GeV$^2$) than at higher $|t|$-values~\cite{UA4_slope}. However, neither CDF~\cite{CDF_slope} nor E710~\cite{E710_slope1} observes any indication of a change of the $B$-slope of that size below $|t|$ = 0.25 GeV$^2$ at 1.8 TeV.
Even if the difference between the central values of the two E710 $B$-slope measurements~\cite{E710_slope1, E710_slope2} were interpreted as an actual $B$-slope variation as a function of $|t|$, the resulting change in the OP would be much smaller ($\sim$ 4 \%) than the luminosity uncertainty of 14.4 \% that dominates the D0 $p\bar{p}$ OP. Comparing TOTEM $\sigma_{tot}$ measurements at $\sqrt{s}$ = 8 and 13 TeV in $pp$ based on $B$-slopes extracted from data with and without acceptance in the Coulomb Nuclear Interference (CNI) region, the ones with CNI-region data give about 1 \% higher $\sigma_{tot}$ and thus about 2 \% higher OP (and a steeper $B$-slope). So there is no indication that the D0 $p\bar{p}$ OP cannot be trusted. Note that if the pattern from the 1.8 TeV $p\bar{p}$ and the 8 and 13 TeV $pp$ measurements at low $|t|$ were used to correct the D0 $p\bar{p}$ OP at $\sqrt{s}$ = 1.96 TeV for a possible bias due to the lack of such data, the central values of the D0 $p\bar{p}$ and TOTEM $pp$ OPs would be even closer.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=0.55\linewidth]{fig_5}}
\caption{The $\sigma_{tot}$ from TOTEM measurements at 2.76, 7, 8 and 13 TeV (circles) as a function of $\sqrt{s}$ extrapolated to the center-of-mass energy of the D0 measurement (star).}
\label{fig:sigmatot_extra}
\end{center}
\end{figure}
As a result of the interpolation from the characteristic points of the extrapolated $pp$ $d\sigma /dt$ to the $|t|$ values of the D0 $p\bar{p}$ $d\sigma /dt$, the $pp$ $d\sigma /dt$ values at the $|t|$ values of the $p\bar{p}$ $d\sigma /dt$ are strongly correlated, implying that the full covariance matrix of the $pp$ data points must be included in the $\chi^2$ for the comparison of the $pp$ and $p\bar{p}$ $d\sigma /dt$. The $\chi^2$ formula used is:
\begin{eqnarray}
\chi^2 & = & \sum_{i,j=1}^8 \left\{ \left( \frac{d\sigma^{pp, norm}_{el,i}}{dt} - \frac{d\sigma^{p\bar{p}}_{el,i}}{dt} \right) C^{-1}_{i,j} \left( \frac{d\sigma^{pp, norm}_{el,j}}{dt} - \frac{d\sigma^{p\bar{p}}_{el,j}}{dt} \right) \right\} + \frac{(A-A_0)^2}{\sigma_A^2} + \frac{(B-B_0)^2}{\sigma_B^2} \, \, ,
\label{eq:chi2}
\end{eqnarray}
where $C_{i,j}$ is the covariance matrix, $A$ and $B$ are the two constraints and $d\sigma^{pp, norm}_{el,i} /dt$ is the $pp$ $d\sigma_{el}/dt$ normalized to the $p\bar{p}$ integrated elastic cross section ($\sigma_{el}$) in the $|t|$ range of the comparison. The first constraint ($A$) is the normalization due to the matching of the $pp$ and $p\bar{p}$ OPs. The second constraint ($B$) is the matching of the $pp$ and $p\bar{p}$ $B$-slopes in the diffractive cone. The Pomeranchuk theorem and the optical theorem imply that the ratio of the $pp$ and $p\bar{p}$ total $\sigma_{el}$ should be 1 when $\sqrt{s}$ goes to infinity. From this, one can deduce that the ratio of the $pp$ and $p\bar{p}$ elastic $B$-slopes should also be 1 when $\sqrt{s}$ approaches infinity, since the $\sigma_{el}$ in the Coulomb region and in the region beyond the dip is negligible compared to the one in the diffractive cone, and the $d\sigma_{el}/dt$ in the diffractive cone is described by $e^{-B|t|}$~\cite{Cornille_Martin}. This does not imply that the slopes are exactly equal, but any difference between the $pp$ and $p\bar{p}$ elastic $B$-slopes at the TeV scale is due to the odderon. Since the pomeron dominates in the diffractive cone region at 1.96 TeV, the $B$-slopes of $pp$ and $p\bar{p}$ are expected to be equal. This is verified to be true within the experimental uncertainties for the D0 $p\bar{p}$ and the TOTEM $pp$ $B$-slopes.
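The structure of Eq.~\ref{eq:chi2} (a correlated residual term plus Gaussian penalty terms for the two constraints) can be sketched in a few lines. The numbers below are purely illustrative, not the D0/TOTEM values, and a 2-point case stands in for the 8-point one:

```python
def chi2_correlated(d_pp, d_ppbar, cov, constraints):
    """Sketch of Eq. (chi2): r^T C^{-1} r over the data points plus
    (x - x0)^2 / sigma^2 for each fully correlated constraint (A, B).
    The 2x2 covariance is inverted explicitly for the illustration."""
    r = [a - b for a, b in zip(d_pp, d_ppbar)]
    (c00, c01), (c10, c11) = cov
    det = c00 * c11 - c01 * c10
    inv = [[c11 / det, -c01 / det], [-c10 / det, c00 / det]]
    chi2 = sum(r[i] * inv[i][j] * r[j] for i in range(2) for j in range(2))
    for x, x0, sigma in constraints:  # e.g. (A, A0, sigma_A), (B, B0, sigma_B)
        chi2 += (x - x0) ** 2 / sigma ** 2
    return chi2

# toy example: two uncorrelated points plus one 1-sigma normalisation pull
val = chi2_correlated([1.0, 2.0], [0.0, 0.0],
                      [[1.0, 0.0], [0.0, 4.0]],
                      [(1.0, 0.0, 1.0)])
# val = 1 + 1 + 1 = 3.0
```

An off-diagonal covariance redistributes the weight between the points, which is why dropping the correlations would misestimate the significance.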
Therefore Eq.~\ref{eq:chi2} expresses the complete $\chi^2$, including the covariance matrix and the terms for the fully correlated uncertainties, and thus also the exact number of d.o.f. For six d.o.f., Eq.~\ref{eq:chi2} gives a significance of 3.4$\sigma$ for the difference between the TOTEM $pp$ and the D0 $p\bar{p}$ $d\sigma_{el}/dt$ at 1.96 TeV using the eight points in the region of the dip and the bump. The $\chi^2$, and therefore the significance, is largely dominated by the first term of Eq.~\ref{eq:chi2}, related to the shape of the $d\sigma_{el}/dt$. The obtained significance is confirmed by a Kolmogorov-Smirnov test of the difference between the $pp$ and $p\bar{p}$ $d\sigma_{el}/dt$ in the same $|t|$ range, where the correlations of the data points are included using Cholesky decomposition~\cite{Cholesky} and the normalisation difference via Stouffer's method~\cite{Stouffer}.
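The Cholesky step in the Kolmogorov-Smirnov cross-check can be illustrated as follows (a 2-point sketch with invented numbers): given residuals $r$ with covariance $C = LL^T$, solving $Lz = r$ yields unit-variance, independent pulls that can then be compared to a standard normal.

```python
import math

def cholesky_2x2(c):
    """Lower-triangular Cholesky factor L of a 2x2 covariance, C = L L^T."""
    l00 = math.sqrt(c[0][0])
    l10 = c[1][0] / l00
    l11 = math.sqrt(c[1][1] - l10 * l10)
    return [[l00, 0.0], [l10, l11]]

def decorrelate(residuals, cov):
    """Forward-substitute L z = r; the pulls z are uncorrelated with unit
    variance, suitable input for a Kolmogorov-Smirnov comparison."""
    L = cholesky_2x2(cov)
    z0 = residuals[0] / L[0][0]
    z1 = (residuals[1] - L[1][0] * z0) / L[1][1]
    return [z0, z1]

# toy example: correlated residuals reduced to independent pulls
pulls = decorrelate([2.0, 3.0], [[4.0, 2.0], [2.0, 5.0]])
# pulls = [1.0, 1.0]
```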
\section{The TOTEM $\rho$ and $\sigma_{tot}$ measurements}
The second evidence of odderon exchange in elastic scattering comes from the measurements of $\rho$, the ratio of the real and imaginary parts of the elastic hadronic amplitude at $t$ = 0, and of $\sigma_{tot}$ in $pp$ collisions at the LHC~\cite{TOTEM-rho-13TeV}. Models~\cite{Compete, Durham, Block_Halzen} are unable to describe both the TOTEM $\sigma_{tot}$ and $\rho$ measurements without including odderon exchange. The disagreement between the measurements and the models is between 3.4$\sigma$ and 4.6$\sigma$ depending on the model. The comparison between the predictions of the COMPETE models~\cite{Compete} and the TOTEM $\sigma_{tot}$ and $\rho$ measurements is shown in Fig.~\ref{fig:compete_sigmatot_rho}. Note that the COMPETE~\cite{Compete} and Block-Halzen~\cite{Block_Halzen} models include secondary-Reggeon-like C-odd terms proportional to $\sim 1/\sqrt{s}$ to describe the difference of $pp$ and $p\bar{p}$ scattering below 0.1 TeV; these should not be confused with odderon-like C-odd terms, which are expected to increase with $\sqrt{s}$.
When comparing different $\rho$ measurements, it is important to make sure that the prescriptions (functional form for the hadronic amplitude and the phase, CNI formula and $|t|$-range) used in the extraction are as similar as possible; otherwise the extractions do not necessarily yield the same physics quantity. This is especially true in the comparison with previous $\rho$ measurements. The $\sqrt{s}$ trend in the TeV range predicted by odderon exchange~\cite{Martynov_Nicolescu, Durham13TeV} is observed for the most precise $\rho$ measurements for $pp$ and $p\bar{p}$ in the TeV range, when extracted using the same prescription: $\rho$ = 0.135 $\pm$ 0.015 at 0.546 TeV in $p\bar{p}$~\cite{UA4_rho} and $\rho$ = 0.09 $\pm$ 0.01 at 13 TeV in $pp$~\cite{TOTEM-rho-13TeV}. Note also that several groups, including A. Donnachie and P.V. Landshoff~\cite{Donnachie_Landshoff} and J.R. Cudell and O.V. Selyugin~\cite{Cudell_Selyugin}, have obtained compatible $\rho$ values (in the range 0.08-0.10) when taking the TOTEM 13 TeV CNI data as given and using a prescription similar to that of TOTEM~\cite{TOTEM-rho-13TeV}, contrary to the results they quote when they misinterpret, or allow themselves the freedom to shift, the TOTEM data and related uncertainties.
\subsection{Questions and objections raised concerning the analysis and the interpretation}
The authors of the PDG review of High Energy Soft QCD and Diffraction~\cite{PDG} claim that, analyzing the whole ensemble of TeV-range elastic $pp$ and $p\bar{p}$ low-$|t|$ data including the TOTEM measurements at the LHC, a reasonable description can be obtained using a C-even amplitude (pomeron) only, that is, without an odderon, in contradiction with the conclusion by TOTEM. This statement does not hold once one starts to examine the exact predictions. For example, the model of the authors~\cite{Durham} fails to describe both the TOTEM $\rho$ and $\sigma_{tot}$ measurements in $pp$ at 7, 8 and 13 TeV ($\sim$ 3.4$\sigma$ difference) and especially the elastic $d\sigma/dt$ in $p\bar{p}$ in the dip and bump region at 1.96 TeV ($\sim$ 4.3$\sigma$ difference). A good description of the LHC $pp$ data without the odderon leads inevitably to a significantly worse description of the Tevatron $p\bar{p}$ data and vice versa.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=0.735\linewidth]{compete_bands_si_tot_rho}}
\caption{Predictions of the $pp$ total cross section ($\sigma_{tot}$) and $\rho$ parameter as a function of $\sqrt{s}$ for each COMPETE~\cite{Compete} model (see legend for model) with the TOTEM measurements marked in red.}
\label{fig:compete_sigmatot_rho}
\end{center}
\end{figure}
In the PDG review, this statement is backed up by a similar attempt by Donnachie-Landshoff~\cite{Donnachie_Landshoff}, who claim to describe the elastic $d\sigma/dt$ data at small $|t|$ from 13.76 GeV to 13 TeV without the odderon. Donnachie-Landshoff obtain $\rho$ = 0.14 in $pp$ at $\sqrt{s}$ = 13 TeV when using the TOTEM 8 TeV CNI data~\cite{TOTEM-rho-8TeV} in addition to the TOTEM 13 TeV CNI data~\cite{TOTEM-rho-13TeV}, whereas when using only the TOTEM 13 TeV CNI data they obtain $\rho$ = 0.10. This is not possible if experimental uncertainties are treated correctly, since the TOTEM 13 TeV CNI data are about a factor of three more precise than the TOTEM 8 TeV CNI data when the normalisation uncertainty is not taken into account. It is likely that the normalisation uncertainty, which is common to all data points, has not been treated correctly as a separate term ($A$ in the $\chi^2$ of Eq.~\ref{eq:chi2}) in the Donnachie-Landshoff fits. Otherwise one cannot explain the large weight the TOTEM 8 TeV CNI data obtain in those fits. The normalisation uncertainty is the dominating uncertainty in the TOTEM CNI data except at the smallest $|t|$ values, and it is smaller in the TOTEM 8 TeV CNI data than in the TOTEM 13 TeV CNI data, 4.2 \% compared to 5.5 \%. Note also that in Ref.~\cite{Donnachie_Landshoff} a trivial sum of Coulomb and nuclear elastic amplitudes is used, completely ignoring CNI effects on the amplitude, leading to relative deviations in the total elastic amplitude of several percent, see Fig.~\ref{fig:Kaspar_diff}.
The PDG review also states that the model RR(PL2)$^{qc}$ of COMPETE (dashed green line in Fig.~\ref{fig:compete_sigmatot_rho}) is consistent with the TOTEM 13 TeV $\rho$ and $\sigma_{tot}$ within 1$\sigma$~\cite{Cudell_Selyugin}, in contradiction with the statement that all COMPETE models are incompatible with the TOTEM $\rho$ and $\sigma_{tot}$ measurements. This agreement with the RR(PL2)$^{qc}$ model is obtained by modifying the normalization of the TOTEM 13 TeV elastic $d\sigma/dt$ by $\sim$2$\sigma$ (when including the Coulomb normalization that was not taken into account in Ref.~\cite{Cudell_Selyugin}). Since the normalization of the TOTEM 13 TeV CNI data is obtained from two completely independent data sets and methods (optical theorem and Coulomb amplitude) that agree very well, it is unlikely to be off by $\sim$2$\sigma$. The standard approach in physics is not to modify the data but to adjust the model to describe the data, not vice versa. Without modifying the normalization of the TOTEM 13 TeV CNI data, the original version of the RR(PL2)$^{qc}$ model~\cite{Compete} fails to describe the $\sigma_{tot}$ in $pp$ at $\sqrt{s}$ = 2.76, 7, 8 and 13 TeV ($\sim$5.4$\sigma$ difference).
Regarding the determination of $\rho$, it is important to stress that most of the sensitivity to $\rho$ is contained in only a few data bins in the CNI region, between those at very low $|t|$ with a significantly larger Coulomb than CNI contribution and the large majority of data bins at higher $|t|$, where the hadronic amplitude dominates. Experience from TOTEM has shown that the fits should be done in several steps in separate $|t|$ ranges, first fixing the other parameters (hadronic amplitude and Coulomb normalisation) before fitting $\rho$, to avoid any bias in the $\rho$ determination from data bins with little or no sensitivity to $\rho$, see e.g. section 6.3 in Ref.~\cite{TOTEM-rho-13TeV}. In Refs.~\cite{Donnachie_Landshoff, Cudell_Selyugin} it is not stated whether the fits have been performed in several steps to avoid such a bias or in a single step.
\begin{figure}
\begin{minipage}{0.50\linewidth}
\centerline{\includegraphics[width=0.885\linewidth]{Kaspar_Pol_plot_central}}
\end{minipage}
\hfill
\begin{minipage}{0.50\linewidth}
\centerline{\includegraphics[width=0.885\linewidth]{Kaspar_Pol_plot_peripheral}}
\end{minipage}
\hfill
\caption[]{The relative difference of the differential elastic cross section in the CNI region between various CNI formulae (see text) and the numerical calculation of the Coulomb and nuclear eikonals to all orders of $\alpha$ (denoted "ref")~\cite{Kaspar} for central (left) and peripheral (right) nuclear amplitudes. The labelling reflects the impact-parameter behaviour: central nuclear amplitudes yield profile functions peaking at smaller impact-parameter values than peripheral amplitudes.}
\label{fig:Kaspar_diff}
\end{figure}
In addition, the CNI formulae of Cahn~\cite{Cahn} and Kundr{\' a}t-Locaji{\v c}ek (KL)~\cite{KL} used for the $\rho$ determination at 13 TeV have been claimed to contain flaws, including an inexact approximation of the Coulomb amplitude and a too early truncation of the power series in the electromagnetic coupling $\alpha$~\cite{Petrov}. A numerical calculation of the Coulomb and nuclear eikonals to all orders of $\alpha$~\cite{Kaspar} verified that the CNI formulae of Cahn and KL reproduce the numerical estimate for the phase and the $d\sigma/dt$ at a precision significantly below the current experimental one, as shown in Fig.~\ref{fig:Kaspar_diff}. Hence any approximations made by Cahn and KL do not have any detrimental effect on the $\rho$ determination. In contrast, the CNI formula of Ref.~\cite{Petrov} and the sum of Coulomb and nuclear amplitudes~\cite{Godizov} were found to deviate from the numerical estimate by several percent. The SWY formula~\cite{SWY} reproduces the numerical estimate for central nuclear amplitudes but not for peripheral ones, see Fig.~\ref{fig:Kaspar_diff}. Also the effect of not including excited proton states in the eikonal has been estimated to be negligible compared to the current experimental precision~\cite{Bethe}. In conclusion, the formulae used for the 13 TeV $\rho$ determination provide more than adequate models for the CNI effects.
\section{The combination of the $pp$ and $p\bar{p}$ comparison and $\rho$ and $\sigma_{tot}$ measurements}
The significances of the measurements are combined using Stouffer's method~\cite{Stouffer} in order of sensitivity, starting from the $pp$ and $p\bar{p}$ comparison, adding the 13 TeV $\rho$ measurement and then, if needed, the $\sigma_{tot}$ measurements, using the freedom provided by Stouffer's method to use only a subset of the significances (e.g. $\rho$ and the $pp$ and $p\bar{p}$ comparison) for testing the exclusion of a model. The $\chi^2$ for
the $\sigma_{tot}$ measurements at 2.76, 7, 8 and 13 TeV is computed with respect to the model predictions without odderon exchange~\cite{Compete, Durham, Block_Halzen}, including also model uncertainties when specified. The same was done separately for the 13 TeV $\rho$ measurement. Unlike the COMPETE~\cite{Compete} and Block-Halzen~\cite{Block_Halzen} models, the Durham model~\cite{Durham} provides the predicted $d\sigma_{el}/dt$ without the odderon-exchange contribution. Therefore a direct comparison of the predicted Durham $d\sigma_{el}/dt$ at 1.96 TeV with the D0 $p\bar{p}$ $d\sigma_{el}/dt$, which gives a significance of 4.3$\sigma$, is used for the combined significance instead of the $pp$ and $p\bar{p}$ comparison. The 1.96 TeV $d\sigma_{el}/dt$ of the model is chosen since it is the most sensitive to odderon exchange after the model has been tuned to the LHC elastic $pp$ data.
The 13 TeV $\rho$ measurement provides a 4.6 and a 3.9$\sigma$ significance for the COMPETE "blue band" (see Fig.~\ref{fig:compete_sigmatot_rho}) and the Block-Halzen~\cite{Block_Halzen} models, respectively. The comparison of the $\rho$ and $\sigma_{tot}$ measurements with the predictions of the Durham~\cite{Durham}, the COMPETE "magenta band" and the COMPETE "green band" (see Fig.~\ref{fig:compete_sigmatot_rho}) models gives significances of 3.4, 4.0 and 4.6$\sigma$, respectively. Combining them with the significance of the $pp$ and $p\bar{p}$ comparison (or, for Durham, the one with D0) gives combined significances ranging
from 5.2 to 5.7$\sigma$ for odderon exchange for all examined models~\cite{Compete, Durham, Block_Halzen}.
\subsection{Questions and objections raised about the combination}
Stouffer's method~\cite{Stouffer} combines significances according to $z_{\rm comb} = \sum^{k}_{i=1} z_i/\sqrt{k}$, where the $z_i$ are the individual significances and $k$ is the number of significances to be combined. The method is valid for independent measurements whose significances obey the normal distribution. This is true for the odderon significances obtained from the $pp$ and $p\bar{p}$ comparison in the dip-bump region and the $pp$ $\rho$ and $\sigma_{tot}$ measurements at very low $|t|$, since they are based on results from completely separate $|t|$ regions and TOTEM data sets. When the 13 TeV $\rho$ and $\sigma_{tot}$ measurements are both used for the combined significance, values determined from independent TOTEM data sets are used.
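As a minimal numerical illustration of the formula (combining two of the significances quoted above; this is not the exact combination performed in the papers):

```python
import math

def stouffer(z_values):
    """Stouffer combination of independent normal significances:
    z_comb = sum(z_i) / sqrt(k)."""
    return sum(z_values) / math.sqrt(len(z_values))

# e.g. the 3.4 sigma pp/ppbar comparison combined with a 4.6 sigma
# rho-based significance
z_comb = stouffer([3.4, 4.6])
# z_comb ~ 5.66, within the 5.2-5.7 sigma range quoted in the text
```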
It has also been questioned whether the $pp$ and $p\bar{p}$ comparison and the $\rho$ and $\sigma_{tot}$ measurements can be combined, since the former is a data-to-data comparison and the latter a data-to-model comparison. However, since the only way to produce a significant difference between the $pp$ and $p\bar{p}$ $d\sigma_{el}/dt$ at the TeV energy scale is through odderon exchange, a model without odderon exchange would produce a $p\bar{p}$ $d\sigma_{el}/dt$ at 1.96 TeV similar to the extrapolated $pp$ $d\sigma_{el}/dt$ if the model still has to describe the $pp$ $d\sigma_{el}/dt$'s measured at the LHC. This is illustrated by the Durham model without odderon contribution, which fails to describe the D0 $p\bar{p}$ $d\sigma_{el}/dt$ at 1.96 TeV (at a 4.3$\sigma$ significance). Also the failure of the models to describe simultaneously both the $\rho$ and $\sigma_{tot}$ measurements in $pp$ points to a difference in elastic $pp$ and $p\bar{p}$ scattering, and therefore quantitatively assesses the same thing as the $pp$ and $p\bar{p}$ comparison: the existence of odderon exchange in elastic scattering.
\section{Conclusions}
Issues and objections raised regarding the D0-TOTEM comparison of the elastic $d\sigma/dt$ of $pp$ and $p\bar{p}$, the TOTEM $\rho$ and $\sigma_{tot}$ measurements in $pp$, as well as their combination and odderon interpretation, have been adequately addressed. Both provide evidence of odderon exchange in elastic scattering and their combination constitutes the first experimental observation of the odderon, acknowledged as convincing evidence of the existence of the odderon after a quest of almost 50 years~\cite{Leader}.
\iffalse
\part[Odderon: Lost or/and Found?\\ \phantom{x}\hspace{4ex}\it{Vladimir Petrov and Nikolay Tkachenko}]{}
\section{Introduction}
In the long and rich history of studies of high-energy hadron interactions, two kinds of strong ("nuclear") forces constantly feature: C-even and C-odd ones.
C-even forces, represented by the Pomeron and the f-Reggeon, bear a universal character, i.e.
they are like the forces of gravity, acting as a universal attraction of all hadrons to each other. On the contrary, C-odd forces, which in high-energy physics are associated with the $ \rho $-, $ \omega $-, etc.\ Reggeons, resemble electromagnetic interactions, where charges of the same sign (particle-particle) repel and charges of opposite sign (particle-antiparticle) attract each other.
Until the beginning of the 70s, the belief reigned that with increasing collision energy the main C-even agent, the Pomeron, plays an increasingly dominant role, while the C-odd forces become less and less significant ("die out") and can finally be neglected.
Such a paradigm was challenged in the works of B. Nicolescu et al.~\cite{luk}, in which a new notion was introduced, later dubbed the "Odderon", which, while subleading (w.r.t.\ the Pomeron) in the imaginary part of the scattering amplitude, becomes the leading contribution in the real part.
It is worth noticing that the very term "Odderon" looks akin to the name of some new Reggeon, but according to \cite{luk} it was but a specific contribution to the C-odd part of the scattering amplitude. No Reggeon (or particle) was associated with this "Odderon"\footnote{Later the Odderon option as formulated in \cite{luk} was dubbed (after serious modifications) the "Maximal Odderon", in contrast to other Odderon incarnations (e.g. as a C-odd Reggeon). As we will not be concerned with these models, we will use the term "Odderon" in the sense of "Maximal Odderon" as well.}.
Below we will try to trace the almost half-a-century history of the Odderon concept and the efforts to find its experimental manifestations.
\section{Odderon: Predictions and Nature}
Which observables are potentially suitable for identifying the possible existence of the Odderon? Below we briefly describe a few options used.
\subsection{Difference of the proton-proton and antiproton-proton total cross-sections}
\begin{wrapfigure}[36]{l}{110mm}
\vspace{-6.1mm}
\includegraphics[width=110mm]{Fig1.jpg}
\vspace{-7.6mm}
\caption{The energy evolution of the difference $ \Delta\sigma = \sigma_{tot}^{\bar{p} p} - \sigma_{tot}^{pp}$ at $ \sqrt{s} \leqslant 60\: GeV $.}
\label{p1}
\end{wrapfigure}
\begin{wrapfigure}[24]{l}{110mm}
\includegraphics[width=110mm]{Fig2.jpg}
\caption{The energy evolution of the cross-sections in $ \bar{p} p $ and $ pp $ collisions.}
\label{p2}
\end{wrapfigure}
The simplest one is the difference between the total cross-sections of $ \bar{p} p $ and $ p p $ interactions,
\begin{center}
$ \Delta\sigma = \sigma_{tot}^{\bar{p} p} - \sigma_{tot}^{pp},$
\end{center}
because it is exactly a C-odd quantity.
The first paper in Ref.\cite{luk} predicted that "at high energies ($ \sqrt{s} $)"
\begin{center}
$ \mid \Delta\sigma \mid \sim \ln s $
\end{center}
i.e. grows indefinitely with energy.
Pre-ISR data showed that $ \Delta\sigma $ is positive and decreases with energy growth.
The ISR data ($ \sqrt{s}= 20\div 63\: GeV $) confirmed this trend and gave the last opportunity to compare $ \sigma_{tot}^{\bar{p} p} $ and $ \sigma_{tot}^{pp} $ at the same energy. The minimum value of the difference as measured at the ISR \cite{amb} was
\begin{center}
$ \Delta\sigma (52.8\mbox{ GeV}) = 1.49\pm 0.35\mbox{ mb}$
\end{center}
Fig.1 \cite{Ow} shows the early results for $\Delta\sigma$ at laboratory energies
$$E_{\mbox{\small{lab}}}\approx p_{\mbox{\small{lab}}}c \leqslant 2000\mbox{ GeV}$$
(cms energy up to $ 60\mbox{ GeV}$).
However, in the aforementioned first article on the Odderon it was argued that $\Delta\sigma $ should fall until
$\sqrt{s} \approx 24$ GeV,
where it should vanish, and then (after turning negative) begin to grow indefinitely in absolute value, reaching $ -\:\mathcal{O}(10\mbox{ mb}) $ at $ p_{\mbox{\small{lab}}}c = 10^{4}\mbox{ GeV}$ ($\sqrt{s} \approx
150\mbox{ GeV}$). This would have been clear evidence in favour of the Odderon as formulated in \cite{luk}, but the ISR measurements, as we have seen, ruled out such an option.
Meanwhile $\Delta\sigma $ agreed well with the prediction of the Regge pole scheme
\begin{center}
$ \Delta\sigma \sim s^{\alpha_{-}(0)-1} $
\end{center}
where $ \alpha_{-}(0) $ is the intercept of the "secondary" Reggeons ($ \rho, \omega $, etc). As generically $ \alpha_{-}(0)\approx 0.5 $, Fig.1 seemed to confirm the asymptotic disappearance of $ \Delta\sigma $.
This, however, did not discourage the Odderon proponents, who argued that the crossover of $ \sigma_{\mbox{tot}}^{\bar{p} p}$ and
$\sigma_{\mbox{tot}}^{pp}$ had a chance to show up at higher energies.
\begin{wrapfigure}[18]{l}{110mm}
\vspace{-0.1mm}
\includegraphics[width=110mm]{Fig3.jpg}
\vspace{-4.6mm}
\caption{The energy evolution of $ \Delta\sigma $ as given by the COMPETE parametrization.}
\label{p3}
\end{wrapfigure}
Postponing the $ Sp\bar{p}S $ results for a bit later, let us come to the highest energies achieved so far, i.e. 2 TeV for $ \bar{p} p $ and 13 TeV for $ pp $. A straightforward comparison between the two channels is still impossible because of the absence of the relevant data at the same energy, so we take the COMPETE parametrization \cite{ed}, which describes the data on $ \sigma_{tot}^{\bar{p}p} $ and $ \sigma_{tot}^{pp} $ very well. This is pictured in Fig.2 \cite{Ant2}.
\hfill \break
COMPETE predicts a steady decrease of $ \Delta\sigma $, as seen in Fig.3.
So it seems that the difference in total cross-sections is not the best place to look for manifestations of the Odderon\footnote{The enthusiasts of the "Maximal Odderon" still insist that the cross-over will occur, though such claims no longer look very convincing}. However, this could only mean that the Odderon does not couple significantly to the imaginary part of the \textit{forward} scattering amplitude, while a noticeable coupling to the real part of the forward scattering amplitude is quite conceivable.
\subsection{Early sounding of the Odderon via ReF(s,0)/ImF(s,0).}
So, in addition to the difference between the cross-sections, the quantity
\begin{center}
$ \rho = \mbox{Re}F(s,0)/\mbox{Im}F(s,0).$
\end{center}
($F(s,t)$ stands for the elastic scattering amplitude) seemed to be a suitable observable, quite accessible at the $ Sp\bar{p}S $ collider. Although there were no corresponding $ pp $ data, the Odderon contribution could manifest itself in $ \rho^{\bar{p}p} $ through the dispersion relations, as a sort of "echo" from the u-channel.
During the functioning of the $ Sp\bar{p}S $, two dedicated measurements were made in the UA4 and UA4/2 experiments.
The result obtained in UA4 \cite{ua4/1} became a sensation: instead of the expected value of about $ 0.10\div 0.15 $, it turned out that
\begin{center}
$\rho^{\bar{p}p}~(\mbox{UA4}) = 0.24 \pm 0.04 !$
\end{center}
The number caused a flow of publications, often containing the most fantastic scenarios, but the enthusiasts of the "Maximal Odderon" felt themselves to be the main beneficiaries~\cite{nic2}. It was the notorious "maximality" that seemed to be the reason for such a large value of $ \rho $.
Six years passed in discussions, conferences, talks and articles till a new sensation
broke out. A "remeasurement" undertaken by the UA4 collaboration (under the nickname UA4/2) produced the following result \cite{ua4/2}:
\begin{center}
$ \rho^{\bar{p}p}~ (\mbox{UA4}/2) = 0.135 \pm 0.015 .$
\end{center}
Concerning the 1987 result it was said:
"The previous result
$ \rho= 0.24 \pm 0.04 $ obtained with a poor beam optics, a factor eleven less statistics and much less control of systematic effects should be considered as superseded."\cite{ua4/2}.
So the "Maximal Odderon" was again not lucky: after several years of triumph, disappointment came.
But ahead there was a new take-off, although one had to wait a long time, almost a quarter of a century.
\subsection{A resurrection of the "Maximal Odderon" or...?}
In December 2017, one of us (V.P.) had a long discussion with S. Giani and J. Ka\v{s}par about their just obtained result on extracting the value of $ \rho $ from the data of the TOTEM collaboration on the differential cross section for elastic proton-proton scattering in the region of Coulomb-nuclear interference at 13 TeV.
The point was that this value ($ \rho = 0.1 $) essentially coincided with the value of $ \rho $ obtained earlier in the theoretical
article by B. Nicolescu and E. Martynov, in which an attempt was made to describe a large amount of data in the framework of a highly modified version of the "Maximal Odderon" model.
On this basis, the conclusion was made: a new particle had been discovered, the "Odderon", consisting of 3 gluons!
The news attracted attention of the public media. For instance, a few months later an article appeared in "The Newsweek" under the title:
"What's an Odderon and Did CERN Just Revealed it Exists?" The more professional CERN Courier placed in March 2018 an article " Oddball Antics in pp Collisions" and finally in Match 2021 "Odderon Discovered" with a statement:
"The TOTEM collaboration at the LHC, in collaboration with the DØ collaboration at the former Tevatron collider at Fermilab, have announced the discovery of the Odderon – an elusive three-gluon state predicted almost 50 years ago."
Without a doubt, the discovery of a new particle is a great event and an outstanding achievement for any experiment.
Let us now look at two publications concerning these findings.
The first one appeared in 2019 \cite{TOT1} and was devoted to "probing
the existence of a colourless C-odd three-gluon compound state" on the basis of the retrieval of the parameter $ \rho $ from the data on elastic proton-proton scattering in the region of Coulomb-nuclear interference at 13 TeV. The second one will be commented on in the next subsection.
As was mentioned above, the conclusion about the discovery of the "C-odd three-gluon compound state" was made because of the coincidence of the measured value of $ \rho $ with approximately the same value appearing in the model of the "Maximal Odderon" \cite{nic3}.
What is interesting, in the model suggested in \cite{nic3} the authors do not deal with "gluons" at all ( because their arguments do not use QCD) and define the Odderon as follows:
"The Odderon is defined as a singularity in the complex $ j $-plane,
located at $ j=1 $ when $ t=0 $ and which contributes to the odd-
under-crossing amplitude $ F_{-} $".
However, the crossing-odd amplitude (negative signature) with $ j=1 $ cannot have a singularity at $ t=0 $, because this is the \textit{physical} p-wave partial amplitude in the $ \bar{p}p $ channel. Otherwise the axiomatic bounds (assuming non-zero mass gaps in every channel) would be violated, while the authors use precisely these bounds to justify their "maximal" choice for the C-odd amplitude. Alternatively, we would be forced to assume that there is no color confinement, which in turn would naturally give rise to an infrared singularity at $ t=0 $. I do not believe that the authors of \cite{nic3} meant such a radical scenario. Although in this case gluons would indeed appear, in an amount significantly exceeding 3.
In contrast, the C-even (positive signature) amplitude $ F_{+}(1,t) $ may well have a singularity at $ t=0 $, since for it the value $ j=1 $ is not physical.
A detailed criticism of the "Maximal Odderon model" in both conceptual and descriptive aspects can be found in Ref.\cite{ptr1}.
There is one more aspect of this topic that I would like to touch upon.
In the article \cite{TOT1} a phenomenological model for the strong-interaction amplitude was used for the description of the data and hence for retrieving parameters, e.g. $ \rho $. However, this model does not exhibit the Odderon singularity, unlike the strong-interaction amplitude described in \cite{nic3}.
So the coincidence of the values of $ \rho $ seems accidental, not related to the presence or absence of the Odderon singularity as assumed in \cite{TOT1}.
Our conclusion from this reasoning is that no specific value of the parameter $ \rho $ can be considered as an evidence of presence or absence of the Odderon.
Since the $ \rho $ parameter is of "forward origin", this is in line with the above conclusion about the other forward observable, $ \Delta\sigma $, and implies that the Odderon, if it exists, should be probed at non-zero transferred momenta.
Its decoupling at $ t=0 $ is evidently related to the absence of massless states in the
p-wave partial amplitude $ F_{-}(1,t) $ in the $ \bar{p}p $-channel, i.e. ultimately with confinement.
\pagebreak
\subsection{A step aside: the Odderon at nonzero t. An old friend? }
By no means does the above imply that the Odderon does not exist or is unobservable; we argued only about forward observables. At $ t\neq 0 $ the only way to search for it is a comparison of the differential cross-sections in the $ pp $ and $ \bar{p}p $ channels.
\begin{wrapfigure}[25]{l}{90mm}
\vspace{-0.1mm}
\includegraphics[width=90mm]{Fig4.jpg}
\vspace{-4.6mm}
\caption{The comparison of $ pp $ and $ \bar{p}p $ cross-sections.}
\label{p4}
\end{wrapfigure}
And here we cannot help but recall the good old ISR. As early as 1985, dedicated measurements were made \cite{isrod} to compare the differential cross-sections of elastic $ pp $ and $ \bar{p}p $ scattering at the same energy ($ \sqrt{s}= 53 $ GeV).
Fig.~\ref{p4} \cite{isrod} shows that the two cross-sections almost coincide, except in the vicinity of the dip ($ pp $) and the shoulder ($ \bar{p}p $).
Fig.~\ref{p5} shows the ratio of the two cross-sections, which differs from 1 only at $ t $ in the vicinity of the dip/shoulder.
It is clear that the cause of the difference is a C-odd force. But which one? Is it the manifestation of the well-known secondary Reggeons which are responsible for a non-zero
$ \Delta\sigma $ at low energies? If we try to blame them for this difference, we will see that their contribution is only a small part of the visible effect.
This was understood by the authors of Ref.\cite{isrod}, who remarked:
"{\it When we compare the available models to these data we find that none of them describes the data adequately.}"
It is true that no "adequate description" was provided at the time but, nonetheless, the result did not remain unnoticed. In Ref.\cite{gaur} it was even called "the new great ISR discovery", with an "intriguing question":
"Is it the maximal odderon growth ?"
As we already mentioned, the "Maximal Odderon", unfortunately, was not acceptable on conceptual grounds.
Be that as it may, we have to admit that a new C-odd interaction agent, not of pure quark origin like the $ \rho, \omega $, etc., was observed in the experiment \cite{isrod}. In other words, in this loose sense, the Odderon was discovered already 36 years ago.
But even if we admit this, we do not know whether this effect survives at high energies or dies off.
Meanwhile, the TOTEM Collaboration measured $ pp $ elastic scattering at 2.76 TeV. The closest result in energy was the D0 (FNAL) measurement of $ \bar{p}p $ elastic scattering at 1.96 TeV. For lack of anything better, it was decided to compare the cross-sections, albeit not at the same, but at relatively close, energies. The comparison has shown that the effect persists, although less pronounced.
Soon afterwards an attempt was made \cite{odde}, with the help of a specially designed "data transfer" extrapolation technique, to perform the comparison at the same energy (1.96 TeV). The result appeared qualitatively the same, with minor quantitative differences (Fig.~\ref{p6}).
Some qualitative estimate of the energy dependence can be made by considering the ratio
\[
d\sigma^{\bar{p}p}/d\sigma^{pp} = f(\sqrt{s}, \tau= \mid t\mid/\mid t_{\mbox{dip}}(s)\mid).
\]
\begin{wrapfigure}[39]{l}{90mm}
\vspace{-0.1mm}
\includegraphics[width=90mm]{Fig5.jpg}
\vspace{-6.6mm}
\caption{The ratio $d\sigma^{\bar{p}p}/d\sigma^{pp}$ at 53 GeV \cite{isrod}.}
\label{p5}
\vspace{1.1mm}
\includegraphics[width=90mm]{Fig6.jpg}
\vspace{-6.6mm}
\caption{The $\bar{p}p$ and (extrapolated) $pp$ data at $\sqrt{s}=1.96$ TeV \cite{odde}.}
\label{p6}
\end{wrapfigure}
The function $ f( \sqrt{s}, \tau) $ appears to be close to 1 at almost all $ \tau $, except for a "bell" in the vicinity of $ \tau =1 $.
For the height of the bell we then obtain
\[f(53\mbox{ GeV},1)-1 = 3.5\pm 1.7, \]
while
\[f(1960\mbox{ GeV},1) -1 = 0.67\pm 1\sigma\ (?).\]
Unfortunately, the values of the extrapolated $ pp $ data at 1.96 TeV are still kept secret, so we could not estimate the errors better and produce a picture like Fig.~\ref{p5}.
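The bell height quoted above is simply the cross-section ratio at $ \tau=1 $ minus one, with its uncertainty following from standard ratio error propagation. A minimal Python sketch, using made-up input values (not the published ISR numbers):

```python
import math

def bell_height(num, dnum, den, dden):
    """Return (f - 1, uncertainty) for f = num/den, assuming independent
    Gaussian errors on the two cross-section values (illustrative sketch)."""
    f = num / den
    df = f * math.sqrt((dnum / num) ** 2 + (dden / den) ** 2)
    return f - 1.0, df

# Hypothetical cross-section values near tau = 1 (arbitrary units):
excess, err = bell_height(4.5, 1.5, 1.0, 0.2)
```

The relative errors of numerator and denominator are added in quadrature, which is why a large uncertainty on either cross-section near the dip translates directly into a large error on the bell height.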
What can we conclude from this story?
1. The discovery of the Odderon as a new C-odd force, superior to the "old" C-odd forces from the secondary quark Reggeons, was successfully confirmed in the energy interval $ 53 \div 2000 $ GeV.
2. The Odderon effect in the sense described above weakens with energy, albeit very slowly.
It remains to understand the Odderon nature in terms of, say, $ j $-plane singularity and to clarify its particle content.
We cannot agree with the claim, made in Ref.\cite{odde} with a reference to the paper \cite{nicla}, that a "colorless C-odd gluonic compound" was observed, because the present data cannot inform us about the quark-gluon content of the exchange. Only a direct detection of a state associated with this exchange, with a determination of its mass, width and spin-parity, could qualify as such evidence. This can be compared with the discovery of the pion, which occurred only 12 years after the publication of the Yukawa paper. Unfortunately, we cannot say that the paper \cite{nicla}, with its
erroneous theoretical content and poor description quality ($ p $-value$ =8.5\cdot 10^{-71}$)\footnote{One can find the corresponding criticism in Ref.\cite{ptr1}.}, can be likened to the Yukawa paper.
At the same time, we would like to pay due tribute to the commendable tenacity of the main and pioneering proponent of the Odderon, Basarab Nicolescu, who devoted many years to enthusiastic promotion of this idea.
Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1, 2021.
\section*{Acknowledgements}
V.P. thanks Christophe Royon for invitation to give this talk at a very interesting Workshop.
\iffalse
\part[Precision QCD measurements from CMS\\ \phantom{x}\hspace{4ex}\it{Toni M\"{a}kel\"{a} on behalf of the CMS Collaboration}]{}
\section{Introduction}
For a deeper understanding of quantum chromodynamics (QCD), it is essential to study the production of jets. Jets, and the objects produced in association with them, shed light on proton structure and can be used for extracting standard model (SM) parameters, such as the strong coupling and quark masses. Jets with high transverse momentum ${\boldsymbol p}$ can also probe the scale of new physics and are utilized in indirect searches for physics beyond the standard model (BSM).
A selection of the relevant achievements of the CMS Collaboration is presented.
A precision measurement of the Z invisible width~\cite{CMS-PAS-SMP-18-014} is described in Section~\ref{sec:ZinvWidth} and inclusive jet production at $\sqrt{s}=5.02\TeVns$~\cite{CMS-PAS-SMP-21-009} in Section~\ref{sec:incJets5}.
Section~\ref{sec:multijet} discusses multijet production at 13$\TeVns$~\cite{CMS-PAS-SMP-21-006}. Finally, Section~\ref{sec:incJets13} details the measurement of inclusive jet production cross sections at $\sqrt{s}=13\TeVns$ along with a QCD analysis incorporating these data~\cite{CMS-PAS-SMP-20-011}. A detailed description of the CMS detector, together with a definition of the coordinate system and relevant kinematic variables is given in Ref.~\cite{CMS-JINST-3-S08004}. The contents reported here reflect those available at the time of the Low-$x$ 2021 Workshop. Since then, the work in Ref.~\cite{CMS-PAS-SMP-20-011} has been submitted to \textit{JHEP} and the preprint is available at~\cite{CMS:2021yzl}.
\section{Precision measurement of the Z invisible width at $\sqrt{s} = 13\TeVns$}
\label{sec:ZinvWidth}
A measurement~\cite{CMS-PAS-SMP-18-014} of the $\ensuremath{\mathrm{Z}}$ boson invisible width is performed using data from $\ensuremath{\mathrm{p}}\Pp$ collisions at $\sqrt{s} = 13\TeVns$ recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of $36.3\,\mathrm{fb}^{-1}$. Jets are reconstructed with the anti-$k_t$ algorithm with the distance parameter $R=0.4$.
Besides being a search for dark matter using jets and missing transverse momentum ${\boldsymbol p}^{\textrm{miss}}$, the precise measurement of the $\ensuremath{\mathrm{Z}}$ boson invisible width constrains the number of neutrino species coupled to $\ensuremath{\mathrm{Z}}$. The invisible width arises from the decays of the $\ensuremath{\mathrm{Z}}$ boson to invisible final states, such as neutrinos, and is given by
\begin{equation}
\Gamma(\ensuremath{\mathrm{Z}}\rightarrow\nu\overline{\nu})
= \frac{ \sigma(\ensuremath{\mathrm{Z}} + \textrm{jets}) \mathcal{B}(\ensuremath{\mathrm{Z}}\rightarrow\nu\overline{\nu}) }
{ \sigma(\ensuremath{\mathrm{Z}} + \textrm{jets}) \mathcal{B}(\ensuremath{\mathrm{Z}}\rightarrow\ell\ell) }
\Gamma(Z\rightarrow\ell\ell),
\label{ZinvisibleWidthEq}
\end{equation}
where $\Gamma(Z\rightarrow\ell\ell)$ is the decay width for visible leptonic decays. The $\sigma$ and $\mathcal{B}$ in Eq.~\eqref{ZinvisibleWidthEq} denote the cross sections and branching ratios of the processes indicated in parentheses, respectively.
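Numerically, Eq.~\eqref{ZinvisibleWidthEq} reduces to a measured ratio of $\sigma\mathcal{B}$ values scaled by the known leptonic width. A small sketch, where the ratio 6.23 is a purely hypothetical input and $\Gamma(Z\rightarrow\ell\ell)\approx 83.98$ MeV is a PDG-like leptonic width:

```python
def invisible_width(ratio_inv_to_ll, gamma_ll_mev):
    """Gamma(Z -> invisible) via the ratio method of Eq. (1): the measured
    ratio of sigma*B(Z -> nu nubar) to sigma*B(Z -> ll), times Gamma(Z -> ll).
    Luminosity and common systematic uncertainties largely cancel in the ratio."""
    return ratio_inv_to_ll * gamma_ll_mev

# Hypothetical measured ratio and an approximate leptonic width in MeV:
gamma_inv = invisible_width(6.23, 83.98)  # about 523 MeV
```

The attraction of the ratio method is visible in the code: only the ratio and one precisely known width enter, so the absolute normalisation of the jet selection drops out.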
The extraction of the invisible width requires the $\ensuremath{\mathrm{Z}}/\gamma^* \rightarrow \ell\ell$ process to be corrected to $\ensuremath{\mathrm{Z}} \rightarrow \ell\ell$. The contribution of the $\gamma^* \rightarrow \ell\ell$ channel and its interference with $\ensuremath{\mathrm{Z}} \rightarrow \ell\ell$ is simulated with \MADGRAPH{}5\_aMC@NLO\xspace 2.3~\cite{Alwall:2014hca, Alwall:2011uj}. The event selection criteria are given in detail in~\cite{CMS-PAS-SMP-18-014}. The signal processes of $\ensuremath{\mathrm{Z}}$ decaying to neutrinos and the Drell-Yan process of $\ensuremath{\mathrm{Z}}$ decaying to leptons as well as $\ensuremath{\mathrm{W}}$ boson production are simulated at next-to-leading order (NLO) with \MADGRAPH{}5\_aMC@NLO\xspace. Parton shower and hadronisation are obtained by using \textsc{Pythia} 8.212~\cite{Sjostrand:2014zea} with the CUETP8M1 tune \cite{CMS:2015wcf}. Correction factors accounting for NNLO QCD and NLO electroweak effects are applied. To account for background processes, the generation of $\ensuremath{{\PQt{}\overline{\PQt}}}$ events is done with \MADGRAPH{}5\_aMC@NLO\xspace and normalized to the NNLO inclusive cross section with next-to-next-to-leading logarithmic corrections. Single top processes are computed at leading order (LO) using \textsc{POWHEG}~\cite{Nason:2004rx, Frixione:2007vw, Alioli:2010xd} and normalized to NLO $t$-channel and $\ensuremath{\mathrm{t}}\ensuremath{\mathrm{W}}$ production cross sections, whereas $s$-channel production is computed at NLO using \MADGRAPH{}5\_aMC@NLO\xspace. Multijet processes are considered at LO and computed with \textsc{Pythia}. All LO and NLO processes are generated using the NNPDF 3.0 PDFs~\cite{NNPDF:2014otw} at the corresponding order.
The dominant background contribution is due to $\ensuremath{\mathrm{W}}+\textrm{jets}$ events with a lepton outside of detector acceptance. It is estimated by using $\mu + \textrm{jets}$ and $e + \textrm{jets}$ as control samples and defining a transfer factor as the ratio of simulated event counts in the ${\boldsymbol p}^{\textrm{miss}} + \textrm{jets}$ signal and in the control samples, as a function of ${\boldsymbol p}^{\textrm{miss}}$. This is used for correcting the $\mu + \textrm{jets}$ and $e + \textrm{jets}$ event yields, so that the expected contribution of $W + \textrm{jets}$ to ${\boldsymbol p}^{\textrm{miss}} + \textrm{jets}$ can be obtained.
The transfer factor is validated by simultaneous likelihood fits to channels involving different combinations of leptons and jets, extracting any differences in the shape and normalisation of the ${\boldsymbol p}^{\textrm{miss}}$ distribution. The treatment of subdominant background sources is explained in Ref.~\cite{CMS-PAS-SMP-18-014}. Dominant systematic uncertainties arise from the identification of electrons and muons, the jet energy scale and pile-up~\cite{CMS-PAS-SMP-18-014}.
The $\ensuremath{\mathrm{Z}}$ boson invisible width is extracted from a simultaneous likelihood fit to data for the
${\boldsymbol p}^{\textrm{miss}} + \textrm{jets}$,
$\ensuremath{\mathrm{Z}}/\gamma^* \rightarrow ee + \textrm{jets}$,
$\ensuremath{\mathrm{Z}}/\gamma*\rightarrow \mu\mu + \textrm{jets}$,
$\mu + \textrm{jets}$, and
$e + \textrm{jets}$
channels. The transfer factor is included as a free unconstrained parameter in the fit, where it scales the $\ensuremath{\mathrm{W}} + \textrm{jets}$ process for ${\boldsymbol p}^{\textrm{miss}} + \textrm{jets}$ and $\ell + \textrm{jets}$. The data are compared with the pre- and postfit in Figure~\ref{ZinvisibleWidthData}, which also indicates the contributions of different processes in each channel.
\begin{figure}[h]
\centering
\includegraphics[width=130mm,trim={0mm 0mm 3mm 0mm},clip]{./figs/Z_inv_width_data.png}
\caption{Missing transverse momentum ${\boldsymbol p}^{\mathrm{miss}}$ distributions. Selected charged leptons do not contribute to ${\boldsymbol p}^{\mathrm{miss}}$. The plot also shows the ratios with respect to the SM postfit and the residuals as the difference between data and SM postfit.~\cite{CMS-PAS-SMP-18-014}}
\label{ZinvisibleWidthData}
\end{figure}
The resulting $\ensuremath{\mathrm{Z}}$ boson invisible width is $523 \pm 3\,\mathrm{(stat)} \pm 16 \,\mathrm{(syst)}\MeVns$, making this the most precise direct measurement to date as well as the first one using hadron collider data. A comparison to the results of the LEP experiments is shown in Figure~\ref{ZinvisibleWidthComparison}.
\begin{figure}[H]
\centering
\includegraphics[width=80mm]{./figs/invisible_Z_width.png}
\caption{Comparison of the $\ensuremath{\mathrm{Z}}$ boson invisible width measurement with the direct measurements from the experiments at the LEP. The present measurement is in good agreement with the LEP combination~\cite{CMS-PAS-SMP-18-014}.}
\label{ZinvisibleWidthComparison}
\end{figure}
\section{Measurement of the double-differential inclusive jet cross section at $5\TeVns$}
\label{sec:incJets5}
A measurement~\cite{CMS-PAS-SMP-21-009} of inclusive jet production is performed using $\ensuremath{\mathrm{p}}\Pp$ collision data at $\sqrt{s}=5.02\TeVns$, corresponding to an integrated luminosity of $27.4\,\mathrm{pb}^{-1}$. The data were recorded by the CMS experiment during a special low-pileup run of the LHC, with 1.1 vertices per collision on average. The cross section is measured double-differentially as a function of jet transverse momentum ${\boldsymbol p}$ and rapidity $y$, and the jets are reconstructed using the anti-$k_t$ algorithm with distance parameter $R=0.4$ in a phase space given by $64 \GeVns < {\boldsymbol p} < 1 \TeVns$ and $|y| < 2.0$.
As the inclusive jet production is dominated by QCD and the background from electroweak processes is negligible, the data are compared to perturbative QCD predictions with a correction for nonperturbative effects~\cite{CMS-PAS-SMP-21-009}. The NLO prediction is obtained using NLOJet++~\cite{Nagy:2001fj, Nagy:2003tz} with the \textsc{FastNLO} framework~\cite{Britzger:2012bs}, and the NNLO prediction is determined with NNLOJET~\cite{Currie:2016bfm, Currie:2018xkj, Gehrmann:2018szu}.
Figure~\ref{5TeVdataVsTheory} shows the ratio of the unfolded inclusive jet cross section to theoretical predictions computed with the CT14 PDF~\cite{Dulat:2015mca}. Using the NLO predictions with the factorisation and renormalisation scales set to $\mu_f = \mu_r = {\boldsymbol p}$, the data are seen to be systematically below theory. However, the $\ensuremath{H_\mathrm{T}}$ scale gives a good description of the data at both NLO and NNLO. Furthermore, the NNLO prediction turns out to be less dependent on the choice of scale than the NLO one~\cite{CMS-PAS-SMP-21-009}.
\begin{figure}[H]
\includegraphics[width=50mm]{./figs/5TeV/5TeV_NLO_HT.png}
\includegraphics[width=50mm]{./figs/5TeV/5TeV_NLO_pT.png}
\includegraphics[width=50mm]{./figs/5TeV/5TeV_NNLO_HT.png}
\caption{A comparison of the 5\TeVns inclusive jet cross section data to theory predictions at NLO using the $\ensuremath{H_\mathrm{T}}$ scale (left), at NLO using the ${\boldsymbol p}$ scale (middle), and at NNLO using the $\ensuremath{H_\mathrm{T}}$ scale~\cite{CMS-PAS-SMP-21-009}.}
\label{5TeVdataVsTheory}
\end{figure}
\section{Cross section measurements of jet multiplicity and jet transverse momenta in multijet events at $13\TeVns$}
\label{sec:multijet}
Multijet production in $\ensuremath{\mathrm{p}}\Pp$ collisions is a probe of QCD at high ${\boldsymbol p}$ and high jet multiplicities. A recent CMS measurement~\cite{CMS-PAS-SMP-21-006} of multijet production is performed at $13\TeVns$, corresponding to an integrated luminosity of $36.3\,\mathrm{fb}^{-1}$. The jets are reconstructed using the anti-$k_t$ algorithm with $R=0.4$. Events containing a leading jet with $p_{\mathrm{T}1} > 200\GeVns$ and a subleading jet with $p_{\mathrm{T}2} > 100\GeVns$ within $|y| < 2.5$ are selected. The leading and subleading jets form a dijet system, and the multiplicity of jets with ${\boldsymbol p} > 50\GeVns$ is measured for different regions of the leading jet ${\boldsymbol p}$ and in bins of the azimuthal angle $\Delta\phi_{1,2}$ between the jets in the dijet system. The differential cross sections of the four leading jets are measured as functions of their ${\boldsymbol p}$.
The measurements are compared to perturbative QCD predictions interfaced with different models for hadronisation and parton showering. The NLO matrix elements (ME) of 2 jet and 3 jet production are computed using \MADGRAPH{}5\_aMC@NLO\xspace~\cite{Alwall:2014hca}. For hadronisation, it is interfaced with \textsc{Pythia 8}~\cite{Sjostrand:2014zea} using the CUETP8M1 tune~\cite{CMS:2015wcf} and the NNPDF 3.0 NLO PDF~\cite{NNPDF:2014otw}. Alternatively, CASCADE3~\cite{Baranov:2021uol} is used, employing the \textsc{Herwig6} subtraction terms in MCatNLO and the NLO PB TMD set 2~\cite{BermudezMartinez:2018fsv} for transverse momentum dependent parton densities. All computations are normalised to the measured dijet cross section. The factorisation and renormalisation scales $\mu_f$, $\mu_r$ are set equal to $1/2 \sum_i H_{\mathrm{T}i}$, with $H_{\mathrm{T}i}$ being the scalar transverse momenta of the produced partons and the sum running over all of them. The scale uncertainty is obtained by varying $\mu_r$ and $\mu_f$ independently by a factor of 2 up and down, avoiding the cases with $\mu_r/\mu_f =4^{\pm 1}$.
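The scale-variation prescription just described (independent variation of $\mu_r$ and $\mu_f$ by factors of 2, excluding the anti-correlated combinations with $\mu_r/\mu_f = 4^{\pm 1}$) is the standard 7-point scheme, which can be sketched as:

```python
# Enumerate the 7-point scale variations: mu_r and mu_f are varied
# independently by factors of 1/2 and 2, and the two combinations with
# mu_r/mu_f equal to 4 or 1/4 are dropped.
factors = (0.5, 1.0, 2.0)
variations = [(mur, muf) for mur in factors for muf in factors
              if mur / muf not in (4.0, 0.25)]
# 7 points remain: the nominal (1, 1) plus six variations; the scale
# uncertainty is the envelope of the prediction over the non-nominal points.
```

The excluded points are the ones where the two scales move in opposite directions by the full factor, which is generally considered an unphysically large separation.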
The NLO generator gives a good description of the experimental data, particularly when using the transverse momentum dependent PDFs~\cite{BermudezMartinez:2018fsv}. This is illustrated in Figure~\ref{multijetFig}, which portrays the ${\boldsymbol p}$ distributions of the 3rd and 4th leading jets.
\begin{figure}[h]
\centering
\includegraphics[width=110mm]{./figs/SMP-21-006/3-4-jet.png}
\caption{A comparison of the measured ${\boldsymbol p}$ distributions of the 3rd and 4th leading jets to NLO predictions. The label (jj) refers to 2 jet and (jjj) to 3 jet production in the ME~\cite{CMS-PAS-SMP-21-006}.}
\label{multijetFig}
\end{figure}
For the first time, jet multiplicity has been measured in bins of leading jet ${\boldsymbol p}$ and the azimuthal angle between the two leading jets, with up to seven measurable jets. The results will be essential for comparisons of SM multijet production calculations, and are particularly beneficial for high jet multiplicity simulations with parton showers.
\section{Measurement and QCD analysis of double-differential inclusive jet cross sections at $13\TeVns$}
\label{sec:incJets13}
The $\ensuremath{\mathrm{p}}\Pp$ collision data at 13 TeV are used by the CMS Collaboration to measure the cross section of inclusive jet production~\cite{CMS-PAS-SMP-20-011}. The present results involve jets reconstructed using the anti-$k_t$ algorithm with the distance parameter $R=0.7$, for which the data correspond to an integrated luminosity of 33.5 $\mathrm{fb}^{-1}$.
The unfolding is performed two-dimensionally using least-squares minimisation. Attention is paid to the smoothness of all bin-to-bin uncertainties, with tests of smoothness performed using Chebyshev polynomials. The data are shown in Figure~\ref{13TeV_cs_and_rm} along with the probability matrix, which is the response matrix normalised row-by-row.
\begin{figure}[h]
\includegraphics[width=77.5mm]{./figs/SMP-20-011/ak7_xsec.pdf}
\raisebox{4.3mm}{\includegraphics[width=71.2mm]{./figs/SMP-20-011/ak7_RM_pythia_2D.pdf}}
\caption{\textit{Left:} The inclusive jet cross sections with a comparison to NLO QCD predictions using the CT14 PDF~\cite{CMS-PAS-SMP-20-011}. \textit{Right:} The probability matrix. The horizontal and vertical axes correspond to particle and detector level jets, respectively.~\cite{CMS-PAS-SMP-20-011}}
\label{13TeV_cs_and_rm}
\end{figure}
The data are compared with fixed-order QCD predictions available at NLO and NNLO, obtained by using NLOJet++~\cite{Nagy:2001fj, Nagy:2003tz} and NNLOJET (rev5918)~\cite{Currie:2016bfm, Currie:2018xkj, Gehrmann:2018szu}. The NLO calculations are implemented in \textsc{FastNLO}~\cite{Britzger:2012bs}. The NLO cross-section is upgraded to NLO+NLL via correction factors obtained with the \textsc{NLL-Jet} calculation, provided by the authors of Ref.~\cite{Liu:2018ktv}, and the MEKS~\cite{Gao:2012he} code. Details of the electroweak and nonperturbative corrections are given in~\cite{CMS-PAS-SMP-20-011}. The comparison is performed using various global PDFs, and depicted in Figure~\ref{13TeV_data_vs_theory}. In particular, the scale uncertainty is observed to decrease noticeably at NNLO.
\begin{figure}[H]
\centering
\includegraphics[width=150mm]{./figs/SMP-20-011/ak7_comparisonToNNLO_2D.pdf}\\
\includegraphics[width=150mm]{./figs/SMP-20-011/ak7_comparisonToPDFs_2D.pdf}
\caption{Comparisons of the double differential cross section data to theoretical predictions using different PDFs and portraying the role of various uncertainties. In the bottom plot all histograms are divided by the NLO+NLL prediction and in the top plot by the NNLO prediction, obtained using the CT14 PDF~\cite{CMS-PAS-SMP-20-011}.}
\label{13TeV_data_vs_theory}
\end{figure}
The sensitivity of the present measurement to the proton PDFs and $\ensuremath{\alpha_S}(m_{\ensuremath{\mathrm{Z}}})$ is investigated in a comprehensive QCD analysis, where the double-differential inclusive jet production cross section is used together with the charged- and neutral-current deep inelastic scattering (DIS) cross sections of HERA~\cite{Abramowicz:2015mha}. In addition, the normalised triple-differential $\ensuremath{{\PQt{}\overline{\PQt}}}$ cross section~\cite{Sirunyan:2019zvx} from CMS is used. The scales $\mu_f$ and $\mu_r$ are set to the four-momentum transfer $Q$ for the DIS data and to the individual jet ${\boldsymbol p}$ for the inclusive jet production cross section. For $\ensuremath{{\PQt{}\overline{\PQt}}}$ production, they are set to half the sum of the transverse masses of the partons, as done in Ref.~\cite{Sirunyan:2019zvx}. The QCD analysis is performed in terms of the SM and standard model effective field theory (SMEFT) by using the \textsc{xFitter} QCD analysis framework~\cite{Bertone:2017tig, Alekhin:2014irh}, interfaced to \textsc{CIJET}~\cite{Gao:2012qpa, Gao:2013kp} for the SMEFT prediction. This allows for a simultaneous extraction of PDFs, $\ensuremath{\alpha_S}$, $m_{\ensuremath{\mathrm{t}}}^{\textrm{pole}}$ and the Wilson coefficient $c_1$ of 4-quark contact interactions (CI).
The CI are expected to appear as deviations from the SM jet cross section spectrum at low rapidity and high-${\boldsymbol p}$. However, SM predictions are based on PDFs which are derived assuming the validity of the SM at high jet ${\boldsymbol p}$. Hence there is a possibility that BSM effects are absorbed in the PDF fit. To ensure that the search for CI is non-biased, the PDFs are fitted simultaneously when using a SMEFT prediction. The SM is extended by
\begin{equation}
\mathcal{L}_{\textrm{SMEFT}} = \mathcal{L}_{\textrm{SM}}
+ \frac{4\pi}{2\Lambda^2}
\sum_{n} c_n O_n,
\label{SMEFT_Lagrangian}
\end{equation}
where $\Lambda$ is the scale of new physics, $c_n$ are Wilson coefficients and $O_n$ are dimension 6 operators for 4-quark CI corresponding to purely left-handed, vector-like or axial vector-like colour singlet exchanges.
The impact of the $13\TeVns$ data on a global PDF is assessed through a profiling procedure~\cite{Paukkunen:2014zia, Schmidt:2018hvu} performed using the CT14 PDF~\cite{Dulat:2015mca} at NLO and NNLO. The fractional uncertainty of the CT14 gluon PDF and the result of profiling with the CMS inclusive jet and $\ensuremath{{\PQt{}\overline{\PQt}}}$ data is shown in Figure~\ref{profilingPlots}, which also shows a scan for $\ensuremath{\alpha_S}$. The scan results in
$\ensuremath{\alpha_S}(m_{\ensuremath{\mathrm{Z}}}) = 0.1154 \pm 0.0009\,\mathrm{(fit)} \pm 0.0015\,\mathrm{(scale)}$,
where the scale uncertainty is obtained by varying $\mu_r$ and $\mu_f$ independently by factors of 1/2 and 2, excluding the combinations with $\mu_r/\mu_f =4^{\pm 1}$. The gluon PDF precision is improved significantly by the CMS data, and the profiled top quark mass $m_{\ensuremath{\mathrm{t}}}=170.3 \pm 0.5\,\mathrm{(fit)} \pm 0.2\,\mathrm{(scale)}\GeVns$ is consistent with Ref.~\cite{Sirunyan:2019zvx}.
However, the profiling procedure does not allow for a simultaneous extraction of the PDFs and non-PDF parameters. Therefore, a full fit is performed using SM predictions and, alternatively, assuming a SM+CI model. Uncertainties are estimated similarly to the HERAPDF2.0 method~\cite{Abramowicz:2015mha}, accounting for the fit, parameterisation and model uncertainties. The model uncertainties arise from variations in the fixed non-PDF parameter values, including the QCD scales, and the parameterisation uncertainties from adding and removing new parameters in the PDF parameterisation, one at a time.
The SM fit results in
$m_{\ensuremath{\mathrm{t}}}^{\textrm{pole}} = 170.4 \pm 0.6\,\mathrm{(fit)} \pm 0.3\,\mathrm{(model + par)}\GeVns$,
compatible with the previous CMS result~\cite{Sirunyan:2019zvx}, and
$\ensuremath{\alpha_S}(m_{\ensuremath{\mathrm{Z}}}) = 0.1187 \pm 0.0016\,\mathrm{(fit)} \pm 0.0030\,\mathrm{(model + par)}$,
compatible with the world average~\cite{10.1093/ptep/ptaa104}.
The PDFs, $\ensuremath{\alpha_S}(m_{\ensuremath{\mathrm{Z}}})$, and $m_{\ensuremath{\mathrm{t}}}^{\textrm{pole}}$ resulting from the SM fit and the SMEFT fits with all three CI models agree, and the PDFs are illustrated in Figure~\ref{SM_vs_CI_PDFs}. The SMEFT fits are sensitive to the ratio of the fitted $c_1$ to $\Lambda^2$, which expectedly remains constant as shown in Figure~\ref{Wilson2perLambda}. The negative $c_1$ imply a constructive interference with the SM gluon exchange, but are statistically compatible with zero. Thus, neither a deviation from the SM nor a risk of absorbing BSM effects in the SM PDF fit is observed.
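A schematic expansion makes it clear why only the ratio $c_1/\Lambda^2$ is determined (this is an illustrative form, not the full \textsc{CIJET} prediction): with the normalisation of Eq.~\eqref{SMEFT_Lagrangian}, an observable cross section receives corrections of the form
\[
\sigma \simeq \sigma_{\textrm{SM}}
+ \frac{4\pi c_1}{\Lambda^{2}}\,\sigma_{\textrm{int}}
+ \left(\frac{4\pi c_1}{\Lambda^{2}}\right)^{2}\sigma_{\textrm{quad}},
\]
up to numerical factors, where $\sigma_{\textrm{int}}$ is the SM--CI interference and $\sigma_{\textrm{quad}}$ the pure CI contribution. For $\Lambda$ far above the probed jet momenta the interference term dominates, so the data constrain only the combination $c_1/\Lambda^2$, consistent with the behaviour seen in Figure~\ref{Wilson2perLambda}.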
\begin{figure}[h]
\centering
\includegraphics[width=59.9mm]{./figs/SMP-20-011/profiling/CT14nlo_vs_top+jets_g.pdf}
\hspace*{3mm}
\raisebox{2.1mm}{\includegraphics[width=65mm]{./figs/SMP-20-011/profiling/top+v4-1_NLO+NLL_as_scan_simple_preliminary.pdf}}
\caption{Profiling with CT14nlo at NLO, using the CMS inclusive jet and the triple-differential $\ensuremath{{\PQt{}\overline{\PQt}}}$ cross sections at $13\TeVns$. \textit{Left:} Relative uncertainty in the gluon PDF as a function of the momentum fraction $x$, at the scale $\mu_\text{f}=m_{\ensuremath{\mathrm{t}}}$. The CT14 uncertainty is shown in red and the profiling result in blue~\cite{CMS-PAS-SMP-20-011}. \textit{Right:} The $\chi^2$ scan for $\ensuremath{\alpha_S}(m_{\ensuremath{\mathrm{Z}}})$ profiling with the CT14 PDF series~\cite{CMS-PAS-SMP-20-011}.}
\label{profilingPlots}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=65mm]{./figs/SMP-20-011/QCDfits/SM_vs_CI/u_val_q2-07_combined.pdf}
\includegraphics[width=65mm]{./figs/SMP-20-011/QCDfits/SM_vs_CI/d_val_q2-07_combined.pdf}\\
\includegraphics[width=65mm]{./figs/SMP-20-011/QCDfits/SM_vs_CI/g_q2-07_combined.pdf}
\includegraphics[width=65mm]{./figs/SMP-20-011/QCDfits/SM_vs_CI/sea_q2-07_combined.pdf}
\caption{The $\ensuremath{\mathrm{u}}$ valence~(upper left), $\ensuremath{\mathrm{d}}$ valence~(upper right), gluon~(lower left), and sea quark~(lower right) PDFs as functions of the momentum fraction $x$ at the scale $\mu_\text{f}=m_{\ensuremath{\mathrm{t}}}$.
The red hashed band results from the SMEFT fit with the left-handed CI model using $\Lambda=10\TeVns$. It agrees with the gray band resulting from the SM fit; all differences are within the fit uncertainties~\cite{CMS-PAS-SMP-20-011}.}
\label{SM_vs_CI_PDFs}
\end{figure}
Conventional searches for CI are performed by scanning for $\Lambda$ with $c_1$ fixed to $+1$ for destructive or $-1$ for constructive interference with the SM gluon exchange. The results of the present fit are translated into non-biased 95\% CL exclusion limits on $\Lambda$ with $c_1=-1$. These are $24\TeVns$ for left-handed, $32\TeVns$ for vector-like, and 31\TeVns for axial-vector-like CI. The most stringent comparable result is $22\TeVns$ for left-handed CI with constructive interference, obtained by the ATLAS Collaboration using $13\TeVns$ dijet cross sections data~\cite{ATLAS:2017eqx}.
\begin{figure}[H]
\centering
\includegraphics[width=90mm]{./figs/SMP-20-011/Wilson2perLambda.png}
\caption{The Wilson coefficients $c_1$ obtained in the NLO SMEFT analysis, divided by $\Lambda^2$. The black error bars show the fit uncertainty at 68\% CL. The red (blue) lines correspond to the total uncertainty at 68\% (95\%) CL~\cite{CMS-PAS-SMP-20-011}.}
\label{Wilson2perLambda}
\end{figure}
\section{Summary}
Precision QCD measurements by the CMS Collaboration are reported, involving jet production in proton-proton collisions at $13\TeVns$ and $5\TeVns$. All results are in agreement with previous CMS results and world averages.
The first measurement of the $\ensuremath{\mathrm{Z}}$ boson invisible width at a hadron collider is also the most precise to date. The study of multijet production at $13\TeVns$ is the first measurement of jet multiplicity with up to seven measurable jets, making use of transverse momentum dependent parton densities. Furthermore, measurements of inclusive jet production cross sections are presented at $5\TeVns$ and $13\TeVns$. The $5\TeVns$ results provide a valuable reference for probing quark-gluon plasma in lead--proton collisions at $5.02\TeVns$, whereas the $13\TeVns$ data are incorporated in an analysis following a non-biased strategy, resulting in the simultaneous extraction of PDFs, $\ensuremath{\alpha_S}(m_{\ensuremath{\mathrm{Z}}})$, $m_{\ensuremath{\mathrm{t}}}^{\textrm{pole}}$ and the Wilson coefficient of 4-quark contact interactions for the first time using hadron collider data.
\iffalse
\part[Determination of Proton Parton Distribution Functions using ATLAS Data\\ \phantom{x}\hspace{4ex}\it{Zhiqing Zhang on behalf of the ATLAS Collaboration}]{}
\section{Introduction}
Parton distribution functions (PDFs) of the proton cannot be predicted from first principles and used to be determined in deep inelastic scattering (DIS) experiments using point-like beams of charged or neutral leptons to probe the proton or other nucleon targets, earlier in fixed-target mode and later in collision mode at HERA. Thanks to the factorization theorem, the PDFs determined from one process can be used as a prediction for other processes.
At hadron colliders such as the Large Hadron Collider (LHC), the uncertainty of the PDFs is now the dominant uncertainty source for precision measurements; one example is the recent determination of the $W$ boson mass~\cite{1701.07240} by ATLAS~\cite{detector}. It also becomes a limiting factor for searches beyond the Standard Model. It is therefore important to improve our knowledge of PDFs by using relevant measurements at the LHC.
Since 2012, several analyses at next-to-next-to-leading order (NNLO) accuracy in quantum chromodynamics (QCD) have been performed by the ATLAS experiment using Drell-Yan or top-quark pair cross-section data as outlined in Table~\ref{tab:fits}. In the following sections, these QCD analyses and their corresponding results will be briefly described and shown.
\begin{table}
\centering
\caption{Overview of ATLAS QCD analyses and the used data sets including the neutral current (NC) and charged current (CC) data from HERA-I and HERA-I+II, and various ATLAS measurements of the Drell-Yan processes $W\to\ell\nu$ and $Z(\gamma^\ast)\to\ell\ell$ and of top-quark pair production at different center-of-mass energies.}
\label{tab:fits}
\resizebox{\columnwidth}{!}
{
\begin{tabular}{c|cccc}
\hline
Data sets & epWZ12~\cite{1203.4051} & epWZ16~\cite{1612.03016} & epWZtop18~\cite{ATL-PHYS-PUB-2018-017} & epWZVjet20~\cite{2101.05095} \\\hline
HERA-I NC, CC~\cite{0911.0884} & \checkmark & & & \\
HERA-I+II NC, CC~\cite{1506.06042} & & \checkmark & \checkmark & \checkmark \\\hline
ATLAS $W$, $Z$, 7 TeV 35 pb$^{-1}$~\cite{1109.5141} & \checkmark & & & \\
ATLAS $W$, $Z/\gamma^\ast$, 7 TeV 4.2 fb$^{-1}$~\cite{1612.03016} & & \checkmark & \checkmark & \checkmark \\\hline
ATLAS $t\bar{t}$ $(\ell+$ jets, dilepton$)$ 8 TeV~\cite{1511.04716,1607.07281} & & & \checkmark & \\\hline
ATLAS $W/Z+$ jets 8 TeV~\cite{1711.03296,1907.06728} & & & & \checkmark \\\hline
\end{tabular}
}
\end{table}
\section{Analysis ATLAS-epWZ12}
In 2012, an NNLO QCD analysis, ATLAS-epWZ12~\cite{1203.4051}, was performed by the ATLAS experiment to assess the strange quark distribution. It used the first differential cross-section measurements of inclusive $W^\pm$ and $Z$ boson production, based on a $pp$ collision data sample corresponding to an integrated luminosity of 35 pb$^{-1}$ recorded in 2010 at a center-of-mass energy of $\sqrt{s}=7$~TeV~\cite{1109.5141}, together with the combined $e^\pm p$ neutral current (NC) and charged current (CC) cross-section measurements of H1 and ZEUS from HERA-I~\cite{0911.0884}.
The HERA measurements were included since they are the primary source for constraining PDFs, with their large kinematic coverage in $Q^2$, the absolute four-momentum transfer squared, from near 1~GeV$^2$ to above $10^4$~GeV$^2$, and in Bjorken-$x$ from $\sim 0.6$ down to $10^{-4}$. However, they do have limitations. They cannot distinguish the flavor of the down-type sea quarks, $\bar{d}$ and $\bar{s}$. Also, the measurement precision at high $x$ and large $Q^2$ is statistically limited.
The ATLAS data, measured as functions of the $W$ decay lepton pseudorapidity, $\eta_\ell$, and of the $Z$ boson rapidity, $y_Z$, with a typical precision of $(2-3)\%$, access a kinematic range prescribed by the boson masses, $M_{W, Z}$, and the proton beam energy of $E_p=3.5$~TeV, corresponding to $Q^2\simeq M^2_{W,Z}$ and an $x$ range $0.001\lesssim x\lesssim 0.1$, with a mean $x=M_Z/2E_p=0.013$ for the $Z$ boson. The data provide new constraints on the strange quark distribution at high scale, $Q^2\sim M^2_{W,Z}$, which imply constraints at low $Q^2$ through perturbative QCD evolution.
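The quoted kinematic range follows from standard leading-order Drell--Yan kinematics: a boson of mass $M$ produced at rapidity $y$ probes momentum fractions
\begin{equation}
x_{1,2}=\frac{M}{\sqrt{s}}\,e^{\pm y}\,,\nonumber
\end{equation}
so that central production ($y=0$) at $\sqrt{s}=2E_p=7$~TeV corresponds to $x=M_Z/2E_p\simeq 91.2/7000\simeq 0.013$ for the $Z$ boson, while the measured rapidity coverage spreads $x$ over roughly $0.001\lesssim x\lesssim 0.1$.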
The QCD analysis used the HERAFitter framework~\cite{HERAFitter}. The light quark coefficient functions were calculated to NNLO as implemented in QCDNUM~\cite{1005.1481}. The contributions of heavy quarks were calculated in the general-mass variable-flavor-number scheme of Refs.~\cite{9709442,0601245}. The electroweak (EW) parameters and corrections relevant for the $W$ and $Z$ boson production processes were determined following the procedure described in Ref.~\cite{1109.5141} and cross-checked between the FEWZ~\cite{0312266} and DYNNLO~\cite{0903.2120} programs. The fit used the APPLGRID code~\cite{0911.2985} interfaced to the MCFM program~\cite{1007.3492} for fast calculation of the differential $W$ and $Z$ boson cross sections at next-to-leading order (NLO), and a $K$-factor technique to correct the NLO predictions to NNLO. The data were compared to the theory using the $\chi^2$ function defined in Ref.~\cite{0904.0929}.
The quark and gluon distributions at an initial scale of $Q^2_0=1.9$~GeV$^2$, chosen such that it is below the charm mass threshold $m_c$, were parameterized by generic forms:
\begin{eqnarray}
&& xq_i(x)=A_ix^{B_i}(1-x)^{C_i}P_i(x)\,,\nonumber\\
&& xg(x)=A_gx^{B_g}(1-x)^{C_g}P_g(x)-A^\prime_gx^{B^\prime_g}(1-x)^{C^\prime_g}\,,\nonumber
\end{eqnarray}
where $P_{i, g}$ denotes polynomials in powers of $x$. The fitted quark distributions were the valence quark distributions $(xu_v, xd_v)$ and the light quark sea distributions $(x\bar{u}, x\bar{d}, x\bar{s})$.
The parameters $A_{u_v}$ and $A_{d_v}$ were fixed using the quark counting rule, $A_g$ using the momentum sum rule, and $C^\prime_g$ was set to 25 to suppress negative contributions at high $x$. The normalization and slope parameters, $A$ and $B$, of $\bar{u}$ and $\bar{d}$ were set equal such that $x\bar{u} = x\bar{d}$ as $x\to 0$. The strange quark distribution, for the results presented here, was parametrized with $P_{\bar{s}}=1$ and $B_{\bar{s}}=B_{\bar{d}}$. Terms were
added in the polynomial expansion $P_i(x)$ only if required by the data, i.e.\ if they yielded a significant decrease in the $\chi^2$ of the fit, following the procedure described in Ref.~\cite{0911.0884}. This led to one additional term, $P_{u_v}=1+E_{u_v}x^2$, for the $u$ valence quark distribution, giving in total 15 free parameters for the fit.
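As a concrete illustration of how the sum rules fix the normalizations, the following sketch (in Python, with purely illustrative shape parameters, not the fitted ones) evaluates the counting-rule normalization and the resulting momentum fraction for the simple case $P_i=1$:

```python
# Sketch (not the ATLAS fit code): how the quark counting rule fixes the
# normalization A for the simple form x q(x) = A x^B (1-x)^C (P_i = 1).
# The shape parameters below are illustrative, not the fitted values.
from math import gamma

def beta_fn(a, b):
    """Euler Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

# Counting rule: integral_0^1 q(x) dx = n_q (2 for u_v, 1 for d_v).
# With q(x) = A x^(B-1) (1-x)^C this integral is A * B(B, C+1).
def A_from_counting(n_q, B, C):
    return n_q / beta_fn(B, C + 1.0)

B_uv, C_uv = 0.7, 4.7                    # illustrative shape parameters
A_uv = A_from_counting(2.0, B_uv, C_uv)

# The momentum fraction carried by this parton, integral_0^1 x q(x) dx,
# is then A * B(B+1, C+1); the momentum sum rule (total fraction = 1)
# is what fixes A_g for the gluon in the same way.
mom_uv = A_uv * beta_fn(B_uv + 1.0, C_uv + 1.0)
print(A_uv, mom_uv)
```

For $P_i=1$ the integrals reduce to Euler Beta functions, so the counting and momentum sum rules can be imposed in closed form; with non-trivial polynomials $P_i(x)$ the same constraints are applied with a few extra Beta-function terms.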
The parton distributions at other $Q^2$ values were obtained using the DGLAP evolution equations.
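The evolution referred to is the standard DGLAP evolution; schematically, for a quark distribution $q_i$,
\begin{equation}
\frac{\partial q_i(x,Q^2)}{\partial\ln Q^2}=\frac{\alpha_s(Q^2)}{2\pi}\sum_j\int_x^1\frac{dz}{z}\,P_{ij}(z)\,q_j\!\left(\frac{x}{z},Q^2\right)\,,\nonumber
\end{equation}
with analogous equations coupling the quark and gluon distributions, and splitting functions $P_{ij}$ evaluated at the order of the fit, here NNLO.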
The fit resulted in a good overall $\chi^2$ per degree of freedom of 538.4/565 and determined the ratio of the strange quark distribution to the down sea quark distribution at $Q^2_0$ and $x=0.023$ (this $x$ value corresponds to 0.013 at $Q^2=M^2_Z$) to be:
\begin{equation}
r_s=\frac{s+\bar{s}}{2\bar{d}}=1.00\pm 0.20 (\textrm{exp})\pm 0.07 (\textrm{mod})^{+0.10}_{-0.15} (\textrm{par}) ^{+0.06}_{-0.07} (\alpha_s)\pm 0.08 (\textrm{th})\,,\nonumber
\end{equation}
where uncertainties are experimental, model, parametrization, $\alpha_s$ and theoretical. The determination, consistent with the prediction that the light quark sea at low $x$ is flavor symmetric, was compared in Figure~\ref{fig:epwz12} with predictions obtained from four global PDF determinations. The ABKM09~\cite{0908.2766} and MSTW08~\cite{0901.0002} determinations gave a value around 0.5 and the NNPDF2.1~\cite{1107.2652} determination was lower at about 0.25. On the other hand, the CT10 (NLO)~\cite{1007.2241} determination gave a large ratio consistent with the ATLAS determination.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\columnwidth]{rs_epWZ12.pdf}
\caption{Relative strange-to-down sea quark ratio for $Q^2_0=1.9$~GeV$^2$ and $x=0.023$, comparing the determination from ATLAS (shown as the vertical line with bands for experimental and total uncertainties) with predictions from different PDF sets (shown as closed symbols with horizontal error bars). The plot is taken from Ref.~\cite{1203.4051}.}
\label{fig:epwz12}
\end{center}
\end{figure}
\section{Analysis ATLAS-epWZ16}
The ATLAS-epWZ16 analysis~\cite{1612.03016} is very similar to ATLAS-epWZ12. The main differences are that the HERA-I $ep$ data were replaced with the final NC and CC DIS HERA-I+II data~\cite{1506.06042}, and the ATLAS Drell-Yan measurements used~\cite{1612.03016} were based on the full 7~TeV data sample of 4.6 fb$^{-1}$ with improved precision. In addition, the measurement of the $Z/\gamma^\ast$ process was extended to three mass ranges, $46<m_{\ell\ell}<66$~GeV, $66<m_{\ell\ell}<116$~GeV and $116<m_{\ell\ell}<150$~GeV, in the central rapidity region $|y_{\ell\ell}|<2.5$, and to two mass ranges, $66<m_{\ell\ell}<116$~GeV and $116<m_{\ell\ell}<150$~GeV, in the forward rapidity region up to $|y_{\ell\ell}|=3.6$. The NNLO QCD fit with the same PDF parameterization as the ATLAS-epWZ12 fit was performed using the xFitter program~\cite{HERAFitter}.
\sloppy
The strange quark ratio $r_s$ was determined with improved precision at $Q^2_0$ and $x=0.023$:
\begin{equation}
r_s=1.19\pm 0.07 (\textrm{exp})^{+0.13}_{-0.14} (\textrm{mod}+\textrm{par}+\textrm{th})\,.\nonumber
\end{equation}
The comparison with predictions from other global PDF sets, ABM12~\cite{1310.3059}, NNPDF3.0~\cite{1410.8849}, MMHT14~\cite{1412.3989} and CT14~\cite{1506.07443}, as well as with ATLAS-epWZ12, is shown in Figure~\ref{fig:epwz16} (left). In addition, the ratio of the strange quark distribution to the sum of the up and down sea quark distributions, $R_s=(s+\bar{s})/(\bar{u}+\bar{d})$, has also been obtained as a function of $x$ (Figure~\ref{fig:epwz16} (right)), though the uncertainty, in particular the parametrization uncertainty, was large.
\begin{figure}
\begin{center}
\includegraphics[width=0.55\columnwidth]{rs_epWZ16.pdf}
\includegraphics[width=0.4\columnwidth]{Rsx_epWZ16.pdf}
\caption{Left: Relative strange-to-down sea quark ratio $r_s$ for $Q^2_0=1.9$~GeV$^2$ and $x=0.023$, comparing the determination from ATLAS (shown as the vertical line with bands for experimental data, QCD fit and theoretical uncertainties) with predictions from different NNLO PDF sets (shown as closed symbols with horizontal error bars) as well as the previous ATLAS result (shown as an open square). Right: ratio of the strange quark distribution to the sum of the up and down sea quark distributions as a function of Bjorken-$x$. The plots are taken from Ref.~\cite{1612.03016}.}
\label{fig:epwz16}
\end{center}
\end{figure}
\section{Analysis ATLAS-epWZtop18}
Another NNLO QCD analysis, ATLAS-epWZtop18~\cite{ATL-PHYS-PUB-2018-017}, included, in addition to the data used in the ATLAS-epWZ16 fit, top-quark pair production data measured in the lepton + jets~\cite{1511.04716} and dilepton~\cite{1607.07281} channels at 8~TeV, corresponding to an integrated luminosity of 20.2~fb$^{-1}$. The top-quark pair production data are complementary to the other data sets in their PDF constraining power since they are expected to be sensitive to the high-$x$ gluon distribution~\cite{1611.08609}. The corresponding NNLO predictions~\cite{1704.08551} were supplied in the form of FastNLO grids~\cite{1208.3641} for the data in the lepton + jets channel. For the dilepton channel, APPLGRID was interfaced to MCFM to produce NLO grids, and a $K$-factor technique was used to correct the NLO predictions to NNLO.
The setup of the fit differs from that of ATLAS-epWZ16 in three aspects, apart from the addition of the top-quark pair production data. Firstly, the low-mass off-peak $Z/\gamma^\ast$ boson data were not used, because it became clear that the lower $x$ region probed by these data is subject to further QCD corrections~\cite{1710.05935} which were not readily calculable; this had negligible impact on the fits. Secondly, the minimum $Q^2$ selection on the HERA data was raised from 7.5~GeV$^2$ to 10~GeV$^2$; this larger cut was already considered as one of the model variations of the ATLAS-epWZ16 fit and was used as standard in the new analysis to avoid the region of the HERA data which may be subject to $\ln(1/x)$-resummation effects~\cite{1710.05935} not accounted for in the prediction. Thirdly, the ATLAS-epWZtop18 fit had one additional term, $P_{\bar{d}}=1+D_{\bar{d}}x$, for the down sea quark distribution compared with ATLAS-epWZ16, where this term was considered only as a parameterization variation.
The main result of the fit is presented in Figure~\ref{fig:top18}, showing the impact of the top-quark pair production data on the gluon distribution: its uncertainty is reduced, in particular at high $x$, and the distribution becomes softer at medium $x$ and harder at high $x$.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\columnwidth]{xg_epWZtop18.pdf}
\caption{Gluon distribution as a function of Bjorken-$x$ from fitting HERA data and ATLAS $W$ and $Z/\gamma^\ast$ boson data plus the $t\bar{t}$ lepton + jet ${\boldsymbol p}^{t}$ and $m_{t\bar{t}}$ data and the $t\bar{t}$ dilepton $y_{t\bar{t}}$ data compared to the fit to HERA and ATLAS $W$ and $Z/\gamma^\ast$ boson data alone. The plot is taken from Ref.~\protect\cite{ATL-PHYS-PUB-2018-017}.}
\label{fig:top18}
\end{center}
\end{figure}
\section{Analysis ATLAS-epWZVjet20}
The analysis ATLAS-epWZVjet20~\cite{2101.05095} included new ATLAS data on the production of a vector ($W$ or $Z$) boson in association with at least one jet~\cite{1711.03296,1907.06728}. The measurements were based on the same data sample as for the top-quark pair production at 8~TeV. They provided a novel source of input to the PDF determination, sensitive to partons at higher $x$ and $Q^2$ than can be accessed by the inclusive $W$ and $Z$ boson data alone, and are thus complementary to the inclusive $W/Z$ boson measurements.
Predictions for $W +$ jets and $Z +$ jets production~\cite{1511.08692} were obtained similarly to the $W$ and $Z$ predictions to NLO in QCD and leading order in EW couplings by using the APPLGRID code interfaced to the MCFM program. NNLO (NLO) corrections in QCD (EW) were implemented as $K$-factors.
The setup of the ATLAS-epWZVjet20 fit differs from that of ATLAS-epWZ16 in that there was one more term, $P_g=1+D_gx$, for the gluon distribution, and a tighter selection of $Q^2>10$~GeV$^2$ instead of 7.5~GeV$^2$ on the DIS data. The resulting down and strange sea quark distributions of the fit and $R_s$ at the starting scale $Q^2_0$ are shown in Figure~\ref{fig:ds_vjet20}, in comparison with the results of the ATLAS-epWZ20 fit in which the $V+$ jets data were not included. The inclusion of the $V+$ jets data made the $x\bar{d}$ distribution notably higher in the range $x\gtrsim 0.02$ and the $x\bar{s}$ distribution lower in the same range, with significantly smaller experimental and parameterization uncertainties at high $x$. The difference between the two $R_s$ determinations at high $x$ is essentially covered by the large uncertainty, dominated by the parameterization uncertainty, of the ATLAS-epWZ20 fit. The new $R_s$ determination at $x=0.023$ was again compared with the predictions from ABMP16~\cite{1701.05838}, CT18 and CT18A~\cite{1912.10053}, MMHT14~\cite{1412.3989}, NNPDF3.1 and NNPDF3.1\_strange~\cite{1706.00428} and the ATLAS determinations ATLAS-epWZ20 and ATLAS-epWZ16, the latter two differing mainly in their PDF parameterizations. Better agreement was observed with the CT18A PDF set, which included both the data used in the CT18 fit and the ATLAS 7 TeV data, although tension remains with the NNPDF3.1\_strange PDF set, which also used these data.
The large change at high $x$ was investigated by performing a $\chi^2$ scan of the $C_{\bar{d}}$ parameter, which controls the high-$x$ behavior of the $x\bar{d}$ distribution. The $\chi^2$ variation after subtracting the minimum value is presented in Figure~\ref{fig:dm_vjet20}. Without the $V+$ jets data, the fit showed a double minimum in $\chi^2$, with a preferred solution at a $C_{\bar{d}}$ value around 10, corresponding to a soft $x\bar{d}$ distribution at high $x$. The inclusion of the $V+$ jets data resolved this ambiguity and resulted in a $C_{\bar{d}}$ value around 1.6 and thus a harder $x\bar{d}$ distribution at high $x$. Since the sum $x\bar{d}+x\bar{s}$ is well constrained by, e.g., the DIS data, the harder $x\bar{d}$ distribution implies a softer $x\bar{s}$ distribution and a smaller $R_s$ at high $x$.
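The logic of such a scan can be sketched as follows (a toy Python example, not the xFitter machinery; the model, data and parameter values are invented for illustration): the parameter of interest is stepped over a grid and, at each point, the remaining parameters are re-minimized (`profiled'), and the quantity examined is $\chi^2$ minus its global minimum.

```python
# Toy illustration (not the xFitter machinery) of a profiled chi^2 scan:
# the parameter of interest C is stepped over a grid and, at each point,
# the remaining parameter (here a normalization A) is re-minimized.
# The model y = A (1-x)^C and the "data" below are invented.
xs    = [0.05, 0.1, 0.2, 0.3, 0.5, 0.7]
ys    = [0.77, 0.59, 0.33, 0.17, 0.031, 0.0022]   # fake measurements
sigma = 0.02                                      # common uncertainty

def profiled_chi2(C):
    """chi^2 at fixed C, with A set to its analytic best-fit value
    (the model is linear in A, so no iterative minimizer is needed)."""
    f = [(1.0 - x) ** C for x in xs]
    A = sum(y * fi for y, fi in zip(ys, f)) / sum(fi * fi for fi in f)
    return sum(((y - A * fi) / sigma) ** 2 for y, fi in zip(ys, f))

grid   = [1.0 + 0.1 * i for i in range(91)]   # scan C over [1, 10]
chi2s  = [profiled_chi2(C) for C in grid]
best   = min(chi2s)
dchi2  = [c - best for c in chi2s]            # the quantity plotted in such scans
C_best = grid[chi2s.index(best)]
print(round(C_best, 1))
```

In the real analysis the profiling runs over the full set of PDF parameters inside the fit program; the toy keeps a single nuisance parameter, for which the minimization is analytic.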
\begin{figure}
\begin{center}
\includegraphics[width=0.49\columnwidth]{xdbar_vjet20.pdf}
\includegraphics[width=0.49\columnwidth]{xsbar_vjet20.pdf}
\includegraphics[width=0.49\columnwidth]{Rsx_vjet20.pdf}
\includegraphics[width=0.49\columnwidth]{Rs_vjet20.pdf}
\caption{Top-left: Down sea quark distribution as a function of Bjorken-$x$ when including (blue) or not including (green) the $V+$ jets data in the fits. Top-right: the same for the strange sea quark distribution. Bottom-left: the same for the ratio $R_s$ of the strange quark distribution to the sum of the up and down sea quark distributions, as a function of Bjorken-$x$. Bottom-right: $R_s$ determination in comparison with other predictions for $Q^2_0=1.9$~GeV$^2$ and $x=0.023$. The plots are taken from Ref.~\cite{2101.05095}.}
\label{fig:ds_vjet20}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{doublemin_vjet20.pdf}
\caption{Variation of $\chi^2$ after subtracting the minimum value, as a function of the parameter $C_{\bar{d}}$, when including (blue) or not including (green) the $V+$ jets data in the fits. The plot is taken from Ref.~\cite{2101.05095}.}
\label{fig:dm_vjet20}
\end{center}
\end{figure}
\FloatBarrier
\section{Summary and prospect}
The results of several NNLO QCD analyses performed using the Drell-Yan production and top-quark pair production cross sections measured by ATLAS with $pp$ collision data at the LHC taken at center-of-mass energies of 7 and 8~TeV, together with the neutral and charged current deep inelastic scattering cross sections from $ep$ collisions at HERA, have been presented. The analyses show the constraining power of the ATLAS measurements on the parton distribution functions of the proton, complementary to the HERA data. In particular, the strange quark distribution at low $x$ is found to be unsuppressed, contrary to what was assumed or obtained in other PDF analyses. The shape of the ratio of the strange quark distribution to the other light sea quark distributions at high $x$ has been better understood thanks to the measurement of vector-boson production in association with at least one jet.
Another valuable constraint on the strange quark distribution at the LHC comes from the associated production of $W$ bosons and charm quarks, since it probes the strange quark content of the proton directly through the leading-order processes $g + \bar{s} \to W^+ +\bar{c}$ and $g + s \to W^- +c$. Earlier analyses, not presented here, performed NLO fits using the measurements of ATLAS at 7~TeV~\cite{1402.6263} and of CMS at 7~TeV~\cite{1310.1138} and at 13~TeV~\cite{1811.10021}. With the recently available NNLO predictions~\cite{2011.01011}, there is a good prospect of including the $W+c$ data in future NNLO analyses, though the interpretation of the data is sensitive to the modelling of $c$-quark fragmentation.
\iffalse
\part[The FACET Project: Forward Aperture CMS ExTension to search for new Long-Lived Particles\\ \phantom{x}\hspace{4ex}\it{Michael G. Albrow}]{}
\section{Introduction}
FACET, short for \textbf{F}orward \textbf{A}perture \textbf{C}MS \textbf{E}x\textbf{T}ension,
is a project under development to add a subsystem
to CMS to search for beyond the standard model (BSM) long-lived particles (LLPs)
in the high luminosity era
of the LHC, in Run 4 (2028) and beyond.
The project was initiated with a two-day meeting in April 2020 \cite{april2020,albrowguan}, with one day discussing
a forward hadron spectrometer for strong interaction physics, and one day on searching for long-lived particles.
A description and more details of the physics potential are given in Ref. \cite{facetpaper}.
We can compare FACET to the pioneering FASER experiment \cite{faser} which is approved to search for LLPs in the very forward direction in Run 3,
and an upgrade FASER-2 \cite{faser2} which is being developed for Run 4. Major differences with FACET are (a) FASER-2 is
480 m from IR5 (with ATLAS) while FACET is 100 - 127 m from IR1 (with CMS). (b) FACET has 4$\times$ the solid
angle: 54.5 $\mu$sr cf. 13.6 $\mu$sr. (c) FASER-2 has a 5 m-long decay volume; FACET has 18 m which is evacuated to
eliminate background from particle interactions. (d) FASER-2 is centered at polar angle $\theta$ = 0$^\circ$ while
FACET covers 1 mrad $< \theta <$ 4 mrad. (e) FASER-2 is behind $\sim$ 100 m of rock absorber while FACET has $\sim$ 50 m of
iron. However FACET is located inside the main LHC tunnel where
radiation levels are much higher while FASER is located in a side tunnel.
An important difference is that FASER is an independent experiment while FACET is not; it is proposed to be a new subsystem of CMS,
fully integrated and using the same advanced technology for its detectors. This has the added benefit of allowing
the study of correlations with the central event, and enables a standard model physics program especially in low pileup
$pp, pA$ and $AA$ collisions.
FACET will be located downstream of IR5 (at $z$ = 0) in an LHC straight section between the new (for Run 4)
superconducting beam separation dipole D1 at $z$ = 80 m and the TAXN absorber at $z$ = 128 m.
A schematic layout of the spectrometer is shown in Fig.~\ref{sketch}.
The beam pipe
between $z$= 101 m and 119 m will be enlarged to a radius of 50 cm. In front of the entrance window
will be a radiation-hard ``tagging'' hodoscope, with 2 - 3 planes of $\sim 1$ cm$^2$ quartz or radiation-hard scintillator
blocks.
This must have very high efficiency with a precision time measurement for charged
particles entering the pipe. These are all background particles to be ignored in the subsequent analysis.
Excellent time resolution, $\sim$ 30 ps, together
with fast timing on the tracks from another plane between the tracker and the calorimeter,
will not only help reject incoming background tracks but also allow a study of their momenta and composition.
Neutral LLPs produced with polar angle
1 $< \theta <$ 4 mrad penetrate the iron of the LHC elements (quadrupoles Q1 - Q3 and dipole D1) and enter the big vacuum tank
where decays to SM particles can occur. The LHC-quality vacuum completely eliminates any background from interacting
particles inside a fiducial region starting behind the front window. The back window of the big pipe, where it transitions
from $R$ = 50 cm to $R$ = 18 cm, will be thin, e.g. 0.5 mm of Be with strengthening ribs,
to minimise multiple scattering of the decay tracks\footnote{The front window may also need to be thin to minimize interactions
behind the tagging hodoscope; this is under study.}.
Behind that window, in air, the detector elements will be
3 m of silicon tracking (resolution $\sigma_x = \sigma_y \sim 30 \; \mu$m per plane) followed by a layer of
fast timing ($\sigma_t \sim$ 30 ps)\footnote{Since this Low-x Workshop
we note that measuring the time-of-flight of these background tracks over the 22 m between the two
hodoscopes, with a resolution $\delta \beta \lesssim 5 \times 10^{-4}$, together with the energy measured in the calorimeter,
will be very interesting. For example, consider particles with a delay relative to $\beta$ = 1 of 1 ns $\pm$ 50 ps with a shower
of energy $E_{cal}$. These can be 0.63 GeV/c $\mu^\pm$, 0.83 GeV/c $\pi^\pm$, 3 GeV/c $K^\pm$, or 5.6 GeV/c $p$ or $\bar{p}$, easily
distinguished, thereby measuring the identity and spectrum of these background tracks. That would be useful for testing and tuning
\textsc{fluka}, the LHC standard for machine protection etc. It also uniquely enables calibration of the HGCAL with hadrons of known
momenta up to tens of GeV with high statistics even in short runs. The charge $Q$ is known from the Cherenkov light
amplitude and track $dE/dx$
enabling measurements of light isotopes with lifetimes $\gtrsim 10^{-9}$ s in the showers, and to search for
objects such as strangelets (nuclei with extra strange quarks and therefore an anomalously low charge-to-mass ratio).}.
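The particle-ID arithmetic in that footnote can be checked with a few lines (a sketch, with particle masses from the PDG and the path length and delay as quoted in the footnote):

```python
# Sketch of the time-of-flight particle-ID arithmetic in the footnote:
# a particle delayed by dt relative to a beta = 1 particle over a path
# of L = 22 m has beta = (L/c) / (L/c + dt), and its momentum follows
# from p = gamma * beta * m.  Masses in GeV.
C_LIGHT = 0.299792458           # m / ns

def momentum_from_delay(mass_gev, delay_ns, path_m=22.0):
    t0 = path_m / C_LIGHT       # flight time of a beta = 1 particle (ns)
    beta = t0 / (t0 + delay_ns)
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    return gamma * beta * mass_gev

masses = {"mu": 0.1057, "pi": 0.1396, "K": 0.4937, "p": 0.9383}
for name, m in masses.items():
    print(name, round(momentum_from_delay(m, 1.0), 2))
# -> mu ~0.64, pi ~0.84, K ~2.98, p ~5.66 GeV/c, matching the
#    footnote's 0.63 / 0.83 / 3 / 5.6 GeV/c to rounding.
```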
The tracking and timing will be followed by a high granularity
electromagnetic and hadronic calorimeter, the HGCAL design. Muons that penetrate the HGCAL are detected in more
silicon tracking through an iron toroid.
\begin{figure}[t]
\vspace{-0.5 in}
\begin{center}
\makebox[\textwidth][c]{\includegraphics[angle=270,origin=c,width=150mm]{albrowlowx/facetsketch106.pdf}}
\end{center}
\vspace{-1.5in}
\caption{Schematic layout of the proposed FACET spectrometer. The side view and top view are
the same since it is azimuthally symmetric. The IR5 collision region and the central CMS detectors
are 100 m to the left. An example of an LLP $X$ decaying inside the pipe is superimposed.}
\label{sketch}
\end{figure}
FACET is complementary to all other searches with unique
access to regions of mass and coupling (or lifetime) for many \emph{portals}, hypothetical particles
that couple very weakly to both standard model particles (directly or through mixing) and to dark matter
particles. Unlike most searches in the central detectors FACET is sensitive to a wide variety of possible LLPs.
It has the potential to discover dark photons ($A'$), dark higgs ($h$ or $\phi$), heavy neutral
leptons ($N_i$) and axion-like particles ($ALP$s or $a$) if they have large enough production cross section in the
very forward direction, small enough coupling to penetrate 300 $\lambda_{int}$ of iron, and lifetime
in the range $c\tau$ = 10 cm - 100 m before decaying to standard model charged particles and/or photons.
A key feature is the high (LHC quality) vacuum tank for decays,
1 m diameter and 18 m long (14 m$^3$), made by enlarging a section of the LHC beam pipe. This allows some channels,
e.g. $X^0 \rightarrow$ multihadrons, $ \tau^+ \tau^-, c + \bar{c}$ and $b + \bar{b}$ to have zero background even in
3 ab$^{-1}$, while $e^+e^-$ and $\mu^+\mu^-$ decays may have very low backgrounds especially for
masses $\gtrsim$ 0.8 GeV. In 3 ab$^{-1}$ we expect to observe several thousand $K^0_L \rightarrow \mu^+ \mu^-$ and also
$K^0$ decays to 4 charged tracks, compromising the region around $M(X^0) =$ 0.5 GeV.
Dark photons $A'$ are hypothetical neutral gauge bosons that do not have direct couplings with SM particles,
but they can interact indirectly by mixing with SM photons. If $M(A') < 1$ GeV their main production mechanism
is via the meson decays $\pi^0, \eta, \eta' \rightarrow \gamma A'$, the fluxes being highest at small polar angle $\theta$.
Fixed target experiments such as NA62 have higher luminosity, and the higher $\sqrt{s}$ of the LHC is not advantageous for
dark photons from these sources.
For $M(A') > 1$ GeV the higher $\sqrt{s}$ of the LHC
is important, as additional sources such as Drell-Yan and quark- and proton-bremsstrahlung
dominate. The LHC is essential if the source is a massive state such as a $Z'$ in the model of Ref.\cite{duaprime},
which would give FACET sensitivity up to $\sim$ 20 GeV.
The decay modes are the same as the final states in $e^+e^- \rightarrow \gamma^*$, with $\tau^+ \tau^-$,
$c + \bar{c}$ and multihadron decays being background-free above their thresholds.
Measuring the relative rates of different channels could establish the identity of candidates as dark photons.
Heavy neutral leptons $N_i$ (where $i$ represents flavor, perhaps with three different mass states to discover)
are present in many BSM theories; they may explain the light neutrino masses through the seesaw mechanism. Possible decay
modes are $N_{\mu} \rightarrow \mu^\pm W^{*\mp}$ with the virtual $W^*$ decaying to kinematically allowed leptonic
or hadronic channels, and the same modes but with $\mu^\pm$ replaced by $e^\pm, \tau^\pm$. If $N_i$ have masses in the few-GeV region
even a few good candidate events would be a discovery that would open a very rich new field of neutrino physics.
In the model of Ref.\cite{suchita} FACET has unique discovery reach up to $\sim$25 GeV.
Also very exciting would be the discovery of another Higgs boson, a dark higgs, $h$ or in general a scalar $\phi$,
having the same vacuum quantum numbers
as the H(125) but with mass possibly in the several GeV region. Present measurements of H(125) decays allow
an invisible decay fraction up to 5\%, which could be explained by an $h$ through mixing $H(125) \leftrightarrow h$ or
decay $H(125) \rightarrow h + h$\footnote{If one $h$ is detected in FACET the other should be more central and give rise to
missing transverse energy $E_T$. Whether it is possible to detect this in a high pileup bunch crossing remains to be seen.}.
If $M(h) \lesssim$ 4.5 GeV rare $b$-decays are a potential source, with competition especially from B-factories and LHC-b.
For 4.5 GeV $< M(h) <$ 60 GeV and a range of mixing angles FACET has unique coverage, as shown in Fig. \ref{darkhiggs}.
The most spectacular
decays are $h \rightarrow \tau^+ \tau^-, c \:+ \: \bar{c}$ and $b \: + \: \bar{b}$ if kinematically allowed, and with the heaviest
states favored; the scalar nature can be demonstrated by the relative decay fractions as well as by the isotropic decay distribution.
FACET has more sensitivity than FASER-2 due to its larger solid angle and longer decay volume, e.g.
if there is no background and if 10 candidates were to be detected in FACET, FASER-2 would expect $<$ 1
\footnote{While the coverage of FACET is limited by LHC restrictions
in the horizontal direction, the solid angle could be increased nearly a factor $\times$2 in the vertical direction
with a non-circular beam pipe.}.
Another possible portal is a heavy ALP, but the main decay mode to $\gamma + \gamma$ will have a high background
from random pairs of photons from $\pi^0$ and $\eta$ decay, etc. Even though the electromagnetic section of the HGCAL
measures the shower direction the vertex resolution is much worse than for charged tracks.
FACET will be live for every bunch crossing, with an expected pileup of $\sim$ 140 inelastic collisions, giving a total
integrated luminosity of $\sim$ 3 ab$^{-1}$. The \textsc{fluka} code, which is the LHC standard, predicts about 25 charged particle tracks
with 18 cm $< R <$ 50 cm in each bunch crossing. Their origin (apart from any BSM signal!) is (a) interactions
of beam halo and secondary particles with the beam pipe, collimators, magnets, etc., (b) decays of neutral
hadrons, mainly $K^0_S, K^0_L$, and $\Lambda^0$, and (c) very small angle ($\theta \lesssim$ 1 mrad) charged particles that
pass through the D1 aperture, which deflects them to the left and right sides. The acceptance for the latter is
limited to $\sim$ 2 TeV,
but they allow some standard model physics (e.g. measuring $\mu^+\mu^-$ pairs at Feynman-$x_F \sim$ 0.5).
In a fast Level-1 trigger the tracks will be projected upstream to the
2D hodoscope in front of the front window. The main purpose of the hodoscope is to tag all entering charged
particles with very high efficiency (inefficiency $\lesssim 10^{-5}$) and ignore them; they are all background.
Because of the high resolution of the tracker and because there is
no significant magnetic field the uncertainty on the projected entrance point is $<$ 1 mm.
Since in 3 ab$^{-1}$ there will be about $2 \times 10^{15}$ bunch crossings,
we still expect $\sim 10^5 - 10^6$ bunch crossings with two untagged tracks from different collisions entering the
decay volume, depending on the tagger inefficiency. However, the requirement that these two background tracks intersect in space, i.e.\ have a distance of closest approach
$\lesssim 100 \; \mu$m inside the fiducial decay volume, and match in time effectively kills this pileup background.
\begin{figure}[t]
\vspace{-1.0in}
\begin{center}
\makebox[\textwidth][c]{\includegraphics[angle=0,origin=c,width=120mm]{darkhiggs.pdf}}
\end{center}
\vspace{-1.2in}
\caption{Reach of FACET and other existing and proposed experiments for a dark Higgs boson $\phi$
with the assumption of either 0\% (red lines) or 2.5\% (yellow lines) branching
fraction for the $H(125) \to \phi\phi$ decays. FACET offers a unique coverage all the way to half $M_H$
for a range of mixing angles. FACET and FASER-2 contours are calculated with \textsc{Foresee}~\protect\cite{foresee}.
Figure from Ref. \cite{facetpaper} which gives citations.}
\label{darkhiggs}
\end{figure}
Decays of $K^0_S, K^0_L$, and $\Lambda^0$ inside the pipe are a serious background for any LLPs with mass $M(X^0) \lesssim$
0.8 GeV decaying to hadrons. Their mass and momentum are reconstructed from the tracks and calorimeter energies (or muon momenta in the toroid),
and one can require pointing back to the IR, good timing and a flat distribution of decay distance (as it would be
for an LLP). However the background to a search for 2-body hadronic decays of an LLP is expected to be overwhelming
except for $M(X^0) \gtrsim$ 0.8 GeV. For higher masses 4-body decays become more probable, and a well-defined
vertex with $\geq$ 4 charged tracks should have zero background. The probability of two unrelated $K^0$ decays occurring
within the resolution in $x,y,z,t$ is very small but is being evaluated, as are all expected possible backgrounds.
A Letter of Intent to CMS is being prepared to officially propose FACET as a new subsystem and initiate a technical design study.
The most critical item is the enlarged beam pipe, since that cannot be installed in short technical
stops, and the next planned long shutdown LS4 is in 2031. New sources of funding will be sought.
The detectors required represent $\lesssim$ 5\% of the CMS forward upgrades, and could
be installed (and upgraded if needed) in technical stops.
\section*{Acknowledgements}
The author thanks all the members of the FACET development team, named as co-authors of Ref.~\cite{facetpaper}. We acknowledge
valuable input from V. Baglin and P. Fessia (CERN) on the LHC pipe and infrastructure and V. Kashikhin (Fermilab) on the preliminary
toroid design.
\iffalse
\part[Two-particle Correlations in multi-Regge Kinematics\\ \phantom{x}\hspace{4ex}\it{ G. Chachamis et al}]{}
\section{Introduction}
An important question in Quantum Chromodynamics (QCD) concerns the structure of the high energy limit dynamics of multi-particle production at hadron and hadron-lepton colliders. Multi-particle events, and in particular multi-jet
events, are difficult to describe when one wants to go beyond the leading order (LO) approximation in fixed order perturbation theory. Even in the early days of hadron colliders (Intersecting Storage Rings, ISR at CERN), when
the colliding energy was of the order of a few tens of GeV and QCD was not yet the established theory of the strong interaction, multi-particle production was one of the key problems to tackle in order to probe the underlying dynamics,
even in heuristic terms. The experimental data amassed in the last 50-60 years, starting from those early attempts,
have generally reinforced the view that the final state particles (pions, kaons, protons, etc.) that end up in the
detector appear to belong to correlated ``clusters''~\cite{Dremin:1977wc} that are emitted in the hard scattering part
of the process or at the partonic level.
Our modern day picture of a generic partonic cross section
of two incoming partons that interact and produce two or more outgoing partons which in turn,
through a parton shower, radiate new quarks and gluons and form jets does not contradict
the heuristic notion of clusters.
The study of differential distributions and particle-particle correlations between final states has given
important insights into the strong interaction, regardless of whether
the final states were considered before hadronization effects (final state partons/jets) or after (hadrons in the detector calorimeters).
Especially for jets, correlations summarize important properties without being too sensitive
to unresolved soft particles in the jet~\cite{Tannenbaum:2005by} (see also the introduction of Ref.~\cite{SanchisLozano:2008te}). It is thus very interesting to see whether the rapidity distributions of final state hadrons are at all similar to the rapidity distributions of jets.
Quite a few different approaches exist that are useful to probe multi-particle hadroproduction beyond fixed order. These are resummation frameworks that resum different leading contributions to amplitudes (e.g. DGLAP~\cite{Gribov:1972ri,Gribov:1972rt,Lipatov:1974qm,Altarelli:1977zs,Dokshitzer:1977sg}, BFKL~\cite{Kuraev:1977fs,Kuraev:1976ge,Fadin:1975cb,Lipatov:1976zz,Lipatov:1985uk}, CCFM~\cite{Ciafaloni:1987ur,Catani:1989yc}, Linked Dipole Chain~\cite{Gustafson:1986db,Gustafson:1987rq,Andersson:1995ju}, Lund Model~\cite{Andersson:1998tv}, see also~\cite{LHCForwardPhysicsWorkingGroup:2016ote}).
Run 2 of the LHC at 13 TeV has provided a wealth of data which should be analyzed in detail. Here, we want to focus on final state configurations which could be quite important for answering the question of
the range of applicability of different multi-particle production models. These configurations correspond to events with several final state jets where the two outermost in rapidity ones are also well separated in rapidity and can be tagged by requiring that their transverse momenta are similar and generally large. These correspond to a subset of the so-called Mueller-Navelet jets~\cite{Mueller:1986ey}, and if additionally we require a very similar $p_T$ for the outermost jets (however, not within overlapping ranges) we can enforce that their influence on
the rapidity distributions of the jets in between will be symmetric.
Our aim here
is to report on our recent work~\cite{deLeon:2020myv} where we studied some of the characteristics that can be attributed to these processes. Since the most striking features in multiperipheral models are found in rapidity space (their origin can be traced to the decoupling of the transverse degrees of freedom from the longitudinal ones), we will present single rapidity differential distributions and two-jet rapidity-rapidity correlations.
Assuming configurations in a hadron-hadron collider with fixed final state jet multiplicity $N$,
the single and double rapidity distributions for a given jet and a pair of jets are given respectively by
$\rho_{1}(y_i)$ and $\rho_{2}\left(y_{i}, y_{j}\right)$, where $i$ and $j$ are the positions of the jets
once they are ordered in rapidity, with $1 \le i,j \le N$. We can formally define the
single and double differential normalized distributions as
\begin{equation}
\rho_{1}(y_i)=\frac{1}{\sigma} \int d^{2} p_{\perp} \frac{d^{3} \sigma}{d y_i d^{2} p_{\perp i}}
\end{equation}
and
\begin{equation}
\rho_{2}\left(y_{i}, y_{j}\right)=\frac{1}{\sigma} \int d^{2} p_{\perp i} d^{2} p_{\perp j} \frac{d^{6} \sigma}{d y_{i} d^{2} p_{\perp i} d y_{j} d^{2} p_{\perp j}}\,.
\end{equation}
After integrating over their transverse components we will have
\begin{equation}
\rho_{1}(y_i)=\frac{1}{\sigma} \frac{d \sigma}{d y_i}
\end{equation}
and
\begin{equation}
\rho_{2}\left(y_{i}, y_{j}\right)=\frac{1}{\sigma} \frac{d^{2} \sigma}{d y_{i} d y_{j} }\,.
\end{equation}
The two-particle rapidity-rapidity correlation function is then given by
\begin{eqnarray}
C_{2}\left(y_{i}, y_{j}\right)&=&\frac{1}{\sigma} \frac{d^{2} \sigma}{d y_{i} d y_{j} }-\frac{1}{\sigma^{2}} \frac{d \sigma}{d y_{i} } \frac{d \sigma}{d y_{j}}\\
&=& \rho_{2}\left(y_{i}, y_{j}\right) - \rho_{1}(y_i) \rho_{1}(y_j) \,.
\end{eqnarray}
In practice, however, the correlation function is usually computed with the following expression, which is less sensitive
to experimental errors:
\begin{equation}
R_{2}\left(y_{1}, y_{2}\right)=\frac{C_{2}\left(y_{1}, y_{2}\right)}{\rho_{1}\left(y_{1}\right) \rho_{1}\left(y_{2}\right)}=\frac{\rho_{2}\left(y_{1}, y_{2}\right)}{\rho_{1}\left(y_{1}\right) \rho_{1}\left(y_{2}\right)}-1\,.
\label{correlation_function}
\end{equation}
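Before turning to the models, it may help to see how Eq.~(\ref{correlation_function}) is estimated from a finite event sample. The following Python sketch (our own illustration with hypothetical function names, not code from any experimental analysis) builds $\rho_1$ and $\rho_2$ as histogram densities and checks that $R_2$ vanishes, up to statistical fluctuations, for independent rapidities:

```python
import numpy as np

def r2_from_samples(y1, y2, edges):
    """Histogram estimate of R2(y1, y2), Eq. (correlation_function):
    rho_2 / (rho_1 rho_1) - 1, with all densities taken from the samples."""
    rho2, _, _ = np.histogram2d(y1, y2, bins=[edges, edges], density=True)
    rho1a, _ = np.histogram(y1, bins=edges, density=True)
    rho1b, _ = np.histogram(y2, bins=edges, density=True)
    return rho2 / np.outer(rho1a, rho1b) - 1.0

# Sanity check: for statistically independent rapidities the correlation
# function must vanish (up to statistical noise) in every bin.
rng = np.random.default_rng(0)
y1 = rng.uniform(-2.0, 2.0, 1_000_000)
y2 = rng.uniform(-2.0, 2.0, 1_000_000)
R2 = r2_from_samples(y1, y2, np.linspace(-2.0, 2.0, 5))
```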
In Section 2, we calculate the analytic expressions for the double and single rapidity distributions within an
old multiperipheral model, namely the Chew-Pignotti model~\cite{Chew:1968fe}, whereas, in Section 3,
we explain how we compute the same quantities for the gluon Green's function of a collinear BFKL model by
using Monte Carlo techniques. We conclude in Section 4.
\section{The Chew-Pignotti Model}
As we mentioned in the introduction, we will work with a Chew-Pignotti type of multiperipheral model~\cite{Chew:1968fe} following the analysis by DeTar in Ref.~\cite{Detar:1971qw}. The simplified features of the model
will give us the opportunity to produce analytic results that could in principle
be compared to the experimental data. For a more detailed review on multiperipheral models and the cluster concept in multiparticle production at hadron colliders, see Ref.~\cite{Dremin:1977wc} and references therein.
The key point is that in these types of models the transverse coordinates decouple from the longitudinal degrees
of freedom (rapidity), and this is what allows one to obtain analytic expressions for the rapidity distributions.
While the multiperipheral models were devised to describe multiple particle production, we will use
the Chew-Pignotti model here for multiple jet production. We will assume that the rapidity separation of
the bounding jets is $Y$ and also that the jets in between have a fixed multiplicity $N$, such that in total
we will have $N+2$ final state jets in each event. The outermost in rapidity jets (the most forward/backward jets) have rapidities $\pm \frac{Y}{2}$. It should be clear that we choose to work with limits $y_0=-\frac{Y}{2}$ and $y_{N+1}=\frac{Y}{2}$ because we want to cast our analytic expressions for the distributions in a symmetric
way with respect to the forward and backward rapidity direction. Naturally, one could also work with limits $y_0=0$ and $y_{N+1}=Y$. Actually, for the results from the BFKL approach in Section 3, we choose to present our plots in
the range $\left[0, Y \right]$ as we want to be closer to an experimental analysis setup.
The cross section for the production of $N+2$ final state jets is given by
\begin{eqnarray}
\sigma_{N+2} &=& \alpha^{N+2} \int_{0}^{Y} \prod_{i=1}^{N+1} dz_i \delta \left(Y-
\sum_{s=1}^{N+1} z_s \right) \nonumber\\
&=& \alpha^{N+2}
\int_{-\frac{Y}{2}}^{\frac{Y}{2}} dy_N \int_{-\frac{Y}{2}}^{y_N} dy_{N-1} \cdots \int_{-\frac{Y}{2}}^{y_3} dy_2
\int_{-\frac{Y}{2}}^{y_2} dy_1\, = \, \alpha^2
\frac{\left(\alpha Y\right)^N}{N!} \, ,
\label{sigma_N2}
\end{eqnarray}
which leads to a total cross section $\sigma_{\rm total} =
\sum_{N=0}^\infty \sigma_{N+2} = \alpha^2 e^{\alpha Y}$ and the rise with $Y$ would need to be tamed
by introducing unitarity corrections in transverse space. A rapidity
$y_l$, with $l=0, \dots, N+1$, is assigned to each of the final-state jets. At $y_0=-\frac{Y}{2}$ and $y_{N+1}=\frac{Y}{2}$ we have the positions of the outermost jets, with the jet vertex reduced to a simple factor equal to $\alpha$, the strong coupling constant. The in-between jets have rapidities $y_l = -\frac{Y}{2}+\sum_{j=1}^l z_j$,
where $l=1, \dots, N$.
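The combinatorial identity behind Eq.~(\ref{sigma_N2}), namely that the nested rapidity-ordered integral equals $Y^N/N!$, is easy to verify numerically. A small Python check (ours, purely illustrative; the coupling factors $\alpha$ drop out) estimates the ordered volume by sampling:

```python
import math
import numpy as np

# Monte Carlo check of Eq. (sigma_N2): the volume of the rapidity-ordered
# region -Y/2 < y_1 < ... < y_N < Y/2 equals Y^N / N!.
rng = np.random.default_rng(1)
N, Y, M = 4, 3.0, 2_000_000
y = rng.uniform(-Y / 2, Y / 2, size=(M, N))
# fraction of unordered samples that happen to be rapidity-ordered (= 1/N!)
ordered_fraction = np.all(np.diff(y, axis=1) > 0, axis=1).mean()
volume = Y**N * ordered_fraction       # MC estimate of the nested integral
exact = Y**N / math.factorial(N)       # closed form in Eq. (sigma_N2)
```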
We are mainly interested in the description of the differential distributions for events with fixed final state multiplicity, firstly on a qualitative level. At this point we need to note the following:
the final jet multiplicity for a given final state is uniquely defined; it depends on the lower $p_T$ cutoff
we set for a mini-jet to qualify as a jet as well as on the chosen jet radius $R$ (in the rapidity-azimuthal angle plane) for the jet clustering algorithm. Nevertheless, we believe that our analysis in the following is valid
once a well defined mechanism for deciding the multiplicity of a final state is established.
This is not a trivial statement, as it implies that if a final state initially assigned multiplicity $N_1+2$ complies with the $N_1+2$ differential distributions, then, if a different set of parameters is chosen for the jet clustering algorithm which results in a different number of final state jets, giving a shift
in multiplicity from $N_1$ to $N_2$, that final state will also comply with the $N_2+2$ differential distributions.
The contribution of jet $l$ in an $N+2$ final state event will have the following differential distribution in rapidity
\begin{eqnarray}
\frac{d \sigma_{N+2}^{(l)}}{d y_l} &=& \alpha^{N+2} \int_0^Y
\prod_{i=1}^{N+1} dz_i \delta \left(Y-\sum_{s=1}^{N+1} z_s \right)
\delta \left(y_l +\frac{Y}{2}- \sum_{j=1}^l z_j\right) \nonumber\\
&=& \alpha^{N+2} \int_{y_l}^\frac{Y}{2} dy_N \int_{y_l}^{y_N} dy_{N-1} \cdots \int_{y_l}^{y_{l+2}} dy_{l+1}
\int_{-\frac{Y}{2}}^{y_l} dy_{l-1} \cdots \int_{-\frac{Y}{2}}^{y_3} dy_2
\int_{-\frac{Y}{2}}^{y_2} dy_1\nonumber\\
&=& \alpha^{N+2} \frac{\left(\frac{Y}{2}-y_l \right)^{N-l}}{(N-l)!} \frac{\left(y_l+\frac{Y}{2}\right)^{l-1}}{(l-1)!} \, ,
\label{dsdy}
\end{eqnarray}
as derived from Eq.~(\ref{sigma_N2}).
For very large multiplicities this results in an asymptotic Poisson distribution, as one can verify, {\it e.g.}, in the region $y \simeq - \frac{Y}{2}$ with $y = \left(\frac{\lambda}{N}-\frac{1}{2}\right) Y$, where
\begin{eqnarray}
\lim_{N \to \infty}{N-1 \choose l-1}
\left(1-\frac{\lambda}{N}\right)^{N-l}
\left(\frac{\lambda}{N}\right)^{l-1}
&=& e^{- \lambda} \frac{\lambda^{l-1}}{(l-1)!} \, .
\end{eqnarray}
Taking the limits $l \to 1$ and $y_l \to - \frac{Y}{2}$ in
Eq.~(\ref{dsdy}) we derive a normalized universal distribution for each $N$ when plotted versus $2y/Y$. The case $N=7+2$ is plotted in Fig.~\ref{Multiplicity7yjets} (left), which is very characteristic of multiperipheral models. We remind the reader that the notation jet$_{i=1,2, \dots, N}$ is introduced for jets with ordered rapidities $y_1 < y_2 < \dots < y_N$.
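The Poisson limit quoted above can be checked directly. A short Python snippet (ours) compares the binomial term with its Poisson limit for increasing $N$ and confirms the convergence:

```python
import math

def binom_term(N, l, lam):
    # LHS of the limit: C(N-1, l-1) (1 - lam/N)^(N-l) (lam/N)^(l-1)
    return (math.comb(N - 1, l - 1)
            * (1.0 - lam / N) ** (N - l)
            * (lam / N) ** (l - 1))

def poisson_term(l, lam):
    # RHS of the limit: exp(-lam) lam^(l-1) / (l-1)!
    return math.exp(-lam) * lam ** (l - 1) / math.factorial(l - 1)

lam, l = 2.0, 3
err_small = abs(binom_term(100, l, lam) - poisson_term(l, lam))
err_large = abs(binom_term(100_000, l, lam) - poisson_term(l, lam))
```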
\begin{figure}
\begin{center}
\begin{flushleft}
\hspace{1.cm}\includegraphics[width=7cm]{Multiplicity7yjets.pdf}
\end{flushleft}
\vspace{-7.6cm}
\begin{center}
\hspace{8cm}\includegraphics[width=7.cm]{Plot_of_Maxima.pdf}
\end{center}
\vspace{-.5cm}
\caption{Rapidity distributions for each of the jets in a final state with seven mini-jets (left). The positions of the $y$-distribution maxima in configurations with multiplicity $N$ are marked by \textcircled{\tiny N} (right).}
\label{Multiplicity7yjets}
\end{center}
\end{figure}
All these seven normalized $y$-distributions (one for each of the seven jets) span an area of $\frac{2}{N}$. Their maxima are found at $y=\frac{2l-N-1}{2(N-1)} Y$ with a value that is
\begin{eqnarray}
{N-1 \choose l-1} \frac{(l-1)^{l-1}}{(N-1)^{N-1}} \left(N- l\right)^{N-l} \, .
\end{eqnarray}
In Fig.~\ref{Multiplicity7yjets} (right), we show the maxima for multiplicities $N$, where $\left(3+2\right) \le \left(N+2\right) \le \left(15+2\right)$.
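The position of the maxima quoted above follows from setting the derivative of Eq.~(\ref{dsdy}) to zero; a quick numerical scan in Python (ours, illustrative values of $N$, $l$, $Y$) confirms it:

```python
import numpy as np

# Check of the maxima quoted above: the (un-normalized) shape of Eq. (dsdy),
# (Y/2 - y)^(N-l) (y + Y/2)^(l-1), peaks at y = (2l - N - 1) Y / (2 (N - 1)).
N, l, Y = 7, 3, 4.0
y = np.linspace(-Y / 2, Y / 2, 200_001)
shape = (Y / 2 - y) ** (N - l) * (y + Y / 2) ** (l - 1)
y_numeric = y[np.argmax(shape)]
y_analytic = (2 * l - N - 1) * Y / (2 * (N - 1))   # = -2/3 for these values
```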
In a similar manner, we get expressions for the double differential rapidity distributions for jet pairs, {\it i.e.}
\begin{eqnarray}
\frac{d^2 \sigma_{N+2}^{(l,m)}}{d y_l d y_m} &=& \alpha^{N+2}
\int_{0}^{Y}
\prod_{i=1}^{N+1} dz_i \delta \left(Y-\sum_{s=1}^{N+1} z_s \right)
\delta \left(y_l +\frac{Y}{2}- \sum_{j=1}^l z_j\right)
\delta \left(y_m +\frac{Y}{2}- \sum_{k=1}^m z_k\right) \nonumber\\
&=& \alpha^{N+2} \frac{\left(\frac{Y}{2}-y_l \right)^{N-l}}{(N-l)!}
\frac{(y_l-y_m)^{l-m-1}}{(l-m-1)!}
\frac{\left(y_m+\frac{Y}{2}\right)^{m-1}}{(m-1)!} \, .
\label{d2sdydy}
\end{eqnarray}
To calculate the correlation between the rapidities of jet $l$ and jet $m$ we use Eq.~(\ref{correlation_function}),
more precisely
\begin{eqnarray}
{\cal R}_{N+2} \left(x_l,x_m\right) = \sigma_{N+2}
\frac{ \frac{ d^2 \sigma_{N+2}^{(l,m)}}{d y_l d y_m} }{\frac{d \sigma_{N+2}^{(l)}}{d y_l} \frac{d \sigma_{N+2}^{(m)}}{d y_m}}-1
=
\frac{2^N}{N!}\frac{(N-m)!(l-1)!}{(l-m-1)!}
\frac{(x_l-x_m)^{l-m-1}}{\left(1+x_l\right)^{l-1}\left(1-x_m \right)^{N-m}}
-1
\, ,
\label{R}
\end{eqnarray}
where $\frac{Y}{2} > y_l > y_m > -\frac{Y}{2}$, $l>m$ and $x_J = 2 y_J / Y$.
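The closed form in Eq.~(\ref{R}) can be cross-checked against the ratio of distributions it is built from. The following Python sketch (ours; the common factor $\alpha^{N+2}$ cancels and is dropped) evaluates both sides of Eq.~(\ref{R}) at a sample point:

```python
import math

# Cross-check of Eq. (R): sigma * rho_2 / (rho_1 rho_1) - 1 built from
# Eqs. (sigma_N2), (dsdy) and (d2sdydy) must reproduce the closed form.
def R_closed(N, l, m, xl, xm):
    return (2**N / math.factorial(N)
            * math.factorial(N - m) * math.factorial(l - 1)
            / math.factorial(l - m - 1)
            * (xl - xm) ** (l - m - 1)
            / ((1 + xl) ** (l - 1) * (1 - xm) ** (N - m))) - 1

def R_from_distributions(N, l, m, yl, ym, Y):
    sigma = Y**N / math.factorial(N)          # Eq. (sigma_N2), alpha dropped
    d1l = ((Y / 2 - yl) ** (N - l) / math.factorial(N - l)
           * (yl + Y / 2) ** (l - 1) / math.factorial(l - 1))
    d1m = ((Y / 2 - ym) ** (N - m) / math.factorial(N - m)
           * (ym + Y / 2) ** (m - 1) / math.factorial(m - 1))
    d2 = ((Y / 2 - yl) ** (N - l) / math.factorial(N - l)
          * (yl - ym) ** (l - m - 1) / math.factorial(l - m - 1)
          * (ym + Y / 2) ** (m - 1) / math.factorial(m - 1))
    return sigma * d2 / (d1l * d1m) - 1

N, l, m, Y = 5, 4, 1, 2.0
yl, ym = 0.5, -0.3
r_ratio = R_from_distributions(N, l, m, yl, ym, Y)
r_closed = R_closed(N, l, m, 2 * yl / Y, 2 * ym / Y)
```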
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{R514a.pdf}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{R531a.pdf}
\label{fig:sfig2}
\end{subfigure}
\caption{Left: ${\cal R}_{5+2} \left(x_4,x_1\right) = \sigma_{5+2}
\frac{ \frac{ d^2 \sigma_{5+2}^{(4,1)}}{d y_4 d y_1} }{\frac{d \sigma_{5+2}^{(4)}}{d y_4} \frac{d \sigma_{5+2}^{(1)}}{d y_1}}-1$.
Right: ${\cal R}_{5+2} \left(x_3,x_1\right) = \sigma_{5+2}
\frac{ \frac{ d^2 \sigma_{5+2}^{(3,1)}}{d y_3 d y_1} }{\frac{d \sigma_{5+2}^{(3)}}{d y_3} \frac{d \sigma_{5+2}^{(1)}}{d y_1}}-1$. }
\label{R7-1425}
\end{figure}
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{R551a.pdf}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{R542a.pdf}
\label{fig:sfig2}
\end{subfigure}
\caption{Left: ${\cal R}_{5+2} \left(x_5,x_1\right) = \sigma_{5+2}
\frac{ \frac{ d^2 \sigma_{5+2}^{(5,1)}}{d y_5 d y_1} }{\frac{d \sigma_{5+2}^{(5)}}{d y_5} \frac{d \sigma_{5+2}^{(1)}}{d y_1}}-1$.
Right: ${\cal R}_{5+2} \left(x_4,x_2\right) = \sigma_{5+2}
\frac{ \frac{ d^2 \sigma_{5+2}^{(4,2)}}{d y_4 d y_2} }{\frac{d \sigma_{5+2}^{(4)}}{d y_4} \frac{d \sigma_{5+2}^{(2)}}{d y_2}}-1$. }
\label{R7-1524}
\end{figure}
We plot the correlation functions using Eq.~(\ref{R}) in Figs.~\ref{R7-1425} and~\ref{R7-1524}. We choose
multiplicity $5+2$ and we show the correlation between the rapidities of jet 1 and jet 4 (Fig.~\ref{R7-1425}, left),
jet 1 and jet 3 (Fig.~\ref{R7-1425}, right), jet 1 and jet 5 (Fig.~\ref{R7-1524}, left), and jet 2 and jet 4 (Fig.~\ref{R7-1524}, right). The red lines in each of them mark the contour along which ${\cal R}=0$, while the white regions correspond to sectors of very rapid growth of ${\cal R}$.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl1vs4R04.pdf}
\caption{}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl1vs3R04.pdf}
\caption{}
\label{fig:sfig2}
\end{subfigure}
\\
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl1vs4R07.pdf}
\caption{}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl1vs3R07.pdf}
\caption{}
\label{fig:sfig2}
\end{subfigure}
\caption{Top: The correlation functions of Fig.~\ref{R7-1425} with the collinear BFKL model and for jet radius $R = 0.4.$
Bottom: The same but for jet radius $R = 0.7$.}
\label{fig:coll1}
\end{figure}
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl1vs5R04.pdf}
\caption{}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl2vs4R04.pdf}
\caption{}
\label{fig:sfig2}
\end{subfigure}
\\
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl1vs5R07.pdf}
\caption{}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.7\linewidth]{pl2vs4R07.pdf}
\caption{}
\label{fig:sfig2}
\end{subfigure}
\caption{Top: The correlation functions of Fig.~\ref{R7-1524} with the collinear BFKL model and for jet radius $R = 0.4.$
Bottom: The same but for jet radius $R = 0.7$.}
\label{fig:coll2}
\end{figure}
Our analysis here is quite simple as no relevant dynamics from the transverse coordinates of the phase space is introduced. Nevertheless, it should still be enough to capture the gross features of single rapidity distributions and double rapidity correlations which should be generally independent of the selected $p_T$ of the outermost
in rapidity jets. In the next section, we will investigate whether these gross
features are any similar to predictions derived from the BFKL formalism using a simple collinear BFKL model
implemented as a special case in our Monte Carlo code {\tt BFKLex}~\cite{Chachamis:2011rw,Chachamis:2011nz,Chachamis:2012fk,Chachamis:2012qw,Chachamis:2015zzp,Chachamis:2015ico,deLeon:2021ecb}.
\section{Correlations in BFKL with BFKLex}
In this section we evaluate the correlation functions for the rapidities of jets within the BFKL formalism. We
work with the gluon Green's function, that is, at the partonic level, where an emitted gluon is called a jet if its
$p_T$ is above some cutoff, and we leave for future work the
more complicated and detailed study of the full BFKL dynamics at the hadronic level.
At a collider with squared center-of-mass energy $s$, in the leading logarithmic approximation,
where large logarithmic terms of the form ${\bar \alpha}_s^n \ln^n{s}$ are resummed
(with ${\bar \alpha}_s = \alpha_s N_c / \pi$),
the differential partonic cross section for the production of two jets well separated in
rapidity, with transverse momenta $\vec{p}_{i=1,2}$, is given by
\begin{eqnarray}
\frac{d \hat{\sigma}}{d^2 \vec{p}_1 d^2 \vec{p}_2} &=&
\frac{\pi^2 {\bar \alpha}_s^2}{2} \frac{f(\vec{p}_1^{~2}, \vec{p}_2^{~2},Y)}{\vec{p}_1^{~2} \vec{p}_2^{~2}} \,,
\end{eqnarray}
where we take the longitudinal momentum fractions of the colliding partons to be $x_{i=1,2}$
and the rapidity difference between the two jets is $Y \sim \ln{\left(x_1 x_2 s / \sqrt{\vec{p}_1^{~2} \vec{p}_2^{~2}}\right)}$.
Within the BFKL resummation framework, one can show that the gluon Green's function $f$ follows, in a collinear approximation, the integro-differential equation
\begin{eqnarray}
\frac{\partial f (K^2,Q^2,Y) }{\partial Y} &=& \delta (K^2-Q^2) \nonumber \\
&&\hspace{-1cm}+ \,
{\bar \alpha}_s \int_0^\infty d q^2 \left(\frac{\theta (K-q)}{K^2}+\frac{\theta (q-K)}{q^2} + 4 (\ln{2}-1) \delta(q^2-K^2)\right)
f (q^2,Q^2,Y) \, ,
\end{eqnarray}
the solution of which can be cast in an iterative form:
\begin{eqnarray}
f (K^2,Q^2,Y) &=& e^{4(\ln{2}-1){\bar \alpha}_s Y} \Bigg\{\delta (K^2-Q^2) \nonumber\\
&&\hspace{-1cm}+ \, \sum_{N=1}^\infty
\frac{({\bar \alpha}_s Y)^N}{N!} \left[ \prod_{L=1}^N
\int_0^\infty d x_L \left(\frac{\theta(x_{L-1}-x_L)}{x_{L-1}} + \frac{\theta(x_L-x_{L-1})}{x_{L}}\right) \right]
\delta (x_N-Q^2) \Bigg\} \,.
\label{FCSumMC}
\end{eqnarray}
Using
$\delta(K^2-Q^2)=\int \frac{d \gamma}{2 \pi i Q^2} \left(\frac{K^2}{Q^2}\right)^{\gamma-1}$
and
\begin{eqnarray}
&&\int_0^\infty d x_N
\left(
\frac{\theta(x_{N-1}-x_N)}{x_{N-1}}
+ \frac{\theta(x_N-x_{N-1})}{x_{N}}\right)
\left(\frac{x_N}{K^2}\right)^{\gamma-1} = \left(\frac{1}{\gamma}+\frac{1}{1-\gamma}\right) \left(\frac{x_{N-1}}{K^2}\right)^{\gamma-1},
\end{eqnarray}
which is valid for $0<\gamma<1$, we can write
\begin{eqnarray}
\left[ \prod_{L=1}^N \int_0^\infty d x_L
\left(\frac{\theta(x_{L-1}-x_L)}{x_{L-1}}
+ \frac{\theta(x_L - x_{L-1})}{x_{L}}\right) \right]
\left(\frac{x_N}{K^2}\right)^{\gamma-1}
= \left(\frac{1}{\gamma}+\frac{1}{1-\gamma}\right)^N
\end{eqnarray}
so that we then have
\begin{eqnarray}
f (K^2,Q^2,Y) &=& \frac{e^{4(\ln{2}-1) {\bar \alpha}_s Y}}{Q^2} \int \frac{d \gamma}{2 \pi i} \left(\frac{K^2}{Q^2}\right)^{\gamma-1}
\sum_{N=0}^\infty \frac{\left({\bar \alpha}_s Y\right)^N}{N!}
\left(\frac{1}{\gamma}+\frac{1}{1-\gamma}\right)^N \nonumber\\
&=& \int \frac{d \gamma}{2 \pi i Q^2} \left(\frac{K^2}{Q^2}\right)^{\gamma-1} e^{{\bar \alpha}_s Y \chi\left( \gamma\right)} \, ,
\label{FCMM}
\end{eqnarray}
with $\chi(\gamma) = 4 (\ln{2}-1)+ \gamma^{-1}+ (1-\gamma)^{-1}$.
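Eq.~(\ref{FCMM}) can be evaluated numerically by choosing the standard contour $\gamma = \frac{1}{2} + i\nu$, along which $\chi$ is real. The following Python sketch (ours, not the {\tt BFKLex} implementation; ${\bar \alpha}_s = 0.2$ is an illustrative value) subtracts the $\delta(K^2-Q^2)$ term of Eq.~(\ref{FCSumMC}) so that the remaining integrand is absolutely convergent:

```python
import numpy as np

def green_reg(K2, Q2, Y, abar=0.2, nu_max=80.0, n=400_001):
    """Regular (N >= 1) part of the Green's function of Eq. (FCMM),
    evaluated along the contour gamma = 1/2 + i nu, where
    chi(1/2 + i nu) = 4 (ln 2 - 1) + 1 / (1/4 + nu^2) is real.
    The delta(K^2 - Q^2) term is subtracted, so the integrand
    decays like 1/nu^2 at large |nu|."""
    nu = np.linspace(-nu_max, nu_max, n)
    L = np.log(K2 / Q2)
    weight = np.exp(abar * Y / (0.25 + nu * nu)) - 1.0
    integrand = np.cos(nu * L) * weight   # odd (sin) part integrates to zero
    prefac = np.exp(4.0 * (np.log(2.0) - 1.0) * abar * Y) / (
        2.0 * np.pi * Q2 * np.sqrt(K2 / Q2))
    return prefac * integrand.sum() * (nu[1] - nu[0])

# The Green's function is positive and grows with the rapidity span Y
f_y2 = green_reg(K2=1.2, Q2=1.0, Y=2.0)
f_y4 = green_reg(K2=1.2, Q2=1.0, Y=4.0)
```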
From a simple inspection of Eq.~(\ref{FCSumMC}) one can infer that, for the class of jet-jet rapidity correlations we studied in Section 2, the predictions from this collinear BFKL model
should be very similar to those of the simple Chew-Pignotti model.
Indeed, in Eq.~(\ref{FCSumMC}) every increase of $N$ by one unit
accounts for the emission of a new final state gluon.
After making use of Eq.~(\ref{sigma_N2}), we can write
\begin{eqnarray}
f (K^2,Q^2,Y) &=& \sum_{N=0}^\infty {\bar \alpha}_s^{N}
\int_{0}^{Y} \prod_{i=1}^{N+1} dz_i \delta \left(Y-
\sum_{s=1}^{N+1} z_s \right) \xi^{(N)} (K^2,Q^2)
\end{eqnarray}
where
\begin{eqnarray}
\xi^{(N)} (K^2,Q^2) &=& \int \frac{d \gamma}{2 \pi i Q^2} \left(\frac{K^2}{Q^2}\right)^{\gamma-1} \chi^N\left( \gamma\right) \, .
\end{eqnarray}
Following the logic we used to obtain Eqs.~(\ref{dsdy}) and
(\ref{d2sdydy}), we get here
\begin{eqnarray}
\frac{d f_{N}^{(l)} (K^2,Q^2,Y, y_l) }{d y_l} &=& {\bar \alpha}_s^{N} \frac{\left(\frac{Y}{2}-y_l \right)^{N-l}}{(N-l)!} \frac{\left(y_l+\frac{Y}{2}\right)^{l-1}}{(l-1)!} \xi^{(N)} (K^2,Q^2) \, , \\
\frac{d^2 f_{N}^{(l,m)} (K^2,Q^2,Y, y_l,y_m) }{d y_l d y_m} &=&
{\bar \alpha}_s^{N} \frac{\left(\frac{Y}{2}-y_l \right)^{N-l}}{(N-l)!}
\frac{(y_l-y_m)^{l-m-1}}{(l-m-1)!}
\frac{\left(y_m+\frac{Y}{2}\right)^{m-1}}{(m-1)!} \xi^{(N)} (K^2,Q^2) \, .
\end{eqnarray}
Obviously, the
$\xi^{(N)}$ factor cancels out for normalized quantities
and thus we end up with the same expressions as for
the Chew-Pignotti model.
It is important to remember at this point that the full BFKL formalism
carries non-trivial dependences in rapidity, transverse momenta and azimuthal angles
which need to be studied in detail in future works.
Nevertheless, our findings here in rapidity space suggest that the
full BFKL predictions might not be totally different from
the old multiperipheral model approach.
To connect with future works, we have implemented the collinear BFKL model within our {\tt BFKLex} Monte Carlo code, setting the transverse momenta of the most forward and backward jets to 30 and 35 GeV
respectively, and their rapidity difference to $Y=4$. We also set the multiplicity of the emitted gluons (jets) to
$N=5+2$. We use the anti-$k_t$ jet clustering algorithm in its implementation
in {\tt fastjet}~\cite{Cacciari:2011ma}.
We present results from the collinear model in Figs.~\ref{fig:coll1} and~\ref{fig:coll2}.
The jet radius was chosen to take two values, $R = 0.4$ and $R = 0.7$. In Fig.~\ref{fig:coll1} we plot
the same correlation functions as in Fig.~\ref{R7-1425}, whereas in Fig.~\ref{fig:coll2}
we plot the corresponding ones for Fig.~\ref{R7-1524}. It is not surprising that the collinear model results are
very similar to the Chew-Pignotti ones, and the actual jet radius $R$ obviously does not
affect the plots significantly. Let us also note that in Figs.~\ref{fig:coll1} and~\ref{fig:coll2}
we kept the rapidity range from 0 to $Y$ to make the association with experimental data setups easier. Such
setups can be found, for example, in relevant dijet experimental analyses of 7 TeV data from both ATLAS and CMS~\cite{Aad:2011jz,Chatrchyan:2012pb,Aad:2014pua,Khachatryan:2016udy}.
\section{Conclusions}
The 13 TeV data from Run 2 of the LHC at low luminosity are suitable for
various studies of multi-jet physics. Following our recent work in Ref.~\cite{deLeon:2020myv},
we want to suggest the investigation of a particular subset of Mueller-Navelet jet events where the outermost jets
are very similar in $p_T$ and the jet multiplicity is kept fixed.
We believe that their experimental study is interesting as it might be possible to identify features of different
multi-particle production models such as those predicted by the BFKL formalism.
We have presented predictions for single and double differential distributions in jet rapidity as well as
the jet-jet correlation functions
from an old multiperipheral model, namely the Chew-Pignotti model, using analytic expressions we obtained
after performing an analysis based on the decoupling of the longitudinal and transverse coordinates.
We have also presented results for the jet-jet correlation
functions from a collinear BFKL model implemented in our Monte Carlo code {\tt BFKLex}.
In the future, we plan to perform a more complete study of these observables in high energy QCD
including the full dependence on the transverse coordinates
and moving from a partonic level analysis of the BFKL gluon Green's function
to the hadronic level with PDFs included and suitable phenomenological kinematic cuts.
Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1, 2021.
\section*{Acknowledgements}
We would like to thank the organizers of the Low-x Workshop for their excellent work.
This work has been supported by the Spanish Research Agency (Agencia Estatal de Investigaci{\'o}n) through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597 and the Spanish Government grant FPA2016-78022-P. It has also received funding from the European Union's Horizon 2020 research
and innovation programme under grant agreement No. 824093. The work of GC was supported by the Funda\c{c}{\~ a}o para a Ci{\^ e}ncia e a Tecnologia (Portugal) under project CERN/FIS-PAR/0024/2019 and contract 'Investigador auxiliar FCT - Individual Call/03216/2017'.
\iffalse
\part[Is BFKL factorization valid for Mueller-Tang jets?\\ \phantom{x}\hspace{4ex}\it{Dimitri Colferai}]{}
\section{Introduction}
Mueller-Tang (MT) jets~\cite{MuTa92} are important for studying perturbative
high-energy QCD and the Pomeron at hadron colliders. They are characterized by
final states with at least 2 jets with comparable hard transverse momenta
(${\boldsymbol k}_{J1}\sim{\boldsymbol k}_{J2} \gg \Lambda_{\mathrm QCD}$), well separated in rapidity
$Y\equiv y_{J1}-y_{J2}$, and absence of emission in a given interval of
pseudo-rapidity $\Delta\eta\lesssim Y$ in the central region (the so-called
gap). For this reason, they are also called ``jet-gap-jet'' events, and a
typical final state is depicted in fig.~\ref{f:jgj}a.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.35\linewidth]{jgj.pdf}
\hspace{0.1\linewidth}
\includegraphics[width=0.35\linewidth]{muellerTang.pdf}\\
(a)\hspace{0.45\linewidth}(b)
\caption{(a) Sketch of particle detection of a jet-gap-jet event in the
azimuth-rapidity plane. (b) Diagrammatic representation of the factorization
formula for MT jets.}
\label{f:jgj}
\end{center}
\end{figure}
The presence of the gap suggests that these events mainly occur when the momentum
exchange between the forward and backward systems is due to a colour-singlet
virtual state: a non-singlet exchange would be characterized most of the time
by final state radiation deposited in the central region.
A large rapidity interval is possible because at LHC the
center-of-mass (CM) energy is much larger than the jet transverse energy. In
this case, the coefficients of the perturbative series are enhanced by powers of
$Y\simeq \log(s/{\boldsymbol k}_J^2)$, and an all-order resummation of the leading terms
$\sim(\ensuremath{\alpha_S} Y)^n$ is needed for a proper determination of the amplitude.
\subsection{Cross section in leading logarithmic approximation}
At lowest perturbative order, a colour-singlet exchange in the $t$-channel is
due to two gluons in colour-singlet combination. At higher orders, as just
mentioned, the partonic elastic amplitude is affected by powers of
$Y\simeq \log(s/{\boldsymbol k}_J^2)$ due to gluon ladder-like diagrams. Such contributions
can be resummed into the so-called BFKL gluon Green function (GGF)~\cite{BFKL}.
It is interesting to observe that such LL loop diagrams are both UV and IR
finite.
By squaring the partonic amplitude, the LL partonic cross-section is then given
by the product of 2 GGFs, which embody the energy-dependence, and two impact
factors (IFs), that couple the gluons to the external particles. In the LL
approximation the IFs are just a trivial product of coupling constants and
colour factors.
Finally, the cross section for MT jets in the LL approximation can be expressed by the
factorization formula (see fig.~\ref{f:jgj}b)
\begin{align}
\frac{\mathrm{d}\sigma^{(LL)}}{\mathrm{d} J_1 \mathrm{d} J_2} \simeq \int
\mathrm{d} (x_1, x_2, {\boldsymbol l}_1, {\boldsymbol l}'_1, {\boldsymbol l}_2, {\boldsymbol l}'_2)\; &
f_A(x_1) \Phi_A(x_1,{\boldsymbol l}_1,{\boldsymbol l}_2;J_1) G(x_1 x_2 s,{\boldsymbol l}_1,{\boldsymbol l}_2) \nonumber\\
&\times G(x_1 x_2 s,{\boldsymbol l}'_1,{\boldsymbol l}'_2)
\Phi_B(x_2,{\boldsymbol l}'_1,{\boldsymbol l}'_2;J_2) f_B(x_2) \;. \label{ff}
\end{align}
Here $J=(y_J,{\boldsymbol k}_J)$ represents the set of jet variables,
the GGFs $G$ describe universal gluon dynamics, the IFs $\Phi_i$
describe the coupling of the reggeized gluons or pomerons to the external
particles, and the PDFs $f_i$ describe the partonic content of hadrons.
\subsection{First phenomenological analyses of jet-gap-jet events}
The importance of considering such BFKL contributions to the cross section has
been emphasized since the first analysis by CMS. The plot in
fig.~\ref{f:multiplicity} shows the number of events as a function of the
multiplicity of charged particles in the gap region. We see that both Herwig and
Pythia are able to describe the data if one or more particles are observed
between the jets, but only Herwig agrees in the first bin, without observed
particles; this happens because Herwig includes the contribution of
colour-singlet exchange from BFKL at LL.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\linewidth]{multiplicity.pdf}
\caption{CMS measurements of multiplicity of charged particles in the gap
region, and comparison with Pythia and Herwig predictions.}
\label{f:multiplicity}
\end{center}
\end{figure}
If one looks at differential distributions of JGJ events, such as distributions in
$p_\perp$ or in the rapidity distance $Y$, however, the situation is less
satisfactory. Here LL predictions are unable to describe the data (see
fig.~\ref{f:d0}a). Even if one improves the BFKL GGF by adding next-to-leading
logarithmic (NLL) contributions~\cite{NLLFL,NLLCC,KMR10,EEI17}, none of the
implementations is able to simultaneously describe all the features of the
measurements (see fig.~\ref{f:d0}b).
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.9\linewidth]{d0.pdf}\\
\caption{Comparison of various differential measurements by D0 for jet-gap-jet
events, and comparison with various theoretical models implementing the BFKL
GGF in NLL approximation.}
\label{f:d0}
\end{center}
\end{figure}
\section{Impact factor in next-to-leading logarithmic approximation}
\subsection{Structure of the calculation and final result}
It thus appears compelling to provide a full NLL description of MT jets. The
idea is to generalize the BFKL factorization formula for MT jets to the NLL
approximation.
The NLL BFKL GGF is known in the non-forward case~\cite{NLLnf}, but due to its
complexity, only the forward version~\cite{NLLFL,NLLCC} has been used to
estimate the contribution of NL logarithmic terms to the cross
section~\cite{KMR10}. However, this is not relevant for our study of the impact
factors.
The determination of the NL IF can be done with an NLO calculation, which is
affected, of course, by IR (soft and collinear) divergences. Actually, the
very existence of the NL IF is not a trivial statement. By summing virtual and real
contributions at first perturbative order, one has to prove that
\begin{itemize}
\item the $\log(s)$ terms from virtual corrections reproduce the BFKL kernel
(and this we already know);
\item the constant terms of the virtual corrections, which are IR divergent and
constitute the virtual part of the IF, must, when combined with the real
emission terms, provide a finite remainder after subtraction of the collinear
singularities (proportional to the Altarelli-Parisi splitting functions) to be
absorbed in the PDFs.
\end{itemize}
This finite remainder defines the next-to-leading impact factor. A sketch of
this decomposition is depicted in fig.~\ref{f:mtdec}.
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=-90,width=0.5\linewidth]{mtdecomposition.pdf}
\caption{Schematics of the decomposition of the (real + virtual) NLO calculation
for the determination of the NL impact factors.}
\label{f:mtdec}
\end{center}
\end{figure}
The calculation of the NL IF for MT jets was performed~\cite{HMMS14a,HMMS14b} using
Lipatov's effective action, and has been confirmed by our independent
calculation~\cite{CoDeRo21}. This is the structure of the result in the case
of an incoming quark:
\begin{align*}
\Phi({\boldsymbol l}_1,{\boldsymbol l}_2,{\boldsymbol q})&=\frac{\ensuremath{\alpha_S}^3}{2\pi(N_c^2-1)}\int_0^1\mathrm{d} z\int\mathrm{d}{\boldsymbol k}\;
S_J({\boldsymbol k},{\boldsymbol q},z)\, C_F\frac{1+(1-z)^2}{z}\\
& \times \left\{C_F^2\frac{z^2{\boldsymbol q}^2}{{\boldsymbol k}^2({\boldsymbol k}-z{\boldsymbol q})^2}
+C_F C_A\, f_1({\boldsymbol l}_{1,2},{\boldsymbol k},{\boldsymbol q},z) + C_A^2\, f_2({\boldsymbol l}_{1,2},{\boldsymbol k},{\boldsymbol q})
\right\} \;.
\end{align*}
It is important to understand the kinematics of the process (see
fig.~\ref{f:kinematics}): after the ``upper'' incoming quark interacts with the
two gluons in colour-singlet, a quark and a gluon can be found in the forward
hemisphere of the final state; the ``lower'' parton $p_2$ remains intact and is
just slightly deflected in the backward hemisphere.
\begin{figure}
\begin{center}
\includegraphics[width=0.27\linewidth]{muellerTangIFcin.pdf}%
\raisebox{5ex}{
$\times$
\begin{minipage}[c]{0.10\linewidth}
h.c.\\
$({\boldsymbol l}_1\leftrightarrow{\boldsymbol l}_2)$
\end{minipage}}
\caption{Kinematics of the calculation of the NL impact factor. Black
symbols denote 4-vectors, while purple ones denote longitudinal momentum
fractions.}
\label{f:kinematics}
\end{center}
\end{figure}
Let me denote by $k$ the outgoing gluon momentum, by ${\boldsymbol k}$ its transverse
momentum and by $z$ its longitudinal momentum fraction with respect to the
parent quark; $q$ is the overall $t$-channel transferred momentum. ${\boldsymbol k}$ and
$z$ are integration variables. Virtual contributions appear as delta-function
terms at $z = 0$ and ${\boldsymbol k} = 0$.
We can see the quark-to-gluon splitting function $P_{gq}$ as an overall factor, and
then three terms with different colour structures. The integration in the phase
space of the gluon and quark final state has to be restricted by an IR-safe jet
algorithm $S_J$, such as the $k_\perp$-algorithm.
\subsection{Violation of BFKL factorization}
In the diffractive process we are considering, one quark moves in the backward
direction and is identified with the backward jet. The other two partons, whose
distance in azimuth-rapidity is denoted by
$\Delta\Omega = \sqrt{\Delta\phi^2+\Delta y^2}$, are emitted in the forward
hemisphere, so as to produce at least one jet. There are three possibilities:
\begin{itemize}
\item $\Delta\Omega < R$ corresponding to a composite jet;
\item $\Delta\Omega > R$ where the {\em gluon is the jet} and the quark is outside the
jet cone;
\item $\Delta\Omega > R$ where the {\em quark is the jet} and the gluon is outside the
jet cone.
\end{itemize}
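As a trivial bookkeeping aid, the three configurations above can be separated by comparing the azimuth-rapidity distance of the pair with the jet-cone radius; the helper below is a hypothetical sketch for illustration, not part of the actual calculation:

```python
import math

def classify_forward_pair(dphi, dy, pt_quark, pt_gluon, R=0.4):
    """Classify the quark-gluon pair emitted in the forward hemisphere.

    dphi, dy: azimuth and rapidity distances between quark and gluon;
    R: jet-cone radius.  Returns which of the three configurations applies.
    Hypothetical helper for illustration only.
    """
    d_omega = math.hypot(dphi, dy)  # Delta Omega = sqrt(dphi^2 + dy^2)
    if d_omega < R:
        return "composite jet"
    # well-separated partons: the harder one is reconstructed as the jet
    return "gluon is the jet" if pt_gluon > pt_quark else "quark is the jet"
```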
In the last configuration there is a problem due to the $\mathrm{d} z/z$ integration
of the $C_A^2\, f_2$ term. In fact, when the quark is the jet, the gluon can
become soft and its phase-space integration is essentially unconstrained.
The limit $z\to 0$ at fixed ${\boldsymbol k}$ corresponds to finding the gluon in the central
(and backward) region, where the emission probability of the gluon turns out to
be flat in rapidity, and formally the $z$ (or $y_g$) integration diverges.
If we believe the above transition probability to be reliable at least in the
forward hemisphere ($y_g>0$), the longitudinal integration yields a
$\log(s)$. But a $\log(s)$ in the IF is not acceptable, as it goes against the spirit
of BFKL factorization, where all the energy dependence is embodied in the GGFs.
All this looks strange, because one would have argued that gluon emission in the
central region should be dynamically suppressed, due to the singlet exchange in
the $t$-channel. We will solve this puzzle later on. For the moment, we discuss
a proposal to cope with this fact in practice.
In order to avoid this problem, the authors of~\cite{HMMS14a,HMMS14b} impose an
upper limit $M_{X,\max}$ on the invariant mass of the forward diffractive
system. In that case, the $z$ variable is bounded from below and the $z$-integral
is finite. However, a crucial question then arises: do we really need to impose
a cut on the diffractive mass? Actually, {\em can} we impose such a constraint?
If it were possible, then one could avoid $\log(s)$ terms in the IF, though at the
price of introducing logarithms of the diffractive mass: $\log(M^2_{X,\max}/{\boldsymbol k}_J^2)$.
However, from the experimental point of view, in order to impose such a constraint
one should be able to measure the spectator partons, i.e., the proton remnants
(or the intact proton in case of diffractive events).
Since this is not possible with the present experimental detectors at hadron
colliders, other solutions must be found.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\linewidth]{emissioneCentrale.pdf}
\caption{Examples of diagrams that involve a non-singlet emission ``above'' the
emitted gluon, thus producing a $\log(s)$ term in the impact factor. The pink
rectangles denote colour-singlet projection.}
\label{f:ec}
\end{center}
\end{figure}
In order to find the origin of such logarithmic contributions in the IF, let us
consider a pair of diagrams contributing to the $C_A^2\,f_2$ term and drawn in
fig.~\ref{f:ec}. It is clear that, if the two $t$-channel (vertical) gluons
emitted by the lower quark are in a colour-singlet state, by colour conservation
the (one or two) upper gluons cannot be in such a state, since a (coloured)
gluon is emitted in the final state. Therefore we cannot claim that this diagram
involves a colour-singlet exchange between the upper and lower system --- the
final state gluon being in the central region.
The option of defining MT jets by selecting those diagrams that involve only
colour-singlet exchanges is not viable; in particular, it would result in a
non-gauge-invariant procedure. We therefore claim that this problem cannot be
avoided and that MT jets are not describable by the naive factorization formula
originally proposed.
At this point, it remains to estimate the size of such a violation and possibly to
resum another set of diagrams, if the violation is sizeable. Given the fact
that we cannot measure particles (partons, hadrons) below some energy threshold
$E_{\mathrm{th}}\sim 200\MeVns$, we can at most require no activity above that
threshold within the rapidity gap. This prescription is IR safe, because it is
inclusive for gluon energies $E_g < E_{\mathrm{th}}$ (our analysis here proceeds
at partonic level). Since such soft gluons can have arbitrary values of rapidity
between the two jets, we can easily estimate that the logarithmic contribution
to the impact factor is of the order
\begin{equation}
\Phi_{\log} \sim C_A^2 \frac{E_{\mathrm{th}}^2}{{\boldsymbol k}_J^2}\log\frac{s}{{\boldsymbol k}_J^2} \;.
\end{equation}
Note that this term is regular for $E_{\mathrm{th}}\to 0$ (in fact it vanishes), at
variance with the $\ord{C_F^2}$ term in the IF, which diverges in the same limit.
When evaluated with the values of energies and momenta of typical processes
analysed at LHC, this term turns out to be small, of order $1\%$ or less with
respect to other terms~\cite{CoDeRo21}.
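This order-of-magnitude statement can be checked with a one-line numerical estimate; the kinematic values below (13 TeV collisions, 30 GeV jets, 200 MeV threshold) are illustrative choices, not numbers taken from the cited analysis:

```python
import math

# Illustrative kinematics (assumptions, not from the cited analysis):
s    = 13000.0**2   # GeV^2, squared collision energy
kJ2  = 30.0**2      # GeV^2, squared jet transverse momentum
Eth2 = 0.2**2       # GeV^2, squared detection threshold
CA   = 3.0          # number of colours

# Relative size of the logarithmic term in the impact factor.
phi_log = CA**2 * (Eth2 / kJ2) * math.log(s / kJ2)
print(phi_log)
```

With these numbers the term comes out at the few-permille level, consistent with the sub-percent size quoted above.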
Although not really needed from a quantitative point of view at the moment, one
could envisage resumming such logs in the same BFKL spirit, i.e.\ by considering
diagrams where an arbitrary number of soft (below-threshold) gluons are emitted
in the gap (without being detected). This considerably enlarges the number of
diagrams to be taken into account. Some of them, like those in
fig.~\ref{f:resum}a, could be incorporated in the IF, which however acquires a
dependence on the jet rapidity as well as on the gap extension and threshold.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\linewidth]{emissioneMoltiGluoni.pdf}
\caption{Examples of diagrams resumming gluon emission from non-singlet gluon
exchange. (a) Corrections to the IF; the pink rectangles denote colour-singlet
projection. (b) Mueller-Navelet-like contributions outside the BFKL
factorization formula~\eqref{ff}.}
\label{f:resum}
\end{center}
\end{figure}
Other diagrams, like those of fig.~\ref{f:resum}b, contribute in a completely
different way, outside the structure of the MT factorization formula. Actually,
they are just diagrams of a Mueller-Navelet (two jet inclusive) process, with
the restriction that the energy of all gluons emitted in the gap region is below
threshold. For threshold energies $E_{\mathrm{th}}\ll |{\boldsymbol k}|$, this contribution
can be easily estimated in LL approximation, at least as far as the
$E_{\mathrm{th}}$-dependence is concerned: for soft emissions, virtual and real
corrections cancel each other. Since virtual corrections are always fully taken
into account, while imposing an empty gap constrains only the real emissions, what
remains is essentially equal to the virtual corrections with momenta above
$E_{\mathrm{th}}$. In the LL approximation, virtual corrections are provided by
the exponentiation of the intercept of the reggeized gluon $\omega({\boldsymbol k})$ with
its internal momentum integrated below $E_{\mathrm{th}}$, resulting in
\begin{align*}
\omega_{\mathrm{th}}({\boldsymbol k}) &\simeq -\frac{\ensuremath{\alpha_S} N_c}{\pi}\log\frac{|{\boldsymbol k}|}{E_{\mathrm{th}}} \\
\frac{\mathrm{d}\sigma_{\mathrm{oct}}}{\mathrm{d} t} &\simeq \frac{\mathrm{d}\sigma_0}{\mathrm{d} t}
\exp\left(-\frac{\ensuremath{\alpha_S} N_c}{\pi}\log\frac{{\boldsymbol k}^2}{E_{\mathrm{th}}^2} Y\right)
\end{align*}
to be compared with the MT asymptotic cross section
\begin{align*}
\frac{\mathrm{d}\sigma_{\mathrm{sing}}}{\mathrm{d} t} \simeq \frac{\mathrm{d}\sigma_0}{\mathrm{d} t}
\frac{(\ensuremath{\alpha_S} C_F \pi)^2}{2}\frac{
\exp\left(\frac{\ensuremath{\alpha_S} N_c}{\pi}8\log 2\, Y\right)}{[\frac72\ensuremath{\alpha_S} N_c\zeta(3)Y]^3} \;,
\end{align*}
$\mathrm{d}\sigma_0/\mathrm{d} t$ being the lowest order (one-gluon exchange) cross section.
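The two asymptotic expressions can be compared numerically; the sketch below (with illustrative parameter choices $\ensuremath{\alpha_S}=0.15$, ${\boldsymbol k}^2=(30\GeVns)^2$, $E_{\mathrm{th}}=0.2\GeVns$) only aims at exhibiting the steep fall of the octet contribution with $Y$:

```python
import math

def ratio_oct_over_sing(Y, a_s=0.15, k2=30.0**2, Eth2=0.2**2):
    """Octet (below-threshold emission) over singlet (MT) LL cross sections,
    both normalised to dsigma_0/dt.  Parameter values are illustrative."""
    Nc, CF, zeta3 = 3.0, 4.0 / 3.0, 1.2020569
    # octet: exponentiated reggeized-gluon trajectory cut at E_th
    octet = math.exp(-(a_s * Nc / math.pi) * math.log(k2 / Eth2) * Y)
    # singlet: Mueller-Tang asymptotics with the hard-pomeron intercept
    singlet = (0.5 * (a_s * CF * math.pi) ** 2
               * math.exp((a_s * Nc / math.pi) * 8.0 * math.log(2.0) * Y)
               / (3.5 * a_s * Nc * zeta3 * Y) ** 3)
    return octet / singlet
```

With these inputs the ratio is of order one at $Y\sim 3$ and falls quickly at larger rapidity distances.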
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\linewidth]{oct_VS_sing.pdf}
\caption{Ratio of differential cross sections in rapidity $Y$ of non-singlet
(octet) exchange (fig.~\ref{f:resum}b), emitting gluons with energies below
threshold, and singlet exchange (fig.~\ref{f:jgj}b). Two values of $\ensuremath{\alpha_S}$ are
considered.}
\label{f:singVSoct}
\end{center}
\end{figure}
Such a comparison was already made by Mueller and Tang in their original
paper~\cite{MuTa92}, but for very large values of $Y\simeq 12$ and other
parameters not corresponding to LHC kinematics, where the non-singlet exchange
is strongly suppressed with respect to the singlet one. Repeating such a
comparison with realistic LHC parameters (${\boldsymbol k}= 30 \GeVns$, $E_{\mathrm{th}}=0.2\GeVns$)
we find (fig.~\ref{f:singVSoct}) that for $Y\sim 3$ the two contributions are of
the same order, and at $Y\sim 4$ the non-singlet one is still important, about
10\% of the singlet one.
\section{Conclusions}
To summarize, we have demonstrated that, for jet-gap-jet observables, there is a
violation of the standard BFKL factorization at NLL level, since the IFs contain
logarithmically enhanced energy-dependent contributions. However, such terms are
rather small, below 1\% for current measurements of Mueller-Tang jets at LHC,
and their resummation does not appear compelling.
Nevertheless, colour non-singlet contributions are expected to be non-negligible
at LHC, in particular for small values of the rapidity distance $Y$ between
jets. Mueller-Navelet contributions below threshold should in this case be
included, unless NLL corrections to the latter provide a further suppression so
as to render them irrelevant. But this requires further studies.
\vspace{2ex}
Comments: Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1 2021.
\section*{Acknowledgements}
I thank Krzysztof Kutak and Leszek Motyka for useful discussions on this
subject.\\[1mm]
\noindent
\includegraphics[width=2.2em,angle=90]{eu.pdf}~
\begin{minipage}[b]{0.9\linewidth}
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824093.
\end{minipage}
\iffalse
\part[Charmonia photo-production in ultra-peripheral and peripheral PbPb collisions with LHCb\\ \phantom{x}\hspace{4ex}\it{Weisong Duan on behalf of the LHCb collaboration}]{}
\section{Introduction}
The LHCb detector is a single-arm forward spectrometer fully instrumented in the pseudorapidity range $2 < \eta < 5$~\cite{Collaboration_2008}. It has a high-precision tracking system, which provides excellent vertex and momentum resolution, and full particle identification. Compared to ALICE, CMS and ATLAS, LHCb covers the forward rapidity region, providing better access to the gluon distribution at small $x$.
The photonuclear production of vector mesons such as the $J/\psi$ is sensitive to the gluon parton distribution function in the nucleus at small Bjorken-$x$, which is estimated by $x \approx (m_{J/\psi}\cdot e^{-y})/\sqrt{s_{NN}}$, where $m_{J/\psi}$ is the mass of the $J/\psi$ and $y$ is its rapidity. Coherent photoproduction of the $J/\psi$ meson provides a way to study nuclear shadowing effects at small Bjorken-$x$, ranging from $10^{-5}$ to $10^{-2}$ at LHC energies.
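To make the quoted $x$ range concrete, one can evaluate this estimate over the LHCb acceptance; the snippet below uses the standard $J/\psi$ mass and an illustrative $\sqrt{s_{NN}}\simeq 5$ TeV:

```python
import math

M_JPSI = 3.097        # GeV, J/psi mass
SQRT_S_NN = 5020.0    # GeV, illustrative PbPb collision energy (~5 TeV)

def bjorken_x(y):
    """Bjorken-x probed by J/psi photoproduction at rapidity y,
    x ~ m_{J/psi} * exp(-y) / sqrt(s_NN), photon-going direction."""
    return M_JPSI * math.exp(-y) / SQRT_S_NN

# Over the LHCb acceptance 2 < y < 4.5 this probes x of order 1e-6 .. 1e-4.
print(bjorken_x(2.0), bjorken_x(4.5))
```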
\section{Study of coherent $J/\psi$ production in ultra-peripheral lead-lead collisions at $\sqrt{s_{NN}}$ = 5 TeV}
Ultra-peripheral collisions (UPCs) are $\mathrm{PbPb \to Pb + Pb + X}$ interactions in which the two ions interact via their clouds of virtual photons. If the photon couples coherently to the nucleus as a whole, the process is called coherent production; if it couples to a single nucleon, leading to the breakup of the target nucleus, it is called incoherent production.
In UPCs, coherent $J/\psi$ meson production can be described by the interaction between photons and gluons. According to Regge theory~\cite{PhysRevC.86.014905, PhysRevC.93.055206}, the gluons are considered as a single object with vacuum quantum numbers, called the pomeron (${\rm I\!P}$). The cross-section for photoproduction gives constraints on the gluon parton distribution functions. This process has low multiplicity and very low transverse momentum $p_{T}$.
The $J/\psi$ mesons are reconstructed through the $J/\psi \to \mu^{+}\mu^{-}$ decay channel, using 2015 Pb-Pb data samples, corresponding to an integrated luminosity of 10 $\mu b^{-1}$. An ultra-peripheral electromagnetic interaction could occur simultaneously with the hadron collision. The HeRSCheL detector~\cite{Akiba_2018} is used to reject backgrounds from hadronic interactions.
The number of candidates is obtained by fitting the di-muon spectrum as shown in Fig.~\ref{massfit}. The $J/\psi$ and $\psi(2S)$ mass peaks are modeled by a double-sided Crystal Ball function, and the non-resonant background is modeled by an exponential function multiplied by a first-order polynomial. Thus we determine the number of $J/\psi$ candidates within the $J/\psi$ mass window 3040--3165 MeV/$c^{2}$ and the number of $\psi(2S)$ candidates within the $\psi(2S)$ mass window 3608--3763 MeV/$c^{2}$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.35\textwidth]{figs/Fig2.pdf}
\caption{The di-muon invariant mass distribution in the rapidity range between 2.0 and 4.5. The solid blue line corresponds to the $J/\psi$ meson. The solid green line represents the $\psi(2S)$ meson. The dashed black line represents non-resonant background.}
\label{massfit}
\end{center}
\end{figure}
To determine the coherent $J/\psi$ production, a fit to the ${\rm log} (p^{2}_{T})$ is performed to extract the coherent $J/\psi$ mesons within the $J/\psi$ mass window. The ${\rm log} (p^{2}_{T})$ distribution of the $J/\psi$ mesons is shown in Fig.~\ref{logpt2fit}. In short, the number of inclusive $J/\psi$ mesons is obtained by the invariant mass fit, and the number of coherent $J/\psi$ mesons is obtained by the ${\rm log} (p^{2}_{T})$ fit.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.35\textwidth]{figs/Fig3.pdf}
\caption{The ${\rm log} (p^{2}_{T})$ distribution of $J/\psi$ candidates in the interval 2.5 < $y$ < 3.0, where the unit of $p_{T}$ is GeV/c. The solid blue line represents coherent $J/\psi$ distribution. The dashed red line represents the incoherent distribution. The dashed green line represents feed down from $\psi(2S)$. The dashed black line represents non-resonant background.}
\label{logpt2fit}
\end{center}
\end{figure}
The results of the differential cross-section are calculated in five rapidity bins, as shown in Fig.~\ref{csresults}. The experimental results are compared with theoretical predictions~\cite{PhysRevC.93.055206, PhysRevC.84.011902, PhysRevC.97.024901, PhysRevD.96.094027, MANTYSAARI2017832}. The coherent $J/\psi$ production cross-section is given by:
\begin{equation}
\frac{\mathrm{d}\sigma_{coh,J/\psi}}{\mathrm{d}y} = \frac{N_{coh,J/\psi}}{\varepsilon_{t}\cdot \mathcal{L}\cdot\Delta y \cdot \mathcal{B}(J/\psi \to \mu^{+}\mu^{-})}
\end{equation}
where $\varepsilon_{t}$ is the total efficiency, $\mathcal{L}$ is the integrated luminosity of the Pb-Pb data sample, and $\mathcal{B} = (5.961 \pm 0.033)\%$ is the $J/\psi \to \mu^{+}\mu^{-}$ branching ratio.
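As a purely arithmetical illustration of this formula (all input numbers below are invented placeholders, not LHCb results):

```python
# Hypothetical inputs for one rapidity bin (placeholders for the arithmetic):
N_coh = 500         # fitted coherent J/psi yield
eff   = 0.25        # total efficiency
lumi  = 10.0        # integrated luminosity, in inverse microbarn
dy    = 0.5         # rapidity bin width
BR    = 0.05961     # B(J/psi -> mu+ mu-)

# dsigma/dy = N / (eff * L * dy * BR), here in microbarn
dsigma_dy = N_coh / (eff * lumi * dy * BR)
print(dsigma_dy)
```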
\begin{figure}
\begin{center}
\includegraphics[height=0.45\textwidth]{figs/Fig4.pdf}
\caption{Differential cross-section of coherent $J/\psi$ production as a function of rapidity, with comparisons to phenomenological models.}
\label{csresults}
\end{center}
\end{figure}
Similarly, ALICE measured the coherent $J/\psi$ production cross-section~\cite{2019134926}, so we also compare the coherent $J/\psi$ production cross-sections between ALICE and LHCb, as shown in Fig.~\ref{cscompare}. The LHCb result is slightly lower than the ALICE measurement, by around 1.3 $\sigma$. Measurements of the coherent $J/\psi$ and $\psi(2S)$ are currently underway using the 2018 Pb-Pb data, corresponding to an integrated luminosity of 210 $\mu b^{-1}$. Fig.~\ref{2018massfit} shows the di-muon invariant mass distribution in the range between 2.7 and 4.0 GeV. The final results are expected in the near future.
\begin{figure}
\begin{center}
\includegraphics[height=0.45\textwidth]{figs/Fig5.pdf}
\caption{Differential cross-section of coherent $J/\psi$ production as a function of rapidity, with comparison to ALICE measurements.}
\label{cscompare}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[height=0.45\textwidth]{figs/fit_mass_allcut.pdf}
\caption{The invariant mass distribution of $J/\psi$ and $\psi(2S)$ candidates. The solid green line represents $J/\psi$ signal, the solid red line represents $\psi(2S)$ signal, and the yellow regions represents non-resonant background.}
\label{2018massfit}
\end{center}
\end{figure}
\section{Study of $J/\psi$ photo-production in lead-lead peripheral collisions at $\sqrt{s_{NN}}$ = 5 TeV}
The second result in this contribution is photo-production of $J/\psi$ at low $p_{T}$, studied in peripheral Pb-Pb collisions at $\sqrt{s_{NN}}$ = 5 TeV, using the data sample collected by LHCb in 2018, with an integrated luminosity of about 210 $\mu b^{-1}$~\cite{lhcbcollaboration2021study}.
The $J/\psi$ candidates are selected through the $J/\psi \to \mu^{+}\mu^{-}$ decay channel. The di-muon invariant mass spectrum of the selected candidates in the range between 3.0 and 3.2 GeV is shown in the left of Fig.~\ref{mass_and_pt}, for $J/\psi$ mesons with $p_{T} < 15.0$ GeV/c and a number of participants $\left<N_{part}\right>$ = 10.6 $\pm$ 2.9, in the full LHCb rapidity coverage $2 < y < 4.5$.
The inclusive $J/\psi$ candidates consist of photo-produced and hadronically produced $J/\psi$ mesons, which are separated by an unbinned maximum-likelihood fit to the ${\rm log} (p^{2}_{T})$ distribution, as shown in the right of Fig.~\ref{mass_and_pt}. In this figure, the photo-produced $J/\psi$ yield (red dotted line) is visible in the transverse-momentum range between 0 and 250 MeV/c.
Fig.~\ref{compare} shows the photo-produced $J/\psi$ meson yields as a function of $p_{T}$ (right) and $\left<N_{part}\right>$ (left). The mean $p_{T}$ of the coherent $J/\psi$ is estimated to be $\left<p_{T}\right>$ = 64.9 $\pm$ 2.4 MeV/c. Theoretical predictions~\cite{PhysRevC.97.044910, PhysRevC.99.061901} are drawn as open circles, and are in qualitative agreement with the shape of the experimental results.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.33\textwidth]{figs/Fig1a.pdf}
\includegraphics[height=0.33\textwidth]{figs/Fig1b.pdf}
\caption{The left plot is the invariant mass distribution of $J/\psi$ candidates in $p_{T}$ < 15.0 GeV/c and 2 < $y$ <4.5, with $\left<N_{part}\right>$ = 10.6 $\pm$ 2.9. The right plot is the ${\rm ln} (p^{2}_{T})$ distribution of $J/\psi$ candidates after background subtraction for the same kinematic interval.}
\label{mass_and_pt}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.33\textwidth]{figs/Fig2b.pdf}
\includegraphics[height=0.33\textwidth]{figs/Fig2c.pdf}
\caption{Illustration of invariant yields of photo-produced $J/\psi$ mesons as a function of $N_{part}$ and $p_{T}$. Note that the blue error bars represent the statistical uncertainty and the red error bars the total uncertainty. Theoretical model predictions~\cite{PhysRevC.97.044910, PhysRevC.99.061901} are shown in open circles in black and green.}
\label{compare}
\end{center}
\end{figure}
\section{Conclusions}
The LHCb detector's unique geometric acceptance allows us to study nuclear shadowing in the small-$x$ region through UPC collisions. The $J/\psi$ photoproduction in UPC PbPb collisions at 5 TeV is measured by LHCb and compared to the ALICE results. Photo-produced $J/\psi$ mesons in peripheral PbPb collisions are also measured with high precision at very low $p_{T}$ and compared to theoretical calculations. These results demonstrate the capabilities of the LHCb detector in studying nuclear effects. More results from the large 2018 PbPb dataset are expected in the future.
\iffalse
\part[Precision measurements of jet production at the ATLAS experiment\\ \phantom{x}\hspace{4ex}\it{Francesco Giuli on behalf of the ATLAS Collaboration}]{}
\section{Introduction}
In this proceeding, a review of the most recent ATLAS measurements of jet production is reported. Four different analyses are presented: the first one refers to a measurement of soft-drop jet observables~\cite{ATLAS:2019mgf}, the second one to a measurement of hadronic event shapes in high-$p_{\mathrm{T}}$ multijet final states~\cite{ATLAS:2020vup}, the third one to a measurement of the Lund jet plane using charged particles~\cite{ATLAS:2020bbn} and the last one to a measurement of $b$-quark fragmentation properties in jets using the decay $B^{\pm}\rightarrow J/\psi K^{\pm}$~\cite{ATLAS:2021agf}. All these analyses use $pp$ collision data collected with the ATLAS detector~\cite{ATLAS:2008xda} at $\sqrt{s}$ = 13~TeV at the Large Hadron Collider (LHC).
\section{Measurement of soft-drop jet observables}
Jet substructure quantities are measured using jets groomed with the \textit{soft-drop} grooming procedure~\cite{Larkoski:2014wba} in dijet events from data corresponding to an integrated luminosity of 32.9 fb$^{-1}$. This algorithm proceeds as follows. After a jet is clustered using any algorithm, its constituents are reclustered using the Cambridge-Aachen (C/A) algorithm~\cite{Dokshitzer:1997in,Wobisch:1998wt}, which iteratively clusters the closest constituents in azimuth and rapidity. Then, the last step of the C/A clustering algorithm is undone, breaking the jet $j$ into two subjets, namely $j_{1}$ and $j_{2}$, which are used to evaluate the soft-drop condition:
\begin{equation}
\dfrac{\min(p_{\mathrm{T},j_{1}},p_{\mathrm{T},j_{2}})}{p_{\mathrm{T},j_{1}} + p_{\mathrm{T},j_{2}}} > z_{\mathrm{cut}}\left(\dfrac{\Delta R_{12}}{R}\right)^{\beta},
\end{equation}
where $\Delta R_{12}$ is the distance between the two subjets, $R$ represents the jet radius and $p_{\mathrm{T},j_{i}}$ is the transverse momentum of the subjet $j_{i}$. The parameters $\beta$ and $z_{\mathrm{cut}}$ are algorithm parameters which determine the sensitivity of the algorithm to soft and wide-angle radiation. If the two subjets fail the soft-drop condition, the subjet characterised by the lower $p_{\mathrm{T}}$ is removed, and the other subjet is relabelled as $j$ and the procedure is iterated. When the soft-drop condition is satisfied, the algorithm is stopped, and the resulting jet is the soft-dropped jet.\\
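The grooming loop just described can be sketched on a precomputed declustering history; the data structure below is a toy stand-in for the C/A tree (real analyses use e.g.\ FastJet), so it should be read as an illustration of the iteration, not as a full jet algorithm:

```python
def soft_drop(splittings, R, z_cut=0.1, beta=0.0):
    """Walk a precomputed C/A declustering history, from the full jet inwards.

    splittings: list of (pt1, pt2, dR12), one entry per declustering step
    (a toy stand-in for the C/A tree).  Softer prongs are dropped until the
    soft-drop condition holds; the surviving splitting defines the groomed jet.
    """
    for pt1, pt2, dR12 in splittings:
        z = min(pt1, pt2) / (pt1 + pt2)
        if z > z_cut * (dR12 / R) ** beta:
            return (pt1, pt2, dR12)   # condition met: stop grooming
        # condition failed: drop the softer subjet, follow the harder branch
    return None                        # the whole jet was groomed away
```

For $\beta=0$ the condition reduces to a flat cut $z>z_{\mathrm{cut}}$, independent of the subjet opening angle.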
This analysis presents two closely related substructure observables, which are calculated from jets after they have been groomed with the soft-drop algorithm:
\begin{itemize}
\item the dimensionless version of the jet mass, $\rho=\log(m^{2}/p_{\mathrm{T}}^{2})$;
\item the opening angle between the two subjets that pass the soft-drop condition, $r_{g}$.
\end{itemize}
The unfolded data are compared to Monte Carlo (MC) events generated at leading order (LO) with \texttt{PYTHIA8.186}~\cite{Sjostrand:2006za,Sjostrand:2007gs}, \texttt{SHERPA2.1}~\cite{Gleisberg:2008ta,Sherpa:2019gpd} and \texttt{HERWIG++ 2.7}~\cite{Bahr:2008pv,Corcella:2000bw}, as reported in Figure~\ref{fig:softdrop}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.443\textwidth]{fig_06b.pdf}
\includegraphics[width=0.443\textwidth]{fig_06f.pdf}
\end{center}
\caption{Comparison of the unfolded distributions with MC predictions. The uncertainty bands include all sources of systematic uncertainties. Left: $\rho$, $\beta=0$. Right: $\rho$, $\beta=0$. These plots are taken from Ref.~\cite{ATLAS:2019mgf}.}
\label{fig:softdrop}
\end{figure}
Several trends are visible in these results. For $\rho$, the MC predictions are mostly accurate within 10\% except for the lowest relative masses, which are dominated by non-perturbative physical effects. This becomes more visible for larger values of $\beta$, where more soft radiation is included within the jet, increasing the size of the non-perturbative effects. In addition, in the high-relative-mass region, where the effects of the fixed-order (FO) calculation are relevant, some differences between MC generators are seen. A similar trend may be seen for $r_{g}$, where the small-angle region (i.e. where non-perturbative effects are largest) shows more pronounced differences between MC generators.\\
Several calculations have been performed to predict the $\rho$ distributions, and unfolded data are compared with these predictions (more details on the predictions can be found in Ref.~\cite{ATLAS:2019mgf}), as shown in Figure~\ref{fig:softdrop_LL}. The LO+next-to-next-to-leading-logarithm (NNLL) and next-to-leading-order+next-to-leading-logarithm (NLO+NLL) calculations are able to model the data in the resummation region ($-3\lesssim\rho\lesssim -1$), with the NLO+NLL calculation providing an accurate description of the data for high values of $\rho$. We can also observe how, in the region where the FO effects are dominant, the LO+NNLL and NNLL calculations do not model data well. This behaviour is expected, since the calculations do not include terms beyond LO at Matrix-Element (ME) level.
\section{Measurement of hadronic event shapes}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.443\textwidth]{fig_09a.pdf}
\includegraphics[width=0.443\textwidth]{fig_09f.pdf}
\end{center}
\caption{Comparison of the unfolded $\rho$ distribution with different theory predictions. The open marker style indicates that non-perturbative effects on the calculation are expected to be large. ``NP'' indicates that non-perturbative corrections have been applied. The uncertainty bands include all sources of systematic uncertainties. Left: $\beta=0$. Right: $\beta=2$. These plots are taken from Ref.~\cite{ATLAS:2019mgf}.}
\label{fig:softdrop_LL}
\end{figure}
Event shapes~\cite{Banfi:2004nk,Banfi:2010xy} are a class of observables that describe the dynamics of energy flows in multijet final states. They are usually defined to be infrared- and collinear-safe, and they are sensitive to different aspects of the theoretical description of strong-interaction processes. For example, hard, wide-angle radiation is studied by investigating the tails of these distributions, while other regions of the event-shape distributions provide information about anisotropic, back-to-back configurations, which are sensitive to the details of the resummation of soft logarithms in the theoretical predictions.\\
The dataset used in this analysis comprises the 2015-2018 data-taking period, corresponding to an integrated luminosity of 139 fb$^{-1}$. In this paper, several event-shape variables are presented. For each event, the thrust axis $\hat{n}_{\mathrm{T}}$ is defined as the direction with respect to which the sum of the projected jet transverse momenta is maximised~\cite{Brandt:1964sa,Farhi:1977sg}. The transverse thrust $T_{\perp}$ and its minor component $T_{\mathrm{m}}$ can be expressed as:
\begin{equation}
T_{\perp} = \dfrac{\sum_{i}|\vec{p}_{\mathrm{T},i}\cdot\hat{n}_{\mathrm{T}}|}{\sum_{i}|\vec{p}_{\mathrm{T},i}|}; \;\;\;\;\; T_{\mathrm{m}} = \dfrac{\sum_{i}|\vec{p}_{\mathrm{T},i}\times\hat{n}_{\mathrm{T}}|}{\sum_{i}|\vec{p}_{\mathrm{T},i}|},
\end{equation}
where the index $i$ runs over all jets in the event. These quantities are used to define $\tau_{\perp}=1-T_{\perp}$.\\
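Since the thrust axis is defined by a maximisation, a brute-force numerical evaluation is straightforward; the sketch below scans candidate axis directions in the transverse plane (a simple illustration, not the procedure used in the analysis):

```python
import math

def transverse_thrust(jets, n_steps=2000):
    """Return (T_perp, T_minor) for a list of transverse momenta (px, py),
    scanning n_steps candidate thrust-axis directions in [0, pi)."""
    norm = sum(math.hypot(px, py) for px, py in jets)
    best = (-1.0, 0.0)
    for i in range(n_steps):
        theta = math.pi * i / n_steps
        nx, ny = math.cos(theta), math.sin(theta)
        # projections parallel and perpendicular to the candidate axis
        t_par = sum(abs(px * nx + py * ny) for px, py in jets) / norm
        t_min = sum(abs(px * ny - py * nx) for px, py in jets) / norm
        if t_par > best[0]:
            best = (t_par, t_min)   # keep the axis maximising T_perp
    return best
```

For a perfectly back-to-back dijet event this gives $T_{\perp}=1$ and $T_{\mathrm{m}}=0$, i.e.\ $\tau_{\perp}=0$.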
Several MC samples were used for this analysis, and they were produced using \texttt{PYTHIA8.235}~\cite{Sjostrand:2014zea}, \texttt{SHERPA2.1}~\cite{Gleisberg:2008ta,Sherpa:2019gpd}, \texttt{HERWIG7.1.3}~\cite{Bellm:2017bvx} and \texttt{MadGraph5{\_}aMC@NLO 2.3.3}~\cite{Alwall:2014hca_giugli}, together with \texttt{PYTHIA8.212}~\cite{Sjostrand:2014zea} (hereafter referred to as \texttt{MG5{\_}aMC}). Unfolded data are compared to the above-mentioned MC predictions in various bins of the jet multiplicity, $n^{\mathrm{jet}}$ ($=$ 2, 3, 4, 5 and $\geq$ 6), and the scalar sum of transverse momenta of the two leading jets, $H_{\mathrm{T}2}=p_{\mathrm{T1}}+p_{\mathrm{T2}}$ (1~TeV $<H_{\mathrm{T}2}<$ 1.5~TeV, 1.5~TeV $<H_{\mathrm{T}2}<$ 2.0~TeV and $H_{\mathrm{T}2}>$ 2.0~TeV).\\
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig_04b.pdf}
\includegraphics[width=0.8\textwidth]{fig_05b.pdf}
\end{center}
\caption{Comparison between data and MC simulation for different jet multiplicities in the 1.5~TeV $<H_{\mathrm{T}2}<$ 2.0~TeV bin. The right panels show the ratios between the MC and the data distributions. The error bars show the total uncertainty (statistical and systematic added in quadrature) and the grey bands in the right panels show the systematic uncertainty. Top: normalised cross section as a function of $\tau_{\perp}$. Bottom: normalised cross section as a function of $T_{\mathrm{m}}$. These plots are taken from Ref.~\cite{ATLAS:2020vup}.}
\label{fig:eventshape}
\end{figure}
The normalised cross section as a function of $\tau_{\perp}$ and $T_{\mathrm{m}}$ is shown in Figure~\ref{fig:eventshape}. The MC simulations tend to underestimate the data in the intermediate region of $\tau_{\perp}$ for low jet multiplicities, while the measurements are underestimated by all MC predictions at high $\tau_{\perp}$ values. The shape of the distributions tends to agree with data for larger $n^{\mathrm{jet}}$. The \texttt{HERWIG7} prediction based on the dipole shower strongly underestimates the ATLAS data at low values of $\tau_{\perp}$, whereas the measurements are overestimated by \texttt{PYTHIA8} in this region. Very similar conclusions can be drawn looking at the normalised cross section as a function of $T_{\mathrm{m}}$. \texttt{Sherpa} simulations predict fewer isotropic events than in data, while the \texttt{MG5{\_}aMC} predictions are closer to the measurements. Regarding the $H_{\mathrm{T}2}$ dependence of the depicted results, there are more isotropic events at low energies, with increasing alignment of the jets with the thrust axis at higher energy scales.\\
In summary, none of the MC predictions provides a good description of the ATLAS measurements in all regions of the phase space. The \texttt{HERWIG7} and \texttt{MG5{\_}aMC} computations are closest to the data (though with significant discrepancies), highlighting the limited ability of parton-shower (PS) models to simulate hard, wide-angle radiation and further emphasising that the addition of $2\rightarrow3$ processes in the ME improves the description of the measured data.
\section{Measurement of the Lund jet plane}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.43\textwidth]{fig_01a.pdf}
\includegraphics[width=0.43\textwidth]{lp_construction_cartoon.pdf}
\end{center}
\caption{Schematic representation of the LJP. The left-hand plot is taken from Ref.~\cite{ATLAS:2020bbn}.}
\label{fig:LJP}
\end{figure}
The Lund plane is a powerful representation for providing insight into jet substructure. A recent proposal~\cite{Dreyer:2018nbf} describes a method to construct an observable analog of the Lund plane using jets, which captures the salient features of this representation. Jets are formed using clustering algorithms that sequentially combine pairs of proto-jets starting from the initial set of constituents~\cite{Salam:2010nqg}. In this proposal, a jet's constituents are reclustered using the C/A algorithm~\cite{Dokshitzer:1997in,Wobisch:1998wt}. Then, the C/A history is reversed, and each jet is declustered, starting from the hardest proto-jet. The Lund plane can be approximated by using the harder (softer) proto-jet to represent the core (emission) in the original theoretical depiction. For each proto-jet pair, at each step in the C/A declustering sequence, an entry is made in the primary Lund jet plane (LJP) through the observables $\ln(1/z)$ and $\ln(R/\Delta R)$, with
\begin{equation}
z=\dfrac{p_{\mathrm{T}}^{\mathrm{emission}}}{p_{\mathrm{T}}^{\mathrm{emission}}+p_{\mathrm{T}}^{\mathrm{core}}} \;\; \mathrm{and} \;\; \Delta R^{2} = (y_{\mathrm{emission}} - y_{\mathrm{core}})^{2} + (\phi_{\mathrm{emission}} - \phi_{\mathrm{core}})^{2}.
\end{equation}
A schematic representation of the LJP can be found in Figure~\ref{fig:LJP}.\\
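A minimal sketch of the per-step mapping may help. The function name, the input triples and the jet radius $R$ below are illustrative assumptions, not the analysis code:

```python
import math

def lund_coordinates(core, emission, R=0.4):
    """Map one C/A declustering step to primary-Lund-plane coordinates
    (ln(1/z), ln(R/DeltaR)).  `core` and `emission` are (pt, y, phi)
    triples for the harder and softer proto-jet of the pair."""
    pt_c, y_c, phi_c = core
    pt_e, y_e, phi_e = emission
    z = pt_e / (pt_e + pt_c)
    dphi = math.pi - abs(abs(phi_e - phi_c) - math.pi)  # wrap into [0, pi]
    dr = math.hypot(y_e - y_c, dphi)
    return math.log(1.0 / z), math.log(R / dr)
```

Iterating this over the full declustering sequence of each jet fills the LJP; soft emissions populate large $\ln(1/z)$ and collinear emissions populate large $\ln(R/\Delta R)$.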
This measurement is conducted using the full Run 2 statistics, for an integrated luminosity of 139 fb$^{-1}$. To perform the data unfolding, several samples of dijet events were simulated. \texttt{PYTHIA8.186} \cite{Sjostrand:2006za,Sjostrand:2007gs} was used for simulating the nominal sample. Additional samples were simulated using NLO MEs from \texttt{POWHEG}~\cite{Nason:2004rx_giugli,Frixione:2007vw_giugli,Alioli:2010xa,Alioli:2010xd_giugli}, as well as \texttt{Sherpa2.2.5}~\cite{Sherpa:2019gpd} and \texttt{HERWIG 7.1.3}~\cite{Bellm:2017bvx}.\\
The data from two selected slices of the LJP, together with the breakdown of the major systematic uncertainties, are shown in Figure~\ref{fig:LJP_data}. ATLAS data and several MC predictions are compared. The \texttt{Herwig7.1.3} angle-ordered prediction provides the best description across most of the plane, while no prediction describes the data accurately in all regions. The differences in the hadronization algorithms implemented in \texttt{Sherpa2.2.5} are particularly visible at the transition between the perturbative and non-perturbative regions of the plane. The \texttt{POWHEG+PYTHIA} and \texttt{PYTHIA} predictions only differ significantly for hard and wide-angle perturbative emissions, where ME corrections are relevant.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.443\textwidth]{fig_03a.pdf}
\includegraphics[width=0.443\textwidth]{fig_03b.pdf}
\end{center}
\caption{Representative horizontal and vertical slices through the LJP. Unfolded data are compared with particle-level simulation from several MC generators. The uncertainty band includes all sources of systematic and statistical uncertainty. The inset triangle illustrates which slice of the plane is depicted. Left: 0.67 $<\ln(R/\Delta R)<$ 1.0. Right: 1.80 $<\ln(1/z)<$ 2.08. These plots are taken from Ref.~\cite{ATLAS:2020bbn}.}
\label{fig:LJP_data}
\end{figure}
\section{Measurement of $b$-quark fragmentation properties}
The fragmentation of heavy quarks is a crucial aspect of Quantum Chromodynamics (QCD). Detailed studies and precision measurements of the heavy-quark fragmentation properties allow a deeper understanding of QCD. The MC predictions used at the LHC are tuned to describe the measurements in $e^{+}e^{-}$ collisions at relatively low $\sqrt{s}$. Therefore, new measurements of $b$-quark fragmentation can be used to improve MC simulations at LHC energy scales.\\
This analysis presents a measurement of $b$-quark fragmentation into $B^{\pm}$ mesons and it uses the full Run 2 data set, corresponding to an integrated luminosity of 139 fb$^{-1}$. The $B^{\pm}$ mesons are then reconstructed via the $B^{\pm}\rightarrow J/\psi K^{\pm}\rightarrow\mu^{+}\mu^{-}K^{\pm}$ decay chain. After the matching between the jet and the reconstructed $B$ meson, two variables of interest are built as follows:
\begin{equation}
z=\dfrac{\vec{p}_{B}\cdot\vec{p}_{j}}{|\vec{p}_{j}|^{2}} \;\; \mathrm{and} \;\; p_{\mathrm{T}}^{\mathrm{rel}}=\dfrac{|\vec{p}_{B}\times\vec{p}_{j}|}{|\vec{p}_{j}|},
\end{equation}
where $\vec{p}_{B}$ is the three-momentum of the $B$ hadron and $\vec{p}_{j}$ is the jet three-momentum. The measurement is performed in three different intervals of the jet transverse momentum, namely: 50 $<p_{\mathrm{T}}^{\mathrm{jet}}<$ 70~GeV, 70 $<p_{\mathrm{T}}^{\mathrm{jet}}<$ 100~GeV and $p_{\mathrm{T}}^{\mathrm{jet}}>$ 100~GeV.\\
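As a sanity check of these definitions, here is a minimal sketch; the function name and inputs are hypothetical, and the cross product is reduced to its magnitude so that $p_{\mathrm{T}}^{\mathrm{rel}}$ is a scalar:

```python
import math

def frag_variables(p_B, p_jet):
    """Longitudinal profile z and transverse profile pT_rel of a B meson
    with respect to its matched jet; p_B and p_jet are 3-momenta
    (px, py, pz)."""
    pj2 = sum(j * j for j in p_jet)
    dot = sum(b * j for b, j in zip(p_B, p_jet))
    cross = (p_B[1] * p_jet[2] - p_B[2] * p_jet[1],
             p_B[2] * p_jet[0] - p_B[0] * p_jet[2],
             p_B[0] * p_jet[1] - p_B[1] * p_jet[0])
    z = dot / pj2
    pt_rel = math.sqrt(sum(c * c for c in cross)) / math.sqrt(pj2)
    return z, pt_rel
```

A $B$ meson collinear with the jet gives $p_{\mathrm{T}}^{\mathrm{rel}}=0$, while $z$ measures the fraction of the jet momentum carried along the jet axis.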
Several different models of multijet production are used. These samples have been generated using \texttt{SHERPA2.2.5}~\cite{Sherpa:2019gpd}, \texttt{PYTHIA8.240}~\cite{Sjostrand:2006za,Sjostrand:2007gs} and \texttt{HERWIG7.2.1}~\cite{Bellm:2017bvx}, with substantial differences in the ME calculations, as well as PS algorithms and hadronisation models. The decays of the $B$ mesons were modelled using \texttt{EVTGEN1.6.0} code~\cite{Lange:2001uf} for all the above-mentioned samples.\\
These predictions are then compared with the particle-level results, as shown in Figure~\ref{fig:bfrag}, where the longitudinal ($z$) and transverse ($p_{\mathrm{T}}^{\mathrm{rel}}$) profiles for each $p_{\mathrm{T}}$ bin are reported. The results show important differences between the low and high $p_{\mathrm{T}}$ bins. In particular, the lower tails of the $z$ distributions contain a larger fraction of the high-$p_{\mathrm{T}}$ data, due to the larger probability of gluon splitting, $g\rightarrow b\bar{b}$, at high values of the jet $p_{\mathrm{T}}$.\\
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.443\textwidth]{fig_06a_bfrag.pdf}
\includegraphics[width=0.443\textwidth]{fig_06b_bfrag.pdf}
\includegraphics[width=0.443\textwidth]{fig_07a_bfrag.pdf}
\includegraphics[width=0.443\textwidth]{fig_07b_bfrag.pdf}
\includegraphics[width=0.443\textwidth]{fig_08a_bfrag.pdf}
\includegraphics[width=0.443\textwidth]{fig_08b_bfrag.pdf}
\end{center}
\caption{Distributions of the longitudinal profile $z$ and the transverse profile $p_{\mathrm{T}}^{\mathrm{rel}}$ together with different predictions from \texttt{PYTHIA8}, \texttt{SHERPA} and \texttt{HERWIG 7}. The vertical error bars represent the total experimental uncertainties. Top: 50 $<p_{\mathrm{T}}^{\mathrm{jet}}<$ 70~GeV. Middle: 70 $<p_{\mathrm{T}}^{\mathrm{jet}}<$ 100~GeV. Bottom: $p_{\mathrm{T}}^{\mathrm{jet}}>$ 100~GeV. These plots are taken from Ref.~\cite{ATLAS:2021agf}.}
\label{fig:bfrag}
\end{figure}
The \texttt{SHERPA} predictions give a reasonable description of the $z$ distributions in the low and medium $p_{\mathrm{T}}$ bins, although they differ from data for very high values of $z$. They also show large discrepancies for low values of $p_{\mathrm{T}}^{\mathrm{rel}}$, which increase when moving towards higher bins of the jet $p_{\mathrm{T}}$. All the various \texttt{PYTHIA8} samples provide a good description of the $z$ and $p_{\mathrm{T}}^{\mathrm{rel}}$ distributions, being compatible with data within the systematic uncertainties across the different jet-$p_{\mathrm{T}}$ bins. The results for the longitudinal profile show reasonable agreement with the \texttt{HERWIG7} prediction with the angle-ordered PS, while large discrepancies are observed with the dipole parton shower. Due to the larger gluon splitting fractions, the \texttt{HERWIG7} sample with the dipole PS significantly overestimates the data in the tails of the $p_{\mathrm{T}}^{\mathrm{rel}}$ distributions at low $p_{\mathrm{T}}$, while the differences are smaller with increasing $p_{\mathrm{T}}$. The \texttt{HERWIG7} angle-ordered PS gives a better description of the $p_{\mathrm{T}}^{\mathrm{rel}}$ distributions, although non-negligible discrepancies are also observed.
\section{Conclusion}
Measurements of variables probing the properties of the multijet energy flow and of the Lund Plane using charged particles, as well as a measurement of the fragmentation properties of $b$-quark initiated jets, have been presented in these proceedings. Ref.~\cite{ATLAS:2019mgf} demonstrates differences between the soft-drop jet substructure observables in their sensitivity to the quark and gluon composition of the sample, which are most pronounced for the least amount of grooming. In Ref.~\cite{ATLAS:2020vup} the discrepancies between the event-shape data and all the investigated MC predictions show that further refinement of the current MC predictions is needed to describe the data in some regions, particularly at high jet multiplicities. Ref.~\cite{ATLAS:2020bbn} illustrates the ability of the Lund jet plane to isolate various physical effects, and will provide useful input to both perturbative and nonperturbative model development and tuning. Finally, Ref.~\cite{ATLAS:2021agf} provides key measurements with which to better understand the fragmentation functions of heavy quarks. As has been shown, significant differences among different MC models are observed, and also between the models and the data. Some of the discrepancies are understood to arise from poor modelling of the $g\rightarrow b\bar{b}$ splittings, to which the present analysis has substantial sensitivity. Including the present measurements in a future tune of the MC predictions may help to improve the description and reduce the theoretical uncertainties of processes where heavy-flavour quarks are present in the final state, such as top quark pair production or Higgs boson decays into heavy quark pairs.
\iffalse
\part[From small to large $x$: toward a unified formalism for particle production in high energy collisions\\ \phantom{x}\hspace{4ex}\it{Jamal Jalilian-Marian}]{}
\section{Introduction}
Collinear factorization formalism in perturbative QCD (pQCD) has been an extremely useful
approach to particle production in high energy hadronic collisions. Very roughly, as applied to single inclusive hadron production, it states that the cross section in a proton-proton collision can be written as a convolution (in $x$) of three independent parts: the parton distribution functions of the incoming hadrons, the parton-parton scattering cross section, and the parton-hadron fragmentation function. It guarantees a clean separation of short-distance, perturbative physics from long-distance, non-perturbative physics. The short-distance part, the parton-parton scattering cross section, is process dependent but can be calculated in perturbation theory, in principle, to any order desired. The non-perturbative parts, the parton distribution and fragmentation functions, are not amenable to weak-coupling methods but are universal, i.e. process independent, which is what the predictive power of the approach depends on. Despite its enormous success in predicting particle production yields in high energy collisions, the collinear factorization formalism has severe limitations: mainly, it is applicable only in the asymptotically high $Q^2 \rightarrow \infty$ limit dominated by leading twist operators. At any finite $Q^2$ there are corrections to this leading twist approximation which can be large and, worse, break collinear factorization. It is sobering to realize that high $Q^2$ processes occupy a very tiny corner of the QCD phase space and that particle production is dominated by low momentum processes, due to the fast (power-like) fall-off of the differential production cross section with the transverse momentum $p_t$ of the produced particle. Recalling that the transverse momentum and rapidity of a produced particle (and the center of mass energy of the collision) determine the momentum fractions $x_1, x_2$ of the partons inside the projectile and target hadrons participating in a collision as in
\bejamal
x_{1,2} = {p_t \over \sqrt{s}}\, e^{\pm y}
\eejamal
it becomes clear that processes producing low momentum particles probe the small $x$ partons
of the projectile and/or target hadron/nucleus at high center of mass energy. Therefore, as one increases the center of mass energy of a collision (for example, from proton-proton scattering at RHIC to the LHC), at low to intermediate momenta one probes a smaller and smaller $x$ region of the hadron/nucleus wave function.
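This leading order $2\to1$ kinematic relation is simple enough to sketch numerically; the function name and the sample numbers are illustrative:

```python
import math

def momentum_fractions(pt, y, sqrt_s):
    """Momentum fractions x1 (projectile) and x2 (target) of the partons
    producing a particle of transverse momentum pt and rapidity y at
    center of mass energy sqrt_s (pt and sqrt_s in the same units)."""
    x1 = pt / sqrt_s * math.exp(+y)
    x2 = pt / sqrt_s * math.exp(-y)
    return x1, x2
```

At forward rapidity ($y>0$) the target fraction $x_2$ is exponentially suppressed, which is why low-$p_t$ forward production probes the small $x$ part of the target wave function.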
It is an experimental fact that parton (especially gluon) distribution functions grow very fast with decreasing $x$ due to the large radiation phase space ${d x \over x}$ becoming available for soft (carrying small longitudinal momentum fraction $x$) radiation. Therefore one expects that at very small values of $x$ a hadron or nucleus will be a state with a very large number of small $x$ gluons in it. Such a state is referred to as a Color Glass Condensate (CGC) due to the large gluon occupation number of the state and the fact that gluons are colored. Glass refers to the strikingly different time scales involved in the dynamics of small vs large $x$ partons \cite{cgc-reviews,cgc-reviews2,cgc-reviews3,cgc-reviews4}.
The large parton densities at small $x$ render the collinear factorization formalism useless, as now many gluons participate in a collision. In the language of the operator product expansion this means that operators of all twists contribute comparably to the cross section, and the twist expansion upon which the collinear factorization formalism is based is not valid. Therefore a new formalism that takes into account the high gluon densities in high energy hadrons/nuclei is needed. Such a formalism was proposed by McLerran and Venugopalan, who realized that high gluon densities lead to the emergence of a new semi-hard scale, the saturation scale $Q_s^2 \gg \Lambda_{QCD}^2$, which allows a weak-coupling yet non-perturbative approach to gluon saturation. Their formalism is a semi-classical approach in which the high gluon occupancy state at small $x$ is described as a classical field radiated coherently by the large $x$ partons, collectively treated as sources of color charge $\rho$. The distribution of the color charges at large $x$ is given by a weight functional; it is non-perturbative and therefore modeled, but its dependence on (evolution with) $x$ can be calculated in perturbation theory. Due to the small $x$ kinematics, the coupling of the color charges $\rho$ at large $x$ to the small $x$ gluons is taken to be eikonal, which allows a re-summation of multiple scatterings, as required by the presence of a large number of gluons in the projectile/target, into Wilson lines: path-ordered exponentials of the gluon field along the direction of propagation. Quantum loop effects are included via a Wilsonian renormalization group equation (RGE) that describes the dependence (evolution) of multi-point correlators of Wilson lines on $x$ (or equivalently, rapidity or energy). 
This Wilsonian RGE describing the evolution of the weight functional with $x$, or rapidity, is the so-called JIMWLK evolution equation~\cite{jimwlk,jimwlk2,jimwlk3,jimwlk4,jimwlk5,jimwlk6,jimwlk7,jimwlk8,jimwlk9,jimwlk10}. It can be used to derive evolution equations for correlators of any number of Wilson lines, which form the building blocks of the observables in this semi-classical approach. The JIMWLK equation is a functional renormalization group equation which, for the two-point
function, reduces to a closed-form evolution equation, the BK equation \cite{bk,bk2}.
While the semi-classical formalism of the CGC is a well-motivated first-principles approximation to QCD at small $x$ and has been very successful when applied to phenomenology, it has some serious shortcomings. It is assumed that the gluon distribution function grows so fast that production cross sections are dominated by the lowest kinematically accessible value of $x$, called $x_{min}$. This ignores the contribution of all gluons with momentum fraction $x \ge x_{min}$~\cite{gsv}. Whereas this may not be important when one makes parametric estimates of physical observables, it is essential to go beyond this rather drastic approximation when precision calculations are required.
As the dream of an Electron-Ion Collider comes closer to becoming reality, it is important to develop a more general formalism that can encompass the two main approaches to particle production in high energy collisions, so that one can continuously map the QCD dynamics from small to large $x$ and from low to high transverse momenta.
\section{A new formalism}
In the Color Glass Condensate approach to particle production in the so-called dilute-dense kinematics, the first step is to consider scattering of a parton, a quark or gluon, from the classical color field of a target \cite{hybrid,hybrid2,hybrid3,hybrid4,hybrid5,hybrid6,hybrid7,hybrid8}. Due to the high density of small $x$ gluons in the target, one needs to resum multiple scatterings of the projectile from the target. This is possible only in the eikonal approximation, i.e. the infinite energy limit \cite{eikonal,eikonal2}. In this approximation one ignores the deflection of the projectile parton, so that it stays on a straight line. In momentum space this corresponds to the final state projectile having transverse momentum much less than its longitudinal momentum, which is $O (\sqrt{s})$. In this limit the exchanged momenta are transverse only and no longitudinal momentum is exchanged. It is possible to include non-eikonal contributions as power-suppressed corrections to first order or two~\cite{aaa,aaa2,aaa3,aaa4,aaa5,kps}, which extends the validity of the CGC approach to higher $p_t$ but not to the pQCD region. To go beyond the small $x$ approximation, which necessarily limits the formalism to low $p_t$, one must therefore consider and allow the exchange of large longitudinal momenta. This is indeed what happens in the collinear factorization approach, where the large longitudinal momenta of the incoming partons are converted to transverse momenta of the outgoing partons. This, however, cannot happen in the CGC approach, since the small $x$ gluons do not carry large longitudinal momenta.
To do this one must therefore include large $x$ gluons (and quarks in general) in the formalism. As a first step toward a particle production formalism valid at all $x$ and $p_t$ we consider generalizing the CGC approach valid at small $x$ to include the intermediate/large $x$ gluons of the target proton or nucleus from which a projectile parton can scatter \cite{jjm-largex,jjm-largex2}. We will therefore keep the kinematics of scattering from the large $x$ gluons of the target exact while treating scattering from the small $x$ gluons in the eikonal approximation. This allows us to resum multiple soft scatterings as in the CGC formalism while making a connection to pQCD and collinear factorization via inclusion of large $x$ gluons. Therefore in this new formalism, the first step is to consider multiple soft scatterings of a projectile from small $x$ gluons and one hard scattering from large $x$ gluons of the target. This is the analogue of a tree level pQCD and a classical CGC calculation. The next step then would be to do a one-loop correction to this leading order result. A one-loop calculation would then allow one to investigate the divergences that routinely appear in such calculations and to check whether they can be canceled or absorbed into physical quantities leading to their renormalization (evolution).
We therefore start by considering scattering of a projectile quark from a target including both small $x$ and large $x$ gluons of the target. The amplitude for the scattering is shown in Fig. (\ref{fig:qA}) where the ellipse denotes the target and
$p$ and $\bar{q}$ are the momenta of the incoming and outgoing projectile parton. The thick solid line denotes Wilson lines which encode multiple soft scatterings of the projectile from the small $x$ gluons of the target. The wavy line represents single scattering from the large $x$ gluons of the target. The top left diagram represents the standard eikonal scattering while the top right diagram corresponds to having multiple soft scatterings from the target, then a single scattering from the large $x$ gluons, and then more soft scatterings from the small $x$ gluons of the target. The large $x$ gluons can themselves interact with the small $x$ gluons as in the bottom two diagrams.
\begin{figure}
\begin{center}
\epsfig{figure=fig-sum.pdf,height=0.45\textwidth}
\caption{Scattering of a quark from the small and large $x$ gluons of a target.}
\label{fig:qA}
\end{center}
\end{figure}
The result for the full scattering amplitude at all $x$ (or any $p_t$) can thus be written
as \cite{jjm-largex,jjm-largex2}
\bejamal
i \mathcal{M} = i \mathcal{M}_{eik} + i \mathcal{M}_1 +
i \mathcal{M}_2 + i \mathcal{M}_3
\label{eq:qA-lo}
\eejamal
where $i \mathcal{M}_{eik}$ given by eq.~(\ref{eq:eik}) is the amplitude in the eikonal approximation used in CGC formalism while amplitudes $i \mathcal{M}_1$, $i \mathcal{M}_2$ and $i \mathcal{M}_3$ correspond to scattering from both small and large $x$ gluons of the target and
are given by eqs.~(\ref{eq:nsoft-hard-nsoft},\ref{eq:nsoft-on-hard},\ref{eq:nsoft-on-hard-nsoft-on-final}) respectively.
\bejamal
i \mathcal{M}_{eik} (p,q) = 2 \pi \delta (p^+ - q^+)\,
\bar{u} (q)\, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$}\, \int d^2 x_{t}\, e^{- i (q_t - p_t) \cdot x_{t}} \,
\left[V (x_t) - 1\right]\, u(p)
\label{eq:eik}
\eejamal
where the infinite Wilson line $V (x_t)$ is defined as
\bejamal
V (x_t) \equiv \hat{P}\,
\exp \left\{i g \int_{- \infty}^{+\infty} d x^+ \, S^-_a (x^+, x_t)\, t_a\right\}
\eejamal
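Numerically, such a path-ordered exponential is just an ordered product of matrix exponentials over small steps in $x^+$. The sketch below uses SU(2) with Pauli-matrix generators as a toy stand-in for the SU(3) color structure, with an arbitrary discretization; it only illustrates the structure of $V(x_t)$, not an actual CGC computation:

```python
import numpy as np

# SU(2) toy generators t_a = sigma_a / 2 (stand-ins for the SU(3) t_a).
SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def step_exp(M, g, dx):
    """Exact exponential exp(i g dx M) of a Hermitian matrix M."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.exp(1j * g * dx * w)) @ U.conj().T

def wilson_line(field, g=1.0, dx=0.1):
    """Discretized path-ordered exponential of the field S^-_a t_a over
    x+ slices: slices at later x+ multiply on the left (path ordering).
    `field` has shape (n_steps, 3), giving S^-_a on each slice."""
    V = np.eye(2, dtype=complex)
    for s in field:  # increasing x+
        M = sum(sa * t / 2 for sa, t in zip(s, SIGMA))
        V = step_exp(M, g, dx) @ V
    return V
```

The product of unitaries is unitary, reflecting the fact that the resummed multiple eikonal scatterings only rotate the projectile in color space.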
\beajamal
i \mathcal{M}_1 &=& \int d^4 x\, d^2 z_t \, d^2 \bar{z}_t \,
\int {d^2 k_t \over (2 \pi)^2} \, {d^2 \bar{k}_t \over (2 \pi)^2} \,
e^{i (\bar{k} - k) x} \,
e^{- i (\bar{q}_t - \bar{k}_t)\cdot \bar{z}_t}\,
e^{- i (k_t - p_t)\cdot z_t}
\nonumber\\
&&
\bar{u} (\bar{q})\, \left[
\overline{V}_{AP} (x^+, \bar{z}_t) \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \, {\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}$} \over 2 \bar{k}^+} \,
\left[ i g \, \raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x)\right]\,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$k$} \over 2 k^+} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \, V_{AP} (z_t, x^+)
\right]\, u(p)
\label{eq:nsoft-hard-nsoft}
\eeajamal
with $k^+ = p^+, k^- = {k_t^2 \over 2 k^+}$,
$\bar{k}^+ = \bar{q}^+, \bar{k}^- = {\bar{k}_t^2 \over 2 \bar{k}^+}$
and the semi-infinite, anti path-ordered Wilson lines in the fundamental representation are now defined
as
\bejamal
\overline{V}_{AP} (x^+, \bar{z}_t) \equiv \hat{P}\,
\exp \left\{i g \int_{x^+}^{+\infty} d \bar{z}^+ \, \bar{S}^-_a
(\bar{z}_t, \bar{z}^+)\, t_a\right\}
\label{eq:Wilsonbar-si-fundamental}
\eejamal
and
\bejamal
V_{AP} (z_t, x^+) \equiv \hat{P}\,
\exp \left\{i g \int_{- \infty}^{x^+} d z^+ \, S^-_a (z_t, z^+)\, t_a\right\}.
\label{eq:Wilson-si-fundamental}
\eejamal
where anti path-ordering (AP) in the amplitude means fields with the largest argument appear to the left.
\beajamal
i \mathcal{M}_2 &=& {2\, i \over (p - \bar{q})^2} \, \int d^4 x\,
e^{i (\bar{q} - p) x} \,
\bar{u} (\bar{q})\, \bigg[ (i g \, t^a)\,
\left[\partial_{x^+}\, U_{AP}^\dagger (x_t, x^+)\right]^{a b} \nonumber\\
&&
\left[n \cdot (p - \bar{q}) \, \raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$}_b (x) - (p - \bar{q}) \cdot A_b (x) \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$}\right]\,
\bigg]\, u(p)
\label{eq:nsoft-on-hard}
\eeajamal
where the derivative acts on the $+$ coordinate of the adjoint Wilson line (anti path-ordered in the amplitude)
and arises from the fact that
one can write the soft field at the last scattering point as a derivative on the Wilson line.
\beajamal
i \mathcal{M}_3 &\!\!=\!\!& - 2\, i\, \int d^4 x\, d^2 \bar{x}_t \, d \bar{x}^+\,
{d^2 \bar{p}_{1 t} \over (2 \pi)^2} \,
e^{i (\bar{q}^+ - p^+) x^-} \,
e^{- i (\bar{p}_{1 t} - p_t)\cdot x_t}\,
e^{- i (\bar{q}_t - \bar{p}_{1 t})\cdot \bar{x}_t}
\nonumber\\
&&
\bar{u} (\bar{q})\, \bigg[
\left[\partial_{\bar{x}^+}\, \overline{V}_{AP} (\bar{x}^+, \bar{x}_t)\right]\,
\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$}\, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{p}_1$} \,
(i g t^a)\,
\left[\partial_{x^+}\, U^\dagger_{AP} (x_t, x^+)\right]^{a b}\nonumber\\
&&
{
\left[ n \cdot (p - \bar{q}) \, \raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$}^b (x) - (p - \bar{p}_1) \cdot A^b (x) \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$}\right]
\over
\left[
2 n \cdot \bar{q} \, 2 n \cdot (p - \bar{q})\, p^-
- 2 n \cdot (p - \bar{q})\, \bar{p}^2_{1 t}
- 2 n \cdot \bar{q} \, (\bar{p}_{1 t} - p_t)^2
\right]
}
\bigg]\, u (p)
\label{eq:nsoft-on-hard-nsoft-on-final}
\eeajamal
The Leading Order quark-target scattering cross section can be obtained by squaring the
amplitude in~(\ref{eq:qA-lo}) keeping in mind that there is no overlap between the eikonal
and non-eikonal amplitudes as the non-eikonal amplitudes vanish in the eikonal limit by
construction.
In order to generalize the CGC formalism to large $x$ one must include the physics of
pQCD as contained in collinear factorization and its DGLAP evolution of parton distribution
and fragmentation functions~\cite{dglap}. To do this one must do a one-loop correction
to our leading order
result, which would involve radiation of gluons (and of a quark anti-quark pair). One would
then integrate out one of the final state partons which would lead to divergences. These
divergences would then be either canceled by counter-terms or absorbed into running
(evolution) of the ingredients of the cross section such as parton distribution and
fragmentation functions as well as the $x$ evolution of the operators describing
interactions with the target and result in a factorized cross section which would
contain the JIMWLK evolution of the target at small $x$ and DGLAP evolution
of the target at large $x$. As quarks and gluons interact strongly with the target
this will be complicated. As a warm up we therefore consider radiation of a photon
instead which is much simpler since photons do not interact strongly.
Furthermore, double inclusive production of a photon and quark (hadron or jet) is interesting
on its own as it would allow investigation of two-particle correlations not only in the
forward-forward rapidity region~\cite{photon,photon2,photon3,photon4} as is the case in the CGC formalism, but also in the
forward-backward rapidity regions, which are outside the domain of applicability of the CGC.
\subsection{Double inclusive quark-photon production}
We now consider scattering of a projectile quark from a target accompanied by radiation
of a photon by the quark. The diagrams are shown in Fig. (\ref{fig:qA-pho}) and correspond to
radiation of a photon by quark before, during and after multiple soft and single hard scatterings
of the quark from the target. The wavy line represents a photon while the springy line
represents a large $x$ gluon (or a gluon exchange with full kinematics more accurately).
As a reminder, in the eikonal limit there are only two diagrams corresponding to radiation
of a photon either before or after multiple soft scatterings of the
quark \cite{photon,photon2,photon3,photon4}. There is no radiation while the quark is traveling inside the target as the target is
infinitely thin (shock wave) in the eikonal limit. The new diagrams correspond to radiation
of a photon from a quark while the quark is traversing the target, as such photon radiation
can happen anywhere. Furthermore, had we considered radiation of a gluon we would have
needed to include radiation from the large $x$ gluons themselves and include multiple soft
interactions of the radiated gluon with the target as well.
Note that the thick solid lines denote Wilson lines which resum multiple soft scatterings of the quark from the target. As a Wilson line also includes the no-interaction case (the $1$ term in the expansion of the exponential), it should be clear that the case of photon radiation before or after any interaction with the target is also included in these diagrams, as can be verified by expanding the corresponding Wilson lines. The amplitudes can be written as
\beajamal
i \mathcal{M}_1 (p,q,l) &=& e g\, \int {d^2 k_{2t} \over (2 \pi)^2} \,
{d^2 k_{3t} \over (2 \pi)^2} \,
{d^2 \bar{k}_{1 t} \over (2 \pi)^2} \,
\int d^4 x\, d^2 y_{1t} \, d^2 y_{2 t}\,
d^2 \bar{y}_{1 t} \, d z^+ \, \theta (x^+ - z^+) \,
e^{i (l^+ + \bar{q}^+ - p^+) x^-} \nonumber\\
&&
e^{- i (\bar{q}_t - \bar{k}_{1 t})\cdot \bar{y}_{1 t}}\,
e^{- i (\bar{k}_{1 t} - k_{3t})\cdot x_t}\,
e^{- i (k_{3 t} - k_{2 t})\cdot y_{2 t}}\,
e^{- i (l_t + k_{2t} - p_t)\cdot y_{1 t}}\,
\bar{u} (\bar{q})\, \overline{V} (\bar{y}_{1 t} ; x^+ , \infty) \,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \,\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_1$} \over 2 \bar{n}\cdot\bar{q}} \nonumber\\
&&
\raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x)\,
\left[{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_3$} \over 2 n\cdot (p - l)} \, V (y_{2 t} ; z^+, x^+) \,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_2$} \over 2 n\cdot (p - l)} \, +
i \, {\delta (x^+ - z^+) \over 2 n\cdot (p -l)} \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \right]\nonumber\\
&&
\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\epsilon$} (l) \, {\raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_1$} \over 2 n \cdot p} \,
V (y_{1 t}; - \infty , z^+)\, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \, u (p)
\eeajamal
corresponding to the first diagram at the top left, where the photon is radiated before the hard scattering and the large $x$ gluon does not scatter. We also use the short-hand notation $k_1^+ = p^+ \, , \, k_{1t} = l_t + k_{2t} \, $, $k_2^+ = k_3^+ = p^+ - l^+ $ and $\bar{k}_1^+ = \bar{q}^+$. The diagram at the top right corresponds to photon radiation after the hard scattering, again with a non-scattering large $x$ gluon. The amplitude for this case is given by
\begin{figure}
\begin{center}
\epsfig{figure=qA-pho.pdf,height=0.45\textwidth}
\caption{Radiation of a photon by a quark scattering from the small and large $x$ gluons of a target.}
\label{fig:qA-pho}
\end{center}
\end{figure}
\beajamal
i \mathcal{M}_2 (p,q,l) &=& eg \int {d^2 k_{1 t} \over (2 \pi)^2} \,
{d^2 \bar{k}_{1 t} \over (2 \pi)^2} \,
{d^2 \bar{k}_{2 t} \over (2 \pi)^2} \,
\int d^4 x\, d^2 y_{1 t} \, d^2 \bar{y}_{1 t}\,
d^2 \bar{y}_{2 t} \, d \bar{z}^+ \, \theta (\bar{z}^+ - x^+) \,
e^{i (\bar{l}^+ + \bar{q}^+ - p^+) x^-} \nonumber\\
&&
e^{- i (\bar{q}_t + \bar{l}_t - \bar{k}_{2 t})\cdot \bar{y}_{2 t}}\,
e^{- i (\bar{k}_{2 t} - \bar{k}_{1 t})\cdot \bar{y}_{1 t} }\,
e^{- i (k_{1 t} - p_{t})\cdot y_{1 t} }\,
e^{- i (\bar{k}_{1 t} - k_{1 t})\cdot x_t}\,
\bar{u} (\bar{q})\, \overline{V} (\bar{y}_{2 t} ; \bar{z}^+ , \infty) \nonumber\\
&&
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_3$} \over 2 \bar{n} \cdot \bar{q}} \,
\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\epsilon$} (\bar{l})\,
\left[{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_2$} \over 2 \bar{n} \cdot \bar{k}_2} \,
\overline{V} (\bar{y}_{1 t} ; x^+, \bar{z}^+) \,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_1$} \over 2 \bar{n} \cdot \bar{k}_1} \, +
i \, {\delta (x^+ - \bar{z}^+) \over 2 \bar{n} \cdot (\bar{q} + \bar{l}) }
\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \right]
\nonumber\\
&&
\raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x) \,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_1$} \over 2 n \cdot p} \, V ( y_{1 t} ; - \infty , x^+) \,
\raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \, u (p)
\eeajamal
with $k_1^+ = p^+ \, $,
$\bar{k}_1^+ = \bar{k}_2^+ = \bar{q}^+ + \bar{l}^+$,
$\bar{k}_3^+ = \bar{q}^+$ and
$ \bar{k}_{3 t} = \bar{k}_{2 t} - \bar{l}_t$.
The bottom diagrams correspond to the case when the large $x$ gluon scatters from the small $x$ gluons of the target itself. Again there are two possibilities: the photon can be radiated before or after the scattering from the large $x$ gluon. The amplitudes for these cases are given by
\beajamal
i \mathcal{M}_3 (p,q,l)\!\!\! &=&\!\!\! e g \,
\int {d^2 k_{1 t} \over (2 \pi)^2} \,
{d^2 k_{3 t} \over (2 \pi)^2} \,
{d^2 \bar{k}_{1 t} \over (2 \pi)^2} \,
\int \, d^4 x\, d^2 y_{1 t} \, d^2 y_{2 t}\, d^2 \bar{y}_{1 t}\,
d z^+ \, d r^+ \, \theta (x^+ - r^+) \,\theta (r^+ - z^+) \nonumber\\
\!\!\! &&\!\!\!
e^{i (\bar{q}^+ + l^+ - p^+) r^-}
e^{- i (\bar{q}_t - \bar{k}_{1 t})\cdot \bar{y}_{1 t} }
e^{- i (\bar{k}_{1 t} - k_{3 t})\cdot x_t}
e^{- i (k_{3 t} + l_t - k_{1 t} ) \cdot y_{2 t} }
e^{- i (k_{1 t} - p_t) \cdot y_{1 t} }
\nonumber\\
\!\!\! &&\!\!\!
\bar{u} (\bar{q})\, \overline{V} (\bar{y}_{1 t} ; r^+ , + \infty) \,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \,\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_1$} \over 2 \bar{n}\cdot\bar{k}_1 }
t^a \,
\left[\partial_{x^+} \, U^\dagger_{AP} (x_t ; r^+ , x^+) \right]^{a b}\,
\left[\raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$}^b (x) -
{(k_3 - \bar{k}_1) \cdot A^b (x) \over n \cdot (k_3 - \bar{k}_1)} \,
\raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$}\right] \nonumber\\
\!\!\! &&\!\!\!
\left[
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_3$} \,\raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_2$} \over 2 n\cdot k_3 \, 2 n\cdot k_2 } \,
V (y_{2 t} ; z^+ , r^+) \, +
i {\delta (z^+ - r^+) \over 2 n\cdot k_2} \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \right]\,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\epsilon$} (l) \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_1$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \over 2 n\cdot k_1 } \, V (y_{1 t} ; - \infty , z^+) \, u (p)
\eeajamal
with $n\cdot k_1 = n \cdot p$,
$n \cdot k_2 = n \cdot k_3 = n\cdot (p - l)$,
$\bar{n} \cdot \bar{k}_1 = \bar{n} \cdot \bar{q}$ and
$k_{2 t} = k_{1 t} - l_t$
and
\beajamal
i \mathcal{M}_4 (p,q,l)\!\!\! &=&\!\!\! e g \,
\int {d^2 k_{1t} \over (2 \pi)^2} \,
{d^2 \bar{k}_{1t} \over (2 \pi)^2} \,
{d^2 \bar{k}_{2t} \over (2 \pi)^2} \,
\int \, d^4 x\, d^2 y_{1 t} \, d^2 \bar{y}_{1 t}\, d^2 \bar{y}_{2 t}\,
d \bar{z}^+ \, d r^+ \,
\theta (\bar{z}^+ - r^+) \,\theta (x^+ - r^+) \nonumber\\
\!\!\! &&\!\!\!
e^{i (\bar{q}^+ + \bar{l}^+ - p^+) r^-}
e^{- i (\bar{q}_t + \bar{l}_t - \bar{k}_{2 t} ) \cdot \bar{y}_{2 t} }
e^{- i (\bar{k}_{2 t} - \bar{k}_{1 t})\cdot \bar{y}_{1 t} }
e^{- i (\bar{k}_{1 t} - k_{1 t})\cdot x_t}
e^{- i (k_{1 t} - p_t) \cdot y_{1 t} }
\nonumber\\
\!\!\! &&\!\!\!
\bar{u} (\bar{q})\, \overline{V} (\bar{y}_{2 t} ; \bar{z}^+ , + \infty) \,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \,\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_3$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\epsilon$} (\bar{l}) \over 2 \bar{n}\cdot\bar{k}_3 }
\left[
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_2$} \,\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_1$} \over 2 \bar{n}\cdot \bar{k}_2 \, 2 \bar{n}\cdot \bar{k}_1 } \,
\overline{V} (\bar{y}_{1 t} ; r^+ , \bar{z}^+) \, +
i {\delta (\bar{z}^+ - r^+) \over 2 \bar{n}\cdot \bar{k}_2} \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \right]
\nonumber\\
\!\!\! &&\!\!\!
t^a \left[\partial_{x^+} \, U^\dagger_{AP} (x_t ; r^+ , x^+) \right]^{a b}\,
\left[\raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$}^b (x) -
{(k_1 - \bar{k}_1) \cdot A^b (x) \over n \cdot (k_1 - \bar{k}_1) } \,
\raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$}\right] \nonumber\\
\!\!\! &&\!\!\!
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_1$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \over 2 n\cdot k_1 } \, V (y_{1 t} ; - \infty , r^+) \, u (p)
\eeajamal
where all $4$-momenta are on-shell with $n\cdot k_1 = n \cdot p$,
$\bar{n} \cdot \bar{k}_1 = \bar{n} \cdot \bar{k}_2 = \bar{n} \cdot (\bar{q} + \bar{l})$, $\bar{n}\cdot\bar{k}_3 = \bar{n}\cdot\bar{q}$ and
$\bar{k}_{3 t} = \bar{k}_{2 t} - \bar{l}_t$.
The next step in calculating the double inclusive production cross section would be to square this amplitude (the sum of the $4$ amplitudes). Rather than doing this, we use spinor helicity techniques~\cite{dixon} to
evaluate these amplitudes individually for a given helicity state. The advantage of this is that it allows us to consider spin asymmetries in the double photon-hadron production process, which we intend to do in the future. For the details of the use of spinor helicity techniques in the CGC formalism we refer to~\cite{ahjt,ahjt2}.
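For the reader's convenience, we recall the standard spinor helicity conventions used below (as in~\cite{dixon}; overall signs depend on the choice of metric convention):
%
% |k^{\pm}> are helicity eigenspinors of the light-like momentum k; the
% basic spinor products reproduce the usual Lorentz invariant.
\begin{eqnarray}
\langle i j \rangle \equiv \langle k_i^- | k_j^+ \rangle \, , \qquad
[ i j ] \equiv \langle k_i^+ | k_j^- \rangle \, , \qquad
\langle i j \rangle \, [ j i ] = 2 \, k_i \cdot k_j \, .
\end{eqnarray}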
Here we show our results for the helicity amplitudes for the first diagram. The other ones are similar and will be reported elsewhere. The Dirac numerators for the first diagram are defined as
\beajamal
\mathcal{N}_{1-1} &=&
\bar{u} (\bar{q})\, {\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \,\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_1$} \over 2 \bar{n}\cdot\bar{q}} \, \,
\raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x)\,
{\raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_3$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_2$} \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\epsilon$} (l) \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_1$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \over
2 n \cdot p \, 2 n\cdot (p - l)\, 2 n\cdot (p - l)}
\, u (p)\nonumber\\
\mathcal{N}_{1-2} &=&
\bar{u} (\bar{q})\, {\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{n}$} \,\raise.15ex\hbox{$/$}\kern-.53em\hbox{$\bar{k}_1$} \over 2 \bar{n}\cdot\bar{q}} \, \,
\raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x)\,
{ \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$\epsilon$} (l) \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$k_1$} \, \raise.15ex\hbox{$/$}\kern-.53em\hbox{$n$} \over
2 n \cdot p \, 2 n\cdot (p -l)} \, u (p)
\eeajamal
\noindent and the corresponding helicity amplitudes are
\beajamal
\mathcal{N}^{+ +}_{1-1} &=& \left(\mathcal{N}^{- -}_{1-1}\right)^\star =
- \sqrt{{n\cdot p \over n\cdot (p - l)}} \,
{\left[n\cdot l \, k_{2\perp}\cdot\epsilon_\perp^\star -
n\cdot (p - l) \, l_\perp\cdot\epsilon_\perp^\star\right]
\over n\cdot l \, n\cdot (p -l)} \,
\langle \bar{k}_1^+ | \, \raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x) | k_3^+ \rangle \nonumber\\
\mathcal{N}^{+ +}_{1-2} &=& \left(\mathcal{N}^{- -}_{1-2}\right)^\star =
- \sqrt{{n\cdot p \over n\cdot (p - l)}} \,
\langle \bar{k}_1^+ | \, \raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x) | n^+ \rangle \nonumber\\
\mathcal{N}^{+ -}_{1-1} &=& \left(\mathcal{N}^{- +}_{1-1}\right)^\star =
- \sqrt{{n\cdot p \over n\cdot (p - l)}} \,
{\left[n\cdot p \, l_\perp\cdot\epsilon_\perp -
n\cdot l \, k_{1\perp}\cdot\epsilon_\perp\right]
\over n\cdot p \, n\cdot l} \,
\langle \bar{k}_1^+ | \, \raise.15ex\hbox{$/$}\kern-.73em\hbox{$A$} (x) | k_3^+ \rangle \nonumber\\
\mathcal{N}^{+ -}_{1-2} &=& \mathcal{N}^{- +}_{1-2} = 0
\eeajamal
where $\{\pm \pm\}$ refers to the helicities of the projectile quark and the real (transverse) photon.
From these helicity amplitudes it is clear that there will be a spin asymmetry in the cross section, the so-called $A_{LL}$.
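For reference, the double-longitudinal spin asymmetry is built in the standard way from the helicity-dependent cross sections; schematically, with $\sigma^{h_q h_\gamma}$ denoting the cross section for quark helicity $h_q$ and photon helicity $h_\gamma$,
%
\begin{equation}
A_{LL} = \frac{\sigma^{++} - \sigma^{+-}}{\sigma^{++} + \sigma^{+-}} \, .
\end{equation}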
There is much to gain from a formalism that unifies the pQCD and collinear factorization approach, valid at high $p_t$ (large $x$), with the CGC formalism applicable at low $p_t$ (small $x$). In addition to clarifying the kinematics where the CGC formalism is valid, it will enable us to study spin asymmetries at intermediate to high $p_t$, forward-backward rapidity, azimuthal angular correlations, and much more. In principle it can also be used to investigate jet energy loss from the earliest moments of a high energy heavy ion collision, as well as the total neutrino-nucleon scattering cross section~\cite{hj}.
Presented online at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1, 2021.
\section*{Acknowledgements}
We acknowledge support by the DOE Office of Nuclear Physics
through Grant No.\ DE-SC0002307 and by PSC-CUNY through
grant No. 63158-0051.
\iffalse
\part[Collectivity in heavy ions at CMS\\ \phantom{x}\hspace{4ex}\it{Georgios Konstantinos Krintiras on behalf of the CMS Collaboration}]{}
\section{Introduction}
\label{sec:intro_GKK}
One of the main purposes of the Large Hadron Collider (LHC) is to create a hot and dense, strongly interacting QCD medium, referred to as the quark-gluon plasma (QGP).
The study of ultrarelativistic heavy ion collisions~\cite{Krintiras:2020imy}, including the decomposition of azimuthal particle distributions into Fourier series, revealed QGP properties consistent with a collectively expanding (``flowing'') medium. The associated flow vectors $V_n \equiv{}$ \ensuremath{v_{\mathrm{n}}}\xspace $e^{in\Psi_{n}}$, where \ensuremath{v_{\mathrm{n}}}\xspace is the magnitude of the $n^{\text{th}}$-order Fourier harmonic and $\Psi_n$ its phase (also known as the $n^{\text{th}}$-order ``symmetry plane angle''), reflect the hydrodynamic response of the medium to the transverse overlap region and its subnucleon fluctuations. Measurements of flow vectors, their event-by-event fluctuations and correlations between different orders of harmonics or symmetry planes provide input to QGP modeling, in particular, details about initial-state conditions and the dynamics of the subsequent deconfined phase (see Section~\ref{sec:AA}).
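The decomposition referred to above is the standard Fourier expansion of the azimuthal particle distribution,
%
% v_n and Psi_n are the harmonic magnitudes and symmetry plane angles defined in the text.
\begin{equation}
\frac{dN}{d\phi} \propto 1 + 2 \sum_{n=1}^{\infty} v_n \cos \left[ n \left( \phi - \Psi_n \right) \right] \, .
\end{equation}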
Extensive measurements of light hadron azimuthal anisotropies are complemented by studies of heavy flavor (charm and bottom) quarks. Their masses are much larger than the typical range of temperatures in the QGP, meaning thermal production of heavy quarks during the QGP phase is suppressed relative to that of light quarks. However, it is still postulated that charm quarks interact strongly enough to flow with the QGP. Experimental data at LHC (see Section~\ref{sec:AA}) so far reveal that charm hadrons have significant azimuthal anisotropies, suggesting that they participate in the overall collective flow of the medium. A nonthermalized probe is required to assess the interaction with the medium more thoroughly, with the heavier bottom quark being a natural candidate. Although a series of theoretical predictions for the azimuthal anisotropies of bottom quarks exist, only limited experimental data are currently available at LHC (see Section~\ref{sec:AA}).
The observation of QGP-like phenomena in the small systems produced in proton-proton (\ensuremath{\Pp\Pp}\xspace) and proton-lead (\ensuremath{\Pp\mathrm{Pb}}\xspace)
collisions indicates the possibility of final-state effects in such systems. In that sense, precision measurements of \ensuremath{\mathrm{Z}} bosons in peripheral nucleus-nucleus collisions (see Section~\ref{sec:AA}) can provide an experimental reference for the expected yields of hard probes in the absence of final-state effects, which may lead to an improved understanding of the onset of collectivity in those ``small systems''. Given that features similar to those observed in heavy ion collisions are revealed when the same observables are used in conjunction with high particle multiplicities in \ensuremath{\Pp\Pp}\xspace and \ensuremath{\Pp\mathrm{Pb}}\xspace collisions (see Section~\ref{sec:SmallSystems}), it remains imperative that LHC experiments, like CMS~\cite{Chatrchyan:2008aa}, pursue their quest for a medium of a similar origin as in measurements involving heavy ion collisions. It is not yet clear, however, to what extent the $V_n$ is driven by the initial spatial anisotropy and/or whether competing mechanisms, \eg, gluon field momentum correlations in the initial state, contribute to the observed final-state anisotropy. In all cases, utmost caution must be exercised when interpreting flow-related signatures, especially in small systems, given the potential contamination from nonflow effects. Similar to the heavy ion case, it is also of interest to measure heavy flavor anisotropies in such collision systems so that we obtain information about the interaction of heavy quarks with the medium in the smallest hadronic collision systems at LHC (see Section~\ref{sec:SmallSystems}).
\section{Flow related measurements in heavy ion collisions}
\label{sec:AA}
The \ensuremath{v_{\mathrm{n}}}\xspace measurements have led to ``the discovery of the perfect liquid''~\cite{Rafelski:2019twp} and contributed significantly to the understanding of initial-state effects and final-state evolution mechanisms of the QGP in various collision systems~\cite{CMS:2019cyz}, as shown in Fig.~\ref{fig:fig1} (left), and in extensive phase space regions in pseudorapidity~\cite{CMS:2017xnj} and transverse momentum~\cite{CMS:2017xgk}. Details of the initial-state conditions and the subsequent dynamics can be further probed by more complex observables: event-by-event flow fluctuations, decorrelations of flow vectors along the longitudinal direction, and higher order $V_n$, in particular their linear and nonlinear components.
By unfolding statistical resolution effects from the event-by-event measured cumulant \ensuremath{v_{\mathrm{n}}}\xspace distributions, higher order moments of the underlying probability distribution functions $p(\ensuremath{v_{\mathrm{n}}}\xspace)$ can be determined. For example, non-Gaussian fluctuations in the initial-state energy density lead to differences in the higher order cumulants $\ensuremath{v_{2}}\xspace\{k\}$, and to an asymmetry about the mean of $p(\ensuremath{v_{2}}\xspace)$. Parameters like the mean eccentricity in the reaction plane can be further extracted from elliptic power function fits to $p(\ensuremath{v_{2}}\xspace)$: the extracted eccentricities~\cite{CMS:2017glf} are actually found to be smaller than predictions based on models of initial-state conditions.
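For orientation, the lowest cumulant estimators of $v_2$ are related to the moments of the event-by-event distribution $p(v_2)$ by the standard expressions (quoted here for reference):
%
\begin{eqnarray}
v_2\{2\}^2 &=& \langle v_2^2 \rangle \, , \nonumber\\
v_2\{4\}^4 &=& 2 \, \langle v_2^2 \rangle^2 - \langle v_2^4 \rangle \, .
\end{eqnarray}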
Flow vector decorrelations can be studied by forming a factorization ratio $r_n ({\boldsymbol p}, \eta)$ such that when $\ensuremath{v_{\mathrm{n}}}\xspace$ and/or $\Psi_n$ decorrelate the $r_n$ deviates from unity. These measurements can provide important constraints on the longitudinal structure, currently a challenge for three-dimensional hydrodynamic models. More specifically, CMS studies of $\eta$-dependent factorization breakdown~\cite{CMS:2015xmx} gave an indication of initial-state
fluctuations along the longitudinal direction, and subsequent studies at LHC comparing $r_n$ in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace and \ensuremath{\mathrm{Xe}\mathrm{Xe}}\xspace collisions~\cite{Aad:2020gfz} revealed that models tuned to describe the $\ensuremath{v_{\mathrm{n}}}\xspace$ in both systems failed to reproduce the $r_n$. Higher order $V_n$ can be expressed in terms of linear and nonlinear modes, each being proportional to the same or lower order eccentricities and/or a combination of them. Constraints on hydrodynamic models can be imposed by measurements of the corresponding nonlinear response coefficients $\chi_{n(pk)}$, where $n$ represents the order of $V_n$, and $p$, $k$ refer to the lower-order symmetry plane angle or angles. Model calculations failed to describe the measured $\chi$~\cite{Sirunyan:2019izh}, in particular, $\chi_{7223}$.
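As an illustration, the lowest higher-order flow vectors decompose into linear and nonlinear modes in the standard way, with $\chi_{n(pk)}$ the response coefficients:
%
\begin{eqnarray}
V_4 &=& V_4^{\mathrm{L}} + \chi_{4(22)} \, V_2^2 \, , \nonumber\\
V_5 &=& V_5^{\mathrm{L}} + \chi_{5(23)} \, V_2 V_3 \, .
\end{eqnarray}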
\begin{figure}[!htb]
\centering
\begin{minipage}{0.65\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{low_x_2021/figures/CMS-HIN-18-001_Figure_009.pdf}
\end{minipage}\hfill
\begin{minipage}{0.35\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{low_x_2021/figures/CMS-HIN-20-001_Figure_003.pdf}
\end{minipage}
\caption{Left: Centrality dependence of $\ensuremath{v_{2}}\xspace$, $\ensuremath{v_{3}}\xspace$, and $\ensuremath{v_{4}}\xspace$ harmonic coefficients from two-particle correlations method for $0.3 < {\boldsymbol p} < 3.0\,\GeVns$ for \ensuremath{\mathrm{Xe}\mathrm{Xe}}\xspace collisions at $5.44\,\TeVns$ and \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions at $5.02\,\TeVns$~\cite{CMS:2019cyz}. The lower panels show the ratio of the results for the two systems. The bars and the shaded boxes represent statistical and systematic uncertainties, respectively. Theoretical predictions are compared to the data (shaded bands). Right: Prompt \PDz meson \ensuremath{\vtwo\{2\}}\xspace~\cite{CMS:2020bnz} and \ensuremath{\vtwo\{4\}}\xspace~\cite{CMS:2021qqk}, and $\ensuremath{\vtwo\{4\}}\xspace/\ensuremath{\vtwo\{2\}}\xspace$ compared to the same ratio for charged particles in the range $\abs{y}<1$ as a function of centrality for $2<{\boldsymbol p}<8\,\GeVns$ in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions at $5.02\,\TeVns$. The vertical bars represent statistical uncertainties and open boxes denote the systematic uncertainties. The lines represent model calculations.}
\label{fig:fig1}
\end{figure}
At LHC, significant $\ensuremath{v_{2}}\xspace$ flow signal is observed for mesons containing a charm quark, \eg, prompt $\PJGy$~\cite{CMS:2016mah} and prompt $\PDz$~\cite{CMS:2017vhp}, while the first measurements with bottom quarks, \eg, nonprompt $\PJGy$~\cite{CMS:2016mah}, and $\PGU$(1S) and $\PGU$(2S)~\cite{CMS:2020efs}, are compatible with zero in the kinematic region studied so far. For the charm quark case, the prompt $\PDz$ meson $\ensuremath{v_{2}}\xspace$ has been so far measured
using two-particle correlation methods, while the first $\ensuremath{\vtwo\{4\}}\xspace$ measurement, \ie, using
multiparticle correlations, has been presented only recently~\cite{CMS:2021qqk} (Fig.~\ref{fig:fig1}, right). The results can also discriminate between models of heavy quark energy loss and constrain heavy quark transport coefficients in the QGP, complementing measurements of the nuclear modification factor, \eg, Ref.~\cite{CMS:2018bwt}.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{low_x_2021/figures/CMS-HIN-19-003_Figure_001.pdf}
\end{minipage}\hfill
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{low_x_2021/figures/CMS-HIN-19-003_Figure_004.pdf}
\end{minipage}
\caption{Left: The $\ensuremath{v_{2}}\xspace$ of \ensuremath{\mathrm{Z}} bosons in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions for various centrality bins. The error bars represent statistical uncertainties. The boxes represent systematic uncertainties and may be smaller than the markers~\protect\cite{CMS:2021kvd}. A measurement from the ATLAS Collaboration at $2.76\,\TeVns$ is also shown~\protect\cite{ATLAS:2012qdj}. Right: Normalized yields of \ensuremath{\mathrm{Z}} bosons as a function of centrality. The error bars, hollow boxes, and solid gray boxes represent the statistical, systematic, and model uncertainties, respectively. The value of the 0--90\% data point, and the scaled model prediction are shown for comparison, with the width of the bands representing the contribution from the total 0--90\% data point uncertainty.}
\label{fig:fig2}
\end{figure}
\ensuremath{\mathrm{Z}} boson yields and the elliptic flow coefficient have been measured with high
precision as a function of centrality in lead-lead collisions~\cite{CMS:2021kvd}. The \ensuremath{\mathrm{Z}} boson $\ensuremath{v_{2}}\xspace$ is compatible with zero (Fig.~\ref{fig:fig2}, left), consistent with the expectation of no significant final-state interactions in the QGP. Appropriately scaled \ensuremath{\mathrm{Z}} boson yields ($R_{\mathrm{AA}}$) are found to be constant versus centrality, but a decreasing trend is seen for the first time for more peripheral events (Fig.~\ref{fig:fig2}, right). This is compatible with the \textsc{hg-pythia} model prediction that accounts for initial collision geometry and centrality selection effects~\cite{Loizides:2017sqq}, and differs from the findings in Ref.~\cite{ATLAS:2019maq} by $\approx{5}$\%. A slightly increasing $R_{\mathrm{AA}}$ with decreasing centrality was previously explained by arguing that the nucleon–nucleon
cross section may be shadowed in nucleus–nucleus collisions~\cite{Eskola:2020lee}, an interpretation with quite significant consequences for the understanding of heavy ion data, in particular in the context of the Glauber model. Instead, an alternative explanation of the data was recently provided~\cite{Jonas:2021xju} by assuming that a mild bias, of about the size of the related systematic uncertainty, is present in the centrality determination of the measurement in Ref.~\cite{ATLAS:2019maq}. Overall, these results provide a new experimental proxy for estimating the average nucleon-nucleon integrated luminosity as a function of centrality in heavy ion collisions. The ratio of \ensuremath{\mathrm{Z}} boson yields in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace over \ensuremath{\Pp\Pp}\xspace collisions can be used as an alternative to Glauber-model-based scaling for hard scattering processes, which also automatically accounts for potential effects related to event selection and centrality calibration. A centrality-determination-independent measurement of the \ensuremath{\mathrm{Z}} boson cross section is also possible in zero-bias \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions but requires the larger data samples of Runs 3 and 4.
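For completeness, the scaled yields quoted above follow the usual definition of the nuclear modification factor,
%
% N_AA is the per-event yield in PbPb, sigma_pp the corresponding pp cross
% section, and <T_AA> the average nuclear overlap function for the centrality class.
\begin{equation}
R_{\mathrm{AA}} = \frac{1}{\langle T_{\mathrm{AA}} \rangle} \, \frac{N_{\mathrm{AA}} / N_{\mathrm{evt}}}{\sigma_{\mathrm{pp}}} \, .
\end{equation}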
\section{Flow related measurements in proton-nucleus and proton-proton collisions}
\label{sec:SmallSystems}
A series of features in small systems are indicative of a collective behavior driven by the initial-state geometry and final-state effects: the near-side ridge in two-particle correlations, the {\boldsymbol p} and event activity dependence of $\ensuremath{v_{\mathrm{n}}}\xspace$, the $\ensuremath{v_{\mathrm{n}}}\xspace$ dependence upon the hadron species and scaling with the number of valence quarks in the hadron, multiparticle cumulants and their ratios. In proton-proton collisions, it is even less clear what mechanism gives rise to the observed finite azimuthal anisotropy. Until now, hints towards a final-state description were given, \eg, the indication of a mass ordering with multiparticle angular correlations~\cite{Khachatryan:2016txc}.
Event-by-event correlations among the $\ensuremath{v_{2}}\xspace$, $\ensuremath{v_{3}}\xspace$, and $\ensuremath{v_{4}}\xspace$ Fourier harmonics have also been measured for small systems using the symmetric cumulant (SC) method~\cite{CMS:2019lin}. The correlation data reveal features similar to those observed in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions, \ie, a negative correlation is found between the $\ensuremath{v_{2}}\xspace$ and $\ensuremath{v_{3}}\xspace$ harmonics (SC(2,3)), while the correlation is positive between the $\ensuremath{v_{2}}\xspace$ and $\ensuremath{v_{4}}\xspace$ harmonics (SC(2,4)). First measurements of event-by-event SC(2, 3) and SC(2, 4) with two, three, or four subevents~\cite{CMS:2019lin} corroborate the findings using the SC technique without subevents. By significantly suppressing the nonflow
contribution, the four-subevent results for both SC(2, 3) and SC(2, 4) show a monotonically decreasing magnitude toward zero at an offline track multiplicity of $N_{\text{trk}}\approx{20}$, providing evidence for the onset of long-range collective particle correlations from high to low multiplicity events in \ensuremath{\Pp\mathrm{Pb}}\xspace collisions (Fig.~\ref{fig:fig3}, left).
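The symmetric cumulants used above are constructed in the standard way,
%
% SC(m,n) vanishes if the magnitudes of the two harmonics fluctuate independently.
\begin{equation}
\mathrm{SC}(m,n) = \langle v_m^2 \, v_n^2 \rangle - \langle v_m^2 \rangle \langle v_n^2 \rangle \, ,
\end{equation}
so that a nonzero value signals an event-by-event (anti)correlation between the magnitudes of the two harmonics.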
\begin{figure}[!htb]
\centering
\begin{minipage}{0.3\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{low_x_2021/figures/CMS-HIN-18-015_Figure_001.png}
\end{minipage}
\begin{minipage}{0.65\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{low_x_2021/figures/CMS-PAS-HIN-19-004_Figure_002.png}
\end{minipage}\hfill
\caption{Left: The $\text{SC}(2,4)$ distributions as a function of \noff from 2 subevents (full blue circles), 3 subevents (red squares), and 4 subevents (green crosses)~\cite{CMS:2019lin}.
For comparison, results from Ref.~\cite{CMS:2017kcs} with no subevents (open black circles), are also shown.
Bars represent statistical uncertainties while grey areas represent the systematic uncertainties. Right: Two- and multiparticle cumulant $\ensuremath{v_{2}}\xspace$ results of \ensuremath{\Pp\mathrm{Pb}}\xspace collisions at 8.16\,\TeVns for charged hadrons, \PKzS mesons, and \PgL\ baryons in different \noff ranges~\cite{CMS:2021fhf}. The two-particle results are based on event planes that are determined in either the proton-going (p-SP) or lead-going (Pb-SP) side of the forward hadron calorimeter. The shaded boxes show the total systematic uncertainties.}
\label{fig:fig3}
\end{figure}
Using the multiparticle cumulant method, $\ensuremath{\vtwo\{4\}}\xspace$, $\ensuremath{\vtwo\{6\}}\xspace$, and $\ensuremath{\vtwo\{8\}}\xspace$ values have been measured, for the first time, for \PKzS mesons and \PgL baryons in high-multiplicity \ensuremath{\Pp\mathrm{Pb}}\xspace collisions~\cite{CMS:2021fhf}. Nonflow effects are studied using jet veto and subevent methods. A large difference between the $\ensuremath{\vtwo\{4\}}\xspace$ and $\ensuremath{\vtwo\{6\}}\xspace$ results (not present in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions) can be explained by jet-related correlations,
which are suppressed by rejecting events with at least one jet with transverse momentum ${\boldsymbol p} > 20\,\GeVns$.
The subevent cumulant method is also performed to reduce short-range correlation effects, with the difference between the standard and the subevent cumulant methods attributed to the effect of event plane decorrelations. For both the jet suppression and subevent methods, the nonflow contribution is found to be particularly significant at high track multiplicity.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{low_x_2021/figures/CMS-HIN-17-004_Figure_003.png}
\end{minipage}
\begin{minipage}{0.65\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{low_x_2021/figures/CMS-PAS-HIN-19-004_Figure_007.png}
\end{minipage}\hfill
\caption{Left: The $\ensuremath{v_{2}}\xspace$ fluctuation results for \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions at 5.02\,\TeVns in different centrality intervals and \ensuremath{\Pp\mathrm{Pb}}\xspace collisions at 8.16\,\TeVns with $185 \le \noff < 250$ for \PKzS mesons and \PgL baryons~\cite{CMS:2021fhf}. The shaded bands are hydrodynamic calculations of $\ensuremath{v_{2}}\xspace$ fluctuations in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions. The shaded boxes show the systematic uncertainties. Right:
Cumulant ratios $\ensuremath{\vtwo\{6\}}\xspace/\ensuremath{\vtwo\{4\}}\xspace$ as a function of $\ensuremath{\vtwo\{4\}}\xspace/v^{\text{sub}}_2\{2\}$ in \ensuremath{\Pp\mathrm{Pb}}\xspace collisions at $5.02$~\cite{CMS:2015yux} and 8.16\,\TeVns~\cite{CMS:2019wiy}.
Error bars and shaded areas denote statistical and systematic uncertainties, respectively. The solid curves show the expected behavior based on a hydrodynamics-motivated study of the role of initial-state fluctuations.}
\label{fig:fig4}
\end{figure}
The agreement of calculations of purely fluctuation-driven origin with measurements of the ratios $\ensuremath{\vtwo\{6\}}\xspace/\ensuremath{\vtwo\{4\}}\xspace$ and $\ensuremath{\vtwo\{8\}}\xspace/\ensuremath{\vtwo\{6\}}\xspace$~\cite{CMS:2015yux,CMS:2019wiy} shows that the differences found among the multiparticle cumulant results for the $\ensuremath{v_{2}}\xspace$ harmonic can be described by non-Gaussian initial-state fluctuations (Fig.~\ref{fig:fig4}, right). Similarly, the higher-order $\ensuremath{v_{3}}\xspace\{4\}$ coefficient is reported for the first time for a small
system~\cite{CMS:2019wiy}. Both the \ensuremath{\Pp\mathrm{Pb}}\xspace and \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace systems have very similar $\ensuremath{v_{3}}\xspace$ coefficients for the cumulant orders studied, indicating a similar, fluctuation-driven initial-state geometry. In addition, no obvious particle species dependence of the fluctuations in the $\ensuremath{v_{2}}\xspace$ values is observed (Fig.~\ref{fig:fig4}, left) for either the \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace or \ensuremath{\Pp\mathrm{Pb}}\xspace systems~\cite{CMS:2021fhf}. However, the flow fluctuations are observed to be larger in \ensuremath{\Pp\mathrm{Pb}}\xspace collisions compared to \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace collisions~\cite{CMS:2021fhf}, as expected if the overall collision geometry is a driving force.
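In the limit of small fluctuations, the mean and variance of $v_2$ discussed here can be estimated from the two- and four-particle cumulants by the standard Gaussian approximation (quoted for reference):
%
\begin{eqnarray}
\langle v_2 \rangle &\approx& \sqrt{ \left( v_2\{2\}^2 + v_2\{4\}^2 \right) / 2 } \, , \nonumber\\
\sigma_{v_2}^2 &\approx& \left( v_2\{2\}^2 - v_2\{4\}^2 \right) / 2 \, .
\end{eqnarray}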
In small colliding systems, the study of heavy flavor hadron collectivity has the potential to disentangle possible contributions from both initial- and final-state effects. Recent observations of significant $\ensuremath{v_{2}}\xspace$ signal for prompt \PDz~\cite{CMS:2018loe}, and prompt \PJGy~\cite{CMS:2018duw} in $\ensuremath{\mathrm{p}}{}\mathrm{Pb}$ collisions (Fig.~\ref{fig:fig5}, left) provided the first evidence for charm quark collectivity in small systems. In spite of the mass differences, the observed $\ensuremath{v_{2}}\xspace$ signal for prompt \PJGy mesons is found to be comparable to that of prompt \PDz mesons and light-flavor hadrons at a given {\boldsymbol p} range, possibly implying the existence of initial-state correlation effects. Further detailed investigations start addressing open questions for understanding the origin of heavy flavor quark collectivity in small
systems. These include the {\boldsymbol p} and multiplicity dependence of charm quark collectivity in the $\ensuremath{\mathrm{p}}{}\ensuremath{\mathrm{p}}$ system (Fig.~\ref{fig:fig5}, right), and the details of collective behavior of beauty quarks in the $\ensuremath{\mathrm{p}}{}\mathrm{Pb}$ system~\cite{CMS:2020qul}.
\begin{figure}[!htb]
\centering
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{low_x_2021/figures/CMS-HIN-19-009_Figure_006.pdf}
\end{minipage}\hfill
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.99\textwidth]{low_x_2021/figures/CMS-HIN-19-009_Figure_004.pdf}
\end{minipage}
\caption{Left: Results of $\ensuremath{v_{2}}\xspace$ for prompt~\protect\cite{CMS:2018loe} and nonprompt~\protect\cite{CMS:2020qul} \PDz mesons, as well as prompt \PJGy\ mesons~\cite{CMS:2018duw} and light flavor hadrons~\protect\cite{CMS:2018loe}, as a function of $p_{\mathrm{T}}$ in $\ensuremath{\mathrm{p}}{}\mathrm{Pb}$ collisions at $8.16$\,\TeVns. Lines show the theoretical calculations of prompt and nonprompt \PDz, and prompt \PJGy mesons, respectively. Right: Results of $\ensuremath{v_{2}}\xspace^{\text{sub}}$ for charged particles, \PKzS mesons, \PgL baryons, and prompt \PDz mesons as
a function of $p_{\mathrm{T}}$ for $\abs{y_{\text{lab}}}<1$, with $\noff \geq 100$ in \ensuremath{\Pp\Pp}\xspace collisions at $13\,\TeVns$~\protect\cite{CMS:2020qul}. The vertical bars correspond to the statistical uncertainties, while the shaded areas denote
the systematic uncertainties.}
\label{fig:fig5}
\end{figure}
Probing systems with even smaller interaction regions is important to understand the reach of a hydrodynamic description. The search for collectivity has been extended to electron-positron, electron-proton, and photon-nucleus interactions, with none of these systems exhibiting evidence of the long-range correlations seen in hadronic collisions. High-energy \ensuremath{\Pp\mathrm{Pb}}\xspace ultraperipheral collisions, \ie, where the impact parameter is larger than the nucleus radius, provide a new system to extend the search for long-range correlations to photon-proton collisions. Two-particle $V_{n\Delta}$ Fourier coefficients and corresponding single-particle $\ensuremath{v_{\mathrm{n}}}\xspace$ ($n=1\text{--}3$) azimuthal anisotropies are compared to models that do not incorporate collective effects: the data suggest the absence of collectivity in the \PGg{}\ensuremath{\mathrm{p}} system over the explored multiplicity range of up to $N_{\text{trk}}\approx{35}$~\cite{Aad:2021yhy,CMS:2020rfg}.
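As a rough illustration of how the single-particle anisotropies relate to the two-particle Fourier coefficients mentioned above, the following Python sketch extracts $V_{2\Delta}$ from a synthetic long-range $\Delta\varphi$ pair distribution and converts it to $v_2$ under the factorization assumption $V_{2\Delta}=v_2^2$ (valid for pairs of identical particles); the input modulation is an illustrative number, not an experimental value.

```python
import numpy as np

def vn_from_correlation(dphi, pair_yield, n):
    """Extract the two-particle Fourier coefficient V_{nDelta} = <cos(n*dphi)>
    from a pair-yield distribution, and the single-particle v_n under the
    factorization assumption V_{nDelta} = v_n^2 (identical particles)."""
    Vn = np.sum(pair_yield * np.cos(n * dphi)) / np.sum(pair_yield)
    return Vn, np.sqrt(max(Vn, 0.0))

# Synthetic correlation: dN/d(dphi) ~ 1 + 2*V2*cos(2*dphi), with V2 = 0.0025
dphi = np.linspace(-np.pi / 2, 3 * np.pi / 2, 2000, endpoint=False)
V2_true = 0.0025
pair_yield = 1.0 + 2.0 * V2_true * np.cos(2 * dphi)

V2, v2 = vn_from_correlation(dphi, pair_yield, 2)
# v2 comes out close to sqrt(0.0025) = 0.05
```

In practice the measured correlation also contains nonflow terms (jets, resonance decays) that must be subtracted before this factorization step.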
\section{Summary}
\label{sec:summary}
Although significant progress has been made on the level of precision achieved in large collision systems, the amount of data collected at the LHC allows the measurement of more complex flow-related observables. More specifically, such measurements place constraints on the initial conditions, which in turn contribute to a more precise modeling of the final-state dynamics. Previous flow measurements mostly focused on $V_n$ (overall flow), \ie, $\ensuremath{v_{\mathrm{n}}}\xspace$ with respect to $\Psi_n$, but recent measurements of event-by-event flow fluctuations, longitudinal flow decorrelations, and nonlinear response coefficients demonstrate that model calculations cannot simultaneously describe all the aforementioned observables.
Similar observations further support the hydrodynamic origin of collective correlations in high-multiplicity events in small collision
systems. Multiparticle long-range correlations of inclusive charged and identified hadrons are observed down to the smallest collision systems, paving the way for stringent tests of theory predictions, on the grounds that heavy flavor quarks are formed early and subsequently participate in the medium evolution. The origin of the positive $\ensuremath{v_{\mathrm{n}}}\xspace$ seen up to high $p_{\mathrm{T}}$ is, however, still not resolved. Determining whether this is a manifestation of a collective behavior of the system created in such collisions and/or of parton scatterings in a dilute system requires experimental techniques for suppressing nonflow contamination. In parallel, precision measurements of \ensuremath{\mathrm{Z}} bosons in peripheral nucleus-nucleus collisions can lead to an improved understanding of the onset of collectivity
in small systems, while new ways to discriminate between the two scenarios are being developed as well; \eg, new results provide information on the QCD dynamics of multiparticle production in high-energy photonuclear interactions.
\section*{Acknowledgements}
The work is supported in whole by the Nuclear Physics (NP) program of the U.S. Department of Energy (DOE) under award numbers \href{https://pamspublic.science.energy.gov/WebPAMSExternal/Interface/Common/ViewPublicAbstract.aspx?rv=d1ddcae6-235b-4163-ae34-01fce58e5f90&rtc=24&PRoleId=10}{DOE DE-SC0019389} and \href{https://pamspublic.science.energy.gov/WebPAMSExternal/Interface/Common/ViewPublicAbstract.aspx?rv=00d4fe0f-48a0-4d4a-baf1-c70867d9e499&rtc=24&PRoleId=10}{DE-FG02-96ER40981}.
\nocite{*}
\bibliographystyle{auto_generated}
\chapter{\LARGE{Diffraction and photon-exchange in hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions}}
\graphicspath{ {albrowlowx/} }
\include{albrowlowx/albrow}
\graphicspath{ {Alcerro_Luis/} }
\include{Alcerro_Luis/Alcerro_Luis}
\graphicspath{ {Brodsky_proceedings_elba2021/Brodsky_proceedings_elba2021/Brodsky_proceedings_elba2021/} }
\include{Brodsky_proceedings_elba2021/Brodsky_proceedings_elba2021/Brodsky_proceedings_elba2021/sjb_Elba_contribution}
\graphicspath{ {low_x_2021} }
\include{low_x_2021/GKKrintiras}
\graphicspath{ {Maciej_Lewicki-Low-x_2021_proceedings/Low-x_2021_proceedings/} }
\include{Maciej_Lewicki-Low-x_2021_proceedings/Low-x_2021_proceedings/Lewicki}
\graphicspath{ {jamal/} }
\include{jamal/jamal}
\graphicspath{ {Low_X_proceedingsDMelnikov/} }
\include{Low_X_proceedingsDMelnikov/Low_X_proceedingsDMelnikov}
\graphicspath{ {Low_x_2021_F_Nemes/} }
\include{Low_x_2021_F_Nemes/Low_x_2021_F_Nemes}
\graphicspath{ {Odderon_lowx_Osterberg/} }
\include{Odderon_lowx_Osterberg/Osterberg}
\graphicspath{ {Petrov_article/} }
\include{Petrov_article/Petrov}
\graphicspath{ {proceedings_elba2021/proceedings_elba2021/} }
\include{proceedings_elba2021/proceedings_elba2021/Ribeiro}
\graphicspath{ {proceed_lowx2021/} }
\include{proceed_lowx2021/royon}
\chapter{\LARGE{Spin physics}}
\graphicspath{ {santimaria_proceedings_elba2021/} }
\include{santimaria_proceedings_elba2021/santimaria_proceedings_elba2021}
\chapter{\LARGE{QCD and saturation}}
\graphicspath{ {proceedings_elba2021_SalimCerci/proceedings_elba2021_SalimCerci/proceedings_elba2021/} }
\include{proceedings_elba2021_SalimCerci/proceedings_elba2021_SalimCerci/proceedings_elba2021/SalimCerci}
\graphicspath{ {chachamis_proceedings_elba2021/} }
\include{chachamis_proceedings_elba2021/chachamis_proceedings_elba2021}
\graphicspath{ {colferai/proceedings/} }
\include{colferai/proceedings/Colferai}
\chapter{\LARGE{Low-$x$ PDFs, forward physics, and hadronic final states}}
\graphicspath{ {Boettcher_Lowx2021_proceedings/proceedings/} }
\include{Boettcher_Lowx2021_proceedings/proceedings/Boettcher_Lowx2021_proceedings}
\graphicspath{ {proceeding_Lowx_2021_Celiberto/} }
\include{proceeding_Lowx_2021_Celiberto/proceeding_Lowx_2021_Celiberto}
\graphicspath{ {proceedings_elba2021_DenizSunarCerci/proceedings_elba2021_DenizSunarCerci/} }
\include{proceedings_elba2021_DenizSunarCerci/proceedings_elba2021_DenizSunarCerci/SunarCerci}
\graphicspath{ {draft_Weisong_proceedingslowx2021/} }
\include{draft_Weisong_proceedingslowx2021/Weisong}
\graphicspath{ {giugli_francesco/proceedings_elba2021/} }
\include{giugli_francesco/proceedings_elba2021/giugli_francesco}
\graphicspath{ {Lowxsubmission/Lowxsubmission/} }
\include{Lowxsubmission/Lowxsubmission/klein}
\graphicspath{ {LauraFabbri_Elba2021/LauraFabbri_Elba2021/} }
\include{LauraFabbri_Elba2021/LauraFabbri_Elba2021/LauraFabbri_Elba2021}
\graphicspath{ {Precision_QCD_measurements_from_CMS/Precision_QCD_measurements_from_CMS/} }
\include{Precision_QCD_measurements_from_CMS/Precision_QCD_measurements_from_CMS/Precision_QCD_measurements_from_CMS}
\graphicspath{ {proceedings-CSanchezGras/sanchez_cristina/} }
\include{proceedings-CSanchezGras/sanchez_cristina/Sanchez}
\graphicspath{ {proceedings_elba2021_ragoni/} }
\include{proceedings_elba2021_ragoni/proceedings_elba2021_ragoni}
\graphicspath{ {Zhang_ATLAS/} }
\include{Zhang_ATLAS/Zhang_ATLAS}
\end{document}
Comments: Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1 2021.
\part[Anomalous coupling studies with intact protons at the LHC\\ \phantom{x}\hspace{4ex}\it{Christophe Royon}]{}
\section{Photon induced processes at the LHC}
We consider special events at the LHC that correspond to photon-induced processes, where quasi-real photons are emitted by the incoming interacting protons as shown in Fig.~\ref{fig0} (we will see in the following why photon exchanges dominate at high energy). Protons can remain intact after the interaction and can be detected and measured in special detectors called Roman pots.
These events are especially clean since all particles in the final state (including the intact protons) are measured. We can produce exclusively pairs of photons and $W$ bosons in addition to the two intact protons in the final state (see Fig.~\ref{fig0}). In the same way, one can look for the photon-induced production of $ZZ$, $\gamma Z$, $t \bar{t}$, etc.
As an example we will consider the production of two $\gamma$'s in the central ATLAS or CMS detectors, and of two intact protons. Both the ATLAS and CMS-TOTEM collaborations installed Roman pot detectors at about 220 meters from the interaction point that can measure intact protons at high luminosity at the LHC, the so-called ATLAS Forward Proton (AFP) and CMS-TOTEM Precision Proton Spectrometer (PPS)~\cite{AFPPPS,AFPPPS2}. At high luminosity (standard runs with $\beta^* \sim 0.5$ at the LHC), the acceptance in mass of the two photons or the two $W$ bosons (see Fig.~\ref{fig0}) with intact protons tagged in the Roman pot detectors typically covers the domain 400--2300 GeV. We can thus get sensitivity to beyond-standard-model physics since we can produce high-mass objects.
Quartic photon couplings $\zeta_1$ can be modified via loops of new particles or new resonances that couple to photons~\cite{pap2_1,pap2_2}. In the case of loops of new heavy particles, we get
\begin{eqnarray}
\zeta_1 = \alpha_{em}^2 Q^4 m^{-4} N c_{1,s} \nonumber
\end{eqnarray}
where $\zeta_1$ is proportional to the fourth power of the charge $Q$ and inversely proportional to the fourth power of the mass $m$ of the charged particle, and depends on its spin through $c_{1,s}$. This
leads to $\zeta_1$ of the order of $10^{-14}$--$10^{-13}$ GeV$^{-4}$ depending on the beyond-standard-model theory (extra dimensions, composite Higgs...).
$\zeta_1$ can also be modified by neutral particles
at tree level (extensions of the SM including scalar,
pseudo-scalar, and spin-2 resonances that couple to the photon). In that case
\begin{eqnarray}
\zeta_1 =
(f_s m)^{-2} d_{1,s} \nonumber
\end{eqnarray}
where $f_s$ is the $\gamma \gamma X$
coupling of the new particle to the
photon, and $d_{1,s}$ depends on the spin of the particle. For instance, 2 TeV
dilatons lead to $\zeta_1 \sim$ 10$^{-13}$ GeV$^{-4}$. All these couplings were implemented in the FPMC generator~\cite{FPMC} that will be used in the following for all predictions.
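To make these orders of magnitude concrete, the following Python sketch evaluates the two expressions above for illustrative parameter choices; the charge, masses, decay constant, and the spin factors (set to one) are assumptions for illustration, not values from the text.

```python
ALPHA_EM = 1.0 / 137.036  # fine-structure constant

def zeta1_loop(Q, m, N=1.0, c1s=1.0):
    """zeta_1 from a loop of N heavy charged particles of charge Q (in units
    of e) and mass m (GeV); c1s encodes the spin dependence (set to 1 here)."""
    return ALPHA_EM**2 * Q**4 * m**-4 * N * c1s

def zeta1_resonance(fs, m, d1s=1.0):
    """zeta_1 from a neutral resonance of mass m (GeV) with gamma-gamma-X
    coupling scale fs (GeV); d1s encodes the spin dependence (set to 1)."""
    return (fs * m)**-2 * d1s

# Illustrative: a charge-2, 500 GeV particle in the loop
z_loop = zeta1_loop(Q=2.0, m=500.0)   # of order 1e-14 GeV^-4

# Illustrative: a 2 TeV resonance with an assumed ~1.6 TeV coupling scale
z_res = zeta1_resonance(fs=1580.0, m=2000.0)  # of order 1e-13 GeV^-4
```

Both numbers fall in the $10^{-14}$--$10^{-13}$ GeV$^{-4}$ range quoted above, i.e., within the projected reach of the proton-tagging analysis.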
\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{proceed_lowx2021/ppww-eps-converted-to.pdf}
\includegraphics[width=0.25\textwidth]{Fig1_photon-eps-converted-to.pdf}
\caption{Example of $WW$ and $\gamma \gamma$ exclusive production by photon exchanges.}
\label{fig0}
\end{figure}
\section{Diphoton exclusive production: SM and BSM contributions}
In this section, we will concentrate on diphoton exclusive production; the conclusions can be generalized to exclusive $WW$, $ZZ$, $\gamma Z$, and $t \bar{t}$ production via photon exchanges. We will start by examining the standard model (SM) production of exclusive diphotons as shown in Fig.~\ref{fig1}. Diphotons can be produced exclusively either via QCD (Fig.~\ref{fig1}, left) or QED processes (Fig.~\ref{fig1}, right). The cross sections for a diphoton mass above the value in abscissa are shown in Fig.~\ref{fig2}. The QCD contribution is displayed as a purple full line and the sum of the three QED photon-induced contributions as a black dashed-dotted line (green dotted lines: quark and lepton loop contributions; red dashed line: $W$ loop contribution)~\cite{pap1_1,pap1_2,pap1_3,pap1_4}. We note that above a diphoton mass of 200 GeV, the QCD contribution becomes negligible. Recalling the fact that the acceptance of the Roman pot detectors starts at about 400 GeV for standard running at the LHC, it is clear that observing two photons in ATLAS/CMS and two tagged protons means
that the event originates from a photon-induced process.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{diag_exclgamma-eps-converted-to.pdf}
\caption{Exclusive production of diphotons via QCD processes (left) and QED photon exchanges (right).}
\label{fig1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{SM_new-eps-converted-to.pdf}
\caption{Cross section of exclusive diphoton production above a given diphoton mass given in abscissa for QCD (full line) and QED (dashed dotted line) processes (see text).}
\label{fig2}
\end{figure}
Let us now give some details about the exclusive diphoton production analysis for a luminosity of 300 fb$^{-1}$ at the LHC. The number of events is shown in Fig.~\ref{fig3}. The number of signal events is shown as a black line for two values of anomalous couplings. We also notice that the number of SM exclusive diphotons (red dashed dotted line) or exclusive dileptons with leptons misidentified as photons (blue dotted line) is quite low and can be neglected. The only background that matters is shown in red dashed lines, and corresponds to the non-exclusive standard diphoton production (with protons destroyed in the final state) superimposed with intact protons originating from secondary interactions, called pile up. These events are due to the fact that we have up to 50 interactions per bunch crossing at the LHC at standard luminosities, and diphoton production can easily be superimposed with soft interactions producing intact protons. This is basically the only background that we have to consider.
Measuring intact protons is crucial in order to suppress the pile up background. The method is quite simple. Since, for signal, we detect and measure all particles in the final state (namely the two photons, and the two intact protons), we can match the kinematical information as measured by the two photons with the one using the two protons, namely the rapidity and mass defined as
\begin{eqnarray}
M_{pp} &=& \sqrt{\xi_1 \xi_2 s} = M_{\gamma \gamma} \nonumber \\
y_{pp} &=& \frac{1}{2} \log \left( \frac{\xi_1}{\xi_2} \right) = y_{\gamma \gamma} \nonumber
\end{eqnarray}
where $\xi_1$ and $\xi_2$ are the proton fractional momentum losses. The results are shown in Fig.~\ref{fig4}, left, for the mass ratio and in Fig.~\ref{fig4}, right, for the rapidity difference between the $pp$ and $\gamma \gamma$ information, for signal (black full line) and for the pile up background (red dashed lines). It is clear that these variables can reject most of the pile up background, and we indeed obtain less than 0.1 event of background for 300 fb$^{-1}$. The sensitivity to quartic photon anomalous couplings thus reaches a few
$10^{-15}$ GeV$^{-4}$, which is better by more than two orders of magnitude with respect to ``standard'' methods
at the LHC~\cite{pap2_1,pap2_2}.
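A minimal Python sketch of this proton-photon kinematic matching follows; the $\xi$ values, $\sqrt{s}$, and the matching tolerances are illustrative assumptions, not the cut values of the analysis.

```python
import math

def pp_mass_rapidity(xi1, xi2, sqrt_s):
    """Central-system mass and rapidity reconstructed from the two proton
    fractional momentum losses xi1, xi2 measured in the Roman pots (GeV)."""
    m = math.sqrt(xi1 * xi2) * sqrt_s
    y = 0.5 * math.log(xi1 / xi2)
    return m, y

def matches(m_pp, y_pp, m_gg, y_gg, rel_m=0.05, dy=0.1):
    """Exclusivity requirement: proton- and photon-based kinematics agree.
    Tolerances are illustrative placeholders, not the experimental cuts."""
    return abs(m_pp / m_gg - 1.0) < rel_m and abs(y_pp - y_gg) < dy

# Illustrative signal-like event at sqrt(s) = 13 TeV
m, y = pp_mass_rapidity(xi1=0.08, xi2=0.02, sqrt_s=13000.0)
# m = 520 GeV, y = 0.5*ln(4) ~ 0.69
```

For a pile up event the protons come from unrelated soft interactions, so $M_{pp}$ and $y_{pp}$ are uncorrelated with the diphoton system and the `matches` requirement fails in most cases.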
Let us note that exclusivity cuts using proton tagging are crucial to suppress backgrounds since,
without the matching mass and rapidity requirements, the
background would be about 80.2 events for 300 fb$^{-1}$. Running Roman pot detectors at high luminosity at the LHC in both ATLAS and CMS-TOTEM was indeed motivated by the gain that we obtain on the reach on anomalous couplings~\cite{pap1_1,pap1_2,pap1_3,pap1_4}. This is now becoming a reality: both CMS-TOTEM and ATLAS recently reported the observation of QED exclusive dilepton production~\cite{dilepton1, dilepton2}, and CMS-TOTEM the first limits on quartic photon anomalous couplings with about 9.4 fb$^{-1}$ of data~\cite{cmsdiphoton}. The analysis with the total accumulated luminosity (about 110 fb$^{-1}$) is in progress.
This method can be applied directly to the search for axion-like particles (ALPs) as an example. ALPs can be produced as a resonance via photon-induced processes, and we can detect them using the method described above if they decay into two photons, for example. The sensitivity plot (coupling versus mass of the ALP) is shown in Fig.~\ref{fig5} for $pp$ interactions with 300 fb$^{-1}$ of data as a grey region at high ALP masses~\cite{alp1,alp2}. We gain about two orders of magnitude in sensitivity for ALP masses of the order of 1 TeV with respect to standard LHC methods, and we reach a new domain at high mass that cannot be reached without tagging the protons. In addition, we also show for reference the complementarity with $\mathrm{PbPb}$ running, which covers the region at lower masses (typically ALP masses in the range 10--500 GeV), since the cross section is enhanced by a factor of $Z^4$ for $\mathrm{PbPb}$ running~\cite{alp1,alp2}. In this case, we do not detect the intact or dissociated heavy ions in Roman pot detectors but we use the rapidity gap method, since the amount of pile up in heavy ion runs at the LHC is negligible.
\begin{figure}[h]
\centering
\includegraphics[width=0.65\textwidth]{Fig2_gamma_final-eps-converted-to.pdf}
\caption{Number of events as a function of the diphoton mass for signal and background for exclusive $\gamma \gamma$ production for 300 fb$^{-1}$.}
\label{fig3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{Fig3_gamma_final-eps-converted-to.pdf}
\caption{Mass ratio and rapidity difference between the $pp$ and $\gamma \gamma$ information for signal (in full line) and pile up background (dashed line).}
\label{fig4}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{alp_heavyions-eps-converted-to.pdf}
\caption{Coupling vs ALP mass sensitivity plot. The reach using the measurement of two intact protons and the two photons for photon-induced processes is shown as a grey area for $pp$ collisions, and we also indicate the reach using heavy ion runs at the LHC covering the intermediate mass region.}
\label{fig5}
\end{figure}
\section{Anomalous production of $Z \gamma$ and $WW$ via photon-induced processes}
Our previous study can be extended to other exclusive productions via photon exchanges, and we will discuss briefly the production of $Z \gamma$ and $WW$ events. Exactly the same method of matching the mass and rapidity measurements of the $Z \gamma$ system with the tagged proton information can be used. The new aspect of this study is that we can consider both leptonic and hadronic decays of the $Z$ boson. Of course the resolution on the mass and rapidity matching is worse, since jets are measured less precisely than leptons, as illustrated in Fig.~\ref{fig6b}, but it leads to unprecedented sensitivities to $\gamma \gamma \gamma Z$ anomalous couplings, up to 10$^{-13}$, better by three orders of magnitude~\cite{gammaz} than the sensitivities obtained without tagging the protons at the LHC (the usual method being to look for the three-photon decay of the $Z$ boson).
The same study can be used to observe the SM exclusive production of $WW$ bosons via photon exchanges and also to increase our sensitivity to quartic $\gamma \gamma WW$ anomalous couplings. Recent studies~\cite{ww} showed that the strategy is somewhat different for SM and BSM studies. To measure the SM exclusive $WW$ production (the cross section is of the order of 95.6 fb at the LHC), the best sensitivity originates from the leptonic decays of the $W$s, where we can obtain about 50 events with 2 events of background for 300 fb$^{-1}$. The non-zero background originates from the fact that the neutrinos from the leptonic decays of the $W$ bosons obviously cannot be measured, which is why the mass and rapidity matching does not work as nicely. Fast timing detectors are needed to further suppress the background in this case. The strategy to look for $\gamma \gamma WW$ quartic anomalous couplings is slightly different, since the anomalous coupling events appear at high $WW$ mass as shown in Fig.~\ref{fig6}. The best sensitivity to quartic $\gamma \gamma WW$ couplings is obtained by looking at the hadronic decays of the $W$ bosons, even if the dijet background is quite high. The sensitivity with 300 fb$^{-1}$ is of the order of $3.7 \times 10^{-7}$ GeV$^{-2}$, better by almost three orders of magnitude than the present LHC sensitivity. This can be further improved by using more advanced jet variables such as subjettiness in order to further reject the dijet background.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{proceed_lowx2021/zgamma_fig2-eps-converted-to.pdf}
\caption{Missing mass ratio and rapidity difference between the $Z \gamma$ and the di-proton system in the case when the $Z$ boson decays into two jets.}
\label{fig6b}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ww_mass_anomalous}
\caption{$WW$ mass distribution for exclusive $WW$ production (SM: red dashed line; anomalous couplings: full line with markers).}
\label{fig6}
\end{figure}
\section{Conclusion}
In this report we considered the exclusive production of $\gamma \gamma$, $WW$, and $Z \gamma$ via photon-induced processes, considering the LHC as a $\gamma \gamma$ collider. Tagging the protons in the dedicated ATLAS-AFP or CMS-TOTEM-PPS Roman pot detectors, as well as the $\gamma \gamma$, $WW$, or $Z \gamma$ system in the main ATLAS or CMS detector, ensures that we have a photon-induced process since
gluon exchanges are suppressed at high mass in the acceptance of the Roman pot detectors.
Matching the kinematical information of the central system with the tagged protons ensures that we have a background-free experiment and any
observed event is a signal. This leads to sensitivities to quartic anomalous couplings better by two or three orders of magnitude with respect to the standard methods at the LHC, depending on the process.
\iffalse
\part[Phenomenology of the hadronic structure at small-$x$\\ \phantom{x}\hspace{4ex}\it{Francesco Giovanni Celiberto}]{}
\vspace{-0.20cm}
\section{Introduction}
\label{sec:intro_celib}
\vspace{-0.20cm}
Unraveling the inner structure of hadrons through a multi-dimensional study of their constituents represents frontier research in phenomenological studies at new-generation colliding facilities.
The well-established collinear factorization, which relies on a one-dimensional description of the proton content via collinear parton distribution functions (PDFs), has collected a long chain of achievements in describing data at hadron and lepton-hadron accelerators.
There are however vital questions on the dynamics of strong interactions which still do not have an answer.
Unveiling the origin of spin and mass of the nucleons requires a stretch of our vision from the collinear description to a tomographic viewpoint in three dimensions, naturally afforded by the \emph{transverse-momentum-dependent} (TMD) factorization.
In the small-$x$ regime a purely TMD-based approach may not be adequate, since large contributions proportional to $\ln (1/x)$ enter the perturbative series with a power that grows with the order, and need to be accounted for via an all-order resummation procedure.
The most powerful tool to resum those large logarithms is the Balitsky--Fadin--Kuraev--Lipatov (BFKL) formalism~\cite{Fadin:1975cb,Kuraev:1976ge,Kuraev:1977fs,Balitsky:1978ic} in the leading approximation (LL$x$), which means inclusion of all terms proportional to $\alpha_s^n \ln (1/x)^n$, and in the next-to-leading approximation (NLL$x$), including all terms proportional to $\alpha_s^{n+1} \ln (1/x)^n$.
In this paper we report progress on the study of the proton structure at small-$x$ via two distinct kinds of gluon distributions.
In Section~\ref{sec:TMDs} we present the main features of a quite recent calculation of all (un)polarized gluon TMD distributions at leading twist, whose definition genuinely embodies BFKL effects.
In Section~\ref{sec:UGD} we provide evidence that helicity amplitudes for the exclusive leptoproduction of $\rho$-mesons act as discriminators for the existing models of the BFKL \emph{unintegrated gluon distribution} (UGD).
Our message is that forthcoming analyses at new-generation colliders will help to shed light on the hadronic structure at small-$x$ and to explore the interplay between different formalisms, such as the TMD factorization and the BFKL resummation.
\vspace{-0.20cm}
\section{Small-$x$ improved transverse-momentum dependent gluon distributions}
\label{sec:TMDs}
\vspace{-0.20cm}
The complete list of unpolarized and polarized gluon TMDs at leading twist (twist-2) was given for the first time in Ref.~\cite{Mulders:2000sh}. In Tab.~\ref{tab:gluon_TMDs} we present the eight twist-2 gluon TMDs for a spin-1/2 target, using the nomenclature convention as in Refs.~\cite{Meissner:2007rx,Lorce:2013pza}.
The two functions on the diagonal in Tab.~\ref{tab:gluon_TMDs} respectively represent the density of unpolarized gluons inside an unpolarized nucleon, $f_1^g$, and of circularly polarized gluons inside a longitudinally polarized nucleon, $g_1^g$.
In the collinear regime they correspond to the well-known unpolarized and helicity gluon PDFs.
TMD distributions receive contributions from the resummation of transverse-momentum logarithms which enter perturbative calculations. Much is known about this resummation~\cite{Bozzi:2003jy,Catani:2010pd,Echevarria:2015uaa}, but very little is known about the genuinely non-perturbative TMD content.
The distribution of linearly polarized gluons in an unpolarized hadron, $h_1^{\perp g}$, is particularly relevant, since it gives rise to spin effects in collisions of unpolarized hadrons~\cite{Boer:2010zf,Sun:2011iw,Boer:2011kf,Pisano:2013cya,Dunnen:2014eta,Lansberg:2017tlc}, whose size is expected to increase at small-$x$ values.
The Sivers function, $f_{1T}^{\perp g}$, gives us information about the density of unpolarized gluons in a transversely polarized nucleon, and plays a key role in the description of transverse-spin asymmetries that can be studied in collisions with polarized-proton beams.
Notably, in Ref.~\cite{Boussarie:2019vmk} it was argued that in the forward limit the Sivers function can be accessed in unpolarized electron-nucleon collisions via its connection with the QCD Odderon.
At variance with collinear distributions, TMDs are process-dependent via the \emph{gauge links} (or \emph{Wilson lines})~\cite{Brodsky:2002cx,Collins:2002kn,Ji:2002aa}.
Quark TMDs depend on the $[+]$ and $[-]$ staple links, which set the direction of future- and past-pointing Wilson lines, respectively.
Gluon TMDs exhibit a more involved gauge-link dependence, since they are sensitive to combinations of two or more staple links. This leads to a more diversified kind of \emph{modified universality}.
Two major gluon gauge links appear: the $f\text{-type}$ and the $d\text{-type}$ ones. In the context of small-$x$ studies they are also known as Weizs\"acker--Williams and dipole structures, respectively.
The antisymmetric $f_{abc}$ QCD color structure appears in the $f$-type $T$-odd gluon-TMD correlator, whereas the symmetric $d_{abc}$ structure characterizes the $d$-type $T$-odd one. This fact leads to a dependence of $f$-type gluon TMDs on the $[\pm,\pm]$ gauge-link combinations, while $d$-type gluon TMDs are characterized by the $[\pm,\mp]$ ones.
Much more complicated, box-loop gauge links emerge in processes where multiple color exchanges connect both the initial and final states~\cite{Bomhof:2006dp}. This leads to a violation of the TMD factorization~\cite{Rogers:2010dm} (see also Ref.~\cite{Rogers:2013zha}).
{
\renewcommand{\arraystretch}{1.7}
\begin{table}
\centering
\hspace{1cm} gluon pol. \\ \vspace{0.1cm}
\rotatebox{90}{\hspace{-1cm} nucleon pol.} \hspace{0.1cm}
\begin{tabular}[c]{|m{0.5cm}|c|c|c|}
\hline
& $U$ & circular & linear \\
\hline
$U$ & $f_{1}^{g}$ & & \textcolor{blue}{$h_{1}^{\perp g}$} \\
\hline
$L$ & & $g_{1}^{g}$ & \textcolor{red}{$h_{1L}^{\perp g}$} \\
\hline
$T$ & \textcolor{red}{$f_{1T}^{\perp g}$} & \textcolor{blue}{$g_{1T}^{g}$} & \textcolor{red}{$h_{1}^{g}$}, \textcolor{red}{$h_{1T}^{\perp g}$} \\
\hline
\end{tabular}
\caption{A table of leading-twist gluon TMDs for spin-$1/2$ targets.
$U$, $L$, $T$ stand for unpolarized, longitudinally polarized and transversely polarized hadrons, whereas
$U$, `circular', `linear' depict unpolarized, circularly polarized and linearly polarized gluons, respectively.
$T$-even (odd) functions are given in blue (red).
Black functions are $T$-even and survive the integration over the gluon transverse momentum.}
\label{tab:gluon_TMDs}
\end{table}
}
From the experimental point of view, the gluon-TMD sector is a largely unexplored field.
First attempts at phenomenological analyses of the unpolarized gluon function have been presented in Refs.~\cite{Lansberg:2017dzg,Gutierrez-Reyes:2019rug,Scarpa:2019fol}. Phenomenological studies of the gluon Sivers TMD can be found in Refs.~\cite{Adolph:2017pgv, DAlesio:2017rzj,DAlesio:2018rnv,DAlesio:2019qpk}.
Therefore, exploratory analyses of gluon TMDs via simple and flexible models are needed. Pioneering studies along this direction were carried out in the so-called \emph{spectator-model} framework~\cite{Lu:2016vqu,Mulders:2000sh,Pereira-Resina-Rodrigues:2001eda}.
Formerly employed in the description of quark TMD distributions~\cite{Bacchetta:2008af,Bacchetta:2010si,Gamberg:2005ip,Gamberg:2007wm,Jakob:1997wg,Meissner:2007rx}, it is based on the assumption that the struck hadron with mass ${\cal M}$ and momentum ${\cal P}$ emits a gluon with longitudinal fraction $x$, momentum $p$, and transverse momentum $\boldsymbol{p}_T$, and what remains is treated as an effective on-shell particle with mass ${\cal M}_X$ and spin 1/2.
Within this model, taken at tree level, all the leading-twist TMDs in Table~\ref{tab:gluon_TMDs} can be calculated. Spectator-model gluon $T$-even densities were recently calculated in Ref.~\cite{Bacchetta:2020vty} and presented in Refs.~\cite{Celiberto:2021zww,Bacchetta:2021oht}.
In those works the nucleon-gluon-spectator vertex was modeled as follows
\begin{equation}
\label{eq:form_factor}
{\cal G}^{\, \mu} = \left( \tau_1(p^2) \, \gamma^{\, \mu} + \tau_2(p^2) \, \frac{i}{2{\cal M}} \sigma^{\, \mu\nu}p_\nu \right) \,,
\end{equation}
the $\tau_1$ and $\tau_2$ functions being dipolar form factors in $\boldsymbol{p}_T^2$. A dipolar profile for the couplings is useful to tame gluon-propagator divergences, quench large-$\boldsymbol{p}_T$ effects which are beyond the reach of a pure TMD description, and remove logarithmic singularities coming from $\boldsymbol{p}_T$-integrated distributions.
Furthermore, the spectator mass was allowed to take a continuous range of values weighted by a spectral function ${\cal S}_{\rm } ({\cal M}_X)$, which provides the necessary flexibility to reproduce both the small- and the moderate-$x$ shape of gluon collinear PDFs.
The analytic expression of the spectral function contains seven parameters and reads
\begin{equation}
\label{eq:rhoX}
{\cal S}({\cal M}_X) = \mu^{2a} \left( \frac{A}{B + \mu^{2b}} + \frac{C}{\pi \sigma} e^{-\frac{({\cal M}_X - D)^2}{\sigma^2}} \right) \,.
\end{equation}
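As an aid to intuition, the spectral function above can be evaluated numerically. The sketch below assumes $\mu$ is identified with the spectator mass ${\cal M}_X$ (the text does not spell this out), and all parameter values are purely illustrative, not the fitted ones from Ref.~\cite{Bacchetta:2020vty}:

```python
import math

def spectral_weight(MX, A, B, C, a, b, D, sigma):
    """Evaluate the spectral function S(M_X) of Eq. (eq:rhoX).

    Assumption: mu is identified with the spectator mass M_X.
    All parameter values passed in are illustrative only."""
    mu2 = MX**2
    power_term = A / (B + mu2**b)                      # slowly falling tail
    gauss_term = (C / (math.pi * sigma)) * math.exp(
        -((MX - D)**2) / sigma**2)                     # resonant-like bump
    return mu2**a * (power_term + gauss_term)

# Illustrative call with made-up parameters (GeV units)
w = spectral_weight(MX=1.0, A=1.0, B=2.0, C=0.5, a=0.5, b=1.0, D=1.2, sigma=0.3)
assert w > 0.0
```

The two terms mirror the two roles mentioned in the text: the power-law piece shapes the small-$x$ behaviour, the Gaussian piece adds flexibility at moderate $x$.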
Model parameters were fixed by performing a simultaneous fit of our unpolarized and helicity TMDs, $f_1^g$ and $g_1^g$, to the corresponding collinear PDFs from {\tt NNPDF}~\cite{Ball:2017otu,Nocera:2014gqa} at the initial scale $Q_0 = 1.64$ GeV. The statistical uncertainty of the fit was obtained via the widely known bootstrap method.
We refer to Ref.~\cite{Bacchetta:2020vty} for details on the fitting procedure and quality.
We stress that since our tree-level approximation does not take into account the gauge link, our model $T$-even TMDs are process-independent.
Preliminary results for spectator-model $T$-odd TMDs at twist-2 and their dependence on the gauge link can be found in Refs.~\cite{Bacchetta:2021lvw,Bacchetta:2021twk,Bacchetta:2022esb}.
Pursuing the goal of shedding light on the full 3D dynamics of gluons inside the proton, we consider the following densities, which describe the 2D $\boldsymbol{p}_T$-distribution of gluons for different combinations of their polarization and the proton spin. For an unpolarized proton, we identify the unpolarized density
\begin{equation}
x \rho (x, p_x, p_y) = x f_1^g (x, \boldsymbol{p}_T^2)
\label{eq:rho_unpol}
\end{equation}
as the probability density of finding unpolarized gluons at given $x$ and $\boldsymbol{p}_T$, while the Boer--Mulders distribution
\begin{equation}
x \rho^{\leftrightarrow} (x, p_x, p_y) = \frac{1}{2} \bigg[ x f_1^g (x, \boldsymbol{p}_T^2) + \frac{p_x^2 - p_y^2}{2 M^2} \, x h_1^{\perp g} (x, \boldsymbol{p}_T^2) \bigg]
\label{eq:rho_T}
\end{equation}
represents the probability density of finding linearly-polarized gluons in the transverse plane at $x$ and $\boldsymbol{p}_T$.
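To make the dipolar structure of Eq.~\eqref{eq:rho_T} concrete, here is a minimal numerical sketch. The Gaussian profiles for $f_1^g$ and $h_1^{\perp g}$ are made up for illustration (widths and normalizations are not the fitted spectator-model TMDs), and $M$ is taken to be the proton mass:

```python
import math

M = 0.938  # proton mass in GeV (assumption: M in Eq. (eq:rho_T) is the proton mass)

def f1_toy(pT2):
    # purely illustrative Gaussian profile, not the fitted spectator-model TMD
    return math.exp(-pT2 / 0.5)

def h1perp_toy(pT2):
    # illustrative profile for the linearly-polarized gluon TMD
    return 0.8 * math.exp(-pT2 / 0.4)

def rho_BM(x, px, py):
    """Boer-Mulders density of Eq. (eq:rho_T), up to the overall x-weighting."""
    pT2 = px**2 + py**2
    return 0.5 * (f1_toy(pT2) + (px**2 - py**2) / (2 * M**2) * h1perp_toy(pT2))

# Dipolar structure: maxima along p_x, minima along p_y at equal |p_T|
assert rho_BM(1e-3, 0.5, 0.0) > rho_BM(1e-3, 0.0, 0.5)
```

Since $p_x^2 - p_y^2 = \boldsymbol{p}_T^2 \cos 2\phi$, the modulation is a $\cos 2\phi$ oscillation on top of the cylindrically symmetric unpolarized density, which is exactly the dipole visible in Fig.~\ref{fig:gluon_TMDs}.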
Contour plots in Fig.~\ref{fig:gluon_TMDs} show the $\boldsymbol{p}_T$-shape of the $\rho$-distributions in Eqs.~\eqref{eq:rho_unpol} and~\eqref{eq:rho_T}, respectively, obtained at $Q_0 = 1.64$ GeV and $x=10^{-3}$ for an unpolarized proton virtually moving towards the reader. The color code quantifies the size of the oscillation of each distribution along the $p_x$ and $p_y$ directions. To better visualize these oscillations, ancillary 1D plots showing the corresponding density at $p_y = 0$ are displayed below each contour plot. As expected, the density of Eq.~\eqref{eq:rho_unpol} exhibits a cylindrical symmetry around the direction of motion of the proton pointing towards the reader. Since the nucleon is unpolarized but the gluons are linearly polarized along the $p_x$ direction, the Boer--Mulders $\rho$-density in Eq.~\eqref{eq:rho_T} shows a dipolar structure. The departure from cylindrical symmetry is enhanced at small $x$, where the Boer--Mulders function is particularly large.
From the analytic point of view, the ratio between the $f_1^g$ and $h_1^{\perp g}$ TMDs tends to a constant in the asymptotic small-$x$ limit, $x \to 0^+$.
This is in line with the prediction coming from the linear BFKL evolution, namely that at low-$x$ the ``number'' of unpolarized gluons is equal to the number of linearly-polarized ones, up to higher-twist effects (see, \emph{e.g.}, Refs.~\cite{Dominguez:2011br,Marquet:2016cgx,Taels:2017shj,Marquet:2017xwy,Petreska:2018cbf}).
Thus, a point of contact between our model gluon TMDs and high-energy dynamics has been established.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.22,clip]{gf1_h1p_x-0p001.pdf}
\caption{
3D tomographic imaging of the proton unpolarized (left) and Boer--Mulders (right) gluon TMD densities as functions of the gluon transverse momentum, for $x = 10^{-3}$ and at the initial energy scale, $Q_0 = 1.64$ GeV. 1D ancillary panels below main contour plots show the density at $p_y = 0$.
Figures from~\cite{Bacchetta:2020vty}.
}
\label{fig:gluon_TMDs}
\end{figure}
\vspace{-0.20cm}
\section{Unintegrated gluon distribution}
\label{sec:UGD}
\vspace{-0.20cm}
The BFKL approach~\cite{Fadin:1975cb,Kuraev:1977fs,Balitsky:1978ic} affords us a factorized formula for scattering amplitudes (and thence, in inclusive reactions, for cross sections) given as a convolution of the universal BFKL Green's function and two process-dependent impact factors describing the transition from each initial-state particle to the corresponding detected object.
The connection between the UGD and gluon TMDs is still largely uncharted. From a formal perspective, the TMD formalism relies on parton correlators and thus on Wilson lines, whereas the BFKL approach ``speaks'' the language of Reggeized gluons. From a phenomenological viewpoint, TMD factorization is expected to hold at low transverse momenta, whereas the BFKL resummation requires large transverse-momentum emissions.
A first connection between the UGD and the unpolarized and linearly polarized gluon TMDs, $f^g_1$ and $h^{\perp g}_1$,
was investigated in Refs.~\cite{Dominguez:2011wm,Hentschinski:2021lsh,Nefedov:2021vvy}.
The first class of processes that serves as probe channels of the BFKL dynamics is represented by the inclusive \emph{semi-hard} emission~\cite{Gribov:1983ivg} of two particles with high transverse momenta and well separated in rapidity (see Refs.~\cite{Celiberto:2017ius,Celiberto:2020wpk} for an overview of recent applications). Here the established factorization is \emph{hybrid}: the pure high-energy factorization is supplemented by collinear densities, which enter the expressions of the impact factors.
In the last thirty years several phenomenological studies have been proposed for different semi-hard final states.
An incomplete list includes: the inclusive hadroproduction of two jets featuring large transverse momenta and well separated in rapidity (the Mueller--Navelet channel~\cite{Mueller:1986ey}), for which several phenomenological studies have appeared so far (see, \emph{e.g.}, Refs.~\cite{Colferai:2010wu,Caporale:2012ih,Ducloue:2013hia,Ducloue:2013bva,Caporale:2013uva,Caporale:2014gpa,Colferai:2015zfa,Caporale:2015uva,Ducloue:2015jba,Celiberto:2015yba,Celiberto:2015mpa,Celiberto:2016ygs,Celiberto:2016vva,Caporale:2018qnm}), the inclusive detection of two rapidity-separated light-flavored bound states~\cite{Celiberto:2016hae,Celiberto:2016zgb,Celiberto:2017ptm,Celiberto:2017uae,Celiberto:2017ydk}, three- and four-jet hadroproduction~\cite{Caporale:2015vya,Caporale:2015int,Caporale:2016soq,Caporale:2016vxt,Caporale:2016xku,Celiberto:2016vhn,Caporale:2016djm,Caporale:2016lnh,Caporale:2016zkc}, $J/\Psi$-plus-jet~\cite{Boussarie:2017oae,Boussarie:2017xdy}, hadron-plus-jet~\cite{Bolognino:2019cac,Bolognino:2019yqj}, Higgs-plus-jet~\cite{Celiberto:2020tmb,Celiberto:2021fjf,Celiberto:2021tky,Celiberto:2021txb,Celiberto:2020rxb,Celiberto:2021xpm}, heavy-light dijet systems~\cite{Bolognino:2021mrc,Bolognino:2021hxx}, heavy-flavor production~\cite{Celiberto:2017nyx,Bolognino:2019yls,Bolognino:2019ccd,Celiberto:2021dzy,Celiberto:2021fdp,Bolognino:2022wgl}, and forward Drell--Yan dilepton production with a possible backward-jet detection~\cite{Golec-Biernat:2018kem}.
This permitted us to define BFKL-sensitive observables as well as to disengage the BFKL dynamics from collinear contaminations~\cite{Celiberto:2015yba,Celiberto:2015mpa,Celiberto:2020wpk}.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{T_R_all_W100_full.pdf} \hspace{0.5cm}
\includegraphics[width=0.47\textwidth]{T_R_all_W30_full.pdf}
\caption{$Q^2$-dependence of the polarized ${\cal T}_{11}/{\cal T}_{00}$ ratio, for all the considered UGD models, for HERA at $W = 100$ GeV (left) and for the EIC at $W = 30$ GeV (right).
Uncertainty bands represent the effect of varying $a_2(\mu_0 = 1\,{\rm GeV})$ between $0.0$ and $0.6$.
Numerical results were obtained through the \emph{Leptonic-Exclusive-Amplitudes} ({\tt LExA}) super-module of the {\tt JETHAD} interface~\cite{Celiberto:2020wpk}. Figures from~\cite{Bolognino:2021niq}.}
\label{fig:rho}
\end{figure}
The second kind of high-energy probe channels is represented by single emissions of forward particles. Here we access the proton content through the UGD, whose small-$x$ evolution is controlled by the BFKL Green's function. Being a non-perturbative object, the UGD is not well known, and several phenomenological models for it have been built so far. The UGD has been probed via inclusive deep inelastic scattering~\cite{Hentschinski:2012kr,Hentschinski:2013id}, the exclusive electro- or photo-production of vector mesons at HERA~\cite{Anikin:2009bf,Anikin:2011sa,Besse:2013muy,Bolognino:2018rhb,Bolognino:2018mlw,Bolognino:2019bko,Bolognino:2019pba,Celiberto:2019slj,Bautista:2016xnp,Garcia:2019tne,Hentschinski:2020yfm} and the EIC~\cite{Bolognino:2021niq,Bolognino:2021gjm,Bolognino:2022uty}, single inclusive heavy-quark emission at the LHC~\cite{Chachamis:2015ona}, and forward Drell--Yan production at LHCb~\cite{Motyka:2014lya,Brzeminski:2016lwh,Motyka:2016lta,Celiberto:2018muu}.
We consider the exclusive $\rho$-meson production in lepton-proton collisions via the subprocess
\begin{equation}
\label{eq:subprocess}
\gamma^*_{\lambda_i} (Q^2) \, p \; \to \; \rho_{\lambda_f} p \;,
\end{equation}
where a photon with virtuality $Q^2$ and polarization $\lambda_i$ interacts with the proton and a $\rho$-meson with polarization $\lambda_f$ is produced. The two polarization states $\lambda_{i,f}$ can assume values $0$ (longitudinal) and $\pm 1$ (transverse).
Since a strict semi-hard scale ordering holds, $W^2 \gg Q^2 \gg \Lambda^2_{\rm QCD}$ ($W$ is the subprocess center-of-mass energy), one enters the small-$x$ regime given by $x = Q^2/W^2$. The BFKL approach provides us with a high-energy factorized formula for polarized amplitudes
\begin{equation}
\label{eq:ampltude}
{\cal T}_{\lambda_i \lambda_f}(W^2, Q^2) = \frac{i W^2}{(2 \pi)^2} \int \frac{{\rm d}^2 \boldsymbol{p}_T}{(\boldsymbol{p}_T^2)^2} \; \Phi^{\gamma^*_{\lambda_i} \to \rho_{\lambda_f}}(\boldsymbol{p}_T^2, Q^2) \, f_g^{\rm BFKL} (x, \boldsymbol{p}_T^2, Q^2) \;,
\end{equation}
with $\Phi^{\gamma^*_{\lambda_i} \to \rho_{\lambda_f}}(\boldsymbol{p}_T^2, Q^2)$ being the impact factor that describes the $\gamma^* \to \rho$ transition and encodes the collinear distribution amplitudes (DAs; for further details see Section~2 of Ref.~\cite{Bolognino:2021niq}), and $f_g^{\rm BFKL}$ the BFKL UGD. In our study we consider the seven UGD models given in Section~3 of Ref.~\cite{Bolognino:2021niq}.
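For orientation, the $x$ values probed in the two kinematic setups discussed below follow directly from the relation $x = Q^2/W^2$ quoted above; a minimal numerical sketch:

```python
def x_bjorken(Q2, W):
    """Small-x variable x = Q^2 / W^2 from the semi-hard scale ordering."""
    return Q2 / W**2

# HERA-like (W = 100 GeV) vs EIC-like (W = 30 GeV) kinematics, Q^2 in GeV^2
for Q2 in (2.4, 10.0, 30.0):
    # at fixed Q^2, the higher-energy setup probes smaller x
    assert x_bjorken(Q2, 100.0) < x_bjorken(Q2, 30.0)

x_hera = x_bjorken(10.0, 100.0)  # x of order 10^-3
```

At $Q^2 = 10$ GeV$^2$ one gets $x = 10^{-3}$ for HERA at $W = 100$ GeV, squarely in the small-$x$ regime, while the EIC at $W = 30$ GeV sits at roughly an order of magnitude larger $x$.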
In Fig.~\ref{fig:rho} we show the $Q^2$-dependence of ${\cal T}_{11} / {\cal T}_{00}$. We compare our predictions with HERA data~\cite{Aaron:2009xp} at $W = 100$ GeV (left panel), and we present new results for the EIC at the reference energy of $W = 30$ GeV (right panel). We use the twist-2 (twist-3) DAs for the longitudinal (transverse) case, and we gauge the impact of the collinear evolution of the DAs via a variation of the non-perturbative parameter $a_2(\mu_0 = 1\,{\rm GeV})$ in the range 0.0 to 0.6.
We note that our predictions span a large range and that none of the UGD models agrees with HERA data over the whole $Q^2$-window, with the ABIPSW, IN and GBW ones better reproducing the intermediate-$Q^2$ range. Results at EIC energies show a reduced spread between models, together with a hierarchy inversion in some regions of $Q^2$. This provides clear evidence that the ${\cal T}_{11} / {\cal T}_{00}$ helicity ratio acts as a discriminator for the UGD.
\vspace{-0.20cm}
\section{Future perspectives}
\label{sec:conclusions}
\vspace{-0.20cm}
We reported progress on the study of the proton structure at small-$x$ via two distinct kinds of gluon distributions: the (un)polarized gluon TMD functions and the BFKL UGD.
All the presented results are relevant to explore the proton content at small-$x$, where the intrinsic motion of the constituent gluons plays an important role in the description of observables sensitive to different combinations of the hadron and the parton polarization states.
Here, a key ingredient for a consistent description of the proton structure is the interplay between genuine TMD and high-energy effects.
We believe that a path towards the first extraction of small-$x$-improved gluon distributions from a global fit to data from new-generation colliding facilities, such as the EIC~\cite{Accardi:2012qut,AbdulKhalek:2021gbh}, the HL-LHC~\cite{Chapon:2020heu}, the FPF~\cite{Anchordoqui:2021ghd}, and NICA-SPD~\cite{Arbuzov:2020cqg}, has been traced.
\vspace{-0.20cm}
\section*{Acknowledgements}
\vspace{-0.20cm}
We thank Alessandro Bacchetta, Andr\`ee Dafne Bolognino, Dmitry Yu. Ivanov, Alessandro Papa, Marco Radici, Wolfgang Sch\"afer, Antoni Szczurek, and Pieter Taels for collaboration.
\vspace{-0.20cm}
\nocite{*}
\bibliographystyle{auto_generated}
\part[Parton distribution functions and intrinsic charm at LHCb\\ \phantom{x}\hspace{4ex}\it{Cristina S\'{a}nchez Gras on behalf of the LHCb Collaboration}]{}
\section{Introduction}
Initially designed for the study of forward beauty and charm physics, the LHCb detector has the pseudorapidity ($\eta$) coverage $2 < \eta < 4.5$~\cite{lhcbdetector}. Its remarkable vertex reconstruction and particle identification performance, together with its high momentum resolution, have now established it as a general purpose detector. The forward coverage allows LHCb to reach \mbox{Bjorken-$x$} values (where $x$ is the fraction of momentum carried by a parton) complementary to those accessible by other general purpose detectors, such as CMS~\cite{cms} and ATLAS~\cite{atlas_gras}. This makes it possible to probe proton parton distribution functions (PDFs) at very low and very high $x$ values.
On the low-$x$ side, central exclusive production (CEP) of $J/\psi$ and $\psi(2S)$ mesons in proton-proton ($pp$) collisions can probe the gluon PDF down to $x \sim 10^{-6}$~\cite{lhcbCEP}. For high values of $x > 0.1$, $Z$ boson production in association with charm jets can be used to determine the intrinsic charm content of the proton~\cite{lhcbIC}. The most recent LHCb results for these two cases are presented in this note.
\section{\boldmath Central exclusive production of $J/\psi$ and $\psi(2S)$ mesons in $pp$ collisions}
The diffractive process \mbox{$pp \rightarrow p + X + p$}, in which the two protons stay intact following their interaction, is known as central exclusive production (CEP). The protons interact via the exchange of colourless objects. In the case of vector meson production, the exchange of a photon and a pomeron is known as photoproduction. At leading order in perturbative quantum chromodynamics (QCD), the photoproduction cross-section is proportional to the square of the gluon PDF. In $pp$ collisions within the LHCb coverage, the gluon PDF can then be probed at very low $x$ values of $x \sim 10^{-6}$.
This process has a very distinctive signature: low final state multiplicity, as only the muons that follow the meson decay are present in the detector, and large rapidity gaps (regions of no activity) around the dimuon system. The latter can be spoiled when one of the protons dissociates after the interaction, making it an inelastic CEP process. The proton remnants in that case are produced outside the $2 < \eta < 5$ coverage and escape detection. To veto these processes, three (two) scintillator stations are in place around the beam pipe upstream (downstream) from the interaction point, forming HeRSCheL. The HeRSCheL (High Rapidity Shower Counters for LHCb) detector consists of five $60 \times 60 \ \mathrm{cm}^2$ stations equipped with four scintillating pads each~\cite{herschel}. It increases the LHCb coverage to $1.5 < \eta < 10$ in the forward region and to $-10 < \eta < -5$ and $-3.5 < \eta < -1.5$ in the backward region. Charged particles produced when a proton dissociates are detected by the scintillating pads. A $\chi^2$-like figure of merit built with the activity registered at the HeRSCheL stations is used to veto these events.
CEP events are selected by requiring exactly two muon tracks and imposing a veto on the activity in HeRSCheL. The number of signal events is obtained by fitting the dimuon squared transverse momentum ($p_{\rm T}^2$) distribution. Following Regge theory, the cross-section for $J/\psi$ and $\psi(2S)$ CEP events follows \mbox{${\rm d}\sigma/{\rm d}y \sim \exp(-bp_{\rm T}^2)$}, with $b \sim 6$ (GeV$/c)^{-2}$. The background arising from the dissociation of one of the protons is parametrised in a sample with the HeRSCheL veto inverted. Aside from this, two more backgrounds need to be accounted for: nonresonant dimuon production and feed-down to $J/\psi$ from $\psi(2S)$ and $\chi_{c_{J}} \ (J=0,1,2)$. Nonresonant dimuon production takes place when both protons interact electromagnetically via photon-photon exchange. This background is measured by fitting the dimuon mass spectrum, with its $p_{\rm T}^2$ shape modelled from simulated samples. The $J/\psi$ feed-down background concerns $\chi_{c_{J}} \rightarrow J/\psi \gamma$ and $\psi(2S) \rightarrow J/\psi\, X$ decays where only the $J/\psi$ is fully reconstructed. Its contribution is estimated by combining data and simulation. The $J/\psi$ and $\psi(2S)$ $p_{\rm T}^2$ distributions are shown in Fig.~\ref{fig:ptsqCEP}. A fit to the background-subtracted shapes is used to determine the number of signal events.
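The Regge-type $p_{\rm T}^2$ shape quoted above can be sketched numerically. This is an illustrative toy, not the LHCb fitting procedure: we generate pseudo-data from the exponential form ${\rm d}\sigma/{\rm d}p_{\rm T}^2 \propto \exp(-b\,p_{\rm T}^2)$ with $b = 6$ (GeV$/c)^{-2}$ and recover the slope with the maximum-likelihood estimator for an exponential distribution:

```python
import math
import random

b_true = 6.0  # slope in (GeV/c)^-2, the value quoted in the text

def sample_pT2(n, b, rng):
    # p_T^2 follows an exponential with slope b -> inverse-CDF sampling;
    # 1 - rng.random() lies in (0, 1], so the log is always finite
    return [-math.log(1.0 - rng.random()) / b for _ in range(n)]

rng = random.Random(42)
data = sample_pT2(20000, b_true, rng)

# Maximum-likelihood estimator of an exponential slope: b_hat = n / sum(x_i)
b_hat = len(data) / sum(data)
assert abs(b_hat - b_true) / b_true < 0.05  # recovered within a few percent
```

With $b \simeq 6$ (GeV$/c)^{-2}$ the signal is strongly peaked below $p_{\rm T}^2 \sim 0.2$ (GeV$/c)^2$, which is what makes the flatter proton-dissociation background separable in the fit.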
The result for the differential cross-section in rapidity bins is shown in Fig.~\ref{fig:resultCEP}, together with the leading order (LO) and next-to-leading order (NLO) Jones-Martin-Ryskin-Teubner (JMRT) theory descriptions~\cite{JMRTLO,JMRTNLO}. In the $J/\psi$ case, the data are in better agreement with the NLO description, especially in the higher rapidity bins. For the $\psi(2S)$ the same trend is observed, but higher statistics are needed.
\begin{figure}[hb]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth]{figs_CEP/Fig4a.pdf}\hskip 0.01\textwidth
\includegraphics[width=0.49\textwidth]{figs_CEP/Fig4b.pdf}\\
\end{tabular}
\caption{Squared transverse momentum ($p_{\rm T}^2$) distribution for CEP $J/\psi \rightarrow \mu^+\mu^-$ (left) and \mbox{$\psi(2S) \rightarrow \mu^+\mu^-$} (right). The different backgrounds described in the text are indicated. }
\label{fig:ptsqCEP}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth]{figs_CEP/Fig5a.pdf}\hskip 0.01\textwidth
\includegraphics[width=0.49\textwidth]{figs_CEP/Fig5b.pdf}\\
\end{tabular}
\caption{Differential cross-section results for $J/\psi \rightarrow \mu^+\mu^-$ (left) and \mbox{$\psi(2S) \rightarrow \mu^+\mu^-$} (right) compared to LO and NLO JMRT theory descriptions~\cite{JMRTLO,JMRTNLO}. }
\label{fig:resultCEP}
\end{center}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth]{figs_CEP/Fig6a.pdf}\hskip 0.01\textwidth
\includegraphics[width=0.49\textwidth]{figs_CEP/Fig6b.pdf}\\
\end{tabular}
\caption{Photoproduction cross-section results for $J/\psi$ (left) and $\psi(2S)$ (right). The LHCb result at $\sqrt{s}=13 \ \mathrm{TeV}$ is shown together with the LHCb results at $\sqrt{s}=7 \ \mathrm{TeV}$~\cite{lhcb7tev} and those from the H1~\cite{H1param,H1psi2S}, ZEUS~\cite{zeusCEP} and ALICE~\cite{aliceCEP} collaborations, and the results from the fixed-target experiments E401~\cite{e401}, E516~\cite{e516} and E687~\cite{e687}.}
\label{fig:photoprodCEP}
\end{center}
\end{figure}
The measured cross-sections per rapidity bin allow for the extraction of the photoproduction cross-section, $\sigma_{\gamma p \rightarrow \psi p}$, through the relation:
\begin{equation}\label{eq:photoprod}
\sigma_{pp \rightarrow p\psi p} = r(W_+)k_+ \frac{\mathrm{d}n}{\mathrm{d}k_+} \sigma_{\gamma p \rightarrow \psi p} (W_+) \ + \ r(W_-)k_- \frac{\mathrm{d}n}{\mathrm{d}k_-} \sigma_{\gamma p \rightarrow \psi p} (W_-)\,,
\end{equation}
where $r(W_{\pm})$ is the gap survival factor, $k_{\pm} \equiv (M_{\psi}/2)\,e^{\pm |y|}$ is the photon energy, $\frac{\mathrm{d}n}{\mathrm{d}k_{\pm}}$ is the photon flux and $W_{\pm}$, with $W_{\pm}^2 = 2k_{\pm}\sqrt{s}$, is the photon-proton invariant mass. The positive (negative) signs in Eq.~\ref{eq:photoprod} refer to the situation where the photon is emitted by the proton travelling parallel (antiparallel) to the LHCb beam axis. In LHCb, $W_+$ and $W_-$ contribute to the same rapidity bin and cannot be disentangled. However, given that only about a third of the data corresponds to $W_-$ and that this low-energy contribution has been parametrised for the $J/\psi$ meson by the H1 collaboration~\cite{H1param}, their power-law parametrisation is used to fix it. This power law is scaled by the ratio of the $\psi(2S)$ and $J/\psi$ cross-sections measured by H1~\cite{H1psi2S}. The estimated photoproduction cross-section is presented in Fig.~\ref{fig:photoprodCEP} and compared to the H1 power-law fit and their results~\cite{H1param,H1psi2S}, as well as to results from other experiments. Also shown is the JMRT NLO theory description. In the case of the $J/\psi$, where more data are available, the LHCb photoproduction cross-section values at $\sqrt{s} = 13 \ \mathrm{TeV}$ deviate from the power-law fit at higher rapidities and are in better agreement with the JMRT NLO description. More data are necessary to discern the behaviour of the $\psi(2S)$ photoproduction cross-section at high rapidities.
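The two-fold kinematic ambiguity discussed above can be made explicit with a short numerical sketch, taking $k_\pm = (M_\psi/2)\,e^{\pm|y|}$ and $W_\pm^2 = 2k_\pm\sqrt{s}$ (the square on $W_\pm$ is required dimensionally):

```python
import math

M_psi = 3.097     # J/psi mass in GeV
sqrt_s = 13000.0  # pp centre-of-mass energy in GeV

def photon_energies(y):
    """k_pm = (M_psi/2) * exp(+-|y|) for a meson at rapidity y."""
    return (M_psi / 2) * math.exp(abs(y)), (M_psi / 2) * math.exp(-abs(y))

def W_pm(y):
    """Photon-proton invariant masses from W_pm^2 = 2 k_pm sqrt(s)."""
    kp, km = photon_energies(y)
    return math.sqrt(2 * kp * sqrt_s), math.sqrt(2 * km * sqrt_s)

Wp, Wm = W_pm(3.0)  # a rapidity in the LHCb acceptance
# The W_+ (high-energy) solution dominates at large |y|, while the much
# smaller W_- component is the one fixed using the H1 parametrisation
assert Wp > Wm
```

At $y = 3$ this gives $W_+ \sim 900$ GeV against $W_- \sim 45$ GeV, showing why the two photon-proton energies contributing to the same rapidity bin must be disentangled using external input.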
\section{\boldmath Intrinsic charm with $Z$ bosons produced in association with charm jets}
While the extrinsic charm ($c$) content of the proton (due to perturbative gluon radiation) is well established, several theory predictions suggest that the proton also contains charm intrinsically. This could take place in a sea-quark-like manner or as a valence-like $c$ quark, turning the proton wave-function into $|uudc\bar{c}\rangle$, as predicted by light-front QCD (LFQCD). Previous measurements have been performed at low $Q^2$~\cite{measIC1,measIC2}. At such low energies, the theoretical treatment of hadronic nuclear effects is challenging, and it is difficult to interpret the results as evidence for or against an intrinsic charm (IC) content of the proton. Nevertheless, global PDF analyses do not exclude it at the percent level~\cite{percIC1,percIC2}.
A proposal was made to measure the ratio of $Z+c$-jet events to $Z+$jet events, $\sigma(Zc)/\sigma(Zj)$, in the forward region~\cite{proposalIC}. Performing this measurement at high $Q^2$ with $Z$ bosons at forward rapidities gives access to high Bjorken-$x$ values, $x>0.1$, where hadronic and nuclear effects are negligible. Figure~\ref{fig:theoryIC} illustrates how the ratio $\sigma(Zc)/\sigma(Zj)$ at high $Z$-boson rapidities would discriminate between different intrinsic charm contents of the proton.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{figs_IC/Fig2.pdf}
\caption{Theory predictions including and excluding intrinsic charm content in the proton for $\sigma(Zc)/\sigma(Zj)$~\cite{proposalIC}. The range $2 < y(Z) < 4.5$ corresponds to the LHCb forward region.}
\label{fig:theoryIC}
\end{center}
\end{figure}
An integrated luminosity of 6 fb$^{-1}$, corresponding to the full proton-proton collision LHCb dataset at $\sqrt{s} = 13 \ \mathrm{TeV}$, is used \cite{lhcbIC}. Events with $Z \rightarrow \mu^+ \mu^-$ and at least one jet with transverse momentum \mbox{$p_\mathrm{T} > 20 \ \mathrm{GeV/}c$} are selected. Charm jets are identified using a displaced-vertex (DV) tagger in bins of $p_\mathrm{T}(\mathrm{jet})$ and $y(Z)$. A two-dimensional fit to the corrected DV mass and the number of tracks in the DV is performed to identify the flavour of each jet in the selected events. The result of the fit is shown in Fig.~\ref{fig:fitIC}. The efficiency of tagging a jet as a charm jet is estimated in simulation and calibrated in data. The $Zc$ and $Zj$ yields in each $y(Z)$ bin are corrected for their selection efficiency and for resolution effects at detection.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.49\textwidth]{figs_IC/Fig3a.pdf}\hskip 0.01\textwidth
\includegraphics[width=0.49\textwidth]{figs_IC/Fig3b.pdf}\\
\end{tabular}
\caption{Result of the two-dimensional fit for the corrected mass ($m_\mathrm{cor}$) and number of tracks ($N_\mathrm{trk}$) in the displaced vertex (DV). The contributions from charm, beauty and light jets are shown.}
\label{fig:fitIC}
\end{center}
\end{figure}
The measurement of a ratio results in most of the systematic uncertainties cancelling out. The dominant systematic uncertainty is related to the efficiency of identifying charm jets. A comparison of the measured $\sigma(Zc)/\sigma(Zj)$ values to different theory predictions is shown in Fig.~\ref{fig:resultIC}. The first two bins are compatible both with predictions without IC and with those allowing an IC content. The bin at higher $Z$-boson rapidity is consistent with proton IC models and is about three standard deviations away from the prediction without intrinsic charm. The measurement is statistically limited and more data are needed to draw further conclusions. Moreover, these results need to be included in global PDF analyses for interpretation.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{figs_IC/Fig5.pdf}
\caption{Results for $\sigma(Zc)/\sigma(Zj)$ compared to theory predictions allowing and excluding intrinsic charm content in the proton.}
\label{fig:resultIC}
\end{center}
\end{figure}
\section{Conclusions}
The LHCb detector can be used to perform precision QCD measurements, both in the low- and high-$x$ regions. The central exclusive production cross-section of the $J/\psi$ allows one to probe the region $x \sim 10^{-6}$. This measurement at $\sqrt{s} = 13 \ \mathrm{TeV}$ is in agreement with the JMRT NLO description, and more data are needed to establish whether the same behaviour holds for the $\psi(2S)$. The high-$x$ region provides access to values $x > 0.1$, where the intrinsic charm content of the proton can be probed. While statistically limited, the first measurement of the proton intrinsic charm content in the forward region in proton-proton collisions has been performed.
\FloatBarrier
\iffalse
\clearpage
\part[CMS results on photon-induced processes\\ \phantom{x}\hspace{4ex}\it{Beatriz Ribeiro Lopes on behalf of the CMS Collaboration}]{}
\section{Introduction}
Photon-induced processes at the LHC can be measured as CEP processes. The CEP of any system X (where X can be, among many others, $ee$, $\mu\mu$, $\gamma\gamma$, $WW$, $ZZ$, $Z\gamma$, $t\bar{t}$) occurs when X is produced at a hadron collider by photon or gluon exchange and the interacting protons (or ions) are not disrupted but leave the collision intact, and stay in the beam pipe at very small angles. The photon-exchange case is the topic of this talk.
The most distinctive characteristic of this kind of process is that there are no proton (or ion) remnants: since the interacting particles remain intact, the only particles that can be detected around the interaction point are the decay products of X. The intact protons can be measured separately, using dedicated forward detectors.
Not all CEP processes are photon-induced, as already mentioned. In fact, while processes such as the exclusive production of dileptons ($X=ee$ or $\mu\mu$, see diagram in figure \ref{dilepton}) are purely quantum electrodynamics (QED) processes, other processes like the exclusive production of a photon pair ($X=\gamma\gamma$) have both a gluon-induced (QCD) and a photon-induced (QED) component, as seen in figure \ref{feynmanLbyL}. The photon-induced component of this process is often referred to as light-by-light scattering.
\begin{figure}
\begin{center}
\epsfig{figure=dilepton_diagram.png,height=0.26\textwidth}
\caption{Diagram showing the central exclusive production of a lepton pair, a pure QED process.}
\label{dilepton}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{figure=feynman_LbyL.png,height=0.30\textwidth}
\caption{Central exclusive production of a photon pair. Left: gluon-induced process (QCD); right: photon-induced process, or light-by-light scattering (QED), from \cite{royon}.}
\label{feynmanLbyL}
\end{center}
\end{figure}
The CEP of diphotons is dominated by the QCD contribution at low invariant mass of the diphoton system, $m_{\gamma\gamma}$, while the photon-induced component dominates at high masses, starting at a few hundred GeV. This can be seen in figure \ref{fig:xsec_vs_mass}, taken from \cite{royon}, where the production cross-section of exclusive $\gamma\gamma$ is shown as a function of $m_{\gamma\gamma}$. The figure corresponds to the calculation done specifically for exclusive $\gamma\gamma$, but the same is qualitatively true for other CEP processes such as $WW$, $ZZ$, $Z\gamma$ and $t\bar{t}$.
\begin{figure}
\begin{center}
\epsfig{figure=xsec_vs_mass.png,height=0.42\textwidth}
\caption{Contribution of different processes to the production cross-section of exclusive $\gamma\gamma$, as a function of the invariant mass of the two photons, $m_{\gamma\gamma}$, from \cite{royon}.}
\label{fig:xsec_vs_mass}
\end{center}
\end{figure}
To study photon-induced CEP experimentally at the LHC, in proton-proton interactions, we only have access to the mass range where this contribution is dominant, i.e., the high-mass region. However, if we consider lead-lead interactions instead, then the cross-section is enhanced by a factor $Z^4$ (where $Z$ is the atomic number of the colliding particles), and consequently we gain access to the lower mass region as well. The accessible effective luminosities for CEP of diphotons at the LHC experiments for pp and PbPb are compared in figure \ref{fig:pp_vs_PbPb}, from \cite{bruce}.
\begin{figure}
\begin{center}
\epsfig{figure=pp_vs_PbPb.png,height=0.42\textwidth}
\caption{Effective luminosity accessible at the LHC experiments for CEP of diphotons, as a function of the invariant mass of the two photon system. The red solid line shows what is reachable with PbPb collisions. The black solid line shows the luminosity with pp collisions, without tagged protons, while the two different black dashed lines show the luminosity for pp collisions with tagged protons, with two different configurations of the proton detectors. The one with the caption ``RP 220m'' corresponds to what is achievable, for example, with the Precision Proton Spectrometer (PPS) at CMS. Figure from \cite{bruce}.}
\label{fig:pp_vs_PbPb}
\end{center}
\end{figure}
In general, photon-induced processes are a promising way to look for new physics, since they are sensitive to anomalous couplings between the SM particles such as the gauge bosons and the top quark.
A unique feature of these processes is their excellent mass resolution, irrespective of the decay mode of the central system, since the energy loss of the outgoing protons is directly related to the invariant mass of the central system. In other words, if we are able to measure the outgoing protons, we have an independent and high-resolution handle on the mass of the system. This high mass resolution also opens the possibility of precision tests of the SM couplings.
Furthermore, by matching protons to the central system, most backgrounds that would normally be irreducible can be eliminated, and a high signal-to-background ratio is achievable.
The physics programme at CMS that aims to measure CEP can be divided in three categories, according to the invariant mass of the system X which is exclusively produced: the low mass region, with $m_X$ up to a few GeV, is accessible only with heavy ion collisions; the intermediate mass region, with $m_X$ up to a few hundred GeV, is accessible with proton-proton collisions without tagging intact protons, and the high mass region, $m_X$ starting at around 400 GeV, is accessible with proton-proton collisions combined with tagged protons.
\subsection*{Low mass region}
In typical PbPb collisions, where the impact parameter $b$ is smaller than twice the atomic radius ($b<2R_A$), hundreds of particles are produced and events are very ``crowded'' (see left of figure \ref{PbPbeventdisplay}). However, in the case where $b>2R_A$, normally called ultra-peripheral collisions (UPCs), the Pb ions can interact via photon exchange and remain intact. This means that the only particles observed in the final state are the ones produced via CEP or their decay products, resulting in a very distinctive signature with a low number of tracks (see right side of figure \ref{PbPbeventdisplay}). This distinctive signature is used to measure CEP in the low mass region, without the need for tagging the outgoing ions.
\begin{figure}
\begin{center}
\epsfig{figure=event_display.png,height=0.26\textwidth}
\caption{Left: CMS event display of a typical PbPb collision, where each yellow track represents a charged particle track in the CMS tracker and each green area represents an energy deposit in the calorimeters. Right: CMS event display of a $\gamma\gamma\rightarrow\mu\mu$ candidate event, where the two red tracks represent two muons. Event displays taken from \cite{eventdisplayPbPb}.}
\label{PbPbeventdisplay}
\end{center}
\end{figure}
In the Results section, two measurements in this mass region are discussed: exclusive dimuon production \cite{dimuonPbPb} and light-by-light scattering \cite{LbyLPbPb}.
\subsection*{Intermediate mass region}
In pp collisions, processes like the exclusive production of muon pairs ($\gamma\gamma\rightarrow \mu\mu$) can be measured even without resorting to proton tagging, since the muons are the only expected tracks in the event, and a requirement on the presence of rapidity gaps can be used as a selection criterion.
There is no recent public result in this category, but some analyses of this type were performed using data from Run 1 of the LHC, for example \cite{wwRun1}.
\subsection*{High mass region}
Some photon-induced processes, in particular the production of more massive particles, result in several particles in the final state and are difficult to tag using only the requirement of no extra activity in the central detector. A good example is the exclusive production of a top quark pair, $\gamma\gamma\rightarrow t\bar{t}$, illustrated in figure \ref{ttbar} for the case where both top quarks decay leptonically. In this case, we have at least two jets in the final state, as well as two leptons and missing energy from the neutrinos. This is a more complex signature, very similar to that of the inelastic QCD $t\bar{t}$ production at the LHC, whose cross-section is larger by several orders of magnitude. In order to tag this process, one needs an additional signature.
\begin{figure}
\begin{center}
\epsfig{figure=ttbar_diagram.png,height=0.42\textwidth}
\caption{Diagram illustrating the exclusive production of a pair of top quarks, with both top quarks decaying leptonically.}
\label{ttbar}
\end{center}
\end{figure}
This signature can be obtained by tagging the outgoing intact protons, which are only present in CEP and not in the inelastic $t\bar{t}$ production. Tagging protons requires the development of dedicated detectors, and is extremely challenging because these protons travel very close (a few mm) to the beam, where the radiation environment would significantly damage any detector. In CMS, the detector developed for this purpose is the CMS-TOTEM Precision Proton Spectrometer (PPS), a detector consisting of movable structures called Roman pots (RPs), placed at $\sim$200~m from the interaction point. These can contain 3D-pixel or silicon-strip detectors, which track the protons as they leave the collision and pass through the RPs. The acceptance of PPS covers protons that lost $\sim$2--20\% of their momentum.
This range of momentum loss can be translated into an acceptance in terms of the mass and rapidity of the system X, using equations \ref{eq:mass} and \ref{eq:y}:
\begin{equation}
m_X=\sqrt{s\xi_1\xi_2}
\label{eq:mass}
\end{equation}
\begin{equation}
y_X=\frac{1}{2}\log(\xi_1/\xi_2)
\label{eq:y}
\end{equation}
where $\xi_i$ is the fractional momentum loss of each interacting proton, computed as $\xi_i=\frac{p_i-p_f}{p_i}$, for initial proton momentum $p_i$ and final momentum $p_f$. This results in a good acceptance at high masses (starting at $\sim$400~GeV). This detector was installed for most of Run 2 of the LHC (2016--2018), and more than 100~fb$^{-1}$ of data are available.
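As a rough illustration of this acceptance (a simplified estimate, assuming symmetric momentum losses $\xi_1=\xi_2=\xi$), equation \ref{eq:mass} reduces to $m_X=\sqrt{s}\,\xi$, so that at $\sqrt{s}=13$~TeV
$$\xi = 0.02 \;\Rightarrow\; m_X \approx 260~\text{GeV}, \qquad \xi = 0.2 \;\Rightarrow\; m_X \approx 2.6~\text{TeV},$$
with $y_X=0$ from equation \ref{eq:y}. The quoted threshold of around 400~GeV reflects the detailed, generally asymmetric acceptance of the two arms, rather than this idealised estimate.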
In this talk, two results are presented concerning lead-lead (PbPb) collisions and two concerning proton-proton (pp) collisions. In PbPb collisions, we present the recent works on exclusive dimuon production \cite{dimuonPbPb} and exclusive diphoton production, often called light-by-light scattering \cite{LbyLPbPb}. In pp collisions, we discuss the recent results on (semi)exclusive dilepton production \cite{semiexclusivedilepton} and light-by-light scattering \cite{LbyLpp}. Limits on anomalous couplings and on pseudoscalar axion-like particles are also discussed.
\section{Results}
\subsection{Results in PbPb collisions}
\subsubsection*{Exclusive dimuon production}
The analysis in \cite{dimuonPbPb} aims to study the exclusive production of muon pairs in UPCs, using PbPb collision data collected in 2018 during LHC Run 2, corresponding to an integrated luminosity of 1.5~nb$^{-1}$, at $\sqrt{s_{NN}}=5.02$~TeV. The two lead nuclei each produce a photon flux, and the average transverse momentum $\langle p_T\rangle$ of the exclusively produced muon pair depends on the overlap between these photon fluxes, and thus can depend on the impact parameter $b$.
In \cite{QEDcalculation}, the authors perform a QED-based calculation, which predicts that the dimuon $\langle p_T\rangle$ should increase as $b$ decreases. The main goal of the analysis is to test this calculation and establish the dependence experimentally.
Instead of measuring the dimuon $\langle p_T\rangle$ directly, a quantity with better experimental resolution is used: the acoplanarity $\alpha$,
\begin{equation}
\alpha=1-|\Delta\phi_{\mu\mu}|/\pi
\label{eq:acoplanarity}
\end{equation}
where $\Delta\phi_{\mu\mu}$ is the azimuthal angle difference between the two muons. A larger dimuon $\langle p_T\rangle$ results in a larger acoplanarity.
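As a simple numerical check of this definition: perfectly back-to-back muons have $\Delta\phi_{\mu\mu}=\pi$ and hence $\alpha=0$, while a small azimuthal deviation $\delta$ from the back-to-back configuration gives
$$\alpha = 1 - \frac{\pi-\delta}{\pi} = \frac{\delta}{\pi}, \qquad \text{e.g. } \delta = 0.03~\text{rad} \;\Rightarrow\; \alpha \approx 0.01,$$
so the small acoplanarities probed here correspond to very small transverse momenta of the pair.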
The impact parameter $b$ cannot be measured directly, so a good experimental handle on this parameter is needed. We can take advantage of the fact that the smaller $b$ is, the larger the overlap between the photon fluxes of the two nuclei, and the higher the probability of exciting one or both ions via photon absorption into giant dipole resonances or higher excited states. The giant dipole resonances typically decay with the emission of one neutron, while higher excited states may emit two or more neutrons. These forward neutrons are emitted at very forward rapidities, and can be detected in CMS thanks to the zero-degree calorimeters (ZDC), part of the CMS forward calorimetry system.
\begin{figure}
\begin{center}
\epsfig{figure=dimuon6plots.png,height=0.47\textwidth}
\caption{For six classes of neutron multiplicity, acoplanarity distributions for $\gamma\gamma\rightarrow\mu\mu$ in ultraperipheral PbPb collisions at $\sqrt{s_{NN}}=5.02$~TeV. The distributions are normalized to unit integral over the measured range. The dot-dot-dashed line denotes the fit to the core contribution, while the dotted line denotes the fit to the tail. The sum of the two is indicated with a pink solid line. The statistical uncertainty is shown as vertical black lines on the points, while the systematic uncertainty is depicted by the gray boxes. Figure from \cite{dimuonPbPb}.}
\label{dimuon6plots}
\end{center}
\end{figure}
The results are summarized in figure \ref{dimuon6plots}. The average acoplanarity at the core of the distributions, $\langle\alpha^{\text{core}}\rangle$, is taken from each fit (red lines in the figure) and plotted against the neutron multiplicity, as can be seen in the top panel of figure \ref{dimuonsummary}. A clear dependence is observed in the data (dark blue), in qualitative agreement with the calculation (light blue). The bottom panel shows a similar distribution, but this time for the average dimuon invariant mass. Again the data (red) are in qualitative agreement with the calculation.
In purple are the predictions from the STARlight \cite{starlight} MC generator. It is clear that the MC does not fully account for the observed dependence of the acoplanarity (or average dimuon $p_T$) on the impact parameter $b$. The authors conclude by calling for theoretical efforts to improve the simulation of these interactions.
\begin{figure}
\begin{center}
\epsfig{figure=dimuonsummary.png,height=0.6\textwidth}
\caption{Top: Average acoplanarity at the core of the distribution as a function of the forward neutron multiplicity. Bottom: Average dimuon invariant mass as a function of the forward neutron multiplicity. Both distributions are shown for $\gamma\gamma\rightarrow\mu\mu$ in ultraperipheral PbPb collisions at $\sqrt{s_{NN}}=5.02$~TeV. The experimental data (dark blue/red) are compared with a QED calculation (light blue) and with the predictions from the MC generator STARlight (purple). Figure from \cite{dimuonPbPb}.}
\label{dimuonsummary}
\end{center}
\end{figure}
\subsubsection*{Light-by-light scattering}
The CMS analysis providing evidence for light-by-light scattering is described in \cite{LbyLPbPb}. It is based on data collected during the several LHC Run 2 PbPb campaigns at $\sqrt{s_{NN}}=5.02$~TeV, corresponding to a total integrated luminosity of 390~$\mu\text{b}^{-1}$. The signal in this analysis is characterised by two back-to-back photons in the final state and no extra activity. The main backgrounds arise from CEP of electron pairs misidentified as photon pairs, and from the QCD (gluon-induced) production of photon pairs.
An event selection is applied requiring two photons with $E_T > 2$~GeV, pseudorapidity $|\eta_\gamma| < 2.4$, diphoton invariant mass $m_{\gamma\gamma} > 5$~GeV, diphoton transverse momentum $p_T(\gamma\gamma)<1$~GeV, and diphoton acoplanarity $\alpha=1-|\Delta\phi_{\gamma\gamma}|/\pi$ below 0.01 (first two bins of the distribution in figure \ref{plotLbyL_pbpb}). The number of observed events after the selection is 14, with $9.0\pm0.9$ signal and $4.0\pm1.2$ background events expected, corresponding to a significance of 3.7$\sigma$, above the 3$\sigma$ threshold normally required to claim evidence for a process.
\begin{figure}
\begin{center}
\epsfig{figure=LbyL_plot.png,height=0.42\textwidth}
\caption{Distribution of the diphoton acoplanarity, for data (black points) superimposed with the prediction from MC simulation for the signal (orange) and the two main backgrounds (purple and yellow). Figure from \cite{LbyLPbPb}.}
\label{plotLbyL_pbpb}
\end{center}
\end{figure}
The fiducial cross-section is measured to be
$$\sigma_{\text{fid}}(\gamma\gamma\rightarrow\gamma\gamma)=120\pm46~\text{(stat)}\pm12~\text{(theo)}~\text{nb}$$
consistent with the SM prediction of 116$\pm$12~nb \cite{LbyLPbPb_theory}.
New spin-even particles like pseudoscalar axion-like particles (ALPs) can contribute to the light-by-light scattering continuum or appear as new diphoton resonances. This work sets limits on ALP production in the range 5--90~GeV. These were the best limits at the date of publication over the mass range 5--50~GeV (5--10~GeV) for ALPs coupling to the electromagnetic (electroweak) current. The limits are shown compared to previous results in figure \ref{alps_limits}, on the left for the case of electromagnetic coupling only, and on the right for the case of electroweak coupling. An equivalent analysis was recently published by the ATLAS collaboration, which sets competitive limits on the production of these particles \cite{atlas_LbyL}.
\begin{figure}
\begin{center}
\epsfig{figure=alps_limits.png,height=0.35\textwidth}
\caption{Limits on ALP production set by CMS in \cite{LbyLPbPb}, compared to previous works. Left: assuming ALP coupling to photons only; right: also assuming the hypercharge coupling. Figure from \cite{LbyLPbPb}.}
\label{alps_limits}
\end{center}
\end{figure}
\subsection{Results in pp collisions with tagged protons}
\subsubsection*{Semi-exclusive dilepton production}
The exclusive and semi-exclusive production of lepton pairs is dominated by photon interactions. We call semi-exclusive the case where one of the protons remains intact while the other dissociates in the collision. The analysis in \cite{semiexclusivedilepton} aims to tag one (or two) intact protons with PPS and combine them with a lepton pair in the central CMS apparatus, using data collected in 2016 by CMS and TOTEM at $\sqrt{s}=13$~TeV, corresponding to an integrated luminosity of 9.4~fb$^{-1}$. The analysis targets mainly single-tagged (one-proton) events, since the double-arm acceptance of PPS starts at a dilepton invariant mass of around 400~GeV, where a low number of events is expected. An $e^+e^-$/$\mu^+\mu^-$ selection in the central system is combined with a proton requirement in PPS, and the resulting events are shown in figure \ref{xiplot}. This plot shows the $\xi$ of the dilepton pair, computed from the lepton kinematics, versus the $\xi$ of the protons detected in each PPS arm (left: left arm; right: right arm). The events along the diagonal are the signal-like events. 12 (8) single-tagged events are observed in the $\mu^+\mu^-$ ($e^+e^-$) channel, with 1.49 (2.36) expected background events, corresponding to a significance of 4.3 (2.6)$\sigma$. The combined significance of the two channels exceeds 5$\sigma$. This is the first observation of proton-tagged CEP of lepton pairs, and a very important result to validate the functioning of PPS (alignment, optics, etc.).
\begin{figure}
\begin{center}
\epsfig{figure=xiplot.png,height=0.4\textwidth}
\caption{Distribution of $\xi$ of the lepton pair, computed from lepton kinematics, versus the $\xi$ of the protons detected in each PPS arm (left: left arm, right: right arm). The circle points correspond to events matching the diagonal (signal-like), within the uncertainties, while triangles are events that do not match (background like). The non-filled squares correspond to events outside of the PPS acceptance. Blue points are $e^+e^-$ candidates, while red points are $\mu^+\mu^-$ candidates. From \cite{semiexclusivedilepton}.}
\label{xiplot}
\end{center}
\end{figure}
\subsubsection*{Light-by-light scattering}
Light-by-light scattering has been observed by both CMS and ATLAS at low invariant diphoton mass (up to a few tens of GeV). The analysis in \cite{LbyLpp} is the first study of light-by-light scattering at high mass ($m_{\gamma\gamma}>350$~GeV) at a hadron collider. This process is sensitive to anomalous $\gamma\gamma\gamma\gamma$ couplings, in the context of an effective dimension-8 extension of the SM, which can be written as:
\begin{equation}
\mathcal{L}_8^{\gamma\gamma\gamma\gamma}=\zeta_1 F_{\mu\nu}F^{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}+\zeta_2 F_{\mu\nu}F^{\nu\rho}F_{\rho\sigma}F^{\sigma\mu}
\end{equation}
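A quick dimensional check (not part of the original analysis) fixes the units of these couplings: the field-strength tensor $F_{\mu\nu}$ has mass dimension 2, so the four-field operators have dimension 8, and a Lagrangian density of mass dimension 4 requires
$$[\zeta_i] = [\mathcal{L}] - 4\,[F_{\mu\nu}] = 4 - 8 = -4,$$
i.e. $\zeta_{1,2}$ are naturally quoted in GeV$^{-4}$, as in the limits below.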
The data used were collected by CMS and TOTEM during 2016 at $\sqrt{s}=13$~TeV, corresponding to an integrated luminosity of 9.4~fb$^{-1}$. The selection consists of two photons in CMS matched with two outgoing protons in PPS.
Events are selected that have two photons, with photon $p_T > 75$~GeV, $m_{\gamma\gamma}>350$~GeV (compatible with the PPS double-arm acceptance) and acoplanarity $1-|\Delta\phi_{\gamma\gamma}|/\pi < 0.005$. The events observed and expected after this selection are shown in figure \ref{plots_diphoton} (left-hand side). Then, a proton matching requirement is applied, and no events with matching protons are observed.
\begin{figure}
\begin{center}
\epsfig{figure=plots_diphoton.png,height=0.4\textwidth}
\caption{Left: Distribution of the invariant mass of the photon pair, after central selection but before proton matching. The signal expectation, depicted by the green line, is multiplied by 5000 so it can be visible on the plot. Right: limits on the coupling parameters $\zeta_1$ and $\zeta_2$, compared to the SM prediction, and respective 95\% confidence region. From \cite{LbyLpp}.}
\label{plots_diphoton}
\end{center}
\end{figure}
This work sets limits on quartic gauge couplings using the coupling parameters $\zeta_1$ and $\zeta_2$, as seen on the right-hand side of figure \ref{plots_diphoton}. The limits are:
$$|\zeta_1|<3.7\times10^{-13}~\text{GeV}^{-4} \quad(\zeta_2=0)$$
$$|\zeta_2|<7.7\times10^{-13}~\text{GeV}^{-4} \quad(\zeta_1=0)$$
\section{Conclusion}
In this conference talk, the most recent results on photon-induced exclusive production of $e^+e^-$, $\mu^+\mu^-$ and $\gamma\gamma$ in pp and PbPb collisions were presented. Some of these results set competitive limits on anomalous couplings and ALP production. The results shown include data with an integrated luminosity of up to 9.4~fb$^{-1}$; however, more than 100~fb$^{-1}$ of data are currently being analysed, and many results will be published soon.
In the future, with more data available and the improved PPS setup, we will gain additional sensitivity, and be able to measure a wider variety of processes, as well as perform precision measurements.
\section*{Acknowledgements}
The author thanks the organising committee of the Low-x 2021 conference for making this very interesting event possible. Many thanks also to the author's supervisor, Abideh Jafari, for all the guidance and for the encouragement to present at this conference. Finally, thanks to everyone at the DESY CMS-Top group, as well as to Cristian Baldenegro, Michele Gallinaro, Matteo Pisano, Michael Pitt, Enrico Robutti, Pedro Silva and Silvano Tosi for the insightful comments and suggestions.
\iffalse
\part[Jet cross section measurements in CMS\\ \phantom{x}\hspace{4ex}\it{Deniz Sunar Cerci on behalf of the CMS Collaboration}]{}
\section{Introduction}
In high energy proton-proton (pp) collisions, jets, i.e., collimated sprays of particles, are abundantly produced. Inclusive jet production in pp collisions is a useful tool to test perturbative Quantum Chromodynamics (QCD) predictions. In addition, it provides important constraints on the description of the proton structure, expressed by the parton distribution functions (PDFs), and on the value of the strong coupling constant $\alpha_S$.
The CMS Collaboration has performed many measurements of inclusive jet and multijet production at different centre-of-mass energies. In the following, the most recent results are presented.
\section{Results}
\subsection{Inclusive jet measurements}
Measurements of inclusive jet production in proton-proton collisions have been performed with the data collected by the CMS experiment~\cite{cms} at different centre-of-mass energies, i.e. 7 TeV~\cite{cms2}, 8 TeV~\cite{cms3} and 13 TeV~\cite{cms4}.
The CMS Collaboration has recently published an inclusive jet cross section measurement with the data collected in 2016, corresponding to an integrated luminosity of up to 36.3~fb$^{-1}$~\cite{cms5}. The double-differential inclusive jet cross sections are measured as a function of the jet transverse momentum $p_T$ and rapidity $y$. Jets clustered with the anti-$k_T$ algorithm are used, with two jet distance parameters, $R = 0.4$ and 0.7. A comprehensive QCD analysis at next-to-next-to-leading order (NNLO) is performed to study the PDFs of the proton as well as to extract the strong coupling constant. The inclusive jet cross sections as functions of the jet $p_T$ and $|y|$ for $R = 0.7$ are shown in Fig.~\ref{fig:1}. The measured jet cross sections are compared with fixed-order NNLO QCD predictions using the CT14 PDF set. The measurement covers a wide range of jet $p_T$, from 97~GeV up to 3.1~TeV. The prediction using the parton $H_T$ as renormalisation and factorisation scale results in a softer $p_T$ spectrum than when the scale is set to the jet $p_T$. The NLO+NLL calculations predict a harder $p_T$ spectrum than the NNLO calculations. The data are well described by all predictions within the experimental and theoretical uncertainties.
\begin{figure}
\begin{center}
\epsfig{figure=Figure_1a.png,height=0.35\textwidth}
\epsfig{figure=Figure_1b.png,height=0.35\textwidth}
\caption{The double-differential inclusive jet cross sections as a function of jet $p_T$ measured in intervals of $|y|$, shown for jet distance parameter $R = 0.7$. The data are divided by NNLO (upper panel) and NLO+NLL predictions (lower panel)~\cite{cms5}.}
\label{fig:1}
\end{center}
\end{figure}
A comprehensive QCD analysis is performed to investigate the sensitivity of the presented measurement to the proton PDFs and $\alpha_S$. Due to the small out-of-cone radiation effects, the jet cross section for $R = 0.7$ is used. The gluon PDF obtained from the fit using both the CMS data and the HERA DIS data is shown in Fig.~\ref{fig:2} (left). A significant improvement in the accuracy of the PDFs is observed when using the present measurement in the QCD analysis. For the first time, the value of the strong coupling constant at the Z boson mass is extracted in a QCD analysis at NNLO using these data, resulting in $\alpha_S(m_Z) = 0.1170 \pm 0.0019$. Furthermore, the model of contact interactions (CI) is used to investigate the effect of beyond-the-standard-model particle exchanges between quarks. In the context of the effective field theory (EFT)-improved SM (SMEFT) fit, the CI Wilson coefficient $c_1$ is taken as a free parameter. The result obtained from the SMEFT fit with the left-handed CI model with $\Lambda = 10$~TeV is shown in Fig.~\ref{fig:2} (right).
\begin{figure}
\begin{center}
\epsfig{figure=Figure_2.png,height=0.45\textwidth}
\epsfig{figure=Figure_2b.png,height=0.45\textwidth}
\caption{The gluon distributions shown as a function of $x$ at the scale $\mu^2 = m_t^2$ resulting from the NNLO fit using HERA DIS and the CMS 13 TeV jets data (left) and from the SMEFT fit with the left-handed CI model with $\Lambda = 10$~TeV (right)~\cite{cms5}.}
\label{fig:2}
\end{center}
\end{figure}
A measurement of the differential inclusive jet production cross section is performed by the CMS Collaboration~\cite{smp-21-009}. The measurement is based on pp collisions at $\sqrt s = 5$~TeV, corresponding to a total integrated luminosity of 27.4~pb$^{-1}$. The present measurement provides valuable reference data for the analysis of heavy ion collisions probing the quark-gluon plasma (QGP). Jets are reconstructed with the anti-$k_T$ algorithm using $R = 0.4$, within the kinematic range $|y| < 2$ and $0.06 < p_T < 1$~TeV. The unfolded jet cross section is compared with pQCD predictions, calculated at both NLO and NNLO with the jet $p_T$ and parton $H_T$ scale choices. The predictions are corrected for nonperturbative (NP) and electroweak (EW) effects. The comparison of the measurement to NLO and NNLO predictions with the jet $p_T$ and parton $H_T$ scales is shown in Fig.~\ref{fig:3}.
\begin{figure}[ht]
\begin{center}
\epsfig{figure=Figure_3a.png,height=0.29\textwidth}
\epsfig{figure=Figure_3b.png,height=0.29\textwidth}
\epsfig{figure=Figure_3c.png,height=0.29\textwidth}
\caption{Ratios of the unfolded inclusive jet cross section to the NLO theoretical prediction, using the CT14nlo PDF set, with $\mu_R = \mu_F = p_T$ (left) and with $\mu_R = \mu_F = H_T$ (middle). Ratio of the unfolded inclusive jet cross section to the NNLO theoretical prediction, using the CT14nlo PDF set, with $\mu_R = \mu_F = H_T$ (right)~\cite{smp-21-009}.}
\label{fig:3}
\end{center}
\end{figure}
\subsection{Multijet production}
The differential cross-section of the four $p_T$-leading jets as a function of their transverse momentum is measured with the data recorded with the CMS detector in pp collisions at $\sqrt s = 13$~TeV~\cite{smp-21-006}. The same analysis strategy as in the inclusive jet measurement at 13 TeV is followed, except that jets clustered with $R = 0.4$ are used. Events with at least two jets, with a leading jet of $p_{T1} > 200$~GeV and a subleading jet of $p_{T2} > 100$~GeV, are considered. All jets must satisfy $|y| < 2.5$. The multiplicity of additional jets with $p_T > 50$~GeV is measured in bins of the azimuthal separation between the leading and subleading jets ($\Delta \phi_{1,2}$) and the transverse momentum of the leading jet ($p_{T1}$). Comparisons of data to the NLO dijet predictions MG5 AMC+PY8 (jj) and MG5 AMC+CA3 (jj), as well as the NLO three-jet prediction MG5 AMC+CA3 (jjj), are shown in Fig.~\ref{fig:4}. Reasonable agreement is observed with the normalization of the MG5 AMC+PY8 (jj) NLO calculation even for three jets. The measurement is larger than the predictions, particularly in the low $p_{T1}$ region.
\begin{figure}[ht]
\begin{center}
\epsfig{figure=Figure_4.png,height=0.45\textwidth}
\caption{Comparison of the differential cross section of two leading jets as a function of the exclusive jet multiplicity (inclusive for 7 jets) in bins of $p_{T1}$ and $\Delta \phi_{1,2}$~\cite{smp-21-006}.}
\label{fig:4}
\end{center}
\end{figure}
\section{Summary}
The CMS Collaboration has performed extensive jet studies in proton-proton collisions at different centre-of-mass energies. The most recent measurements of inclusive jet and multijet production are presented. The measurements are compared to various Monte Carlo event generators as well as the fixed order NLO, NLO+NLL and NNLO predictions. The QCD analysis is also performed at next-to-leading order. The PDFs, the values of the strong coupling constant and of the pole mass of the top quark are extracted.
\iffalse
\part[Search for BFKL signatures in CMS\\ \phantom{x}\hspace{4ex}\it{Salim Cerci on behalf of the CMS Collaboration}]{}
\section{Introduction}
Hadronic jet measurements are important probes of the low-$x$ structure of the proton, where $x$ represents the fractional momentum carried by the incoming partons. In the quantum chromodynamics (QCD) prediction, two partons are produced with a back-to-back topology in the azimuthal plane. Hence, the two jets show a strong correlation in their azimuthal angle, which is a sensitive probe for a better understanding of QCD radiation in hard processes in high energy particle collisions. In the Bjorken limit, accessible at large centre-of-mass energies, the four-momentum transfer squared $Q^2$ tends to infinity while the scaling variable $x \sim p_T/\sqrt s$ is kept fixed, with the jet transverse momentum satisfying $p_T^2 \approx Q^2$. In this regime the calculations can be resummed within the collinear factorization framework with the Dokshitzer--Gribov--Lipatov--Altarelli--Parisi (DGLAP) formalism, where parton emissions are strongly ordered in transverse momentum.
In another kinematic regime, where $p_T \gg \Lambda_{QCD}$ is kept finite and $\sqrt s \rightarrow \infty$ (hence low $x$), large rapidity separations between the scattered partons occur, and the DGLAP dynamics fails. Such processes can be described by the Balitsky--Fadin--Kuraev--Lipatov (BFKL) evolution equations.
In the following the measurements searching for the BFKL evolution equation effects are presented. The measurements are performed with data collected in proton-proton collisions by the CMS experiment~\cite{cms}.
\section{Azimuthal decorrelation of jets at $\sqrt s = 7$~TeV}
The CMS Collaboration reported a measurement of the azimuthal angle decorrelation between the most forward and the most backward jets (so-called Mueller-Navelet jets) in proton-proton collisions at $\sqrt s = 7$~TeV~\cite{fsq-12-002}. In the analysis, jets with transverse momentum $p_T > 35$~GeV and absolute rapidity $|y| < 4.7$ are considered. The normalised cross sections are compared with various Monte Carlo generators and analytical predictions based on the DGLAP and BFKL parton evolution equations.
In Fig.~\ref{fig:1_scerci}, the azimuthal angle decorrelation of dijets and the ratio of its average cosines ($C_{n} = \langle\cos(n(\pi - \phi_{\mathrm{dijet}}))\rangle$) are shown as a function of the rapidity separation between the jets, $\Delta y$, reaching up to $\Delta y = 9.4$ for the first time.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.43\textwidth]{CMS-FSQ-12-002_Figure_001-e.png}
\includegraphics[height=0.43\textwidth]{CMS-FSQ-12-002_Figure_003-b.png}
\caption{Left: The azimuthal-angle difference distribution measured for Mueller-Navelet jets in the rapidity interval $6.0 < \Delta y < 9.4$. Right: Comparison of the measured ratio $C_2/C_1$ as a function of rapidity difference $\Delta y$ to SHERPA, HEJ+ARIADNE and analytical NLL BFKL calculations at the parton level~\cite{fsq-12-002}.}
\label{fig:1_scerci}
\end{center}
\end{figure}
\section{Dijets with large rapidity separation at $\sqrt s = 2.76$~TeV}
A measurement of inclusive and Mueller-Navelet dijet differential cross sections as a function of the rapidity separation between the jets, $\Delta y$, in pp collisions at $\sqrt s = 2.76$~TeV is presented~\cite{fsq-13-004}. The present study extends the results of the 7 TeV measurement~\cite{fsq-12-002} by measuring cross section ratios. The same event selection and jet definition are applied, hence a direct comparison of the results is possible.
The inclusive dijet production cross section, $\sigma^{incl}$, is defined as the cross section for events with at least one pair of jets with $p_T > p_{Tmin} = 35$~GeV, where $p_{Tmin}$ represents the transverse momentum threshold. The ``exclusive'' dijet production cross section, $\sigma^{excl}$, corresponds to events with exactly two jets with $p_T > p_{Tmin}$. The Mueller-Navelet cross section, $\sigma^{MN}$, denotes the cross section for dijet events with the most forward and most backward jets having $p_T > 35$~GeV. Finally, the cross section of events with no extra jets above $p_{Tveto} = 20$~GeV is denoted $\sigma^{excl}_{veto}$.
The ratios of the inclusive to the ``exclusive'' dijet production cross section, $R^{incl}$, and to the ``exclusive''-with-veto production, $R^{incl}_{veto}$, are shown in Fig.~\ref{fig:2_scerci}. The results are compared with the predictions of the PYTHIA8 (tune 4C)~\cite{p8}, HERWIG++ (tune EE3C)~\cite{herwig} and HEJ+ARIADNE~\cite{hej} event generators. PYTHIA8 shows agreement with the data, whereas HEJ+ARIADNE and HERWIG++ significantly overestimate the ratio. In the case of the ratio $R^{incl}_{veto}$, PYTHIA8 gives the best description of the data; however, it still fails to model the shape of the $\Delta y$ dependence.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.45\textwidth]{CMS-FSQ-13-004_Figure_004-a.png}
\includegraphics[height=0.45\textwidth]{CMS-FSQ-13-004_Figure_005-a.png}
\caption{The ratio of the differential inclusive dijet cross section to the ``exclusive'' (left) and to the ``exclusive''-with-veto dijet production (right). Vertical error bars represent the statistical uncertainties, whereas the systematic uncertainties are indicated as shaded bands~\cite{fsq-13-004}.}
\label{fig:2_scerci}
\end{center}
\end{figure}
The ratios of the Mueller-Navelet dijet production cross section to the ``exclusive'', $R^{MN}$, and to the ``exclusive''-with-veto dijet production, $R^{MN}_{veto}$, are shown in Fig.~\ref{fig:3_cserci}. Generally, only the leading order DGLAP-based event generator PYTHIA8 describes the ratios $R^{incl}$ and $R^{MN}$.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.45\textwidth]{CMS-FSQ-13-004_Figure_006-a.png}
\includegraphics[height=0.45\textwidth]{CMS-FSQ-13-004_Figure_007-a.png}
\caption{The ratio of the Mueller-Navelet dijet cross section to the ``exclusive'' (left) and to the ``exclusive''-with-veto dijet production (right). Vertical error bars represent the statistical uncertainties, whereas the systematic uncertainties are indicated as shaded bands~\cite{fsq-13-004}.}
\label{fig:3_cserci}
\end{center}
\end{figure}
The ratios $R^{incl}$ and $R^{MN}$ are compared with the previous measurement performed at $\sqrt s = 7$~TeV~\cite{fsq-12-002} in Fig.~\ref{fig:4_scerci}. A strong rise with $\Delta y$ is observed at high energies, which can be explained as a reflection of both the increasing available phase space and BFKL dynamics. Due to the increasing phase space volume for hard parton radiation, the ratios rise strongly with increasing $\Delta y$. As a result of the kinematic limitations on events with more than two jets with $p_T > 35$~GeV, the ratios decrease at very large $\Delta y$. BFKL calculations at the next-to-leading-logarithm level are needed for comparison with the results of this analysis.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=0.45\textwidth]{CMS-FSQ-13-004_Figure_008.png}
\caption{Comparison of ratios of the cross sections for inclusive (left) and MN (right) at the collision energies of 7 TeV and 2.76 TeV~\cite{fsq-13-004}.}
\label{fig:4_scerci}
\end{center}
\end{figure}
\section{Summary}
Recent dijet production studies at different centre-of-mass energies performed by the CMS Collaboration are presented. The measurements are compared to various Monte Carlo event generators as well as the predictions based on BFKL calculations. The results on azimuthal angle decorrelations in dijet events, where the two jets are separated by a large rapidity interval, are consistent with the predictions based on BFKL calculations.
\iffalse
\part[Recent ALICE results on vector meson photoproductions\\ \phantom{x}\hspace{4ex}\it{Simone Ragoni on behalf of the ALICE Collaboration}]{}
\section{Introduction}
Vector meson photoproduction is being investigated in ultra-peripheral collisions (UPCs) at the LHC. In this type of events, the two interacting objects lie at impact parameters larger than the sum of their radii. A photon from one of them interacts with a colourless object from the target, i.e. a gluon ladder, and vector mesons can be formed in the final state. The ALICE Collaboration has previously measured \rhozero \cite{Adam:2015gsa}, \jpsi \cite{Abelev:2012ba, Abbas:2013oua} and \psip \cite{Adam:2015sia} photoproduction at a centre-of-mass energy of $\sqrt{s_{\mathrm{NN}}}~=~2.76$~Te\kern-.1emV\xspace in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace, and exclusive \jpsi photoproduction in \ensuremath{\Pp\mathrm{Pb}}\xspace at \fivenn \cite{TheALICE:2014dwa, ALICE:2018oyo}.
LHC Run 2 provided a large dataset of UPC events, which in turn allowed for more differential measurements of vector meson photoproduction processes at \fivenn in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace, such as the first measurement of the $t$-dependence of coherent \jpsi production. However, symmetric collision systems like \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace have an ambiguity in the sign of the rapidity $y$ of the vector meson in the final state, as explained below. Two viable techniques to disentangle this ambiguity will be presented.
\section{Exclusive \jpsi in \ensuremath{\Pp\mathrm{Pb}}\xspace}
The UPC dataset collected with \ensuremath{\Pp\mathrm{Pb}}\xspace beams is quite valuable as it provides direct access to the proton gluon distributions down to a low Bjorken-$x$ of about $x \sim 10^{-5}$ with ALICE's current kinematic reach.
The rapidity $y$ of the vector meson in the final state may be directly related to the probed Bjorken-$x$, as shown in Eq.~\ref{eq:bjorken-x} \cite{Contreras:2015dqa}:
\begin{equation}
\label{eq:bjorken-x}
x = \frac{M_{\rm VM}}{2 E_{\rm p}}\times \exp(-y) \text{ ,}
\end{equation}
where $M_{\rm VM}$ is the mass of the vector meson, and $ E_{\rm p}$ is the energy of the proton beam.
ALICE results
are presented in Fig.~\ref{fig:pPb} \cite{Aaij:2018arx, TheALICE:2014dwa, ALICE:2018oyo} together with LHCb's own \ensuremath{\Pp\Pp}\xspace results \cite{Aaij:2013jxj, Aaij:2014iea, Aaij:2018arx}. ALICE points are obtained with three different configurations:
\begin{itemize}
\item[\color{red}{$\blacktriangleright$}] \textbf{forward:} two muons\footnote{At pseudorapidities belonging to $-4 < \eta < -2.5$ ALICE can only detect muons.} satisfying the requirement of pseudorapidity $-4 < \eta < -2.5$;
\item[\color{red}{$\blacktriangleright$}] \textbf{semiforward:} one muon satisfying the requirement of pseudorapidity $-4 < \eta < -2.5$, with the other in $-0.9 < \eta < 0.9$;
\item[\color{red}{$\blacktriangleright$}] \textbf{central:} both muons lie in $-0.9 < \eta < 0.9$.
\end{itemize}
As shown in Fig.~\ref{fig:pPb}, the three configurations provide almost continuous coverage from about 25 \GeVns up to about 700 \GeVns in centre-of-mass energy of the photon--proton system, ${\rm W}_{\gamma p}$, which is determined starting from the vector meson rapidity as ${\rm W}_{\gamma p}^2 = 2E_{\rm p} M_{\jpsi}\times e^{-y}$.
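The two kinematic relations above can be illustrated numerically. The following is a sketch, not part of the ALICE analysis; the $4$~TeV proton beam energy (for the $5.02$~TeV \ensuremath{\Pp\mathrm{Pb}}\xspace run) and the PDG \jpsi mass are assumed values:

```python
import math

M_JPSI = 3.0969   # J/psi mass in GeV (assumed PDG value)
E_P = 4000.0      # proton beam energy in GeV (assumed: 5.02 TeV p-Pb run)

def bjorken_x(y):
    """Bjorken-x probed by a vector meson produced at rapidity y."""
    return M_JPSI / (2.0 * E_P) * math.exp(-y)

def w_gamma_p(y):
    """Photon-proton centre-of-mass energy W in GeV: W^2 = 2 E_p M e^{-y}."""
    return math.sqrt(2.0 * E_P * M_JPSI * math.exp(-y))

# forward (y ~ -3), central (y ~ 0) and the mirrored coverage (y ~ +3)
coverage = {y: (bjorken_x(y), w_gamma_p(y)) for y in (-3.0, 0.0, 3.0)}
```

With these assumed inputs the three rapidity settings span $W_{\gamma p}$ from roughly 35 to 700 GeV, consistent with the quoted 25--700 GeV coverage, and reach down to $x \sim 10^{-5}$.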
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{fig3.pdf}
\caption{
Exclusive \jpsi production cross section as a function of the centre-of-mass energy of the $\gamma$p system \cite{Aaij:2018arx, TheALICE:2014dwa, ALICE:2018oyo}.
} \label{fig:pPb}
\end{figure}
LHCb \ensuremath{\Pp\Pp}\xspace data extending to almost 2 \TeVns\ are also shown. The power-law growth of the cross sections can be related to a power-law growth of the gluon distributions down to $x\sim 10^{-6}$. Gluon saturation is considered the most straightforward mechanism by which this growth could be tamed, but neither ALICE nor LHCb observe a clear deviation from the power-law trend, i.e. there is so far no compelling evidence for saturation. Gluon saturation at low $x$ would also have implications for the early stages of ultra-relativistic heavy-ion collisions, making it a key investigation topic for current experiments.
\section{Coherent \jpsi in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace}
The ALICE Collaboration has measured the production cross section of coherent \jpsi at forward and midrapidity. The results are shown in Fig.~\ref{fig:coherent-xsec} \cite{Acharya:2019vlb, Acharya:2021ugn}. ALICE results can then be directly compared to e.g. the Impulse Approximation, which is a model without nuclear effects, apart from coherence.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.6\textwidth]{xsec_jpsi_run2.pdf}
\caption{
Coherent \jpsi photoproduction cross sections as a function of rapidity \cite{Acharya:2019vlb, Acharya:2021ugn}.
} \label{fig:coherent-xsec}
\end{figure}
This is particularly useful to measure the nuclear suppression factor $S_{{\rm Pb}}(x)$, which is defined as follows in Eq.~\ref{eq:nuclear-suppression-factor} \cite{Guzey:2020ntc}:
\begin{equation}
\label{eq:nuclear-suppression-factor}
S_{{\rm Pb}}(x) = \sqrt{\frac{\sigma(\gamma A \longrightarrow \jpsi A)_{\rm measured}}{\sigma(\gamma A \longrightarrow \jpsi A)_{\rm IA}}} \text{ ,}
\end{equation}
where $\sigma(\gamma A \longrightarrow \jpsi A)_{\rm measured}$ is the measured cross section for coherent \jpsi photoproduction, while $\sigma(\gamma A \longrightarrow \jpsi A)_{\rm IA}$ is the cross section computed with the Impulse Approximation model.
The Impulse Approximation model is derived from photoproduction data on protons, without including nuclear effects except for coherence. The nuclear suppression factor thus provides a way to test the agreement of ALICE data with the existing datasets. ALICE data are consistent with $S_{{\rm Pb}}(x) = 0.65 \pm 0.03$ \cite{Acharya:2021ugn} at midrapidity, signalling strong nuclear effects which are unaccounted for in the available nuclear parton distribution function (PDF) sets. STARlight\xspace \cite{Klein:2016yzr}, a Glauber-like model which treats the interaction as a single dipole moving through the nucleus, also overpredicts the data. This implies that a Glauber-like description alone is not enough to describe the suppression of coherent \jpsi production. Guzey, Kryshen and Zhalov (GKZ) \cite{Guzey:2016piu} provide two models, one based on the EPS09 nPDF parametrisation and the other on the leading twist approximation (LTA). Both describe the data, implying that the \jpsi results agree with existing measurements of nuclear shadowing.
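Eq.~\ref{eq:nuclear-suppression-factor} and standard error propagation give the suppression factor directly. A minimal sketch follows; any cross-section numbers used with it are illustrative placeholders, not the measured values:

```python
import math

def s_pb(sigma_measured, sigma_ia):
    """Nuclear suppression factor: square root of the measured coherent
    cross section over the Impulse Approximation one."""
    return math.sqrt(sigma_measured / sigma_ia)

def s_pb_uncertainty(sigma_measured, err_measured, sigma_ia, err_ia):
    """Propagated uncertainty: the square root halves each relative error,
    combined here in quadrature."""
    rel = 0.5 * math.hypot(err_measured / sigma_measured, err_ia / sigma_ia)
    return s_pb(sigma_measured, sigma_ia) * rel
```

For instance, a measured cross section at $42\%$ of the IA prediction corresponds to $S_{\rm Pb} \approx 0.65$, the level quoted above.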
The $p_{\rm T}$ distributions for the dimuons lying in the \jpsi mass peak region are shown in Fig.~\ref{fig:pt-forward-midrapidity-1}~\cite{Acharya:2019vlb} and in Fig.~\ref{fig:pt-forward-midrapidity-2}~\cite{Acharya:2021ugn}, i.e. for dimuon masses $2.85 < M_{\mu\mu} < 3.35$ \GeVmass and $3.00 < M_{\mu\mu} < 3.20$ \GeVmass at forward and midrapidity, respectively. Coherent \jpsi production is characterised by a lower average $p_{\rm T}$ than incoherent production, owing to the different size of the corresponding photon targets: the photon couples \textit{coherently} with the entire nucleus for coherent \jpsi, and with a single nucleon for incoherent \jpsi. The incoherent \jpsi contribution with nucleon dissociation is also seen to have a lower impact at midrapidity than at forward rapidity.
\begin{figure}[ht!]
\begin{center}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{forwardjpsi_pt.pdf}
\caption{}
\label{fig:pt-forward-midrapidity-1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.985\textwidth]{PtFit_MuPP_0.pdf}
\caption{}
\label{fig:pt-forward-midrapidity-2}
\end{subfigure}\\
\end{center}
\caption{Dimuon $p_{\rm T}$ distribution for candidates with masses in the \jpsi mass peak region, i.e. $2.85 < M_{\mu\mu} < 3.35$ \GeVmass for Fig.~\ref{fig:pt-forward-midrapidity-1} at forward rapidity \cite{Acharya:2019vlb}, and $3.00 < M_{\mu\mu} < 3.20$ \GeVmass for Fig.~\ref{fig:pt-forward-midrapidity-2} at midrapidity \cite{Acharya:2021ugn}.
}
\label{fig:pt-forward-midrapidity}
\end{figure}
The ALICE Collaboration has also measured for the first time the $|t|$-dependence of coherent \jpsi in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace \cite{ALICE:2021tyx}. This is shown in Fig.~\ref{fig:t-dependence}. Since the measured $|t|$-dependence shows a trend compatible with models incorporating QCD effects, it then constitutes a valuable new observable to probe the transverse gluonic structure of the nucleus at low Bjorken-$x$.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.6\textwidth]{tdependence2.png}
\caption{
$|t|$-dependence of the photonuclear cross section for coherent \jpsi at midrapidity.
} \label{fig:t-dependence}
\end{figure}
\section{Viable techniques to solve the Bjorken-$x$ ambiguity}
Ultra-peripheral collisions with symmetric systems such as \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace or \ensuremath{\Pp\Pp}\xspace, have an ambiguity as either of the projectiles might have emitted the photon. The rapidity $y$ of the vector meson in the final state will then be related to the Bjorken-$x$ of the process by means of Eq.~\ref{eq:rapidity-x}:
\begin{equation}
\label{eq:rapidity-x}
x = \frac{M_{\rm VM}}{\sqrt{s_{\rm NN}}} \times e^{\pm |y|} \text{ ,}
\end{equation}
where $M_{\rm VM}$ is the mass of the vector meson and $\sqrt{s_{\rm NN}}$ is the centre-of-mass energy of the collision system. With ALICE kinematics this would imply that, at forward rapidity, $x\sim 10^{-2} $ or $10^{-5}$.
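The two-fold ambiguity of Eq.~\ref{eq:rapidity-x} can be made concrete with a short sketch (assumed inputs: the PDG \jpsi mass and $\sqrt{s_{\rm NN}} = 5.02$~TeV):

```python
import math

M_JPSI = 3.0969       # GeV, assumed PDG J/psi mass
SQRT_S_NN = 5020.0    # GeV, assumed Pb-Pb collision energy

def x_candidates(abs_y):
    """Return the two Bjorken-x values compatible with a vector meson
    observed at rapidity |y| (either nucleus may have emitted the photon)."""
    base = M_JPSI / SQRT_S_NN
    return base * math.exp(-abs_y), base * math.exp(+abs_y)

x_low, x_high = x_candidates(3.0)   # forward muon arm, |y| ~ 3
```

For $|y| \approx 3$ this reproduces the quoted values of $x \sim 10^{-5}$ or $10^{-2}$.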
Two techniques have currently been proposed to disentangle the ambiguity. Both exploit the impact parameter dependence:
\begin{itemize}
\item[\color{red}{$\blacktriangleright$}] \textit{neutron emission:} generators such as n$^0_0$n \cite{Broz:2019kpl} predict that different neutron emission classes are characterised by quite different impact parameters \cite{Baltz:2002pp}. If neutron emission is classified into three cases, 0N0N (no neutrons emitted), 0NXN (neutrons emitted on only one side of the interaction vertex), and XNXN (neutrons emitted on both sides of the interaction vertex), n$^0_0$n predicts a hierarchy of average impact parameters, as shown in Eq.~\ref{eq:hierarchy-neutron}:
\begin{equation}
\label{eq:hierarchy-neutron}
\langle{b_{\rm XNXN}}\rangle < \langle{b_{\rm 0NXN}}\rangle < \langle{b_{\rm 0N0N}}\rangle \text{ ,}
\end{equation}
where $\langle{b_{\rm 0N0N}}\rangle $, $\langle{b_{\rm 0NXN}}\rangle $, and $\langle{b_{\rm XNXN}}\rangle $ are the average impact parameters for the 0N0N, 0NXN, and XNXN classes, respectively;
\item[\color{red}{$\blacktriangleright$}] \textit{peripheral photoproduction:} in peripheral collisions the impact parameter is smaller than the sum of the nuclear radii, as opposed to UPCs, where it is larger.
\end{itemize}
Using either neutron emission classes \cite{Guzey:2013jaa} or peripheral photoproduction \cite{Contreras:2016pkc} would enable the separation of the low- and high-$x$ photonuclear cross sections, as shown in Eq.~\ref{eq:neutron-emission-splitting}:
\begin{equation}
\label{eq:neutron-emission-splitting}
\begin{split}
\frac{{\rm d}\sigma^{0{\rm N}0{\rm N}}_{{\rm Pb}{\rm Pb}}}{{\rm d}y} &= n_{0{\rm N}0{\rm N}}(\gamma, +y) \cdot {\color{red}\sigma_{\gamma {\rm Pb}}(+y)} + n_{0{\rm N}0{\rm N}}(\gamma, -y) \cdot {\color{red}\sigma_{\gamma {\rm Pb}}(-y)} \text{ ,}\\
\frac{{\rm d}\sigma^{0{\rm NX}{\rm N}}_{{\rm Pb}{\rm Pb}}}{{\rm d}y} &= n_{0{\rm NX}{\rm N}}(\gamma, +y) \cdot {\color{red}\sigma_{\gamma {\rm Pb}}(+y)} + n_{0{\rm NX}{\rm N}}(\gamma, -y) \cdot {\color{red}\sigma_{\gamma {\rm Pb}}(-y)} \text{ ,}\\
\frac{{\rm d}\sigma^{{\rm XNX}{\rm N}}_{{\rm Pb}{\rm Pb}}}{{\rm d}y} &= n_{{\rm XNX}{\rm N}}(\gamma, +y) \cdot {\color{red}\sigma_{\gamma {\rm Pb}}(+y)} + n_{{\rm XNX}{\rm N}}(\gamma, -y) \cdot {\color{red}\sigma_{\gamma {\rm Pb}}(-y)} \text{ ,}
\end{split}
\end{equation}
for neutron emission, where ${\rm d}\sigma^{0{\rm N}0{\rm N}}_{{\rm Pb}{\rm Pb}}/{\rm d}y$, ${\rm d}\sigma^{0{\rm NX}{\rm N}}_{{\rm Pb}{\rm Pb}}/{\rm d}y$ and ${\rm d}\sigma^{{\rm XNX}{\rm N}}_{{\rm Pb}{\rm Pb}}/{\rm d}y$ are the measured UPC cross sections for 0N0N, 0NXN, and XNXN, respectively, $n_{0{\rm N}0{\rm N}}$, $n_{0{\rm NX}{\rm N}}$, and $n_{{\rm XNX}{\rm N}}$ are the corresponding photon fluxes, at either positive or negative rapidities, and finally, $\sigma_{\gamma {\rm Pb}}(\pm y)$ are the photonuclear cross sections, at the two rapidities. It is also possible to simultaneously use peripheral and UPC results as shown in Eq.~\ref{eq:peripheral-photoproduction}:
\begin{equation}
\label{eq:peripheral-photoproduction}
\begin{split}
\frac{{\rm d}\sigma_{{\rm Pb}{\rm Pb}}^{\rm P}}{{\rm d}y} &= n_{\rm P}(\gamma, +y)\cdot {\color{red} \sigma_{\gamma {\rm Pb}}(+y)} + n_{\rm P}(\gamma, -y)\cdot {\color{red}\sigma_{\gamma {\rm Pb}}(-y)} \text{ ,} \\
\frac{{\rm d}\sigma_{{\rm Pb}{\rm Pb}}^{\rm U}}{{\rm d}y} &= n_{\rm U}(\gamma, +y)\cdot {\color{red}\sigma_{\gamma {\rm Pb}}(+y)} + n_{\rm U}(\gamma, -y)\cdot {\color{red}\sigma_{\gamma {\rm Pb}}(-y)}\text{ ,}
\end{split}
\end{equation}
where ${\rm d}\sigma^{{\rm P}}_{{\rm Pb}{\rm Pb}}/{\rm d}y$ and ${\rm d}\sigma^{{\rm U}}_{{\rm Pb}{\rm Pb}}/{\rm d}y$ are the measured peripheral and UPC cross sections, respectively, while $n_{\rm P}$ and $n_{\rm U}$ are the corresponding fluxes, computed for the appropriate impact parameters.
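Each pair of event classes with known fluxes turns the equations above into a linear system for the two unknown photonuclear cross sections. A minimal sketch of the extraction for the two-class case (e.g. peripheral plus UPC), using Cramer's rule; any flux or cross-section numbers fed to it are placeholders:

```python
def photonuclear_xsec(ds_a, ds_b, flux_a, flux_b):
    """Solve the 2x2 system
        ds_a = flux_a[0] * s(+y) + flux_a[1] * s(-y)
        ds_b = flux_b[0] * s(+y) + flux_b[1] * s(-y)
    for the photonuclear cross sections (s(+y), s(-y)) by Cramer's rule."""
    det = flux_a[0] * flux_b[1] - flux_a[1] * flux_b[0]
    if det == 0.0:
        raise ValueError("degenerate fluxes: the two classes cannot be disentangled")
    s_plus = (ds_a * flux_b[1] - ds_b * flux_a[1]) / det
    s_minus = (flux_a[0] * ds_b - flux_b[0] * ds_a) / det
    return s_plus, s_minus
```

With the three neutron emission classes the system is overconstrained and a least-squares fit would be used instead; the principle is the same.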
The neutron emission technique was recently applied by the ALICE Collaboration in the measurement of coherent \rhozero photoproduction in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace \cite{Acharya:2020sbc}, and overall agreement was found between the ALICE data points and the available models, indicating a satisfactory knowledge of the photon fluxes.
Peripheral photoproduction of coherent \jpsi was first observed in ALICE Run 1 data \cite{Adam:2015gba}; the STAR \cite{STAR:2019yox} and LHCb \cite{LHCb:2021hoq} collaborations have also reported this observation. ALICE has also measured coherent \jpsi photoproduction in the peripheral sample with Run 2 data, at forward rapidity \cite{Bugnon:2020vti}. Fig.~\ref{fig:peripheral-run2-7090} and Fig.~\ref{fig:peripheral-run2-centrality} show the cross section for the most peripheral centrality class, and the cross sections as a function of centrality at forward rapidity, respectively\footnote{The more central a collision, the higher the number of participants.}.
\begin{figure}[ht!]
\begin{center}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{CrossSection_70-90_0.pdf}
\caption{}
\label{fig:peripheral-run2-7090}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{CrossSection_centrality_3.pdf}
\caption{}
\label{fig:peripheral-run2-centrality}
\end{subfigure}\\
\end{center}
\caption{Coherent \jpsi photoproduction with the ALICE Run 2 peripheral sample. Cross sections for the most peripheral centrality class in Fig.~\ref{fig:peripheral-run2-7090}, and cross sections as a function of centrality in Fig.~\ref{fig:peripheral-run2-centrality} \cite{Bugnon:2020vti}.}
\label{fig:peripheral-run2}
\end{figure}
\section{Summary}
Exclusive \jpsi photoproduction in \ensuremath{\Pp\mathrm{Pb}}\xspace constitutes a valuable tool to probe for gluon saturation in protons. Coherent \jpsi photoproduction in \ensuremath{\mathrm{Pb}\mathrm{Pb}}\xspace UPCs is particularly useful to measure nuclear shadowing in Pb nuclei. Finally, the ambiguity on the sign of the rapidity of the vector meson in the final state, with respect to the photon emitter -- which occurs in symmetric collision systems -- can be solved by analysing data samples characterised by different average impact parameters. Two techniques are presented, using neutron emission and peripheral photoproduction.
\nocite{*}
\bibliographystyle{auto_generated}
Comments: Presented at the Low-$x$ Workshop, Elba Island, Italy, September 27--October 1 2021.
\part[The LHCspin project\\ \phantom{x}\hspace{4ex}\it{M. Santimaria et al}]{}
\section{Introduction}
\label{sec:intro_santi}
The LHC delivers proton and lead beams with an energy of $7~\rm{TeV}$ and $2.76~\rm{TeV}$ per nucleon, respectively, with the world's highest intensity. A short run with xenon ions was also performed in 2017, while an oxygen beam is foreseen for Run 3~\cite{Citron:2018lsq_santi}.
Fixed-target proton-gas collisions occur at a centre of mass energy per nucleon of up to $115~\rm{GeV}$.
This corresponds to a centre of mass rapidity shift of
$y-y_{\rm{CM}} \approx \rm{arcsinh}(\sqrt{E_{\rm{N}} / 2M_{\rm{N}}})=4.8$, so that the LHCb acceptance $(2<\eta<5)$ covers the backward and central rapidities in the centre of mass frame.
Such a coverage offers an unprecedented opportunity to investigate partons carrying a large fraction of the target nucleon momentum, i.e.~large Bjorken$-x$ values, corresponding to large and negative Feynman$-x$ values\footnote{$x_{\rm{F}} \approx x_1 - x_2$ where $x_1$ and $x_2$ are the Bjorken$-x$ values of the beam and target nucleon, respectively.}.
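The quoted fixed-target kinematics follow from a two-line computation (a sketch; the nucleon mass $M_{\rm N} = 0.938$~GeV is an assumed value):

```python
import math

E_N = 7000.0   # beam energy per nucleon in GeV (proton beam)
M_N = 0.938    # nucleon mass in GeV (assumed)

# centre-of-mass energy per nucleon pair for a fixed target: s = 2 E M + 2 M^2
sqrt_s_nn = math.sqrt(2.0 * E_N * M_N + 2.0 * M_N**2)

# rapidity of the nucleon-nucleon centre of mass in the lab frame
y_shift = math.asinh(math.sqrt(E_N / (2.0 * M_N)))
```

This reproduces $\sqrt{s_{\rm NN}} \approx 115$~GeV and the rapidity shift of $4.8$ quoted above.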
The LHCb detector~\cite{Alves:2008zz} is a general-purpose forward spectrometer specialised in detecting hadrons containing $c$ and $b$ quarks, and the only LHC detector able to collect data in both collider and fixed-target mode.
It is fully instrumented in the $2<\eta<5$ region with a vertex locator (VELO), a tracking system, two Cherenkov detectors, electromagnetic and hadronic calorimeters and a muon detector.
Fig.~\ref{fig:lhcb} shows a scheme of the upgraded LHCb detector, which is currently being installed for Run 3, starting in 2022.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{figs/lhcb.pdf}
\caption{The Run 3 LHCb detector.}
\label{fig:lhcb}
\end{figure}
The fixed-target physics program at LHCb has been active since the installation of the SMOG (System for Measuring the Overlap with Gas) device~\cite{Aaij:2014ida}, which enables the injection of noble gases into the beam pipe section crossing the VELO detector at a pressure of $\mathcal{O}(10^{-7})~\rm{mbar}$.
Precise measurements of charm~\cite{LHCb:2018ygc} and antiproton~\cite{LHCb:2018jry} production have been published based on $p-\rm{Ar}$ and $p-\rm{He}$ fixed-target collisions. Fig.~\ref{fig:smog_phe} shows the high-quality and low-background samples collected in just one week of dedicated SMOG runs during Run 2.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{figs/smog_phe.pdf}
\caption{$J/\psi\to\mu^+\mu^-$ (left) and $D^0\to K^-\pi^+$ (right) SMOG samples from~\cite{LHCb:2018ygc}.}
\label{fig:smog_phe}
\end{figure}
With the SMOG2 upgrade~\cite{LHCbCollaboration:2673690}, an openable gas storage cell, shown in Fig.~\ref{fig:smog2}, has been installed in 2020 in front of the VELO.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{figs/smog2_open.pdf}
\includegraphics[width=0.49\textwidth]{figs/smog2_closed.pdf}
\caption{The SMOG2 storage cell in the open (left) and closed (right) configuration.}
\label{fig:smog2}
\end{figure}
The cell boosts the target areal density by a factor of $8$ to $35$, depending on the injected gas species. In addition, SMOG2 data will be collected in the upcoming Run 3 with novel reconstruction software allowing simultaneous data-taking of beam-gas and beam-beam collisions. Very high tracking efficiency is expected in the beam-gas interaction region, despite its upstream position with respect to the VELO. Furthermore, beam-gas and beam-beam vertices are well separated along the $z$ coordinate, as shown in Fig.~\ref{fig:kin}.
SMOG2 will offer a rich physics program for Run 3 and, at the same time, allows the dynamics of the beam-target system to be investigated, setting the basis for future developments.
\\
The LHCspin project~\cite{Aidala:2019pit} aims at extending the SMOG2 physics program in Run 4 (expected to start in 2028) and, with the installation of a polarised gas target, to bring spin physics to the LHC by exploiting the well-suited LHCb detector.
A selection of physics opportunities accessible at LHCspin is presented in Sec.~\ref{sec:phys}, while the experimental setup is discussed in Sec.~\ref{sec:det}.
\section{Physics case}
\label{sec:phys}
The physics case of LHCspin covers three main areas: exploration of the wide physics potential offered by unpolarised gas targets, investigation of the nucleon spin and heavy ion collisions.
\begin{figure}[h]
\centering
\includegraphics[width=0.52\textwidth]{figs/rta.pdf}
\hfill
\includegraphics[width=0.47\textwidth]{figs/kin.pdf}
\caption{Left: VELO track reconstruction efficiency for beam-gas (red) and beam-beam (blue) primary vertices (PV)~\cite{LHCB-FIGURE-2019-007}. Right: kinematic coverage of LHCspin (orange) and other existing facilities.}
\label{fig:kin}
\end{figure}
\subsection{Measurements with unpolarised gases}
\label{ssec:pdfs}
Similarly to SMOG2, LHCspin will allow the injection of several species of unpolarised gases: $\rm{H}_2$, $\rm{D}_2$, He, $\rm{N}_2$, $\rm{O}_2$, Ne, Ar, Kr and Xe. The impact of the gas on the LHC beam lifetime is negligible: the luminosity loss due to collisions on hydrogen in the cell has a characteristic time of around $2000$ days, whereas typical runs last for $10-20$ hours.
\\
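The negligible beam impact quoted above can be checked with a simple exponential-loss estimate (a sketch; only the $2000$-day characteristic time is taken from the text):

```python
import math

TAU_DAYS = 2000.0   # characteristic luminosity-loss time on hydrogen (quoted)

def beam_fraction_lost(run_hours):
    """Fraction of the beam consumed by beam-gas collisions in one run,
    assuming exponential decay with time constant TAU_DAYS."""
    return 1.0 - math.exp(-run_hours / (TAU_DAYS * 24.0))
```

A 20-hour run costs about $0.04\%$ of the beam, which is indeed negligible.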
Injecting unpolarised gases yields excellent opportunities to investigate parton distribution functions (PDFs) of both nucleons and nuclei in the large-$x$ and intermediate-$Q^2$ regime (Fig.~\ref{fig:kin}, right); these are especially affected by a lack of experimental data and impact several fields of physics, from QCD to astrophysics.
For example, the large acceptance and high reconstruction efficiency of LHCb on heavy flavour states enables the study of gluon PDFs, which represent fundamental inputs for theoretical predictions~\cite{Hadjidakis:2018ifr} and are currently affected by large uncertainties, as shown in the example of Fig.~\ref{fig:gpdf} (left).
In addition, the structure of heavy nuclei is known to depart from that obtained by the simple sum of free protons and neutrons: within the unique acceptance of LHCb, a large amount of data can be collected to shed light on the intriguing anti-shadowing effect expected at $x\sim 0.1$~\cite{Eskola:2016oht}, as shown in Fig.~\ref{fig:gpdf} (right).
\begin{figure}[ht]
\centering
\includegraphics[width=0.40\textwidth]{figs/gpdf.pdf}
\hfill
\includegraphics[width=0.50\textwidth]{figs/emc.pdf}
\caption{Left: Relative uncertainty on the CT$14$ gluon PDF in a proton~\cite{Dulat:2015mca_santi}. Right: relative uncertainties on a set of gluon nuclear PDFs in a lead nucleus~\cite{Aidala:2019pit}.}
\label{fig:gpdf}
\end{figure}
With the large amount of data to be collected with LHCspin, several measurements impacting astrophysics and cosmic ray physics become possible.
For example, heavy-flavour hadroproduction directly impacts the knowledge of the prompt muonic neutrino flux~\cite{Garzelli:2016xmx}, which is especially affected by PDF uncertainties, while large samples of proton collisions on helium, oxygen and nitrogen provide valuable inputs to improve the understanding of the composition of ultra-high-energy cosmic rays. Moreover, the possibility of injecting an oxygen beam opens new and exciting prospects for antiproton measurements~\cite{Brewer:2021kiv_santi}.
\subsection{Spin physics}
Besides the collinear PDFs mentioned in Sec.~\ref{ssec:pdfs}, polarised quark and gluon distributions can be probed at LHCspin by means of proton collisions on polarised hydrogen and deuterium.
Fig.~\ref{fig:wigner} shows the 5D Wigner distributions~\cite{Bhattacharya:2017bvs} which, upon integration on the transverse momentum, lead to the observable generalised parton distributions (GPDs), while transverse momentum dependent distributions (TMDs) are obtained when integrating over the transverse coordinate. There are several leading-twist distributions that can be probed with unpolarised and transversely polarised quarks and nucleons, giving independent information on the spin structure of the nucleon.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{figs/wigner.pdf}
\caption{Wigner distributions (top) and leading-twist GPDs and TMDs for different combinations of quark$\times$nucleon polarisation states (bottom). Distributions marked in red vanish for no orbital angular momentum contribution to the nucleon spin, while the quantities highlighted in green can be accessed at LHCspin~\cite{pasquini}.}
\label{fig:wigner}
\end{figure}
To access the transverse motion of partons within a polarised nucleon, transverse single spin asymmetries (TSSAs) can be measured. For example
\begin{equation}
A_N=\frac{1}{\it{P}}\frac{\sigma^{\uparrow}-\sigma^{\downarrow}}{\sigma^{\uparrow}+\sigma^{\downarrow}} \sim \frac{f^q_1(x_1,k^2_{T1}) \otimes f^{\perp\overline{q}}_{1T}(x_2,k^2_{T2})}{f^q_1(x_1,k^2_{T1}) \otimes f^{\overline{q}}_1(x_2,k^2_{T2})}
\end{equation}
in the polarised Drell-Yan (DY) channel probes the product of $f_1$ (unpolarised TMD) and $f^{\perp}_{1T}$ (Sivers function) for quarks and antiquarks in the low $(x_1)$ and high $(x_2)$ $x$ regimes. Projections for the uncertainty of such measurements, based on an integrated luminosity of $10~\rm{fb}^{-1}$, are shown in Fig.~\ref{fig:dy} (left). Since the Sivers function is T-odd, it is theoretically established that it changes sign in polarised DY with respect to semi-inclusive deep inelastic scattering~\cite{Collins:2002kn}. This fundamental QCD prediction can be verified by exploiting the large sample of DY data expected at LHCspin. In addition, isospin effects can be investigated by comparing $p-\rm{H}$ and $p-\rm{D}$ collisions.
\\
Several TMDs can be probed by evaluating the azimuthal asymmetries of the produced dilepton pair: projected precisions for three such asymmetries are shown in Fig.~\ref{fig:dy} (right).
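Experimentally, the asymmetry $A_N$ defined above is built from yields recorded in the two target-spin states. A minimal counting sketch follows; the polarisation value and yields used with it are hypothetical:

```python
import math

def transverse_ssa(n_up, n_down, polarisation):
    """Raw TSSA: A_N = (1/P) * (N_up - N_down) / (N_up + N_down)."""
    return (n_up - n_down) / (n_up + n_down) / polarisation

def transverse_ssa_stat_err(n_up, n_down, polarisation):
    """Statistical uncertainty for Poisson-fluctuating yields:
    sigma_A = (2 / P) * sqrt(N_up * N_down / (N_up + N_down)^3)."""
    n = n_up + n_down
    return 2.0 * math.sqrt(n_up * n_down / n**3) / polarisation
```

Frequent reversal of the target polarisation sign helps cancel acceptance effects in this ratio.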
\begin{figure}[ht]
\centering
\includegraphics[width=0.56\textwidth]{figs/dy.pdf}
\hfill
\includegraphics[width=0.42\textwidth]{figs/an.pdf}
\caption{Left: Measurements of $A_N$ in DY events as a function of $x$ compared to two theoretical predictions~\cite{Aidala:2019pit}. Right: projected precision for some azimuthal asymmetry amplitudes with DY data as a function of the dilepton invariant mass~\cite{Hadjidakis:2018ifr}.}
\label{fig:dy}
\end{figure}
Heavy-flavour states will be the strong point of LHCspin. Being mainly produced via gluon fusion at the LHC, quarkonia and open heavy-flavour states will allow probing of the unknown gluon Sivers function via inclusive production of $J/\psi$ and $D^0$, but also via several unique states like $\eta_c$, $\chi_c$, $\chi_b$ or $J/\psi J/\psi$.
Fig.~\ref{fig:tmds} (left) shows two predictions for $A_N$ on $J/\psi$ events: $5-10\%$ asymmetries are expected in the $x_F<0$ region, where the LHCspin sensitivity is the highest.
Heavy flavour states can be exploited as well to probe the gluon-induced asymmetries $h^{\perp g}_{1}$ (Boer-Mulders) and $f_1^g$ (always present at the denominator of $A_N$), which are both experimentally unconstrained.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{figs/gsf.pdf}
\hfill
\includegraphics[width=0.49\textwidth]{figs/tmd.pdf}
\caption{Left: theoretical predictions for $A_N$ in inclusive $J/\psi$ production~\cite{DAlesio:2018rnv}. Right: up quark densities in momentum space~\cite{Bacchetta:2017gcc}.}
\label{fig:tmds}
\end{figure}
While TMDs provide a ``tomography'' of the nucleon in momentum space (Fig.~\ref{fig:tmds}, right), a 3D picture in the spatial coordinates can be built by measuring GPDs, as shown in Fig.~\ref{fig:pic}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{figs/pic.pdf}
\caption{Distortion of the up and down quark distributions in the impact parameter space when spin is taken into account~\cite{saggiatore}.}
\label{fig:pic}
\end{figure}
Correlating position and momentum, GPDs also quantify the parton orbital angular momentum, whose contribution to the total nucleon spin can be inferred, for example, via the Ji sum rule~\cite{Ji:1996ek}.
GPDs can be experimentally probed with exclusive dilepton and quarkonia productions in ultra-peripheral collisions, which are dominated by the electromagnetic interaction.
For example, TSSAs can be exploited to access the $E_g$ function, which has never been measured and represents a key element of the proton spin puzzle.
It is also attractive to measure the elusive
transversity PDF, whose knowledge is currently limited to valence quarks at the leading order~\cite{Radici:2018iag}, as well as its integral, the tensor charge, which is of direct interest in constraining physics beyond the Standard Model~\cite{Courtoy:2015haa}.
\subsection{Heavy ion collisions}
Thermal heavy-flavour production is negligible at the typical temperature of a few hundred MeV of the system created in heavy-ion collisions. Quarkonium states ($c\overline{c}$, $b\overline{b}$) are instead produced on shorter timescales, and the modification of their energy while traversing the medium represents a powerful way to investigate Quark-Gluon Plasma (QGP) properties. The LHCb capabilities allow both the aforementioned charmonium and bottomonium studies and their extension to beauty baryons as well as to exotic probes.
The QGP phase diagram exploration at LHCspin can be performed with a rapidity scan at a centre-of-mass energy of $\sqrt{s_{\rm{NN}}}=72~\rm{GeV}$, which lies between those accessed at RHIC and the SPS.
In addition, flow measurements will greatly benefit from the excellent identification performance of LHCb on charged and neutral light hadrons.
\\
The dynamics of small systems is an interesting topic joining heavy-ion collisions and spin physics.
In the spin $1$ deuteron nucleus, the nucleon matter distribution is prolate for $j_3=\pm1$ and oblate for $j_3=0$, where $j_3$ is the projection of the spin along the polarisation axis.
In ultra-relativistic collisions of lead ions on transversely polarised deuterons, the deformation of the target deuteron can influence the orientation of the fireball in the transverse plane, quantified by the ellipticity, as shown in Fig.~\ref{fig:deuteron}.
The measurement proposed in~\cite{Broniowski:2019kjo} can easily be performed at LHCspin on minimum bias events thanks to the high-intensity LHC beam.
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{figs/deuteron.pdf}
\hspace{1cm}
\includegraphics[width=0.42\textwidth]{figs/ellipticity.pdf}
\caption{Left: sketch of an ultra-relativistic collision of a lead nucleus with a transversely polarised deuteron for two different angular momentum projections. Right: ellipticity with respect to the polarisation axis as a function of the collision centrality with LHCspin kinematics~\cite{Broniowski:2019kjo}.}
\label{fig:deuteron}
\end{figure}
\section{Experimental setup}
\label{sec:det}
The LHCspin experimental setup is in its R\&D phase and calls for the development of a new-generation polarised target. The starting point for this ambitious task is the polarised target system employed at the HERMES experiment~\cite{Airapetian:2004yf}, which comprises three main components: an Atomic Beam Source (ABS), a Polarised Gas Target (PGT) and a diagnostic system.
The ABS consists of a dissociator with a cooled nozzle, a Stern-Gerlach apparatus to focus the desired hyperfine states, and adiabatic RF transitions for setting and switching the target polarisation between states of opposite sign.
The ABS injects a beam of polarised hydrogen or deuterium into the PGT, which is located in the LHC primary vacuum.
The PGT hosts a T-shaped openable storage cell, sharing the SMOG2 geometry, and a compact superconducting dipole magnet, as shown in Fig.~\ref{fig:rd}. The magnet generates a $300~\rm{mT}$ static transverse field with a homogeneity of 10\%, which is found to be sufficient to avoid beam-induced depolarisation~\cite{Steffens:2019kgb}.
Studies for the inner coating of the storage cell are currently ongoing, with the aim of producing a surface that minimises the molecular recombination rate as well as the secondary electron yield.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{figs/pgt.pdf}
\caption{A drawing of the PGT with the magnet coils (orange) and the iron return yoke (blue) enclosing the storage cell. The VELO vessel and detector box are shown in green and grey, respectively.}
\label{fig:rd}
\end{figure}
In Fig.~\ref{fig:pgtvelo} (left), the PGT is shown in the same location as the SMOG2 cell, a configuration that offers a large kinematic acceptance and does not require additional detectors in LHCb. Fig.~\ref{fig:pgtvelo} (right) shows the efficiency to reconstruct a primary vertex and both tracks in simulated \mbox{$\Upsilon\to\mu^+\mu^-$} events as a function of $x_F$ for three possible locations of a $20~\rm{cm}$-long storage cell.
The simulation is performed within the GAUSS framework~\cite{Clemencic:2011zza} with upgrade LHCb conditions. New algorithms are currently being developed for the Run 3 fixed-target reconstruction and are expected to significantly improve the current performance, as well as to enable recording LHCspin data in parallel with $p$-$p$ collisions~\cite{LHCB-FIGURE-2019-007}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{figs/pgtvelo.pdf}
\hfill
\includegraphics[width=0.45\textwidth]{figs/upsilon.pdf}
\caption{Left: The PGT and the VELO vessel (green). Right: simulated reconstruction efficiency for $\Upsilon\to\mu^+\mu^-$ events with three different cell positions, the blue line corresponding to the SMOG2 location.}
\label{fig:pgtvelo}
\end{figure}
The diagnostic system continuously analyses gas samples drawn from the PGT and comprises a target gas analyser to detect the molecular fraction, and thus the degree of dissociation, and a Breit-Rabi polarimeter to measure the relative population of the injected hyperfine states.
\\
An instantaneous luminosity of $\mathcal{O}(10^{32})~\rm{cm}^{-2}\rm{s}^{-1}$ is foreseen for fixed-target $p-\rm{H}$ collisions in Run 4, with a further factor $3-5$ increase for the high-luminosity LHC phase from Run 5 (2032).
\section{Conclusions}
The fixed-target physics program at the LHC has been greatly enhanced by the recent installation of the SMOG2 gas storage cell at LHCb.
LHCspin is the natural evolution of SMOG2 and aims at installing a polarised gas target to bring spin physics to the LHC for the first time, opening a whole new range of exploration. With strong interest and support from the international theoretical community, LHCspin is a unique opportunity to advance our knowledge of several unexplored areas of QCD, complementing both existing facilities and the future Electron-Ion Collider~\cite{Accardi:2012qut}.
\paragraph{Funding information}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No.~824093 (STRONG-2020).
\nocite{*}
\bibliographystyle{auto_generated}
\section{Introduction}\label{Int}
We prove estimates on the time and space regularity of solutions to porous medium equations
\begin{align}\label{pme_sys}
\begin{pdeq}
\partial_t u-\Delta u^{[m]} &= S && \text{in } (0,T)\times \mathbb{R}^d,\\
u(0)&=u_0 && \text{in } \mathbb{R}^d
\end{pdeq}
\end{align}
where $u^{[m]}:=|u|^{m-1} u$ with $m>1$, $u_0 \in L^1(\mathbb{R}^d)$ and $S\in L^1((0,T)\times \mathbb{R}^d)$.
Solutions to porous medium equations are known to exhibit nonlinear phenomena like slow diffusion or filling up of holes at finite rate: if the initial data is compactly supported, then the support of the solution evolves with a free boundary that has finite speed of propagation. The solution close to the boundary is not smooth even for smooth initial data and zero forcing.
Despite many works on the regularity of solutions to porous medium equations, until recently the regularity results established in the literature in terms of Hölder or Sobolev spaces were restricted to spatial differentiability of order less than one (cf.\@ Ebmeyer \cite{Ebm05}, Tadmor and Tao \cite{TaT07}). In the limit $m\searrow1$ this is in stark contrast to the linear case $m=1$, where $u$ is up to twice weakly differentiable in space. Very recently, the first author proved optimal spatial regularity for \eqref{pme_sys} in \cite{Ges17} for initial data $u_{0}\in(L^{1}\cap L^{1+\varepsilon})(\mathbb{R}^{d})$ for some $\varepsilon>0$. This leaves open three main aspects addressed in the present work: first, the derivation of optimal\footnote{Optimality is indicated by scaling arguments in Section \ref{Scale} below, and the derived estimates are consistent with the optimal space-time regularity in the linear case $m=1$.} space-time regularity; second, the limit case $u_{0}\in L^{1}(\mathbb{R}^{d})$, which is of particular importance since it covers the Barenblatt solution, for which the estimates are shown to be optimal, cf.\@ Section~\ref{Scale} below; third, higher-order integrability. The solution of these three open problems is the purpose of the present paper.
The first main result provides optimal space-time regularity for $L^1$ data.
\begin{thm}\label{cor:pme_l1}
Let $u_0\in\LR{1}(\mathbb{R}^d)$, $S\in L^1((0,T)\times \mathbb{R}^d)$ and $m\in(1,\infty)$. Let $u$ be the unique entropy solution to \eqref{pme_sys} on $[0,T]\times \mathbb{R}^d$.
\begin{enumerate}
\item Let $p\in (1,m]$ and define
\begin{align*}
\kappa_t:= \frac{m-p}{p}\frac{1}{m-1}, \quad
\kappa_x:= \frac{p-1}{p}\frac{2}{m-1}.
\end{align*}
Then, for all $\sigma_t\in [0,\kappa_t) \cup \set{0}$ and $\sigma_x\in [0,\kappa_x)$ we have
\begin{align*}
u\in \WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d)).
\end{align*}
Moreover, we have the estimate
%
\begin{align}\label{cor:pme_est1_l1}
\|u\|_{\WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d))}\lesssim \|u_0\|_{L^1_{x}}^{m} +\|S\|_{L^1_{t,x}}^m +1.
\end{align}
%
\item Suppose ${\mathcal O}\subset\subset \mathbb{R}^d$. Let $s\in [0,1]$ and define
\begin{align*}
p&:=s(m-1)+1, \quad
\kappa_t:= \frac{1-s}{s(m-1)+1}, \quad
\kappa_x:= \frac{2s}{s(m-1)+1}.
\end{align*}
Then for all $\sigma_t\in [0,\kappa_t)\cup \set{0}$, $\sigma_x\in [0,\kappa_x)\cup\set{0}$ and $q\in [1,p]$ we have
\begin{align*}
u\in \WSR{\sigma_t}{q}(0,T;\WSR{\sigma_x}{q}({\mathcal O})).
\end{align*}
Moreover, we have the estimate
%
\begin{align}\label{cor:pme_est2_l1}
\|u\|_{\WSR{\sigma_t}{q}(0,T;\WSR{\sigma_x}{q}({\mathcal O}))}\lesssim \|u_0\|_{L^1_x}^{m} +\|S\|_{L^1_{t,x}}^m +1.
\end{align}
%
\end{enumerate}
\end{thm}
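For orientation, note how the exponents in part 1 interpolate between pure time and pure space regularity: at the endpoint $p=1$ one formally obtains $(\kappa_t,\kappa_x)=(1,0)$, i.e.\@ (almost) one full derivative in time and none in space, whereas at $p=m$
\begin{align*}
\kappa_t= \frac{m-m}{m}\frac{1}{m-1}=0, \qquad
\kappa_x= \frac{m-1}{m}\frac{2}{m-1}=\frac{2}{m},
\end{align*}
i.e.\@ (almost) $\frac{2}{m}$ spatial derivatives in $L^m$ and no fractional regularity in time.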
In the previous works by Tadmor and Tao \cite{TaT07} and Ebmeyer \cite{Ebm05}, initial data in $L^1 \cap L^\infty$ was considered. However, the methods employed in these works did not allow for a systematic analysis of the order of integrability of the solutions. For example, the results of \cite{Ebm05} are restricted to the particular order of integrability $p=\frac{2}{m+1}$, while \cite{TaT07} is restricted to $p=1$. In the second main result we provide a systematic treatment of higher-order integrability. In particular, this includes and generalizes the corresponding results of \cite{Ebm05} in terms of regularity in Sobolev spaces.
Noting that, by virtue of the equation \eqref{pme_sys}, the regularity of $u^{[m]}$ contains information on the time regularity of $u$, we additionally analyze the spatial regularity of the powers $u^{[\mu]}$ of the solution for $\mu\in[1,m]$.
\begin{thm}\label{lem:pme}
Let $u_0\in L^1(\mathbb{R}^d)\cap L^{\rho}(\mathbb{R}^d)$, $S\in L^1([0,T]\times\mathbb{R}^d)\cap L^{\rho}([0,T]\times\mathbb{R}^d)$ for some $\rho\in(1,\infty)$ and assume $m\in\vpp{1,\infty}$. Let $u$ be the unique entropy solution to \eqref{pme_sys} on $[0,T]\times\mathbb{R}^d$.
\begin{enumerate}
\item Let $\mu\in[1,m]$. Then, for all $p\in (1,\frac{m-1+\rho}{\mu})$ and $\sigma_x\in[0,\frac{\mu p -1}{p}\frac{2}{m-2+\rho})$ we have
\begin{align*}
u^{[\mu]}\in \LR{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d)),
\end{align*}
and we have the estimate
%
\begin{align}\label{lem:pme_est2}
\|u^{[\mu]}\|_{\LR{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d))}\lesssim \|u_0\|_{L^1_x\cap L^{\rho}_{x}}^{\mu\rho} + \|S\|_{L^1_{t,x}\cap L^{\rho}_{t,x}}^{\mu\rho}+1.
\end{align}
\item Let $p\in (\rho,m-1+\rho)$ and define
\begin{align*}
\kappa_t:= \frac{m-1+\rho-p}{p}\frac{1}{m-1}, \quad
\kappa_x:= \frac{p-\rho}{p}\frac{2}{m-1}.
\end{align*}
Then, for all $\sigma_t\in [0,\kappa_t)$ and $\sigma_x\in [0,\kappa_x)$ we have
\begin{align*}
u\in \WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d)).
\end{align*}
Moreover, we have the estimate
%
\begin{align}\label{lem:pme_est1}
\|u\|_{\WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d))}\lesssim \|u_0\|_{L^1_x\cap L^{\rho}_{x}}^{\rho} + \|S\|_{L^1_{t,x}\cap L^{\rho}_{t,x}}^{\rho}+1.
\end{align}
%
\end{enumerate}
\end{thm}
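As a consistency check, formally setting $\rho=1$ in part 2 of Theorem \ref{lem:pme} recovers the exponents of part 1 of Theorem \ref{cor:pme_l1}:
\begin{align*}
\frac{m-1+\rho-p}{p}\frac{1}{m-1}\bigg|_{\rho=1}= \frac{m-p}{p}\frac{1}{m-1}, \qquad
\frac{p-\rho}{p}\frac{2}{m-1}\bigg|_{\rho=1}= \frac{p-1}{p}\frac{2}{m-1},
\end{align*}
so that Theorem \ref{cor:pme_l1} appears as the limit case of the integrability scale in Theorem \ref{lem:pme}.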
Similarly to Theorem \ref{cor:pme_l1}, if one restricts to estimates that are localized in space, the rigid interdependence of the coefficients in Theorem \ref{lem:pme} can be relaxed.
\begin{cor}\label{cor:pme}
Under the assumptions of Theorem \ref{lem:pme}, suppose ${\mathcal O}\subset\subset \mathbb{R}^d$.
\begin{enumerate}
\item Let $\mu\in[1,m]$. Then, for all $\sigma_x\in[0,\frac{2\mu}{m})$ and $q\in[1,\frac{m}{\mu}]$ we have
\begin{align*}
u^{[\mu]}\in \LR{q}(0,T;\WSR{\sigma_x}{q}({\mathcal O})),
\end{align*}
and we have the estimate
%
\begin{align}\label{cor:pme_est2}
\|u^{[\mu]}\|_{\LR{q}(0,T;\WSR{\sigma_x}{q}({\mathcal O}))}\lesssim \|u_0\|_{L^1_x\cap L^{\rho}_{x}}^{\mu\rho} + \|S\|_{L^1_{t,x}\cap L^{\rho}_{t,x}}^{\mu\rho}+1.
\end{align}
%
\item Let $s\in [0,1]$ and define
\begin{align*}
p&:=s(m-1)+1, \quad
\kappa_t:= \frac{1-s}{s(m-1)+1}, \quad
\kappa_x:= \frac{2s}{s(m-1)+1}.
\end{align*}
Then for all $\sigma_t\in [0,\kappa_t)\cup \set{0}$, $\sigma_x\in [0,\kappa_x)\cup \set{0}$ and $q\in [1,p]$ we have
\begin{align*}
u\in \WSR{\sigma_t}{q}(0,T;\WSR{\sigma_x}{q}({\mathcal O})).
\end{align*}
Moreover, we have the estimate
%
\begin{align}\label{cor:pme_est1}
\|u\|_{\WSR{\sigma_t}{q}(0,T;\WSR{\sigma_x}{q}({\mathcal O}))}\lesssim \|u_0\|_{L^1_x\cap L^{\rho}_{x}}^{\rho} + \|S\|_{L^1_{t,x}\cap L^{\rho}_{t,x}}^{\rho}+1.
\end{align}
%
\end{enumerate}
\end{cor}
The methods employed in this work are inspired by Tadmor and Tao \cite{TaT07} and rely on the kinetic form of \eqref{pme_sys}, that is, with $f(t,x,v):=1_{v<u(t,x)}-1_{v<0}$,
\begin{align}
\partial_{t}f-m|v|^{m-1}\Delta_{x}f & =\partial_{v}q+S(t,x)\delta_{u(t,x)}(v),\label{pme_sys_kin}
\end{align}
for a non-negative measure $q$, which allows the use of averaging lemmata and real interpolation. An essential difference to the purely spatial regularity theory is the need to work with anisotropic fractional Sobolev spaces, which only in their homogeneous form are well adapted to the Fourier-analytic methods of this work, much in contrast to the purely spatial case in \cite{Ges17}. This leads to the so-called anisotropic Besov spaces with dominating mixed derivatives introduced by Schmeisser and Triebel \cite{ScT87}. Passing from these homogeneous anisotropic spaces to standard inhomogeneous fractional Sobolev spaces is delicate and treated in detail below. A main ingredient in the proof of optimal regularity in \cite{Ges17} was the existence of singular moments $\int_{t,x,v}|v|^{-\gamma}q$ for $\gamma\in(0,1)$. This ceases to be true for general $L^{1}$ initial data. This difficulty is overcome in the present work by treating separately the degeneracy at $|v|=0$ and the singularity at $|v|=\infty$ as they appear in \eqref{pme_sys_kin}. This also makes it necessary to use the equation \eqref{pme_sys_kin} in the regime of small spatial modes $\xi$ in order to obtain optimal time regularity, see Corollary \ref{cor:av3} below.
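Recall the elementary identity underlying this approach: the velocity average of the kinetic function recovers the solution, since
\begin{align*}
\int_{\mathbb{R}}f(t,x,v)\,\mathrm{d} v=\int_{\mathbb{R}}\big(1_{v<u(t,x)}-1_{v<0}\big)\,\mathrm{d} v=u(t,x),
\end{align*}
both for positive and for negative values of $u(t,x)$. Regularity of velocity averages of $f$, as provided by averaging lemmata, therefore transfers directly to $u$.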
\textbf{Comments on the literature:} The (spatial) regularity of solutions to porous medium equations in Sobolev spaces has previously been considered in \cite{Ebm05,Ges17,TaT07}. Since our main focus is on time-space regularity, we refer to \cite{Ges17} for a more detailed account on the available literature in this regard.
In the case of non-negative solutions the spatial regularity of special types of powers of solutions has been investigated in the literature. For example, much work is devoted to the pressure defined by $v:=\frac{m}{m-1}u^{m-1}$ (cf.\@ e.g.\@ V\'azquez \cite{V07} and the references therein). In the recent work \cite{GS16}, Gianazza and Schwarzacher proved higher integrability for nonnegative, local weak solutions to forced porous medium equations in the sense that $u^{\frac{m+1}{2}} \in L^{2+\varepsilon}_{loc}((0,T);W^{1,2+\varepsilon}_{loc})$ for all $\varepsilon>0$ small enough. In \cite{BDKS18}, this result was generalized by Bögelein, Duzaar, Korte, and Scheven.
The analysis of the regularity in time of solutions to porous medium equations (without forcing) has a long history, initiated by Aronson and Bénilan in \cite{AB79} and continued by Crandall, Pazy and Tartar in \cite{CPT79} and by Bénilan and Crandall in \cite{BC81}, where it has been shown that
\begin{equation}\label{eq:time_l1}
\partial_t u \in L^1_{loc}((0,\infty);L^1(\mathbb{R}^d))
\end{equation}
for $u_0 \in L^1(\mathbb{R}^d)$. Subsequently, in \cite{CP82,CP82-2}, Crandall and Pierre have devoted considerable effort to relax the required assumptions on the nonlinearity $\psi$ in the case of generalized porous medium equations
\begin{align}\label{gen_pme_sys}
\partial_tu-\Delta \psi(u) &= 0 \quad \text{in } (0,T)\times \mathbb{R}^d.
\end{align}
More precisely, in \cite{CP82} assuming the global generalized homogeneity condition
\begin{equation}\label{eq:intro-homogeneity}
\nu \frac{\psi(r)\psi''(r)}{(\psi'(r))^2}\in [m,M],
\end{equation}
for some $0<m<M$, $\nu \in \set{\pm 1}$ and all $r\in\mathbb{R}$, \eqref{eq:time_l1} was recovered. \\
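For illustration, the model nonlinearity $\psi(r)=r^{[m]}$ satisfies \eqref{eq:intro-homogeneity}: for $r>0$,
\begin{align*}
\frac{\psi(r)\psi''(r)}{(\psi'(r))^2}=\frac{r^{m}\cdot m(m-1)r^{m-2}}{m^{2}r^{2m-2}}=\frac{m-1}{m},
\end{align*}
and by oddness of $\psi$ the same constant value is obtained for $r<0$, so that \eqref{eq:intro-homogeneity} holds with $\nu=1$ and any bounds $0<m'\le\frac{m-1}{m}\le M'$ (where we write $m',M'$ to avoid confusion with the exponent $m$).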
It should be noted that the methods developed in these works are restricted to the non-forced case $S\equiv0$. In fact, for $S\not\equiv 0$, the linear case $m=1$ demonstrates that \eqref{eq:time_l1} should not be expected. We are not aware of results proving regularity in time in Sobolev spaces for porous medium equations with non-vanishing forcing. In this sense, restricting to regularity in time alone, the results of the present work can be regarded as the (partial) extension of the results of \cite{AB79,BC81,CPT79,CP82,CP82-2} to non-vanishing forcing.
We are not aware of previous results on mixed time and space regularity in Sobolev spaces for solutions to porous medium equations.
For simplicity of the presentation we restrict to the nonlinearity $\psi(u)=u^{[m]}$ in this work. However, the methods that we present are not restricted to this case, as long as $\psi$ satisfies a non-linearity condition as in \cite{Ges17}. In addition, by means of a velocity decomposition, i.e.\ writing
$$u(t,x) = \sum_{i=1}^K u^i(t,x) := \sum_{i=1}^K \int_v \varphi^i(v)f(t,x,v) \,\mathrm{d} v,$$
where $\varphi^i$, $i=1,\dots,K$, is a smooth partition of unity, such a non-linearity condition only needs to be imposed locally at points of degeneracy. This is in contrast to assumptions such as \eqref{eq:intro-homogeneity}, imposed in the series of works \cite{AB79,BC81,CPT79,CP82,CP82-2} mentioned above, which can be regarded as \textit{global} generalized homogeneity conditions.
\textbf{Structure of this work:} In Section \ref{FctSpc} we collect information on the class of homogeneous and inhomogeneous anisotropic spaces with dominating mixed derivatives employed in this work. The optimality of the obtained estimates is indicated in Section \ref{Scale} by scaling arguments and by explicit computations in the case of the Barenblatt solution. In Section \ref{AvLem} we provide general averaging lemmata (Lemma \ref{lem:av} and Lemma \ref{lem:av2}) in the framework of homogeneous spaces with dominating mixed derivatives and translate them to more standard inhomogeneous anisotropic fractional Sobolev spaces (Corollary \ref{cor:av0}, Corollary \ref{cor:av} and Corollary \ref{cor:av3}). In this formulation, they yield the main results upon application to the porous medium equation in Section \ref{App_PME}.
\section{Preliminaries, Notation and Function Spaces}\label{FctSpc}
We use the notation $a\lesssim b$ if there is a universal constant $C>0$ such that $a\le Cb$. We introduce $a\gtrsim b$ in a similar manner, and write $a \sim b$ if $a\lesssim b$ and $a\gtrsim b$.
For a Banach space $X$ and $I\subset \mathbb{R}$ we denote by $C(I;X)$ the space of bounded and continuous $X$-valued functions endowed with the norm $\|f\|_{C(I;X)}:=\sup_{t\in I}\|f(t)\|_X$. If $X=\mathbb{R}$ we write $C(I)$. For $k\in \mathbb{N}\cup\set{\infty}$, the space of $k$-times continuously differentiable functions is denoted by $C^k(I;X)$. The subspace of $C^k(I;X)$ consisting of compactly supported functions is denoted by $C^k_c(I;X)$. Moreover, we write ${\mathcal M}_{TV}$ for the space of all measures with finite total variation.
Throughout this article we use several types of $L^p$-based function spaces.
For a Banach space $X$ and $p\in[1,\infty]$, we endow the Bochner-Lebesgue space $\LR{p}(\mathbb{R};X)$ with the usual norm
\begin{align*}
\norm{f}_{\LR{p}(\mathbb{R};X)}&:=\left(\int_{\mathbb{R}}\norm{f(t)}_{X}^p \,\mathrm{d} t\right)^\frac{1}{p},
\end{align*}
with the standard modification in the case of $p=\infty$. For $k\in \mathbb{N}_0:=\mathbb{N}\cup\set{0}$, the corresponding $X$-valued Sobolev space is denoted by $\WSR{k}{p}(\mathbb{R};X)$. If $\sigma\in (0,\infty)$ is non-integer (say $\sigma=k+r$, with $k\in\mathbb{N}_0$ and $r\in(0,1)$), then we define the $X$-valued Sobolev-Slobodecki\u{\i} space $\WSR{\sigma}{p}(\mathbb{R};X)$ as the space of functions in $\WSR{k}{p}(\mathbb{R};X)$ with
\begin{align}\label{hom_sob_norm}
\norm{f}_{\DSR{\sigma}{p}(\mathbb{R};X)}:=\left(\int_{\mathbb{R}\times\mathbb{R}}\frac{\norm{D^kf(t)-D^kf(s)}_X^p}{|t-s|^{rp+1}}\,\mathrm{d} s \,\mathrm{d} t\right)^{\frac{1}{p}}<\infty,
\end{align}
again with the usual modification in the case of $p=\infty$. Further, let $\DSR{\sigma}{p}(\mathbb{R};X)$ be the space of all locally integrable $X$-valued functions $f$ for which \eqref{hom_sob_norm} is finite. If we factor out the equivalence relation $\sim$, where $f\sim g$ if $\norm{f-g}_{\DSR{\sigma}{p}(\mathbb{R};X)}=0$, the space $\DSR{\sigma}{p}(\mathbb{R};X)$ equipped with the norm $\norm{\cdot}_{\DSR{\sigma}{p}(\mathbb{R};X)}$ is a Banach space.
Moreover, in order to treat regularity results in both time and space efficiently, we introduce spaces with dominating mixed derivatives set in the framework of Fourier analysis, that is, corresponding Besov spaces. These spaces have a long history in the literature, beginning with the works of S.\@ M.\@ Nikol'ski\u{\i} \cite{Nik62, Nik63b, Nik63a}. We refer the reader to the monograph of Schmeisser and Triebel \cite{ScT87} and the references therein. We adopt the notation of \cite{ScT87} for the non-homogeneous spaces. Corresponding homogeneous Besov spaces are treated in \cite{Tri77c, Tri77d}; we adapt the notation to be consistent with the one in \cite{ScT87}.
We recall from \cite{Tri77c} the definition of the spaces ${\mathcal Z}$ and ${\mathcal Z}'$ replacing the standard Schwartz space ${\mathcal S}={\mathcal S}(\mathbb{R}^{d+1})$ and the space of tempered distributions ${\mathcal S}'={\mathcal S}'(\mathbb{R}^{d+1})$ in the definition of homogeneous spaces. As we are concerned with function spaces in the time variable $t\in \mathbb{R}$ and the spatial variable $x\in \mathbb{R}^d$, we introduce besides $\mathbb{R}^{d+1}=\mathbb{R}_t\times \mathbb{R}^d_x$ also the subset
\begin{align*}
\dot\mathbb{R}^{d+1}:=\setc{(t,x)\in\mathbb{R}^{d+1}}{t|x|\neq 0}.
\end{align*}
Note that in \cite{Tri77c}, the notation $\overset{+}{\mathbb{R}^2}$ is used, which gives a better geometrical intuition of the set taken out of $\mathbb{R}^{2}$. However, for typesetting reasons, we have opted for the notation $\dot\mathbb{R}^{d+1}$. Then we let $\dot{\mathcal D}$ be the subset of the standard space of test functions ${\mathcal D}$, consisting of functions with compact support in $\dot\mathbb{R}^{d+1}$, and view it as a locally convex space equipped with the canonical topology. Its dual space is denoted by $\dot{\mathcal D}'$, and its elements are referred to as distributions over $\dot\mathbb{R}^{d+1}$. We define ${\mathcal Z}$ as the image of $\dot{\mathcal D}\subset {\mathcal S}$ under the Fourier transform ${\mathcal F}$ in time and space, equipped with the topology inherited from $\dot{\mathcal D}$. The corresponding dual space is denoted by ${\mathcal Z}'$. Since ${\mathcal F}:\dot{\mathcal D}\to{\mathcal Z}$, we can define by duality the Fourier transform ${\mathcal F}:{\mathcal Z}'\to\dot{\mathcal D}'$.
We have ${\mathcal Z}\subset {\mathcal S}$ with a continuous embedding, but the fact that ${\mathcal Z}$ is not densely embedded in ${\mathcal S}$ prevents one from regarding ${\mathcal S}'$ as a subspace of ${\mathcal Z}'$. However, we note that for $p\in(1,\infty)$, the space $\LR{p}(\mathbb{R}^{d+1})$ can be viewed both as a subspace of ${\mathcal S}'$ and as a subspace of ${\mathcal Z}'$, cf.\@ Theorem 3.3 in \cite{Tri77c}.
Let $\varphi$ be a smooth function supported in the annulus $\{\xi\in\mathbb{R}^{d}:\,\frac{1}{2}\le|\xi|\le2\}$ and such that
\[
\sum_{j\in\mathbb{Z}}\varphi_j(\xi):=\sum_{j\in\mathbb{Z}}\varphi(2^{-j}\xi)=1,\quad\forall\xi\in\mathbb{R}^{d}\setminus\set{0}.
\]
Similarly, let $\eta$ be a smooth function supported in $\vpp{-2,-\frac12}\cup \vpp{\frac12,2}$ with
\[
\sum_{l\in\mathbb{Z}}\eta_l(\tau):=\sum_{l\in\mathbb{Z}}\eta(2^{-l}\tau)=1,\quad\forall\tau\in\mathbb{R}\setminus\set{0}.
\]
Moreover, define $\phi_j:=\varphi_j$ for $j\ge 1$ and $\phi_0:=1-\sum_{j\ge 1}\phi_j$ as well as $\psi_l:=\eta_l$ for $l\ge 1$ and $\psi_0:=1-\sum_{l\ge 1}\eta_l$. We will use the shorthand notation $\eta_l\varphi_j$ for the function $(\tau,\xi)\mapsto \eta_l(\tau)\varphi_j(\xi)$, and similarly for combinations of $\psi_l$ and $\phi_j$.
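A standard construction of such a function $\varphi$ starts from a radial $\chi\in C^\infty_c(\mathbb{R}^d)$ with $\chi=1$ on $\setc{\xi}{|\xi|\le 1}$ and $\chi=0$ on $\setc{\xi}{|\xi|\ge 2}$: setting $\varphi(\xi):=\chi(\xi)-\chi(2\xi)$ yields a function supported in the annulus $\setc{\xi}{\frac12\le|\xi|\le 2}$, and the dyadic sum telescopes,
\begin{align*}
\sum_{j=-N}^{N}\varphi(2^{-j}\xi)=\chi(2^{-N}\xi)-\chi(2^{N+1}\xi)\longrightarrow 1 \quad (N\to\infty)
\end{align*}
for every $\xi\neq0$; the function $\eta$ is obtained analogously in dimension one.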
\begin{defn}\label{defn:spaces}
Let $\sigma_i\in\vpp{-\infty,\infty}$, $i=t,x$, and $p\in[1,\infty]$. Set $\overline{\sigma}:=(\sigma_t,\sigma_x)$.
\begin{enumerate}
\item The homogeneous Besov space with dominating mixed derivatives $S^{\overline{\sigma}}_{p,\infty}\dot B(\mathbb{R}^{d+1})$ is given by
\begin{align*}
S^{\overline{\sigma}}_{p,\infty}\dot B:=S^{\overline{\sigma}}_{p,\infty}\dot B(\mathbb{R}^{d+1}):=\setc{f\in{\mathcal Z}'}{\|f\|_{S^{\overline{\sigma}}_{p,\infty}\dot B}<\infty},
\end{align*}
with the norm
\begin{align*}
\|f\|_{S^{\overline{\sigma}}_{p,\infty}\dot B}:=\sup_{l,j\in\mathbb{Z}} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\eta_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}(\mathbb{R}^{d+1})}.
\end{align*}
Similarly, the space $S^{\overline{\sigma}}_{p,\infty,(\infty)}\dot B(\mathbb{R}^{d+1})$ is given via the norm
\begin{align*}
\|f\|_{S^{\overline{\sigma}}_{p,\infty,(\infty)}\dot B}:=\sup_{l,j\in\mathbb{Z}} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\eta_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p,\infty}(\mathbb{R}^{d+1})}.
\end{align*}
\item The homogeneous Chemin-Lerner spaces $\tilde{L}^{p}_{t}\dot B^{\sigma_x}_{p,\infty}(\mathbb{R}^{d+1})$ and $\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}(\mathbb{R}^{d+1})$ are given by
\begin{align*}
\tilde{L}^{p}_{t}\dot B^{\sigma_x}_{p,\infty}&:=\tilde{L}^{p}_{t}\dot B^{\sigma_x}_{p,\infty}(\mathbb{R}^{d+1}):=\setc{f\in{\mathcal S}'}{\|f\|_{\tilde{L}^{p}_{t}\dot B^{\sigma_x}_{p,\infty}}<\infty}, \\
\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}&:=\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}(\mathbb{R}^{d+1}):=\setc{f\in{\mathcal S}'}{\|f\|_{\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}}<\infty},
\end{align*}
with the norms
\begin{align*}
\|f\|_{\tilde{L}^{p}_{t}\dot B^{\sigma_x}_{p,\infty}}&:=\sup_{j\in\mathbb{Z}} 2^{\sigma_x j}\|{\mathcal F}^{-1}_{x} \varphi_j{\mathcal F}_{x} f\|_{\LR{p}(\mathbb{R}^{d+1})}, \\
\|f\|_{\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}}&:=\sup_{l\in\mathbb{Z}} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t} \eta_l{\mathcal F}_{t} f\|_{\LR{p}(\mathbb{R}^{d+1})},
\end{align*}
respectively.
\item The non-homogeneous Besov space with dominating mixed derivatives $S^{\overline{\sigma}}_{p,\infty}B(\mathbb{R}^{d+1})$ is given by
\begin{align*}
S^{\overline{\sigma}}_{p,\infty}B:=S^{\overline{\sigma}}_{p,\infty}B(\mathbb{R}^{d+1}):=\setc{f\in{\mathcal S}'(\mathbb{R}^{d+1})}{\|f\|_{S^{\overline{\sigma}}_{p,\infty} B}<\infty},
\end{align*}
with the norm
\begin{align*}
\|f\|_{S^{\overline{\sigma}}_{p,\infty}B}:=\sup_{l,j\ge 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\psi_l \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}(\mathbb{R}^{d+1})}.
\end{align*}
\item The non-homogeneous Chemin-Lerner space $\tilde{L}^{p}_{t} B^{\sigma_x}_{p,\infty}(\mathbb{R}^{d+1})$ is given by
\begin{align*}
\tilde{L}^{p}_{t} B^{\sigma_x}_{p,\infty}&:=\tilde{L}^{p}_{t} B^{\sigma_x}_{p,\infty}(\mathbb{R}^{d+1}):=\setc{f\in{\mathcal S}'}{\|f\|_{\tilde{L}^{p}_{t} B^{\sigma_x}_{p,\infty}}<\infty},
\end{align*}
with the norm
$\displaystyle \|f\|_{\tilde{L}^{p}_{t} B^{\sigma_x}_{p,\infty}}:=\sup_{j\ge 0} 2^{\sigma_x j}\|{\mathcal F}^{-1}_{x} \phi_j{\mathcal F}_{x} f\|_{\LR{p}(\mathbb{R}^{d+1})}$.
\end{enumerate}
\end{defn}
\begin{rem}
All spaces considered in Definition \ref{defn:spaces} are Banach spaces, cf.\@ \cite{Tri77c}. Note that for $\vartheta\in\mathbb{R}$, we use the notation $\vartheta\overline{\sigma}=(\vartheta\sigma_t,\vartheta\sigma_x)$. In this note, we restrict ourselves to the third index of the Besov-type space being infinity, in which case the spaces $S^{\overline{\sigma}}_{p,\infty}B$ are sometimes called \emph{Nikol'ski\u{\i} spaces of dominating mixed derivatives} in the literature. However, there is no conceptual limitation to considering also third indices $q\in[1,\infty]$. By the same token, one could also consider spaces with different indices $p$ and $q$ in different directions. We refer the reader to the monograph \cite{ScT87} for more details concerning such spaces.
\end{rem}
\begin{lem}\label{lem:emb_0}
Let $\kappa_x\ge 0$ and $p\in [1,\infty]$. Then
\[
\tilde{L}^{p}_{t} B^{\kappa_x+\varepsilon}_{p,\infty}(\mathbb{R}^{d+1})\subset \LR{p}(\mathbb{R};\WSR{\kappa_x}{p}(\mathbb{R}^d))\subset \tilde{L}^{p}_{t} B^{\kappa_x-\delta}_{p,\infty}(\mathbb{R}^{d+1}),
\]
whenever $\varepsilon>0$ and $\delta\in(0,\kappa_x]$.
\end{lem}
\begin{proof}
This follows from \cite[p.\@ 98]{BCD11}.
\end{proof}
\begin{lem}\label{lem:emb_1}
Let $\kappa_t,\kappa_x>0$ and $p\in [1,\infty)$. Then $S^{\overline{\kappa}}_{p,\infty}B\subset \WSR{\sigma_t}{p}(\mathbb{R};\WSR{\sigma_x}{p}(\mathbb{R}^d))$ whenever $\sigma_t\in[0,\kappa_t)$ and $\sigma_x\in[0,\kappa_x)$.
\end{lem}
\begin{proof}
The proof is a combination of results in \cite{ScT87}, which are stated for $\mathbb{R}\times \mathbb{R}$ but remain true for $\mathbb{R}\times \mathbb{R}^d$, as an inspection of the respective proofs shows: Without loss of generality, we can assume that $\sigma_t$ and $\sigma_x$ are non-integer. By \cite[Remark 2.3.4/4]{ScT87}, we have $\WSR{\sigma_t}{p}(\mathbb{R};\WSR{\sigma_x}{p}(\mathbb{R}^d)) = S B^{\overline{\sigma}}_{p,p}$ (see \cite[Definition 2.2.1/2]{ScT87} for a definition of the latter space). Since by \cite[Proposition 2.2.3/2]{ScT87} we have $S^{\overline{\kappa}}_{p,\infty} B\subset S B^{\overline{\sigma}}_{p,p}$, this yields the claim.
\end{proof}
\begin{lem}\label{lem:emb_2}
Let $\sigma_t,\sigma_x> 0$ and $p\in[1,\infty]$. Then
\begin{align*}
\left(\LR{p}(\mathbb{R}^{d+1})\cap \tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty} \cap \tilde{L}^{p}_{t} \dot B^{\sigma_x}_{p,\infty} \cap S^{\overline{\sigma}}_{p,\infty}\dot B\right) = S^{\overline{\sigma}}_{p,\infty} B
\end{align*}
with equivalent norms.
\end{lem}
\begin{proof}
As smooth and compactly supported functions, $\psi_0$ and $\phi_0$ extend to $L^p$ multipliers for all $p\in[1,\infty]$, see e.g.\@ \cite{BeL76}.\newline
For $f\in\left(\LR{p}(\mathbb{R}^{d+1})\cap \tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty} \cap \tilde{L}^{p}_{t} \dot B^{\sigma_x}_{p,\infty} \cap S^{\overline{\sigma}}_{p,\infty}\dot B\right)\subset{\mathcal S}'(\mathbb{R}^{d+1})$ we obtain
\begin{align*}
\|f\|_{S^{\overline{\sigma}}_{p,\infty}B}&\le \|{\mathcal F}^{-1}_{t,x}\psi_0 \phi_0{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} + \sup_{l> 0} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t,x}\eta_l \phi_0{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} \\
&\quad + \sup_{j> 0} 2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\psi_0 \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} + \sup_{l,j> 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\eta_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} \\
&\lesssim \|f\|_{\LR{p}_{t,x}} + \sup_{l> 0} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t}\eta_l {\mathcal F}_{t} f\|_{\LR{p}_{t,x}} \\
&\quad + \sup_{j> 0} 2^{\sigma_x j}\|{\mathcal F}^{-1}_{x} \varphi_j{\mathcal F}_{x} f\|_{\LR{p}_{t,x}} + \sup_{l,j> 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\eta_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} \\
&\lesssim \|f\|_{\LR{p}_{t,x}} + \|f\|_{\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}} + \|f\|_{\tilde{L}^{p}_{t}\dot B^{\sigma_x}_{p,\infty}} + \|f\|_{S^{\overline{\sigma}}_{p,\infty}\dot B}.
\end{align*}
Conversely, for $f\in S^{\overline{\sigma}}_{p,\infty}B$, we estimate the four contributions corresponding to $\LR{p}(\mathbb{R}^{d+1})$, $\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}$, $\tilde{L}^{p}_{t} \dot B^{\sigma_x}_{p,\infty}$, and $S^{\overline{\sigma}}_{p,\infty}\dot B$ separately. We start by noting that, due to $\sigma_t,\sigma_x>0$, the invariance of multiplier norms under dilation and the identities $\eta_l=\eta_l\tilde\psi_0$ for $l\le 0$ and $\varphi_j=\varphi_j\tilde\phi_0$ for $j\le 0$, where $\tilde\psi_0:=\psi_0+\psi_1$ and $\tilde\phi_0:=\phi_0+\phi_1$, we have
\begin{align*}
\sup_{l\le 0}2^{\sigma_t l}\|{\mathcal F}^{-1}_{t} \eta_l{\mathcal F}_{t} f\|_{\LR{p}_{t,x}}&\lesssim \|{\mathcal F}^{-1}_{t} \tilde\psi_0{\mathcal F}_{t} f\|_{\LR{p}_{t,x}}, \\
\sup_{j\le 0}2^{\sigma_x j}\|{\mathcal F}^{-1}_{x} \varphi_j{\mathcal F}_{x} f\|_{\LR{p}_{t,x}}&\lesssim \|{\mathcal F}^{-1}_{x} \tilde\phi_0{\mathcal F}_{x} f\|_{\LR{p}_{t,x}}.
\end{align*}
Furthermore we use the fact that for $\sigma>0$ one has the estimate $\sum_{n\ge 0}|a_n|\lesssim \sup_{n\ge 0} 2^{\sigma n}|a_n|$ for any sequence $(a_n)\subset \mathbb{R}$ with a constant depending on $\sigma$.
With this, we obtain
\begin{align*}
\|f\|_{\LR{p}_{t,x}}&\le \sum_{l,j\ge 0} \|{\mathcal F}^{-1}_{t,x}\psi_l \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} \lesssim \sup_{l,j\ge 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\psi_l \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}}\le \|f\|_{S^{\overline{\sigma}}_{p,\infty}B}.
\end{align*}
Next, we compute
\begin{align*}
\|f\|_{\tilde{L}^{p}_{x}\dot B^{\sigma_t}_{p,\infty}}&\le \sup_{l\le 0} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t}\eta_l {\mathcal F}_{t} f\|_{\LR{p}_{t,x}} + \sup_{l> 0} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t}\psi_l {\mathcal F}_{t} f\|_{\LR{p}_{t,x}} \\
&\lesssim \|{\mathcal F}^{-1}_{t}\tilde\psi_0 {\mathcal F}_{t} f\|_{\LR{p}_{t,x}} + \sup_{l> 0} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t}\psi_l {\mathcal F}_{t} f\|_{\LR{p}_{t,x}} \\
&\le \sum_{j\ge 0} \|{\mathcal F}^{-1}_{t,x}\tilde\psi_0 \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} + \sup_{l>0}\sum_{j\ge 0} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t,x}\psi_l \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} \\
&\lesssim \sup_{j\ge 0} 2^{\sigma_x j} \|{\mathcal F}^{-1}_{t,x}\tilde\psi_0 \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} + \sup_{l>0,j\ge 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\psi_l \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} \lesssim \|f\|_{S^{\overline{\sigma}}_{p,\infty}B}.
\end{align*}
By analogy, $\|f\|_{\tilde{L}^{p}_{t}\dot B^{\sigma_x}_{p,\infty}}\lesssim \|f\|_{S^{\overline{\sigma}}_{p,\infty}B}$. Hence, it remains to control $\|f\|_{S^{\overline{\sigma}}_{p,\infty}\dot B}$. We split this term into the four contributions
\begin{align*}
\|f\|_{S^{\overline{\sigma}}_{p,\infty}\dot B}&=\sup_{l,j> 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\psi_l \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} + \sup_{l>0,j\le 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\psi_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} \\
&\quad + \sup_{l\le 0, j>0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\eta_l \phi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}} + \sup_{l,j\le 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\eta_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}}.
\end{align*}
The first contribution is immediately estimated by $\|f\|_{S^{\overline{\sigma}}_{p,\infty}B}$. For the second contribution, we have
\begin{align*}
\sup_{l>0,j\le 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\psi_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}}\lesssim \sup_{l>0} 2^{\sigma_t l}\|{\mathcal F}^{-1}_{t,x}\psi_l \tilde\phi_0{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}}\le \|f\|_{S^{\overline{\sigma}}_{p,\infty}B},
\end{align*}
and a similar estimate holds for the third contribution. For the fourth contribution, we have
\begin{align*}
\sup_{l,j\le 0} 2^{\sigma_t l}2^{\sigma_x j}\|{\mathcal F}^{-1}_{t,x}\eta_l \varphi_j{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}}\lesssim \|{\mathcal F}^{-1}_{t,x}\tilde\psi_0 \tilde\phi_0{\mathcal F}_{t,x} f\|_{\LR{p}_{t,x}}.
\end{align*}
This concludes the proof.
\end{proof}
\section{Optimality of Estimates via Scaling}\label{Scale}
It is well known that in the linear case $m=1$ one has estimates of the form
\begin{align}\label{scale_lin_lp_est}
\norm{u}_{\LR{1}_t \DSR{\sigma_x}{1}_x}\le c(\sigma_x) \vpp{\norm{u_0}_{\LR{1}_x} + \norm{S}_{\LR{1}_{t,x}}},
\end{align}
for all $\sigma_x< 2$. In the case $m>1$, such an estimate fails for \emph{every} $\sigma_x>0$. Intuitively, this is due to the linear nature of \eqref{scale_lin_lp_est} (observe that the integrability exponent is the same on both sides of the inequality), which is not compatible with the nonlinear equation \eqref{pme_sys}. We make this intuition precise in the following lemma, which is based on a scaling argument.
\begin{lem}\label{lem:scaling}
Let $T>0$, $m>1$, $\mu\in [1,m]$, $p\in [1,\infty)$ and $\sigma_t,\sigma_x \ge 0$. Assume that there is a constant $c=c(m,\mu,p,\sigma_t,\sigma_x)>0$ such that
\begin{align}\label{scale_pme_lp_est}
\norm{u^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p\le c \vpp{\norm{u_0}_{\LR{1}(\mathbb{R}^d)} + \norm{S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}}
\end{align}
for all solutions $u$ to \eqref{pme_sys}. Then
\begin{align}\label{scaling_const}
\begin{split}
p &\le \frac{m}{\mu+(m-1)\sigma_t}\le \frac{m}{\mu}, \\
\sigma_t&\le \frac{m-\mu p}{p(m-1)}\le \frac{m-\mu}{m-1}, \quad \text{and }\\
\sigma_x &= \frac{\mu p -1}{p}\frac{2}{m-1}\le \frac{2(\mu-\sigma_t)}{m}\le \frac{2\mu}{m}.
\end{split}
\end{align}
In particular, if $\sigma_t=\frac{m-\mu}{m-1}$, then $p=1$ and $\sigma_x= \frac{2(\mu-1)}{m-1}$.
\end{lem}
\begin{proof}
For constants $\eta,\gamma \ge 1$ with $\eta^{m-1}=\gamma$ and a fixed triple $(u,u_0,S)$ such that $u$ satisfies \eqref{pme_sys} with initial condition $u_0$ and forcing $S$, we consider the rescaled quantities $(\tildeu, \tilde u_0, \tilde S)$ defined via
\begin{align*}
\tildeu(t,x):= \eta u(\gamma t, x), \quad \tilde u_0(x):= \eta u_0 (x), \quad \tilde S(t,x):= \eta^m S(\gamma t, x),
\end{align*}
where we have tacitly extended $S$ on $(T,\gamma T)$ by $0$.
Then $\tildeu$ satisfies \eqref{pme_sys} with $\tilde u_0\in \LR{1}(\mathbb{R}^d)$ and $\tilde S\in \LR{1}(0,T;\LR{1}(\mathbb{R}^d))$, so that \eqref{scale_pme_lp_est} gives
\begin{align}\label{rescaled_pme_lp_est_time}
\norm{\tildeu^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p\le c \vpp{\norm{\tilde u_0}_{\LR{1}(\mathbb{R}^d)} + \norm{\tilde S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}}.
\end{align}
We observe
\begin{align*}
\norm{\tildeu^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p =\eta^{\mu p} \gamma^{\sigma_t p -1} \norm{u^{[\mu]}}_{\DSR{\sigma_t}{p}(0,\gamma T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p
\end{align*}
as well as $\norm{\tilde u_0}_{\LR{1}(\mathbb{R}^d)} = \eta \norm{u_0}_{\LR{1}(\mathbb{R}^d)}$ and $\norm{\tilde S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}= \eta \norm{S}_{\LR{1}(0,\gamma T;\LR{1}(\mathbb{R}^d))}$. Thus, it follows from \eqref{rescaled_pme_lp_est_time} that
\begin{align}\label{scale_pme_lp_est_eta_time}
\begin{split}
&\norm{u^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p \\
&\qquad \le c \eta^{1- \mu p} \gamma^{1-\sigma_t p} \vpp{\norm{u_0}_{\LR{1}(\mathbb{R}^d)} + \norm{S}_{\LR{1}(0,\gamma T;\LR{1}(\mathbb{R}^d))}} \\
&\qquad = c \eta^{(m-1)(1-\sigma_t p) + 1-\mu p} \vpp{\norm{u_0}_{\LR{1}(\mathbb{R}^d)} + \norm{S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}}.
\end{split}
\end{align}
If $u_0$ or $S$ is non-trivial, then unless
\begin{align}\label{scale_exp_cond_time}
(m-1)(1-\sigma_t p) + 1-\mu p \geq 0,
\end{align}
this gives the contradiction $u=0$ by sending $\eta\to\infty$ (and consequently also $\gamma\to \infty$). Since $\sigma_t\ge 0$, \eqref{scale_exp_cond_time} gives $p\le \frac{m}{\mu+(m-1)\sigma_t}\le \frac{m}{\mu}$. By the same token, since $p\ge 1$, \eqref{scale_exp_cond_time} gives $\sigma_t \le \frac{m-\mu p}{p(m-1)} \le \frac{m-\mu}{m-1}$.
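As a sanity check outside the proof, the exponent bookkeeping of the time rescaling can be verified symbolically; the following sketch (all variable names are ours) uses sympy.

```python
import sympy as sp

# Time rescaling: eta^(m-1) = gamma. Check that the prefactor
# eta^(1-mu*p) * gamma^(1-sigma_t*p) collapses to the stated power of eta,
# and that the exponent vanishes at the critical integrability exponent.
m, mu, p, sigma_t, eta = sp.symbols('m mu p sigma_t eta', positive=True)
gamma = eta**(m - 1)

prefactor = eta**(1 - mu*p) * gamma**(1 - sigma_t*p)
exponent = (m - 1)*(1 - sigma_t*p) + 1 - mu*p
assert sp.simplify(sp.expand_log(sp.log(prefactor/eta**exponent), force=True)) == 0

# exponent = 0 exactly at p = m/(mu + (m-1)*sigma_t), the stated bound on p.
p_crit = sp.solve(sp.Eq(exponent, 0), p)[0]
assert sp.simplify(p_crit - m/(mu + (m - 1)*sigma_t)) == 0
```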
Next, we rescale in space. More precisely, for constants $\eta,\gamma>0$ with $\eta^{1-m}=\gamma^2$ and a fixed triple $(u,u_0,S)$ as above, we consider the rescaled quantities $(\tildeu, \tilde u_0, \tilde S)$ defined via
\begin{align*}
\tildeu(t,x):= \eta u(t, \gamma x), \quad \tilde u_0(x):= \eta u_0 (\gamma x), \quad \tilde S(t,x):= \eta S(t,\gamma x).
\end{align*}
Then $\tildeu$ satisfies \eqref{pme_sys} with $\tilde u_0\in \LR{1}(\mathbb{R}^d)$ and $\tilde S\in \LR{1}(0,T;\LR{1}(\mathbb{R}^d))$, so that \eqref{scale_pme_lp_est} gives
\begin{align}\label{rescaled_pme_lp_est}
\norm{\tildeu^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p\le c \vpp{\norm{\tilde u_0}_{\LR{1}(\mathbb{R}^d)} + \norm{\tilde S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}}.
\end{align}
We have
\begin{align*}
\norm{\tildeu^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p =\eta^{\mu p} \gamma^{\sigma_x p -d} \norm{u^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p
\end{align*}
as well as $\norm{\tilde u_0}_{\LR{1}(\mathbb{R}^d)} = \eta \gamma^{-d} \norm{u_0}_{\LR{1}(\mathbb{R}^d)}$ and $\norm{\tilde S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}= \eta \gamma^{-d} \norm{S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}$. Thus, it follows from \eqref{rescaled_pme_lp_est} and the relation $\eta^{1-m}=\gamma^2$ that
\begin{align}\label{scale_pme_lp_est_eta}
\begin{split}
&\norm{u^{[\mu]}}_{\DSR{\sigma_t}{p}(0,T;\DSR{\sigma_x}{p}(\mathbb{R}^d))}^p \\
&\qquad \le c \eta^{1- \mu p} \gamma^{-\sigma_x p} \vpp{\norm{u_0}_{\LR{1}(\mathbb{R}^d)} + \norm{S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}} \\
&\qquad = c \eta^{\frac{\sigma_x p(m-1)}{2} + 1-\mu p} \vpp{\norm{u_0}_{\LR{1}(\mathbb{R}^d)} + \norm{S}_{\LR{1}(0,T;\LR{1}(\mathbb{R}^d))}}.
\end{split}
\end{align}
If $u_0$ or $S$ is non-trivial, then unless
\begin{align}\label{scale_exp_cond}
\frac{\sigma_x p (m-1)}{2} + 1- \mu p = 0 \Leftrightarrow \sigma_x = \frac{\mu p-1}{p} \frac{2}{m-1},
\end{align}
this gives the contradiction $u=0$ by sending $\eta\to 0$ or $\eta\to\infty$ (and consequently $\gamma\to \infty$ or $\gamma\to 0$, respectively). Plugging the restrictions on $p$ and $\sigma_t$ into \eqref{scale_exp_cond}, we obtain the result.
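The analogous bookkeeping for the space rescaling can also be checked symbolically; the sketch below (our own notation) verifies both the combined power of $\eta$ and the resulting formula for $\sigma_x$.

```python
import sympy as sp

# Space rescaling: gamma^2 = eta^(1-m), i.e. gamma = eta^((1-m)/2).
m, mu, p, sigma_x, eta = sp.symbols('m mu p sigma_x eta', positive=True)
gamma = eta**((1 - m)/2)

# Prefactor eta^(1-mu*p) * gamma^(-sigma_x*p) versus the stated exponent.
prefactor = eta**(1 - mu*p) * gamma**(-sigma_x*p)
exponent = sigma_x*p*(m - 1)/2 + 1 - mu*p
assert sp.simplify(sp.expand_log(sp.log(prefactor/eta**exponent), force=True)) == 0

# Vanishing of the exponent forces sigma_x = (mu*p - 1)/p * 2/(m - 1).
sigma_crit = sp.solve(sp.Eq(exponent, 0), sigma_x)[0]
assert sp.simplify(sigma_crit - (mu*p - 1)/p * 2/(m - 1)) == 0
```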
\end{proof}
\begin{rem}
If one sets $\mu=1$, $p=1$ and $\sigma_t=0$, Lemma \ref{lem:scaling} tells us that $\sigma_x$ cannot be positive, which is what we claimed following \eqref{scale_lin_lp_est}. Moreover, we emphasize that Lemma \ref{lem:scaling} shows that in the case of the whole space, the regularity exponent $\sigma_x\in [\frac{2(\mu-1)}{m-1},\frac{2\mu}{m}]$ is in one-to-one correspondence with the integrability exponent $p\in[1,\frac{m}{\mu}]$ via
\begin{align*}
\sigma_x= \frac{\mu p -1}{p}\frac{2}{m-1}, \quad \text{and } \quad p= \frac{2}{2\mu-\sigma_x(m-1)}.
\end{align*}
\end{rem}
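The one-to-one correspondence stated in the remark can be confirmed with a short symbolic computation (our own sketch, not part of the argument):

```python
import sympy as sp

m, mu, p = sp.symbols('m mu p', positive=True)

# sigma_x as a function of p, and the claimed inverse.
sigma_x = (mu*p - 1)/p * 2/(m - 1)
p_back = 2/(2*mu - sigma_x*(m - 1))
assert sp.simplify(p_back - p) == 0

# Endpoints: p = 1 and p = m/mu give the endpoints of the sigma_x range.
assert sp.simplify(sigma_x.subs(p, 1) - 2*(mu - 1)/(m - 1)) == 0
assert sp.simplify(sigma_x.subs(p, m/mu) - 2*mu/m) == 0
```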
\subsection{The Barenblatt Solution}\label{Barenblatt}
Consider the Barenblatt solution
\begin{align*}
u_{BB}(t,x):=t^{-\alpha}(C-k|xt^{-\beta}|^2)_+^{\frac{1}{m-1}},
\end{align*}
where $m>1$, $\alpha:=\frac{d}{d(m-1)+2}$, $k=\frac{\alpha(m-1)}{2md}$, $\beta=\frac{\alpha}{d}$, and $C>0$ is a free constant. Then, for $\mu\in [1,m]$,
$\displaystyle u_{BB}^{[\mu]}\in L^{\frac{m}{\mu}}(0,T;\DSR{s}{\frac{m}{\mu}}(\mathbb{R}^d))$
implies $s<\frac{2\mu}{m}$.
\begin{proof}
With $F(x):=(C-k|x|^2)_+^{\frac{\mu}{m-1}}$ we have $u_{BB}^{[\mu]}(t,x)=t^{-\alpha\mu}F(xt^{-\beta})$. We next observe that, for $s\in (0,1)$ and each $t> 0$,
%
\begin{align*}
\norm{u_{BB}^{[\mu]}(t,\cdot)}_{\DSR{s}{\frac{m}{\mu}}(\mathbb{R}^d)}^{\frac{m}{\mu}} & =\int_{\mathbb{R}^d\times\mathbb{R}^d} \frac{|u_{BB}^{[\mu]}(t,x)-u_{BB}^{[\mu]}(t,y)|^{\frac{m}{\mu}}}{|x-y|^{\frac{sm}{\mu}+d}}\,\mathrm{d} x \,\mathrm{d} y \\
&=t^{-\alpha m - \beta(\frac{sm}{\mu}+d) + 2d\beta} \norm{F}_{\DSR{s}{\frac{m}{\mu}}(\mathbb{R}^d)}^{\frac{m}{\mu}}.
\end{align*}
%
Hence,
%
\begin{align*}
\norm{u_{BB}^{[\mu]}}_{L^{\frac{m}{\mu}}(0,T;\DSR{s}{\frac{m}{\mu}}(\mathbb{R}^d))}^{\frac{m}{\mu}} =\norm{t^{-\alpha m - \beta(\frac{sm}{\mu}+d) + 2d\beta}}_{L^1(0,T)} \norm{F}_{\DSR{s}{\frac{m}{\mu}}(\mathbb{R}^d)}^{\frac{m}{\mu}},
\end{align*}
%
which is finite if and only if
%
\begin{align*}
-\alpha m - \beta(\frac{sm}{\mu}+d) + 2d\beta > -1 \quad \text{and } \quad F\in \DSR{s}{\frac{m}{\mu}}(\mathbb{R}^d).
\end{align*}
%
Hence, necessarily
%
\begin{align*}
m + \frac1d(\frac{sm}{\mu}+d) - 2 < \frac{1}{\alpha} = \frac{d(m-1)+2}{d},
\end{align*}
%
which is equivalent to $s<\frac{2\mu}{m}$. In the case $s\in (1,2)$, we observe that $\partial_{x_i}u_{BB}^{[\mu]}(t,x)=t^{-\alpha\mu-\beta}\partial_{x_i}F(xt^{-\beta})$, so that analogous arguments may be applied.
\end{proof}
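The equivalence between the integrability condition on the time exponent and the bound $s<\frac{2\mu}{m}$ rests on an algebraic identity, which we verify symbolically below (our own sketch).

```python
import sympy as sp

m, mu, d, s = sp.symbols('m mu d s', positive=True)

# Barenblatt exponents: alpha = d/(d(m-1)+2), beta = alpha/d.
alpha = d/(d*(m - 1) + 2)
beta = alpha/d

# Time exponent E of the Gagliardo seminorm of u_BB^[mu];
# integrability near t = 0 requires E > -1.
E = -alpha*m - beta*(s*m/mu + d) + 2*d*beta

# Factorization E + 1 = (alpha*m/(mu*d)) * (2*mu/m - s): since the prefactor
# is positive, E > -1 is equivalent to s < 2*mu/m.
assert sp.simplify(E + 1 - alpha*m/(mu*d)*(2*mu/m - s)) == 0
```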
\section{Averaging Lemma Approach}\label{AvLem}
In \cite{Ges17}, an Averaging Lemma was introduced that can be applied directly to the porous medium equation \eqref{pme_sys} to obtain estimates on the spatial regularity of $u$; so far, however, no corresponding estimates for powers $u^{\mu}$ of the solution or for its time regularity have been obtained. In this section, we provide an Averaging Lemma that gives a comprehensive answer to both of these questions. To this end, we recall the definition of the anisotropic and isotropic truncation properties from \cite{Ges17}, which extend the truncation property introduced in \cite[Definition 2.1]{TaT07}.
\begin{defn}\label{def:tuncation_prop}\mbox{ }
\begin{enumerate}
\item Let $m$ be a complex-valued Fourier multiplier. We say that $m$ has the truncation property if, for any compactly supported bump function $\psi$ on $\mathbb{C}$ and any $1\le p<\infty$, the multiplier with symbol $\psi(\frac{m(\xi)}{\d})$ is an $L^{p}$-multiplier as well as an ${\mathcal M}_{TV}$-multiplier uniformly in $\delta>0$, that is, its $L^{p}$-multiplier norm (resp.\ ${\mathcal M}_{TV}$-multiplier norm) depends only on the support and the $C^{l}$ size of $\psi$ (for some large $l$ that may depend on $m$) but is otherwise independent of $\delta$.
\item Let $m:\mathbb{R}_{\xi}^{d}\times\mathbb{R}_{v}\to\mathbb{C}$ be a Carath\'{e}odory function such that $m(\cdot,v)$ is radial for all $v\in\mathbb{R}$. Then $m$ is said to satisfy the isotropic truncation property if for every bump function $\psi$ supported on a ball in $\mathbb{C}$, every bump function $\varphi$ supported in $\{\xi\in\mathbb{C}:\,\frac12\le|\xi|\le2\}$ and every $1<p<\infty$
\[
M_{\psi,J}f(x,v):=\calf_{x}^{-1}\varphi\left(\frac{|\xi|^{2}}{J^{2}}\right)\psi\left(\frac{m(\xi,v)}{\d}\right)\calf_{x}f(x)
\]
is an $L_{x}^{p}$-multiplier for all $v\in\mathbb{R}$, $J=2^{j},\,j\in\mathbb{Z}$ and, for all $r\ge1$,
\begin{align*}
\Big\|\|M_{\psi,J}\|_{{\mathcal M}^{p}}\Big\|_{L_{v}^{r}} & \lesssim|\Omega_{m}(J,\d)|^{\frac{1}{r}},
\end{align*}
where
$\displaystyle
\Omega_{m}(J,\d):=\{v\in\mathbb{R}:\,|\frac{m(J,v)}{\d}|\in\supp\psi\}$.
Here we use an abuse of notation $|\frac{m(J,v)}{\d}|:=\sup\setc{|\frac{m(\xi,v)}{\d}|}{\frac{|\xi|^2}{J^2}\in\supp \varphi}$.
\end{enumerate}
\end{defn}
We recall that for $m(\xi,v):=|\xi|^2 |v|$, the anisotropic truncation property is satisfied uniformly in $v$ by Example A.2, and the isotropic truncation property is satisfied by Example 3.2 in \cite{Ges17}, albeit only in the case $J\ge 1$. However, the proof given there applies without any changes and yields the full assertion for general $J=2^j$, $j\in\mathbb{Z}$.
\begin{lem}\label{lem:av}
Assume $m\in \vpp{1,\infty}$, $\gamma\in\vpp{-\infty,m}$, $\mu\in [1,m+1-\gamma)$ and let $f\in \LR{\beta}_{t,x,v}$, where $\beta'=\frac{1}{\rho}$ with $\rho\in(0,1)$, be a solution to
%
\begin{align}\label{av_eqn}
\call(\partial_t,\nabla_x,v)f(t,x,v)=g_0(t,x,v)+\partial_v g_1(t,x,v) \ \text{on } \ \mathbb{R}_t \times \mathbb{R}^d_x \times \mathbb{R}_v.
\end{align}
%
Here, the differential operator $\call(\partial_t,\nabla_x,v)$ is given in terms of its symbol
%
\begin{align}\label{av_op}
\call(i\tau,i\xi,v) := i\tau + |v|^{m-1}|\xi|^2,
\end{align}
%
and $g_i$ are Radon measures satisfying
\begin{align*}
|g_{0}|(t,x,v)|v|^{1-\gamma}+|g_{1}|(t,x,v)|v|^{-\gamma}\in
{\mathcal M}_{TV}(\mathbb{R}_{t}\times\mathbb{R}_{x}^{d}\times\mathbb{R}_{v}).
\end{align*}
Suppose $s\in (\frac{\mu-2+\gamma}{m-1},1]\cap [0,1]$. Then $\overline{f}\in S^{\overline{\kappa}}_{p,\infty,(\infty)}\dot B$, where\newline $\overline{f}(t,x):=\int f(t,x,v) |v|^{\mu-1} \,\mathrm{d} v$, $\overline\kappa:=(\kappa_t,\kappa_x)$ and
\begin{align}\label{lem:av_constants}
\begin{split}
&p:=\frac{s(m-1)+1-\gamma+\rho}{\rho\mu+(1-\rho)(s(m-1)+1-\gamma)}, \\
&\kappa_t:= \frac{(1-s)(\mu-1+\rho)}{s(m-1)+1-\gamma+\rho},
\qquad \kappa_x:= \frac{2s(\mu-1+\rho)}{s(m-1)+1-\gamma+\rho}.
\end{split}
\end{align}
Moreover, we have the estimate
\begin{align}\label{lem:av_est1}
&\norm{\overline{f}}_{S^{\overline{\kappa}}_{p,\infty,(\infty)}\dot B}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}}.
\end{align}
%
If additionally $\overline{f}\in \LR{r}_{t,x}$, $p\ne r\in[1,\infty]$, then for all $q\in\vpp{\min\{p,r\},\max\{p,r\}}$ it holds $\overline{f}\in S^{\vartheta\overline{\kappa}}_{q,\infty}\dot B$, where $\vartheta\in\vpp{0,1}$ is such that
\begin{align*}
\frac{1}{q}=\frac{1-\vartheta}{r}+\frac{\vartheta}{p}.
\end{align*}
In this case we have
%
\begin{align}\label{lem:av_est2}
&\norm{\overline{f}}_{S^{\vartheta\overline{\kappa}}_{q,\infty}\dot B}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}.
\end{align}
%
Finally, if $s=1$ and consequently $\kappa_t=0$, then \eqref{lem:av_est2} remains true if we replace the space $S^{\vartheta\overline{\kappa}}_{q,\infty}\dot B=S^{(0,\vartheta\kappa_x)}_{q,\infty}\dot B$ by $\tilde{L}^{q}_{t}\dot B^{\vartheta\kappa_x}_{q,\infty}$.
\end{lem}
%
\begin{rem}\label{rem:av}
Observe that for $\rho\in(\frac{m+1-\gamma-\mu}{m+1-\gamma},1)$ one may prescribe a specific integrability exponent. More precisely, given \[\tilde p\in [\frac{1-\gamma+\rho}{\rho\mu+(1-\rho)(1-\gamma)},\frac{m+1-\gamma}{\mu}]\cap (1,\frac{m+1-\gamma}{\mu}]\] choose
\begin{align*}
s:=\frac{\mu \tilde p \rho + \tilde p(1-\rho)(1-\gamma)-1+\gamma-\rho}{(m-1)(1-\tilde p(1-\rho))}\in (\frac{\mu-2+\gamma}{m-1},1]\cap [0,1].
\end{align*}
Then \eqref{lem:av_constants} reads $p=\tilde p$ as well as
\begin{align*}
\kappa_t&:= \frac{m+\rho-\gamma-\mu p \rho + p(1-\rho)(\gamma-m)}{p \rho}\frac{1}{m-1}, \\
\kappa_x&:= \frac{\mu p \rho +p(1-\rho)(1-\gamma)-1+\gamma-\rho}{p \rho}\frac{2}{m-1}.
\end{align*}
Observe that in the limiting case $\rho\to 1$ and $\gamma\to 1$, these orders of differentiability correspond to the ones found in \eqref{scaling_const}.
\end{rem}
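The consistency claim in the last sentence of the remark can be checked directly; in the sketch below (our own notation), we substitute $\rho=\gamma=1$ into the closed-form expressions for $\kappa_t$ and $\kappa_x$ and compare with the scaling exponents of \eqref{scaling_const}.

```python
import sympy as sp

m, mu, p = sp.symbols('m mu p', positive=True)
# Limiting case rho -> 1, gamma -> 1 of the remark's formulas.
gamma, rho = sp.Integer(1), sp.Integer(1)

kappa_t = (m + rho - gamma - mu*p*rho + p*(1 - rho)*(gamma - m))/(p*rho)/(m - 1)
kappa_x = (mu*p*rho + p*(1 - rho)*(1 - gamma) - 1 + gamma - rho)/(p*rho)*2/(m - 1)

# kappa_t matches the bound on sigma_t and kappa_x the formula for sigma_x
# from the scaling lemma.
assert sp.simplify(kappa_t - (m - mu*p)/(p*(m - 1))) == 0
assert sp.simplify(kappa_x - (mu*p - 1)/p * 2/(m - 1)) == 0
```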
\begin{proof}[Proof of Lemma \ref{lem:av}]
We first assume that $f$ is compactly supported with respect to the variable $v$. This assumption enters only qualitatively and never in quantitative form, so we can remove it again at the end of the proof.
Since we are interested in regularity in terms of homogeneous Besov spaces, we decompose $f$ into Littlewood-Paley blocks with respect to the $t$-variable and the $x$-variable. Let $\{\eta_l\}_{l\in\mathbb{Z}}$ be a partition of unity on $\mathbb{R}\setminus\set{0}$ and $\{\varphi_j\}_{j\in\mathbb{Z}}$ a partition of unity on $\mathbb{R}^d\setminus\set{0}$ as in Section \ref{FctSpc}. Then we define for $l,j\in\mathbb{Z}$
\begin{align*}
f_{l,j}:={\mathcal F}^{-1}_{t,x}[\eta_l\varphi_j{\mathcal F}_{t,x}f],
\end{align*}
where ${\mathcal F}_{t,x}{f}_{l,j}(\tau,\xi,v)$ is supported on frequencies $|\xi|\sim2^{j}$, $|\tau|\sim2^{l}$ for $l,j\in\mathbb{Z}$.
Similarly, we define the decompositions $g_{0,l,j}$ and $g_{1,l,j}$ of $g_0$ and $g_1$, respectively.
We consider a micro-local decomposition of $f_{l,j}$ connected to the degeneracy of the operator ${\mathcal L}(\partial_{t},\nabla_{x},v)$. Let $\psi_{0}\in\CR \infty_c(\mathbb{R})$ be a smooth function supported in $B_{2}(0)$ with $\psi_0=1$ on $B_1(0)$, and set $\psi_{1}:=1-\psi_0$.
For $\d>0$ to be specified later we write
\begin{align*}
f_{l,j} & =\calf_{x}^{-1}\psi_{0}\left(\frac{|v||\xi|^2}{\d}\right)\calf_{x} f_{l,j}+\calf_{x}^{-1}\psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\calf_{x}f_{l,j} =:f_{l,j}^{0}+f_{l,j}^{1}.
\end{align*}
Since $f$ is a solution to \eqref{av_eqn}, we have
\begin{equation}\label{eq:eqn_fK-3}
\begin{split}
\calf_{t,x}^{-1}&{\mathcal L}(i\tau,i\xi,v)\calf_{t,x}f_{l,j}^{1}(t,x,v)\\
&=\calf_{x}^{-1}\psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\calf_{x}\left(g_{0,l,j}(t,x,v)+\partial_{v}g_{1,l,j}(t,x,v)\right)
\end{split}
\end{equation}
and thus
\begin{align}
f_{l,j}^{1}(t,x,v)= & \calf_{t,x}^{-1} \psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\frac{1}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}g_{0,l,j}(t,x,v)\label{eq:eqn_fK-1-1}\\
& + \calf_{t,x}^{-1} \psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\frac{1}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}\partial_{v}g_{1,l,j}(t,x,v)\nonumber \\
=: & f_{l,j}^{2}(t,x,v)+f_{l,j}^{3}(t,x,v).\nonumber
\end{align}
In conclusion, we have arrived at the decomposition
\[
\begin{split}
\bar{f}_{l,j}&:=\int f_{l,j}|v|^{\mu-1}\,\mathrm{d} v\\
& =\int f_{l,j}^{0}|v|^{\mu-1}\,\mathrm{d} v+\int f_{l,j}^{2}|v|^{\mu-1}\,\mathrm{d} v+\int f_{l,j}^{3}|v|^{\mu-1}\,\mathrm{d} v=:\bar{f}_{l,j}^{0}+\bar{f}_{l,j}^{2}+\bar{f}_{l,j}^{3}.
\end{split}
\]
We estimate the regularity of these three contributions separately.
\smallskip\noindent
\textit{Step 1:} $f^0$.
We note that we have the estimate $\|{\mathcal F}^{-1}_t \eta_l {\mathcal F}_t f\|_{\LR{\beta}_{t,x}}\lesssim \|f\|_{\LR{\beta}_{t,x}}$ with a constant independent of $l$, since $\|\eta_l\|_{{\mathcal M}^\beta}=\|\eta_0\|_{{\mathcal M}^\beta}<\infty$. Let $l,j\in\mathbb{Z}$ be arbitrary but fixed. Then $|v|\le 2\cdot 2^{-2j}\d$ on the support of $\varphi(2^{-j}\xi)\psi_{0}\left(\frac{|v||\xi|^2}{\d}\right)$, so that $|\Omega_m(2^j,\delta)|\lesssim |[-2\cdot 2^{-2j}\d,2\cdot 2^{-2j}\d]|\lesssim 2^{-2j}\d$. Hence, by the isotropic truncation property and Minkowski's and H\"older's inequalities, it holds that
\begin{align*}
& \|\int f_{l,j}^{0}|v|^{\mu-1}\,\mathrm{d} v\|_{L_{t,x}^{\beta}} \\
& \qquad = \|\int \calf_{x}^{-1}\psi_{0}\left(\frac{|v||\xi|^2}{\d}\right) |v|^{\mu-1} \calf_{x}f_{l,j}\,\mathrm{d} v\|_{L_{t,x}^{\beta}} \\
& \qquad \lesssim \int \|\calf_{x}^{-1}\psi_{0}\left(\frac{|v||\xi|^2}{\d}\right) |v|^{\mu-1} \calf_{x}f_{l,j}\|_{L_{t,x}^{\beta}}\,\mathrm{d} v \\
&\qquad \lesssim \left(\frac{\d}{2^{2j}}\right)^{\mu-1} \int\|\calf_{x}^{-1}\psi_{0}\left(\frac{|v||\xi|^2}{\d}\right) \calf_{x}f_{l,j}\|_{L_{t,x}^{\beta}}\,\mathrm{d} v \\
&\qquad \lesssim \left(\frac{\d}{2^{2j}}\right)^{\mu-1} \int \|M_{\psi_0,2^{j}}\|_{{\mathcal M}^\beta}\|f\|_{L_{t,x}^{\beta}}\,\mathrm{d} v \\
&\qquad \le \left(\frac{\d}{2^{2j}}\right)^{\mu-1} \left\| \|M_{\psi_0,2^{j}}\|_{{\mathcal M}^\beta}\right\|_{\LR{\beta'}_{v}}\|f\|_{L_{t,x,v}^{\beta}} \\
&\qquad \lesssim \left(\frac{\d}{2^{2j}}\right)^{\mu-1} |\Omega_m(2^j,\delta)|^{\frac{1}{\beta'}}\|f\|_{L_{t,x,v}^{\beta}}
\lesssim \left(\frac{\d}{2^{2j}}\right)^{\mu-1+\rho}\|f\|_{L_{t,x,v}^{\beta}},
\end{align*}
where we have used $\beta'=\frac{1}{\rho}$.
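The exponent combination at the end of Step 1 is elementary but easy to get wrong; the following sketch (our own check, not part of the proof) verifies it with sympy.

```python
import sympy as sp

mu, rho, delta, j = sp.symbols('mu rho delta j', positive=True)

# |Omega_m(2^j, delta)| is bounded by a multiple of 2^(-2j)*delta, and
# 1/beta' = rho, so the two factors combine into one power.
omega = 2**(-2*j)*delta
combined = (delta/2**(2*j))**(mu - 1) * omega**rho
stated = (delta/2**(2*j))**(mu - 1 + rho)
assert sp.simplify(sp.expand_log(sp.log(combined/stated), force=True)) == 0
```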
\smallskip\noindent
\textit{Step 2:} $f^2$.
Let $l,j\in\mathbb{Z}$ be arbitrary but fixed. Since $s\in [0,1]$, we clearly have
\begin{align*}
|\tau|^{1-s}|v|^{s(m-1)}|\xi|^{2s}\le |{\mathcal L}(i\tau,i\xi,v)|.
\end{align*}
Moreover, by virtue of $s>\frac{\mu-2+\gamma}{m-1}$, we have on the support of $\eta_l\varphi_j\psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)$ (so that $|\tau|\sim 2^{l}$, $|\xi|\sim 2^{j}$, and $|v|\gtrsim 2^{-2j}\d$)
\begin{align*}
\frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}&\lesssim \frac{|v|^{\mu-2+\gamma}}{|\tau|^{1-s}|v|^{s(m-1)}|\xi|^{2s}}\\
& \lesssim \frac{\left(2^{-2j}\d\right)^{\mu-2+\gamma-s(m-1)}}{2^{l(1-s)}2^{2js}} =\frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}.
\end{align*}
Hence, by Theorem \ref{thm:FM} and Lemma \ref{lem:FM}, $\frac{|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}$ acts on the support of $\eta_l\varphi_j\psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)$ as a constant multiplier of order $\frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}$. Consequently, by the anisotropic truncation property
\begin{align*}
& \|\int f_{l,j}^{2}|v|^{\mu-1}\,\mathrm{d} v\|_{L_{t,x}^{1}}\\
& \qquad = \|\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\frac{|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}|v|^{1-\gamma}g_{0,l,j}\,\mathrm{d} v\|_{L_{t,x}^{1}}\\
& \qquad \lesssim \frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}.
\end{align*}
Here, we have used that with $\psi_0\left(\frac{|v||\xi|^2}{\d}\right)$ also $\psi_1\left(\frac{|v||\xi|^2}{\d}\right)=1-\psi_0\left(\frac{|v||\xi|^2}{\d}\right)$ is a bounded ${\mathcal M}_{TV}$-multiplier independent of $\d>0$.
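The multiplier order computed in Step 2 involves a cancellation of the powers of $2^{2j}$ which we double-check symbolically below (our own sketch).

```python
import sympy as sp

m, mu, gamma, s, delta, j, l = sp.symbols('m mu gamma s delta j l', positive=True)

# On the support, |v| >= 2^(-2j)*delta and the exponent mu-2+gamma-s(m-1)
# is negative, so |v|^(mu-2+gamma-s(m-1)) is maximal at the lower endpoint.
bound = (2**(-2*j)*delta)**(mu - 2 + gamma - s*(m - 1)) / (2**(l*(1 - s))*2**(2*j*s))
stated = 2**(2*j*(s*(m - 2) - mu + 2 - gamma)) / (delta**(s*(m - 1) - mu + 2 - gamma)*2**(l*(1 - s)))

# Both expressions agree, confirming the stated multiplier order.
assert sp.simplify(sp.expand_log(sp.log(bound/stated), force=True)) == 0
```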
\smallskip\noindent
\textit{Step 3:} $f^{3}$.
Let $l,j\in\mathbb{Z}$ be arbitrary but fixed.
We observe (recall ${\mathcal L}(i\tau,i\xi,v) = i\tau + |v|^{m-1}|\xi|^2$)
\begin{align*}
\int f_{l,j}^{3}&|v|^{\mu-1}\,\mathrm{d} v \\
& =-\int\calf_{t,x}^{-1} \psi_{1}'\left(\frac{|v||\xi|^2}{\d}\right)\frac{\sgn(v)|\xi|^2}{\d}\frac{|v|^{\mu-1}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}g_{1,l,j}\,\mathrm{d} v \\
& -(\mu-1)\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\frac{\sgn(v)|v|^{\mu-2}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}g_{1,l,j}\,\mathrm{d} v \\
& +\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\frac{|v|^{\mu-1} \partial_v{\mathcal L}(i\tau,i\xi,v)}{{\mathcal L}(i\tau,i\xi,v)^2}\calf_{t,x}g_{1,l,j}\,\mathrm{d} v \\
&=-\int\calf_{t,x}^{-1} \psi_{1}'\left(\frac{|v||\xi|^2}{\d}\right)\frac{|v||\xi|^2}{\d}\frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}|v|^{-\gamma}g_{1,l,j}\,\mathrm{d} v \\
& -(\mu-1)\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}|v|^{-\gamma}g_{1,l,j}\,\mathrm{d} v \\
& +(m-1)\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)\frac{\sgn(v)|v|^{\mu +m-3+\gamma}|\xi|^2}{{\mathcal L}(i\tau,i\xi,v)^2}\calf_{t,x}|v|^{-\gamma}g_{1,l,j}\,\mathrm{d} v.
\end{align*}
Observe that $\psi_1'$ is supported on an annulus. Therefore, we have as before $|\tau|\sim 2^{l}$, $|\xi|\sim 2^{j}$ and $|v|\gtrsim 2^{-2j}\d$ on the support of $\eta_l\varphi_j\psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)$, and additionally also $|v|\sim 2^{-2j}\d$ on the support of $\eta_l\varphi_j\psi_{1}'\left(\frac{|v||\xi|^2}{\d}\right)$. This last observation allows us to estimate the expression $\frac{|v||\xi|^2}{\d}$ appearing in the first integral on the right hand side by
\begin{align*}
\frac{|v||\xi|^2}{\d}\lesssim 1.
\end{align*}
As in Step 2, we obtain
\begin{align*}
\frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}&\lesssim \frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}},
\end{align*}
and, similarly,
\begin{align*}
\frac{|v|^{\mu+m-3+\gamma}|\xi|^2}{|{\mathcal L}(i\tau,i\xi,v)|^2}&=\frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}\frac{|v|^{m-1}|\xi|^2}{|{\mathcal L}(i\tau,i\xi,v)|}\\
& \lesssim \frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}\lesssim \frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}.
\end{align*}
In virtue of these estimates, the expressions
\begin{align*}
\frac{|v||\xi|^2}{\d}\frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}, \quad \frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}, \quad \frac{\sgn(v)|v|^{\mu +m-3+\gamma}|\xi|^2}{{\mathcal L}(i\tau,i\xi,v)^2}
\end{align*}
extend by Theorem \ref{thm:FM} and Lemma \ref{lem:FM} to constant multipliers of order $\frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}$ on supports of $\eta_l\varphi_j\psi_{1}'\left(\frac{|v||\xi|^2}{\d}\right)$ and $\eta_l\varphi_j\psi_{1}\left(\frac{|v||\xi|^2}{\d}\right)$, respectively. Hence, by the anisotropic truncation property, we obtain
\begin{align*}
\|\int f_{l,j}^{3}|v|^{\mu-1}\,\mathrm{d} v\|_{L_{t,x}^{1}} & \lesssim \frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}}.
\end{align*}
\smallskip\noindent
\textit{Step 4:} Conclusion.
We aim to conclude by real interpolation. We set, for $z>0$,
\begin{align*}
K(z,\overline{f}_{l,j}):=\inf\{ & \|\overline{f}_{l,j}^{1}\|_{\LR{1}_{t,x}}+z\|\overline{f}_{l,j}^{0}\|_{\LR{\beta}_{t,x}}:\overline{f}_{l,j}^{0}\in \LR{\beta}_{t,x}, \overline{f}_{l,j}^{1}\in \LR{1}_{t,x},\ \overline{f}_{l,j}=\overline{f}_{l,j}^{0}+\overline{f}_{l,j}^{1}\}.
\end{align*}
By the above estimates we obtain
\begin{align*}
K(z,\overline{f}_{l,j}) & \lesssim \frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}})\\
& \ \ +z\left(\frac{\delta}{2^{2j}}\right)^{\mu-1+\rho}\|f\|_{\LR{\beta}_{t,x,v}}.
\end{align*}
We now equilibrate the first and the second term on the right hand side, that is, we choose $\delta>0$ such that
\[
\frac{2^{2j(s(m-2)-\mu+2-\gamma)}}{\d^{s(m-1)-\mu+2-\gamma}2^{l(1-s)}}=z\left(\frac{\delta}{2^{2j}}\right)^{\mu-1+\rho},
\]
that is,
\begin{align*}
\d^{-a}c^{1-s}d^{-a+s}=z\d^{b}d^b
\end{align*}
with $a:=s(m-1)-\mu+2-\gamma$, $b:=\mu-1+\rho$, $c:=2^{-l}$ and $d:=2^{-2j}$.
This yields
\[
\d=z^{-\frac{1}{a+b}}c^{\frac{1-s}{a+b}}d^{\frac{s-a-b}{a+b}},
\]
and further
\[
\d^{-a}c^{1-s}d^{-a+s}=z^{\frac{a}{a+b}}c^{\frac{(1-s)b}{a+b}}d^{\frac{sb}{a+b}}.
\]
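The equilibration algebra above is straightforward but tedious; the following sketch (our own notation) verifies both that the chosen $\d$ solves the equilibration equation and that the common value has the stated form.

```python
import sympy as sp

a, b, c, d, s, z = sp.symbols('a b c d s z', positive=True)

# Claimed equilibrating delta.
delta = z**(-1/(a + b)) * c**((1 - s)/(a + b)) * d**((s - a - b)/(a + b))

lhs = delta**(-a) * c**(1 - s) * d**(-a + s)   # prefactor of the first term
rhs = z * delta**b * d**b                      # prefactor of the second term
common = z**(a/(a + b)) * c**((1 - s)*b/(a + b)) * d**(s*b/(a + b))

# The two prefactors coincide and equal the stated common value.
assert sp.simplify(sp.expand_log(sp.log(lhs/rhs), force=True)) == 0
assert sp.simplify(sp.expand_log(sp.log(lhs/common), force=True)) == 0
```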
Hence, with
\[
\t:=\frac{a}{a+b}=\frac{s(m-1)-\mu+2-\gamma}{s(m-1)+1-\gamma+\rho}
\]
we obtain
\begin{align*}
z^{-\t}K(z,\overline{f}_{l,j}) &\lesssim 2^{-l\frac{(1-s)(\mu-1+\rho)}{s(m-1)+1-\gamma+\rho}}2^{-2j\frac{s(\mu-1+\rho)}{s(m-1)+1-\gamma+\rho}}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}})\\
&= 2^{-l\kappa_t}2^{-j\kappa_x}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}}).
\end{align*}
Observe that $1-\t + \frac{\t}{\beta}=1-\t + \t(1-\rho)=1-\t\rho$, so that $(\LR{1}_{t,x},\LR{\beta}_{t,x})_{\t,\infty}=\LR{p,\infty}_{t,x}$ with $p=\frac{1}{1-\t\rho}=\frac{a+b}{a(1-\rho)+b}=\frac{s(m-1)+1-\gamma+\rho}{\rho\mu+(1-\rho)(s(m-1)+1-\gamma)}$. Hence, we may take the supremum over $z>0$ to obtain
\begin{equation}\label{av_est_lj}
\|\overline{f}_{l,j}\|_{\LR{p,\infty}_{t,x}} \lesssim 2^{-l\kappa_t}2^{-j\kappa_x}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}}).
\end{equation}
Multiplying by $2^{l\kappa_t}2^{j\kappa_x}$ and taking the supremum over $j,l\in \mathbb{Z}$ yields \eqref{lem:av_est1}.
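The interpolation exponent $\t$ and the resulting integrability exponent $p$ involve the same quantities $a$ and $b$ as the equilibration step; we verify the stated closed forms symbolically (our own sketch).

```python
import sympy as sp

m, mu, gamma, rho, s = sp.symbols('m mu gamma rho s', positive=True)

# a and b from the equilibration step.
a = s*(m - 1) - mu + 2 - gamma
b = mu - 1 + rho
theta = a/(a + b)

# theta matches the expression given before the interpolation argument.
assert sp.simplify(theta - (s*(m - 1) - mu + 2 - gamma)/(s*(m - 1) + 1 - gamma + rho)) == 0

# p = 1/(1 - theta*rho) equals the closed form stated in the lemma.
p = 1/(1 - theta*rho)
p_stated = (s*(m - 1) + 1 - gamma + rho)/(rho*mu + (1 - rho)*(s*(m - 1) + 1 - gamma))
assert sp.simplify(p - p_stated) == 0
```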
If we assume additionally $\overline{f}\in \LR{r}_{t,x}$, $r\neq p$, we choose for $q\in\vpp{\min\{p,r\},\max\{p,r\}}$ a corresponding $\vartheta\in\vpp{0,1}$ subject to $1/q=(1-\vartheta)/r+\vartheta/p$. Then using $(\LR{r}_{t,x},\LR{p,\infty}_{t,x})_{\vartheta,q}=\LR{q}_{t,x}$, together with \eqref{av_est_lj}, we obtain
\begin{align*}
\|\overline{f}_{l,j}\|_{\LR{q}_{t,x}} &\lesssim \|\overline{f}_{l,j}\|_{\LR{r}_{t,x}}^{1-\vartheta}\|\overline{f}_{l,j}\|_{\LR{p,\infty}_{t,x}}^\vartheta \nonumber\\
& \lesssim \|\overline{f}\|_{\LR{r}_{t,x}}^{1-\vartheta}2^{-l\vartheta\kappa_t}2^{-j\vartheta\kappa_x}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}})^{\vartheta} \nonumber \\
&\le 2^{-l\vartheta\kappa_t}2^{-j\vartheta\kappa_x}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}).
\end{align*}
Multiplying by $2^{l\vartheta\kappa_t}2^{j\vartheta\kappa_x}$ and taking the supremum over $j,l\in \mathbb{Z}$ yields \eqref{lem:av_est2}.
Finally we note that if $s=1$ and consequently $\kappa_t=0$, then the partition of unity $\{\eta_l\}_{l\in\mathbb{Z}}$ in the Fourier space connected to the time variable $t$ is not necessary. Hence, if we set $\alpha_\tau=0$ whenever Lemma \ref{lem:FM} is invoked and replace Theorem \ref{thm:FM} by its isotropic variant (cf.\@ Remark \ref{rem:FM}), we obtain
\begin{align*}
\|\overline{f}_{j}\|_{\LR{q}_{t,x}} &\lesssim 2^{-j\vartheta\kappa_x}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}),
\end{align*}
which shows $\overline{f}\in \tilde{L}^q_t\dot B^{\vartheta\kappa_x}_{q,\infty}$.
It remains to consider the case when $f$ is not localized in $v$. We observe that for a smooth cut-off function $\psi\in \CR \infty_c(\mathbb{R})$, the function $(t,x,v)\mapsto f(t,x,v)\psi(v)=:f^\psi(t,x,v)$ is a solution to
\begin{align*}
\call(\partial_t,\nabla_x,v) f^\psi(t,x,v)=g_0^\psi(t,x,v)+g_1^{\psi'}(t,x,v)+\partial_v g_1^\psi(t,x,v) \ \text{on } \mathbb{R}_t \times \mathbb{R}^d_x \times \mathbb{R}_v,
\end{align*}
%
where $g_0^\psi$, $g_1^{\psi'}$ and $g_1^\psi$ are defined analogously.
Hence, estimate \eqref{av_est_lj} reads in this case
\begin{align*}
\|\overline{f^\psi_{l,j}}\|_{\LR{p,\infty}_{t,x}} \lesssim 2^{-l\vartheta\kappa_t}2^{-j\vartheta\kappa_x}(\||v|^{1-\gamma} (g_{0}^\psi+g_{1}^{\psi'})\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}^\psi\|_{{\mathcal M}_{TV}} + \|f^\psi\|_{\LR{\beta}_{t,x,v}}).\end{align*}
Since $|v|^{-\gamma}g_1\in {\mathcal M}_{TV}$ by assumption, for any sequence $\varepsilon_n\downarrow 0$ there exists a sequence $r_n\uparrow\infty$ such that
\[
\int_{\mathbb{R}_t\times \mathbb{R}^d_x\times \mathbb{R}_v} \chi_{\set{r_n\le |v|}} |v|^{-\gamma} g_1 \,\mathrm{d} v \,\mathrm{d} x \,\mathrm{d} t\le \varepsilon_n\]
for all $n\in \mathbb{N}$. For $n\in\mathbb{N}$ and a smooth cut-off function $\psi\in \CR \infty_c(\mathbb{R})$ with $\psi=1$ on $B_1(0)$ and $\supp\psi\subset B_2(0)$, we define $\psi_n$ via $\psi_n(v):=\psi(v/r_n)$. Hence $\psi'_n$ is supported on $r_n\le |v|\le 2r_n$ and satisfies $|\psi'_n|\lesssim 1/r_n$, so that $|\psi'_n(v)||v|\lesssim 1$ on its support and we may estimate
\begin{align*}
\||v|^{1-\gamma} g_{1}^{\psi_n'}\|_{{\mathcal M}_{TV}}&=\int_{\mathbb{R}_t\times \mathbb{R}^d_x\times \mathbb{R}_v} |\psi'_n(v)||v|(|v|^{-\gamma}g_1)\,\mathrm{d} v \,\mathrm{d} x \,\mathrm{d} t \\
&=\int_{\mathbb{R}_t\times \mathbb{R}^d_x\times \mathbb{R}_v} \chi_{\set{r_n\le |v|\le 2 r_n}}|\psi'_n(v)||v|(|v|^{-\gamma}g_1)\,\mathrm{d} v \,\mathrm{d} x \,\mathrm{d} t \\ &\lesssim \int_{\mathbb{R}_t\times \mathbb{R}^d_x\times \mathbb{R}_v} \chi_{\set{r_n\le |v|\le 2 r_n}} |v|^{-\gamma} g_1 \,\mathrm{d} v \,\mathrm{d} x \,\mathrm{d} t\le \varepsilon_n.
\end{align*}
Thus, taking the limit $n\to\infty$ and using Fatou's lemma, we obtain \eqref{av_est_lj} also for general $f$. Multiplying by $2^{l\vartheta\kappa_t}2^{j\vartheta\kappa_x}$ and taking the supremum over $j,l\in \mathbb{Z}$, we may conclude as before.
\end{proof}
\begin{lem}\label{lem:av2}
Assume $\gamma\in(-\infty,1)$, $m\in \vpp{1,\infty}$, $\mu\in [1,2-\gamma)$, $\rho\in(0,1]$, $\beta'=\frac{1}{\rho}$, and let $f$, $g_0$, $g_1$, and $\overline{f}$ be as in Lemma \ref{lem:av}. Define
\begin{align}\label{lem:av2_constants}
\begin{split}
p&:=\frac{1-\gamma+\rho}{\rho\mu+(1-\rho)(1-\gamma)}, \qquad \kappa_t:= \frac{\mu-1+\rho}{1-\gamma+\rho}.
\end{split}
\end{align}
If $\overline{f}\in \LR{r}_{t,x}$, $p\ne r\in[1,\infty]$, then for all $q\in\vpp{\min\{p,r\},\max\{p,r\}}$ we have $\overline{f}\in \tilde{L}^{q}_x\dot B^{\vartheta\kappa_t}_{q,\infty}$, where $\vartheta\in\vpp{0,1}$ is such that
\begin{align*}
\frac{1}{q}=\frac{1-\vartheta}{r}+\frac{\vartheta}{p}.
\end{align*}
Moreover,
%
\begin{align}\label{lem:av2_est}
&\norm{\overline{f}}_{\tilde{L}^{q}_x\dot B^{\vartheta\kappa_t}_{q,\infty}}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}.
\end{align}
%
\end{lem}
\begin{proof}
By the same arguments as in the proof of Lemma \ref{lem:av}, we may assume that $f$ is localized in $v$. In fact, the whole proof of Lemma \ref{lem:av2} is similar to the one of Lemma \ref{lem:av}, with the modification that here we localize in the Fourier space connected to the time variable $t$ only and do not localize in the Fourier space connected to the spatial variable $x$; the remaining micro-local decomposition of $f$ then depends on the size of $v$ only. More precisely, let $\{\eta_l\}_{l\in\mathbb{Z}}$ be a partition of unity on $\mathbb{R}\setminus\set{0}$ as in Section \ref{FctSpc}. Then we define for $l\in\mathbb{Z}$
\begin{align*}
f_{l}:={\mathcal F}^{-1}_{t}[\eta_l{\mathcal F}_{t}f],
\end{align*}
where ${\mathcal F}_{t}{f}_{l}(\tau,x,v)$ is supported on frequencies $|\tau|\sim2^{l}$ for $l\in\mathbb{Z}$.
Similarly, we define the decompositions $g_{0,l}$ and $g_{1,l}$ of $g_0$ and $g_1$, respectively. Moreover, we again consider a smooth function $\psi_{0}\in\CR \infty_c(\mathbb{R})$ supported in $B_{2}(0)$ and set $\psi_{1}:=1-\psi_0$.
For $\d>0$ to be specified later we write
\begin{align*}
f_l & =\psi_{0}\left(\frac{|v|}{\d}\right) f_l+\psi_{1}\left(\frac{|v|}{\d}\right) f_l =:f_l^{0}+f_l^{1}.
\end{align*}
Since $f$ is a solution to \eqref{av_eqn}, we have
\begin{align*}
\calf_{t,x}^{-1}{\mathcal L}(i\tau,i\xi,v)\calf_{t,x}f_l^{1}(t,x,v)=\psi_{1}\left(\frac{|v|}{\d}\right)\left(g_{0,l}(t,x,v)+\partial_{v}g_{1,l}(t,x,v)\right)
\end{align*}
and thus
\begin{align*}
f_l^{1}(t,x,v)= & \calf_{t,x}^{-1} \psi_{1}\left(\frac{|v|}{\d}\right)\frac{1}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}g_{0,l}(t,x,v)\\
& + \calf_{t,x}^{-1} \psi_{1}\left(\frac{|v|}{\d}\right)\frac{1}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}\partial_{v}g_{1,l}(t,x,v)\nonumber \\
=: & f_l^{2}(t,x,v)+f_l^{3}(t,x,v),\nonumber
\end{align*}
so that we arrive at the decomposition
\begin{align*}
\overline{f}_l &:=\int f_l|v|^{\mu-1}\,\mathrm{d} v=\int f_l^{0}|v|^{\mu-1}\,\mathrm{d} v+\int f_l^{2}|v|^{\mu-1}\,\mathrm{d} v+\int f_l^{3}|v|^{\mu-1}\,\mathrm{d} v\\
& =:\overline{f}_l^{0}+\overline{f}_l^{2}+\overline{f}_l^{3}.
\end{align*}
Again, we treat the three contributions separately.
\smallskip\noindent
\textit{Step 1:} $f^0$.
Let $l\in\mathbb{Z}$ be arbitrary, fixed. Since $|v|\lesssim \d$ on the support of $\psi_{0}\left(\frac{|v|}{\d}\right)$, using Minkowski's and H\"older's inequality, we have
\begin{align*}
\|\int f_{l}^{0}|v|^{\mu-1}\,\mathrm{d} v\|_{L_{t,x}^{\beta}} & = \|\int \psi_{0}\left(\frac{|v|}{\d}\right) |v|^{\mu-1} f_{l}\,\mathrm{d} v\|_{L_{t,x}^{\beta}} \\
& \le \int |\psi_{0}|\left(\frac{|v|}{\d}\right) |v|^{\mu-1} \|f_{l}\|_{L_{t,x}^{\beta}}\,\mathrm{d} v \\
&\lesssim \d^{\mu-1} \int|\psi_{0}|\left(\frac{|v|}{\d}\right) \|f_{l}\|_{L_{t,x}^{\beta}}\,\mathrm{d} v \\
&\lesssim \d^{\mu-1} \|f\|_{L_{t,x,v}^{\beta}}\left(\int |\psi_0|\left(\frac{|v|}{\d}\right)^{\beta'}\,\mathrm{d} v\right)^{\frac{1}{\beta'}} \\
& \lesssim \d^{\mu-1+\rho}\|f\|_{L_{t,x,v}^{\beta}}.
\end{align*}
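In the last step we used $\beta'=\frac{1}{\rho}$ together with the substitution $w=v/\d$:
\begin{align*}
\left(\int |\psi_0|\left(\frac{|v|}{\d}\right)^{\beta'}\,\mathrm{d} v\right)^{\frac{1}{\beta'}}=\d^{\frac{1}{\beta'}}\left(\int |\psi_0|(|w|)^{\beta'}\,\mathrm{d} w\right)^{\frac{1}{\beta'}}\lesssim \d^{\rho}.
\end{align*}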
\smallskip\noindent
\textit{Step 2:} $f^2$.
Let $l\in\mathbb{Z}$ be arbitrary, fixed.
Since $\mu< 2-\gamma$, we have on the support of $\eta_l\psi_{1}\left(\frac{|v|}{\d}\right)$ (so that $|\tau|\sim 2^{l}$ and $|v|\ge \d$)
\begin{align*}
\frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}&\lesssim \frac{|v|^{\mu-2+\gamma}}{|\tau|}\lesssim \frac{\d^{\mu-2+\gamma}}{2^l}.
\end{align*}
By Lemma \ref{lem:FM} applied with $\alpha_\xi=0$ and the isotropic variant of Theorem \ref{thm:FM} (cf.\@ Remark \ref{rem:FM}), $\frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}$ acts as a constant multiplier of order $\frac{\d^{\mu-2+\gamma}}{2^l}$ on the support of $\eta_l\psi_{1}\left(\frac{|v|}{\d}\right)$. Consequently
\begin{align*}
\|\int f_{l}^{2}|v|^{\mu-1}\,\mathrm{d} v\|_{L_{t,x}^{1}} & =\|\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v|}{\d}\right)\frac{|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}|v|^{1-\gamma}g_{0,l}\,\mathrm{d} v\|_{L_{t,x}^{1}}\\
& \lesssim \frac{\d^{\mu-2+\gamma}}{2^l}\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}.
\end{align*}
\smallskip\noindent
\textit{Step 3:} $f^{3}$.
Let $l\in\mathbb{Z}$ be arbitrary, fixed.
We observe (recall ${\mathcal L}(i\tau,i\xi,v) = i\tau + |v|^{m-1}|\xi|^2$)
\begin{align*}
\int f_{l}^{3}|v|^{\mu-1}\,\mathrm{d} v & =-\int\calf_{t,x}^{-1} \psi_{1}'\left(\frac{|v|}{\d}\right)\frac{\sgn(v)}{\d}\frac{|v|^{\mu-1}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}g_{1,l}\,\mathrm{d} v \\
& -(\mu-1)\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v|}{\d}\right)\frac{\sgn(v)|v|^{\mu-2}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}g_{1,l}\,\mathrm{d} v \\
& +\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v|}{\d}\right)\frac{|v|^{\mu-1} \partial_v{\mathcal L}(i\tau,i\xi,v)}{{\mathcal L}(i\tau,i\xi,v)^2}\calf_{t,x}g_{1,l}\,\mathrm{d} v \\
&=-\int\calf_{t,x}^{-1} \psi_{1}'\left(\frac{|v|}{\d}\right)\frac{|v|}{\d}\frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}|v|^{-\gamma}g_{1,l}\,\mathrm{d} v \\
& -(\mu-1)\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v|}{\d}\right)\frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}\calf_{t,x}|v|^{-\gamma}g_{1,l}\,\mathrm{d} v \\
& +(m-1)\int\calf_{t,x}^{-1} \psi_{1}\left(\frac{|v|}{\d}\right)\frac{\sgn(v)|v|^{\mu +m-3+\gamma}|\xi|^2}{{\mathcal L}(i\tau,i\xi,v)^2}\calf_{t,x}|v|^{-\gamma}g_{1,l}\,\mathrm{d} v.
\end{align*}
Observe that $\psi_1'$ is supported on an annulus. Therefore, we have as before $|\tau|\sim 2^{l}$ and $|v|\ge \d$ on the support of $\eta_l\psi_{1}\left(\frac{|v|}{\d}\right)$, and additionally also $|v|\sim \d$ on the support of $\eta_l\psi_{1}'\left(\frac{|v|}{\d}\right)$. This last observation allows us to estimate the expression $\frac{|v|}{\d}$ appearing in the first integral on the right hand side by $\frac{|v|}{\d}\lesssim 1$. As in Step 2, we obtain
\begin{align*}
\frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}&\lesssim \frac{\d^{\mu-2+\gamma}}{2^l},
\end{align*}
and, similarly,
\begin{align*}
\frac{|v|^{\mu+m-3+\gamma}|\xi|^2}{|{\mathcal L}(i\tau,i\xi,v)|^2}&=\frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}\frac{|v|^{m-1}|\xi|^2}{|{\mathcal L}(i\tau,i\xi,v)|}\lesssim \frac{|v|^{\mu-2+\gamma}}{|{\mathcal L}(i\tau,i\xi,v)|}\lesssim \frac{\d^{\mu-2+\gamma}}{2^l}.
\end{align*}
By virtue of these estimates, Lemma \ref{lem:FM} applied with $\alpha_\xi=0$ and the isotropic variant of Theorem \ref{thm:FM} (cf.\@ Remark \ref{rem:FM}) show that the expressions
\begin{align*}
\frac{|v|}{\d}\frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}, \quad \frac{\sgn(v)|v|^{\mu-2+\gamma}}{{\mathcal L}(i\tau,i\xi,v)}, \quad \frac{\sgn(v)|v|^{\mu +m-3+\gamma}|\xi|^2}{{\mathcal L}(i\tau,i\xi,v)^2}
\end{align*}
extend to constant multipliers of order $\frac{\d^{\mu-2+\gamma}}{2^l}$ on the support of $\eta_l\psi_{1}'\left(\frac{|v|}{\d}\right)$ for the first expression and on the support of $\eta_l\psi_{1}\left(\frac{|v|}{\d}\right)$ for the latter two. Hence, we obtain
\begin{align*}
\|\int f_{l}^{3}|v|^{\mu-1}\,\mathrm{d} v\|_{L_{t,x}^{1}} & \lesssim \frac{\d^{\mu-2+\gamma}}{2^l}\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}}.
\end{align*}
\smallskip\noindent
\textit{Step 4:} Conclusion.
We aim to conclude by real interpolation. We set, for $z>0$,
\begin{align*}
K(z,\overline{f}_{l}):=\inf\{ & \|\overline{f}_{l}^{1}\|_{\LR{1}_{t,x}}+z\|\overline{f}_{l}^{0}\|_{\LR{\beta}_{t,x}}:\overline{f}_{l}^{0}\in \LR{\beta}_{t,x}, \overline{f}_{l}^{1}\in \LR{1}_{t,x},\ \overline{f}_{l}=\overline{f}_{l}^{0}+\overline{f}_{l}^{1}\}.
\end{align*}
By the above estimates we obtain
\begin{align*}
K(z,\overline{f}_{l}) & \lesssim \frac{\d^{\mu-2+\gamma}}{2^l}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}})+z\d^{\mu-1+\rho}\|f\|_{\LR{\beta}_{t,x,v}}.
\end{align*}
We now equilibrate the first and the second term on the right hand side, that is, we choose $\d>0$ such that
\[
\frac{\d^{\mu-2+\gamma}}{2^l}=z\d^{\mu-1+\rho},
\]
that is,
\[
\d:=z^{-\frac{1}{1-\gamma+\rho}}2^{-\frac{l}{1-\gamma+\rho}}.
\]
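Indeed, inserting this choice of $\d$ into both terms yields
\begin{align*}
\frac{\d^{\mu-2+\gamma}}{2^l}=z\d^{\mu-1+\rho}=z^{\frac{2-\gamma-\mu}{1-\gamma+\rho}}\,2^{-l\frac{\mu-1+\rho}{1-\gamma+\rho}}=z^{\frac{2-\gamma-\mu}{1-\gamma+\rho}}\,2^{-l\kappa_t}.
\end{align*}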
Hence, with
$
\t:=\frac{-\mu+2-\gamma}{1-\gamma+\rho}
$
we obtain
\begin{align*}
z^{-\t}K(z,\overline{f}_{l}) & \lesssim 2^{-l\frac{\mu-1+\rho}{1-\gamma+\rho}}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}})\\
&= 2^{-l\kappa_t}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}}).
\end{align*}
As in Step 4 of the proof of Lemma \ref{lem:av} we use $(\LR{1}_{t,x},\LR{\beta}_{t,x})_{\t,\infty}=\LR{p,\infty}_{t,x}$ with $p=\frac{1}{1-\t\rho}=\frac{1-\gamma+\rho}{\rho\mu+(1-\rho)(1-\gamma)}$ to obtain
\begin{align}\label{av2_est_lj}
\|\overline{f}_{l}\|_{\LR{p,\infty}_{t,x}} & \lesssim 2^{-l\kappa_t}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}}).
\end{align}
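For the reader's convenience, we verify the identity $p=\frac{1}{1-\t\rho}=\frac{1-\gamma+\rho}{\rho\mu+(1-\rho)(1-\gamma)}$ used here: since $\beta'=\frac{1}{\rho}$ and hence $\frac{1}{\beta}=1-\rho$, we have
\begin{align*}
\frac{1}{p}=\frac{1-\t}{1}+\frac{\t}{\beta}=1-\t\rho=\frac{1-\gamma+\rho-\rho(2-\gamma-\mu)}{1-\gamma+\rho}=\frac{\rho\mu+(1-\rho)(1-\gamma)}{1-\gamma+\rho}.
\end{align*}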
For $q\in\vpp{\min\{p,r\},\max\{p,r\}}$ we choose a corresponding $\vartheta\in\vpp{0,1}$ subject to $1/q=(1-\vartheta)/r+\vartheta/p$. Then using $(\LR{r}_{t,x},\LR{p,\infty}_{t,x})_{\vartheta,q}=\LR{q}_{t,x}$, together with \eqref{av2_est_lj}, we obtain
\begin{align*}
\|\overline{f}_{l}\|_{\LR{q}_{t,x}} &\lesssim \|\overline{f}_{l}\|_{\LR{r}_{t,x}}^{1-\vartheta}\|\overline{f}_{l}\|_{\LR{p,\infty}_{t,x}}^\vartheta \\
& \lesssim \|\overline{f}\|_{\LR{r}_{t,x}}^{1-\vartheta}2^{-l\vartheta\kappa_t}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}})^{\vartheta} \\
&\le 2^{-l\vartheta\kappa_t}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}).
\end{align*}
Multiplying by $2^{l\vartheta\kappa_t}$ and taking the supremum over $l\in \mathbb{Z}$ yields \eqref{lem:av2_est}.
\end{proof}
\begin{cor}\label{cor:av0}
Let $m\in\vpp{1,\infty}$, $\gamma\in\vpp{-\infty,m}$, $\mu\in[1,m+1-\gamma)$, $f\in \LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}$ be a solution to \eqref{av_eqn}, and let $g_0$, $g_1$ and $\overline{f}$ be as in Lemma \ref{lem:av}. Let $q\in (1,\frac{m+1-\gamma}{\mu})$ and define
\begin{align*}
\tilde\kappa_x:= \frac{\mu q-1}{q}\frac{2}{m-\gamma}.
\end{align*}
If $\overline{f}\in \LR{1}(\mathbb{R}^{d+1})\cap \LR{q}(\mathbb{R};\LR{1}(\mathbb{R}^d))$, then $\overline{f}\in \LR{q}(\mathbb{R};\WSR{\sigma_x}{q}(\mathbb{R}^d))$ for all $\sigma_x\in[0,\tilde\kappa_x)$. Furthermore,
\begin{align}\label{cor:av0_main_est}
\norm{\overline{f}}_{\LR{q}_t(\WSR{\sigma_x}{q}_x)}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v} \cap \LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}\cap \LR{q}_t\LR{1}_x}.
\end{align}
\end{cor}
\begin{proof}
We recall the decomposition $f_j={\mathcal F}_{x}^{-1}\varphi_j{\mathcal F}_{x}f$ introduced in the proof of Lemma \ref{lem:av}. We argue that it suffices to consider the case when $f_{j}=0$ for all $j<0$. Indeed, the part $f_{<}:=\sum_{j<0}f_j$ can be estimated in view of Bernstein's Lemma (cf.\@ \cite[Lemma 2.1]{BCD11}) via
\begin{align*}
\|\overline{f}_{<}\|_{\LR{q}_t(\WSR{\sigma_x}{q}_x)}\lesssim \|\overline{f}\|_{\LR{q}_{t}\LR{1}_x}.
\end{align*}
We aim to control $\overline{f}$ in $\tilde{L}^{q}_t\dot B^{\vartheta\kappa_x}_{q,\infty}$ where $\vartheta\in(0,1)$ is sufficiently large such that $\sigma_x<\vartheta\kappa_x$, and then use Lemma \ref{lem:emb_0} to the effect of
\begin{align*}
\|\overline{f}\|_{\LR{q}_t(\WSR{\sigma_x}{q}_x)}\lesssim \|\overline{f}\|_{\tilde{L}^{q}_t B^{\vartheta\kappa_x}_{q,\infty}} = \|\overline{f}\|_{\tilde{L}^{q}_t \dot B^{\vartheta\kappa_x}_{q,\infty}},
\end{align*}
where the last equality is apparent from the definition of the homogeneous and non-homogeneous Lebesgue-Besov spaces and the fact that the low frequencies of $f$ vanish.
Thus, it remains to establish
%
\begin{align}\label{cor:av0_est2}
\norm{\overline{f}}_{\tilde{L}^{q}_t\dot B^{\vartheta\kappa_x}_{q,\infty}}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}}.
\end{align}
%
For $\tilde p\in (1,\frac{m+1-\gamma}{\mu})$, choose
\begin{align*}
\rho:=\frac{(\tilde p-1)(m-\gamma)}{1+\tilde p(m-\mu-\gamma)}.
\end{align*}
We claim that $\rho$ is positive and well-defined: Since the numerator is positive due to $\tilde p>1$ and $m>\gamma$, it remains to check that the denominator is positive. This is obvious for $\mu\le m-\gamma$. For $\mu> m-\gamma$, we observe that due to $\mu<m+1-\gamma$ we have
\begin{align*}
\tilde p<\frac{m+1-\gamma}{\mu}<\frac{1}{\mu+\gamma-m},
\end{align*}
which implies $1+\tilde p(m-\mu-\gamma)>0$. Moreover, $\tilde p<\frac{m+1-\gamma}{\mu}$ can be rewritten as $(\tilde p-1)(m-\gamma)<1+\tilde p(m-\mu-\gamma)$, so that $\rho\in(0,1)$. Hence, we may apply Lemma \ref{lem:av} with this choice of $\rho$ and with $s=1$. One checks that in this case the integrability and differentiability exponents in \eqref{lem:av_constants} read $p=\tilde p$, $\kappa_t=0$, and $\kappa_x=\frac{\mu \tilde p-1}{\tilde p}\frac{2}{m-\gamma}$.
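The rewriting of the condition $\tilde p<\frac{m+1-\gamma}{\mu}$ used above is elementary: subtracting $\tilde p(m-\gamma)$ from both sides gives
\begin{align*}
(\tilde p-1)(m-\gamma)<1+\tilde p(m-\mu-\gamma)
\quad\Longleftrightarrow\quad
-(m-\gamma)<1-\tilde p\mu
\quad\Longleftrightarrow\quad
\tilde p\mu<m+1-\gamma.
\end{align*}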
Choose $\tilde p\in(q, \frac{m+1-\gamma}{\mu})$ so that $\tilde\kappa_x<\kappa_x$ and define $\vartheta\in(0,1)$ through
\begin{align*}
\frac{1}{q}=1-\vartheta+\frac{\vartheta}{\tilde p}.
\end{align*}
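Solving this relation for $\vartheta$ gives explicitly
\begin{align*}
\vartheta=\frac{1-\frac{1}{q}}{1-\frac{1}{\tilde p}}=\frac{\tilde p(q-1)}{q(\tilde p-1)},
\end{align*}
so that $\vartheta\uparrow 1$ as $\tilde p\downarrow q$.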
We may choose $\tilde p\in(q, \frac{m+1-\gamma}{\mu})$ sufficiently small so that $\vartheta\in(0,1)$ is so large that $\sigma_x<\vartheta\tilde\kappa_x<\vartheta\kappa_x$. In view of \eqref{lem:av_est2} (with the space $S^{\vartheta\overline{\kappa}}_{q,\infty}\dot B=S^{(0,\vartheta\kappa_x)}_{q,\infty}\dot B$ replaced by $\tilde{L}^{q}_{t}\dot B^{\vartheta\kappa_x}_{q,\infty}$) we obtain
\begin{align*}
\|\overline{f}_{j}\|_{\LR{q}_{t,x}}\lesssim 2^{-j\vartheta\kappa_x}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}}),
\end{align*}
where we recall the notation $\overline{f}_{j}:=\int {\mathcal F}^{-1}_{x}[\varphi_j {\mathcal F}_{x}f]|v|^{\mu-1} \,\mathrm{d} v$.
If we multiply by $2^{j\vartheta\kappa_x}$ and take the supremum over $j\in \mathbb{Z}$, this yields
\begin{align*}
\norm{\overline{f}}_{\tilde L^{q}_t\dot B^{\vartheta\kappa_x}_{q,\infty}}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}}.
\end{align*}
By the estimate $\|f\|_{\LR{\beta}_{t,x,v}} \lesssim \|f\|_{\LR{1}_{t,x,v}} + \|f\|_{\LR{\infty}_{t,x,v}}$, this gives \eqref{cor:av0_est2}.
\end{proof}
\begin{cor}\label{cor:av}
Let $m\in\vpp{1,\infty}$, $\gamma\in\vpp{-\infty,1}$, $f\in \LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}$ be a solution to \eqref{av_eqn}, and let $g_0$ and $g_1$ be as in Lemma \ref{lem:av}. Assume $\overline{f}\in \LR{r}_{t,x}$ for all $r\in [1,m+1-\gamma)$, where $\overline{f}(t,x):=\int f(t,x,v)\,\mathrm{d} v$. Let $\tilde p\in (2-\gamma,m+1-\gamma)$ and define
\begin{align*}
\tilde\kappa_t:= \frac{m+1-\gamma-\tilde p}{\tilde p}\frac{1}{m-1}, \qquad
\tilde\kappa_x:= \frac{\tilde p-2+\gamma}{\tilde p}\frac{2}{m-1}.
\end{align*}
Then $\overline{f}\in \WSR{\sigma_t}{\tilde p}(\mathbb{R};\WSR{\sigma_x}{\tilde p}(\mathbb{R}^d))$ for all $\sigma_t\in[0,\tilde\kappa_t)$ and $\sigma_x\in[0,\tilde\kappa_x)$. Furthermore, there is an $r\in (\tilde p, m+1-\gamma)$, such that
\begin{align}\label{cor:av_main_est}
\norm{\overline{f}}_{\WSR{\sigma_t}{\tilde p}(\WSR{\sigma_x}{\tilde p})}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}.
\end{align}
\end{cor}
\begin{proof} As we need to pass from homogeneous spaces (the output of Lemma \ref{lem:av} and Lemma \ref{lem:av2}) to a non-homogeneous space, our strategy is to invoke Lemma \ref{lem:emb_2} and Lemma \ref{lem:emb_1}. The input to Lemma \ref{lem:emb_2} requires four pieces of information, namely control of $\overline{f}$ in $\LR{\tilde p}(\mathbb{R}^{d+1})$, $\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}$, $\tilde{L}^{\tilde p}_t\dot B^{\sigma_x}_{\tilde p,\infty}$ and $S^{\overline{\sigma}}_{\tilde p,\infty}\dot B$. Since the control of $\overline{f}$ in $\LR{\tilde p}(\mathbb{R}^{d+1})$ is ensured by assumption, we concentrate on the other three contributions. Note that the main difficulty lies in the condition that both the integrability exponent and the orders of differentiability have to match exactly.
\smallskip\noindent
\textit{Step 1:} $\overline{f}\in S^{\overline{\sigma}}_{\tilde p,\infty}\dot B$.
Let $r\in(\tilde p,m+1-\gamma)$, to be chosen in Step 3. We claim that there exist functions $k_t,k_x:(0,\infty)\to(0,\infty)$ with $k_t(\varepsilon),k_x(\varepsilon)\to 0$ as $\varepsilon\to 0$, such that for all $\varepsilon\ll 1$
%
\begin{align}\label{cor:av_est}
\norm{\overline{f}}_{S^{\overline{\sigma}}_{\tilde p,\infty}\dot B}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}},
\end{align}
%
where we have used the notation $\sigma_t:=\tilde\kappa_t-k_t(\varepsilon)$ and $\sigma_x:=\tilde\kappa_x-k_x(\varepsilon)$.
We apply Lemma \ref{lem:av} with $\mu=1$, $\rho=1-\varepsilon$, and $s:=s_\varepsilon\in (0,1)$, where $s_\varepsilon$ is chosen so that the integrability assertion in \eqref{lem:av_constants} reads $p=\tilde p$; this is possible for $\rho$ close to $1$ in view of Remark \ref{rem:av}.
Moreover, we may choose $\vartheta\in(0,1)$ such that $\kappa_t$ and $\kappa_x$, defined through \eqref{lem:av_constants}, satisfy $\vartheta\kappa_t=\tilde\kappa_t-k_t(\varepsilon)$ and $\vartheta\kappa_x=\tilde\kappa_x-k_x(\varepsilon)$ for some functions $k_t$ and $k_x$ as above. Then, for $1<q_0<\tilde p<q_1<m+1-\gamma$ such that
\begin{align*}
\frac{1}{q_0}=1-\vartheta+\frac{\vartheta}{\tilde p}, \qquad
\frac{1}{q_1}=\frac{1-\vartheta}{r}+\frac{\vartheta}{\tilde p},
\end{align*}
in view of \eqref{lem:av_est2} we obtain that
\begin{align*}
\|\overline{f}_{l,j}\|_{\LR{q_i}_{t,x}}\lesssim 2^{-l\vartheta\kappa_t}2^{-j\vartheta\kappa_x}(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}\cap \LR{r}_{t,x}}),
\end{align*}
for $i=0,1$, where we recall the notation $\overline{f}_{l,j}:=\int {\mathcal F}^{-1}_{t,x}[\eta_l\varphi_j {\mathcal F}_{t,x}f] \,\mathrm{d} v$.
Since $(\LR{q_0}_{t,x},\LR{q_1}_{t,x})_{\t,\tilde p}=\LR{\tilde p}_{t,x}$ for an appropriate $\t\in(0,1)$, we thus obtain
\begin{align*}
\|\overline{f}_{l,j}\|_{\LR{\tilde p}_{t,x}}\lesssim 2^{-l\vartheta\kappa_t}2^{-j\vartheta\kappa_x}&\left(\||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} \right.\\
& \left. \ + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}} + \|\overline{f}\|_{\LR{r}_{t,x}}\right),
\end{align*}
which after multiplying by $2^{l\vartheta\kappa_t}2^{j\vartheta\kappa_x}$ and taking the supremum over $l,j\in \mathbb{Z}$ yields
\begin{align}\label{cor:av_est_b}
\norm{\overline{f}}_{S^{\vartheta(\kappa_t,\kappa_x)}_{\tilde p,\infty}\dot B}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}} + \|\overline{f}\|_{\LR{r}_{t,x}}.
\end{align}
By the estimate $\|f\|_{\LR{\beta}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_{t,x}}\lesssim \|f\|_{\LR{1}_{t,x,v}} + \|f\|_{\LR{\infty}_{t,x,v}}$, this gives \eqref{cor:av_est}.
\smallskip\noindent
\textit{Step 2:} $\overline{f}\in \tilde{L}^{\tilde p}_t\dot B^{\sigma_x}_{\tilde p,\infty}$.
In this step we establish
%
\begin{align}\label{cor:av_est2}
\norm{\overline{f}}_{\tilde{L}^{\tilde p}_t\dot B^{\sigma_x}_{\tilde p,\infty}}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}.
\end{align}
%
Choose
\begin{align*}
\rho:=\frac{(\tilde p-1)(m-\gamma)}{1+\tilde p(m-1-\gamma)}.
\end{align*}
We claim that $\rho$ is positive and well-defined: Since the numerator is positive due to $\tilde p>1$ and $m>\gamma$, it remains to check that the denominator is positive. This is obvious for $\gamma\le m-1$. For $\gamma> m-1$, we observe that
\begin{align*}
\tilde p<m+1-\gamma<\frac{1}{1+\gamma-m},
\end{align*}
which implies $1+\tilde p(m-1-\gamma)>0$. Moreover, $\tilde p<m+1-\gamma$ can be rewritten as $(\tilde p-1)(m-\gamma)<1+\tilde p(m-1-\gamma)$, so that $\rho\in(0,1)$. Hence, we may apply Lemma \ref{lem:av} with this choice of $\rho$ and with $s=1$. One checks that in this case the integrability and differentiability exponents in \eqref{lem:av_constants} read $p=\tilde p$, $\kappa_t=0$, and $\kappa_x=\frac{p-1}{p}\frac{2}{m-\gamma}$. We observe that $\kappa_x\ge \tilde\kappa_x$ and hence we find $\vartheta\in (0,1)$ such that $\vartheta\kappa_x=\tilde\kappa_x-k_x(\varepsilon)$.
The same interpolation argument as in Step 1 gives now the estimate \eqref{cor:av_est2}.
\smallskip\noindent
\textit{Step 3:} $\overline{f}\in \tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}$.
In this step we show that there is some $r\in (\tilde p, m+1-\gamma)$ such that
%
\begin{align}\label{cor:av_est3}
&\norm{\overline{f}}_{\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}.
\end{align}
%
We apply Lemma \ref{lem:av2} with $\mu=1$ and $\rho=1$. In this case, \eqref{lem:av2_constants} reads $p=2-\gamma$ and $\kappa_t=\frac{1}{2-\gamma}$. Since $\tilde p>2-\gamma$, we have $\tilde\kappa_t<\kappa_t$. Hence, we can choose $\vartheta\in (0,1)$, such that $\vartheta\kappa_t=\tilde\kappa_t-k_t(\varepsilon)$. In particular,
\begin{align*}
\vartheta<\frac{\tilde\kappa_t}{\kappa_t}=\frac{m+1-\gamma-\tilde p}{\tilde p}\frac{2-\gamma}{m-1}<\frac{2-\gamma}{\tilde p},
\end{align*}
so that $r=\frac{\tilde p(2-\gamma)(1-\vartheta)}{2-\gamma-\vartheta \tilde p}$ is well defined. Since $r$ is increasing in $\vartheta$ due to $\tilde p>2-\gamma$, we see that $r\in (\tilde p,m+1-\gamma)$. We have $\frac{1}{\tilde p}=\frac{1-\vartheta}{r}+\frac{\vartheta}{p}$, and hence Lemma \ref{lem:av2} gives estimate \eqref{cor:av_est3}.
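For completeness, the formula for $r$ results from solving the interpolation identity $\frac{1}{\tilde p}=\frac{1-\vartheta}{r}+\frac{\vartheta}{2-\gamma}$:
\begin{align*}
\frac{1-\vartheta}{r}=\frac{1}{\tilde p}-\frac{\vartheta}{2-\gamma}=\frac{2-\gamma-\vartheta\tilde p}{\tilde p(2-\gamma)},
\end{align*}
where the right hand side is positive precisely because $\vartheta<\frac{2-\gamma}{\tilde p}$.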
\smallskip\noindent
\textit{Step 4:} Conclusion.
Since $\overline{f}\in \LR{\tilde p}_{t,x}$ by assumption, Lemma \ref{lem:emb_2} combined with Lemma \ref{lem:emb_1} yields the result.
\end{proof}
\begin{cor}\label{cor:av3}
Let $m\in\vpp{1,\infty}$, $\gamma\in\vpp{-\infty,m}$, and let $f\in \LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}$ be a solution to \eqref{av_eqn}. Let $g_0$ and $g_1$ be as in Lemma \ref{lem:av} and assume additionally
\begin{align*}
|g_{0}|(t,x,v) \in {\mathcal M}_{TV}(\mathbb{R}_{t}\times\mathbb{R}_{x}^{d}\times\mathbb{R}_{v}).
\end{align*}
Assume $\tilde p\in (2-\gamma,m+1-\gamma)\cap (1,m+1-\gamma)$ and define
\begin{align*}
\tilde\kappa_t:= \frac{m+1-\gamma-\tilde p}{\tilde p}\frac{1}{m-1}, \qquad
\tilde\kappa_x:= \frac{\tilde p-2+\gamma}{\tilde p}\frac{2}{m-1}.
\end{align*}
If $\overline{f}\in \LR{r}(\mathbb{R}^{d+1})\cap \LR{1}(\mathbb{R};\LR{\tilde p}(\mathbb{R}^d))$ for all $r\in [1,m+1-\gamma)$, where $\overline{f}(t,x):=\int f(t,x,v)\,\mathrm{d} v$, and if $\int|v|^{m-1} f \,\mathrm{d} v \in \LR{1}(\mathbb{R}^{d+1})$, then $\overline{f}\in \WSR{\sigma_t}{\tilde p}(\mathbb{R};\WSR{\sigma_x}{\tilde p}(\mathbb{R}^d))$ for all $\sigma_t\in[0,\tilde\kappa_t)$ and $\sigma_x\in[0,\tilde\kappa_x)$. Furthermore, there is an $r\in (\tilde p, m+1-\gamma)$, such that
\begin{align}\label{cor:av3_main_est}
\begin{split}
& \norm{\overline{f}}_{\WSR{\sigma_t}{\tilde p}(\WSR{\sigma_x}{\tilde p})}\lesssim \|g_{0}\|_{{\mathcal M}_{TV}} + \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}}\\
& \qquad \qquad + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}}
+ \|\overline{f}\|_{\LR{1}_t\LR{\tilde p}_x\cap \LR{r}_{t,x}} + \|\int|v|^{m-1} f \,\mathrm{d} v\|_{\LR{1}_{t,x}}.
\end{split}
\end{align}
\end{cor}
\begin{proof} It suffices to adapt Step 3 of the proof of Corollary \ref{cor:av}, that is, the control of $\overline{f}$ in $\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}$.
\smallskip\noindent
\textit{Step 3:} $\overline{f}\in \tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}$.
In this step we show that there is some $r\in (\tilde p, m+1-\gamma)$ such that
%
\begin{align}\label{cor:av3_est3}
\begin{split}
\norm{\overline{f}}_{\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}}& \lesssim \|g_{0}\|_{{\mathcal M}_{TV}} + \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} \\
& + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{1}_t\LR{\tilde p}_x\cap \LR{r}_{t,x}} + \|\int|v|^{m-1} f \,\mathrm{d} v\|_{\LR{1}_{t,x}}.
\end{split}
\end{align}
%
We split $f$ into three contributions
\begin{align*}
f&={\mathcal F}^{-1}_t\psi_0(\tau){\mathcal F}_t f + {\mathcal F}^{-1}_{t,x}(1-\psi_0(\tau))(1-\phi_0(\xi)) {\mathcal F}_{t,x} f \\
& \ \ \ \ + {\mathcal F}^{-1}_{t,x}(1-\psi_0(\tau))\phi_0(\xi) {\mathcal F}_{t,x} f \\
&=:f^1+f^2+f^3.
\end{align*}
The low time-frequency part $f^1$ can be estimated in view of Lemma \ref{lem:emb_0} and Bernstein's Lemma (cf.\@ \cite[Lemma 2.1]{BCD11}) via
\begin{align}\label{cor:av3_est5}
\|\overline{f}^1\|_{\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}}\lesssim \|\overline{f}^1\|_{\tilde{L}^{\tilde p}_x B^{\sigma_t}_{\tilde p,\infty}}\lesssim \|\overline{f}^1\|_{\WSR{\sigma_t+\varepsilon}{\tilde p}(\LR{\tilde p}_x)}\lesssim \|\overline{f}\|_{\LR{1}_{t}\LR{\tilde p}_x}.
\end{align}
Next, we apply Lemma \ref{lem:av} with $\mu=1$, sufficiently large $\rho\in(0,1)$ and sufficiently small $s\in (\frac{\gamma-1}{m-1},1]$ so that \eqref{lem:av_constants} implies $p<\tilde p$ and $\kappa_t>\tilde\kappa_t$. Hence, we can choose $\vartheta\in (0,1)$, such that $\tilde\kappa_t>\vartheta\kappa_t>\tilde\kappa_t-k_t(\varepsilon)$. In particular, in light of Remark \ref{rem:av}
\begin{align*}
\vartheta<\frac{\tilde\kappa_t}{\kappa_t}=\frac{m+1-\gamma-\tilde p}{m+\rho-\gamma-p\rho+p(1-\rho)(\gamma-m)}\frac{p\rho}{\tilde p}<\frac{p}{\tilde p}, \qquad \text{if } \ 1-\rho\ll 1,
\end{align*}
so that $r=\frac{\tilde p p (1-\vartheta)}{p-\vartheta \tilde p}$ is well defined. Since $r$ is increasing in $\vartheta$ due to $\tilde p>p$, we see that $r\in (\tilde p,m+1-\gamma)$. We have $\frac{1}{\tilde p}=\frac{1-\vartheta}{r}+\frac{\vartheta}{p}$, and hence Lemma \ref{lem:av} gives
%
\begin{align*}
&\norm{\overline{f}}_{S^{\vartheta\overline{\kappa}}_{\tilde p,\infty}\dot B}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\overline{f}\|_{\LR{r}_{t,x}}.
\end{align*}
%
Thus, since $f^2$ is supported only on $\eta_l\varphi_j$ for non-negative $l,j\in\mathbb{Z}$, Lemma \ref{lem:emb_0} and Lemma \ref{lem:emb_1} show in view of the definition of the homogeneous and non-homogeneous Besov spaces and $\sigma_t<\vartheta\kappa_t$ as well as $0<\vartheta\kappa_x$
\begin{align*}
\norm{\overline{f}^2}_{\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}}=\norm{\overline{f}^2}_{\tilde{L}^{\tilde p}_x B^{\sigma_t}_{\tilde p,\infty}}
\lesssim \norm{\overline{f}^2}_{S^{\vartheta\overline{\kappa}}_{\tilde p,\infty} B} = \norm{\overline{f}}_{S^{\vartheta\overline{\kappa}}_{\tilde p,\infty}\dot B}.
\end{align*}
Thus,
%
\begin{equation}\label{cor:av3_est4}
\norm{\overline{f}^2}_{\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}}\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}\!+\!\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}}\!+\! \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}}\!+\!\|\overline{f}\|_{\LR{r}_{t,x}}.
\end{equation}
%
It remains to estimate the contribution of $f^3$. For $l\in \mathbb{Z}$, we introduce $f^3_l:={\mathcal F}^{-1}_t\eta_l(\tau){\mathcal F}_t f^3$. Since $f^3_l=0$ for $l<0$, we may concentrate on the case $l\ge 0$. Observe that $f^3_{l}$ satisfies
\begin{align*}
f^3_{l}= & -m|v|^{m-1}{\mathcal F}^{-1}_{t,x}\frac{|\xi|^2}{i\tau}\eta_l(\tau)\phi_0(\xi){\mathcal F}_{t,x}f + {\mathcal F}^{-1}_{t,x}\frac{\phi_0(\xi)}{i\tau}{\mathcal F}_{t,x} g_{0,l}\\
& + {\mathcal F}^{-1}_{t,x}\frac{\phi_0(\xi)}{i\tau}{\mathcal F}_{t,x} \partial_v g_{1,l}.
\end{align*}
Integrating in $v$, where the contribution of $\partial_v g_{1,l}$ vanishes, we obtain
\begin{align*}
\overline{f}^3_{l}=-m \int |v|^{m-1}{\mathcal F}^{-1}_{t,x}\frac{|\xi|^2}{i\tau}\eta_l(\tau)\phi_0(\xi){\mathcal F}_{t,x}f \,\mathrm{d} v + {\mathcal F}^{-1}_{t,x}\frac{1}{i\tau}\phi_0(\xi){\mathcal F}_{t,x} \int g_{0,l} \,\mathrm{d} v.
\end{align*}
Since $|\xi|^2$ acts as a constant multiplier on the support of $\phi_0$ and $\tau^{-1}$ acts as a constant multiplier of order $2^{-l}$ on the support of $\eta_l$, it follows by Bernstein's Lemma
\begin{align*}
\norm{\overline{f}^3_{l}}_{\LR{\tilde p}_{t,x}}&\lesssim 2^{l(1-\frac{1}{\tilde p})}\norm{\overline{f}^3_{l}}_{\LR{1}_{t,x}}\lesssim 2^{-l\frac{1}{\tilde p}}(\|\int|v|^{m-1} f \,\mathrm{d} v\|_{\LR{1}_{t,x}} + \|g_0\|_{{\mathcal M}_{TV}}).
\end{align*}
Since $\tilde p>2-\gamma$, we have
\begin{align*}
\sigma_t<\tilde\kappa_t=\frac{m+1-\gamma-\tilde p}{\tilde p} \frac{1}{m-1}<\frac{1}{\tilde p}.
\end{align*}
In view of $l\ge 0$ this yields
\begin{align*}
\norm{\overline{f}^3_{l}}_{\LR{\tilde p}_{t,x}}&\lesssim 2^{-l\sigma_t}(\|\int|v|^{m-1} f \,\mathrm{d} v\|_{\LR{1}_{t,x}} + \|g_0\|_{{\mathcal M}_{TV}}).
\end{align*}
Multiplying by $2^{l\sigma_t}$ and taking the supremum over $l\in\mathbb{Z}$, we conclude
\begin{align}\label{cor:av3_est6}
\norm{\overline{f}^3}_{\tilde{L}^{\tilde p}_x\dot B^{\sigma_t}_{\tilde p,\infty}}&\lesssim \|\int|v|^{m-1} f \,\mathrm{d} v\|_{\LR{1}_{t,x}} + \|g_0\|_{{\mathcal M}_{TV}}.
\end{align}
Collecting \eqref{cor:av3_est5}, \eqref{cor:av3_est4} and \eqref{cor:av3_est6}, we arrive at \eqref{cor:av3_est3}.
\end{proof}
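The exponent inequality $\sigma_t<\tilde\kappa_t<\frac{1}{\tilde p}$ used in the preceding proof reduces to the assumption $\tilde p>2-\gamma$. A minimal numerical sanity check of this reduction (illustrative only; it assumes $m>1$ and samples $\tilde p$ just above $2-\gamma$):

```python
# Illustrative check (not part of the proof): for m > 1 and p > 2 - gamma,
# kappa_t = (m + 1 - gamma - p) / (p * (m - 1)) satisfies kappa_t < 1/p,
# since the inequality is equivalent to m + 1 - gamma - p < m - 1,
# i.e. p > 2 - gamma.

def kappa_t(m, gamma, p):
    return (m + 1 - gamma - p) / (p * (m - 1))

all_ok = all(
    kappa_t(m, g, (2 - g) + 0.01 * k) < 1.0 / ((2 - g) + 0.01 * k)
    for m in (1.5, 2.0, 3.0, 5.0)    # sample porous-medium exponents m > 1
    for g in (0.25, 0.5, 1.0, 1.5)   # sample values of gamma
    for k in range(1, 200)           # p ranges over (2 - gamma, 2 - gamma + 2)
)
```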
\section{Application to Porous Medium Equations}\label{App_PME}
In this section, we provide proofs of our main results by applying the averaging lemmata obtained in the previous section to entropy solutions to \eqref{pme_sys}.
\begin{proof}[Proof of Theorem \ref{lem:pme}]
We first argue that we have $u\in \LR{s}_{t,x}$ for all $s\in[1,m-1+\rho)$. Since $T<\infty$, Theorem \ref{thm:wp-kinetic} gives
\begin{align}\label{lem:pme_L1bound}
\norm{u}_{\LR{1}_{t,x}}\lesssim \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}} \lesssim \norm{u_0}_{\LR{1}_x}+\norm{S}_{\LR{1}_{t,x}},
\end{align}
so that we may concentrate on $s>1$.
Let $f$ be the kinetic function corresponding to $u$ and solving \eqref{pme_sys_kin}.
In order to apply Corollary \ref{cor:av0} with $\mu=1$ and $\sigma_x=0$, we need to extend \eqref{pme_sys_kin} to all times $t\in \mathbb{R}$, which can be achieved by multiplication with a smooth cut-off function $\varphi\in \CR \infty_c(0,T)$ with $0\le \varphi\le 1$. Hence, we set $g_0:=\delta_{v=u(t,x)}S+\partial_t\varphi f$ and $g_1:=q$. Let $\gamma:=2-\rho$, so that $s\in (1,m+1-\gamma)$.
From \eqref{cor:av0_main_est} we obtain
\begin{align*}
\norm{\varphi u}_{L^{s}_{t,x}}&\lesssim \||v|^{\rho-1}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{\rho-2}g_{1}\|_{{\mathcal M}_{TV}} + \|\varphi f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\varphi u\|_{\LR{1}_{t,x}\cap \LR{s}_{t}\LR{1}_{x}} \\
&\lesssim \||v|^{\rho-1}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{\rho-2}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}}.
\end{align*}
We note that since trivially $f\in \LR{\infty}_{t,x,v}$ with norm bounded by $1$, estimate \eqref{lem:pme_L1bound} gives
\begin{align*}
\lefteqn{\norm{f}_{\LR{1}_{t,x,v} \cap \LR{\infty}_{t,x,v}} + \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}}} \\
& & \lesssim \norm{u}_{\LR{1}_{t,x}}+ 1 + \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}}\lesssim \norm{u_0}_{\LR{1}_x}+\norm{S}_{\LR{1}_{t,x}}+ 1.
\end{align*}
Next, we check that $|v|^{\rho-1}g_0 \in {\mathcal M}_{TV}$. Indeed, we observe that $(\rho-1)\rho'=\rho$, and hence, applying Lemma \ref{lem:ph-est},
\begin{align*}
\||v|^{\rho-1}g_0\|_{{\mathcal M}_{TV}}&=\||v|^{\rho-1}(\delta_{v=u(t,x)}S+\partial_t\varphi f)\|_{{\mathcal M}_{TV}}\lesssim \||u|^{\rho-1}S\|_{L^{1}_{t,x}}+ \|\partial_t\varphi |u|^{\rho}\|_{L^1_{t,x}} \\
&\lesssim \||u|^{(\rho-1)\rho'}\|_{L^{1}_{t,x}} +\||S|^{\rho}\|_{L^{1}_{t,x}} + \|\partial_t\varphi |u|^{\rho}\|_{L^1_{t,x}} \\
&\lesssim \|u_0\|_{L^{\rho}_{x}}^{\rho} + \|S\|_{L^{\rho}_{t,x}}^{\rho} + \|\partial_t\varphi |u|^{\rho}\|_{L^1_{t,x}}.
\end{align*}
Utilizing Lemma \ref{lem:ph-est} once more to the effect of
\begin{align*}
\||v|^{\rho-2}g_1\|_{{\mathcal M}_{TV}}=\||v|^{\rho-2}q\|_{{\mathcal M}_{TV}}\lesssim \|u_0\|_{L^{\rho}_{x}}^{\rho} + \|S\|_{L^{\rho}_{t,x}}^{\rho},
\end{align*}
we obtain
\begin{align*}
\|\varphi u\|_{\LR{s}_{t,x}}\lesssim \|u_0\|_{L^1_x\cap L^{\rho}_{x}}^{\rho} + \|S\|_{L^1_{t,x}\cap L^{\rho}_{t,x}}^{\rho}+\|\partial_t\varphi |u|^{\rho}\|_{L^1_{t,x}}+1.
\end{align*}
We may set $\varphi_n(t)=\psi(nt)-\psi(nt-T/2)$, where $\psi\in\CR \infty(\mathbb{R})$ with $0\le \psi\le 1$, $\supp\psi\subset(0,\infty)$, $\psi(t)=1$ for $t>T/2$ and $\|\partial_t\psi\|_{L^1}=1$. For $n\to\infty$, $\varphi_n$ converges to $1_{[0,T]}$ in the supremum norm, while $\partial_t\varphi_n$ is a smooth approximation of $\delta_{\set{t=0}}-\delta_{\set{t=T}}$. Therefore, $\|\varphi_n u\|_{\LR{s}_{t,x}}\to \|u\|_{\LR{s}_{t,x}}$ and by an application of Lemma \ref{lem:ph-est} $\|\partial_t\varphi_n |u|^{\rho}\|_{L^1_{t,x}}\to \||u|(0)^\rho-|u|(T)^\rho\|_{L^1_{x}}\lesssim \|u_0\|_{L^\rho_{x}}^\rho+\|S\|_{L^{\rho}_{t,x}}^{\rho}$, so that $u\in L^{s}([0,T]\times \mathbb{R}^d)$ and
\begin{align}\label{lem:pme_Lmbound}
\|u\|_{\LR{s}_{t,x}}\lesssim \|u_0\|_{L^1_x\cap L^{\rho}_{x}}^{\rho} + \|S\|_{L^1_{t,x}\cap L^{\rho}_{t,x}}^{\rho}+1.
\end{align}
\newcounter{counter:lem_pme}
\refstepcounter{counter:lem_pme}
(\arabic{counter:lem_pme}).
We apply Corollary \ref{cor:av0} once more. Let $f$, $\varphi$, $g_0$, $g_1$ and $\gamma$ be as above. Then, in particular $p\in (1,\frac{m+1-\gamma}{\mu})$.
From \eqref{cor:av0_main_est} we obtain
\begin{align*}
\lefteqn{\norm{\varphi u^{[\mu]}}_{L^{p}_{t} (\WSR{\sigma_x}{p})}}\\
&&\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|u^{[\mu]}\|_{\LR{1}_{t,x}\cap \LR{p}_t\LR{1}_x}.
\end{align*}
The first three contributions on the right hand side are estimated as above. For the last contribution, we note $1\le \mu<p\mu$ and thus
\begin{align*}
\|u^{[\mu]}\|_{\LR{1}_{t,x}\cap \LR{p}_t\LR{1}_x}&\lesssim \|u^{[\mu]}\|_{\LR{p}_t\LR{1}_x}=\|u\|_{\LR{p\mu}_{t}\LR{\mu}_x}^{\mu}\lesssim (\|u\|_{\LR{p\mu}_{t}\LR{1}_x} + \|u\|_{\LR{p\mu}_{t,x}})^{\mu} \\
&\lesssim (\sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_x} + \|u\|_{\LR{p\mu}_{t,x}})^{\mu} \lesssim \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_x}^\mu + \|u\|_{\LR{p\mu}_{t,x}}^\mu +1.
\end{align*}
Furthermore, \eqref{lem:pme_L1bound} together with \eqref{lem:pme_Lmbound} applied with $s=p\mu\in(1,m-1+\rho)$ shows
\begin{align*}
\sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_x}^\mu + \|u\|_{\LR{p\mu}_{t,x}}^\mu +1 \lesssim \norm{u_0}_{L_x^1\cap L_x^{\rho}}^{\mu\rho} + \norm{S}_{L_{t,x}^1\cap L_{t,x}^{\rho}}^{\mu\rho} + 1.
\end{align*}
Hence, arguing as above by taking the limit $\varphi_n\to 1_{[0,T]}$, we obtain $u^{[\mu]}\in L^{p}(\mathbb{R}; \WSR{\sigma_x}{p}(\mathbb{R}^d))$ and \eqref{lem:pme_est2}.
\refstepcounter{counter:lem_pme}
(\arabic{counter:lem_pme}).\label{lem:pme_2} The proof is similar to the first part, but we use Corollary \ref{cor:av} instead of Corollary \ref{cor:av0}. Again we localize in time by multiplying with a smooth cut-off function $\varphi\in \CR \infty_c(0,T)$ with $0\le\varphi\le 1$ and set $g_0$ and $g_1$ as before. Choose $\gamma:=2-\rho$, so that $p\in(2-\gamma,m+1-\gamma)$.
From \eqref{cor:av_main_est} in Corollary \ref{cor:av} we obtain
\begin{align*}
\norm{\varphi u}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})}&\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v} \cap \LR{\infty}_{t,x,v}} + \|u\|_{\LR{r}_{t,x}},
\end{align*}
where $r\in (\rho,m-1+\rho)$.
The terms involving $g_0$, $g_1$ and $f$ can be estimated as above, while the $\LR{r}_{t,x}$-norm of $u$ can be estimated by \eqref{lem:pme_Lmbound}.
Choosing $\varphi_n$ as above, we hence infer that $\varphi_n u$ is bounded in $\WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d))$ and
\begin{align*}
\sup_{n\in\mathbb{N}}\norm{\varphi_n u}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})}&\lesssim \|u_0\|_{L^1_x\cap L^{\rho}_{x}}^{\rho} + \|S\|_{L^1_{t,x}\cap L^{\rho}_{t,x}}^{\rho}+1.
\end{align*}
Since $\varphi_n u\to u 1_{[0,T]}$ in the sense of distributions, we obtain the result by the weak lower semi-continuity of the norm in $\WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d))$. \qedhere
\end{proof}
%
\begin{proof}[Proof of Corollary \ref{cor:pme}]\mbox{ }
\newcounter{counter:cor_pme}
\refstepcounter{counter:cor_pme}
\medskip
(\arabic{counter:cor_pme}).
Let $\sigma_x\in[0,\frac{2\mu}{m})$. We apply Theorem \ref{lem:pme}~(i) with $p=\frac{m}{\mu}$ for sufficiently small $\eta\in(1,\rho]$ so that $\sigma_x<\frac{\mu p -1}{p}\frac{2}{m-2+\eta}=\frac{2\mu}{m}\frac{m-1}{m-2+\eta}$ and observe that for all $q\in[1,p]$ we have the embedding $L^{p}(0,T; \WSR{\sigma_x}{p}(\mathbb{R}^d))\subset L^{q}(0,T; \WSR{\sigma_x}{q}({\mathcal O}))$.
\medskip
\refstepcounter{counter:cor_pme}
(\arabic{counter:cor_pme})\label{cor:pme_2}. For $s>0$ we have, with $p=s(m-1)+1\in(1,m]$,
\begin{align*}
\kappa_t=\frac{1-s}{s(m-1)+1}=\frac{m-p}{p}\frac{1}{m-1}, \quad
\kappa_x=\frac{2s}{s(m-1)+1}=\frac{p-1}{p}\frac{2}{m-1}.
\end{align*}
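These two closed forms for $\kappa_t$ and $\kappa_x$ follow from $m-p=(m-1)(1-s)$ and $p-1=s(m-1)$; a small numerical check of the identities (illustrative only, assuming $m>1$ and $s>0$):

```python
# Illustrative check: with p = s*(m-1) + 1, verify
#   (1 - s)/p == (m - p)/p * 1/(m - 1)   and   2*s/p == (p - 1)/p * 2/(m - 1),
# which hold since m - p = (m - 1)*(1 - s) and p - 1 = s*(m - 1).

def identities_hold(m, s, tol=1e-9):
    p = s * (m - 1) + 1
    kt_lhs = (1 - s) / p
    kt_rhs = (m - p) / p / (m - 1)
    kx_lhs = 2 * s / p
    kx_rhs = (p - 1) / p * 2 / (m - 1)
    return abs(kt_lhs - kt_rhs) < tol and abs(kx_lhs - kx_rhs) < tol

ok_kappa = all(identities_hold(m, s)
               for m in (1.5, 2.0, 3.0) for s in (0.1, 0.5, 1.0, 2.0))
```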
Hence, in this case the assertion follows by an application of Theorem \ref{lem:pme}~(ii) with sufficiently small $\eta\in(1,\rho]$ such that $p>\rho$ and $\sigma_x<\frac{p-\rho}{p}\frac{2}{m-1}$ combined with the embedding \[\WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d))\subset \WSR{\sigma_t}{q}(0,T;\WSR{\sigma_x}{q}({\mathcal O})).\]
If $s=0$ and $\sigma_t\in[0,1)$, we may choose $s_0>0$ such that $\sigma_t<\frac{1-s_0}{s_0(m-1)+1}=:\kappa_t(s_0)$, and the result follows by the embedding
\[
\WSR{\kappa_t(s_0)}{s_0(m-1)+1}(0,T;\LR{s_0(m-1)+1}({\mathcal O}))\subset \WSR{\sigma_t}{1}(0,T;\LR{1}({\mathcal O})). \qedhere
\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{cor:pme_l1}]
The proof is similar to the one of Theorem \ref{lem:pme} (\ref{lem:pme_2}), but we discriminate between small and large velocity contributions to the kinetic function. Let $f$ be the kinetic function corresponding to $u$ and solving \eqref{pme_sys_kin}. We again extend to all times $t\in \mathbb{R}$ by multiplying with a smooth cut-off function $\varphi\in \CR \infty_c(0,T)$ with $0\le \varphi\le 1$. Further, we split $f=:f^<+f^>$ and $q=:q^<+q^>$ into a small-velocity and a large-velocity part by multiplying with smooth cut-off functions $\psi_0$ and $\psi_1:=1-\psi_0$, respectively, in $v$. This gives rise to the two equations
\begin{align*}
\partial_t(\varphi f^<)-m|v|^{m-1}\Delta_x(\varphi f^<) &= \varphi\psi_0\delta_{v=u(t,x)}S + \partial_v (\varphi q^<) - \varphi q\partial_v\psi_0 + \partial_t\varphi f^<, \\
\partial_t(\varphi f^>)-m|v|^{m-1}\Delta_x(\varphi f^>) &= \varphi\psi_1\delta_{v=u(t,x)}S + \partial_v (\varphi q^>) + \varphi q\partial_v\psi_0 + \partial_t\varphi f^>.
\end{align*}
Integrating $f^<$ and $f^>$ in $v$, we obtain a decomposition $u=u^<+u^>$.
The proof proceeds in several steps: In the first three steps, we argue that $u\in \LR{s}(0,T;\LR{s}(\mathbb{R}^d))$ for all $s\in[1,m+\frac{2}{d})$ if $d\ge 2$ and $s\in[1,m+1)$ if $d=1$. With this additional bound, we can conclude the higher-order estimates in the last three steps of the proof. We only detail the proof for $d\ge 2$, the case $d=1$ being similar.
\newcounter{cor:pme_l1_prf}
\refstepcounter{cor:pme_l1_prf}
\textit{Step} \arabic{cor:pme_l1_prf}\label{cor:pme_l1_prf_st1}. In this step we establish for $\rho\in (m,\frac{md}{d-2})$ the bound
\begin{align}\label{cor:pme_l1_s1}
\|u^<\|_{\LR{m}_{t}\LR{\rho}_x}\lesssim \|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}} +1.
\end{align}
Set $g_0:=\varphi\psi_0\delta_{v=u(t,x)}S + \partial_t\varphi f^< - \varphi q\partial_v\psi_0$, $g_1:=\varphi q^<$, and
\begin{align*}
\sigma_x:=\frac{d}{m}-\frac{d}{\rho}\in\vpp{0,\frac{2}{m}}.
\end{align*}
Consequently, we may choose $\gamma\in(0,1)$ so large that $\sigma_x\in[0,\frac{m-1}{m}\frac{2}{m-\gamma})$.
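That $\sigma_x$ indeed lies in $(0,\frac{2}{m})$ for $\rho\in(m,\frac{md}{d-2})$, and that a suitable $\gamma\in(0,1)$ then exists because $\frac{m-1}{m}\frac{2}{m-\gamma}\to\frac{2}{m}$ as $\gamma\to 1$, can be checked numerically; the sketch below is illustrative only (it assumes $d>2$ and $m>1$):

```python
# Illustrative check: for rho in (m, m*d/(d-2)), sigma_x = d/m - d/rho
# lies in (0, 2/m), and some gamma in (0, 1) satisfies
# sigma_x < (m - 1)/m * 2/(m - gamma), since that bound tends to 2/m
# as gamma -> 1.

def sigma_x_in_range(m, d, rho):
    sx = d / m - d / rho
    return 0 < sx < 2 / m

def gamma_exists(m, sx, steps=10000):
    # search for gamma in (0, 1) with sx < (m-1)/m * 2/(m - gamma)
    return any(sx < (m - 1) / m * 2 / (m - g)
               for g in (i / steps for i in range(1, steps)))

ok_sigma = all(
    sigma_x_in_range(m, d, rho) and gamma_exists(m, d / m - d / rho)
    for m, d in ((2.0, 3), (3.0, 3), (2.0, 4))
    for rho in (m * 1.01, 0.5 * (m + m * d / (d - 2)), m * d / (d - 2) * 0.99)
)
```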
From Corollary \ref{cor:av0} applied with $\mu=1$ and $q=m$ we obtain
\begin{align*}
\lefteqn{\norm{\varphi u^<}_{L^{m}_{t}\WSR{\sigma_x}{m}_x}}\\
&&\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|\varphi f^<\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\varphi u^<\|_{\LR{1}_{t,x}\cap \LR{m}_{t}\LR{1}_{x}} \\
&&\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}}.
\end{align*}
We note that since trivially $f^<\in \LR{\infty}_{t,x,v}$ with norm bounded by $1$ we have by Theorem \ref{thm:wp-kinetic}
\begin{align*}
\norm{f}_{\LR{1}_{t,x,v} \cap \LR{\infty}_{t,x,v}} + \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}}\lesssim \norm{u}_{\LR{1}_{t,x}}+ 1 + \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}}\lesssim \norm{u_0}_{\LR{1}_x}+ \|S\|_{L^1_{t,x}}+ 1.
\end{align*}
Next, we check that $|v|^{1-\gamma}g_0 \in {\mathcal M}_{TV}$. Indeed, since $|v|^{1-\gamma}$ can be estimated by a constant on the supports of $\psi_0$ and $\partial_v\psi_0$, we may apply Lemma \ref{lem:ph-est-1} to the effect of
\begin{align*}
\||v|^{1-\gamma}g_0\|_{{\mathcal M}_{TV}}&=\||v|^{1-\gamma}(\varphi\psi_0\delta_{v=u(t,x)}S + \partial_t\varphi f^< - \varphi q\partial_v\psi_0)\|_{{\mathcal M}_{TV}} \\
&\lesssim \|S\|_{L^1_{t,x}} + \|\partial_t\varphi |u|\|_{L^1_{t,x}}+\|q\partial_v\psi_0\|_{{\mathcal M}_{TV}}\\
&\lesssim \|\partial_t\varphi |u|\|_{L^1_{t,x}}+\|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}}.
\end{align*}
Utilizing Lemma \ref{lem:ph-est-1} once more to the effect of
\begin{align*}
\||v|^{-\gamma}g_1\|_{{\mathcal M}_{TV}}\lesssim \||v|^{-\gamma}q^<\|_{{\mathcal M}_{TV}}\lesssim \|u_0\|_{L^{1}_{x}}+ \|S\|_{L^1_{t,x}},
\end{align*}
we obtain by Sobolev embedding
\begin{align}\label{lem:pme_Lmbound_l1_11}
\|\varphi u^<\|_{\LR{m}_{t}\LR{\rho}_x}\lesssim \norm{\varphi u^<}_{L^{m}_{t}\WSR{\sigma_x}{m}_x}\lesssim \|u_0\|_{L^1_x} + \|\partial_t\varphi |u|\|_{L^1_{t,x}}+ \|S\|_{L^1_{t,x}}+1.
\end{align}
With the same construction $\varphi_n \to 1_{[0,T]}$ as in the proof of Theorem \ref{lem:pme}, this gives \eqref{cor:pme_l1_s1}.
\refstepcounter{cor:pme_l1_prf}
\textit{Step} \arabic{cor:pme_l1_prf}\label{cor:pme_l1_prf_st2}. Next, we investigate $u^>$ and establish for $\eta\in(1,m)$ and $\eta^*=\frac{\eta d(m-1)}{d(m-1)-2(\eta-1)}$ the bound
\begin{align}\label{cor:pme_l1_s2}
\|u^>\|_{\LR{\eta}_{t}\LR{\eta^*}_x}\lesssim \|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}} +1.
\end{align}
Set $g_0:=\varphi\psi_1\delta_{v=u(t,x)}S + \partial_t\varphi f^> + \varphi q\partial_v\psi_0$ and $g_1:=\varphi q^>$.
Choose $\gamma\in(1,m)$ sufficiently small, so that $\eta\in(1,m+1-\gamma)$, and define
\begin{align*}
\sigma_x:=\frac{\eta-1}{\eta}\frac{2}{m-1}\in\vpp{0,\frac{\eta-1}{\eta}\frac{2}{m-\gamma}}.
\end{align*}
We apply Corollary \ref{cor:av0} with $\mu=1$ and $q=\eta$, which gives
\begin{align*}
\lefteqn{\norm{\varphi u^>}_{L^{\eta}_{t}\WSR{\sigma_x}{\eta}_x}} \\
&&\quad \lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|\varphi f^>\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \|\varphi u^>\|_{\LR{1}_{t,x}\cap \LR{\eta}_{t}\LR{1}_{x}} \\
&&\lesssim \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} + \sup_{t\in[0,T]}\|u(t)\|_{\LR{1}_{x}}.
\end{align*}
The terms involving $f$ and $u$ are estimated as in Step \ref{cor:pme_l1_prf_st1}. Further, since $|v|^{1-\gamma}$ can be estimated by a constant on the support of $\psi_1$ and $\partial_v\psi_0$, we have by Lemma \ref{lem:ph-est-1}
\begin{align*}
\||v|^{1-\gamma}g_0\|_{{\mathcal M}_{TV}}&=\||v|^{1-\gamma}(\varphi\psi_1\delta_{v=u(t,x)}S + \partial_t\varphi f^> + \varphi q\partial_v\psi_0)\|_{{\mathcal M}_{TV}} \\
&\lesssim \|S\|_{L^1_{t,x}} +\|\partial_t\varphi |u|\|_{L^1_{t,x}}+\|q\partial_v\psi_0\|_{{\mathcal M}_{TV}}\\
&\lesssim \|\partial_t\varphi |u|\|_{L^1_{t,x}}+\|u_0\|_{L^1_x}+ \|S\|_{L^1_{t,x}},
\end{align*}
and, again due to Lemma \ref{lem:ph-est-1},
\begin{align*}
\||v|^{-\gamma}g_1\|_{{\mathcal M}_{TV}}\lesssim \||v|^{-\gamma}q^>\|_{{\mathcal M}_{TV}}\lesssim \|u_0\|_{L^{1}_{x}} + \|S\|_{L^1_{t,x}}.
\end{align*}
Since $\eta^*=\frac{\eta d}{d-\sigma_x\eta}$, we have by Sobolev embedding $\WSR{\sigma_x}{\eta}_x\subset L^{\eta^*}_x$, and hence
\begin{align*}
\|\varphi u^>\|_{\LR{\eta}_{t}\LR{\eta^*}_x}\lesssim \|\varphi u^>\|_{L^{\eta}_{t}\WSR{\sigma_x}{\eta}_x}\lesssim \|u_0\|_{L^1_x} + \|\partial_t\varphi |u|\|_{L^1_{t,x}}+ \|S\|_{L^1_{t,x}}+1.
\end{align*}
With the same construction $\varphi_n \to 1_{[0,T]}$ as before, this yields \eqref{cor:pme_l1_s2}.
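The identification of the Sobolev exponent $\eta^*=\frac{\eta d}{d-\sigma_x\eta}$ with the closed form $\frac{\eta d(m-1)}{d(m-1)-2(\eta-1)}$ used in this step amounts to clearing the factor $m-1$; a brief numerical check (illustrative only, assuming $1<\eta<m$ and $d\ge 2$):

```python
# Illustrative check: with sigma_x = (eta - 1)/eta * 2/(m - 1), the exponent
# eta* = eta*d / (d - sigma_x*eta) agrees with the closed form
# eta*d*(m-1) / (d*(m-1) - 2*(eta-1)) used in Step 2.

def eta_star_forms(m, d, eta, tol=1e-9):
    sx = (eta - 1) / eta * 2 / (m - 1)
    lhs = eta * d / (d - sx * eta)
    rhs = eta * d * (m - 1) / (d * (m - 1) - 2 * (eta - 1))
    return abs(lhs - rhs) < tol

ok_eta = all(eta_star_forms(m, d, 1 + t * (m - 1))   # eta in (1, m)
             for m in (2.0, 3.0) for d in (2, 3, 4) for t in (0.1, 0.5, 0.9))
```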
\refstepcounter{cor:pme_l1_prf}
\textit{Step} \arabic{cor:pme_l1_prf}\label{cor:pme_l1_prf_st3}. In this step, we show that for $s\in [1,m+\frac{2}{d})$ we have
\begin{align}\label{lem:pme_Lmbound_l1}
\|u\|_{\LR{s}_{t,x}}\lesssim \|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}}+1.
\end{align}
Observe that it suffices to show the assertion for $s>m$, since $u\in \LR{1}(0,T;\LR{1}(\mathbb{R}^d))$ is already established by Theorem \ref{thm:wp-kinetic}.
Define $\rho:=\frac{m}{m+1-s}\in(m,\frac{md}{d-2})$. For $\vartheta\in(0,1)$, it holds $[\LR{\infty}_t\LR{1}_x,\LR{m}_t\LR{\rho}_x]_\vartheta=\LR{p_\vartheta}_t\LR{q_\vartheta}_x$ with
\begin{align*}
\frac{1}{p_\vartheta}=\frac{\vartheta}{m} \qquad \text{and } \quad \frac{1}{q_\vartheta}=1-\vartheta+\frac{\vartheta}{\rho}.
\end{align*}
Choosing $\vartheta:=\frac{m\rho}{m\rho +\rho-m}\in(0,1)$, we obtain $p_\vartheta=q_\vartheta=s$, and hence by \eqref{cor:pme_l1_s1} and Theorem \ref{thm:wp-kinetic}
\begin{align}\label{lem:pme_Lmbound_l1_1}
\|u^<\|_{\LR{s}_{t,x}}\lesssim \|u^<\|_{\LR{\infty}_{t}\LR{1}_x}+\|u^<\|_{\LR{m}_{t}\LR{\rho}_x}\lesssim \|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}} + 1.
\end{align}
Next, we define
\begin{align*}
\eta:=\frac{sd(m-1)+2}{d(m-1)+2}\in(1,m) \qquad \text{and } \quad \eta^*=\frac{\eta d}{d-2\frac{\eta-1}{m-1}}
\end{align*}
and observe that for $\vartheta\in(0,1)$, it holds $[\LR{\infty}_t\LR{1}_x,\LR{\eta}_t\LR{\eta^*}_x]_\vartheta = \LR{p_\vartheta}_t\LR{q_\vartheta}_x$ with
\begin{align*}
\frac{1}{p_\vartheta}=\frac{\vartheta}{\eta} \qquad \text{and } \quad \frac{1}{q_\vartheta}=1-\vartheta+\frac{\vartheta}{\eta^*}.
\end{align*}
Choosing $\vartheta:=\frac{\eta d(m-1)}{\eta d(m-1) + 2(\eta-1)}\in(0,1)$, we obtain $p_\vartheta=q_\vartheta=s$, and hence by \eqref{cor:pme_l1_s2} and Theorem \ref{thm:wp-kinetic}
\begin{align}\label{lem:pme_Lmbound_l1_2}
\|u^>\|_{\LR{s}_{t,x}}\lesssim \|u^>\|_{\LR{\infty}_{t}\LR{1}_x}+\|u^>\|_{\LR{\eta}_{t}\LR{\eta^*}_x}\lesssim \|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}}+1.
\end{align}
Combining \eqref{lem:pme_Lmbound_l1_1} and \eqref{lem:pme_Lmbound_l1_2}, we obtain \eqref{lem:pme_Lmbound_l1}.
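The two interpolation computations in this step can be verified by direct arithmetic; the following sketch (illustrative only; it samples $s\in(m,m+\frac{2}{d})$ with $d>2$) confirms that both choices of $\vartheta$ give $p_\vartheta=q_\vartheta=s$:

```python
# Illustrative verification of the interpolation exponents:
#   theta1 = m*rho/(m*rho + rho - m), rho = m/(m + 1 - s), gives p = q = s;
#   theta2 = eta*d*(m-1)/(eta*d*(m-1) + 2*(eta-1)), with
#   eta = (s*d*(m-1) + 2)/(d*(m-1) + 2) and
#   eta* = eta*d*(m-1)/(d*(m-1) - 2*(eta-1)), also gives p = q = s.

def exponents_match(m, d, s, tol=1e-9):
    rho = m / (m + 1 - s)
    th1 = m * rho / (m * rho + rho - m)
    p1 = m / th1                          # from 1/p = theta/m
    q1 = 1 / (1 - th1 + th1 / rho)        # from 1/q = 1 - theta + theta/rho
    eta = (s * d * (m - 1) + 2) / (d * (m - 1) + 2)
    eta_star = eta * d * (m - 1) / (d * (m - 1) - 2 * (eta - 1))
    th2 = eta * d * (m - 1) / (eta * d * (m - 1) + 2 * (eta - 1))
    p2 = eta / th2
    q2 = 1 / (1 - th2 + th2 / eta_star)
    return max(abs(x - s) for x in (p1, q1, p2, q2)) < tol

ok_interp = all(exponents_match(m, d, m + t * 2 / d)   # s in (m, m + 2/d)
                for m in (2.0, 3.0) for d in (3, 4) for t in (0.25, 0.5, 0.75))
```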
\refstepcounter{cor:pme_l1_prf}
\textit{Step} \arabic{cor:pme_l1_prf}\label{cor:pme_l1_prf_st4}. In this step we argue that
\begin{align*}
\norm{\varphi u^<}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})}&\lesssim \|\partial_t\varphi |u|\|_{L^1_{t,x}} + \|u_0\|_{L^1_x}^m + \|S\|_{L^1_{t,x}}^m+1.
\end{align*}
Indeed, we choose $\gamma\in(0,1)$ so large that $\sigma_x<\frac{p-2+\gamma}{p}\frac{2}{m-1}$ and $m+1-\gamma<m+\frac{2}{d}$. Then we apply Corollary \ref{cor:av3} with $g_0:=\varphi\psi_0\delta_{v=u(t,x)}S + \partial_t\varphi f^< - \varphi q \partial_v \psi_0$, $g_1:=\varphi q^<$ and $\tilde p=p$. We obtain by \eqref{cor:av3_main_est} some $r\in (p,m+1-\gamma)$ such that
\begin{align*}
\norm{\varphi u^<}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})}&\lesssim \|g_{0}\|_{{\mathcal M}_{TV}} + \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} \\
&\quad + \|u\|_{\LR{1}_t\LR{p}_x\cap \LR{r}_{t,x}} + \||u|^m\|_{\LR{1}_{t,x}}.
\end{align*}
The first four terms on the right-hand side can be estimated as in Step \ref{cor:pme_l1_prf_st1} (indeed, we did not use the coefficient $|v|^{1-\gamma}$ in the estimate of $g_0$) via
\begin{align*}
\|g_{0}\|_{{\mathcal M}_{TV}} + \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}&+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} \\
&\lesssim \|\partial_t\varphi |u|\|_{L^1_{t,x}} + \|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}}+1,
\end{align*}
while the last two terms are estimated, by virtue of $r<m+1-\gamma<m+\frac{2}{d}$, through \eqref{lem:pme_Lmbound_l1} as
\begin{align*}
\|u\|_{\LR{1}_t\LR{p}_x\cap \LR{r}_{t,x}} + \||u|^m\|_{\LR{1}_{t,x}}\lesssim \|u\|_{\LR{p}_{t,x}\cap \LR{r}_{t,x}} + \|u\|_{\LR{m}_{t,x}}^m\lesssim \|u_0\|_{L^1_x}^m + \|S\|_{L^1_{t,x}}^m+1.
\end{align*}
\refstepcounter{cor:pme_l1_prf}
\textit{Step} \arabic{cor:pme_l1_prf}\label{cor:pme_l1_prf_st5}.
In this step we establish
\begin{align}\label{cor:pme_l1_prf_s5}
\norm{\varphi u^>}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})}\lesssim \|\partial_t\varphi |u|\|_{L^1_{t,x}} + \|u_0\|_{L^1_x}^m + \|S\|_{L^1_{t,x}}^m +1.
\end{align}
Assume first $p<m$. Choose $\gamma\in(1,m)$ so small that $p\in(1,m+1-\gamma)$ and $\sigma_t<\frac{m+1-\gamma-p}{p}\frac{1}{m-1}$ and apply Corollary \ref{cor:av3} with $g_0:=\varphi\psi_1\delta_{v=u(t,x)}S + \partial_t\varphi f^> + \varphi q \partial_v \psi_0$, $g_1:=\varphi q^>$ and $\tilde p=p$. Estimate \eqref{cor:av3_main_est} gives
\begin{align*}
\norm{\varphi u^>}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})}&\lesssim \|g_{0}\|_{{\mathcal M}_{TV}} + \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}}+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} \\
& \quad + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}}
+ \|u\|_{\LR{1}_t\LR{p}_x\cap \LR{r}_{t,x}} + \||u|^m\|_{\LR{1}_{t,x}}.
\end{align*}
The first four terms on the right-hand side are estimated as in Step \ref{cor:pme_l1_prf_st2} via
\begin{align*}
\|g_{0}\|_{{\mathcal M}_{TV}} + \||v|^{1-\gamma}g_{0}\|_{{\mathcal M}_{TV}} &+\||v|^{-\gamma}g_{1}\|_{{\mathcal M}_{TV}} + \|f\|_{\LR{1}_{t,x,v}\cap \LR{\infty}_{t,x,v}} \\
&\lesssim \|\partial_t\varphi |u|\|_{L^1_{t,x}} + \|u_0\|_{L^1_x} + \|S\|_{L^1_{t,x}}+1,
\end{align*}
while the last two terms are estimated through \eqref{lem:pme_Lmbound_l1} as
\begin{align*}
\|u\|_{\LR{1}_t\LR{p}_x\cap \LR{r}_{t,x}} + \||u|^m\|_{\LR{1}_{t,x}}\lesssim \|u\|_{\LR{p}_{t,x}\cap \LR{r}_{t,x}} + \|u\|_{\LR{m}_{t,x}}^m\lesssim \|u_0\|_{L^1_x}^m + \|S\|_{L^1_{t,x}}^m+1.
\end{align*}
Hence, we have shown \eqref{cor:pme_l1_prf_s5} in the case $p\in(1,m)$. If $p=m$, we choose $p_0\in(1,m)$ sufficiently large such that for $\kappa_x(p_0):=\frac{p_0-1}{p_0}\frac{2}{m-1}$ we have $\kappa_x(p_0)-\frac{d}{p_0}>\sigma_x-\frac{d}{m}$. We observe that for $\kappa_t(p_0):=\frac{m-p_0}{p_0}\frac{1}{m-1}$ we have $\kappa_t(p_0)-\frac{1}{p_0}>\sigma_t-\frac{1}{m}$ due to $p_0<m$ (indeed, we necessarily have $\sigma_t=0$). Choosing sufficiently large $\sigma_x(p_0)<\kappa_x(p_0)$ and $\sigma_t(p_0)<\kappa_t(p_0)$, we conclude by Sobolev embedding
\begin{align*}
\norm{\varphi u^>}_{\LR{m}_t(\WSR{\sigma_x}{m}_x)}&\lesssim \norm{\varphi u^>}_{\WSR{\sigma_t(p_0)}{p_0}(\WSR{\sigma_x(p_0)}{p_0})}\\
&\lesssim \|\partial_t\varphi |u|\|_{L^1_{t,x}} + \|u_0\|_{L^1_x}^m + \|S\|_{L^1_{t,x}}^m +1,
\end{align*}
which is \eqref{cor:pme_l1_prf_s5} in the case $p=m$.
\refstepcounter{cor:pme_l1_prf}
\textit{Step} \arabic{cor:pme_l1_prf}\label{cor:pme_l1_prf_st6}. Conclusion.
With the same construction $\varphi_n \to 1_{[0,T]}$ as in the proof of Theorem \ref{lem:pme}, Steps \ref{cor:pme_l1_prf_st4} and \ref{cor:pme_l1_prf_st5} combine to
\begin{align*}
\sup_{n\in\mathbb{N}}\norm{\varphi_n u}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})}&\lesssim \sup_{n\in\mathbb{N}}\norm{\varphi_n u^<}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})} + \sup_{n\in\mathbb{N}} \norm{\varphi_n u^>}_{\WSR{\sigma_t}{p}(\WSR{\sigma_x}{p})} \\
&\lesssim \|u_0\|_{L^1_x}^m + \|S\|_{L^1_{t,x}}^m +1.
\end{align*}
Since $\varphi_n u\to u 1_{[0,T]}$ in the sense of distributions, we obtain \eqref{cor:pme_est1_l1} by the weak lower semi-continuity of the norm in $\WSR{\sigma_t}{p}(0,T;\WSR{\sigma_x}{p}(\mathbb{R}^d))$.
Estimate \eqref{cor:pme_est2_l1} follows analogously to the proof of Corollary \ref{cor:pme} (\ref{cor:pme_2}). \qedhere
\end{proof}
\section{Introduction}
A terminology has developed which names one extreme
of the distribution of Seyfert galaxies
narrow-line Seyfert 1 galaxies (NLSy1s), loosely defined as having
FWHM H$\beta <2000$ km/s, \verb+[+O{\sc iii}\verb+]+/H$\beta\
< 3$, and strong Fe{\sc ii} emission.
appear to have systematically different, or very extreme, X-ray attributes
compared to the rest of the Seyfert population (which we dub BLSy1s, for
convenience). Examination of
X-ray properties across the Seyfert population
reveals anti-correlations both between X-ray index and
FWHM H$\beta$ (Boller et al.\ 1996; Brandt et al.\ 1997)
and between variability amplitude $\sigma^2_{rms}$ and FWHM H$\beta$
(Turner et al. 1999). In addition to the obvious range of
index and variability properties across the Seyfert population, it is
interesting to note those X-ray attributes present in only one subset of
sources. For example, it is well established that large amounts of
absorbing material attenuate the X-ray spectra of
many BLSy1s (with varying degrees of ionization), yet there is little evidence
of any line-of-sight gas to the nuclei of NLSy1s.
Theoretical attempts to explain the difference in X-ray properties of NLSy1s
include the idea that these sources accrete
at a higher rate, given their black hole mass, than the rest of the
Seyfert population
(e.g. Pounds, Done \& Osborne 1995). This may be consistent with
the observation of Fe K$\alpha$ emission
from (apparently) highly ionized
material in NLSy1s (Comastri et al\ 1998,
Turner et al\ 1998) and the relatively large rapid X-ray variations.
There were also some attempts (Komossa \& Greiner 1999) to explain the unusually
steep continuum by dusty warm absorbers.
\section{ROSAT observations of Ark 564}
Arakelian 564 (IRAS 22403$+$2927, MCG +05--53--012; z=0.024) falls within the
NLSy1 ``extreme'' of Seyfert properties (Osterbrock \& Shuder 1982)
and has relatively strong Ca {\sc ii} and Fe {\sc ii}
emission lines (van Groningen 1993).
Ark 564 shows I(H$\alpha$)/I(H$\beta$)=4.4 (similar to many BLSy1s) and
\verb+[+O {\sc iii} \verb+]+/H$\beta \sim 12$.
Walter \& Fink (1993) suggested that the unusual UV ($\lambda1375 $)
to 2 keV flux ratio
in Ark 564 could be due to absorption of the UV continuum but
X-ray data show no evidence for absorption in excess of the Galactic
line-of-sight column $N_H(Gal)=6.4 \times 10^{20} {\rm cm^{-2}}$
(Stark et al. 1992).
Unusual dust-to-gas ratio and different
lines-of-sight to the UV and X-ray continua have also been suggested
(Walter \& Fink 1993).
A {\it ROSAT} PSPC pointed spectrum shows
significant structure indicative of
strong emission or absorption effects, although the PSPC data
did not allow a distinction to be made between the presence of
emission or absorption features (Brandt et al. 1994).
Interestingly, Crenshaw et al. (1999) detect absorption
lines in Si{\sc iv}, L$\alpha$, N{\sc v} and
C{\sc iv}, indicating the presence of at least some material in the
line-of-sight to the optical emission region.
Crenshaw et al (1999) find a
one-to-one correspondence between Seyfert galaxies that show intrinsic
UV absorption and X-ray ``warm absorbers,'' indicating that these two
phenomena are related. Thus the presence of the UV absorber
suggests we should search for the signatures of X-ray absorption in
Ark~564.
\section{ASCA Observation of Ark 564}
{\it ASCA} (Makishima et al. 1996) has two solid-state imaging
spectrometers (SISs) and two gas imaging spectrometers
(GISs) at the focus of four co-aligned X-ray telescopes.
This instrument combination yields data over a useful bandpass
$\sim 0.6 - 10$ keV. Ark 564 was observed over the period 1996
December 23-24. The data were reduced using standard techniques, as
described in Nandra et al. (1997a). Data screening yielded effective exposure
times $\sim 48$ ks in the SIS and $\sim 51$ ks in the GIS instruments.
Ark 564 yielded $1.96\pm0.01$ ct/s in SIS0 (0.6-10 keV band).
In this paper, all energy ranges are given in the observer's frame,
unless otherwise noted.
The {\it ASCA} data show
significant flux variability (Fig.~1), with factor-of-two
changes occurring in the 0.5 -- 10 keV flux, over timescales of
a few thousand seconds. Construction of a time series with
256s bins yielded a value for ``excess variance''
$\sigma^2_{rms} = (39.3\pm2.24) \times 10^{-3}$
(see Turner et al. 1999, and references therein for a definition
of this quantity and discussion of results
for a sample of NLSy1s). Rapid variations of this amplitude
are a property of NLSy1s (e.g. Boller et al. 1996). Light curves were also
constructed in two different energy bands, 2-10 keV and
0.5-2 keV, and the excess variance compared in the two bands.
This test was prompted by the discovery of energy-dependent variability
in Ton S180, where the strongest X-ray variations appear to be observed
in the soft X-ray band.
However, we found no evidence for energy-dependent variability in Ark 564.
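The ``excess variance'' used here is, following Nandra et al. (1997a) and Turner et al. (1999), the mean-normalized variance of the light curve with the contribution of the measurement errors subtracted. A minimal sketch of the estimator (illustrative only; the sample light curve is invented and this is not the actual reduction pipeline):

```python
# Minimal sketch of the "excess variance" estimator sigma^2_rms
# (cf. Nandra et al. 1997a; Turner et al. 1999). Illustrative only.

def excess_variance(rates, errors):
    """sigma^2_rms = (1 / (N mu^2)) * sum[(x_i - mu)^2 - err_i^2],
    for count rates x_i with measurement errors err_i and mean mu."""
    n = len(rates)
    mu = sum(rates) / n
    return sum((x - mu) ** 2 - e ** 2
               for x, e in zip(rates, errors)) / (n * mu ** 2)

# A constant light curve yields only the (negative) error-subtraction term;
# intrinsic variability in excess of the errors yields a positive value.
flat = excess_variance([1.0, 1.0, 1.0, 1.0], [0.05] * 4)
varying = excess_variance([1.0, 2.0, 1.0, 2.0], [0.0] * 4)
```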
The X-ray spectrum is remarkable.
The continuum slope was determined by fitting an absorbed power-law
to the 0.6-5.0 plus 7.5-10.0 keV bands (to avoid contamination by the Fe K$\alpha$ line).
Fig.~2 shows residuals compared to that
power-law fit.
A strong spectral feature is evident at $\sim 1$ keV and
an excess of emission close to 7 keV, which we know to be
due to an unmodeled Fe K$\alpha$ line.
The photon index over the (featureless) 1.5-4.5 keV band
is $\Gamma=2.63^{+0.16}_{-0.03}$ (and probably provides the best
measure of the underlying continuum slope).
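For orientation, a photon power law $N(E)=nE^{-\Gamma}$ (photons cm$^{-2}$ s$^{-1}$ keV$^{-1}$) integrates in closed form to a band energy flux. The snippet below is purely illustrative; the normalization $n$ is an assumed example value, not a fit result from this observation:

```python
# Illustrative conversion from a photon power law N(E) = n * E**(-Gamma)
# to the energy flux in a band [e1, e2] (E in keV):
#   F = n * (e2**(2 - Gamma) - e1**(2 - Gamma)) / (2 - Gamma)   [keV/cm^2/s]
# The normalization n is an assumed example value.

KEV_TO_ERG = 1.602e-9  # erg per keV

def band_flux_erg(n, gamma, e1, e2):
    f_kev = n * (e2 ** (2 - gamma) - e1 ** (2 - gamma)) / (2 - gamma)
    return f_kev * KEV_TO_ERG  # erg/cm^2/s

flux_2_10 = band_flux_erg(n=2.0e-2, gamma=2.63, e1=2.0, e2=10.0)
```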
\subsection{The soft X-ray spectrum}
The observation of a strong spectral feature close to 1 keV is
especially interesting. A similar feature has been
observed in the bright NLSy1 TON S180
(Turner, George \& Nandra 1998).
Features of this nature have not been observed in
BLSy1s, but something consistent with the same feature has been
seen at low S/N in some QSOs (Fiore et al 1998;
George et al. 1999). The data (excluding the 5.0 -- 7.5 keV band)
were successfully modeled
using a power-law continuum plus Gaussian emission feature.
The best fit gave $\Gamma=2.62^{+0.01}_{-0.02}$ with
continuum normalization
$n=1.96^{+0.02}_{-0.04} \times 10^{-2} {\rm photons\ cm^{-2} s^{-1}}$
at 1 keV;
no (neutral) absorption was found in excess of the Galactic value, and the
rest-energy of the Gaussian was $E=0.99^{+0.02}_{-0.04}$ keV with
width $\sigma=0.16^{+0.03}_{-0.02}$ keV and normalization
$n=1.3^{+0.3}_{-0.1} \times 10^{-3} {\rm photons\ cm^{-2} s^{-1}}$.
The equivalent width (EW) of this feature is
$\sim 70^{+20}_{-10} $ eV compared to the
observed continuum level. This fit gave $\chi^2=1332/1261 $
degrees of freedom ($dof$).
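As a consistency check on the quoted equivalent width, one can divide the Gaussian line flux by the model continuum photon flux density at the line energy (illustrative arithmetic only, using the best-fit numbers above):

```python
# Cross-check of the quoted equivalent width from the best-fit parameters
# above: EW = line photon flux / continuum photon flux density at the line
# energy, with continuum N(E) = n_cont * E**(-gamma). Illustrative only.

gamma = 2.62       # best-fit photon index
n_cont = 1.96e-2   # continuum normalization, photons/cm^2/s/keV at 1 keV
e_line = 0.99      # Gaussian rest-energy, keV
n_line = 1.3e-3    # Gaussian normalization, photons/cm^2/s

continuum_at_line = n_cont * e_line ** (-gamma)   # photons/cm^2/s/keV
ew_ev = 1000.0 * n_line / continuum_at_line       # equivalent width, eV
```

This gives roughly 65 eV, consistent within the quoted uncertainties with the stated $\sim 70^{+20}_{-10}$ eV.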
To ensure that the strength of the low energy feature is
not related to any instrumental effects in the detectors, we
thoroughly investigated sub-divisions of the data based upon
instrumental parameters; these efforts are described in the Appendix.
Intensity-selected spectra revealed no significant variability
in this spectral feature during the observation. However, we note
that the source was in a high flux state for a relatively short time,
affording no strong constraints on spectral variations.
We attempted some alternative fits where the spectrum of
Ark~564 is parameterized by a double power-law with
imprinted absorption edges.
(We do not discuss models using absorption edges in conjunction with
a single power-law, as these were totally inadequate).
It is interesting to compare these fits with
similar ones performed on the PSPC data
(Brandt et al. 1994) especially in light of an expectation that
we might observe absorption in the X-ray band associated with the
absorption systems detected in the UV data (Crenshaw et al. 1999).
We find the {\it ASCA} spectra are not
adequately fit by a model composed of a double power-law
with neutral absorber, plus single absorption
edge. Such a model
yielded $\chi^2=1546/1260\ dof$ (again excluding the 5.0 -- 7.5 keV band)
and an edge energy $E=0.60$ keV (i.e. it hit the lower bound allowed
for the edge energy) with optical depth $\tau=0.73$.
The fit statistic is worse by $\Delta \chi^2 = 214$ than
that obtained when fitting the feature with a
Gaussian emission profile (above),
and the latter does not require the presence of a second power-law
continuum component.
Addition of a second
edge to the model yields an improvement
to the fit, giving $\chi^2=1395/1258\ dof$, with the second edge energy
$E=1.32$ keV and optical depth $\tau=0.15$.
This fit is worse by $\Delta \chi^2=63$ than the fit
utilizing an emission feature and so we do not
pursue this line of modeling.
This conclusion is in agreement with that
stated in recent work by Vaughan et al. (1999), who find signatures
of warm absorbers in several NLSy1s, but find the features in Ark~564
to be better modeled with emission than absorption features.
The energy of the Gaussian emission component suggests the observed feature
is a blend of line emission from the K-shells of O and Ne, and the
L-shell of Fe, thus we investigate the agreement between the data and
realistic physical models which produce strong emission from those
elements.
\subsubsection{Photoionized Gas}
We attempted to model the emission assuming an origin in photoionized
gas in equilibrium using models calculated by ION97,
the 1997 version of the code ION
(see Netzer 1996). Emission features could dominate over absorption
features if the ionized gas is out of the line-of-sight.
We allowed the material to be
photoionized by the $\Gamma \sim 2.6$ continuum of
Ark~564. This steep ionizing continuum yields an emission spectrum
characteristically different
from ``typical'' warm absorber models,
like the ones illustrated in several of our
previous publications (Netzer 1996; George et al. 1998). In this case it
is characterized by much larger column density and larger $U_x$ than found for
Seyfert 1 galaxies (cf. George et al. 1998).
We will not expand on this
point which is currently under study.
The modeling of the blend of lines around 0.8--1.2 keV is
complicated by the rich spectrum of
the Fe {\sc xvii -- xxi} L-shell lines;
there are dozens of energy levels and hundreds of transitions to be considered.
Great effort has been made by Liedahl and collaborators (e.g.
Kallman et al. 1996 and references therein) to calculate
the atomic data required for such modeling. Previous
versions of ION included only 2-5 Fe recombination lines per ion
(Netzer 1996). We have
extended the list by grouping a detailed line list, generously provided
by D. Liedahl, into 8-12 lines per ion such that the total recombination
emission is identical to the one computed by the more sophisticated
atomic models; this
allows a reasonable simulation of the Fe recombination spectrum
(adequate, given the {\it ASCA} spectral resolution).
These models do not yet contain the contribution due to continuum
fluorescence (absorption of continuum radiation by resonance lines)
which, in some geometries, can approximately double the line intensities.
The details of this contribution are currently under investigation.
We used trial models composed of a continuum power-law plus emission
from ionized gas. Emission models were generated covering
a column density range of 10$^{22}-10^{23.75}$ cm$^{-2}$;
ionization parameter $ U_x$
in the range 0.1--10.0 (see George et al. 1998 for a definition
of $U_x$) and covering fraction C$_f=$ 0.1--1.0.
The models that produce the largest summed 0.75--1.2 keV flux
are those with C$_f \simeq 0.5$,
column density of about $10^{23.5}$ cm$^{-2}$ and $U_x$ of 5 -- 10.
These models typically give (for solar composition) a
line blend (summing lines and recombination continua
in the 0.75--1.2 keV range) with EW
$\simeq 20$~eV.
This EW is measured relative to the unobscured continuum and a
larger EW, by up to a factor 1.2, is measured relative to the observed
continuum (which is somewhat suppressed due to absorption).
These models did not provide an adequate fit to the data.
The best fit which was achieved (for solar abundances)
was that utilizing the pure emission spectrum from the photoionized gas
which
yielded $\chi^2 = 1701/1260\ dof$ (excluding the Fe K-shell band) for
emission from a column of gas with
$N_H=3.0^{+1.6}_{-0.6} \times 10^{23}$ cm$^{-2}$ and $U_x = 10 \pm 1$.
Fig.~3 shows the data and model illustrating how little of
the observed feature can be explained. In these models production
of a strong feature at 1 keV leads to
the expectation of strong features at other energies --
ruled out by the {\it ASCA} spectral data.
We have also explored the possibility that the Fe abundance in this
source is unusually high. This is motivated by the strong
Fe K$\alpha$ line and the strong optical Fe{\sc ii} lines in NLSy1s.
Doubling the Fe abundance results in a $\sim 50$\% increase in the EW,
bringing it close to $\sim 30$~eV, but still falling short of the
strength of the observed feature.
None of the models based upon the emission spectrum from photoionized gas
provided an adequate fit to the data.
Although models utilizing simple absorption edges failed to fit the data,
we considered the effects of absorption by a cloud of
photoionized gas. A photoionized cloud produces an absorption
profile which is clearly different from that of arbitrary absorption edges
considered in isolation. We considered models in which either a single
or a double power-law continuum was absorbed by
a cloud of photoionized gas. Fits were performed
with and without an unconstrained contribution from the emission spectrum
of that gas. However, none of these models produced a satisfactory fit to the
data.
\subsubsection{Thermal Gas}
Next we considered emission from gas in thermal equilibrium.
ION does not include a very sophisticated treatment of collisionally excited
transitions in Fe {\sc xvii -- xxiii}; instead it groups those lines into bands
containing a very small number of lines per
ion. The overall collisional contribution is maintained, ensuring energy
conservation.
This is not adequate for obtaining the realistic
emissivity pattern of the Fe L-shell transitions. We have therefore
used the {\sc mekal} calculations, in {\sc xspec}, to model the 0.8--1.2 keV
thermal emission, as an alternative to the model from ION.
We note that such calculations, while very detailed, do not include
optical depth
effects that are important for some lines.
Such models provide a reasonable fit to the bulk of the
soft feature. A model composed of a continuum power-law plus {\sc mekal}
component gave a best-fit temperature $kT=1.07 \pm 0.04$ keV for the gas
(assuming cosmic abundances) with $\chi^2=1453/1262\ dof$. While this is
statistically superior to the fit using a power-law plus ION,
we note that neither model
matches the entire flux in the soft hump.
As expected, for this temperature the dominant Fe
ions are Fe {\sc xvi -- xx} and the resulting Fe line complex around
1 keV is very strong. In this model, the flux in the
thermal component (0.5 -- 2 keV band) is
$f_{0.5-2}= 2.5 \times 10^{-12} {\rm erg\ cm^{-2}\ s^{-1}}$
with a corresponding luminosity
$L_{0.5-2} = 7 \times 10^{42} {\rm erg\ s^{-1}}$
(assuming $H_0=50 {\rm km\ s ^{-1}\ Mpc^{-1}, q_0=0.5} $).
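As a cross-check on the quoted numbers, the flux-to-luminosity conversion can be reproduced for the stated cosmology. The redshift used below ($z \simeq 0.0247$ for Ark~564) is an assumption not given in this section, and the $q_0=0.5$ luminosity distance is taken in the Mattig form $d_L=(2c/H_0)(1+z-\sqrt{1+z})$.

```python
import math

# Sketch of the flux-to-luminosity conversion quoted above.
# Assumptions (not stated in this section): z ~ 0.0247 for Ark 564, and the
# Mattig luminosity distance for q0 = 0.5: d_L = (2c/H0)(1 + z - sqrt(1 + z)).
H0 = 50.0           # km/s/Mpc, as assumed in the text
c = 2.998e5         # speed of light, km/s
z = 0.0247          # assumed redshift of Ark 564
MPC_CM = 3.086e24   # cm per Mpc

d_L = (2.0 * c / H0) * (1.0 + z - math.sqrt(1.0 + z))  # luminosity distance, Mpc
d_cm = d_L * MPC_CM

f_soft = 2.5e-12    # erg cm^-2 s^-1, 0.5-2 keV flux of the mekal component
L_soft = 4.0 * math.pi * d_cm ** 2 * f_soft            # erg/s
print(f"d_L ~ {d_L:.0f} Mpc, L(0.5-2 keV) ~ {L_soft:.1e} erg/s")
```

With these inputs the sketch returns $L_{0.5-2} \approx 7 \times 10^{42}$ erg s$^{-1}$, consistent with the value quoted above.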
While the {\it ROSAT} PSPC data were taken at a different epoch, we
compared those data with this model, to see whether
there was any inconsistency
if the model was extrapolated down to 0.1 keV; specifically, to
determine whether the PSPC data fell below the flux of the {\sc mekal}
component, which would provide significant constraints to our model.
The PSPC data
gave good agreement with the {\it ASCA} model in the 0.1 -- 0.5 keV band.
We review the possible origins of this component in the discussion.
\subsection{Fe K$\alpha$}
A significant Fe K$\alpha$ emission line is evident in the
{\it ASCA} spectrum (Fig.~2). The
line profile is asymmetric with a marked red wing, as
observed commonly in Seyfert 1 galaxies (Nandra et al. 1997b).
The line was modeled using a broad Gaussian component. This provided
an adequate parameterization of the line shape, and yielded a
rest-energy for the line $E = 6.25 \pm 0.29$ keV, line width
$\sigma=1.0^{+0p}_{-0.12}$ keV, normalization $n=(9.28 \pm 2.11) \times 10^{-5}
{\rm photons\ cm^{-2}\ s^{-1}}$ and equivalent width $EW=566 \pm 128$ eV
(the $p$ denotes that the parameter hit the hard bound set within the fit).
The asymmetry of the line led us to fit a
disk-line model profile (Fabian et al. 1989). The model
assumes a Schwarzschild geometry and we assumed an emissivity law
$r^{-q}$ for the illumination pattern of the accretion disk, where
$r$ is the radial distance from the black hole, and $q=2.5$ (as found
for a sample of Seyfert 1 galaxies; Nandra et al. 1997b). We
assume the line originates between 6 and 1000 gravitational radii
and we constrained the rest-energy of the line to lie between 6.4 and 7 keV.
The inclination of the system is defined such that $i=0$ is a disk
orientated face-on to the observer.
This model gave a marginally worse fit than the broad Gaussian model
for the same number of free parameters
($\Delta \chi^2=4$ for 1217 degrees of freedom). The rest-energy of the line
was $E = 6.40^{+0.15}_{-0p}$ keV, the inclination was $i=57^{+21}_{-8}$
degrees and normalization
$n=7.09^{+2.02}_{-1.79} \times 10^{-5} {\rm photons\ cm^{-2}\ s^{-1}}$.
The equivalent width was $EW=550^{+128}_{-156}$ eV.
In both parameterizations of the line, we obtain a flux of
$\sim 9 \times 10^{-13} {\rm erg\ cm^{-2}\ s^{-1}}$, corresponding to a line
luminosity $L\sim 2.5 \times 10^{42} {\rm erg\ s^{-1}}$.
\section{Discussion}
The strong spectral signature in the soft X-ray regime is
intriguing.
Starburst activity could explain the presence of a large soft X-ray flux,
and such activity often appears concentrated towards a
galaxy center. The large X-ray luminosity of this component
would suggest that it must produce detectable infrared emission if these
X-rays originate from a starburst region.
In the 0.5 -- 4.5 keV band the luminosity in the
soft component alone is $8 \times 10^{42} {\rm erg\ s^{-1}}$.
Use of this bandpass allows comparison with a study
of normal and starburst galaxies (David, Jones \& Forman 1992). Using
the infrared fluxes reported for Ark~564 (Bonatto \& Pastoriza 1997)
the estimated X-ray emission from a region dominated by star
formation would be
$L_{0.5-4.5} \sim 6 \times 10^{40} \rm erg\ s^{-1}$, almost two orders
of magnitude lower than that observed.
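The ``almost two orders of magnitude'' statement follows directly from the two luminosities quoted above:

```python
import math

# Ratio of the observed 0.5-4.5 keV soft-component luminosity to the
# starburst prediction derived from the infrared fluxes (both quoted above).
L_obs = 8.0e42        # erg/s, observed soft component
L_starburst = 6.0e40  # erg/s, predicted from star formation
ratio = L_obs / L_starburst
print(f"ratio ~ {ratio:.0f}, i.e. {math.log10(ratio):.1f} dex")
```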
Using again the infrared fluxes from Bonatto \& Pastoriza (1997)
we calculate indices $\alpha_{60,25}=-0.642$ and
$\alpha_{100,60}=-0.204$
\footnote{$\alpha_{x,y}=
-log[\frac{F_{\nu}(y)}{F_{\nu}(x)}]/ log[\frac{\nu(y)}{\nu(x)}]$ }.
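A minimal sketch of how these two-point spectral indices follow from the footnote's definition (with $\nu \propto 1/\lambda$); the flux densities used below are hypothetical placeholders, not the Bonatto \& Pastoriza values.

```python
import math

# Two-point spectral index from the footnote:
#   alpha_{x,y} = -log10[F_nu(y) / F_nu(x)] / log10[nu(y) / nu(x)]
# with x, y given as wavelengths in microns, so nu(y)/nu(x) = lambda(x)/lambda(y).
def alpha(lam_x_um, lam_y_um, f_nu_x, f_nu_y):
    nu_ratio = lam_x_um / lam_y_um
    return -math.log10(f_nu_y / f_nu_x) / math.log10(nu_ratio)

# Sanity check with a hypothetical pure power law F_nu ~ nu^-1.5:
# the recovered index should equal the input slope.
f60 = 1.0                          # placeholder flux density at 60 um
f25 = (60.0 / 25.0) ** -1.5 * f60  # implied 25 um flux density for the power law
print(alpha(60.0, 25.0, f60, f25))  # ~ 1.5
```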
We compared these indices with those
calculated for samples of Seyfert galaxies and starburst dominated galaxies
(Miley, Neugebauer \& Soifer 1985).
The 60$\mu$m curvature is not consistent with that observed in
starburst galaxies (c.f. Miley, Neugebauer \& Soifer 1985; their Fig.~4).
Thus the infrared luminosity and spectrum indicate that
starburst activity does not make a significant contribution to
the observed X-ray luminosity in Ark~564.
The optical image of Ark~564 does show extended structure, with ridges of
enhanced emission of red optical color (Arakelian 1975). The extended
optical features give Ark~564 a diameter of $\sim 30''$, with $12''$
separating the nucleus from one bright optical region to the SE.
The {\it ROSAT} HRI image of this source
shows no significant extended emission coincident with the optical
enhancements. The
{\it ROSAT} HRI data provide an upper limit such that
the emitting gas must exist within the inner $\sim 3$ kpc
of the nucleus;
much stronger constraints will be obtained by {\it AXAF} images
in the near future.
(Unfortunately, faint extended emission would
be very difficult to confirm with the HRI, because the instrument suffers
some scattering of point-source photons into a halo outside of the
point-spread-function.)
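To relate the 3 kpc limit to an angular scale: a sketch assuming the same cosmology as in the text and a redshift $z \simeq 0.0247$ (an assumption; the redshift is not stated in this section), with angular-diameter distance $d_A=d_L/(1+z)^2$.

```python
import math

# Physical scale per arcsecond at Ark 564, under assumed z ~ 0.0247
# (not given in this section), H0 = 50 km/s/Mpc and q0 = 0.5.
H0, c, z = 50.0, 2.998e5, 0.0247
d_L = (2.0 * c / H0) * (1.0 + z - math.sqrt(1.0 + z))  # Mpc, Mattig (q0 = 0.5)
d_A = d_L / (1.0 + z) ** 2                             # angular-diameter distance, Mpc

ARCSEC_RAD = math.pi / (180.0 * 3600.0)                # radians per arcsecond
kpc_per_arcsec = d_A * 1.0e3 * ARCSEC_RAD
print(f"{kpc_per_arcsec:.2f} kpc per arcsec; 3 kpc ~ "
      f"{3.0 / kpc_per_arcsec:.1f} arcsec")
```

Under these assumptions a $\sim 3$ kpc radius corresponds to only a few arcseconds, comparable to the HRI resolution.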
An extended thermal source would not show rapid time variability.
While we see strong X-ray flux variability in
Ark~564 it is
currently unclear whether the flux in the spectral feature
itself varies.
The current constraint, that the hot gas lies within a 3 kpc radius,
leaves many possibilities open to discussion. However,
no such component is observed in normal galaxies or
BLSy1s, thus it seems more compelling to discuss possible origins
associated with the nucleus, rather than
further out (such as gas in the galactic halo).
The {\it ASCA} data show the spectral feature at 1 keV is
not produced by emission or absorption from
gas in photoionization equilibrium.
An alternative is that the gas is in thermal equilibrium.
The {\sc mekal} model can be extrapolated to give a bolometric luminosity
in that component (over $\sim$ 0.1-10 keV, although contributing little
above 2 keV) which is $10^{43} {\rm erg\ s^{-1}}$
(c.f. the nuclear luminosity $L_{2-10} \sim 5 \times 10^{43}
{\rm erg\ s^{-1}}$).
The temperature determined from our modeling is $kT=1$ keV.
This can be explained if the gas exists in a
location where some heating mechanism dominates over
photoionization heating. This could be part of the nucleus
obscured from the central radiation field or a location where
the density is high enough that collisional cooling is
able to maintain a low level of ionization. This would give rise to
strong line emission.
We consider an alternative origin as the hot intercloud medium (HIM)
confining the clouds in either the broad line region (BLR)
or narrow line region (NLR).
Such gas has been postulated in early studies of AGN.
Krolik, McKee and Tarter (1981) discuss the expectation that
emission line clouds must be confined by a hot ($10^7$ -- $10^9$K)
medium (also see Netzer 1990 and references therein).
This medium has also been discussed in connection with
warm absorber gas in AGN, by
Reynolds and Fabian (1995).
The situation regarding NLSy1 is of particular interest since
the HIM is
likely to be at relatively low temperature
due to the inefficient Compton heating (because of
the very soft X-ray continuum)
and the large Compton cooling (due to the strong UV bump). Such material, if
in equilibrium, does not emit strong emission lines since it is fully
ionized.
However, the HIM may be unstable. NLSy1s
are known to have large flux variations and it is conceivable that
the current observed state of Ark~564 represents a low-phase of activity.
Previous higher luminosity phases might have
resulted in full ionization of the HIM which, at the time of
the {\it ASCA} observations was
undergoing intense cooling and recombination. The recombination time depends
on the density and temperature. Given a BLR location, and a temperature
of $10^6$K, the recombination time is short enough to
produce a strong recombination
spectrum. Full modeling of such a time-dependent component is beyond
the scope of this paper. We note, however, that
non-equilibrium situations are likely to be important in some cases
involving rapidly varying sources (Nicastro et al. 1999).
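The recombination-time argument can be made quantitative at the order-of-magnitude level; both numbers in the sketch below are illustrative assumptions (a BLR-like density and a recombination coefficient typical of highly charged ions at $T \sim 10^6$ K), not values taken from the text.

```python
# Order-of-magnitude recombination time: t_rec ~ 1 / (n_e * alpha_rec).
# Both inputs are illustrative assumptions, not values from the text.
n_e = 1.0e10         # cm^-3, assumed BLR-like electron density
alpha_rec = 1.0e-12  # cm^3 s^-1, assumed recombination rate coefficient
t_rec = 1.0 / (n_e * alpha_rec)  # seconds
print(f"t_rec ~ {t_rec:.0f} s")  # ~ 100 s, short vs. the observed variability
```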
The assumption that there is a strong thermal component in
Ark~564 can be put to observational tests. First, this component is unlikely
to respond on short time scales to large continuum variations. Its relative
contribution at high phases of the source should decrease inversely with the
continuum brightness. Second, high resolution spectral observations, like
those expected with several of the coming X-ray missions ({\it Chandra} and
{\it XMM}), will provide clear diagnostic tools to allow the identification of
the thermal plasma and its separation from any photoionized plasma.
If thermal gas is indeed found in high
resolution observations of Ark~564, it will provide yet another clue to the
unique spectral appearance of NLSy1 galaxies.
Turning now to the properties of the Fe K$\alpha$ line in Ark~564, we note an
asymmetric shape as observed in BLSy1s.
The pronounced red wing indicates that the innermost disk is emissive.
We find an acceptable explanation of the Fe K$\alpha$ line
as emission from an accretion disk composed of
neutral material, but viewed at $\sim 60$ degrees to the line-of-sight. This
inclination angle contradicts the popular hypothesis that
NLSy1s are Seyfert nuclei viewed pole-on.
Data from Ark~564 and Ton S180 indicate that the Fe K$\alpha$
lines may have systematically large equivalent
width in NLSy1s, compared to BLSy1s. The unusual spectral
feature observed at 1 keV in these two NLSy1s
is consistent with emission lines from the L-shell of Fe.
Ark~564 and Ton S180 show strong
optical Fe{\sc ii} emission, as do NLSy1 galaxies in general.
These facts suggest that overabundance of
Fe may be a property of NLSy1s.
\section{Conclusions}
An {\it ASCA} observation of the NLSy1 galaxy Ark 564 reveals
rapid X-ray variability, although we find no evidence for
energy-dependent flux changes. Ark~564 has a
complex X-ray spectrum. A strong feature is observed close to 1 keV,
which is not easily attributed to emission or absorption by
photoionized gas or emission from gas in
thermal equilibrium.
A similar feature was previously observed in
Ton S180; these features may be characteristic of NLSy1s and important
to understanding conditions in these sources.
The emission may indicate the hot intercloud medium in the BLR or NLR
is currently in a cooling phase in Ark 564.
The hot intercloud
medium may have a characteristically different temperature in NLSy1s to
the rest of the Seyfert population, affording a visible signature
of the gas in the soft X-ray spectrum of NLSy1s, but not BLSy1s.
Separation
of Ne and Fe lines over the 0.8--1.2 keV range with {\it XMM} or
{\it Chandra} grating observations will enable us to test this explicitly.
Ark~564 has a broad and asymmetric
Fe K$\alpha$ line of large equivalent width, $\sim 550$ eV. The line profile
is consistent with emission from a neutral disk inclined at 60 degrees
to our line-of-sight, which argues against models inferring that NLSy1s
are viewed pole-on.
\section*{Appendix}
We investigated whether the strength and shape of the low
energy feature in Ark 564 changed as a function of any instrumental
parameter. First we examined each of the SIS detectors.
As the lower level for
event discrimination changed during the observation
we accumulated two spectra for each SIS, one for each
discriminator level. A separate response matrix was
generated for each SIS and for each discriminator level.
These four SIS spectra were then fit to the power-law model,
and the data/model ratios are shown in Fig.~4. The SIS0 and SIS1
data are shown separately. It is clear that the spectral feature
is evident in both, showing consistent strength in each.
Fig.~4 shows a comparison between
spectra from each discriminator level, for the two SIS instruments,
demonstrating the close consistency between each pair. As no
significant difference was found, henceforth we
show only the average SIS0 spectrum and average SIS1 spectrum.
The known discrepancy between SIS0 and SIS1 data
is evident. SIS1 data lie 10\% lower than SIS0 data below an
energy of $\sim 0.8$ keV. However, the peak and turnover energy
of the low energy feature are clearly seen in SIS0, therefore the
overall shape and strength of the feature is not an artifact of problems with
SIS1. We found no difference in our conclusions considering SIS0
data alone, and as there are small uncertainties in all instruments, we
consider the best approach to be that which we present, analysis based upon
simultaneous fits to the four {\it ASCA} instruments.
There are also documented problems with the SIS response matrix
generator for 2-pixel events in cases when the event threshold is high.
There has been some speculation that a more reliable
analysis can be achieved by consideration of the grade 0 events
alone (for which accurate response matrices can be generated regardless of
event threshold),
therefore we repeated our analysis using only the grade 0 events.
Fig.~4 shows the data/model ratio for SIS0 and SIS1
separately, for spectra accumulated from grade 0 events alone.
We find no dependence of the strength of the feature on
grade of event.
The data were also subdivided based upon the CCD temperature during the
observation. We found no changes in the strength of the low
energy feature as the CCD temperatures changed. To illustrate
this point we show data/model ratios to the power-law model, for
two SIS1 spectra; the first accumulated for a mean instrument
temperature of -60 C, the second for mean temperature -61 C
(close to the full range of temperature during the observation).
In conclusion, we have screened and subdivided
the data based upon a large number of
different criteria, but found no instrumental effects which
falsely enhanced the strength of the low energy feature.
Our conclusions are not
dependent on the small residual
uncertainties in the {\it ASCA} calibration.
\section{Acknowledgements}
We are grateful to the {\it ASCA} team for their operation of the satellite.
This research has
made use of the NASA/IPAC Extragalactic database,
which is operated by the Jet Propulsion Laboratory, Caltech, under
contract with NASA; of the Simbad database,
operated at CDS, Strasbourg, France; and data obtained through the
HEASARC on-line service, provided by NASA/GSFC.
This work was supported by NASA Long Term Space Astrophysics grant
NAG 5-7385.
\section*{References}
{\noindent}Arakelian, M.A. 1975, SoByu 47, 3 \\
Boller et al. 1996, A\&A, 305, 53 \\
Bonatto, C.J., Pastoriza, M.G. 1997, ApJ\ 486, 132 \\
Brandt, W.N., Fabian, A.C., Nandra, K., Reynolds, C.S., Brinkmann, W. 1994,
MNRAS, 271, 958 \\
Brandt et al. 1997, MNRAS, 285, 25 \\
Comastri et al. 1998, A\&A, 333, 31 \\
Crenshaw, D.M., Kraemer, S.B., Boggess, A., Maran, S.P.,
Mushotzky, R.F., Wu, C-C. 1999, ApJ 516, 750 \\
David, L.P., Jones, C., Forman, W. 1992, ApJ\ 388, 82 \\
Fabian, A.C., Rees, M.J., Stella, L., White, N.E. 1989, MNRAS, 238, 729 \\
Fiore, F., et al. 1998, MNRAS, 503, 607 \\
George, I.M., Turner,T.J., Netzer, H., Nandra, K., Mushotzky, R.F.,
Yaqoob, T. 1998, ApJS 114, 73 \\
George, I.M., et al 1999, ApJ, in prep \\
Kallman, T. R., Liedahl, D., Osterheld, A., Goldstein, W., Kahn, S. 1996,
ApJ\ 465, 994 \\
Komossa, S., Greiner, J. 1999, (astro-ph/9810105) \\
Krolik, J.H., McKee,C.F., Tarter,C.B. 1981, ApJ\ 249, 422 \\
Makishima, K., et al. 1996, PASJ, 48, 171 \\
Miley,G.K., Neugebauer, G., Soifer, B.T. 1985, ApJ\ 293, 148 \\
Nandra, K., George, I.M., Mushotzky, R.F., Turner, T.J., Yaqoob, T.
1997a, ApJ\ 476, 70 \\
Nandra, K., George, I.M., Mushotzky, R.F., Turner, T.J., Yaqoob, T.
1997b, ApJ\ 477, 602 \\
Netzer,H. 1990, in {\it Active Galactic Nuclei}, by Woltjer, Netzer \&
Blandford, SAA-FEE series, Courvoisier and Mayor eds (Berlin, Springer) p57 \\
Netzer, H. 1996, ApJ\ 473, 781 \\
Netzer, H. 1999, in prep \\
Nicastro, F., Fiore, F., Perola, G.C., Elvis, M., 1999, ApJ 512, 184 \\
Osterbrock, D.E., Shuder, J.M., 1982, ApJS 49, 149 \\
Pounds, K.A., Done, C., Osbourne, J.P., 1995, MNRAS 277, L5 \\
Reynolds, C., Fabian, A.C. 1995, MNRAS, 273, 1167 \\
Stark, A.A., Gammie, C.F., Wilson, R.W., Bally, J., Linke, R.A., Heiles, C.,
Hurwitz, M., 1992, ApJS 79, 77 \\
Turner, T.J., George, I.M., Nandra, K. 1998, ApJ, 508, 648 \\
Turner, T.J., George, I.M., Nandra, K. 1999, ApJ, sched. Nov 1 issue,
astro-ph/9906050 \\
Van Groningen, E. 1993, A\&A 272, 25 \\
Vaughan, S., Reeves, J., Warwick, R., Edelson, R. 1999, MNRAS in press \\
Walter, R., Fink, H.H., 1993, A\&A 274, 105 \\
\newpage
\begin{figure}[h]
\plotfiddle{fig1.vps}{10cm}{0}{50}{50}{-200}{-20}
\caption{ {\it The ASCA light curve in the 0.5-10 keV band,
based upon the combined SIS data, in 128 s bins. } }
\end{figure}
\begin{figure}[h]
\plotfiddle{fig2.vps}{10cm}{0}{50}{50}{-200}{0}
\caption{ {\it The data/model ratio (combined SIS and GIS data)
compared to a power-law continuum
fit to the 0.6-5.0 plus 7.5-10.0 keV ASCA data, with the 5.0 -- 7.5
keV data overlaid for comparison.
Residual counts due to emission from Fe K$\alpha$ are
evident, as well as the
strong feature close to 1 keV. } }
\end{figure}
\begin{figure}[h]
\plotfiddle{closeup_ionmo.vps}{4.5cm}{0}{50}{50}{-200}{0}
\plotfiddle{closeup_rat.vps}{4.5cm}{0}{50}{50}{-200}{-220}
\caption{ {\it Top: The model for the best fit using the photoionized
gas, as detailed in \S 3.1.1. Bottom: The data/model ratio with
each of the four instruments plotted }}
\end{figure}
\begin{figure}[h]
\plotfiddle{s0_lvdl.vps}{5cm}{0}{50}{50}{-300}{-20}
\plotfiddle{s1_lvdl.vps}{5cm}{0}{50}{50}{0}{+140}
\caption{
{\it The data/model ratio as in Figure 2, except that
data are shown for each individual SIS and each
level discriminator used during the observation.
Left: SIS 0, stars are data where S0LVDL=135, the crosses are
S0LVDL=115.
Right: SIS 1, the stars show S1LVDL=146, the crosses
are S1LVDL=125
} }
\end{figure}
\begin{figure}[h]
\plotfiddle{s0g0.vps}{5cm}{0}{50}{50}{-340}{-150}
\plotfiddle{s1g0.vps}{5cm}{0}{50}{50}{-20}{+10}
\plotfiddle{s1t1.vps}{5cm}{0}{50}{50}{-340}{-200}
\plotfiddle{s1t2.vps}{5cm}{0}{50}{50}{-20}{-40}
\caption{ {\it Data/model ratio as in Figure 2. Top:
Grade 0 data from SIS 0 and SIS 1.
Bottom: Data/model ratio for SIS 1 subdivided by
CCD temperature, T1=-60 C (left) and T2=-61 C (right)} }
\end{figure}
\end{document}
\section*{Introduction\label{s1}}
\noindent Throughout this paper $H,V$ are two separable Hilbert spaces over $\mathbb C$ such that $V$ is
densely and continuously embedded into $H$ (we write $V \underset d \hookrightarrow H$).
We denote by $(\cdot \mid \cdot)_V$ the scalar product and $\|\cdot\|_V$ the norm
on $V$ and by $(\cdot \mid \cdot)_H, \|\cdot\|_H$ the corresponding quantities in $H.$ Let $V'$
be the antidual of $V$ and denote by $\langle ., . \rangle$ the duality
between $V'$ and $V.$ As usual, by identifying $H$ with $H'$
we have $V\underset d\hookrightarrow H\cong H'\underset d\hookrightarrow V'$;
see, e.g., \cite{Bre11}.
\par\noindent Let $\fra:[0,T]\times V \times V \to \C$ be a \textit{non-autonomous sesquilinear form}, i.e., $\fra(t;\cdot,\cdot)$ is for each $t\in[0,T]$ a sesquilinear form,
\begin{equation}\label{measurability}
\fra(\cdot;u,v) \text{ is measurable for all } u,v\in V,
\end{equation}
such that
\begin{equation}\label{eq:continuity-nonaut}
|\fra(t,u,v)| \le M \Vert u\Vert_V \Vert v\Vert_V\ \text{ and } \Re ~\fra (t,u,u)\ge \alpha \|u\|^2_V \quad \ (t\in[0,T], u,v\in V),
\end{equation}
for some constants $M, \alpha>0$ that are independent of $t, u,v.$ Under these assumptions
there exists for each $t\in[0,T]$ an isomorphism
$\A(t):V\to V^\prime$ such that
$\langle \A(t) u, v \rangle = \fra(t,u,v)$ for all $u,v \in V.$ It is well known that $-\A(t),$ seen as unbounded operator
with domain $V,$ generates an analytic $C_0$-semigroup on $V'$. The operator $\A(t)$ is usually called the operator
associated with $\fra(t,\cdot,\cdot)$ on $V^\prime.$ Moreover, we associate an operator $A(t)$ with $\fra(t;\cdot,\cdot)$ on $H$
as follows
\begin{align*}
D(A(t))={}&\{u\in V \mid \exists f\in H \text{ such that } \fra(t;u,v)=(f\mid v)_H \text{ for all } v\in V \}\\
A(t) u = {}& f.
\end{align*}
It is not difficult to see that $A(t)$ is the part of $\A(t)$ in $H.$ In fact, we have
$D(A(t))= \{ u\in V : \A(t) u \in H \}$ and
$A(t) u = \A(t) u.$
Furthermore, $-A(t)$ with domain $D(A(t))$ generates
a holomorphic $C_0$-semigroup on $H$ which is the restriction to $H$ of that generated by $-\A(t).$ For all this results
see e.g. \cite[Chapter 2]{Tan79} or \cite[Lecture 7]{Ar06}.
\par\noindent We now assume that there exist $0< \gamma<1$ and a continuous function $\omega:[0,T]\longrightarrow [0,+\infty)$ such that
\begin{equation}\label{eq 1:Dini-condition}
|\fra(t,u,v)-\fra(s,u,v)| \le\omega(|t-s|) \Vert u\Vert_{V_\gamma} \Vert v\Vert_{V_\gamma}\quad \ (t,s\in[0,T], u,v\in V),
\end{equation}
with
\begin{equation}\label{eq 2:Dini-condition}\sup_{t\in[0,T]} \frac{\omega(t)}{t^{\gamma/2}}<\infty \quad \text{ and }
\int_0^T\frac{\omega(t)}{t^{1+\gamma/2}}\,dt<\infty
\end{equation}
where $V_\gamma:=[H,V]_\gamma$ is the complex interpolation space. In addition we assume that $\fra(t_0; \cdot,\cdot)$
has the square root property for some $t_0\in[0,T]$ (and then for all $t\in[0,T]$ by \cite[Proposition 2.5]{Ar-Mo15}),
i.e.,
\begin{equation}\label{square property}
D(A^{\frac{1}{2}}(t_0))=V.
\end{equation}
Recall that for symmetric forms, i.e., if $\fra(t;u,v)=\overline{\fra(t;v,u)}$ for all $t,u,v,$ then the square root property is satisfied.
Under the assumptions \eqref{measurability}-\eqref{square property} it is known that for each $x_0\in V$ the non-autonomous homogeneous Cauchy problem
\begin{equation}\label{Abstract Cauchy problem 0}
\left\{
\begin{aligned}
\dot u(t)&+\A(t)u(t)= 0\quad \hbox{a.e. on}\ [0,T],\\
u(0)&=x_0, \,\\
\end{aligned}
\right.
\end{equation}
has a unique solution $u \in \MR(V,H):=L^2(0,T;V)\cap H^1(0,T;H)$ such that $u\in C([0,T];V).$ This result has been proved by Arendt and Monniaux \cite{Ar-Mo15} (see also \cite{ELLA15}) when the form $\fra$ satisfies the weaker condition
\begin{equation}\label{eq 1:Dini-conditionWeaker}
|\fra(t,u,v)-\fra(s,u,v)| \le\omega(|t-s|) \Vert u\Vert_V \Vert v\Vert_{V_\gamma}\quad \ (t,s\in[0,T], u,v\in V).
\end{equation}
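\par\noindent Let us note in passing that condition \eqref{eq 1:Dini-condition} is indeed stronger than \eqref{eq 1:Dini-conditionWeaker}: since $V\hookrightarrow V_\gamma$ continuously, there is a constant $c>0$ such that $\|u\|_{V_\gamma}\le c\|u\|_V$ for all $u\in V,$ and hence
\[
|\fra(t,u,v)-\fra(s,u,v)| \le\omega(|t-s|) \Vert u\Vert_{V_\gamma} \Vert v\Vert_{V_\gamma}\le c\,\omega(|t-s|) \Vert u\Vert_{V} \Vert v\Vert_{V_\gamma}\quad \ (t,s\in[0,T], u,v\in V),
\]
so that \eqref{eq 1:Dini-conditionWeaker} holds with $\omega$ replaced by $c\,\omega.$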
\medskip
\par\noindent In this paper we continue to investigate further regularity of the solution of
(\ref{Abstract Cauchy problem 0}). For this it is necessary to associate to the Cauchy problem \eqref{Abstract Cauchy problem 0} an \textit{evolution family} \[\mathcal U:=\Big\{U(t,s):\ 0\leq s\leq t\leq T\Big\}\subset \L(H)\] which means that:
\begin{itemize}
\item[$(i)$] $U(t,t)=I$ and $U(t,s)= U(t,r)U(r,s)$ for every $0\leq r\leq s\leq t\leq T,$
\item [$(ii)$] for every $x\in H$ the function $(t,s)\mapsto U(t,s)x$ is continuous into $H$
for $0\leq s\leq t\leq T,$
\item [$(iii)$] for each $x_0\in H,$ $U(\cdot,s)x_0$ is the unique solution of (\ref{Abstract Cauchy problem 0}) with the initial condition prescribed at time $s.$
\end{itemize}
\begin{definition} Let $Y\subseteq H$ be a subspace. An evolution family $\mathcal U\subset \L(H)$ is said to be \textit{norm continuous in $Y$} if $\mathcal U\subset\L(Y)$ and the map $(t,s)\mapsto U(t,s)$ is
norm continuous with values
in $\L(Y)$ for $0\leq s<t\leq T.$
\end{definition}
\par\noindent If the non-autonomous form $\fra$ satisfies the weaker condition (\ref{eq 1:Dini-conditionWeaker}) then it is known that (\ref{Abstract Cauchy problem 0}) is governed by an evolution family which is norm continuous in $V$ \cite[Theorem 2.6]{LH17}, and norm continuous in $H$ if in addition the embedding $V\hookrightarrow H$ is compact \cite[Theorem 3.4]{LH17}. However, for many boundary value problems the embedding fails to be compact.
\par\noindent
In this paper we prove that the compactness assumption can be omitted provided $\fra$ satisfies
\eqref{eq 1:Dini-condition} instead of (\ref{eq 1:Dini-conditionWeaker}). This will allow us to consider a large class of examples of applications.
One of the main ingredients used here is the non-autonomous
\textit{returned adjoint form} $\fra^*_r : [0,T]\times V\times V\lra \C$ defined by
\begin{equation}\label{returnd adjoint form}
\fra^*_r(t,u,v):=\overline{\fra(T-t,v,u)} \quad \ (t\in[0,T], u,v\in V).
\end{equation}
The concept of returned adjoint forms appeared in the work of D. Daners \cite{Daners01}, but with a different purpose. Furthermore, \cite[Theorem 2.6]{LH17} cited above will also be needed to prove our main result.
\medskip
\par\noindent We note that regularity properties of the evolution family
with respect to $(t,s)$ in general Banach spaces have been investigated in the case of constant domains by Komatsu \cite{H.Ko61}
and Lunardi \cite{Lu89}, and by Acquistapace \cite{Ac88} for time-dependent domains.
\medskip
We illustrate our abstract results by two relevant examples. The first one concerns the Laplacian with non-autonomous Robin boundary conditions on an unbounded Lipschitz domain. The second one treats a class of Schr\"odinger operators with time-dependent potentials.
\section{Preliminary results \label{Approximation}}
Let $
\fra: [0,T]\times V\times V \to \C$ be a non-autonomous sesquilinear form satisfying \eqref{measurability} and (\ref{eq:continuity-nonaut}). Then the following well-known result regarding \textit{$L^2$-maximal regularity in $V'$}
is due to J.~L. Lions.
\begin{theorem}[Lions, 1961]\label{wellposedness in V'2}
For each given $s\in[0,T)$ and $x\in H$ the homogeneous Cauchy problem
\begin{equation}\label{evolution equation u(s)=x}
\left\{
\begin{aligned}
\dot u(t)&+\A(t)u(t)= 0\quad \hbox{a.e. on}\ [s,T],\\
u(s)&=x, \,\\
\end{aligned}
\right.
\end{equation} has a unique solution $u \in \MR(V,V'):=\MR(s,T;V,V'):=L^2(s,T;V)\cap H^1(s,T;V').$
\end{theorem}
Recall that the maximal regularity space $\MR(V,V')$ is continuously embedded into $C([s,T],H)$ \cite[page 106]{Sho97}. A proof of Theorem \ref{wellposedness in V'2}, using a representation theorem for linear functionals known in the literature as \textit{Lions' representation theorem},
can be found in \cite[page 112]{Sho97} or \cite[XVIII, Chapter 3, page 513]{DL88}.
\par\noindent
Furthermore, we consider the non-autonomous adjoint form $\fra^*:[0,T]\times V\times V\lra \C$ of $\fra$ defined by
\[ \fra^*(t;u,v):=\overline{\fra(t;v,u)}\]
for all $t\in[0,T]$ and $u,v\in V.$ Finally, we will need to consider
the returned adjoint form $\fra^*_r:[0,T]\times V\times V\lra \C$ given by \[\fra^*_r(t,u,v):=\fra^*(T-t,u,v).\]
Clearly, the adjoint form is a non-autonomous sesquilinear form and satisfies
\eqref{measurability} and (\ref{eq:continuity-nonaut}) with the same constants $M$ and $\alpha.$
Moreover, the adjoint operators $A^*(t), t\in[0,T]$ of $A(t), t\in[0,T]$ coincide with the
operators associated with $\fra^*$ on $H.$ Thus applying Theorem \ref{wellposedness in V'2} to the returned adjoint form we obtain that
the Cauchy problem associated with $\A^*_r(t):=\A^*(T-t)$
\begin{equation}\label{evolution equation u(s)=x returned}
\left\{
\begin{aligned}
\dot v(t)&+\A^*_r(t)v(t)= 0\quad \hbox{a.e. on}\ [s,T],\\
v(s)&=x, \,\\
\end{aligned}
\right.
\end{equation}
has for each $x\in H$ a unique solution $v\in MR(V,V').$ Accordingly, for every
$(t,s)\in\overline{\Delta}:=\{ (t,s)\in[0,T]^2:\ s\leq t\}$
and every $x\in H$ we can define the following family of linear operators
\begin{equation}\label{evolution family}
U(t,s)x:=u(t) \quad \hbox{ and }\quad
U^*_r(t,s)x:=v(t),
\end{equation}
where $u$ and $v$ are the unique solutions in $MR(V,V')$
respectively of (\ref{evolution equation u(s)=x}) and (\ref{evolution equation u(s)=x returned}).
Each of the families $\{\U(t,s):\ (t,s)\in\overline{\Delta}\}$
and $\{\U^*_r(t,s):\ (t,s)\in\overline{\Delta}\}$ is a contractive,
strongly continuous evolution family on $H$ \cite[Proposition]{LH17}.
\par\noindent In the autonomous case, i.e., if $\fra(t;\cdot,\cdot)=\fra_0(\cdot,\cdot)$ for all $t\in[0,T],$ it is well known that $-A_0,$ the operator associated with $\fra_0$ in $H,$ generates a $C_0$-semigroup $(T(t))_{t\geq 0}$ in $H.$ In this case $\U(t,s):=T(t-s)$ yields a strongly continuous evolution family on $H.$ Moreover, we have
\begin{equation}\label{equalities: adjoint EVF and EVF autonomous case}
\U(t,s)^\prime=T(t-s)^{\prime}=T^*(t-s)=\U^*(t,s)=\U^*_r(t,s).
\end{equation}
Here, $T(\cdot)^\prime$ denotes the adjoint of $T(\cdot)$, which coincides with the $C_0$-semigroup $(T^*(t))_{t\geq 0}$ associated with the adjoint form $\fra^*.$
In the non-autonomous setting however, (\ref{equalities: adjoint EVF and EVF autonomous case}) fails in general even in the finite dimensional case, see \cite[Remark 2.7]{Daners01}. Nevertheless, Proposition \ref{equalities: adjoint EVF and EVF} below shows that the evolution families $\U$ and $\U_r^*$ can be related in a similar way. This formula appeared in \cite[Theorem 2.6]{Daners01}.
\begin{proposition}\label{equalities: adjoint EVF and EVF} Let $\U$ and $\U^*_r$ be given by (\ref{evolution family}). Then we have
\begin{equation}\label{Key equalities: adjoint EVF and EVF}
\big [ \U^*_r(t,s)\big ]^\prime x=\U(T-s,T-t)x
\end{equation}
for all $x\in H$ and $(t,s)\in\overline{\Delta}.$
\end{proposition}
The equality \eqref{Key equalities: adjoint EVF and EVF} will play a crucial role in the proof of our main result.
We include here a new proof for the sake of completeness.
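Before the proof, note that in the autonomous case \eqref{Key equalities: adjoint EVF and EVF} is consistent with \eqref{equalities: adjoint EVF and EVF autonomous case}: since $\U^*_r(t,s)=T^*(t-s)$,

```latex
\big[\U^*_r(t,s)\big]^\prime = \big(T^*(t-s)\big)^\prime = T(t-s)
= T\big((T-s)-(T-t)\big) = \U(T-s,T-t).
```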
\begin{proof}[Proof of Proposition \ref{equalities: adjoint EVF and EVF}]
Let $\Lambda=(0=\lambda_0<\lambda_1<...<\lambda_{n+1}=T)$ be
a subdivision of $[0,T].$ For $k=0,1,\dots,n$ let $\fra_k:V \times V \to \mathbb C$ be given by
\begin{equation*}
\fra_k(u,v):=\fra_{k,\Lambda}(u,v):=\frac{1}{\lambda_{k+1}-\lambda_k}
\int_{\lambda_k}^{\lambda_{k+1}}\fra(r;u,v)\,{\rm d}r \qquad (u,v\in V).
\end{equation*}
All these forms satisfy (\ref{eq:continuity-nonaut}) with the same constants $\alpha, M.$ The associated operators in $V'$ are denoted by $\A_k\in \L(V,V')$ and are given for all $u\in V$ and $k=0,1,...,n$ by
\begin{equation}\label{eq:op-moyen integrale}
\A_k u :=\A_{k,\Lambda}u:=\frac{1}{\lambda_{k+1}-\lambda_k}
\int_{\lambda_k}^{\lambda_{k+1}}\A(r)u\,{\rm d}r.
\end{equation}
Consider the non-autonomous form $\fra_\Lambda:[0,T]\times V \times V \to \C$ defined by
\begin{equation}\label{form: approximation formula1}
\fra_{\Lambda}(t;\cdot,\cdot):=\begin{cases}
\fra_k(\cdot,\cdot)&\hbox{if }t\in [\lambda_k,\lambda_{k+1})\\
\fra_n(\cdot,\cdot)&\hbox{if }t=T\ .
\end{cases}
\end{equation}
Its associated time-dependent operator $\A_\Lambda(\cdot): [0,T]\to \L(V,V')$ is given by
\begin{equation}\label{operator: approximation formula1}
\A_{\Lambda}(t):=\begin{cases}
\A_k&\hbox{if }t\in [\lambda_k,\lambda_{k+1})\\
\A_n &\hbox{if }t=T\ .
\end{cases}
\end{equation}
Next denote by $T_k$ the $C_0$-semigroup associated with $\fra_k$ in $H$ for $k=0,1,\dots,n.$ Then applying Theorem \ref{wellposedness in V'2} to the form $\fra_\Lambda$ we obtain that in this case the associated evolution family $\U_\Lambda(t,s)$ is given explicitly for $\lambda_{m-1}\leq s<\lambda_m<...<\lambda_{l-1}\leq t<\lambda_{l}$
by
\begin{equation}\label{promenade1}\U_\Lambda (t,s):= T_{l-1}(t-\lambda_{l-1})
T_{l-2}(\lambda_{l-1}-\lambda_{l-2})...T_{m}(\lambda_{m+1}-\lambda_{m})T_{m-1}(\lambda_{m}-s),
\end{equation}
and for $\lambda_{l-1}\leq s\leq t<\lambda_{l}$ by
\begin{equation}\label{promenade2}\U_\Lambda (t,s):= T_{l-1}(t-s).\end{equation}
By \cite[Theorem 3.2]{LASA14} we know that $(\U_\Lambda)_{\Lambda}$ converges to $\U$ in $MR(V,V')$ as $|\Lambda|\to 0$, that is,
\[\lim\limits_{|\Lambda|\to 0}\|\U_\Lambda-\U\|_{MR(V,V')}=0.\]
The continuous embedding of $MR(V,V')$ into $C([0,T];H)$ implies that $\lim\limits_{|\Lambda|\to 0}\U_\Lambda=\U$
in the weak operator topology of $\L(H).$
\par\noindent Now, let $(t,s)\in \overline{\Delta}$ with $\lambda_{m-1}\leq s<\lambda_m<...<\lambda_{l-1}\leq t<\lambda_{l}$ be fixed. Applying the above approximation argument to $\fra_r^*$ one obtains that
\begin{align}\label{eq1 proof returned adjoint}
\U^*_{\Lambda,r}(t,s)&=T_{l-1,r}^*(t-\lambda_{l-1})
T_{l-2,r}^*(\lambda_{l-1}-\lambda_{l-2})...T_{m,r}^*(\lambda_{m+1}-\lambda_{m})T_{m-1,r}^*(\lambda_{m}-s)
\\\label{eq2 proof returned adjoint}&=T_{l-1,r}^\prime(t-\lambda_{l-1})
T_{l-2,r}^\prime(\lambda_{l-1}-\lambda_{l-2})...T_{m,r}^\prime(\lambda_{m+1}-\lambda_{m})T_{m-1,r}^\prime(\lambda_{m}-s)
\end{align}
where $T_{k,r}$ and $T_{k,r}^*$ are the $C_0$-semigroups associated with \begin{equation}\label{eq1 proof Thm equalities: adjoint EVF and EVF} \fra_{k,r}(u,v):=\frac{1}{\lambda_{k+1}-\lambda_k}\int_{\lambda_k}^{\lambda_{k+1}}\fra(T-r;u,v){\rm d}r=\frac{1}{\lambda_{k+1}-\lambda_k}\int_{T-\lambda_{k+1}}^{T-\lambda_k}\fra(r;u,v){\rm d}r\end{equation}
and with its adjoint form $\fra_{k,r}^*$, respectively. Recall that $T_{k,r}^*=T_{k,r}^\prime.$
\par\noindent On the other hand, the last equality in (\ref{eq1 proof Thm equalities: adjoint EVF and EVF}) implies that $T_{k,r}$ coincides with the semigroup associated with $\fra_{k,\Lambda_T}$ where $\Lambda_T$ is the subdivision $\Lambda_T:=(0=T-\lambda_{n+1}<T-\lambda_n<...<T-\lambda_1<T-\lambda_0=T).$ It follows from \eqref{promenade1}-\eqref{promenade2} and \eqref{eq1 proof returned adjoint}-\eqref{eq2 proof returned adjoint} that
\begin{align*}\Big[\U^*_{\Lambda,r}(t,s)\Big]^\prime&=\T_{m-1,r}(\lambda_{m}-s)\T_{m,r}(\lambda_{m+1}-\lambda_{m})...\T_{l-2,r}(\lambda_{l-1}-\lambda_{l-2})\T_{l-1,r}(t-\lambda_{l-1})
\\&=\T_{m-1}\Big((T-s)-(T-\lambda_{m})\Big)\T_{m}\Big((T-\lambda_m)-(T-\lambda_{m+1})\Big)...\T_{l-1}\Big((T-\lambda_{l-1})-(T-t)\Big)\\&=\U_{\Lambda_T}(T-s,T-t)
\end{align*}
Finally, the desired equality \eqref{Key equalities: adjoint EVF and EVF} follows by passing to the limit as $|\Lambda|=|\Lambda_T|\to 0.$
\end{proof}
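The identity \eqref{Key equalities: adjoint EVF and EVF} can also be checked numerically in a finite-dimensional setting, where the piecewise-constant construction \eqref{promenade1} is an ordered product of matrix exponentials. The following sketch (illustrative only; the symmetric matrices $A_0$, $A_1$ and the uniform subdivision are our choices, not part of the text) verifies $[\U^*_{\Lambda,r}(t,s)]^\prime=\U_{\Lambda_T}(T-s,T-t)$ for $A(t)=A_0+tA_1$; with a uniform grid, $\Lambda_T$ coincides with $\Lambda$.

```python
import numpy as np

T = 1.0
# hypothetical non-commuting symmetric generators A(t) = A0 + t*A1
A0 = np.array([[2.0, 1.0], [1.0, 2.0]])
A1 = np.array([[1.0, 0.0], [0.0, 3.0]])

def sym_expm(A, tau):
    """e^{-tau*A} for symmetric A, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(-tau * w)) @ V.T

def avg_A(a, b):
    """Mean of A(t) = A0 + t*A1 over [a, b] (the averaged generator A_k)."""
    return A0 + 0.5 * (a + b) * A1

def U_piecewise(t, s, lam, Afun):
    """Evolution family U_Lambda(t, s): ordered product of semigroup steps."""
    m = np.searchsorted(lam, s, side='right')   # s in [lam[m-1], lam[m])
    l = np.searchsorted(lam, t, side='right')   # t in [lam[l-1], lam[l])
    if m == l:                                  # s, t in the same interval
        return sym_expm(Afun(lam[m - 1], lam[m]), t - s)
    U = sym_expm(Afun(lam[m - 1], lam[m]), lam[m] - s)
    for k in range(m, l - 1):
        U = sym_expm(Afun(lam[k], lam[k + 1]), lam[k + 1] - lam[k]) @ U
    return sym_expm(Afun(lam[l - 1], lam[l]), t - lam[l - 1]) @ U

lam = np.linspace(0.0, T, 5)      # uniform subdivision, so Lambda_T = Lambda
t, s = 0.8, 0.15
# returned adjoint: averaged generators of the form a(T-., ., .)
Ur = U_piecewise(t, s, lam, lambda a, b: avg_A(T - b, T - a))
U_ref = U_piecewise(T - s, T - t, lam, avg_A)
assert np.allclose(Ur.T, U_ref)   # [U*_{Lambda,r}(t,s)]' = U_{Lambda_T}(T-s, T-t)
```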
\begin{remark}
\label{remark-rescaling} The coerciveness assumption in \eqref{eq:continuity-nonaut} may be replaced with
\begin{equation}\label{eq:Ellipticity-nonaut2}
\Re \fra (t,u,u) +\omega\Vert u\Vert_H^2\ge \alpha \|u\|^2_V \quad ( t\in [0,T], u\in V)
\end{equation}
for some $\omega\in \R.$ In fact, $\fra$ satisfies (\ref{eq:Ellipticity-nonaut2}) if and only if the form $\fra_\omega$ given by $\fra_\omega(t;\cdot,\cdot):=\fra(t;\cdot,\cdot)+\omega (\cdot\mid \cdot)$
satisfies the second inequality in \eqref{eq:continuity-nonaut}. Moreover, if $u\in MR(V,V')$ and $v:=e^{-\omega(\cdot-s)}u,$ then $v\in MR(V,V')$ and $u$ satisfies (\ref{evolution equation u(s)=x}) if and only if $v$ satisfies
\begin{equation*}
\dot{v}(t)+(\omega+\mathcal A(t))v(t)=0 \quad \hbox{a.e. on } [s,T], \qquad v(s)=x.
\end{equation*}
\end{remark}
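For completeness, the rescaling equivalence claimed in Remark \ref{remark-rescaling} follows from the product rule: writing $v(t):=e^{-\omega(t-s)}u(t)$ (the time shift ensures $v(s)=u(s)=x$) and using $\dot u(t)=-\A(t)u(t)$,

```latex
\dot v(t) = -\omega\, e^{-\omega(t-s)}u(t) + e^{-\omega(t-s)}\dot u(t)
          = -\omega\, v(t) - \A(t)v(t),
```

so that $\dot v(t)+(\omega+\A(t))v(t)=0$ a.e. on $[s,T]$.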
\section{Norm continuous evolution family}\label{Sec2 Norm continuity}
In this section we assume that the non-autonomous form $\fra$ satisfies
(\ref{eq:continuity-nonaut})-\eqref{square property}. As mentioned in the introduction,
under these assumptions the Cauchy problem (\ref{evolution equation u(s)=x})
has $L^2$-maximal regularity in $H$.
Thus for each $x\in V,$ \[ U(\cdot,s)x\in \MR(V,H):=\MR (s,T;V,H):=L^2(s,T;V)\cap H^1(s,T;H).\]
Moreover, $ U(\cdot,s)x\in C([s,T];V)$ by \cite[Theorem 4.2]{Ar-Mo15}. From \cite[Theorem 2.7]{LH17} we know that the restriction of $\U$ to $V$ defines an evolution family which is norm continuous. The same is true for the Cauchy problem
(\ref{evolution equation u(s)=x returned}) and the associated evolution
family $U^*_r$, since the returned adjoint form $\fra_r^*$ inherits all the
properties of $\fra.$
\noindent In the following we establish that $\U$ can be extended to a strongly continuous evolution
family on $V'.$
\begin{proposition}\label{Lemma EVF on V'} Let $\fra$ be a non-autonomous sesquilinear form satisfying
(\ref{eq:continuity-nonaut})-(\ref{square property}).
Then $\U$ can be extended to a strongly continuous evolution family on $V^\prime,$ which we still denote by $\U.$
\end{proposition}
\begin{proof} Let $x\in H$ and $(t,s)\in\overline {\Delta}.$ Then using Proposition \ref{equalities: adjoint EVF and EVF} and the fact that $\U$ and $\U_r^*$ both define strongly continuous evolution families on $V$ and on $H$, we obtain that
\begin{align*}
\|\U(t,s)x\|_{V'}&=\sup_{\underset{v\in V}{\|v\|_V=1}}\mid<\U(t,s)x, v>\mid
\\&=\sup_{\underset{v\in V}{\|v\|_V=1}}\mid (\U(t,s)x|v)_H\mid
=\sup_{\underset{v\in V}{\|v\|_V=1}}\mid (x| \U(t,s)^\prime v)_H\mid
\\&=\sup_{\underset{v\in V}{\|v\|_V=1}}\mid (x|\U_r^*(T-s,T-t)v)_H\mid
\\&=\sup_{\underset{v\in V}{\|v\|_V=1}}\mid <x,\U_r^*(T-s,T-t)v>\mid
\\&\leq \|x\|_{V^\prime}\|\U_r^*(T-s,T-t)\|_{\L(V)}
\\&\leq c\|x\|_{V^\prime}
\end{align*}
where $c>0$ is such that $\sup_{(t,s)\in\overline{\Delta}}\|\U_r^*(t,s)\|_{\L(V)}\leq c.$ The claim then follows since $H$ is dense in $V'.$
\end{proof}
Let $\Delta:=\{(t,s)\in[0,T]^2\mid t\ge s\}.$ The following theorem is the main result of this paper.
\begin{theorem}\label{main result}
Let $\fra$ be a non-autonomous sesquilinear form satisfying
(\ref{eq:continuity-nonaut})-(\ref{square property}), and let $\{U(t,s):\ (t,s)\in\Delta\}$ be given by (\ref{evolution family}). Then the function $(t,s)\mapsto \U(t,s)$ is norm continuous on $\Delta$ with values in $\L(X)$ for $X=V, H$ and $V'.$
\end{theorem}
\begin{proof}
The norm continuity for $\U$ in the case where $X=V$
follows from \cite[Theorem 2.7]{LH17}. On the other hand, applying \cite[Theorem 2.7]{LH17}
to $\fra_r^*$ we obtain that $\U_r^*$ is also norm continuous on $\Delta$ with values in $\L(V).$
Using Proposition \ref{equalities: adjoint EVF and EVF}, similar arguments as in the proof
of Proposition \ref{Lemma EVF on V'} yield
\begin{equation}
\|\U(t,s)-\U(t',s')\|_{\L(V')}\leq \|\U_r^*(T-s,T-t)-\U_r^*(T-s',T-t')\|_{\L(V)}
\end{equation}
for all $(t,s), (t',s')\in \Delta.$ This implies that $\U$ is norm continuous on $\Delta$ with values in $\L(V').$
Finally, the norm continuity in $H$ follows then by interpolation.
\end{proof}
\section{Examples}\label{S application}
\noindent This section is devoted to some relevant examples illustrating the theory developed
in the previous sections. We refer to \cite{Ar-Mo15} and \cite{Ou15} and the references therein for further examples.
\medskip
$(i)$ \textbf{Laplacian with time-dependent Robin boundary conditions on an exterior domain.} Let $\Omega$ be a bounded domain of $\R^d$ with Lipschitz boundary $\Gamma.$ Denote by $\sigma$ the $(d-1)$-dimensional Hausdorff measure on $\Gamma.$ Let $\Omega_{ext}$ denote the exterior domain of $\Omega,$ i.e., $\Omega_{ext}:=\R^d\setminus\overline{\Omega}.$ Let $T>0$ and $\alpha>1/4.$ Let $\beta:[0,T]\times \Gamma \lra \R$
be a bounded measurable function such that
\[|\beta(t,\xi)-\beta(s,\xi)|\leq c|t-s|^\alpha\]
for some constant $c>0$ and every $t,s\in [0,T],\ \xi\in \Gamma.$ We consider the form $\fra:[0,T]\times H^1(\Omega_{ext})\times H^1(\Omega_{ext})\lra \C$ defined by
\[\fra(t;u,v):=\int_{\Omega_{ext}}\nabla u\cdot\nabla \bar v\, {\rm d}\xi+\int_{\Gamma}\beta(t,\cdot)\restr{u}{\Gamma}\, \restr{\bar v}{\Gamma}\, {\rm d}\sigma, \]
where $u\to \restr{u}{\Gamma}: H^1(\Omega_{ext}) \lra L^2(\Gamma,\sigma)$ is the trace operator which is bounded \cite[Theorem 5.36]{AdFou}. The operator $A(t)$ associated with $\fra(t;\cdot,\cdot)$ on $H:=L^2(\Omega_{ext})$ is minus the Laplacian with time dependent Robin boundary conditions
\[\partial_\nu u(t)+\beta(t,\cdot)u=0\ \text{ on } \Gamma. \]
Here $\partial_\nu$ is the weak normal derivative. Thus the domain of $A(t)$ is the set
\[D(A(t))=\Big\{ u\in H^1(\Omega_{ext}) \mid \Laplace u\in L^2(\Omega_{ext}),\ \partial_\nu u+\beta(t,\cdot)\restr{u}{\Gamma}=0 \Big\} \]
and for $u\in D(A(t)), A(t)u:=-\Laplace u.$
Similarly as in \cite[Section 5]{Ar-Mo15} one obtains that $\fra$ satisfies (\ref{eq:continuity-nonaut})-(\ref{square property}) with $\gamma:=r_0+1/2$ and $\omega(t)=t^\alpha$, where $r_0\in(0,1/2)$ is such that $r_0+1/2<2\alpha.$ We note that in \cite[Section 5]{Ar-Mo15} the authors considered the Robin Laplacian on the bounded Lipschitz domain $\Omega.$ The main ingredient used there is that the trace operator is bounded from $H^{s}(\Omega)$ into $H^{s-1/2}(\Gamma,\sigma)$ for all $1/2<s<3/4.$ This boundary trace embedding theorem also holds for unbounded Lipschitz domains, and thus for $\Omega_{ext}$; see \cite[Theorem 3.38]{Mclean} or \cite[Lemma 3.6]{Cos}.
Thus applying \cite[Theorem 4.1]{Ar-Mo15} and Theorem \ref{main result} we obtain that the non-autonomous Cauchy problem
\begin{equation}\label{RobinLpalacian}
\left\{
\begin{aligned}
\dot {u}(t) - \Laplace u(t)& = 0, \ u(0)=x\in H^1(\Omega_{ext})
\\ \partial_\nu u(t)+\beta(t,\cdot){u}&=0 \ \text{ on } \Gamma
\end{aligned} \right.
\end{equation}
has $L^2$-maximal regularity in $L^2(\Omega_{ext})$ and its solution is governed by an evolution family $\U(\cdot,\cdot)$ that is norm continuous on each of the spaces $V$, $L^2(\Omega_{ext})$ and $V'.$
\subsection{Non-autonomous Schr\"odinger operators}
Let $m_0, m_1\in L_{\rm loc}^1(\R^d)$ and let $m:[0,T]\times\R^d\lra \R$ be a measurable function for which there exist positive constants $\alpha_1,\alpha_2$ and $\kappa$ such that
\[\alpha_1 m_0(\xi)\leq m(t,\xi)\leq \alpha_2 m_0(\xi),
\qquad \mid m(t,\xi)-m(s,\xi)\mid \leq \kappa|t-s|\,m_1(\xi)\]
for almost every $\xi\in \R^d$ and every $t,s\in [0,T].$ Assume moreover that there exist a constant $c>0$ and $s\in[0,1]$ such that for all $u\in C_c^\infty(\R^d)$
\begin{equation}\label{additional AS Svhrödinger example}\int_{\R^d}m_1(\xi)|u(\xi)|^2 \d\xi\leq c\|u\|^2_{H^s(\R^d)}.
\end{equation}
Consider the non-autonomous Cauchy problem
\begin{equation}\label{Schroedinger operator}
\left\{
\begin{aligned}
\dot {u}(t) - &\Laplace u(t)+m(t,\cdot)u(t) = 0,
\\ u(0)&=x\in V.
\end{aligned} \right.
\end{equation}
Here $A(t)=-\Laplace+m(t,\cdot)$ is associated with the non-autonomous form $\fra:[0,T]\times V\times V\lra \C$ given by
\[V:=\left\{u\in H^1(\R^d): \int_{\R^d}m_0(\xi)|u(\xi)|^2 \d\xi <\infty \right\}\]and
\[\fra(t;u,v)=\int_{\R^d}\nabla u\cdot\nabla \bar v\, {\rm d} \xi+\int_{\R^d}m(t,\xi)\,u(\xi)\overline{v(\xi)}\, \d\xi. \] The form $\fra$ also satisfies (\ref{eq:continuity-nonaut})-(\ref{square property}) with $\gamma:=s$ and $\omega(t)=t^\alpha$ for $\alpha>\frac{s}{2}$ and $s\in[0,1].$
This example is taken from \cite[Example 3.1]{Ou15}.
By Theorem \ref{main result}, the solution of the Cauchy problem (\ref{Schroedinger operator}) is governed by an evolution family that is norm continuous on $L^2(\R^d), V$ and $V'.$
\section{Introduction}
The precise characterization of the Cosmic Microwave Background (CMB) polarization will provide a wealth of information in addition to the {\it Planck} satellite CMB temperature measurement~\cite{planck_collaboration_planck_2016}. It will help to further constrain the $\Lambda \rm CDM$ cosmological model and its extensions. CMB polarization is generally described in terms of the two linear polarization Stokes parameters $Q$ and $U$, which can be mathematically combined to define the curl-free '$E$' and divergence-free '$B$' polarization patterns. CMB anisotropies are conveniently projected in harmonic space, with their statistics encoded in the angular power spectra $C_\ell^{XY}$, where $\ell$ is the multipole and $X,Y \in \{E,B\}$. Since the anisotropies of the CMB are expected to be Gaussian distributed, all the cosmological information is contained in the $C_\ell$'s.
The dominant source of $E$-mode anisotropies is scalar fluctuations at the epoch of recombination. Tensor perturbations (primordial gravitational waves) generated during inflation can act as a subdominant source of $E$-modes. Primordial $B$-modes, however, are only sourced by tensor fluctuations, and thus represent a unique observable to test inflationary physics. Their amplitude, parametrized by the tensor-to-scalar ratio '$r$', can be arbitrarily low, depending on the inflation energy scale, and is expected to be maximal at large and intermediate angular scales ($\ell\lesssim 10^2$). In addition, CMB photons undergo a lensing effect induced by their passage through the gravitational field of the matter between the CMB last scattering surface and us, which transfers $E$ to $B$ modes (and vice versa). The lensing $B$-modes thus contaminate the primordial $B$-mode signal. Figure~\ref{fig:ClthVSmuK} represents the $E$-modes and the predicted lensing + tensor $B$-modes derived from the {\it Planck} best fit model~\cite{planck_collaboration_planck_2016} with an optical depth parameter $\tau=0.06$~\cite{planck_collaboration_planck_2016-1}. In addition, $E$ and $B$ tensor modes are shown for $r=10^{-3}$, as well as instrumental noise levels between $0.1$ and $50\,\mu\rm K.arcmin$.
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{Figures/ClthVSmuK_long.pdf}
\caption{Tensor (dashed) and total (solid) components of the $E$-modes (green), and $B$-modes (blue) spectra $\ell(\ell+1)/(2\pi) \cdot C_\ell$ as a function of the multipole $\ell$, based on {\it Planck} 2015 best fit model with an optical depth $\tau=0.06$. The primordial (tensor) polarization spectra are indicated for a tensor-to-scalar ratio $r=10^{-3}$. Various experimental noise levels $\sigma_n\,[\mu\rm K . arcmin]$ are indicated.\label{fig:ClthVSmuK}}
\end{figure}
Because of experimental limitations and/or foreground contamination, the effective sky coverage of CMB surveys can be partial. This introduces an ambiguity in the relationship between the Stokes parameters and the $E$ and $B$ modes. In this context, the $E$ and $B$ modes are inevitably mixed and mislabeled~\cite{bunn_separating_2003,bunn_e/b_2003}. Although this polarization leakage can be corrected on average~\cite{chon_fast_2004,kogut_firstyear_2003}, the $E/B$ mixing signals contribute to each other's spectrum variance. Since the $B$-mode signal is expected to be much lower than the $E$-mode signal, the impact of this 'variance leakage' is extremely problematic for the detection of $B$-modes and their precise measurement.
The pure pseudo-spectrum (PpCl) method presented in~\cite{smith_pseudo-$c_ell$_2006,bunn_separating_2003,lewis_harmonic_2003} is an extension of the standard pseudo-spectrum method (pCl) and currently represents the most popular solution to reduce the amount of polarization variance leakage. It has been widely investigated in e.g. \cite{grain_polarized_2009,grain_cmb_2012}, and has been demonstrated to produce near-optimal variance power spectrum estimates at intermediate and small angular scales. The extension of the PpCl method to a cross-spectra formalism offers the advantage of cross-correlating CMB maps, allowing the noise bias to be removed and the impact of systematic effects to be mitigated, provided they are uncorrelated between maps. However, the PpCl method requires particular sky mask apodizations, which depend on the scanning strategy and on the depth of the observed CMB field. Moreover, the method has been shown to be sub-optimal at large and intermediate angular scales ($\ell \lesssim 100$)~\cite{molinari_comparison_2014,efstathiou_myths_2004,efstathiou_hybrid_2006}.
Other methods consist of estimating the spectra using a pixel based approach, which is particularly relevant for large angular scale analysis, but they have the drawback of being computationally more expensive. The Maximum Likelihood Estimator (MLE) and the Quadratic Maximum Likelihood (QML) have the advantage of minimizing spectra uncertainties. The latter, developed in~\cite{tegmark_how_1997} and extended to polarization in~\cite{tegmark_how_2001}, gives the same error bars as the MLE and requires $\mathcal O (N_{\rm d}^3)$ operations for a dataset of size $N_{\rm d}$, relative to the pCl which only demands $\mathcal O (N_{\rm d}^{3/2})$ operations~\cite{efstathiou_myths_2004}.
In this paper we describe a method based on the QML approach that allows us to cross-correlate CMB maps that have common sky coverage, in analogy with the pseudo cross-spectra formalism. The formalism was first introduced in \cite{planck_collaboration_planck_2016_XLVI} for the 2016 Planck results.
Although this spectrum estimator is not derived from a maximum likelihood, we will refer to it as the cross-QML (xQML) for simplicity. The analysis presented hereafter focuses on the case of polarization spectra and the xQML ability to reduce the impact of $E/B$ variances leakage.
In Sec.~\ref{sec:method}, we develop the formalism of the xQML estimator. We review the QML and extend it to cross-spectra. We then discuss its bias and uncertainty. Important steps of the xQML implementation are then described, in particular the pixel covariance matrix construction, and the binning of the spectrum estimator.
In Sec.~\ref{sec:MCSimu}, the xQML is tested on two simulation set-ups: a large angular scale survey aiming at the measurement of the reionization signal ($\ell \lesssim 10$), and an intermediate angular scale survey aiming at the measurement of the recombination bump ($\ell \simeq 100$). We show that the xQML method is unbiased and gives minimal error bars.
The polarization leakage is discussed in Sec.~\ref{sec:Leakage}, in which we also compare the xQML $B$-mode variance with other methods such as the PpCl.
The same analysis is realized in Sec.~\ref{sec:EBspec} for the $EB$ spectrum.
We conclude in Sec.~\ref{sec:Conclusion}, and we forecast the uncertainty on $r$ based on the different methods introduced and compared in this paper.
\section{\label{sec:method}Method}
In this section we review the most important steps that lead to the definition of the QML estimator, following what has been done in \cite{tegmark_how_1997,tegmark_how_2001}. We then derive a cross-spectrum QML estimator (xQML) and compare its properties with the QML. Finally, we discuss in depth the implementation of the algorithm.
Lower case characters correspond to vectors and upper case correspond to matrices. Bold font, Latin indices, the trace and transpose operators are used for elements in the pixel domain, while normal font and $\ell$ indices are used in the multipole domain.
We consider a dataset $\mathbf{d}$, of dimension $N_d = 3 n_\text{pix}$, which encodes the temperature and Stokes parameter measurements,
\begin{equation}
\mathbf{d} \equiv \begin{pmatrix}\mathbf T \\ \mathbf Q \\ \mathbf U \end{pmatrix}.
\end{equation}
The pixels covariance matrix $\mathbf C$ of the dataset is given by
\begin{equation}
\mathbf C \equiv \braket{\mathbf{d},{\mathbf{d}}^T} = \mathbf S + \mathbf N, \label{eq:PixelCov}
\end{equation}
with $\mathbf N$ the pixel noise covariance matrix, and $\mathbf S$ the signal covariance matrix defined as
\begin{equation}
\mathbf S \equiv \sum_{\ell} \mathbf P_\ell C_\ell, \quad {\rm with} \quad \mathbf P_\ell^{ij} = \frac{\partial \mathbf C^{ij}}{\partial C_\ell}. \label{eq:Smatrix}
\end{equation}
The vector $C_\ell$ encodes all six power spectra $TT, EE, BB, TE, TB,$ and $EB$.
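In practice the blocks $\mathbf P_\ell$ are dense matrices built from (spin-weighted) Legendre functions of the angle between pixel pairs. As a minimal illustration, the temperature-only block and the sum of Eq.~\eqref{eq:Smatrix} can be sketched as follows (a toy construction with random pixel directions of our choosing; the polarized blocks, which involve spin-2 functions, are omitted):

```python
import numpy as np

def legendre_pl(lmax, x):
    """P_0..P_lmax evaluated at array x, via Bonnet's recurrence."""
    P = np.zeros((lmax + 1,) + x.shape)
    P[0] = 1.0
    if lmax >= 1:
        P[1] = x
    for l in range(2, lmax + 1):
        P[l] = ((2 * l - 1) * x * P[l - 1] - (l - 1) * P[l - 2]) / l
    return P

def signal_cov_T(pix_vecs, cl):
    """Temperature-only S = sum_l C_l P_l with
    P_l^{ij} = (2l+1)/(4*pi) * P_l(n_i . n_j)."""
    lmax = len(cl) - 1
    cosang = np.clip(pix_vecs @ pix_vecs.T, -1.0, 1.0)
    Pl = legendre_pl(lmax, cosang)
    S = np.zeros_like(cosang)
    for l in range(lmax + 1):
        S += (2 * l + 1) / (4 * np.pi) * cl[l] * Pl[l]
    return S

# example: a few random pixel directions and a toy spectrum
rng = np.random.default_rng(0)
vecs = rng.normal(size=(6, 3))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
cl = np.array([0.0, 0.0, 1.0, 0.5])
S = signal_cov_T(vecs, cl)
assert np.allclose(S, S.T)   # a covariance matrix is symmetric
```

Since $P_\ell(1)=1$, the diagonal of $S$ equals $\sum_\ell (2\ell+1)C_\ell/4\pi$, a quick consistency check on the construction.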
\subsection{\label{sec:QML}QML estimator}
We review important steps of the QML estimator developed in~\cite{tegmark_how_1997,tegmark_how_2001}. We can write the power spectrum estimator as a quadratic function of the pixels
\begin{equation}
\hat y_\ell \equiv \mathbf d^T \mathbf E_\ell \mathbf d - b_\ell. \label{eq:autoyl}
\end{equation}
$\mathbf E_\ell$ ($\ell=2,...$) are arbitrary $N_d \times N_d$ matrices, and $b_\ell$ are arbitrary constants.
From Eqs. \eqref{eq:PixelCov} and \eqref{eq:Smatrix}, the estimator ensemble average reads
\begin{flalign}
\braket{\hat y_\ell} & = \tr{ \mathbf E_\ell \braket{\mathbf d, \mathbf d^T}} - b_\ell , \\
& =\sum_{\ell'} W_{\ell \ell'} C_{\ell'} + \tr { \mathbf E_\ell \mathbf N} - b_\ell,
\end{flalign}
with
\begin{equation}
W_{\ell\ell'} \equiv\tr{\mathbf E_\ell \mathbf P_{\ell'}} \label{eq:autoWindow}
\end{equation}
as the 'mode-mixing' matrix. Choosing $b_\ell = \tr { \mathbf E_\ell \mathbf N}$, the unbiased estimator of the true power spectrum $C_\ell$ thus reads
\begin{equation}
\hat C_\ell \equiv \sum_{\ell'} [W^{-1}]_{\ell\ell'} \hat y_{\ell'}, \label{eq:autoCl}
\end{equation}
and has the following covariance
\begin{equation}
\braket{\Delta \hat C_\ell, \Delta \hat C_{\ell'}} = [W^{-1}]_{\ell\ell_1} \braket{\Delta \hat y_{\ell_1}, \Delta \hat y_{\ell_2}} [W^{-1}]_{\ell_2\ell'}, \label{eq:autoCovCl}
\end{equation}
where $\Delta \hat C_\ell = \hat C_\ell - \braket{\hat C_\ell}$. The summation over repeated indices is implied. The resulting power spectrum is unbiased, regardless of the choice of the $\mathbf E_\ell$ matrices. However, they are usually constructed in order to minimize the estimator variance
\begin{equation}
\braket{\Delta \hat y_\ell, \Delta \hat y_{\ell}} = 2 \tr{\mathbf C \mathbf E_\ell \mathbf C \mathbf E_{\ell}}, \label{eq:autoCovyl}
\end{equation}
which admits the trivial solution $\mathbf E_\ell = \mathbf 0$. We thus require the diagonal of the mode-mixing matrix to be nonzero. For each $\ell$, introducing a Lagrange multiplier $\lambda$ and the condition $W_{\ell \ell} = \beta$, we require the derivative of
\begin{equation}
\braket{\Delta \hat y_\ell, \Delta \hat y_{\ell}} - 2 \lambda ( \tr{\mathbf E_\ell \mathbf P_{\ell}} - \beta), \label{eq:autoLagrange}
\end{equation}
with respect to $\mathbf E_\ell$ to vanish, and obtain the solution\footnote{Using matrix identities $\partial_{\mathbf E} \tr{\mathbf C\mathbf E\mathbf C\mathbf E} = 2 \mathbf C^T \mathbf E^T \mathbf C^T $.}
\begin{equation}
\mathbf E_\ell = \frac\lambda2 \mathbf C^{-1} \mathbf P_{\ell} \mathbf C^{-1}. \label{eq:autoEl}
\end{equation}
Finally, imposing $W_{\ell \ell}= \tr{\mathbf E_\ell \mathbf P_{\ell}} = \beta$ gives
\begin{equation}
\lambda \tr{ \mathbf C^{-1} \mathbf P_{\ell} \mathbf C^{-1} \mathbf P_{\ell}} = \beta .
\end{equation}
We choose $\beta$ such that $\lambda= 1$ and $\mathbf E_\ell$ is well defined. With this choice, the mode-mixing matrix $W_{\ell\ell'}$ is the Fisher information matrix
\begin{equation}
F_{\ell\ell'} = \frac12 \tr{ \mathbf C^{-1} \mathbf P_{\ell} \mathbf C^{-1} \mathbf P_{\ell'}}, \label{eq:autoFisher}
\end{equation}
with $\braket{ \Delta \hat y_\ell, \Delta \hat y_{\ell'}} = F_{\ell\ell'}$ and $\braket{\Delta \hat C_\ell, \Delta \hat C_{\ell'}} = [F^{-1}]_{\ell\ell'}$.
The $\mathbf E_\ell$ matrices are thus constructed such that the spectrum estimator has minimal variance. However, the QML estimator requires a precise knowledge of the pixel noise matrix $\mathbf N$ to cancel the bias term $b_\ell$ in Eq.~\eqref{eq:autoyl}. In practice, estimating the noise model of an experiment is difficult and requires an exquisite knowledge of instrument properties. In the next section, we develop a method that allows us to compute a cross-spectrum estimator that is unbiased independently of the choice of $\mathbf N$.
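As a sketch (not the authors' implementation), the QML construction of Eqs.~\eqref{eq:autoyl}, \eqref{eq:autoEl} and \eqref{eq:autoFisher} can be written with dense linear algebra; the random positive blocks standing in for $\mathbf P_\ell$ and the toy spectrum are placeholders. The final assertion checks unbiasedness at the level of ensemble averages, $\braket{\hat y_\ell}=\sum_{\ell'}F_{\ell\ell'}C_{\ell'}$:

```python
import numpy as np

def qml_build(P, N, C):
    """QML weights E_l = C^-1 P_l C^-1 / 2, noise bias b_l = tr(E_l N)
    and Fisher matrix F for a fiducial covariance C = S + N."""
    Ci = np.linalg.inv(C)
    E = [0.5 * Ci @ Pl @ Ci for Pl in P]
    b = np.array([np.trace(El @ N) for El in E])
    F = 0.5 * np.array([[np.trace(Ci @ Pl @ Ci @ Pq) for Pq in P] for Pl in P])
    return E, b, F

def qml_estimate(d, E, b, F):
    """Unbiased spectrum estimate C_hat = F^-1 (d^T E_l d - b_l)."""
    y = np.array([d @ El @ d for El in E]) - b
    return np.linalg.solve(F, y)

# toy setup: random symmetric positive 'P_l' blocks, white noise
rng = np.random.default_rng(1)
npix, nell = 8, 3
P = [(lambda M: M @ M.T)(rng.normal(size=(npix, npix))) for _ in range(nell)]
c_true = np.array([1.0, 0.5, 2.0])
S = sum(cl * Pl for cl, Pl in zip(c_true, P))
N = 0.1 * np.eye(npix)
E, b, F = qml_build(P, N, S + N)

# ensemble average of y_l is tr(E_l C) - b_l = sum_l' F_{ll'} c_l'
ybar = np.array([np.trace(El @ (S + N)) for El in E]) - b
assert np.allclose(np.linalg.solve(F, ybar), c_true)
```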
\subsection{\label{sec:xQMLmethod}xQML estimator}
Following the same formalism as for the 'auto'-spectrum QML estimator detailed in the previous section, we now consider two datasets $\mathbf{d}^A$ and $\mathbf{d}^B$ from which the pixel covariance matrix reads
\begin{equation}
\mathbf C^{AB}\equiv\braket{\mathbf{d}^A,{\mathbf{d}^B}^T}=\mathbf S + \mathbf N^{AB}.
\end{equation}
We assume uncorrelated noise between the two datasets, such that the cross pixel noise covariance matrix vanishes $\mathbf N^{AB}=0$.
The cross estimator now reads
\begin{equation}
\hat y_\ell^{AB} \equiv {\mathbf d^A}^T \mathbf E_\ell \mathbf d^B - b_\ell^{AB}, \label{eq:crossyl}
\end{equation}
with $b_\ell^{AB} = \tr { \mathbf E_\ell \mathbf N^{AB}} = 0$.
The covariance of the estimator is computed using Wick's theorem,
\begin{flalign}
&\braket{\Delta \hat y^{AB}_\ell, \Delta \hat y^{AB}_{\ell'}} \nonumber\\
&= \left[ \braket{d^A_i,d^A_k}\braket{d^B_j,d^B_n}\right. + \left.\braket{d^A_i,d^B_n}\braket{d^A_j,d^B_k} \right] \mathbf E_\ell^{ij}\mathbf E_{\ell'}^{kn}\nonumber \\
&= \tr{\mathbf C^{AA} \mathbf E_\ell \mathbf C^{BB} \mathbf E^T_{\ell'} + \mathbf C^{AB} \mathbf E_\ell \mathbf C^{AB} \mathbf E_{\ell'}}, \label{eq:crossCovyl}
\end{flalign}
where summation over the pixel indices $i,j,k,n$ is implied. The matrices $\mathbf C^{AA} = \mathbf S + \mathbf N^{AA}$ and $\mathbf C^{BB} = \mathbf S + \mathbf N^{BB}$ are the pixel covariance matrices of datasets $A$ and $B$, respectively. As in Eqs.~\eqref{eq:autoCl} and \eqref{eq:autoCovCl} for the QML method, the unbiased estimator reads
\begin{equation}
\hat C_\ell \equiv \sum_{\ell'} [W^{-1}]_{\ell\ell'} \hat y^{AB}_{\ell'}, \label{eq:crossCl}
\end{equation}
and its covariance
\begin{equation}
\braket{\Delta \hat C_{\ell}, \Delta \hat C_{\ell'}} = [W^{-1}]_{\ell\ell_1} \braket{\Delta \hat y^{AB}_{\ell_1}, \Delta \hat y^{AB}_{\ell_2}} [W^{-1}]_{\ell_2\ell'}.\label{eq:crossCovCl}
\end{equation}
As in Eq.~\eqref{eq:autoLagrange} for the QML, we seek the $\mathbf E_\ell$ matrices that minimize the estimator variance of Eq.~\eqref{eq:crossCovyl}. We obtain the equation\footnote{Using matrix identities $\partial_{\mathbf E} \tr{\mathbf A\mathbf E\mathbf B\mathbf E} = \mathbf A^T \mathbf E^T \mathbf B^T + \mathbf B^T \mathbf E^T \mathbf A^T$ and $\partial_{\mathbf E} \tr{\mathbf A\mathbf E\mathbf B \mathbf E^T} = \mathbf A^T \mathbf E \mathbf B^T + \mathbf A \mathbf E \mathbf B$.}
\begin{eqnarray}
& \mathbf C^{AA} \mathbf E_\ell \mathbf C^{BB} + \mathbf C^{AB} \mathbf E_\ell^T \mathbf C^{AB} = \lambda\mathbf P_{\ell}, \label{eq:Sylvester}
\end{eqnarray}
which is a generalized form of the Sylvester equation~\cite{de_teran_uniqueness_2016}. Although an exact solution exists, as discussed in Sec.~\ref{sec:Implementation}, it requires solving a system of $N_d^2$ equations, which quickly becomes computationally prohibitive for large datasets. For this reason, we derive an approximate solution by considering two extreme signal-to-noise ratio (SNR) regimes:
\setlist[description]{font=\normalfont\itshape\space}
\begin{description}
\item[\textsc{Hs}] High SNR, such that \\
$\mathbf S \gg \mathbf N$, and $\mathbf C^{AA} \sim \mathbf C^{BB} \sim \mathbf S$.
\item[\textsc{Ls}] Low SNR, such that \\
$\mathbf S \ll \mathbf N$, and $\mathbf C^{AA} \sim \mathbf N^{AA}$, $\mathbf C^{BB} \sim \mathbf N^{BB}$.
\end{description}
For both limits, Eq.~\eqref{eq:Sylvester} admits a solution of the form\footnote{We remark that when $\mathbf C^{AA} \sim \mathbf C^{BB}$, and more specifically for a high signal-to-noise ratio, $\mathbf E_\ell \simeq \mathbf E_\ell^T$.}
\begin{eqnarray}
\mathbf E_\ell \simeq \frac \lambda \alpha (\mathbf C^{AA})^{-1} \mathbf P_{\ell} (\mathbf C^{BB})^{-1}, \label{eq:crossEl}
\end{eqnarray}
where $\alpha$ is a normalization coefficient that depends on the SNR: $\alpha=2$ in the \textsc{Hs} regime and $\alpha=1$ in the \textsc{Ls} regime. The impact of the approximation made in Eq.~\eqref{eq:crossEl} on the spectrum variance is discussed in Sec.~\ref{sec:Implementation}. Finally, imposing $W_{\ell \ell} = \tr{\mathbf E_\ell \mathbf P_{\ell}} = \beta$ gives
\begin{equation}
\frac\lambda\alpha \tr{ (\mathbf C^{AA})^{-1} \mathbf P_{\ell} (\mathbf C^{BB})^{-1} \mathbf P_{\ell}} = \beta.
\end{equation}
We choose $\beta$ such that $\lambda/\alpha = 1/2$, so that we recover the QML solution for $A=B$. Inserting the $\mathbf E_\ell$ of Eq.~\eqref{eq:crossEl} into the mode-mixing matrix defined in Eq.~\eqref{eq:autoWindow}, one obtains
\begin{equation}
W_{\ell\ell'} = \frac 12 \tr{(\mathbf C^{AA})^{-1} \mathbf P_{\ell} (\mathbf C^{BB})^{-1} \mathbf P_{\ell'} }.\label{eq:crossFisher}
\end{equation}
Using Eqs.~\eqref{eq:crossCovyl},~\eqref{eq:crossCovCl}, \eqref{eq:crossEl} and~\eqref{eq:crossFisher}, the cross-spectrum estimator covariance reads
\begin{flalign}
\braket{\Delta \hat C_\ell, \Delta \hat C_{\ell'}} &= \frac12 [W^{-1}]_{\ell \ell_1} \left( W_{\ell_1 \ell_2} +G _{\ell_1 \ell_2} \right) [W^{-1}]_{\ell_2 \ell'} \nonumber \\
& = \frac12 \left ([ W^{-1}] _{\ell \ell'} + [ W^{-1}] _{\ell \ell_1} G _{\ell_1 \ell_2} [ W^{-1}] _{\ell_2 \ell'}\right) \nonumber \\
&\equiv V_{\ell \ell'}, \label{eq:crossCovClEl}
\end{flalign}
where we define
\begin{equation}
\begin{split}
G_{\ell \ell'} \equiv \frac12 \mathrm{Tr} \left[ (\mathbf C^{AA})^{-1} \mathbf P_{\ell} (\mathbf C^{BB})^{-1} \mathbf C^{AB} \right. \\ \times \left. (\mathbf C^{AA})^{-1} \mathbf P_{\ell'} (\mathbf C^{BB})^{-1} \mathbf C^{AB} \right]. \label{eq:crossGisher}
\end{split}
\end{equation}
In the \textsc{Hs} regime $G_{\ell\ell'} \sim W_{\ell \ell'}$, such that $V_{\ell\ell'} = [ W^{-1}] _{\ell \ell'}$. In the \textsc{Ls} regime, the second term $[W^{-1}] _{\ell \ell_1} G _{\ell_1 \ell_2} [ W^{-1}] _{\ell_2 \ell'}$ in Eq.~\eqref{eq:crossCovClEl} contributes only at second order to the cross-spectrum variance. As a representative example, the diagonal elements of these two terms are compared in Fig.~\ref{fig:FvsG} for the $EE$ and $BB$ spectra, with a $10\,\mu\rm K.arcmin$ noise level. With this choice, the $E$-mode is signal dominated and corresponds to the \textsc{Hs} regime, while the $B$-mode SNR is low for most multipoles ($\ell \gtrsim 10$) and corresponds to the \textsc{Ls} case.
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{{Figures/FvsG_FULL_Ns16_Nbins46_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.7_muK10.0}.pdf}
\caption{Diagonals of the covariance matrix terms $W_{\ell \ell_1}^{-1} G _{\ell_1 \ell_2} W_{\ell_2 \ell}^{-1}$ (dashed) and $W_{\ell \ell}^{-1}$ (plain) of Eq.~\eqref{eq:crossCovClEl}. $EE$ and $BB$ components are plotted in green and blue respectively. The noise level is $10\,\mu{\rm K . arcmin}$.\label{fig:FvsG}}
\end{figure}
We have thus defined a quadratic estimator based on dataset cross-correlation which does not require the subtraction of a noise bias. Moreover, we derived an approximation of the $\mathbf E_\ell$ matrices that minimizes its variance. We also recover the QML estimator when $A=B$, with a nonvanishing noise bias term $b_\ell$.
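As a concrete illustration, the approximate weights of Eq.~\eqref{eq:crossEl} and the mode-mixing matrix of Eq.~\eqref{eq:crossFisher} can be sketched in a few lines of \texttt{numpy}. The covariance and $\mathbf P_\ell$ matrices below are random toy inputs, not actual CMB quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ell = 8, 3

def random_spd(n):
    """Random symmetric positive-definite matrix (toy covariance)."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

C_AA, C_BB = random_spd(n_pix), random_spd(n_pix)
# Toy symmetric P_ell matrices standing in for the signal derivatives.
P = [a + a.T for a in rng.standard_normal((n_ell, n_pix, n_pix))]

iC_AA, iC_BB = np.linalg.inv(C_AA), np.linalg.inv(C_BB)

# Approximate weights E_ell = 1/2 C_AA^-1 P_ell C_BB^-1 (lambda/alpha = 1/2).
E = [0.5 * iC_AA @ P_l @ iC_BB for P_l in P]

# Mode-mixing matrix W_{ll'} = Tr[E_ell P_ell'].
W = np.array([[np.trace(E_l @ P_lp) for P_lp in P] for E_l in E])
```

Because all matrices involved are symmetric, $W_{\ell\ell'}$ comes out symmetric with a positive diagonal, as expected for a Fisher-like matrix.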
\subsection{\label{sec:Implementation}Implementation}
In this section we detail some important steps of the xQML implementation. We first discuss the pixel covariance matrix construction. We then derive an exact solution for the Sylvester Eq.~\eqref{eq:Sylvester}. Finally, we describe a method for binning the xQML spectrum estimator.
\subsubsection{Pixel covariance matrix}
The $\mathbf P_\ell$ matrices of Eq.~\eqref{eq:Smatrix} are directly multiplied by the pixel window and beam transfer functions of each dataset. We do not discuss the construction of these matrices; further details can be found in~\cite{tegmark_how_2001}.
The covariance matrix $\mathbf C$ introduced in Eq.~\eqref{eq:PixelCov} includes correlations between pixels for each of the Stokes parameters,
\begin{eqnarray}
\mathbf C = \begin{pmatrix} C^{TT} & C^{TQ} & C^{TU} \\ C^{QT} & C^{QQ} & C^{QU} \\ C^{UT} & C^{UQ} & C^{UU}\end{pmatrix}. \label{eq:Cstokes}
\end{eqnarray}
We can separate the temperature and polarization measurements by using an approximated pixel covariance matrix
\begin{eqnarray}
\tilde {\mathbf C} = \begin{pmatrix} C^{TT} & 0 & 0\\ 0 & C^{QQ} & C^{QU} \\ 0 & C^{UQ} & C^{UU}\end{pmatrix} .
\end{eqnarray}
This matrix does not mix temperature with polarization estimates. As a result, the $\hat C_\ell$ estimator is no longer optimal, although it remains an unbiased estimator of the true $C_\ell$. As shown in~\cite{tegmark_how_2001}, the price to pay is a slight error bar increase, of the order of one percent. With this choice, temperature and polarization analyses can be done completely separately. For the rest of this paper, we focus our analysis on polarization measurements only. The method can be implemented for temperature spectrum estimation following the same approach.
In Eq.~\eqref{eq:Smatrix}, the summation over $\ell$ is formally infinite. It can however be truncated at a given $\ell_{max}$ as long as the remaining contributions from $C_{\ell > \ell_{max}}$ are negligible. This can be ensured by smoothing the dataset $\mathbf d$ (e.g. by convolving the spectrum with a decreasing function). In the framework of our analysis, we simply generated CMB simulations while filtering out all $C_{\ell > \ell_{max}}$.
The xQML variance has been shown to be minimal when the fiducial $\tilde {\mathbf C}$ matrix is built from the true ${\mathbf C}$. In practice, it is not always possible to estimate the latter precisely. We can compute the estimator variance of Eq.~\eqref{eq:crossCovCl} for any fiducial $\tilde {\mathbf C}$:
\begin{equation}
\begin{split}
&\braket{\Delta \hat C_\ell, \Delta \hat C_{\ell'}} \equiv \\
&[\tilde W^{-1}]_{\ell \ell_1} \tr{\mathbf C^{AA} \tilde{\mathbf E}_{\ell_1} \mathbf C^{BB} \tilde{\mathbf E}^T_{\ell_2}} [\tilde W^{-1}]_{\ell_2 \ell'} \label{eq:CovClmc} \\
&+ [\tilde W^{-1}]_{\ell \ell_1} \tr{\mathbf C^{AB} \tilde{\mathbf E}_{\ell_1} \mathbf C^{AB} \tilde{\mathbf E}_{\ell_2}} [\tilde W^{-1}]_{\ell_2 \ell'},
\end{split}
\end{equation}
where $\tilde{\mathbf E}_\ell$ and $\tilde{\mathbf W}_{\ell\ell'}$ are computed using $\tilde {\mathbf C}$ in Eqs.~\eqref{eq:crossEl} and \eqref{eq:crossFisher}.
To estimate the impact of small deviations of the fiducial $\tilde {\mathbf C}$ from the true ${\mathbf C}$ on the spectrum estimation variance, we consider a simplified toy model with $\mathbf C^{AA} = \mathbf C^{BB} = \mathbf C$. We also restrict our calculation to the first term of Eq.~\eqref{eq:CovClmc}, since we showed that, depending on the noise level, the second term is either negligible or equal to the first one. Any small perturbation of the fiducial $\tilde {\mathbf C}$ around the true $\mathbf C$ can be written as
\begin{equation}
\tilde {\mathbf C} = \mathbf C + \mathbfcal E, \quad {\rm with }\quad \mathbfcal E \ll \mathbf C,
\end{equation}
and thus
\begin{equation}
\tilde {\mathbf C}^{-1} = \mathbf C^{-1} - \mathbfcal D, \; {\rm with }\; \mathbfcal D \equiv \mathbf C^{-1}\mathbfcal E\mathbf C^{-1} \ll \mathbf C^{-1}.
\end{equation}
At first order in $\mathbfcal D$,
\begin{equation}
\tr{\mathbf C \tilde{\mathbf E}_{\ell} \mathbf C \tilde{\mathbf E}_{\ell}} \simeq \tr{\mathbf C^{-1} \mathbf P_\ell \mathbf C^{-1} \mathbf P_{\ell} - 4 \mathbf C^{-1} \mathbf P_\ell \mathbfcal D \mathbf{P_{\ell}}},
\end{equation}
and
\begin{equation}
\tilde{ W}_{\ell\ell} \simeq \tr{\mathbf C^{-1} \mathbf P_\ell \mathbf C^{-1} \mathbf P_{\ell} - 2 \mathbf C^{-1} \mathbf P_\ell \mathbfcal D \mathbf{P_{\ell}} }.
\end{equation}
Inserting both expressions in Eq.~\eqref{eq:CovClmc},
\begin{flalign}
&\braket{\Delta \hat C_\ell, \Delta \hat C_{\ell'}} = \nonumber\\
&\frac{\tr{\mathbf C^{-1} \mathbf P_\ell \mathbf C^{-1} \mathbf P_{\ell}} - 4\tr{\mathbf C^{-1} \mathbf P_\ell \mathbfcal D \mathbf{P_{\ell}}} }{ \left(\tr{\mathbf C^{-1} \mathbf P_\ell \mathbf C^{-1} \mathbf P_{\ell} } - 2 \tr{\mathbf C^{-1} \mathbf P_\ell \mathbfcal D \mathbf{P_{\ell}}}\right)^2 } \\
& \simeq \tr{\mathbf C^{-1} \mathbf P_\ell \mathbf C^{-1} \mathbf P_{\ell}}^{-1} \\
& = V_{\ell \ell}.
\end{flalign}
We see that a fiducial $\tilde {\mathbf C}$ sufficiently close to the true ${\mathbf C}$ induces only second order deviations of the spectrum estimation variance from the optimal variance $V_{\ell\ell'}$. For a low SNR, the choice of the fiducial $\tilde C_\ell$ has little impact on $\tilde {\mathbf C}$. Conversely, for signal dominated datasets, deviations of $\tilde C_\ell$ can have a non-negligible impact on the spectrum error. A solution is to run the xQML method iteratively, as recommended in~\cite{tegmark_how_2001}, using the previous spectrum estimation as the new fiducial model. This applies especially to the tensor-to-scalar ratio and reionization fiducial parameters: we have found that the choice of their fiducial values, if far from the true ones, can greatly increase the large angular scale uncertainty of the $BB$ and $EE$ spectra.
However, even if the variance of the spectrum estimation is only slightly impacted when the fiducial $\tilde {\mathbf C}$ deviates from the true dataset covariance matrix, the analytical estimate of the variance $\tilde V_{\ell\ell'} = \frac12 \left ([ \tilde W^{-1}] _{\ell \ell'} + [ \tilde W^{-1}] _{\ell \ell_1} \tilde G _{\ell_1 \ell_2} [ \tilde W^{-1}] _{\ell_2 \ell'}\right)$ is biased. Taking, for example, $\tilde {\mathbf C} = \gamma \mathbf C$ implies $\braket{\Delta \hat C_\ell, \Delta \hat C_{\ell'}} = V_{\ell\ell'}$ but $\tilde V_{\ell\ell'} = \gamma^2 V_{\ell\ell'}$, for any constant $\gamma$. One must thus be cautious when estimating the spectrum variance analytically.
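This scaling bias is easy to check numerically. The following sketch uses a single toy multipole, restricts to the first term of Eq.~\eqref{eq:CovClmc} as in the derivation above, and verifies that a mis-scaled fiducial $\tilde{\mathbf C} = \gamma \mathbf C$ leaves the actual estimator variance unchanged while biasing the analytical estimate by $\gamma^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
a = rng.standard_normal((n, n))
C = a @ a.T + n * np.eye(n)            # toy "true" covariance
P = rng.standard_normal((n, n))
P = P + P.T                            # toy symmetric P_ell

def weights(cov):
    """E_ell and W_ll built from a (possibly fiducial) covariance."""
    ic = np.linalg.inv(cov)
    E = 0.5 * ic @ P @ ic
    return E, np.trace(E @ P)

gamma = 1.7
E_fid, W_fid = weights(gamma * C)      # mis-scaled fiducial
_, W_true = weights(C)

# Actual variance (first term of Eq. CovClmc): W~^-1 Tr[C E~ C E~] W~^-1.
var_actual = np.trace(C @ E_fid @ C @ E_fid) / W_fid**2
var_optimal = 0.5 / W_true             # V_ll for the true C
var_analytic = 0.5 / W_fid             # naive analytical estimate

assert np.isclose(var_actual, var_optimal)               # unchanged
assert np.isclose(var_analytic, gamma**2 * var_optimal)  # biased by gamma^2
```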
\subsubsection{\label{sec:ElApprox}Sylvester equation solution}
We discuss the approximate solution of Eq.~\eqref{eq:Sylvester} introduced in Sec.~\ref{sec:xQMLmethod}, also known as a generalized form of the Sylvester equation, and we compare it with the exact solution described in~\cite{de_teran_uniqueness_2016}. To find the exact solution, we use the Kronecker product property ${\rm vec}(\mathbf A \mathbf X \mathbf B) = (\mathbf B^T \otimes \mathbf A) {\rm vec}(\mathbf X)$, under the condition that the product $\mathbf A \mathbf X \mathbf B$ is well defined. The operator ${\rm vec}()$ vectorizes a matrix (by stacking its columns), and $\otimes$ is the Kronecker matrix product. We also introduce the permutation matrix $\Pi$ such that ${\rm vec} (\mathbf X^T)=\Pi\,{\rm vec} (\mathbf X)$. One can show that ${\rm vec}(\mathbf A \mathbf X^T \mathbf B) = \Pi\,(\mathbf A^T \otimes \mathbf B) {\rm vec}(\mathbf X)$~\cite{horn_topics_1991}. The Sylvester Eq.~\eqref{eq:Sylvester} can thus be written as a set of linear equations
\begin{equation}
\begin{split}
\left[\mathbf C^{BB} \otimes \mathbf C^{AA} + \Pi (\mathbf C^{AB}\otimes \mathbf C^{AB})\right]{\rm vec}(\mathbf E_\ell) \\
= {\rm vec}(\mathbf P_{\ell}). \label{eq:vecSylvester}
\end{split}
\end{equation}
We can then solve it exactly for ${\rm vec}(\mathbf E_\ell)$ using a least-squares method. However, the system is of dimension $N_d^2$, which quickly becomes computationally costly for large datasets. Using Eq.~\eqref{eq:CovClmc} with the approximate solution of Eq.~\eqref{eq:crossEl} as $\tilde{\mathbf E}_\ell$, we find that the deviation of the spectrum variance from the minimal one obtained with the exact solution of Eq.~\eqref{eq:vecSylvester} is of the order of $2\%$ in the worst case, when the signal and the noise level are of the same order. We can thus safely use the approximate solution of Eq.~\eqref{eq:crossEl} for the implementation of the xQML method.
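For completeness, the exact solution of Eq.~\eqref{eq:vecSylvester} can be sketched as follows. The commutation matrix $\Pi$ is built explicitly, and the toy covariances assume uncorrelated noise between datasets so that $\mathbf C^{AB} = \mathbf S$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def spd(scale):
    """Random symmetric positive-definite toy matrix."""
    a = rng.standard_normal((n, n))
    return scale * (a @ a.T) + n * np.eye(n)

S = spd(1.0)                             # toy signal covariance
C_AA, C_BB = S + spd(0.5), S + spd(0.5)  # add independent toy noise
C_AB = S                                 # uncorrelated noise: C^AB = S
P = rng.standard_normal((n, n))
P = P + P.T                              # toy P_ell

def commutation(n):
    """Pi such that vec(X.T) = Pi @ vec(X), with column-major vec()."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j + n * i, i + n * j] = 1.0
    return K

Pi = commutation(n)
# Linear system of Eq. (vecSylvester), dimension n^2 x n^2.
M = np.kron(C_BB, C_AA) + Pi @ np.kron(C_AB, C_AB)
vec_E = np.linalg.solve(M, P.flatten(order="F"))
E = vec_E.reshape((n, n), order="F")

# Check: E satisfies the generalized Sylvester equation.
residual = C_AA @ E @ C_BB + C_AB @ E.T @ C_AB - P
assert np.allclose(residual, 0.0)
```

Note the column-major (`order="F"`) vectorization, which is the convention under which ${\rm vec}(\mathbf A \mathbf X \mathbf B) = (\mathbf B^T \otimes \mathbf A)\,{\rm vec}(\mathbf X)$ holds.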
\subsubsection{\label{sec:Binning}Binning}
CMB observations are only available on a limited sky fraction and, as a result, individual multipoles can be strongly correlated when reconstructing the CMB spectra. It is thus convenient to bin the power spectra into multipole band powers, labeled $b$ hereafter. We define the binning operators
\begin{equation}
R_{b\ell} = \begin{cases} \Delta_b^{-1} & \text{if } \ell\in b \\ 0 & \text{otherwise} \end{cases}, \quad Q_{\ell b} = \begin{cases} 1 & \text{if } \ell\in b \\ 0 & \text{otherwise} \end{cases}\label{eq:binoperator},
\end{equation}
with $\Delta_{b}$ the width of the $b$th bin, which can be varied from one bin to another. The binned estimator is written
\begin{equation}
\hat y_b \equiv \sum_{\ell} R_{b \ell} \hat y_\ell, \label{eq:binyl}
\end{equation}
for which the covariance reads
\begin{flalign}
\braket{\Delta \hat y_b, \Delta y_{b'}} = & \tr{\mathbf C^{AA} \mathbf E_b \mathbf C^{BB} \mathbf E^T_{b'}+\mathbf C^{AB} \mathbf E_b \mathbf C^{AB} \mathbf E_{b'}} \nonumber\\
= & \frac 1{2 \Delta_{b'}} \left ( W_{b b'}+ G _{b b'}\right) \label{eq:CovylBin}
\end{flalign}
with $ W_{b b'} = R_{b\ell} W_{\ell \ell'} Q_{\ell' b'}$, and $ G_{b b'} = R_{b\ell} G_{\ell \ell'} Q_{\ell' b'}$. The true binned spectrum is thus
\begin{equation}
C_b \equiv \sum_{\ell,\ell',b'} [W^{-1}]_{b b'} R_{b'\ell} W_{\ell \ell'} C_{\ell'},
\end{equation}
and its unbiased binned estimation becomes
\begin{equation}
\hat C_b \equiv \sum_{b'} [W^{-1}]_{b b'} \hat y_{b'}, \label{eq:binCl}
\end{equation}
with covariance
\begin{equation}
V_{bb'} =\frac 1{2 \Delta_{b'}} \left ( [W^{-1}]_{bb'}+[W^{-1}]_{bb_1} G _{b_1 b_2} [W^{-1}]_{b_2b'} \right). \label{eq:binCovCl}
\end{equation}
We remark that the binning can also be achieved by computing $\mathbf P_{b} \equiv \sum_{\ell\in b} \mathbf P_{\ell}$ directly (without the normalization term $\Delta_{b}$), or equivalently $\mathbf P_{b} \equiv \sum_{\ell} \mathbf P_{\ell} Q_{\ell b}$. With this definition of $\mathbf P_{b}$, the xQML components can be computed as defined in Eqs.~\eqref{eq:crossyl}, \eqref{eq:crossCl}, \eqref{eq:crossEl} and \eqref{eq:crossFisher} for the spectrum estimate $\hat C_\ell$, and in Eqs.~\eqref{eq:crossCovClEl}, \eqref{eq:crossGisher} and \eqref{eq:crossFisher} for its analytical covariance (replacing all subscripts $\ell$ by $b$). This method is computationally more efficient than the one presented above.
\begin{figure*}[htb]
\includegraphics[width=0.49\textwidth]{{Figures/ThvsMC_FULL_Ns16_Nbins46_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.7_muKarcm1.0_nsimu100000EEBB}.pdf}
\includegraphics[width=0.49\textwidth]{{Figures/ThvsMC_BICEPivw_Ns128_Nbins25.48_fwhmdeg0.5_r0.001_fsky0.0159_muKarcm1.0_nsimu100000EEBB}.pdf}
\caption{$EE$ (green) and $BB$ (blue) mean power spectra xQML estimates $\ell(\ell+1)/2\pi \cdot \braket{\hat C_\ell}$, and residues $R_\ell[\hat C_\ell]$ from Eq.~\eqref{eq:ResidSpect}, computed from $n_{\rm MC}=10^5$ Monte-Carlo (MC) simulations. Spectra models are plotted as black solid lines. The left panel corresponds to the reionization survey simulations ($n_{side} = 16$, $f_{\rm sky}\simeq 70\%$), the right panel to the recombination survey simulations ($n_{side} = 128$, $f_{\rm sky}\simeq 1\%$). The noise level is $\sigma_n = 1\,\mu \rm K.arcmin$ for both surveys.\label{fig:ThvsMC}}
\end{figure*}
\section{\label{sec:MCSimu}Monte Carlo Simulations}
In this section we describe the two simulated surveys on which we test the xQML estimator. We first consider a full sky experiment aiming at the measurement of the reionization signal ($\ell \lesssim 10$). The foreground contaminations are assumed to be removed, and their residuals, which are assumed to be strong in the galactic plane, are masked. The second survey covers a smaller sky fraction, aiming at the measurement of the recombination bump ($\ell \simeq 100$), and for which the foreground contamination is assumed to be removed. The sky fractions of both surveys are shown in Fig.~\ref{fig:Masks}. We generate $n_{\rm MC} = 10^5$ CMB simulations from the {\it Planck} 2015 best fit spectrum model~\cite{planck_collaboration_planck_2016} shown in Fig.~\ref{fig:ClthVSmuK}, with a tensor-to-scalar ratio $r=10^{-3}$ and a reionization optical depth $\tau=0.06$. The two surveys are treated completely independently. For each of them, we cross-correlate two simulated maps (as usually obtained through data splits), with noise levels $0.1 \leq \sigma_n \leq 50\,\mu\rm K.arcmin$ as indicated in Fig.~\ref{fig:ClthVSmuK}. This choice roughly covers the characteristics of future ground experiments such as CMB Stage 4 (S4)~\cite{abazajian_cmb-s4_2016} ($\sim 1\,\mu\rm K.arcmin$), of satellites such as {\it LiteBIRD}, {\it CORE}, and {\it PICO} (between $1$ and $5\,\mu\rm K.arcmin$)~\cite{matsumura_mission_2014,delabrouille_exploring_2018,de_zotti_prospects_2018}, up to the {\it Planck} noise level (around $50\,\mu\rm K.arcmin$)~\cite{the_planck_collaboration_scientific_2006}.
\subsection{Reionization survey}
For the large angular scale analysis, referred to as the 'reionization survey', we consider an observed sky fraction $f_{\rm sky} \simeq 70\%$. A binary mask is built from the $353\,\rm GHz$ {\it Planck} polarization maps by removing the pixels with the highest polarization amplitude $(Q^2+U^2)^{1/2}$, which accurately traces the galactic polarized dust. We follow the instrumental specifications of the satellite mission {\it LiteBIRD}~\cite{matsumura_mission_2014}, considering a beam-width of $0.5\,\rm deg$ and white homogeneous noise. The analysis is done at the map resolution {$ n_{side} = 16$}, over the multipole range $\ell\in[2,47]$.
\subsection{Recombination survey}
The 'recombination survey' sky patch is based on the public {\it BICEP2}~\cite{noauthor_keck_nodate} apodized mask $M \in [0,1]$. We build a binary mask using all pixels $i$ for which $M_i \geq 0.1$. Rather than considering homogeneous noise as for the reionization survey, we apply an inverse-weighted noise distribution based on the mask $M$. The effective sky fraction is therefore $f_{\rm sky} = (\sum_i{M_i^2})^2/\sum_i M_i^4 \simeq 1\%$, as defined in~\cite{hivon_master_2002}. Our analysis is done at the map resolution {$ n_{side} = 128$}, with a beam-width of $0.5\,\rm deg$. Because of the limited sky fraction, individual multipoles are strongly correlated. We thus reconstruct the spectrum using the binning scheme described in Sec.~\ref{sec:Implementation}. We show the results starting from $\ell=48$ to account for the insensitivity of the survey to large angular scales, and we define $24$ bins up to {$\ell=383$} with $\Delta_{b} = 14$.
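The effective sky fraction above can be computed directly from the apodized mask. The sketch below uses a random toy mask rather than the actual {\it BICEP2} HEALPix map and, as an assumption on the normalization, divides the pixel sums by the total pixel number so that the result is a fraction:

```python
import numpy as np

def effective_fsky(M):
    """f_sky = (sum M_i^2)^2 / (N_pix * sum M_i^4), Hivon et al. convention."""
    M = np.asarray(M, dtype=float)
    return np.sum(M**2) ** 2 / (M.size * np.sum(M**4))

rng = np.random.default_rng(3)
M = rng.uniform(0.0, 1.0, size=1000)   # toy apodized mask in [0, 1]
M[M < 0.1] = 0.0                       # drop low-weight pixels

f_eff = effective_fsky(M)
assert 0.0 < f_eff <= 1.0

# For a binary mask, f_sky reduces to the fraction of unmasked pixels.
binary = (rng.uniform(size=1000) < 0.3).astype(float)
assert np.isclose(effective_fsky(binary), binary.mean())
```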
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{Figures/maskCombi.pdf}
\caption{Mollweide projection of the sky coverages for the reionization (yellow + blue areas) and the recombination (yellow area) surveys. The latter corresponds to the $\sim1\%$ sky fraction from the {\it BICEP2} public mask. The grey area corresponds to the $30\%$ of the sky where the {\it Planck} dust polarization amplitude is highest, mostly located in the galactic plane.\label{fig:Masks}}
\end{figure}
\begin{figure*}[!hbt]
\includegraphics[width=\columnwidth]{{Figures/Vars_FULL_Ns16_Nbins46_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.7BB}.pdf}
\includegraphics[width=\columnwidth]{{Figures/Vars_BICEPivw_Ns128_Nbins25.48_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.0159}.pdf}
\caption{Monte-Carlo (dots) and analytical (plain lines) errors of the polarization spectra $EE$ (top) and $BB$ (bottom), for the reionization (left) and recombination (right) surveys, with noise levels $0.1\leq \sigma_n \leq50\,\mu \rm K .\rm arcmin$.\label{fig:Vars}}
\end{figure*}
\subsection{\label{sec:results}Power spectra reconstruction}
We verify with simulations that the reconstructed power spectra are unbiased with respect to the input model $C_\ell$. From the central limit theorem we expect that, for large $n_{\rm MC}$, the mean spectrum residues
\begin{equation}
R_\ell[\hat C_\ell] \equiv \frac{C_\ell - \braket{\hat C_\ell}}{\sqrt{ \sigma^2(\hat C^{\rm MC}_\ell) / n_{\rm MC} }}\label{eq:ResidSpect}
\end{equation}
are normally distributed around zero for all $\ell$ if the spectra are unbiased, with $\sigma^2(\hat C_\ell^{\rm MC})/ n_{\rm MC}$ the MC variance of the mean spectra. We carefully checked that this is the case for all noise levels $0.1\leq \sigma_n \leq50\,\mu \rm K . arcmin$. Power spectra and their residues are shown in Fig.~\ref{fig:ThvsMC} for $1\,\mu \rm K.arcmin$. Given the residue distribution for $n_{\rm MC}=10^{5}$ simulations, we conclude that the spectrum bias is below one percent of the spectrum errors.
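The residue statistic of Eq.~\eqref{eq:ResidSpect} is straightforward to evaluate from a stack of MC spectra. Here the MC estimates are toy Gaussian draws around an arbitrary input model, not actual xQML outputs:

```python
import numpy as np

rng = np.random.default_rng(4)
n_mc, n_ell = 2000, 5
C_true = np.array([10.0, 8.0, 6.0, 4.0, 2.0])   # toy input spectrum

# Toy unbiased MC estimates with Gaussian scatter.
C_hat = C_true + 0.5 * rng.standard_normal((n_mc, n_ell))

mean = C_hat.mean(axis=0)
var = C_hat.var(axis=0, ddof=1)
# Residues in units of the error on the mean, Eq. (ResidSpect).
R = (C_true - mean) / np.sqrt(var / n_mc)

# For an unbiased estimator, R is ~N(0, 1): O(1), not O(sqrt(n_mc)).
assert np.all(np.abs(R) < 5.0)
```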
The MC spectrum variance and that derived analytically, $\sigma^2 (\hat C_\ell^{\rm ana}) = V_{\ell\ell}$ in Eq.~\eqref{eq:crossCovClEl}, are in excellent agreement, as displayed in Fig.~\ref{fig:Vars}. The covariance matrix, not shown here, is band diagonal over the whole multipole range, meaning that correlations are low and only occur between neighboring bins.
We have thus verified that the xQML spectrum reconstruction is unbiased, and that the MC covariance matches the analytical expression. The xQML thus gives a near minimal spectrum error.
\section{\label{sec:Leakage}\texorpdfstring{$E$-$B$}{} leakage}
\subsection{\label{sec:ModeMix}Modes mixing}
\begin{figure}[!h]
\includegraphics[width=0.9\columnwidth]{{Figures/WindowMat_FULL_Ns16_Nbins46_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.7_muKarcm1.0}.pdf}
\includegraphics[width=0.9\columnwidth]{{Figures/WindowMat_BICEPivw_Ns128_Nbins25.48_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.0159_muKarcm1.0}.pdf}
\caption{The normalized mode-mixing matrix $\bar W_{\ell\ell'}$ defined in Eq.~\eqref{eq:CorrW}, in log scale, for the reionization (top) and recombination (bottom) surveys, for $\sigma_n = 1\,\mu \rm K.arcmin$.\label{fig:WindowMat}}
\end{figure}
\begin{figure*}[!ht]
\includegraphics[width=0.49\textwidth]{{Figures/Leak_FULL_Ns16_Nbins46_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.7BB}.pdf}
\includegraphics[width=0.49\textwidth]{{Figures/Leak_BICEPivw_Ns128_Nbins25.48_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.0159BB}.pdf}
\caption{Top panels show the $BB$-spectrum uncertainty with variance leakage (solid) and without (dashed), for the reionization (left) and recombination (right) surveys, at noise levels $0.1\leq \sigma_n \leq50\,\mu\rm K . arcmin$.
Bottom panels quantify the absolute variance leakage, computed from $[\sigma (\hat C^{\rm leak}_\ell) - \sigma (\hat C^{\rm no leak}_\ell)]/\sigma(\hat C^{\rm no leak}_\ell)$.\label{fig:VarLeak}}
\end{figure*}
The mode-mixing matrix $W_{\ell\ell'}$ introduced in Eq.~\eqref{eq:autoWindow} quantifies the contribution of all $\ell'$-modes to the spectrum estimator at angular scale $\ell$. The rescaled matrix
\begin{equation}
\bar W_{\ell\ell'} = \frac{W_{\ell\ell'}}{\sqrt{W_{\ell\ell}W_{\ell'\ell'}}},\label{eq:CorrW}
\end{equation}
is displayed in Fig.~\ref{fig:WindowMat} in log scale for $\sigma_n = 1\,\mu \rm K.arcmin$. The off-diagonal blocks quantify the $E/B$ mode mixing, also known as polarization leakage. This mixing appears as soon as maps are partially masked, making some modes ambiguously belong to both $E$ and $B$ polarization patterns.
We remark that the $E/B$ mixing is on average very low. Most of the $E$-to-$B$ leakage is localized at $\ell\lesssim 10$ for the reionization survey. The recombination survey also suffers from an increase of the polarization mixing at $\ell\gtrsim 250$. This effect is caused by the pixel resolution of the maps: it appears when the multipole angular scale is close to the typical pixel scale, and disappears as soon as the pixel resolution of the datasets is increased. The effect remains however very small; for the multipole ranges of interest, it induces a negligible increase of variance, as shown hereafter.
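The rescaling of Eq.~\eqref{eq:CorrW} is a simple diagonal normalization. With a hypothetical mode-mixing matrix:

```python
import numpy as np

# Hypothetical mode-mixing matrix with a dominant diagonal.
W = np.array([[4.0, 0.4, 0.1],
              [0.4, 2.0, 0.2],
              [0.1, 0.2, 1.0]])

d = np.sqrt(np.diag(W))
W_bar = W / np.outer(d, d)            # Eq. (CorrW)

assert np.allclose(np.diag(W_bar), 1.0)
assert np.allclose(W_bar, W_bar.T)
```

The normalized matrix has unit diagonal by construction, so its off-diagonal entries directly measure the relative mode coupling.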
\subsection{\label{sec:VarianceLeakage}Variance induced leakage}
Because of polarization leakage, the respective uncertainties of the $E$ and $B$ modes contribute to each other's variance. For noise dominated datasets, this variance leakage has a small impact, since both polarizations have the same noise and their mutual contributions are equivalent. Conversely, when the noise is much below the signal level, the uncertainty is limited by the intrinsic 'cosmic variance', arising from the finite number of modes that can be sampled on the sky. The $E$-mode signal, and thus its cosmic variance, is much higher than that of the $B$-modes. As a consequence, even for a small polarization mixing, the impact of the $E$-to-$B$ variance leakage can become non-negligible.
Since, by construction, the error of the xQML estimator is minimal, it also minimizes the amount of variance leakage. The $BB$ uncertainty is represented in Fig.~\ref{fig:VarLeak}, where we compare the cases with and without leakage. The latter is obtained by simulating CMB polarization maps with null $EE$ and $TE$ spectra. We also show the absolute level of variance leakage, $[\sigma (\hat C^{\rm leak}_\ell) - \sigma (\hat C^{\rm no leak}_\ell)]/\sigma(\hat C^{\rm no leak}_\ell)$. We observe that the recovered spectrum uncertainties for $\sigma_n=0.1$ and $\sigma_n=1\,\mu\rm K.arcmin$ are both mostly cosmic variance limited by the lensing $B$-mode signal. We also recover that the impact of the variance leakage becomes less important as the SNR decreases.
For the reionization survey, the variance leakage is maximal at large angular scales, with up to an $80\%$ increase of the uncertainty at $\ell \lesssim 10$, which quickly drops to $30\%$ for higher $\ell$'s. This is not surprising since, for this multipole range, both the $EE$ cosmic variance and the $E$-to-$B$ mixing in $\bar W_{\ell\ell'}$ are maximal.
For the recombination survey, the impact is maximal for the first bins. This is again related to the higher polarization mixing in $\bar W_{\ell\ell'}$ at those multipoles. It then drops to $20\%$ for $\ell\gtrsim90$, followed by a slight increase at $\ell\gtrsim250$, consistent with the $E/B$ mixing previously observed in $\bar W_{\ell\ell'}$ for this multipole range. The impact at low $\ell$'s remains however smaller, since the $E$-mode cosmic variance is much lower at those angular scales.
We conclude that, even if the mixing between polarization modes is minimized when using the xQML estimator, the induced variance increase can be non-negligible, especially at large angular scales.
\subsection{\label{sec:CompPpCl}Comparison with pseudo-spectra}
\begin{figure*}[!ht]
\includegraphics[width=0.49\textwidth]{{Figures/PuXp_opti_FULL_Ns16_Nbins46_r0.001_fsky0.7_muKarcm1.0BB_Sapo0.0_res32}.pdf}
\includegraphics[width=0.49\textwidth]{{Figures/PuXp_opti_BICEPivw_Ns128_Nbins25.48_r0.001_fsky0.0159_muKarcm1.0BB_Sapo0.0_res256}.pdf}
\caption{$BB$ spectrum errors from xQML, standard pCl estimators, PpCl (pure pCl) using the 'C2' and optimized apodization, for the reionization (left) and recombination (right) surveys, with a noise level $\sigma_n = 1\,\mu{\rm K . arcmin}$.}
\label{fig:PureVarBB}
\end{figure*}
In this section we compare the xQML with other methods, namely the standard pCl and the pure PpCl approach. The latter requires the mask and its first derivative to vanish on its boundaries in order to eliminate the polarization variance leakage. We follow the cross PpCl formalism described in~\cite{grain_polarized_2009}, using two mask apodization processes.
The first process isotropically applies the apodization function 'C2' defined in~\cite{grain_polarized_2009}, parametrized by the apodization length $\theta_*\,[\rm deg]$, to our binary masks. This parameter needs to be adapted to the SNR. We thus select, for each multipole, the mask apodization for which the mode estimate has minimal variance. The modes are then combined to reconstruct an unbiased spectrum estimation, whose covariance matrix can be evaluated using MC simulations. This process has to be repeated for each noise level.
An optimized apodization process, proposed in~\cite{smith_general_2007}, consists of finding the window functions that minimize the total (noise and leakage) $B$ pseudo-spectrum variance. This is achieved by implementing an iterative preconditioned conjugate gradient solver as in~\cite{smith_general_2007, grain_polarized_2009}. In this framework, the relation between the mask and its derivative required by the pure method is relaxed. The optimization is performed over bins of multipoles.
The mixing kernel that allows us to recover the spectra from the pseudo-spectra is then built according to each bin optimization.
The uncertainties on the reconstructed $B$-mode power spectrum for the methods introduced above are illustrated in Fig.~\ref{fig:PureVarBB}, based on $10^4$ MC simulations. The standard pCl leads to higher uncertainties, for which the $E$-mode variance leakage contribution is particularly visible in the recombination survey $B$-mode variance.
The pure PpCl methods recover much lower error bars. The two apodizations give similar results at high multipoles, but the optimized apodization is required to obtain better results at large angular scales.
Nevertheless, the pixel domain cross-correlation xQML provides the lowest spectrum uncertainty over the whole multipole range. This is particularly true at large angular scales, and the advantage even increases as the tensor-to-scalar ratio $r$ decreases.
We conclude that the xQML method is particularly suited for reducing the $B$-mode variance leakage in large angular scale analyses compared to the PpCl approach. It produces smaller error bars and does not require mask apodization optimizations. This is of special interest for the detection of primordial $B$-modes.
\section{\label{sec:EBspec}\texorpdfstring{$E$-$B$}{} correlation spectrum}
Although first order primordial $E$-$B$ and $T$-$B$ correlations are predicted to be null in the framework of the $\Lambda\rm CDM$ model, nonstandard cosmological mechanisms, such as cosmic birefringence, could induce nonzero correlation spectra~\cite{lue_cosmological_1999,carroll_quintessence_1998,loeb_faraday_1996,kahniashvili_effects_2005,campanelli_faraday_2004,caprini_cosmic_2004,pogosian_signatures_2002}. In addition to providing an important probe of nonstandard physics, measuring the $EB$ and $TB$ spectra could also help to diagnose instrumental systematic effects~\cite{yadav_primordial_2010,hu_benchmark_2003}.
We focus on the $E$-$B$ correlation, for which we compute the $EB+BE$ spectrum variance. The rescaled mode-mixing matrix introduced in Eq.~\eqref{eq:CorrW} is extended to the $EB$ multipoles, as displayed in Fig.~\ref{fig:WindowMatEB} for $1\,\mu\rm K.arcmin$. Apart from a negligible resolution effect at high $\ell'$'s, we observe no mixing between $EB$ and $EE,BB$ when using the xQML method. Note, however, that this statement does not hold for models with a nonzero fiducial $\tilde C^{EB}_\ell$.
\begin{figure*}[!ht]
\includegraphics[width=0.49\textwidth]{{Figures/PuXp_opti_FULL_Ns16_Nbins46_r0.001_fsky0.7_muKarcm1.0EB_Sapo0.0_res32}.pdf}
\includegraphics[width=0.49\textwidth]{{Figures/PuXp_opti_BICEPivw_Ns128_Nbins25.48_r0.001_fsky0.0159_muKarcm1.0EB_Sapo0.0_res256}.pdf}
\caption{$EB$ spectrum errors from xQML, standard pCl estimators, and PpCl (pure pCl) using the 'C2' and optimized apodization, for the reionization (left) and recombination (right) surveys, with a noise level $\sigma_n = 1\,\mu \rm K . arcmin$.}
\label{fig:PureVarEB}
\end{figure*}
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{{Figures/WindowMat_FULL_Ns16_Nbins46_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.7_muKarcm1.0EB}.pdf}\\~\\
\includegraphics[width=\columnwidth]{{Figures/WindowMat_BICEPivw_Ns128_Nbins25.48_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.0159_muKarcm1.0EB}.pdf}
\caption{The normalized mode-mixing matrix $\bar W_{\ell\ell'}$ defined in Eq.~\eqref{eq:CorrW}, in log scale, for the reionization (top) and recombination (bottom) surveys, for $\sigma_n = 1\,\mu \rm K.arcmin$.\label{fig:WindowMatEB}}
\end{figure}
As in the previous section for the $BB$ uncertainty, we compare our results with the pCl and PpCl methods. The latter is computed using the hybrid approach proposed in~\cite{grain_cmb_2012}, where the $E$-modes are obtained using the standard pseudo-spectrum, and the $B$-modes using the pure method. Variances are shown in Fig.~\ref{fig:PureVarEB} for $1\,\mu\rm K.arcmin$.
The PpCl uncertainty is about $20\%$-$60\%$ higher than that of the xQML for the reionization survey. Longer mask apodization lengths improve the PpCl error for $\ell\lesssim 10$.
On the recombination survey, the xQML gives significantly lower $EB$ uncertainty only for $\ell\lesssim 100$.
The conclusion is similar to that of the $BB$-spectrum analysis: the xQML method provides an efficient estimator for large angular scale analyses.
\section{\label{sec:Conclusion}Conclusion}
In this paper, we derived a pixel-based spectrum estimator that allows us to cross-correlate CMB datasets. The method is very similar to the QML, but does not require precise knowledge of the datasets' noise covariance matrices to subtract the noise bias. We also provided an approximation to the Sylvester equation that has little impact on the optimality of the estimator, which, by construction, provides near-minimal error bars. The estimator variance is shown to be sensitive only to second order perturbations of the fiducial pixel covariance matrix. Moreover, by omitting the $TQ$ and $TU$ correlations in the construction of this matrix, temperature and polarization analyses can be carried out completely separately.
We showed that the xQML estimator is unbiased, and that the error bars on the recovered spectrum, obtained from Monte-Carlo simulations, correspond to the analytically derived variance. We presented two CMB surveys targeting the measurement of the reionization and recombination polarization signals, with a fiducial tensor-to-scalar ratio $r = 10^{-3}$. The source of polarization leakage can be identified in the mode-mixing matrix $W_{\ell\ell'}$. We showed in Sec.~\ref{sec:Leakage} that it is consistent with the increase of variance in $B$-modes when compared to the no-leakage case. The reionization survey $BB$ uncertainty at low noise levels is particularly impacted by the polarization mixing, with a maximum increase of $80\%$ for large angular scales at $0.1$ - $1\,\mu\rm K.arcmin$. Since the xQML method minimizes bin correlations as well as polarization mixing, the resulting error bars correspond to the minimal uncertainty achievable when aiming at polarization variance leakage reduction.
Comparison with the pure pseudo-spectrum formalism shows a significant improvement of the error bars and correlations for both $BB$ and $EB$ when using the xQML method. The particular advantage relative to pure methods is that it does not require any special mask apodization processing. However, due to its higher computational cost ($\mathcal O (N_{\rm d}^3)$ operations) relative to pseudo-spectra ($\mathcal O (N_{\rm d}^{3/2})$ operations), the xQML cannot be run on as many multipoles as the pseudo-spectra. For these reasons, the xQML estimator is particularly suited for large and intermediate angular scale analyses.
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{{Figures/sigR_xpureLol_FULL_Ns16_Nbins46_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.7BB}.pdf}
\includegraphics[width=\columnwidth]{{Figures/sigR_xpureLol_BICEPivw_Ns128_Nbins25.48_Slmaxmul3_fwhmdeg0.5_r0.001_fsky0.0159BB}.pdf}
\caption{Error on the tensor-to-scalar ratio with a fiducial $ r=10^{-3}$, for the reionization (top) and recombination (bottom) surveys as a function of the noise level, $0.1 \leq \sigma_n \leq 20\,\mu\rm K.arcmin$. We compare the uncertainty obtained from the standard pseudo-spectrum (blue), the pure pseudo-spectrum using C2 (magenta) and optimized apodization (orange), the cross quadratic pixel-based estimator (green), and the mode-counting formula (red).}\label{fig:sigR}
\end{figure}
As a forecast analysis, we show in Fig.~\ref{fig:sigR} the uncertainty on $r$, obtained from each method introduced previously, as a function of the noise level. We also compare with the mode-counting formula\footnote{That is, $\sigma^2_{\rm m.c.} = \left[ 2 \tilde C_\ell^2 + \tilde C_\ell(N_\ell^{A} + N_\ell^{B}) + N_\ell^{A} N_\ell^{B} \right]/\left[ \ell(\ell+1) \Delta_\ell f_{\rm sky} \right] $, where $\tilde C_\ell$ is the fiducial power spectrum model and $N_\ell = n_\ell/B_\ell^2$ is the noise spectrum of the dataset corrected for the corresponding beam function $B_\ell$.
}, which gives a naive estimate of the lowest achievable variance, neglecting correlations and leakage induced by the sky coverage. We use the spectrum-based likelihood presented in~\cite{mangilli_large-scale_2015}, which is a cross-spectra extended version of the low-multipole Hamimeche and Lewis likelihood~\cite{hamimeche_likelihood_2008}. The pure method covariance matrix is computed using MC simulations as described in Sec.~\ref{sec:CompPpCl}. We consider only two datasets, with no foreground contamination or residuals, and no de-lensing. For low SNR, the impact of the polarization mixing is small, and all (standard and pure) pseudo-spectrum methods give the same error on $r$. For high SNR, the uncertainty on $r$ is cosmic variance limited, which corresponds to the plateau from $\sigma_n=0.1$ to $\sigma_n=1\,\mu\rm K . arcmin$. In this range of noise levels, the pure pseudo-spectrum method with optimized apodization and the xQML give the same uncertainty on $r$ for the recombination survey, while the xQML uncertainty is $\sim20\%$ lower than that of the optimized PpCl method for the reionization survey.
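To make the footnote's formula concrete, the mode-counting variance can be sketched in a few lines of Python; the toy spectra, noise level, and sky fraction below are illustrative assumptions, not the survey configurations used in the paper.

```python
import numpy as np

def mode_counting_var(cl, nl_a, nl_b, delta_ell, fsky, ell):
    """Per-multipole mode-counting variance, following the footnote:
    [2 Cl^2 + Cl (Nl_A + Nl_B) + Nl_A Nl_B] / [l (l+1) Delta_l fsky]."""
    numerator = 2.0 * cl**2 + cl * (nl_a + nl_b) + nl_a * nl_b
    return numerator / (ell * (ell + 1.0) * delta_ell * fsky)

# Illustrative toy spectra (assumed values, not the fiducial model)
ell = np.arange(2, 50, dtype=float)
cl = 1e-5 / (ell * (ell + 1.0))     # toy BB-like fiducial spectrum
nl = np.full_like(cl, 1e-6)         # flat toy noise for both datasets
var = mode_counting_var(cl, nl, nl, delta_ell=1.0, fsky=0.7, ell=ell)
```

In a forecast, such per-$\ell$ variances would then be propagated through the likelihood to an uncertainty on $r$.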
\begin{acknowledgments}
The authors would like to thank G. Efstathiou and J. Grain for the helpful discussions about the xQML and pure methods. We also thank H. Taeter for his help leading to the identification of the Sylvester equation. Some of the results in this paper have been derived using the HEALPix \cite{healpix:_2018} package, the NaMaster~\cite{alonso_unified_2018} package, and the Xpure package~\cite{trsitram_xpure:_2018}.
\end{acknowledgments}
\clearpage
\bibliographystyle{apsrev}
In a spin-singlet superconductor under an applied
magnetic field, when the Pauli depairing effect dominates over the orbital depairing effect,
a spatially modulated superconducting order parameter
is stabilized.\cite{LO,FF}
It has been discussed recently that this inhomogeneous superconducting state, called
the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state,
may be realized in
the heavy fermion system CeCoIn$_5$ and in some quasi-low-dimensional superconductors.\cite{FFLO,CeCoIn1,CeCoIn2, CeCoIn3,CeCoIn4,gloos,houz,gruen,adachi,Agterberg}
The possible realization of an analogous inhomogeneous superconducting state was also proposed for noncentrosymmetric superconductors, in which anti-symmetric spin-orbit interaction combined with the Zeeman magnetic field
stabilizes the helical vortex state.\cite{helical}
It is an important issue to establish the realization of
these exotic superconducting states in the above-mentioned systems experimentally.
From this perspective,
it is useful to study electromagnetic properties specific to the inhomogeneous superconducting states
in detail, which may be utilized for the experimental identification of the modulated order parameter.\cite{Rad,samokhin}
In this paper, we demonstrate that a spatially-varying superconducting order parameter characterizing the
inhomogeneous state gives rise to a distinct electromagnetic response caused
by topological Berry phase effects.\cite{niu}
In particular, under certain circumstances, a topological Hall effect can be induced by
a fictitious ``Lorentz force'' which is generated by the Berry phase effect associated with
the inhomogeneous order parameter.
It was discussed by Bruno et al. that for electrons interacting with spin textures which possess
a nonzero Berry curvature, the Hall effect is induced by the fictitious Lorentz force raised by
the Berry phase effect.\cite{bruno}
We here consider a possible analogous phenomenon in superconducting states with a spatially slowly-varying
order parameter.
We note that the topological Hall effect considered in this paper is
a transport property of quasiparticles, and we do not consider the Hall
effect associated with supercurrents here.\cite{kita}
Our approach is based upon
the quasiclassical method for the description of quasiparticle dynamics in
superconducting states.\cite{Eil,Larkin,vekhter}
We extend the quasiclassical Eilenberger equation to take into account important Berry phase effects.
The basic quantity with which we are concerned in the following argument is
the single-particle Green function for the superconducting state, from which
dynamical properties can be derived:
\begin{eqnarray}
\hat{G}(x_1,x_2)=
\left(
\begin{array}{cc}
G(x_1,x_2) & -F(x_1,x_2) \\
F^{\dagger}(x_1,x_2) & \bar{G}(x_1,x_2)
\end{array}
\right)
\end{eqnarray}
where $G(x_1,x_2)$ and $F(x_1,x_2)$ are, respectively, the normal and anomalous
Green functions, and
$\bar{G}(x,x')=G(x',x)$, $x_1=(\bm{r}_1,t_1)$ etc.
The spatial modulation of the superconducting order parameter is expressed in terms of
the center of mass coordinate $\bm{R}=\frac{\bm{r}_1+\bm{r}_2}{2}$.
Fourier transforming the relative coordinate $\bm{r}=\bm{r}_1-\bm{r}_2$,
we introduce the Wigner transformation of the Green function,
\begin{eqnarray}
{\hat{G}}(\bm{k},\bm{R},\varepsilon_n)=\int d\bm{r}\int^{\beta}_0 d\tau{\hat{G}}(\bm{R}+\frac{\bm{r}}{2},\frac{\tau}{2},\bm{R}-\frac{\bm{r}}{2},-\frac{\tau}{2})&&
\nonumber \\
\times e^{i\varepsilon_n\tau}e^{i\bm{k}\bm{r}},&&
\label{foug}
\end{eqnarray}
with $\varepsilon_n$ the fermionic Matsubara frequency.
When there is a vector potential $\bm{A}_0$,
the Gor'kov equation satisfied by ${\hat{G}}(\bm{k},\bm{R},\varepsilon_n)$ is
\begin{eqnarray}
\left[{\hat{\tau}}_3i\varepsilon_n-\varepsilon(\bm{k}-\frac{i}{2}\nabla_{\bm{R}}
-e\bm{A}_0\hat{\tau}_3
)+{\hat{\tau}}_3h-{\hat{\Delta}}
-{\hat{\Sigma}}\right] && \nonumber \\
\times{\hat{G}}(\bm{k},\bm{R},\varepsilon_n)={\hat{1}}&&
\label{gorkov}
\end{eqnarray}
where $\varepsilon(\bm{k})$ is the energy band dispersion for electrons, $h=\mu_{\rm B}H_z$ is the Zeeman magnetic field,
${\hat{\tau}}_\mu$ ($\mu=1,2,3$) is the Pauli matrix in the particle-hole space,
and $\hat{\Sigma}$ is the normal selfenergy matrix, which is diagonal in the particle-hole space and includes the effects of impurity scattering
and electron-electron interaction.
We consider the spin-singlet pairing state with the gap function,
\begin{eqnarray}
{\hat{\Delta}}(x,x')=
\left(
\begin{array}{cc}
0 & \Delta(\bm{R}) \\
-\Delta^{*}(\bm{R}) & 0
\end{array}
\right).
\end{eqnarray}
We expand the kinetic energy term of (\ref{gorkov}) in terms of the spatial gradient $\nabla_{\bm{R}}$, and transform the basis of the particle-hole space as
$\tilde{\hat{G}}=\hat{G}\hat{\tau}_3$. Then, Eq.(\ref{gorkov}) is rewritten as
\begin{eqnarray}
\left[i\varepsilon_n+{\hat{\tau}}_3\frac{i}{2}\bm{v}\nabla_{\bm{R}}+e\bm{v}\bm{A}_0
+h-{\hat{\bm{\tau}}}\cdot\hat{\bm{H}}_0
-{\hat{\tau}}_3{\hat{\Sigma}}\right] && \nonumber \\
\times\tilde{\hat{G}}(\bm{k},\bm{R},\varepsilon_n) ={\hat{1}} &&
\label{gorkov2}
\end{eqnarray}
with $\hat{\bm{H}}_0=(\Delta_1(\bm{R}),-\Delta_2(\bm{R}),\varepsilon(\bm{k}))$,
$\Delta_1({\bm{R}})={\rm Re}\Delta(\bm{R})$, $\Delta_2(\bm{R})={\rm Im}\Delta(\bm{R})$, and
$\bm{v}=\frac{\partial \varepsilon(\bm{k})}{\partial \bm{k}}$.
We diagonalize the fifth term $\hat{\bm{\tau}}\cdot\hat{\bm{H}}_0$ by applying the unitary transformation
$\tilde{\hat{G}}'=\hat{U}^{\dagger}(\bm{R})\tilde{\hat{G}}\hat{U}(\bm{R})$; i.e.
$\hat{U}^{\dagger}(\bm{R})\hat{\bm{\tau}}\cdot\hat{\bm{H}}_0\hat{U}(\bm{R})=E(\bm{k})\hat{\tau}_3$
and $E(\bm{k})=\sqrt{\varepsilon^2(\bm{k})+|\Delta(\bm{R})|^2}$:
\begin{eqnarray}
\bigl[i\varepsilon_n+\frac{i}{2}\hat{U}^{\dagger}{\hat{\tau}}_3\hat{U}\bm{v}\nabla_{\bm{R}}
+\frac{\bm{v}}{2}\hat{\bm{A}}_{\rm f}
+e\bm{v}\bm{A}_0+h-E(\bm{k})\hat{\tau}_3 &&\nonumber \\
-\hat{U}^{\dagger}{\hat{\tau}}_3{\hat{\Sigma}}\hat{U}\bigr]
\tilde{\hat{G}}' ={\hat{1}}. &&
\label{gorkov3}
\end{eqnarray}
Here, the unitary transformation applied to the spatial gradient term of (\ref{gorkov2}),
${\hat{\tau}}_3\frac{i}{2}\bm{v}\nabla_{\bm{R}}$,
gives rise to a fictitious vector potential
$\hat{\bm{A}}_{\rm f}=i\hat{U}^{\dagger}\hat{\tau}_3\nabla_{\bm{R}}\hat{U}$, which is a $2\times2$ matrix in the particle-hole space.
At this stage, we apply the adiabatic approximation, assuming that
the transition between the electron band with the energy $E(k)$ and the hole band with the energy $-E(k)$
is suppressed, and
neglect the off-diagonal terms of $\hat{U}^{\dagger}\hat{\tau}_3\hat{U}$ and $\hat{\bm{A}}_{\rm f}$;
i.e.
\begin{eqnarray}
\hat{U}^{\dagger}\hat{\tau}_3\hat{U}\rightarrow \frac{\varepsilon(k)}{E(k)}\hat{\tau}_3,
\label{utu}
\end{eqnarray}
\begin{eqnarray}
\hat{\bm{A}}_{\rm f}\rightarrow \frac{\varepsilon(k)}{E(k)}(\bm{A}_{\rm f1}+\bm{A}_{\rm f2}\hat{\tau}_3),
\label{ficvecapp}
\end{eqnarray}
with
$\bm{A}_{{\rm f} 1}=\frac{1}{2}(1-\frac{E(k)}{\varepsilon(k)})\nabla_{\bm{R}}\phi(\bm{R})$, and
$\bm{A}_{{\rm f} 2}=-\frac{i}{4}\frac{\nabla_{\bm{R}}|\Delta|^2}{E^2(k)}$.
Here $\phi(\bm{R})$ is the phase of the gap function; $\Delta(\bm{R})=|\Delta(\bm{R})|e^{i\phi(\bm{R})}$.
The approximations (\ref{utu}) and (\ref{ficvecapp}) are the most crucial steps in our argument for the realization of the Berry phase effect. The Berry phase effect appears when one restricts the Hilbert space to a sub-space in which the change of the phase of the wave function can be regarded as adiabatic. Here, we restrict the Hilbert space to the electron band or the hole band.
Then, suppressing the transition between the electron band and the hole band,
one can neglect the off-diagonal elements of Eq.(\ref{gorkov3}), as done in Eqs.(\ref{utu}) and (\ref{ficvecapp}). This approximation is valid at sufficiently low temperatures because of the energy difference between the electron band and the hole band due to the Zeeman splitting.
We will discuss the validity of this approximation in more detail in the last part of this paper.
As a result, the diagonal component (\ref{ficvecapp}) can be regarded as
fictitious U(1) gauge fields acting on quasiparticles.
Within this approximation, $\tilde{\hat{G}}'$ is diagonal; $\tilde{\hat{G}}'={\rm diag}(\tilde{G}'_{+},\tilde{G}'_{-})$.
Each component satisfies,
\begin{eqnarray}
\biggl[i\varepsilon_n-\sigma E(\bm{k}-\frac{i}{2}\nabla_{\bm{R}}-\sigma\frac{\bm{A}_{\rm f1}}{2}-\frac{\bm{A}_{\rm f2}}{2}-\sigma e\frac{E(k)}{\varepsilon(k)}\bm{A}_0) && \nonumber \\
+h-\tilde{\Sigma}_{\sigma}'\biggr]
\tilde{G}'_{\sigma}=1, &&
\label{gorkov-dia}
\end{eqnarray}
with $\sigma=\pm$, and $\tilde{\Sigma}_{\sigma}'$ is the diagonal component of $\hat{U}^{\dagger}\hat{\tau}_3
\hat{\Sigma}\hat{U}$.
In Eq.(\ref{gorkov-dia}), we have rewritten the derivative term and the vector potential terms into a
gauge invariant form.
To simplify the analysis, we assume that $\bm{A}_0$ is a time-dependent uniform field, which yields an electric field.
It is straightforward to generalize the following analysis to the case that $\bm{A}_0$ also produces a magnetic field.
To solve (\ref{gorkov-dia}) for $\tilde{G}'_{\sigma}$, we follow the quasiclassical approach developed by Eilenberger.
We subtract the left-hand Gor'kov equation from
the right-hand equation (\ref{gorkov-dia}),
expand the result in terms of the spatial gradient $\nabla_{\bm{R}}$
up to second order, and integrate each term over the energy dispersion $\varepsilon_k\equiv \varepsilon(\bm{k})$.
From the second term of (\ref{gorkov-dia}), we obtain,
\begin{eqnarray}
\int \frac{d\varepsilon_k}{\pi}\left[i\sigma\frac{\varepsilon_k}{E(k)}\bm{v}\tilde{\nabla}_{\bm{R}}\tilde{G}'_{\sigma}+
i(\bm{v}\times \bm{B}_{\rm f})\frac{\partial \tilde{G}'_{\sigma}}{\partial \bm{k}_{\parallel}}
\right],
\label{ficlor}
\end{eqnarray}
where $\bm{B}_{\rm f}$ is the Berry curvature $\bm{B}_{\rm f}=\nabla\times \bm{A}_{\rm f1}$, the explicit expression of which is
\begin{eqnarray}
(\bm{B}_{\rm f})_{\alpha}=-\epsilon_{\alpha\beta\gamma}\frac{1}{4E^2(k)}\frac{\partial |\Delta|^2}{\partial R_{\beta}}
\left(\frac{\partial \phi}{\partial R_{\gamma}}-2eA_{0\gamma}\right)
\label{ficb}
\end{eqnarray}
and $\tilde{\nabla}_{\bm{R}}$ is the derivative with respect to $\bm{R}$ under the constraint that
$E(k-\sigma\frac{\bm{A}_{\rm f1}}{2}-\frac{\bm{A}_{\rm f2}}{2}-\sigma e \frac{E_k}{\varepsilon_k}\bm{A}_0)$ is fixed.
$\bm{k}_{\parallel}$ is the momentum parallel to the Fermi surface.
Note that the $\bm{A}_{\rm f2}$ term of (\ref{gorkov-dia}) is a pure gauge, and does not give a nonzero Berry curvature.
The second term of Eq.(\ref{ficlor}) is the fictitious ``Lorentz force'' term, which originates from
the topological Berry phase effect induced by
the spatial modulation of the superconducting order parameter.
From (\ref{ficb}),
we see that the fictitious magnetic field is nonzero only
when both the amplitude and the phase of the superconducting gap
are spatially modulated.
Thus, the topological Hall effect does not occur for the Fulde-Ferrell state or the helical vortex phase, in which only the phase of the superconducting gap is modulated.\cite{FF,helical}
It is also noted that the fictitious magnetic field (\ref{ficb}) has a gauge-invariant form.
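Both properties, the vanishing of the field without simultaneous amplitude and phase modulation and its gauge invariance, can be checked numerically. The following Python sketch evaluates Eq.~(\ref{ficb}) for an FFLO-like gap $\Delta(\bm{R})=\Delta_0\cos qx$ with a uniform phase gradient along $y$; all parameter values are arbitrary assumptions ($\hbar=1$), and the sketch is illustrative rather than part of the derivation.

```python
import numpy as np

# Illustrative parameters (assumed): gap amplitude, FFLO wave vector,
# a fixed band energy epsilon(k), and the electron charge in units e = 1.
delta0, q, eps_k, e = 1.0, 0.5, 0.2, 1.0

def bf_z(x, dphi_dy, a0y=0.0):
    """(B_f)_z = -(1/4E^2) * d|Delta|^2/dx * (dphi/dy - 2 e A_{0y})
    for Delta(R) = delta0 cos(q x), phase gradient along y only."""
    amp2 = (delta0 * np.cos(q * x)) ** 2
    damp2_dx = -2.0 * q * delta0 ** 2 * np.cos(q * x) * np.sin(q * x)
    e_sq = eps_k ** 2 + amp2          # E^2(k) = epsilon^2 + |Delta|^2
    return -damp2_dx * (dphi_dy - 2.0 * e * a0y) / (4.0 * e_sq)
```

The field vanishes whenever the phase gradient term is absent, and it depends on the phase only through the gauge-invariant combination $\partial_y\phi - 2eA_{0y}$.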
In the standard quasiclassical approach, the Gor'kov equation is recast into the Eilenberger equation for
the normalized Green function,
\begin{eqnarray}
\tilde{g}'_{\sigma}=\int \frac{d\varepsilon_k}{\pi}{\tilde{G}'_{\sigma}}.
\end{eqnarray}
However, unfortunately, the first term of (\ref{ficlor}) cannot be expressed in terms of the normalized Green function because of the strongly varying factor $\varepsilon_k/E(k)$, which stems from the use of the transformed Green function $\tilde{\hat{G}}'$ instead of the standard Green function $\hat{G}$.
To avoid this difficulty, we restrict our argument to the case of a uniform current, and discard
this term.
For the second term of (\ref{ficlor}), we evaluate the integral over $\varepsilon_k$ in the following manner.
\begin{eqnarray}
\int \frac{d\varepsilon_k}{\pi}
(\bm{v}\times \bm{B}_{\rm f})\frac{\partial \tilde{G}'_{\sigma}}{\partial \bm{k}_{\parallel}}
\approx(\bm{v}\times\tilde{\bm{B}}_{\rm f})\frac{\partial \tilde{g}'_{\sigma}}{\partial \bm{k}_{\parallel}},
\label{lorentz}
\end{eqnarray}
with
\begin{eqnarray}
\tilde{\bm{B}}_{\rm f}=\left\{
\begin{array}{ll}
\bm{B}_{\rm f}|_{\varepsilon_k=0}&
\mbox{for } \Delta(\bm{R}) > h \\[0.2cm]
0& \mbox{for } \Delta(\bm{R}) < h.
\end{array}
\right.
\label{br}
\end{eqnarray}
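A minimal sketch of this piecewise replacement, reading the condition as one on the local gap magnitude relative to the Zeeman field $h$ (our reading), with \texttt{bf\_at\_fermi} standing in for Eq.~(\ref{ficb}) evaluated at $\varepsilon_k=0$ and all numbers purely illustrative:

```python
import numpy as np

def effective_field(delta_local, h, bf_at_fermi):
    """Keep the on-Fermi-surface Berry curvature where |Delta(R)| > h,
    and set the effective field to zero elsewhere, as in Eq. (br)."""
    return np.where(np.abs(delta_local) > h, bf_at_fermi, 0.0)

# Three illustrative positions along an FFLO modulation Delta = Delta0 cos(qx)
delta_local = np.array([1.2, 0.1, 0.8])
bf_at_fermi = np.array([0.3, 0.3, 0.3])
btilde = effective_field(delta_local, h=0.5, bf_at_fermi=bf_at_fermi)
```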
Then, the Eilenberger equation for the uniform current state satisfied by the normalized Green function
$\tilde{g}'_{\sigma}(\varepsilon_n,\varepsilon_{n'})$ under the vector potential $\bm{A}_0(t)=\bm{A}_0(\omega_0)e^{i\omega_0 t}$
is given by
\begin{eqnarray}
&& (i\varepsilon_n-i\varepsilon_{n'})\tilde{g}'_{\sigma}(\varepsilon_n,\varepsilon_{n'})
+i(\bm{v}\times\tilde{\bm{B}}_{\rm f})\cdot\nabla_{\bm{k}_{\parallel}}\tilde{g}'_{\sigma}(\varepsilon_n,\varepsilon_{n'})
\nonumber \\
&&+e\bm{v}\bm{A}_0(\omega_0)[\tilde{g}'_{\sigma}(\varepsilon_n-\omega_0,\varepsilon_{n'})
-\tilde{g}'_{\sigma}(\varepsilon_n,\varepsilon_{n'}+\omega_0)] \nonumber \\
&&-\tilde{\sigma}_{\sigma}(\varepsilon_n)\tilde{g}'_{\sigma}(\varepsilon_n,\varepsilon_{n'})+\tilde{g}'_{\sigma}(\varepsilon_n,\varepsilon_{n'})\tilde{\sigma}_{\sigma}(\varepsilon_{n'})=0.
\label{eil1}
\end{eqnarray}
Here $\tilde{\sigma}_{\sigma}(\varepsilon_n)$ is the normalized selfenergy $\tilde{\sigma}_{\sigma}=\int \frac{d\varepsilon_k}{\pi}\tilde{\Sigma}'_{\sigma}$. We have neglected effects of external fields on the selfenergy.
The second term of (\ref{eil1}) is the fictitious Lorentz force term which gives rise to the topological Hall effect.
In the following, to be concrete, we consider the case of the FFLO state
with the spatially modulated superconducting gap $\Delta(\bm{R})=\Delta_0\cos qx$,
which is believed to be realized in CeCoIn$_5$ under an applied magnetic field parallel
to the $x$-axis.\cite{CeCoIn1,CeCoIn2}
We examine the Hall current parallel to the $x$-axis induced by the topological Berry phase effect,
when an electric field $E_y$ is applied along the $y$-axis.
Note that since the Hall current is parallel to the external magnetic field in this situation, it is not difficult to distinguish experimentally between the topological Hall effect considered here and the ordinary Hall effect induced by the applied magnetic field.
As mentioned above, to obtain
the nonzero $\bm{B}_{\rm f}$, we need the spatial modulation of the phase of the superconducting gap $\phi$, as well as the amplitude modulation due to the FFLO state.
To fulfill this requirement, we consider the situation that the electric field $E_y$ induces a
supercurrent parallel to the $y$-axis, and thus
$\partial_y \phi -2eA_{0y}\neq 0$.
We solve Eq.(\ref{eil1}) for $\tilde{g}'_{\sigma}$ in the vicinity of
the superconducting transition temperature $T_c$.
It is easily seen from (\ref{eil1}) that $\tilde{g}'_{+}$ and $\tilde{g}'_{-}$ in the uniform current state are the same.
Thus, we obtain
the normalized Green function in the original particle-hole space
$\hat{g}={\rm diag}(g,\bar{g})=\tilde{g}'_{+}\hat{\tau}_3$.
Then, the expression for the Hall current for $T\sim T_c$ is
\begin{eqnarray}
J_x^{\rm Hall}&=&T\sum_{n}\int d\Omega_{\bm{k}}ev_x(g-\bar{g})|_{i\omega\rightarrow \omega+i\delta \atop \omega\rightarrow 0} \nonumber \\
&\approx& \frac{\sigma_n\tau\langle (\tilde{\bm{B}}_{\rm f})_z\rangle}{m}E_y.
\label{Hallc}
\end{eqnarray}
Here $\langle (\tilde{\bm{B}}_{\rm f})_z\rangle$ is the spatial average of the fictitious magnetic field, $\sigma_n$ is the normal state conductivity, and $\tau$
is the relaxation time of electrons.
Note that from the derivation described above, it is apparent that the topological Hall effect considered here
is a non-linear response to external fields.
The bulk Hall current obtained above is nonzero only when the spatial average of the fictitious field
$\tilde{\bm{B}}_{\rm f}$ is nonzero.
This implies that $\langle \partial_x|\Delta|\rangle=|\Delta(L_x)|-|\Delta(0)|\neq 0$; i.e.
the magnitude of the gap function at the two opposite edges must be different.
This condition crucially depends on extrinsic factors such as the geometry of
a sample used for the measurement of the Hall effect, and pinning of the
nodal plane of the FFLO state due to impurities.
These extrinsic factors, unfortunately, make it difficult to detect the Hall current
experimentally.
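This boundary sensitivity can be made explicit numerically. In the Python sketch below, the values of $\Delta_0$, $q$, and the sample lengths are arbitrary illustrative choices; the spatial average of $\partial_x|\Delta|$ reduces to the boundary term $(|\Delta(L_x)|-|\Delta(0)|)/L_x$ and vanishes whenever the gap magnitude matches at the two edges.

```python
import numpy as np

delta0, q = 1.0, np.pi   # |cos(pi x)| then has period 1 (illustrative values)

def mean_grad_abs_gap(L, n=4001):
    """Spatial average of d|Delta|/dx over [0, L] for Delta = delta0 cos(q x)."""
    x = np.linspace(0.0, L, n)
    f = np.abs(delta0 * np.cos(q * x))
    return np.mean(np.diff(f) / np.diff(x))

commensurate = mean_grad_abs_gap(L=2.0)   # edge magnitudes match
offset = mean_grad_abs_gap(L=2.3)         # edge magnitudes differ
```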
To avoid such extrinsic factors, one can use the STM measurement for the detection of the Hall effect.
Even when the condition $\langle \partial_x|\Delta|\rangle=|\Delta(L_x)|-|\Delta(0)|\neq 0$
is not satisfied, the fictitious Lorentz force induced by the Berry curvature is balanced by
the electrostatic force due to the topological Hall voltage which has a spatial dependence
$V_{\rm Hall}\sim \cos qx$ in the above-mentioned model.
This electrostatic field gives rise to the inhomogeneous charge redistribution, which may be observed on
the surface of the system via the STM measurement.
It should be cautioned that the charge redistribution raised by the topological Hall effect cannot be described by Eq.(\ref{Hallc}), because its derivation assumes that the current is spatially uniform.
However, a more promising approach for the detection of the topological Hall effect is to exploit
an electromagnetic wave $\bm{E}_0e^{i(\omega t-kx)}$ applied to a surface of the system.
For the setup considered above, the electromagnetic wave is a monochromatic plane wave
propagating along the $x$-axis, and is linearly polarized so that
the electric field $\bm{E}_0$ is parallel to the $y$-axis.
We consider the situation that this electromagnetic wave is applied in addition to
the static electric field parallel to the $y$-axis, which is required to
realize the nonzero fictitious field $\tilde{\bm{B}}_{\rm f} \neq 0$.
When the wave number $k$ is chosen to be equal to $q$,
the oscillating factor of the fictitious magnetic field $\tilde{\bm{B}}_{\rm f}$ is cancelled by
that of the electromagnetic field, and hence there is a net nonzero fictitious Lorentz force
acting on quasiparticles.
In this situation, we obtain the ac topological Hall current flowing along the $x$-direction on the surface,
which is easily detected.
Since the induced Hall current is uniform in this case, the derivation of the expression of the Hall conductivity
presented above is justified, and the Hall current
is given by (\ref{Hallc}) with $\langle (\tilde{\bm{B}}_{\rm f})_z\rangle$ replaced with
$\langle (\tilde{\bm{B}}_{\rm f})_z\cos qx\rangle$.
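The wave-vector matching underlying this cancellation can be illustrated schematically. In the sketch below, the oscillation of the fictitious field is modeled as a simple $\cos qx$ factor (a simplifying assumption) and the wave number and sampling grid are arbitrary: the plain spatial average vanishes, while weighting by a matched wave $k=q$ leaves a net value of $1/2$.

```python
import numpy as np

q = 2.0 * np.pi
x = np.linspace(0.0, 5.0, 100001)   # five full periods of cos(q x)

plain = np.mean(np.cos(q * x))                  # unweighted average
matched = np.mean(np.cos(q * x) ** 2)           # k = q weighting
mismatched = np.mean(np.cos(q * x) * np.cos(2.0 * q * x))  # k != q weighting
```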
It is noted that the above argument can be straightforwardly extended to the case with the orbital effect of
magnetic fields.\cite{gruen,adachi}
We stress again that the direction of the topological Hall current considered here is parallel to
the applied external magnetic field, which is required to realize the FFLO state.
Thus, in experimental measurements, one can clearly discriminate between the topological Hall effect and the ordinary Hall effect of quasiparticles induced by the applied magnetic field.\cite{note}
Finally, we discuss the validity of the adiabatic approximation which is crucial in our argument.
The emergence of the Berry phase effect is due to
the application of the adiabatic approximation; i.e. the transition between
the electron band and the hole band is neglected, which is the central assumption
in the derivation of the Gor'kov equation with the fictitious vector potential (\ref{gorkov-dia}).
This assumption is valid as far as there is an energy gap which separates
the electron band and hole band, and temperature is sufficiently lower than the energy scale of the gap.
However, in the FFLO state with the gap function $\Delta(x)=\Delta_0\cos qx$,
there are nodal planes of the superconducting gap at which $\Delta(x)=0$.
The existence of the nodal planes affects the energy spectrum of quasiparticles drastically.
This issue was solved exactly in the case of the one-dimensional system,\cite{Machida,samokhin}
and it was found that there is still an energy gap in the quasiparticle spectrum, which may
validate the adiabatic approximation.
However, in two and three dimensions, with which we are concerned,
it is possible that the quasiparticle spectrum becomes gapless
because of the energy dispersion in the direction perpendicular to the $x$-axis.
Nevertheless, we can justify the adiabatic approximation applied to our system because of
the following reason.
Even if the superconducting gap vanishes at the nodal plane of the FFLO state,
there is still an energy gap between the electron band with up (down) spin
and the hole band with down (up) spin in the vicinity of the Fermi level,
because of the Zeeman splitting.
Thus, for temperatures much lower than the Zeeman energy scale, the transition between
these two bands is suppressed, and hence,
the adiabatic approximation is properly applied.
Although we derived the expression for the topological Hall conductivity (\ref{Hallc})
only in the vicinity of the transition temperature, the Hall effect is more clearly observed at sufficiently low temperatures.
In summary, we have demonstrated that the topological Hall effect of quasiparticles can be raised by
the Berry phase effect associated with the spatially slowly-varying superconducting order parameter
which is realized in the FFLO state.
The experimental detection of this effect, which is feasible with the use of
an ac electromagnetic field applied to a surface of the system, may provide
evidence for the realization of the inhomogeneous superconducting state.
We thank K. Samokhin for illuminating discussions, and for introducing the author to ref.~\cite{Machida}.
This work is supported by the Grant-in-Aids for
Scientific Research from MEXT of Japan
(Grants No.19052003 and No.21102510).
\section{Introduction}
\label{sec:intro}
As the largest gravitationally bound structures in the universe, galaxy clusters are an important cosmological probe. For example, they can be used to test cosmological models by constraining parameters such as $\sigma_8$ or the dark energy equation of state via cluster abundances or their mass function \citep{vik09b, rozo10, allen11}. Their distribution is also an important observable for testing models of structure formation and evolution.
It is often important in the study of galaxy clusters to know their mass. However, since most of their mass content is in the form of dark matter, indirect measures are necessary. While weak lensing offers highly reliable mass estimates in the presence of high resolution imaging, such estimates become prohibitively expensive to obtain as redshift increases \citep[see, e.g.,][]{vonderLinden14}. More indirect proxies are often used, such as gas mass, X-ray luminosity or temperature. Scaling relations between these observables and the cluster mass are then used to estimate the latter parameter, but such relations have considerable scatter \citep[e.g.,][]{pratt09,ett12}, and are valid only under certain conditions, such as virialization and hydrostatic equilibrium.
To obtain accurate parameter estimates using cluster scaling relations, we need to understand how these relations apply to different parameters, their scatter, and to which clusters they can be appropriately applied. In this paper, we will focus on those relations that involve properties of the intra-cluster medium (ICM) gas. In the simplest case, the ICM is heated only gravitationally as it infalls into the cluster. This leads to simple ``self-similar" power-law relations between parameters, such as temperature and luminosity \citep{kaiser86}. However, studies of these relations have shown clusters tend to deviate from these naive relations, implying non-gravitational sources of heating, such as from active galactic nuclei (AGN) \citep{mark98,AE99,XW00,vik02}. Such AGN activity can result in substantial deviations from the canonical self-similar scaling relations (e.g., \citealt{hilton12}), deviations that can vary depending on the location of the AGN with respect to the ICM and the mass of the galaxy hosting the AGN activity \citep{stott12}. Further, such scaling relations assume clusters are relaxed. If a cluster is still in the process of forming, gravitational energy will still be in the process of converting to internal energy, or if it has recently been disturbed by a merger, it will have substantial additional non-gravitational sources of energy. Therefore, such clusters will likely deviate from the scaling relations.
Because of these deviations, the use of scaling relations, such as for mass estimates, will yield inaccurate results for non-virialized clusters \citep[see, e.g.,][]{smith03,arn07,nagai07,mah08}. It is therefore important to identify these clusters to avoid bias in mass estimates from fitting to these relations. A number of methods have been used for this purpose, including the two-dimensional, projected X-ray emission distribution \citep[e.g.,][]{jel05,allen08,okabe10,mah13}, the Dressler-Shectman test of substructure \citep[e.g.,][]{DS88, hall04}, deviations from Gaussianity in member dynamics \citep{marceu10, fakebruno13, haines15}, and projected offsets between various peaks such as X-ray, Sunyaev-Zeldovich, and the brightest cluster galaxy (BCG) \citep{mann12, hashimoto14, rossetti16}. However, it is questionable how broadly applicable such methods are and under what circumstances, if any, they fail to identify non-virialized clusters. For instance, it has been shown that certain types of cluster-cluster mergers can leave limited impact on the line-of-sight dynamics of member galaxies, which make them difficult to identify using the degree to which the member line-of-sight velocities depart from Gaussianity \citep{nate17}. Similarly, viewing angle effects can severely limit the ability of the Dressler-Shectman test and similar tests to detect significant sub-structure when it is present \citep{white10}. The lack of a comprehensive study to test the efficacy of these methods over a large range of cluster types and operational definitions limits their general application.
In this paper, we seek to determine which methods of detecting non-virialized structures are most effective. With the Observations of Redshift Evolution in Large-Scale Environments \citep[ORELSE;][]{lubin09}, we have an extensive multiwavelength dataset with which to do so, including $\sim 50$\ spectroscopically confirmed member galaxies per cluster, on average. The ORELSE survey is a systematic search for large-scale structures (LSSs) around an original sample of 20 galaxy clusters in a redshift range of $0.6 < z < 1.3$, designed to study galaxy properties over a wide range of local and global environments. The survey currently consists of 16 LSSs. These structures consist of several superclusters (defined here as LSSs with three or more member clusters) and merging systems, while some of the initially targeted galaxy clusters are found to be relatively isolated systems. This sample provides a wide range of environments that house both virialized and non-virialized galaxy clusters.
Twelve of the sixteen LSSs in the ORELSE survey have {\it Chandra} imaging of sufficient quality to study diffuse X-ray emission, each of which has been described in previous papers \citep[see][]{rumb12,rumb13,rumb17}. We combine the X-ray data of these 12 LSSs, which are succinctly summarized in Table \ref{strsumtab}, with our extensive optical and near-infrared (IR) imaging and spectroscopy, as well as spectral energy distribution (SED) fitting, to assemble 10 tests of virialization/substructure and to compare them to three scaling relations between galaxy cluster observables. For our cosmological model, we assume $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$, and $h = H_0/(70\ \mathrm{km\ s^{-1}\ Mpc^{-1}})$.
We first describe our observations and data reduction, including SED fitting, in Section \ref{sec:obsred}. In Section \ref{sec:CP}, we discuss the various galaxy cluster properties that we measured. This involves carrying out two separate, but parallel, analyses: one using the results of our SED fitting to measure higher order galaxy properties such as rest-frame colour and stellar mass, and one without. The purpose of this parallel analysis is for applicability to a wide range of studies, including those without the requisite imaging depth or panchromatic coverage necessary to perform SED fitting to derive photometric redshifts, rest-frame magnitudes, or stellar masses. We then discuss scaling relations between galaxy cluster observables in Section \ref{sec:SR}. In Section \ref{sec:ana}, we analyze the effectiveness of our virialization/substructure tests by measuring their correlations with offsets from the scaling relations. We discuss the results of these correlations and their implications for surveys using galaxy clusters scaling relations in Section \ref{sec:disc} as well as make suggestions for optimal observational strategies for future surveys.
\begin{table*}
\caption{Properties of Observed ORELSE LSSs}
\label{strsumtab}
\begin{tabular}{llllrrrll}
\toprule
\footnotesize{LSS}
& \footnotesize{$\langle z\rangle$}
& \footnotesize{$z$ Lower}
& \footnotesize{$z$ Upper}
& \footnotesize{Num. of}
& \footnotesize{Num. of}
& \footnotesize{{\it Chandra}-}
& \footnotesize{$\sigma$}
& \footnotesize{Confirmed} \\
{ }
& { }
& \footnotesize{Bound}
& \footnotesize{Bound}
& \footnotesize{Known}
& \footnotesize{Known}
& \footnotesize{detected}
& \footnotesize{Range$^{\rm b}$}
& \footnotesize{Members$^{\rm c}$} \\
{ }
& { }
& { }
& { }
& \footnotesize{Clusters$^{\rm a}$}
& \footnotesize{Groups$^{\rm a}$}
& \footnotesize{Clusters}
& { }
& { }\\
\midrule
SG0023 & 0.84 & 0.82 & 0.87 & 0 & 5 & 0 & 218-418 & 250 \\
RCS0224 & 0.77 & 0.76 & 0.79 & 2 & 0 & 1 &710-825 & 119 \\
SC0849 & 1.26 & 1.25 & 1.28 & 3 & 2 & 2 &260-840 & 111 \\
RXJ0910 & 1.11 & 1.08 & 1.15 & 2 & 0 & 2 & 724-840 & 142 \\
RXJ1053 & 1.14 & 1.10 & 1.15 & 1 & 0 & 0 & $898\pm142$ & 72 \\
RXJ1221 & 0.70 & 0.69 & 0.71 & 1 & 1 & 1 & 427-753 & 160 \\
SC1324 & 0.76 & 0.65 & 0.79 & 3 & 4 & 3 & 186-873 & 452 \\
Cl1350 & 0.80 & 0.79 & 0.81 & 1 & 2 & 1 & 300-802 & 102 \\
SC1604 & 0.90 & 0.84 & 0.96 & 5 & 3 & 3 & 287-772 & 531 \\
RXJ1716 & 0.81 & 0.80 & 0.83 & 3 & 0 & 1 & 624-1120 & 197 \\
RXJ1757 & 0.69 & 0.68 & 0.71 & 1 & 0 & 1 & $862\pm108$ & 74 \\
RXJ1821 & 0.82 & 0.80 & 0.84 & 1 & 0 & 1 & $1129\pm100$ & 131 \\
\bottomrule
\multicolumn{9}{p{13.5cm}}{$^{\rm a}$ \footnotesize{Clusters and groups are defined as having velocity dispersions greater than or less than 550 km\ s$^{-1}$, respectively.}}\\
\multicolumn{9}{p{13.5cm}}{$^{\rm b}$ \footnotesize{In units of km\ s$^{-1}$. For LSSs with more than one group or cluster, this measurement is the range of velocity dispersions of groups and clusters within the LSS. All velocity dispersions are measured within 1 $h_{70}^{-1}$ Mpc of the luminosity-weighted spectroscopic member center (see \citealt{ascaso14} for details).}}\\
\multicolumn{9}{l}{$^{\rm c}$ \footnotesize{Spectroscopically confirmed galaxies within the redshift bounds of the LSS.}}\\
\end{tabular}
\end{table*}
\section{Observations and Reduction}
\label{sec:obsred}
\subsection{Chandra Observations}
All X-ray imaging was conducted using the Advanced CCD Imaging Spectrometer (ACIS) of the {\it Chandra X-Ray Observatory}. Both the ACIS-I and ACIS-S arrays were used. These have fields of view of $16\farcm9\times16\farcm9$ and $8\farcm3\times50\farcm6$, respectively. In most cases, the ACIS-I field of view is extended through the inclusion of the closest 1-2 ACIS-S chips, and vice versa. Several of the larger structures, SC1324 and SC1604, were imaged with multiple pointings of the array. Characteristics of the individual observations are listed in Table \ref{obsIDtab}. While we planned observations to have exposure times of approximately 50 ks per pointing per field, we supplemented our sample with publicly available data. These observations have varying exposure times and can be distinguished, in Table \ref{obsIDtab}, as those with a PI other than Lubin. Note that, though \emph{Chandra} observations for SG0023 and RXJ1053 are included in Table \ref{obsIDtab} for completeness, these fields are removed from our analysis because they lack the counts necessary to reliably measure X-ray luminosity, temperature, or both for the known structures in these fields.
The reduction of the data was conducted using the {\it Chandra} Interactive Analysis of Observation 4.7 software \citep[CIAO;][]{frusc06}. We used the Imperial reduction pipeline, which is described in detail in \citet{laird09} and \citet{nandra15}. For a summary of this reduction process, see \citet{rumb17}.
\begin{table*}
\caption{{\it Chandra} Observations}
\label{obsIDtab}
\begin{tabular}{lllllll}
\toprule
\footnotesize{Observation}
& \footnotesize{Target}
& \footnotesize{Instrument}
& \footnotesize{PI}
& \footnotesize{Exposure}
& \footnotesize{RA $^{\rm a}$}
& \footnotesize{Dec. $^{\rm a}$} \\
{ID}
& { }
& { }
& { }
& {Time (ks)}
& { }
& { }\\
\midrule
7914 & SG0023 & ACIS-I & Lubin & 49.38 & 00\ 23\ 52.30 & 04\ 22\ 34.20 \\
3181 & RCS0224 & ACIS-S & Gladders & 14.37 & 02\ 24\ 34.10 & -00\ 02\ 30.90 \\
4987 & RCS0224 & ACIS-S & Ellingson & 88.97 & 02\ 24\ 34.10 & -00\ 02\ 30.90 \\
927 & Cl0849 & ACIS-I & Stanford & 125.15 & 08\ 48\ 55.90 & 44\ 54\ 50.00 \\
1708 & Cl0849 & ACIS-I & Stanford & 61.47 & 08\ 48\ 55.90 & 44\ 54\ 50.00 \\
2227 & RXJ0910 & ACIS-I & Stanford & 105.74 & 09\ 10\ 45.41 & 54\ 22\ 05.00 \\
2452 & RXJ0910 & ACIS-I & Stanford & 65.31 & 09\ 10\ 45.41 & 54\ 22\ 05.00 \\
4936 & RXJ1053 & ACIS-S & Predehl & 92.4 & 10\ 53\ 43.00 & 57\ 35\ 00.00 \\
1662 & RXJ1221 & ACIS-I & van Speybroeck & 79.08 & 12\ 21\ 24.50 & 49\ 18\ 14.40 \\
9403 & SC1324 & ACIS-I & Lubin & 26.94 & 13\ 24\ 49.50 & 30\ 51\ 34.10 \\
9404 & SC1324 & ACIS-I & Lubin & 30.4 & 13\ 24\ 42.50 & 30\ 16\ 30.00 \\
9836 & SC1324 & ACIS-I & Lubin & 20 & 13\ 24\ 42.50 & 30\ 16\ 30.00 \\
9840 & SC1324 & ACIS-I & Lubin & 21.45 & 13\ 24\ 49.50 & 30\ 51\ 34.10 \\
2229 & Cl1350 & ACIS-I & Stanford & 58.31 & 13\ 50\ 46.10 & 60\ 07\ 09.00 \\
6932 & SC1604 & ACIS-I & Lubin & 49.48 & 16\ 04\ 19.50 & 43\ 10\ 31.00 \\
6933 & SC1604 & ACIS-I & Lubin & 26.69 & 16\ 04\ 12.00 & 43\ 22\ 35.40 \\
7343 & SC1604 & ACIS-I & Lubin & 19.41 & 16\ 04\ 12.00 & 43\ 22\ 35.40 \\
548 & RXJ1716 & ACIS-I & van Speybroeck & 51.73 & 17\ 16\ 52.30 & 67\ 08\ 31.20 \\
10443 & RXJ1757 & ACIS-I & Lubin & 21.75 & 17\ 57\ 19.80 & 66\ 31\ 39.00 \\
11999 & RXJ1757 & ACIS-I & Lubin & 24.7 & 17\ 57\ 19.80 & 66\ 31\ 39.00 \\
10444 & RXJ1821 & ACIS-I & Lubin & 22.24 & 18\ 21\ 38.10 & 68\ 27\ 52.00 \\
10924 & RXJ1821 & ACIS-I & Lubin & 27.31 & 18\ 21\ 38.10 & 68\ 27\ 52.00 \\
\bottomrule
\multicolumn{7}{l}{$^{\rm a}$ \footnotesize{Coordinates refer to those of the observation aim point.}}\\
\end{tabular}
\end{table*}
\subsection{Photometry and Spectroscopy}
We have photometric observations of our sample across a wide range of bands, from optical to near-infrared (NIR), typically $B/V/R/I/Z/J/K/[3.6]/[4.5]$, as well as 24$\mu$m data which we do not include for this study. This dataset is composed of both our own ORELSE observing campaigns and archival data. This includes data from the Large Format Camera \citep[LFC;][]{simcoe00} on the Palomar 200-inch Hale Telescope, Suprime-Cam on the Subaru Telescope \citep{Suprime-Cam}, MegaCam and the Wide-field InfraRed Camera \citep[WIRCam;][]{WIRCam} on the Canada France Hawaii Telescope (CFHT), the Wide Field Camera \citep[WFCAM;][]{WFCAM} on the United Kingdom InfraRed Telescope (UKIRT), and the InfraRed Array Camera \citep[IRAC;][]{IRAC} on the {\it Spitzer Space Telescope}. For full details on the reduction of these data see \citet{gal08} and \citet{adam17}. Additionally, SC1604 has imaging from the Advanced Camera for Surveys (ACS) on-board the Hubble Space Telescope (HST)\footnote{SG0023, RCS0224, SC0849, RXJ0910, RXJ1053, SC1324, RXJ1716, and RXJ1757 also have archival coverage with varying exposure times and spatial coverages from ACS and the Wide Field Planetary Camera 2 \citep[WFPC2;][]{WFPC2} which we do not include in this study.}. See \citet{koc09b} for the details of these ACS observations.
Photometry in the optical through near-IR ($B$-$K$ bands) is measured in fixed circular apertures on images that are convolved to the seeing of the image with the largest point spread function (PSF). The size of this circular aperture is set to be 1.3$\times$ the size of the largest PSF, a choice which tends to maximize the signal-to-noise ratio of the extracted flux \citep{whit11}. The image used to detect sources on which photometric measurements were to be performed varied from field to field but generally comprised at least one image whose central filter was redder than the $D_{n}(4000)$/Balmer break at the redshift of the LSS in that field. A summary of detection images used along with the worst seeing across all of our ground-based images for all ORELSE fields studied in this paper is presented in Table \ref{tab:det}. For Spitzer/IRAC photometry we use T-PHOT \citep{merlin15}, a software package specifically designed to extract photometry in crowded imaging. Note that all optical through NIR photometry are used in our SED-fitting procedures. The 80\% completeness depths for all images of the two fields which were studied here and not included in \citet{adam17} are given in Table \ref{tab:limits}. A full description of the procedure of measuring photometry, estimating image depths, and the 80\% completeness depths for images taken on the remainder of the ORELSE fields studied here can be found in \citet{adam17}.
Our photometric catalog is complemented by extensive spectroscopic data. The bulk of these data are from the Deep Imaging Multi-Object Spectrograph \citep[DEIMOS;][]{Faber03} on the Keck II 10m telescope. DEIMOS has a wide field of view ($16\farcm9 \times 5\farcm0$), high efficiency, and is able to position over 120 targets per slit mask, ideal for establishing an extensive spectroscopic catalog. We used the 1200 line mm$^{-1}$ grating, blazed at 7500 \AA, and 1$\arcsec$-wide slits for a pixel scale of 0.33 \AA\ pixel$^{-1}$ and a FWHM resolution of $\sim1.7\ $\AA. Central wavelengths varied for each LSS, in the range 7000-8700 \AA, and the approximate spectral coverage was $\sim \pm1300$\ \AA\ around these central wavelengths. For an overview of the DEIMOS data used in this study see \citet{lem12} and \citet{rumb17}.
In addition to our own DEIMOS data, SG0023, RXJ0910, SC1604 and RXJ1821 had some spectral redshifts obtained from the Low-Resolution Imaging Spectrometer (LRIS; \citealt{oke95}). See \citet{oke98, GalLub04,gioia04,tan08} for further details on these observations and their reduction. In the SC0849 supercluster we supplemented our DEIMOS observations with a large number of redshifts obtained by a variety of different telescopes and instruments. The bulk of the spectral redshifts of SC0849 member galaxies in the two clusters presented in this paper were obtained with LRIS \citep{stan97, ros99, mei06}. These data were complemented by redshifts that were obtained by observations with a combination of the Faint Object Camera and Spectrograph (FOCAS; \citealt{kash02}) on Subaru, the northern version of the Gemini Multi-Object Spectrographs (GMOS-N; \citealt{hook04}), and DEIMOS using the 600 l mm$^{-1}$ grating, though these observations were primarily aimed at the surrounding large scale structure. Depending on the analysis, 40-45\% of the secure spectral redshifts of the member population of the two SC0849 clusters presented in this paper are drawn from these observations, with our own redshifts obtained from DEIMOS making up the remainder. For more details on the observation and reduction of this supplementary SC0849 spectroscopic data see \citet{mei12} and references therein. For more details on the DEIMOS spectroscopy taken specifically for ORELSE, its reduction, and the process of measuring redshifts see \citet{lem17} and \citet{adam17} and references therein. Only secure spectroscopic redshifts were used in our analysis, meaning those with quality flags of $Q$ = -1, 3, 4 as defined in \citet{gal08} and \citet{new13}. To the best of our ability, this scheme was applied equally to our own DEIMOS data and other spectroscopic data incorporated into the analysis in this paper.
\begin{table}
\caption{Photometric Information}
\label{tab:det}
\begin{center}
\begin{tabular}{rccc}
\hline
Field & Detection & Detection & Worst \\
& Image & Instrument & PSF \\
\hline
RXJ0910 & $R_C,I_+,Z_+$ & Subaru/Suprime-Cam & 1.00\arcsec \\
Cl1350 & $r$ & CFHT/MegaCam & 1.96\arcsec \\
RXJ1757 & $r',i'$ & Palomar/LFC & 1.24\arcsec \\
RXJ1821 & $Y$ & Subaru/Suprime-Cam & 1.23\arcsec \\
RCS0224 & $I_+$ & Subaru/Suprime-Cam & 1.25\arcsec \\
RXJ1053 & $Z_+$ & Subaru/Suprime-Cam & 1.30\arcsec \\
RXJ1221 & $i'$ & Palomar/LFC & 1.37\arcsec \\
RXJ1716 & $R_C,I_+,Z_+$ & Subaru/Suprime-Cam & 0.89\arcsec \\
SC0849 & $Z_+$ & Subaru/Suprime-Cam & 1.40\arcsec \\
SC1324 & $i$ & Palomar/LFC & 1.28\arcsec \\
SC1604 & $R_C$ & Subaru/Suprime-Cam & 1.30\arcsec\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table*}
\begin{center}
\caption{Photometry}
\label{tab:limits}
{\vskip 1mm}
\begin{tabular}{c @{\hskip 15mm} c}
\begin{tabular}{llll}
\hline \\[-3.3mm]
Filter & Telescope & Instrument & Depth$^a$ \\[0mm]
\hline \\[-3mm]
Cl1350 \\[0.5mm]
\hline \\[-3mm]
\hline \\[-2mm]
$B$ & Subaru & Suprime-Cam & 26.5 \\
$V$ & Subaru & Suprime-Cam & 25.8 \\
$R_C$ & Subaru & Suprime-Cam & 25.1 \\
$g^{\prime}$ & CFHT & MegaCam & 24.4 \\
$r^{\prime}$ & CFHT & MegaCam & 24.3 \\
$r^{\prime}$ & Palomar & LFC & 25.0 \\
$i^{\prime}$ & Palomar & LFC & 23.5 \\
$z^{\prime}$ & Palomar & LFC & 22.9 \\
$[3.6]$ & {\it Spitzer} & IRAC & 23.4 \\
$[4.5]$ & {\it Spitzer} & IRAC & 23.4 \\[1mm]
\hline \\[-3mm]
RXJ1221 \\[0.5mm]
\hline \\[-3mm]
\hline \\[-2mm]
$B$ & Subaru & Suprime-Cam & 26.6 \\
$V$ & Subaru & Suprime-Cam & 26.1 \\
$r'$ & Palomar & LFC & 24.2 \\
$i'$ & Palomar & LFC & 24.4 \\
$z'$ & Palomar & LFC & 22.8 \\
$J$ & UKIRT & WFCAM & 22.4 \\
$K$ & UKIRT & WFCAM & 21.9 \\
$[3.6]$ & {\it Spitzer} & IRAC & 24.0 \\
$[4.5]$ & {\it Spitzer} & IRAC & 23.8 \\[1mm]
\end{tabular}
&
\begin{tabular}{llll}
\hline \\[-3mm]
Filter & Telescope & Instrument & Depth$^a$ \\[0mm]
\hline \\[-3mm]
RXJ1053 \\[0.5mm]
\hline \\[-3mm]
\hline \\[-2mm]
$u^{\ast}$ & CFHT & MegaCam & 24.8 \\
$g^{\prime}$ & CFHT & MegaCam & 25.7 \\
$r^{\prime}$ & CFHT & MegaCam & 24.5 \\
$z^{\prime}$ & CFHT & MegaCam & 23.6 \\
$B$ & Subaru & Suprime-Cam & 26.1 \\
$V$ & Subaru & Suprime-Cam & 26.1 \\
$R_C$ & Subaru & Suprime-Cam & 25.2 \\
$R_+$ & Subaru & Suprime-Cam & 26.4 \\
$I_+$ & Subaru & Suprime-Cam & 25.1 \\
$Z_+$ & Subaru & Suprime-Cam & 25.5 \\
$J$ & UKIRT & WFCAM & 22.3 \\
$K$ & UKIRT & WFCAM & 21.7 \\
$[3.6]$ & {\it Spitzer} & IRAC & 23.9 \\
$[4.5]$ & {\it Spitzer} & IRAC & 23.4 \\
$[5.8]$ & {\it Spitzer} & IRAC & 21.7 \\
$[8.0]$ & {\it Spitzer} & IRAC & 21.8 \\
\end{tabular}
\end{tabular}
\end{center}
$^a$ 80\% completeness limits derived from the recovery rate of artificial sources inserted at empty sky regions.
\end{table*}
\subsection{Spectral Energy Distribution Fitting}
\label{sec:SED}
We performed spectral energy distribution (SED) fitting for our sample using our optical to mid-infrared photometry, which we used to estimate the photometric redshifts of galaxies as well as properties such as stellar mass. To do this fitting, we used the Easy and Accurate $z_{phot}$ from Yale \citep[EAZY;][]{EAZY}, which performs an iterative $\chi^2$ fit using Projet d'Etude des GAlaxies par Synth\`{e}se \'{E}volutive \citep[P\'{E}GASE;][]{PEGASE} models, taking the results of our aperture photometry as input. This code outputs a probability density function $P\left(z\right)$, a measure of our confidence that the respective source is at a given redshift $z$. This PDF is modulated by a magnitude prior, designed to mimic the intrinsic redshift distribution for galaxies of a given apparent magnitude. We adopted $z_{\rm{peak}}$ as the photometric redshift; it is obtained by marginalizing over the output $P(z)$, except when an object has multiple significant peaks in its $P(z)$, in which case the marginalization is constrained to the peak with the largest integrated probability.
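To make the $z_{\rm{peak}}$ convention concrete, the sketch below is a hypothetical helper (our own illustration, not EAZY's actual implementation): given a gridded $P(z)$, it isolates the contiguous peak region carrying the largest summed probability and returns the probability-weighted mean redshift within it.

```python
import numpy as np

def z_peak(z_grid, pz, floor=0.01):
    """Illustrative z_peak: probability-weighted mean redshift restricted to
    the contiguous peak region with the largest total probability.
    A sketch of the idea described in the text, not EAZY's implementation;
    assumes a uniformly spaced redshift grid."""
    z_grid = np.asarray(z_grid, float)
    pz = np.asarray(pz, float)
    mask = pz > floor * pz.max()          # keep only non-negligible P(z)
    # Find (start, stop) index pairs of contiguous above-floor runs.
    padded = np.concatenate(([0], mask.astype(int), [0]))
    edges = np.flatnonzero(np.diff(padded)).reshape(-1, 2)
    # Dominant peak = run with the largest summed probability.
    start, stop = max(edges, key=lambda r: pz[r[0]:r[1]].sum())
    sl = slice(start, stop)
    return (z_grid[sl] * pz[sl]).sum() / pz[sl].sum()
```

For a unimodal $P(z)$ this reduces to the usual probability-weighted mean; for a bimodal $P(z)$ only the dominant peak contributes, mirroring the constrained marginalization described above.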
To select a pure sample of galaxies, we cut sources that were likely stars, objects with a S/N $<$ 3 in the detection band, objects covered in fewer than five of the broadband images, saturated objects, and objects with catastrophic SED fits (defined as $\chi^2_{galaxy}>10$ from the EAZY fits). To identify likely stars, a second round of fitting with stellar templates was performed. See \citet{adam17} for more details.
To derive stellar masses and other galactic properties, we used the Fitting and Assessment of Synthetic Templates \citep[FAST;][]{FAST} code. FAST creates a multi-dimensional cube of model fluxes from a stellar population synthesis (SPS) library. The best-fit model is found by comparing each object to every model in this cube and minimizing $\chi^2$. High-quality spectroscopic redshifts were used when available, and $z_{peak}$ from EAZY was used as a redshift prior in all other cases. See \citet{lem17} and \citet{adam17} for more details on the SED fitting.
Besides deriving stellar masses, we also use the rest-frame colours derived from our SED fitting to separate galaxies into star-forming (SF) and quiescent populations. To divide the galaxies into these two categories, we use a two-colour selection technique proposed by \citet{ilbert10}. We adopted the rest-frame $M_{NUV} - M_{r}$ versus $M_{r} - M_{J}$ colour-colour diagram separations from \citet{lemaux14Herschel}. Galaxies at $0.5 < z \leq 1.0$ were considered quiescent if they had $M_{NUV} - M_r > 2.8(M_r - M_J) + 1.51$ and $M_{NUV} - M_r > 3.75$, while galaxies at $1.0 < z \leq 1.5$ were considered quiescent when $M_{NUV} - M_r > 2.8(M_r - M_J) + 1.36$ and $M_{NUV} - M_r > 3.6$. All other galaxies were classified as star-forming.
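The redshift-dependent colour cuts above translate directly into code. The following illustrative helper (the function name and interface are our own, not part of any published pipeline) applies the two selections quoted in the text:

```python
import numpy as np

def is_quiescent(m_nuv_r, m_r_j, z):
    """Two-colour (NUV - r vs r - J) quiescent/star-forming split using the
    redshift-dependent cuts quoted in the text (the Lemaux et al. 2014
    version of the Ilbert et al. 2010 selection). Inputs are rest-frame
    colours; returns True for quiescent, False for star-forming."""
    m_nuv_r = np.asarray(m_nuv_r, float)
    m_r_j = np.asarray(m_r_j, float)
    if 0.5 < z <= 1.0:
        slope_offset, flat_cut = 1.51, 3.75
    elif 1.0 < z <= 1.5:
        slope_offset, flat_cut = 1.36, 3.6
    else:
        raise ValueError("cuts are defined only for 0.5 < z <= 1.5")
    return (m_nuv_r > 2.8 * m_r_j + slope_offset) & (m_nuv_r > flat_cut)
```

Both conditions must hold for a galaxy to be classed as quiescent; the diagonal cut separates red quiescent galaxies from dusty star-forming ones, while the flat cut removes the bluest objects.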
\section{Cluster Properties}
\label{sec:CP}
As discussed in Section \ref{sec:intro}, we seek to identify the most effective ways to determine the virialization status of galaxy clusters. To accomplish this, we first need to carry out a number of tests of virialization, which we will then compare to offsets in the various empirical scaling relations drawn from the literature for our cluster sample, presented in Section \ref{sec:SR}. In this section, we examine the properties of the individual clusters using optical and X-ray imaging and our spectroscopic data. We use these to carry out a number of tests of virialization and substructure on the galaxy clusters, and each cluster property discussed is used as part of one of these tests.
Some of the following tests will use the results of the SED fitting, such as using the separation of galaxies into star-forming and quiescent populations to measure their separate velocity dispersions. This was made possible for this work through extensive multiwavelength observations and the time invested in carrying out the fitting. Not all surveys have these resources and instead must rely on measures more closely related to observables, such as the observed optical colour or luminosity, as opposed to model-derived rest-frame colours or stellar mass. Where applicable, we carry out parallel tests: one using the results of our SED fitting, and one without those results.
\begin{table*}
\caption{Cluster Properties}
\label{clusproptab}
\begin{tabular}{llllllll}
\toprule
\footnotesize{Cluster}
& \footnotesize{$z$}
& \footnotesize{Num. of}
& \footnotesize{Gas Temp.}
& \footnotesize{Bol. X-ray}
& \footnotesize{Velocity}
& \footnotesize{MMCG (BCG)}
& \footnotesize{Quiescent} \\
{}
& {}
& \footnotesize{Members$^{\rm a}$}
& \footnotesize{(keV)}
& \footnotesize{Lum. ($10^{44}$\ ergs/s)}
& \footnotesize{Disp. (km/s)$^{\rm b}$}
& \footnotesize{Vel. Off. (km/s)$^{\rm c}$}
& \footnotesize{Fraction$^{\rm d}$}\\
\midrule
RCS0224B & 0.778 & 52 & $5.1^{+4.0}_{-1.7}$ & $2.0\pm0.1$ & $710\pm60$ & 1095 (1095) & $0.513\pm0.066$\\
SC0849C & 1.261 & 25 & $7.7^{+3.9}_{-2.2}$ & $2.4\pm0.2$ & $840\pm110$ & -196 (-196) & $0.427\pm0.067$\\
SC0849D & 1.270 & 23 & $4.0^{+6.3}_{-1.8}$ & $0.9\pm0.2$ & $700\pm110$ & 686 (686) & $0.292\pm0.067$\\
SC0910A & 1.103 & 23 & $2.9^{+1.7}_{-0.8}$ & $1.8\pm0.2$ & $840\pm240$ & -139 (-139) & $0.574\pm0.092$\\
SC0910B & 1.101 & 25 & $5.2^{+2.7}_{-1.4}$ & $2.4\pm0.2$ & $720\pm150$ & 757 (757) & $0.638\pm0.117$\\
RXJ1221B & 0.700 & 36 & $9.0^{+1.5}_{-1.1}$ & $11.0\pm0.3$ & $750\pm120$ & 3 (3) & $0.636\pm0.073$\\
SC1324A & 0.756 & 43 & $8.5^{+36.0}_{-4.3}$ & $2.0\pm0.2$ & $870\pm110$ & -413 (-413) & $0.458\pm0.088$\\
SC1324B & 0.698 & 13 & $^{\rm e}$ & $^{\rm e}$ & $680\pm140$ & 4 (4) & $0.630\pm0.117$\\
SC1324I & 0.696 & 27 & $2.9^{+4.1}_{-1.4}$ & $1.3\pm0.2$ & $890\pm130$ & -447 (-447) & $0.516\pm0.075$\\
Cl1350C & 0.800 & 43 & $4.7^{+1.7}_{-1.0}$ & $4.4\pm0.2$ & $800\pm80$ & -239 (-239) & $0.500\pm0.069$\\
SC1604A & 0.898 & 35 & $3.7^{+5.6}_{-1.8}$ & $2.4\pm0.3$ & $720\pm130$ & 126 (126) & $0.438\pm0.077$\\
SC1604B & 0.865 & 49 & $1.3^{+2.7}_{-0.5}$ & $1.6\pm0.2$ & $820\pm70$ & 33 (33) & $0.348\pm0.071$\\
SC1604D & 0.923 & 70 & $0.8^{+0.3}_{-0.4}$ & $1.3\pm0.9$ & $690\pm90$ & 50 (-199) & $0.410\pm0.082$\\
RXJ1716A & 0.809 & 83 & $6.4^{+1.2}_{-0.9}$ & $11.9\pm0.4$ & $1120\pm100$ & 2102 (2102) & $0.525\pm0.051$\\
RXJ1757 & 0.693 & 34 & $5.1^{+3.0}_{-1.6}$ & $2.0\pm0.2$ & $860\pm110$ & -960 (278) & $0.489\pm0.074$\\
RXJ1821 & 0.817 & 52 & $6.1^{+2.3}_{-1.4}$ & $7.5\pm0.4$ & $1120\pm100$ & 510 (510) & $0.512\pm0.049$\\
\bottomrule
\multicolumn{8}{l}{$^{\rm a}$ \footnotesize{Includes all spectroscopically confirmed members within a projected distance of 1 Mpc.}}\\
\multicolumn{8}{p{16cm}}{$^{\rm b}$ \footnotesize{Velocity dispersion measured using all spectroscopically confirmed cluster members within 1$h_{70}^{-1}$ Mpc of the X-ray centroid as the initial sample for $3\sigma$ clipping. See Section \ref{sec:veldisp} for more details.}}\\
\multicolumn{8}{l}{$^{\rm c}$ \footnotesize{MMCG (BCG) velocity offsets are measured relative to the velocity center of the galaxy cluster.}}\\
\multicolumn{8}{l}{$^{\rm d}$ \footnotesize{Quiescent fraction is corrected for selection effects. See Section \ref{sec:SF} for details.}}\\
\multicolumn{8}{p{16cm}}{$^{\rm e}$ \footnotesize{We were unable to measure an X-ray temperature for SC1324B with any meaningful precision. Since measuring X-ray luminosities involved using a model based on this X-ray temperature, we were unable to accurately measure a luminosity as well.}}\\
\end{tabular}
\end{table*}
\subsection{Weighted Mean Centers}
\label{sec:WMC}
One of the simplest ways of measuring the centroid for a galaxy cluster is to take the weighted mean of the positions of its members. A luminosity or stellar mass-weighted mean center can be useful as a test of virialization when compared to the X-ray centroid or position of the brightest/most massive cluster galaxy. The weighted mean centers are a measure of the position of galaxies within the cluster, so offsets with other centroids can be signs of a disturbed state. There are several ways to calculate these centers, though we only consider two here: weighting by the stellar mass and luminosity. For both measures, we included all galaxies spectroscopically confirmed in the redshift range $\langle z \rangle \pm 3\sigma_{v}$, where $\sigma_{v}$ is the galaxy line of sight differential velocity dispersion (see Section \ref{sec:veldisp}), and within $R_{proj}<1h_{70}^{-1}$ Mpc of the X-ray center\footnote{Note that some previous ORELSE studies, such as \citet{rumb13}, used red galaxy density peaks to center the LWMCs instead of the X-ray centers.}. We would expect that all cluster members would tend to be located within $1h_{70}^{-1}$ Mpc, and we investigate the impact of varying this radius in Section \ref{sec:norm}.
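As a rough illustration of this membership window, the sketch below (a hypothetical helper; the actual ORELSE procedure also involves iterative clipping, see Section \ref{sec:veldisp}) keeps galaxies within $\pm3\sigma_{v}$ of the cluster redshift and within a projected radius limit, with projected radii assumed to be precomputed:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def select_members(z_gal, r_proj_mpc, z_cl, sigma_v, r_max=1.0):
    """Toy membership cut: keep galaxies with line-of-sight velocity offsets
    |v| < 3 sigma_v, where v = c (z - z_cl) / (1 + z_cl), and projected
    radius < r_max (in h70^-1 Mpc). Returns a boolean mask."""
    z_gal = np.asarray(z_gal, float)
    r_proj_mpc = np.asarray(r_proj_mpc, float)
    v = C_KMS * (z_gal - z_cl) / (1.0 + z_cl)  # velocity offset in km/s
    return (np.abs(v) < 3.0 * sigma_v) & (r_proj_mpc < r_max)
```

The velocity window scales with the measured dispersion, so a richer, hotter cluster admits members over a wider redshift range than a poor group at the same redshift.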
First, we use the galaxy stellar masses from our SED fitting described in Section \ref{sec:SED} to measure mass-weighted mean centers (MWMCs), which are given in Table \ref{centab}. The coordinates for these MWMCs for each cluster were calculated by:
\begin{multline}
\alpha_{MWMC} = \frac{\sum\limits_{i=1}^{N}\alpha_{i}\mathcal{M}_{\ast,i}}{\sum\limits_{i=1}^{N}\mathcal{M}_{\ast,i}}, \, \, \delta_{MWMC} = \frac{\sum\limits_{i=1}^{N}\delta_{i}\mathcal{M}_{\ast,i}}{\sum\limits_{i=1}^{N}\mathcal{M}_{\ast,i}}
\label{eqn:MWMC}
\end{multline}
\noindent where $\alpha_{MWMC}$ and $\delta_{MWMC}$ are the MWMC right ascension and declination for a given cluster and $N$ is the number of members of that cluster. However, stellar masses are not available in many studies. For comparison to such studies, we also measured luminosity-weighted mean centers (LWMCs). Ideally, the luminosity weighting should be carried out with a band whose rest-frame coverage is completely redward of $D_n(4000)$, as such coverage ensures that its flux density correlates well with stellar mass. We used the `supercolours', defined in \citet{rumb17}, which are parameterized rest-frame magnitude estimates using observed magnitudes and redshifts as inputs. They consist of $M_{red}$ and $M_{blue}$, which are created from the $r$, $i$, and $z$ band apparent magnitudes as an approximation of rest-frame magnitudes, and can be thought of as approximating $M_{u*}$ and $M_{B}$. The supercolours are easily derived from observed data according to the methodology of \citet{rumb17} without intensive model fitting. The supercolour fluxes were defined according to the redshift-dependent formulae:
\begin{multline}
f_{blue}=A_{blue}\left[1-B_{blue}\left(z-z_0\right)\right]f_{\nu,R}\\+\left(1-A_{blue}\right)\left[B_{blue}\left(z-z_0\right)\right]f_{\nu,I}
\label{eqn:supercolor1}
\end{multline}
\vspace{-0.2cm}
\begin{multline}
f_{red}=A_{red}\left[1-B_{red}\left(z-z_0\right)\right]f_{\nu,I}\\+\left(1-A_{red}\right)\left[B_{red}\left(z-z_0\right)\right]f_{\nu,Z}
\label{eqn:supercolor2}
\end{multline}
\noindent where $A_{blue}$, $B_{blue}$, $A_{red}$, $B_{red}$, and $z_0$ are the free parameters. This parameterization is set up such that $f_{blue} = f_{\nu,R}$ and $f_{red} = f_{\nu,I}$ at $z\sim0.65$ while $f_{blue} = f_{\nu,I}$ and $f_{red} = f_{\nu,Z}$ at $z\sim1.2$. For some parts of the analysis $f_{blue}$ and $f_{red}$ are transformed into their absolute magnitude equivalents, $M_{blue}$ and $M_{red}$, using the relations given in \citet{rumb17}. The exact values of the various constants in Equations \ref{eqn:supercolor1} and \ref{eqn:supercolor2} were determined by fitting the parameterization to minimize the difference with the actual rest-frame colours on a subset of the data with SED fitting, and can be found in \citet{rumb17}. These two quantities can be thought of as reliable proxies of the rest-frame $u^{\ast}$$-$ and $B-$band flux densities of galaxies within the ORELSE redshift range of $0.55 < z < 1.4$ for cases where $riz$ imaging is available but full SED fitting is not possible. The accuracy with which these supercolours predict the true (SED-fit) rest-frame $u^{\ast}$$-$ and $B-$band flux densities is discussed in detail in \citet{rumb17}. In contrast to \citet{rumb17}, here we used the photometry that was measured using the methods of \citet{adam17} for all clusters to calculate $M_{blue}$ and $M_{red}$. As such, red-sequence fits in supercolour/magnitude space were re-measured using an identical approach to that of \citet{rumb17}. The quantity $f_{red}$ was used for the luminosity weighting for all clusters. The LWMC coordinates for a cluster were then calculated by:
\begin{multline}
\alpha_{LWMC} = \frac{\sum\limits_{i=1}^{N}\alpha_{i}f_{\rm{red}, i}}{\sum\limits_{i=1}^{N}f_{\rm{red}, i}}, \, \delta_{LWMC} = \frac{\sum\limits_{i=1}^{N}\delta_{i}f_{\rm{red}, i}}{\sum\limits_{i=1}^{N}f_{\rm{red}, i}}
\label{eqn:LWMC}
\end{multline}
\noindent with $\alpha$, $\delta$, and $N$ having the same meaning as in Equation \ref{eqn:MWMC}. These centroids are listed in Table \ref{centab}.
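Both centroid definitions are simple weighted means of member coordinates. A minimal sketch (our own illustration; it assumes the small, arcminute-scale fields of these clusters and ignores spherical geometry and RA wrap-around) is:

```python
import numpy as np

def weighted_mean_center(ra_deg, dec_deg, weights):
    """Weighted mean centre in the sense of Equations (MWMC)/(LWMC):
    weights are stellar masses for the MWMC or f_red supercolour fluxes
    for the LWMC. Returns (RA, Dec) in degrees. A flat-sky approximation,
    adequate for arcminute-scale clusters away from RA = 0 and the poles."""
    ra_deg = np.asarray(ra_deg, float)
    dec_deg = np.asarray(dec_deg, float)
    w = np.asarray(weights, float)
    return (ra_deg * w).sum() / w.sum(), (dec_deg * w).sum() / w.sum()
```

Swapping the weight array is the only difference between the MWMC and the LWMC, which is why the two centroids can disagree when the most massive galaxies are not also the reddest or brightest.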
\begin{table*}
\caption{Cluster Centroids}
\label{centab}
\begin{tabular}{lcccccccccc}
\toprule
\scriptsize{Cluster}
& \scriptsize{X-ray}
& \scriptsize{X-ray}
& \scriptsize{LWM}
& \scriptsize{LWM}
& \scriptsize{MWM}
& \scriptsize{MWM}
& \scriptsize{BCG}
& \scriptsize{BCG}
& \scriptsize{MMCG}
& \scriptsize{MMCG}\\
{}
& \scriptsize{R.A.}
& \scriptsize{Dec.}
& \scriptsize{R.A.}
& \scriptsize{Dec.}
& \scriptsize{R.A.}
& \scriptsize{Dec.}
& \scriptsize{R.A.}
& \scriptsize{Dec.}
& \scriptsize{R.A.}
& \scriptsize{Dec.}\\
\midrule
\footnotesize{RCS0224B} & \footnotesize{02:24:34} & \footnotesize{-00:02:26.6} & \footnotesize{02:24:34} & \footnotesize{-00:02:26.9} & \footnotesize{02:24:34} & \footnotesize{-00:02:25.0} & \footnotesize{02:24:34} & \footnotesize{-00:02:27.9} & \footnotesize{02:24:34} & \footnotesize{-00:02:27.9} \\
\footnotesize{SC0849C} & \footnotesize{08:48:58} & \footnotesize{+44:51:56.2} & \footnotesize{08:48:59} & \footnotesize{+44:52:03.4} & \footnotesize{08:48:58} & \footnotesize{+44:52:00.3} & \footnotesize{08:48:59} & \footnotesize{+44:51:57.2} & \footnotesize{08:48:59} & \footnotesize{+44:51:57.2} \\
\footnotesize{SC0849D} & \footnotesize{08:48:36} & \footnotesize{+44:53:46.6} & \footnotesize{08:48:37} & \footnotesize{+44:54:10.7} & \footnotesize{08:48:35} & \footnotesize{+44:53:37.4} & \footnotesize{08:48:36} & \footnotesize{+44:53:36.1} & \footnotesize{08:48:36} & \footnotesize{+44:53:36.1} \\
\footnotesize{SC0910A} & \footnotesize{09:10:09} & \footnotesize{+54:18:57.0} & \footnotesize{09:10:06} & \footnotesize{+54:18:50.8} & \footnotesize{09:10:07} & \footnotesize{+54:18:53.7} & \footnotesize{09:10:09} & \footnotesize{+54:18:53.8} & \footnotesize{09:10:09} & \footnotesize{+54:18:53.8} \\
\footnotesize{SC0910B} & \footnotesize{09:10:45} & \footnotesize{+54:22:07.4} & \footnotesize{09:10:44} & \footnotesize{+54:22:13.2} & \footnotesize{09:10:45} & \footnotesize{+54:22:19.7} & \footnotesize{09:10:46} & \footnotesize{+54:23:29.0} & \footnotesize{09:10:46} & \footnotesize{+54:23:29.0} \\
\footnotesize{RXJ1221B} & \footnotesize{12:21:26} & \footnotesize{+49:18:30.7} & \footnotesize{12:21:26} & \footnotesize{+49:18:30.5} & \footnotesize{12:21:27} & \footnotesize{+49:18:22.5} & \footnotesize{12:21:29} & \footnotesize{+49:18:17.2} & \footnotesize{12:21:29} & \footnotesize{+49:18:17.2} \\
\footnotesize{SC1324A} & \footnotesize{13:24:49} & \footnotesize{+30:11:27.9} & \footnotesize{13:24:49} & \footnotesize{+30:11:52.1} & \footnotesize{13:24:49} & \footnotesize{+30:11:53.1} & \footnotesize{13:24:49} & \footnotesize{+30:11:38.9} & \footnotesize{13:24:49} & \footnotesize{+30:11:38.9} \\
\footnotesize{SC1324B} & \footnotesize{13:24:21} & \footnotesize{+30:12:31.3} & \footnotesize{13:24:21} & \footnotesize{+30:12:57.6} & \footnotesize{13:24:21} & \footnotesize{+30:12:57.0} & \footnotesize{13:24:21} & \footnotesize{+30:12:43.2} & \footnotesize{13:24:21} & \footnotesize{+30:12:43.2} \\
\footnotesize{SC1324I} & \footnotesize{13:24:50} & \footnotesize{+30:58:28.7} & \footnotesize{13:24:49} & \footnotesize{+30:58:20.6} & \footnotesize{13:24:50} & \footnotesize{+30:58:26.3} & \footnotesize{13:24:49} & \footnotesize{+30:58:40.7} & \footnotesize{13:24:49} & \footnotesize{+30:58:40.7} \\
\footnotesize{Cl1350C} & \footnotesize{13:50:48} & \footnotesize{+60:07:11.5} & \footnotesize{13:50:51} & \footnotesize{+60:06:56.0} & \footnotesize{13:50:51} & \footnotesize{+60:06:57.1} & \footnotesize{13:50:60} & \footnotesize{+60:06:08.5} & \footnotesize{13:50:60} & \footnotesize{+60:06:08.5} \\
\footnotesize{SC1604A} & \footnotesize{16:04:24} & \footnotesize{+43:04:36.6} & \footnotesize{16:04:23} & \footnotesize{+43:04:55.4} & \footnotesize{16:04:22} & \footnotesize{+43:04:57.2} & \footnotesize{16:04:24} & \footnotesize{+43:04:37.5} & \footnotesize{16:04:24} & \footnotesize{+43:04:37.5} \\
\footnotesize{SC1604B} & \footnotesize{16:04:26} & \footnotesize{+43:14:23.5} & \footnotesize{16:04:27} & \footnotesize{+43:14:24.8} & \footnotesize{16:04:26} & \footnotesize{+43:14:16.5} & \footnotesize{16:04:26} & \footnotesize{+43:14:18.8} & \footnotesize{16:04:26} & \footnotesize{+43:14:18.8} \\
\footnotesize{SC1604D} & \footnotesize{16:04:34} & \footnotesize{+43:21:07.1} & \footnotesize{16:04:33} & \footnotesize{+43:21:03.0} & \footnotesize{16:04:33} & \footnotesize{+43:21:02.5} & \footnotesize{16:04:36} & \footnotesize{+43:21:57.2} & \footnotesize{16:04:35} & \footnotesize{+43:21:56.0} \\
\footnotesize{RXJ1716A} & \footnotesize{17:16:49} & \footnotesize{+67:08:24.4} & \footnotesize{17:16:51} & \footnotesize{+67:08:38.1} & \footnotesize{17:16:50} & \footnotesize{+67:08:36.8} & \footnotesize{17:16:49} & \footnotesize{+67:08:21.6} & \footnotesize{17:16:49} & \footnotesize{+67:08:21.6} \\
\footnotesize{RXJ1757} & \footnotesize{17:57:19} & \footnotesize{+66:31:27.8} & \footnotesize{17:57:20} & \footnotesize{+66:31:16.2} & \footnotesize{17:57:21} & \footnotesize{+66:31:01.6} & \footnotesize{17:57:20} & \footnotesize{+66:31:32.6} & \footnotesize{17:57:21} & \footnotesize{+66:29:44.7} \\
\footnotesize{RXJ1821} & \footnotesize{18:21:32} & \footnotesize{+68:27:55.4} & \footnotesize{18:21:31} & \footnotesize{+68:28:03.5} & \footnotesize{18:21:31} & \footnotesize{+68:28:08.3} & \footnotesize{18:21:31} & \footnotesize{+68:29:28.8} & \footnotesize{18:21:31} & \footnotesize{+68:29:28.8} \\
\bottomrule
\multicolumn{11}{p{16cm}}{\scriptsize{The acronyms LWM, MWM, BCG, and MMCG stand for, respectively, luminosity-weighted mean, mass-weighted mean, brightest cluster galaxy, and most massive cluster galaxy.}}
\end{tabular}
\end{table*}
\subsection{Most Massive and Brightest Cluster Galaxies}
\label{sec:MMCG}
We found both the most massive cluster galaxy (MMCG) and brightest cluster galaxy (BCG) for each cluster in our sample. Examining these galaxies is useful, since a BCG or MMCG with large positional or velocity offset from other centroiding measures can be indicative of a recent cluster-cluster merger \citep{bird94,GB02}. For the MMCGs, we used the results of our SED fitting, while, for the BCGs, we used the observed luminosities of the galaxies as an alternative to stellar mass, appropriate for comparison with studies that do not have SED fitting available.
\subsubsection{MMCGs}
\label{sec:MMCG2}
We selected as potential MMCGs both galaxies that had secure spectroscopic redshifts and those that did not, but had photometric redshifts derived from our SED fitting described in Section \ref{sec:SED}. We used as our full sample galaxies within a projected distance of $R_{proj}<1.5R_{vir}$ from the X-ray centroids\footnote{We use this expanded projected radius to ensure we locate the BCG/MMCG for each cluster, even if it has an exceptionally large offset from the cluster center. In practice, the MMCG/BCG is always $R_{proj}\la1R_{vir}$ in projection from the X-ray centroid and typically much less ($R_{proj}<0.25R_{vir}$).}. We required spectroscopically confirmed candidates to be within the redshift range $\langle z \rangle \pm 3\sigma_{v}$, where $\sigma_{v}$ is the galaxy line-of-sight differential velocity dispersion (see Section \ref{sec:veldisp}).
As our spectroscopic coverage is not 100\% complete in these clusters, it was necessary to supplement the spectroscopic member sample with potential members that had no secure spectral redshift, but did have a photometric redshift consistent with the cluster redshift. The allowed redshift range for potential cluster members with only photometric redshifts was expanded to $z_{min}-\sigma_{\Delta z/(1+z)}\left(1+z_{min}\right)$ and $z_{max}+\sigma_{\Delta z/(1+z)}\left(1+z_{max}\right)$ to account for the relative lack of precision of $z_{phot}$ measurements, where $z_{min}$ and $z_{max}$ refer to the minimum and maximum redshifts of the spectroscopic member redshift range, respectively. Values of $\sigma_{\Delta z/(1+z)}$ were estimated on a field by field basis by fitting a Gaussian to the
distribution of $\left(z_{spec} - z_{phot}\right)/\left(1 + z_{spec}\right)$ measurements in the range $0.5 < z < 1.2$ for all galaxies with a secure spectroscopic redshift (for more details see \citealt{adam17}). The average $\sigma_{\Delta z/(1+z)}$ for all fields is $\sim0.025$, meaning we allow, on average, photometric-redshift members to spread in velocity an additional $\sim\pm$3500 km s$^{-1}$ relative to the spectroscopic members at the mean redshift of our cluster sample. This photometric redshift range is chosen to maximize the product of purity and completeness of member galaxies, as derived from tests using spectroscopically confirmed samples.
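For concreteness, the widened photometric-redshift window can be written as a short helper (an illustrative sketch, not the survey pipeline; the function name and the default $\sigma_{\Delta z/(1+z)}=0.025$ are ours):

```python
# Illustrative helper (ours, not the survey pipeline) implementing the
# widened redshift window for photometric-redshift members described above.

def photz_window(z_min, z_max, sigma_dz=0.025):
    """Widened (z_lo, z_hi) range; z_min/z_max bound the spectroscopic
    member redshift range and sigma_dz is the field's sigma_{dz/(1+z)}."""
    z_lo = z_min - sigma_dz * (1.0 + z_min)
    z_hi = z_max + sigma_dz * (1.0 + z_max)
    return z_lo, z_hi
```

For example, for spectroscopic members spanning $z=0.80$--$0.84$ and the average $\sigma_{\Delta z/(1+z)}=0.025$, the photometric-redshift window becomes $0.755 \le z \le 0.886$.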
For all cluster members at $z \leq 0.96$ we selected as potential MMCGs only those objects with $18.5 \leq i^{\prime}/I_{c}/I^{+} \leq 24.5$, while at $z>0.96$, we used $z^{\prime}/Z^{+} \leq 24.5$ instead. Due to the incompleteness of our spectroscopy, we include only objects with stellar masses of $M \geq 10^{10} M_{\odot}$, as this is roughly the stellar mass limit where our spectroscopic sample is representative of the underlying photometric sample at these redshifts and subject to the magnitude cuts above (see \citealt{lu17} for more details). Additionally, this limit is comparable to the stellar mass completeness limit of our imaging data for all galaxy types at all redshifts considered in this paper (see \citealt{adam17}).
We then selected the remaining galaxies with the top three stellar masses from our SED fits as the potential MMCG candidates for each cluster. For each MMCG candidate we inspected the SED fit and rejected any candidates with obvious photometric issues or probable stars. The most massive of the remaining MMCG candidates, all of which are spectroscopically confirmed, was then adopted as the MMCG for each cluster.
\subsubsection{BCGs}
\label{sec:BCG}
To select BCGs, we used the supercolours described in Section \ref{sec:WMC} and defined in \citet{rumb17}, though, as mentioned in \S\ref{sec:WMC}, we now adopted the photometry measured using the methods of \citet{adam17} as input for all supercolour calculations in order to be consistent with our SED-fitting results. The BCG was chosen as the spectroscopically-confirmed galaxy\footnote{While, in principle, we are setting up this analysis to be independent of results from the SED fitting, we verified that each BCG in the spectroscopic member sample was brighter than all photometric redshift members, i.e., that it was the true BCG.} with the smallest value of $M_{red}$. The photometry as measured on our ground-based Suprime-Cam or LFC imaging was used in all cases to compute $M_{red}$ and identify the BCG. While our spectroscopy is nearly complete, photometric redshift analysis or careful colour/magnitude selection may be necessary to locate potential BCGs in less complete surveys. Note that only two clusters had BCGs that were not identified as the MMCG. In these two clusters, the average ratio of the stellar masses between the MMCG and the BCG was 2.09 and the difference between the $M_{red}$ of the MMCG and the BCG was small ($<$0.05 mags). Since many of the clusters in our sample have yet to form a truly dominant galaxy, a small luminosity gap typically exists between the BCG and the next brightest galaxies (see \citealt{ascaso14} and \citealt{rumb17}). Further, since $M_{red}$ is an imperfect proxy of stellar mass, a lack of concordance between the identified BCGs and MMCGs is to be expected at some level. Regardless, given the large overlap between the two samples and the fact that both sets of galaxies are extremely massive at these redshifts and generally appear on the red sequence, it is likely that both sets of galaxies have had considerable time to interact with the cluster potential.
As such, it is likely the spatial and velocity information of both the BCGs identified and not identified as the MMCGs provide some level of information on the virialization state of the cluster regardless of their precise identity.
\subsection{Velocity Dispersions}
\label{sec:veldisp}
Examining the velocity information of galaxy cluster members is useful for tests of virialization and substructure. Comparing velocity centers and dispersions of subpopulations (e.g., red vs. blue galaxies) provides tests of clusters' dynamical state and substructure \citep[e.g.,][]{ZF93}. Before studying subpopulations, we first examine each cluster's velocity distribution as a whole and describe our measurement methods.
We measure differential line-of-sight galaxy velocity dispersions (hereafter referred to simply as velocity dispersions) following the methods described in \citet{lubin02}, \citet{gal05}, and \citet{rumb13}. Unlike the values reported in Table \ref{strsumtab}, we include here all galaxies within 1$h_{70}^{-1}$\ Mpc of the X-ray center of each detected cluster. Adopting a different centroid for defining cluster members, e.g., the LWMC, does not significantly affect the calculated velocity dispersions.
To measure the velocity dispersions, we first select an initial redshift range by eye based on the redshift histogram. We perform iterative $3\sigma$ clipping, using the biweight scale estimator or gapper as defined in \citet{beers90}. The velocity dispersion measurement is given by the final iteration, and uncertainties are estimated using jackknife confidence intervals. Our measurements are presented in Table \ref{clusproptab}.
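The combination of the biweight scale estimator and iterative $3\sigma$ clipping can be sketched as follows (a minimal plain-Python illustration of the procedure described above, not the code used for our measurements; it omits the gapper branch used for small samples):

```python
import math

# Illustrative sketch (not the code used in this work) of the velocity
# dispersion measurement: biweight scale estimator (Beers et al. 1990)
# with iterative 3-sigma clipping about the median.

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def biweight_scale(v, c=9.0):
    """Robust dispersion S_BI of the velocities v (km/s)."""
    m = median(v)
    mad = median([abs(x - m) for x in v])
    u = [(x - m) / (c * mad) for x in v]
    num = sum((x - m) ** 2 * (1 - ui ** 2) ** 4
              for x, ui in zip(v, u) if abs(ui) < 1)
    den = sum((1 - ui ** 2) * (1 - 5 * ui ** 2) for ui in u if abs(ui) < 1)
    return math.sqrt(len(v) * num) / abs(den)

def clipped_dispersion(v, nsig=3.0, max_iter=20):
    """Iterate nsig clipping about the median until membership converges."""
    v = list(v)
    for _ in range(max_iter):
        s, m = biweight_scale(v), median(v)
        kept = [x for x in v if abs(x - m) <= nsig * s]
        if len(kept) == len(v):
            return s
        v = kept
    return biweight_scale(v)
```

For example, a single $+5000$ km s$^{-1}$ interloper added to an otherwise symmetric set of member velocities is rejected on the first clipping iteration and leaves the recovered dispersion essentially unchanged.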
Velocity histograms for each cluster are shown in Figure \ref{fig:veldisp}. The total distribution is shown, as well as those of the quiescent and star-forming populations, using hatched histograms. Velocities are given relative to the central redshift of the cluster defined as the mean redshift of all member galaxies. A Gaussian distribution with $\sigma$ equal to the velocity dispersion of the cluster and a mean value equal to that of the mean redshift of all spectral members is overplotted in each case. Note that these Gaussian distributions are shown for illustrative purposes and do not represent fits to the data. The velocities of the MMCGs and BCGs are shown with full and half arrows, respectively.
\begin{figure*}
\includegraphics[trim={8cm 11cm 10cm 11cm},clip,width=0.9\textwidth]{figure1.pdf}
\caption{Velocity histograms for each cluster are shown, relative to the cluster velocity center. Additionally, velocity histograms for the star-forming and quiescent subpopulations are shown in blue and red, respectively. To provide a sense of how well the velocity histogram conforms to a normal profile, a Gaussian distribution is overplotted with $\sigma=\sigma_v$. The velocities of the MMCG and BCG are shown with an arrow and half arrow, respectively.}
\label{fig:veldisp}
\end{figure*}
\subsubsection{Red vs. Blue and Quiescent vs. Star-Forming Galaxy Populations}
\label{sec:RB}
As stated in the previous subsection, studying the differences between the red and blue galaxy populations of a cluster can provide information on its substructure and dynamical state \citep[e.g.,][]{ZF93}. Roughly, the colour of the galaxies, in certain bands, can be seen as a proxy for the current star formation, since bluer galaxies (in rest-frame optical bands) tend to have younger stellar populations. A difference between the velocity dispersions of these populations can be a sign of virialization. Galaxies that have spent more time within a cluster have also had more time to feel the influence of dynamical friction, sending them closer to the core, as well as cluster processes such as ram pressure stripping, meaning they are more likely to have quenched star formation \citep{balogh01}. Conversely, galaxies that are still star-forming are more likely to be infalling, residing closer to the cluster outskirts. Therefore, we would expect star-forming populations in virialized clusters to have larger velocity dispersions than the quiescent populations. Such a difference in the velocity dispersions of subpopulations is supported by \citet{ZF93}, who find differences between early- and late-type galaxies, types which correlate well with quiescent and star-forming galaxies, respectively.
We can perform this test using our SED fitting results, using the cuts outlined in Section \ref{sec:SED} to classify galaxies as star-forming or quiescent. In addition, as part of our suite of virialization tests without SED fitting, we rely instead on some combination of the observed galaxy colours. For this latter case, we adopted the $M_{red}$ and $M_{blue}$ `supercolours' described in Section \ref{sec:WMC}. To separate galaxies into red and blue populations, we performed red sequence fits on the member galaxies of the clusters of each LSS in $M_{red}$ versus $M_{red}-M_{blue}$ colour-magnitude space using the methods described in \citet{rumb17}. Red galaxies were defined as those above (i.e., redder than) the lower (bluer) edge of the red sequence, and blue galaxies were defined as everything below (bluer than) this edge.
We calculated the velocity dispersions of the red, blue, star-forming, and quiescent populations using the method described in the previous subsection. The various subpopulation velocity dispersions are given in Table \ref{sigtab}, along with the number of galaxies in each population. Using these counts, we can compare the subdivisions of galaxies by observed colour and star formation status derived from SED fits. While the average percentage of star-forming galaxies is 45\%, the average percentage of blue galaxies is only 35\%, meaning some red galaxies are dusty enough to fall onto the red sequence on a colour-magnitude diagram, but are actually star-forming. Another possible explanation for this discrepancy is that our SED-fitting scheme failed for fainter, redder galaxies resulting in these galaxies being spuriously placed in the star-forming region. However, the median $M_{red}$ and $i$-band apparent magnitude of red star-forming galaxies is, in fact, brighter by 0.5 mags than the average blue star-forming galaxy, which broadly precludes this possibility. In addition, galaxies classified as red and star forming appear almost exclusively in the area of $NUVrJ$ phase space typically containing dustier galaxies (e.g., below the quiescent region but at $M_{r}-M_{J}>1$) and have stacked spectral properties which diverge from those of the overall star-forming population (lack of strong emission lines, presence of strong Balmer absorption) which are consistent with those of dusty star-forming galaxies (see, e.g., \citealt{lemaux14Herschel}). These two lines of evidence essentially rule out the possibility that the classification of such galaxies is spurious. 
While the star-forming versus quiescent results may more accurately categorize the cluster members, the difference is small enough that observed colour appears to be an acceptable proxy for star formation in the absence of SED fits, assuming the data and redshift range allow an analysis similar to our supercolours. In Figure \ref{fig:veldisp}, we plot the quiescent versus star-forming velocity histograms on top of the full histograms.
In addition to examining the differences in velocity dispersions of subpopulations, we also examine the differences in velocity centers, which can be an indication of substructure \citep{ZF93}. We quantify the velocity centers by using the biweight location estimator defined in \citet{beers90} on the blue and red (or quiescent and star-forming) galaxies in each cluster. The estimates of the systemic velocities of each set of sub-populations were then differenced (red minus blue and quiescent minus star-forming). To estimate the significance of each of these velocity differences, we performed Monte Carlo simulations in which we randomly assigned the galaxies in each cluster to be red or blue (or quiescent or star-forming), while still preserving the true number of red and blue (or quiescent and star-forming) galaxies. The fraction of trials with a velocity difference larger than the observed difference is given in Table \ref{testtab} and serves as our estimate of its significance.
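A minimal version of this label-shuffling significance test is sketched below (illustrative only; the median stands in for the biweight location estimator of \citealt{beers90}, and all names are ours):

```python
import random

# Illustrative sketch of the label-shuffling significance test; the
# median is used as a stand-in for the biweight location estimator.

def shuffle_significance(v_a, v_b, n_trials=2000, seed=0):
    """Fraction of shuffled trials whose |centre difference| exceeds
    the observed difference between velocity samples v_a and v_b."""
    def med(xs):
        s = sorted(xs)
        n = len(s)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    rng = random.Random(seed)
    obs = abs(med(v_a) - med(v_b))
    pooled = list(v_a) + list(v_b)
    n_a = len(v_a)
    hits = 0
    for _ in range(n_trials):
        # shuffle labels while preserving the sizes of the two samples
        rng.shuffle(pooled)
        if abs(med(pooled[:n_a]) - med(pooled[n_a:])) > obs:
            hits += 1
    return hits / n_trials
```

Two well-separated velocity samples yield a fraction near zero, i.e., a highly significant offset.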
\begin{table*}
\caption{Velocity Dispersions}
\label{sigtab}
\begin{tabular}{lllllllllll}
\toprule
\footnotesize{Cluster}
& \footnotesize{$\sigma_v^{\rm{a}}$}
& \footnotesize{$N_{all}$}
& \footnotesize{$\sigma_v$}
& \footnotesize{$N_{Qui}$}
& \footnotesize{$\sigma_v$}
& \footnotesize{$N_{SF}$}
& \footnotesize{$\sigma_v$}
& \footnotesize{$N_{red}$}
& \footnotesize{$\sigma_v$}
& \footnotesize{$N_{blue}$}\\
{}
& \footnotesize{(All)}
& {}
& \footnotesize{(Qui.)}
& {}
& \footnotesize{(SF)}
& {}
& \footnotesize{(Red)}
& {}
& \footnotesize{(Blue)}
& {}\\
{}
& \footnotesize{(km/s)}
& {}
& \footnotesize{(km/s)}
& {}
& \footnotesize{(km/s)}
& {}
& \footnotesize{(km/s)}
& {}
& \footnotesize{(km/s)}
& {}\\
\midrule
RCS0224B & $700\pm60$ & 54 & $670\pm70$ & 32 & $740\pm150$ & 22 & $710\pm50$ & 42 & $540\pm240$ & 12 \\
SC0849C & $950\pm140$ & 28 & $1070\pm200$ & 17 & $910\pm140$ & 11 & $930\pm150$ & 19 & $890\pm320$ & 9 \\
SC0849D & $850\pm310$ & 24 & $810\pm250$ & 13 & $790\pm590$ & 11 & $670\pm120$ & 18 & $600\pm240$ & 6 \\
SC0910A & $1060\pm140$ & 25 & $980\pm150$ & 16 & $1110\pm280$ & 9 & $1020\pm160$ & 19 & $1150\pm430$ & 6 \\
SC0910B & $720\pm170$ & 25 & $700\pm160$ & 11 & $570\pm310$ & 14 & $830\pm200$ & 12 & $390\pm160$ & 12 \\
RXJ1221B & $830\pm90$ & 35 & $730\pm90$ & 29 & $980\pm600$ & 6 & $720\pm90$ & 28 & $1230\pm200$ & 7 \\
SC1324A & $1070\pm130$ & 34 & $680\pm160$ & 14 & $1270\pm150$ & 20 & $810\pm150$ & 19 & $1310\pm210$ & 15 \\
SC1324B & $680\pm140$ & 13 & $660\pm180$ & 12 & $0\pm0$ & 1 & $690\pm140$ & 12 & $0\pm0$ & 0 \\
SC1324I & $850\pm100$ & 35 & $690\pm80$ & 21 & $1120\pm300$ & 14 & $700\pm70$ & 26 & $1220\pm250$ & 9 \\
Cl1350C & $920\pm90$ & 52 & $1030\pm120$ & 34 & $650\pm140$ & 18 & $980\pm120$ & 38 & $720\pm160$ & 14 \\
SC1604A & $720\pm130$ & 35 & $460\pm110$ & 19 & $1070\pm230$ & 16 & $500\pm80$ & 23 & $1190\pm260$ & 12 \\
SC1604B & $820\pm80$ & 50 & $690\pm100$ & 18 & $890\pm110$ & 32 & $670\pm100$ & 26 & $740\pm160$ & 23 \\
SC1604D & $810\pm130$ & 73 & $410\pm180$ & 19 & $940\pm140$ & 54 & $510\pm110$ & 25 & $950\pm230$ & 45 \\
RXJ1716A & $1280\pm110$ & 92 & $1250\pm180$ & 44 & $1260\pm130$ & 45 & $1210\pm140$ & 57 & $1380\pm150$ & 35 \\
RXJ1757 & $950\pm130$ & 38 & $520\pm140$ & 20 & $1160\pm330$ & 18 & $690\pm110$ & 26 & $1430\pm290$ & 12 \\
RXJ1821 & $1140\pm90$ & 58 & $940\pm90$ & 44 & $1280\pm860$ & 14 & $920\pm80$ & 45 & $380\pm110$ & 9 \\
\bottomrule
\multicolumn{11}{p{15.3cm}}{$^{\rm a}$ \footnotesize{Velocity dispersion measured using all spectroscopically confirmed cluster members within 1$h_{70}^{-1}$ Mpc of the X-ray centroid as the initial sample for $3\sigma$ clipping. See Section \ref{sec:veldisp} for more details.}}\\
\end{tabular}
\end{table*}
\begin{table*}
\caption{Virialization and Substructure Tests}
\label{testtab}
\begin{tabular}{lllllllllll}
\toprule
\footnotesize{Cluster}
& \footnotesize{$P\left(\Delta_v\right)$}
& \footnotesize{$P\left(\Delta_v\right)$}
& \footnotesize{$\Delta^{\rm b}$}
& \footnotesize{$P\left(\Delta\right)^{\rm b}$}
& \footnotesize{$P_3/P_0$}
& \footnotesize{$P_4/P_0$}
& \footnotesize{$P_3/P_0$}
& \footnotesize{$P_3/P_0$}
& \footnotesize{$P_4/P_0$}
& \footnotesize{$P_4/P_0$}\\
{}
& \footnotesize{Red vs.}
& \footnotesize{Qui. vs.}
& {}
& {}
& {}
& {}
& \footnotesize{Upper}
& \footnotesize{Lower}
& \footnotesize{Upper}
& \footnotesize{Lower}\\
{}
& {Blue$^{\rm a}$}
& \footnotesize{SF$^{\rm a}$}
& {}
& {}
& {}
& {}
& \footnotesize{Bound$^{\rm c}$}
& \footnotesize{Bound$^{\rm c}$}
& \footnotesize{Bound$^{\rm c}$}
& \footnotesize{Bound$^{\rm c}$}\\
\midrule
RCS0224B & 0.953 & 0.106 & 57.830 & 0.001 & 0.189 & 0.139 & 5.158 & 0.611 & 2.943 & 0.169 \\
SC0849C & 0.373 & 0.721 & 43.437 & 0.129 & 3.182 & 0.009 & 10.023 & 1.415 & 2.470 & 0.177 \\
SC0849D & 0.839 & 0.572 & 17.659 & 0.091 & 6.773 & 11.743 & 23.825 & 3.789 & 21.305 & 5.067 \\
SC0910A & 0.204 & 0.544 & 12.768 & 0.060 & 1.714 & 0.469 & 11.695 & 0.832 & 3.122 & 0.521 \\
SC0910B & 0.505 & 0.301 & 13.648 & 0.000 & 0.504 & 0.092 & 7.598 & 1.081 & 3.054 & 0.278 \\
RXJ1221B & 0.354 & 0.017 & 37.456 & 0.774 & 1.882 & 0.974 & 5.247 & 1.074 & 2.846 & 0.414 \\
SC1324A & 0.779 & 0.535 & 47.083 & 0.139 & 28.410 & 1.432 & 54.699 & 10.962 & 13.057 & 1.169 \\
SC1324B & 0.779 & 0.000 & 6.179 & 0.185 & 5.978 & 4.223 & 37.457 & 3.622 & 17.817 & 1.148 \\
SC1324I & 0.157 & 0.818 & 40.650 & 0.362 & 3.601 & 1.686 & 24.365 & 2.514 & 14.421 & 1.142 \\
Cl1350C & 0.754 & 0.262 & 57.507 & 0.782 & 26.714 & 8.116 & 51.883 & 18.655 & 11.806 & 2.797 \\
SC1604A & 0.101 & 0.747 & 35.527 & 0.747 & 21.876 & 1.900 & 43.635 & 10.979 & 8.326 & 0.900 \\
SC1604B & 0.022 & 0.659 & 60.944 & 0.035 & 26.902 & 6.537 & 52.249 & 10.841 & 22.747 & 2.856 \\
SC1604D & 0.973 & 0.742 & 133.068 & 0.406 & 3.806 & 1.955 & 29.283 & 2.997 & 14.747 & 0.887 \\
RXJ1716A & 0.655 & 0.000 & 91.995 & 0.066 & 1.053 & 1.704 & 4.078 & 0.380 & 3.099 & 0.609 \\
RXJ1757 & 0.429 & 0.004 & 38.921 & 0.043 & 6.760 & 1.751 & 24.982 & 3.847 & 7.886 & 1.301 \\
RXJ1821 & 0.000 & 0.000 & 63.362 & 0.146 & 3.396 & 1.325 & 11.740 & 1.188 & 4.945 & 0.551 \\
\bottomrule
\multicolumn{11}{p{15.3cm}}{$^{\rm a}$ \footnotesize{Probabilities that the differences in velocity centers between the red/quiescent and blue/star-forming galaxy populations arose by chance.}}\\
\multicolumn{11}{p{15.3cm}}{$^{\rm b}$ \footnotesize{$\Delta$ is derived from the DS tests, and $P\left(\Delta\right)$ represents the likelihood that the null hypothesis of zero substructure is true (see Section \ref{sec:DStest}).}}\\
\multicolumn{11}{p{15.3cm}}{$^{\rm c}$ \footnotesize{Lower and upper bounds on $P_3/P_0$ and $P_4/P_0$ are the inner 68\% range of values of these power ratio parameters found in Monte Carlo simulations of Poisson noise based on each cluster's diffuse emission. See Section \ref{sec:DE} for more details.}}
\end{tabular}
\end{table*}
\subsection{Star-Forming and Quiescent Galaxy Fractions}
\label{sec:SF}
We may expect more virialized galaxy clusters to have more quiescent galaxy populations. As mentioned in Section \ref{sec:RB}, in a virialized cluster, galaxies tend to have spent more time close to the cluster core, where they are subjected to processes such as ram pressure stripping that can quench star formation. To look for a correlation between quiescence and virialization, as well as our other metrics, we measure the fraction of quiescent galaxies in each cluster, using the results of our SED fitting (see Section \ref{sec:SED}).
We calculated the fraction of quiescent galaxies, $f_{q,comb}$, for each cluster using the sample of all galaxies with spectroscopic and photometric redshifts, to mitigate any observational bias associated with the former. As discussed in Section \ref{sec:MMCG}, galaxies with only photometric redshifts were considered as galaxy members when they were within $\sigma_{\Delta z/(1+z)}\left(1+z_{max/min}\right)$ of the spectroscopic member redshift range. Uncertainties in $f_{q,comb}$ were derived from Poissonian statistics. $f_{q,comb}$ is given in Table \ref{clusproptab}. Note that calculation of $f_{q,comb}$ is only possible with SED fitting, due to contamination of the red sequence by dusty star-forming galaxies and, more importantly, because, in the absence of photometric redshifts, the fraction of red galaxies is a complex function of the spectroscopic sampling rate, redshift success, and targeting strategy.
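As a simple illustration of the fraction and its Poissonian uncertainty (the error-propagation convention shown here is one common choice and is our assumption, not necessarily the exact treatment used for the tabulated values):

```python
import math

# Illustrative quiescent-fraction calculation; the Poissonian error
# propagation shown is one common convention and is our assumption.

def quiescent_fraction(n_q, n_tot):
    """Return (f_q, sigma_f) for n_q quiescent members out of n_tot."""
    f = n_q / n_tot
    sigma = f * math.sqrt(1.0 / n_q + 1.0 / n_tot)
    return f, sigma
```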
\subsection{Dressler-Shectman Tests of Substructure}
\label{sec:DStest}
As another test of substructure within a galaxy cluster, we use the Dressler-Shectman (D-S) test, defined in \citet{DS88} and \citet{hall04}. The D-S test estimates the degree of substructure present in a cluster using both spatial and velocity information, which has the potential of being predictive of the dynamical state of the cluster. This spatial and velocity information is incorporated into the statistic $\delta^2$, which is calculated for each individual galaxy based on its ten nearest neighbours:
\begin{equation}
\delta^2=\frac{11}{\sigma_v^2}\left[\left(v_{loc}-\bar{v}\right)^2+\left(\sigma_{loc}-\sigma_v\right)^2\right]
\label{eq:DS}
\end{equation}
where $\bar{v}$ and $\sigma_v$ are the mean velocity and velocity dispersion of the cluster as a whole. When calculating $\delta^2$ for a given galaxy, $v_{loc}$ and $\sigma_{loc}$ are the mean velocity and velocity dispersion, respectively, of that galaxy and its ten nearest neighbours (hence the coefficient of 11, which is set by $N_{nn}+1$). Choices other than ten neighbours have been used in computing the D-S statistic, especially in cases where the parent sample is not much larger than 10 \citep[see, e.g.,][]{pink96}. However, the choice of 10 here is appropriate, as it approximately corresponds to the optimal window size for the typical number of member galaxies in the clusters in our sample \citep[see][]{silverman86}. To measure the level of substructure of a cluster as a whole, \citet{DS88} define $\Delta$, which is the sum of the $\delta$ statistics of all member galaxies. $\Delta$ has a distribution dependent on the specific set of coordinates and redshifts for the given sample, similar to $\chi^2$, with an expected value on the order of the number of cluster members.
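The computation of the per-galaxy $\delta_i$ and their sum $\Delta$ can be sketched in a few lines (an illustrative plain-Python implementation of the statistic above, not the code used in this work; the tie-breaking of equidistant neighbours is arbitrary):

```python
import math

# Illustrative D-S statistic: delta_i from each galaxy and its n_nn
# nearest (projected) neighbours, summed into Delta.

def ds_delta(x, y, v, n_nn=10):
    """x, y: projected positions; v: line-of-sight velocities."""
    n = len(v)
    vbar = sum(v) / n
    sig = math.sqrt(sum((vi - vbar) ** 2 for vi in v) / (n - 1))
    total = 0.0
    for i in range(n):
        # galaxy i plus its n_nn nearest neighbours in projection
        order = sorted(range(n),
                       key=lambda j: (x[j] - x[i]) ** 2 + (y[j] - y[i]) ** 2)
        grp = [v[j] for j in order[:n_nn + 1]]
        m = len(grp)
        vloc = sum(grp) / m
        sloc = math.sqrt(sum((g - vloc) ** 2 for g in grp) / (m - 1))
        d2 = (m / sig ** 2) * ((vloc - vbar) ** 2 + (sloc - sig) ** 2)
        total += math.sqrt(d2)
    return total
```

A spatially well-mixed velocity field (local means and dispersions close to the global values) yields small $\delta_i$ and hence a small $\Delta$, as expected.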
The results of our D-S tests are given in Table \ref{testtab}. To estimate the significance of the $\Delta$ values, we performed Monte Carlo simulations using the method of \citet{hall04} and \citet{rumb13}. For each trial, we shuffled the velocities, but not the spatial coordinates, of all cluster members, and recalculated $\Delta$. The fraction of trials with $\Delta$ larger than the observed value are also given in Table \ref{testtab}, as $P\left(\Delta\right)$. This value is our estimate of the likelihood that no substructure exists in the given cluster. We note that this likelihood only speaks to the possibility of substructure and not the virialization state of a given cluster. The relationship between the results of the D-S test to the virialization state will be investigated in Section \ref{sec:ana}.
\subsection{Diffuse X-Ray Emission}
\label{sec:DE}
Examining the X-ray emission from a cluster provides information on the diffuse gas located at its center. A disturbed cluster can have asymmetries in its gas distribution or an offset between the center of the ICM emission and other centroiding measures.
To locate diffuse X-ray emission from the galaxy clusters in our sample, we first removed bright point sources from the {\it Chandra} images. The areas containing point source emission were filled in using Poisson distributions to simulate the background, with the background level estimated using an annulus around each object. We then convolved the {\it Chandra} images with azimuthally symmetric beta models of the form
\begin{equation}
f(r) = A\left(1+\frac{r^2}{r_c^2}\right)^{-3\beta+1/2}
\label{eq:betamodel}
\end{equation}
We used $\beta=2/3$ and a core radius of $r_c=180$ kpc, which are typical values for galaxy clusters \citep[see, e.g.,][]{AE99,ett04,maug06,hicks08}. We defined the centroids as the points of local maxima in the smoothed images. In Table \ref{centab}, we provide the coordinates of these X-ray centers\footnote{Any galaxy group or cluster not listed had diffuse emission which was not measurable at a high enough signal-to-noise ratio to be useful for our full analysis.}.
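A beta-model smoothing kernel of this form can be constructed as follows (an illustrative sketch; the kernel half-width and core radius in pixels are placeholders, not the values used on the {\it Chandra} images):

```python
# Illustrative construction of an azimuthally symmetric beta-model
# smoothing kernel f(r) = A (1 + r^2/r_c^2)^(-3*beta + 1/2),
# normalised to unit sum. size and r_c are in pixels (placeholders).

def beta_kernel(size, r_c, beta=2.0 / 3.0):
    k = [[(1.0 + (dx * dx + dy * dy) / (r_c * r_c)) ** (-3.0 * beta + 0.5)
          for dx in range(-size, size + 1)]
         for dy in range(-size, size + 1)]
    total = sum(sum(row) for row in k)
    return [[val / total for val in row] for row in k]
```

Convolving the point-source-subtracted image with such a kernel (e.g., via a standard 2-D convolution routine) yields the smoothed maps from which the centroids are read off as local maxima.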
\begin{figure*}
\includegraphics[width=0.99\textwidth]{figure2.pdf}
\caption{Spatial plots of diffuse X-ray emission are shown for each cluster, with different centroids marked. The center of the X-ray emission is marked with an X, the BCG is marked with an open circle, the MMCG is marked with a star, the luminosity-weighted mean center is marked with a plus, and the mass-weighted mean center is marked with a diamond. The dotted line is a circle of radius 0.5 Mpc at the redshift of the cluster, centered on the X-ray emission. The contours are derived from {\it Chandra} images of diffuse emission, convolved with a beta model. The background level was subtracted, and the image was divided by the RMS variability (see Section \ref{sec:DE} for more details). In these units, the contour levels are as follows: RCS0224B - 6, 12, 18, 24; SC0849C - 4, 8, 12, 16; SC0849D - 1.5, 3, 4.5, 6; SC0910A - 5, 10, 15, 20, 25; SC0910B - 2.5, 5, 7.5, 10, 12.5; RXJ1221B - 20, 40, 60, 80; SC1324A - 4, 8, 12, 16, 19; SC1324B - 2.5, 5, 7.5, 10, 12.5; SC1324I - 1, 2, 3, 4, 5. Contour levels were broadly set at 4-5 equally spaced intervals between the background and the peak of the X-ray emission.}
\label{fig:DE}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.99\textwidth]{figure3.pdf}
\caption{Spatial plots of diffuse X-ray emission are shown for each cluster, with different centroids marked. The center of the X-ray emission is marked with an X, the BCG is marked with an open circle, the MMCG is marked with a star, the luminosity-weighted mean center is marked with a plus, and the mass-weighted mean center is marked with a diamond. The dotted line is a circle of radius 0.5 Mpc at the redshift of the cluster, centered on the X-ray emission. The contours are derived from {\it Chandra} images of diffuse emission, convolved with a beta model. The background level was subtracted, and the image was divided by the RMS variability (see Section \ref{sec:DE} for more details). In these units, the contour levels are as follows: Cl1350C - 10, 20, 30, 40; SC1604A - 3, 6, 9, 12; SC1604B - 2, 4, 6, 7.5; SC1604D - 1, 2, 3; RXJ1716A - 30, 60, 90, 120; RXJ1757 - 7, 14, 21, 28; RXJ1821 - 10, 20, 30, 40.}
\label{fig:DE2}
\end{figure*}
\subsubsection{Spatial Profiles}
\label{sec:spatprof}
X-ray contours from the smoothed {\it Chandra} images are shown in Figures \ref{fig:DE} and \ref{fig:DE2}. While the contours appear largely symmetric, the degree of symmetry can be quantified using power ratios, as in \citet{BT95,BT96}. The power ratios are calculated by evaluating the multipole moments of the surface brightness of the diffuse X-ray emission in a region around the cluster center. They are defined according to:
\begin{align}
P_0&=\left[a_0\ln\left(R_{ap}\right)\right]^2\\
P_m&=\left(a_m^2+b_m^2\right)/\left(2m^2R_{ap}^{2m}\right)\\
a_m\left(R\right)&=\int_{r<R}\Sigma\left({\bf x}\right)r^m \cos\ m\phi\ d^2{\bf x}\\
b_m\left(R\right)&=\int_{r<R}\Sigma\left({\bf x}\right)r^m \sin\ m\phi\ d^2{\bf x}
\label{eq:PR}
\end{align}
where ${\bf x}=\left(r,\phi\right)$, $\Sigma\left({\bf x}\right)$ is the two-dimensional projection of mass density (in our case, we use photon counts per pixel as a proxy), and $R_{ap}$ is a chosen aperture radius inside which the power ratios are evaluated. Note that a perfectly round cluster will have $P_{m>1}=0$, so larger values will generally indicate more asymmetry. We measure the power ratios $P_3/P_0$ and $P_4/P_0$ for each cluster on the unsmoothed images, using $R_{ap} = 1 h_{70}^{-1}$ Mpc. These quantities for each cluster are given in Table \ref{testtab}. These two ratios should be most sensitive to asymmetries in the diffuse emission, and provide different information on the cluster gas, since odd moments are not sensitive to ellipticity \citep{jel05,don16}.
To estimate uncertainties for the power ratios, we performed Monte Carlo simulations. {\it Chandra} images are very sparse, with either 0 or 1 photon counts in the overwhelming majority of pixels, so we first smoothed the {\it Chandra} images using a tophat kernel with a 5 pixel radius to get an estimate of the background level. We then generated a series of random images using Poisson functions based on the background level at each pixel\footnote{Uncertainties derived from an image with random noise placed on top of a sparsely sampled beta profile yielded similar values.}. The upper and lower bounds of the central 68\% of the Monte Carlo trials are shown in Table \ref{testtab}. Most of the power ratio measurements are enclosed by these intervals, meaning our measurements are dominated by Poisson noise. However, there are several clusters with power ratios below this level, which implies their diffuse emission is too symmetric compared to pure noise, meaning they are highly regular in shape.
For use in measuring temperatures and luminosities (see Section \ref{sec:templum}), we also measured and modeled the one-dimensional radial profiles of the diffuse emission. We first measured the counts in circular annuli around the X-ray centroid. For each cluster, we determined at what radius $r_e$ the surface brightness reached the background level, which we used later to define the region for extracting spectra and net counts. We then used the counts in the annuli to fit an azimuthally-averaged surface brightness model consisting of a beta model and a constant background:
\begin{equation}
SB\left(r\right)=A\left(1+r^2/r_c^2\right)^{1/2-3\beta}+SB_{bkg}
\label{eq:SB}
\end{equation}
As in Equation \ref{eq:betamodel}, $r_c$ is the core radius, and we set $\beta=2/3$. As an additional constraint, we required that the net counts (NC) predicted by the surface brightness model within $r_e$ match the value we measured. This is equivalent to
\begin{equation}
NC\left(r_e\right)= 2\pi Ar_c^2\left(1 - 1/\sqrt{1 + r_e^2/r_c^2}\right)
\end{equation}
This reduces the independent parameters in our model to two: $r_c$ and $SB_{bkg}$.
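The constrained fit above can be sketched as follows: for a trial $r_c$, the amplitude $A$ is fixed by inverting the net-counts relation, leaving only $r_c$ and $SB_{bkg}$ free. This is an illustrative implementation using \textsc{scipy} (our actual fitting tool is not specified here, and the data arrays are placeholders):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_beta_profile(r, sb, sb_err, nc_meas, r_e):
    """Fit SB(r) = A(1 + r^2/r_c^2)^(-3/2) + SB_bkg (beta = 2/3), with A
    tied to the measured net counts nc_meas inside radius r_e."""
    def amplitude(r_c):
        # Invert NC(r_e) = 2*pi*A*r_c^2*(1 - 1/sqrt(1 + r_e^2/r_c^2))
        return nc_meas / (2*np.pi*r_c**2*(1 - 1/np.sqrt(1 + r_e**2/r_c**2)))

    def resid(params):
        r_c, sb_bkg = params
        model = amplitude(r_c)*(1 + r**2/r_c**2)**(-1.5) + sb_bkg
        return (model - sb) / sb_err

    fit = least_squares(resid, x0=[0.5*r_e, sb[-1]],
                        bounds=([1e-3, 0.0], [np.inf, np.inf]))
    r_c, sb_bkg = fit.x
    return r_c, sb_bkg, amplitude(r_c)
```

Eliminating $A$ in this way enforces the net-count constraint exactly at every step of the optimization, rather than treating it as an extra residual.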
\subsubsection{Temperature and Luminosity}
\label{sec:templum}
To measure the gas temperature of each cluster, we extracted spectra using the CIAO tool {\it specextract}. We extracted the spectra inside circular regions centered on the X-ray centroids, using the $r_e$ values we found in Section \ref{sec:spatprof} as the radii. A background spectrum was extracted nearby and subtracted.
The spectra were fit to a Raymond-Smith thermal plasma model \citep{RS77}, with the absorption model of \citet{MM83}, which we chose for consistency with previous work. For our models, we fixed $Z=0.3Z_{\odot}$, which is commonly used in the literature \citep{ES91}. Additionally, variations in the metallicity do not have a large impact on the results. We calculated Galactic neutral hydrogen column densities using the COLDEN tool from the {\it Chandra} proposal toolkit, which uses the data set of \citet{DicLock90}. We fit the Raymond-Smith models, using the Sherpa tool from the CIAO toolset, in the energy range 0.5-8 keV, using $\chi^2$\ statistics. Because of low number counts, spectra were grouped to include a minimum of 20 counts per bin. Our results are shown in Table \ref{clusproptab}. Note that we were unable to fit a temperature model to SC1324B with any meaningful precision.
We measured the net photon counts from each cluster using the same extraction and background regions as the spectra. Since the size of these regions varied from cluster to cluster, we normalized these measurements by extrapolating to $r_{500} \equiv 2\sigma_v /\left[ 500 H\left(z\right)\right]$. We carried out the extrapolation using the surface brightness models we fit in Section \ref{sec:spatprof}. We then used the fitted Raymond-Smith models to convert net counts within $r_{500}$ in the 0.5-2.0 keV range into fluxes in the observer frame in the $0.5/(1+z)$-$2.0/(1+z)$ keV range. Multiplying these values by $4\pi D_L^2$ then converts to the luminosity emitted in the rest frame 0.5-2.0 keV range. The Raymond-Smith models were then used again to extrapolate bolometric luminosities. These values are given in Table \ref{clusproptab}. Since we were unable to measure a temperature for SC1324B, we had no Raymond-Smith model with which to extrapolate a bolometric luminosity; lacking an accurate X-ray temperature or luminosity, this cluster is excluded from the analysis in Section \ref{sec:ana}.
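The $4\pi D_L^2$ conversion can be sketched numerically. The snippet below assumes a flat cosmology with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m = 0.3$ (consistent with the $h_{70}$ convention used throughout, but still an assumption here), and does not reproduce the band K-correction, which in our analysis is handled by the Raymond-Smith models:

```python
import numpy as np
from scipy.integrate import quad

C_KMS, H0, MPC_CM = 2.99792458e5, 70.0, 3.0857e24  # assumed H0 = 70

def lum_dist_cm(z, omega_m=0.3, omega_l=0.7):
    """Luminosity distance in cm for an assumed flat LCDM cosmology."""
    invE = lambda zp: 1.0 / np.sqrt(omega_m*(1 + zp)**3 + omega_l)
    d_c = C_KMS/H0 * quad(invE, 0.0, z)[0]  # comoving distance, Mpc
    return (1 + z) * d_c * MPC_CM

def flux_to_lum(flux_cgs, z):
    """L = 4*pi*D_L^2*F: observed flux (erg/s/cm^2) to luminosity (erg/s)."""
    return 4.0*np.pi*lum_dist_cm(z)**2*flux_cgs
```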
\begin{figure}
\includegraphics[width=0.45\textwidth]{figure4.pdf}
\caption{
Scaling relations between properties of the galaxy clusters. The top plot shows the relation between the bolometric X-ray luminosities and X-ray temperatures of the diffuse gas. Also shown are the scaling relations derived from virialized clusters of \citet{pratt09}, \citet{andersson11}, and \citet{rei11}. In the middle plot, we show the relation between the X-ray gas temperature and the velocity dispersion of the cluster galaxies, along with the scaling relation from \citet{XW00}. In the bottom plot, we show the relation between the bolometric X-ray luminosities and velocity dispersion of the cluster galaxies. Again, we plot the scaling relation from \citet{XW00}. For consistency with \citet{XW00}, the luminosities are extrapolated to infinite radius, rather than $r_{500}$.}
\label{fig:SR}
\end{figure}
\section{Scaling Relations}
\label{sec:SR}
If a galaxy cluster were subjected only to gravitational interactions, we would expect certain scaling relations between the temperature and luminosity of the ICM gas. Furthermore, if we are able to effectively remove outliers and select a largely pure and representative member population, these relations should also extend to the line of sight differential velocity dispersions of these galaxies. In this case, the only source of heating for the gas would be the gravitational collapse of the cluster. With photons emitted via bremsstrahlung, we would expect the X-ray luminosity to scale with the gas temperature as $L_x\propto T^2$\ \citep{kaiser86}. We would also expect the coefficient of proportionality to evolve with redshift as $E\left(z\right)$ \citep{krav06}, where
\begin{equation}
E\left(z\right)=\left[\Omega_m\left(1+z\right)^3+\left(1-\Omega_m-\Omega_{\Lambda}\right)\left(1+z\right)^2+\Omega_{\Lambda}\right]^{1/2}.
\end{equation}
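A minimal implementation of this factor, with the density parameters as assumed inputs rather than values derived in this work:

```python
def hubble_E(z, omega_m=0.3, omega_l=0.7):
    """E(z) = H(z)/H0 for the general cosmology written above
    (the density parameters here are assumed, not fit results)."""
    omega_k = 1.0 - omega_m - omega_l
    return (omega_m*(1 + z)**3 + omega_k*(1 + z)**2 + omega_l)**0.5
```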
Similarly, we would expect the velocity dispersion of the cluster galaxies to be related to the ICM temperature as $\sigma_v\propto T^{1/2}$, assuming virialization. It would then follow that $L_x\propto \sigma_v^4$.
In practice, it has been found that galaxy clusters do not follow these naive scaling relations. Studies at low redshift have found an $L_x$-$T$ relation closer to $L_x\propto T^3$ \citep{mark98, AE99, XW00, vik02}. This steeper relation implies an injection of energy from a non-gravitational source, such as AGN. Similarly, low-redshift studies have found that the $\sigma_v$-$T$ and $L_x$-$\sigma_v$ relations also deviate from the naive expectations, with higher powers of $T$ and $\sigma_v$, respectively \citep{XW00,horner01}, with the same implication of energy from non-gravitational sources.
In Figure \ref{fig:SR}, we have plotted the ICM temperatures, luminosities, and galaxy velocity dispersions against each other. In the first panel, the bolometric luminosities are used, and we have plotted the local scaling relations found by \citet{pratt09}, \citet{andersson11}, and \citet{rei11}, which follow $L_x\propto T^{2.70}$, $L_x\propto T^{2.90}$, and $L_x\propto T^{2.53}$, respectively. For the other two scaling relations, we plot the empirical relations from \citet{XW00}, which follow $\sigma_v\propto T^{0.65}$ and $L_x\propto \sigma_v^{5.30}$. For the latter, \citet{XW00} use bolometric luminosities extrapolated to infinite radius, using their fitted beta models. Lacking an alternative relation using $L_{500}$, we made the same extrapolation for the $L_x$-$\sigma_v$ relation.
\begin{table}
\caption{Scaling Relation Offsets}
\label{offsettab}
\begin{tabular}{lcccc}
\toprule
\footnotesize{Cluster}
& \footnotesize{$L_x$-$T$}
& \footnotesize{$\sigma_v$-$T$}
& \footnotesize{$L_x$-$\sigma_v$}
& \footnotesize{Mean}\\
{}
& {offset ($\sigma$)$^{\rm a}$}
& {offset ($\sigma$)}
& {offset ($\sigma$)}
& {offset ($\sigma$)$^{\rm a}$}\\
\midrule
RCS0224B & 1.30(0.05) & 0.83 & 0.48 & 0.87(0.45) \\
SC0849C & 2.26(1.18) & 1.27 & 1.52 & 1.68(1.32) \\
SC0849D & 1.21(0.23) & 0.26 & 1.29 & 0.92(0.84) \\
SC0910A & 0.49(1.05) & 0.66 & 0.81 & 0.65(0.84) \\
SC0910B & 1.64(0.03) & 0.78 & 0.29 & 0.90(0.37) \\
RXJ1221B & 3.18(0.41) & 3.23 & 1.56 & 2.66(1.74) \\
SC1324A & 1.33(0.76) & 0.81 & 1.74 & 1.29(1.10) \\
SC1324I & 0.37(0.40) & 0.52 & 2.01 & 0.97(0.98) \\
Cl1350C & 0.91(1.31) & 1.15 & 0.16 & 0.45(0.59) \\
SC1604A & 0.44(0.31) & 0.01 & 0.21 & 0.22(0.18) \\
SC1604B & 0.47(1.31) & 1.15 & 2.25 & 1.29(1.57) \\
SC1604D & 1.30(1.39) & 3.86 & 0.63 & 1.93(1.96) \\
RXJ1716A & 1.06(2.76) & 0.57 & 1.69 & 1.11(1.68) \\
RXJ1757 & 1.43(0.08) & 0.14 & 1.61 & 1.06(0.61) \\
RXJ1821 & 1.12(0.95) & 0.48 & 2.51 & 1.37(1.31) \\
\bottomrule
\multicolumn{5}{p{8.5cm}}{$^{\rm a}$ \footnotesize{Offsets from the $L_x$-$T$ use the \citet{rei11} relation. Values in parentheses use the \citet{andersson11} relation instead. See Section \ref{sec:SR} for details.}}
\end{tabular}
\end{table}
In Table \ref{offsettab}, we have compiled the offsets from the scaling relations for each cluster, normalized by the uncertainties on the measurements\footnote{Scaling relation offsets are calculated as the shortest distance to the scaling relation curve.}. Since there is substantial variation in the $L_x$-$T$ relations from the literature\footnote{Note that the variation between scaling relations is within the uncertainties on their parameters.}, as seen in Figure \ref{fig:SR}, we recorded offsets relative to both the \citet{rei11} and \citet{andersson11} relations, with the latter in parentheses. The \citet{rei11} relation is derived from a large sample compiled from the literature, with a wide range of redshifts ($z<1.1$). The \citet{andersson11} relation is derived from a smaller sample, but with a similar redshift range to the ORELSE survey ($0.4<z<1.1$). Because of a relative lack of research into the $\sigma_v$-$T$ and $L_x$-$\sigma_v$ relations, the \citet{XW00} relations are the only ones available to us, and we therefore cannot compare different derived relations in these cases. In addition, we lack the dynamic range necessary to fit meaningful relations ourselves given our sample size.
\section{Analysis of Virialization Tests and Correlations to Scaling Relation Offsets}
\label{sec:ana}
We have compiled a set of tests that have resulted in a set of measurements designed to probe the virialization and presence of substructure in galaxy clusters. In addition, we computed the velocity dispersions, X-ray temperatures, and luminosities of all clusters in our sample. Comparing these values to relations between virialized clusters, we would expect offsets from these relations to be correlated with the results of our virialization/substructure tests. By examining the strength of correlations, we can determine which tests have the most power in predicting the virialization of galaxy clusters.
We stress that in the analysis that follows we rely completely on the assumption that clusters with small offsets from the adopted empirical scaling relations are near virialized, while those exhibiting large offsets relative to these relations are not. We further stress that the purpose of this exercise is not to definitively determine which ORELSE clusters are in a near virialized (or unrelaxed) state or at what value of a certain metric a cluster can be considered near virialized, but rather to understand which of the measurements\footnote{Here we refer only to those measurements in Section \ref{sec:CP} that do not explicitly go into calculating scaling relation offsets, i.e., not $L_{X}$, $T_{X}$, or $\sigma_{v}$.} described in Section \ref{sec:CP} correlate well with scaling relation offsets measured in Section \ref{sec:SR} in order to inform observational strategies for general cluster surveys. The first step is to gather all of our tests of virialization and substructure and to normalize them so that they may be properly compared in a common framework. For brevity, we will refer to these normalized measurements as ``metrics''.
\subsection{Normalizing Measurements}
\label{sec:norm}
First, in Section \ref{sec:veldisp}, we examined the velocities of different populations of galaxies within clusters. We derive two metrics: the difference between the velocity centers of the red/quiescent and blue/star-forming populations, and the differences between their velocity dispersions. For the difference in velocity dispersions, the metric was normalized by dividing the difference by its uncertainty, effectively converting our units to the uncertainty in the velocity dispersion, $\sigma$, under the assumption of a Gaussian distribution. For the difference in velocity centers, we calculated the probability that the difference arose by chance, using Monte Carlo simulations where the colours or star-forming statuses of the galaxies were randomly shuffled among the sample in each trial (see Section \ref{sec:RB}). This was normalized by assuming a Gaussian distribution as well. $P\left(\Delta_v\right)$ is our measure of confidence that $\Delta_v$ arose by chance, which can be thought of in units of $\sigma$. For example, a 32\% chance of measuring a velocity center difference at least as high as observed would correspond to $\sigma=1$, and to have a confidence of $3\sigma$, $P\left(\Delta_v\right)$ would have to be $\le0.3$\%.
Next, we performed the DS tests of substructure in Section \ref{sec:DStest}. The test outputs the parameter $\Delta$, whose distribution depends on the specific coordinates and redshifts of the galaxies involved. We used Monte Carlo simulations to obtain a distribution of $\Delta$ values for each cluster. We then calculated $P\left(\Delta\right)$, the chance of measuring a $\Delta$ value at least as high as observed by observing how many Monte Carlo trials had a $\Delta$ value at least that high. We can normalize this metric in the same way we normalized the velocity center differences above: we converted $P\left(\Delta\right)$ to units of $\sigma$ by assuming a Gaussian distribution. $P\left(\Delta\right)$ is our measure of confidence, so $1\sigma$ would correspond to $P\left(\Delta\right)=32$\%, $2\sigma$ would correspond to $P\left(\Delta\right)=4.6$\%, etc.
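The normalization of chance probabilities into $\sigma$ units described above amounts to inverting the Gaussian survival function; a one-line sketch (the function name is hypothetical):

```python
from scipy.stats import norm

def p_to_sigma(p):
    """Express a two-sided chance probability in Gaussian sigma units,
    e.g. p = 32% -> ~1 sigma, p = 0.3% -> ~3 sigma."""
    return norm.isf(p / 2.0)
```

This reproduces the worked example in the text: a 32\% chance corresponds to $1\sigma$, and $3\sigma$ requires $P \le 0.3\%$.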
We then examined the asymmetry of the diffuse X-ray emission in Section \ref{sec:DE}. To quantify the asymmetry, we measured the power ratios $P_3/P_0$ and $P_4/P_0$. These measurements were especially noisy, with most of the variation explained by noise. There were, however, some clusters with power ratios lower than would be expected from Poisson noise in the X-ray images, suggesting that these clusters have the most symmetric ICM emission. We encode this information into a normalized metric by assigning clusters with these especially low power ratio measurements a metric value of $-1$, while assigning a value of zero to all others.
We calculated three different galaxy cluster centroids: the centers of X-ray emission, the MMCGs/BCGs, and weighted centers using luminosities/masses (see Sections \ref{sec:WMC}, \ref{sec:MMCG}, and \ref{sec:DE}). The pairwise distances between these centroids provide three measures of virialization, although only two are independent. They were normalized by dividing the distances by their uncertainties, using the errors on the positions.
Two different methods were combined to estimate the uncertainty for the weighted mean centers. In the first method we measured the offset between the LWMCs/MWMCs of each cluster calculated by spec-$z$ members only and those centers calculated from a combination of spec-$z$ members and photo-$z$ members which did not have a secure spectroscopic redshift outside of the cluster bounds (see Section \ref{sec:MMCG2} for details on photo-$z$ membership criteria). This method was used to account for uncertainties related to the incompleteness of our spectroscopic campaign. In the second method we carried out Monte Carlo simulations to estimate the error associated with choosing a specific radius around the X-ray centroid within which to measure the weighted mean centers. In each trial, the radial cut was randomly varied between 0.5 and 1.25 $h_{70}^{-1}$\ Mpc and 5\% of the spectroscopic sample was cut. As an ensemble, variations in the weighted mean centers were found to be $<0.1h_{70}^{-1}$\ Mpc for 84\% of trials, for both luminosity and mass weighting. The median offset value from these simulations ($\sim$$0.05h_{70}^{-1}$\ Mpc) was added in quadrature with the uncertainties from the first method to estimate the total uncertainty on the weighted mean centers.
\begin{table*}
\caption{Virialization Metric Correlations}
\label{corrtab}
\begin{tabular}{lcccc}
\toprule
\footnotesize{Metric}
& \multicolumn{4}{c}{\footnotesize{$R^2$ without Metric$^{\rm a}$}} \\
{} & \multicolumn{2}{c}{\footnotesize{Reichert et al. (2011)$^{\rm b}$}} & \multicolumn{2}{c}{\footnotesize{Andersson et al. (2011)$^{\rm b}$}} \\
{}
& \footnotesize{(SED fits)}
& \footnotesize{(no SED fits)}
& \footnotesize{(SED fits)}
& \footnotesize{(no SED fits)}\\
\midrule
Galaxy populations velocity center offset$^{\rm c}$ & {\bf 0.204 (0.123)} & {\bf 0.496 (0.132)} & {\bf 0.412 (0.226)} & 0.442 (0.006)\\
Galaxy populations velocity dispersion diff.$^{\rm c}$ & 0.305 (0.021) & 0.618 (0.009) & 0.465 (0.172) & 0.433 (0.015)\\
DS test & 0.306 (0.020) & 0.595 (0.033) & 0.540 (0.098) & 0.441 (0.007)\\
$P_3/P_0$ & 0.291 (0.035) & {\bf 0.501 (0.127)} & {\bf 0.446 (0.191)} & {\bf 0.330 (0.117)}\\
$P_4/P_0$ & 0.325 (0.001) & 0.618 (0.010) & 0.628 (0.010) & 0.439 (0.009)\\
MMCG/BCG to X-ray distance$^{\rm d}$ & 0.295 (0.031) & {\bf 0.542 (0.086)} & 0.484 (0.153) & 0.448 (0.000)\\
MMCG/BCG to MWMC/LWMC distance$^{\rm d,e}$ & {\bf 0.258 (0.068)} & 0.619 (0.009) & {\bf 0.461 (0.177)} & {\bf 0.416 (0.032)}\\
MWMC/LWMC to X-ray distance$^{\rm e}$ & {\bf 0.231 (0.096)} & {\bf 0.082 (0.546)} & {\bf 0.252 (0.385)} & {\bf 0.270 (0.178)}\\
MMCG/BCG Velocity & 0.307 (0.020) & 0.553 (0.075) & 0.509 (0.129) & {\bf 0.408 (0.039)}\\
Quiescent fraction & {\bf 0.252 (0.075)} & 0.000 (0.628) & 0.467 (0.171) & 0.000 (0.448)\\
\hline
Total $R^2$ & 0.327\phantom{ (0.021)} & 0.628\phantom{ (0.021)} & 0.638\phantom{ (0.021)} & 0.448\phantom{ (0.021)} \\
\bottomrule
\multicolumn{5}{p{16cm}}{$^{\rm a}$ \footnotesize{Results show the goodness-of-fit metric, $R^2$, when performing ordinary least squares regression between the combined virialization metrics listed and the mean offset from the scaling relations. The Total $R^2$ row uses all metrics, while the other rows use all but the listed metric. Values in parentheses are the $R^2$ calculated with the exclusion of the relevant metric subtracted from the Total $R^2$. The four highest reductions in $R^2$ in each column are shown in bold.}}\\
\multicolumn{5}{p{16cm}}{$^{\rm b}$ \footnotesize{Results use either the \citet{rei11} or \citet{andersson11} $L_x$-$T$ scaling relation. See Section \ref{sec:SR} for details.}}\\
\multicolumn{5}{p{16cm}}{$^{\rm c}$ \footnotesize{Difference between quiescent and star-forming populations for test using SED fitting metrics, and difference between blue and red populations for test without SED fitting.}}\\
\multicolumn{5}{l}{$^{\rm d}$ \footnotesize{Distance measure uses MMCG for test using SED fitting metrics, and BCG for test without SED fitting.}}\\
\multicolumn{5}{l}{$^{\rm e}$ \footnotesize{Distance measure uses MWMC for test using SED fitting metrics, and LWMC for test without SED fitting.}}\\
\multicolumn{5}{l}{$^{\rm f}$ \footnotesize{Since SED fitting was required for estimating the quiescent fraction, it was not used for these linear fits.}}
\end{tabular}
\end{table*}
\begin{figure*}
\includegraphics[width=0.99\textwidth]{figure5.pdf}
\caption{
Results from performing ordinary least squares regression between the combined virialization metrics and the mean offset from the scaling relations. Plotted are the percentage reductions in the goodness-of-fit metric, $R^2$, when all metrics except the one listed are used, compared to when every metric is used.}
\label{fig:metrics}
\end{figure*}
Uncertainties on the X-ray centroids were estimated using Monte Carlo simulations performed as follows. For each cluster, we took a cutout of the unsmoothed image around the cluster and measured the surface brightness at each point, smoothing with a tophat kernel because the average counts per pixel was less than 1. For each Monte Carlo trial, we randomized the photon counts for each pixel using a Poisson distribution with the surface brightness as the estimate of the expected value, then re-smoothed the simulated image and located the X-ray center. The distribution of simulated X-ray centers gives us an estimate of the uncertainty on the X-ray centroid.
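The centroid Monte Carlo can be sketched as follows. This is a simplified stand-in: it uses a flux-weighted centroid for the center-finding step (our actual center determination and smoothing are described above), and the input image is assumed to be already smoothed:

```python
import numpy as np

def centroid_uncertainty(image, n_trials=500, rng=None):
    """Poisson-resample a smoothed counts image and measure the scatter
    of the flux-weighted centroid across the simulated images."""
    rng = np.random.default_rng(rng)
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    centers = np.empty((n_trials, 2))
    for i in range(n_trials):
        sim = rng.poisson(image)       # expected value = smoothed image
        tot = sim.sum()
        centers[i] = (np.sum(x*sim)/tot, np.sum(y*sim)/tot)
    return centers.mean(axis=0), centers.std(axis=0)
```

The standard deviation of the simulated centers across trials serves as the positional uncertainty estimate.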
Compared to the uncertainties on the mean centers and X-ray centroid, the positional uncertainty of the galaxy identified as the MMCG/BCG is negligible. While there is a systematic uncertainty associated with the potential misidentification of the MMCG/BCG, we definitively identify each in all clusters, so such a concern does not apply to this work. The three distances between centroids are therefore normalized by the uncertainties on the weighted mean center and X-ray centroid, adding them in quadrature for the distance between these two centroids, and using only the single relevant uncertainty when normalizing the offsets from the MMCG/BCG. The distance between the MMCG/BCG and the cluster center in velocity space also provides information on the virialization of a cluster. The uncertainty on the velocity offset should, in principle, be measured from a combination of the uncertainty on the redshift of the MMCG/BCG, the uncertainty in the systemic velocity, and the uncertainty relating to whether a given velocity is meaningful relative to the random motions in the cluster (i.e., relative to $\sigma_{v}$). In practice, however, the uncertainty in redshift of the MMCG/BCG is negligible ($\sim10-20$ km s$^{-1}$) and the uncertainty in the systemic redshift of a given cluster based on bootstrap estimates\footnote{A better way to approach the estimate of this uncertainty would be to use an identical method to what was used to estimate the LWMC/MWMC uncertainties. However, the lack of velocity precision of the photo-$z$ measurements prevents such a method from being used.} is also negligible ($\sim100-200$ km s$^{-1}$) to the precision necessary for the calculation here. Thus, to normalize this metric we simply divide the velocity offsets by the velocity dispersion of each cluster.
Lastly, we have the quiescent fractions, described in Section \ref{sec:SF}. We may expect more virialized clusters to also have higher quiescent fractions, since their members have spent more time near the cluster core, where processes that quench star formation are stronger. However, we do not need to normalize these measurements, since they are already directly comparable between clusters, and we expect them to correlate directly with the degree of virialization.
\begin{figure*}
\includegraphics[width=0.68\textwidth]{figure6a.pdf}
\includegraphics[width=0.68\textwidth]{figure6b.pdf}
\caption{
\emph{Top:} The distribution of the ranked importance of each metric in reducing $R^2$ when removed from the joint fit relative to the scaling relation offsets for
different sub-samples of the ORELSE clusters, metrics which use and do not use SED fitting, different normalization approaches, and offsets with respect to different
scaling relations (see Section \ref{sec:predict} for details). Lower numbers indicate a metric is relatively more important in predicting cluster virialization. The mode
of each distribution is shown by the solid line and the value is given in each sub-panel. For consistency, the 10th most important metric for those iterations which
employ SED fitting was not considered in these distributions$^{11}$. \emph{Bottom:} The distribution of the weighted percentage reductions in $R^2$ (see Section \ref{sec:predict})
for each metric for the same cases as were considered in the top panels. The median of each distribution is shown by the solid line and the value is given in each sub-panel. Higher numbers indicate a metric is relatively more important in predicting cluster virialization.}
\label{fig:metricdist}
\end{figure*}
\subsection{Metric Correlations with Scaling Relation Offsets}
\label{sec:corr}
We may expect each of the normalized metrics described in the previous section to correlate with the offsets from the scaling relations. We can test this by performing linear regression between the metrics and scaling relation offsets.
For each metric, we used the \textsc{LinearRegression} class from {\it scikit-learn} to perform ordinary least squares regression with the offsets from each of the scaling relations, performed for both the \citet{andersson11} and \citet{rei11} relations in the case of the $L_x$-$T$ relation, as well as the mean offset from all three scaling relations again using those offsets from the two $L_x$-$T$ relations separately. To evaluate the goodness of fit, we calculated $R^2$ for each case. In every case, $R^2$ was less than 0.37, indicating weak to moderate correlations between individual metrics and both individual and mean offsets from the various scaling relations.
The correlation between the metrics and the scaling relation offsets can be improved by fitting them jointly. In short, we want to find the coefficients such that $\sum_i \alpha_i m_i$ is a good predictor of the offset from the scaling relations, where the $m_i$ are the different metrics and the $\alpha_i$ are constant coefficients. We did this for two sets of metrics: one including metrics derived from SED fitting, and one excluding them, the latter omitting photometric redshifts, rest-frame colours, stellar masses, and quiescent fractions. The rationale here is that the parameters derived from SEDs may be more accurate indicators of certain properties (e.g., the masses derived from SED fits can more accurately locate the MMCG, as opposed to using the luminosities as a proxy for mass with the BCG), but these may not be available for all surveys. Metrics such as LWMCs are much less expensive to calculate, in terms of observation and computation, than MWMCs. So, the set of metrics including those derived from SED fitting may provide insights into how parameters correlate with virialization when one has the luxury of accurately and precisely estimated SEDs, while the second set provides a comparison for when this is not feasible.
In addition, we measured offsets from the $L_x$-$T$ scaling relation using the fitted relations of both \citet{rei11} and \citet{andersson11}, and we carry out the analysis using both sets of offsets for comparison. With the parallel analyses using and not using the SED fitting, this makes four separate analyses, as shown in Table \ref{corrtab}.
For each set of metrics, we performed a linear fit and estimated the goodness of fit using $R^2$, again using {\it scikit-learn} as described above. In this case, the goodness-of-fit metric is calculated as
\begin{equation}
R^2=1-\frac{\sum_i \left(y_{i,obs}-y_{i,pred}\right)^2}{\sum_i \left(y_{i,obs}-\left<y_{i,obs}\right>\right)^2}
\end{equation}
The $R^2$ values are given in the row marked ``Total'' in Table \ref{corrtab}. The values of $R^2$ range from 0.33 to 0.64, a marked improvement over the individual fits.
While the joint fits show that a suite of virialization tests can be effective in predicting offsets from scaling relations, we would like to investigate the individual power of the tests. We can do so using the joint linear fits. If an individual metric provides meaningful information for predicting scaling relation offsets, removing it from the set of metrics and reevaluating the linear fits without it should reduce the Total $R^2$ (hereafter simply $R^2$). The larger the decrease in $R^2$, the more predictive power that metric provided. The relative sizes of the decreases in $R^2$ should also tell us the relative importance of the metrics. We carried out this test for each metric described in Section \ref{sec:norm}. The $R^2$ calculated from the revised fits with one metric removed are given in Table \ref{corrtab}, with the reduction in $R^2$ in parentheses. The percentage reductions compared to the $R^2$ are also plotted in Figure \ref{fig:metrics}. The fractional reductions in $R^2$ range from almost negligible to more than 80\%, indicating a wide range of influences from the different metrics. Broadly, the spatial offsets between the various projected centers (MMCG/BCG, WMC, X-ray), the differences in the mean and dispersion of the velocities of red and blue member galaxies, and the X-ray power ratio P$_3$/P$_0$ appear to have the most predictive power with regards to the scaling relation offsets of the ORELSE clusters, irrespective of the framework of the analysis (i.e., SED fits vs. no SED fits, scaling relation used). Nearly all of these six metrics appear among the top four most important metrics in terms of $R^2$ reduction in Table \ref{corrtab} in at least one framework, and usually more than one. Conversely, the X-ray power ratio P$_4$/P$_0$, the DS tests, and the quiescent fraction appear to have limited predictive power; these three metrics rarely appear among the top four largest reductions in $R^2$.
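The drop-one-metric procedure can be sketched with the same {\it scikit-learn} estimator used for the joint fits; the data arrays and metric names below are placeholders, not our measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def metric_importance(metrics, offsets, names):
    """Drop-one-metric importance: the decrease in R^2 of the joint OLS
    fit when each metric (a column of `metrics`) is removed in turn.
    `metrics` is an (n_clusters, n_metrics) array; `names` label columns."""
    def r2(X):
        model = LinearRegression().fit(X, offsets)
        return model.score(X, offsets)   # R^2 on the fitted sample
    total = r2(metrics)
    drops = {name: total - r2(np.delete(metrics, j, axis=1))
             for j, name in enumerate(names)}
    return total, drops
```

Because the reduced models are nested within the full one, the training-sample $R^2$ can only decrease when a column is removed, so every drop is non-negative and directly comparable across metrics.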
While it is possible that surface brightness issues contribute to the limited combined predictive power of the two power ratios, it is unlikely as both $P_3/P_0$ and $P_4/P_0$ have been shown to be sensitive to sub-structure for clusters of similar masses and redshifts to those of our own sample using similar observational setups \citep{jel05}.
\subsection{General Predictive Power of Metrics}
\label{sec:predict}
In the previous section we established that some metrics appear to have more power than others in predicting scaling relation offsets and, by consequence, the virialization state of ORELSE clusters. Though the number of clusters studied here is relatively modest, the sample is selected through a variety of different techniques (see \citealt{lubin09}) and, further, spans a large range in dynamical mass ($\log(\mathcal{M}_{vir}/\mathcal{M_{\odot}}) = 14.4-15.1$ using the methodology of \citealt{lem12}). As such, the results derived here should be useful in informing observing strategies for future surveys if, indeed, these results can be extended to the general population of intermediate redshift galaxy clusters. To this end, we attempted to test the robustness of these results to changes in both the sample and the methodology by following the approach in Section \ref{sec:corr}, calculating the reduction in $R^2$ when removing each metric (hereafter $\Delta R^2/R^2$), for a large suite of conditions. These conditions included different samples, achieved through jackknifing the ORELSE cluster sample, using offsets with respect to all three different scaling relations individually, and changes in the approach used to estimate uncertainties, done through replacing the fiducial velocity dispersion and WMC uncertainties with estimates using a bootstrap approach. In this way we created $\sim$1000 unique values of $\Delta R^2/R^2$ for each metric\footnote{Since the quiescent fraction was only calculated for iterations which included SED fitting, there are only $\sim$500 iterations for this metric.} that span an enormous range of combinations of measured quantities (SED fitting vs. no SED fitting), different scaling relations against which to measure offsets, different cluster samples, and different approaches in the way some of the metrics are normalized. 
Though in the latter case the approach is changed for only two of the metrics, $\Delta R^2/R^2$ for a given iteration depends on all metrics in that iteration resulting in a unique value being generated for each metric for each iteration. Because of the immense range of conditions these different iterations cover, if certain metrics appear to consistently return higher $\Delta R^2/R^2$ when removed from the fits across all iterations it is likely these metrics would be useful in predicting the virialization state of clusters for a general cluster survey.
The top panel of Figure \ref{fig:metricdist} shows the distribution of the ranked importance of each metric in terms of $\Delta R^2/R^2$ for
all iterations along with the mode of each distribution. Our convention here is to assign a rank of one to the most important metrics for a given
iteration and a rank of nine to the least important\footnote{Note that, since iterations that did not include metrics
employing SED fitting had a maximum rank of nine (the quiescent fraction metric being removed), the tenth-ranked metric for each iteration that employed
SED fitting was not considered in this plot.}. While the distribution of the ranked importance of
each metric is broad, spanning from most important to least important in all cases, it is clear from a visual inspection that the distributions are considerably
different in shape. In some cases
the distribution is skewed towards higher ranks, while others appear to be generally less highly ranked or have a nearly uniform distribution. The two metrics that
appeared to be consistently the most important were the projected offset between the WMC and X-ray centers and the projected offset between the
location of the MMCG/BCG and the WMC. These two metrics appeared in the top three most important metrics 68.8\% and 36.5\% of the time, respectively. In contrast,
the DS test, the quiescent fraction, and the two power ratios appeared to consistently have minimal predictive power, appearing in the top three most important metrics a combined
24.6\% of the time. These results
are bolstered by the results of KS tests run on the various distributions. According to these tests, the rank distributions of the two most
important metrics are the only ones that are statistically distinguishable at the $>>$3$\sigma$ level from the distributions of all
other metrics.
However, to simply rank order the importance of each metric in predicting the virialization state of a given cluster sample is perhaps an overly simplistic
approach as it does not consider the magnitude of the reduction in the $R^2$ for the removal of a given metric for a given iteration. As can be seen in Table
\ref{corrtab}, lower ranked metrics, in some cases, still exhibit markedly high $\Delta R^2/R^2$ values, while, in other cases, the
reductions of similarly ranked metrics are essentially negligible. Additionally, there are some cases where a single metric
or a few metrics have dominant $\Delta R^2/R^2$ values
and other cases in which the distribution is more egalitarian (see the numbers for the no SED fits vs. the \citealt{rei11} relation and the SED fit metrics vs.
the \citealt{andersson11} relation in Table \ref{corrtab} for an example of each, respectively).
In order to avoid conflating these cases, we defined a weighted $R^2$ reduction, $(\Delta R^2/R^2)_{weighted}$, for the $i$th metric and the
$j$th iteration as:
\begin{multline}
(\Delta R^2/R^2)_{weighted,i,j}= \\
\frac{(\frac{\Delta R^2}{R^2})_{i}}{\frac{\sum_{k=1}^{n}(\frac{\Delta R^2}{R^2})_{k}-(\frac{\Delta R^2}{R^2})_{1}-(\frac{\Delta R^2}{R^2})_{n}}{n}+(\frac{\Delta R^2}{R^2})_{1}}
\end{multline}
\noindent where $n$ is the number of metrics used for the $j$th iteration, i.e., 9 or 10 for no SED fit and SED fit cases, respectively, and
$(\Delta R^2/R^2)_{1}$ and $(\Delta R^2/R^2)_{n}$ are the fractional reductions of the most and least important metrics for this iteration.
The quantity $(\Delta R^2/R^2)_{weighted}$ is designed such that all values are between zero and unity, zero for small reductions relative to the most important
metric and unity for a single, dominant metric. Egalitarian cases return a value of 0.5 for all metrics.
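To make the normalization concrete, the following sketch (Python; illustrative only, with function and variable names of our own choosing) evaluates the expression above for a single iteration, given the fractional $R^2$ reductions of its $n$ metrics.

```python
def weighted_reduction(reductions):
    """(Delta R^2 / R^2)_weighted for one iteration.

    `reductions` holds the fractional R^2 reduction of each of the n
    metrics; the denominator combines the mean of the reductions with
    the extremes removed (divided by n, as in the equation above) and
    the reduction of the most important metric.
    """
    n = len(reductions)
    ordered = sorted(reductions, reverse=True)
    first, last = ordered[0], ordered[-1]
    denom = (sum(ordered) - first - last) / n + first
    return [r / denom for r in reductions]
```

A single dominant metric then receives unity while the remaining metrics receive zero; a perfectly egalitarian iteration yields a common value of $n/(2n-2)$ with the formula as written, close to 0.5 for the $n=9$ or $10$ metrics used here.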
The bottom panel of Figure \ref{fig:metricdist}
shows the distribution of $(\Delta R^2/R^2)_{weighted}$ for all iterations for each metric along with the median value. We plot $\log(\Delta R^2/R^2)_{weighted}$
rather than $(\Delta R^2/R^2)_{weighted}$ to visually highlight the differences between the various distributions. As was the case for the ranked order
distributions, the projected offset between the WMC and X-ray centers as well as the projected offset between the WMC and the MMCG/BCG
appear to consistently have the highest values of $(\Delta R^2/R^2)_{weighted}$. These two metrics have the highest median $(\Delta R^2/R^2)_{weighted}$ values
and appear in the top half of all $(\Delta R^2/R^2)_{weighted}$ values a combined 68.0\% of the time. Again, the quiescent fraction, the power ratios (with the
possible exception of P$_3$/P$_0$), and the DS test values appear to have generally lower median values and only appear in the top half of the full
$(\Delta R^2/R^2)_{weighted}$ distribution a combined 40.2\% of the time. In other words, the combination of the WMC to MMCG/BCG-WMC Distance and the WMC
to X-ray Distance metrics was more predictive of the virialization state of a given cluster sample approximately twice as frequently as any combination of two metrics
with generally lower predictive power (i.e., the quiescent fraction, power ratios, and DS tests), and their median $(\Delta R^2/R^2)_{weighted}$ is nearly four
times higher.
Taken alone, the WMC to X-ray Distance is unequivocally the most important of all metrics, with a median $(\Delta R^2/R^2)_{weighted}$ that is more than twice
that of any other metric and is nearly an order of magnitude more predictive on average than the lowest-ranked metric (DS test).
Also mirroring the results on ranked importance, KS tests on the $(\Delta R^2/R^2)_{weighted}$ distributions of the two most important metrics find them statistically
distinguishable from those of every other metric at the $>>$3$\sigma$ level. In order of importance the remaining metrics are: the projected offset between the
BCG/MMCG and the X-ray center (falling 53.3\% of the time in the top half of the $(\Delta R^2/R^2)_{weighted}$ distribution), the offset in the velocity center of
red and blue member galaxies (51.3\%), the offset between the MMCG/BCG velocity and the systemic velocity of its parent cluster (50.1\%), and the velocity dispersion
offset between
red and blue member galaxies (43.8\%). While the gradations in importance might seem small between the various metrics, the fact that we have tested a huge
variety of circumstances and seen some metrics clearly maintain predictive power and some consistently lack predictive power is telling. In the next
section we summarize these results and suggest observing strategies for different types of surveys based on these results.
\section{Discussion}
\label{sec:disc}
As part of the ORELSE survey, we searched for diffuse X-ray emission in 12 LSSs, finding emission from a total of 16 galaxy clusters. In Section \ref{sec:CP}, we studied the properties of these galaxy clusters, and performed a number of tests of virialization and substructure on them, using separate sets of tests that did and did not use SED fitting, to allow generalization to a wide range of studies and observational datasets. These galaxy cluster properties and tests are given in Tables \ref{clusproptab}, \ref{centab}, \ref{sigtab}, and \ref{testtab}.
We would expect virialized clusters to follow relations between properties such as the temperature and luminosity of their diffuse gas, as well as the velocity dispersions of their galaxies, such as those plotted in Figure \ref{fig:SR}. We would expect a galaxy cluster that is still in the process of forming, or that has been recently disrupted by an interaction such as a merger, to be offset from these relations. We find varying degrees of offset from the scaling relations, as shown in Table \ref{offsettab}, which we take to indicate that our sample includes both near virialized and non-virialized clusters. Note that, because a wide range of $L_x$-$T$ relations exist in the literature, we evaluate the offsets from two different $L_x$-$T$ relations: \citet{rei11} and \citet{andersson11}. For the purposes of the analysis presented in this paper, we adopt offsets from the various scaling relations as our proxy for the degree of virialization of a given cluster, with smaller offsets implying a higher level of virialization.
As discussed in Section \ref{sec:corr}, these offsets from the scaling relation should then correlate, to varying degrees, with the virialization and substructure tests we performed. However, correlations between individual metrics and both the offsets from individual scaling relations and the mean offset from all scaling relations were relatively weak, meaning any individual test is, by itself, insufficient as a predictor of virialization. When all metrics given in Table \ref{corrtab} were combined for a linear fit to the mean scaling relation offset, the correlation was relatively strong ($R^2$ ranged from $\sim$ 0.33$-$0.64). Performing every test may be prohibitively expensive, however, involving potentially years of both observation and effort over a wide range of wavelengths, as was the case for the ORELSE survey.
It is useful, then, to determine which metrics have the most predictive power when it comes to testing cluster virialization and substructure, using scaling relation offsets as a proxy. Rather than using linear fits of individual metrics to the scaling relations, we used the joint fits, as described in Section \ref{sec:corr}. By removing one metric from our set and re-performing the linear fit to the scaling relation offsets, we can evaluate how informative that metric was through the corresponding drop in $R^2$ after excising it. The results of this exercise are shown in Table \ref{corrtab}, with different results for the set of metrics that do and do not use SED fitting, and for offsets from the \citet{rei11} versus \citet{andersson11} $L_x$-$T$ relations. This line of analysis was expanded on in Section \ref{sec:predict} to incorporate a variety of different sub-samples, different scaling relation offsets, and different metric normalization methods. In both our original sample and approach and in the expanded analysis, we found some metrics, such as the projected offset between the WMC and X-ray centers and the projected offset between the WMC and the MMCG/BCG, to be consistently predictive, while others, such as the X-ray power ratios, the quiescent fraction, and the DS test, appeared to have consistently limited predictive power. As discussed in Section \ref{sec:predict}, these results should help to inform data-gathering strategies for classifying galaxy clusters as relaxed or disturbed in future optical/NIR or X-ray surveys. We consider each in turn.
The case of optical/NIR surveys where accompanying X-ray data is limited is perhaps where the results of this analysis are most valuable, as no information on the
virialization state of clusters from deviations in $L_x$-$T$ space is available. In such cases, the primary predictive
metric, the projected offset between the WMC and the X-ray center, is not available. However, performing enough spectroscopy to unambiguously confirm the MMCG/BCG and to estimate
both the WMC and the systemic velocity of the cluster allows for the estimation of two metrics with higher predictive power (MMCG/BCG to WMC Dist. and MMCG/BCG Vel. Off.). From
tests on the member populations of the ORELSE clusters, the WMC and systemic velocity can be determined at high precision ($\la50$ kpc and $\la$100 km s$^{-1}$, respectively)
from limited spectroscopy which focuses on the most massive/luminous members. However, we note that metrics which rely on a single galaxy, i.e., the MMCG/BCG, are subject to serious
uncertainty when spectroscopy is limited, too narrow in its selection (e.g., brightness or color range, spatial coverage), or not well-informed by imaging data. Under these circumstances
the true MMCG/BCG can easily escape spectroscopic detection resulting in potentially catastrophic consequences. If further spectroscopy is performed and if the optical/NIR imaging
allows for the calculation of photo-$z$s to mitigate the number of bluer interlopers, such spectroscopy should focus on equally targeting blue and red photo-$z$ member galaxies.
Adopting this strategy would allow for the calculation of the velocity center and dispersion offsets between the two populations, both of which are moderate predictors of virialization. The
former is considerably easier to obtain as it is, at least in our cluster sample, more robust to changes in sample size and outliers and can be estimated with high precision
with a small number of galaxies. The quiescent fraction and results from DS tests both require extensive spectroscopy to meaningfully constrain and appear to have the
least predictive power of all optical metrics. The most powerful leverage, however, comes if shallow X-ray observations are added onto the optical/NIR data. These observations need not
be deep enough to measure $L_X$ or $T_X$, but just deep enough to obtain a centroid of the diffuse ICM emission. Under this scenario, the projected distance between the WMC and
the X-ray center, by far the most predictive metric, can be measured and used in conjunction with information on the MMCG/BCG to estimate the level of virialization with a high level of reliability.
The case of large-scale X-ray surveys with limited or shallow accompanying optical/NIR imaging is one of limited applicability to both present and future
cluster surveys. However, while deep, multi-band optical/NIR imaging, if it does not already exist, should become available for the vast majority of X-ray detected clusters
over the course of the next decade, spectroscopic followup will likely not be as readily available. In such cases it is worth considering what populations are most valuable to
target spectroscopically in order to most efficiently and reliably constrain the virialization state of the cluster. While, in principle, offsets relative to
fiducial $L_x$-$T$ relations can be used for such clusters, we note that deep X-ray imaging is required to meaningfully estimate an X-ray temperature for $z\sim1$ clusters
and, as shown in Table \ref{offsettab}, clusters which show unremarkable $L_x$-$T$ offsets
can sometimes exhibit severe offsets in the other scaling relations. The primary goal of any spectroscopic followup should, again, be aimed at unambiguously confirming the MMCG/BCG
as well as the most massive/luminous member galaxies. With this, four of the most important metrics, the projected offsets between the MMCG/BCG, the WMCs, and the X-ray centers
as well as the velocity offset between the MMCG/BCG and the systemic velocity can
be calculated to a high level of precision. Again, any additional spectroscopy should focus on splitting spectroscopic targets between blue and red galaxies, with the primary
aim being to calculate the offsets between the mean velocities of the two sub-samples. It is important
to note this strategy only requires knowledge of the center of the resolved X-ray emission, which means that even shallow X-ray data along with limited, intelligently-targeted
spectroscopy would suffice to place strong constraints on the virialization state of a cluster. While deep X-ray data can be leveraged for other purposes, e.g., to determine
the total mass of a cluster under certain assumptions, our results suggest that obtaining deeper X-ray imaging for the purposes of determining the X-ray luminosity/temperature
of the ICM or calculating meaningful power ratios is, at least for the purposes of estimating the virialization state of a cluster, far too expensive for the minimal gain it
provides.
\bigskip
\section*{Acknowledgements}
\footnotesize{
This material is based upon work supported by the National Aeronautics and Space Administration under NASA Grant Number NNX15AK92G. Part of the work presented herein is supported by the National Science Foundation under Grant No. 1411943. The authors thank Kirpal Nandra and Antonis Georgakakis for providing the Imperial reduction pipeline and their ongoing support of the software. The authors also thank the anonymous referee for suggestions which allowed us to catch several errors and spurred a vast improvement in the content and scope of the paper. Work presented here is based in part on data collected at Subaru Telescope as well as archival data obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan. This work is based in part on observations made with the Large Format Camera mounted on the 200-inch Hale Telescope at Palomar Observatory, owned and operated by the California Institute of Technology. A subset of observations were obtained with WIRCam, a joint project of CFHT, Taiwan, Korea, Canada, France, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. UKIRT is supported by NASA and operated under an agreement among the University of Hawaii, the University of Arizona, and Lockheed Martin Advanced Technology Center; operations are enabled through the cooperation of the East Asian Observatory. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. The spectroscopic data presented herein were obtained at the W.M. 
Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. }
\bibliographystyle{mn2e}
\section{Introduction}
The functioning of organisms on the molecular level is a research topic attracting
increasing attention. Survival and reproduction require an autonomous
regulation of chemical concentrations in the living cell. Modeling such
regulatory dynamics, various mathematical approaches have been studied, from
discrete to continuous methods, from deterministic to stochastic techniques,
from static to dynamical models, from detailed to coarse grained perspectives
\cite{Bornholdt:2005}, see ref.~\cite{deJong:2002} for an overview.
Boolean dynamics
\cite{Kauffman:1969,Charteret:2008,Albert:2008,drossel-review,Helikar:2008,Sahoo:2010} is a
framework for modeling regulatory systems, especially for precise sequence
control as observed in morphogenesis \cite{Albert:2003} and cell cycle dynamics
\cite{Li:2004} but also in the regulation of the metabolism \cite{Samal:2008}.
Using binary (on/off) concentrations as an idealization, Boolean dynamics
directly implements the logical skeleton of regulation. Values of system
parameters such as binding constants, production and degradation rates etc.\
are not needed. This abstraction simplifies computation and analytical
treatment. Boolean networks have been extracted directly from the literature
\cite{Helikar:2008,Davidich:2008} of known biochemical interactions or obtained
by discretization of differential equation models \cite{Davidich:2008b}. Known
state sequences and responses of several systems have been faithfully
reproduced by the discrete models \cite{Albert:2003,Li:2004}.
Despite these benefits, modelers do not employ Boolean dynamics as widely as
ordinary or delay differential equations. The latter are embedded in an
established framework for state-{\em continuous} dynamical systems
\cite{Strogatz:1994} which itself builds on the mathematical foundations of
linear algebra and infinitesimal calculus. In particular, the definition of {\em
stability} of a solution under {\em small} perturbations is based on the
consideration of infinitesimally small neighborhoods in state space. Stability
checks for solutions of the dynamical equations are a salient part of
mathematical modeling. Unstable solutions are not expected to be observed in a
real-world system.
In the state-{\em discrete} Boolean dynamics, {\em large} perturbations are
normally implemented as a {\em flip}, where the state of a single Boolean
variable is inverted. Then the evolution of the damage is tracked. The damage is
the difference between the state of the perturbed and the unperturbed system.
The return map of the expected size of the damage is known as Derrida plot
\cite{Derrida:1986}. Numerous studies have elucidated the effect of flip
perturbations on regulatory dynamics with Boolean states
\cite{Kauffman:2003,Shmulevich:2003,Kauffman:2004,Rohlf:2007,
Pomerance:2009,Peixoto:2010}. When asking if a gene-regulatory system reproduces
a prescribed trajectory despite noise, large perturbations are to be considered
in the case of low copy numbers of regulatory molecules and bursty stochastic
response \cite{Eldar:2010}. Small perturbations, however, are more appropriate
when modeling systems with large copy numbers and an integrative response to
filter out bursts, see e.g.\ \cite{Lestas:2010}.
Here we find that the clear distinction between the two types of perturbations
is crucial. In a continuous system, stability or instability under small
perturbations is not indicative of the effect of flip perturbations. Likewise,
probing a Boolean system with flip perturbations does not necessarily provide
information about the stability of the continuous counterpart under small
perturbations.
An $n$-dimensional Boolean map $f:\{0,1\}^n \rightarrow \{0,1\}^n$ gives
rise to a time-discrete dynamics
\begin{equation} \label{eq:boolmap}
x(t+1) = f(x(t))
\end{equation}
with $x=(x_1,\dots,x_n)$ being a Boolean state vector (bit string) of $n$
entries. Such a map is equivalent to a {\em Boolean network}. When $f$ is
pictured as a network, a node corresponds to a coordinate $i$ of the Boolean
state vector and a directed edge $j \rightarrow i$ (from node $j$ to node $i$)
is present if the Boolean function $f_i$ explicitly depends on the $j$-th
coordinate.
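As a minimal sketch of the discrete dynamics in Eq.\refeq{eq:boolmap} (the three-node map below is a hypothetical example of our own), a synchronous update applies all coordinate functions to the current state at once:

```python
def step(f, x):
    """One synchronous update x(t+1) = f(x(t)) of an n-node Boolean map."""
    return tuple(f_i(x) for f_i in f)

# Hypothetical example (0-based indices): node 0 copies node 1, node 1
# copies node 0, and node 2 computes the AND of nodes 0 and 1.  In the
# network picture this gives edges 1->0, 0->1, 0->2 and 1->2.
f = (lambda x: x[1], lambda x: x[0], lambda x: x[0] & x[1])
```

Iterating `step` from $(0,1,0)$ yields $(1,0,0), (0,1,0), \dots$, i.e., the first two nodes oscillate while node 2 stays off.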
Let us now define a continuous dynamics whose discretization readily
leads to the Boolean map in Eq.\refeq{eq:boolmap}. Taking values
$y_i(t) \in [0,1]$, $i \in \{1,\dots,n\}$, $t \in \mathbb{R}$, the states
evolve according to the delay differential equation
\begin{equation} \label{eq:ddeours}
\dot{y}_i(t+1) = \alpha \mathop{\mathrm{sgn}}(\tilde f_i(y(t)) - y_i(t+1))
\end{equation}
with $\alpha$ an inverse time constant. For large $\alpha$, this is essentially Boolean dynamics with fast but
continuous switching between the saturation values.
The simplest choice is $\tilde f = f \circ \Theta$ with $\Theta$ the
component-wise step function, $\Theta_i(y) = 1$ if $y_i \ge 1/2$ and
$\Theta_i(y)=0$ otherwise. This choice of continuous dynamics is in close
correspondence with the discrete dynamics in the following sense. Suppose $x(0),
x(1), x(2), \dots$ is a solution of Eq.\refeq{eq:boolmap}. Let
$y(t)$ be a solution of Eq.\refeq{eq:ddeours} such that there is a time
interval $[t_1,t_2]$ with $y(s) = x(0)$ for all $s \in [t_1,t_2]$.
Then for all future times $t \in \mathbb{N}$ and all $s \in [t_1,t_2]$
\begin{equation}
x(t) = y(\beta t + s)
\end{equation}
with $\beta = 1+ 1/(2\alpha)$.
The closest resemblance between Boolean and continuous dynamics is obtained
when choosing the same initial condition, that is $y(s)=x(0)$ for all $s \in
[-1,0]$. Similar correspondence between Boolean maps and ordinary
differential equations has been studied earlier neglecting
transmission delay \cite{Glass:1972} or implementing more complicated
differential equations \cite{Braunewell:2009,Norrell:2007,Norrell:2009,
Gehrmann:2010} compared to Equation\refeq{eq:ddeours}.
{\em Perturbations. ---} Given a map $f$, the evolution of states is uniquely determined by
Eq.\refeq{eq:ddeours} together with an initial condition $y(t)$ on a time interval
of unit length, here taken as $[-1,0]=:I$. We restrict ourselves to
initial conditions that do not vary on $I$, $y(t) = y(0)$ for all $t \in I$.
An initial condition with a {\em small} perturbation is generated as
\begin{equation} \label{eq:pert}
y^\prime_i (t) := \epsilon_i (1-y_i(t)) + (1-\epsilon_i) y_i(t)
\end{equation}
for $t \in I$. The perturbation amplitudes are arbitrary numbers
$\epsilon_i\in\; ]0,1/2[$.
An initial condition with a {\em flip} perturbation is generated as
\begin{equation}
y^!_i (t) := \left\{ \begin{array}{rl}
1-y_i(t) & \text{if }i=l \\
y_i(t) & \text{otherwise}
\end{array}\right.
\end{equation}
for $t \in I$ and an arbitrary node $l \in \{1,\dots,n\}$.
Note that the total amplitude $\sum_i \epsilon_i$ of a small perturbation may
exceed the unit amplitude of a flip perturbation. A small perturbation produces
small deviations from the original state potentially at each node.
A flip perturbation concentrates a maximal deviation at a single node.
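The two perturbation types can be sketched as follows (Python; illustrative only). Note that with each $\epsilon_i < 1/2$, a small perturbation never moves a coordinate across the threshold $1/2$, whereas a flip replaces one coordinate by its complement.

```python
def small_perturbation(y, eps):
    """Small perturbation y'_i = eps_i (1 - y_i) + (1 - eps_i) y_i,
    with amplitudes 0 < eps_i < 1/2, applied to a constant initial
    condition y on the interval I."""
    return [e * (1 - yi) + (1 - e) * yi for yi, e in zip(y, eps)]

def flip_perturbation(y, l):
    """Flip perturbation: invert the state of node l only."""
    return [1 - yi if i == l else yi for i, yi in enumerate(y)]
```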
We say that the system {\em heals} from the perturbation if the dynamics from
perturbed and unperturbed initial condition eventually become the same except
for an arbitrary time lag. Formally, healing from a small perturbation means
that there are $t_0>0$ and $\tau > -t_0$ such that
\begin{equation} \label{eq:healing}
y(t) = y^\prime(t+\tau)
\end{equation}
for all $t \ge t_0$. Healing from a flip perturbation means that
Eq.\refeq{eq:healing} holds analogously for $y^!$ instead of $y^\prime$. We
define the heal time $t_\text{heal}$ as the smallest time $t_0$ for which this
holds.
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{fig1.eps}}
\caption{\label{fig:st_graph} (color online).
Dynamics of two mutually activating nodes.
(a) State space of the Boolean system described by Eq.\refeq{eq:twonodes}.
Thin arrows indicate the mapping $f$ of states by the dynamics, thick bidirectional arrows stand
for flip perturbations. Indicated by shaded areas, the system has three
dynamical modes (attractors): two fixed points $(0,0)$ and $(1,1)$ and a cycle
of length 2 involving the states $(0,1)$ and $(1,0)$.
(b) Time evolution of the corresponding continuous system in
Equation\refeq{eq:ddeours} with initial
condition $x_1(0)=1$ (thick curve) and $x_2(0)=0$ (thin curve).
The two nodes switch in a synchronous mode as indicated by vertical
double arrows akin to the Boolean state sequence $(0,1), (1,0), (0,1), \dots$.
(c) Time evolution from perturbed initial condition, $x_1(0)<1$, $x_2(0)>0$.
The perturbation translates into a phase lag in switching that does
not heal out.
}
\end{figure}
{\em Fixed points and bistable circuits. ---}
Let us first consider a fixed point as the simplest dynamical
behaviour. A fixed point of the continuous dynamics is a state
vector $y^\ast$ such that constant $y(t) = y^\ast$ is a solution
of Eq.\refeq{eq:ddeours}. This in turn means that the time derivative
vanishes at all times, equivalent to $y^\ast = f(y^\ast)$. The fixed points
of the continuous dynamics are exactly the fixed points of the discrete
map $f$. A small perturbation to a fixed point $y^\ast$
always heals, because values after
applying the threshold $\Theta$ remain unchanged, $\tilde f(y^\prime(t))=y^\ast$
for all $t \in I$. All fixed points are stable under small perturbations.
However, a flip perturbation to a fixed point does not always heal.
The {\em bistable switch} is an example. Consider a two-dimensional map $f$
with $f(x_1,x_2) = (x_2,x_1)$. It gives rise to the dynamics
\begin{equation} \label{eq:twonodes}
x_1(t+1) = x_2(t) \qquad x_2(t+1) = x_1(t)
\end{equation}
with fixed points $(0,0)$ and $(1,1)$. After perturbing a fixed point by
flipping one node's state, the system does not return to the fixed point. It
remains in the set of the state vectors $(0,1)$ and $(1,0)$ constituting a limit
cycle, cf.\ Figure~\ref{fig:st_graph}(a). The stability of the fixed points is
not revealed when probing the dynamics with flip perturbations. The bistable
switch constitutes a first simple example of systems with different
stability properties under flip and small perturbations.
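The behaviour of the bistable switch under a flip perturbation can be checked directly (Python; illustrative only):

```python
def f(x):
    """The bistable switch: f(x1, x2) = (x2, x1)."""
    x1, x2 = x
    return (x2, x1)

# Both states (0,0) and (1,1) are fixed points of the map.
assert f((0, 0)) == (0, 0) and f((1, 1)) == (1, 1)

# Flipping one node of the fixed point (0,0) gives (0,1); the system
# then remains on the 2-cycle (0,1) -> (1,0) and never returns.
state = (0, 1)
orbit = [state]
for _ in range(4):
    state = f(state)
    orbit.append(state)
assert orbit == [(0, 1), (1, 0), (0, 1), (1, 0), (0, 1)]
```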
In the continuous counterpart of the alternating Boolean state $(0,1)$ and
$(1,0)$, small perturbations do not heal, see Figure~\ref{fig:st_graph}(b,c).
The effect of a small perturbation is to induce a phase lag in the
oscillation, as discussed in earlier work
\cite{Klemm:2005a,Klemm:2005b,Braunewell:2007,Braunewell:2009}.
\begin{figure}
\centerline{\includegraphics[width=0.48\textwidth]{fig2.eps}}
\caption{\label{fig:pcon_0} (color online).
Stability of dynamics under perturbation by spin flip
(dashed) and under continuous perturbation (solid lines) in random networks with
$K=2$ and $K=4$ inputs per node. Symbols distinguish system size $n=300$
($\circ$), $1000$ ($\Box$) and $3000$ ($\diamond$). Each data point gives the
relative frequency of healed out perturbations on a set of $10^4$ independent
random realizations of network, initial condition and perturbation. Each
amplitude $\epsilon_i$ of a small perturbation is drawn independently from the
uniform distribution on an interval $[0;r]$ with $0<r<0.5$. The results are
independent of the choice of $r$. As a general invariance of the dynamics of
Equation~\eqref{eq:ddeours} with $\tilde f = f \circ \Theta$, the qualitative
effect (healing or spreading) of a small perturbation is not altered when the
amplitude vector is multiplied with a positive scalar keeping each
amplitude $\epsilon_i<0.5$.
}
\end{figure}
{\em Stability in random networks.---} We now compare the effects of the two
types of perturbations on dynamics in randomly generated networks. An ensemble
of random Boolean networks (RBN) \cite{drossel-review} is defined by the
number of nodes $n$, the number of inputs $K$ of each node, and the probability
distribution of Boolean functions $\pi(f)$. The latter is taken as a maximum
entropy ensemble $\pi_\lambda(f) \propto \exp(\lambda s(f))$ under a given
average sensitivity $\langle s \rangle$. The sensitivity $s(f)$ of a Boolean
function $f$ is the number of flips at one of the $K$ inputs that lead to a
change of the output value, averaged over all input vectors
\cite{Shmulevich:2004}. The resulting value $s(f)$ lies in the range from zero
(for a constant function $f$) to $K$, obtained for a parity function where
for all input vectors, a flip of a single input state flips the output.
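For small $K$, the average sensitivity can be evaluated by brute force over all $2^K$ input vectors; the following Python sketch (illustrative only) implements the definition above.

```python
from itertools import product

def sensitivity(f, K):
    """Average sensitivity s(f): the number of single-input flips that
    change the output, averaged over all 2^K input vectors."""
    total = 0
    for x in product((0, 1), repeat=K):
        for j in range(K):
            flipped = list(x)
            flipped[j] = 1 - flipped[j]
            total += f(x) != f(tuple(flipped))
    return total / 2 ** K
```

For $K=2$ this gives $s=0$ for a constant function, $s=1$ for AND, and the maximal $s=K=2$ for the parity function XOR.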
For RBN, where the $K$ inputs of each node are drawn randomly and independently
from the set of $n$ nodes, the average sensitivity $\langle s \rangle$ is the
crucial parameter determining the system's response to flip perturbations
\cite{Shmulevich:2004}. In the limit $n\rightarrow \infty$, these perturbations
heal in ensembles with $\langle s \rangle <1$; they spread when $\langle s
\rangle >1$. This change of behaviour in dependence of $\langle s \rangle$
is reproduced in Figure~\ref{fig:pcon_0} (dashed lines) for varying $K$ and
$n$.
As our main result, we show in Fig.~\ref{fig:pcon_0} that the $\langle s
\rangle$-dependence of the
healing probability under flip perturbations is qualitatively different from
that under small perturbations. Only in the so-called critical region of
$\langle s \rangle\approx 1$ do small perturbations spread. Both for $\langle s
\rangle\ll 1$ and $\langle s \rangle\gg 1$, the healing probability tends
towards 1.
This effect is enhanced by increasing system size. In the limit of $n
\rightarrow \infty$ one may expect a finite probability of non-healing only at
$\langle s \rangle= 1$. Then the dynamics is almost always stable under small
perturbations.
\begin{figure}
\includegraphics[width=0.48\textwidth]{fig3.eps}
\caption{\label{fig:healtime} (color online).
The average time to heal from a small perturbation increases linearly with the
number of nodes in the system for sensitivity $\langle s \rangle \ge 1$, and
sublinearly otherwise. The dashed line has slope 1 in this double-logarithmic
plot. Each data point is the average over $t_\text{heal}$ for the subset of
healing realizations. Realizations of network, initial condition and
perturbation are the same as in Figure~\ref{fig:pcon_0}.}
\end{figure}
The average time $t_\text{heal}$ to heal from small perturbations increases
moderately with system size as shown in Figure~\ref{fig:healtime}. For average
sensitivity above $1$, we observe a linear increase $\langle t_\text{heal}
\rangle \propto n$. For lower values of the average sensitivity, the increase
is sublinear.
\begin{figure}
\centerline{\includegraphics[clip,width=0.48\textwidth]{fig4.eps}}
\caption{\label{fig:cont_B} (color online).
Healing probabilities remain qualitatively the same
(cf.\ Figure~\ref{fig:pcon_0}) when using the alternative transfer function
$\tilde f_i (y) = \Theta (h_i (y))$ with $h_i(y) = a y_j y_k + b_1 y_j + b_2 y_k + c$;
for node $i$ taking inputs from nodes $j$ and $k$. The parameters
$a,b_1,b_2,c$ are chosen such that $h_i(y) = f_i(y)$ for $y_j,y_k \in \{0,1\}$.
If, for instance, $f_i$ is an AND then $a=1$ and $b_1=b_2=c=0$ so
$\tilde f_i (y) =1$ if and only if the product of inputs $y_j y_k \ge 1/2$.
Each data point is the healing fraction of 1000
realizations of given average sensitivity and system size
$n=30$ ($\circ$), $100$ ($\Box$) and $300$ ($\diamond$).
The perturbation amplitude $\epsilon_i$ is drawn from the
uniform distribution on $[0;0.01]$ independently for each node $i$.}
\end{figure}
The dynamics we have studied so far is simple but not the only possibility to
pass from the Boolean map to a continuous flow. In order to check to what
extent our results depend on this choice we repeat simulations for $K=2$ with
an alternative function $\tilde f$ (cf.\ Equation\refeq{eq:ddeours}) now taking
into account cooperative effects between inputs. Figure~\ref{fig:cont_B} shows
that the same qualitative result is obtained under this choice; see the figure
caption for details.
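The coefficients $a,b_1,b_2,c$ of the alternative transfer function can be fixed by matching $h_i$ to the Boolean function $f_i$ on the four corners of $\{0,1\}^2$. The sketch below is ours and assumes $\Theta$ is a step at threshold $1/2$, consistent with the AND example in the caption of Figure~\ref{fig:cont_B}.

```python
def multilinear_coeffs(f):
    """Fit h(y) = a*yj*yk + b1*yj + b2*yk + c so that h = f on {0,1}^2."""
    c = f(0, 0)
    b1 = f(1, 0) - c
    b2 = f(0, 1) - c
    a = f(1, 1) - b1 - b2 - c
    return a, b1, b2, c

def f_tilde(f, yj, yk, theta=0.5):
    """Continuous transfer function: step of the multilinear interpolant.
    The threshold theta = 1/2 is our assumption."""
    a, b1, b2, c = multilinear_coeffs(f)
    h = a * yj * yk + b1 * yj + b2 * yk + c
    return 1 if h >= theta else 0

AND = lambda u, v: u & v
print(multilinear_coeffs(AND))           # (1, 0, 0, 0), as in the caption
print(f_tilde(AND, 0.8, 0.7))            # 1, since 0.8*0.7 = 0.56 >= 1/2
print(f_tilde(AND, 0.8, 0.4))            # 0, since 0.8*0.4 = 0.32 < 1/2
```

For OR one obtains $(a,b_1,b_2,c)=(-1,1,1,0)$ by the same fit, so the interpolant accounts for cooperative effects between inputs rather than treating them additively.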
In summary, we have shown that the dynamics of large random networks of
switch-like elements typically recovers from small perturbations of the state
vector. Healing is observed naturally at low sensitivity. However, large
sensitivities of the nodes' functions also render the long-term behaviour of the
whole system insensitive to small perturbations. Instability is observed only in
an intermediate sensitivity regime that shrinks as systems become larger.
The behaviour under small perturbations is essentially different from the
established stability diagram for RBN. Under {\em flip} perturbations, RBN
display a transition from healing to non-healing (damage spreading) behaviour at
average sensitivity $1$. It has been suggested that networks of regulatory
switches position themselves at this transition \cite{Kauffman:1993}, known as
the edge of chaos \cite{Langton:1990}. Then some but not all flip perturbations
spread and therefore allow for complex information processing without rendering
the system unreliable under noise.
According to our findings, a complementary scenario is worth discussing. The
apparent conflict between responsiveness to external input signals and
resilience to intrinsic noise dissolves when these influences act as
perturbations at separate scales: noise corresponds to small perturbations
whilst input signals are interpreted as the flipping of a state. Under these
assumptions, noise resilience and responsiveness are compatible rather than
conflicting in the regime of average sensitivity above 1. Systems that combine
both beneficial properties are obtained ``for free'' in random networks of
sufficiently sensitive switching elements.
{\em Acknowledgments.---} The authors thank Gunnar Boldhaus, Florian Greil and
Thimo Rohlf for valuable comments. This work has been financially supported by
Volks\-wagen\-Stiftung through the initiative on Complex
Networks as Phenomena across Disciplines.
\section{Introduction}
Over the last few years, statistical physics has increasingly contributed useful tools and valuable insight into
many emerging interdisciplinary fields of science \cite{oli99,wei00,sta06}.
In particular, many efforts have focused recently on the mathematical modeling
of a rich variety of social phenomena, such as social influence and self-organization, cooperation,
opinion formation and spreading, evolution of social structures, etc.
(see e.g. \cite{gal00,szn00,ber01,ber02,kup02,ale02,sch03,ben05,smi06,des06,can06,can07a,ang07,bor07a,bor07b,can07b,cas07}).
In this context, an agent-based model for social influence, originally proposed by Axelrod \cite{axe97a,axe97b}
to address the formation of cultural domains, has been extensively studied within the sociophysics community
(see e.g. \cite{cas00, kle03a, kle03b, gon05, gon06, kup06, maz07, gon07}).
In Axelrod's model, culture is defined by the set of cultural attributes (such as language, art,
technical standards, and social norms \cite{axe97b}) subject to social influence. The cultural state of
an individual is given by their set of specific traits, which are
capable of changing due to interactions with their acquaintances. In the original formulation, the individuals are
located at the nodes of a regular lattice and the interactions are assumed to take place between lattice neighbors.
Social influence is defined by a simple local dynamics, which is assumed to satisfy the following two properties:
(a) social interaction more likely takes place between individuals that share some of their cultural traits;
(b) as a result of the interaction, their cultural similarity is increased.
Earlier investigations showed that the model undergoes a phase transition separating an
ordered (culturally polarized) phase from a disordered (culturally fragmented) one,
which was found to depend on the number of different cultural traits available \cite{cas00}.
The critical behavior of the model was also studied in different complex network topologies, such as small-world and
scale-free networks \cite{kle03a}.
These studies considered, however, zero-temperature dynamics that
neglected the effect of fluctuations.
Following Axelrod's original idea of incorporating random perturbations to describe the effect
of {\it cultural drift} \cite{axe97a}, noise was
later added to the dynamics of the system \cite{kle03b}.
With the inclusion of this new ingredient, the disordered multicultural configurations were found to be metastable
states that could eventually decay towards ordered stable configurations, depending on the competition between the noise rate
and the characteristic time for the relaxation of perturbations.
Very recently, other extensions of the model were proposed, in which the role of mass media was investigated within
different scenarios. Neglecting random fluctuations, some studies considered in detail the role of
external \cite{gon05} and autonomous local or global fields \cite{gon06}.
Another recent investigation focused on the interplay and competition between
cultural drift and mass media effects \cite{maz07}. Adopting a mass media coupling capable of
affecting the cultural traits of any individual in the society (including those who do not share any
features with the external message), it was shown that the external field can induce cultural ordering
and reproduce the trend of actual advertising campaign data.
In a related context, recent investigations addressed the role played by the underlying topology of
complex substrates on the dynamical and critical behavior of the models defined on them.
The effects of some structural properties
that characterize disordered substrates, such as the small-world effect,
the degree distribution, the degree-degree correlations, and the local clustering, were extensively
studied \cite{wat99,alb02,new02,dor03,new06}.
Furthermore, the property of community structure, or large-scale clustering, appears to be common to
many real-world networks and is nowadays the subject of intense research efforts. In many social networks, such as
the well known karate club study of Zachary \cite{gir02}, the United States
House of Representatives \cite{por06}, scientific co-authorships and mobile phone call records \cite{pal07},
well defined modular structures were observed.
However, the effects of community structure on models of sociophysical interest have received so far little attention.
Lambiotte and Ausloos \cite{lam07} have very recently considered the effect of communities on the Majority Rule model
by means of the so-called {\it coupled random networks}, a mixture of two random communities, in which a
parameter $\nu$ controls the degree of intercommunity links relative to that of intracommunity connections \cite{gir02}.
Depending on $\nu$ and on a noise parameter, a diagram with three distinct phases is obtained:
a disordered phase, where no collective phenomenon takes place; an ordered, symmetric phase, where both communities
share the same average state; and an ordered, asymmetric phase, in which different communities reach different states.
The aim of this work is to investigate effects arising from the characteristic modular structure of social networks
in the propagation of mass media messages across the society. To this end, we focus on the extension of Axelrod's
model proposed in Ref.~\cite{maz07}, which includes effects of mass media and cultural drift, using
coupled random networks for the substrate. In the absence of external messages, a phase diagram with three phases is found,
qualitatively analogous to that observed in Ref.~\cite{lam07} for the majority rule model. Then, we assume that an
inhomogeneous mass media field affects one of the communities and study the system's response to the spreading of the message.
Incorporating the intensity of the mass media field as an additional parameter,
several new phases are observed to emerge, thus leading to a very rich, novel phase diagram in the multidimensional space of
model parameters.
This paper is organized as follows:
in Section 2, details on the model and the simulation method are given;
Section 3 is devoted to the presentation and discussion of the results, while Section 4 contains the conclusions.
\section{The model and the simulation method}
In order to represent the community structure observed in social networks, we consider a substrate topology
consisting of two coupled random networks (CRN). These structures were first proposed in Ref.~\cite{gir02}
to carry out comparative tests of different methods for community detection in complex networks.
We assume that a system of $N$ nodes is divided into two communities ($A$ and $B$) of equal size. A CRN configuration
is built by adding intracommunity links between pairs of nodes that belong to the same community, as well as intercommunity
links between pairs of nodes that belong to different communities.
Considering all possible node pairs, intracommunity links are added with probability $p_{int}$, while
intercommunity connections are added with probability $p_{ext}$.
On average, a node is thus connected to $k_{int}= p_{int}(N/2-1)$ neighbors inside the same community and to
$k_{ext}= p_{ext}N/2$ nodes that belong to a different community. For the sake of simplicity, we fix $k_{int}=4$
and tune the intercommunity connectedness by means of a single parameter, namely $\nu\equiv
k_{ext}/k_{int}\approx p_{ext}/p_{int}$. Notice that the $\nu\to 1$ limit corresponds to a single random graph
lacking modular features, while $\nu\ll 1$ is the case in which well defined communities are sparsely
connected with each other.
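The CRN construction above can be sketched as follows (our illustration, not the authors' code), with $p_{ext}$ chosen so that $\langle k_{ext}\rangle = \nu\, k_{int}$:

```python
import random

def coupled_random_networks(n, k_int=4, nu=0.02, seed=None):
    """Build two coupled random communities A = {0..n/2-1}, B = {n/2..n-1}.
    Intra-community pairs are linked with probability p_int such that
    <k_int> = p_int*(n/2 - 1); inter-community pairs with a probability
    giving <k_ext> = nu * k_int, i.e. p_ext ~ nu * p_int."""
    rng = random.Random(seed)
    half = n // 2
    p_int = k_int / (half - 1)
    p_ext = nu * k_int / half
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            same = (i < half) == (j < half)
            if rng.random() < (p_int if same else p_ext):
                adj[i].add(j)
                adj[j].add(i)
    return adj
```

With $k_{int}=4$ fixed, $\nu$ alone tunes the intercommunity connectedness, matching the parametrization used in the text.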
The nodes of the system are labeled with an index $i$ ($1\leq i\leq N$) and represent individuals subject to interactions
with their neighbors (i.e. other individuals directly linked to them by either intra- or inter-community connections),
as well as with an externally broadcast mass media message. According to Axelrod's model,
the cultural state of the $i-$th individual is described by the integer vector
${\bf \sigma_i} = (\sigma_{i1},\sigma_{i2},...,\sigma_{iF})$, where
$1\leq\sigma_{if}\leq q$. The dimension of this vector, $F$, defines the
number of cultural attributes, while $q$ corresponds to the number of different cultural traits per attribute.
Initially, the specific traits for each individual are assigned randomly with a uniform distribution. Similarly,
the mass media cultural message is modeled by a constant integer vector
${\bf \mu} = (\mu_1,\mu_2,...,\mu_F)$, which can be chosen as ${\bf \mu} = (1,1,...,1)$ without loss of generality.
The intensity of the mass media message relative to the local interactions between neighboring
individuals is controlled by the parameter $M$ ($0\leq M\leq 1$). Moreover, the parameter $r$ ($0 < r\leq 1$)
is introduced to represent the noise rate \cite{kle03a}.
Since the main focus of this work is on mass media spreading phenomena under the influence
of an underlying modular substrate, we will consider inhomogeneous mass media affecting only
individuals that belong to the community $A$.
The model dynamics is defined by iterating a sequence of rules, as follows: (1) an individual
is selected at random; (2a) if the individual belongs to the community $A$, he/she interacts with the mass media
field with probability $M$, while he/she interacts with a randomly chosen neighbor with probability (1-$M$);
(2b) if the individual belongs to the community $B$, he/she interacts with a randomly chosen neighbor;
(3) with probability $r$, a random single-feature perturbation is performed.
The interaction between the $i-$th and $j-$th individuals is governed by their cultural overlap,
$C_{ij}=\sum_{f=1}^F\delta_{\sigma_{if},\sigma_{jf}}/F$, where $\delta_{kl}$ is the Kronecker delta.
With probability $C_{ij}$, the result of the interaction is that of
increasing their similarity: one chooses at random one of the attributes on which they differ
(i.e., such that $\sigma_{if}\neq\sigma_{jf}$) and sets them equal by changing one of their traits.
Naturally, if $C_{ij}=1$,
the cultural states of both individuals are already identical, and the interaction leaves them unchanged.
The interaction between the $i-$th individual and the mass media field is governed by the overlap term
$C_{iM}=(\sum_{f=1}^F\delta_{\sigma_{if},\mu_f}+1)/(F+1)$. Analogously to the preceding case,
$C_{iM}$ is the probability that, as a result of the interaction, the individual changes one of the traits
that differ from the message by setting it equal to the message's trait.
Again, if $C_{iM}=1$, the cultural state of the individual is already identical to the mass media message,
and the interaction leaves it unchanged.
Notice that $C_{iM}>0$; thus, the mass media coupling used here is
capable of affecting the cultural traits of any individual within community $A$,
including those who do not share any features with the external message.
As regards the perturbations introduced in step (3),
a single feature of a single individual is randomly chosen, and
its corresponding trait is changed to a randomly selected value between 1 and $q$.
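Rules (1)--(3), together with the overlap terms $C_{ij}$ and $C_{iM}$, can be sketched as a single update function. This is our illustration of the dynamics as described in the text; the helper names are hypothetical.

```python
import random

def step(state, adj, half, mu, M, r, q, rng=random):
    """One update of the mass-media Axelrod model on a CRN:
    (1) pick a random individual i; (2a) if i is in community A (i < half),
    interact with the message mu with probability M, else with a random
    neighbor; (2b) if i is in B, interact with a random neighbor;
    (3) with probability r, apply a random single-feature perturbation."""
    F, n = len(mu), len(state)
    i = rng.randrange(n)
    if i < half and rng.random() < M:            # (2a) media interaction
        overlap = (sum(a == b for a, b in zip(state[i], mu)) + 1) / (F + 1)
        if rng.random() < overlap:
            diff = [f for f in range(F) if state[i][f] != mu[f]]
            if diff:
                state[i][rng.choice(diff)] = mu[rng.choice(diff)] if False else mu[diff[rng.randrange(len(diff))]]
    elif adj[i]:                                  # pairwise interaction
        j = rng.choice(sorted(adj[i]))
        overlap = sum(a == b for a, b in zip(state[i], state[j])) / F
        if rng.random() < overlap:
            diff = [f for f in range(F) if state[i][f] != state[j][f]]
            if diff:
                f = rng.choice(diff)
                state[i][f] = state[j][f]
    if rng.random() < r:                          # (3) cultural drift
        k = rng.randrange(n)
        state[k][rng.randrange(F)] = rng.randrange(1, q + 1)
```

Note that the media overlap $C_{iM}>0$ even with no shared features, so the message can always reach any individual in community $A$, while the pairwise overlap $C_{ij}$ vanishes for culturally disjoint neighbors.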
In the absence of fluctuations, the system evolves towards absorbing states, i.e., frozen configurations that
are not capable of further changes. However, for $r>0$ the system evolves continuously and, after a transient period,
it attains a stationary state. Following previous studies on Axelrod's model \cite{gon05,maz07},
in this work we chiefly focus on systems of fixed size ($N=2500$ nodes), fixed number of cultural attributes ($F=10$)
and fixed number of different cultural traits per attribute ($q=40$). Furthermore, we also briefly discuss the effects
(or lack thereof) observed by changing these model parameters.
The results presented in the next Section correspond to observables measured over statistically-averaged ensembles in
the stationary regime, which were obtained by averaging over 200 different (randomly generated) initial
configurations and 100 different network realizations.
\section{Results and discussion}
In order to set the stage for the investigation of modularity effects,
let us first briefly summarize the main results concerning Axelrod's model defined on the square lattice.
As mentioned above, in the absence of fluctuations the system reaches absorbing configurations, in
which the state of each individual is fixed and not capable of further changes.
With the inclusion of noise to model the effect of cultural drift, however,
disordered multicultural configurations become metastable
states that can eventually decay to ordered stable configurations \cite{kle03b}.
Whether this decay actually takes place or not depends
on the competition between the noise rate, $r$, and the
characteristic time for the relaxation of perturbations, $T$. For $r<T^{-1}$, repeated cycles of perturbation-relaxation
processes drive the disordered system towards monocultural states, while, for $r>T^{-1}$, noise
rates are large enough to hinder the relaxation mechanism, thus conserving the disorder. With arguments based on
a mean-field description of a damage spreading process, the characteristic time for the relaxation of perturbations
is estimated as $T\sim N\ {\rm ln}N$, where $N$ is the system size \cite{kle03b}.
In the absence of noise, the number of cultural traits is observed to play a key role in determining the final
absorbing state: ordered monocultural configurations (for $q<q_c$) and disordered multicultural
ones (for $q>q_c$) are separated by
a finite critical value $q_c>0$ \cite{cas00,kle03b}. For instance, in a system of size $N=2500$ with
$F=10$ cultural attributes, the transition takes place at $q_c\approx 50$.
For $r>0$, however, the order-disorder transition solely depends on
the effective noise rate $r_{eff}=r\times (1-1/q)$. The very mild dependence on the parameter $q$
just stems from the fact that, according to the third rule of the model dynamics,
a perturbation can leave the cultural configuration unchanged with probability $1/q$. For the $N=2500$ and
$F=10$ case, the order-disorder transition is observed around $r_{eff}\approx 10^{-4}$ \cite{kle03b}.
When both noise and the mass media external field, $M$, are taken into account, interplay and competition effects
are observed \cite{maz07}. As the field intensity is increased, the transition shifts to higher noise levels.
Since the ordering is driven by the mass media field, the system attains a unique
ordered state, namely, the monocultural state in which all individuals share their
cultural traits with those of ${\bf \mu}$, the external message. In the absence of external fields,
the noise-induced ordering leads to $q^F$ equally likely monocultural
ground configurations, as well as to excursions from one ground configuration to another.
Note hence that, due to considerations of ergodicity and
the multiplicity of ground states, it is not possible to simply define an
``effective noise intensity" $r^\prime=r^\prime(r,M)$ in order to trivially map
the model with field onto an effective model without field.
Let us now focus on the effects of modularity, which is modeled using a substrate topology that
consists of two coupled random networks (see Sect. 2 for details).
Firstly, we will address cultural drift effects alone ($r>0, M=0$); later, we will study the case in which
inhomogeneous mass media affect one of the communities ($r>0, M>0$) and explore the conditions for successful message
spreading across the whole system.
\begin{figure}[t!]
\begin{center}
\epsfxsize=4.2truein\epsfysize=2.9truein\epsffile{snap1.eps}
\end{center}
\caption{Typical snapshot configurations for different values of noise, $r$,
and intercommunity connectedness, $\nu$, in the absence of mass media fields ($M=0$).
Nodes belonging to the community $A$ ($B$) are shown on the left (right) side of each CRN realization.
The most popular cultural state is shown in blue, the second most popular in green,
the rest in greyscale. The network visualizations were created with Cytoscape \cite{sha03}.}
\label{fig1}
\end{figure}
Figure 1 presents some typical snapshot configurations of the stationary regime for
different values of noise, $r$, and intercommunity connectedness, $\nu$, in the absence of mass media fields:
(a) $r=10^{-3}, \nu=6\times 10^{-2}$; (b) $r=10^{-2}, \nu=4\times 10^{-2}$; and (c) $r=10^{-3}, \nu=1.5\times 10^{-2}$.
Here and throughout, the community $A$ ($B$) is shown on the left (right) side of each network realization.
For the sake of clarity, snapshot visualizations correspond to networks of
small size ($N=100$). The cultural state of the individuals is indicated by different node colors: the cultural
state shared by the largest number of individuals is shown in blue, the second most popular cultural state is shown
in green, while less frequent states are indicated in greyscale.
The characteristic configuration for small noise levels and many intercommunity links (Figure 1(a)) is a nearly full
consensus: most of the individuals share the same cultural state. However, when considering larger noise
rates (Figure 1(b)), the system undergoes a transition towards complete disorder, where the size of the most popular
cultural state represents just a small fraction of the total system's size. Indeed, these phenomena are reminiscent of the
observed behavior of Axelrod's model in the lattice, where a finite critical noise, $r_c$, was found to separate
the ordered, monocultural phase (for $r<r_c$) from the disordered, multicultural one (for $r>r_c$) \cite{kle03b}.
However, strong effects arising from the modular
structure of the substrate are observed at small values of $\nu$, leading to the appearance of a new phase
(Figure 1(c)). This is the ordered, bicultural phase, in which different cultural states prevail within each community.
Interestingly, this behavior is in qualitative agreement with related work on a 2-state, majority rule model defined on
substrates with community structure, where the coexistence of opposite opinions was observed \cite{lam07,lam07b}.
In order to quantitatively characterize different phases in the stationary regime,
we define $A_{max}$ ($B_{max}$) as the maximal number of members of community $A$ ($B$)
that share the same cultural state, normalized to unity. Furthermore, we define the vector ${\bf{a_{max}}}$
(${\bf{b_{max}}}$) as the prevailing
cultural state within community $A$ ($B$). Once a stationary configuration is generated, we
classify the cultural state of
each community as being ordered or disordered according to a simple majority
criterion: $A$ ($B$) is ordered if $A_{max}\geq 0.5$ ($B_{max}\geq 0.5$),
and disordered otherwise. Moreover, when both communities are ordered, the whole system is in an ordered {\it symmetric} state
if ${\bf{a_{max}}}={\bf{b_{max}}}$, while it is in an ordered {\it asymmetric} state otherwise.
Since $A$ and $B$ are indistinguishable communities, the combination
of these different states leads to 4 possible phases.
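The majority-criterion classification above can be sketched as follows (our illustration, assuming a community's prevailing state is simply its most frequent cultural vector):

```python
from collections import Counter

def classify(state, half):
    """Classify a stationary configuration. A community is ordered if its
    most popular cultural state is shared by at least half of its members
    (A_max >= 0.5, B_max >= 0.5). With both communities ordered, the system
    is 'symmetric' if the prevailing states coincide, 'asymmetric' otherwise."""
    def prevail(members):
        counts = Counter(tuple(state[i]) for i in members)
        vec, n = counts.most_common(1)[0]
        return vec, n / len(members)
    a_max, A_max = prevail(range(half))
    b_max, B_max = prevail(range(half, len(state)))
    if not (A_max >= 0.5 and B_max >= 0.5):
        return "disordered"
    return "ordered symmetric" if a_max == b_max else "ordered asymmetric"

A = [[1, 1]] * 5
B = [[2, 2]] * 5
print(classify(A + B, 5))    # ordered asymmetric
print(classify(A + A, 5))    # ordered symmetric
```

Since $A$ and $B$ are indistinguishable, the two asymmetric assignments collapse into one phase, giving the 4 phases counted in the text.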
Figure 2 shows the resulting phase diagram in the $r-\nu$ parameter space, in which the dominant phases for each region
are displayed. At any given point on the $r-\nu$ plane, the {\it dominant phase} is defined as the phase with the largest
probability of occurrence (for instance, phase probability profiles for the $M>0$ case are shown in Figures 4 and 7 below).
Thus, boundaries separating two phases correspond to states for which two dominant phases are equally probable, while triple points
are associated with states for which three dominant phases are equally probable. The same definition was also adopted to
determine the phase diagrams presented below (see Figures 5 and 8).
As anticipated, three distinct regimes prevail: a multicultural (disordered)
phase, a monocultural phase in which both communities share the same cultural state, and a bicultural phase
in which each community is ordered, but in different states independent of each other.
The effect of increasing $r$ at a fixed value of $\nu$ is that of increasing the disorder in the system, as expected.
The noise-induced order-disorder transition is only mildly dependent on the number of links connecting both
communities, with the transition curve located at $r\simeq 2\times 10^{-4}$ for $\nu\geq 1.4\times 10^{-3}$.
The mild $\nu$-dependence of the order-disorder transition curve is due to finite-size effects:
$\nu$ plays here the role of tuning the ``effective system size" from $N_{eff}=N/2$
(in the $\nu\to 0$ limit, when the two communities are independent of each other) to $N_{eff}=N$
(in the $\nu\to 1$ limit, when the modular structure washes out).
\begin{figure*}[t]
\begin{center}
\epsfxsize=3.8truein\epsfysize=2.7truein\epsffile{PD_0.eps}
\end{center}
\caption{Phase diagram for $M=0$ in the $r-\nu$ parameter space.}
\label{fig2}
\end{figure*}
It is within the region with small $r$ where community structure effects become more noticeable.
For small $\nu$,
the two communities are weakly connected and do not influence each other, leading to the prevalence of the
(ordered asymmetric) bicultural phase. However, upon increasing $\nu$, the noise-driven ordered phases tend towards consensus,
thus leading to a dominant (ordered symmetric) monocultural phase. A tricritical point is found at $(r=2\times 10^{-4},
\nu=1.65\times 10^{-3})$.
As commented above, a qualitatively similar phase diagram was obtained in previous investigations of a 2-state
majority rule model defined on substrates with community structure \cite{lam07,lam07b}.
The transition from the bicultural phase to the monocultural phase that takes place in the small-$r$ region can be
roughly estimated by the following theoretical argument. Under conditions of small rate of perturbations, we can assume
that typically most of the nodes in the community $A$ will tend to agree in the same cultural state $\sigma_A$ (randomly
chosen among any of the $q^F$ possible cultural vectors), and similarly, most of the nodes in the community $B$ will
share the cultural state $\sigma_B$. The monocultural phase, $\sigma_A=\sigma_B$, is driven by interactions between pairs of
border nodes, i.e. those with intercommunity links. If a border node is chosen by rule (1) of the model dynamics,
its interacting neighbor, chosen in turn by rule (2), has a probability $P_{ext}=k_{ext}/(k_{int}+k_{ext})$ to belong
to a different community. If $L_\nu$ is the total number of intercommunity links, we can assume that the
interaction between communities is effectively present for $P_{ext}L_\nu>1$. Thus, $P_{ext}L_\nu\sim 1$ can be
taken as a rough estimate for the occurrence of the monocultural/bicultural phase transition in the small-$r$ region.
Since, on average, $k_{ext}=\nu k_{int}$ and $\nu\ll 1$, a border node typically has $k_{ext}=1$, i.e. a single external neighbor.
Using $L_\nu=2\nu N$, the condition $P_{ext}L_\nu\sim 1$ reads
\begin{equation}
\frac{2\nu N}{k_{int}+1}\sim 1\ .
\label{estimate}
\end{equation}
Replacing $k_{int}=4$ and $N=2500$, we obtain $\nu\sim 10^{-3}$, which provides a good estimate for the boundary
observed in Figure 2 between the monocultural and bicultural phases.
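A quick numerical check of this estimate (ours, just re-evaluating Eq.~(\ref{estimate}) for the parameters used in the simulations):

```python
# Boundary estimate P_ext * L_nu ~ 1 with k_ext = 1 and L_nu = 2 * nu * N:
# 2 * nu * N / (k_int + 1) ~ 1   =>   nu ~ (k_int + 1) / (2 * N)
k_int, N = 4, 2500
nu_c = (k_int + 1) / (2 * N)
print(nu_c)  # 0.001
```

This reproduces the $\nu\sim 10^{-3}$ boundary quoted in the text, close to the tricritical value $\nu=1.65\times 10^{-3}$ read off Figure 2.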
An immediate consequence of this simple theoretical argument is that results for different network sizes
should scale with $\nu$ and $N$ through $L_\nu\propto \nu N$. This predicted behavior was indeed confirmed by
our simulations of systems of different size. Moreover, increasing $N$ also shifts the boundary between ordered
phases and the multicultural phase towards smaller values of noise, which is consistent with previous observations
of noise-driven transitions in Axelrod's model defined on the square lattice \cite{kle03b}.
Let us now address the case in which mass media affect one of the communities
and explore the conditions for successful message spreading across the whole system.
In order to capture the characteristic behavior of this system in
the multidimensional parameter space, we consider separately the small-$r$ ordered region
and the large-$r$ disordered case.
Since $r$ can be regarded as a measure of the intrinsic individual determination or ``free will"
relative to the influence
exerted by neighbors and mass media, the small-$r$ scenario represents a society with individuals subject to
strong social pressure, while the large-$r$ case corresponds to a society characterized by loose
social ties. Within these different scenarios, the
adoption of inhomogeneous mass media fields that introduce a physical distinction between the dynamics of
both communities will drive the system across different phase transitions.
\begin{figure}[t]
\begin{center}
\epsfxsize=4.4truein\epsfysize=2.5truein\epsffile{snap2.eps}
\caption{Typical snapshot configurations for different values of $M$ and $\nu$ within the small-$r$ regime.
The $\mu$-state is shown in red, the most popular among non-${\bf\mu}$ states appears in blue,
the second most popular non-${\bf\mu}$ state in green, while other states are shown in greyscale.}
\end{center}
\label{fig3}
\end{figure}
Figure 3 shows typical snapshot configurations of size $N=100$ for different values of $M$ and $\nu$ within the small-$r$ regime
($r=10^{-3}$): (a) $M=10^{-3}, \nu=6\times 10^{-2}$; (b) $M=10^{-2}, \nu=6\times 10^{-2}$;
(c) $M=10^{-3}, \nu=2\times 10^{-2}$; and (d) $M=10^{-2}, \nu=2\times 10^{-2}$.
Recall that nodes belonging to the community $A$ ($B$) are shown on the left (right) side of each network realization.
The cultural state that
corresponds to the external message, ${\bf\mu}$, is shown in red. Other states are shown in
blue (most popular among non-${\bf\mu}$ states), green (second most popular) and greyscale (all other states).
When communities are strongly interconnected, the system evolves towards consensus, where
most of the individuals share the same cultural state (Figures 3(a)-(b)). However, the nature of the attained consensus
depends on the strength of the mass media field: for small $M$, the system organizes itself into any of
the $q^F$ possible monocultural states, while, as $M$ increases, a transition takes place towards a regime dominated by
the mass media message. When communities are instead sparsely interconnected, they tend to evolve independently
of each other (Figures 3(c)-(d)). However, analogously to the case where communities are tightly bound together,
the system undergoes an $M$-driven transition from a regime where communities are ordered but independently
organized into different non-$\mu$ states (Figure 3(c)) to a phase where the mass
media message prevails within community $A$, while the community $B$ is in a non-$\mu$ state (Figure 3(d)).
Notice that, lacking enough intercommunity links, even a very intense mass media campaign will fail to convey
its message to the whole society.
\begin{figure}[t]
\begin{center}
\epsfxsize=4.8truein\epsfysize=2.5truein\epsffile{PROB_1.eps}
\caption{Probability of occurrence of the relevant phases as a function of $M$ for
the small-$r$ regime (with $r=10^{-5}$) and different
values of the community connectivity: (a) $\nu=6\times 10^{-4}$ and (b) $\nu=2\times 10^{-3}$.}
\label{fig4}
\end{center}
\end{figure}
In order to quantitatively distinguish among different phases, we can follow a procedure similar to
that described above for the $M=0$ case.
However, additional phases now arise from the fact that the cultural state corresponding to the mass media message, $\mu$,
is physically distinguishable from the other $q^F-1$ possible cultural states. For instance, the community $A$ can
be either dominated by the mass media message ($A_{max}\geq 0.5$ and ${\bf{a_{max}=\mu}}$), ordered in a
different cultural state ($A_{max}\geq 0.5$ and ${\bf{a_{max}}}\neq\mu$), or disordered ($A_{max} < 0.5$).
Taking also into account the distinction between symmetric and asymmetric ordered states (which is relevant
when $A$ and $B$ are ordered in cultural states both different from $\mu$), this ultimately leads to 10 possible
different phases. However, as suggested by the snapshot configurations shown in Figure 3, only 4 phases are relevant.
Figure 4 shows the probability of occurrence of the relevant phases as a function of the message intensity,
corresponding to the small-$r$ regime (with $r=10^{-5}$) and for different
values of the community connectivity: (a) $\nu=6\times 10^{-4}$ and (b) $\nu=2\times 10^{-3}$. In agreement
with our qualitative discussion, Figure 4(a) shows that different kinds of asymmetric phase prevail when
communities are loosely interconnected. Indeed, for small mass media fields, communities are predominantly
in different non-$\mu$ states, i.e. the $(A_{non-\mu},B_{non-\mu})_A$ phase,
while increasing $M$ above $M_c=7\times 10^{-4}$ the system is most often found in a
bicultural phase with $\mu$ prevailing within community $A$, labeled as the $(A_\mu,B_{non-\mu})$ phase.
Note also that, somewhat counterintuitively,
the probability of achieving overall consensus tends to decrease as a function of the mass media intensity.
This phenomenon is due to the fact that the external field prevents the independent self-organization
of the whole system in a non-$\mu$ state, while the lack of strong intercommunity ties prevents the message from
reaching out far beyond the region directly exposed to the inhomogeneous mass media field.
Figure 4(b) shows that, on the contrary,
strongly interconnected communities allow the whole system to reach consensus. Increasing
the mass media field, the system undergoes the expected transition from non-$\mu$ monocultural states, i.e.
the symmetric $(A_{non-\mu},B_{non-\mu})_S$ phase, to $\mu$-consensus, indicated as $(A_\mu,B_\mu)$.
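The phase classification procedure described above can be sketched in code. The following is a minimal illustration (the function name and argument layout are hypothetical), using the 0.5 threshold and the phase labels from the text:

```python
def classify_phase(a_max_frac, a_max_state, b_max_frac, b_max_state, mu):
    """Classify the two-community system into one of the phases described
    in the text, from the size (fraction) and identity of the largest
    cultural domain in each community."""
    def label(frac, state):
        if frac < 0.5:
            return "dis"                              # disordered community
        return "mu" if state == mu else "non-mu"      # ordered community

    a, b = label(a_max_frac, a_max_state), label(b_max_frac, b_max_state)
    # Distinguish symmetric/asymmetric states when both communities are
    # ordered in cultural states different from the mass media message:
    if a == b == "non-mu":
        suffix = "_S" if a_max_state == b_max_state else "_A"
        return f"(A_non-mu,B_non-mu){suffix}"
    return f"(A_{a},B_{b})"
```

Enumerating the three labels per community, plus the symmetric/asymmetric split of the doubly ordered non-$\mu$ case, reproduces the 10 possible phases mentioned in the text.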
\begin{figure}[t]
\begin{center}
\epsfxsize=3.8truein\epsfysize=2.7truein\epsffile{PD_1.eps}
\caption{$M-\nu$ phase diagram within the small-$r$ regime (for $r=10^{-5}$).}
\label{fig5}
\end{center}
\end{figure}
The corresponding phase diagram in the $M-\nu$ parameter space is shown in Figure 5.
In the low-$M$ end, we observe that, increasing the intercommunity connectedness, the dominating phase changes
from asymmetric non-$\mu$ to symmetric non-$\mu$. In fact, this transition matches the bicultural to monocultural
phase transition observed earlier in the small-$r$ end in the absence of mass media fields (recall Figure 2).
As discussed above, successful message spreading across the whole system can only be achieved when sufficiently
strong mass media fields are applied on a sufficiently interconnected system. The boundaries for the $\mu$-consensus
region are approximately $M\geq 10^{-3}$ and $\nu\geq 1.5\times 10^{-3}$. These results stress the fact that, in a
general scenario, well-designed, cost-effective advertising campaigns should take into account the specific modular
structure of the target population. Indeed, even very intense (and, hence, costly) mass media campaigns may fail if
social modularity effects are disregarded.
\begin{figure}[t]
\begin{center}
\epsfxsize=4.4truein\epsfysize=2.5truein\epsffile{snap3.eps}
\caption{Typical snapshot configurations for different values of $M$ and $\nu$ within the large-$r$ regime.
The $\mu$-state is shown in red, the most popular among non-${\bf\mu}$ states appears in blue,
the second most popular non-${\bf\mu}$ state in green, while other states are shown in greyscale.}
\label{fig6}
\end{center}
\end{figure}
Considering networks of different sizes, the phase diagrams
can be made roughly invariant along the vertical axis by adopting the scaling variable $\nu N$. Indeed, this
indicates that the relevant quantity defining the actual degree of interconnectedness is the total number
of intercommunity links, $L_\nu$, in agreement with the theoretical argument presented above, Eq.(\ref{estimate}).
The boundary between non-$\mu$ states and the $\mu-$consensus phase, which in Figure 5 appears around $M\simeq 10^{-3}$,
is observed to shift towards lower values of $M$ as the network size is increased. We also explored the stability of our
results under changes in the number of cultural attributes, $F$, and the number of different cultural traits per attribute,
$q$, without noticing any significant variations. This behavior agrees well with previous investigations on
Axelrod's model, where results roughly independent of the parameter $F$ \cite{cas00} and the parameter $q$ (for the
model with noise, provided that $q\gg 1$) \cite{kle03b,maz07} were reported. A similar behavior is also observed in the
large-$r$ regime, which is discussed below.
\begin{figure}[t]
\begin{center}
\epsfxsize=4.8truein\epsfysize=2.5truein\epsffile{PROB_2.eps}
\caption{Probability of occurrence of the relevant phases as a function of $M$ for
the large-$r$ regime (with $r=10^{-3}$) and different
values of the community connectivity: (a) $\nu=2\times 10^{-3}$ and (b) $\nu=1.7\times 10^{-2}$.}
\label{fig7}
\end{center}
\end{figure}
As anticipated, we now consider the large-$r$ scenario, which corresponds to a society characterized by loose social ties.
Figure 6 shows typical snapshot configurations for different values of $M$ and $\nu$ within the large-$r$ regime ($r=10^{-2}$),
with the same coloring scheme used in the visualizations of Figure 3. The corresponding parameter values are:
(a) $M=1.5\times 10^{-2}, \nu=6\times 10^{-2}$; (b) $M=10^{-1}, \nu=6\times 10^{-2}$; (c) $M=1.5\times 10^{-2}, \nu=2\times 10^{-2}$;
and (d) $M=10^{-1}, \nu=2\times 10^{-2}$.
Irrespective of connectivity, the low-$M$ region is characterized by disordered multicultural configurations
(Figures 6(a) and 6(c)). Indeed, disordered states are characteristic of the large-$r$ region in the absence of mass
media fields (compare to Figure 1(b)). Within this scenario,
order can be achieved only when strong external fields oppose the large intrinsic noise (see Figure 6(b)). However,
if the communities are sparsely interconnected, even strong fields are not capable of driving the system towards
consensus: while the mass
media message prevails within community $A$, the community $B$ is instead in a disordered multicultural state (Figure 6(d)).
\begin{figure}[t]
\begin{center}
\epsfxsize=3.8truein\epsfysize=2.7truein\epsffile{PD_2.eps}
\caption{$M-\nu$ phase diagram within the large-$r$ regime (for $r=10^{-3}$).}
\label{fig8}
\end{center}
\end{figure}
Following the procedure described above, we can determine the probability of occurrence of each phase as a function
of the field intensity. However, out of 10 possible phases, only 3 are relevant.
These are shown in Figure 7 for $r=10^{-3}$ and different
values of the community connectivity: (a) $\nu=2\times 10^{-3}$ and (b) $\nu=1.7\times 10^{-2}$.
As discussed above, when the communities are loosely interconnected (Figure 7(a)) the system is intrinsically
disordered. Only a strong field can oppose the large noise level and order the community $A$, thus leading to
a transition from the $(A_{dis},B_{dis})$ phase to the $(A_\mu,B_{dis})$ phase. For highly connected
communities (Figure 7(b)),
instead, the strong field is able to order the whole system in the $\mu$-state, driving the phase transition from
$(A_{dis},B_{dis})$ to $(A_\mu,B_\mu)$.
Figure 8 shows the phase diagram in the $M-\nu$ parameter space corresponding to the large-$r$ regime ($r=10^{-3}$).
Matching the large-$r$ region for the $M=0$ phase diagram of Figure 2, the low-$M$ end is dominated by
the multicultural disordered phase. Increasing $M$, two different phases can be reached depending on the intercommunity
connectedness: the $(A_\mu,B_{dis})$ phase, for loosely connected communities, and the $\mu$-consensus, when the
communities are more strongly bound together.
\begin{figure}[t]
\begin{center}
\epsfxsize=3.8truein\epsfysize=2.7truein\epsffile{ELS.eps}
\caption{Effective link weights as a function of $M$
for $r=10^{-5}$ and $\nu=2\times 10^{-3}$ (solid lines). Link weight plots are shown separately for intracommunity
connections within each community, as well as for intercommunity connections. For comparison,
the probability of occurrence of the relevant phases is also shown (dashed lines). See the text for more details.}
\label{fig9}
\end{center}
\end{figure}
Finally, let us discuss some subtle, intriguing effects that result from the model dynamics.
Recalling the phase probability distributions
of Figure 4(b), the plot corresponding to the $(A_\mu,B_\mu)$ phase shows
a dip at large message intensities,
which is correlated with the occurrence of a bump in the plot of the $(A_\mu,B_{non-\mu})$ phase. Far from being an
artifactual feature due to poor statistics, this phenomenon stems from the dynamical rules of the model and can be
understood on the basis of a sound sociological interpretation.
According to the second dynamical rule, within the community $A$,
the parameter $M$ regulates the competition between
two different types of interaction: that of an individual with the mass media, and that between neighboring individuals.
This implies that, besides the ordering effect driven by the mass media interaction,
which tends to align all cultural traits with the external message,
there is also a competing disordering mechanism: individuals subject to strong mass media fields have a low
probability of interacting (and, thus, of increasing the similarity) with their social neighbors.
Although the former tends to prevail and is, ultimately, the mechanism
responsible for the $\mu$-consensus ordering observed in the large-$M$ end of the phase diagrams,
competition effects lead to visible features such as the dip and the bump noted above.
In order to confirm this explanation, we computed the effective link weight, $w_{eff}(A-B)$, as the mean cultural
overlap between neighbors that belong to different communities, and compared it to the effective link
weight within each community. Figure 9 shows the effective link weights as a function of $M$ for the
small-$r$ regime (with $r=10^{-5}$) and a large connectivity ($\nu=2\times 10^{-3}$).
For the sake of comparison, the probability of occurrence of the relevant phases (same as in Figure 4(b)) is also shown.
Marked with arrows, we indicate two distinct features in the plots of effective link weight: a first dip (local minimum)
in all three plots taking place at $M=10^{-3}$, and a second dip observed at $M=2.6\times 10^{-3}$ only in the plot
of intercommunity links.
The former is well correlated with the phase transition from
$(A_{non-\mu},B_{non-\mu})_S$ to $(A_\mu,B_\mu)$, and hence reflects the corresponding phase changes within each community.
Instead, the latter is well correlated with the dip in the probability of the $(A_\mu,B_\mu)$ phase, as well as with the
bump in the probability of the $(A_\mu,B_{non-\mu})$ phase.
These results show that individuals subject to intense mass media fields are less
likely to interact with their social neighbors, hampering message spreading processes
across community boundaries. Thus, the tendency of individuals towards
isolated behavior induced by mass media pervasiveness is
captured and well accounted for by this model.
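The effective-link-weight computation can be sketched as follows, assuming each cultural state is a tuple of $F$ traits and that the cultural overlap between two agents is the fraction of features on which they agree (all names below are illustrative):

```python
def overlap(s1, s2):
    """Cultural overlap: fraction of features on which two agents agree."""
    return sum(t1 == t2 for t1, t2 in zip(s1, s2)) / len(s1)

def effective_link_weight(states, edges):
    """Mean cultural overlap over a given set of links; applied to the
    intercommunity (A-B) edges this yields w_eff(A-B), and applied to the
    edges within a community it yields that community's link weight."""
    return sum(overlap(states[i], states[j]) for i, j in edges) / len(edges)
```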
\section{Conclusions}
In the context of an extension of Axelrod's model for social influence, we studied
how the modular structure of social
networks affects the propagation of mass media messages across the society.
The community structure of social networks was represented by coupled random networks,
in which two random graphs are connected by intercommunity links.
In the absence of mass media,
we observed the prevalence of three distinct phases, depending on the values of cultural drift (i.e. the level
of intrinsic noise) and community interconnectedness:
the ordered monocultural phase, the ordered bicultural phase, and the disordered multicultural phase.
The obtained phase diagram is qualitatively similar to that reported for the majority rule model defined
on modular substrates \cite{lam07,lam07b}.
Then, considering inhomogeneous advertising campaigns, we studied the
system's response to the spreading of the mass media message. We considered separately two different scenarios:
the small noise regime, which represents a society with individuals subject to
strong social pressure, and the large noise regime, which is characterized by loose
social ties.
Incorporating the intensity of the mass media field as an additional parameter, we observed the emergence of
new phases, which led to a very rich, novel phase diagram in the multidimensional parameter space.
Our results show that the design of successful, cost-effective advertising campaigns
should take into account the specific modular
structure of the target population. Indeed, even very intense (and, hence, costly) mass media campaigns may fail if
social modularity effects are disregarded.
Certainly, the simplified scenario considered here leaves room for further investigations that may look for
modularity effects when communities of different size and different inter-/intra-community connectedness are considered.
In this vein, the study of social influence and message spreading on real modular substrates taken from social interaction data
(such as e.g. large-scale mobile phone usage with detailed space-time resolution \cite{can08,lam08}) could lead to interesting
new results.
We thus hope that the present findings will contribute to the growing interdisciplinary efforts
in the mathematical modeling of social dynamics phenomena, and stimulate further work.
\section*{Acknowledgments}
We acknowledge the hospitality of the University of New Mexico, where this work was started during
the authors' visit to the Consortium of the Americas for Interdisciplinary Science.
J. C. is supported by the James S. McDonnell Foundation and the National
Science Foundation ITR DMR-0426737 and CNS-0540348 within the DDDAS program.
\subsection{Cloud Software Infrastructure}
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=0.5\textwidth]{figures/google_redraw.pdf}
\label{fig:google}
}
\subfloat[]{
\includegraphics[width=0.5\textwidth]{figures/facebook_redraw.pdf}
\label{fig:facebook}
}
\caption{(a) Google and (b) Facebook software infrastructures (redrawn from \cite{schwarzkopf2015operating})}
\end{figure}
Internet-scale applications such as search engines and social networks demand a complicated software infrastructure with cluster management, data storage, data processing, and application layers. Each layer usually contains multiple systems to serve different use scenarios. Figures \ref{fig:google} and \ref{fig:facebook} illustrate the known systems at Google and Facebook. The cluster management layer allocates hardware resources and schedules jobs as a service provided to the upper layers. Distributed monitoring systems help system admins keep the cluster healthy. Several data storage systems persist system state at scale and provide rich interfaces with various consistency guarantees. The data processing layer implements the data analysis and representation required by the applications.
Upon closer examination, the distributed systems within each layer often adopt a simple architectural pattern in which a master node maintains the critical state while workers accept tasks from the master and compute on data in a stateless manner. The master node relies on state-machine replication for fault tolerance. Finally, these systems often employ ZooKeeper for coordination needs, including metadata management, group membership, leader election, and resource and configuration management. Cloud applications typically leverage this robust infrastructure for fault tolerance and recovery, among other things.
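A minimal, single-threaded sketch of this master-worker pattern follows; the class and function names are hypothetical, and a real master would replicate its state and dispatch tasks over the network:

```python
import queue

class Master:
    """Holds the critical state (the task queue and results); in a real
    system this state would be guarded by state-machine replication."""
    def __init__(self, tasks):
        self.pending = queue.Queue()
        for task in tasks:
            self.pending.put(task)
        self.results = {}

def worker(master, compute):
    """Stateless worker: accepts tasks from the master, computes on the
    data, and reports back; it keeps no state between tasks."""
    while not master.pending.empty():
        task_id, data = master.pending.get()
        master.results[task_id] = compute(data)
```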
\subsection{Service Oriented Architecture (SOA) and Microservices}
SOA is the most popular software design style for large-scale applications. In SOA, functionality is divided into isolated services, and each service can use other services through a communication protocol (e.g., RESTful) over a network. Compared to the more traditional monolithic architecture pattern, the resulting microservices architecture pattern is more scalable through functional decomposition.
Every microservice in the system is a simple, independent, distinct feature or functionality of the application. Each service instance is typically a process running in its own VM or container. Each microservice instance maintains its own state using the data storage systems abundant in the cloud computing infrastructure.
The interconnected microservices expose well-defined REST APIs that are consumed by other services or clients. Representational state transfer (REST) is the most common choice for inter-service communication because it uses a stateless protocol and standard operations (HTTP), yielding fast performance, reliability, and the ability to grow and update without affecting the running system as a whole. A service API, therefore, is identified by its URL. Between service requests, the client context is not stored on the server providing the service. As a result, each request from any client must contain all the information necessary to serve it.
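As a hedged illustration of this constraint, the hypothetical handler below serves each request using only the information the request itself carries; no client context survives between calls:

```python
def handle_booking_request(request, inventory):
    """Stateless request handler: everything needed to serve the call
    (user, room, dates, auth token) arrives in the request itself; no
    session state is kept on the server between requests."""
    required = {"user", "room_id", "check_in", "check_out", "token"}
    missing = required - request.keys()
    if missing:
        return {"status": 400, "error": f"missing fields: {sorted(missing)}"}
    if request["room_id"] not in inventory:
        return {"status": 404, "error": "unknown room"}
    return {"status": 200, "room": request["room_id"]}
```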
\subsection{Towards Stateless Computing}
\iffalse
Where the applications are running within the infrastructure has also been changed dramatically in recent years as the more components are shared between the distributed applications, the more efficient it gets.
\fi
The infrastructure supporting cloud applications has been evolving rapidly, mostly in the open-source domain. The evolutionary trend is to increase the flexibility of the cloud infrastructure while reducing overheads via virtualization and resource sharing. Figure \ref{fig:sharing} compares these infrastructure approaches, with their shared layers in gray.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{figures/serverless.pdf}
\caption{Evolution of sharing (redrawn from \cite{serverless})}
\label{fig:sharing}
\end{figure}
\iffalse
In the pre-cloud era, applications were running on the dedicated hardware, as such scalability was limited by both the capabilities of physical server and by how much effort is needed to add additional servers to grow the system horizontally. The distributed application of that time were bulkier and tried to use as many resources of available servers before engineers were required to add more physical hardware.
\fi
The development of virtualization technologies and the increased efficiency of Virtual Machines (VMs) gave rise to the proliferation of cloud computing applications. In the cloud, horizontal scalability is attained much more easily, since physical hardware is no longer required to support each application. Many systems can run in virtual machines on the same hardware, raising resource utilization and favoring horizontal scalability over vertical. This allowed applications to carry smaller state and become more portable, easing quick deployment on new virtual machines.
Containerized environments took resource sharing beyond the hardware. Containers are isolated application environments that share the hardware and an underlying operating system~\cite{container}. They are very lightweight and can be brought online or turned off in a matter of seconds. This demanded that applications become lightweight as well, and carrying a large state became prohibitively expensive due to the cost of recovering that state after a container migration or restart.
Finally, serverless or lambda~\cite{awslambda} computing allows multiple applications to share not only the same operating system, but also the user-space libraries and runtimes. With serverless computing, an application is defined as a set of functions, or lambda handlers, that have access to a common data store for state. The functions themselves, however, do not carry any state from one execution to another. Handlers are typically written in portable, interpreted languages such as JavaScript or Python. Because the runtime environment is shared across functions, the code specific to a particular application is typically small. The small code size, portability, and stateless nature of handlers make them inexpensive to send to any worker in a cluster.
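This separation of stateless handlers from a shared data store can be sketched as follows; a plain dict stands in for a cloud key-value service, and the handler names are illustrative:

```python
def make_counter_handler(store):
    """A lambda-style handler: the function itself is stateless, and all
    durable state lives in the shared data store passed in (here a plain
    dict standing in for a cloud key-value service)."""
    def handler(event):
        key = event["key"]
        store[key] = store.get(key, 0) + 1        # state lives in the store,
        return {"key": key, "count": store[key]}  # not in the handler
    return handler
```

Because the handler carries no state of its own, a freshly provisioned handler instance over the same store continues exactly where the previous one left off.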
\iffalse
Figure \ref{fig:sharing} compares the different architecture and their shared layers in gray. At the granularity of operating system, virtual machines virtualized and shared the hardware so multiple VMs can colocate on the same machine. This allowed consolidation of machines, prevented the server sprawl problem, and reduced costs as well as improving manageability. However, VM requires static allocation of the hardware resources. At the granularity of service, the containers virtualized and shared the operating system, and avoided the overheads of VMs by dynamically allocate the resource as the application demand at the moment. "Serverless" (lambda) takes the virtualization a step ahead. They virtualize and share the runtime, and now the unit of deployment is a function. Applications are now defined as a set of functions (i.e., lambda handlers) with access to a common data store. Lambda handlers from different customers share common pools of runtimes managed by the cloud provider, so developers need not worry about server management. Handlers are typically written in interpreted languages such as JavaScript or Python. By sharing the runtime environment across functions, the code specific to a particular application will typically be small, and hence it is inexpensive to send the handler code to any worker in a cluster.
\fi
\subsection{Coordination in the Cloud}
Coordination plays an important role in cloud applications, especially as
the complexity of these systems grows. In our recent
work~\cite{consensus_in_the_cloud}, we surveyed the use of Paxos-like consensus
systems in various cloud computing systems. We identified 9 distinct use cases for
coordination, as illustrated in Figure~\ref{fig:consensus_use}. The most
common of these use cases are metadata management, with 27\% of the usage
scenarios surveyed, leader election (20\%), group membership (11\%), and
synchronization (11\%).
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{figures/consensus-usage.pdf}
\caption{Consensus systems usage in the cloud (redrawn from \cite{consensus_in_the_cloud})}
\label{fig:consensus_use}
\end{figure}
Metadata management provides a mechanism for maintaining metadata, such as the configuration or state of shared objects, across the nodes of a cluster in a consistent and fault-tolerant manner. The leader election use case allows systems to safely elect a single node to act as a dedicated master for some operations. Synchronization is another important use case that enables distributed locks. Other use cases for consensus systems in the cloud include server and log replication, barrier orchestration, service discovery, and distributed queues.
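For instance, leader election can be sketched with a toy stand-in for the consensus service, whose create() call atomically succeeds for exactly one contender per path (ZooKeeper's ephemeral znodes provide this primitive in practice; the classes and paths below are illustrative):

```python
class CoordinationService:
    """Toy stand-in for a consensus-backed service such as ZooKeeper:
    create() succeeds for exactly one contender per path."""
    def __init__(self):
        self.nodes = {}
    def create(self, path, owner):
        if path in self.nodes:
            return False          # someone else already holds the node
        self.nodes[path] = owner  # atomic in a real consensus system
        return True

def elect_leader(service, candidates, path="/election/leader"):
    """Each candidate tries to create the same node; the single winner
    becomes the leader, and the rest become followers."""
    return [c for c in candidates if service.create(path, c)]
```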
\subsection{Stabilization for Online Recovery}
\label{subsec:autorecovery}
\fi
\subsection{Stabilization in DataFlow Systems}
\label{subsec:dataflow}
Modern data-processing~\cite{naiad,heron} and distributed machine learning systems~\cite{tensorflow} often represent the computation as a graph, with data flowing along the edges and vertices representing the computations. These systems tend to use checkpoint-reset as their failure recovery model; however, this may be inadequate for online stream processing systems such as Twitter's Heron~\cite{heron}. Under checkpoint-reset, an online stream-processing system would lose all computations happening after the checkpoint upon recovery. Heron runs a large number of small-state workers that receive their tasks from a centralized stream manager, and it uses a recovery-by-restart model for many faults, since the small tasks can be rescheduled to different workers as needed.
Dhalion is a system that aims to automate many of the actions undertaken by Heron's engineers for fault recovery and performance tuning~\cite{dhalion}. Dhalion achieves this by monitoring many vital parameters and using the monitoring data to detect abnormalities in operation and propose a set of possible causes for the abnormal behavior. Dhalion uses the probable causes to identify the best fix among those available to it and applies that correction to the system, bringing it to a healthier state. The system attempts to resolve problems at the higher, infrastructure level instead of the application-logic level. For instance, if Dhalion detects that performance is inadequate, it will attempt to scale up by adding more workers. Similarly, if Dhalion sees a faulty worker, it will attempt to move that worker's tasks to a healthy one.
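The detect-diagnose-resolve cycle can be sketched in miniature as below; the detector and resolver names are hypothetical examples, not Dhalion's actual policies:

```python
def self_regulate(metrics, detectors, resolvers):
    """Dhalion-style self-regulation loop in miniature: detectors turn
    monitoring data into symptoms, and each symptom is mapped to a
    corrective action applied at the infrastructure level (e.g. adding
    workers) rather than at the application-logic level."""
    symptoms = [name for name, detect in detectors.items() if detect(metrics)]
    return [resolvers[s](metrics) for s in symptoms if s in resolvers]
```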
\subsection{What is Different about the Cloud?}
\subsection {Cloud Computing Faults}
Failures in the cloud can be roughly categorized as small-scale faults and large-scale failures. Small faults typically impact only a few clients or some specific requests, allowing the system to process the rest of the workload. An example of such a small fault would be a hotel booking request rejected due to insufficient funds on the card. In this case, the fault has no impact on other users or requests, yet the system has to take a corrective action to make sure the room is not marked as reserved. The credit card failure left the booking service in a corrupted state, as the room was reserved without a payment. However, this corruption is localized to a single service and unlikely to spread to other services, making the room's unavailability the only adverse effect of the fault if left uncorrected.
The crash-fault approach to fault tolerance is crucial for many such small and localized failures. If the node failure did not impact any global state of the system, such as persisting corrupted data to storage, then the failure can be dealt with by a simple restart or by provisioning a new node. In some cases, even if the failure is persisted to storage, it can be masked through the replication and redundancy available in the cloud.
More intricate small-scale failures may corrupt state in a way that allows the corruption to propagate from one component to another. In such non-localized corruption scenarios, correcting the state is more difficult, since engineers need to account for the possibility of re-corruption cycles~\cite{leal2004scalable}. The spread of corruption across the components of a cloud system can lead to a catastrophic failure.
Large-scale failures are faults that leave the system unavailable for a significant number of users or requests. Gunawi et al. conducted a survey of catastrophic failures across various cloud applications and identified the most common reasons cloud systems crash \cite{cloud_stop_computing}, as summarized in Table \ref{tab:failure_causes}. According to their study of 597 failure reports from different systems, only 15\% of the failures are the result of bugs in the code, while another 10\% are due to problems in configuration. Upgrade failures, i.e., bugs and misconfigurations introduced during system upgrade and maintenance, account for another 16\% of failures. Many of the other failures stem from external events, such as security attacks, natural disasters, or failures of outside dependencies.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\centering
\caption{Causes of Catastrophic Failures in the cloud \cite{cloud_stop_computing}}
\label{tab:failure_causes}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Root Cause & \# of Services & \# of Occurrences & \% of Occurrences\\ \hline
Unknown & 29 & 355 & - \\ \hline
Upgrade & 18 & 54 & 16 \\ \hline
Network & 21 & 52 & 15 \\ \hline
Bugs & 18 & 51 & 15 \\ \hline
Misconfiguration & 19 & 34 & 10 \\ \hline
Traffic Load & 18 & 31 & 9 \\ \hline
Cross-Service Dependencies & 14 & 28 & 8 \\ \hline
Power Outages & 11 & 21 & 6 \\ \hline
Security Attacks & 9 & 17 & 5 \\ \hline
Other Human Errors & 11 & 14 & 4 \\ \hline
Storage Failure & 4 & 13 & 4 \\ \hline
Server Failure & 6 & 11 & 3 \\ \hline
Natural Disasters & 5 & 9 & 3 \\ \hline
Other Hardware Failures & 4 & 5 & 1 \\ \hline
\end{tabular}
}
\end{table}
Cross-service dependency failures are a special category of external failures that originate at some service outside the affected cloud system. These failures show that the reliability of a cloud application depends not only on the reliability of its internal infrastructure and code, but also on services external to the application. This is especially important in the context of a microservices architecture, since a single unreliable microservice can cause cascading failures in its dependents.
\subsection{Recovery Models}
\iffalse
Cloud computing has reshaped the distributed systems landscape, introducing many changes and novel ideas that impact how engineers perceive fault tolerance and fault recovery in the cloud. Depending on the failure type and severity, cloud systems utilize various recovery mechanisms.
\fi
\textbf{Recovery by Waiting.} Recovery by waiting is typically used in cases of outside cross-service dependency failures \cite{cloud_stop_computing}. In these situations, the cloud application and infrastructure are capable of proper operation but fail due to the unavailability of an external service, and must wait for the outside resource to become available again. To avoid waiting on outside recovery, cloud applications should design for redundancy in the external dependencies they use.
\textbf{Recovery by Restart.} Since many cloud components are either stateless or have small state that can be quickly replicated and loaded onto new nodes, restarting failed or slow nodes has become common practice in cloud systems. Cloud systems are carefully guarded against server failures, making recovery by restart an easy and safe option for handling many faults. \iffalse In some environments, such as containerized virtualization platforms, failures due to container shutdown or migration are frequent and systems are built to tolerate these failures as part of the normal operation. \fi As a result, many smaller-scale problems, such as memory leaks or re-optimizations needed under shifting workloads, are addressed by restarting the nodes instead of fixing the code or introducing additional complexity to it \cite{aws_fault_tolerant_applications}. Some cloud applications constantly induce server crashes in their infrastructure to probe the system and make sure it can handle larger-scale faults \cite{chaos_monkey}.
However, recovery by restart cannot address all failures in cloud systems. Some failures are not caused by problems in the software, while more intricate problems may involve distributed state corruption that persists across individual node restarts.
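The recovery-by-restart model can be sketched as a small supervisor loop; the names and the retry limit below are illustrative assumptions:

```python
def run_with_restarts(make_node, max_restarts=3):
    """Recovery-by-restart in miniature: since the node carries no state,
    a crash fault is handled by provisioning a fresh instance and
    retrying, rather than repairing anything in place."""
    last_error = None
    for _ in range(max_restarts + 1):
        node = make_node()              # fresh node, nothing carried over
        try:
            return node()               # run to completion, or crash
        except RuntimeError as error:   # crash fault detected
            last_error = error          # restart on the next iteration
    raise last_error
```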
\textbf{Recovery by Checkpoint Reset.} Checkpoint-reset is also a popular way to bring the system back to a correct state after a fault. This type of recovery causes the system to lose some progress and restart from an older known-safe checkpoint. Because of this drawback, it cannot be used in systems providing real-time feedback to users. However, checkpoint-reset is a popular approach in data-processing \cite{mapreduce,naiad} and distributed machine learning \cite{tensorflow,petuum} applications.
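The checkpoint-reset model can be sketched as below; the single-fault injection and the checkpoint interval are illustrative assumptions:

```python
def run_with_checkpoints(items, process, fail_once_at=None, checkpoint_every=2):
    """Checkpoint-reset in miniature: the running state is snapshotted
    every few items; on a fault the system resets to the last checkpoint
    and redoes the work done since, rather than starting from scratch."""
    state, ckpt_state, ckpt_index = 0, 0, 0
    failed, i = False, 0
    while i < len(items):
        if i == fail_once_at and not failed:
            failed = True
            state, i = ckpt_state, ckpt_index    # reset: progress since the
            continue                             # snapshot is lost and redone
        state = process(state, items[i])
        i += 1
        if i % checkpoint_every == 0:
            ckpt_state, ckpt_index = state, i    # take a checkpoint
    return state
```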
\textbf{Recovery by Application Rollback.} Failures during system upgrades are the single largest cause of catastrophic failures in cloud systems \cite{cloud_stop_computing}. These failures often happen due to bugs or misconfigurations introduced with application upgrades. The natural way to remedy such faults is to roll back to the previous stable version of the application and its configuration.
Large applications often perform gradual application upgrades or configuration rollouts to the production system to minimize the impact that possible bugs may have. The idea is to catch a problem before it is deployed at full scale, thus minimizing both the severity of the problem and the time to roll back. For instance, Facebook's Configerator tool controls how new configurations roll out from testing to the production environment and how a new configuration gradually expands to cover more production nodes \cite{configerator}. Configerator monitors the system behavior with every increase of the new configuration's coverage and can automatically roll back to the previous good configuration if a problem is detected during the rollout.
\iffalse
Many of the causes for catastrophic cloud systems failures cannot be easily addressed and recovered in software. For instance, despite the redundancy in power supply at the data-center level, power failure is not recoverable in software. Hardware failure, such as network or server failures are also difficult to circumvent at the application level, but health monitors can be deployed more extensively to provide early warning on these failures. However, many of other failure causes, especially of the configuration nature, can be recovered given that the system can detect these software problems and make corrections or notify engineers in time. [[talk about facebook's configerator]]
\fi
\textbf{Recovery by State Repair.} An application state can be corrupted due to bugs in the code or configuration; however, not every cloud application can tolerate being reset to a prior checkpoint or some predetermined safe state. For such applications, engineers must design a corrector \cite{detectors_correctors} that fixes the application state when a corruption occurs. Many severe bugs are capable of corrupting the state of the application. For instance, Leesatapornwongsa et al. compiled a database of known distributed concurrency (DC) bugs, one of the most complicated bug types in distributed systems, drawn from popular cloud-scale applications \cite{taxDC}. The authors also analyzed the fixes that were devised to address these issues in the code.
Many DC bugs in \cite{taxDC} were fixed in an offline manner by patching the code to prevent the faulty execution from occurring, but some more intricate bugs could not be addressed in such a manner. Unlike the preventable bugs caused by bad message timing or incorrect timing assumptions, the more difficult DC bugs were caused by unfortunate and uncontrollable timing of component failures in the cloud applications. Such bugs were instead fixed by adding a simple corrector to the system that forward-corrects the global state to a consistent state while the system is running.
\iffalse
Cloud application tend to have small distributed global state that is easier to manage at the application level. For loosely coupled systems, coordination systems such as ZooKeeper are often used to provide shared global state and facilitate the coordination between components. In cases when engineers need to provide some stabilization to their cloud systems and return them to a state satisfying the invariants, they tend to do so by accounting for expected faults and not an arbitrary state corruption. It is much easier to define corrective actions for a known fault than for an unanticipated arbitrary state corruption. For example, the system can have mutating actions and complementary undo actions that can be used in case of a fault
to reverse the changes.
\fi
\textbf{Offline Recovery.} Many of the DC bug fixes mentioned earlier operated at the code level, preventing the problem from occurring in the fixed version. These fixes happen offline, as the application needs to go through a development cycle of fixing, testing, and deployment.
In other words, engineers develop the correctors in an ad-hoc manner, on a case-by-case basis.
\iffalse
In case of DC bugs, the state correction was outside of the scope of the fixes, and engineers we either expected to restart from some known state or perform the correction manually after redeployment patched code.
\fi
\iffalse
\textbf{Cheap and Fast Storage}
\textbf{Cheap Computation}
\textbf{Expensive Latencies}
\subsection{Arbitrary Faults}
In cases when engineers need to provide some stabilization to their cloud systems and return them to a state satisfying the invariants, they tend to do so accounting the excepted faults and not arbitrary state corruption. It is much easier to define a corrective actions for a known fault than for an unanticipated arbitrary state corruption. For example, the system can have mutating actions and a complementary undo action that can be used in case of a fault to reverse the changes. Additionally, in a practical cloud system, state tends to be relatively small, with well-defined state transitions, making arbitrary sate corruption less likely to occur.
In a data-driven system, arbitrary data-state corruption occurring while data is at rest can be remedied by replication and quorum reads with the same techniques used to address data-staleness in many of the distributed storage systems. With a quorum read, a client reads the value from multiple sources and can detect corruption if some sources return different value then the majority.
\fi
\subsection{Outline of the rest of the paper}
To illustrate how the three design principles above are embraced in cloud computing systems, in Section~\ref{sec:arch} we present examples of services that power the global-scale operations of Google and Facebook, and then delve into the architectures of these services and their interactions with each other. We introduce the service-oriented architecture (SOA) design pattern popular in cloud computing software, as well as the microservices design pattern.
In Section~\ref{sec:faults}, we review the literature on what types of faults occur in cloud computing systems and what types of fault-tolerance and recovery mechanisms are employed to deal with them.
In Section~\ref{sec:directions}, we point out opportunities for the applicability and adoption of self-stabilization techniques in emerging cloud computing topics, including managing distributed coordination when composing multiple microservices to form higher-level services (in Section~\ref{subsec:trans}), and dealing with complicated dataflow dependencies in realtime stream processing systems (in Section~\ref{subsec:dataflow})\iffalse, and in achieving online automatic recovery without requiring devops intervention (in Section~\ref{subsec:autorecovery})\fi.
\iffalse
The above were examples of trivial implementations of stabilization in the clouds. They work by reducing the state, managing it at ZK. They have applications to loosely-coupled loosely-coordinated systems. We don't see more elaborate implementation of stabilization in the clouds? Why not?
\fi
\iffalse
4. Use CRDTs to enable fast concurrent updates, without waiting for ZK serialization!
E.g., Dynamo (DHT) adopted eventual consistency.
5. Use existing infrastructure for recovery/fault-tolerance!
ZK enables coordination of recovery. There is even Zeus (Zeus: Facebook: Sign-off, rollout, canaries, and rollback). For recovery, the most common action is restart.
You can also use tools like chaos monkey to strengthen your system by debugging/improving the recovery code.
\fi
\iffalse
We mention at the end, some new directions where self-stabilization research and techniques would see applicability and adoption.
\fi
\section{Introduction}
\label{sec:intro}
\input{intro.tex}
\section{Cloud Computing Design Principles}
\label{sec:arch}
\input{arch.tex}
\section{Cloud Computing Faults and Recovery}
\label{sec:faults}
\input{diss.tex}
\section{Future Directions for Stabilization in the Cloud}
\label{sec:directions}
\input{trans.tex}
\input{dataflow.tex}
\input{autorecovery.tex}
\section{Concluding remarks}
\label{sec:concl}
\input{concl.tex}
\bibliographystyle{acm}
\subsection{Distributed Coordination at Application Level}
\label{subsec:trans}
Cloud applications are often built on top of frameworks and architectures that hide the complexity of distributed systems. For example, MapReduce provides powerful batch-processing capabilities while keeping a simple API and allowing the application developer to write code as if they were developing a single-threaded application \cite{mapreduce}. Distributed transactions accomplish similar goals, as they allow developers to think of distributed and concurrent requests as sequential operations on a single machine.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{figures/hotel_reservation1.pdf}
\caption{An example of a monolithic hotel reservation service.}
\label{fig:hotel_reservation1}
\end{figure}
A transaction updates the state of multiple remote nodes or components in a cloud application in such a way that either all components successfully change their state or none do. For example, consider our earlier hotel reservation application, which performs multiple actions for a booking to be complete. The application consists of a coordinator and two components: room reservation and credit card processing, as shown in Figure~\ref{fig:hotel_reservation1}. With the help of transactions, the application can tolerate component failures without the global state being affected. For instance, if the room reservation component successfully marks the room as reserved, but the credit card processing fails, then the system will abort the transaction and the room reservation will not be preserved in the state of the application.
Traditionally, transactional systems \cite{spanner,sinfonia} employ atomic commit protocols, such as two-phase commit (2PC), to perform transactions. However, 2PC is regarded as a slow protocol with a number of corner cases that affect its performance and availability \cite{pnuts,bigtable}.
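To make the commit discipline concrete, here is a minimal single-process sketch of the 2PC idea: the coordinator commits only if every participant votes yes in the prepare phase. This is an illustration only, not the actual protocol implementation of the cited systems; the class and function names are invented for this example.

```python
# Toy illustration of two-phase commit (2PC): prepare/vote, then commit/abort.
# Single-process stand-in; real 2PC involves network messages and durable logs.

class Participant:
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        # Phase 1: vote yes only if the local action can be made durable.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):
        # Phase 2: apply the coordinator's global decision.
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]   # phase 1: collect votes
    decision = all(votes)                         # commit only on unanimous yes
    for p in participants:                        # phase 2: broadcast decision
        p.finish(decision)
    return decision

room, card = Participant(True), Participant(False)
print(two_phase_commit([room, card]), room.state)   # False aborted
```

Even though the room participant was prepared to commit, the single "no" vote from the card participant aborts both, which is exactly the all-or-nothing property described above.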
\iffalse
In addition to atomicity, distributed transactions also provide isolation guarantees in that concurrently executing transaction appear as they have executed in some sequential form. As such, conflicting concurrent transactions must be ordered wit respect to each other.
Traditionally, transactional systems \cite{spanner,sinfonia} employ some form of inter-node coordination to facilitate the transactions. Atomic commit protocols, such as two-phase commit (2PC) and its variations, are popular solutions for transaction commit in data-storage systems. These protocols achieve atomic and consistent state change at all nodes, ensuring that under normal operation all nodes never violate consistency invariant and either commit the transaction or abort it. However, 2PC is regarded as slow protocol with a number of corner cases that affect its performance and availability \cite{pnuts,bigtable}.
On the large scale, Google's spanner database is one notable example to successfully use 2PC despite the protocol's shortcomings. Spanner is a geo-distributed cloud database with support of transactions \cite{spanner}. In Spanner, data is replicated across Paxos-backed spanner nodes forming Paxos-groups. Many small transactions can be contained within a single Paxos-group and are carried out at the Paxos level. However, large transactions may involve multiple Paxos-groups and Spanner uses a 2PC to synchronize between the Paxos-groups involved in the request. Each coordinator, or transaction-manager, in Spanner is backed its Paxos-group, and therefore Spanner is less susceptible for coordinator failures, addressing availability problems of 2PC. Google also claims to achieve good average commit latency of around 42.7 ms when the number of nodes participating in the transaction is less than 50, however its scalability to larger transactions is severely limited \cite{spanner}. Most transactions do not span across Paxos-groups, reducing the dependability on 2PC for transactions. Additionally, not all applications and design approaches can fully benefit from transactional databases like Spanner.
\fi
Transactions do not fit well in geo-distributed heterogeneous cloud systems for performance reasons~\cite{apostateHelland}. \iffalse For example, transactions can be used with loosely-coupled SOA-based designs, but coordination tools, such as ZooKeeper \cite{zookeeper}, must be in place to facilitate the synchronization.\fi Moreover, developers cannot always rely on having a common coordination tool in microservice designs, because some of the services may be external to the application and maintained by another party. Returning to our hotel booking example, every component of the reservation service had little knowledge of the other components' states, but at least these components were part of the same infrastructure and could coordinate. Now consider an example in which the hotel reservation system needs to interact with a different external booking microservice for each of the hotel brands, as illustrated in Figure~\ref{fig:hotel_reservation2}. An external microservice can mutate its own state, such as booking a room or canceling a reservation, but it cannot participate in the internal transactions, since the infrastructure and protocols to coordinate between the microservices are lacking.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{figures/hotel_reservation2.pdf}
\caption{An example of a micro-service hotel reservation system.}
\label{fig:hotel_reservation2}
\end{figure}
Despite the fact that transactional solutions do not work well with microservices architectures, especially in the presence of outside actors, we often need to provide some of the transactional guarantees to such cloud systems. In particular, if one of the microservices in the request is not successful, we need to revert the state of the microservices that have already changed their states. We illustrate this process in Figure~\ref{fig:corrective}.
These corrective actions are typically written at the coordinator layer of the application in an ad-hoc manner and are not enforced by a specialized protocol. With this corrective approach, engineers need to provide undo-actions for the anticipated failures. For instance, if the room booking microservice has already reserved the room when the credit card processing microservice fails to charge the credit card, the coordinator needs to issue a corrective action to the booking microservice and cancel the reservation in order to bring the overall global system into a consistent state.
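The coordinator-level undo-action pattern can be sketched as follows. This is a hypothetical single-process illustration; the service names and functions stand in for real microservice calls and are not from any actual booking API.

```python
# Sketch of coordinator-level corrective actions: each forward step is paired
# with a compensating undo action, applied in reverse order on failure.

def book_room(state):
    state["room_reserved"] = True

def cancel_room(state):
    state["room_reserved"] = False

def charge_card(state):
    raise RuntimeError("card declined")   # simulate the failing microservice

def refund_card(state):
    state["card_charged"] = False

def run_with_compensation(steps, state):
    """Run (action, undo) pairs; on failure, undo completed steps in reverse."""
    completed = []
    for action, undo in steps:
        try:
            action(state)
            completed.append(undo)
        except Exception:
            for undo_done in reversed(completed):
                undo_done(state)          # corrective actions
            return False
    return True

state = {"room_reserved": False, "card_charged": False}
ok = run_with_compensation([(book_room, cancel_room),
                            (charge_card, refund_card)], state)
print(ok, state["room_reserved"])   # False False: reservation was rolled back
```

Here the room was reserved first, the card charge failed, and the coordinator issued the compensating cancellation, leaving the global state consistent.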
\begin{figure}%
\centering
\subfloat[][]{\includegraphics[width=2.22in]{figures/corrective_a.pdf}}%
\qquad
\subfloat[][]{\includegraphics[width=2.22in]{figures/corrective_b.pdf}}
\caption{Hotel reservation example with corrective actions. (a) Credit card micro-service is not successful at charging the card. (b) Correcting the state of room-booking micro-service.}
\label{fig:corrective}%
\end{figure}
\iffalse
The idea of ensuring the correctness of the system at the application is popular among web-developers. For instance, one of the philosophies of Ruby on Rails framework is to separate application logic from database, making developers write code to enforce data consistency and invariants at the application level ~\cite{feral_concurrency}. While this approach simplifies the requirements for the database layer of the system, the applications logic gets more complicated for sufficiently large systems. Additionally, engineers design the rules for enforcing data consistency and concurrency control based on failures they anticipate to happen, potentially leaving the system vulnerable for rare or unexpected faults. However, there are protocols, such as distributed sagas~\cite{distributed_saga}, that aim to provide more structured and formal approach to guaranteeing the correctness of distributed state in microservice cloud applications.
\fi
Self-stabilization can be useful in these scenarios, where distributed microservices need to be coordinated and kept eventually consistent in the face of failures.
On the other hand, in order to become applicable to large-scale cloud systems, the self-stabilization methodology should allow certain compromises. For instance, recovery to an approximate invariant could be supported. Another example would be to design stabilization within a limited/specialized framework written to handle distributed coordination, such as distributed sagas~\cite{distributed_saga}.
\iffalse
This is messy, leads to distributed invariant, and distributed state synchronization. This is where self-stabilization can be useful. But not in the traditional sense of calculating an entire invariant and recovering to it. Certain compromises limitations need to be made it to be applied practically. Calculating approximate invariants could be an example. Another example would be to design stabilization in a limited/specialized framework written to handle distributed transactions, such as distributed sagas~\cite{distributed_saga}.
\fi
\section{Introduction}
A binary code $C$ of length $n$ over the finite field $\mathbb{Z}_2$ is a subset of $\mathbb{Z}_{2}^{n}$. If $C$ is a vector subspace of $\mathbb{Z}_{2}^{n}$, then the code is called linear. A quaternary code $\mathcal{C}$ is a subset of $\mathbb{Z}_{4}^{n}$, and it is said to be linear if it is a $\mathbb{Z}_4$-submodule. In \cite{Delsarte1973}, Delsarte proposed the definition of additive codes, which are subgroups of the underlying abelian group in a translation association scheme. In particular, for a binary Hamming scheme, when the underlying abelian group is of order $2^{n}$, the only possible structures for the abelian group are those of the form $\mathbb{Z}_{2}^{\alpha}\times \mathbb{Z}_{4}^{\beta}$, where $n=\alpha+2\beta$ (\cite{Delsarte1998}). Hence, the only additive codes in a binary Hamming scheme are the subgroups $\mathcal{C}$ of $\mathbb{Z}_{2}^{\alpha}\times \mathbb{Z}_{4}^{\beta}$. In order to distinguish them from additive codes over finite fields (see \cite{Bachoc2000,Bierbrauer2005,Blokhuis2004,J-L.Kim2003}), we call them $\mathbb{Z}_{2}\mathbb{Z}_{4}$-additive codes. Foundational results on $\mathbb{Z}_{2}\mathbb{Z}_{4}$-additive codes, including the generator matrix and the existence and construction of self-dual codes, can be found in \cite{Borges2009}.
Now, we present another important ring $R$ containing four elements: $R=\mathbb{Z}_{2}+u\mathbb{Z}_{2}=\{0,1,u,1+u\}$ with $u^{2}=0$. It is well known that the ring $\mathbb{Z}_{2}$ is a subring of $R$. Similarly to $\mathbb{Z}_{2}\mathbb{Z}_{4}$-additive codes, we define the following set:
$$\mathbb{Z}_{2}^\alpha \times R^\beta=\{(\mathbf{a},\mathbf{b})\mid~\mathbf{a}\in \mathbb{Z}_{2}^\alpha,~\mathbf{b}\in R^\beta\}.$$
The set $\mathbb{Z}_{2}^\alpha\times R^\beta$ is not an $R$-module under the usual componentwise multiplication, since multiplication by $u\in R$ is not well defined on the $\mathbb{Z}_2$ coordinates. In order to make it well defined and enrich the set with an algebraic structure, we introduce a new multiplication as follows.
Define a map
$$\eta:R\longrightarrow \mathbb{Z}_{2},\quad r+uq\longmapsto r.$$
Clearly, the map $\eta$ is a ring homomorphism. Using this map, we can define a scalar multiplication as follows: for $\nu=(a_{1},a_{2},\dots,a_{\alpha}\mid b_{1},b_{2},\dots,b_{\beta})\in \mathbb{Z}_{2}^\alpha\times R^{\beta}$ and $d\in R$, we have
\begin{equation}\label{eq:1.1q}
d\nu=(\eta(d)a_{1},\cdots,\eta(d)a_{\alpha}\mid db_{1},\cdots,db_{\beta}).
\end{equation}
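The scalar multiplication \eqref{eq:1.1q} can be illustrated with a small computational sketch (not part of the paper), assuming we encode an element $r+uq\in R$ as the pair $(r,q)$:

```python
# Sketch of the scalar multiplication (1.1) on Z2^alpha x R^beta.
# An element r + uq of R = Z2 + uZ2 is encoded as the pair (r, q), u^2 = 0.

def r_mul(x, y):
    # (r1 + u q1)(r2 + u q2) = r1 r2 + u(r1 q2 + q1 r2), since u^2 = 0
    return ((x[0] * y[0]) % 2, (x[0] * y[1] + x[1] * y[0]) % 2)

def eta(d):
    # eta(r + uq) = r, a ring homomorphism R -> Z2
    return d[0]

def scalar(d, a, b):
    # d * (a | b) = (eta(d) a_1, ..., eta(d) a_alpha | d b_1, ..., d b_beta)
    return ([(eta(d) * ai) % 2 for ai in a], [r_mul(d, bi) for bi in b])

u, one = (0, 1), (1, 0)
# u * (1, 1 | 1, u): the Z2 part is killed since eta(u) = 0, and u * u = 0
print(scalar(u, [1, 1], [one, u]))   # ([0, 0], [(0, 1), (0, 0)])
```

The example shows why the map $\eta$ is needed: multiplying by $u$ annihilates the binary coordinates, which is exactly what makes the set an $R$-module.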
\begin{define} A linear code $\mathcal{C}$ is called a $\mathbb{Z}_2 R$ linear code if it is an $R$-submodule of $\mathbb{Z}_2^\alpha \times R^\beta$ with respect to the scalar multiplication defined in \eqref{eq:1.1q}. The binary image $C=\Phi(\mathcal{C})$ is called a $\mathbb{Z}_2 R$-linear code of length $n=\alpha+2\beta$, where $\Phi$ is the map from $\mathbb{Z}_2^\alpha \times R^\beta$ to $\mathbb{Z}_2^n$ defined as \[\Phi(a,b)=(a_1,\cdots,a_{\alpha}\mid \phi(b_1),\cdots,\phi(b_{\beta})),\]
for all $a=(a_1,\cdots,a_{\alpha})\in \mathbb{Z}_2^\alpha$ and $b=(b_1,\cdots,b_{\beta})\in R^\beta$. Furthermore, $\phi: R \to \mathbb{Z}_2^2$ is defined by $\phi(0)=(0,0)$, $\phi(1)=(0,1)$, $\phi(u)=(1,1)$, $\phi(1+u)=(1,0)$.
\end{define}
With the above preparation, Aydogdu et al. in \cite{Ismail Aydogdu2014} obtained the standard form matrix of $\mathbb{Z}_2R$ linear codes.
\begin{theorem}\label{th:standard form}{\rm \cite{Ismail Aydogdu2014}}
Let $\mathcal{C}$ be a $\mathbb{Z}_{2}R$ linear code of type $(\alpha,\beta;\gamma,\delta;\kappa)$. Then $\mathcal{C}$ is permutation equivalent to a $\mathbb{Z}_{2}R$ linear code with the standard form generator matrix
\begin{equation}\label{eq:standard matrix}
G =\left(
\begin{array}{cc|ccc}
I_{\kappa} & A_{1} & uT & 0 & 0 \\
0 & 0 & uD & uI_{\gamma-\kappa} & 0 \\
0 & S & B_{1}+uB_{2} & A & I_{\delta}\\
\end{array}
\right),
\end{equation}
where $A,A_{1},B_{1},B_{2},D,S$ and $T$ are matrices over $\mathbb{Z}_{2}$.
\end{theorem}
From Theorem~\ref{th:standard form}, it is easy to see that a $\mathbb{Z}_{2}R$ linear code $\mathcal{C}$ is isomorphic to $\mathbb{Z}_{2}^{\gamma}\times \mathbb{Z}_{2}^{2\delta}$, and hence $|\mathcal{C}|=2^{\gamma+2\delta}$. Moreover, having the generator matrix as above, we say that $\mathcal{C}$ is of type $(\alpha,\beta;\gamma,\delta;\kappa)$. The parameter $\kappa$ can be defined as follows.
Let $X$ (respectively $Y$) be the set of $\mathbb{Z}_{2}$ (respectively $R$) coordinate positions, so $|X|=\alpha$ (respectively $|Y|=\beta$). Unless otherwise stated, $X$ corresponds to the first $\alpha$ coordinates and $Y$ corresponds to the last $\beta$ coordinates. Call $\mathcal{C}_{X}$ (respectively $\mathcal{C}_{Y}$) the punctured code of $\mathcal{C}$ obtained by deleting the coordinates outside $X$ (respectively $Y$). Let $\mathcal{C}_{b}$ be the subcode of $\mathcal{C}$ consisting of all codewords of the form $(x|y_{1},y_{2},\dots,y_{\beta})$, where $x\in \mathbb{Z}_{2}^{\alpha}$ and $y_{i}\in \{0,u\}$ for $i=1,2,\dots,\beta$. Then $\kappa=\dim(\mathcal{C}_{b})_{X}$. For the case $\alpha=0$, we take $\kappa=0$.
\begin{define}An inner product for two vectors $\mathbf{v}=(v_1,\cdots,v_\alpha|v_{\alpha+1},\cdots,v_{\alpha+\beta}), \mathbf{w}=(w_1,\cdots,w_\alpha|w_{\alpha+1},\cdots,w_{\alpha+\beta})\in \mathbb{Z}_{2}^{\alpha}\times (\mathbb{Z}_{2}+u\mathbb{Z}_{2})^{\beta}$ is defined as
\begin{equation}\label{eq:innerproduct}
\langle \mathbf{v},\mathbf{w}\rangle=u\left(\sum\limits_{i=1}^\alpha v_{i}w_{i}\right)+\sum\limits_{j=\alpha+1}^{\alpha+\beta} v_{j}w_{j}\in R=\mathbb{Z}_{2}+u\mathbb{Z}_{2}.
\end{equation}
\end{define}
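As a quick illustration of the inner product \eqref{eq:innerproduct}, here is a small computational sketch (not from the paper), again encoding $r+uq\in R$ as the pair $(r,q)$:

```python
# Sketch of the inner product (1.3) on Z2^alpha x R^beta; the result is an
# element of R, returned as a pair (r, q) meaning r + uq.

def r_add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def r_mul(x, y):
    return ((x[0] * y[0]) % 2, (x[0] * y[1] + x[1] * y[0]) % 2)

def inner(va, vb, wa, wb):
    # <v, w> = u * (binary dot product) + (dot product of the R parts)
    acc = (0, sum(a * b for a, b in zip(va, wa)) % 2)   # u * sum(v_i w_i)
    for x, y in zip(vb, wb):
        acc = r_add(acc, r_mul(x, y))
    return acc

one, u, zero = (1, 0), (0, 1), (0, 0)
# two vectors of Z2^4 x R^2 that turn out to be orthogonal:
print(inner([1, 0, 1, 0], [u, zero], [0, 1, 0, 1], [u, zero]))   # (0, 0)
```

A vector is self-orthogonal under this product exactly when the expression evaluates to $(0,0)$, i.e., to $0\in R$.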
This inner product leads to the definition of dual codes.
\begin{define}
Let $\mathcal{C}$ be a $\mathbb{Z}_{2}R$ code. We denote the dual of $\mathcal{C}$ by $\mathcal{C}^\perp$, which is defined as
$$\mathcal{C}^\perp=\{\mathbf{w}\in \mathbb{Z}_{2}^{\alpha}\times (\mathbb{Z}_{2}+u\mathbb{Z}_{2})^{\beta}\mid \langle \mathbf{v},\mathbf{w}\rangle=0~{\rm for~all}~ \mathbf{v}\in \mathcal{C}\}.$$
We say that $\mathcal{C}$ is self-orthogonal if and only if $\mathcal{C}\subseteq \mathcal{C}^\perp$ and $\mathcal{C}$ is self-dual if and only if $\mathcal{C}=\mathcal{C}^\perp$.
\end{define}
Following the above definitions, the standard form generator matrix of $\mathcal{C}^\perp$ can be obtained.
\begin{theorem}\label{th:dualcodestandardform}{\rm \cite{Ismail Aydogdu2014}}
Let $\mathcal{C}$ be a $\mathbb{Z}_{2}R$ linear code of type $(\alpha,\beta;\gamma,\delta;\kappa)$ with standard form matrix defined in \eqref{eq:standard matrix}. Then the generator matrix of $\mathcal{C}^{\perp}$ is given as
$$
H=\left(
\begin{array}{cc|ccc}
A_{1}^{t}& I_{\alpha-\kappa} & 0 & 0 & uS^{t} \\
0 & 0 & 0 & uI_{\gamma-\kappa} & uA^{t} \\
T^{t} & 0 & I_{\beta+\kappa-\gamma-\delta} & D^{t} & (B_{1}+uB_{2})^{t}+D^{t}A^{t}\\
\end{array}
\right),
$$
where $A,A_{1},B_{1},B_{2},D$ and $T$ are matrices over $\mathbb{Z}_{2}$.
\end{theorem}
From Theorem~\ref{th:dualcodestandardform}, we know that the dual code $\mathcal{C}^{\perp}$ is a $\mathbb{Z}_{2}R$ linear code of type $(\alpha,\beta;\bar{\gamma},\bar{\delta};\bar{\kappa})$, where
\[
\begin{cases}
\bar{\kappa}=\alpha-\kappa;\\
\bar{\gamma}=\alpha+\gamma-2\kappa;\\
\bar{\delta}=\beta-\gamma-\delta+\kappa\\
\end{cases}
\]
Let $(\textbf{v}| \textbf{w})=(v_{1},\dots,v_{\alpha}\mid w_{1},\dots,w_{\beta})\in \mathbb{Z}_{2}^{\alpha}\times R^{\beta}$. The Gray map defined on $R$ can be expressed as follows: $\phi(a+bu)=(b,a+b)$, where $a,b\in \mathbb{Z}_{2}$, and the Lee weight is $wt_{L}(a+bu)=wt_{H}(b,a+b)$. Hence, $wt_L(\textbf{v}|\textbf{w})=wt_{H}(\textbf{v})+wt_{L}(\textbf{w})$.
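The Gray map and the Lee weight can be sketched computationally as follows (an illustration, with $a+bu\in R$ encoded as the pair $(a,b)$):

```python
# Sketch of the Gray map phi(a + bu) = (b, a + b) and the Lee weight on
# R = Z2 + uZ2, with elements encoded as pairs (a, b) meaning a + bu.

def phi(x):
    a, b = x
    return (b, (a + b) % 2)

def wt_lee(x):
    return sum(phi(x))           # wt_L(a + bu) = wt_H(b, a + b)

def wt_codeword(v, w):
    # wt_L(v | w) = wt_H(v) + wt_L(w)
    return sum(v) + sum(wt_lee(x) for x in w)

zero, one, u, one_u = (0, 0), (1, 0), (0, 1), (1, 1)
print([phi(x) for x in (zero, one, u, one_u)])
# images: (0,0), (0,1), (1,1), (1,0), matching the table phi(0), phi(1),
# phi(u), phi(1+u) above
print(wt_codeword([1, 0, 1], [u, one_u]))   # 2 + 2 + 1 = 5
```

Note that $wt_L(0)=0$, $wt_L(1)=wt_L(1+u)=1$ and $wt_L(u)=2$, which is the weight distribution used in all the enumerator computations below.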
Let $\mathcal{C}$ be a $\mathbb{Z}_2R $ linear code of type $(\alpha,\beta;\gamma,\delta;\kappa)$ and $n=\alpha+2\beta$. The weight enumerator of code $\mathcal{C}$ is defined as
$$W(X,Y)=\sum\limits_{c\in \mathcal{C}}X^{n-wt_L(c)}Y^{wt_L(c)}.$$
Aydogdu et al. \cite{Ismail Aydogdu2014} gave the following result.
\begin{theorem}\label{th:MacIden}{\rm \cite{Ismail Aydogdu2014}}
Let $\mathcal{C}$ be a $\mathbb{Z}_{2}R$ linear code. The relation between the weight enumerators of $\mathcal{C}$ and $\mathcal{C}^\perp$ is given by the following identity:
$$W_{\mathcal{C}^{\perp}}(X,Y)=\frac{1}{\mid\mathcal{C}\mid}W_{\mathcal{C}}(X+Y,X-Y).$$
\end{theorem}
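For a self-dual code the identity constrains the weight enumerator itself: $W_{\mathcal{C}}(X,Y)=\frac{1}{|\mathcal{C}|}W_{\mathcal{C}}(X+Y,X-Y)$. The following quick numeric sanity check (ours, not from the paper) verifies this for the enumerator $W(X,Y)=X^8+14X^4Y^4+Y^8$ of the self-dual code $\mathcal{C}_1$ constructed below, where $|\mathcal{C}_1|=16$:

```python
# Numeric check of the MacWilliams identity for a self-dual code with
# |C| = 16 and W(X, Y) = X^8 + 14 X^4 Y^4 + Y^8: we must have
# 16 * W(x, y) == W(x + y, x - y).  Degree-8 polynomials in two variables
# agreeing on an 11 x 11 integer grid must agree identically.

def W(x, y):
    return x**8 + 14 * x**4 * y**4 + y**8

ok = all(16 * W(x, y) == W(x + y, x - y)
         for x in range(-5, 6) for y in range(-5, 6))
print(ok)   # True
```

The same check with any non-self-dual enumerator would fail, so this is a cheap way to catch arithmetic slips in hand-computed enumerators.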
Based on the above definitions, we can construct a self-dual code $\mathcal{C}_1$ in $\mathbb{Z}_{2}^4\times R^2$ of type $(4,2;2,1;2)$, which is generated by the following matrix
$$\left(\begin{array}{cccc|cc}
1 & 0 & 1 & 0 & u & 0 \\
0 & 1 & 0 & 1 & u & 0 \\
0 & 0 & 1 & 1 & 1 & 1
\end{array}
\right).$$
It is easy to check that the weight enumerator of $\mathcal{C}_1$ is
\[W(X,Y)=X^{8}+14X^4Y^4+Y^{8}, \]
which implies that all codewords have doubly-even weight.
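This enumerator can be confirmed by brute force. The sketch below (ours, for verification only) generates all $R$-linear combinations of the three rows above, using the pair encoding $(a,b)=a+bu$:

```python
# Brute-force check that the code generated by the matrix above has 16
# codewords whose Lee weights give W(X,Y) = X^8 + 14 X^4 Y^4 + Y^8.
# R-elements are pairs (a, b) meaning a + bu, u^2 = 0.

from collections import Counter
from itertools import product

def r_add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def r_mul(x, y):
    return ((x[0] * y[0]) % 2, (x[0] * y[1] + x[1] * y[0]) % 2)

def scalar(d, a, b):
    # d * (a | b) = (eta(d) a | d b), with eta(a + bu) = a
    return tuple((d[0] * ai) % 2 for ai in a), tuple(r_mul(d, bi) for bi in b)

def add(cw1, cw2):
    (a1, b1), (a2, b2) = cw1, cw2
    return (tuple((x + y) % 2 for x, y in zip(a1, a2)),
            tuple(r_add(x, y) for x, y in zip(b1, b2)))

def wt_lee(cw):
    a, b = cw
    return sum(a) + sum(y + (x + y) % 2 for x, y in b)   # phi(a+bu)=(b,a+b)

one, u, zero = (1, 0), (0, 1), (0, 0)
rows = [((1, 0, 1, 0), (u, zero)),
        ((0, 1, 0, 1), (u, zero)),
        ((0, 0, 1, 1), (one, one))]

code = set()
for d1, d2, d3 in product([zero, one, u, (1, 1)], repeat=3):
    cw = ((0, 0, 0, 0), (zero, zero))
    for d, row in zip((d1, d2, d3), rows):
        cw = add(cw, scalar(d, *row))
    code.add(cw)

# 16 codewords; Lee weights: 0 once, 4 fourteen times, 8 once
print(len(code), Counter(wt_lee(c) for c in code))
```

The weight multiset $\{0^1, 4^{14}, 8^1\}$ matches the enumerator above and confirms that every codeword has doubly-even weight.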
In \cite{Borgesselfdual}, Borges et al. studied self-dual codes over $\mathbb{Z}_{2}\mathbb{Z}_{4}$. Three types of self-dual codes are defined, and for
each type, the possible values of $\alpha,\beta$ such that there exists a code $\mathcal{C}\subseteq \mathbb{Z}_{2}^\alpha\times \mathbb{Z}_{4}^\beta$ are established.
Borges et al. defined a self-dual code to be Type II if all of its codewords have doubly-even weight, and they showed that if $\mathcal{C}$ is a Type II $\mathbb{Z}_{2}\mathbb{Z}_{4}$-additive code, then $\alpha\equiv 0~({\rm mod} ~8)$. Recall the linear code $\mathcal{C}_1$ in $\mathbb{Z}_{2}^4\times R^2$ above: all of its codewords have doubly-even weight although $\alpha=4$, a case that does not exist in \cite{Borgesselfdual}. Motivated by this discovery, we further study self-dual codes over $\mathbb{Z}_{2}R$. As in \cite{Borgesselfdual}, three types of self-dual codes are defined. We also give the existence condition for each type and present several approaches to construct self-dual codes. Finally, we study self-dual codes over $\mathbb{Z}_{2}R$ with two nonzero weights, and the structure of these codes is described.
This paper is organized as follows. In Section $2$, we study the properties of self-dual codes over $\mathbb{Z}_2R$ and give the existence conditions for the three types. In Section $3$, we give several approaches to constructing self-dual codes over $\mathbb{Z}_2R$. In Section $4$, we determine the structure of two-weight self-dual codes over $\mathbb{Z}_{2}R$ for $\alpha\cdot\beta\neq0$.
\section{Self-dual codes over $\mathbb{Z}_2R$}
\begin{lemma}\label{lem:selfdualcanshu}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear self-dual code. Then $\mathcal{C}$ is of type $(2\kappa,\beta;\beta+\kappa-2\delta,\delta;\kappa)$, $|\mathcal{C}|=2^{\kappa+\beta}$ and $|\mathcal{C}_b|=2^{\kappa+\beta-\delta}$.
\end{lemma}
\begin{proof} By Theorem~\ref{th:standard form} and Theorem~\ref{th:dualcodestandardform}, we finish the proof.\end{proof}
Hence, we have the following result.
\begin{cor}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear self-dual code of type $(\alpha,\beta;\gamma,\delta;\kappa)$. Then $\alpha$ and $n$ are both even.
\end{cor}
\begin{lemma}\label{wandNwareeven}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear self-dual code and $(\mathbf{v}| \mathbf{w})\in \mathcal{C}$. Denote by $N(\mathbf{w})$ the number of unit ($1$ or $1+u$) coordinates of the vector $\mathbf{w}\in R^\beta$. Then $wt_H(\mathbf{v})$ and $N(\mathbf{w})$ are both even. Moreover, $(\mathbf{1}^\alpha,\mathbf{0}^\beta)$, $(\mathbf{0}^\alpha,\mathbf{u}^\beta)$ and $(\mathbf{1}^\alpha,\mathbf{u}^\beta)$ are all in $\mathcal{C}$, where $\mathbf{a}^r$ denotes the tuple $\overbrace{(a,a,\cdots,a)}^r$.
\end{lemma}
\begin{proof} Since $\mathcal{C}$ is a self-dual code, for any codeword $(\mathbf{v}\mid \mathbf{w})\in \mathcal{C}$ we have $\langle(\mathbf{v}| \mathbf{w}),(\mathbf{v}| \mathbf{w})\rangle=u\cdot wt_H(\mathbf{v})+N(\mathbf{w})=0\in R$. Note that $wt_H(\mathbf{v})$ and $N(\mathbf{w})$ are integers, so $wt_H(\mathbf{v})$ and $N(\mathbf{w})$ are both even. Hence $\langle(\mathbf{1}^\alpha|\mathbf{0}^\beta),(\mathbf{v}|\mathbf{w})\rangle=u\cdot wt_H(\mathbf{v})=0$ and $\langle(\mathbf{0}^\alpha|\mathbf{u}^\beta),(\mathbf{v}|\mathbf{w})\rangle=u\cdot N(\mathbf{w})=0$ for every codeword, which gives $(\mathbf{1}^\alpha,\mathbf{0}^\beta),(\mathbf{0}^\alpha,\mathbf{u}^\beta)\in \mathcal{C}^\perp=\mathcal{C}$; their sum $(\mathbf{1}^\alpha,\mathbf{u}^\beta)$ is then also in $\mathcal{C}$. We are done.\end{proof}
\begin{lemma}\label{lem:CBXself}
Let $\mathcal{C}$ be a linear self-dual code, then the subcode $(\mathcal{C}_b)_X$ is a binary self-dual code.
\end{lemma}
\begin{proof} By Lemma~\ref{lem:selfdualcanshu}, $\mathcal{C}$ is of type $(2\kappa,\beta;\beta+\kappa-2\delta,\delta;\kappa)$. Note that for any two codewords $(\mathbf{x}|\mathbf{y})$, $(\mathbf{v}|\mathbf{w})\in \mathcal{C}_b$, all coordinates of $\mathbf{y}$ and $\mathbf{w}$ lie in $\{0,u\}$ and $u^2=0$, so $\sum_j y_jw_j=0$ and hence $u\sum_i x_iv_i=0$. This implies $(\mathcal{C}_b)_X\subseteq(\mathcal{C}_b)_X^\perp$. Since the dimension of $(\mathcal{C}_b)_X$ is $\kappa$ and the length of $(\mathcal{C}_b)_X$ is $\alpha=2\kappa$, we are done.\end{proof}
\begin{lemma}
Let $\mathcal{C}$ be a linear self-dual code of type $(2\kappa,\beta;\beta+\kappa-2\delta,\delta;\kappa)$. There exists an integer $r$, $1\leq r\leq\kappa$, such that each codeword in $\mathcal{C}_Y$ appears $2^r$ times in $\mathcal{C}$, and $|\mathcal{C}_Y|\geq2^\beta$.
\end{lemma}
\begin{proof} Denote the subcode $\mathcal{C}_0=\{(\mathbf{v}|\mathbf{0})\in \mathcal{C}\}$. It is easy to see that $(\mathcal{C}_0)_X$ is a linear binary code of dimension $r=\dim(\mathcal{C}_0)_X$. Thus, any vector in $\mathcal{C}_Y$ appears $2^r$ times in $\mathcal{C}$. Note that $(\mathcal{C}_0)_X\subseteq (\mathcal{C}_b)_X$, so $r\leq\kappa$. Since $|\mathcal{C}|=2^{\beta+\kappa}=|\mathcal{C}_Y||\mathcal{C}_0|$, we get $|\mathcal{C}_Y|\geq2^\beta$.\end{proof}
For convenience, we define the following notation for any two codewords $(\mathbf{v}| \mathbf{w})$, $(\mathbf{x}| \mathbf{y})\in \mathcal{C}$.
\begin{table}[!h]
\centering
\vspace*{10pt}
\begin{tabular}{ l l }
\hline
$N(\mathbf{w})$ & the number of unit coordinates of $\mathbf{w}$ \\
$N_u(\mathbf{w})$ & the number of coordinates of $\mathbf{w}$ equal to $u$ \\
$N_{1,1}(\mathbf{w},\mathbf{y})$ & $\# \{i \mid w_i=1 ~{\rm or}~ 1+u, y_i=1 ~{\rm or}~ 1+u, 1\leq i\leq \beta\}$\\
$N_{1,u}(\mathbf{w},\mathbf{y})$ & $\# \{i \mid w_i=1 ~{\rm or}~ 1+u, y_i=u, 1\leq i\leq \beta\}$ \\
$ N_{u,1}(\mathbf{w},\mathbf{y})$ &$ \# \{i \mid w_i=u, y_i=1 ~{\rm or}~ 1+u, 1\leq i\leq \beta\}$ \\
$ N_s(\mathbf{w},\mathbf{y})$ & $\#\{i \mid w_i=y_i=1 ~{\rm or}~ 1+u, 1\leq i\leq \beta\}$ \\
$N_d(\mathbf{w},\mathbf{y})$ & $\#\{i \mid w_i=1, y_i=1+u ~{\rm or}~ w_i=1+u, y_i=1, 1\leq i\leq \beta\}$ \\
\hline
\end{tabular}
\end{table}
\begin{lemma}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear self-dual code, and $(\mathbf{v}|\mathbf{w})$, $(\mathbf{x}| \mathbf{y})$ are two codewords in $\mathcal{C}$. Then we have $N_s(\mathbf{w},\mathbf{y}) \equiv N_d(\mathbf{w},\mathbf{y}) ~({\rm mod}~ 2)$ and $N_{1,1}(\mathbf{w},\mathbf{y})$ is even.
\end{lemma}
\begin{proof} Let $(\mathbf{v}| \mathbf{w})=(v_1,\cdots,v_\alpha | w_1,\cdots,w_\beta)$, $(\mathbf{x}| \mathbf{y})=(x_1,\cdots,x_\alpha|y_1,\cdots,y_\beta)$. In the following, we consider the inner product of $\langle(\mathbf{v}| \mathbf{w}),(\mathbf{x}| \mathbf{y})\rangle$. By \eqref{eq:innerproduct}, one has
\begin{eqnarray*}
&&\langle(\mathbf{v}| \mathbf{w}),(\mathbf{x}| \mathbf{y})\rangle\\
&=&u\sum_{i=1}^\alpha v_ix_i+uN_{1,u}(\mathbf{w},\mathbf{y})+uN_{u,1}(\mathbf{w},\mathbf{y})+N_s(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})+uN_d(\mathbf{w},\mathbf{y}) \\
&=& u\left(\sum_{i=1}^\alpha v_ix_i+N_{1,u}(\mathbf{w},\mathbf{y})+N_{u,1}(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})\right)+N_s(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})\\
&=& 0\in R.
\end{eqnarray*}
Then we have $N_s(\mathbf{w},\mathbf{y}) \equiv N_d(\mathbf{w},\mathbf{y})~({\rm mod}~ 2)$. Since $N_{1,1}(\mathbf{w},\mathbf{y})=N_s(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})$, it follows that $N_{1,1}(\mathbf{w},\mathbf{y})$ is even.\end{proof}
\begin{define}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear code. If $\mathcal{C}=\mathcal{C}_X\times \mathcal{C}_Y$, then $\mathcal{C}$ is called separable.
\end{define}
By Theorem~\ref{th:standard form}, if $\mathcal{C}$ is a separable linear code, then the generator matrix of $\mathcal{C}$ in standard form is as follows
\[G_s=\left(\begin{array}{cc|ccc}
I_\kappa & A & 0 &0 & 0\\
0 & 0 & uB & uI_{\gamma-\kappa} & 0 \\
0 & 0 & C &D & I_\delta
\end{array}\right).
\]
The following result gives several equivalent conditions for a $\mathbb{Z}_2R$ linear self-dual code to be separable.
\begin{theorem}\label{equivalentseparable}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ self-dual code of type $(2\kappa,\beta;\beta+\kappa-2\delta,\delta;\kappa)$. Then the following statements are equivalent:
\begin{enumerate}
\item $\mathcal{C}$ is separable.
\item $\mathcal{C}_X$ is binary self-orthogonal.
\item $\mathcal{C}_X$ is binary self-dual.
\item $|\mathcal{C}_X|=2^\kappa$.
\item $\mathcal{C}_Y$ is a self-orthogonal code.
\item $\mathcal{C}_Y$ is a self-dual code.
\item $|\mathcal{C}_Y|=2^\beta$.
\end{enumerate}
\end{theorem}
\begin{proof} The proof is similar to that of \citep[Theorem~$3$]{Borgesselfdual}.\end{proof}
Note that for any vectors $\mathbf{x}$, $\mathbf{y}\in \mathcal{C}_X$, we have
\[wt_H(\mathbf{x}+\mathbf{y})=wt_H(\mathbf{x})+wt_H(\mathbf{y})-2wt_H(\mathbf{x}*\mathbf{y}),\] where $\mathbf{x}*\mathbf{y}$ is the componentwise product of $\mathbf{x}$ and $\mathbf{y}$. If $wt_H(\mathbf{x})$, $wt_H(\mathbf{y})$, $wt_H(\mathbf{x}+\mathbf{y})$ are all doubly-even, then $wt_H(\mathbf{x}*\mathbf{y})\equiv 0 ~({\rm mod}~2)$, i.e.\ $\mathbf{x}$ and $\mathbf{y}$ are orthogonal. Therefore, by Theorem~\ref{equivalentseparable}, we have the following result.
\begin{cor}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear self-dual code. If $\mathcal{C}_X$ has all weights doubly-even, then $\mathcal{C}$ is separable.
\end{cor}
\begin{cor}\label{cor:delta=0}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear self-dual code of type $(\alpha,\beta;\gamma,\delta;\kappa)$. If $\delta=0$, then $\mathcal{C}$ is separable.
\end{cor}
\begin{proof} If $\delta=0$, then for any $(\mathbf{v}|\mathbf{w})\in \mathcal{C}$, $\mathbf{w}$ contains no units. Thus $\mathcal{C}_Y$ is self-orthogonal, and by Theorem~\ref{equivalentseparable}, $\mathcal{C}$ is separable.\end{proof}
The above corollary implies the following result.
\begin{cor}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear self-dual code of type $(\alpha,\beta;\gamma,\delta;\kappa)$. If $\mathcal{C}$ is non-separable, then $\delta\geq1$.
\end{cor}
\begin{define}
If a $\mathbb{Z}_2R$ linear self-dual code has a codeword of odd weight, then it is said to be Type $0$. If
it has only even weights, then the code is said to be Type I. If all the codewords have doubly-even weight, then it is said to be Type II.
\end{define}
By Lemma~\ref{wandNwareeven}, we get the following result.
\begin{theorem}
There do not exist $\mathbb{Z}_2R$ linear self-dual codes of Type $0$.
\end{theorem}
The following examples illustrate Type I and Type II codes.
\begin{example}(Type~I, separable).\label{exa1}
Let $\mathcal{C}$ be a code of type $(2,1;2,0;1)$ over $\mathbb{Z}_{2}R$, whose generator matrix is
$$\left(
\begin{array}{cc|c}
1 & 1 & 0 \\
0 & 0 & u \\
\end{array}
\right)
$$
It is easy to see that $\mathcal{C}=\mathcal{D}\times \mathcal{E}$, where $\mathcal{D}=\langle(11)\rangle$, $\mathcal{E}=\langle(u)\rangle$. Thus $\mathcal{C}$ is separable. Moreover, the weight enumerator of this code is
$$W(x,y)=x^{4}+2x^{2}y^{2}+y^{4}.$$
Hence, $\mathcal{C}$ is a self-dual code and $\mathcal{C}$ is of Type I.
\end{example}
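The weight enumerator above can be confirmed by direct enumeration; a sketch, assuming $R$-elements are encoded as pairs $(a,b)=a+bu$ and that the usual Lee weights $wt(0)=0$, $wt(1)=wt(1+u)=1$, $wt(u)=2$ are used on the $R$-part:

```python
from collections import Counter

LEE = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 1}

def weight(v, w):
    # Hamming weight on the Z2 part plus Lee weight on the R part
    return sum(v) + sum(LEE[wi] for wi in w)

# the four codewords of C = <(11|0), (00|u)>
code = [([0, 0], [(0, 0)]), ([1, 1], [(0, 0)]),
        ([0, 0], [(0, 1)]), ([1, 1], [(0, 1)])]
dist = Counter(weight(v, w) for v, w in code)
print(sorted(dist.items()))   # -> [(0, 1), (2, 2), (4, 1)]
```

The distribution $\{0\!:\!1,\,2\!:\!2,\,4\!:\!1\}$ matches $W(x,y)=x^{4}+2x^{2}y^{2}+y^{4}$.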
\begin{example}(Type~I, non-separable).
Let $\mathcal{C}$ be generated by the following matrix
$$\left(
\begin{array}{cccc|ccc}
1 & 0 & 1 & 0 & 0 & 0 & u \\
0 & 1 & 0 & 1 & 0 & 0 & u \\
0 & 0 & 0 & 0 & 0 & u & 0 \\
0 & 0 & 1 & 1 & 1 & 0 & 1+u \\
\end{array}
\right)
$$
Clearly, $\mathcal{C}$ is a self-dual code of type $(4,3;3,1;2)$.
Note that $(0~1~0~1)$, $(0~0~1~1)\in \mathcal{C}_{X}$ and $ \langle(0~1~0~1), (0~0~1~1)\rangle=1$, so $\mathcal{C}_{X}$ is not self-orthogonal. By Theorem~\ref{equivalentseparable}, $\mathcal{C} $ is non-separable. Moreover, the weight enumerator of $\mathcal{C}$ is
$$W(x,y)=x^{10}+x^{8}y^{2}+14x^{6}y^{4}+14x^{4}y^{6}+x^{2}y^{8}+y^{10}.$$
Therefore, $\mathcal{C} $ is a Type~I code.
\end{example}
\begin{example}(Type~II, separable).
Let $\mathcal{C}$ be generated by
$$\mathcal{G}_{X}=\left(
\begin{array}{cccccccc}
1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
\end{array}
\right).
$$
Let $\mathcal{D}$ be generated by
$$\mathcal{G}_{Y}=\left(
\begin{array}{cccc}
u & u & 0 & 0 \\
u & 0 & u & 0 \\
1 & 1 & 1 & 1 \\
\end{array}
\right).
$$
It is easy to check that $\mathcal{C}$ is a self-dual code over $\mathbb{Z}_2$ and $\mathcal{D}$ is a self-dual code over $R$. Moreover, every codeword of $\mathcal{C}$ and every codeword of $\mathcal{D}$ has doubly-even weight. Consider the following matrix
$$\left(
\begin{array}{c|c}
\mathcal{G}_{X} & \mathbf{0} \\
\mathbf{0} & \mathcal{G}_{Y} \\
\end{array}
\right).
$$ It generates a self-dual code $\mathcal{C}\times\mathcal{D}$ over $\mathbb{Z}_{2}R$. Since $\mathcal{C}$ and $\mathcal{D}$ are doubly-even codes, $\mathcal{C}\times\mathcal{D}$ is of Type II.
\end{example}
\begin{example}\label{ex:2.7}(Type~II, non-separable).
Consider the following matrix
$$\left(
\begin{array}{cccc|cc}
1 & 0 & 1 & 0 & 0 & u \\
0 & 1 & 0 & 1 & 0 & u \\
0 & 0 & 1 & 1 & 1 & 1+u \\
\end{array}
\right),
$$
which generates a linear code $\mathcal{C} $ of type $(4,2;2,1;2)$ over $\mathbb{Z}_2R$. Note that $\mathcal{C} $ is a self-orthogonal code and $|\mathcal{C}|=2^4$, so $\mathcal{C} $ is self-dual. The weight enumerator of this code is
$$W(x,y)=x^{8}+14x^{4}y^{4}+y^{8}.$$
It is easy to see that $\mathcal{C}_Y$ is not self-orthogonal. By Theorem~\ref{equivalentseparable}, $\mathcal{C} $ is non-separable. To sum up, $\mathcal{C} $ is a Type~II code.
\end{example}
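The claimed size and weight enumerator can be checked by brute force, taking the $\mathbb{Z}_2$-span of the generator rows together with $u$ times each row (a sketch; the pair encoding $(a,b)=a+bu$ of $R$ and the Lee weights $0,1,2,1$ for $0,1,u,1+u$ are our conventions):

```python
from itertools import product
from collections import Counter

LEE = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 1}

def add(c1, c2):
    # componentwise sum: XOR on the Z2 part, XOR of pairs on the R part
    v = [a ^ b for a, b in zip(c1[0], c2[0])]
    w = [(a[0] ^ b[0], a[1] ^ b[1]) for a, b in zip(c1[1], c2[1])]
    return (v, w)

def times_u(c):
    # u*(v|w): the Z2 part becomes 0, and u*(a + b*u) = a*u since u^2 = 0
    return ([0] * len(c[0]), [(0, a) for (a, _) in c[1]])

rows = [([1, 0, 1, 0], [(0, 0), (0, 1)]),
        ([0, 1, 0, 1], [(0, 0), (0, 1)]),
        ([0, 0, 1, 1], [(1, 0), (1, 1)])]
gens = rows + [times_u(r) for r in rows]
code = set()
for coeffs in product([0, 1], repeat=len(gens)):
    c = ([0] * 4, [(0, 0)] * 2)
    for a, g in zip(coeffs, gens):
        if a:
            c = add(c, g)
    code.add((tuple(c[0]), tuple(c[1])))

dist = Counter(sum(v) + sum(LEE[x] for x in w) for v, w in code)
print(len(code), sorted(dist.items()))   # -> 16 [(0, 1), (4, 14), (8, 1)]
```

The enumeration recovers $|\mathcal{C}|=16$ and the distribution of $W(x,y)=x^{8}+14x^{4}y^{4}+y^{8}$.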
From the above examples, we obtain lower bounds on $\alpha$ and $\beta$ for the different types.
\begin{theorem}\label{lem:alpha4beta2}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ self-dual code of type $(\alpha,\beta;\gamma,\delta;\kappa)$ with $\alpha\cdot\beta>0$.
\begin{itemize}
\item If $\mathcal{C}$ is Type I and separable, then $\alpha\geq2$, $\beta\geq1$.
\item If $\mathcal{C}$ is Type I and non-separable, then $\alpha\geq4$, $\beta\geq2$.
\item If $\mathcal{C}$ is Type II, then $\alpha\geq4$, $\beta\geq2$.
\end{itemize}
\end{theorem}
\begin{proof} If $\mathcal{C}$ is Type I and separable, then $\mathcal{C}_X$ is binary self-dual and $\mathcal{C}_Y$ is self-dual over $R$. Thus $\alpha\geq2$, $\beta\geq1$.
Suppose $\mathcal{C}$ is Type I and non-separable. By Lemma~\ref{wandNwareeven}, every codeword of $\mathcal{C}_X$ has even weight. If $\alpha=2$, then $\mathcal{C}_X$ is binary self-dual, so by Theorem~\ref{equivalentseparable} $\mathcal{C}$ would be separable, a contradiction. Thus $\alpha\geq4$. From Example~\ref{ex:2.7}, we have $\beta\geq2$.
Assume $\mathcal{C}$ is Type II. By Lemma~\ref{wandNwareeven}, we have $(\mathbf{1}^\alpha,\mathbf{0}^\beta),(\mathbf{0}^\alpha,\mathbf{u}^\beta)\in \mathcal{C}$. Since $\mathcal{C}$ is Type II, $\alpha\equiv0~({\rm mod}~ 4)$ and $\beta\equiv0~({\rm mod}~ 2)$. Together with Example~\ref{ex:2.7}, this gives $\alpha\geq4$, $\beta\geq2$.
\end{proof}
\section{Several constructions of self-dual codes}
In this section, we present several kinds of construction methods for self-dual
codes over $\mathbb{Z}_2R$.
The following theorem shows how self-dual codes over $\mathbb{Z}_2R$ can be obtained from other self-dual codes over $\mathbb{Z}_2R$.
\begin{theorem}\label{th:G1G2}
Let $\mathcal{C}$ be a self-dual code with generator matrix $G=(G_1| G_2)$ of type $(\alpha,\beta;\gamma,\delta;\kappa)$, and $\mathcal{C}'$ be a self-dual code with generator matrix $G'=(G'_1|G'_2)$ of type $(\alpha',\beta';\gamma',\delta';\kappa')$. Then
\[ \left( \begin{array}{cc|cc}
G_1&0 &G_2 &0 \\
0& G'_1 & 0& G'_2
\end{array}
\right)\]
generates a self-dual code $\mathcal{M}$ of type $(\alpha+\alpha',\beta+\beta';\gamma+\gamma',\delta+\delta';\kappa+\kappa')$. Moreover, the weight enumerator of $\mathcal{M}$ is
$$W_\mathcal{M}(X,Y)=W_\mathcal{C}(X,Y)W_{\mathcal{C}'}(X,Y).$$
\end{theorem}
\begin{proof} For any codeword $(\mathbf{v}| \mathbf{w})\in \mathcal{M}$, we have
\[ (\mathbf{v}| \mathbf{w})=\left(A_{1\times (\gamma+\delta)},A'_{1\times (\gamma'+\delta')}\right)\left( \begin{array}{cc|cc}
G_1&0 &G_2 &0 \\
0& G'_1 & 0& G'_2
\end{array}
\right),\]
where $\left(A_{1\times (\gamma+\delta)},A'_{1\times (\gamma'+\delta')}\right)\in R^{\gamma+\gamma'+\delta+\delta'}$. Note that any two rows of the generator matrix $\left( \begin{array}{cc|cc}
G_1&0 &G_2 &0 \\
0& G'_1 & 0& G'_2
\end{array}
\right)$ are orthogonal, so $\mathcal{M}$ is a self-orthogonal code. Since $\mathcal{C}$ and $\mathcal{C}'$ are self-dual codes, we have $\alpha+2\beta=2(\gamma+2\delta)$ and $\alpha'+2\beta'=2(\gamma'+2\delta')$. Note that the length of $\mathcal{M}$ is $\alpha+\alpha'+2\beta+2\beta'$ and $| \mathcal{M}|=2^{\gamma+\gamma'+2\delta+2\delta'}$; hence $\mathcal{M}$ is a self-dual code of type $(\alpha+\alpha',\beta+\beta';\gamma+\gamma',\delta+\delta';\kappa+\kappa')$. It is easy to check that
$$W_\mathcal{M}(X,Y)=W_\mathcal{C}(X,Y)W_{\mathcal{C}'}(X,Y).$$
\end{proof}
\begin{cor}
There exist $\mathbb{Z}_2R$ linear self-dual codes of type $(\alpha,\beta;\gamma,\delta;\kappa)$ for all even $\alpha$ and all $\beta$.
\end{cor}
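The weight-enumerator identity of Theorem~\ref{th:G1G2} can be illustrated on two copies of the small separable code of Example~\ref{exa1}; a sketch, with our pair encoding $(a,b)=a+bu$ of $R$ and Lee weights $0,1,2,1$:

```python
from collections import Counter
from itertools import product

LEE = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 1}

def wt(v, w):
    return sum(v) + sum(LEE[x] for x in w)

# the code of Example exa1: C = <(11|0), (00|u)>
C = [([0, 0], [(0, 0)]), ([1, 1], [(0, 0)]),
     ([0, 0], [(0, 1)]), ([1, 1], [(0, 1)])]

# direct sum M = C x C: concatenate the Z2 parts and the R parts
M = [(v1 + v2, w1 + w2) for (v1, w1), (v2, w2) in product(C, C)]
dist_M = Counter(wt(v, w) for v, w in M)

# W_C(X,Y) * W_C(X,Y), computed as a convolution of weight distributions
dist_C = Counter(wt(v, w) for v, w in C)
prod = Counter()
for a, na in dist_C.items():
    for b, nb in dist_C.items():
        prod[a + b] += na * nb
print(dist_M == prod)   # -> True
```

Here $W_{\mathcal{M}}(X,Y)=(X^{4}+2X^{2}Y^{2}+Y^{4})^{2}$, i.e.\ the distribution $\{0\!:\!1,\,2\!:\!4,\,4\!:\!6,\,6\!:\!4,\,8\!:\!1\}$.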
In the following, we first establish a relationship between $\mathbb{Z}_2R$ linear self-dual codes and $\mathbb{Z}_2\mathbb{Z}_4$-additive self-dual codes. From this relationship, we can construct self-dual codes over $\mathbb{Z}_2R$ from self-dual codes over $\mathbb{Z}_2\mathbb{Z}_4$, and conversely self-dual codes over $\mathbb{Z}_2\mathbb{Z}_4$ from self-dual codes over $\mathbb{Z}_2R$.
Define a map
$$ \theta: \mathbb{Z}_2R \longrightarrow \mathbb{Z}_2\mathbb{Z}_4$$
$$(\mathbf{v}| \mathbf{w}) \longmapsto (\mathbf{v}|\theta(\mathbf{w})),$$
where $(\mathbf{v}|\theta(\mathbf{w}))=(\mathbf{v}| \theta(w_1),\cdots, \theta(w_\beta))$, $~\theta(0)=0, ~\theta(1)=1, ~\theta(u)=2,~ \theta(1+u)=3.$
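In the pair encoding $(a,b)=a+bu$ used in our earlier sketches, $\theta$ is simply $(a,b)\mapsto a+2b$. The following snippet checks that $\theta$ is a bijection and, as an aside, that it preserves Lee weights (with $wt(0)=0$, $wt(1)=wt(3)=1$, $wt(2)=2$ on $\mathbb{Z}_4$); note that $\theta$ is not claimed to be additive:

```python
# theta: a + b*u  ->  a + 2b, coordinatewise on the R part
THETA = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 3}
LEE_R = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 1}
LEE_Z4 = {0: 0, 1: 1, 2: 2, 3: 1}

is_bijection = sorted(THETA.values()) == [0, 1, 2, 3]
preserves_lee = all(LEE_R[r] == LEE_Z4[THETA[r]] for r in THETA)
print(is_bijection and preserves_lee)   # -> True
```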
\begin{theorem}\label{th:relationZ2Z2utoZ2Z4}
Let $\mathcal{C}$ be a $\mathbb{Z}_2R$ linear code with generator matrix $G$ such that $4 \mid N_{1,1}(\mathbf{w},\mathbf{y})$ for any two codewords $(\mathbf{v}| \mathbf{w}),(\mathbf{x}| \mathbf{y})\in \mathcal{C}$. If $\mathcal{C}$ is a self-orthogonal code, then $\theta(\mathcal{C})$ is also a self-orthogonal code over $\mathbb{Z}_2\mathbb{Z}_4$. Furthermore, if $\mathcal{C}$ is a self-dual code, then $\theta(\mathcal{C})$ is also a self-dual code over $\mathbb{Z}_2\mathbb{Z}_4$.
\end{theorem}
\begin{proof} Since $\mathcal{C}$ is a self-orthogonal code, for any two codewords $(\mathbf{v}| \mathbf{w})$ and $(\mathbf{x}| \mathbf{y})\in \mathcal{C}$, we have
\[\langle(\mathbf{v}| \mathbf{w}),(\mathbf{x}| \mathbf{y})\rangle=u\sum_{i=1}^\alpha v_ix_i+\sum_{j=1}^\beta w_jy_j=0\in R. \]
Since $4 \mid N_{1,1}(\mathbf{w},\mathbf{y})$ and $N_{1,1}(\mathbf{w},\mathbf{y})=N_s(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})$, following the notation in Section $2$, we get
\begin{eqnarray*}
& & \langle(\mathbf{v}| \mathbf{w}),(\mathbf{x}| \mathbf{y})\rangle \\
&=& u\sum\limits_{i=1}^\alpha v_ix_i+uN_{1,u}(\mathbf{w},\mathbf{y})+uN_{u,1}(\mathbf{w},\mathbf{y})+N_s(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})+uN_d(\mathbf{w},\mathbf{y}) \\
&=& u\left(\sum\limits_{i=1}^\alpha v_ix_i+N_{1,u}(\mathbf{w},\mathbf{y})+N_{u,1}(\mathbf{w},\mathbf{y})+
N_d(\mathbf{w},\mathbf{y})\right)+N_s(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})\\
&=& u\left(\sum\limits_{i=1}^\alpha v_ix_i+N_{1,u}(\mathbf{w},\mathbf{y})+N_{u,1}(\mathbf{w},\mathbf{y})+
N_d(\mathbf{w},\mathbf{y})\right)=0\in R,
\end{eqnarray*}
which implies that $2\mid \left(\sum\limits_{i=1}^\alpha v_ix_i+N_{1,u}(\mathbf{w},\mathbf{y})+N_{u,1}(\mathbf{w},\mathbf{y})+
N_d(\mathbf{w},\mathbf{y})\right)$.
Note that
\begin{eqnarray*}
& &\langle(\mathbf{v}|\theta(\mathbf{w})),(\mathbf{x}| \theta(\mathbf{y}))\rangle \\ &=&2\sum\limits_{i=1}^\alpha v_ix_i+2N_{1,u}(\mathbf{w},\mathbf{y})+2N_{u,1}(\mathbf{w},\mathbf{y})+N_s(\mathbf{w},\mathbf{y})
+N_d(\mathbf{w},\mathbf{y})+2N_d(\mathbf{w},\mathbf{y}) \\
& =&2\left(\sum\limits_{i=1}^\alpha v_ix_i+N_{1,u}(\mathbf{w},\mathbf{y})+N_{u,1}(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})\right)+N_s(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})\\
& =&2\left(\sum\limits_{i=1}^\alpha v_ix_i+N_{1,u}(\mathbf{w},\mathbf{y})+N_{u,1}(\mathbf{w},\mathbf{y})+N_d(\mathbf{w},\mathbf{y})\right)+N_{1,1}(\mathbf{w},\mathbf{y}).
\end{eqnarray*}
Since $4\mid N_{1,1}(\mathbf{w},\mathbf{y})$ and $2\mid \left(\sum\limits_{i=1}^\alpha v_ix_i+N_{1,u}(\mathbf{w},\mathbf{y})+N_{u,1}(\mathbf{w},\mathbf{y})+
N_d(\mathbf{w},\mathbf{y})\right)$, then
\[\langle(\mathbf{v}|\theta(\mathbf{w})),(\mathbf{x}| \theta(\mathbf{y}))\rangle =0\in \mathbb{Z}_4.\] Therefore, $\theta(\mathcal{C})$ is a self-orthogonal code over $\mathbb{Z}_2\mathbb{Z}_4$.
Furthermore, if $\mathcal{C}$ is a self-dual code with generator matrix $G$, we have $\langle\theta(\mathcal{C}),\theta(G)\rangle=0$. Hence, $\langle\theta(\mathcal{C}),\langle\theta(G)\rangle\rangle=0$, which implies that $\theta(\mathcal{C})\subseteq \langle\theta(G)\rangle^{\perp}$. Note that $\theta$ is a bijection, so $$|\theta(\mathcal{C})|=2^{\gamma+2\delta},~~~|\langle\theta(G)\rangle^{\perp}|=2^{\alpha+2\beta}/|\langle\theta(G)\rangle|=2^{\gamma+2\delta}.$$ This implies that $\theta(\mathcal{C})$ is a $\mathbb{Z}_2\mathbb{Z}_4$-additive code. Together with the facts that $\theta(\mathcal{C})$ is self-orthogonal and $|\theta(\mathcal{C})|=2^{\gamma+2\delta}$, we conclude that $\theta(\mathcal{C})$ is also self-dual.\end{proof}
By an argument similar to the proof of Theorem~\ref{th:relationZ2Z2utoZ2Z4}, we have the following result.
\begin{theorem}\label{th:relationZ2Z4toZ2Z2u}
Let $\mathcal{C}$ be a $\mathbb{Z}_2\mathbb{Z}_4$-additive code with generator matrix $G$ such that $4 \mid N_{1,1}(\mathbf{w},\mathbf{y})$ for any two codewords $(\mathbf{v}| \mathbf{w}),(\mathbf{x}| \mathbf{y})\in \mathcal{C}$ (here $N_{1,1}$ counts the positions in which both coordinates are units of $\mathbb{Z}_4$). If $\mathcal{C}$ is a self-orthogonal code, then $\theta^{-1}(\mathcal{C})$ is also a self-orthogonal code over $\mathbb{Z}_2R$. Furthermore, if $\mathcal{C}$ is a self-dual code, then $\theta^{-1}(\mathcal{C})$ is also a self-dual code over $\mathbb{Z}_2R$.
\end{theorem}
For convenience, let $\mathcal{C}$ be a self-dual code of length $\ell=\alpha+\beta$ and let $(G_0|G_1)=(\mathbf{g}_{i}|\mathbf{r}_i)$ be the generator matrix of $\mathcal{C}$, where $\mathbf{g}_{i}$ (resp.\ $\mathbf{r}_i$) is the $i$-th row of $G_0$ (resp.\ $G_1$), $1\leq i\leq \gamma+\delta$.
In the following, we use a building-up approach to construct self-dual codes of different lengths. Here, we only give the proof of Theorem~\ref{th:con1}, since Theorem~\ref{th:con2} and Theorem~\ref{th:con3} can be proved in a similar way.
\begin{theorem}\label{th:con1}
Assume the notations given as above. Let $\mathbf{x}$ be a vector in $\mathbb{Z}_2^\alpha$ with odd $wt_H(\mathbf{x})$ and $\mathbf{y}$ be a vector in $u\mathbb{Z}_2^\beta$ with $\langle\mathbf{r_i},\mathbf{y}\rangle=0$ for $1\leq i\leq \gamma+\delta$. Suppose that $h_i=\langle\mathbf{g_i},\mathbf{x}\rangle$ for $1\leq i\leq \gamma+\delta$. Then the following matrix
$$G=\left(\begin{array}{ccc|c}
1 & 0 & \mathbf{x} & \mathbf{y} \\ \hline
h_1 & h_1 & \mathbf{g}_1 & \mathbf{r}_1 \\
\vdots & \vdots & \vdots & \vdots \\
h_{\gamma+\delta} & h_{\gamma+\delta} & \mathbf{g}_{\gamma+\delta} & \mathbf{r}_{\gamma+\delta}
\end{array}\right)
$$
generates a self-dual code $\mathcal{D}$ over $\mathbb{Z}_2R$ of length $\ell+2$. Moreover, we have $\mathcal{C}$ is separable if and only if $\mathcal{D}$ is separable.
\end{theorem}
\begin{proof}
It is easy to obtain
$$GG^T=0.$$
Hence, $\mathcal{D}$ is self-orthogonal. Since
$|\mathcal{D}|\cdot |\mathcal{D}^\perp|=2^{\alpha+2+2\beta}, $
$\mathcal{D}$ is self-dual if and only if
$$|\mathcal{D}|=2^{\frac{\alpha+2+2\beta}{2}}.$$
Now, we compute the size of the code $\mathcal{D}$. We first assume that there exist two vectors $(\mathbf{v}_1|\mathbf{w}_1)$ and $(\mathbf{v}_2|\mathbf{w}_2)$ such that
\begin{equation*}\label{eq:zhengmingweiyi}
(\mathbf{v}_1|\mathbf{w}_1)G=(\mathbf{v}_2|\mathbf{w}_2)G, ~\mbox{where}~\mathbf{v}_i\in \mathbb{Z}_2,~\mathbf{w}_i\in R^{\gamma+\delta},~i=1,2,
\end{equation*}
i.e.
$$(\mathbf{v}|\mathbf{w})G=0 , ~~\mbox{where}~\mathbf{v}=\mathbf{v}_1-\mathbf{v}_2,~ \mathbf{w}=\mathbf{w}_1-\mathbf{w}_2 .$$
Then
$$\mathbf{v}(1~0~\mathbf{x}~|~\mathbf{y})+\mathbf{w}\left(\begin{array}{ccc|c}
h_1 & h_1 & \mathbf{g}_1 & \mathbf{r}_1 \\
\vdots & \vdots & \vdots & \vdots \\
h_{\gamma+\delta} & h_{\gamma+\delta} & \mathbf{g}_{\gamma+\delta} & \mathbf{r}_{\gamma+\delta}
\end{array}\right)=0.$$
We have
$$\mathbf{v}+\mathbf{w}\left(\begin{array}{c}
h_1 \\
\vdots \\
h_{\gamma+\delta}
\end{array}\right)=0
~\mbox{and}~
\mathbf{w}\left(\begin{array}{c}
h_1 \\
\vdots \\
h_{\gamma+\delta}
\end{array}\right)=0. $$
This implies that $\mathbf{v}=0$. Hence, we have
\begin{equation}\label{eq:liangge}
\mathbf{w}\left(\begin{array}{c|c}
\mathbf{g}_1 &\mathbf{r}_1 \\
\vdots& \vdots \\
\mathbf{g}_{\gamma+\delta}&\mathbf{r}_{\gamma+\delta}
\end{array}\right)=0.
\end{equation}
Note that
$$\left(\begin{array}{c|c}
\mathbf{g}_1 & \mathbf{r}_1 \\
\vdots & \vdots \\ \mathbf{g}_{\gamma+\delta} & \mathbf{r}_{\gamma+\delta}
\end{array}\right)$$
is the generator matrix of $\mathcal{C}$; then from equation~\eqref{eq:liangge}, we have $\mathbf{w}=0$. This shows that
$$ |\mathcal{D}|=2^{\gamma+1+2\delta}.$$
Note that $\mathcal{C}$ is self-dual, so $\alpha+2\beta=2(\gamma+2\delta).$ Together with $|\mathcal{D}|=2^{\frac{\alpha+2+2\beta}{2}}$, this shows that $\mathcal{D}$ is self-dual.
It is easy to see that $\mathbf{y}$ is orthogonal to itself and to each $\mathbf{r}_i$, $1\leq i\leq \gamma+\delta$. Therefore, by Theorem~\ref{equivalentseparable}, $\mathcal{D}$ is separable if and only if $\mathcal{C}$ is separable.
\end{proof}
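As an illustration of Theorem~\ref{th:con1}, the following sketch applies the construction to the code of Example~\ref{exa1}; the choices $\mathbf{x}=(1,0)$ and $\mathbf{y}=(u)$ are ours, and the script only verifies that the rows of the resulting matrix $G$ are pairwise orthogonal (so that $GG^T=0$), using the pair encoding $(a,b)=a+bu$ of $R$:

```python
def radd(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])

def rmul(a, b):
    # (a0 + a1*u)(b0 + b1*u) = a0*b0 + (a0*b1 + a1*b0)*u, since u^2 = 0
    return (a[0] & b[0], (a[0] & b[1]) ^ (a[1] & b[0]))

def inner(c1, c2):
    # <(v|w),(x|y)> = u * sum(v_i x_i) + sum(w_j y_j) in R
    s = (0, sum(a & b for a, b in zip(c1[0], c2[0])) % 2)
    for wj, yj in zip(c1[1], c2[1]):
        s = radd(s, rmul(wj, yj))
    return s

g = [[1, 1], [0, 0]]                 # binary parts g_i of the generator
r = [[(0, 0)], [(0, 1)]]             # R parts r_i (here r_2 = (u))
x, y = [1, 0], [(0, 1)]              # wt_H(x) odd; <r_i, y> = 0
h = [sum(a & b for a, b in zip(gi, x)) % 2 for gi in g]   # h_i = <g_i, x>

rows = [([1, 0] + x, y)] + [([h[i], h[i]] + g[i], r[i]) for i in range(2)]
ok = all(inner(a, b) == (0, 0) for a in rows for b in rows)
print(ok)   # -> True
```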
\begin{theorem}\label{th:con2}
Assume the notations given as above. Let $\mathbf{y}$ be a vector in $R^\beta$ with odd $ wt_L(\mathbf{y})$ and $\mathbf{x}$ be a vector in $ \mathbb{Z}_2^\alpha$ such that $wt_H(\mathbf{x})$ is even and $\langle\mathbf{g_i},\mathbf{x}\rangle=0$ for $1\leq i\leq \gamma+\delta$. Let $t$ be a unit in $R$ and $s_i=\langle\mathbf{r_i},\mathbf{y}\rangle$ for $1\leq i\leq \gamma+\delta$. Then the following matrix
$$G=\left(\begin{array}{c|ccc}
\mathbf{x}& 1 & 0 & \mathbf{y} \\ \hline
\mathbf{g}_1 & s_1 & ts_1 & \mathbf{r}_1 \\
\vdots & \vdots & \vdots & \vdots \\
\mathbf{g}_{\gamma+\delta}& s_{\gamma+\delta} & ts_{\gamma+\delta} & \mathbf{r}_{\gamma+\delta}
\end{array}\right)
$$
generates a self-dual code $\mathcal{E}$ over $\mathbb{Z}_2R$ of length $\ell+2$. Moreover, $\mathcal{E}$ is separable if and only if $\mathcal{C}$ is separable.
\end{theorem}
\begin{theorem}\label{th:con3}
Assume the notations given as above. Let $\mathbf{x}$ be a vector in $\mathbb{Z}_2^\alpha$ with odd $wt_H(\mathbf{x})$ and $\mathbf{y}$ be a vector in $R^\beta$ with odd $wt_L(\mathbf{y})$. Let $\mathbf{e}$ be a vector in $ \mathbb{Z}_2^\alpha$ such that $wt_H(\mathbf{e})$ is even and $\langle\mathbf{g_i},\mathbf{e}\rangle=0$ for $1\leq i\leq \gamma+\delta$, and $\mathbf{a}$ be a vector in $u \mathbb{Z}_2^\beta$ satisfying $\langle\mathbf{r_i},\mathbf{a}\rangle=0$ for $1\leq i\leq \gamma+\delta$. Suppose that $\langle (\mathbf{x}|\mathbf{a}),(\mathbf{e}|\mathbf{y})\rangle=0$, $h_i=\langle\mathbf{g_i},\mathbf{x}\rangle$, $s_i=\langle\mathbf{r_i},\mathbf{y}\rangle$ for $1\leq i\leq \gamma+\delta$. Let $t$ be a unit in $R$, then the following matrix
$$G=\left(\begin{array}{ccc|ccc}
1 & 0& \mathbf{x}& 0 & 0 & \mathbf{a} \\
0 & 0&\mathbf{e}& 1 & 0 & \mathbf{y} \\ \hline
h_1 & h_1& \mathbf{g}_1 & s_1 & ts_1 & \mathbf{r}_1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
h_{\gamma+\delta} &h_{\gamma+\delta} & \mathbf{g}_{\gamma+\delta}& s_{\gamma+\delta} & ts_{\gamma+\delta} & \mathbf{r}_{\gamma+\delta}
\end{array}\right)
$$
generates a self-dual code $\mathcal{F}$ over $\mathbb{Z}_2R$ of length $\ell+4$. Furthermore, if $\langle\mathbf{x},\mathbf{e}\rangle=0\in \mathbb{Z}_2$, then $\mathcal{F}$ is separable if and only if $\mathcal{C}$ is separable. Otherwise, $\mathcal{F}$ is non-separable.
\end{theorem}
\section{The structure of two-weight self-dual codes}
Note that if $\mathcal{C}$ is self-dual, then $(\mathbf{1}^\alpha|\mathbf{0}^\beta)$, $ (\mathbf{0}^\alpha|\mathbf{u}^\beta)\in\mathcal{C}$, where $\alpha \cdot\beta\neq 0$. This implies that $\mathcal{C}$ has at least two different nonzero weights. We therefore study the class of self-dual codes with exactly two different nonzero weights and determine the structure of these codes. From now on, we assume that $\mathcal{C}$ is a two-weight linear code over $\mathbb{Z}_2R$ with $\alpha \cdot\beta\neq 0$ and $(\mathbf{1}^\alpha|\mathbf{0}^\beta)$, $ (\mathbf{0}^\alpha|\mathbf{u}^\beta)\in\mathcal{C}$. Since $(\mathbf{1}^\alpha|\mathbf{0}^\beta)$, $ (\mathbf{0}^\alpha|\mathbf{u}^\beta)$ and $(\mathbf{1}^\alpha|\mathbf{u}^\beta)$ all lie in $\mathcal{C}$, with weights $\alpha$, $2\beta$ and $\alpha+2\beta$ respectively, the two-weight assumption forces $\alpha=2\beta$. Since
$\alpha+2\beta=n$, we get $\alpha=n/2,$ $\beta=n/4$; in particular, such codes do not exist when $4\nmid n$. Thus,
we assume $4\mid n$ in this section.
\begin{lemma}\label{lem:weightdis}
Assume the notations given as above. Then the weight distribution of linear code $\mathcal{C}$ is obtained as follows
\begin{table}[!h]
\centering
\begin{tabular}{c|c}
\hline
Weight & Frequency\\\hline
$0$ & $1$ \\ \hline
$n$ & $1$ \\ \hline
$\frac{n}{2}$ & $|\mathcal{C}|-2$ \\
\hline
\end{tabular}
\end{table}
\end{lemma}
We define a linear subcode of $\mathcal{C}$ as follows
\[ \mathcal{C}^*=\{(\mathbf{0}^\alpha|\mathbf{0}^\beta),(\mathbf{1}^\alpha|\mathbf{0}^\beta),
(\mathbf{0}^\alpha|\mathbf{u}^\beta),(\mathbf{1}^\alpha|\mathbf{u}^\beta)\}.\]
\begin{lemma}\label{lem:TW1}
Let $\mathcal{C}$ be a two-weight linear code. For any codeword $(\mathbf{v}|\mathbf{w})\in \mathcal{C}\setminus \mathcal{C}^*$, we have
\begin{itemize}
\item $N(\mathbf{w})=\frac{n}{4},~N_u(\mathbf{w})=0,~wt_H(\mathbf{v})=\frac{n}{4};$ or
\item $N(\mathbf{w})=0,~N_u(\mathbf{w})=\frac{n}{8},~wt_H(\mathbf{v})=\frac{n}{4}.$
\end{itemize}
\end{lemma}
\begin{proof}
If $N(\mathbf{w})\neq0$, then
$$wt_L(u(\mathbf{v}|\mathbf{w}))=wt_L(u \mathbf{w} )=2N(\mathbf{w})=\frac{n}{2}.$$
Thus, $N(\mathbf{w})=\frac{n}{4}$. Note that $wt_L (\mathbf{v}|\mathbf{w}) =\frac{n}{2}$ and $\beta=\frac{n}{4}$; hence $N_u(\mathbf{w})=0$ and $wt_H(\mathbf{v})=\frac{n}{4}$.
If $N(\mathbf{w})=0$, then $N_u(\mathbf{w})\neq0$. Note that $(\mathbf{0}^\alpha|\mathbf{u}^\beta)\in \mathcal{C}$, then
\[wt_L\left( (\mathbf{v}|\mathbf{w})+(\mathbf{0}^\alpha|\mathbf{u}^\beta)\right)= \frac{n}{2} .\]
Thus $N_u(\mathbf{w})=\frac{n}{8}$, $wt_H(\mathbf{v})=\frac{n}{4}$.
\end{proof}
\begin{example}
Let $\mathcal{C}_1$ be generated by $\left(
\begin{array}{cc|c}
1 & 1 & 0 \\
0 & 0 & u \\
\end{array}
\right)
$, then $$\mathcal{C}_1=\{(00|0),(11|0),(00| u),(11| u)\}.$$ It is easy to check that $\mathcal{C}_1$ is a self-dual code with two nonzero weights.
Let $\mathcal{C}_2$ be generated by $\left(
\begin{array}{cc|c}
1 & 1 & 0 \\
1 & 0 & 1 \\
\end{array}
\right)
$, then $$\mathcal{C}_2=\{(00|0),(11|0),(00|u),(11|u),(10| 1),(01| 1),(10|1+u),(01|1+u)\}.$$ It is a two-weight linear code, which confirms the result in Lemma~\ref{lem:TW1}.
\end{example}
From the above discussion, we obtain an upper bound on $\delta$.
\begin{lemma}\label{cor:delta1}
Assume the notations given as above, then $\delta\leq1$.
\end{lemma}
\begin{proof}
We may assume that there exists $(\mathbf{v}|\mathbf{w})\in\mathcal{C} $ with $N(\mathbf{w})\neq0$ (otherwise $\delta=0$). By Lemma~\ref{lem:TW1}, we have $N(\mathbf{w})=\frac{n}{4}$. Since $\beta=\frac{n}{4}$, every coordinate of $\mathbf{w}$ is a unit, and we get $\delta\leq1$.
\end{proof}
With the above preparation, we can obtain the main result of this section.
\begin{theorem}
Let the notations be given as above. Then $\mathcal{C}$ is self-dual if and only if the generator matrix of $\mathcal{C}$ is permutation equivalent to
$$\left(
\begin{array}{cc|c}
1 & 1 & 0 \\
0 & 0 & u \\
\end{array}
\right)~{\rm or}~ \left(
\begin{array}{cccc|cc}
1 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & u \\
0 & 0 & 1 & 1 & 1 & 1 \\
\end{array}
\right).
$$
\end{theorem}
\begin{proof}
Let $G$ be the generator matrix of $\mathcal{C}$.
By Lemma~\ref{lem:weightdis}, we have the weight enumerator of $\mathcal{C}$ is
\begin{equation}\label{eq:1}
W_{\mathcal{C}}(X,Y)=X^{n}+(|\mathcal{C}|-2)X^{\frac{n}{2}}Y^{\frac{n}{2}}+Y^{n}.
\end{equation}
By Theorem~\ref{th:MacIden}, the weight enumerator of $\mathcal{C}^\perp$ satisfies
\begin{eqnarray}\label{eq:2}
\nonumber & & |\mathcal{C}|W_{\mathcal{C}^{\perp}}(X,Y)= W_{\mathcal{C}}(X+Y,X-Y) \\
\nonumber &=& (X+Y)^{n}+(|\mathcal{C}|-2)(X^{2}-Y^{2})^{\frac{n}{2}}+(X-Y)^{n} \\
\nonumber&=& X^{n}+C_{n}^{1}X^{n-1}Y+C_{n}^{2}X^{n-2}Y^{2}+\cdots+C_{n}^{n-1}XY^{n-1}+Y^{n}\\
\nonumber& & \quad +(|\mathcal{C}|-2)\left(X^{n}-C_{\frac{n}{2}}^{1}(X^{2})^{\frac{n}{2}-1}Y^{2}+\cdots+Y^{n}\right) \\
\nonumber& & \qquad\qquad +X^{n}-C_{n}^{1}X^{n-1}Y+C_{n}^{2}X^{n-2}Y^{2}+\cdots-C_{n}^{n-1}XY^{n-1}+Y^{n}\\
\nonumber&= & -(|\mathcal{C}|-2)\left(C_{\frac{n}{2}}^{1}(X^{2})^{\frac{n}{2}-1}Y^{2}-
C_{\frac{n}{2}}^{2}(X^{2})^{\frac{n}{2}-2}Y^{4}+\cdots+C_{\frac{n}{2}}^{\frac{n}{2}-1}X^{2}(Y^{2})^{\frac{n}{2}-1}\right) \\
\nonumber&& \qquad\qquad \qquad +|\mathcal{C}|(X^{n}+Y^{n})+2C_{n}^{2}X^{n-2}Y^{2}+\cdots+2C_{n}^{n-2}X^{2}Y^{n-2}\\
\nonumber&=& |\mathcal{C}|X^{n}+ \left(2C_{n}^{2}-(|\mathcal{C}|-2)C_{\frac{n}{2}}^{1}\right)X^{n-2}Y^{2}+ \cdots\\
&& \qquad\qquad\qquad\qquad\ + \left(2C_{n}^{2}-(|\mathcal{C}|-2)C_{\frac{n}{2}}^{\frac{n}{2}-1}\right)X^{2}Y^{n-2}+|\mathcal{C}|Y^{n}.
\end{eqnarray}
If $\mathcal{C}$ is self-dual, then $W_{\mathcal{C}^{\perp}}(X,Y)= W_{\mathcal{C}}(X,Y)$.
Comparing the coefficients on both sides of equation~\eqref{eq:2}, we distinguish two cases.
i) If $n=4$, it is easy to check
$$W_{\mathcal{C}^{\perp}}(X,Y)= W_{\mathcal{C}}(X,Y)=X^{4}+2X^{2}Y^{2}+Y^{4}.$$
Note that $\alpha=\frac{n}{2}$, $\beta=\frac{n}{4}$, so $\alpha=2,$ $\beta=1$. Since $|\mathcal{C}|=2^{\frac{n}{2}}=4$ and $(\mathbf{1}^\alpha|\mathbf{0}^\beta),(\mathbf{0}^\alpha|\mathbf{u}^\beta)\in\mathcal{C} $, $G$ is permutation equivalent to $$\left(
\begin{array}{cc|c}
1 & 1 & 0 \\
0 & 0 & u \\
\end{array}
\right).$$
ii) If $n>4$, then the right side of equation~\eqref{eq:2} has at least five terms, while $W_{\mathcal{C}}(X,Y)$ in equation~\eqref{eq:1} has only three. Hence the second term on the right side of equation~\eqref{eq:2}, $\left(2C_{n}^{2}-(|\mathcal{C}|-2)C_{\frac{n}{2}}^{1}\right)X^{n-2}Y^{2}$, must vanish. With $|\mathcal{C}|=2^{\frac{n}{2}}$, the condition $2C_{n}^{2}=(2^{\frac{n}{2}}-2)C_{\frac{n}{2}}^{1}$ reduces to $2^{\frac{n}{2}}=2n$, and hence $n=8$. Simplifying equation~\eqref{eq:2}, one has
$$W_{\mathcal{C}^{\perp}}(X,Y)=X^{8}+14X^{4}Y^{4}+Y^{8}. $$ Since
$$W_{\mathcal{C}}(X,Y)=X^{8}+(|\mathcal{C}|-2)X^{4}Y^{4}+Y^{8}, $$ and $|\mathcal{C}|=2^{\frac{n}{2}}=16$, then $$W_{\mathcal{C}^{\perp}}(X,Y)=W_{\mathcal{C}}(X,Y).$$ This implies when $n=8$, there exists a self-dual code over $\mathbb{Z}_2R$.
Note that $\mathcal{C}$ is Type II; by Theorem~\ref{lem:alpha4beta2}, we get $\alpha=4$, $\beta=2$. Moreover, $\mathcal{C}$ is non-separable, since a separable $\mathcal{C}$ would contradict the assumption that $\mathcal{C}$ has exactly two nonzero weights. By Lemma~\ref{cor:delta1}, we have $\delta\leq1$, and $\delta=0$ would contradict non-separability by Corollary~\ref{cor:delta=0}. Hence, $\delta=1$, $\gamma=2$.
Let $$G=\left(\begin{array}{c|c}
\mathbf{v}_1 & \mathbf{w}_1 \\
\mathbf{v}_2 & \mathbf{w}_2 \\
\mathbf{v}_3 & \mathbf{w}_3
\end{array}\right),
$$ where $\mathbf{v}_i\in \mathbb{Z}_2^4$ for $1\leq i\leq3$, $\mathbf{w}_1,\mathbf{w}_2\in\{0,u\}^2$, $\mathbf{w}_3\in R^2$.
By Lemma~\ref{lem:TW1}, we have $N(\mathbf{w}_3)=2$ and $wt_H(\mathbf{v}_3)=2$; up to equivalence, $\mathbf{w}_3=(11)$, and we may fix $(\mathbf{v}_3 | \mathbf{w}_3)=(0011|11)$. Note that $(\mathbf{1}^4|\mathbf{0}^2)\in \mathcal{C}$, so we may take $(\mathbf{v}_1 | \mathbf{w}_1)=(\mathbf{1}^4|\mathbf{0}^2)$. Finally, we need to choose $ (\mathbf{v}_2 | \mathbf{w}_2 )$. Since $ (\mathbf{v}_2 | \mathbf{w}_2 )\notin \mathcal{C}^*$, Lemma~\ref{lem:TW1} gives $N_u(\mathbf{w}_2)=1$ and $wt_H(\mathbf{v}_2)=2$. Since $\mathcal{C}$ is non-separable, $wt_H(\mathbf{v}_2*\mathbf{v}_3)$ is odd. To sum up, $G$ is permutation equivalent to
$$\left(
\begin{array}{cccc|cc}
1 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & u \\
0 & 0 & 1 & 1 & 1 & 1 \\
\end{array}
\right).$$
Conversely, it is easy to check that this matrix generates a two-weight self-dual code. This completes the proof.
\end{proof}
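The classification can also be double-checked computationally; the sketch below (same pair-encoding conventions as in the earlier snippets) verifies that the second matrix above generates a self-orthogonal code of size $16$ whose weight distribution is exactly that of Lemma~\ref{lem:weightdis} for $n=8$:

```python
from itertools import product
from collections import Counter

LEE = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 1}

def radd(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])

def rmul(a, b):
    # multiplication in R, using u^2 = 0
    return (a[0] & b[0], (a[0] & b[1]) ^ (a[1] & b[0]))

def inner(c1, c2):
    # <(v|w),(x|y)> = u * sum(v_i x_i) + sum(w_j y_j) in R
    s = (0, sum(x & y for x, y in zip(c1[0], c2[0])) % 2)
    for wj, yj in zip(c1[1], c2[1]):
        s = radd(s, rmul(wj, yj))
    return s

rows = [([1, 1, 1, 1], [(0, 0), (0, 0)]),
        ([0, 1, 0, 1], [(0, 0), (0, 1)]),
        ([0, 0, 1, 1], [(1, 0), (1, 0)])]
assert all(inner(a, b) == (0, 0) for a in rows for b in rows)  # self-orthogonal

# the code is the Z2-span of the rows together with u times each row
gens = rows + [([0] * 4, [(0, a) for a, _ in r[1]]) for r in rows]
code = set()
for cs in product([0, 1], repeat=len(gens)):
    v, w = [0] * 4, [(0, 0)] * 2
    for c, g in zip(cs, gens):
        if c:
            v = [x ^ y for x, y in zip(v, g[0])]
            w = [radd(x, y) for x, y in zip(w, g[1])]
    code.add((tuple(v), tuple(w)))

dist = Counter(sum(v) + sum(LEE[x] for x in w) for v, w in code)
print(len(code), sorted(dist.items()))   # -> 16 [(0, 1), (4, 14), (8, 1)]
```

Since $|\mathcal{C}|=16=2^{n/2}$ and $\mathcal{C}$ is self-orthogonal, it is self-dual, with the two nonzero weights $n/2=4$ and $n=8$.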
From equation~\eqref{eq:2}, we obtain a bound on the minimum distance of the dual code $\mathcal{C}^\perp$.
\begin{theorem}
Let the notations be given as above. If $n=\frac{|\mathcal{C}|}{2}$, then the minimum Hamming
distance of $\mathcal{C}^\perp$ is $4$. Otherwise, the minimum Hamming
distance of $\mathcal{C}^\perp$ is $2$.
\end{theorem}
\section{Introduction}
The celebrated particle observed by the ATLAS \cite{atlas} and the
CMS \cite{cms} Collaborations at the Large Hadron Collider (LHC) in July 2012
is more consistent with the standard model (SM) Higgs boson
than with any other extension of the SM \cite{Cheung:2013kla,Cheung:2014noa},
at least
in terms of some statistical measures. The SM Higgs boson was proposed
in the 1960s \cite{higgs}, but only received confirmation recently
through its decays into the $\gamma\gamma$ and $ZZ^* \to 4 \ell$ modes.
Although the data on Higgs signal strengths are best described by the SM,
the other extensions are still viable options to explain the data.
Numerous studies have been devoted to constraining the SM Higgs boson
\cite{Cheung:2013kla,r1,r2,r3,r4,r5,r6,r7,r8,r9,r10,r11,r12,r13,r14,r15,r16,r17,r18}, higher-dimensional operators of the Higgs boson
\cite{anom1,anom2,anom3,anom4,anom5,anom6}, the two-Higgs doublet models
\cite{2hdm0,2hdm1,2hdm2,2hdm3,2hdm4,2hdm5,2hdm6,2hdm7,2hdm8,2hdm9,Cheung:2013rva,
2hdm10,2hdm11,2hdm12},
and in the supersymmetric framework
\cite{susy1,susy2,susy3,susy4,susy5,susy6,susy7,susy8,susy9,susy10}.
A very recent update to all the data as of summer 2014
was performed in Ref.~\cite{Cheung:2014noa}. We shall describe
the most significant change to the data set in Sec. III.
In this work, we perform the fits in the framework of the
minimal supersymmetric standard model (MSSM) to all the most updated data
on Higgs signal strengths as of summer 2014.
In our previous analysis of the two-Higgs-doublet model (2HDM)
\cite{Cheung:2013rva}, we did not
specify which neutral Higgs boson is the observed one, so that
the whole scenario can be described by a small set of parameters.
The bottom and leptonic Yukawa couplings are determined through the
top Yukawa coupling, and the $HWW$ coupling is determined via $\tan\beta$
and top Yukawa, so that a minimal set of parameters includes only
$\tan\beta$ and the top Yukawa coupling. We can easily include the
effects of the charged Higgs boson
by the loop factor in the $H\gamma\gamma$ vertex,
and include possibly very light Higgs bosons
by the factor $\Delta \Gamma_{\rm tot}$.
Here we follow the same strategy for the global fits in the framework of
MSSM, the Higgs sector of which is the same as the Type II of the 2HDM,
in order to go along with a minimal set of parameters,
unless we specifically investigate the spectrum of
supersymmetric particles, e.g., the chargino mass.
In this work, we perform global fits in the MSSM under various
initial conditions to the most
updated data on Higgs boson signal strengths. A few specific features
are summarized here.
\begin{enumerate}
\item We use a minimal set of parameters without specifying the spectrum
of the SUSY particles. For example, all up-, down- and lepton-type
Yukawa couplings and the gauge-Higgs coupling are given in terms of
the top Yukawa coupling, $\tan\beta$, and $\kappa_d$, where $\kappa_d$ is
the radiative correction in the bottom Yukawa coupling defined later.
\item Effects of heavy SUSY particles appear in the loop factors $\Delta S^g$
and $\Delta S^\gamma$ of the $Hgg$ and $H\gamma\gamma$ vertices, respectively.
\item Effects of additional light Higgs bosons or light neutralinos, into which
the 125.5 GeV Higgs boson can decay, are included through the deviation
$\Delta \Gamma_{\rm tot}$ in the Higgs boson width.
\item CP-violating effects can occur in Yukawa couplings, which are
quantified by the CP-odd part of the top-Yukawa coupling. Effects of
other CP sources can appear in the loop factors of the $Hgg$ and $H\gamma\gamma$
vertices. We label them as $\Delta P^g$ and $\Delta P^\gamma$, respectively.
In Ref.~\cite{Cheung:2014oaa}, we have computed all the
Higgs-mediated CP-violating contributions to the
electric dipole moments (EDMs) and compared to
existing constraints from the EDM measurements of Thallium,
neutron, Mercury, and Thorium monoxide.
Nevertheless, we are content with CP-conserving fits in this work.
\item We impose the existing limits of chargino and stau masses when
we investigate specifically their effects on the vertex of $H\gamma\gamma$.
The current limits on the chargino and stau masses are \cite{pdg}
\[
M_{\tilde{\chi}^\pm} > 103.5 \;{\rm GeV},\qquad
M_{\tilde{\tau}_1} > 81.9 \;{\rm GeV} \;.
\]
Similarly, the current limits for stop and sbottom masses quoted in PDG
are \cite{pdg}
\[
M_{{\tilde t}_1} > 95.7\;{\rm GeV}\,, \qquad
M_{{\tilde b}_1} > 89 \;{\rm GeV}\,,
\]
which will be applied in calculating the effects in $H\gamma\gamma$
and $Hgg$ vertices.
Note that the current LHC limits on the stop and sbottom masses
are $M_{{\tilde t}_1} > 650$ GeV and $M_{{\tilde b}_1} > 600$ GeV
at $95$\% confidence level in a simplified model
with $M_{\tilde{\chi}_1^0} = 0$ GeV~\cite{pdg}.
However, these limits often rely on underlying assumptions about the
search strategies and the mass of the lightest neutralino.
Therefore, we conservatively take the
above mass limits on the stops and sbottoms in most of
the analysis.
\item Since we shall try to extract the implications of
the current Higgs signal strength data
for the SUSY spectrum, which in practice affects the lightest Higgs boson
mass, we also calculate the corresponding Higgs boson mass
and impose the current Higgs mass constraint of $M_{H_1} \sim 125.5 \pm 6$
GeV, taken at roughly the $3$-$\sigma$ level.
\end{enumerate}
The organization of the work is as follows. In the next section,
we describe the convention and formulas for all the couplings used
in this work.
In Sec. III, we describe various CP-conserving fits and present the results.
In Sec. IV, we specifically investigate the SUSY parameter
space of charginos, staus, stops, and sbottoms.
We put the synopsis and conclusions in Sec. V.
\section{Formalism}
For the Higgs couplings to SM particles we assume that the observed
Higgs boson is a generic CP-mixed state without carrying any
definite CP-parity. We follow the conventions and notation of
{\tt CPsuperH}~\cite{Lee:2003nta}.
\subsection{Yukawa couplings}
The Higgs sector of the MSSM
is essentially the same as the Type II of the 2HDM. More details of the
2HDM can be found in Ref.~\cite{Cheung:2013rva}.
In the MSSM, the first Higgs doublet couples to the down-type quarks
and charged leptons while the second Higgs doublet couples to the up-type
quarks only. After both doublets take on vacuum expectation values (VEVs),
we can rotate the neutral components $\phi^0_1,\,\phi^0_2$ and $a$
into mass eigenstates $H_{1,2,3}$ through a mixing matrix $O$ as follows:
\[
(\phi^0_1,\, \phi^0_2,\, a )_\alpha^T = O_{\alpha i} (H_1,\,H_2,\, H_3)_i^T \;,
\]
with the mass ordering $M_{H_1} \le M_{H_2} \le M_{H_3}$.
We do not specify which Higgs boson is the observed one; in fact,
it can be any of the $H_{1,2,3}$.
We have shown in Ref.~\cite{Cheung:2013rva} that the bottom and lepton Yukawa
couplings can be expressed in terms of the top Yukawa coupling in general
2HDM. We can therefore afford a minimal set of input parameters.
The effective Lagrangian governing the interactions of the neutral
Higgs bosons with quarks and charged leptons is
\begin{equation}
\label{eq1}
{\cal L}_{H\bar{f}f}\ =\ - \sum_{f=u,d,l}\,\frac{g m_f}{2 M_W}\,
\sum_{i=1}^3\, H_i\, \bar{f}\,\Big( g^S_{H_i\bar{f}f}\, +\,
ig^P_{H_i\bar{f}f}\gamma_5 \Big)\, f\ .
\end{equation}
At the tree level, $(g^S,g^P) = (O_{\phi_1 i}/c_\beta, -O_{ai}
\tan\beta)$ and $(g^S, g^P) = (O_{\phi_2 i}/s_\beta, -O_{ai}
\cot\beta)$ for $f=(\ell,d)$ and $f=u$, respectively, and
$\tan\beta\equiv v_2/v_1$ is the ratio of the VEVs of the two doublets.
Threshold corrections to the down-type Yukawa couplings
change the relation between the Yukawa coupling $h_d$ and mass $m_d$ as
\footnote{
In general settings, $\kappa_{d}$ and $\kappa_{s}$ are usually the same,
but $\kappa_{b}$ could be very different because of the third generation
squarks. However, our main concern in this work is the third-generation
Yukawa couplings. Thus, we shall focus on $\kappa_b$ although we are using
the conventional notation $\kappa_d$.
}
\begin{equation}
\label{eq2}
h_d =\frac{\sqrt{2} m_d}{v \cos\beta}\,\frac{1}
{1+\kappa_d\tan\beta}\,.
\end{equation}
Thus, the Yukawa couplings of neutral Higgs-boson mass eigenstates $H_i$ to
the down-type quarks are modified as
\begin{eqnarray}
\label{gSHbb}
g^S_{H_i\bar{d}d} & =& {\rm Re}\, \bigg(\,
\frac{1}{1\, +\, \kappa_d\,\tan\beta}\,\bigg)\,
\frac{O_{\phi_1 i}}{\cos\beta}
\ +\ {\rm Re}\, \bigg(\, \frac{\kappa_d}{1\, +\,
\kappa_d\, \tan\beta}\,\bigg)\
\frac{O_{\phi_2 i}}{\cos\beta}\nonumber\\
&& +\: {\rm Im}\, \bigg[\,
\frac{ \kappa_d\, (\tan^2\beta\, +\, 1)}{1\, +\,
\kappa_d\, \tan\beta}\,\bigg]\
O_{ai}\, , \nonumber\\[0.35cm]
\label{gPHbb}
g^P_{H_i\bar{d}d} & =& -\, {\rm Re}\, \bigg(\,
\frac{ \tan\beta\, -\, \kappa_d}{1\, +\, \kappa_d \tan\beta}\,\bigg)\, O_{ai}
\ +\ {\rm Im}\, \bigg(\, \frac{\kappa_d\,\tan\beta}{1\, +\,
\kappa_d\, \tan\beta}\,\bigg)\
\frac{O_{\phi_1 i}}{\cos\beta}\nonumber\\
&&-\: {\rm Im}\, \bigg(\,
\frac{\kappa_d}{1\, +\, \kappa_d\, \tan\beta}\,\bigg)\
\frac{O_{\phi_2 i}}{\cos\beta}\ .
\end{eqnarray}
In the MSSM,
neglecting the electroweak corrections and keeping only the most dominant
contributions, $\kappa_b$ can be split as \cite{Lee:2004}
\[
\kappa_b
= \epsilon_g + \epsilon_H ,
\]
where $\epsilon_g$ and $\epsilon_H$ are the contributions from the
sbottom-gluino exchange diagram and from stop-Higgsino diagram,
respectively. Their explicit expressions are
\[
\epsilon_g = \frac{2\alpha_s}{3\pi}M^*_3\mu^*
I(m^2_{\tilde{b}_1},m^2_{\tilde{b}_2},\vert M_3\vert^2) , \qquad
\epsilon_H = \frac{\vert h_t\vert^2}{16\pi^2}A^*_t\mu^*
I(m^2_{\tilde{t}_1},m^2_{\tilde{t}_2},\vert \mu \vert^2)\ ,
\]
where $M_3$ is the gluino mass, $\mu$ is the Higgsino mass parameter, and
$h_t$ and $A_t$ are the top-quark Yukawa and trilinear couplings, respectively.
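As an illustrative numerical sketch (not part of the fit itself), $\kappa_b = \epsilon_g + \epsilon_H$ can be evaluated for real, CP-conserving parameters. The explicit form of the loop function $I(a,b,c)$ is not spelled out above; the totally symmetric one-loop expression standard in the literature is assumed here, and all mass inputs below are hypothetical.

```python
import numpy as np

def I_loop(a, b, c):
    """Totally symmetric one-loop function I(a, b, c) with mass-squared
    arguments (assumed standard form; its explicit expression is not
    given in the text).  Arguments must be pairwise distinct."""
    return (a*b*np.log(a/b) + b*c*np.log(b/c) + c*a*np.log(c/a)) \
           / ((a - b)*(b - c)*(a - c))

def kappa_b(alpha_s, M3, mu, h_t, A_t, msb1, msb2, mst1, mst2):
    """kappa_b = eps_g + eps_H for real (CP-conserving) parameters."""
    # sbottom-gluino exchange contribution
    eps_g = (2.0*alpha_s/(3.0*np.pi))*M3*mu \
            * I_loop(msb1**2, msb2**2, M3**2)
    # stop-Higgsino exchange contribution
    eps_H = (abs(h_t)**2/(16.0*np.pi**2))*A_t*mu \
            * I_loop(mst1**2, mst2**2, mu**2)
    return eps_g + eps_H

# Hypothetical TeV-scale spectrum (all masses in GeV)
k = kappa_b(alpha_s=0.1, M3=1000.0, mu=500.0, h_t=1.0, A_t=1500.0,
            msb1=900.0, msb2=1100.0, mst1=700.0, mst2=1300.0)
```

For such a spectrum $\kappa_b$ indeed comes out well below the $|\kappa_d|<0.1$ range scanned in the fits below.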
\subsection{Couplings to gauge bosons}
\begin{itemize}
\item
Interactions of the Higgs bosons with the gauge bosons $Z$ and $W^\pm$
are described by
\begin{equation}
{\cal L}_{HVV} = g\,M_W \, \left(W^+_\mu W^{- \mu}\ + \
\frac{1}{2c_W^2}\,Z_\mu Z^\mu\right) \, \sum_i \,g_{_{H_iVV}}\, H_i
\end{equation}
where
\begin{equation}
g_{_{H_iVV}} = c_\beta\, O_{\phi_1 i}\: +\: s_\beta\, O_{\phi_2 i}\,.
\end{equation}
\item
Couplings to two photons:
the amplitude for the decay process
$H_i \rightarrow \gamma\gamma$ can be written as
\begin{equation} \label{hipp}
{\cal M}_{\gamma\gamma H_i}=-\frac{\alpha M_{H_i}^2}{4\pi\,v}
\bigg\{S^\gamma(M_{H_i})\,
\left(\epsilon^*_{1\perp}\cdot\epsilon^*_{2\perp}\right)
-P^\gamma(M_{H_i})\frac{2}{M_{H_i}^2}
\langle\epsilon^*_1\epsilon^*_2 k_1k_2\rangle
\bigg\}\,,
\end{equation}
where $k_{1,2}$ are the momenta of the two photons and
$\epsilon_{1,2}$ the polarization vectors of the corresponding photons,
$\epsilon^\mu_{1\perp} = \epsilon^\mu_1 - 2k^\mu_1 (k_2 \cdot
\epsilon_1) / M^2_{H_i}$, $\epsilon^\mu_{2\perp} = \epsilon^\mu_2 -
2k^\mu_2 (k_1 \cdot \epsilon_2) / M^2_{H_i}$ and $\langle \epsilon_1
\epsilon_2 k_1 k_2 \rangle \equiv \epsilon_{\mu\nu\rho\sigma}\,
\epsilon_1^\mu \epsilon_2^\nu k_1^\rho k_2^\sigma$.
The decay rate of $H_i\to \gamma\gamma$ is
proportional to $|S^\gamma|^2 + |P^\gamma|^2$.
The form factors are given by
\begin{eqnarray}
S^\gamma(M_{H_i})&=&2\sum_{f=b,t,\tau} N_C\,
Q_f^2\, g^{S}_{H_i\bar{f}f}\,F_{sf}(\tau_{f})
- g_{_{H_iVV}}F_1(\tau_{W})
+ \Delta S^\gamma_i \,, \nonumber \\
P^\gamma(M_{H_i})&=&2\sum_{f=b,t,\tau}
N_C\,Q_f^2\,g^{P}_{H_i\bar{f}f}\,F_{pf}(\tau_{f})
+ \Delta P^\gamma_i \,,
\end{eqnarray}
where $\tau_{x}=M_{H_i}^2/4m_x^2$, and $N_C=3$ for quarks and $N_C=1$ for
taus.
In the MSSM, the factors $\Delta S^\gamma_i$ and $\Delta P^\gamma_i$
receive contributions from charginos, sfermions, and the charged Higgs boson:
\begin{eqnarray}
\Delta S^\gamma_i &=& \sqrt{2} g\,\sum_{f=\tilde{\chi}^\pm_1,\tilde{\chi}^\pm_2} \,
g^{S}_{H_i\bar{f}f}\,\frac{v}{m_f} F_{sf}(\tau_{if})
\nonumber \\ &&
- \sum_{\tilde{f}_j=\tilde{t}_1,\tilde{t}_2,\tilde{b}_1,\tilde{b}_2,
\tilde{\tau}_1,\tilde{\tau}_2}
N_C\, Q_f^2g_{H_i\tilde{f}^*_j\tilde{f}_j}
\frac{v^2}{2m_{\tilde{f}_j}^2} F_0(\tau_{i\tilde{f}_j})
- g_{_{H_iH^+H^-}}\frac{v^2}{2 M_{H^\pm}^2} F_0(\tau_{iH^\pm})
\,, \nonumber \\[3mm]
\Delta P^\gamma_i &=&\sqrt{2}g\,\sum_{f=\tilde{\chi}^\pm_1,\tilde{\chi}^\pm_2}
g^{P}_{H_i\bar{f}f} \,\frac{v}{m_f}
F_{pf}(\tau_{if}) \label{eq4}
\,,
\end{eqnarray}
where the couplings to charginos, sfermions, and the charged Higgs boson are
defined by the interactions:
\begin{eqnarray}
{\cal L}_{H\widetilde{\chi}^+\widetilde{\chi}^-}
&=&-\frac{g}{\sqrt{2}}\sum_{i,j,k} H_k
\overline{\widetilde{\chi}_i^-}
\left(g_{H_k\tilde{\chi}^+_i\tilde{\chi}^-_j}^{S}+i\gamma_5
g_{H_k\tilde{\chi}^+_i\tilde{\chi}^-_j}^{P}\right)
\widetilde{\chi}_j^-\,, \nonumber \\
{\cal L}_{H\tilde{f}\tilde{f}}&=&v\sum_{f=u,d}\,g_{H_i\tilde{f}^*_j\tilde{f}_k}
(H_i\,\tilde{f}^*_j\,\tilde{f}_k)\,,
\nonumber \\
{\cal L}_{3H} & = & \: v\,\sum_{i=1}^3 g_{_{H_iH^+H^-}}\,H_iH^+H^-\,.
\end{eqnarray}
We shall describe the couplings of the Higgs boson to the charginos,
sfermions, and charged Higgs boson a little later.
\item Couplings to two gluons:
similar to $H\to\gamma\gamma$,
the amplitude for the decay process
$H_i \rightarrow gg$ can be written as
\begin{equation} \label{higg}
{\cal M}_{gg H_i}=-\frac{\alpha_s\,M_{H_i}^2\,\delta^{ab}}{4\pi\,v}
\bigg\{S^g(M_{H_i})
\left(\epsilon^*_{1\perp}\cdot\epsilon^*_{2\perp}\right)
-P^g(M_{H_i})\frac{2}{M_{H_i}^2}
\langle\epsilon^*_1\epsilon^*_2 k_1k_2\rangle
\bigg\}\,,
\end{equation}
where $a$ and $b$ ($a,b=1$ to 8) are indices of the eight $SU(3)$
generators in the adjoint representation.
The decay rate of $H_i\to gg $ is
proportional to $|S^g|^2 + |P^g|^2$.
The fermionic contributions and additional loop contributions from
squarks in the MSSM to
the scalar and pseudoscalar form factors are given by
\begin{eqnarray}
S^g(M_{H_i})&=&\sum_{f=b,t}
g^{S}_{H_i\bar{f}f}\,F_{sf}(\tau_{f}) +
\Delta S^g_i\,,
\nonumber \\
P^g(M_{H_i})&=&\sum_{f=b,t}
g^{P}_{H_i\bar{f}f}\,F_{pf}(\tau_{f}) +
\Delta P^g_i \,,
\end{eqnarray}
with
\begin{eqnarray}
\Delta S^g_i &=& -\sum_{\tilde{f}_j=\tilde{t}_1,\tilde{t}_2,\tilde{b}_1,\tilde{b}_2}
g_{H_i\tilde{f}^*_j\tilde{f}_j}
\frac{v^2}{4m_{\tilde{f}_j}^2} F_0(\tau_{i\tilde{f}_j}) \,, \nonumber \\
\Delta P^g_i &=& 0\,, \label{eq3}
\end{eqnarray}
where $\Delta P^g_i=0$ because
there are no colored SUSY fermions in the MSSM that can
contribute to $\Delta P^g_i$ at the one-loop level.
\end{itemize}
\subsection{Interactions of neutral Higgs bosons with charginos, sfermions,
and charged Higgs}
The interactions between the Higgs bosons and charginos are described
by the following Lagrangian:
\begin{eqnarray}
{\cal L}_{H\widetilde{\chi}^+\widetilde{\chi}^-}
&=&-\frac{g}{\sqrt{2}}\sum_{i,j,k} H_k
\overline{\widetilde{\chi}_i^-}
\left(g_{H_k\tilde{\chi}^+_i\tilde{\chi}^-_j}^{S}+i\gamma_5
g_{H_k\tilde{\chi}^+_i\tilde{\chi}^-_j}^{P}\right)
\widetilde{\chi}_j^-\,,
\nonumber \\
g_{H_k\tilde{\chi}^+_i\tilde{\chi}^-_j}^{S}&=&\frac{1}{2}\left\{
[(C_R)_{i1}(C_L)^*_{j2}G^{\phi_1}_k+(C_R)_{i2}(C_L)^*_{j1}G^{\phi_2}_k]
+[i\leftrightarrow j]^* \right\}\,,
\nonumber \\
g_{H_k\tilde{\chi}^+_i\tilde{\chi}^-_j}^{P} &=&\frac{i}{2}\left\{
[(C_R)_{i1}(C_L)^*_{j2}G^{\phi_1}_k+(C_R)_{i2}(C_L)^*_{j1}G^{\phi_2}_k]
-[i\leftrightarrow j]^* \right\}\,, \label{eq13}
\end{eqnarray}
where $G^{\phi_1}_k=(O_{\phi_1 k}-is_\beta O_{ak})$,
$G^{\phi_2}_k=(O_{\phi_2 k}-ic_\beta O_{ak})$,
$i,j=1,2$, and $k=1,2,3$.
The chargino mass matrix in the $(\tilde{W}^-,\tilde{H}^-)$ basis
\begin{eqnarray}
{\cal M}_C = \left(\begin{array}{cc}
M_2 & \sqrt{2} M_W\, c_{\beta} \\[2mm]
\sqrt{2} M_W\, s_{\beta} & \mu
\end{array}\right)\, , \label{eq14}
\end{eqnarray}
is diagonalized by two different unitary matrices via
$ C_R{\cal M}_C C_L^\dagger ={\sf diag}\{M_{\tilde{\chi}^\pm_1},\,
M_{\tilde{\chi}^\pm_2}\}$, where
$M_{\tilde{\chi}^\pm_1} \leq M_{\tilde{\chi}^\pm_2}$.
The chargino mixing matrices $(C_L)_{i\alpha}$ and $(C_R)_{i\alpha}$
relate the electroweak eigenstates to the mass eigenstates, via
\begin{eqnarray}
\tilde{\chi}^-_{\alpha L} &=&
(C_L)^*_{i \alpha } \tilde{\chi}_{iL}^-\,,\qquad
\tilde{\chi}^-_{\alpha L}\ =\ (\tilde{W}^-, \tilde{H}^-)_L^T\,,\nonumber\\
\tilde{\chi}^-_{\alpha R} &=& (C_R)^*_{i \alpha} \tilde{\chi}_{iR}^-\,,\qquad
\tilde{\chi}^-_{\alpha R}\ =\ (\tilde{W}^-, \tilde{H}^-)_R^T\,.
\end{eqnarray}
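The bi-unitary diagonalization above can be checked numerically with a singular-value decomposition; the following sketch uses hypothetical values of $M_2$, $\mu$, and $\tan\beta$.

```python
import numpy as np

# Hypothetical inputs (GeV): wino mass M2, Higgsino mass mu, and tan(beta)
M2, mu, tb, MW = 200.0, 300.0, 10.0, 80.4
cb, sb = 1.0/np.sqrt(1.0 + tb**2), tb/np.sqrt(1.0 + tb**2)

# Chargino mass matrix in the (wino, Higgsino) basis, cf. Eq. (eq14)
Mc = np.array([[M2,                 np.sqrt(2.0)*MW*cb],
               [np.sqrt(2.0)*MW*sb, mu                ]])

# SVD: Mc = U diag(s) V^dagger, so C_R = U^dagger and C_L = V^dagger.
# numpy returns singular values in descending order; reverse them to
# enforce the mass ordering M_1 <= M_2 used in the text.
u, s, vh = np.linalg.svd(Mc)
C_R, C_L, masses = u[:, ::-1].conj().T, vh[::-1, :], s[::-1]

# Bi-unitary diagonalization: C_R Mc C_L^dagger = diag(M_1, M_2)
assert np.allclose(C_R @ Mc @ C_L.conj().T, np.diag(masses))
```

The same construction applies for complex $M_2$, $\mu$ (CP-violating case), since the SVD exists for any complex matrix.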
The Higgs-sfermion-sfermion interaction can be written in terms of the sfermion mass
eigenstates as
\begin{equation}
{\cal L}_{H \tilde{f} \tilde{f}}
= v \sum_{f=u,d} g_{H_i \tilde{f}^*_j \tilde{f}_k}
(H_i \tilde{f}^*_j \tilde{f}_k)\,, \label{eq16}
\end{equation}
where
\[
v g_{H_i \tilde{f}^*_j \tilde{f}_k} = (\Gamma^{\alpha \tilde{f}^* \tilde{f}})_{\beta \gamma}
O_{\alpha i} U^{\tilde{f}^*}_{\beta j}
U^{\tilde{f}}_{\gamma k}\,,
\]
with $\alpha = (\phi_1,\phi_2,a) = (1,2,3)$, \, $\beta, \gamma = L, R $, \,
$ i = (H_1, H_2, H_3) = (1,2,3)$ and $j,k = 1,2$. The expressions for the couplings
$\Gamma^{\alpha \tilde{f}^* \tilde{f}}$ are shown in \cite{Lee:2003nta}.
The stop and sbottom mass matrices may conveniently be written in the
$(\tilde{q}_L,\tilde{q}_R)$ basis as
\begin{eqnarray}
\tilde{\cal M}^2_q = \left(\begin{array}{cc}
M^2_{\tilde{Q}_3} + m^2_q + c_{2\beta}M^2_Z(T^q_z - Q_q s^2_W) & h^*_q v_q (A^*_q - \mu R_q)/
\sqrt{2} \\[2mm]
h_q v_q (A_q - \mu^* R_q)/\sqrt{2} &
M^2_{\tilde{R}_3} + m^2_q + c_{2\beta} M^2_Z Q_q s^2_W
\end{array}\right)\, , \label{eq17}
\end{eqnarray}
with $q=t,b, \, R=U,D, \, T^t_z=-T^b_z=1/2, \, Q_t=2/3, \, Q_b=-1/3, \, v_b=v_1, \, v_t=v_2,
\, R_b=\tan\beta =v_2/v_1, \, R_t=\cot\beta$, and $h_q$ is the Yukawa coupling of
the quark $q$. On the other hand, the stau mass matrix is written in the
$(\tilde{\tau}_L,\tilde{\tau}_R)$ basis as
\begin{eqnarray}
\tilde{\cal M}^2_{\tau} = \left(\begin{array}{cc}
M^2_{\tilde{L}_3} + m^2_{\tau} + c_{2\beta}M^2_Z(s^2_W - 1/2) & h^*_{\tau} v_1
(A^*_{\tau} - \mu \tan\beta)/\sqrt{2} \\[2mm]
h_{\tau} v_1 (A_{\tau} - \mu^* \tan\beta)/\sqrt{2} &
M^2_{\tilde{E}_3} + m^2_{\tau} + c_{2\beta} M^2_Z s^2_W
\end{array}\right)\, . \label{eq18}
\end{eqnarray}
The $2 \times 2$ sfermion mass matrix $\tilde{M}^2_f$ for $f=t,b$ and
$\tau$ is diagonalized by a unitary matrix $U^{\tilde{f}}$ :
$U^{\tilde{f} \dagger} \tilde{M}^2_f U^{\tilde{f}}= {\bf
diag}(m^2_{\tilde{f}_1},m^2_{\tilde{f}_2})$ with $m^2_{\tilde{f}_1}
\leq m^2_{\tilde{f}_2}$. The mixing matrix $U^{\tilde{f}}$ relates the
electroweak eigenstates $\tilde{f}_{L,R}$ to the mass eigenstates
$\tilde{f}_{1,2}$, via
\[
(\tilde{f}_L,\tilde{f}_R)^T_{\alpha} = U^{\tilde{f}}_{\alpha i} (\tilde{f}_1,\tilde{f}_2)^T_i \, .
\]
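The diagonalization $U^{\tilde f\dagger}\tilde{\cal M}^2_f U^{\tilde f}={\bf diag}(m^2_{\tilde f_1},m^2_{\tilde f_2})$ can be sketched numerically for the stop sector; the soft parameters below are hypothetical, and the CP-conserving case (real matrix entries) is taken.

```python
import numpy as np

# Hypothetical stop-sector inputs (GeV), CP-conserving (all real)
MQ3, MU3, mt = 700.0, 600.0, 173.0     # soft masses and top mass
At, mu, tb = 1500.0, 300.0, 10.0       # trilinear A_t, mu, tan(beta)
MZ, sw2 = 91.19, 0.231                 # Z mass and sin^2(theta_W)
c2b = (1.0 - tb**2)/(1.0 + tb**2)      # cos(2 beta)
vt = 246.0*tb/np.sqrt(1.0 + tb**2)     # v_t = v_2 = v sin(beta)
ht = np.sqrt(2.0)*mt/vt                # top Yukawa coupling

# 2x2 stop mass-squared matrix in the (tL, tR) basis, cf. Eq. (eq17),
# with T^t_z = 1/2, Q_t = 2/3, and R_t = cot(beta)
MLL = MQ3**2 + mt**2 + c2b*MZ**2*(0.5 - (2.0/3.0)*sw2)
MRR = MU3**2 + mt**2 + c2b*MZ**2*(2.0/3.0)*sw2
MLR = ht*vt*(At - mu/tb)/np.sqrt(2.0)
M2stop = np.array([[MLL, MLR],
                   [MLR, MRR]])

# Hermitian diagonalization; eigh returns ascending eigenvalues and a
# unitary U with U^dagger M^2 U = diag(m1^2, m2^2), i.e. m1 <= m2
m2, U = np.linalg.eigh(M2stop)
mst1, mst2 = np.sqrt(m2)
```

The columns of `U` then give the $(\tilde t_L,\tilde t_R)$ content of the mass eigenstates $\tilde t_{1,2}$, in the convention of the mixing relation above.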
Interactions between the Higgs bosons and the charged Higgs boson
can be found in Ref.~\cite{Cheung:2013rva}.
\section{Data, Fits, and Results}
\subsection{Data}
Our previous works \cite{Cheung:2013kla,Cheung:2013rva,Cheung:2014oaa}
were performed
with the data of summer 2013.
Very recently, we also updated the
model-independent fits using the data of
summer 2014 \cite{Cheung:2014noa}.
The whole set of Higgs signal strength data on $H\to \gamma\gamma$, $ZZ^* \to 4 \ell$,
$WW^* \to \ell\nu \ell\nu$, $\tau\tau$, and $b\bar b$ are listed
in Ref.~\cite{Cheung:2014noa}. The most significant changes
since summer 2013
are the $H\to \gamma\gamma$ data from both ATLAS and CMS.
The ATLAS Collaboration updated their best-measured value from
$\mu_{ggH+ttH} = 1.6 \pm 0.4$
to $\mu_{\rm inclusive}=1.17 \pm 0.27$ \cite{atlas_zz_2014},
while the CMS $H\to\gamma\gamma$ data underwent a dramatic change
from $\mu_{\rm untagged}=0.78\,^{+0.28}_{-0.26}$
to $\mu_{ggH}= 1.12 \,^{+0.37}_{-0.32}$ \cite{cms_aa_2014}.
Other notable differences can be found in Ref.~\cite{Cheung:2014noa}.
The $\chi^2_{\rm SM}/$d.o.f. for the SM is now at $16.76/29$,
which corresponds to a $p$-value of $0.966$.
\subsection{CP-Conserving (CPC) Fits}
We consider the CP-conserving MSSM and use the most updated Higgs
boson signal strengths to constrain a minimal set of parameters under
various conditions.
Regarding the $i$-th Higgs boson $H_i$ as the
candidate for the 125 GeV Higgs boson,
the varying parameters are:
\begin{itemize}
\item
the up-type Yukawa coupling $C_u^S \equiv g^S_{H_i\bar{u}u}=
O_{\phi_2 i}/s_\beta$, see Eq.~(\ref{eq1}),
\item
the ratio of the VEVs of the two Higgs doublets $\tan\beta\equiv v_2/v_1$,
\item the parameter $\kappa_d$ (assumed real) quantifying the modification
between the down-type quark mass and Yukawa coupling due to radiative
corrections, as shown in Eq.~(\ref{eq2}),
\item $\Delta S^\gamma\equiv \Delta S^\gamma_i$ as in Eq.~(\ref{eq4}),
\item $\Delta S^g\equiv \Delta S^g_i$ as in Eq.~(\ref{eq3}), and
\item the deviation in the total decay width of the observed
Higgs boson: $\Delta \Gamma_{\rm tot}$.
\end{itemize}
The down-type and lepton-type Yukawa and the gauge-Higgs couplings
are derived as
\begin{eqnarray}
C_d^S&\equiv & g^S_{H_i\bar{d}d}= \bigg(\,
\frac{O_{\phi_1 i}+\kappa_d O_{\phi_2 i}}{1\, +\, \kappa_d\,\tan\beta}\,\bigg)\,
\frac{1}{\cos\beta} \,, \nonumber\\[2mm]
C_\ell^S&\equiv & g^S_{H_i\bar{\ell}\ell} =
\frac{O_{\phi_1 i}}{\cos\beta}\,, \nonumber\\[2mm]
C_v&\equiv & g_{H_iVV}
= c_\beta\, O_{\phi_1 i}\: +\: s_\beta\, O_{\phi_2 i} \label{eq5}
\end{eqnarray}
with
\begin{equation}
O_{\phi_1 i}=\pm\sqrt{1-s_\beta^2 (C_u^S)^2}\,, \ \ \
O_{\phi_2 i}=C_u^S s_\beta\,.
\end{equation}
In place of $\tan\beta$ we can use $C_v$ as a varying parameter, and
then $\tan\beta \;(t_\beta) $ would be determined by
\begin{equation}
\label{eq:tbsq}
t_\beta^2=\frac{(1-C_v^2)}{(C_u^S-C_v)^2}
=\frac{(1-C_v^2)}{\left[(C_u^S-1)+(1-C_v)\right]^2}
\,. \\[3mm]
\end{equation}
We note that $t_\beta= \infty$ when $(C_u^S-1)=-(1-C_v)<0$
\footnote{Note that $C_v\leq 1$ and $C_v$ is positive in our convention.}
while $t_\beta=1$ when $(C_u^S-1)=\pm\sqrt{1-C_v^2}-(1-C_v)$.
Therefore $t_\beta$ changes from $\infty$ to $1$ as $(C_u^S-1)$ deviates from
$-(1-C_v)$ by the amount $\pm\sqrt{1-C_v^2}$. This implies that the value
of $t_\beta$ becomes more and more sensitive to the deviation of $C_u^S$ from $1$
as $C_v$ approaches its SM value of $1$.
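A quick numerical roundtrip confirms that Eq.~(\ref{eq:tbsq}) exactly inverts the derived couplings of Eq.~(\ref{eq5}); the input values below are hypothetical.

```python
import numpy as np

# Hypothetical inputs: C_u^S slightly below 1, moderate tan(beta),
# and a small kappa_d
CuS, tb, kd = 0.98, 2.0, 0.05
cb, sb = 1.0/np.sqrt(1.0 + tb**2), tb/np.sqrt(1.0 + tb**2)

# Mixing-matrix entries for the observed Higgs (taking the + branch)
O2 = CuS*sb
O1 = np.sqrt(1.0 - sb**2*CuS**2)

# Derived couplings, cf. Eq. (eq5)
Cv  = cb*O1 + sb*O2
CdS = (O1 + kd*O2)/((1.0 + kd*tb)*cb)
ClS = O1/cb

# Invert Eq. (eq:tbsq): recover tan(beta) from (C_u^S, C_v) alone
tb_rec = np.sqrt(1.0 - Cv**2)/abs(CuS - Cv)
assert np.isclose(tb_rec, tb)   # the identity is exact
```

Note how close $C_v$ stays to $1$ even for this visible deviation of $C_u^S$, which is the sensitivity discussed above: near $C_v=1$ a tiny shift in $C_u^S$ moves $t_\beta$ by a large amount.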
\medskip
We are going to perform the following three categories of CPC fits
varying the stated parameters
while keeping the others at their SM values.
\begin{itemize}
\item{\bf CPC.II}
\begin{itemize}
\item{\bf CPC.II.2}: $C_u^S$, $\tan\beta$
($\kappa_d=\Delta\Gamma_{\rm tot}=\Delta S^\gamma = \Delta S^g = 0$ )
\item{\bf CPC.II.3}: $C_u^S$, $\tan\beta$, $\kappa_d$
($\Delta\Gamma_{\rm tot}=\Delta S^\gamma = \Delta S^g = 0$ )
\item{\bf CPC.II.4}: $C_u^S$, $\tan\beta$, $\kappa_d$, $\Delta \Gamma_{\rm tot}$
($\Delta S^\gamma = \Delta S^g = 0$ )
\end{itemize}
\item{\bf CPC.III}
\begin{itemize}
\item{\bf CPC.III.3}: $C_u^S$, $\tan\beta$, $\Delta S^\gamma$
($\kappa_d=\Delta\Gamma_{\rm tot}=\Delta S^g = 0$ )
\item{\bf CPC.III.4}: $C_u^S$, $\tan\beta$, $\Delta S^\gamma$, $\kappa_d$
($\Delta\Gamma_{\rm tot}= \Delta S^g = 0$ )
\item{\bf CPC.III.5}: $C_u^S$, $\tan\beta$, $\Delta S^\gamma$, $\kappa_d$, $\Delta
\Gamma_{\rm tot}$
($\Delta S^g = 0$ )
\end{itemize}
\item{\bf CPC.IV}
\begin{itemize}
\item{\bf CPC.IV.4}: $C_u^S$, $\tan\beta$, $\Delta S^\gamma$, $\Delta S^g$
($\kappa_d=\Delta\Gamma_{\rm tot}= 0$ )
\item{\bf CPC.IV.5}: $C_u^S$, $\tan\beta$, $\Delta S^\gamma$, $\Delta S^g$, $\kappa_d$
($\Delta\Gamma_{\rm tot}= 0$ )
\item{\bf CPC.IV.6}: $C_u^S$, $\tan\beta$, $\Delta S^\gamma$, $\Delta S^g$,
$\kappa_d$, $\Delta \Gamma_{\rm tot}$
\end{itemize}
\end{itemize}
Basically, the {\bf CPC.II}, {\bf CPC.III}, and {\bf CPC.IV} fits vary
($C_u^S$,$\tan\beta$),
($C_u^S$,$\tan\beta$,$\Delta S^\gamma$), and
($C_u^S$,$\tan\beta$,$\Delta S^\gamma$,$\Delta S^g$), respectively.
Each category of {\bf CPC} fits includes three fits:
the second fit adds $\kappa_d$ to the set of varying parameters and
$\Delta \Gamma_{\rm tot}$ is further varied in the third one.
The Arabic number at the end of each label denotes the
total number of varying parameters.
The quantity $\Delta S^\gamma$ is the deviation in the $H\gamma\gamma$
vertex factor beyond the effects of changing the Yukawa and
gauge-Higgs couplings; it receives contributions from any exotic
particles running in the triangular loop, for example the charginos,
charged Higgs bosons, sleptons, and squarks in the MSSM. Here we are
content with a varying $\Delta S^\gamma$ without specifying the
particle spectrum of the MSSM. In the next section we shall
specifically investigate the effects of charginos, staus,
stops, and sbottoms.
In the MSSM,
$\Delta S^g$ receives contributions only from colored SUSY particles,
namely the squarks running in the $Hgg$ vertex. The current limits on
squark masses are generally above 1 TeV, so
that $\Delta S^g$ is expected to be small.
Nevertheless, we do not restrict the size of $\Delta S^g$ in this fit
in order to see the full effect of $\Delta S^g$.
The parameter $\kappa_d$
arises from the loop corrections to the down-type Yukawa couplings. It changes
the relation between the mass and the Yukawa coupling of the down-type quarks.
We limit the range to $|\kappa_d | < 0.1$, as $|\kappa_d|$ is much smaller
than $0.1$ in most of the MSSM parameter space.
Although the charginos are constrained to be heavier than
103.5 GeV and sleptons to be heavier than 81.9 GeV \cite{pdg}, there
are still possibilities that the decays of
the 125.5 GeV Higgs boson into
neutralinos and another neutral Higgs
boson are kinematically allowed. These channels have not been
explicitly searched for, but we can take them into account by the
deviation $\Delta \Gamma_{\rm tot}$ in the total decay width of the
observed Higgs boson.
The best-fit points for the fits are summarized in Table~\ref{tab:best}.
We see that the $p$ values of the
{\bf CPC.II.2}, {\bf CPC.III.3}, and {\bf CPC.IV.4} fits are the
highest in each category.
Also, the $p$ value of the {\bf CPC.III.3} fit is slightly higher than that of
the {\bf CPC.IV.4} fit, followed by the {\bf CPC.II.2} fit.
\begin{sidewaystable}[thb!]
\caption{\small \label{tab:best}
The best-fit values for various {\bf CPC} fits.
The SM chi-square per degree of freedom is $\chi^2_{\rm SM}$/d.o.f.$=16.76/29$,
and $p$-value$=0.966$.
}
\begin{ruledtabular}
\begin{tabular}{ l|ccc|rcrrrrrrr}
Fits & $\chi^2$ & $\chi^2$/dof & $p$-value &
\multicolumn{9}{c}{Best-fit values} \\
& & & & $C_u^S$ & $\tan\beta$ &
$\Delta S^\gamma$ & $\Delta S^g$ & $\kappa_d$ & $\Delta{\Gamma}_{tot}$ &
$C_v$ & $C_{d}^{S}$ & $C_{\ell}^{S}$ \\
\hline\hline
{\bf CPC.II.2}
& $16.74$ & $0.620$ & $0.937$ & $1.011$ &
$0.111$ & $-$ & $-$ & $-$ & $-$ & $1.000$ & $1.000$ & $1.000$\\
{\bf CPC.II.3}
& $16.74$ & $0.644$ & $0.917$ & $1.011$ &
$0.194$ & $-$ & $-$ & $0.099$ & $-$ & $1.000$ & $1.000$ & $1.000$\\
{\bf CPC.II.4}
& $16.72$ & $0.669$ & $0.892$ & $1.023$ &
$0.312$ & $-$ & $-$ & $-0.079$ & $0.103$ & $1.000$ & $0.997$ & $0.998$\\
\hline
{\bf CPC.III.3}
& $15.50$ & $0.596$ & $0.947$ & $-0.930$ &
$0.194$ & $2.326$ & $-$ & $-$ & $-$ & $0.932$ & $1.003$ & $1.003$\\
{\bf CPC.III.4}
& $15.48$ & $0.619$ & $0.929$ & $-0.948$ &
$0.180$ & $2.402$ & $-$ & $-0.097$ & $-$ & $0.940$ & $1.036$ & $1.002$\\
{\bf CPC.III.5}
& $15.43$ & $0.643$ & $0.907$ & $1.061$ &
$0.100$ & $-0.938$ & $-$ & $0.100$ & $0.557$ & $1.000$ & $1.000$ & $1.000$\\
\hline
{\bf CPC.IV.4}
& $14.85$ & $0.594$ & $0.945$ & $-1.219$ &
$0.154$ & $2.893$ & $1.547$ & $-$ & $-$ & $0.943$ & $0.994$ & $0.994$\\
& $14.85$ & $0.594$ & $0.945$ & $-1.219$ &
$0.154$ & $2.893$ & $0.204$ & $-$ & $-$ & $0.943$ & $0.994$ & $0.994$\\
{\bf CPC.IV.5}
& $14.83$ & $0.618$ & $0.926$ & $-1.224$ &
$0.164$ & $2.902$ & $1.540$ & $0.088$ & $-$ & $0.935$ & $0.962$ & $0.993$\\
& $14.83$ & $0.618$ & $0.926$ & $-1.225$ &
$0.164$ & $2.902$ & $0.217$ & $0.088$ & $-$ & $0.935$ & $0.962$ & $0.993$\\
{\bf CPC.IV.6}
& $14.83$ & $0.645$ & $0.901$ & $-1.213$ &
$0.173$ & $2.868$ & $1.528$ & $0.082$ & $-0.071$ & $0.929$ & $0.962$ & $0.993$\\
& $14.83$ & $0.645$ & $0.901$ & $-1.213$ &
$0.173$ & $2.870$ & $0.213$ & $0.079$ & $-0.075$ & $0.929$ & $0.963$ & $0.993$\\
& $14.83$ & $0.645$ & $0.901$ & $1.022$ &
$2.600$ & $-1.228$ & $-0.180$ & $0.005$ & $-0.839$ & $0.782$ & $-0.811$ & $-0.837$\\
& $14.83$ & $0.645$ & $0.901$ & $1.022$ &
$2.600$ & $-1.228$ & $-1.288$ & $0.005$ & $-0.840$ & $0.782$ & $-0.811$ & $-0.837$\\
\end{tabular}
\end{ruledtabular}
\end{sidewaystable}
\begin{sidewaystable}[thb!]
\caption{\small \label{tab:best3}
The other local minima for various CPC fits.
}
\begin{ruledtabular}
\begin{tabular}{ l|ccc|rcrrrrrrr}
Fits & $\chi^2$ & $\chi^2$/dof & $p$-value &
\multicolumn{9}{c}{Best-fit values} \\
& & & & $C_u^S$ & $\tan\beta$ &
$\Delta S^\gamma$ & $\Delta S^g$ & $\kappa_d$ & $\Delta{\Gamma}_{tot}$ &
$C_v$ & $C_{d}^{S}$ & $C_{\ell}^{S}$ \\
\hline\hline
{\bf CPC.III.3}
& $15.68$ & $0.603$ & $0.944$ & $1.000$ &
$34.58$ & $-0.853$ & $-$ & $-$ & $-$ & $1.000$ & $1.039$ & $1.039$\\
{\bf CPC.III.4}
& $15.59$ & $0.624$ & $0.926$ & $0.999$ &
$9.332$ & $-1.026$ & $-$ & $-0.006$ & $-$ & $0.976$ & $-1.170$ & $-1.051$\\
\hline
{\bf CPC.IV.4}
& $15.23$ & $0.609$ & $0.936$ & $1.000$ &
$5.681$ & $-1.127$ & $-0.057$ & $-$ & $-$ & $0.940$ & $-1.002$ & $-1.002$\\
& $15.23$ & $0.609$ & $0.936$ & $1.000$ &
$5.695$ & $-1.126$ & $-1.395$ & $-$ & $-$ & $0.940$ & $-1.002$ & $-1.002$\\
{\bf CPC.IV.5}
& $15.22$ & $0.634$ & $0.914$ & $1.000$ &
$5.423$ & $-1.128$ & $-0.062$ & $0.002$ & $-$ & $0.934$ & $-0.980$ & $-0.999$\\
& $15.22$ & $0.634$ & $0.914$ & $1.000$ &
$5.429$ & $-1.127$ & $-1.387$ & $0.002$ & $-$ & $0.934$ & $-0.980$ & $0.999$\\
\end{tabular}
\end{ruledtabular}
\end{sidewaystable}
\subsection{Results}
Before we present descriptions of the confidence regions and
the correlations among the fitting parameters
$C_u^S$, $\tan\beta$, $\Delta S^\gamma$, $\Delta S^g$, $\kappa_d$, and
$\Delta\Gamma_{\rm tot}$, we look into the behavior of
$\Delta\chi^2$ versus $C_u^S$
in each category of fits.
In the {\bf CPC.II} fits, the minimum $\chi^2$ values are
$16.74$ ({\bf CPC.II.2}, {\bf CPC.II.3}) and $16.72$ ({\bf CPC.II.4})
(see Table~\ref{tab:best}), and
$\Delta\chi^2$ versus $C_u^S$ are shown in the upper row of Fig.~\ref{fig:chi2}.
The minima are located at
$C_u^S=1.011$ ({\bf CPC.II.2}, {\bf CPC.II.3}) and
$C_u^S=1.023$ ({\bf CPC.II.4})
and second local minima develop
around $C_u^S=-1$, but with $\Delta\chi^2 \gtrsim 5$.
It is clear that $C_u^S \approx 1$ is preferred much more than the negative
values.
The $\Delta\chi^2$
dependence on $C_u^S$ hardly changes by varying $\kappa_d$
as shown in the upper-middle frame.
With $\Delta\Gamma_{\rm tot}$ varying further,
we observe the dependence of $\Delta\chi^2$ on $C_u^S$
becomes broader by extending to the regions of $|C_u^S|>1$
as shown in the upper-right frame.
We also observe that the second local
minimum around $C_u^S=-1$ disappears when $\tan\beta \gtrsim 0.6$.
In the {\bf CPC.III} fits, the minimum $\chi^2$ values are
$15.50$ ({\bf CPC.III.3}), $15.48$ ({\bf CPC.III.4}), and $15.43$ ({\bf CPC.III.5}):
see Table~\ref{tab:best}, and
$\Delta\chi^2$ versus $C_u^S$ are shown in the middle row of Fig.~\ref{fig:chi2}.
The minima are located at
$C_u^S=-0.930$ ({\bf CPC.III.3}),
$C_u^S=-0.948$ ({\bf CPC.III.4}), and
$C_u^S=1.061$ ({\bf CPC.III.5}),
and second local minima develop
around $C_u^S=1$ ({\bf CPC.III.3} and {\bf CPC.III.4})
and $C_u^S=-1$ ({\bf CPC.III.5}), respectively.
In contrast to the {\bf CPC.II} fits, the
$\Delta\chi^2$ difference between the true and local minima
is tiny, $\left.\Delta\chi^2\right|_{\rm local}-
\left.\Delta\chi^2\right|_{\rm true} \lesssim 0.2$: see Table~\ref{tab:best3}.
The $\Delta\chi^2$ dependence on $C_u^S$ hardly changes by
varying $\kappa_d$ additionally (shown in the middle-middle frame),
but when $\Delta\Gamma_{\rm tot}$ is varied further,
the dependence of $\Delta\chi^2$ on $C_u^S$
becomes broader, as in the {\bf CPC.II} fits (see the middle-right
frame).
We observe that the true/local
minima around $C_u^S=-1$ disappear when $\tan\beta \gtrsim 0.6$.
In the {\bf CPC.IV} fits, the minimum $\chi^2$ values are
$14.85$ ({\bf CPC.IV.4}), $14.83$ ({\bf CPC.IV.5} and {\bf CPC.IV.6}):
see Table~\ref{tab:best}, and
$\Delta\chi^2$ versus $C_u^S$ are shown in the lower row of Fig.~\ref{fig:chi2}.
The minima are located at
$C_u^S=-1.219$ ({\bf CPC.IV.4}),
$C_u^S=-1.225$ ({\bf CPC.IV.5}), and
$C_u^S=-1.213$ and $1.022$ ({\bf CPC.IV.6}).
Second local minima develop
for {\bf CPC.IV.4} and {\bf CPC.IV.5} at $C_u^S=1$:
see Table~\ref{tab:best3}.
Similar to the {\bf CPC.III} fits the
$\Delta\chi^2$ difference between the true and local minima
is tiny for {\bf CPC.IV.4} and {\bf CPC.IV.5},
$\left.\Delta\chi^2\right|_{\rm local}-
\left.\Delta\chi^2\right|_{\rm true}\sim 0.4$: see Table~\ref{tab:best3}.
On the other hand,
in contrast to the {\bf CPC.III} fits,
any value of $C_u^S$ between $-2$ and $2$
is allowed at the $2$-$\sigma$ level and higher.
The behavior of $\Delta\chi^2$ by additionally
varying $\kappa_d$ and $\Delta\Gamma_{\rm tot}$
is the same as in the previous cases.
We again observe that the true
minima around $C_u^S=-1$ disappear when $\tan\beta \gtrsim 0.6$.
We show the confidence-level regions on the $(C_u^S, \tan\beta)$ plane
for three categories of {\bf CPC} fits:
{\bf CPC.II} (upper row), {\bf CPC.III} (middle row), and
{\bf CPC.IV} (lower row) in Fig.~\ref{fig:tanbeta}.
The confidence level (CL) regions shown are for
$\Delta \chi^2 \le 2.3$ (red), $5.99$ (green), and $11.83$ (blue)
above the minimum, which correspond to CLs of
$68.3\%$, $95\%$, and $99.7\%$, respectively. The best-fit point
is denoted by the triangle.
We observe that the plots are very close to those of the
Type II of the 2HDM \cite{Cheung:2013rva},
though the regions in general shrink by small amounts.
First of all,
the vertical $68.3\%$ CL (red) regions around $C_u^S=1$ can be understood from
Eq.~(\ref{eq:tbsq}) by observing
that the value of $t_\beta$ changes from $\infty$ to $1$ as $(C_u^S-1)$ deviates from
$-(1-C_v)$ by the amount $\pm\sqrt{1-C_v^2}$, and that there are
generally many points around $C_v=1$, as shown in
Fig.~\ref{fig:cv}.
In each category of fits, Fig.~\ref{fig:chi2} is helpful to understand
the basic behavior of the CL regions as $C_u^S$ is varied.
In the {\bf CPC.II} fits, the region around $C_u^S=1$ is much more preferred.
The negative $C_u^S$ values are not allowed at $68\%$ CL.
In the {\bf CPC.III} fits, the region around $C_u^S=-1$ now falls
within the $68.3\%$ CL region,
but $C_u^S=0$ is not allowed even at $99.7\%$ CL.
On the other hand,
the whole range of $-2 < C_u^S < 2$ is allowed at 95\% CL
for the {\bf CPC.IV} fits though not at $68.3\%$ CL.
In all the fits, the negative values of $C_u^S$ are not allowed at $95\%$ CL
when $\tan\beta \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 0.5$ is imposed, which is in general
required by the perturbativity of the top-quark Yukawa coupling.
The CL regions hardly change when $\kappa_d$ is additionally varied,
but the CL regions can extend to the regions of $|C_u^S|>1$
by further varying $\Delta\Gamma_{\rm tot}$.
The CL regions on the $(C_u^S, C_v)$ plane are shown in Fig.~\ref{fig:cv}
for the three categories of {\bf CPC} fits:
{\bf CPC.II} (upper row), {\bf CPC.III} (middle row), and
{\bf CPC.IV} (lower row).
The CL regions are labeled in the same way as in Fig.~\ref{fig:tanbeta}.
We observe $C_v\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 0.75$ at $68.3\%$ CL except in the {\bf CPC.IV.6} fit.
Otherwise,
one may make similar observations as in Fig.~\ref{fig:tanbeta} for
the behavior of the CL regions as $C_u^S$ is varied.
Figure~\ref{fig:cd} shows the CL regions on the $(C_u^S, C_d^S)$ plane
in the same format as Fig.~\ref{fig:tanbeta}. $C_d^S\approx 1$ is
preferred except for the {\bf CPC.IV.6} fit, in which the best-fit values
of $C_d^S$ are about $0.96$ and $-0.81$ when
$C_u^S\sim -1.2$ and $1.0$, respectively:
see Table~\ref{tab:best}.
Nevertheless, the difference in $\Delta\chi^2$ between the
true minima and the local minimum around the SM limit $(C_u^S,C_d^S)=(1,1)$ is
small.
The CL regions, centered around
the best-fit values, significantly expand as the fit progresses
from {\bf CPC.II} to {\bf CPC.III} and
from {\bf CPC.III} to {\bf CPC.IV},
as well as by adding
$\Delta\Gamma_{\rm tot}$ to the set of varying parameters.
We show the CL regions on the $(C_d^S, C_\ell^S)$ plane in Fig.~\ref{fig:cdcl}.
The format
is the same as in Fig.~\ref{fig:tanbeta}. At tree level, without including
$\kappa_d$, $C_\ell^S=C_d^S=O_{\phi_1i}/\cos\beta$,
as clearly seen in the left frames, and the true and local minima are located
at $(C_d^S, C_\ell^S)=(1,1)$ and $(-1,-1)$.
The tree-level relation is modified by introducing
$\kappa_d$, and local minima around $(C_d^S, C_\ell^S)=(-1,1)$ develop, as shown in
the middle frames.
Upon further varying $\Delta \Gamma_{\rm tot}$, we observe that $C_d^S=0$ is allowed at the
$99.7\%$ CL while $C_\ell^S=0$ is always excluded: see the right frames.
The CL regions involving $\kappa_d$ are shown in the left and middle
frames of
Fig.~\ref{fig:kddgam} for the {\bf CPC.II} (upper), {\bf CPC.III} (middle),
and {\bf CPC.IV} (lower) fits. We see that any value of $\kappa_d$ between
$-0.1$ and $0.1$ is allowed.
Note that in the most recent update
\cite{Cheung:2014noa}, when $\Delta \Gamma_{\rm tot}$ is the only
parameter allowed to vary, the fitted value of $\Delta \Gamma_{\rm
tot}$ is consistent with zero and is constrained by
$\Delta \Gamma_{\rm tot} < 0.97 \;{\rm MeV}$ at 95\% CL.
From the right frames of Fig.~\ref{fig:kddgam}, we observe that
the range of $\Delta \Gamma_{\rm tot}$ at 95\% CL (green region) varies from
$-2.4$ MeV to $3.3$ MeV ({\bf CPC.II.4}) and
$-2.9$ MeV to $5.6$ MeV ({\bf CPC.III.5} and {\bf CPC.IV.6}).
Such a large range is not very useful in constraining the exotic decay branching
ratio of the Higgs boson. Usually the number of
varying parameters
has to be kept small enough to draw a useful constraint on $\Delta \Gamma_{\rm tot}$.
We show the CL regions on the $(C_u^S, \Delta S^\gamma)$ plane
in Fig.~\ref{fig:dcp}
for the {\bf CPC.III} (upper) and {\bf CPC.IV} (lower) fits.
In the {\bf CPC.III} fits, the range of $\Delta S^\gamma$ is from
$-2.5\,(1)$ to $0.3\,(3.7)$ at $68.3\%$ CL for the positive (negative) $C_u^S$.
In the {\bf CPC.IV} fits, the range is a bit widened.
In Fig.~\ref{fig:dcpdcg}, we show the CL regions of the {\bf CPC.IV} fits
on the $(C_u^S, \Delta S^g)$ (upper) and $(\Delta S^\gamma, \Delta S^g)$ (lower)
planes.
We found that there are two bands of $\Delta S^g$ allowed
by data, which are consistent with the results in the model-independent fits
\cite{Cheung:2013kla}.
In the plots of $\Delta S^\gamma$ vs $\Delta S^g$
there are four almost degenerate solutions for the local minimum of $\chi^2$,
which differ from one another only by a very small amount. This happens because
$\Delta S^\gamma$ and $\Delta S^g$ satisfy a set of elliptical-type equations,
which imply two solutions for each of $\Delta S^\gamma$ and $\Delta S^g$
\cite{Cheung:2013kla}.
A quick summary of the {\bf CPC} fits is in order here. The confidence
regions in the various fits are similar to those of the Type II 2HDM. When
$\kappa_d$ and $\Delta \Gamma_{\rm tot}$ (not investigated in the
previous 2HDM fits) are allowed to vary, the confidence regions are
slightly and progressively enlarged due to more varying
parameters. In particular, the linear relation between $C_d^S$ and $C_\ell^S$
is ``diffused'' when $\kappa_d$ varies between $\pm 0.1$, as shown in
Eq.~(\ref{eq5}). The two possible solutions for $\Delta S^\gamma$ in
the {\bf CPC.III} and {\bf CPC.IV}
cases are consistent with what we have
found in previous works \cite{Cheung:2013kla,Cheung:2013rva}.
The best-fit point of each fit is shown in Table~\ref{tab:best} with
the corresponding $p$-value. It is clear that the SM fit provides the
best $p$-value, consistent with our previous works
\cite{Cheung:2013kla,Cheung:2013rva,Cheung:2014noa}.
Among the fits other than the
SM one, the {\bf CPC.III.3} fit
gives the smallest $\chi^2$ per degree of freedom
and thus the largest $p$-value. This demonstrates that the set of
parameters consisting of the top-Yukawa coupling $C_u^S$, $\tan\beta$
or equivalently the gauge-Higgs coupling $C_v$, and $\Delta S^\gamma$
is the minimal set of parameters that gives the best description of
the data, other than the SM.
In this fit, $C_v =0.93$ is very close
to the SM value while $C_u^S$ takes on the negative value $-0.93$,
which is then compensated by a relatively large $\Delta S^\gamma = 2.3$.
The derived $C_d^S$ and $C_\ell^S$ are very close to the SM values.
On the other hand, we show in Table~\ref{tab:best3} the other local
minima for various {\bf CPC} fits. We can see that
the {\bf CPC.III.3} fit indeed
has another local minimum, which has a $\chi^2$ very close to the
true minimum, at which $C_u^S$, $C_v$, $C_d^S$, and $C_\ell^S$ are
extremely close to their SM values while $\Delta S^\gamma = -0.85$.
\section{Implications for the MSSM spectrum}
In this section, we shall try to find the implications of
the current Higgs signal strength data
for the masses of charginos, sleptons, sbottoms, and stops, as
well as the $A$ parameters -- the SUSY spectrum -- through their virtual effects.
Supersymmetric particles can enter into the picture of the observed Higgs boson
via (i) exotic decays, e.g., into neutralinos, (ii) contributions to
$\Delta S^\gamma$ by charginos, sleptons, squarks, and (iii) contributions
to $\Delta S^g$ by squarks. Note that virtual effects are also present in
$\kappa_d$.
Unlike the fits considered in the previous section,
we restrict $\tan\beta$ to be larger than $1/2$
so that the top-quark Yukawa coupling remains perturbative and
the one-loop contributions of the SUSY particles to the $H\gamma\gamma$
and $Hgg$ vertices remain reliable.
Furthermore, as we shall see, the best-fit values of the couplings
are close to the SM ones and, accordingly, we take the lightest
Higgs state ($H_1$) to be the observed Higgs boson with
$M_{H_1}\sim 125.5$ GeV.
A comprehensive survey over the full parameter space of the MSSM is
a demanding task requiring a large amount of computing time.
Since we are in pursuit of the implications of the current Higgs data on
the SUSY spectrum, we consider the following three representative fits
instead of carrying out the comprehensive study:
\begin{itemize}
\item{\bf MSSM-1}: Only with chargino contributions.
\item{\bf MSSM-2}: Only with scalar-tau contributions.
\item{\bf MSSM-3}: With all chargino, scalar-tau, sbottom, and
stop contributions.
\end{itemize}
In the {\bf MSSM-1} fit, we assume all the scalar
fermions are too heavy to affect the Higgs signal strengths,
and the heavy scalar fermions can easily generate the
lightest Higgs boson weighing 125.5 GeV through the large
renormalization group running effects, such as in
Split SUSY~\cite{Giudice:2011cg}.
In this case, the lightest supersymmetric particle (LSP) is
in general a mixed state of bino, wino, and higgsinos.
In the {\bf MSSM-2} fit, except for the neutral LSP, we assume that only the scalar
taus are light enough to affect the Higgs signal strengths.
Similar to the {\bf MSSM-1} case, the heavy stops and sbottoms
can easily give $M_{H_1}\sim 125.5$ GeV. In this fit, we are
assuming the charginos are heavy and, therefore, the LSP is bino-like and
its mass is fixed by the bino mass parameter $M_1$.
In the {\bf MSSM-3} fit, we consider all the chargino, scalar-tau, sbottom,
and stop contributions. Unlike the previous two fits,
the mass spectrum of the Higgs sector is closely
correlated with the SUSY contributions to Higgs signal strengths.
To calculate the lightest Higgs mass, we adopt the approximate
two-loop analytical expression~\cite{Heinemeyer:1999be,Espinosa:2000df},
which is precise enough for the purpose of the current study.
For the heavier Higgses, we assume that they are decoupled or heavier
than $\sim 300$ GeV.
To be more specific, we are taking $M_A=300$ GeV and require
$|M_{H_1}-125.5\,{\rm GeV}|\leq 6$ GeV,
taking account of the $\sim 3$ GeV theoretical error of the lightest Higgs
mass.
Note that the charginos and sleptons have negligible effects on
the Higgs boson mass and thus we do not impose Higgs boson mass
constraints in the {\bf MSSM-1} and {\bf MSSM-2} fits.
\subsection{MSSM-1: Charginos only}
\begin{table}[t!]
\caption{\small \label{tab:char}
The best-fit values for chargino contributions to $\Delta
S^{\gamma}(\tilde{\chi}^\pm_1,\tilde{\chi}^\pm_2)$. We imposed
$M_{\tilde{\chi}^\pm_1}>103.5$ GeV and $\tan\beta > 1/2$.
The parameters $C_u^S$, $\tan\beta$, $M_2\in [-1$~TeV$,1$~TeV$]$, and
$\mu\in [0,1$~TeV$]$ are scanned.
}
\begin{ruledtabular}
\begin{tabular}{ l|ccc|rcrrrr}
Fits & $\chi^2$ & $\chi^2$/dof & $p$-value &
\multicolumn{6}{c}{Best-fit values} \\
& & & & $C_u^S$ & $\tan\beta$ & $\kappa_d$ &
$\Delta S^\gamma$ & $\Delta S^g$ & $\Delta{\Gamma}_{\rm tot}$ \\
\hline\hline
{\bf Charginos}
& $15.78$ & $0.631$ & $0.921$ & $0.992$ &
$1.513$ & $-$ & $-0.683$ & $-$ & $-$
\end{tabular}
\begin{tabular}{ lcccrrrrrrr}
& & & & \multicolumn{7}{c}{Best-fit values} \\
& & & & $C_v$ & $C_{d}^{S}$ & $C_{\ell}^{S}$ & $M_2$(GeV) & $\mu$(GeV) &
$M_{\tilde{\chi}^\pm_1}$(GeV) & $M_{\tilde{\chi}^\pm_2}$(GeV) \\
\hline\hline
& & & & $1.000$ & $1.019$ & $1.019$ & $184$ & $179$ & $103.7$ & $261.3$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
We first investigate the effects of charginos. The lower mass limit of
the chargino is
103.5 GeV, so that the only place where it can affect the Higgs boson is in the
loop factor $\Delta S^\gamma$. The MSSM parameters that affect the chargino mass
and the interactions with the Higgs boson are: $M_2$, $\mu$, and $\tan\beta$,
shown in Eqs.~(\ref{eq13}) and (\ref{eq14}).
We show in Fig.~\ref{fig:char} the confidence regions when we vary
$C_u^S$, $\tan\beta$, $M_2$, and $\mu$ with the additional constraint on the
chargino mass:
\[
M_{\tilde{\chi}^\pm_1} > 103.5 \;{\rm GeV} \;.
\]
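The interplay of $M_2$, $\mu$, and $\tan\beta$ in this constraint can be sketched with the standard tree-level chargino mass matrix, whose singular values give the two physical masses. The snippet below is an illustrative cross-check only (the $M_W$ value and sign conventions are our assumptions, not the paper's Eqs.~(\ref{eq13}) and (\ref{eq14})); it reproduces the best-fit masses of Table~\ref{tab:char} to within $\sim 0.5$ GeV:

```python
import math

MW = 80.4  # W-boson mass in GeV (assumed input)

def chargino_masses(M2, mu, tan_beta):
    """Tree-level chargino masses: singular values of the matrix
    C = [[M2, sqrt(2) MW sin(beta)], [sqrt(2) MW cos(beta), mu]]."""
    beta = math.atan(tan_beta)
    tr = M2**2 + mu**2 + 2.0 * MW**2               # trace of C^T C
    det = M2 * mu - MW**2 * math.sin(2.0 * beta)   # determinant of C
    rad = math.sqrt(tr**2 - 4.0 * det**2)
    return math.sqrt(0.5 * (tr - rad)), math.sqrt(0.5 * (tr + rad))

# Best-fit point of Table tab:char: M2 = 184 GeV, mu = 179 GeV, tan(beta) = 1.513
m1, m2 = chargino_masses(184.0, 179.0, 1.513)
print(f"M_chargino1 ~ {m1:.1f} GeV, M_chargino2 ~ {m2:.1f} GeV")
```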
The results would be analogous to those of the {\bf CPC.III.3} case if we did not
impose the chargino mass constraint and
the restriction $\tan\beta>1/2$.
In the {\bf CPC.III.3} fit,
$\Delta S^\gamma$ is free to vary both negatively and positively,
while here
the sign of the chargino contribution correlates with
$C_u^S$ in the parameter space of $M_2$ and $\mu$.
From the upper frames, we note that
$C_u^S$ is always positive under the requirement of $\tan\beta>1/2$ and
$\Delta S^\gamma$ tends to be positive, taking values in the range between
$-0.75$ and $1.7$ at $99.7\%$ CL.
In the lower-left frame, we show the $M_{\tilde\chi_1^\pm}$ dependence of the
CL regions of $\Delta S^\gamma$. We observe that all the points fall into the
$68.3\%$ CL region of $-0.25\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~\Delta S^\gamma\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.43$
when $M_{\tilde\chi_1^\pm}\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 200$ GeV. We also observe that
the $\mu$ parameter can be as low
as $70$ GeV when $M_2<0$ from the lower-right frame.
We show the best-fit point for the chargino contribution in
Table~\ref{tab:char}.
The best-fit point gives
$M_2 =184$ GeV and $\mu = 179$ GeV, which give the lightest
chargino mass $M_{\tilde{\chi}^\pm_1} = 103.7$ GeV, just above the current limit.
The corresponding $\Delta S^\gamma \approx -0.68$.
The $p$-value is slightly worse than
the {\bf CPC.III.3} case.
\subsection{MSSM-2: Scalar taus}
\begin{table}[t!]
\caption{\small \label{tab:stau}
The best-fit values for stau contributions to $\Delta
S^{\gamma}(\tilde{\tau}_1,\tilde{\tau}_2)$. We set $M_{E_3}=M_{L_3}$ and imposed
$\tan\beta > 1/2$, $\mu > 1$ TeV, and $M_{\tilde{\tau}_1}>81.9$ GeV.
The scanning parameters are
$C_u^S$, $\tan\beta$, $M_{L_3}\in [0,1$~TeV$]$, $\mu\in
[1,2$~TeV$]$, and $A_{\tau}\in [-1$~TeV$,1$~TeV$]$.
}
\begin{ruledtabular}
\begin{tabular}{ l|ccc|rcrrrr}
Fits & $\chi^2$ & $\chi^2$/dof & $p$-value &
\multicolumn{6}{c}{Best-fit values} \\
& & & & $C_u^S$ & $\tan\beta$ & $\kappa_d$ &
$\Delta S^\gamma$ & $\Delta S^g$ & $\Delta{\Gamma}_{\rm tot}$ \\
\hline\hline
{\bf Scalar taus}
& $15.68$ & $0.653$ & $0.899$ & $1.000$ &
$47.14$ & $-$ & $-0.854$ & $-$ & $-$
\end{tabular}
\begin{tabular}{ lcccrrrrrrrr}
&&& & \multicolumn{8}{c}{Best-fit values} \\
&&&& $C_v$ & $C_{d}^{S}$ & $C_{\ell}^{S}$
& $M_{L_3}$(GeV) & $\mu$(GeV) & $A_{\tau}$ (GeV) & $M_{\tilde{\tau}_1}$(GeV) &
$M_{\tilde{\tau}_2}$(GeV) \\
\hline\hline
&&&& $1.000$ & $1.040$ & $1.040$ & $323$ & $1075$ & $-43.2$ & $132.3$ & $442.4$
\\
\end{tabular}
\end{ruledtabular}
\end{table}
The staus contribute to $\Delta S^\gamma$ in a way similar to charginos.
The SUSY soft parameters that affect the stau contributions are
the left- and right-handed slepton masses $M_{L_3}$ and $M_{E_3}$, the
$A$ parameter $A_\tau$, and the $\mu$ parameter.
We are taking $\mu>1$ TeV to avoid possibly large chargino
contributions to $\Delta S^\gamma$.
The $2\times 2$ stau
mass matrix is diagonalized to give the two mass eigenstates $\tilde{\tau}_1$
and $\tilde{\tau}_2$, shown in Eqs.~(\ref{eq16}) and (\ref{eq18}).
The current mass limit on the stau is
$M_{\tilde{\tau}_1} > 81.9\; {\rm GeV}$ \cite{pdg}.
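A tree-level sketch of this diagonalization is given below. It is an illustrative cross-check only: the D-term and left-right mixing conventions, the on-shell $\sin^2\theta_W$, and the pole tau mass are our assumptions, and it reproduces the best-fit masses of Table~\ref{tab:stau} only to within a few GeV (the residual difference plausibly coming from the tau-mass and scheme conventions):

```python
import math

MZ, MW, MTAU = 91.19, 80.4, 1.777   # boson and tau masses in GeV (assumed inputs)
SW2 = 1.0 - MW**2 / MZ**2           # on-shell sin^2(theta_W), an assumption

def stau_masses(ML, ME, mu, A_tau, tan_beta):
    """Tree-level stau masses: eigenvalues of the 2x2 mass-squared matrix
    with soft masses ML, ME, D-terms, and L-R mixing m_tau (A_tau - mu tan_beta)."""
    c2b = math.cos(2.0 * math.atan(tan_beta))
    mLL = ML**2 + MTAU**2 + (SW2 - 0.5) * MZ**2 * c2b
    mRR = ME**2 + MTAU**2 - SW2 * MZ**2 * c2b
    mLR = MTAU * (A_tau - mu * tan_beta)
    avg = 0.5 * (mLL + mRR)
    rad = math.hypot(0.5 * (mLL - mRR), mLR)
    return math.sqrt(avg - rad), math.sqrt(avg + rad)

# Best-fit point of Table tab:stau: M_L3 = M_E3 = 323 GeV, mu = 1075 GeV,
# A_tau = -43.2 GeV, tan(beta) = 47.14
m1, m2 = stau_masses(323.0, 323.0, 1075.0, -43.2, 47.14)
print(f"M_stau1 ~ {m1:.0f} GeV, M_stau2 ~ {m2:.0f} GeV")
```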
We show in Fig.~\ref{fig:stau} the confidence regions when we vary
$C_u^S$, $\tan\beta$, $M_{L_3}= M_{E_3}$, $\mu$, and $A_\tau$.
Requiring $\tan\beta>1/2$ leads to
$C_u^S > 0$, and most of the allowed regions
are concentrated at $C_u^S\approx 1$
and $\Delta S^\gamma < 0$.
Similar to the chargino case, $C_u^S$ and $\Delta S^\gamma$
correlate with each other in the parameter space.
The ``T" shape of the
CL regions of $\Delta S^\gamma$ (upper-right) can be understood by
observing that $C_v$ is constrained to be very close to $1$
unless $C_u^S \approx 1$ when $C_u^S > 0$: see
the {\bf CPC.III} (middle) frames of Fig.~\ref{fig:cv}.
We observe that all the points fall into the
$68.3\%$ CL region of $-1.8\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~\Delta S^\gamma\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0$
when $M_{\tilde\tau_1}\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 180$ GeV.
The best-fit values are shown in Table~\ref{tab:stau}.
The $\chi^2$ is just slightly worse than that
of the {\bf CPC.III.3} case
and the $p$-value is lowered because of
more varying parameters.
The values for $C_u^S$, $C_v$, $C_\ell^S$ and $C_d^S$ are very close
to their SM values.
The lightest stau has a mass of 132.3 GeV.
\subsection{MSSM-3: With all chargino, scalar tau, sbottom, and stop contributions}
\begin{table}[thb!]
\caption{\small \label{tab:all}
The chargino, scalar tau, sbottom, and stop contributions to $\Delta
S^{\gamma}(\tilde{\chi}^{\pm}_1,\tilde{\chi}^{\pm}_2,
\tilde{\tau}_1,\tilde{\tau}_2,\tilde{b}_1,\tilde{b}_2,\tilde{t}_1,\tilde{t}_2)$,
$\Delta S^g(\tilde{b}_1,\tilde{b}_2,\tilde{t}_1,\tilde{t}_2)$, $\kappa_d$.
We are taking
$M_{L_3}=M_{E_3}$, $M_{Q_3}=M_{U_3}=M_{D_3}$, $A_t=A_b=A_{\tau}$, $M_3=1$~TeV,
$M_A=300$~GeV, $M_2=\pm \mu$, and imposing the mass limits
$|M_{H_1}-125.5\,{\rm GeV}|\leq 6$ GeV,
$M_{\tilde{\chi}^{\pm}_1}>103.5$~GeV,
$M_{\tilde{\tau}_1}>81.9$~GeV, $M_{\tilde{t}_1}>95.7$~GeV, and
$M_{\tilde{b}_1}>89$~GeV.
Scanning parameters: $C_u^S$, $\tan\beta \in [1,100]$,
$M_{L_3}\in [0,2$~TeV$]$, $M_{Q_3}\in [0,2$~TeV$]$,
$\mu\in [0,2$~TeV$]$,
$A_t\in [-6$~TeV$,6$~TeV$]$.}
\begin{ruledtabular}
\begin{tabular}{ l|ccc|rcrrrr}
Fits & $\chi^2$ & $\chi^2$/dof & $p$-value &
\multicolumn{6}{c}{Best-fit values} \\
& & & & $C_u^S$ & $\tan\beta$ & $\kappa_d$ &
$\Delta S^\gamma$ & $\Delta S^g$ & $\Delta{\Gamma}_{\rm tot}$ \\
\hline\hline
{\bf All-SUSY}
& $15.68$ & $0.682$ & $0.869$ & $1.000$ &
$16.85$ & $0.002$ & $-0.846$ & $0.001$ & $-$ \\
\end{tabular}
\begin{tabular}{ lcccrrrrrrrrrrrrrrr}
&&&& \multicolumn{14}{c}{Best-fit values} \\
&&&& $C_v$ & $C_{d}^{S}$ & $C_{\ell}^{S}$
& $M_{L_3}$ & $M_{Q_3}$ & $M_2$ & $A_{t}$
& $M_{\tilde{\chi}^{\pm}_1}$ & $M_{\tilde{\chi}^{\pm}_2}$
& $M_{\tilde{\tau}_1}$& $M_{\tilde{\tau}_2}$
& $M_{\tilde{t}_1}$ & $M_{\tilde{t}_2}$ &
$M_{\tilde{b}_1}$ & $M_{\tilde{b}_2}$ \\
\hline\hline
&&&& $1.000$ & $1.040$ & $1.041$ & $220$ & $1732$ & $-1255$ & $-2218$ & $1203$ &
$1310$ &
$94.5$ & $303$ & $1640$ & $1829$ & $1717$ & $1748$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
Here we include all contributions from charginos, scalar taus, sbottoms,
and stops.
The relevant SUSY soft parameters are $M_{Q_3}$, $M_{U_3}$,
$M_{D_3}$, $M_{L_3}$, $M_{E_3}$, $A_t$, $A_b$, $A_\tau$, $M_3$, $M_2$, and $M_A$.
In addition to $C_u^S$ and $\tan\beta$,
we are varying $M_{Q_3}$, $M_{L_3}$, $A_t$, $\mu$ while taking
$M_{Q_3}=M_{U_3}=M_{D_3}$, $M_{L_3}=M_{E_3}$,
$A_t=A_b=A_\tau$, and $M_2=\pm\mu$. We fix the other parameters
as $M_3=1$ TeV and $M_A=300$ GeV.
Furthermore, we impose the following constraints
on the masses:
\begin{eqnarray}
&& M_{\tilde{\chi}^{\pm}_1}>103.5~{\rm GeV},~~~
M_{\tilde{\tau}_1}>81.9~{\rm GeV}, \nonumber \\
&& M_{\tilde{t}_1}>95.7 ~{\rm GeV}, ~~~
M_{\tilde{b}_1}>89 ~{\rm GeV}, \nonumber \\
&& |M_{H_1}-125.5\,{\rm GeV}|\leq 6 ~{\rm GeV}. ~~~ \nonumber
\end{eqnarray}
Note that we adopt rather loose mass limits quoted in PDG \cite{pdg}
and impose the Higgs-boson mass constraint.
The best-fit values are shown in Table~\ref{tab:all}.
Note that the lighter stau mass ($94.5$ GeV)
is near its lower mass limit
while all other SUSY particles are heavy, so that the major contribution
to $\Delta S^\gamma$ is from the lighter stau as shown in the middle-right frame
of Fig.~\ref{fig:allsusy}.
We observe that the stau contribution becomes comparable to
that of the chargino around
$M_{\tilde\tau_1} = 270$ GeV.
For larger values of $M_{\tilde\tau_1}$,
$\Delta S^\gamma$ saturates to values
between $\sim -0.6$ and $\sim 0.4$ at $68\%$ CL, where it is
dominated by the chargino loops.
The confidence regions
in the relevant parameter space are shown in Fig.~\ref{fig:allsusy}.
{}From the upper-left frame of Fig.~\ref{fig:allsusy}, we observe
the requirement of $M_{H_1}\sim 125.5$ GeV completely removes
the negative $C_u^S$ region with $|C_u^S-1|\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.02$ and
$\tan\beta\raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 3$ at $95\%$ CL.
The majority of allowed parameter space is concentrated at around
$C_u^S \approx 1$, $-2\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~\Delta S^\gamma \raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0$,
and $\Delta S^g \approx 0$.
Yet, there is a small island allowed at 99.7\% CL around
$\Delta S^\gamma \sim -3.5$ and $\Delta S^g \sim -1.5$.
To identify the origin of the island, we note
the following linear relationships between $\Delta S^\gamma$ and $\Delta S^g$:
\begin{eqnarray}
\Delta S^{\gamma}&=&2 N_C Q_b^2 \Delta S^g=\frac{2}{3}\,\Delta S^g\, ~~~
{\rm for~sbottom}\,,\nonumber \\
\Delta S^{\gamma}&=&2 N_C Q_t^2 \Delta S^g=\frac{8}{3}\,\Delta S^g\, ~~~
{\rm for~stop}\,.\nonumber
\end{eqnarray}
In the chargino and stau cases, $\Delta S^g=0$. These four
correlations
are represented by the straight lines in the upper-right frame of
Fig.~\ref{fig:allsusy}.
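The two squark slopes follow directly from the color factor $N_C=3$ and the electric charges $Q_b=-1/3$ and $Q_t=2/3$; a trivial symbolic check (illustrative only):

```python
from fractions import Fraction

NC = 3                                       # number of colors
Q_b, Q_t = Fraction(-1, 3), Fraction(2, 3)   # sbottom and stop electric charges

def slope(Q):
    """Slope relating Delta S^gamma to Delta S^g for a colored scalar of charge Q."""
    return 2 * NC * Q**2

print(slope(Q_b), slope(Q_t))  # 2/3 8/3
```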
It is clear that the island is due to the stop loops and it
disappears completely when
we require
either $M_{\tilde{t}_1} \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 150$ GeV or $M_{\tilde{b}_1} \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 450$ GeV,
as shown in the lower frames.
In order to examine
how large the squark contributions are or to suppress
the relatively
dominant stau and chargino contributions, we take
$M_{\tilde{\chi}^{\pm}_1}>300$ GeV and $M_{\tilde{\tau}_1}>300$ GeV
and show the results in Fig.~\ref{fig:allsusy3}. We observe that
$|\Delta S^\gamma|\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.6$ at $68.3\%$ CL independently of the squark masses.
This means that $|\Delta S^\gamma/S^\gamma_{\rm SM}|\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.1$
with $S^\gamma_{\rm SM}\simeq -6.6$.
Therefore, unless the $H\gamma\gamma$ coupling is determined
with a precision better than $10\%$, this may imply
that the Higgs data are not sensitive to the MSSM spectrum at $68.3\%$ CL
when $M_{\tilde{\chi}^{\pm}_1}>300$ GeV and $M_{\tilde{\tau}_1}>300$ GeV
independently of the stop and sbottom masses.
Incidentally, in the middle frames, we observe that the CL regions of
$\Delta S^\gamma$ are almost independent of $M_{\tilde{\chi}^{\pm}_1,\tilde{\tau}_1}$
since $\Delta S^\gamma$ is dominated by the squark loops when
$M_{\tilde{\chi}^{\pm}_1,\tilde{\tau}_1}>300$ GeV.
Furthermore, we observe that
the stau and chargino contributions decrease quickly as their masses
increase, as
shown in the previous {\bf MSSM-1} and {\bf MSSM-2} fits.
Also, it is worth noting that
$|\Delta S^\gamma|\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.2$ when
$M_{\tilde{\chi}^\pm_1,{\tilde{\tau}_1}} > 500$ GeV: see
Figs.~\ref{fig:char} and \ref{fig:stau}, where the squarks are very heavy.
Finally, we also find that $|\Delta S^\gamma|\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.2$
if we take the current $95$\%-CL LHC limits on the stop and sbottom masses
with $M_{\tilde{\chi}_1^0} = 0$ GeV~\cite{pdg}:
$M_{{\tilde t}_1} > 650$ GeV and $M_{{\tilde b}_1} > 600$ GeV,
assuming that charginos and staus are heavy enough and do not
contribute to $|\Delta S^\gamma|$ more significantly than squarks.
Before concluding, we would like to briefly discuss the SUSY
impact on future measurements of the Higgs properties
through the Higgs decay into $Z\gamma$ and the Higgs cubic coupling.
In the {\bf MSSM-1} case, thanks to light charginos,
we have found that the branching ratio of the 125 GeV Higgs boson
to $Z\gamma$ can be enhanced by about $15 \%$ compared to the SM prediction.
On the other hand, in the {\bf MSSM-2} and {\bf MSSM-3}
cases, the SUSY contribution to
the branching ratio is less than $1 \%$.
Meanwhile, in the {\bf MSSM-3} case in which all the masses of
relevant SUSY particles are specified and an unambiguous estimation
of the Higgs cubic coupling is possible,
the deviation of the Higgs cubic coupling
from the SM value $M_{H_1}^2/2v$ ($v\approx 246$ GeV)
is negligible when $M_{H_1}$ is varied
within the Higgs mass constraint taken
in this work: $|M_{H_1} - 125.5\,{\rm GeV}| < 6$ GeV.
\section{Synopsis and Conclusions}
We have analyzed the relevant parameter space in the MSSM with respect to the
most updated data on Higgs boson signal strength. The analysis is
different from the model-independent one \cite{Cheung:2014noa} mainly
because $\Delta S^\gamma$ and
$\Delta S^g$ are related by a simple relation, and up-type, down-type
and leptonic Yukawa couplings are also related to one another,
such that they are no longer independent.
We have shown in Figs.~\ref{fig:chi2} to \ref{fig:dcpdcg}
the confidence-level regions in the parameter space for the cases of
{\bf CPC.II} to {\bf CPC.IV} fits
by varying a subset or
all of the following parameters:
$C_u^S$, $\tan\beta$ (or equivalently $C_v$), $\kappa_d$, $\Delta S^\gamma$,
$\Delta S^g$, and $\Delta \Gamma_{\rm tot}$.
This set of parameters is inspired by the
parameters of the general MSSM.
Since the Higgs sector of the MSSM is the same as the 2HDM type II,
the down-type and the leptonic Yukawa couplings are determined once
the up-type Yukawa couplings are fixed. It implies that $C_u^S$ and
$\tan\beta$ (or equivalently $C_v$) can determine all the tree-level
Yukawa and gauge-Higgs couplings. The effects of SUSY spectrum then
enter into the parameters $\kappa_d$, $\Delta S^\gamma$, and $\Delta
S^g$ through loops of colored and charged particles.
There are improvements in all the {\bf CPC} fits since our analysis of 2HDM
\cite{Cheung:2013rva} a year ago. The most significant changes in the
Higgs-boson data from 2013 to 2014 were the diphoton signal strengths
measured by both ATLAS and CMS \cite{atlas_zz_2014,cms_aa_2014} while
all other channels were moderately improved. Overall, all fitted
couplings are improved by about 10\% and the SM Higgs boson enjoys a
large $p$ value close to 1 \cite{Cheung:2014noa}.
The SUSY particles enter the analysis mainly through the loop effects
of the colored and charged particles into the parameters such as
$\Delta S^\gamma$, $\Delta S^g$, and $\kappa_d$ while light
neutralinos with mass less than $M_{H_1}/2$ can enter into $\Delta
\Gamma_{\rm tot}$. We have analyzed the effects of the SUSY spectrum
with the direct search limits quoted in PDG \cite{pdg}. We offer the
following comments concerning the MSSM spectrum.
\begin{enumerate}
\item
The effect of $\kappa_d$ on the CL regions is insignificant,
which can be seen easily when we go across from the first column to the
second column in Figs.~\ref{fig:tanbeta} to \ref{fig:cd}.
On the other hand, the effect of $\Delta \Gamma_{\rm tot}$ is relatively
large, which can be seen by going across from the second column to the
last column in Figs.~\ref{fig:tanbeta} to \ref{fig:cd}.
\item
Since the mass of the lightest Higgs boson is sensitive to the stop mass,
we especially impose the current Higgs-boson mass limit $M_{H_1}
\sim 125.5 \pm 6$ GeV (roughly at the $3\sigma$ level)
on the parameter space
in the {\bf MSSM-3} fits with all-SUSY particles.
There are always some underlying assumptions in deriving the mass limits
of stops and sbottoms (also true for other SUSY particles). We have imposed
mild but robust mass limits.
\item
The {\bf MSSM-1} (chargino) and
{\bf MSSM-2} (stau) fits
are special cases of {\bf CPC.III.3}
in which
$\tan\beta$ (or equivalently $C_v$), $C_u^S$, and $\Delta S^\gamma$
are varied. Nevertheless, $\Delta S^\gamma$ is restricted by the
SUSY parameters $\mu$, $\tan\beta$, and $M_2$ or $M_{L_3,E_3}$ in such a
way that $\Delta S^\gamma$ is not entirely free to vary. The
resulting fits are not as good as the {\bf CPC.III.3} case.
\item In the {\bf MSSM-3} case
in which we consider
the chargino, stau, stop, and sbottom contributions,
the preferred $C_u^S$ is very close to 1. The major contribution
comes from the lightest stau,
which lies very close to its lower mass
limit of 81.9 GeV.
\item The direct search limits on charginos and staus prevent
$\Delta S^\gamma$ from becoming too large, while those on stops and sbottoms
prevent both $\Delta S^\gamma$ and $\Delta S^g$ from becoming too large.
\item We find that $|\Delta S^\gamma/S^\gamma_{\rm SM}|\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.1$ when
$M_{\tilde{\chi}^{\pm}_1}>300$ GeV and $M_{\tilde{\tau}_1}>300$ GeV,
irrespective of the squark masses.
Note that $S^\gamma_{\rm SM}\simeq -6.6$.
\item Further we observe that
$|\Delta S^\gamma/S^\gamma_{\rm SM}|\raisebox{-0.13cm}{~\shortstack{$<$ \\[-0.07cm] $\sim$}}~ 0.03$
when $M_{\tilde{\chi}^\pm_1,{\tilde{\tau}_1}} > 500$ GeV
and $M_{{\tilde t}_1,{\tilde b}_1} \raisebox{-0.13cm}{~\shortstack{$>$ \\[-0.07cm] $\sim$}}~ 600$ GeV.
\end{enumerate}
\section*{Acknowledgment}
This work was supported by the National Science
Council of Taiwan under Grant No. NSC 102-2112-M-007-015-MY3.
J.S.L. was supported by
the National Research Foundation of Korea (NRF) grant
(No. 2013R1A2A2A01015406).
This study was also
financially supported by Chonnam National University, 2012.
\section{Introduction}
Early spectroscopic studies of galaxies showed that profiles of emission lines could present asymmetries, shoulders, or double peaks \citep[e.g.][]{heckman81,pelat80,glaspey76,seyfert43}. These features come from the light of several gaseous systems with different kinematics that is integrated along the observer's line of sight. They have been interpreted as due to rotating gaseous disks, outflows/inflows, or dual active galactic nuclei \citep[e.g.][]{shen11,greene05,zhou04,arribas96}.
Systematic searches for double-peaked emission line profiles in large spectra databases (e.g. the Sloan Digital Sky Survey, SDSS \citep{york00}) have been performed using different selection criteria, but the final confirmation is usually done by visual inspection \citep{ge12,pilyugin12,smith10,liu10}. In this work, we present an automatic procedure to detect multi-component emission line profiles in large databases of spectra of galaxies based on the symmetry of the cross-correlation function.
The cross-correlation (C-C hereafter) technique has been extensively used in astronomy to infer radial velocities \citep{anglada12,westfall11,allende07,fromerth00,gunn96,storm92,dalle91,tonry1979}. The C-C technique globally compares a problem spectrum with a reference spectrum or template. The C-C of two spectra analyses the similarity between one spectrum and a wavelength (or velocity) shifted version of the other, as a function of this wavelength (or velocity) shift. The C-C function only contains the frequencies that are common to both spectra. Therefore, the C-C function provides a clear indication of the shift at which the two spectra are most similar and also a quantitative measure of that similarity. As long as the largest peak of the C-C function is symmetric, it might be used to derive the shift of the problem spectrum as well as its velocity dispersion in conjunction with the width of the template \citep{tonry1979}. When the two spectra are the same, the C-C function becomes the auto-correlation.
We propose a methodology for searching for multi-component/double-peaked emission line
profiles in the spectra of galaxies based on the deviation from symmetry of the peak
of the C-C function. Details on the C-C technique can be found in \cite{furenlid90}, \cite{tonry1979} and references therein. The C-C algorithm might be used in the search
for binary active galactic nuclei in large spectra databases (e.g. SDSS). It may also be used to locate spaxels whose spectra show
multi-component emission line profiles in Integral Field Spectroscopic (IFS)
surveys of galaxies (e.g. the on-going Calar Alto Legacy Integral Field Area Survey, CALIFA \citep{sanchez12}, or the Mapping Nearby Galaxies at APO, MaNGA (http://dunlap.utoronto.ca/research/surveys/)).
\section{Cross-correlation technique for searching double-peaked line profiles}
\subsection{Cross-correlation function shape traced by bisectors}
The procedure developed in this work is based on the symmetry of the C-C function near its main peak. The estimation of the shift and velocity dispersion of the spectra of galaxies might be obtained by fitting a smooth symmetric function to the peak of the C-C function \citep{tonry1979}. However, the C-C is not an even function and, therefore, this approach is only valid when the shapes of the spectral features in the problem spectrum are similar to those in the reference spectrum. Emission lines in the spectra of galaxies are commonly assumed to be fitted by Gaussian profiles. If single Gaussian profiles are assumed for the emission lines in the template and problem spectra, the velocity shift and velocity dispersion of the problem spectrum can be derived by fitting a single Gaussian to the peak of the C-C function (see Fig. \ref{model1}a,b). However, if the emission line profiles in the reference and in the problem spectrum are significantly different, the peak of the C-C function will be asymmetric (see Fig. \ref{model1}c,d).
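The criterion can be sketched numerically. The toy example below is our own illustration (the velocity grid, component amplitudes, and the half-maximum half-width comparison are arbitrary choices, not the implementation described here): it cross-correlates a single-Gaussian template with a shifted single-component and a double-component problem spectrum, recovers the velocity shift from the C-C peak, and quantifies the peak asymmetry.

```python
import numpy as np

def gaussian(v, center, sigma):
    return np.exp(-0.5 * ((v - center) / sigma) ** 2)

v = np.linspace(-3000.0, 3000.0, 2001)              # velocity grid, 3 km/s steps
template = gaussian(v, 0.0, 115.0)                  # single-line reference spectrum
single = gaussian(v, -400.0, 115.0)                 # problem: same line, shifted
double = single + 0.8 * gaussian(v, 100.0, 115.0)   # problem: two components

def peak_asymmetry(spec, ref):
    """Difference (in grid steps) between the two half-maximum half-widths
    of the cross-correlation peak; close to zero for a symmetric peak."""
    cc = np.correlate(spec, ref, mode="full")
    peak = int(np.argmax(cc))
    half = 0.5 * cc[peak]
    left, right = peak, peak
    while cc[left] > half:
        left -= 1
    while cc[right] > half:
        right += 1
    return abs((right - peak) - (peak - left))

# The C-C peak position recovers the velocity shift (to grid precision):
lag = int(np.argmax(np.correlate(single, template, mode="full")))
print(3.0 * (lag - (len(v) - 1)))        # ~ -400 km/s
print(peak_asymmetry(single, template))  # ~ 0: symmetric peak
print(peak_asymmetry(double, template))  # >> 0: asymmetric peak
```

A symmetric peak gives an asymmetry of at most a grid step or two, while the unresolved double component produces a strongly lopsided peak, mirroring the behavior illustrated in panels (b) and (d) of Fig.~\ref{model1}.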
\begin{figure*}
\centering
\includegraphics[width=12.25cm]{figura1.eps}
\caption{Example of the cross-correlation technique applied in the [OIII]$\lambda\lambda4959,5007$ spectral range, assuming Gaussian emission line profiles and a redshift of about 6610 km/s. The spectral resolution for this model is 1650. (a) Reference spectrum (blue-dashed lines) shifted to the assumed redshift, and problem spectrum (black line) at redshift-400 km/s. Velocity dispersions for the problem and template spectra are 115 km/s. (b) The cross-correlation function of the spectra in (a). The green-dashed lines correspond to the Gaussian profile fitted to the peak of the C-C function. The Gaussian centroid (at -390 km/s) and full-width-half-maximum (384.17 km/s) provide the velocity shift between problem and template and the estimation of the problem velocity dispersion, respectively. (c) Problem spectrum (black line) showing double components. The brightest component is described by the problem spectrum (V$_{sys}$=6210 km/s and $\sigma$=115 km/s) in (a), and the secondary component is at V$_{sys}$=6710 km/s with $\sigma$=115 km/s. Both components have been selected to be clearly resolved. The blue-dashed line is the template (the same template as in (a)). (d) The C-C function of the problem and template spectra in (c). The C-C peak clearly has an asymmetric shape, although a double peak is not resolved owing to the degradation of the spectral resolution in the calculation of the C-C function. The Gaussian fit to the asymmetric C-C peak provides a velocity shift between template and problem spectra of -289 km/s, and the estimation of the velocity dispersion for the problem is 235 km/s. (e) The velocity dispersion of the template (blue line) is $\sim8$ km/s, negligible compared to the velocity dispersion of the emission lines ($\sigma$=115 km/s) in the problem spectrum (black line). (f) The C-C function of the problem and template spectra in (e). The asymmetric C-C peak profile is now resolved into two peaks.
The Gaussian fit to the peak of the C-C function provides an estimation of the redshift and velocity dispersion of -311 km/s and 225 km/s, respectively, for the problem spectrum. }
\label{model1}
\end{figure*}
The reference spectrum must have a large signal-to-noise ratio and a spectral resolution similar to or better than the problem spectrum. As the C-C of two spectra is similar to their convolution, the spectral resolution of the C-C function might be significantly reduced during the C-C operations. This results in a significant reduction in the capability of detecting double components (see Fig. \ref{model1}d). However, the template can be selected to have a negligible velocity dispersion compared to the problem spectrum. Indeed, a variation of the C-C technique using delta functions has been successfully used for the detection of binary stars \citep{furenlid90}. In this case, the C-C spectral resolution, and hence the capability of detecting double components, strongly depends on the spectral resolution of the problem spectrum. Moreover, the shape of the C-C peak will correspond to the average shape of the line profiles in the problem spectrum (figures \ref{model1}e and \ref{model1}f). Therefore, the search for double or multiple components in emission line profiles reduces to studying the shape of the C-C peak profile.
The C-C peak profile can be characterized by tracing its bisector. Bisector shapes are commonly used in the analysis of the mechanisms that cause asymmetries and variations in stellar spectra \citep[e.g.][]{ba11}. The bisector of a symmetric profile should remain at constant wavelength (or velocity) for all parts of the profile, dividing it into two equal parts; any existing asymmetry between the base and the peak of the line will be reflected in the shape of the bisector. The bisector of any profile can be constructed by connecting the midpoints of horizontal line segments spanning the width of the profile at a number of intensity positions inside the profile. The comparison of the C-C peak profile bisector (V$_b$) with the C-C peak velocity (V$_p$, the constant value the bisector would take if the profile were symmetric) gives a reference value to identify the presence of various gaseous components in the profiles. This procedure does not infer the actual number of gaseous components forming the observed profile, but provides evidence of the presence of several gaseous components (at least two). The sign of V$_b$-V$_p$ also gives information about the equivalent wavelength (velocity) of the double/multiple components relative to the dominant component (the wavelength or velocity at the C-C peak): if the sign is positive, the secondary (or equivalent) component is redshifted with respect to the dominant component. A positive detection of double/multiple components in a galaxy spectrum will depend on the signal-to-noise of the C-C function and hence on the signal-to-noise of the problem spectrum. Figure \ref{model2} shows the bisectors of the C-C peak functions for the examples in figure \ref{model1}.
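As a concrete sketch of this construction (illustrative numpy code; the linear interpolation of the level crossings is our own choice), the bisector of a single-peaked profile can be traced as:

```python
import numpy as np

def trace_bisector(v, profile, levels):
    """Return the bisector velocity of a single-peaked profile at the
    given fractional intensity levels (e.g. 0.05 ... 1.0). At each level
    the left and right crossings of the profile are located by linear
    interpolation and their midpoint is taken."""
    peak = np.argmax(profile)
    bis = []
    for f in levels:
        h = f * profile[peak]
        li = np.where(profile[:peak] < h)[0]   # points below h left of peak
        ri = np.where(profile[peak:] < h)[0]   # points below h right of peak
        if len(li) == 0 or len(ri) == 0:
            bis.append(v[peak])  # level below the wings; fall back to peak
            continue
        i = li[-1]
        vl = np.interp(h, [profile[i], profile[i + 1]], [v[i], v[i + 1]])
        j = peak + ri[0]
        vr = np.interp(h, [profile[j], profile[j - 1]], [v[j], v[j - 1]])
        bis.append(0.5 * (vl + vr))
    return np.array(bis)
```

For a symmetric profile the returned bisector stays at the peak velocity at every level; any blend of components twists it away from V$_p$.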
Tracing the bisector on a single emission line profile would also provide evidence of the presence of double/multiple components forming that profile. However, any observational or instrumental signature (e.g. a cosmic ray) not properly removed during the data reduction process and affecting the selected single emission line could result in an asymmetric line bisector and hence in a spurious double/multiple component detection. Including several emission lines in the selected cross-correlation spectral range will smooth out any undesirable feature affecting a single line, since the shape of the C-C peak function corresponds to their average shape. Moreover, as the C-C function also provides a quantitative measure of the similarity between the problem and the template, another advantage of tracing the bisector on the C-C peak function instead of on single emission line profiles is that any shape of the line profiles can be selected to generate the template. Here, we have assumed Gaussian profiles for the emission lines to create the template, but any other profile shape could be assumed (e.g. Lorentzian profiles).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figura2.eps}
\caption{Bisectors (blue dashed lines) of the C-C peak functions for the examples in Fig. \ref{model1}. The black dotted line corresponds to the peak velocity dividing the profile in two parts. When the emission line profiles in the template and problem spectrum are similar (Gaussian profiles are assumed), the black dotted line divides the C-C peak function profile into two symmetric parts. In this case, the bisector (blue dashed lines) and the peak velocity extended to different intensities (black dotted lines) are similar (see (a)). When asymmetries due to double components are present in the problem spectrum, the bisector of the C-C peak function is twisted and differs from the velocity at the peak (see (b) and (c)). The difference between the velocity at the peak of the C-C function and the bisector at a given intensity level indicates the presence of asymmetries (double or multiple components) in the problem spectrum profile. The sign of this difference indicates whether these asymmetries are blueshifted or redshifted with respect to the dominant component (the peak of the C-C function).}
\label{model2}
\end{figure*}
\subsection{Implementation of the procedure}
The C-C technique uses the information contained in all the lines of the selected spectral range for calculating the C-C function. The resulting shape of the C-C peak corresponds to an average shape of the line profiles in the problem spectrum.
In order to calculate the C-C function, it is necessary to prepare the problem spectrum following these steps:
\begin{itemize}
\item[1.] The ends of the C-C spectral range are selected at the continuum to avoid discontinuities at the edges.
\item[2.] The data are divided by the continuum to remove any residual curvature in the continuum.
\item[3.] In order to minimize edge effects, the continuum is subtracted (simply subtract unity when step 2 has been performed).
\end{itemize}
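A minimal sketch of these preparation steps (the edge-based linear continuum fit below is an illustrative assumption; any continuum estimate can be substituted):

```python
import numpy as np

def prepare_spectrum(wave, flux, cont_degree=1):
    """Rectify a spectrum for cross-correlation following steps 1-3:
    the selected range is assumed to start and end on continuum (step 1);
    a low-order polynomial continuum is fitted and divided out (step 2);
    unity is subtracted so the continuum level becomes zero (step 3)."""
    # crude continuum estimate from the outer 10% of pixels at each edge
    n = max(len(wave) // 10, 2)
    edge = np.r_[np.arange(n), np.arange(len(wave) - n, len(wave))]
    coeffs = np.polyfit(wave[edge], flux[edge], cont_degree)
    continuum = np.polyval(coeffs, wave)
    return flux / continuum - 1.0
```

After this rectification the emission lines sit on a zero continuum, as required to avoid discontinuities at the edges of the C-C window.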
When a stellar continuum fit has been previously carried out and properly subtracted from the original spectra, steps 2 and 3 can be skipped.
The reference spectrum is generated including as many delta functions as single
emission lines are expected in the spectral range selected to apply the C-C
technique. The wavelength (or velocity) position of these delta functions
corresponds to the center of the emission lines. Different weights (in flux) can be given
to the delta functions to account for fixed intensity ratios between
emission lines according to atomic parameters. No noise contribution is included in the template, to avoid degrading the signal-to-noise when the C-C function is computed.
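Such a delta-function template can be built as in the following sketch (the [OIII] rest wavelengths and the adopted 1:2.98 doublet intensity ratio are standard atomic values, used here only as an example):

```python
import numpy as np

def make_template(wave, line_waves, weights, z=0.0):
    """Build a delta-function template: one spike per expected emission
    line, placed at the pixel nearest to the (redshifted) line centre and
    weighted by the adopted line intensity ratios."""
    template = np.zeros_like(wave, dtype=float)
    for lw, w in zip(line_waves, weights):
        idx = np.argmin(np.abs(wave - lw * (1.0 + z)))
        template[idx] += w
    return template / template.max()  # normalize to the brightest line

# [OIII] doublet with the 1:2.98 atomic intensity ratio:
wave = np.linspace(4925.0, 5075.0, 1500)
tmpl = make_template(wave, [4958.91, 5006.84], [1.0, 2.98])
```

The returned template is already normalized to its brightest line, matching the normalization applied to the problem spectrum below.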
Both template and problem spectra are normalized to the maximum of the brightest emission line in
the selected wavelength range. When the reference has been generated and the problem spectrum has
been rectified, the cross-correlation function can be computed. The maximum of
the cross-correlation function is located and a Gaussian is fitted to
this peak to obtain a better precision in the determination of the velocity
shift between template and problem spectra (V$_p$) than the pixel size/resolution element. The bisector of the C-C
peak function can be traced by calculating the midpoints of horizontal line segments slicing
the C-C peak profile at a number of intensities (e.g. twenty intensity positions, from 100\% to
5\% in steps of 5\%). In order to detect asymmetries, we can compute the difference between V$_p$
and the bisector at different intensities (V$_b$(i), for i=1,...,N, where N is the total number of intensity positions). We will have as many V$_b$(i)-V$_p$ values as
intensity levels selected to trace the bisector. If $\Delta$V denotes the size of the resolution element, then:
\begin{itemize}
\item[-] A blue asymmetry is detected when V$_b$-V$_p$ $< -\Delta$V
\item[-] A red asymmetry is detected when V$_b$-V$_p$ $> \Delta$V
\end{itemize}
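The sub-pixel estimate of V$_p$ from a Gaussian fit to the C-C peak can be approximated as follows (a three-point log-parabola fit, which is exact for a Gaussian-shaped peak; a full non-linear Gaussian fit can be used instead):

```python
import numpy as np

def fit_peak_gaussian(lags, cc):
    """Refine the C-C peak position to sub-pixel precision by fitting a
    parabola to log(cc) at the three points around the maximum, which is
    exact for a Gaussian peak. Returns (V_p, sigma) in lag units."""
    k = np.argmax(cc)
    y0, y1, y2 = np.log(cc[k - 1 : k + 2])
    d = lags[1] - lags[0]                      # lag step
    curv = y0 - 2 * y1 + y2                    # second difference (< 0)
    offset = 0.5 * (y0 - y2) / curv            # vertex, in lag steps
    sigma = d / np.sqrt(-curv)                 # Gaussian width
    return lags[k] + offset * d, sigma
```

The returned sigma corresponds to the width of the fitted Gaussian, from which the problem velocity dispersion can be estimated as described above.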
For the implementation of the procedure, we calculate the number of $|$V$_b$(i)-V$_p$$|$ values (for i=1,...,N, where N is the total number of intensity positions) larger than $\Delta$V (N$_{asym}$), and also the number of V$_b$(i)-V$_p$ values smaller than -$\Delta$V (N$_{blue}$) and larger than $\Delta$V (N$_{red}$), considering that:
\begin{itemize}
\item When N$_{asym}$=N$_{blue}$ a pure blue asymmetry in the profile is detected, and at least two components are present.
\item When N$_{asym}$=N$_{red}$ a pure red asymmetry in the profile is detected, and at least two components are present.
\item When N$_{asym} >$ N$_{red}$ and N$_{asym} >$ N$_{blue}$ (i.e. both blue- and red-discrepant levels are present), the profile presents blue and red asymmetries depending on the intensity level (multi-asymmetries hereafter), indicating the presence of multiple gaseous components forming the observed emission line profiles.
\end{itemize}
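This bookkeeping, including the requirement of at least two discrepant intensity levels adopted in the implementation, can be sketched as:

```python
import numpy as np

def classify_asymmetry(vb, vp, dv):
    """Classify a C-C peak from its bisector velocities vb (one per
    intensity level), the peak velocity vp, and the resolution element dv,
    following the N_asym / N_blue / N_red bookkeeping above."""
    diff = np.asarray(vb) - vp
    n_blue = int(np.sum(diff < -dv))
    n_red = int(np.sum(diff > dv))
    n_asym = n_blue + n_red
    if n_asym < 2:                 # require at least two discrepant levels
        return "symmetric"
    if n_asym == n_blue:
        return "blue"
    if n_asym == n_red:
        return "red"
    return "multi"
```

Note that N$_{asym}$ = N$_{blue}$ + N$_{red}$ by construction, so the three cases above are mutually exclusive.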
Obviously, the level of noise in the problem spectrum affects the detection of asymmetries. In this sense, a signal-to-noise threshold is defined to reduce the number of false positive detections of double/multiple components in emission line profiles in the spectra of galaxies. Removing the lower bisector levels might also help to avoid false positive detections due to the poor signal-to-noise of the spectra. In practice, we consider a positive asymmetry detection (blue, red, or multi-asymmetries) when N$_{asym}$ is equal to or larger than 2; that is, at least two intensity levels tracing the bisector of the C-C peak function must satisfy $|$V$_b$(i)-V$_p$$|$ $>\Delta$V.
\subsection{Application to integral field spectroscopy}
In this section, we present some examples to illustrate the results from the proposed procedure for searching double or multiple gaseous components in the spectra of galaxies applied on IFS data.
\subsubsection{The data}
We have used the IFS data of the central 24$\mathstrut{^{{\prime}{\prime}}}\times$20$\mathstrut{^{{\prime}{\prime}}}$ of the Seyfert 2 galaxy NGC 1068 presented in \cite{garcia99}. NGC~1068 is the nearest and brightest example of a barred galaxy with an active galactic nucleus. The presence of several kinematically distinct gaseous components in the central region of NGC~1068 is evident from the large number of emission-line profiles obtained using different instruments and techniques \citep{ozaki09, emsellem06, gerssen06, ishigaki04, groves04, cecil02, garcia99, arribas96, cecil90, pelat80}. Careful visual examination (see Fig. \ref{oiii_profiles}) and description of the spectra \citep[e.g.][]{emsellem06} revealed the presence of a minimum of three different gaseous systems in NGC~1068. When studying IFS data, the visual inspection of the spectra is a tedious task, but still feasible for a few hundred spectra. This is not the case for the large amount of spectra provided by IFS surveys of galaxies (e.g. the CALIFA or MaNGA surveys).
Figure \ref{oiii_profiles} shows the [OIII]$\lambda\lambda4959,5007$ line profiles at each observed position of NGC~1068 as a spectra diagram \citep{garcia99}. These profiles offer a large variety of examples to test the developed algorithm.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{oiii_profiles_v4.eps}
\caption{Spectra diagram of the [OIII]$\lambda\lambda4959,5007$ emission lines at 463 positions on the central region of NGC~1068 \citep{garcia99}. The spectra at each location are auto scaled to better show the profile shape. Emission lines nearer to the optical nucleus (taken to be the origin) are brighter than those farther out. The plotted spectral range is 4925-5075 \AA , the same selected for the application of the cross-correlation technique. Different colors indicate the asymmetry detected in those profiles with a signal-to-noise larger than 200 (red: red-asymmetry; blue: blue-asymmetry; magenta: multi-asymmetries; green: symmetric profiles).}
\label{oiii_profiles}
\end{figure*}
\subsubsection{Results for NGC~1068}
Applying the proposed procedure of studying the symmetry of the C-C peak function explained in \S2 to the spectra from the central region of NGC~1068, we can point out the following:
\begin{itemize}
\item Only 12\% of the emission line profiles in the central 24$\mathstrut{^{{\prime}{\prime}}}\times$20$\mathstrut{^{{\prime}{\prime}}}$ of the Seyfert 2 galaxy NGC 1068 (Fig. \ref{oiii_profiles}) seem to present symmetric profiles. The percentage of symmetric [OIII]$\lambda\lambda4959,5007$ profiles is 24.5\% for those spectra with a signal-to-noise larger than 200.
\item 6\% of the total number of spectra present pure red asymmetries. Red asymmetries are detected in 26\% of the spectra with a signal-to-noise larger than 200.
\item 51\% of the total number of spectra present pure blue asymmetries. This percentage is 31.5\% for those spectra with a signal-to-noise ratio $ > 200$.
\item 31 \% of the spectra in the central region of NGC~1068 present asymmetries, to the blue or red depending on the bisector intensity level. In the case of spectra with a signal-to-noise larger than 200, the percentage of multi-asymmetric emission line profiles is 18\%.
\end{itemize}
Examples of spectra (the same spectra as in Fig. 7 of Garc\'{\i}a-Lorenzo et al. (1999)) displaying asymmetries, together with the bisector shapes of the C-C peak function, are shown in Fig. \ref{examples}. The [OIII]$\lambda\lambda4959,5007$ profiles in Fig. \ref{examples}(a) show a blue shoulder (corresponding to component 4b in Garc\'{\i}a-Lorenzo et al. (1999) and to the additional component in Emsellem et al. (2006)). The shape of the C-C function for this spectrum (Fig. \ref{examples}(a1) and (a2)) also presents a clear blue asymmetry that is traced by the shape of its bisector. The peak of the C-C function indicates a radial velocity of 1145 km/s for the dominant component in this profile, in agreement with the systemic velocity of 1144 km/s \citep{emsellem06}. The difference in velocity between the bisector at the 10\% peak intensity level and the velocity at the peak of the C-C function (V$_b$(10\%)-V$_p$) is -302 km/s, indicating a blue asymmetry. The [OIII]$\lambda\lambda4959,5007$ emission lines in Fig. \ref{examples}(b) present a double-peaked [OIII] profile that is also reproduced in the C-C peak function. The radial velocity of the dominant component in this profile (the velocity at the peak of the C-C function) is 1618 km/s (474 km/s larger than the systemic velocity), identified as component 4r in Garc\'{\i}a-Lorenzo et al. (1999) and as the additional component in Emsellem et al. (2006). The bisector of the C-C peak function clearly indicates the presence of a secondary, bluer component in this profile. The difference V$_b$(10\%)-V$_p$ is -485 km/s. This secondary component was identified as component 1+3 in Garc\'{\i}a-Lorenzo et al. (1999) and as the narrow component in Emsellem et al. (2006). Emission lines in Fig. \ref{examples}(c) show a peaked profile with a clear red shoulder that is reproduced by the shape of the C-C peak function bisector. In this case, V$_p$=1124.52 km/s (component 1/narrow in Garc\'{\i}a-Lorenzo et al.
(1999)/Emsellem et al. (2006), respectively), while V$_b$(10\%)-V$_p$= 209 km/s, indicating a red component identified as component 3 in Garc\'{\i}a-Lorenzo et al. (1999) and as the broad component in Emsellem et al. (2006). For this profile, the lower level of the bisector (Fig. \ref{examples}(c2)) turns to the blue, V$_b$(5\%)-V$_b$(10\%) = -104 km/s, although still indicating a global red asymmetry (V$_b$(5\%)-V$_p$ = 105 km/s). This twist in the bisector indicates the presence of a faint additional component, identified as component 2 in Garc\'{\i}a-Lorenzo et al. (1999). The visual inspection of the emission line profiles in Fig. \ref{examples}(d) shows single profiles with no signs of double components. The C-C function is symmetric with respect to its peak, with V$_p$ = 1071 km/s. Only at the 5\% intensity level of the C-C peak function is a blue asymmetry detected by the proposed procedure (V$_b$(5\%)-V$_p$ = -188 km/s), while for the rest of the levels the difference V$_b$(i)-V$_p$ (for i=10\%,...,100\% in steps of 5\%) is always smaller than 10 km/s. This false positive asymmetry detection is due to the low signal-to-noise at the level at which the asymmetry was detected. In practice, this case is labeled as symmetric by the implemented algorithm, owing to the condition of having at least two intensity levels with a positive asymmetry detection. Figure \ref{oiii_profiles} also indicates the identified asymmetries for the observed spectra with a signal-to-noise ratio larger than 200 in the central region of NGC~1068.
\begin{figure*}
\centering
\includegraphics[width=16cm]{figura_1999.eps}
\caption{(Left column) Examples of emission line profiles of NGC~1068. Spectra are located at positions (a) [+0.02,+3.49] arcsec; (b) [-1.02,-0.68] arcsec; (c) [+2.97,+4.08] arcsec; and (d) [-8.86,6.24] arcsec from the optical nucleus. (Central column) (a1),(b1),(c1), and (d1) The cross-correlation functions of the spectra in the left column ((a),(b),(c), and (d)) using two delta functions as the template spectrum in the spectral range including the [OIII]$\lambda\lambda4959,5007$ emission lines (4925-5075 \AA ). (Right column) The cross-correlation peak function (a zoom of the function in the central column). The dashed line corresponds to the traced bisector. }
\label{examples}
\end{figure*}
The proposed procedure only indicates those spectra showing asymmetries due to double or multiple components; the deprojection of the different gaseous systems requires additional procedures, such as Gaussian fitting \citep[e.g.][]{arribas96}.
\section{Conclusions}
The identification of double/multiple component emission line profiles in spectra of galaxies is the first step in the sample selection of many research topics (e.g. binary black holes, ionized gas kinematic deprojection). This paper deals with the quick estimation of velocity dispersions and radial velocities of the dominant component, and with the detection of the presence of double or multiple components, among a large set of spectra of a galaxy or galaxies using the cross-correlation technique and the shape of the C-C peak function. The proposed procedure allows processing a large amount of problem spectra in a short period of time (on the order of minutes) using the peak, full-width-half-maximum and symmetry of the C-C peak function.
\section*{Acknowledgments}
The author thanks A. Eff-Darwich, and A. Bongiovanni for their help and useful discussions. This work was partially funded by the Instituto de Astrof\'{\i}sica de Canarias, by the Spanish Ministerio de Econom\'{\i}a y Competitividad (MINECO; grant AYA2009-12903) and by the Spanish Agencia Canaria de Investigaci\'on, Innovaci\'on y Sociedad de la Informaci\'on (proID20100121).
Text-based emotion detection and classification has a long history of research~\cite{alm2005emotions}. It is becoming increasingly important in the area of machine learning with the growing emphasis on assistants as frontline interactions for service design. Such assistantship is becoming more manifest in the form of ``chatbots''~\cite{chorusDeploy}, making the research in this work increasingly relevant.
However, compared to content recommendation~\cite{bohus2007olympus,DialPort} or behavioral modeling~\cite{levin2000stochastic}, it is still underexplored. Moreover, it has rarely been used in applications for individual users such as instant messengers.
To understand the feasibility of text-based affective computing in the era of mobile devices,
we introduced \emph{EmotionPush\xspace}\footnote{EmotionPush\xspace is available at Google Play: \url{https://play.google.com/store/apps/details?id=tw.edu.sinica.iis.emotionpush}}, a mobile application that automatically detects the emotion of the text message that user received via Facebook Messenger,
and provides emotion cues by colors in real-time~\cite{wang2016sensing}.
EmotionPush\xspace uses 7 colors to represent 7 emotions, based on Plutchik's Emotion Wheel color theme (Figure~\ref{fig:color_mapping}).
For instance, when the user receives a message saying \textit{``Hi, How are you?''}, EmotionPush\xspace first classifies this message's emotion as \emph{Joy},
and then pushes a notification on the user's smartphone with a yellow icon (Figure~\ref{fig:step_03}), which is the corresponding color of \emph{Joy}.
Later when the user clicks the notification to open Messenger to start the conversation,
EmotionPush\xspace keeps track of each message that the user receives and uses a color bubble on the top of the screen to continually provide emotion cues (Figure~\ref{fig:step_04}). In Figure~\ref{fig:step_05}, the other party suggests a lunch meeting, which keeps the emotion cue as \emph{Joy}; then the next message about feeling tired changes the emotion cue from \emph{Joy} to \emph{Tired} as in Figure~\ref{fig:step_06}. After giving the last reply which ends this chat session (Figure~\ref{fig:step_07}), users can go back to the desktop but still see the emotion cue of the last message, as shown in Figure~\ref{fig:step_08}. Later, users can start over and check the notifications from EmotionPush\xspace again by pulling down the notification bar (Figure~\ref{fig:step_09}).
\begin{figure}[t]
\centering
\hspace{-0.6cm}
\includegraphics[width=0.5\textwidth]{figure/emotion_mapping2}
\vspace{-1pc}
\caption{Visualizing emotion colors with Plutchik's Emotion Wheel. The 40 emotion categories of LJ40K are compacted into 7 main categories, each has a corresponding color on the emotion wheel.}
\vspace{-1pc}
\label{fig:color_mapping}
\end{figure}
\newcommand{\widthfactor}[0]{0.21}
\newcommand{\figurefolder}[0]{figure/small_step}
\begin{figure*}[t]
\centering
\subfloat[Receive the first message.\label{fig:step_02}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/02.png}
}
\subfloat[Notification.\label{fig:step_03}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/03.png}
}
\subfloat[Colored bubble.\label{fig:step_04}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/04.png}
}
\subfloat[Receive a joy message.\label{fig:step_05}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/05.png}
}
\vspace{-1pt}
\subfloat[Receive a tired message.\label{fig:step_06}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/06.png}
}
\subfloat[Response.\label{fig:step_07}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/07.png}
}
\subfloat[Go back to the desktop.\label{fig:step_08}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/08.png}
}
\subfloat[Check Notification again.\label{fig:step_09}]{%
\includegraphics[width=\widthfactor\textwidth]{\figurefolder/09.png}
}
\vspace{-1pt}
\caption{Illustration of using EmotionPush\xspace.}
\vspace{-2pt}
\label{fig:steps}
\end{figure*}
We conducted a deployment study of EmotionPush\xspace with 20 participants during a time span of two weeks.
20 English native speakers who identified themselves as Messenger frequent users (ages ranged from 18 to 31 years) were recruited.
Participants were asked to install EmotionPush\xspace on their personal smartphones, keep it open, and actively use it during the whole study.
Each participant was compensated with \$20 US dollars.
In our study, a total of 62,378 messages were recorded during 10,623 conversational sessions, which were automatically segmented with 5-minute user timeouts.
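The 5-minute session segmentation can be sketched as follows (an illustrative implementation; the function and variable names are our own):

```python
from datetime import datetime, timedelta

def segment_sessions(timestamps, timeout_minutes=5):
    """Split a time-ordered list of message timestamps into conversational
    sessions, starting a new session whenever the gap between consecutive
    messages exceeds the timeout (5 minutes in our study)."""
    sessions, current = [], []
    timeout = timedelta(minutes=timeout_minutes)
    for ts in timestamps:
        if current and ts - current[-1] > timeout:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions
```

Applied per conversation, this yields the session counts reported above.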
In this paper, we first list the potential use cases of EmotionPush.
Then we describe the challenges we identified during the deployment study of EmotionPush\xspace.
While prior work has shown that identifying emotions based on text is possible, we detail the challenges that emerged from deploying such a system in the real world.
The challenges we identified included modeling the continuum of emotional expressions;
attributing the detected emotion to the right speaker in a multi-user conversation;
accounting for different levels of expression depending on the familiarity between users;
handling classifier errors;
and problems derived from nonconventional contents such as long messages, code switching, and emojis.
\subsection{EmotionPush System}
\label{subsection:emotion_classification}
The task of predicting the emotion of a given text can be regarded as a document (sentence) classification problem if the emotion categories are given. To make sure the developed system achieves good performance, we adopted the powerful classification module LibSVM~\cite{fan2008liblinear}, a supervised-learning tool for Support Vector Machines, and a widely recognized, well-performing feature, word embeddings, to train the classifier.
Word Embedding, generated from the neural language models, has been shown to be a successful feature for classification problems~\cite{bengio2006neural,huang2012improving,mikolov2013efficient,mikolov2013distributed}.
We adopted pre-trained 300 dimension word vectors trained on part of Google News\footnote{\url{https://code.google.com/archive/p/word2vec/}} and represented each sentence or document as a vector by averaging all the word vectors of the words in the sentence or the document.
The emotion classifiers of EmotionPush\xspace were trained on LJ40K~\cite{leshed2006understanding}, a dataset containing 1,000 blog posts for each of 40 common emotions. These classifiers were shown to be comparable to state-of-the-art methods in terms of performance~\cite{wang2016sensing}. EmotionPush\xspace then adopts the categorical representation with the designed color scheme to visualize the 7 detected emotions, simplifying the connection between emotions and texts so that users can react instantly and intuitively.
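The sentence-level feature used by these classifiers, namely the average of the word vectors of a message, can be sketched as follows (the three-dimensional toy vocabulary below is purely illustrative; the deployed system used the 300-dimensional Google News vectors and fed the resulting features to LibSVM):

```python
import numpy as np

def sentence_vector(sentence, word_vectors, dim=300):
    """Represent a sentence (or document) as the average of the word
    vectors of its in-vocabulary words; a zero vector is returned when
    no word is covered by the vocabulary."""
    words = sentence.lower().split()
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```

Each message is mapped to a single fixed-length vector in this way before being passed to the SVM classifiers.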
\section{Use Case of EmotionPush}
The goal of EmotionPush is to enable end-users to better understand the emotions of their conversational partners.
Hence, the most apparent use case we can think of is to prioritize messages according to the emotions they convey.
Participants confirmed this use case by indicating that EmotionPush can help them organize their messages, read messages quickly, and manage multiple messages at once.
However, other use cases are not clear before the deployment.
In the user study, participants did name some interesting use cases.
We list these use cases as follows, along with quotes of participants.
\begin{itemize}
\item \textbf{Emotion Management}
Participants will be able to decide whether they want to receive certain information in order to keep their emotions stable. They can either choose to reach out to the people who need to be taken care of, or avoid reading any message from them to keep themselves neutral:
\textit{``One of the major advantages was to identify who was angry and needed to talk immediately. That helped me in my interactions a lot.''}
\textit{``If there is a red color chat, I wouldn't read it as it might ruin my mood.''}
\item \textbf{Interacting with People of Little Acquaintance}
Participants mentioned that EmotionPush helps them when talking to strangers or new friends. This makes sense, as EmotionPush learns from large datasets and should interpret
emotions of messages in a general way. Hence, emotion cues from EmotionPush\xspace are a valuable reference when we don't know much about the other party.
\item \textbf{Fun Topics to Have}
When participants see suggested emotions that differ from what they expect or interpret, they will confirm with the other party and hence have more topics to create an interesting conversation:
\textit{``It's a funny topic of conversation when the app predicts the emotion interestingly.''}
\end{itemize}
Another use case, which is more implicit but draws our attention, is that users may rely on the prompted emotions instead of interpreting received messages by themselves. It was raised by one participant, who said
\textit{``... has the team thought about the social impact this kind of app would have if many people used it? ..., but if everyone were to use an app like this, I feel like people would start to rely on the app too much to determine how they should respond as opposed to figuring out the other person's emotions on their own.''}
However, even with this hidden social concern, which should be investigated further, from the results of the user study we can still expect that the system could help people in their interactions in many aspects. The quantitative summary of the user study is reported in the Appendix for reference, while the expressed opinions are discussed in this paper. In the following sections, we further detail the 5 aforementioned challenges that emerged from our experiments.
\section{Challenge 1: The Continuum of Emotion}
EmotionPush\xspace uses a \emph{categorical representation} (e.g. \textit{Anger}, \textit{Joy}, etc.)~\cite{klein2002computer} of emotions instead of a dimensional representation (valence, arousal)~\cite{Sanchez:IHC06} to reduce users' cognitive load.
One natural limitation of applying a categorical representation is the lack of capability of expressing continuum of emotion.
For instance, in the following conversation, the user B sent five consecutive messages, which is less likely to express four different emotions, as predicted\footnote{ EmotionPush\xspace users do not receive any affective feedback for the messages sent by themselves. In this paper we show colored emotion cues for all messages only for readers' reference.
The example conversations will be lightly disguised based on the techniques suggested by Bruckman~\cite{bruckman2006teaching} on publication.}:
\begin{itemize}[label={}]
\small
\itemy \textbf{A:} Aww thanks!!
\vspace{-.4pc} \itemr \textbf{A:} How's being home?
\vspace{-.4pc} \itemy \textbf{B:} Studying, haha
\vspace{-.4pc} \itemb \textbf{B:} But it doesn't feel like I have been away for one year
\vspace{-.4pc} \itemb \textbf{B:} Nothing has changed here
\vspace{-.4pc} \itemg \textbf{B:} Time is running so slow now
\vspace{-.4pc} \itemp \textbf{B:} And I'm still jetlagged, haha
\end{itemize}
While prior work explored modeling the continuum of information in text, speech~\cite{yeh2011segment}, and video streaming~\cite{gunes2013categorical}, the literature has had little to say about modeling continuous emotions in text-based conversation.
To the best of our knowledge, none of the existing conversational datasets contains emotion labels,
and the continuum property has not been considered in modern emotion labeling systems for conversations.
We believe that considering hidden emotion states when developing computational models of humans' consecutive emotion dynamics is a promising direction, in which a middle-layered computation that captures the natural flow of emotions is necessary.
\section{Challenge 2: Multi-User Conversations}
Multi-party chatting, also known as \textit{Group} or \textit{Channel} chat in modern messengers, raised unexpected challenges.
In our study, in which 22.46\% of messages
were recorded in multi-user chatting groups,
we found that providing emotion cues on top of a multi-user conversation made it difficult for users to concentrate on the running dialog.
For instance, in the following conversation between four different users, it is hard to keep track of both the dialog and the emotion of each user at the same time.
\begin{itemize}[label={}]
\small
\itemg \textbf{A:} Oh I'll have it tonight, just can't rsvp on mobile arm
\vspace{-.4pc} \itemgrey \textbf{A:} *atm
\vspace{-.4pc} \itemgrey \textbf{B:} ACK
\vspace{-.4pc} \itemb \textbf{B:} I'll mark you down
\vspace{-.4pc} \itemg \textbf{B:} yup, it's tonight :)
\vspace{-.4pc} \itemy \textbf{C:} holy shit this sounds awesome!
\vspace{-.4pc} \itemy \textbf{B:} John \cmmnt{Artemis} is super nerdy rad
\vspace{-.4pc} \itemg \textbf{D:} I want in on this. I'll see if I can make it work with tech
\vspace{-.4pc} \itemg \textbf{D:} I can make it
\vspace{-.4pc} \itemg \textbf{D:} unfortunately my grandparents are coming in tonight so I don't think I'll be able to join :( ha
\end{itemize}
Furthermore, multi-party conversations also raised challenges in designing the user experience.
As shown in the Introduction section, EmotionPush\xspace uses two ways to provide emotion cues: 1) a colored push notification, and 2) a colored bubble that floats on the top layer of the screen.
However, neither method was capable of efficiently conveying emotions in multi-user conversations.
While a notification can show the message and its emotion cue simultaneously, it displays only the name of the chat group instead of the name of the message sender;
users also found it difficult to identify the corresponding speaker from the bubble's color changes when multiple users were talking.
These design challenges of providing affective feedback that considers emotions, texts, and users go beyond prior research on online chatting interfaces~\cite{vronay1999alternative,roddy1998interface}.
\begin{comment}
In the following cases, we show three short conversations in which more than two users are involved.
Some related issues about computer-based chat rooms are noticed in \cite{vronay1999alternative,roddy1998interface}. \lwku{don't know what related issues are here. Can CY gives some examples?}
For mobile version, \cite{seong2007designing,su2007exploring} try to design and evaluate several different ways of presenting messages for users.
\end{comment}
\section{Challenge 3: Different Dynamics Between Different Users}
Different interaction dynamics occur between people in different contexts and relationships.
One risk of classifying emotions solely based on texts is the neglect of user context, which is known to have strong correlations with user behavior~\cite{baym2007relational,gilbert2009predicting}.
Prior work has also shown that language comprehension is not merely a process of decoding the literal meaning of a speaker's utterance, but also one of making pragmatic inferences that go beyond the linguistic data~\cite{frank2014inferring}.
For instance, in our study, we observed that emotion classification worked better on conversations between users who rarely interacted with each other, in which the language was more formal.
The following is an example.
\begin{itemize}[label={}]
\small
\itemgrey \textbf{A:} Hey man.. hows it going!
\vspace{-.4pc} \itemy \textbf{B:} Hey! It's going well :-)
\vspace{-.4pc} \itemgrey \textbf{B:} THings are pretty hectic, but I'm trying to get used to the assignments and whatnotn
\vspace{-.4pc} \itemgrey \textbf{A:} haha sounds like grad school
\vspace{-.4pc} \itemy \textbf{B:} Yup! Haha
\vspace{-.4pc} \itemy \textbf{B:} Weren't you planning a trip to Eastern United States \cmmnt{Pittsburgh}?
\vspace{-.4pc} \itemb \textbf{A:} I was! But I never ended up coming.. I would still like to but my best bet was recruiting and I asked not to go as there was soem work that came up
\end{itemize}
On the other hand, the conversations between users who frequently talked with each other often contain informal expressions.
The following is an example.
\begin{itemize}[label={}]
\small
\itemgrey \textbf{A:} Okay I was thinking of getting pierced tomorrow after 6:30? I could theoretically do today between like 4:30-6 but I worry about cutting it too close?
\vspace{-.4pc} \itemr \textbf{B:} I'M DOWN
\vspace{-.4pc} \itemg \textbf{A:} what time would work best 4 u?
\vspace{-.4pc} \itemp \textbf{B:} a little after 6:30 might work better bc of activity fair?
\vspace{-.4pc} \itemy \textbf{A:} Yeah that makes sense!
\vspace{-.4pc} \itemw [Discussing about inviting other friends]
\vspace{-.4pc} \itemgrey \textbf{B:} cooooooooooool
\vspace{-.4pc} \itemgrey \textbf{B:} i can prob get out of helping with teardown haha
\vspace{-.4pc} \itemg \textbf{A:} Its no big if u cant, its open until 9
\vspace{-.4pc} \itemgrey \textbf{B:} yeeeee
\end{itemize}
Our observations suggested that EmotionPush\xspace could be more helpful for some conversations, in this case conversations between people who talked less frequently with each other, than for others.
User context could thus help both to directly improve emotion classification and to identify which conversations EmotionPush\xspace can assist better.
\begin{comment}
Different degree of familiarity would cause different interaction between users.
The following two cases are extracted from the unfamiliar users who only send a few messages to each other, and the third conversation is chosen from the users who are familiar to each other.
As we can see, the word choice of the unfamiliar users' conversation is more formal than the familiar one.
Furthermore, when people are familiar with each other, they tend to use more emojis, more abbreviations and more abnormal languages.
\cite{hsu2011closer,baym2007relational} investigate the impact of different types of relation and different strength of relation.
Besides, in \cite{gilbert2009predicting}, the authors try to predict the relation strength by the social media data, which also reveals that difference degree of familiarity would cause great impact on various aspects.
\end{comment}
\section{Challenge 4: Misclassification of Emotions}
Emotion classification is not perfect.
It is inevitable that some emotion cues that EmotionPush\xspace sends are incorrect.
For instance, the message of user B in the following conversation should be of \textit{Anticipation} (orange) instead of \textit{Fear} (green).
\begin{itemize}[label={}]
\small
\itemo \textbf{A:} Will it be factory reset, does it have Microsoft office preset
\vspace{-.4pc} \itemg \textbf{B:} Yes, I will factory reset it tonight and if you want, you can have a look at it :)
\end{itemize}
In the following example, the message of user A should be of \textit{Anticipation} (orange) instead of \textit{Fear} (green), and the message of user B was apparently not of \textit{Sadness} (blue).
\begin{itemize}[label={}]
\small
\itemg \textbf{A:} Hey guys so does 2:30 sound good for volunteering tomorrow? We'll take next week off because of fall break
\vspace{-.4pc} \itemb \textbf{B:} We can leave at 230
\end{itemize}
Misclassified cases raise the question of what level of performance is good enough for a realistic application.
EmotionPush\xspace's classifier achieved an average AUC (the area under the receiver operating characteristic curve) of 0.68, which is comparable to the state-of-the-art performance~\cite{wang2016sensing}.
It is noteworthy that humans are not good at identifying emotions in text.
Prior work showed that humans on average can correctly predict only 63\% of emotion labels of articles in LiveJournal~\cite{mishne2005experiments}.
Our post-study survey also showed that participants did not think the wrongly-predicted emotion colors were harmful to their chatting experience (average rating = 0.85 on a scale from 0 to 4), while they felt the correctly-predicted emotion colors were helpful (average rating = 2.5).
Given all these factors, we believe that our emotion classifiers' performances are practical for real-world applications.
In addition to improving emotion classification,
challenges also come from designing a good user experience around error cases.
EmotionPush\xspace is reliable at identifying \textit{Joy}, \textit{Anger}, and \textit{Sadness}~\cite{wang2016sensing}, but less so for other emotions.
One potential direction is therefore to use different feedback types (e.g., vibration) to distinguish reliable predictions from uncertain ones.
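As a concrete reference for the AUC figure quoted above, the following minimal sketch (illustrative only; the scores and class below are made up and this is not the EmotionPush\xspace classifier code) computes the area under the ROC curve for one binary emotion label via its pairwise-ranking formulation:

```python
def binary_auc(pos_scores, neg_scores):
    """Area under the ROC curve for one emotion class, computed as
    the probability that a randomly chosen positive example is
    ranked above a randomly chosen negative one (ties count 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for the "Joy" class:
joy = [0.9, 0.8, 0.3]     # scores on messages truly labeled Joy
other = [0.4, 0.2, 0.1]   # scores on all other messages
print(binary_auc(joy, other))  # 8 of 9 pairs ranked correctly -> ~0.889
```

A macro-averaged multi-class AUC, as reported for EmotionPush\xspace, would average this quantity over the emotion classes in a one-vs-rest fashion.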
\section{Challenge 5: Unconventional Content}
Similar to most text-processing systems deployed to the real world, EmotionPush\xspace faced challenges in handling unconventional content in instant messages.
In this section we describe three types of unconventional content we observed in our study: multiple languages, graphic symbols such as emojis, and long messages.
\paragraph{Multiple Languages \& Code Switching}
Real-world users speak various languages.
Even though we recruited native English speakers for our study, participants happened to speak in, or switch to, various languages when talking with friends.
For example, user A switched between English and Chinese in the following conversation.
\begin{CJK*}{UTF8}{bsmi}
\begin{itemize}[label={}]
\small
\itemy \textbf{A:} How's ur weekend
\vspace{-.4pc} \itemp \textbf{A:} Sorry last night I didn't sleep well and needed to work ..Feel like I'm a zombie today haha
\vspace{-.4pc} \itemgrey \textbf{A:} 整天腦袋空空的
\vspace{-.4pc} \itemgrey \textbf{A:} 你們都搬到北台灣\cmmnt{台北}?
\vspace{-.4pc} \itemgrey \textbf{B:} 哈哈加油喔喔喔
\vspace{-.4pc} \itemgrey \textbf{B:} 對呀!
\vspace{-.4pc} \itemgrey \textbf{B:} 北海岸\cmmnt{淡水}附近
\vspace{-.4pc} \itemr \textbf{A:} How r u
\end{itemize}
\end{CJK*}
Not only does text-based emotion classification require sufficient labeled data for training, but code-switching processing techniques also rely heavily on training data~\cite{brooke2009cross,vilares2015sentiment}.
None of these technologies is capable of processing unseen languages.
While prior work explored cross-language acoustic features for emotion recognition in speech~\cite{pell2009recognizing}, detecting emotions in text of arbitrary languages is still infeasible.
For deployed systems such as EmotionPush\xspace, making design decisions around languages the system cannot understand is inevitable.
Currently EmotionPush\xspace supports two languages, English and Chinese,
but these two modules were developed separately and still cannot handle code-switching cases such as the example above.
In the future, we plan to incorporate a language identifier to provide more concrete feedback (e.g., ``Sorry, I do not understand French.'') to users.
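As a rough illustration of such a language identifier, the sketch below uses a toy script-level heuristic based on Unicode ranges; it is not the actual EmotionPush\xspace implementation, and a deployed system would use a trained identifier instead:

```python
def classify_script(message):
    """Toy heuristic: bucket each character as Latin or CJK and label
    the message 'latin', 'cjk', 'mixed' (a code-switching candidate),
    or 'unknown'. Accented letters and other scripts are ignored."""
    latin = cjk = 0
    for ch in message:
        if "a" <= ch.lower() <= "z":
            latin += 1
        elif 0x4E00 <= ord(ch) <= 0x9FFF:  # CJK Unified Ideographs
            cjk += 1
    if latin and cjk:
        return "mixed"
    if latin:
        return "latin"
    if cjk:
        return "cjk"
    return "unknown"

print(classify_script("How's ur weekend"))  # latin
```

Messages classified as ``mixed'' or ``unknown'' could then trigger the concrete feedback described above rather than an unreliable emotion cue.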
\begin{comment}
Many approaches and ideas are proposed to solve the multiple languages and code switching problem.
In \cite{brooke2009cross}, the author tried to adapt an existing English approach to Spanish.
\cite{vilares2015sentiment} evaluated three approaches, including monolingual model, Multilingual model and Monolingual pipeline with language detection, on four different corpus, English monolingual corpus, Spanish monolingual corpus, multilingual corpus built by previous two corpus and code-switching corpus.
The result shows that multilingual model outperforms the pipeline approach in both mixed multilingual corpus and code-switching corpus.
That is to say, considering multiple languages at the same time benefits more than detecting language and applying the corresponding model.
\begin{itemize}[label={}]
\small
\itemb \textbf{A:} salut trija comment vas tu? ca était le séjour aux states?
\vspace{-.4pc} \itemgrey \textbf{B:} Salut, je vais bien. Oui voyage au etas unis etait tres bien. Bien passée. On etait a orlando, c'était très très chaud la bas. Et puis los angeles et san francisco.
\vspace{-.4pc} \itemg \textbf{B:} Et pour toi. Ca va le stage??
\vspace{-.4pc} \itemy \textbf{A:} oui cava bien a montreal aussi il faisait chaud
\vspace{-.4pc} \itemg \textbf{A:} mais ce we il a plu
\vspace{-.4pc} \itemg \textbf{B:} Et tu continue le stage la bas?
\vspace{-.4pc} \itemo \textbf{A:} Oui pour le moment. Je ferai probablement un doctorat par la suite
\vspace{-.4pc} \itemo \textbf{A:} Et toi ça était ton exam de kine .?
\vspace{-.4pc} \itemgrey \textbf{B:} Ah c'est tres bien alors. J'etait accepé a rennes mais on m'a proposé de faire 3 ans en total et je l'ai refusé.
\vspace{-.4pc} \itemgrey \textbf{A:} 3 ans !!!!! C'EST trop et nantes ?
\vspace{-.4pc} \itemb \textbf{B:} Nantes je n'ai pas eu
\vspace{-.4pc} \itemy \textbf{A:} Ah tu peux refaire l'année prochaine
\vspace{-.4pc} \itemb \textbf{A:} esssaie poitiers aussi :)
\vspace{-.4pc} \itemb \textbf{B:} Oui peut etre
\end{itemize}
\begin{CJK*}{UTF8}{bsmi}
\begin{itemize}[label={}]
\small
\itemgrey \textbf{A:} 剛剛下雨下超大,可能要取消QQ
\vspace{-.4pc} \itemg \textbf{B:} 喔喔對...很誇張的!!突然超級大
\vspace{-.4pc} \itemg \textbf{B:} 好!那如果沒有取消再跟我說~~
\vspace{-.4pc} \itemgrey \textbf{A:} 確定取消QQ
\vspace{-.4pc} \itemg \textbf{B:} 好的:(
\end{itemize}
\end{CJK*}
In the future, as our core model are easy to modify, we plan to consider different languages at the same time by building our model on cross-language word embeddings by Wikipedia with different languages \cite{lauly2014autoencoder}.
\end{comment}
\paragraph{Emoji, Emoticons, and Stickers}
\begin{CJK*}{UTF8}{gkai}
Graphic symbols such as emojis, emoticons and stickers are widely used in instant messages, often for expressing emotions.
For example, the emoticon ``\verb|¯\_(ツ)_/¯|'' (also known as ``smugshrug''),
which represents the face and arms of a smiling person with hands raised in a shrugging gesture, was used in the following conversation.
\begin{itemize}[label={}]
\small
\itemy \textbf{A:} when can i come and pick up my Jam and also Goat
\vspace{-.4pc} \itemr \textbf{B:} whenever you want tbh?
\vspace{-.4pc} \itemr \textbf{B:} we're home rn if ur down
\vspace{-.4pc} \itemg \textbf{B:} or tomorrow sometime
\vspace{-.4pc} \itemo \textbf{B:} \verb|¯\_(ツ)_/¯|
\end{itemize}
The usages and effects of graphic symbols in online chatting have been thoroughly studied~\cite{jibril2013relevance,walther2001impacts,wang2015more},
and techniques for handling emojis in text processing have also been developed~\cite{barbieri2016does,eisner2016emoji2vec}.
However, current technologies are still not capable of identifying emotions from arbitrary emojis and emoticons,
not to mention that new graphic symbols are created every day (e.g., ``smugshrug'' was only approved as part of Unicode 9.0 in 2016) and that stickers are not even text.
\end{CJK*}
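A pragmatic fallback, sketched below under the assumption that unhandled symbols should at least be detected so the classifier can abstain rather than guess, is to flag messages containing code points outside plain text (emoji blocks, or non-Latin characters such as the Katakana face in the emoticon above); the Unicode ranges used are an illustrative subset, not a complete list:

```python
def has_graphic_symbol(message):
    """Flag messages carrying emoji code points or the kind of
    non-Latin characters that appear in emoticons, so the emotion
    classifier can abstain instead of guessing."""
    for ch in message:
        code = ord(ch)
        if 0x1F300 <= code <= 0x1FAFF:   # main emoji blocks
            return True
        if 0x2600 <= code <= 0x27BF:     # misc. symbols and dingbats
            return True
        if 0x30A0 <= code <= 0x30FF:     # Katakana (e.g., the shrug's face)
            return True
    return False

print(has_graphic_symbol("or tomorrow sometime"))  # False
```

Note that the Katakana check would also fire on genuine Japanese text; a real system would combine this with the language identification discussed earlier.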
\paragraph{Paragraph-like Long Messages}
Often instant messaging users chunk a long message into smaller pieces and send them consecutively.
However, we observed in our study that users occasionally sent exceptionally long messages.
For instance, one user sent a message containing 10 sentences (134 words) to warn the former owner of his/her house to clean up as soon as possible,
another sent a 10-sentence message (201 words) to advertise his/her upcoming stand-up comedy performance, and a third sent a 9-sentence message (152 words) to discuss a reunion event.
In each of these long messages, the user used multiple sentences to express complex issues or emotions, which made it difficult to summarize the message with a single emotion.
While the literature showed that emotion classification yields better performance on long sentences that contain more words, because they bear more information~\cite{calvo2013emotions}, our observations suggested that long messages containing many sentences often result in a less confident or incorrect emotion classification as a whole.
\begin{comment}
\begin{itemize}[label={}]
\small
\itemr \textbf{A:}Hi guys. I just got to 57 yesterday, and frankly, the state of the apartment is quite unacceptable. The apartment is extremely unsanitary and still holds a lot of the things you left behind. This is very irresponsible on your part, and I'm just gonna be blunt that I'm really quite disappointed. I genuinely feel a bit cheated by the fact that we're friends and entrusted you to make the effort to pass the apartment into our hands in the best condition that you can. However, no cleaning efforts were see whatsoever. TWO OPTIONS: we will clean the apartment but charge you cleaning fee, or you can hire professional cleaning for us ASAP. School is starting and this is no fun. Not trying to hurt feelings, just trying to get things done. Thank you.
\vspace{-.4pc} \itemo \textbf{B:} Thanks Lucy for letting us know and reaching out to us regarding this issue. May I ask what specifically about the apartment you were disappointed in?
\vspace{-.4pc} \itemr \textbf{C:} Jackie have you talk to Lamond yet? There is a free clean service from mayer management.
\vspace{-.4pc} \itemr \textbf{C:} I suggested that we were going to hire someone to clean it for you, but turned out there is a service from the management. So I told Jackie to talk to Lamond. I tried to ask the maintain guy. He said it has to be the one who lives there to send the request.
\vspace{-.4pc} \itemr \textbf{A:} Hi Eric. The walls were really dirty, the carpet hasn't been cleaned, the toilet is still kind of gross, the kitchen is extremely greasy. I stayed up until 1AM to clean up last night after 24hrs of traveling, so that may also explain some grumpiness. Don't take it personally, but it's important we figure something out.
\vspace{-.4pc} \itemr \textbf{A:} Mei Song, do you have his contact information?
\vspace{-.4pc} \itemp \textbf{B:} Thanks Lucy. I asked because I haven't been inside 57 since graduation.
\end{itemize}
\end{comment}
\section{Conclusion \& Future Work}
In this paper, we describe challenges in deploying an emotion detection system, EmotionPush\xspace, for instant messaging applications.
These challenges included the continuum of emotions, multi-user conversations,
different dynamics between different users,
misclassification, and
unconventional content.
These challenges are not only about providing automatic affective feedback using text-processing technologies, but also about designing a user experience given interrelated factors including human behavior and language.
Through these discussions, we expect to provide insight into the deployment of affective-computing applications
and to motivate researchers to develop the solutions of tomorrow.
In the future, building on EmotionPush\xspace, we plan to design a mechanism that encourages users to contribute their content and emotion feedback to advance this technology where it is most needed.
\section{Acknowledgements}
Research for this paper was partially supported by the Ministry of Science and Technology, Taiwan, under contract MOST 104-2221-E-001-024-MY2.
\bibliographystyle{aaai}
\section{\label{sec:level1}Introduction}
The $f$-electron state is a main driver of the chemistry and physics in actinide-based intermetallics, and influences properties ranging from the functional (e.g., crystal structure/density, melting temperature, and thermal conductivity) to the exotic (e.g., unconventional superconductivity, magnetism, and electronic topology)~\cite{moore,Pfleiderer09,Maple10,dzero}. This is due to several factors, including (i) the remarkable flexibility of the $f$-electron valence, which readily conforms to a multitude of different crystal/chemical environments and (ii) that $f$-electrons intrinsically carry unique features including a tendency to hybridize with conduction electrons and strong spin-orbit coupling that can produce large spin anisotropy. An intriguing subset of behaviors that such materials exhibit are structural/electronic instabilities: e.g., as for cerium, which undergoes an isostructural volume collapse between two face centered cubic structures under applied pressure~\cite{zhao_1997} and plutonium, which features the largest number of distinct structural phases in the temperature-pressure phase diagram of all elements~\cite{moore,Wick_1967}. In the broader family of actinide based materials, these features have implications for development of nuclear fuels~\cite{Gofryk_2017}, waste forms for storage of dangerous isotopes~\cite{Tom_2015}, and other applications ~\cite{application}.
The system UCr$_2$Si$_2$ offers the chance for an improved understanding of the relationship between structure and the $f$-state, as it was earlier shown to crystallize in the ubiquitous ThCr$_2$Si$_2$ tetragonal structure and to undergo a phase transition to a monoclinic phase near $T_{\rm{S}}$ $\approx$ 205 K~\cite{Matsuda1_2002,Matsuda1_2003,Lemoine_2018}. The U ions carry a magnetic moment, the magnetic anisotropy is Ising-like, there is evidence for Kondo lattice behavior, and antiferromagnetic ordering appears at $T_{\rm{N}}$ $\approx$ 25 K. The origin of the structural phase transition has not been established, but it is noteworthy that (i) in the broader family ($Ln/An$)Cr$_2$Si$_2$ ($Ln$ $=$ lanthanide and $An$ $=$ actinide) the Cr ion carries a magnetic moment (here it does not) and (ii) no other example exhibits a structural phase transition~\cite{Klosek_2008, Ryan_2003, Shatruk_2019,NpCr2Si2_1977,Book_1998}. This might suggest that this compound is differentiated from its analogues mainly due to strong hybridization between the uranium $f$-electrons and the conduction electrons, which then produces the structural instability. On the other hand, it was earlier proposed that a phonon driven instability associated with the Cr-Si and Si-Si bonding is responsible for this behavior~\cite{Lemoine_2018}. In addition to understanding the origin of the structural phase transition, it is attractive to consider the possibility of driving it from the thermally driven (classical) regime to the zero temperature (quantum) regime, as has been done earlier for some magnetic correlated electron materials that conform to the semi-universal quantum critical point phase diagram~\cite{Rosch07,Gegenwart_08,brando16,Lee_2006,Stewart_2011,Singleton_2002}.
In order to clarify the behavior of this compound and to assess whether a structural quantum critical point might be induced, we have carried out measurements of polycrystalline UCr$_2$Si$_2$ under applied pressure and also for specimens where Cr is substituted by Ru. These measurements are complementary; while applied pressure is expected to compress the unit cell volume, X-Ray diffraction measurements show that Cr $\rightarrow$ Ru substitution results in compression along the $c$-axis and expansion in the $ab$-plane. In the former case, the structural phase transition is initially unchanged, but starting near $P$ $=$ 7.6 kbar it undergoes a rapid evolution, increases in temperature, and eventually merges with a previously overlooked higher temperature feature at $T_{\rm{X}}$ = 280 K. In contrast, Cr $\rightarrow$ Ru substitution quickly obscures $T_{\rm{X}}$ and drives a smooth suppression of $T_{\rm{S}}$ until it rapidly collapses, possibly towards a structural quantum phase transition near $x_{\rm{c,S}}$ $\approx$ 0.16. The antiferromagnetism also evolves in a complex way; under hydrostatic pressure it abruptly disappears as $T_{\rm{S}}$ begins to increase, while it is gradually suppressed with increasing $x$ and collapses at a value $x$ $<$ $x_{\rm{c,S}}$. Together, these features provide a setting in which to investigate the interplay between magnetism, structure, and Kondo lattice behavior and thereby sets the stage for developing a previously unexplored class of strongly correlated quantum materials.
\section{\label{sec:level1}Experimental Methods}
Polycrystalline specimens of UCr$_{2-x}$Ru$_x$Si$_2$ were grown from elements with purities \(>\) 99.9\% by combining elements in the molar ratio U:Cr:Ru:Si $=$ 1:(2-$x$):$x$:2 using an arc-furnace. To facilitate mixing of the elements, the U and Si were melted first, then the resulting mixture was combined with Cr and Ru. The crystal structure and chemical composition were verified by powder X-Ray diffraction (XRD) and energy dispersive spectroscopy (EDS) analysis. EDS results are shown in Fig. S1, where the measured concentration $x_{\rm{act}}$ is compared to the nominal concentration $x_{\rm{nom}}$. Throughout the rest of the manuscript we use $x_{\rm{act}}$ unless otherwise specified. Powder XRD measurements were performed at temperatures 300 K, 40 K, and 10 K using a Guinier diffractometer. Ambient pressure electrical resistance $R$ measurements for temperatures $T$ $=$ 0.5 $-$ 300 K were performed in a four-wire configuration and the heat capacity $C$ was measured for $T$ $=$ 1.8 $-$ 250 K using a Quantum Design Physical Property Measurement System. $R(T)$ measurements under applied pressure were performed using a piston cylinder pressure cell with Daphne 7474 oil as the pressure transmitting medium. The pressure was determined by the shift in ruby fluorescence peaks as measured below $T$ $=$ 10 K.
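The lattice constants discussed below follow from the tetragonal plane-spacing relation $1/d^2 = (h^2+k^2)/a^2 + l^2/c^2$ combined with Bragg's law. The short sketch below illustrates the underlying geometry by inverting two indexed reflections for $a$ and $c$ (an illustration only, not our refinement code; Cu K$\alpha_1$ radiation is assumed, and the peak positions a user would supply are hypothetical):

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha1 in angstroms (assumed; not stated in the text)

def d_spacing(two_theta_deg):
    """Bragg's law: lambda = 2 d sin(theta)."""
    return WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

def tetragonal_constants(peak1, peak2):
    """Solve 1/d^2 = (h^2 + k^2)/a^2 + l^2/c^2 for a and c, given two
    indexed reflections, each supplied as ((h, k, l), two_theta in degrees)."""
    (h1, k1, l1), tt1 = peak1
    (h2, k2, l2), tt2 = peak2
    q1 = 1.0 / d_spacing(tt1) ** 2
    q2 = 1.0 / d_spacing(tt2) ** 2
    m1, m2 = h1 * h1 + k1 * k1, h2 * h2 + k2 * k2
    # Linear system in x = 1/a^2 and y = 1/c^2.
    det = m1 * l2 * l2 - m2 * l1 * l1
    x = (q1 * l2 * l2 - q2 * l1 * l1) / det
    y = (m1 * q2 - m2 * q1) / det
    return 1.0 / math.sqrt(x), 1.0 / math.sqrt(y)
```

In practice one refines over all observed reflections rather than two; the two-peak inversion only makes the plane-spacing geometry explicit.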
\section{\label{sec:level1}Results}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{pressure.pdf}
\caption{(a) Electrical resistance divided by the room temperature value $R/R_{300K}$ vs temperature $T$ for UCr$_2$Si$_2$ under applied pressure. The anomaly at $T_{\rm{X}}$, the structural phase transition $T_{\rm{S}}$, and the antiferromagnetic phase transition $T_{\rm{N}}$ are indicated by arrows. For clarity, the curves are vertically offset by the value $\Delta$ $=$ 0.1.
(b) The derivative of $R/R_{300K}$ with respect to $T$ $\partial$$R$/$\partial$$T$ vs $T$ for UCr$_2$Si$_2$ under applied pressure. Curves are offset by constant values.}
\label{pressure}
\end{center}
\end{figure}
Electrical resistance measurements $R/R_{300K}$ under hydrostatic pressure were performed in order to establish the influence of isotropic volume compression (Fig.~\ref{pressure}). For $P$ $=$ 0, $R/R_{300K}$ vs. $T$ is consistent with earlier reports~\cite{Matsuda1_2002,Matsuda1_2003,Lemoine_2018}, where the structural phase transition at $T_{\rm{S}}$ $=$ 205 K and the antiferromagnetic phase transition at $T_{\rm{N}}$ $=$ 25 K appear as a hysteretic decrease and a knee-like reduction in $R/R_{300K}$, respectively. These features are superimposed on a broader Kondo lattice-like temperature dependence. We additionally observe an abrupt change in slope near $T_{\rm{X}}$ $=$ 280 K which was earlier seen by Matsuda $et$ $al$.~\cite{Matsuda1_2002,Matsuda1_2003}, but was not discussed. As shown in Figs. S2 and S3: (i) it is present in $\rho(T)$ measurements for both polycrystal and single crystal specimens, (ii) it is anisotropic in $\rho(T)$, and (iii) it appears as a knee-like feature in the heat capacity, which indicates that it is a bulk thermodynamic phase transition. We also note that it has a close similarity to the charge density wave features seen in $R$Pt$_2$Si$_2$($R$ = Y, La, Nd, Lu)~\cite{LaPt2Si2, RPt2Si2} and UPt$_2$Si$_2$~\cite{UPt2Si2}.
Pressure initially has a limited effect on $T_{\rm{X}}$, $T_{\rm{S}}$, and $T_{\rm{N}}$ but starting above 7.6 kbar they each undergo a clear evolution. In particular, $T_{\rm{S}}$ rapidly moves to higher temperatures, loses magnitude, and finally merges with $T_{\rm{X}}$. This suggests that compression of the unit cell volume has a tendency to stabilize both of the high temperature phase transitions, which might be tied to an increasing lattice stiffness. However, above 7.6 kbar the feature at $T_{\rm{S}}$ changes shape and eventually becomes difficult to distinguish from $T_{\rm{X}}$ as they merge together. This may suggest a transformation into another structural phase over this $P$ range. At lower temperatures $T_{\rm{N}}$ is gradually suppressed and abruptly disappears above 9.4 kbar, showing that the magnetic order is tied directly to the monoclinic structure, which may support the view that the structure changes at high $P$. Finally, we note that the Kondo lattice behavior persists at all $P$.
The influence of non-isoelectronic chemical substitution (Cr $\rightarrow$ Ru) on UCr$_2$Si$_2$ is revealed in the $R(T)$ curves shown in Fig.~\ref{substitution}. For $x$ $=$ 0, the feature at $T_{\rm{S}}$ results in an increase in $R/R_{300K}$ and the change at $T_{\rm{X}}$ is weak (Fig.~\ref{substitution}a inset). This contrasts with results for the $x$ $=$ 0 specimen that was used for measurements under $P$ (Fig.~\ref{pressure}), but is consistent with what is seen for single crystal specimens (Fig. S3) where $R/R_{300K}$ is anisotropic depending on whether the electrical current is applied in the $ab$-plane or along the $c$-axis. In particular, $R(T)$ increases (or decreases) at $T_{\rm{S}}$ for electrical current applied in the $ab$ plane (or along the $c$-axis) and $T_{\rm{X}}$ is only easily observed when the electrical current is applied along the $c$-axis.~\cite{Matsuda1_2002,Matsuda1_2003}
For the chemical substitution series, we find that the feature at $T_{\rm{X}}$ rapidly disappears and the one at $T_{\rm{S}}$ is smoothly suppressed up to $x$ $=$ 0.16, where it may collapse towards zero temperature. The disappearance of $T_{\rm{X}}$ might indicate that this phase transition is unstable against chemical substitution, but it is also possible that it merely becomes unobservable due to broadening. As $T_{\rm{S}}$ is suppressed it retains its characteristic shape and hysteresis, showing that it remains first order throughout the phase diagram. We also note that the feature at $T_{\rm{N}}$ is suppressed with increasing $x$ and is no longer observable near $x$ $=$ 0.08. The heat capacity and magnetic susceptibility data presented below also reveal a systematic suppression of the magnetic ordering with chemical substitution, resulting in its disappearance for $x$ $\approx$ 0.08 - 0.10. Across the entire substitution series, the underlying Kondo lattice behavior is preserved.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{Graph4.pdf}
\caption{(a) Electrical resistance divided by the room temperature value $R/R_{300K}$ vs temperature $T$ for UCr$_{2-x}$Ru$_x$Si$_2$. For clarity, the curves are vertically offset by the value $\Delta$ $=$ 0.1. (inset) Zoom of $R(T)$ for $x$ $=$ 0 showing the features at $T_{\rm{X}}$ and $T_{\rm{S}}$.
(b) The derivative of $R/R_{300K}$ with respect to $T$ $\partial$$R$/$\partial$$T$ vs $T$ for UCr$_{2-x}$Ru$_x$Si$_2$. Curves are offset by constant value $\Delta$ $=$ 0.01. The anomalies $T_{\rm{S}}$ and $T_{\rm{N}}$ are labeled.
}
\label{substitution}
\end{center}
\end{figure}
\begin{figure*}[!tht]
\begin{center}
\includegraphics[width=1\linewidth]{waterfall.pdf}
\caption{Powder x-ray diffraction data for UCr$_{2-x}$Ru$_x$Si$_2$ spanning concentrations 0 $<$ $x$ $<$ 0.256 and at temperatures $T$ (a) 300 K and (b) 40 K. Results at $T$ $=$ 10 K are similar to those seen at 40 K. The transition from the tetragonal to the monoclinic structure is evidenced by splitting of peaks between 300 K and 40 K, as highlighted by the shaded box in panel b.
}
\label{waterfall}
\end{center}
\end{figure*}
To clarify the impact of Cr $\rightarrow$ Ru substitution on the structure, powder XRD measurements at $T$ $=$ 300 K, 40 K and 10 K were performed (Fig.~\ref{waterfall} and Fig.~\ref{xrd}); i.e., spanning the $x$ $=$ 0 structural phase transition. As shown in Fig.~\ref{waterfall}a, the high temperature tetragonal ThCr$_2$Si$_2$-type structure persists across the entire substitution series up to $x$ = 0.26, without evidence for chemical phase separation or formation of impurity phases. At 300 K, fits to the data yield lattice constants $a$ and $c$, which increase and decrease with increasing $x$, respectively, while the unit cell volume $V$ remains nearly constant (Fig.~\ref{xrd}a-c). Measurements at $T$ $=$ 40 K (Fig.~\ref{waterfall}b) and 10 K (not shown) next reveal that, up to $x$ = 0.16, there is a clear splitting of the peaks due to the tetragonal $\rightarrow$ monoclinic structural phase transition. For larger $x$ the splitting abruptly disappears and the ThCr$_2$Si$_2$ symmetry persists down to $T$ $=$ 10 K, suggesting that the structural phase transition collapses near $x$ = 0.16. Within the monoclinic phase (Fig.~\ref{xrd}d-g), the lattice constants ($a$, $b$, $c$) and the distortion angle $\beta$ increase with increasing $x$. Similar to what is seen in the tetragonal phase, the spacing between the U-U atoms decreases with increasing $x$ (Fig.~\ref{xrd}g).
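The unit cell volumes extracted from the diffraction fits follow directly from the lattice parameters. As a minimal illustrative sketch (not the authors' refinement code), using the standard crystallographic formulas for the two cells:

```python
import math

# Illustrative only: unit cell volumes from fitted lattice parameters.
# Tetragonal ThCr2Si2-type cell: V = a^2 * c.
# Monoclinic cell with distortion angle beta: V = a * b * c * sin(beta).

def tetragonal_volume(a, c):
    """Unit cell volume of a tetragonal lattice (same units^3 as a, c)."""
    return a * a * c

def monoclinic_volume(a, b, c, beta_deg):
    """Unit cell volume of a monoclinic lattice; beta given in degrees."""
    return a * b * c * math.sin(math.radians(beta_deg))

# Consistency check: at beta = 90 degrees the monoclinic formula reduces
# to the orthorhombic value a*b*c.
print(monoclinic_volume(4.0, 4.0, 10.0, 90.0))  # 160.0
```

The numerical values above are placeholders; the formulas show why $V$ can stay nearly constant while $a$ and $c$ move in opposite directions.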
\begin{figure*}[!tht]
\begin{center}
\includegraphics[width=1\linewidth]{XRD.pdf}
\caption{ Summary of structural quantities determined from analysis of powder x-ray diffraction measurements of UCr$_{2-x}$Ru$_x$Si$_2$ at temperatures of 300 K (panels a-c), 40 K, and 10 K (panels d-g). The dashed lines are guides to the eye. (a) The lattice constants, $a(x)$ (left axis) and $c(x)$ (right axis) for the 300 K tetragonal structure. (b) The $c/a$ ratio for the 300 K tetragonal structure. (c) The unit cell volume $V(x)$ for the 300 K tetragonal structure. (d) Lattice constants $a$($x$), $b$($x$), and $c$($x$) for the low temperature monoclinic structure. (e) The monoclinic distortion angle $\beta$($x$) for the low temperature monoclinic structure. (f) The unit cell volume $V(x)$ for the low temperature monoclinic structure. (g) The interlayer U-U spacing for the monoclinic structure. Above $x$ = 0.16, the U-U spacing is taken as $c$/2 of the low temperature tetragonal structure.
}
\label{xrd}
\end{center}
\end{figure*}
\begin{figure}[!tht]
\begin{center}
\includegraphics[width=1\linewidth]{M.pdf}
\caption{(a) The magnetic susceptibility $\chi$ vs. temperature $T$ measured in an applied magnetic field $H$ = 0.5 T for select concentrations spanning the UCr$_{2-x}$Ru$_x$Si$_2$ chemical substitution series. The antiferromagnetic ordering temperatures are indicated by arrows at $T_{\rm{N}}$. (b) The inverse magnetic susceptibility $\chi^{-1}$ vs. temperature $T$ measured in the same field for the same concentrations. The structural ordering temperatures are indicated by arrows at $T_{\rm{S}}$.
}
\label{chi}
\end{center}
\end{figure}
The temperature dependent polycrystalline average magnetic susceptibility $\chi(T)$ data for selected concentrations are shown in Fig.~\ref{chi}. For $x$ $=$ 0, the data compare favorably to the ideal polycrystalline average $\chi_{\rm{avg}}(T)$ $=$ $(2\chi_{\rm{ab}} + \chi_{\rm{c}})/3$ that is calculated from results for a single crystal specimen, where $\chi_{\rm{ab}}$ and $\chi_{\rm{c}}$ are the magnetic susceptibilities when the magnetic field is applied in the $ab$-plane and along the $c$-axis, respectively (Fig. S4). The $x$ = 0 results are similar to earlier reports, where a nearly Curie-Weiss $T$-dependence is observed starting from 300 K (Fig.~\ref{chi}b). Fits to the data yield an effective magnetic moment $\mu_{\rm{eff}}$ $\approx$ 3.62 $\mu_{\rm{B}}$/U and a Curie-Weiss temperature $\theta$ $\approx$ $-$185 K. Note that the value of $\mu_{\rm{eff}}$ is similar to that seen for other intermetallics with trivalent or tetravalent uranium, but that this value is larger than what was seen in earlier reports.~\cite{Matsuda1_2002,Matsuda1_2003} The structural phase transition appears as a hysteretic feature at $T_{\rm{S}}$ $=$ 205 K, after which the CW behavior continues until the system orders antiferromagnetically near $T_{\rm{N}}$ $=$ 25 K. As shown in other measurements, both $T_{\rm{S}}$ and $T_{\rm{N}}$ are suppressed with increasing $x$. Also noteworthy is that the underlying CW temperature dependence is undisturbed by substitution, suggesting that the uranium valence state and the hybridization with the conduction electrons are unchanged. We also note that once the antiferromagnetic order is fully suppressed for $x$ $>$ 0.1, the low temperature $\chi(T)$ increases with increasing $x$, suggesting the emergence of ferromagnetic fluctuations.
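The Curie-Weiss analysis amounts to a straight-line fit of the inverse susceptibility, since $1/\chi = (T - \theta)/C$. A hedged sketch (not the authors' analysis code; the fitting window and the CGS convention $\mu_{\rm eff}\,[\mu_{\rm B}] \approx \sqrt{8C}$ with $C$ in emu\,K/mol are assumptions):

```python
import numpy as np

# Illustrative Curie-Weiss fit: 1/chi = (T - theta)/C is linear in T,
# so a straight-line fit to the inverse susceptibility gives the Curie
# constant C and the Weiss temperature theta directly. In CGS units
# (C in emu K / mol), mu_eff [mu_B] ~ sqrt(8 C).

def curie_weiss_fit(T, chi):
    """Linear fit of 1/chi vs T; returns (mu_eff in mu_B, theta in K)."""
    slope, intercept = np.polyfit(T, 1.0 / chi, 1)
    C = 1.0 / slope          # Curie constant (emu K / mol)
    theta = -intercept * C   # Weiss temperature (K)
    return np.sqrt(8.0 * C), theta

# Synthetic data mimicking the x = 0 values reported in the text:
# mu_eff ~ 3.62 mu_B/U and theta ~ -185 K, fit above T_S.
T = np.linspace(220.0, 300.0, 50)
C_true, theta_true = 3.62**2 / 8.0, -185.0
chi = C_true / (T - theta_true)

mu_eff, theta = curie_weiss_fit(T, chi)
print(mu_eff, theta)  # recovers ~3.62 and ~-185
```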
The temperature dependence of the heat capacity divided by temperature $C/T$ further exposes the structural phase transition and its impact on the electronic and magnetic properties (Fig.~\ref{HC}). At $x$ $=$ 0 it appears as a sharp first-order peak near $T_{\rm{S}}$ $=$ 205 K and with increasing $x$ it is monotonically suppressed down to 55 K at $x$ $=$ 0.16, after which it abruptly disappears before $x$ = 0.165. A close inspection of the thermal relaxation curves reveals that for $x$ = 0 there is a kink in the heating curve, consistent with there being a latent heat. This feature weakens with increasing $x$, and is not easily seen for $x$ = 0.16 (Fig.~\ref{HC}c and d). This implies that the transition becomes less strongly first order as $T_{\rm{S}}$ is suppressed, possibly as a result of disorder effects. As shown in Fig.~\ref{HC}e, the entropy recovered at $T_{\rm{S}}$ also grows smaller with increasing $x$, and is significantly reduced as $x$ approaches 0.16.
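The entropy associated with the transition is obtained by integrating the excess heat capacity over temperature, $S = \int (C/T)\,dT$. A minimal sketch of this bookkeeping (the exact background-subtraction procedure used by the authors is an assumption here):

```python
import numpy as np

# Hedged sketch: entropy released at a transition from the integral of the
# excess C/T over the peak region, after subtracting a smooth lattice
# background. The trapezoid rule is implemented by hand for portability.

def transition_entropy(T, C_over_T, background):
    """Trapezoid-rule integral of the excess C/T over the peak region."""
    excess = C_over_T - background
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(T)))

# Toy check: a constant excess h over a window of width W gives S = h * W,
# which the trapezoid rule reproduces exactly.
T = np.linspace(190.0, 220.0, 301)
background = np.full_like(T, 1.0)
C_over_T = background + 0.4
S = transition_entropy(T, C_over_T, background)
print(S)  # 0.4 * 30 = 12.0
```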
\begin{figure}[!tht]
\begin{center}
\includegraphics[width=1\linewidth]{HC.pdf}
\caption{(a) The heat capacity divided by temperature $C$/$T$ vs $T$ for UCr$_{2-x}$Ru$_x$Si$_2$ emphasizing the structural phase transition at $T_{\rm{S}}$. (b) Low temperature $C$/$T$ for selected concentrations showing both the antiferromagnetic phase transition at $T_{\rm{N}}$ and the low temperature upturn that are described in the text. (c) and (d) relaxation curves around the structural phase transition temperature for $x$ = 0 and $x$ = 0.16. In panel c the arrow indicates the kink in the heating curve due to the latent heat of the first order phase transition. A similar feature is not detected for $x$ = 0.16. (e) The entropy associated with the structural phase transition for selected $x$, which was determined as described in the text.
}
\label{HC}
\end{center}
\end{figure}
Cr $\rightarrow$ Ru substitution also strongly modifies the signature of the antiferromagnetic ordering in $C/T$, which is seen as a second-order lambda-like peak at $T_{\rm{N}}$ = 25 K for $x$ $=$ 0 (Fig.~\ref{HC}b).~\cite{Matsuda1_2002} In particular, $T_{\rm{N}}$ moves to lower temperatures until it disappears near $x$ $=$ 0.08. Over this $x$-range, its shape is preserved but its overall size grows smaller, indicating that much of the associated magnetic entropy is lost or moves below $T_{\rm{N}}$. Once the antiferromagnetism is no longer observed, the low temperature $C$/$T$ exhibits a weak increase with decreasing $T$ which becomes maximal near $x$ $\approx$ 0.15. For larger $x$ the divergence weakens and finally tends to saturate at low temperature as expected for a Fermi liquid. Note that although $\chi(T)$ provides evidence for ferromagnetic fluctuations over this $x$-range, the heat capacity data do not reveal any long range ordering: i.e., there are no sharp or lambda-like features that would indicate first or second order phase transitions, respectively.
\section{\label{sec:level1}Discussion}
\begin{figure}[!tht]
\begin{center}
\includegraphics[width=1\linewidth]{phase.pdf}
\caption{(a) Temperature $T$ vs pressure $P$ phase diagram for UCr$_2$Si$_2$ constructed from electrical resistance $R$ measurements under applied pressure. The unidentified phase transition at $T_{\rm{X}}$, the structural phase transition at $T_{\rm{S}}$, and the antiferromagnetic phase transition at $T_{\rm{N}}$ are shown. (b) Temperature $T$ vs. concentration $x$ phase diagram for UCr$_{2-x}$Ru$_x$Si$_2$ for $x$ $=$ 0 $-$ 0.26 constructed from electrical resistance $R$, powder x-ray diffraction, magnetic susceptibility $\chi$, and heat capacity $C$ measurements. The arrows represent results from XRD measurements where the tetragonal phase persists down to $T$ $=$ 10 K. At these same concentrations, $R$ and $C$ measurements show that there are no phase transitions for $T$ $>$ 500 mK.
}
\label{phase}
\end{center}
\end{figure}
Fig.~\ref{phase} compares the $T-P$ and $T-x$ phase diagrams compiled from the above measurements, where hydrostatic pressure and non-isoelectronic chemical substitution have profoundly different impacts on the structural and magnetic phase transitions. In the former case, $T_{\rm{X}}$, $T_{\rm{S}}$, and $T_{\rm{N}}$ are nearly constant up to pressures near 7.6 kbar, showing that small perturbations that isotropically compress the unit cell volume minimally impact the system. The data then reveal an evolution in the structural and magnetic phases for $P$ $>$ 7.6 kbar. While further measurements such as x-ray diffraction are needed to fully understand the phase diagram, we point out that this behavior might be expected given that hydrostatic pressure should evenly compress both the $ab$-plane and the $c$-axis, which roughly mimics the influence of decreasing temperature.~\cite{Lemoine_2018} As a result, the energy scale of the structural phase transition would only be expected to increase with $P$.
In contrast, Cr $\rightarrow$ Ru substitution immediately destroys the feature at $T_{\rm{X}}$, while $T_{\rm{S}}$ and $T_{\rm{N}}$ are rapidly suppressed and collapse towards zero temperature near $x_{\rm{c, N}}$ $=$ 0.08 and $x_{\rm{c, S}}$ $=$ 0.16, respectively. As reported earlier, the strong Cr-Si and Si-Si bonds directly contribute to the lattice rigidity, and therefore to the relative stabilities of the high-temperature tetragonal and low-temperature monoclinic phases.~\cite{Lemoine_2018} When Ru is introduced, the Cr(Ru)-Si bond is stretched due to the size difference between Cr and Ru, leading to an expansion in the $ab$-plane and a compression along the $c$-axis which anisotropically opens the voids where the U atoms are located. This apparently destabilizes the monoclinic phase (driving it to lower temperatures) and increases the $\beta$ angle. What is unclear in this process is the role of the bonding between the uranium and its surrounding environment. To clarify this, we point out (i) that earlier work shows that U is loosely bonded to the atoms surrounding it~\cite{Lemoine_2018}, (ii) that $\chi(T)$ measurements show that the uranium $f$-electron valence does not noticeably change between the two structural phases, which would be expected if a change of the $f$-state itself were responsible for the lattice instability, and (iii) that the underlying Kondo lattice behavior appears unchanged both with applied pressure and under chemical substitution, showing that the hybridization between the $f$- and conduction electrons is robust in the $T-P-x$ phase space. Therefore, it is likely that the structural phase transition is driven by the characteristics of the strong Cr-Si and Si-Si bonds, which are anisotropically tuned by the introduction of Ru.
Structural tuning subsequently has an important impact on the low temperature electronic/magnetic state. First, both phase diagrams show that the original antiferromagnetic order is only preserved when the monoclinic phase is unambiguously present. In particular, (i) it becomes unidentifiable in $\rho(T)$ as $T_{\rm{S}}$ begins to increase above 7.6 kbar and (ii) it is suppressed as $T_{\rm{S}}$ decreases and eventually is replaced by strengthening ferromagnetic fluctuations that span $x_{\rm{c, S}}$ $=$ 0.16. The collapse of $T_{\rm{S}}$ towards zero temperature at $x_{\rm{c, S}}$ is also of interest because it raises the possibility of there being a structural quantum critical point in this region. However, several features show that $T_{\rm{S}}$ remains first order as it is suppressed including that (i) the approach of the phase boundary towards zero temperature is nearly vertical and (ii) hysteresis is observed up to $x_{\rm{c, S}}$ $=$ 0.16. This prevents the occurrence of diverging quantum critical fluctuations, but might offer insights about how to proceed in future work.
Taken together, these results provide a window into the physics of an unusual quantum material: a Kondo lattice with tunable structural and magnetic instabilities.~\cite{Pfleiderer09,Maple10,Andy_2016,Greve_2010,Handunkanda_2015,Goh_2015,Goh_15} It remains important to fully establish the mechanism for the structural phase transition, where calculations and measurement of the phonon band structure and/or measurements such as x-ray absorption spectroscopy or ARPES will be useful. Also important is to investigate the feature at $T_{\rm{X}}$, which may be related to charge density wave features seen in $R$Pt$_2$Si$_2$ ($R$ = Y, La, Nd, Lu)~\cite{LaPt2Si2, RPt2Si2} and UPt$_2$Si$_2$~\cite{UPt2Si2}. Finally, the $T-P$ and $T-x$ phase diagrams provide guidance for further efforts to induce a magnetic or structural quantum critical point. For instance, measurements under uniaxial pressure would be useful to determine whether the decrease in the $c/a$ ratio is the dominant tuning parameter in the Cr $\rightarrow$ Ru substitution series, while chemical substitution that uniformly expands the lattice (e.g., Cr $\rightarrow$ Mo or W) would be expected to suppress $T_{\rm{S}}$. This might eventually provide access to phenomena that are distinct from what is seen in other systems with structural quantum phase transitions, e.g., LaCu$_{6-x}$Au$_x$, ScF$_3$, and (Ca$_x$Sr$_{1-x}$)$_3$Rh$_4$Sn$_{13}$, where the structural instability arises solely from the freezing of a soft phonon mode,~\cite{Andy_2016,Greve_2010,Handunkanda_2015,Goh_2015,Goh_15,Klintberg_12,Biswas_15} or the Fe-based superconductors, where strong electronic correlations originate from the Fe $d$-electron states and the magnetic/structural order are closely tied together.~\cite{Cruz_2008,Chu_2010,Fernandes_2014}
\section{\label{sec:level1}Conclusions}
We have studied the influence of applied pressure and Cr $\rightarrow$ Ru chemical substitution in UCr$_2$Si$_2$, where the former semi-uniformly reduces the unit cell and the latter results in a decrease along the $c$-axis and an expansion in the $ab$-plane. While hydrostatic compression increases the structural ordering temperature, disrupts the antiferromagnetism, and may induce a different structural order at high pressures, Cr $\rightarrow$ Ru substitution suppresses $T_{\rm{N}}$ towards zero temperature near $x_{\rm{c, N}}$ $\approx$ 0.08 and suppresses the structural phase transition $T_{\rm{S}}$ until it disappears near $x_{\rm{c, S}}$ $\approx$ 0.16, after which the tetragonal phase persists to low temperatures. In the region where $T_{\rm{S}}$ is maximally suppressed it remains a first order phase transition, which prevents the occurrence of quantum critical fluctuations of the order parameter. This study provides insights into the structural phase transition and magnetism of UCr$_2$Si$_2$, uncovers a high temperature phase at $T_{\rm{X}}$, and makes progress towards uncovering an electronic/structural quantum phase transition in an $f$-electron system, where alternative tuning strategies are suggested.
\section{\label{sec:level1}Acknowledgements}
A portion of this work was performed at the National High Magnetic Field Laboratory (NHMFL), which is supported by National Science Foundation Cooperative Agreement No. DMR-1157490 and DMR-1644779, and the State of Florida. Research of RB, YL, DG, were supported in part by the Center for Actinide Science and Technology, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award Number DE-SC0016568. PJWM and JD acknowledge funding by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant no. 715730, MiTopMat).
\section{Introduction}
\label{section:introduction}
Dispersive partial differential equations
have been extensively studied in mathematical research.
Many studies have paid attention to real or
complex-valued functions as solutions to these equations.
However, some nonlinear dispersive partial differential
equations in contexts in classical mechanics and fluid mechanics
require their solutions to take values in a (curved) Riemannian
manifold.
In general, their nonlinear structures depend on the geometric
setting of the manifold.
Therefore, concerning how to solve their initial value problem,
geometric analysis of the relationship between their solvable structure
and the geometric setting of the manifold plays an essential role.
\par
In this field,
after the pioneering work of Koiso \cite{koiso},
the method of geometric analysis
for the so-called one-dimensional Schr\"odinger flow equation,
the higher-dimensional generalization
and a third-order analogue
has been developed extensively.
Many results on how to solve their initial value
problem have been established
mainly from the following three points of view:
analysis of the solvable structure of
dispersive partial differential equations (systems),
an application of Riemannian geometry,
and analysis of nonlinear partial differential equations
with physical backgrounds.
See, e.g., \cite{CSU, chihara, chihara2, CO, KLPST, koiso, McGahagan, NSVZ,
onodera0, onodera1, onodera3},
and references therein.
In this paper, we study a fourth-order analogue
whose solutions are required to take values in a compact Riemann
surface.
This is a continuation of \cite{CO2, onodera2}
and presents the answer to the problem suggested in \cite{chihara2}.
\par
The setting of our problem is stated as follows:
Given a compact Riemann surface $N$
with the complex structure $J$
and with a hermitian metric $g$,
consider the following initial value problem
\begin{alignat}{2}
& u_t
=
a\,J_u\nabla_x^3u_x
+
\{\lambda+ b\, g(u_x,u_x)\}J_u\nabla_xu_x
+
c\,g(\nabla_xu_x,u_x)J_uu_x
\quad &\text{in}\quad
&\mathbb{R}{\times} \mathbb{T},
\label{eq:pde}
\\
& u(0,x)
=
u_0(x)
\quad&\text{in}\quad &\mathbb{T}.
\label{eq:data}
\end{alignat}
Here
$\TT=\RR/2\pi \mathbb{Z}$ is the one-dimensional flat torus,
$u=u(t,x):\RR\times \TT\to N$ is the unknown map
describing the deformation of
closed curves lying on $N$ parameterized by $t$,
$u_0=u_0(x):\TT\to N$ is the given initial map,
$u_t=du(\frac{\p}{\p t})$,
$u_x=du(\frac{\p}{\p x})$,
$du$ is the differential of the map $u$,
$\nabla_x$ is the covariant derivative along $u$ in $x$,
$J_u:T_uN\to T_uN$ is the complex structure at $u\in N$,
$a$, $b$, $c$, and $\lambda$ are real constants.
If $a=b=c=0$ and $\lambda=1$,
\eqref{eq:pde} is reduced to the second-order dispersive equation
of the form
\begin{equation}
\label{eq:SM}
u_t=J_u\nabla_xu_x,
\end{equation}
which is called a one-dimensional Schr\"odinger flow equation.
As a fourth-order analogue of \eqref{eq:SM},
we call \eqref{eq:pde} with $a\ne 0$
a fourth-order dispersive flow equation.
Hereafter it is assumed that $a\ne 0$.
\par
An example of \eqref{eq:pde} with $a\ne 0$ arises in two areas of
physics, where $N$ is supposed to be the
canonical two-dimensional unit sphere $\mathbb{S}^2$.
Indeed, if $N=\mathbb{S}^2$ equipped with the complex structure acting
as $\pi/2$-degree rotation on each tangent plane
and with the canonical metric induced from the Euclidean metric in
$\RR^3$,
\eqref{eq:pde} is described by
\begin{align}
& u_t
=
u\wedge
\left[
a\,\p_x^3u_{x}
+
\{\lambda+(a+b)\, (u_x,u_x)\} \p_xu_{x}
+
(5a+c)\, (\p_xu_{x},u_x) u_{x}
\right],
\label{eq:pdes2}
\end{align}
where $u:\RR\times \TT\to \mathbb{S}^2\subset \RR^3$,
$\p_x$ is the partial differential operator in $x$ acting on
$\RR^3$-valued functions,
$(\cdot,\cdot)$ is the inner product in $\RR^3$,
and $\wedge$ is the exterior product in $\RR^3$.
In particular, the $\mathbb{S}^2$-valued model
\eqref{eq:pdes2} with
$3a-2b+c=0$ and $\lambda=1$
models the continuum limit of the Heisenberg
spin chain systems
with biquadratic exchange interactions (\cite{LPD}),
where each of $a,b,c$ is decided
by two independent physical constants.
Interestingly,
the same equation can be derived from an equation
modelling the motion of a
vortex filament in an incompressible perfect fluid in $\RR^3$
by taking into account the elliptical deformation effect of the core
due to the self-induced strain
(\cite{fukumoto, FM}).
\par
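Note that \eqref{eq:pdes2} is manifestly consistent with the pointwise constraint $|u|=1$: the right hand side has the form $u\wedge F$, which is orthogonal to $u$, so $\partial_t (u,u) = 2(u,u_t) = 0$. A minimal numerical check of this orthogonality (with an arbitrary placeholder vector $F$ standing in for the bracket in \eqref{eq:pdes2}):

```python
import numpy as np

# For any right-hand side F, u_t = u x F satisfies (u, u_t) = 0 pointwise,
# so d/dt |u|^2 = 2 (u, u_t) = 0 and the flow stays on the unit sphere S^2.
# F below is an arbitrary placeholder, not the actual bracket.

rng = np.random.default_rng(0)
u = rng.normal(size=3)
u /= np.linalg.norm(u)        # a point on the unit sphere S^2
F = rng.normal(size=3)        # arbitrary right-hand side
u_t = np.cross(u, F)          # u_t = u ^ F

print(abs(np.dot(u, u_t)))    # vanishes up to rounding error
```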
For the Schr\"odinger flow equation \eqref{eq:SM} and the higher-dimensional generalization,
almost all results on the existence of solutions have been established
assuming essentially that $(N,J,g)$ is
a compact K\"ahler manifold.
See, e.g., \cite{CSU, KLPST, koiso, McGahagan, NSVZ, SSB} and references therein.
Under the assumption, the classical energy method
combined with geometric analysis works to show the local existence results.
On the other hand, if $(N,J,g)$ is a compact almost hermitian manifold
without the K\"ahler condition,
the classical energy method breaks down, since the so-called loss of derivatives
occurs from the covariant derivative of the almost complex structure.
However, Chihara in \cite{chihara} overcame the difficulty by the geometric energy method
combined with a kind of gauge transformation acting on the pull-back bundle.
Indeed, he established a local existence and uniqueness result for maps from a compact Riemannian manifold into a
compact almost hermitian manifold.
After that, he and the author obtained similar results in \cite{CO, onodera1, onodera3, onodera2}
for a third-order dispersive flow equation
for maps from $\RR$ or $\TT$ into a compact almost hermitian manifold.
\par
In contrast, for our fourth-order
dispersive flow equation \eqref{eq:pde},
we face the difficulty due to loss of derivatives even if
$(N,J,g)$ satisfies the K\"ahler condition,
which is also the case for the $\mathbb{S}^2$-valued physical model \eqref{eq:pdes2}.
If the spatial domain is the real line $\RR$
instead of $\TT$,
the difficulty can be overcome
by making use of the local dispersive smoothing effect of the equation
in some sense.
Besides, there is much room for the solvable structure.
Indeed, in \cite{CO2},
the local existence and the uniqueness of a solution
to the problem on $\RR$
were established and were extended to
compact K\"ahler manifolds as $N$.
Unfortunately, however,
the local smoothing effect is absent in our problem
since the spatial domain $\TT$ is compact.
In other words, the method of the proof in \cite{CO2}
is not applicable to our problem.
Thus the obstruction coming from the loss of derivatives
is expected to be avoided by finding out a kind of
special nice solvable structure of the equation.
\par
The previous studies of \eqref{eq:pde} on $\TT$
are limited as follows:
Guo, Zeng, and Su in \cite{GZS} investigated the $\mathbb{S}^2$-valued
physical model \eqref{eq:pdes2}
with $3a-2b+c=0$ and $\lambda=1$ imposing
an additional assumption $c=0$.
Under the assumption,
\eqref{eq:pdes2}
is completely integrable,
and
they made use of some conservation laws of \eqref{eq:pdes2}
to show the local existence of a weak solution to
the initial value problem,
though the uniqueness was unsolved.
Chihara in \cite{chihara2} investigated
fourth-order dispersive systems for $\mathbb{C}^2$-valued
functions including a system which is
reduced from \eqref{eq:pde} by the generalized Hasimoto transformation,
and pointed out that the assumption that the sectional curvature of $N$
is constant provides the solvable structure of the initial value problem.
To the present author's knowledge,
though these insights seem to grasp the solvable structure
of \eqref{eq:pde}-\eqref{eq:data} essentially,
it is nontrivial whether we can recover the solution to
\eqref{eq:pde}-\eqref{eq:data}
from the solution to the reduced dispersive system.
\par
Motivated by them,
the present author tried to solve \eqref{eq:pde}-\eqref{eq:data} directly,
imposing that the sectional curvature of $N$ is constant,
without using the generalized Hasimoto transformation.
Recently, he in \cite{onodera2} succeeded in showing
the local existence of a unique solution
to the initial value
problem for the $\mathbb{S}^2$-valued model \eqref{eq:pdes2}
without any assumption on $a,b,c,\lambda$ (except for $a\ne 0$),
where $u_0$ is taken so that
$u_{0x}\in H^k(\TT;\RR^3)$ with $k\geqslant 6$.
This is proved by the energy method based on the standard
Sobolev norm for $\RR^3$-valued functions,
combined with a kind of gauge transformation.
\par
The purpose of the present paper is to extend
the results obtained in
\cite{onodera2} for $\mathbb{S}^2$-valued model \eqref{eq:pdes2},
that is, to establish the
time-local existence and uniqueness theorem for
\eqref{eq:pde}-\eqref{eq:data} under the assumption that
$k\geqslant 6$ and the sectional curvature on $(N,g)$ is constant.
More precisely, our main result is stated as follows:
\begin{theorem}
\label{theorem:uniqueness}
Suppose that $(N,J,g)$ is a compact Riemann surface whose sectional curvature is constant.
Let $k$ be an integer satisfying $k\geqslant 6$.
Then for any
$u_0\in C(\mathbb{T};N)$ satisfying
$u_{0x}\in H^k(\TT;TN)$,
there exists $T=T(\|u_{0x}\|_{H^4(\TT;TN)})>0$
such that
\eqref{eq:pde}-\eqref{eq:data}
has a unique solution
$u\in C([-T,T]\times \TT;N)$
satisfying
$u_x\in
C([-T,T];H^{k}(\TT;TN)).
$
\end{theorem}
\underline{Notation.} \
For $\phi:\TT\to N$,
we denote by $\Gamma(\phi^{-1}TN)$ the set of all vector fields along $\phi$.
Let $V\in \Gamma(\phi^{-1}TN)$ and let $m$ be a nonnegative integer.
Then we say $V\in H^m(\TT;TN)$ if
$$
\|V\|_{H^m(\TT;TN)}
:=
\left(
\sum_{\ell=0}^m
\int_{\TT}
g(\nabla_x^{\ell}V(x), \nabla_x^{\ell}V(x))\,dx
\right)^{1/2}
<\infty.
$$
In particular, if $m=0$, we replace $H^0(\TT;TN)$
with $L^2(\TT;TN)$.
\begin{remark}
Precisely speaking, the existence time $T$ of the solution in
Theorem~\ref{theorem:uniqueness}
depends on $a,b,c,\lambda$, and the constant sectional curvature of $(N,g)$
as well as $\|u_{0x}\|_{H^4(\TT;TN)}$.
\end{remark}
\begin{remark}
The local existence of the solution in
Theorem~\ref{theorem:uniqueness} holds if $k\geqslant 4$.
The assumption $k\geqslant 6$ comes from the requirement
to show the uniqueness.
\end{remark}
\begin{remark}
Let $w$ be an isometric embedding of $(N,g)$ into
some Euclidean space $\RR^d$ so that $N$ is considered as a submanifold
of $\RR^d$. By the Gagliardo-Nirenberg inequality,
it is found for $u_0$ in Theorem~\ref{theorem:uniqueness}
that $u_{0x}\in H^k(\TT;TN)$ if and only if
$(w{\circ}u_0)_x\in H^k(\TT;\RR^d)$,
where $H^k(\TT;\RR^d)$ denotes the standard $k$-th order
Sobolev space for $\RR^d$-valued functions on $\TT$.
By the equivalence, Theorem~\ref{theorem:uniqueness}
actually extends the results obtained in \cite{onodera2}.
\end{remark}
\begin{remark}
We can extend Theorem~\ref{theorem:uniqueness} to the case
where $(N,J,g)$ is a compact K\"ahler manifold with non-zero constant sectional curvature.
Indeed,
the argument using \eqref{eq:2d} and \eqref{eq:k1} in the proof
can be replaced by that using \eqref{eq:constsec} if the curvature is not zero.
This seems a little bit artificial and the proof is not so different.
Thus we do not pursue that.
\end{remark}
\begin{remark}
It is unlikely that we can remove the assumption
on the curvature of $(N,g)$ in general.
To see this, let $(N,g)$ be a Riemann surface whose sectional curvature
is not necessarily constant.
In view of \cite[Section~4]{chihara2},
if we can construct a sufficiently smooth
solution $u$ to \eqref{eq:pde}-\eqref{eq:data},
the following necessary condition
\begin{equation}
\int_{\TT}
\frac{\p}{\p x}
\left\{
S(u(t,x))
\right\}
g(u_x(t,x),u_x(t,x))
\,dx
=0
\label{eq:structure11}
\end{equation}
is expected to be satisfied for all existence time,
where
$S(u(t,x))$ denotes the sectional curvature of $(N,g)$
at $u(t,x)\in N$.
This requires at least that the left hand side of \eqref{eq:structure11}
is a conserved quantity in time.
Even if \eqref{eq:structure11} is true, the initial map $u_0$ is
required to satisfy
\begin{equation}
\int_{\TT}
\frac{\p}{\p x}
\left\{
S(u_0(x))
\right\}
g(u_{0x}(x),u_{0x}(x))
\,dx
=0.
\label{eq:structure22}
\end{equation}
On the other hand,
\eqref{eq:structure11} and \eqref{eq:structure22} are
obviously satisfied if
the sectional curvature of $(N,g)$ is constant.
\end{remark}
\par
The idea of the proof of the local existence
comes from the following formal observation.
Suppose that $u$ solves
\eqref{eq:pde}-\eqref{eq:data}.
If $k\geqslant 4$, $\nabla_x^ku_x$ satisfies
\begin{align}
(\nabla_t-a\,J_u\nabla_x^4-c_1\,P_1\nabla_x^2-c_2\,P_2\nabla_x)
\nabla_x^ku_x
&=
\mathcal{O}
\left(
\sum_{m=0}^{k+2}
|\nabla_x^mu_x|_g
\right)
\label{eq:esspde}
\end{align}
where $|\cdot|_g=\left\{g(\cdot,\cdot)\right\}^{1/2}$,
$c_1$ and $c_2$ are real constants depending on
$a,b,c,k$ and the sectional curvature on $(N,g)$,
and $P_1$ and $P_2$ are defined by
\begin{align}
P_1Y
&=
g(Y,u_x)J_uu_x,
\quad
P_2Y
=
g(\nabla_xu_x,u_x)J_uY
\nonumber
\end{align}
for any $Y\in \Gamma(u^{-1}TN)$.
It is found that
\eqref{eq:esspde} leads to
the classical energy estimate
for $\|\nabla_x^ku_x\|_{L^2(\TT;TN)}^2$
with loss of derivatives coming only from
$c_1\,P_1\nabla_x^2$ and $c_2\,P_2\nabla_x$.
Though the right hand side of \eqref{eq:esspde} includes
$\nabla_x^2(\nabla_x^ku_x)$ and $\nabla_x(\nabla_x^ku_x)$,
no loss of derivatives occurs thanks to
the curvature condition and
the K\"ahler condition on $(N,J,g)$.
To eliminate the loss of derivatives coming from
$c_1\,P_1\nabla_x^2$ and $c_2\,P_2\nabla_x$,
we introduce the so-called gauged function $V_k$ defined by
\begin{align}
V_k
&=
\nabla_x^ku_x
-\frac{d_1}{2a}\,
g(\nabla_x^{k-2}u_x,J_uu_x)J_uu_x
+
\frac{d_2}{8a}\,
g(u_x,u_x)\nabla_x^{k-2}u_x,
\label{eq:igauge}
\end{align}
where
$d_1$ and $d_2$ are real constants to be determined later.
Here $V_k$ is formally expressed by
$V_k=(I_d+\Phi_1\nabla_x^{-2}+\Phi_2\nabla_x^{-2})\nabla_x^ku_x$,
where $I_d$ is the identity on $\Gamma(u^{-1}TN)$ and
\begin{align}
\Phi_1Y
&=
-\frac{d_1}{2a}\,
g(Y,J_uu_x)J_uu_x,
\quad
\Phi_2Y
=
\frac{d_2}{8a}\,
g(u_x,u_x)Y
\nonumber
\end{align}
for any $Y\in \Gamma(u^{-1}TN)$.
Noting that
$J_u$ commutes with $\Phi_2$ and not with $\Phi_1$,
we see
\begin{align}
\left[
a\,J_u\nabla_x^4, \Phi_1\nabla_x^{-2}
\right]\nabla_x^ku_x
&=
(d_1\,P_1\nabla_x^2-d_1\,P_2\nabla_x)\nabla_x^ku_x
+\text{harmless terms},
\label{eq:obs1}
\\
\left[
a\,J_u\nabla_x^4, \Phi_2\nabla_x^{-2}
\right]\nabla_x^ku_x
&=
d_2\,P_2\nabla_x\nabla_x^ku_x
+
\text{harmless terms}.
\label{eq:obs2}
\end{align}
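Combining \eqref{eq:esspde} with \eqref{eq:obs1} and \eqref{eq:obs2}, and absorbing all remaining contributions into the harmless part, $V_k$ formally satisfies
\begin{equation*}
(\nabla_t-a\,J_u\nabla_x^4)V_k
=
(c_1-d_1)\,P_1\nabla_x^2\nabla_x^ku_x
+(c_2+d_1-d_2)\,P_2\nabla_x\nabla_x^ku_x
+(\text{harmless terms}),
\end{equation*}
so both loss-of-derivative terms vanish exactly when $d_1=c_1$ and $d_2=c_1+c_2$.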
Therefore, if we set $d_1=c_1$ and $d_2=c_1+c_2$,
the above two commutators eliminate
$c_1\,P_1\nabla_x^2+c_2\,P_2\nabla_x$ in the partial
differential equation satisfied by $V_k$,
and hence the energy estimate for
$\|V_k\|_{L^2(\TT;TN)}^2$ works.
The nice choice of the above gauged function
is inspired by \cite{chihara2}.
\par
The strategy for the proof of the local existence of a solution
is as follows:
First, we construct a family
of fourth-order parabolic regularized solutions
$\left\{u^{\ep}\right\}_{\ep\in (0,1]}$.
Second, we obtain $\ep$-independent uniform estimates for
$\|u_x^{\ep}\|_{H^{k-1}(\TT;TN)}^2+\|V_k^{\ep}\|_{L^2(\TT;TN)}^2$
and the lower bound $T>0$ of
existence time of $\left\{u^{\ep}\right\}_{\ep\in (0,1]}$,
where $V_k^{\ep}$ is defined by \eqref{eq:igauge}
replacing $u$ with $u^{\ep}$.
Finally, the standard compactness argument yields
the existence of
$u\in C([0,T]\times \TT;N)$ such that
$u_x\in L^{\infty}(0,T;H^k(\TT;TN))\cap
C([0,T];H^{k-1}(\TT;TN))$ and $u$ solves
\eqref{eq:pde}-\eqref{eq:data}.
The two commutators \eqref{eq:obs1} and \eqref{eq:obs2}
in the above formal observation
will be generated essentially in the computation of the second
and the third term of the right hand side of
\eqref{eq:TW1}.
One can refer to
\cite{KLPST, koiso, McGahagan}
for computational tools
and
\cite{chihara2, CO, CO2}
for the method of the gauged energy
employed in the proof.
\par
The strategy for the proof of the uniqueness of the solution
is as follows:
Suppose that
$u, v\in C([0,T]\times \TT;N)$
are solutions to \eqref{eq:pde}-\eqref{eq:data}
satisfying
$u_x, v_x\in
L^{\infty}(0,T;H^6(\TT;TN))
\cap
C([0,T];H^{5}(\TT;TN))
$
with the same initial data $u_0$.
Their existence is ensured by the above local existence result.
To estimate the difference between $u$ and $v$,
we regard $u$ and $v$ as functions
with values in some Euclidean space $\RR^d$.
Indeed, letting
$w$ be an isometric embedding of $(N,g)$ into $\RR^d$,
we consider $\RR^d$-valued functions
defined as follows:
\begin{align}
U&:=w{\circ} u,
\quad
V:=w{\circ} v,
\quad
Z:=U-V,
\nonumber
\\
\mathcal{U}&:=dw_u(\nabla_xu_x),
\quad
\mathcal{V}:=dw_v(\nabla_xv_x),
\quad
\WW:=\UU-\VV,
\nonumber
\end{align}
where $dw_p:T_pN\to T_{w{\circ}p}\RR^d\cong \RR^d$
is the differential of $w$ at $p\in N$.
To complete the proof of the uniqueness,
it suffices to show $Z=0$.
First, as shown in \eqref{eq:WWt},
we obtain the classical energy estimate
for $\|Z\|_{L^2}^2+\|Z_x\|_{L^2}^2+\|\WW\|_{L^2}^2$
with the loss of derivatives,
where $\|\cdot\|_{L^2}$ expresses the standard
$L^2$-norm for $\RR^d$-valued functions
on $\TT$.
The loss of derivatives has a form similar to
that eliminated by the method of the gauge transformation
in the proof of the local existence of a solution.
Exploiting this analogy, we can easily find $\TW=\WW+\widetilde{\Lambda}$
as a gauged function of $\WW$ so that
the energy estimate for
$\|Z\|_{L^2}^2+\|Z_x\|_{L^2}^2+\|\TW\|_{L^2}^2$
can be closed. This shows $Z=0$.
The precise form of $\widetilde{\Lambda}$ will be given in
\eqref{eq:e1e2}.
\par
In the proof of the uniqueness,
we face another difficulty,
which does not appear in the proof of the local existence.
On the one hand,
the proof of the local existence is relatively transparent,
thanks to the good matching between
the geometric formulation of \eqref{eq:pde}
and the geometric $L^2$-norm $\|\cdot\|_{L^2(\TT;TN)}$.
On the other hand, the proof of the uniqueness
requires lengthier computations,
due to the poorer matching between the form of the
equation satisfied by $U$ and
the standard $L^2$-norm $\|\cdot\|_{L^2(\TT;\RR^d)}$.
More concretely, the most crucial part of the proof of the uniqueness
is how to derive the energy estimate for $\WW$
of the form \eqref{eq:WWt}.
To derive this, the partial differential equation
satisfied by $\mathcal{W}$
and the energy estimate in $L^2(\TT;\RR^d)$
are required.
However, the analysis of the structure of lower order terms
in the equation becomes complicated,
since many terms related to
the second fundamental form on $N$ and the derivatives
appear to describe the equation satisfied by $U$ or $V$.
As \eqref{eq:pde} is a higher-order equation than
the Schr\"odinger flow equation or the third-order dispersive flow
equation previously studied,
the situation becomes worse.
Fortunately, however,
we can successfully formulate the K\"ahler condition
and the curvature condition on $(N,J,g)$
to be applicable to our problem,
and demonstrate
that only weak loss of derivatives
is allowed to appear in the energy estimate
for $\|\WW\|_{L^2(\TT;\RR^d)}^2$.
In addition, it is to be noted that
we choose $\WW$ rather than $\p_xZ_x$
in the energy estimate.
This choice, as well as that of $\widetilde{\Lambda}$,
plays an important role in our proof
(see, e.g., Lemma~\ref{lemma:nu}).
\par
We remark that
the geometric formulation of \eqref{eq:pde} was originally
proposed in \cite{onodera0}.
Independently, Anco and Myrzakulov in \cite{AM}
derived the equation, named a fourth-order Schr\"odinger map equation,
for $u:\RR\times \RR\to N$
or $u:\RR\times \TT\to N$
of the form
\begin{align}
-u_t
&=J_{u}\nabla_x^3u_x
+\frac{1}{2}
\nabla_x\left\{
g(u_x,u_x)J_{u}u_x
\right\}
-\frac{1}{2}
g(J_{u}u_x,\nabla_xu_x)u_x.
\label{eq:AM}
\end{align}
Interestingly, if $N$ is a Riemann surface,
\eqref{eq:AM} is identical to \eqref{eq:pde}
with $a=-1$, $b=-1$, $c=-1/2$, and $\lambda=0$.
Therefore, we immediately find that
Theorem~\ref{theorem:uniqueness}
is also valid for
the initial value problem for \eqref{eq:AM}.
\par
The organization of the present paper is as follows:
In Section~\ref{section:existence},
a time-local solution to \eqref{eq:pde}-\eqref{eq:data} is constructed.
In Section~\ref{section:proof},
the proof of Theorem~\ref{theorem:uniqueness} is completed.
\section{Proof of the existence of a time-local solution}
\label{section:existence}
This section is devoted to the construction of a time-local
solution to \eqref{eq:pde}-\eqref{eq:data}.
More concretely, the goal of this section is to show the
following.
\begin{theorem}
\label{theorem:existence}
Suppose that the sectional curvature of $(N,g)$ is constant.
Let $k$ be an integer satisfying $k\geqslant 4$.
Then for any
$u_0\in C(\mathbb{T};N)$ satisfying
$u_{0x}\in H^k(\TT;TN)$,
there exists $T=T(\|u_{0x}\|_{H^4(\TT;TN)})>0$
such that
\eqref{eq:pde}-\eqref{eq:data}
has a solution
$u\in C([-T,T]\times \TT;N)$
satisfying
$u_x\in
L^{\infty}(-T,T;H^{k}(\TT;TN))
\cap
C([-T,T];H^{k-1}(\TT;TN)).
$
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{theorem:existence}]
Let $k\geqslant 4$ be fixed.
It suffices to solve the problem in the positive time direction.
We first assume that $u_{0}\in C^{\infty}(\TT;N)$ and construct a
local solution.
\par
As a beginning,
we consider the initial value problem of the form
\begin{alignat}{2}
& u_t
=
(-\ep + a\,J_u)\nabla_x^3u_x
\nonumber
\\
&\quad \quad
+
b\, g(u_x,u_x)J_u\nabla_xu_x
+
c\,g(\nabla_xu_x,u_x)J_uu_x
+\lambda\, J_u\nabla_xu_x
\quad &\text{in}\quad
&(0,\infty){\times} \mathbb{T},
\label{eq:eppde}
\\
& u(0,x)
=
u_0(x)
\quad&\text{in}\quad &\mathbb{T},
\label{eq:epdata}
\end{alignat}
where $\ep\in (0,1]$ is a small positive parameter.
Thanks to the added term
$-\ep\,\nabla_x^3u_x$,
\eqref{eq:eppde} is a fourth-order
quasilinear parabolic system,
and \eqref{eq:eppde}-\eqref{eq:epdata} has a unique local smooth
solution, which we will denote by $u^{\ep}$.
\begin{lemma}
\label{lemma:parabolic}
For each $\ep\in (0,1]$,
there exists a positive constant $T_{\ep}$
depending on $\ep$ and $\|u_{0x}\|_{H^4(\TT;TN)}$
such that
\eqref{eq:eppde}-\eqref{eq:epdata} possesses a
unique solution $u^{\ep}\in C^{\infty}([0,T_{\ep}]\times \TT;N)$.
\end{lemma}
We can show Lemma~\ref{lemma:parabolic} by combining
a sixth-order parabolic regularization with
a geometric classical energy method,
without the constant curvature condition on $(N,g)$.
The proof
essentially follows
that of \cite[Lemma~3.1]{CO2}, replacing $\RR$ with $\TT$
and restricting $N$ to a compact Riemann surface.
Though a slight modification is required in the proof,
the difference is not essential,
and thus we omit the details of the proof.
\par
In the next step,
letting $\left\{u^{\ep}\right\}_{\ep\in(0,1]}$ be a family of solutions
to \eqref{eq:eppde}-\eqref{eq:epdata} constructed in
Lemma~\ref{lemma:parabolic},
we obtain $\ep$-independent
energy estimates for
$\left\{u^{\ep}_x\right\}_{\ep\in(0,1]}$.
Precisely speaking, we obtain a uniform lower bound
$T$ for $\left\{T_{\ep}\right\}_{\ep\in (0,1]}$
and show that $\left\{u^{\ep}_x\right\}_{\ep\in (0,1]}$
is bounded in $L^{\infty}(0,T;H^k(\TT;TN))$.
However, the classical
energy estimate for $\|u^{\ep}_x\|_{H^k(\TT;TN)}$
suffers from a loss of derivatives.
To overcome the difficulty,
we introduce a gauged function $V^{\ep}_k$ defined by
\begin{align}
V^{\ep}_k
&=
\nabla_x^ku_x^{\ep}
+
\Lambda^{\ep}
=
\nabla_x^ku_x^{\ep}
+
\Lambda^{\ep}_1
+
\Lambda^{\ep}_2,
\label{eq:V_m}
\end{align}
where
\begin{align}
\Lambda_1^{\ep}
&=
-\frac{d_1}{2a}\,
g(\nabla_x^{k-2}u_x^{\ep},J_uu_x^{\ep})J_uu_x^{\ep},
\quad
\Lambda_2^{\ep}
=
\frac{d_2}{8a}\,
g(u_x^{\ep},u_x^{\ep})\nabla_x^{k-2}u_x^{\ep},
\nonumber
\end{align}
and $d_1, d_2\in\RR$ are real constants which will be determined later
depending only on $a,b,c,k$ and the constant sectional curvature
of $(N,g)$.
Furthermore, we introduce the associated gauged energy
$N_k(u^{\ep}(t))$ defined by
\begin{equation}
N_k(u^{\ep}(t))
=
\sqrt{
\|u_x^{\ep}(t)\|_{H^{k-1}(\TT;TN)}^2
+
\|V_k^{\ep}(t)\|_{L^2(\TT;TN)}^2
}.
\label{eq:N_k}
\end{equation}
We restrict the time interval to $[0,T_{\ep}^{\star}]$
with $T^{\star}_{\ep}$ defined by
$$
T^{\star}_{\ep}
=
\sup
\left\{
T>0 \ | \
N_4(u^{\ep}(t))\leqslant 2N_4(u_0)
\quad
\text{for all}
\quad
t\in[0,T]
\right\}.
$$
By the Sobolev embedding, we immediately find that
\begin{equation}
\frac{1}{C}N_k(u^{\ep}(t))
\leqslant
\|u_x^{\ep}(t)\|_{H^k(\TT;TN)}
\leqslant
C\,N_k(u^{\ep}(t))
\quad \
\text{for any}
\quad \
t\in [0,T_{\ep}^{\star}],
\label{eq:timecut}
\end{equation}
with $C=C(\|u_{0x}\|_{H^4(\TT;TN)})>1$
being an $\ep$-independent constant.
We shall show that there exists a constant
$T=T(\|u_{0x}\|_{H^4(\TT;TN)})>0$
which is independent of $\ep\in (0,1]$ and $k$
such that $T^{\star}_{\ep}\geqslant T$ uniformly in $\ep\in (0,1]$
and that $\left\{N_k(u^{\ep})\right\}_{\ep\in (0,1]}$
is bounded in $L^{\infty}(0,T)$.
If it is true, this together with \eqref{eq:timecut}
implies that $\left\{u_x^{\ep}\right\}_{\ep\in (0,1]}$ is
bounded in $L^{\infty}(0,T;H^k(\TT;TN))$.
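For instance, the right inequality in \eqref{eq:timecut} can be seen as follows (a sketch; the left one is obtained similarly). Since $\nabla_x^ku_x^{\ep}=V_k^{\ep}-\Lambda^{\ep}$ and $\Lambda^{\ep}=\mathcal{O}(|u_x^{\ep}|_g^2\,|\nabla_x^{k-2}u_x^{\ep}|_g)$, we have
\begin{align}
\|u_x^{\ep}(t)\|_{H^k(\TT;TN)}
&\leqslant
\|u_x^{\ep}(t)\|_{H^{k-1}(\TT;TN)}
+
\|V_k^{\ep}(t)\|_{L^2(\TT;TN)}
+
\|\Lambda^{\ep}(t)\|_{L^2(\TT;TN)}
\nonumber
\\
&\leqslant
\|u_x^{\ep}(t)\|_{H^{k-1}(\TT;TN)}
+
\|V_k^{\ep}(t)\|_{L^2(\TT;TN)}
+
C\,\|u_x^{\ep}(t)\|_{L^{\infty}}^2
\|u_x^{\ep}(t)\|_{H^{k-2}(\TT;TN)}
\nonumber
\\
&\leqslant
C\,N_k(u^{\ep}(t))
\nonumber
\end{align}
for $t\in [0,T_{\ep}^{\star}]$, since the Sobolev embedding and the definition of $T_{\ep}^{\star}$ give
$\|u_x^{\ep}(t)\|_{L^{\infty}}
\leqslant
C\|u_x^{\ep}(t)\|_{H^1(\TT;TN)}
\leqslant
C\,N_4(u^{\ep}(t))
\leqslant
2C\,N_4(u_0)$.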
\par
With this in mind,
let us focus on the uniform energy
estimate for $\left\{N_k(u^{\ep})\right\}_{\ep\in (0,1]}$.
We set $u=u^{\ep}$, $V_k=V_k^{\ep}$,
$\Lambda=\Lambda^{\ep}$,
$\Lambda_1=\Lambda_1^{\ep}$, $\Lambda_2=\Lambda_2^{\ep}$,
$\|\cdot\|_{H^0(\TT;TN)}=\|\cdot\|_{L^2(\TT;TN)}=\|\cdot\|_{L^2}$,
$\|\cdot\|_{H^m(\TT;TN)}=\|\cdot\|_{H^m}$ for $m=1,\ldots,k$,
and
$\sqrt{g(\cdot,\cdot)}=|\cdot|_g$,
for ease of notation.
Since $g$ is a hermitian metric,
$g(J_uY_1,J_uY_2)=g(Y_1,Y_2)$ holds for any
$Y_1,Y_2\in\Gamma(u^{-1}TN)$.
Since Riemann surfaces with hermitian metric are K\"ahler manifolds,
$\nabla_xJ_u=J_u\nabla_x$ and $\nabla_tJ_u=J_u\nabla_t$ hold.
We denote the sectional curvature of $(N,g)$ by $S$ which is constant.
Any positive constant which depends on
$a$, $b$, $c$, $\lambda$, $k$, $S$,
$\|u_{0x}\|_{H^4}$
and not on $\ep\in (0,1]$ will be denoted by the same $C$.
Note that $k\geqslant 4$ and the Sobolev embedding
$H^1(\TT)\subset C(\TT)$
yield
$\|\nabla_x^4u_x\|_{L^{\infty}(0,T_{\ep}^{\star};L^2)}\leqslant C$
and $\|\nabla_x^mu_x\|_{L^{\infty}((0,T_{\ep}^{\star})\times \TT)}\leqslant C$
for $m=0,1,\ldots,3$.
These properties will be used without any comment in this section.
\par
We now investigate the energy estimate for $\|V_k\|_{L^2}^2$.
It follows that
\begin{align}
\frac{1}{2}\frac{d}{dt}
\|V_k\|_{L^2}^2
&=
\int_{\TT}
g(\nabla_tV_k,V_k)
dx
\nonumber
\\
&=
\int_{\TT}
g(\nabla_t(\nabla_x^ku_x),V_k)
dx
+
\int_{\TT}
g(\nabla_t\Lambda,V_k)
dx
\nonumber
\\
&=
\int_{\TT}
g(\nabla_t(\nabla_x^ku_x),\nabla_x^ku_x)
dx
+
\int_{\TT}
g(\nabla_t(\nabla_x^ku_x),\Lambda)
dx
+
\int_{\TT}
g(\nabla_t\Lambda,V_k)
dx.
\label{eq:eqV_k}
\end{align}
To evaluate the right hand side
(denoted by RHS hereafter for short) of \eqref{eq:eqV_k},
we compute the partial differential
equation satisfied by $\nabla_x^ku_x$.
Recalling that $\nabla_xu_t=\nabla_tu_x$ and
$(\nabla_x\nabla_t-\nabla_t\nabla_x)Y=R(u_x,u_t)Y$
for any $Y\in \Gamma(u^{-1}TN)$ where
$R=R(\cdot,\cdot)$ denotes the Riemann curvature tensor on $(N,g)$,
we have
\begin{align}
\nabla_t(\nabla_x^ku_x)
&=
\nabla_x^{k+1}u_t
+
\sum_{m=0}^{k-1}
\nabla_x^{k-1-m}
\left\{
R(u_t,u_x)\nabla_x^mu_x
\right\}
=:
\nabla_x^{k+1}u_t
+Q
.
\label{eq:na1}
\end{align}
First, we use \eqref{eq:eppde} to compute the second term of the RHS of the
above,
which becomes
\begin{align}
Q&=
-\ep\,
\sum_{m=0}^{k-1}\nabla_x^{k-1-m}
\left\{
R(\nabla_x^3u_x,u_x)\nabla_x^mu_x
\right\}
\nonumber
\\
&\quad
+a\,
\sum_{m=0}^{k-1}\nabla_x^{k-1-m}
\left\{
R(J_u\nabla_x^3u_x,u_x)\nabla_x^mu_x
\right\}
\nonumber
\\
&\quad
+\lambda\,
\sum_{m=0}^{k-1}\nabla_x^{k-1-m}
\left\{
R(J_u\nabla_xu_x,u_x)\nabla_x^mu_x
\right\}
\nonumber
\\
&\quad
+b\,
\sum_{m=0}^{k-1}\nabla_x^{k-1-m}
\left\{
g(u_x,u_x)R(J_u\nabla_xu_x,u_x)\nabla_x^mu_x
\right\}
\nonumber
\\
&\quad
+c\,
\sum_{m=0}^{k-1}\nabla_x^{k-1-m}
\left\{
g(\nabla_xu_x,u_x)R(J_uu_x,u_x)\nabla_x^mu_x
\right\}.
\nonumber
\end{align}
Thus, by using the Sobolev embedding
and the Gagliardo-Nirenberg inequality,
we obtain
\begin{align}
Q&=
\ep\,
\mathcal{O}
(|\nabla_x^{k+2}u_x|_g)
+a\,Q_0
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right),
\label{eq:na2}
\end{align}
where
$$
Q_0
=
\sum_{m=0}^{k-1}\nabla_x^{k-1-m}
\left\{
R(J_u\nabla_x^3u_x,u_x)\nabla_x^mu_x
\right\}.
$$
Since $S$ is the constant sectional curvature of $(N,g)$,
\begin{equation}
R(Y_1,Y_2)Y_3
=S
\left\{
g(Y_2,Y_3)Y_1
-g(Y_1,Y_3)Y_2
\right\}
\label{eq:constsec}
\end{equation}
holds for any $Y_1,Y_2,Y_3\in \Gamma(u^{-1}TN)$.
Using this formula, $Q_0$ is expressed as follows.
\begin{align}
Q_0
&=
S\,\sum_{m=0}^{k-1}
\nabla_x^{k-1-m}
\left\{
g(\nabla_x^mu_x,u_x)J_u\nabla_x^3u_x
-g(\nabla_x^mu_x, J_u\nabla_x^3u_x)u_x
\right\}
\nonumber
\\
&=S\,(Q_{0,1}+Q_{0,2}+Q_{0,3}),
\label{eq:Q0}
\end{align}
where
\begin{align}
Q_{0,1}
&=
\nabla_x^{k-1}
\left\{
g(u_x,u_x)J_u\nabla_x^3u_x
-g(u_x,J_u\nabla_x^3u_x)u_x
\right\},
\nonumber
\\
Q_{0,2}
&=
\nabla_x^{k-2}
\left\{
g(\nabla_xu_x,u_x)J_u\nabla_x^3u_x
-g(\nabla_xu_x,J_u\nabla_x^3u_x)u_x
\right\},
\nonumber
\\
Q_{0,3}
&=
\sum_{m=2}^{k-1}
\nabla_x^{k-1-m}
\left\{
g(\nabla_x^mu_x,u_x)J_u\nabla_x^3u_x
-g(\nabla_x^mu_x, J_u\nabla_x^3u_x)u_x
\right\}.
\nonumber
\end{align}
For $Q_{0,1}$,
the product formula implies
\begin{align}
Q_{0,1}
&=
\sum_{\mu+\nu=0}^{k-1}
\frac{(k-1)!}{\mu!\nu!(k-1-\mu-\nu)!}
\,
g(\nabla_x^{\mu}u_x, \nabla_x^{\nu}u_x)J_u\nabla_x^{k+2-\mu-\nu}u_x
\nonumber
\\
&\quad
-
\sum_{\mu+\nu=0}^{k-1}
\frac{(k-1)!}{\mu!\nu!(k-1-\mu-\nu)!}
\,
g(\nabla_x^{\mu}u_x, J_u\nabla_x^{\nu+3}u_x)\nabla_x^{k-1-\mu-\nu}u_x
\nonumber
\\
&=
g(u_x,u_x)J_u\nabla_x^{k+2}u_x
+
2(k-1)g(\nabla_xu_x,u_x)J_u\nabla_x^{k+1}u_x
-g(u_x,J_u\nabla_x^{k+2}u_x)u_x
\nonumber
\\
&\quad
-(k-1)g(\nabla_xu_x,J_u\nabla_x^{k+1}u_x)u_x
-(k-1)g(u_x,J_u\nabla_x^{k+1}u_x)\nabla_xu_x
\nonumber
\\
&\quad
+
\sum_{\mu+\nu=2}^{k-1}
\frac{(k-1)!}{\mu!\nu!(k-1-\mu-\nu)!}
\,
g(\nabla_x^{\mu}u_x, \nabla_x^{\nu}u_x)J_u\nabla_x^{k+2-\mu-\nu}u_x
\nonumber
\\
&\quad
-
\sum_{\substack{\mu+\nu=0,\\ \nu\leqslant k-3}}^{k-1}
\frac{(k-1)!}{\mu!\nu!(k-1-\mu-\nu)!}
\,
g(\nabla_x^{\mu}u_x, J_u\nabla_x^{\nu+3}u_x)\nabla_x^{k-1-\mu-\nu}u_x
\nonumber
\\
&=
g(u_x,u_x)J_u\nabla_x^{k+2}u_x
+
2(k-1)g(\nabla_xu_x,u_x)J_u\nabla_x^{k+1}u_x
\nonumber
\\
&\quad
-g(u_x,J_u\nabla_x^{k+2}u_x)u_x
-(k-1)g(\nabla_xu_x,J_u\nabla_x^{k+1}u_x)u_x
\nonumber
\\
&\quad
-(k-1)g(u_x,J_u\nabla_x^{k+1}u_x)\nabla_xu_x
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:na3}
\end{align}
Here it is to be emphasized that
\begin{align}
g(Y, u_x)u_x+g(Y, J_uu_x)J_uu_x&=g(u_x,u_x)Y
\label{eq:2d}
\end{align}
holds for any $Y\in \Gamma(u^{-1}TN)$,
since $N$ is a two-dimensional real manifold.
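Indeed, \eqref{eq:2d} can be checked directly: at any point where $u_x\neq 0$, the pair $\{u_x,J_uu_x\}$ forms a $g$-orthogonal basis of the real two-dimensional space $T_uN$ with $g(J_uu_x,J_uu_x)=g(u_x,u_x)$, and hence
\begin{align}
g(u_x,u_x)Y
&=
g(u_x,u_x)
\left\{
\frac{g(Y,u_x)}{g(u_x,u_x)}\,u_x
+
\frac{g(Y,J_uu_x)}{g(J_uu_x,J_uu_x)}\,J_uu_x
\right\}
\nonumber
\\
&=
g(Y,u_x)u_x+g(Y,J_uu_x)J_uu_x,
\nonumber
\end{align}
while both sides of \eqref{eq:2d} vanish at any point where $u_x=0$.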
Using \eqref{eq:2d} with $Y=J_u\nabla_x^{k+2}u_x$, we rewrite
the third term of the RHS of
\eqref{eq:na3}
to have
\begin{align}
-g(u_x,J_u\nabla_x^{k+2}u_x)u_x
&=
-g(u_x,u_x)J_u\nabla_x^{k+2}u_x
+g(J_uu_x,J_u\nabla_x^{k+2}u_x)J_uu_x
\nonumber
\\
&=
-g(u_x,u_x)J_u\nabla_x^{k+2}u_x
+g(\nabla_x^{k+2}u_x,u_x)J_uu_x.
\label{eq:na4}
\end{align}
Substituting \eqref{eq:na4} into the RHS of \eqref{eq:na3}, we obtain
\begin{align}
Q_{0,1}
&=
2(k-1)g(\nabla_xu_x,u_x)J_u\nabla_x^{k+1}u_x
+g(\nabla_x^{k+2}u_x,u_x)J_uu_x
\nonumber
\\
&\quad
+(k-1)g(\nabla_x^{k+1}u_x, J_u\nabla_xu_x)u_x
+(k-1)g(\nabla_x^{k+1}u_x, J_uu_x)\nabla_xu_x
\nonumber
\\
&\quad
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:Q01}
\end{align}
For $Q_{0,2}$, in the same way as that for $Q_{0,1}$,
we deduce
\begin{align}
Q_{0,2}
&=
\sum_{\mu+\nu=0}^{k-2}
\frac{(k-2)!}
{\mu!\nu!(k-2-\mu-\nu)!}
\,
g(\nabla_x^{\mu+1}u_x, \nabla_x^{\nu}u_x)J_u\nabla_x^{k+1-\mu-\nu}u_x
\nonumber
\\
&\quad
-\sum_{\mu+\nu=0}^{k-2}
\frac{(k-2)!}
{\mu!\nu!(k-2-\mu-\nu)!}
\,
g(\nabla_x^{\mu+1}u_x, J_u\nabla_x^{\nu+3}u_x)\nabla_x^{k-2-\mu-\nu}u_x
\nonumber
\\
&=
g(\nabla_xu_x,u_x)J_u\nabla_x^{k+1}u_x
-g(\nabla_xu_x, J_u\nabla_x^{k+1}u_x)u_x
\nonumber
\\
&\quad
+
\sum_{\mu+\nu=1}^{k-2}
\frac{(k-2)!}
{\mu!\nu!(k-2-\mu-\nu)!}
\,
g(\nabla_x^{\mu+1}u_x, \nabla_x^{\nu}u_x)J_u\nabla_x^{k+1-\mu-\nu}u_x
\nonumber
\\
&\quad
-\sum_{\substack{\mu+\nu=0,\\ \nu\leqslant k-3}}^{k-2}
\frac{(k-2)!}
{\mu!\nu!(k-2-\mu-\nu)!}
\,
g(\nabla_x^{\mu+1}u_x, J_u\nabla_x^{\nu+3}u_x)\nabla_x^{k-2-\mu-\nu}u_x
\nonumber
\\
&=
g(\nabla_xu_x,u_x)J_u\nabla_x^{k+1}u_x
+g(\nabla_x^{k+1}u_x, J_u\nabla_xu_x)u_x
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:Q02}
\end{align}
For $Q_{0,3}$, the Sobolev embedding and
the Gagliardo-Nirenberg inequality imply
\begin{align}
Q_{0,3}
&=\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:Q03}
\end{align}
Collecting
\eqref{eq:na2},
\eqref{eq:Q0},
\eqref{eq:Q01},
\eqref{eq:Q02},
and
\eqref{eq:Q03},
we obtain
\begin{align}
Q&=
\ep\,
\mathcal{O}
\left(
|\nabla_x^{k+2}u_x|_g
\right)
+aS\,
g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x
\nonumber
\\
&\quad
+aS(2k-1)\,g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
+aSk\,g(\nabla_x(\nabla_x^ku_x),J_u\nabla_xu_x)u_x
\nonumber
\\
&\quad
+
aS(k-1)\,
g(\nabla_x(\nabla_x^ku_x), J_uu_x)\nabla_xu_x
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:nab_2}
\end{align}
Second, we use \eqref{eq:eppde} to compute the first term of the RHS of
\eqref{eq:na1}.
A simple computation shows
\begin{align}
\nabla_x^{k+1}u_t
&=
-\ep\nabla_x^4(\nabla_x^ku_x)
+a\,J_u\nabla_x^4(\nabla_x^ku_x)
+\lambda\,J_u\nabla_x^2(\nabla_x^ku_x)
+b\,Q_{1,1}+c\,Q_{1,2},
\label{eq:nab_1}
\end{align}
where
\begin{align}
Q_{1,1}
&=
\nabla_x^{k+1}\left\{
g(u_x,u_x)J_u\nabla_xu_x
\right\}
\nonumber
\\
&=
\sum_{\mu+\nu=0}^{k+1}
\frac{(k+1)!}{\mu!\nu!(k+1-\mu-\nu)!}
\,
g(\nabla_x^{\mu}u_x,\nabla_x^{\nu}u_x)J_u\nabla_x^{k+2-\mu-\nu}u_x
\nonumber
\\
&=
g(u_x,u_x)J_u\nabla_x^{k+2}u_x
+2(k+1)g(\nabla_xu_x,u_x)J_u\nabla_x^{k+1}u_x
\nonumber
\\
&\quad
+2g(\nabla_x^{k+1}u_x,u_x)J_u\nabla_xu_x
\nonumber
\\
&\quad
+\sum_{\substack{\mu+\nu=2, \\ \mu,\nu\leqslant k}}^{k+1}
\frac{(k+1)!}{\mu!\nu!(k+1-\mu-\nu)!}
\,
g(\nabla_x^{\mu}u_x,\nabla_x^{\nu}u_x)J_u\nabla_x^{k+2-\mu-\nu}u_x
\nonumber
\\
&=
\nabla_x\left\{
g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\right\}
+2k\,
g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\nonumber
\\
&\quad
+2\,g(\nabla_x(\nabla_x^ku_x), u_x)J_u\nabla_xu_x
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right),
\label{eq:Q11}
\intertext{and}
Q_{1,2}
&=
\nabla_x^{k+1}
\left\{
g(\nabla_xu_x,u_x)J_uu_x
\right\}
\nonumber
\\
&=
\sum_{\mu+\nu=0}^{k+1}
\frac{(k+1)!}{\mu!\nu!(k+1-\mu-\nu)!}
\,g(\nabla_x^{\mu+1}u_x, \nabla_x^{\nu}u_x)J_u\nabla_x^{k+1-\mu-\nu}u_x
\nonumber
\\
&=
g(\nabla_xu_x,u_x)J_u\nabla_x^{k+1}u_x
+g(\nabla_x^{k+2}u_x,u_x)J_uu_x
\nonumber
\\
&\quad
+(k+1)g(\nabla_x^{k+1}u_x,\nabla_xu_x)J_uu_x
+
g(\nabla_xu_x,\nabla_x^{k+1}u_x)J_uu_x
\nonumber
\\
&\quad
+
(k+1)g(\nabla_x^{k+1}u_x,u_x)J_u\nabla_xu_x
\nonumber
\\
&\quad
+
\sum_{\substack{\mu+\nu=1, \\ \mu\leqslant k-1, \\ \nu\leqslant k}}^{k+1}
\frac{(k+1)!}{\mu!\nu!(k+1-\mu-\nu)!}
\,g(\nabla_x^{\mu+1}u_x, \nabla_x^{\nu}u_x)J_u\nabla_x^{k+1-\mu-\nu}u_x
\nonumber
\\
&=
g(\nabla_x^2(\nabla_x^ku_x),u_x)J_uu_x
+g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\nonumber
\\
&\quad
+(k+2)g(\nabla_x(\nabla_x^ku_x),\nabla_xu_x)J_uu_x
+(k+1)g(\nabla_x(\nabla_x^ku_x),u_x)J_u\nabla_xu_x
\nonumber
\\
&\quad
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:Q12}
\end{align}
By collecting \eqref{eq:nab_2} and \eqref{eq:nab_1} with
\eqref{eq:Q11} and with \eqref{eq:Q12}, we have
\begin{align}
\nabla_t(\nabla_x^ku_x)
&=
-\ep\nabla_x^4(\nabla_x^ku_x)
+
\ep\,
\mathcal{O}
\left(
|\nabla_x^{k+2}u_x|_g
\right)
\nonumber
\\
&\quad
+a\,J_u\nabla_x^4(\nabla_x^ku_x)
+\lambda\,J_u\nabla_x^2(\nabla_x^ku_x)
+b\,\nabla_x
\left\{
g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\right\}
\nonumber
\\
&\quad
+(aS+c)\,g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x
\nonumber
\\
&\quad
+\left\{
aS(2k-1)+2kb+c
\right\}
\,g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\nonumber
\\
&\quad +
\left\{
2b+(k+1)c
\right\}\,
g(\nabla_x(\nabla_x^ku_x),u_x)J_u\nabla_xu_x
\nonumber
\\
&\quad
+
(k+2)c\,
g(\nabla_x(\nabla_x^ku_x),\nabla_xu_x)J_uu_x
\nonumber
\\
&\quad
+aSk\,g(\nabla_x(\nabla_x^ku_x),J_u\nabla_xu_x)u_x
\nonumber
\\
&\quad
+aS(k-1)\,g(\nabla_x(\nabla_x^ku_x), J_uu_x)\nabla_xu_x
\nonumber
\\
&\quad
+\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:maya}
\end{align}
\par
Furthermore, we modify the expression of some terms including
$\nabla_x(\nabla_x^ku_x)$
to detect their essential structure.
Let $Y\in \Gamma(u^{-1}TN)$ be fixed.
We first use \eqref{eq:2d} to see
\begin{align}
g(u_x,u_x)J_uY
&=
g(J_uY,u_x)u_x+g(J_uY,J_uu_x)J_uu_x
\nonumber
\\
&
=
g(Y,u_x)J_uu_x-g(Y,J_uu_x)u_x.
\nonumber
\end{align}
Applying $\nabla_x$ to both sides of the above,
we have
\begin{align}
2\,g(\nabla_xu_x,u_x)J_uY
&=
g(Y,\nabla_xu_x)J_uu_x
+
g(Y,u_x)J_u\nabla_xu_x
\nonumber
\\
&\quad
-g(Y,J_u\nabla_xu_x)u_x
-g(Y,J_uu_x)\nabla_xu_x.
\label{eq:maya2}
\end{align}
We next introduce the following expression:
\begin{align}
A_1Y
&=
g(Y,\nabla_xu_x)J_uu_x
+
g(Y,u_x)J_u\nabla_xu_x
\nonumber
\\
&\quad
+g(Y,J_u\nabla_xu_x)u_x
+g(Y,J_uu_x)\nabla_xu_x,
\nonumber
\\
A_2Y
&=
g(Y,J_uu_x)\nabla_xu_x
-g(Y,J_u\nabla_xu_x)u_x.
\nonumber
\end{align}
We find ${}^tA_1=A_1$ and ${}^tA_2=A_2$ in $T_uN$.
More precisely we can show the following.
\begin{proposition}
Let $Y_1,Y_2\in \Gamma(u^{-1}TN)$.
Then
\begin{align}
g(A_iY_1,Y_2)&=g(Y_1,A_iY_2)
\label{eq:gsym}
\end{align}
holds for each $(t,x)\in [0,T_{\ep}^{\star}]\times \TT$ with $i=1,2$.
\label{proposition:gsym}
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{proposition:gsym}]
If $i=1$, \eqref{eq:gsym} immediately follows from the definition
of $A_1$.
If $i=2$, \eqref{eq:gsym} follows from
\begin{equation}
\left\{g(u_x,u_x)\right\}^2
\left\{
g(A_2Y_1,Y_2)-g(Y_1,A_2Y_2)
\right\}=0,
\label{eq:gsym2}
\end{equation}
since both sides of \eqref{eq:gsym} vanish at any point $(t,x)$
where $u_x(t,x)=0$.
Indeed we can show \eqref{eq:gsym2} by the following computations.
We first write
\begin{align}
g(u_x,u_x)A_2Y_1
&=
g(u_x,u_x)
\left\{
g(Y_1,J_uu_x)\nabla_xu_x
-g(Y_1,J_u\nabla_xu_x)u_x
\right\}
\nonumber
\\
&=
g(g(u_x,u_x)Y_1,J_uu_x)\nabla_xu_x
-g(g(u_x,u_x)Y_1,J_u\nabla_xu_x)u_x,
\nonumber
\end{align}
and we use \eqref{eq:2d} with $Y=Y_1$ to see
\begin{align}
g(u_x,u_x)A_2Y_1
&=
g(g(Y_1,u_x)u_x+g(Y_1,J_uu_x)J_uu_x,J_uu_x)\nabla_xu_x
\nonumber
\\
&\quad
-g(g(Y_1,u_x)u_x+g(Y_1,J_uu_x)J_uu_x,J_u\nabla_xu_x)u_x
\nonumber
\\
&=
g(u_x,u_x)g(Y_1,J_uu_x)\nabla_xu_x
-g(u_x,J_u\nabla_xu_x)g(Y_1,u_x)u_x
\nonumber
\\
&\quad
-g(u_x,\nabla_xu_x)g(Y_1,J_uu_x)u_x.
\nonumber
\end{align}
This implies
\begin{align}
\left\{g(u_x,u_x)\right\}^2g(A_2Y_1,Y_2)
&=
g(g(u_x,u_x)A_2Y_1, g(u_x,u_x)Y_2)
\nonumber
\\
&=
g(u_x,u_x)g(Y_1,J_uu_x)g(\nabla_xu_x,g(u_x,u_x)Y_2)
\nonumber
\\
&\quad
-g(u_x,J_u\nabla_xu_x)g(Y_1,u_x)g(u_x,g(u_x,u_x)Y_2)
\nonumber
\\
&\quad
-g(u_x,\nabla_xu_x)g(Y_1,J_uu_x)g(u_x,g(u_x,u_x)Y_2).
\label{eq:iri1}
\end{align}
Using \eqref{eq:2d} again with $Y=Y_2$, we see
\begin{align}
g(u_x,g(u_x,u_x)Y_2)
&=
g(u_x,u_x)g(Y_2,u_x),
\nonumber
\\
g(\nabla_xu_x,g(u_x,u_x)Y_2)
&=
g(\nabla_xu_x,u_x)g(Y_2,u_x)
+
g(\nabla_xu_x,J_uu_x)g(Y_2,J_uu_x).
\nonumber
\end{align}
Substituting them into \eqref{eq:iri1}, we have
\begin{align}
&\left\{g(u_x,u_x)\right\}^2
g(A_2Y_1,Y_2)
\nonumber
\\
&=
g(u_x,u_x)g(\nabla_xu_x,J_uu_x)
\left\{
g(Y_1,J_uu_x)g(Y_2,J_uu_x)
+
g(Y_1,u_x)g(Y_2,u_x)
\right\}.
\nonumber
\end{align}
As the form of the RHS is symmetric with respect to
$Y_1$ and $Y_2$, we immediately conclude that
the desired property \eqref{eq:gsym2} holds.
\end{proof}
Using \eqref{eq:maya2} and the definition of $A_1$ and $A_2$,
we have
\begin{align}
&g(Y,J_uu_x)\nabla_xu_x
\nonumber
\\
&=
\frac{1}{4}
\biggl\{
g(Y,\nabla_xu_x)J_uu_x
+
g(Y,u_x)J_u\nabla_xu_x
+g(Y,J_u\nabla_xu_x)u_x
+g(Y,J_uu_x)\nabla_xu_x
\biggr\}
\nonumber
\\
&\quad
-\frac{1}{4}
\biggl\{
g(Y,\nabla_xu_x)J_uu_x
+
g(Y,u_x)J_u\nabla_xu_x
-g(Y,J_u\nabla_xu_x)u_x
-g(Y,J_uu_x)\nabla_xu_x
\biggr\}
\nonumber
\\
&\quad
+\frac{1}{2}
\biggl\{
g(Y,J_uu_x)\nabla_xu_x
-g(Y,J_u\nabla_xu_x)u_x
\biggr\}
\nonumber
\\
&=
-\frac{1}{2}\,
g(\nabla_xu_x,u_x)J_uY
+\frac{1}{4}A_1Y
+\frac{1}{2}A_2Y.
\label{eq:ayaA}
\end{align}
In the same way, we have
\begin{align}
g(Y,J_u\nabla_xu_x)u_x
&=
-\frac{1}{2}\,
g(\nabla_xu_x,u_x)J_uY
+\frac{1}{4}A_1Y
-\frac{1}{2}A_2Y.
\label{eq:ayaB}
\end{align}
Using ${}^{t}J_u=-J_u$ in $T_uN$, \eqref{eq:gsym}, and \eqref{eq:ayaB},
we deduce
\begin{align}
g(Y,u_x)J_u\nabla_xu_x
&=
{}^{t}\left(
g(\cdot,J_u\nabla_xu_x)u_x
\right)Y
\nonumber
\\
&=
-\frac{1}{2}\,
g(\nabla_xu_x,u_x)\,{}^{t}J_uY
+\frac{1}{4}\,{}^tA_1Y
-\frac{1}{2}\,{}^tA_2Y
\nonumber
\\
&=
\frac{1}{2}\,
g(\nabla_xu_x,u_x)J_uY
+\frac{1}{4}A_1Y
-\frac{1}{2}A_2Y,
\label{eq:ayaC}
\end{align}
and
\begin{align}
g(Y,\nabla_xu_x)J_uu_x
&=
{}^{t}\left(
g(\cdot,J_uu_x)\nabla_xu_x
\right)Y
\nonumber
\\
&=
\frac{1}{2}\,
g(\nabla_xu_x,u_x)J_uY
+\frac{1}{4}A_1Y
+\frac{1}{2}A_2Y.
\label{eq:ayaD}
\end{align}
Applying
\eqref{eq:ayaA},
\eqref{eq:ayaB},
\eqref{eq:ayaC},
and \eqref{eq:ayaD}
to the RHS of \eqref{eq:maya},
we derive
\begin{align}
\nabla_t(\nabla_x^ku_x)
&=
-\ep\nabla_x^4(\nabla_x^ku_x)
+
\ep\,
\mathcal{O}
\left(
|\nabla_x^{k+2}u_x|_g
\right)
\nonumber
\\
&\quad
+a\,J_u\nabla_x^4(\nabla_x^ku_x)
+\lambda\,J_u\nabla_x^2(\nabla_x^ku_x)
+b\,\nabla_x
\left\{
g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\right\}
\nonumber
\\
&\quad
+c_1\,g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x
+c_2
\,g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\nonumber
\\
&\quad
+c_3\,A_1\nabla_x(\nabla_x^ku_x)
+c_4\,A_2\nabla_x(\nabla_x^ku_x)
\nonumber
\\
&\quad
+\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right),
\label{eq:ayaya}
\end{align}
where $c_1,\ldots,c_4$ are constants given by
$a,b,c$ and $S$.
More concretely,
\begin{align}
c_1&=aS+c,
\label{eq:c1}
\\
c_2&=\left\{
aS(2k-1)+2kb+c
\right\}
+
\frac{1}{2}
\left\{
2b+(k+1)c
+(k+2)c-aSk-aS(k-1)
\right\}
\nonumber
\\
&=
\left(k-\frac{1}{2}\right)aS
+(2k+1)b
+\left(k+\frac{5}{2}\right)c.
\label{eq:c2}
\end{align}
We omit the explicit forms of $c_3$ and $c_4$,
as they will not be used later.
\par
We are now in a position to evaluate the first term of
the RHS of \eqref{eq:eqV_k}.
Using \eqref{eq:ayaya},
we have
\begin{align}
&\int_{\TT}
g(\nabla_t(\nabla_x^ku_x), \nabla_x^ku_x)dx
\nonumber
\\
&=
-\ep
\int_{\TT}
g(\nabla_x^4(\nabla_x^ku_x), \nabla_x^ku_x)dx
+
\ep\,
\int_{\TT}
g(
\mathcal{O}
\left(
|\nabla_x^{k+2}u_x|_g
\right),
\nabla_x^ku_x)dx
\nonumber
\\
&\quad
+a\,
\int_{\TT}
g(J_u\nabla_x^4(\nabla_x^ku_x), \nabla_x^ku_x)dx
+\lambda\,
\int_{\TT}
g(J_u\nabla_x^2(\nabla_x^ku_x), \nabla_x^ku_x)dx
\nonumber
\\
&\quad
+b\,
\int_{\TT}
g(\nabla_x
\left\{
g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\right\},
\nabla_x^ku_x)dx
\nonumber
\\
&\quad
+c_1\,
\int_{\TT}
g(
g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x,
\nabla_x^ku_x)dx
\nonumber
\\
&\quad
+c_2
\,\int_{\TT}
g(
g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x)dx
\nonumber
\\
&\quad
+c_3\,\int_{\TT}
g(A_1\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)dx
+c_4\,\int_{\TT}
g(A_2\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)dx
\nonumber
\\
&\quad
+\int_{\TT}
g(\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right),
\nabla_x^ku_x)dx.
\nonumber
\end{align}
We compute each term of the above separately.
By integrating by parts, we obtain
\begin{align}
&
a\,
\int_{\TT}
g(J_u\nabla_x^4(\nabla_x^ku_x), \nabla_x^ku_x)dx
=
a\,
\int_{\TT}
g(J_u\nabla_x^2(\nabla_x^ku_x), \nabla_x^2(\nabla_x^ku_x))dx
=0,
\nonumber
\\
&\lambda\,
\int_{\TT}
g(J_u\nabla_x^2(\nabla_x^ku_x), \nabla_x^ku_x)dx
=
-\lambda\,
\int_{\TT}
g(J_u\nabla_x(\nabla_x^ku_x), \nabla_x(\nabla_x^ku_x))dx
=0,
\nonumber
\\
&b\,
\int_{\TT}
g(\nabla_x
\left\{
g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\right\},
\nabla_x^ku_x)dx
\nonumber
\\
&
=
-b\,
\int_{\TT}
g(
g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x),
\nabla_x(\nabla_x^ku_x))dx
=0.
\nonumber
\end{align}
By using the Cauchy-Schwarz inequality,
we have
\begin{align}
\int_{\TT}
g(\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right),
\nabla_x^ku_x)dx
\leqslant
C\|u_x\|_{H^k}\|\nabla_x^ku_x\|_{L^2}
\leqslant
C\|u_x\|_{H^k}^2.
\nonumber
\end{align}
Using integration by parts, the Young inequality
$AB\leqslant A^2/2+B^2/2$ for any $A,B\geqslant 0$,
and $\ep\leqslant 1$,
we deduce
\begin{align}
&-\ep
\int_{\TT}
g(\nabla_x^4(\nabla_x^ku_x), \nabla_x^ku_x)dx
+
\ep\,
\int_{\TT}
g(
\mathcal{O}
\left(
|\nabla_x^{k+2}u_x|_g
\right),
\nabla_x^ku_x)dx
\nonumber
\\
&\leqslant
-\ep
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+
\ep\,C
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}
\|\nabla_x^ku_x\|_{L^2}
\nonumber
\\
&\leqslant
-\ep
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+
\frac{\ep}{2}
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+
\frac{\ep\,C^2}{2}
\|\nabla_x^ku_x\|_{L^2}^2
\nonumber
\\
&\leqslant
-
\frac{\ep}{2}
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+
\frac{C^2}{2}
\|u_x\|_{H^k}^2.
\nonumber
\end{align}
By integrating by parts
and by using \eqref{eq:gsym},
we have
\begin{align}
&c_3\,\int_{\TT}
g(A_1\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)dx
+c_4\,\int_{\TT}
g(A_2\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)dx
\nonumber
\\
&=
-\frac{c_3}{2}
\int_{\TT}
g\left(\nabla_x(A_1)\nabla_x^ku_x, \nabla_x^ku_x\right)dx
-\frac{c_4}{2}
\int_{\TT}
g\left(\nabla_x(A_2)\nabla_x^ku_x, \nabla_x^ku_x\right)dx
\nonumber
\\
&\leqslant
C\|u_x\|_{H^k}^2.
\nonumber
\end{align}
Collecting them and noting that
$\|u_x\|_{H^k}\leqslant CN_k(u)$
follows from \eqref{eq:timecut},
we derive
\begin{align}
&\int_{\TT}
g(\nabla_t(\nabla_x^ku_x), \nabla_x^ku_x)dx
\nonumber
\\
&\leqslant
-\frac{\ep}{2}\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+c_1\,
\int_{\TT}
g(
g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x,
\nabla_x^ku_x)dx
\nonumber
\\
&\quad
+c_2
\,\int_{\TT}
g(
g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x)dx
+C\,(N_k(u))^2.
\label{eq:V1}
\end{align}
\par
We next evaluate the second term of the RHS of \eqref{eq:eqV_k}.
In the computation, it is to be noted that
$\Lambda=
\mathcal{O}(|\nabla_x^{k-2}u_x|_g)$
and
$$
\nabla_t(\nabla_x^ku_x)=
-\ep\,\nabla_x^4(\nabla_x^ku_x)
+
a\,J_u\nabla_x^4(\nabla_x^ku_x)
+
\mathcal{O}\left(
\sum_{m=0}^{k+2}
|\nabla_x^mu_x|_g
\right).
$$
Noting these and integrating by parts, we obtain
\begin{align}
&\int_{\TT}
g(\nabla_t(\nabla_x^ku_x), \Lambda)dx
\nonumber
\\
&\leqslant
-\ep\,
\int_{\TT}
g(\nabla_x^4(\nabla_x^ku_x), \Lambda)dx
+
a\,
\int_{\TT}
g(J_u\nabla_x^4(\nabla_x^ku_x), \Lambda)dx
+
C\|u_x\|_{H^k}^2.
\label{eq:V20}
\end{align}
For the first term of the RHS of \eqref{eq:V20},
by using $\ep\leqslant 1$, integration by parts,
the Young inequality
$AB\leqslant A^2/8+2B^2$ for any $A,B\geqslant 0$, and
$\Lambda=\mathcal{O}(|\nabla_x^{k-2}u_x|_g)$,
we have
\begin{align}
-\ep\,
\int_{\TT}
g(\nabla_x^4(\nabla_x^ku_x), \Lambda)dx
&=
-\ep\,
\int_{\TT}
g(\nabla_x^2(\nabla_x^ku_x), \nabla_x^2(\Lambda))dx
\nonumber
\\
&\leqslant
\ep\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}\|\nabla_x^2(\Lambda)\|_{L^2}
\nonumber
\\
&\leqslant
\frac{\ep}{8}
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+
2\ep\|\nabla_x^2(\Lambda)\|_{L^2}^2
\nonumber
\\
&\leqslant
\frac{\ep}{8}
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+C\|u_x\|_{H^k}^2.
\label{eq:V21}
\end{align}
For the second term of the RHS of \eqref{eq:V20},
we compute $\nabla_x^2\Lambda$ to see
\begin{align}
\nabla_x^2\Lambda
&=
-\frac{d_1}{2a}\nabla_x^2
\left\{
g(\nabla_x^{k-2}u_x,J_uu_x)J_uu_x
\right\}
+
\frac{d_2}{8a}
\nabla_x^2\left\{
g(u_x,u_x)\nabla_x^{k-2}u_x
\right\}
\nonumber
\\
&=
-\frac{d_1}{2a}
g(\nabla_x^ku_x,J_uu_x)J_uu_x
+
\frac{d_2}{8a}
g(u_x,u_x)\nabla_x^ku_x
\nonumber
\\
&\quad
-\frac{d_1}{a}
g(\nabla_x^{k-1}u_x,J_u\nabla_xu_x)J_uu_x
-\frac{d_1}{a}
g(\nabla_x^{k-1}u_x,J_uu_x)J_u\nabla_xu_x
\nonumber
\\
&\quad
+\frac{d_2}{2a}
g(\nabla_xu_x,u_x)\nabla_x^{k-1}u_x
+
\mathcal{O}
\left(
\sum_{m=0}^{k-2}
|\nabla_x^mu_x|_g
\right).
\nonumber
\end{align}
Thus, by integrating by parts
and by substituting the above,
we obtain
\begin{align}
&a\,
\int_{\TT}
g(J_u\nabla_x^4(\nabla_x^ku_x), \Lambda)dx
\nonumber
\\
&=
a\,
\int_{\TT}
g(J_u\nabla_x^2(\nabla_x^ku_x), \nabla_x^2\Lambda)dx
\nonumber
\\
&=
-\frac{d_1}{2}Q_{2,1}
+\frac{d_2}{8}Q_{2,2}
-d_1Q_{2,3}-d_1Q_{2,4}
+\frac{d_2}{2}Q_{2,5}
+Q_{2,6},
\label{eq:Q2}
\end{align}
where
\begin{align}
Q_{2,1}
&=
\int_{\TT}
g(\nabla_x^ku_x,J_uu_x)
g(J_u\nabla_x^2(\nabla_x^ku_x),J_uu_x)\,dx,
\nonumber
\\
Q_{2,2}
&=
\int_{\TT}
g(u_x,u_x)
g(J_u\nabla_x^2(\nabla_x^ku_x), \nabla_x^ku_x)\,dx,
\nonumber
\\
Q_{2,3}
&=
\int_{\TT}
g(\nabla_x^{k-1}u_x,J_u\nabla_xu_x)
g(J_u\nabla_x^2(\nabla_x^ku_x),J_uu_x)\,dx,
\nonumber
\\
Q_{2,4}
&=
\int_{\TT}
g(\nabla_x^{k-1}u_x,J_uu_x)
g(J_u\nabla_x^2(\nabla_x^ku_x),J_u\nabla_xu_x)\,dx,
\nonumber
\\
Q_{2,5}
&=
\int_{\TT}
g(\nabla_xu_x,u_x)
g(J_u\nabla_x^2(\nabla_x^ku_x), \nabla_x^{k-1}u_x)\,dx,
\nonumber
\\
Q_{2,6}
&=
\int_{\TT}
g(J_u\nabla_x^2(\nabla_x^ku_x),
\mathcal{O}
\left(
\sum_{m=0}^{k-2}
|\nabla_x^mu_x|_g
\right)
)\,dx.
\nonumber
\end{align}
We compute $Q_{2,1},\ldots,Q_{2,6}$ separately.
By the integration by parts and the Hermitian property of the metric $g$,
we deduce
\begin{align}
Q_{2,1}
&=
\int_{\TT}
g(\nabla_x^ku_x,J_uu_x)
g(\nabla_x^2(\nabla_x^ku_x),u_x)\,dx
\nonumber
\\
&=
\int_{\TT}
g(g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x, \nabla_x^ku_x)\,dx,
\nonumber
\\
Q_{2,2}
&=
\int_{\TT}
g(
\nabla_x\left\{
g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x)
\right\},
\nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
-2\,\int_{\TT}
g(
g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x
)\,dx
\nonumber
\\
&=
-2\,\int_{\TT}
g(
g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x)\,dx,
\nonumber
\\
Q_{2,3}
&=
\int_{\TT}
g(\nabla_x^{k-1}u_x,J_u\nabla_xu_x)
g(\nabla_x^2(\nabla_x^ku_x),u_x)\,dx
\nonumber
\\
&\leqslant
-\int_{\TT}
g(\nabla_x^{k}u_x,J_u\nabla_xu_x)
g(\nabla_x(\nabla_x^ku_x),u_x)\,dx
+C\|u_x\|_{H^k}^2
\nonumber
\\
&=
-\int_{\TT}
g(
g(\nabla_x(\nabla_x^ku_x), u_x)J_u\nabla_xu_x, \nabla_x^ku_x
)\,dx
+C\|u_x\|_{H^k}^2,
\nonumber
\\
Q_{2,4}
&=
\int_{\TT}
g(\nabla_x^{k-1}u_x,J_uu_x)
g(\nabla_x^2(\nabla_x^ku_x),\nabla_xu_x)\,dx
\nonumber
\\
&\leqslant
-\int_{\TT}
g(\nabla_x^{k}u_x,J_uu_x)
g(\nabla_x(\nabla_x^ku_x),\nabla_xu_x)\,dx
+C\|u_x\|_{H^k}^2
\nonumber
\\
&=
-\int_{\TT}
g(
g(\nabla_x(\nabla_x^ku_x), \nabla_xu_x)J_uu_x, \nabla_x^ku_x
)\,dx
+C\|u_x\|_{H^k}^2,
\nonumber
\\
Q_{2,5}
&\leqslant
-\int_{\TT}
g(
g(\nabla_xu_x,u_x)
J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x
)\,dx
+C\|u_x\|_{H^k}^2,
\nonumber
\\
Q_{2,6}
&\leqslant
C\|u_x\|_{H^k}^2.
\nonumber
\end{align}
Applying these estimates to \eqref{eq:Q2},
we obtain
\begin{align}
a\,
\int_{\TT}
g(J_u\nabla_x^4(\nabla_x^ku_x), \Lambda)dx
&\leqslant
-\frac{d_1}{2}
\int_{\TT}
g(
g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x, \nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
-\frac{3d_2}{4}
\int_{\TT}
g(
g(\nabla_xu_x,u_x)
J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
+d_1\,
\int_{\TT}
g(
g(\nabla_x(\nabla_x^ku_x), u_x)J_u\nabla_xu_x, \nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
+d_1\,
\int_{\TT}
g(
g(\nabla_x(\nabla_x^ku_x), \nabla_xu_x)J_uu_x, \nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
+C\|u_x\|_{H^k}^2.
\label{eq:ri1}
\end{align}
Here we rewrite the sum of the third and the fourth term of the RHS
by using \eqref{eq:ayaC} and \eqref{eq:ayaD},
and use the integration by parts and \eqref{eq:gsym}
to find
\begin{align}
&d_1\,
\int_{\TT}
g(
g(\nabla_x(\nabla_x^ku_x), u_x)J_u\nabla_xu_x, \nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
+d_1\,
\int_{\TT}
g(
g(\nabla_x(\nabla_x^ku_x), \nabla_xu_x)J_uu_x, \nabla_x^ku_x
)\,dx
\nonumber
\\
&=
d_1\,\int_{\TT}
g(g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x),\nabla_x^ku_x)\,dx
+
\frac{d_1}{2}
\int_{\TT}
g(A_1\nabla_x(\nabla_x^ku_x),\nabla_x^ku_x)\,dx
\nonumber
\\
&\leqslant
d_1\,\int_{\TT}
g(g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x),\nabla_x^ku_x)\,dx
+
C\|u_x\|_{H^k}^2.
\label{eq:ri2}
\end{align}
Combining \eqref{eq:ri1} and \eqref{eq:ri2}, we have
\begin{align}
a\,
\int_{\TT}
g(J_u\nabla_x^4(\nabla_x^ku_x), \Lambda)dx
&\leqslant
-\frac{d_1}{2}
\int_{\TT}
g(
g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x, \nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
+\left(d_1-\frac{3d_2}{4}\right)
\int_{\TT}
g(
g(\nabla_xu_x,u_x)
J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
+C\|u_x\|_{H^k}^2.
\label{eq:V22}
\end{align}
Therefore,
from
\eqref{eq:V20},
\eqref{eq:V21},
and \eqref{eq:V22},
it follows that
\begin{align}
&\int_{\TT}
g(\nabla_t(\nabla_x^ku_x), \Lambda)\,dx
\nonumber
\\
&\leqslant
\frac{\ep}{8}
\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
-\frac{d_1}{2}
\int_{\TT}
g(
g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x, \nabla_x^ku_x
)\,dx
\nonumber
\\
&\quad
+\left(d_1-\frac{3d_2}{4}\right)
\int_{\TT}
g(
g(\nabla_xu_x,u_x)
J_u\nabla_x(\nabla_x^ku_x),
\nabla_x^ku_x
)\,dx
+C\|u_x\|_{H^k}^2.
\label{eq:V2e}
\end{align}
\par
We next evaluate the third term of the RHS of \eqref{eq:eqV_k}.
For this purpose, we compute
$\nabla_t\Lambda$.
Using the product formula and noting
$\nabla_tu_x=\nabla_xu_t
=\mathcal{O}
\left(
\displaystyle
\sum_{m=0}^4|\nabla_x^mu_x|_g
\right),
$
we have
\begin{align}
\nabla_t\Lambda
&=
-\frac{d_1}{2a}
g(\nabla_t\nabla_x^{k-2}u_x,J_uu_x)J_uu_x
-\frac{d_1}{2a}
g(\nabla_x^{k-2}u_x,J_u\nabla_tu_x)J_uu_x
\nonumber
\\
&\quad
-\frac{d_1}{2a}
g(\nabla_x^{k-2}u_x,J_uu_x)J_u\nabla_tu_x
+
\frac{d_2}{8a}g(u_x,u_x)\nabla_t\nabla_x^{k-2}u_x
\nonumber
\\
&\quad
+\frac{d_2}{4a}g(\nabla_xu_t,u_x)\nabla_x^{k-2}u_x
\nonumber
\\
&=
-\frac{d_1}{2a}
g(\nabla_t\nabla_x^{k-2}u_x,J_uu_x)J_uu_x
+
\frac{d_2}{8a}g(u_x,u_x)\nabla_t\nabla_x^{k-2}u_x
\nonumber
\\
&\quad
+
\mathcal{O}
\left(
|\nabla_x^{k-2}u_x|_g
\sum_{m=0}^4|\nabla_x^mu_x|_g
\right).
\nonumber
\end{align}
Thus, we have
\begin{align}
\int_{\TT}
g(\nabla_t\Lambda,V_k)\,dx
&=
Q_{3,1}+Q_{3,2}+Q_{3,3},
\nonumber
\end{align}
where
\begin{align}
Q_{3,1}
&=
-\frac{d_1}{2a}
\int_{\TT}
g(g(\nabla_t\nabla_x^{k-2}u_x, J_uu_x)J_uu_x,V_k)\,dx,
\nonumber
\\
Q_{3,2}
&=
\frac{d_2}{8a}
\int_{\TT}
g(g(u_x,u_x)\nabla_t\nabla_x^{k-2}u_x,V_k)\,dx,
\nonumber
\\
Q_{3,3}
&=
\int_{\TT}
g(
\mathcal{O}
\left(
|\nabla_x^{k-2}u_x|_g
\sum_{m=0}^4|\nabla_x^mu_x|_g
\right),V_k)\,dx.
\nonumber
\end{align}
For $Q_{3,3}$,
since $k\geqslant 4$,
we use the Sobolev embedding and the Cauchy-Schwarz inequality
to obtain
\begin{align}
Q_{3,3}
&\leqslant
C\|u_x\|_{H^k}^2.
\label{eq:Q33}
\end{align}
For $Q_{3,1}$ and $Q_{3,2}$,
we need to compute $\nabla_t\nabla_x^{k-2}u_x$.
Indeed,
by the same computation as that used to obtain $\nabla_t(\nabla_x^{k}u_x)$,
we find
\begin{align}
\nabla_t\nabla_x^{k-2}u_x
&=
-\ep\nabla_x^4(\nabla_x^{k-2}u_x)
+a\,J_u\nabla_x^4(\nabla_x^{k-2}u_x)
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right)
\nonumber
\\
&=
\ep\,
\mathcal{O}
(|\nabla_x^{k+2}u_x|_g)
+a\,J_u\nabla_x^2(\nabla_x^{k}u_x)
+
\mathcal{O}
\left(
\sum_{m=0}^k
|\nabla_x^mu_x|_g
\right).
\label{eq:k-2}
\end{align}
Applying \eqref{eq:k-2}, we deduce
\begin{align}
Q_{3,1}
&=
-\frac{d_1}{2a}
\int_{\TT}
g(g(\nabla_t\nabla_x^{k-2}u_x, J_uu_x)J_uu_x,\nabla_x^ku_x+\Lambda)\,dx
\nonumber
\\
&\leqslant
-\frac{d_1}{2a}
\int_{\TT}
g(g(\nabla_t\nabla_x^{k-2}u_x, J_uu_x)J_uu_x,\nabla_x^ku_x)\,dx
+C\|u_x\|_{H^k}^2
\nonumber
\\
&\leqslant
\ep\,
\int_{\TT}
g(\mathcal{O}(|\nabla_x^{k+2}u_x|_g),\nabla_x^ku_x)\,dx
\nonumber
\\
&\quad
-\frac{d_1}{2}
\int_{\TT}
g(g(J_u\nabla_x^2(\nabla_x^ku_x), J_uu_x)J_uu_x,\nabla_x^ku_x)\,dx
+C\|u_x\|_{H^k}^2
\nonumber
\\
&\leqslant
\frac{\ep}{8}\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
-\frac{d_1}{2}
\int_{\TT}
g(g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x,\nabla_x^ku_x)\,dx
+C\|u_x\|_{H^k}^2.
\label{eq:Q31}
\end{align}
In the same way, applying \eqref{eq:k-2}, we deduce
\begin{align}
Q_{3,2}
&=
\frac{d_2}{8a}
\int_{\TT}
g(g(u_x,u_x)\nabla_t\nabla_x^{k-2}u_x,\nabla_x^ku_x+\Lambda)\,dx
\nonumber
\\
&\leqslant
\frac{d_2}{8a}
\int_{\TT}
g(g(u_x,u_x)\nabla_t\nabla_x^{k-2}u_x,\nabla_x^ku_x)\,dx
+C\|u_x\|_{H^k}^2
\nonumber
\\
&\leqslant
\ep\,
\int_{\TT}
g(\mathcal{O}(|\nabla_x^{k+2}u_x|_g),\nabla_x^ku_x)\,dx
\nonumber
\\
&\quad
+
\frac{d_2}{8}
\int_{\TT}
g(g(u_x,u_x)J_u\nabla_x^2(\nabla_x^ku_x),\nabla_x^ku_x)\,dx
+C\|u_x\|_{H^k}^2
\nonumber
\\
&\leqslant
\frac{\ep}{8}\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+
\frac{d_2}{8}
\int_{\TT}
g(\nabla_x\left\{g(u_x,u_x)J_u\nabla_x(\nabla_x^ku_x)\right\},
\nabla_x^ku_x)\,dx
\nonumber
\\
&\quad
-\frac{d_2}{4}
\int_{\TT}
g(g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)\,dx
+C\|u_x\|_{H^k}^2
\nonumber
\\
&=
\frac{\ep}{8}\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
-\frac{d_2}{4}
\int_{\TT}
g(g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)\,dx
\nonumber
\\
&\quad
+C\|u_x\|_{H^k}^2.
\label{eq:Q32}
\end{align}
Collecting
\eqref{eq:Q33},
\eqref{eq:Q31},
and
\eqref{eq:Q32},
we obtain
\begin{align}
&\int_{\TT}
g(\nabla_t\Lambda, V_k)\,dx
\nonumber
\\
&\leqslant
\frac{\ep}{4}\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
-\frac{d_1}{2}
\int_{\TT}
g(g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x,\nabla_x^ku_x)\,dx
\nonumber
\\
&\quad
-\frac{d_2}{4}
\int_{\TT}
g(g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)\,dx
+C\|u_x\|_{H^k}^2.
\label{eq:V3e}
\end{align}
\par
Consequently, collecting the information
\eqref{eq:eqV_k},
\eqref{eq:V1},
\eqref{eq:V2e},
and
\eqref{eq:V3e},
we derive
\begin{align}
\frac{1}{2}
\frac{d}{dt}
\|V_k\|_{L^2}^2
&\leqslant
-\frac{\ep}{8}\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+(c_1-d_1)
\int_{\TT}
g(g(\nabla_x^2(\nabla_x^ku_x), u_x)J_uu_x,\nabla_x^ku_x)\,dx
\nonumber
\\
&\quad
+(c_2+d_1-d_2)
\int_{\TT}
g(g(\nabla_xu_x,u_x)J_u\nabla_x(\nabla_x^ku_x), \nabla_x^ku_x)\,dx
\nonumber
\\
&\quad
+C\|u_x\|_{H^k}^2
+C(N_k(u))^2,
\nonumber
\end{align}
where $c_1$ and $c_2$ are given by
\eqref{eq:c1} and \eqref{eq:c2}.
To cancel the second and the third terms of the RHS above,
we set $d_1$ and $d_2$ so that
\begin{align}
d_1&=c_1=aS+c,
\nonumber
\\
d_2&=c_2+d_1
=
\left(k+\frac{1}{2}\right)aS
+(2k+1)b
+\left(k+\frac{7}{2}\right)c.
\nonumber
\end{align}
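Indeed, with this choice the coefficients of the two remaining critical integrals cancel, as can be checked directly:
\begin{align}
c_1-d_1=0,
\qquad
c_2+d_1-d_2
=
c_2+d_1-(c_2+d_1)
=0.
\nonumber
\end{align}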
Therefore, using $\|u_x\|_{H^k}\leqslant CN_k(u)$,
we conclude that
\begin{align}
\frac{1}{2}
\frac{d}{dt}
\|V_k\|_{L^2}^2
&\leqslant
-\frac{\ep}{8}\|\nabla_x^2(\nabla_x^ku_x)\|_{L^2}^2
+C(N_k(u))^2
\label{eq:VVe}
\end{align}
holds on $[0,T_{\ep}^{\star}]$.
\par
Let us now return to the original purpose of deriving the uniform estimate for
$\left\{N_k(u^{\ep})\right\}_{\ep\in (0,1]}$.
To achieve this,
it remains to consider
the energy estimate for $\|u_x^{\ep}\|_{H^{k-1}}^2$.
However,
by using the integration by parts, the Sobolev embedding,
and the Cauchy-Schwarz inequality repeatedly,
we can easily show that
\begin{align}
\frac{1}{2}
\frac{d}{dt}
\|u_x^{\ep}\|_{H^{k-1}}^2
&\leqslant
-\frac{\ep}{2}\sum_{m=0}^{k-1}
\|\nabla_x^{m+2}u_x^{\ep}\|_{L^2}^2
+
C\,(N_k(u^{\ep}))^2.
\label{eq:k-1}
\end{align}
Therefore, from \eqref{eq:VVe} and \eqref{eq:k-1},
we conclude that
there exists a positive constant $C$
depending on $a,b,c,k,\lambda, S, \|u_{0x}\|_{H^4}$
and not on $\ep$ such that
\begin{align}
\frac{d}{dt}(N_k(u^{\ep}))^2
&=
\frac{d}{dt}\left(
\|u_x^{\ep}\|_{H^{k-1}}^2
+
\|V_k^{\ep}\|_{L^2}^2
\right)
\leqslant
C(N_k(u^{\ep}))^2
\nonumber
\end{align}
on the time-interval $[0,T_{\ep}^{\star}]$.
This implies
$(N_k(u^{\ep}(t)))^2
\leqslant
(N_k(u_0))^2
e^{Ct}
$
for any $t\in [0,T_{\ep}^{\star}]$.
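For completeness, this is the standard Gronwall argument: multiplying the differential inequality by $e^{-Ct}$ gives
\begin{align}
\frac{d}{dt}
\left\{
e^{-Ct}(N_k(u^{\ep}(t)))^2
\right\}
=
e^{-Ct}
\left\{
\frac{d}{dt}(N_k(u^{\ep}(t)))^2
-C(N_k(u^{\ep}(t)))^2
\right\}
\leqslant 0,
\nonumber
\end{align}
so that $e^{-Ct}(N_k(u^{\ep}(t)))^2\leqslant (N_k(u^{\ep}(0)))^2=(N_k(u_0))^2$
on $[0,T_{\ep}^{\star}]$.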
Thus, by the definition of $T_{\ep}^{\star}$,
there holds
\begin{align}
4(N_4(u_0))^2
&=
(N_4(u^{\ep}(T_{\ep}^{\star})))^2
\leqslant
(N_4(u_0))^2
e^{C_4T_{\ep}^{\star}}
\nonumber
\end{align}
with $C_4>0$ which depends on
$a,b,c,\lambda, S, \|u_{0x}\|_{H^4}$
and not on $\ep$.
This shows $e^{C_4T_{\ep}^{\star}}\geqslant 4$
and hence
$T_{\ep}^{\star}\geqslant \log 4/C_4$ holds.
Therefore, if we set $T=\log 4/C_4$,
it follows that
$T_{\ep}^{\star}\geqslant T$ for any $\ep\in (0,1]$
and
$\left\{
N_k(u^{\ep})
\right\}_{\ep\in (0,1]}$
is bounded in $L^{\infty}(0,T)$.
\par
As stated before, this shows that $\left\{u_x^{\ep}\right\}_{\ep\in(0,1]}$
is bounded in $L^{\infty}(0,T;H^k(\TT;TN))$.
Hence the standard compactness argument and the compactness of $N$
show the existence of a map $u\in C([0,T]\times \TT;N)$
and a subsequence $\left\{u^{\ep(j)}\right\}_{j=1}^{\infty}$ of
$\left\{u^{\ep}\right\}_{\ep\in (0,1]}$
that satisfy
\begin{alignat}{3}
&u_x^{\ep(j)}\to u_x
\quad
&\text{in}
\quad
&C([0,T];H^{k-1}(\TT;TN)),
\nonumber
\\
&u_x^{\ep(j)}\to u_x
\quad
&\text{in}
\quad
&L^{\infty}(0,T;H^{k}(\TT;TN))
\quad
\text{weakly star}
\nonumber
\end{alignat}
as $j\to \infty$,
and this $u$ is smooth and solves \eqref{eq:pde}-\eqref{eq:data}.
\par
Finally, in the general case where
$u_0\in C(\TT;N)$ and $u_{0x}\in H^k(\TT;TN)$,
we modify the above argument as follows:
We take a sequence
$\left\{u_{0}^i\right\}_{i=1}^{\infty}\subset C^{\infty}(\TT;N)$
such that
\begin{equation}
u_{0x}^i
\to
u_{0x}
\quad
\text{in}
\quad
H^{k}(\TT;TN)
\label{eq:dense}
\end{equation}
as $i\to \infty$.
There exist $T_i=T(\|u_{0x}^i\|_{H^4})>0$
and
$u^i\in C^{\infty}([0,T_i]\times \TT;N)$ which satisfies
\eqref{eq:pde} and
$u^i(0,x)=u_0^i(x)$ for each $i=1,2,\ldots$,
since $u_0^i\in C^{\infty}(\TT;N)$.
Recalling the above argument, it is not difficult to show
the estimate $T_i^{\star}\geqslant \log 4/C_{4,i}$
where
$$
T^{\star}_{i}
=
\sup
\left\{
T>0 \ | \
N_4(u^{i}(t))\leqslant 2N_4(u_{0}^i)
\quad
\text{for all}
\quad
t\in[0,T]
\right\},
$$
and $C_{4,i}>0$ depends on
$a,b,c,\lambda,S, \|u_{0x}^i\|_{H^4}$.
Note that $C_{4,i}$ depends on $\|u_{0x}^i\|_{H^4}$ continuously.
This together with \eqref{eq:dense} shows that
there exists $C_4^{\prime}>0$ depending on
$a,b,c,\lambda, S, \|u_{0x}\|_{H^4}$ and not on $i$
such that
$T_i^{\star}\geqslant \log 4/C_{4}^{\prime}$ for
sufficiently large $i$.
Therefore, if we set $T=\log 4/C_4^{\prime}$,
there exists a sufficiently large $i_0\in \mathbb{N}$
such that
$T_{i}^{\star}\geqslant T$ for any $i\geqslant i_0$
and
$\left\{
N_k(u^{i})
\right\}_{i=i_0}^{\infty}$
is bounded in $L^{\infty}(0,T)$.
Therefore, by applying the compactness argument again,
we can construct the desired
solution to \eqref{eq:pde}-\eqref{eq:data}.
This completes the proof.
\end{proof}
\section{Proof of Theorem~\ref{theorem:uniqueness}}
\label{section:proof}
The goal of this section is to complete the proof of
Theorem~\ref{theorem:uniqueness}.
Throughout this section, it is assumed that $k\geqslant 6$.
\begin{proof}[Proof of Theorem~\ref{theorem:uniqueness}]
Since $k\geqslant 6\geqslant 4$,
Theorem~\ref{theorem:existence} established in Section~\ref{section:existence}
guarantees the existence of
$T=T(\|u_{0x}\|_{H^4(\TT;TN)})>0$
and a map $u\in C([0,T]\times \TT;N)$
so that
$u_x\in L^{\infty}(0,T;H^k(\TT;TN))
\cap
C([0,T];H^{k-1}(\TT;TN))$
and $u$ solves \eqref{eq:pde}-\eqref{eq:data} on the time-interval
$[0,T]$.
In what follows, we shall concentrate on the proof of
the uniqueness of the solution.
Once the uniqueness is established,
we can easily prove the time-continuity of $\nabla_x^ku_x$ in $L^2$
by the standard argument, which implies
$u_x\in C([0,T];H^k(\TT;TN))$.
In this way, the proof of Theorem~\ref{theorem:uniqueness} is completed.
\par
Let $u,v$ be solutions constructed in Theorem~\ref{theorem:existence}.
Then
$u$ and $v$
solve \eqref{eq:pde}-\eqref{eq:data}
and satisfy
$u_x, v_x
\in L^{\infty}(0,T;H^6(\TT;TN))
\cap
C([0,T];H^{5}(\TT;TN))$.
We shall show $u=v$.
For this purpose,
fix $w$ as an isometric embedding of $(N,g)$ into
some Euclidean space $\RR^d$
so that $N$ is considered as a submanifold of $\RR^d$.
We set
$U=w{\circ}u$,
$V=w{\circ}v$,
$Z=U-V$,
$\UU=dw_u(\nabla_xu_x)$,
$\VV=dw_v(\nabla_xv_x)$,
and $\WW=\UU-\VV$.
To prove $u=v$,
it suffices to show $Z=0$.
The proof of $Z=0$ consists of the following four steps:
\begin{enumerate}
\item[1.] Notations and tools of computations used below.
\item[2.] Analysis of the partial differential equation
satisfied by $\UU$.
\item[3.] Classical energy estimates for
$\|\WW\|_{L^2(\TT;\RR^d)}$ with the loss of derivatives.
\item[4.] Energy estimates for
$\|\TW\|_{L^2(\TT;\RR^d)}$ (defined later)
to eliminate the loss of derivatives.
\end{enumerate}
\vspace{0.3em}
\par
{\bf 1. Notations and tools of computations used below.}
\\
We state some notations and gather tools of computations
which will be used below.
\par
The inner product and the norm in $\RR^d$
are denoted by
$(\cdot,\cdot)$ and $|\cdot|$ respectively.
The inner product and the norm in $L^2$
for $\RR^d$-valued functions on $\TT$
are denoted
by
$\lr{\cdot,\cdot}$
and
$\|\cdot\|_{L^2}$ respectively.
That is,
for $\phi, \psi:\TT\to \RR^d$,
$\lr{\phi,\psi}$ and $\|\phi\|_{L^2}$
are given by
$
\lr{\phi,\psi}
=
\int_{\TT}
(\phi(x),\psi(x))
\,dx$
and
$\|\phi\|_{L^2}
=\sqrt{\lr{\phi,\phi}}
$
respectively.
\vspace{0.3em}
\par
Let $p\in N$ be a fixed point.
We consider the orthogonal decomposition
$
\RR^d
=
dw_p(T_pN)
\oplus
\left(
dw_p(T_pN)
\right)^{\perp}
$,
where
$dw_p:T_pN\to T_{w{\circ}p}\RR^d\cong \RR^d$ is
the differential of $w:N\to \RR^d$ at $p\in N$
and
$\left(
dw_p(T_pN)
\right)^{\perp}$
is the orthogonal complement of
$dw_p(T_pN)$ in $\RR^d$.
We denote the orthogonal projection mapping
of $\RR^d$ onto $dw_p(T_pN)$ by $P(w{\circ}p)$
and define $N(w{\circ}p)$ by
$N(w{\circ}p)=I_d-P(w{\circ}p)$, where
$I_d$ is the identity mapping on $\RR^d$.
Moreover, we define $J(w{\circ}p)$
as an action on $\RR^d$ by first projecting onto
$dw_p(T_pN)$
and then applying the complex structure at $p\in N$.
More precisely, we define $J(w{\circ}p)$ by
\begin{align}
J(w{\circ}p)&=
dw_p\circ J_{p}\circ dw_{w{\circ}p}^{-1}\circ P(w{\circ}p).
\label{eq:J(p)}
\end{align}
We can extend $P(\cdot)$, $N(\cdot)$, and $J(\cdot)$
to a smooth linear operator on $\RR^d$ so that
$P(q)$, $N(q)$, and $J(q)$ make sense for all $q\in \RR^d$
following the argument in e.g. \cite[pp.17]{NSVZ}.
Though, in general, $J(q)$ is neither skew-symmetric nor squares to
minus the identity,
similar properties hold if $q$ is restricted to $w(N)$.
Indeed, from the definition of $P(w{\circ}p)$ and
$J(w{\circ}p)$, it follows that
\begin{align}
(P(w{\circ}p)Y_1,Y_2)&=(Y_1,P(w{\circ}p)Y_2),
\label{eq:J0}
\\
(J(w{\circ}p)Y_1,Y_2)&=-(Y_1,J(w{\circ}p)Y_2),
\label{eq:J(wp)1}
\\
\left(
J(w{\circ}p)
\right)^2Y_3
&=
-P(w{\circ}p)Y_3,
\label{eq:J(wp)2}
\end{align}
for any $p\in N$ and $Y_1,Y_2,Y_3\in \RR^d$.
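For instance, \eqref{eq:J(wp)2} can be checked directly from the definition \eqref{eq:J(p)}: since $P(w{\circ}p)\circ dw_p=dw_p$ and $J_p^2=-\mathrm{id}$ on $T_pN$, we compute
\begin{align}
\left(J(w{\circ}p)\right)^2Y_3
&=
dw_p\circ J_p\circ dw_{w{\circ}p}^{-1}\circ P(w{\circ}p)
\circ
dw_p\circ J_p\circ dw_{w{\circ}p}^{-1}\circ P(w{\circ}p)Y_3
\nonumber
\\
&=
dw_p\circ J_p^2\circ dw_{w{\circ}p}^{-1}\circ P(w{\circ}p)Y_3
=
-P(w{\circ}p)Y_3.
\nonumber
\end{align}
Similarly, \eqref{eq:J(wp)1} follows from \eqref{eq:J0},
the skew-symmetry of $J_p$ with respect to $g$,
and the fact that $w$ is isometric.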
\par
Let $Y\in \Gamma(u^{-1}TN)$ be fixed.
For $(t,x)\in [0,T]\times \TT$,
let $\left\{\nu_3, \ldots, \nu_d\right\}$ denote a smooth local orthonormal
frame field for the normal bundle $(dw(TN))^{\perp}$
near $U(t,x)=w{\circ}u(t,x)\in w(N)$.
Recalling that $dw_u(\nabla_xY)$ is the $dw_u(T_uN)$-component
of $\p_x(dw_u(Y))$,
we see
\begin{align}
dw_u(\nabla_xY)
&=
\p_x
\left(
dw_u(Y)
\right)
-
\sum_{k=3}^d
(\p_x(dw_u(Y)), \nu_k(U))\nu_k(U)
\nonumber
\\
&=
\p_x\left(
dw_u(Y)
\right)
+
\sum_{k=3}^d
(dw_u(Y), \p_x\left(\nu_k(U)\right))\nu_k(U)
\nonumber
\\
&=
\p_x\left(
dw_u(Y)
\right)
+
\sum_{k=3}^d
(dw_u(Y), D_k(U)U_x)\nu_k(U)
\nonumber
\\
&=
\p_x\left(
dw_u(Y)
\right)
+
A(U)(dw_u(Y),U_x),
\label{eq:cox2}
\end{align}
where $D_k=\operatorname{grad} \nu_k$ for $k=3,\dots,d$
and $A(q)(\cdot,\cdot)=
\displaystyle\sum_{k=3}^d
(\cdot, D_k(q)\cdot)\nu_k(q)$
is the second fundamental form at $q\in w(N)$.
In the same way, only by replacing $x$ with $t$,
we see
\begin{align}
dw_u(\nabla_tY)
&=
\p_t\left(
dw_u(Y)
\right)
+
\sum_{k=3}^d
(dw_u(Y), D_k(U)U_t)\nu_k(U).
\label{eq:cot2}
\end{align}
\par
The Sobolev embedding and the Gagliardo-Nirenberg inequality
lead to the equivalence between
$U_x, V_x\in L^{\infty}(0,T;H^6(\TT;\RR^d))$
and $u_x,v_x\in L^{\infty}(0,T;H^6(\TT;TN))$.
In particular, from the Sobolev embedding
$H^1(\TT)$ into $C(\TT)$,
it follows that
$\p_x^kU_x, \p_x^kV_x\in L^{\infty}((0,T)\times \TT;\RR^d)$
for $k=0,1,\ldots,5$,
which will be used below without further comment.
\par
We next observe some properties related to $\nu_k$ and $D_k$
for $k=3,\ldots,d$.
\begin{lemma}
\label{lemma:nu}
For each $(t,x)\in [0,T]\times \TT$,
the following properties hold.
\begin{align}
J(U)\nu_k(U)&=0,
\label{eq:J1}
\\
(\nu_k(U), \WW)&=
-(\nu_k(U)-\nu_k(V), \VV)
=\mathcal{O}(|Z|),
\label{eq:key1}
\\
(\nu_k(U),\p_x\WW)
&=
-(D_k(U)U_x,\WW)
-(D_k(U)Z_x,\VV)
+
\mathcal{O}(|Z|),
\label{eq:key2}
\\
(\nu_k(U),\p_x^2\WW)
&=
-2\,(D_k(U)U_x,\p_x\WW)
+
\mathcal{O}(|Z|+|Z_x|+|\WW|),
\label{eq:key22}
\\
(D_k(U)Y_1,Y_2)
&=(Y_1,D_k(U)Y_2)
\ \ \
\text{for any $Y_1,Y_2:[0,T]\times \TT\to \RR^d$}.
\label{eq:D_k}
\end{align}
\end{lemma}
\begin{remark}
In particular,
in view of \eqref{eq:key1},
we find
(see the argument to show \eqref{eq:18e} in the third step)
that the term including $\p_x^2\WW$ or $\p_x\WW$
can be handled as a harmless term if the vector part
is described by $\nu_k(U)$ with some $k=3,\ldots,d$.
This is related to the reason
why we choose $dw_u(\nabla_xu_x)-dw_v(\nabla_xv_x)$ as $\WW$.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{lemma:nu}]
First, \eqref{eq:J1} is a direct consequence of the definition of
$J(U)$
and the orthogonality
$\nu_k(U)\perp dw_u(T_uN)$.
Next,
in view of
$(\nu_k(U),\UU)=(\nu_k(V),\VV)=0$,
we have
\begin{align}
(\nu_k(U),\WW)
=
(\nu_k(U),\UU-\VV)
=
-(\nu_k(U),\VV)
=
-(\nu_k(U)-\nu_k(V),\VV),
\nonumber
\end{align}
which shows \eqref{eq:key1}.
Moreover, by taking the derivative of \eqref{eq:key1} in $x$,
we have
\begin{align}
&(\nu_k(U),\p_x\WW)
\nonumber
\\
&=
\p_x\left\{
(\nu_k(U),\WW)
\right\}
-(\p_x\left\{
\nu_k(U)
\right\},\WW)
\nonumber
\\
&=
-(\p_x\left\{
\nu_k(U)-\nu_k(V)
\right\}, \VV)
-(\nu_k(U)-\nu_k(V),\p_x\VV)
-(D_k(U)U_x,\WW)
\nonumber
\\
&=
-(D_k(U)U_x-D_k(V)V_x, \VV)
-(\nu_k(U)-\nu_k(V),\p_x\VV)
-(D_k(U)U_x,\WW)
\nonumber
\\
&=
-(D_k(U)Z_x-(D_k(U)-D_k(V))V_x, \VV)
-(\nu_k(U)-\nu_k(V),\p_x\VV)
-(D_k(U)U_x,\WW)
\nonumber
\\
&=-(D_k(U)U_x,\WW)-(D_k(U)Z_x, \VV)
+\mathcal{O}(|Z|),
\nonumber
\end{align}
which shows \eqref{eq:key2}.
We obtain \eqref{eq:key22}
by taking the derivative of \eqref{eq:key2} in $x$.
We omit the proof of \eqref{eq:D_k}, since it has been proved
in \cite[pp.912]{onodera1}.
\end{proof}
\par
The following lemma comes from the K\"ahler condition on $(N,J,g)$.
\begin{lemma}
\label{lemma:kaehler}
(i):
For any $Y\in \Gamma(u^{-1}TN)$,
\begin{align}
\p_x(J(U))dw_u(Y)
&=
\sum_{k=3}^d
\left(
dw_u(Y), J(U)D_k(U)U_x
\right)
\nu_k(U).
\label{eq:kaehler1}
\end{align}
(ii):
For any $Y:[0,T]\times \TT\to \RR^d$,
\begin{align}
\p_x(J(U))Y
&=
\sum_{k=3}^d
\left(Y,J(U)D_k(U)U_x\right)\nu_k(U)
-\sum_{k=3}^d
(Y,\nu_k(U))J(U)D_k(U)U_x.
\label{eq:kaehler2}
\end{align}
\end{lemma}
\begin{remark}
Using \eqref{eq:kaehler2} combined with \eqref{eq:key1},
we can handle the term $\p_x(J(U))\p_x\WW$ as a harmless
term in the energy estimate for $\WW$ in $L^2$.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{lemma:kaehler}]
First we show (i). For $Y\in \Gamma(u^{-1}TN)$,
the K\"ahler condition on $(N,J,g)$ implies
$\nabla_xJ_uY=J_u\nabla_xY$.
Hence there holds
\begin{equation}
dw_u(\nabla_xJ_uY)=dw_u(J_u\nabla_xY).
\label{eq:kae}
\end{equation}
From \eqref{eq:cox2} and \eqref{eq:J1},
the RHS
of \eqref{eq:kae} satisfies
\begin{align}
dw_u(J_u\nabla_xY)
&
=J(U)dw_u(\nabla_xY)
=
J(U)
\p_x(dw_u(Y)).
\label{eq:kae2}
\end{align}
On the other hand, from \eqref{eq:cox2},
the left hand side of \eqref{eq:kae} satisfies
\begin{align}
dw_u(\nabla_xJ_uY)
&=
\p_x\left\{dw_u(J_uY)\right\}
+
\sum_{k=3}^d
\left(
dw_u(J_uY), D_k(U)U_x
\right)
\nu_k(U)
\nonumber
\\
&=
\p_x\left\{J(U)dw_u(Y)\right\}
+
\sum_{k=3}^d
\left(
J(U)dw_u(Y), D_k(U)U_x
\right)
\nu_k(U)
\nonumber
\\
&=
\p_x(J(U))dw_u(Y)
+
J(U)\p_x(dw_u(Y))
\nonumber
\\
&\quad
+
\sum_{k=3}^d
\left(
J(U)dw_u(Y), D_k(U)U_x
\right)
\nu_k(U).
\label{eq:kae3}
\end{align}
By substituting \eqref{eq:kae2} and \eqref{eq:kae3} into
\eqref{eq:kae}, and by using \eqref{eq:J(wp)1},
we have
\begin{align}
\p_x(J(U))dw_u(Y)
&=
-\sum_{k=3}^d
\left(
J(U)dw_u(Y), D_k(U)U_x
\right)
\nu_k(U)
\nonumber
\\
&=
\sum_{k=3}^d
\left(
dw_u(Y), J(U)D_k(U)U_x
\right)
\nu_k(U),
\nonumber
\end{align}
which shows \eqref{eq:kaehler1}.
Next we show (ii).
Decomposing $Y=P(U)Y+N(U)Y$ where
$P(U)Y\in dw(T_uN)$ and
$N(U)Y\in (dw(T_uN))^{\perp}$ for each $(t,x)$,
we have
\begin{align}
\p_x(J(U))Y
&=\p_x(J(U))P(U)Y
+
\p_x(J(U))N(U)Y.
\label{eq:kae4}
\end{align}
By using \eqref{eq:kaehler1} and by noting
that $N(U)Y$ is perpendicular to $J(U)D_k(U)U_x$,
we find that
the first term of the RHS of
\eqref{eq:kae4} satisfies
\begin{align}
\p_x(J(U))P(U)Y
&=
\sum_{k=3}^d
\left(
P(U)Y, J(U)D_k(U)U_x
\right)
\nu_k(U)
\nonumber
\\
&=
\sum_{k=3}^d
\left(
Y, J(U)D_k(U)U_x
\right)
\nu_k(U).
\label{eq:kae5}
\end{align}
Moreover, since
\begin{equation}
\p_x(J(U))\nu_k(U)
=
\p_x(J(U)\nu_k(U))
-J(U)\p_x(\nu_k(U))
=
-J(U)D_k(U)U_x
\label{eq:J2}
\end{equation}
follows from \eqref{eq:J1},
the second term of the RHS of \eqref{eq:kae4} satisfies
\begin{align}
\p_x(J(U))N(U)Y
&=
\sum_{k=3}^d(Y,\nu_k(U))\p_x(J(U))\nu_k(U)
\nonumber
\\
&=
-\sum_{k=3}^d(Y,\nu_k(U))
J(U)D_k(U)U_x.
\label{eq:kae6}
\end{align}
Substituting \eqref{eq:kae5} and \eqref{eq:kae6} into \eqref{eq:kae4},
we obtain \eqref{eq:kaehler2}.
\end{proof}
\par
As in the proof of Theorem~\ref{theorem:existence},
we denote the sectional curvature on $(N,g)$ by $S$
which is supposed to be a constant.
Recall that the Riemann curvature tensor $R$ is expressed by
\begin{align}
R(Y_1,Y_2)Y_3
&=
S\left\{
g(Y_2,Y_3)Y_1
-
g(Y_1,Y_3)Y_2
\right\}
\label{eq:curvature}
\end{align}
for any $Y_1,Y_2,Y_3\in \Gamma(u^{-1}TN)$.
The next lemma comes from \eqref{eq:curvature}.
\begin{lemma}
\label{lemma:curvature3}
For any $Y_1,Y_2,Y_3\in \Gamma(u^{-1}TN)$,
\begin{align}
&dw_u
\left(
R(Y_1,Y_2)Y_3
\right)
=
\sum_k
\left(
dw_u(Y_3), D_k(U)dw_u(Y_2)
\right)
P(U)D_k(U)dw_u(Y_1)
\nonumber
\\
&\phantom{dw_u
\left(
R(Y_1,Y_2)Y_3
\right)}
\qquad
-
\sum_k
\left(
dw_u(Y_3), D_k(U)dw_u(Y_1)
\right)
P(U)D_k(U)dw_u(Y_2),
\label{eq:curvature2}
\\
&\sum_k
\left(
dw_u(Y_3), D_k(U)dw_u(Y_2)
\right)
P(U)D_k(U)dw_u(Y_1)
\nonumber
\\
&\qquad
-
\sum_k
\left(
dw_u(Y_3), D_k(U)dw_u(Y_1)
\right)
P(U)D_k(U)dw_u(Y_2)
\nonumber
\\
&=
S\,\left\{
(dw_u(Y_3), dw_u(Y_2))dw_u(Y_1)
-(dw_u(Y_3), dw_u(Y_1))dw_u(Y_2)
\right\}.
\label{eq:curvature3}
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:curvature3}]
The identity \eqref{eq:curvature2} can be understood as an expression of
the Gauss-Codazzi formula in Riemannian geometry.
Fix $(t,x)\in [0,T]\times \TT$.
We take a two-parameterized smooth map
$\gamma=\gamma(s,\sigma):(-\delta,\delta)\times (-\delta,\delta)\to N$
with sufficiently small
$\delta>0$,
and $Y_4\in \Gamma(\gamma^{-1}TN)$
so that
$\gamma(0,0)=u(t,x)$,
$\gamma_s(0,0)=Y_1(t,x)$,
$\gamma_\sigma(0,0)=Y_2(t,x)$,
and
$Y_4(0,0)=Y_3(t,x)$.
Since $R(\gamma_s,\gamma_\sigma)Y_4=\nabla_s\nabla_\sigma Y_4-\nabla_\sigma\nabla_sY_4$,
we deduce
\begin{align}
dw_{\gamma}
\left(
R(\gamma_s,\gamma_\sigma)Y_4
\right)
&=
dw_{\gamma}
\left(
\nabla_s\nabla_\sigma Y_4
\right)
-
dw_{\gamma}
\left(
\nabla_\sigma\nabla_sY_4
\right)
\nonumber
\\
&=
P(w{\circ}\gamma)
\p_s\left(
dw_{\gamma}(\nabla_\sigma Y_4)
\right)
-
P(w{\circ}\gamma)
\p_\sigma\left(
dw_{\gamma}(\nabla_sY_4)
\right)
\nonumber
\\
&=
P(w{\circ}\gamma)
\left\{
\p_s\left(
dw_{\gamma}(\nabla_\sigma Y_4)
\right)
-
\p_\sigma\left(
dw_{\gamma}(\nabla_sY_4)
\right)
\right\}.
\label{eq:muda}
\end{align}
Similarly to \eqref{eq:cox2} or \eqref{eq:cot2},
the definition of the covariant derivatives yields
\begin{align}
\p_s\left(
dw_{\gamma}(\nabla_\sigma Y_4)
\right)
&=
\p_s\left(
\p_\sigma(dw_{\gamma}(Y_4))
+
\sum_{k=3}^d
\left(
dw_{\gamma}(Y_4), D_k(w{\circ}\gamma)\p_\sigma(w{\circ}\gamma)
\right)
\nu_k(w{\circ}\gamma)
\right)
\nonumber
\\
&=
\p_s\p_\sigma(dw_{\gamma}(Y_4))
+
\sum_{k=3}^d
\p_s
\left\{
\left(
dw_{\gamma}(Y_4), D_k(w{\circ}\gamma)\p_\sigma(w{\circ}\gamma)
\right)
\right\}
\nu_k(w{\circ}\gamma)
\nonumber
\\
&\quad
+
\sum_{k=3}^d
\left(
dw_{\gamma}(Y_4), D_k(w{\circ}\gamma)\p_\sigma(w{\circ}\gamma)
\right)
D_k(w{\circ}\gamma)
\p_s(w{\circ}\gamma),
\label{eq:muda2}
\end{align}
and
\begin{align}
\p_\sigma\left(
dw_{\gamma}(\nabla_sY_4)
\right)
&=
\p_\sigma\p_s(dw_{\gamma}(Y_4))
+
\sum_{k=3}^d
\p_\sigma
\left\{
\left(
dw_{\gamma}(Y_4), D_k(w{\circ}\gamma)\p_s(w{\circ}\gamma)
\right)
\right\}
\nu_k(w{\circ}\gamma)
\nonumber
\\
&\quad
+
\sum_{k=3}^d
\left(
dw_{\gamma}(Y_4), D_k(w{\circ}\gamma)\p_s(w{\circ}\gamma)
\right)
D_k(w{\circ}\gamma)
\p_\sigma(w{\circ}\gamma).
\label{eq:muda3}
\end{align}
By substituting \eqref{eq:muda2} and \eqref{eq:muda3}
into \eqref{eq:muda}, and by noting
$P(w{\circ}\gamma)\nu_k(w{\circ}\gamma)=0$,
we have
\begin{align}
dw_{\gamma}
\left(
R(\gamma_s,\gamma_\sigma)Y_4
\right)
&=
\sum_{k=3}^d
\left(
dw_{\gamma}(Y_4), D_k(w{\circ}\gamma)\p_\sigma(w{\circ}\gamma)
\right)
P(w{\circ}\gamma)
D_k(w{\circ}\gamma)
\p_s(w{\circ}\gamma)
\nonumber
\\
&\quad
-
\sum_{k=3}^d
\left(
dw_{\gamma}(Y_4), D_k(w{\circ}\gamma)\p_s(w{\circ}\gamma)
\right)
P(w{\circ}\gamma)
D_k(w{\circ}\gamma)
\p_\sigma(w{\circ}\gamma).
\nonumber
\end{align}
Thus, by taking the limit $(s,\sigma)\to (0,0)$,
we obtain
\begin{align}
dw_{u}
\left(
R(Y_1,Y_2)Y_3
\right)
&=
\sum_{k=3}^d
\left(
dw_{u}(Y_3), D_k(w{\circ}u)dw_u(Y_2)
\right)
P(w{\circ}u)
D_k(w{\circ}u)
dw_u(Y_1)
\nonumber
\\
&\quad
-
\sum_{k=3}^d
\left(
dw_{u}(Y_3), D_k(w{\circ}u)dw_u(Y_1)
\right)
P(w{\circ}u)
D_k(w{\circ}u)
dw_u(Y_2)
\nonumber
\end{align}
for each $(t,x)$.
This implies \eqref{eq:curvature2}.
Noting that $w:(N,g)\to (\RR^d,(\cdot,\cdot))$ is isometric,
we see that
\eqref{eq:curvature3} follows from
\eqref{eq:curvature} and \eqref{eq:curvature2}.
\end{proof}
\par
The next properties come from the assumption that
$N$ is a two-dimensional real manifold.
Noting that $\left\{
\frac{U_x}{|U_x|}, \frac{J(U)U_x}{|U_x|}, \nu_3(U), \ldots, \nu_d(U)
\right\}$
forms an orthonormal basis of $\RR^d$
for each $(t,x)$ where $U_x(t,x)\ne 0$,
we see
\begin{equation}
|U_x|^2Y
=
(Y,U_x)U_x
+
(Y,J(U)U_x)J(U)U_x
+
\sum_{k=3}^d
(|U_x|^2Y,\nu_k(U))\nu_k(U)
\label{eq:frame_uniqueness}
\end{equation}
holds for every $(t,x)$.
Note that \eqref{eq:frame_uniqueness} remains valid also at $(t,x)$
where $U_x(t,x)=0$, since
both sides of \eqref{eq:frame_uniqueness} then vanish.
Using \eqref{eq:frame_uniqueness} with
$J(U)Y$ instead of $Y$, we have
\begin{align}
|U_x|^2J(U)Y
&=
(J(U)Y,U_x)U_x
+
(J(U)Y,J(U)U_x)J(U)U_x
\nonumber
\\
&\quad
+
\sum_{k=3}^d
(|U_x|^2J(U)Y,\nu_k(U))\nu_k(U)
\nonumber
\\
&=
-(Y,J(U)U_x)U_x
+(Y,U_x)J(U)U_x.
\label{eq:k1}
\end{align}
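Here the last equality in \eqref{eq:k1} uses the following consequences of \eqref{eq:J(wp)1}, \eqref{eq:J(wp)2}, \eqref{eq:J1}, and $P(U)U_x=U_x$:
\begin{align}
(J(U)Y,U_x)&=-(Y,J(U)U_x),
\nonumber
\\
(J(U)Y,J(U)U_x)&=-(Y,(J(U))^2U_x)=(Y,P(U)U_x)=(Y,U_x),
\nonumber
\\
(J(U)Y,\nu_k(U))&=-(Y,J(U)\nu_k(U))=0,
\nonumber
\end{align}
so that, in particular, the normal component in \eqref{eq:frame_uniqueness} vanishes when $Y$ is replaced by $J(U)Y$.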
Moreover, we introduce
$T_2(U), \ldots, T_5(U):[0,T]\times \TT\to \RR^d$
defined as follows.
\begin{definition}
\label{definition:symmetry_uniqueness}
For any $Y:[0,T]\times \TT\to \RR^d$,
\begin{align}
T_2(U)Y
&=
\frac{1}{2}|U_x|^2J(U)Y,
\label{eq:T_2(U)}
\\
T_3(U)Y
&=
\frac{1}{2}
\biggl\{
(Y,\p_xU_x)J(U)U_x
+
(Y,U_x)J(U)\p_xU_x
\nonumber
\\
&\qquad\quad
+
(Y,J(U)\p_xU_x)U_x
+
(Y,J(U)U_x)\p_xU_x
\biggr\},
\label{eq:T_3(U)}
\\
T_4(U)Y
&=
\left(
Y,\p_xU_x+\sum_{k=3}^d(U_x,D_k(U)U_x)\nu_k(U)
\right)
J(U)U_x
-
(Y,U_x)J(U)\p_xU_x,
\label{eq:T_4(U)}
\\
T_5(U)Y
&=
\frac{1}{2}
\biggl\{
(Y,\p_xU_x)J(U)U_x
+
(Y,U_x)J(U)\p_xU_x
\nonumber
\\
&\qquad\quad
-
(Y,J(U)\p_xU_x)U_x
-
(Y,J(U)U_x)\p_xU_x
\biggr\}.
\label{eq:T_5(U)}
\end{align}
\end{definition}
We use \eqref{eq:frame_uniqueness} or \eqref{eq:k1} to show the following.
\begin{lemma}
\label{lemma:symmetry_uniqueness}
For any
$Y,Y_1,Y_2:[0,T]\times \TT\to \RR^d$,
it follows that
\begin{align}
T_2(U)Y
&=
\frac{1}{2}
\left\{
(Y,U_x)J(U)U_x
-(Y,J(U)U_x)U_x
\right\},
\label{eq:tae1}
\\
\p_x(T_2(U))Y
&=
(\p_xU_x,U_x)J(U)Y
+
\frac{1}{2}
|U_x|^2
\p_x(J(U))Y,
\label{eq:tae2}
\\
\p_x(T_2(U))Y
&=
T_5(U)Y
-\frac{1}{2}
(Y,U_x)\sum_{k=3}^d(J(U)U_x,D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad
+\frac{1}{2}
\sum_{k=3}^d(Y,\nu_k(U))(J(U)U_x,D_k(U)U_x)U_x,
\label{eq:tae3}
\\
(T_3(U)Y_1, Y_2)
&=
(Y_1,T_3(U)Y_2),
\label{eq:tae4}
\\
(T_4(U)Y_1, Y_2)
&=
(Y_1,T_4(U)Y_2).
\label{eq:tae5}
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:symmetry_uniqueness}]
First, \eqref{eq:tae1} is a direct consequence of
\eqref{eq:k1}.
Second, \eqref{eq:tae2} follows from
substituting \eqref{eq:T_2(U)} into
$\p_x(T_2(U))Y
=
\p_x\left\{
T_2(U)Y
\right\}
-T_2(U)\p_xY$.
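More precisely, the Leibniz rule gives
\begin{equation}
\p_x\left\{
T_2(U)Y
\right\}
=
(\p_xU_x,U_x)J(U)Y
+
\frac{1}{2}
|U_x|^2
\p_x(J(U))Y
+
\frac{1}{2}
|U_x|^2
J(U)\p_xY,
\nonumber
\end{equation}
and subtracting
$T_2(U)\p_xY=\frac{1}{2}|U_x|^2J(U)\p_xY$
yields \eqref{eq:tae2}.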
Third, by substituting \eqref{eq:tae1} into
$\p_x(T_2(U))Y=\p_x\left\{T_2(U)Y\right\}
-T_2(U)\p_xY$ and by using \eqref{eq:T_5(U)},
we have
\begin{align}
\p_x(T_2(U))Y
&=
T_5(U)Y
+
\frac{1}{2}
(Y,U_x)\p_x(J(U))U_x
-\frac{1}{2}
(Y,\p_x(J(U))U_x)U_x.
\label{eq:iran21}
\end{align}
Recall here that \eqref{eq:kaehler2} yields
$\p_x(J(U))U_x
=
\displaystyle\sum_{k=3}^d
(U_x,J(U)D_k(U)U_x)\nu_k(U)$.
Substituting this into the RHS of \eqref{eq:iran21},
we get \eqref{eq:tae3}.
Next, in view of \eqref{eq:T_3(U)},
it is immediate that
\eqref{eq:tae4} holds.
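Indeed, expanding the pairing by \eqref{eq:T_3(U)} gives
\begin{align}
(T_3(U)Y_1,Y_2)
&=
\frac{1}{2}
\biggl\{
(Y_1,\p_xU_x)(J(U)U_x,Y_2)
+
(Y_1,U_x)(J(U)\p_xU_x,Y_2)
\nonumber
\\
&\qquad\quad
+
(Y_1,J(U)\p_xU_x)(U_x,Y_2)
+
(Y_1,J(U)U_x)(\p_xU_x,Y_2)
\biggr\},
\nonumber
\end{align}
and the swap $Y_1\leftrightarrow Y_2$ interchanges
the first and the fourth terms as well as
the second and the third terms,
which shows the symmetry \eqref{eq:tae4}.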
Finally we show \eqref{eq:tae5}.
The proof is reduced to that of \eqref{eq:gsym} with $i=2$.
Noting that there exists $\Xi_i\in \Gamma(u^{-1}TN)$ such that
$dw_u(\Xi_i)=P(U)Y_i$ for each $i=1,2$,
we have
\begin{align}
T_4(U)Y_1
&=
(Y_1,dw_u(\nabla_xu_x))dw_u(J_uu_x)
-(Y_1,dw_u(u_x))dw_u(J_u\nabla_xu_x)
\nonumber
\\
&=
(P(U)Y_1,dw_u(\nabla_xu_x))dw_u(J_uu_x)
-(P(U)Y_1,dw_u(u_x))dw_u(J_u\nabla_xu_x)
\nonumber
\\
&=
(dw_u(\Xi_1),dw_u(\nabla_xu_x))dw_u(J_uu_x)
-(dw_u(\Xi_1),dw_u(u_x))dw_u(J_u\nabla_xu_x).
\nonumber
\end{align}
Since $w$ is an isometric embedding, this shows
$$
T_4(U)Y_1
=dw_u\left\{
g(\Xi_1,\nabla_xu_x)J_uu_x
-g(\Xi_1,u_x)J_u\nabla_xu_x
\right\},
$$
and thus we obtain
\begin{align}
(T_4(U)Y_1,Y_2)
&=
(dw_u\left\{
g(\Xi_1,\nabla_xu_x)J_uu_x
-g(\Xi_1,u_x)J_u\nabla_xu_x
\right\},P(U)Y_2)
\nonumber
\\
&=
(dw_u\left\{
g(\Xi_1,\nabla_xu_x)J_uu_x
-g(\Xi_1,u_x)J_u\nabla_xu_x
\right\},dw_u(\Xi_2))
\nonumber
\\
&=
g(g(\Xi_1,\nabla_xu_x)J_uu_x
-g(\Xi_1,u_x)J_u\nabla_xu_x, \Xi_2)
\nonumber
\\
&=
g(A_2\Xi_1,\Xi_2).
\nonumber
\end{align}
Since $g(A_2\Xi_1,\Xi_2)=g(\Xi_1,A_2\Xi_2)$
follows from \eqref{eq:gsym},
we conclude that \eqref{eq:tae5} holds.
\end{proof}
In what follows, for simplicity,
we sometimes write $dw$ instead of $dw_u$ or $dw_v$,
and abbreviate
$\displaystyle\sum_{k=3}^{d}$
and
$\displaystyle\sum_{k,\ell=3}^{d}$
as
$\displaystyle\sum_{k}$
and
$\displaystyle\sum_{k,\ell}$,
respectively.
This will cause no confusion.
\vspace{0.5em}
\par
{\bf 2. Analysis of the partial differential equation
satisfied by $\UU$. }
\\
We derive the PDE satisfied by $\UU$.
\par
To this end, we first compute the PDE satisfied by $U$.
Since $u$ satisfies \eqref{eq:pde},
\begin{align}
U_t
&=
dw(u_t)
\nonumber
\\
&=
a\,dw(\nabla_xJ_u\nabla_x^2u_x)
+
\lambda\,dw(J_u\nabla_xu_x)
\nonumber
\\
&\quad
+
b\,dw(g(u_x,u_x)J_u\nabla_xu_x)
+
c\,dw(g(\nabla_xu_x,u_x)J_uu_x)
\nonumber
\\
&=
a\,dw(\nabla_xJ_u\nabla_x^2u_x)
+
\lambda\,J(U)dw(\nabla_xu_x)
\nonumber
\\
&\quad
+
b\, (dw(u_x),dw(u_x))J(U)dw(\nabla_xu_x)
+
c\, (dw(\nabla_xu_x),dw(u_x))J(U)dw(u_x)
\nonumber
\\
&=
a\,dw(\nabla_xJ_u\nabla_x^2u_x)
+
\left\{\lambda+b\,|U_x|^2\right\}
J(U)\UU
+
c\, (\UU,U_x)J(U)U_x.
\label{eq:u1}
\end{align}
Using \eqref{eq:cox2} and \eqref{eq:J1} repeatedly,
we have
\begin{align}
dw(\nabla_x^2u_x)
&=
\p_x\left(
dw(\nabla_xu_x)
\right)
+
\sum_{\ell}
(dw(\nabla_xu_x), D_{\ell}(U)U_x)\nu_{\ell}(U)
\nonumber
\\
&=
\p_x\UU
+
\sum_{\ell}
(\UU, D_{\ell}(U)U_x)\nu_{\ell}(U),
\label{eq:u2}
\\
J(U)dw(\nabla_x^2u_x)
&=
J(U)\p_x\UU
+
\sum_{\ell}
(\UU, D_{\ell}(U)U_x)J(U)\nu_{\ell}(U)
=
J(U)\p_x\UU,
\label{eq:u3}
\\
dw(\nabla_xJ_u\nabla_x^2u_x)
&=
\p_x
\left(
dw(J_u\nabla_x^2u_x)
\right)
+
\sum_k
(dw(J_u\nabla_x^2u_x), D_k(U)U_x)\nu_k(U)
\nonumber
\\
&=
\p_x\left(
J(U)dw(\nabla_x^2u_x)
\right)
+
\sum_k
(J(U)dw(\nabla_x^2u_x), D_k(U)U_x)\nu_k(U)
\nonumber
\\
&=
\p_x\left(
J(U)\p_x\UU
\right)
+
\sum_k
(J(U)\p_x\UU, D_k(U)U_x)\nu_k(U).
\label{eq:u4}
\end{align}
From \eqref{eq:u1} and \eqref{eq:u4},
we have
\begin{align}
U_t
&=
a\,\p_x\left(
J(U)\p_x\UU
\right)
+
a\,
\sum_k
(J(U)\p_x\UU, D_k(U)U_x)\nu_k(U)
+
\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:U_t}
\end{align}
\par
Next, we compute the PDE
satisfied by $\UU$.
From
\eqref{eq:pde},
\eqref{eq:curvature},
and
\eqref{eq:cot2},
it follows that
\begin{align}
\p_t\UU
&=
\p_t(dw(\nabla_xu_x))
\nonumber
\\
&=
dw(\nabla_t\nabla_xu_x)
-
\sum_{k}
(dw(\nabla_xu_x), D_k(U)U_t)\nu_k(U)
\nonumber
\\
&=
dw(\nabla_x^2u_t+R(u_t,u_x)u_x)
-
\sum_{k}
(\UU, D_k(U)U_t)\nu_k(U)
\nonumber
\\
&=
dw(\nabla_x^2u_t+S\left\{g(u_x,u_x)u_t-g(u_t,u_x)u_x\right\})
-
\sum_{k}
(\UU, D_k(U)U_t)\nu_k(U)
\nonumber
\\
&=
dw(\nabla_x^2u_t)
+S\left\{
(dw(u_x),dw(u_x))dw(u_t)
-(dw(u_t),dw(u_x))dw(u_x)
\right\}
\nonumber
\\
&\quad
-
\sum_{k}
(\UU, D_k(U)U_t)\nu_k(U)
\nonumber
\\
&=
dw(\nabla_x^2u_t)
+
S|U_x|^2U_t
-S(U_x,U_t)U_x
-
\sum_{k}
(\UU, D_k(U)U_t)\nu_k(U)
\nonumber
\\
&=:I+II+III+IV.
\label{eq:I-IV}
\end{align}
We compute $II$, $III$, $IV$, and $I$ in order.
\par
Applying \eqref{eq:U_t},
we have
\begin{align}
II
&=
aS\,\p_x\left\{
|U_x|^2J(U)\p_x\UU
\right\}
-2aS\,(\p_xU_x,U_x)J(U)\p_x\UU
\nonumber
\\
&\quad
+aS\,|U_x|^2\sum_k
(J(U)\p_x\UU, D_k(U)U_x)\nu_k(U)
+
\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:II}
\intertext{In the same way, by noting
$(U_x,\nu_k(U))=0$
and
\eqref{eq:J0},
we obtain
}
III&=
-aS\,
\left(
U_x,
\p_x\left(J(U)\p_x\UU\right)
\right)
U_x
+
\mathcal{O}(|U|+|U_x|+|\UU|)
\nonumber
\\
&=
-aS\, (J(U)\p_x^2\UU,U_x)U_x
-aS\, (\p_x(J(U))\p_x\UU,U_x)U_x
+\mathcal{O}(|U|+|U_x|+|\UU|)
\nonumber
\\
&=
aS\, (\p_x^2\UU,J(U)U_x)U_x
-aS\, (\p_x(J(U))\p_x\UU,U_x)U_x
+\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:III1}
\end{align}
Furthermore, by substituting \eqref{eq:k1} with $Y=\p_x^2\UU$
into the first term of the RHS of \eqref{eq:III1},
\begin{align}
III
&=
aS\,
(\p_x^2\UU,U_x)J(U)U_x
-aS\,
|U_x|^2J(U)\p_x^2\UU
-aS\, (\p_x(J(U))\p_x\UU,U_x)U_x
\nonumber
\\
&\quad
+\mathcal{O}(|U|+|U_x|+|\UU|)
\nonumber
\\
&=
aS\,
(\p_x^2\UU,U_x)J(U)U_x
-aS\,
\p_x
\left\{
|U_x|^2J(U)\p_x\UU
\right\}
\nonumber
\\
&\quad
+2aS\,
(\p_xU_x,U_x)J(U)\p_x\UU
+aS\,
|U_x|^2
\p_x(J(U))\p_x\UU
\nonumber
\\
&\quad
-aS\, (\p_x(J(U))\p_x\UU,U_x)U_x
+\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:III}
\end{align}
In the same way, by applying \eqref{eq:U_t},
\begin{align}
IV&=
-a\,
\sum_k
\left(
\UU, D_k(U)\p_x\left(
J(U)\p_x\UU
\right)
\right)\nu_k(U)
\nonumber
\\
&\quad
-a\,
\sum_k
\left(
\UU,
D_k(U)\sum_{\ell}
(J(U)\p_x\UU, D_{\ell}(U)U_x)\nu_{\ell}(U)
\right)
\nu_k(U)
\nonumber
\\
&\quad
+
\mathcal{O}(|U|+|U_x|+|\UU|)
\nonumber
\\
&=
-a\,
\sum_k
\left(
\UU, D_k(U)\p_x\left(
J(U)\p_x\UU
\right)
\right)\nu_k(U)
\nonumber
\\
&\quad
-a\,
\sum_{k,\ell}
(\UU, D_k(U)\nu_{\ell}(U))
(J(U)\p_x\UU, D_{\ell}(U)U_x)
\nu_k(U)
\nonumber
\\
&\quad
+
\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:IV}
\end{align}
Let us now move on to the computation of $I$.
We start with
\begin{align}
I&=dw(\nabla_x^2u_t)
\nonumber
\\
&=
a\, dw(\nabla_x^4J_u\nabla_xu_x)
+
\lambda\,
dw(\nabla_x^2J_u\nabla_xu_x)
\nonumber
\\
&\quad
+
b\,dw
\left(
\nabla_x^2\left\{
g(u_x,u_x)J_u\nabla_xu_x
\right\}
\right)
+
c\,dw
\left(
\nabla_x^2\left\{
g(\nabla_xu_x,u_x)J_uu_x
\right\}
\right)
\nonumber
\\
&=:
I_1+I_2+I_3+I_4.
\label{eq:I_1-I_4}
\end{align}
We compute $I_2$, $I_3$, $I_4$, and $I_1$ in order.
\par
For $I_2$, we have
\begin{align}
I_2&=
\lambda\,
dw\left(\nabla_x(\nabla_xJ_u\nabla_xu_x)\right)
\nonumber
\\
&=
\lambda\,
\p_x\left(
dw(\nabla_xJ_u\nabla_xu_x)
\right)
+
\lambda\,
\sum_{\ell}
\left(
dw(\nabla_xJ_u\nabla_xu_x),
D_{\ell}(U)U_x
\right)
\nu_{\ell}(U).
\nonumber
\end{align}
Since
\begin{align}
dw(\nabla_xJ_u\nabla_xu_x)
&=
\p_x\left(
dw(J_u\nabla_xu_x)
\right)
+
\sum_k
\left(
dw(J_u\nabla_xu_x), D_k(U)U_x
\right)
\nu_k(U)
\nonumber
\\
&=
\p_x\left(
J(U)\UU
\right)
+
\sum_k
\left(
J(U)\UU, D_k(U)U_x
\right)
\nu_k(U)
\nonumber
\\
&=
J(U)\p_x\UU
+
\p_x(J(U))\UU
+
\sum_k
\left(
J(U)\UU, D_k(U)U_x
\right)
\nu_k(U),
\label{eq:eri1}
\end{align}
we obtain
\begin{align}
I_2&=
\lambda\,
\p_x\left\{
J(U)\p_x\UU
\right\}
+
\lambda\,
\p_x(J(U))\p_x\UU
+
2\lambda\,
\sum_k
\left(
J(U)\p_x\UU, D_k(U)U_x
\right)
\nu_k(U)
\nonumber
\\
&\quad
+\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:I_2}
\end{align}
\par
For $I_3$, we have
\begin{align}
I_3
&=
b\,dw\left\{
g(u_x,u_x)\nabla_x^2J_u\nabla_xu_x
\right\}
+
b\,dw\left\{
2\nabla_x(g(u_x,u_x))\nabla_xJ_u\nabla_xu_x
\right\}
\nonumber
\\
&\quad
+
b\,dw\left\{
\nabla_x^2(g(u_x,u_x))J_u\nabla_xu_x
\right\}
\nonumber
\\
&=
b\,g(u_x,u_x)
dw(\nabla_x^2J_u\nabla_xu_x)
+
4b\, g(\nabla_xu_x,u_x)dw(\nabla_xJ_u\nabla_xu_x)
\nonumber
\\
&\quad
+
2b\, g(\nabla_x^2u_x,u_x)dw(J_u\nabla_xu_x)
+
2b\, g(\nabla_xu_x,\nabla_xu_x)dw(J_u\nabla_xu_x)
\nonumber
\\
&=
b\, |U_x|^2dw(\nabla_x^2J_u\nabla_xu_x)
+
4b\, (dw(\nabla_xu_x),U_x)dw(\nabla_xJ_u\nabla_xu_x)
\nonumber
\\
&\quad
+
2b\, (dw(\nabla_x^2u_x),U_x)dw(J_u\nabla_xu_x)
+
2b\, |dw(\nabla_xu_x)|^2dw(J_u\nabla_xu_x)
\nonumber
\\
&=
b\, |U_x|^2dw(\nabla_x^2J_u\nabla_xu_x)
+
4b\, (\UU,U_x)dw(\nabla_xJ_u\nabla_xu_x)
\nonumber
\\
&\quad
+
2b\, (dw(\nabla_x^2u_x),U_x)J(U)\UU
+
2b\, |\UU|^2J(U)\UU.
\nonumber
\end{align}
Here, we recall the K\"ahler condition on $(N,J,g)$
to see
$dw(\nabla_x^2J_u\nabla_xu_x)=dw(\nabla_xJ_u\nabla_x^2u_x)$.
Hence, substituting \eqref{eq:u4}, \eqref{eq:eri1}, and \eqref{eq:u2},
we deduce
\begin{align}
I_3&=
b\,|U_x|^2
\p_x\left\{
J(U)\p_x\UU
\right\}
+b\,|U_x|^2
\sum_k
(J(U)\p_x\UU, D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad +
4b\, (\UU,U_x)J(U)\p_x\UU
+
2b\, (\p_x\UU,U_x)J(U)\UU
+\mathcal{O}(|U|+|U_x|+|\UU|)
\nonumber
\\
&=
b\,
\p_x
\left\{
|U_x|^2
J(U)\p_x\UU
\right\}
-2b\,
(\p_xU_x,U_x)
J(U)\p_x\UU
\nonumber
\\
&\quad
+b\,|U_x|^2
\sum_k
(J(U)\p_x\UU, D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad +
4b\, (\UU,U_x)J(U)\p_x\UU
+
2b\, (\p_x\UU,U_x)J(U)\UU
+\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:erieri}
\end{align}
Furthermore,
by noting
$\UU=
dw(\nabla_xu_x)
=
\p_xU_x
+
\displaystyle\sum_k
(U_x,D_k(U)U_x)\nu_k(U)$,
we see
\begin{align}
(\UU,U_x)
&=
(\p_xU_x,U_x)
+
\sum_k(U_x,D_k(U)U_x)(\nu_k(U),U_x)
=
(\p_xU_x,U_x),
\label{eq:eri6}
\\
J(U)\UU
&=
J(U)\left(
\p_xU_x
+
\sum_k
(U_x,D_k(U)U_x)\nu_k(U)
\right)
=
J(U)\p_xU_x.
\label{eq:eri7}
\end{align}
Collecting the information
\eqref{eq:erieri},
\eqref{eq:eri6},
and \eqref{eq:eri7},
we obtain
\begin{align}
I_3&=
b\,
\p_x
\left\{
|U_x|^2
J(U)\p_x\UU
\right\}
+2b\,
(\p_xU_x,U_x)
J(U)\p_x\UU
+
2b\, (\p_x\UU,U_x)J(U)\p_xU_x
\nonumber
\\
&\quad
+b\,|U_x|^2
\sum_k
(J(U)\p_x\UU, D_k(U)U_x)\nu_k(U)
+\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:I_3}
\end{align}
\par
For $I_4$, we have
\begin{align}
I_4
&=
c\,dw
\left(
g(\nabla_x^3u_x,u_x)J_uu_x
\right)
+
3c\,dw
\left(
g(\nabla_x^2u_x,\nabla_xu_x)J_uu_x
\right)
\nonumber
\\
&\quad
+
2c\,dw
\left(
g(\nabla_x^2u_x,u_x)J_u\nabla_xu_x
\right)
+
2c\,dw
\left(
g(\nabla_xu_x,\nabla_xu_x)J_u\nabla_xu_x
\right)
\nonumber
\\
&\quad
+
c\,dw
\left(
g(\nabla_xu_x,u_x)J_u\nabla_x^2u_x
\right)
\nonumber
\\
&=
c\,
(dw(\nabla_x^3u_x),U_x)J(U)U_x
+
3c\,
(dw(\nabla_x^2u_x),\UU)J(U)U_x
\nonumber
\\
&\quad
+
2c\,
(dw(\nabla_x^2u_x),U_x)J(U)\UU
+
2c\,
|\UU|^2J(U)\UU
+
c\,
(\UU,U_x)J(U)dw(\nabla_x^2u_x).
\nonumber
\end{align}
From \eqref{eq:u2}, it follows that
\begin{align}
dw(\nabla_x^3u_x)
&=
\p_x\left\{
dw(\nabla_x^2u_x)
\right\}
+
\sum_{\ell}
\left(
dw(\nabla_x^2u_x),D_{\ell}(U)U_x
\right)
\nu_{\ell}(U)
\nonumber
\\
&=
\p_x^2\UU
+
2\sum_{k}
\left(
\p_x\UU, D_k(U)U_x
\right)
\nu_k(U)
+
\mathcal{O}(|U|+|U_x|+|\UU|).
\nonumber
\end{align}
This together with $(\nu_k(U),U_x)=0$ yields
\begin{align}
(dw(\nabla_x^3u_x),U_x)
&=
(\p_x^2\UU,U_x)
+
\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:eri9}
\end{align}
By using \eqref{eq:u2}, \eqref{eq:eri6},
\eqref{eq:eri7}, and \eqref{eq:eri9},
we obtain
\begin{align}
I_4
&=
c\,
(\p_x^2\UU,U_x)J(U)U_x
+
3c\,
(\p_x\UU,\UU)J(U)U_x
\nonumber
\\
&\quad
+
2c\,
(\p_x\UU,U_x)J(U)\UU
+
c\,
(\UU,U_x)J(U)\p_x\UU
+
\mathcal{O}(|U|+|U_x|+|\UU|)
\nonumber
\\
&=
c\,
(\p_x^2\UU,U_x)J(U)U_x
+
3c\,
(\p_x\UU,\p_xU_x)J(U)U_x
\nonumber
\\
&\quad
+
2c\,
(\p_x\UU,U_x)J(U)\p_xU_x
+
c\,
(\p_xU_x,U_x)J(U)\p_x\UU
+
\mathcal{O}(|U|+|U_x|+|\UU|).
\label{eq:I_4}
\end{align}
\par
For $I_1=a\,dw(\nabla_x\nabla_xJ_u\nabla_x^2\nabla_xu_x)$,
we start with
\begin{align}
&dw(\nabla_x\nabla_xJ_u\nabla_x^2\nabla_xu_x)
\nonumber
\\
&=
\p_x\left\{
dw(\nabla_xJ_u\nabla_x^2\nabla_xu_x)
\right\}
+
\sum_{k}
\left(
dw(\nabla_xJ_u\nabla_x^2\nabla_xu_x),
D_k(U)U_x
\right)\nu_k(U),
\label{eq:I11}
\end{align}
and
\begin{align}
&dw(\nabla_xJ_u\nabla_x^2\nabla_xu_x)
\nonumber
\\
&=
\p_x\left\{
dw(J_u\nabla_x^2\nabla_xu_x)
\right\}
+
\sum_{\ell}
\left(
dw(J_u\nabla_x^2\nabla_xu_x),
D_{\ell}(U)U_x
\right)
\nu_{\ell}(U).
\label{eq:I12}
\end{align}
From \eqref{eq:J(wp)1}, \eqref{eq:kaehler2} with $Y=\p_x\UU$,
the K\"ahler condition on $(N,J,g)$,
and \eqref{eq:u4},
it follows that
\begin{align}
dw(J_u\nabla_x^2\nabla_xu_x)
&=
J(U)\p_x^2\UU
+
\p_x(J(U))\p_x\UU
+
\sum_k
(J(U)\p_x\UU, D_k(U)U_x)\nu_k(U)
\nonumber
\\
&=
J(U)\p_x^2\UU
+
\sum_k
\left(\p_x\UU,J(U)D_k(U)U_x\right)\nu_k(U)
\nonumber
\\
&\quad
-\sum_k
(\p_x\UU,\nu_k(U))J(U)D_k(U)U_x
-
\sum_k
(\p_x\UU, J(U)D_k(U)U_x)\nu_k(U)
\nonumber
\\
&=
J(U)\p_x^2\UU
-\sum_k
(\p_x\UU,\nu_k(U))J(U)D_k(U)U_x.
\nonumber
\end{align}
Here note that
$(\UU, \nu_k(U))=0$ holds.
By taking the derivative of both sides in $x$,
we see
\begin{equation}
(\p_x\UU,\nu_k(U))
=
-(\UU,\p_x(\nu_k(U)))
=
-(\UU, D_k(U)U_x).
\nonumber
\end{equation}
Using this,
we obtain
\begin{align}
dw(J_u\nabla_x^2\nabla_xu_x)
&=
J(U)\p_x^2\UU
+\sum_k
(\UU,D_k(U)U_x)J(U)D_k(U)U_x.
\label{eq:I15}
\end{align}
Furthermore, by substituting \eqref{eq:I15} into
\eqref{eq:I12}, we have
\begin{align}
&dw(\nabla_xJ_u\nabla_x^2\nabla_xu_x)
\nonumber
\\
&=
\p_x\left\{
J(U)\p_x^2\UU
+\sum_n
(\UU,D_n(U)U_x)J(U)D_n(U)U_x
\right\}
\nonumber
\\
&\quad
+
\sum_{\ell}
\left(
J(U)\p_x^2\UU
+\sum_n
(\UU,D_n(U)U_x)J(U)D_n(U)U_x,
D_{\ell}(U)U_x
\right)
\nu_{\ell}(U)
\nonumber
\\
&=
\p_x\left\{
J(U)\p_x^2\UU
\right\}
-
\sum_{\ell}
\left(\p_x^2\UU,
J(U)D_{\ell}(U)U_x
\right)\nu_{\ell}(U)
\nonumber
\\
&\quad
+
\sum_{n}
(\p_x\UU, D_n(U)U_x)J(U)D_n(U)U_x
\nonumber
\\
&\quad
+\sum_{n}
\left(
\UU,
\p_x
\left\{
D_n(U)U_x
\right\}
\right)
J(U)D_n(U)U_x
\nonumber
\\
&\quad
+
\sum_{n}
(\UU, D_n(U)U_x)
\p_x\left\{
J(U)D_n(U)U_x
\right\}
\nonumber
\\
&\quad
+
\sum_{\ell,n}
(\UU,D_n(U)U_x)
\left(
J(U)D_n(U)U_x,D_{\ell}(U)U_x
\right)
\nu_{\ell}(U).
\label{eq:I16}
\end{align}
Therefore, by substituting \eqref{eq:I16} into \eqref{eq:I11},
and by using
$\p_x^2U_x=\p_x\UU+\mathcal{O}(|U|+|U_x|+|\UU|)$,
we deduce
\begin{align}
I_1
&=
a\,\p_x^2\left\{
J(U)\p_x^2\UU
\right\}
-
a\,\sum_{\ell}
\left(\p_x^3\UU,
J(U)D_{\ell}(U)U_x
\right)\nu_{\ell}(U)
\nonumber
\\
&\quad
-
a\,\sum_{\ell}
\left(\p_x^2\UU,
\p_x\left\{
J(U)D_{\ell}(U)U_x
\right\}
\right)\nu_{\ell}(U)
\nonumber
\\
&\quad
-
a\,\sum_{\ell}
\left(\p_x^2\UU,
J(U)D_{\ell}(U)U_x
\right)
D_{\ell}(U)U_x
\nonumber
\\
&\quad
+
a\,\sum_{n}
(\p_x^2\UU, D_n(U)U_x)J(U)D_n(U)U_x
\nonumber
\\
&\quad
+
a\,\sum_{n}
(\p_x\UU, \p_x\left\{D_n(U)U_x\right\})J(U)D_n(U)U_x
\nonumber
\\
&\quad
+
a\,\sum_{n}
(\p_x\UU, D_n(U)U_x)\p_x\left\{J(U)D_n(U)U_x\right\}
\nonumber
\\
&\quad
+a\,\sum_{n}
\left(
\p_x\UU,
\p_x
\left\{
D_n(U)U_x
\right\}
\right)
J(U)D_n(U)U_x
\nonumber
\\
&\quad
+
a\,\sum_{n}
\left(
\UU,
D_n(U)\p_x^2U_x
\right)
J(U)D_n(U)U_x
\nonumber
\\
&\quad
+
a\,\sum_{n}
(\p_x\UU, D_n(U)U_x)
\p_x\left\{
J(U)D_n(U)U_x
\right\}
\nonumber
\\
&\quad
+
a\,\sum_{n}
(\UU, D_n(U)U_x)
J(U)
D_n(U)\p_x^2U_x
\nonumber
\\
&\quad
+
a\,\sum_{\ell,n}
(\p_x\UU,D_n(U)U_x)
\left(
J(U)D_n(U)U_x,D_{\ell}(U)U_x
\right)
\nu_{\ell}(U)
\nonumber
\\
&\quad
+
a\,\sum_{k}
\left(
\p_x\left\{
J(U)\p_x^2\UU
\right\},
D_k(U)U_x
\right)\nu_k(U)
\nonumber
\\
&\quad
-
a\,\sum_{k,\ell}
\left(\p_x^2\UU,
J(U)D_{\ell}(U)U_x
\right)
\left(
\nu_{\ell}(U),
D_k(U)U_x
\right)\nu_k(U)
\nonumber
\\
&\quad
+
a\,\sum_{k,n}
(\p_x\UU, D_n(U)U_x)
\left(
J(U)D_n(U)U_x,
D_k(U)U_x
\right)\nu_k(U)
\nonumber
\\
&\quad
+
\mathcal{O}
(|U|+|U_x|+|\UU|)
\nonumber
\\
&=
a\,\p_x^2\left\{
J(U)\p_x^2\UU
\right\}
-
2a\,
F_1(\p_x^3\UU)
-
a\,F_2(\p_x^2\UU)
+
a\,F_3(\p_x^2\UU)
\nonumber
\\
&\quad
+
2a\,F_4(\p_x\UU)
+
2a\,F_5(\p_x\UU)
+a\,F_6(\p_x\UU)
+
a\,F_7(\p_x\UU)
\nonumber
\\
&\quad
+
\sum_k
\mathcal{O}
\left(
|U|+|U_x|+|\UU|+|\p_x\UU|+|\p_x^2\UU|
\right)\nu_k(U)
\nonumber
\\
&\quad
+
\mathcal{O}
(|U|+|U_x|+|\UU|),
\label{eq:I20}
\end{align}
where for any $Y:[0,T]\times \TT\to \RR^d$,
\begin{align}
F_1(Y)&=\sum_{k}
\left(Y,
J(U)D_{k}(U)U_x
\right)\nu_{k}(U),
\nonumber
\\
F_2(Y)&=\sum_{k}
\left(Y,
J(U)D_{k}(U)U_x
\right)
D_{k}(U)U_x,
\nonumber
\\
F_3(Y)&=\sum_{k}
(Y, D_k(U)U_x)J(U)D_k(U)U_x,
\nonumber
\\
F_4(Y)&=\sum_{k}
(Y, \p_x\left\{D_k(U)U_x\right\})J(U)D_k(U)U_x,
\nonumber
\\
F_5(Y)&=\sum_{k}
(Y, D_k(U)U_x)\p_x\left\{J(U)D_k(U)U_x\right\},
\nonumber
\\
F_6(Y)&=\sum_{k}
\left(
\UU,
D_k(U)Y
\right)
J(U)D_k(U)U_x,
\nonumber
\\
F_7(Y)&=\sum_{k}
(\UU, D_k(U)U_x)
J(U)
D_k(U)Y.
\nonumber
\end{align}
Combining
\eqref{eq:II},
\eqref{eq:III},
\eqref{eq:IV},
\eqref{eq:I_2},
\eqref{eq:I_3},
\eqref{eq:I_4},
and
\eqref{eq:I20},
we derive
\begin{align}
\p_t\UU
&=
I_1+I_2+I_3+I_4+II+III+IV
\nonumber
\\
&=
a\,\p_x^2\left\{
J(U)\p_x^2\UU
\right\}
-
2a\,F_1(\p_x^3\UU)
+
\lambda\,
\p_x\left\{
J(U)\p_x\UU
\right\}
\nonumber
\\
&\quad
+
(b+aS-aS)\,
\p_x
\left\{
|U_x|^2
J(U)\p_x\UU
\right\}
+
(c+aS)\,
(\p_x^2\UU,U_x)J(U)U_x
\nonumber
\\
&\quad
-
a\,F_2(\p_x^2\UU)
+
a\,F_3(\p_x^2\UU)
+
2a\,F_4(\p_x\UU)
+
2a\,F_5(\p_x\UU)
\nonumber
\\
&\quad
+a\,F_6(\p_x\UU)
+
a\,F_7(\p_x\UU)
+(2b+c-2aS+2aS)\,
(\p_xU_x,U_x)
J(U)\p_x\UU
\nonumber
\\
&\quad
+
(2b+2c)\, (\p_x\UU,U_x)J(U)\p_xU_x
+
3c\,
(\p_x\UU,\p_xU_x)J(U)U_x
\nonumber
\\
&\quad
-aS\, (\p_x(J(U))\p_x\UU,U_x)U_x
+aS\,
|U_x|^2
\p_x(J(U))\p_x\UU
\nonumber
\\
&\quad
+\lambda\,
\p_x(J(U))\p_x\UU
+
r(U,U_x,\UU,\p_x\UU,\p_x^2\UU)
+
\mathcal{O}
(|U|+|U_x|+|\UU|),
\label{eq:eqUU}
\end{align}
where
\begin{equation}
r(U,U_x,\UU,\p_x\UU,\p_x^2\UU)
=
\sum_k
\mathcal{O}
\left(
|U|+|U_x|+|\UU|+|\p_x\UU|+|\p_x^2\UU|
\right)\nu_k(U).
\nonumber
\end{equation}
\vspace{0.5em}
\\
{\bf 3. Classical energy estimates for
$\|\WW\|_{L^2(\TT;\RR^d)}$ with the loss of derivatives}
\\
We compute $\p_t\WW=\p_t\UU-\p_t\VV$
and then derive the classical energy estimate for $\WW$ in $L^2$.
Clearly, $\VV$ also satisfies \eqref{eq:eqUU}
with $(U,\UU)$ replaced by $(V,\VV)$.
Hence, by using the mean value theorem,
we obtain
\begin{align}
\p_t\WW
&=
a\,\p_x^2\left\{
J(U)\p_x^2\WW
\right\}
-
2a\,F_1(\p_x^3\WW)
+
\lambda\,
\p_x\left\{
J(U)\p_x\WW
\right\}
\nonumber
\\
&\quad
+
b\,
\p_x
\left\{
|U_x|^2
J(U)\p_x\WW
\right\}
+
(c+aS)\,
(\p_x^2\WW,U_x)J(U)U_x
\nonumber
\\
&\quad
-
a\,F_2(\p_x^2\WW)
+
a\,F_3(\p_x^2\WW)
+
2a\,F_4(\p_x\WW)
+
2a\,F_5(\p_x\WW)
\nonumber
\\
&\quad
+a\,F_6(\p_x\WW)
+
a\,F_7(\p_x\WW)
+(2b+c)\,
(\p_xU_x,U_x)
J(U)\p_x\WW
\nonumber
\\
&\quad
+
(2b+2c)\, (\p_x\WW,U_x)J(U)\p_xU_x
+
3c\,
(\p_x\WW,\p_xU_x)J(U)U_x
\nonumber
\\
&\quad
-aS\, (\p_x(J(U))\p_x\WW,U_x)U_x
+aS\,
|U_x|^2
\p_x(J(U))\p_x\WW
+\lambda\,
\p_x(J(U))\p_x\WW
\nonumber
\\
&\quad
+
r(U,U_x,\UU,\p_x\UU,\p_x^2\UU)
-
r(V,V_x,\VV,\p_x\VV,\p_x^2\VV)
\nonumber
\\
&\quad
+
\mathcal{O}
(|Z|+|Z_x|+|\WW|).
\label{eq:eqWW}
\end{align}
Note that
$F_1(\cdot),\ldots,F_7(\cdot)$
should properly be expressed globally,
not in terms of a local orthonormal frame.
This is possible
either by using the second fundamental form on $w(N)$
and its derivatives,
or by following the argument in \cite{onodera2} based on
a partition of unity on $w(N)$.
However, for simplicity and ease of understanding,
we will continue to use the local expression
without loss of generality.
\vspace{0.3em}
\par
We move on to the classical energy estimate for
$\|\WW\|_{L^2}^2$.
Since $k\geqslant 6$,
$\WW\in L^{\infty}(0,T;H^5)\cap C([0,T];H^4)\cap C^1([0,T];L^2)$.
This together with \eqref{eq:eqWW} implies
\begin{align}
&\frac{1}{2}
\frac{d}{dt}
\|\WW\|_{L^2}^2
\nonumber
\\
&=
\lr{\p_t\WW,\WW}
\nonumber
\\
&=
a\,
\lr{
\p_x^2\left\{
J(U)\p_x^2\WW
\right\},
\WW}
-
2a\,
\lr{
F_1(\p_x^3\WW)
, \WW
}
+
\lambda\,
\lr{
\p_x\left\{
J(U)\p_x\WW
\right\}
,\WW
}
\nonumber
\\
&\quad
+
b\,
\lr{
\p_x
\left\{
|U_x|^2
J(U)\p_x\WW
\right\}
,\WW
}
+
(c+aS)\,
\lr{
(\p_x^2\WW,U_x)J(U)U_x,
\WW
}
\nonumber
\\
&\quad
-
a\,
\lr{
F_2(\p_x^2\WW),
\WW
}
+
a\,
\lr{
F_3(\p_x^2\WW),
\WW
}
+
2a\,
\lr{
F_4(\p_x\WW),
\WW
}
\nonumber
\\
&\quad
+
2a\,
\lr{
F_5(\p_x\WW),
\WW
}
+a\,
\lr{
F_6(\p_x\WW),
\WW
}
+
a\,
\lr{
F_7(\p_x\WW),
\WW
}
\nonumber
\\
&\quad
+(2b+c)\,
\lr{
(\p_xU_x,U_x)
J(U)\p_x\WW,
\WW
}
+
(2b+2c)\,
\lr{
(\p_x\WW,U_x)J(U)\p_xU_x,
\WW
}
\nonumber
\\
&\quad
+
3c\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,
\WW
}
-aS\,
\lr{
(\p_x(J(U))\p_x\WW,U_x)U_x,
\WW
}
\nonumber
\\
&\quad
+aS\,
\lr{
|U_x|^2
\p_x(J(U))\p_x\WW,
\WW
}
+\lambda\,
\lr{
\p_x(J(U))\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+
\lr{
r(U,U_x,\UU,\p_x\UU,\p_x^2\UU)
-
r(V,V_x,\VV,\p_x\VV,\p_x^2\VV),
\WW
}
\nonumber
\\
&\quad
+
\lr{
\mathcal{O}
(|Z|+|Z_x|+|\WW|),
\WW
}.
\label{eq:EEn}
\end{align}
\par
Let us compute the RHS of the above
term by term.
First, by integrating by parts, it is immediate to see
\begin{align}
a\,
\lr{
\p_x^2\left\{
J(U)\p_x^2\WW
\right\},
\WW}
&=
a\,
\lr{
J(U)\p_x^2\WW,
\p_x^2\WW
}
=0,
\nonumber
\\
\lambda\,
\lr{
\p_x\left\{
J(U)\p_x\WW
\right\}
,\WW
}
&=
-\lambda\,
\lr{
J(U)\p_x\WW
,\p_x\WW
}
=0,
\nonumber
\\
b\,
\lr{
\p_x
\left\{
|U_x|^2
J(U)\p_x\WW
\right\}
,\WW
}
&=
-b\,
\lr{
|U_x|^2
J(U)\p_x\WW
,\p_x\WW
}
=0.
\nonumber
\end{align}
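Here, the final equalities rely on the pointwise skew-symmetry
\eqref{eq:J0} of $J(U)$:
for any $X:[0,T]\times \TT\to \RR^d$,
\begin{equation}
(J(U)X,X)
=
-(X,J(U)X)
=
-(J(U)X,X),
\nonumber
\end{equation}
so that $(J(U)X,X)=0$ pointwise,
applied with $X=\p_x^2\WW$ and $X=\p_x\WW$.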
\par
Next,
by the Cauchy-Schwarz inequality,
there holds
\begin{align}
\lr{
\mathcal{O}
(|Z|+|Z_x|+|\WW|),
\WW
}
&\leqslant
\|\mathcal{O}
(|Z|+|Z_x|+|\WW|)\|_{L^2}
\|\WW\|_{L^2}
\nonumber
\\
&
\leqslant
C\left\{
\|Z\|_{L^2}^2
+\|Z_x\|_{L^2}^2
+\|\WW\|_{L^2}^2
\right\}
\nonumber
\end{align}
for some $C>0$.
Here and hereafter,
various positive constants depending on
$\|u_x\|_{L^{\infty}(0,T;H^6)}$ and $\|v_x\|_{L^{\infty}(0,T;H^6)}$
will be denoted by the same $C$
without any comments.
Besides, we introduce the notation $D(t)$, whose square is defined by
$$D(t)^2=\|Z\|_{L^2}^2
+\|Z_x\|_{L^2}^2
+\|\WW\|_{L^2}^2.$$
\par
Next, by noting
\begin{align}
&
r(U,U_x,\UU,\p_x\UU,\p_x^2\UU)
-
r(V,V_x,\VV,\p_x\VV,\p_x^2\VV)
\nonumber
\\
&=
\sum_k
\mathcal{O}
\left(
|Z|+|Z_x|+|\WW|+|\p_x\WW|+|\p_x^2\WW|
\right)\nu_k(U)
\nonumber
\\
&\quad
+\sum_k
\mathcal{O}
\left(
|U|+|U_x|+|\UU|+|\p_x\UU|+|\p_x^2\UU|
\right)(\nu_k(U)-\nu_k(V)),
\nonumber
\end{align}
we use \eqref{eq:key1} obtained in Lemma~\ref{lemma:nu},
$\p_xZ_x=\WW+\mathcal{O}(|Z|+|Z_x|)$,
and the integration by parts,
to obtain
\begin{align}
&\lr{
r(U,U_x,\UU,\p_x\UU,\p_x^2\UU)
-
r(V,V_x,\VV,\p_x\VV,\p_x^2\VV),
\WW
}
\nonumber
\\
&\leqslant
\lr{
\sum_k
\mathcal{O}
\left(
|Z|+|Z_x|+|\WW|+|\p_x\WW|+|\p_x^2\WW|
\right)\nu_k(U),
\WW
}
+C\,D(t)^2
\nonumber
\\
&=
\int_{\TT}
\sum_k
\mathcal{O}
\left(
|Z|+|Z_x|+|\WW|+|\p_x\WW|+|\p_x^2\WW|
\right)
\mathcal{O}(|Z|)\,dx
+C\,D(t)^2
\nonumber
\\
&=
\int_{\TT}
\sum_k
\mathcal{O}
\left(
|Z|+|Z_x|+|\WW|
\right)
\mathcal{O}(|Z|+|Z_x|+|\WW|)\,dx
+C\,D(t)^2
\nonumber
\\
&\leqslant
C\,D(t)^2.
\label{eq:18e}
\end{align}
\par
In the next computation,
the K\"ahler condition on $(N,J,g)$
plays a crucial role.
Indeed, we
apply \eqref{eq:kaehler2} with $Y=\p_x\WW$ and use \eqref{eq:key2}
to obtain
\begin{align}
\p_x(J(U))\p_x\WW
&=
\sum_k
\left(\p_x\WW,J(U)D_k(U)U_x\right)\nu_k(U)
-\sum_k
(\p_x\WW,\nu_k(U))J(U)D_k(U)U_x
\nonumber
\\
&=
\sum_k
\left(\p_x\WW,J(U)D_k(U)U_x\right)\nu_k(U)
+\mathcal{O}(|Z|+|Z_x|+|\WW|).
\label{eq:0yumi}
\end{align}
By using \eqref{eq:0yumi} and \eqref{eq:key1}, we see
\begin{align}
(\p_x(J(U))\p_x\WW,U_x)
&=
\sum_k
\left(\p_x\WW,J(U)D_k(U)U_x\right)(\nu_k(U), U_x)
+\mathcal{O}(|Z|+|Z_x|+|\WW|)
\nonumber
\\
&=
\mathcal{O}(|Z|+|Z_x|+|\WW|),
\label{eq:1yumi}
\\
(\p_x(J(U))\p_x\WW,\WW)
&=
\sum_k
\left(\p_x\WW,J(U)D_k(U)U_x\right)(\nu_k(U), \WW)
+\mathcal{O}(|Z|+|Z_x|+|\WW|)
\nonumber
\\
&=
\sum_k
\left(\p_x\WW,J(U)D_k(U)U_x\right)\mathcal{O}(|Z|)
+
\mathcal{O}(|Z|+|Z_x|+|\WW|).
\label{eq:2yumi}
\end{align}
Thus, by using \eqref{eq:1yumi} and the Cauchy-Schwarz inequality,
we have
\begin{align}
-aS\,
\lr{
(\p_x(J(U))\p_x\WW,U_x)U_x,
\WW
}
&\leqslant
C\,D(t)^2.
\nonumber
\end{align}
In the same manner,
by using \eqref{eq:2yumi}, the Cauchy-Schwarz inequality,
together with the integration by parts,
we deduce
\begin{align}
aS\,
\lr{
|U_x|^2
\p_x(J(U))\p_x\WW,
\WW}
+\lambda\,
\lr{
\p_x(J(U))\p_x\WW,
\WW
}
&\leqslant
C\,D(t)^2.
\nonumber
\end{align}
Collecting them, we obtain
\begin{align}
&\frac{1}{2}\frac{d}{dt}
\|\WW\|_{L^2}^2
\nonumber
\\
&\leqslant
(c+aS)\,
\lr{
(\p_x^2\WW,U_x)J(U)U_x,
\WW
}
+(2b+c)\,
\lr{
(\p_xU_x,U_x)
J(U)\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+
(2b+2c)\,
\lr{
(\p_x\WW,U_x)J(U)\p_xU_x,
\WW
}
+
3c\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,
\WW
}
\nonumber
\\
&\quad
+\sum_{i=1}^7R_i
+C\,D(t)^2,
\label{eq:nn}
\end{align}
where
\begin{alignat}{2}
R_1&=-
2a\,
\lr{
F_1(\p_x^3\WW)
, \WW
},
R_2=-
a\,
\lr{
F_2(\p_x^2\WW),
\WW
},
R_3=a\,
\lr{
F_3(\p_x^2\WW),
\WW
},
\nonumber
\\
R_4&=2a\,
\lr{
F_4(\p_x\WW),
\WW
},
R_5
=2a\,
\lr{
F_5(\p_x\WW),
\WW
},
R_6=a\,
\lr{
F_6(\p_x\WW),
\WW
},
\nonumber
\\
R_7&=a\,
\lr{
F_7(\p_x\WW),
\WW
}.
\nonumber
\end{alignat}
\par
In what follows, we need to compute more carefully.
Let us consider $R_1$.
We start by integrating by parts to see
\begin{align}
R_1
&=
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
\p_x\left\{J(U)D_{k}(U)U_x\right\}
\right)\nu_{k}(U)
, \WW
}
\nonumber
\\
&\quad
+
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)U_x
, \WW
}
\nonumber
\\
&\quad
+
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
\nu_{k}(U)
, \p_x\WW
}.
\nonumber
\end{align}
By applying \eqref{eq:key1} to
the first term of the RHS of the above and
by applying \eqref{eq:key2} to the third term of the
RHS of the above, we have
\begin{align}
R_1
&=
2a\,
\int_{\TT}
\sum_{k}
\left(\p_x^2\WW,
\p_x\left\{J(U)D_{k}(U)U_x\right\}
\right)
\mathcal{O}(|Z|)
\,dx
\nonumber
\\
&\quad
+
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)U_x
, \WW
}
\nonumber
\\
&\quad
-
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)U_x
, \WW
}
\nonumber
\\
&\quad
-
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)Z_x
, \VV
}
\nonumber
\\
&\quad
-
2a\,
\int_{\TT}
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
\mathcal{O}(|Z|)
\,dx
\nonumber
\\
&=
2a\,
\int_{\TT}
\sum_{k}
\left(\p_x^2\WW,
\p_x\left\{J(U)D_{k}(U)U_x\right\}
\right)
\mathcal{O}(|Z|)
\,dx
\nonumber
\\
&\quad
-
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)Z_x
, \VV
}
\nonumber
\\
&\quad
-
2a\,
\int_{\TT}
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
\mathcal{O}(|Z|)
\,dx.
\nonumber
\\
\intertext{Here,
by integrating by parts,
the first and the third terms are bounded by
$C
\,D(t)^2
$.
Therefore we obtain
}
R_1
&\leqslant
-
2a\,
\lr{
\sum_{k}
\left(\p_x^2\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)Z_x
, \VV
}
+C\,D(t)^2.
\nonumber
\end{align}
Furthermore,
by using integration by parts,
$\p_xZ_x=\WW+\mathcal{O}(|Z|+|Z_x|)$,
\eqref{eq:J(wp)1},
and
\eqref{eq:D_k},
we deduce
\begin{align}
R_1
&\leqslant
2a\,
\lr{
\sum_{k}
\left(\p_x\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)\p_xZ_x
, \VV
}
+C\,D(t)^2
\nonumber
\\
&\leqslant
2a\,
\lr{
\sum_{k}
\left(\p_x\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)(\WW+\mathcal{O}(|Z|+|Z_x|)), \VV
}
+C\,D(t)^2
\nonumber
\\
&\leqslant
2a\,
\lr{
\sum_{k}
\left(\p_x\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)\WW
, \VV
}
+C\,D(t)^2
\nonumber
\\
&\leqslant
2a\,
\lr{
\sum_{k}
\left(\p_x\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)\WW
, \UU
}
\nonumber
\\
&\quad
-
2a\,
\lr{
\sum_{k}
\left(\p_x\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)\WW
, \WW
}
+C\,D(t)^2
\nonumber
\\
&\leqslant
2a\,
\lr{
\sum_{k}
\left(\p_x\WW,
J(U)D_{k}(U)U_x
\right)
D_k(U)\WW
, \UU
}
+C\,D(t)^2
\nonumber
\\
&=
-2a\,
\lr{
\sum_{k}
\left(J(U)\p_x\WW,
D_{k}(U)U_x
\right)
D_k(U)\UU
, \WW
}
+C\,D(t)^2
\nonumber
\\
&\leqslant
2a\,
\lr{
\sum_{k}
\left(J(U)\WW,
D_{k}(U)U_x
\right)
D_k(U)\UU
, \p_x\WW
}
+C\,D(t)^2
\nonumber
\\
&=:
R_{11}+R_{12}+C\,D(t)^2,
\label{eq:2e1}
\end{align}
where
\begin{align}
R_{11}
&=2a\,
\lr{
\sum_{k}
\left(J(U)\WW,
D_{k}(U)U_x
\right)
P(U)D_k(U)\UU
, \p_x\WW
},
\nonumber
\\
R_{12}
&=2a\,
\lr{
\sum_{k}
\left(J(U)\WW,
D_{k}(U)U_x
\right)
N(U)D_k(U)\UU
, \p_x\WW
}.
\nonumber
\end{align}
For $R_{12}$, recall \eqref{eq:key2} to see
$$(N(U)D_k(U)\UU
, \p_x\WW)
=\sum_{\ell}
(D_k(U)\UU,\nu_{\ell}(U))(\nu_{\ell}(U),\p_x\WW)
=
\mathcal{O}
\left(
|Z|+|Z_x|+|\WW|
\right).
$$
This shows $R_{12}\leqslant C\,D(t)^2$.
For $R_{11}$, by using \eqref{eq:J0} and \eqref{eq:D_k},
we see
$$
R_{11}
=2a\,
\lr{
\sum_{k}
\left(J(U)\WW,
D_{k}(U)U_x
\right)
D_k(U)P(U)\p_x\WW, \UU
}.
$$
Since $(N(U)D_k(U)P(U)\p_x\WW, \UU)=0$, we have
$$
R_{11}
=2a\,
\lr{
\sum_{k}
\left(J(U)\WW,
D_{k}(U)U_x
\right)
P(U)D_k(U)P(U)\p_x\WW, \UU
}.
$$
Applying \eqref{eq:curvature3} in Lemma~\ref{lemma:curvature3} with
$dw_u(Y_1)=P(U)\p_x\WW$, $dw_u(Y_2)=U_x$, and with
$dw_u(Y_3)=J(U)\WW$, we obtain
\begin{align}
R_{11}
&=
2a\,
\lr{
\sum_{k}
\left(J(U)\WW,
D_{k}(U)P(U)\p_x\WW
\right)
P(U)D_k(U)U_x, \UU
}
\nonumber
\\
&\quad
+2aS\,
\lr{
(J(U)\WW, U_x)P(U)\p_x\WW,
\UU
}
-2aS
\lr{
(J(U)\WW, P(U)\p_x\WW)U_x,
\UU
}
\nonumber
\\
&=:R_{111}
+R_{112}+R_{113}.
\nonumber
\end{align}
Here we recall \eqref{eq:key2} to see
\begin{equation}
N(U)\p_x\WW
=\sum_k
(\p_x\WW,\nu_k(U))\nu_k(U)
=
\mathcal{O}(|Z|+|Z_x|+|\WW|).
\label{eq:key5}
\end{equation}
This implies
$P(U)\p_x\WW=\p_x\WW+\mathcal{O}(|Z|+|Z_x|+|\WW|).$
Using this, \eqref{eq:J0}, \eqref{eq:J(wp)1}, and
$P(U)\UU=\UU$, we obtain
\begin{align}
R_{111}
&=
-2a\,
\lr{
\sum_{k}
\left(\WW,
J(U)D_{k}(U)P(U)\p_x\WW
\right)
D_k(U)U_x, P(U)\UU
}
\nonumber
\\
&\leqslant
-2a\,
\lr{
\sum_{k}
\left(\WW,
J(U)D_{k}(U)\p_x\WW
\right)
D_k(U)U_x, \UU
}
+C\,D(t)^2
\nonumber
\\
&=
-2a\,
\lr{
\sum_{k}
(\UU, D_k(U)U_x)
J(U)D_{k}(U)\p_x\WW,
\WW
}
+C\,D(t)^2.
\nonumber
\end{align}
In the same way,
using \eqref{eq:J0}, \eqref{eq:J(wp)1}
and
$
\UU=\p_xU_x+
\sum_{\ell}(U_x,D_{\ell}(U)U_x)\nu_{\ell}(U)$,
we obtain
\begin{align}
R_{112}
&\leqslant
-2aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,\WW
}
+C\,D(t)^2,
\nonumber
\\
R_{113}
&\leqslant
2aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,\WW
}
+C\,D(t)^2.
\nonumber
\end{align}
Collecting them,
we obtain
\begin{align}
R_1&=R_{111}+R_{112}+R_{113}+R_{12}+C\,D(t)^2
\nonumber
\\
&\leqslant
-2a\,
\lr{
\sum_{k}
(\UU, D_k(U)U_x)
J(U)D_{k}(U)\p_x\WW,
\WW
}
\nonumber
\\
&\quad
-2aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,\WW
}
+
2aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,\WW
}
\nonumber
\\
&\quad
+C\,D(t)^2.
\label{eq:2e}
\end{align}
The first term of the RHS of \eqref{eq:2e}
cancels with
the corresponding term arising in the computation of
$R_6+R_7$.
\par
We compute
$R_6+R_7
=
a\,\lr{F_6(\p_x\WW),\WW}
+a\,\lr{F_7(\p_x\WW),\WW}$.
By noting $J(U)=J(U)P(U)$ and
by applying \eqref{eq:curvature3}
with
$dw_u(Y_1)=U_x$,
$dw_u(Y_2)=P(U)\p_x\WW$
and with
$dw_u(Y_3)=\UU$,
we obtain
\begin{align}
F_6(\p_x\WW)
&=
\sum_{k}
\left(
\UU,
D_k(U)\p_x\WW
\right)
J(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_{k}
\left(
\UU,
D_k(U)\p_x\WW
\right)
P(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_{k}
\left(
\UU,
D_k(U)P(U)\p_x\WW
\right)
P(U)D_k(U)U_x
\nonumber
\\
&\quad
+
J(U)\sum_{k}
\left(
\UU,
D_k(U)N(U)\p_x\WW
\right)
P(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_{k}
\left(
\UU,
D_k(U)U_x
\right)
P(U)D_k(U)P(U)\p_x\WW
\nonumber
\\
&\quad
+S\,J(U)
\left\{
(\UU,P(U)\p_x\WW)U_x
-(\UU,U_x)P(U)\p_x\WW
\right\}
\nonumber
\\
&\quad
+
J(U)\sum_{k}
\left(
\UU,
D_k(U)N(U)\p_x\WW
\right)
P(U)D_k(U)U_x.
\nonumber
\end{align}
Furthermore,
we use $J(U)=J(U)P(U)$
and \eqref{eq:key5} to obtain
\begin{align}
F_6(\p_x\WW)
&=
\sum_{k}
\left(
\UU,
D_k(U)U_x
\right)
J(U)D_k(U)P(U)\p_x\WW
\nonumber
\\
&\quad
+S\,(\p_x\WW,\UU)J(U)U_x
-S\,(\UU,U_x)J(U)\p_x\WW
+\mathcal{O}
(|Z|+|Z_x|+|\WW|)
\nonumber
\\
&=
\sum_{k}
\left(
\UU,
D_k(U)U_x
\right)
J(U)D_k(U)\p_x\WW
\nonumber
\\
&\quad
+S\,(\p_x\WW,\p_xU_x)J(U)U_x
-S\,(\p_xU_x,U_x)J(U)\p_x\WW
\nonumber
\\
&\quad
+\mathcal{O}
(|Z|+|Z_x|+|\WW|)
\nonumber
\\
&=
F_7(\p_x\WW)
+S\,(\p_x\WW,\p_xU_x)J(U)U_x
-S\,(\p_xU_x,U_x)J(U)\p_x\WW
\nonumber
\\
&\quad
+\mathcal{O}
(|Z|+|Z_x|+|\WW|).
\nonumber
\end{align}
Hence we obtain
\begin{align}
R_6
+
R_7
&\leqslant
2a\,
\lr{
\sum_{k}
\left(
\UU,
D_k(U)U_x
\right)
J(U)D_k(U)\p_x\WW
,\WW
}
\nonumber
\\
&\quad
+aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x
,\WW
}
\nonumber
\\
&\quad
-aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW
,\WW}
+C\,D(t)^2.
\label{eq:1011e}
\end{align}
Combining \eqref{eq:2e} and \eqref{eq:1011e}, and
using \eqref{eq:D_k},
we have
\begin{align}
&R_1
+
R_6
+
R_7
\nonumber
\\
&\leqslant
-aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x
,\WW}
+aS\,
\lr{
(U_x,\p_xU_x)J(U)\p_x\WW
,\WW}
+C\,D(t)^2.
\label{eq:RR167}
\end{align}
\par
Next, we compute
$R_2+R_3=-a\lr{F_2(\p_x^2\WW),\WW}+a\lr{F_3(\p_x^2\WW),\WW}$.
As in deriving \eqref{eq:2e},
we use \eqref{eq:J(wp)1}, \eqref{eq:D_k},
and \eqref{eq:curvature3} with
$dw_u(Y_1)=U_x$,
$dw_u(Y_2)=J(U)\p_x^2\WW$
and with
$dw_u(Y_3)=U_x$,
to deduce
\begin{align}
F_2(\p_x^2\WW)
&=
\sum_k
(\p_x^2\WW,J(U)D_k(U)U_x)D_k(U)U_x
\nonumber
\\
&=
\sum_k
(\p_x^2\WW,J(U)D_k(U)U_x)N(U)D_k(U)U_x
\nonumber
\\
&\quad
+
\sum_k
(\p_x^2\WW,J(U)D_k(U)U_x)P(U)D_k(U)U_x
\nonumber
\\
&=
\sum_k
(\p_x^2\WW,J(U)D_k(U)U_x)N(U)D_k(U)U_x
\nonumber
\\
&\quad
-
\sum_k
(U_x, D_k(U)J(U)\p_x^2\WW)P(U)D_k(U)U_x
\nonumber
\\
&=
\sum_k
(\p_x^2\WW,J(U)D_k(U)U_x)N(U)D_k(U)U_x
\nonumber
\\
&\quad
-
\sum_k
(U_x, D_k(U)U_x)P(U)D_k(U)J(U)\p_x^2\WW
\nonumber
\\
&\quad
-
S\,
\left\{
(U_x,J(U)\p_x^2\WW)U_x
-(U_x,U_x)J(U)\p_x^2\WW
\right\}
\nonumber
\\
&=
-\sum_k
(U_x, D_k(U)U_x)D_k(U)J(U)\p_x^2\WW
\nonumber
\\
&\quad
+S\,(\p_x^2\WW,J(U)U_x)U_x
+S\,|U_x|^2J(U)\p_x^2\WW
\nonumber
\\
&\quad
+\sum_k
(U_x, D_k(U)U_x)N(U)D_k(U)J(U)\p_x^2\WW
\nonumber
\\
&\quad
+
\sum_k
(\p_x^2\WW,J(U)D_k(U)U_x)N(U)D_k(U)U_x
\nonumber
\\
&=
-\sum_k
(U_x, D_k(U)U_x)D_k(U)J(U)\p_x^2\WW
\nonumber
\\
&\quad
+S\,(\p_x^2\WW,J(U)U_x)U_x
+S\,|U_x|^2J(U)\p_x^2\WW
\nonumber
\\
&\quad
+\sum_{\ell}
\mathcal{O}
\left(
|\p_x^2\WW|
\right)\nu_{\ell}(U),
\label{eq:6e1}
\\
\intertext{and in the same way
we use \eqref{eq:J(wp)1}, \eqref{eq:D_k},
and \eqref{eq:curvature3} with
$dw_u(Y_1)=U_x$,
$dw_u(Y_2)=\p_x^2\WW$
and with
$dw_u(Y_3)=U_x$,
to deduce }
F_3(\p_x^2\WW)
&=
\sum_{k}
(\p_x^2\WW, D_k(U)U_x)J(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_{k}
(U_x, D_k(U)\p_x^2\WW)P(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_{k}
(U_x, D_k(U)N(U)\p_x^2\WW)P(U)D_k(U)U_x
\nonumber
\\
&\quad
+
J(U)\sum_{k}
(U_x, D_k(U)P(U)\p_x^2\WW)P(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_{k}
(U_x, D_k(U)N(U)\p_x^2\WW)P(U)D_k(U)U_x
\nonumber
\\
&\quad
+
J(U)\sum_{k}
(U_x, D_k(U)U_x)P(U)D_k(U)P(U)\p_x^2\WW
\nonumber
\\
&\quad
+S\,J(U)
\left\{
(U_x, P(U)\p_x^2\WW)U_x
-(U_x,U_x)P(U)\p_x^2\WW
\right\}
\nonumber
\\
&=
\sum_{k}
(U_x, D_k(U)N(U)\p_x^2\WW)J(U)D_k(U)U_x
\nonumber
\\
&\quad
+
\sum_{k}
(U_x, D_k(U)U_x)J(U)D_k(U)P(U)\p_x^2\WW
\nonumber
\\
&\quad
+S\,(\p_x^2\WW,U_x)J(U)U_x
-S\,|U_x|^2J(U)\p_x^2\WW
\nonumber
\\
&=
\sum_{k}
(U_x, D_k(U)U_x)J(U)D_k(U)\p_x^2\WW
\nonumber
\\
&\quad
+S\,(\p_x^2\WW,U_x)J(U)U_x
-S\,|U_x|^2J(U)\p_x^2\WW
\nonumber
\\
&\quad
+
\sum_{k}
(U_x, D_k(U)N(U)\p_x^2\WW)J(U)D_k(U)U_x
\nonumber
\\
&\quad
-
\sum_{k}
(U_x, D_k(U)U_x)J(U)D_k(U)N(U)\p_x^2\WW.
\label{eq:6e2}
\end{align}
Here, we use \eqref{eq:key22} to see
\begin{align}
N(U)\p_x^2\WW
&=
\sum_{\ell}
(\p_x^2\WW,\nu_{\ell}(U))\nu_{\ell}(U)
\nonumber
\\
&=
-2\sum_{\ell}
(\p_x\WW, D_{\ell}(U)U_x)\nu_{\ell}(U)
+
\mathcal{O}(|Z|+|Z_x|+|\WW|).
\label{eq:6e22}
\end{align}
By substituting \eqref{eq:6e22} into
the fourth and fifth terms of the RHS of \eqref{eq:6e2},
we have
\begin{align}
F_3(\p_x^2\WW)
&=
\sum_{k}
(U_x, D_k(U)U_x)J(U)D_k(U)\p_x^2\WW
\nonumber
\\
&\quad
+S\,(\p_x^2\WW,U_x)J(U)U_x
-S\,|U_x|^2J(U)\p_x^2\WW
\nonumber
\\
&\quad
-2
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)\nu_{\ell}(U))J(U)D_k(U)U_x
\nonumber
\\
&\quad
+2
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)U_x)J(U)D_k(U)\nu_{\ell}(U)
\nonumber
\\
&\quad
+
\mathcal{O}
(|Z|+|Z_x|+|\WW|).
\label{eq:6e23}
\end{align}
Thus, from \eqref{eq:6e1}, \eqref{eq:6e23},
and \eqref{eq:k1},
it follows that
\begin{align}
&
-F_2(\p_x^2\WW)
+F_3(\p_x^2\WW)
\nonumber
\\
&=
\sum_k
(U_x, D_k(U)U_x)D_k(U)J(U)\p_x^2\WW
+
\sum_{k}
(U_x, D_k(U)U_x)J(U)D_k(U)\p_x^2\WW
\nonumber
\\
&\quad
-S\,(\p_x^2\WW,J(U)U_x)U_x
+S\,(\p_x^2\WW,U_x)J(U)U_x
-2S\,|U_x|^2J(U)\p_x^2\WW
\nonumber
\\
&\quad
-2
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)\nu_{\ell}(U))J(U)D_k(U)U_x
\nonumber
\\
&\quad
+2
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)U_x)J(U)D_k(U)\nu_{\ell}(U)
\nonumber
\\
&\quad
+\sum_{\ell}
\mathcal{O}
\left(
|\p_x^2\WW|
\right)\nu_{\ell}(U)
+
\mathcal{O}
(|Z|+|Z_x|+|\WW|)
\nonumber
\\
&=
\sum_k
(U_x, D_k(U)U_x)(D_k(U)J(U)+J(U)D_k(U))\p_x^2\WW
\nonumber
\\
&\quad
-S\,|U_x|^2J(U)\p_x^2\WW
\nonumber
\\
&\quad
-2
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)\nu_{\ell}(U))J(U)D_k(U)U_x
\nonumber
\\
&\quad
+2
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)U_x)J(U)D_k(U)\nu_{\ell}(U)
\nonumber
\\
&\quad
+\sum_{\ell}
\mathcal{O}
\left(
|\p_x^2\WW|
\right)\nu_{\ell}(U)
+
\mathcal{O}
(|Z|+|Z_x|+|\WW|).
\label{eq:6e3}
\end{align}
Therefore, from \eqref{eq:6e3} and
$$|U_x|^2J(U)\p_x^2\WW
=\p_x
\left\{
|U_x|^2J(U)\p_x\WW
\right\}
-2(\p_xU_x,U_x)J(U)\p_x\WW
-|U_x|^2\p_x(J(U))\p_x\WW,
$$
we see that
$R_2+R_3
=-a\lr{F_2(\p_x^2\WW),\WW}
+a\lr{F_3(\p_x^2\WW),\WW}$
is evaluated as follows:
\begin{align}
R_2
+
R_3
&\leqslant
a\,
\left\langle
\p_x\biggl\{
\sum_k
(U_x, D_k(U)U_x)(D_k(U)J(U)+J(U)D_k(U))\p_x\WW
\biggr\},
\WW
\right\rangle
\nonumber
\\
&\quad
-a\,
\left\langle
\p_x\biggl\{
\sum_k
(U_x, D_k(U)U_x)(D_k(U)J(U)+J(U)D_k(U))
\biggr\}
\p_x\WW,
\WW
\right\rangle
\nonumber
\\
&\quad
-aS\,
\lr{
\p_x\biggl\{
|U_x|^2J(U)\p_x\WW
\biggr\}, \WW
}
+2aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW
,\WW
}
\nonumber
\\
&\quad
+aS\,
\lr{
|U_x|^2\p_x(J(U))\p_x\WW
,\WW
}
\nonumber
\\
&\quad
-2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)\nu_{\ell}(U))J(U)D_k(U)U_x,\WW
}
\nonumber
\\
&\quad
+2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)U_x)J(U)D_k(U)\nu_{\ell}(U), \WW
}
\nonumber
\\
&\quad
+
\lr{
\sum_{\ell}
\mathcal{O}
\left(|\p_x^2\WW|\right)\nu_{\ell}(U),
\WW
}
\nonumber
\\
&\quad
+C\,D(t)^2.
\label{eq:6e4}
\end{align}
Note here that
$$
((J(U)D_k(U)+D_k(U)J(U))Y_1,Y_2)
=
-(Y_1, (J(U)D_k(U)+D_k(U)J(U))Y_2)
$$
holds for any $Y_1,Y_2:[0,T]\times \TT\to \RR^d$.
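To see this, note that $J(U)$ is antisymmetric and each $D_k(U)$ is symmetric by \eqref{eq:J(wp)1} and \eqref{eq:D_k}, whence
\begin{align}
((J(U)D_k(U)+D_k(U)J(U))Y_1,Y_2)
&=
-(D_k(U)Y_1,J(U)Y_2)
+(J(U)Y_1,D_k(U)Y_2)
\nonumber
\\
&=
-(Y_1,(J(U)D_k(U)+D_k(U)J(U))Y_2).
\nonumber
\end{align}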
This implies that the first term of the RHS of \eqref{eq:6e4}
vanishes.
Indeed, the integration by parts yields
\begin{align}
&
a\,\left\langle
\p_x\biggl\{
\sum_k
(U_x, D_k(U)U_x)(D_k(U)J(U)+J(U)D_k(U))\p_x\WW
\biggr\},
\WW
\right\rangle
\nonumber
\\
&=
-a\,\left\langle
\sum_k
(U_x, D_k(U)U_x)(D_k(U)J(U)+J(U)D_k(U))\p_x\WW,
\p_x\WW
\right\rangle
\nonumber
\\
&=0.
\nonumber
\end{align}
In addition, the third term of the RHS of \eqref{eq:6e4}
vanishes by integrating by parts.
Besides,
due to the presence of $N(U)$, we can bound
the eighth term of the RHS of \eqref{eq:6e4}
by $C\,D(t)^2$,
using the same argument as in showing \eqref{eq:18e}.
Consequently, we derive
\begin{align}
R_2+R_3
&\leqslant
-a\,
\left\langle
\p_x\biggl\{
\sum_k
(U_x, D_k(U)U_x)(D_k(U)J(U)+J(U)D_k(U))
\biggr\}
\p_x\WW,
\WW
\right\rangle
\nonumber
\\
&\quad
+2aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW
,\WW
}
\nonumber
\\
&\quad
-2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)\nu_{\ell}(U))J(U)D_k(U)U_x,\WW
}
\nonumber
\\
&\quad
+2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)U_x)J(U)D_k(U)\nu_{\ell}(U), \WW
}
\nonumber
\\
&\quad
+
C\,D(t)^2.
\label{eq:67e}
\end{align}
The third and fourth terms, as well as the bad part of the first term,
of the RHS of \eqref{eq:67e} will be cancelled by the same terms
appearing in the computation of
$R_4+R_5$.
\par
Let us next compute $R_4+R_5$.
To begin with,
we introduce
$T_1(U)$
which is defined by
\begin{align}
T_1(U)Y
&=
\sum_k
(Y,D_k(U)U_x)J(U)D_k(U)U_x
\nonumber
\end{align}
for any $Y:[0,T]\times \TT\to \RR^d$.
Substituting this with $Y=\p_x^2\WW$
into the RHS of
$\p_x(T_1(U))\p_x\WW=\p_x\left\{T_1(U)\p_x\WW\right\}-T_1(U)\p_x^2\WW$,
we can write
\begin{align}
R_4
+
R_5
&=
2a\,
\lr{
\p_x(T_1(U))\p_x\WW
,\WW}.
\label{eq:89e1}
\end{align}
On the other hand,
using \eqref{eq:curvature3} with
$dw_u(Y_1)=U_x$,
$dw_u(Y_2)=P(U)Y$
and with
$dw_u(Y_3)=U_x$,
we find that
$T_1(U)Y$ has the following alternative expression:
\begin{align}
T_1(U)Y
&=
J(U)\sum_k
(Y,D_k(U)U_x)P(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_k
(P(U)Y,D_k(U)U_x)P(U)D_k(U)U_x
\nonumber
\\
&\quad
+
J(U)\sum_k
(N(U)Y,D_k(U)U_x)P(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_k
(U_x, D_k(U)P(U)Y)P(U)D_k(U)U_x
\nonumber
\\
&\quad
+
J(U)\sum_k
(N(U)Y,D_k(U)U_x)P(U)D_k(U)U_x
\nonumber
\\
&=
J(U)\sum_k
(U_x, D_k(U)U_x)P(U)D_k(U)P(U)Y
\nonumber
\\
&\quad
+
S\,J(U)
\left\{
(U_x,P(U)Y)U_x
-|U_x|^2P(U)Y
\right\}
\nonumber
\\
&\quad
+
J(U)\sum_k
(N(U)Y,D_k(U)U_x)P(U)D_k(U)U_x
\nonumber
\\
&=
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)P(U)Y
\nonumber
\\
&\quad
+
S\,(U_x,P(U)Y)J(U)U_x
-S|U_x|^2J(U)Y
\nonumber
\\
&\quad
+
\sum_k
(N(U)Y,D_k(U)U_x)J(U)D_k(U)U_x
\nonumber
\\
&=
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)Y
\nonumber
\\
&\quad
-\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)N(U)Y
\nonumber
\\
&\quad
+
S\,(Y,U_x)J(U)U_x
-S|U_x|^2J(U)Y
\nonumber
\\
&\quad
+
\sum_k
(N(U)Y,D_k(U)U_x)J(U)D_k(U)U_x
\nonumber
\end{align}
for any $Y:[0,T]\times \TT\to \RR^d$.
If we adopt this formulation,
we have
\begin{align}
\p_x(T_1(U))Y
&=
\p_x\left\{
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)
\right\}
Y
\nonumber
\\
&\quad
-\p_x\left\{
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)\right\}
N(U)Y
\nonumber
\\
&\quad
-\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)\p_x(N(U))Y
\nonumber
\\
&\quad
+S\,(Y,\p_xU_x)J(U)U_x
+S\,(Y,U_x)\p_x(J(U))U_x
+S\,(Y,U_x)J(U)\p_xU_x
\nonumber
\\
&\quad
-2S(\p_xU_x,U_x)J(U)Y-S|U_x|^2\p_x(J(U))Y
\nonumber
\\
&\quad
+
\sum_k
(\p_x(N(U))Y,D_k(U)U_x)J(U)D_k(U)U_x
\nonumber
\\
&\quad
+
\sum_k
(N(U)Y,\p_x\left\{D_k(U)U_x\right\})J(U)D_k(U)U_x
\nonumber
\\
&\quad
+
\sum_k
(N(U)Y,D_k(U)U_x)\p_x\left\{J(U)D_k(U)U_x\right\}.
\label{eq:89e2}
\end{align}
By substituting \eqref{eq:89e2} into \eqref{eq:89e1},
we have
\begin{align}
R_4
+
R_5
&=
2a\,
\lr{
\p_x\biggl\{
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)
\biggr\}
\p_x\WW,
\WW
}
\nonumber
\\
&\quad
-2a\,
\lr{
\p_x
\biggl\{
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)
\biggr\}
N(U)\p_x\WW,
\WW
}
\nonumber
\\
&\quad
-2a\,
\lr{
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)\p_x(N(U))\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+2aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,
\WW
}
+2aS\,
\lr{
(\p_x\WW,U_x)\p_x(J(U))U_x,
\WW
}
\nonumber
\\
&\quad
+2aS\,
\lr{
(\p_x\WW,U_x)J(U)\p_xU_x,
\WW
}
-4aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,
\WW
}
\nonumber
\\
&\quad
-2aS\,
\lr{
|U_x|^2\p_x(J(U))\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+
2a\,
\lr{
\sum_k
(\p_x(N(U))\p_x\WW,D_k(U)U_x)J(U)D_k(U)U_x,\WW
}
\nonumber
\\
&\quad
+
2a\,
\lr{
\sum_k
(N(U)\p_x\WW,\p_x\left\{D_k(U)U_x\right\})J(U)D_k(U)U_x,\WW
}
\nonumber
\\
&\quad
+
2a\,
\lr{
\sum_k
(N(U)\p_x\WW,D_k(U)U_x)\p_x\left\{J(U)D_k(U)U_x\right\},
\WW
}.
\label{eq:89e3}
\end{align}
The second, the tenth, and the eleventh terms of the
RHS of \eqref{eq:89e3} are bounded by
$C\,D(t)^2$, in view of \eqref{eq:key5}.
For the fifth term of the RHS of \eqref{eq:89e3},
we use \eqref{eq:kaehler2} and $(U_x,\nu_k(U))=0$
to see
\begin{align}
\p_x(J(U))U_x
&=
\sum_k
\left(U_x,J(U)D_k(U)U_x\right)\nu_k(U),
\label{eq:kaehler4}
\end{align}
which combined with \eqref{eq:key1} implies
$(\p_x(J(U))U_x,\WW)=\mathcal{O}(|Z|)$.
Therefore, by the integration by parts,
the fifth term of the RHS of \eqref{eq:89e3}
is bounded by
$C\,D(t)^2$.
In the same way, we use \eqref{eq:kaehler2}, \eqref{eq:key1},
and \eqref{eq:key2},
to have
\begin{align}
&(\p_x(J(U))\p_x\WW, \WW)
\nonumber
\\
&=
\sum_k
\left(\p_x\WW,J(U)D_k(U)U_x\right)(\nu_k(U),\WW)
-\sum_k
(\p_x\WW,\nu_k(U))(J(U)D_k(U)U_x,\WW)
\nonumber
\\
&=
\sum_k
\left(\p_x\WW,J(U)D_k(U)U_x\right)\mathcal{O}(|Z|)
+
\mathcal{O}
\left(
(|Z|+|Z_x|+|\WW|)|\WW|\right).
\nonumber
\end{align}
Thus the integration by parts shows that
the eighth term of the RHS of \eqref{eq:89e3}
is bounded by $C\,D(t)^2$.
For the third and the ninth terms of the RHS of
\eqref{eq:89e3},
in view of \eqref{eq:key5}, we have
\begin{align}
\p_x(N(U))\p_x\WW
&=
\sum_{\ell}
(\p_x\WW,D_{\ell}(U)U_x)\nu_{\ell}(U)
+
\sum_{\ell}
(\p_x\WW,\nu_{\ell}(U))D_{\ell}(U)U_x
\nonumber
\\
&=
\sum_{\ell}
(\p_x\WW,D_{\ell}(U)U_x)\nu_{\ell}(U)
+
\mathcal{O}
(|Z|+|Z_x|+|\WW|),
\nonumber
\end{align}
which implies
\begin{align}
&-2a\,
\lr{
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)\p_x(N(U))\p_x\WW,
\WW
}
\nonumber
\\
&\leqslant
-2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)U_x)
J(U)D_k(U)\nu_{\ell}(U),
\WW
}
+
C\,D(t)^2,
\nonumber
\\
\intertext{and}
&
2a\,
\lr{
\sum_k
(\p_x(N(U))\p_x\WW,D_k(U)U_x)J(U)D_k(U)U_x,\WW
}
\nonumber
\\
&\leqslant
2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(\nu_{\ell}(U),D_k(U)U_x)J(U)D_k(U)U_x,\WW
}
+C\,D(t)^2.
\nonumber
\end{align}
Collecting them, we derive
\begin{align}
R_4
+
R_5
&\leqslant
2a\,
\lr{
\p_x\biggl\{
\sum_k
(U_x, D_k(U)U_x)J(U)D_k(U)
\biggr\}
\p_x\WW,
\WW
}
\nonumber
\\
&\quad
-2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(U_x, D_k(U)U_x)
J(U)D_k(U)\nu_{\ell}(U),
\WW
}
\nonumber
\\
&\quad
+
2a\,
\lr{
\sum_{k,\ell}
(\p_x\WW, D_{\ell}(U)U_x)
(\nu_{\ell}(U),D_k(U)U_x)J(U)D_k(U)U_x,\WW
}
\nonumber
\\
&\quad
+2aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,
\WW
}
+2aS\,
\lr{
(\p_x\WW,U_x)J(U)\p_xU_x,
\WW
}
\nonumber
\\
&\quad
-4aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,
\WW
}
+
C\,D(t)^2.
\label{eq:89e}
\end{align}
\par
Therefore, by \eqref{eq:67e} and \eqref{eq:89e},
we obtain
\begin{align}
&R_2+R_3+R_4+R_5
\nonumber
\\
&\leqslant
a\,
\lr{
\p_x\biggl\{
\sum_k
(U_x, D_k(U)U_x)(J(U)D_k(U)-D_k(U)J(U))
\biggr\}
\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+2aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,
\WW
}
+2aS\,
\lr{
(\p_x\WW,U_x)J(U)\p_xU_x,
\WW
}
\nonumber
\\
&\quad
-2aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,
\WW
}
+
C\,D(t)^2.
\label{eq:R2345}
\end{align}
Note here that
\begin{align}
&\left(
\sum_k(\UU,D_k(U)U_x)(J(U)D_k(U)-D_k(U)J(U))Y_1,Y_2
\right)
\nonumber
\\
&=
\left(
Y_1, \sum_k(\UU,D_k(U)U_x)(J(U)D_k(U)-D_k(U)J(U))Y_2
\right)
\label{eq:sym41}
\end{align}
holds for any $Y_1,Y_2:[0,T]\times \TT\to \RR^d$.
This follows immediately from \eqref{eq:J(wp)1} and
\eqref{eq:D_k}.
By taking the derivative of both sides of \eqref{eq:sym41} in $x$,
we see that
\begin{align}
&\left(
\p_x\left\{
\sum_k(\UU,D_k(U)U_x)(J(U)D_k(U)-D_k(U)J(U))
\right\}
Y_1,Y_2
\right)
\nonumber
\\
&=
\left(
Y_1, \p_x\left\{
\sum_k(\UU,D_k(U)U_x)(J(U)D_k(U)-D_k(U)J(U))
\right\}
Y_2
\right)
\nonumber
\end{align}
holds for any $Y_1,Y_2:[0,T]\times \TT\to \RR^d$.
Hence, by integrating by parts,
we see that
the first term of the RHS of \eqref{eq:R2345}
is bounded by $C\,D(t)^2$.
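More concretely, set
$A
=
\p_x\bigl\{
\sum_k(U_x,D_k(U)U_x)(J(U)D_k(U)-D_k(U)J(U))
\bigr\}$.
Since $A$ is symmetric and $\TT$ has no boundary,
\begin{align}
2\,\lr{A\p_x\WW,\WW}
=
\lr{A\p_x\WW,\WW}
+
\lr{\p_x\WW,A\WW}
=
\int_{\TT}
\p_x\left(A\WW,\WW\right)dx
-
\lr{(\p_xA)\WW,\WW}
=
-\lr{(\p_xA)\WW,\WW},
\nonumber
\end{align}
and hence
$a\,\lr{A\p_x\WW,\WW}\leqslant C\,\|\WW\|_{L^2}^2\leqslant C\,D(t)^2$,
where $C$ depends on the norms of $U$.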
Therefore we obtain
\begin{align}
&R_2+R_3+R_4+R_5
\nonumber
\\
&\leqslant
2aS\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,
\WW
}
+2aS\,
\lr{
(\p_x\WW,U_x)J(U)\p_xU_x,\WW
}
\nonumber
\\
&\quad
-2aS\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,\WW
}
+
C\,D(t)^2.
\label{eq:RR2345}
\end{align}
\par
Gathering the information
\eqref{eq:nn},
\eqref{eq:RR167},
and
\eqref{eq:RR2345},
we derive
\begin{align}
\frac{1}{2}
\frac{d}{dt}
\|\WW\|_{L^2}^2
&\leqslant
(c+aS)\,
\lr{
(\p_x^2\WW,U_x)J(U)U_x,
\WW
}
\nonumber
\\
&\quad
+(-aS+2b+c)\,
\lr{
(\p_xU_x,U_x)
J(U)\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+
(2aS+2b+2c)\,
\lr{
(\p_x\WW,U_x)J(U)\p_xU_x,
\WW
}
\nonumber
\\
&\quad
+
(aS+3c)\,
\lr{
(\p_x\WW,\p_xU_x)J(U)U_x,
\WW
}
+C\,D(t)^2.
\label{eq:WWt2}
\end{align}
Furthermore we rewrite the third and fourth terms of the RHS
of \eqref{eq:WWt2} recalling Definition~\ref{definition:symmetry_uniqueness}
and Lemma~\ref{lemma:symmetry_uniqueness}.
From Definition~\ref{definition:symmetry_uniqueness},
it follows that
\begin{align}
&(\p_x\WW,U_x)J(U)\p_xU_x
\nonumber
\\
&=
\frac{1}{2}
\left(
T_3(U)-T_4(U)+T_5(U)
\right)\p_x\WW
+
\frac{1}{2}
\sum_k
(U_x,D_k(U)U_x)
(\p_x\WW,\nu_k(U))J(U)U_x.
\nonumber
\end{align}
Using \eqref{eq:tae2} and \eqref{eq:tae3} with $Y=\p_x\WW$,
we see
\begin{align}
T_5(U)\p_x\WW
&=
(\p_xU_x,U_x)J(U)\p_x\WW
+\frac{1}{2}
|U_x|^2\p_x(J(U))\p_x\WW
\nonumber
\\
&\quad
+\frac{1}{2}(\p_x\WW,U_x)
\sum_k
(J(U)U_x,D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad
-\frac{1}{2}
\sum_k
(\p_x\WW,\nu_k(U))
(J(U)U_x,D_k(U)U_x)U_x.
\nonumber
\end{align}
Substituting this and using \eqref{eq:key2},
we obtain
\begin{align}
&(\p_x\WW,U_x)J(U)\p_xU_x
\nonumber
\\
&=
\frac{1}{2}(\p_xU_x,U_x)J(U)\p_x\WW
+
\frac{1}{2}
\left(
T_3(U)-T_4(U)
\right)\p_x\WW
\nonumber
\\
&\quad
+
\frac{1}{4}
|U_x|^2\p_x(J(U))\p_x\WW
+\frac{1}{4}(\p_x\WW,U_x)
\sum_k
(J(U)U_x,D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad
+\mathcal{O}(|Z|+|Z_x|+|\WW|).
\label{eq:tae6}
\end{align}
In the same way, by using Definition~\ref{definition:symmetry_uniqueness}
and Lemma~\ref{lemma:symmetry_uniqueness},
we obtain
\begin{align}
&(\p_x\WW,\p_xU_x)J(U)U_x
\nonumber
\\
&=
\frac{1}{2}(\p_xU_x,U_x)J(U)\p_x\WW
+
\frac{1}{2}
\left(
T_3(U)+T_4(U)
\right)\p_x\WW
\nonumber
\\
&\quad
+
\frac{1}{4}
|U_x|^2\p_x(J(U))\p_x\WW
+\frac{1}{4}(\p_x\WW,U_x)
\sum_k
(J(U)U_x,D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad
+\mathcal{O}(|Z|+|Z_x|+|\WW|).
\label{eq:tae7}
\end{align}
Thanks to \eqref{eq:tae4} and \eqref{eq:tae5},
we can easily show
$\lr{T_i(U)\p_x\WW,\WW}
\leqslant
C\,D(t)^2$ with $i=3,4$,
by integrating by parts.
Besides,
it is now immediate to see
\begin{align}
&\lr{
|U_x|^2\p_x(J(U))\p_x\WW
,
\WW
}
\leqslant
C\,D(t)^2,
\nonumber
\\
&\lr{
(\p_x\WW,U_x)
\sum_k
(J(U)U_x,D_k(U)U_x)\nu_k(U),
\WW
}
\leqslant
C\,D(t)^2
\nonumber
\end{align}
by the argument
using \eqref{eq:kaehler2} and \eqref{eq:key1}.
Therefore,
we substitute \eqref{eq:tae6} and \eqref{eq:tae7}
into \eqref{eq:WWt2} to derive
\begin{align}
&\frac{1}{2}
\frac{d}{dt}
\|\WW\|_{L^2}^2
\nonumber
\\
&\leqslant
(c+aS)\,
\lr{
(\p_x^2\WW,U_x)J(U)U_x,
\WW
}
+(-aS+2b+c)\,
\lr{
(\p_xU_x,U_x)
J(U)\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+
(aS+b+c)\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,
\WW
}
\nonumber
\\
&\quad
+
\frac{aS+3c}{2}\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,
\WW
}
+C\,D(t)^2
\nonumber
\\
&=
(c+aS)\,
\lr{
(\p_x^2\WW,U_x)J(U)U_x,
\WW
}
\nonumber
\\
&\quad
+\frac{aS+6b+7c}{2}\,
\lr{
(\p_xU_x,U_x)
J(U)\p_x\WW,
\WW
}
+C\,D(t)^2.
\label{eq:WWt}
\end{align}
Even if we use the integration by parts,
the first and the second term of
the RHS of \eqref{eq:WWt} cannot be bounded by
$C\,D(t)^2$.
Fortunately, however, we will find in the next step
that these two terms can essentially be eliminated
by introducing a gauged function.
\vspace{0.5em}
\\
{\bf 4. Energy estimates for
$\|\TW\|_{L^2(\TT;\RR^d)}$
to eliminate the loss of derivatives.}
\\
We introduce the function $\TW$
which is defined by
\begin{align}
\widetilde{\WW}
&=
\WW+\widetilde{\Lambda},
\label{eq:sag1}
\end{align}
where
\begin{align}
\widetilde{\Lambda}
&=
-\frac{e_1}{2a}(Z,J(U)U_x)J(U)U_x
+
\frac{e_2}{8a}|U_x|^2Z,
\label{eq:sag2}
\\
e_1&=aS+c,
\quad
e_2=e_1+\frac{aS+6b+7c}{2}.
\label{eq:e1e2}
\end{align}
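Note that the constants in \eqref{eq:e1e2} are chosen precisely so that
\begin{align}
c+aS-e_1=0,
\qquad
\frac{aS+6b+7c}{2}+e_1-e_2=0,
\nonumber
\end{align}
which is responsible for the cancellation of the two problematic terms of \eqref{eq:WWt} at the end of this step.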
Moreover, we introduce the energy $\widetilde{D}(t)$ whose square is
defined by
\begin{align}
\widetilde{D}(t)^2
&=
\|Z(t)\|_{L^2}^2
+\|Z_x(t)\|_{L^2}^2
+\|\TW(t)\|_{L^2}^2.
\label{eq:menergy2}
\end{align}
Since $u$ and $v$ have the same initial value,
$\widetilde{D}(0)=0$ holds.
We shall show that
there exists a positive constant $C$ such that
\begin{equation}
\frac{1}{2}
\frac{d}{dt}\widetilde{D}(t)^2
\leqslant C\,
\widetilde{D}(t)^2
\label{eq:De}
\end{equation}
for all $t\in (0,T)$.
If it is true,
\eqref{eq:De} together with $\widetilde{D}(0)=0$ shows $\widetilde{D}(t)\equiv 0$.
This implies $Z=0$.
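Indeed, applying Gronwall's inequality to \eqref{eq:De}, we get
\begin{align}
\widetilde{D}(t)^2
\leqslant
e^{2Ct}\,\widetilde{D}(0)^2
=0
\nonumber
\end{align}
for all $t\in [0,T)$.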
\par
In the proof of \eqref{eq:De},
by integrating by parts repeatedly,
it is not difficult to obtain the following estimate,
permitting a loss of derivatives of order one:
\begin{equation}
\frac{1}{2}\frac{d}{dt}
\left\{
\|Z(t)\|_{L^2}^2
+
\|Z_x(t)\|_{L^2}^2
\right\}
\leqslant
C\, \widetilde{D}(t)^2.
\label{eq:notmainineq}
\end{equation}
With these in mind,
we hereafter concentrate on how to derive
the estimate of the form
\begin{equation}
\frac{1}{2}\frac{d}{dt}
\|\TW(t)\|_{L^2}^2
\leqslant
C\, \widetilde{D}(t)^2.
\label{eq:mainineq}
\end{equation}
For this purpose,
we begin with
\begin{align}
\frac{1}{2}
\frac{d}{dt}
\|\TW\|_{L^2}^2
&=
\lr{
\p_t\TW,
\TW
}
\nonumber
\\
&=
\lr{
\p_t\WW,
\TW
}
+
\lr{
\p_t\widetilde{\Lambda},
\TW
}
\nonumber
\\
&=
\lr{
\p_t\WW,\WW
}
+
\lr{
\p_t\WW,\widetilde{\Lambda}
}
+
\lr{
\p_t\widetilde{\Lambda},\TW
}.
\label{eq:TW1}
\end{align}
The first term of the RHS of \eqref{eq:TW1}
has already been investigated to satisfy
\eqref{eq:WWt}.
Hence we compute the second and the third term
of the RHS of \eqref{eq:TW1} below.
Observing $\widetilde{\Lambda}=\mathcal{O}(|Z|)$,
we see
$\TW=\WW+\mathcal{O}(|Z|)$,
$\p_x\TW=\p_x\WW+\mathcal{O}(|Z|+|Z_x|)$,
and
$\p_x^2\TW=\p_x^2\WW+\mathcal{O}(|Z|+|Z_x|+|\TW|)$,
which will often be used without comment.
\par
We start the computation of
$\lr{
\p_t\widetilde{\Lambda},\TW}$
by investigating $\p_t\widetilde{\Lambda}$.
A simple computation shows
\begin{align}
\p_t\widetilde{\Lambda}
&=
-\frac{e_1}{2a}(Z_t,J(U)U_x)J(U)U_x
+
\frac{e_2}{8a}|U_x|^2Z_t
+\mathcal{O}(|Z|).
\label{eq:aya1}
\end{align}
Recalling \eqref{eq:U_t}, we see
\begin{align}
Z_t
&=
a\,\p_x\left(
J(U)\p_x\WW
\right)
+
a\,
\sum_k
(J(U)\p_x\WW, D_k(U)U_x)\nu_k(U)
+
\mathcal{O}(|Z|+|Z_x|+|\TW|)
\label{eq:aya2}
\\
&=
a\,J(U)\p_x^2\WW
+a\,\p_x(J(U))\p_x\WW
+
a\,
\sum_k
(J(U)\p_x\WW, D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad
+
\mathcal{O}(|Z|+|Z_x|+|\TW|).
\label{eq:aya3}
\end{align}
By using \eqref{eq:aya3}, we see
\begin{align}
&-\frac{e_1}{2a}(Z_t,J(U)U_x)J(U)U_x
\nonumber
\\
&=
-\frac{e_1}{2}(J(U)\p_x^2\WW,J(U)U_x)J(U)U_x
-\frac{e_1}{2}(\p_x(J(U))\p_x\WW,J(U)U_x)J(U)U_x
\nonumber
\\
&\quad
-\frac{e_1}{2}
\sum_k
(J(U)\p_x\WW, D_k(U)U_x)(\nu_k(U),J(U)U_x)J(U)U_x
+
\mathcal{O}(|Z|+|Z_x|+|\TW|).
\nonumber
\end{align}
The third term of the RHS vanishes, since $(\nu_k(U),J(U)U_x)=0$.
By noting \eqref{eq:0yumi}, we see that
the second term of the RHS is $\mathcal{O}(|Z|+|Z_x|+|\TW|)$.
Thus we have
\begin{align}
-\frac{e_1}{2a}(Z_t,J(U)U_x)J(U)U_x
&=
-\frac{e_1}{2}(\p_x^2\WW,U_x)J(U)U_x
+\mathcal{O}(|Z|+|Z_x|+|\TW|).
\label{eq:aya4}
\end{align}
On the other hand, by using \eqref{eq:aya2},
we obtain
\begin{align}
\frac{e_2}{8a}|U_x|^2Z_t
&=
\frac{e_2}{8}|U_x|^2
\p_x\left(
J(U)\p_x\WW
\right)
+
\frac{e_2}{8}|U_x|^2
\sum_k
(J(U)\p_x\WW, D_k(U)U_x)\nu_k(U)
\nonumber
\\
&\quad
+\mathcal{O}(|Z|+|Z_x|+|\TW|)
\nonumber
\\
&=
\frac{e_2}{8}
\p_x\left\{
|U_x|^2
J(U)\p_x\WW
\right\}
-
\frac{e_2}{4}
(\p_xU_x,U_x)J(U)\p_x\WW
\nonumber
\\
&\quad
+
\sum_k
\mathcal{O}
\left(
|\p_x\WW|
\right)
\nu_k(U)
+\mathcal{O}(|Z|+|Z_x|+|\TW|).
\label{eq:aya5}
\end{align}
By substituting \eqref{eq:aya4} and \eqref{eq:aya5}
into \eqref{eq:aya1},
we obtain
\begin{align}
\p_t\widetilde{\Lambda}
&=
-\frac{e_1}{2}(\p_x^2\WW,U_x)J(U)U_x
+
\frac{e_2}{8}
\p_x\left\{
|U_x|^2
J(U)\p_x\WW
\right\}
-
\frac{e_2}{4}
(\p_xU_x,U_x)J(U)\p_x\WW
\nonumber
\\
&\quad
+
\sum_k
\mathcal{O}
\left(
|\p_x\WW|
\right)
\nu_k(U)
+\mathcal{O}(|Z|+|Z_x|+|\TW|).
\nonumber
\end{align}
This shows that
\begin{align}
\lr{
\p_t\widetilde{\Lambda},\TW
}
&=
-\frac{e_1}{2}
\lr{
(\p_x^2\WW,U_x)J(U)U_x,\WW+\mathcal{O}(|Z|)
}
\nonumber
\\
&\quad
+\frac{e_2}{8}
\lr{
\p_x\left\{
|U_x|^2
J(U)\p_x\WW
\right\},\WW+\mathcal{O}(|Z|)
}
\nonumber
\\
&\quad
-\frac{e_2}{4}
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,\WW+\mathcal{O}(|Z|)
}
\nonumber
\\
&\quad
+
\lr{
\sum_k
\mathcal{O}
\left(
|\p_x\WW|
\right)
\nu_k(U)
, \WW+\mathcal{O}(|Z|)
}
\nonumber
\\
&\quad
+\lr{
\mathcal{O}(|Z|+|Z_x|+|\TW|),\WW+\mathcal{O}(|Z|)
}
\nonumber
\\
&\leqslant
-\frac{e_1}{2}
\lr{
(\p_x^2\WW,U_x)J(U)U_x,\WW
}
+\frac{e_2}{8}
\lr{
\p_x\left\{
|U_x|^2
J(U)\p_x\WW
\right\},\WW
}
\nonumber
\\
&\quad
-\frac{e_2}{4}
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,\WW
}
\nonumber
\\
&\quad
+
\lr{
\sum_k
\mathcal{O}
\left(
|\p_x\WW|
\right)
\nu_k(U), \WW
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
-\frac{e_1}{2}
\lr{
(\p_x^2\WW,U_x)J(U)U_x,\WW
}
-\frac{e_2}{4}
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW,\WW
}
\nonumber
\\
&\quad
+C\,\widetilde{D}(t)^2.
\label{eq:tom1}
\end{align}
\par
We next compute
$\lr{\p_t\WW,\widetilde{\Lambda}}$.
Observing \eqref{eq:eqWW}, we see
\begin{align}
\p_t\WW
&=a\,\p_x^2\left\{
J(U)\p_x^2\WW
\right\}
-
2a\,
\sum_{k}
\left(\p_x^3\WW,
J(U)D_{k}(U)U_x
\right)\nu_{k}(U)
\nonumber
\\
&\quad
+
\mathcal{O}(|Z|+|Z_x|+|\WW|+|\p_x\WW|+|\p_x^2\WW|).
\nonumber
\end{align}
By using this and by noting $\widetilde{\Lambda}=\mathcal{O}(|Z|)$,
we integrate by parts to obtain
\begin{align}
\lr{
\p_t\WW,\widetilde{\Lambda}
}
&\leqslant
R_8+R_9
+C\,\widetilde{D}(t)^2,
\label{eq:ma1}
\end{align}
where
\begin{align}
R_8&=a\,
\lr{
\p_x^2\left\{
J(U)\p_x^2\WW
\right\},\widetilde{\Lambda}
},
\nonumber
\\
R_9
&=
-2a\,
\lr{
\sum_{k}
\left(\p_x^3\WW,
J(U)D_{k}(U)U_x
\right)\nu_{k}(U),\widetilde{\Lambda}
}.
\nonumber
\end{align}
For $R_9$, noting $\widetilde{\Lambda}=\mathcal{O}(|Z|)$,
we use the integration by parts
and $(\nu_k(U),J(U)U_x)=0$ to obtain
\begin{align}
R_9
&\leqslant
-2a\,(-1)^3
\lr{
\sum_{k}
\left(\WW,
J(U)D_{k}(U)U_x
\right)\nu_{k}(U),\p_x^3\widetilde{\Lambda}
}
+
C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
2a\,
\lr{
\sum_{k}
\left(\WW,
J(U)D_{k}(U)U_x
\right)\nu_{k}(U),
-\frac{e_1}{2a}(\p_x^3Z,J(U)U_x)J(U)U_x
+
\frac{e_2}{8a}|U_x|^2\p_x^3Z
}
\nonumber
\\
&\quad
+
C\,\widetilde{D}(t)^2
\nonumber
\\
&=\frac{e_2}{4}
\lr{
\sum_{k}
\left(\WW,
J(U)D_{k}(U)U_x
\right)\nu_{k}(U),
|U_x|^2\p_x^3Z
}
+
C\,\widetilde{D}(t)^2.
\nonumber
\end{align}
Furthermore,
since
$\p_x^3Z=\p_x^2Z_x
=
\p_x\WW
+
\mathcal{O}(|Z|+|Z_x|+|\WW|)
=\p_x\WW+\mathcal{O}(|Z|+|Z_x|+|\TW|)$
and
$(\nu_k(U),\p_x\WW)=\mathcal{O}(|Z|+|Z_x|+|\WW|)$,
we have
\begin{align}
R_9
&\leqslant
\frac{e_2}{4}
\lr{
\sum_{k}
\left(\WW,
J(U)D_{k}(U)U_x
\right)\nu_{k}(U),
|U_x|^2\p_x\WW
}
+
C\,\widetilde{D}(t)^2
\leqslant
C\,\widetilde{D}(t)^2.
\label{eq:ma2}
\end{align}
For $R_8$,
we begin with
\begin{align}
R_8
&=
-\frac{e_1}{2}\,
\lr{
\p_x^2\left\{
J(U)\p_x^2\WW
\right\}, (Z,J(U)U_x)J(U)U_x
}
+
\frac{e_2}{8}\,
\lr{
\p_x^2\left\{
J(U)\p_x^2\WW
\right\}, |U_x|^2Z
}
\nonumber
\\
&=:R_{81}+R_{82}.
\nonumber
\end{align}
The integration by parts implies
\begin{align}
R_{81}
&=
-\frac{e_1}{2}\,
\lr{
J(U)\p_x^2\WW,
\p_x^2\left\{(Z,J(U)U_x)J(U)U_x\right\}
}
\nonumber
\\
&\leqslant
-\frac{e_1}{2}\,
\lr{
J(U)\p_x^2\WW,
(\p_xZ_x,J(U)U_x)J(U)U_x
}
\nonumber
\\
&\quad
-e_1\,
\lr{
J(U)\p_x^2\WW,
(Z_x,\p_x\left\{J(U)U_x\right\})J(U)U_x
}
\nonumber
\\
&\quad
-e_1\,
\lr{
J(U)\p_x^2\WW,
(Z_x,J(U)U_x)\p_x\left\{J(U)U_x\right\}
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
-\frac{e_1}{2}\,
\lr{
J(U)\p_x^2\WW,
(\p_xZ_x,J(U)U_x)J(U)U_x
}
\nonumber
\\
&\quad
+e_1\,
\lr{
J(U)\p_x\WW,
(\p_xZ_x,\p_x\left\{J(U)U_x\right\})J(U)U_x
}
\nonumber
\\
&\quad
+e_1\,
\lr{
J(U)\p_x\WW,
(\p_xZ_x,J(U)U_x)\p_x\left\{J(U)U_x\right\}
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&=:R_{83}+R_{84}+R_{85}+C\,\widetilde{D}(t)^2.
\label{eq:ma4}
\end{align}
Since $\UU=\p_xU_x+\displaystyle\sum_{k}(U_x,D_k(U)U_x)\nu_k(U)$
and $\VV=\p_xV_x+\displaystyle\sum_{k}(V_x,D_k(V)V_x)\nu_k(V)$,
we see
$$\p_xZ_x=\WW-\sum_{k}(Z_x,D_k(U)U_x)\nu_k(U)
-\sum_k(V_x,D_k(U)Z_x)\nu_k(U)
+
\mathcal{O}(|Z|),
$$
and thus
$
(\p_xZ_x, J(U)U_x)
=
(\WW, J(U)U_x)
+\mathcal{O}(|Z|)$.
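The last identity holds because the $\nu_k(U)$-components do not contribute: using $(\nu_k(U),J(U)U_x)=0$, we get
\begin{align}
(\p_xZ_x,J(U)U_x)
&=
(\WW,J(U)U_x)
-\sum_{k}
\left\{
(Z_x,D_k(U)U_x)
+(V_x,D_k(U)Z_x)
\right\}
(\nu_k(U),J(U)U_x)
+\mathcal{O}(|Z|)
\nonumber
\\
&=
(\WW,J(U)U_x)
+\mathcal{O}(|Z|).
\nonumber
\end{align}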
Substituting this, using \eqref{eq:J(wp)1} and \eqref{eq:J(wp)2},
and integrating by parts,
we have
\begin{align}
R_{83}&=
-\frac{e_1}{2}\,
\lr{
\p_x^2\WW,
(\p_xZ_x,J(U)U_x)U_x
}
\nonumber
\\
&\leqslant
-\frac{e_1}{2}\,
\lr{
\p_x^2\WW,
(\WW,J(U)U_x)U_x
}
-\frac{e_1}{2}\,
\lr{
\p_x^2\WW,
\mathcal{O}(|Z|)
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
-\frac{e_1}{2}\,
\lr{
(\p_x^2\WW, U_x)J(U)U_x, \WW
}
+C\,\widetilde{D}(t)^2.
\label{eq:ma5}
\end{align}
From \eqref{eq:kaehler4}, it follows that
\begin{equation}
\p_x\left\{J(U)U_x\right\}
=
J(U)\p_xU_x+\sum_k(U_x,J(U)D_k(U)U_x)\nu_k(U).
\nonumber
\end{equation}
Using this, $\p_xZ_x=\WW+\mathcal{O}(|Z|+|Z_x|)$,
and \eqref{eq:key1},
we see
\begin{align}
&(\p_xZ_x, \p_x\left\{J(U)U_x\right\})
\nonumber
\\
&=
(\p_xZ_x, J(U)\p_xU_x)
+\sum_k(U_x,J(U)D_k(U)U_x)(\nu_k(U),\p_xZ_x)
+
\mathcal{O}(|Z|+|Z_x|)
\nonumber
\\
&=
(\WW, J(U)\p_xU_x)
+\sum_k(U_x,J(U)D_k(U)U_x)(\nu_k(U),\WW)
+
\mathcal{O}(|Z|+|Z_x|)
\nonumber
\\
&=(\WW, J(U)\p_xU_x)+
\mathcal{O}(|Z|+|Z_x|).
\nonumber
\end{align}
This implies
\begin{align}
R_{84}&=
e_1\,
\lr{
\p_x\WW,
(\p_xZ_x,\p_x\left\{J(U)U_x\right\})U_x
}
\nonumber
\\
&\leqslant
e_1\,
\lr{
\p_x\WW,
(\WW, J(U)\p_xU_x)U_x
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&=
e_1\,
\lr{
(\p_x\WW, U_x)
J(U)\p_xU_x, \WW
}
+C\,\widetilde{D}(t)^2.
\label{eq:ma6}
\end{align}
In the same way, we use \eqref{eq:J(wp)2} to see
\begin{align}
&(J(U)\p_x\WW, \p_x\left\{J(U)U_x\right\})
\nonumber
\\
&=
(J(U)\p_x\WW, J(U)\p_xU_x)
+
\sum_k(U_x,J(U)D_k(U)U_x)
(J(U)\p_x\WW, \nu_k(U))
\nonumber
\\
&=
(\p_x\WW, P(U)\p_xU_x)
\nonumber
\\
&=
(\p_x\WW, \p_xU_x+\sum_k(U_x, D_k(U)U_x)\nu_k(U))
\nonumber
\\
&=
(\p_x\WW,\p_xU_x)
+
\mathcal{O}(|Z|+|Z_x|+|\WW|).
\nonumber
\end{align}
Substituting this, we obtain
\begin{align}
R_{85}
&=e_1\,
\lr{
J(U)\p_x\WW,
(\p_xZ_x,J(U)U_x)\p_x\left\{J(U)U_x\right\}
}
\nonumber
\\
&\leqslant
e_1\,
\lr{
\p_x\WW,
(\p_xZ_x,J(U)U_x)\p_xU_x
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
e_1\,
\lr{
\p_x\WW,
(\WW,J(U)U_x)\p_xU_x
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&=
e_1\,
\lr{
(\p_x\WW, \p_xU_x)J(U)U_x,\WW
}
+C\,\widetilde{D}(t)^2.
\label{eq:ma7}
\end{align}
Substituting \eqref{eq:ma5}, \eqref{eq:ma6}, and \eqref{eq:ma7}
into \eqref{eq:ma4},
we see that $R_{81}\leqslant R_{83}+R_{84}+R_{85}+C\,\widetilde{D}(t)^2$
is bounded as follows:
\begin{align}
R_{81}
&\leqslant
-\frac{e_1}{2}\,
\lr{
(\p_x^2\WW, U_x)J(U)U_x, \WW
}
+
e_1\,
\lr{
(\p_x\WW, U_x)
J(U)\p_xU_x, \WW
}
\nonumber
\\
&\quad
+
e_1\,
\lr{
(\p_x\WW, \p_xU_x)J(U)U_x,\WW
}
+C\,\widetilde{D}(t)^2.
\nonumber
\end{align}
Furthermore, by applying \eqref{eq:tae6} and \eqref{eq:tae7}
to the second and the third terms of the RHS above,
and by using
$\lr{T_3(U)\p_x\WW,\WW}\leqslant C\,\widetilde{D}(t)^2$ again,
we deduce
\begin{align}
R_{81}
&\leqslant
-\frac{e_1}{2}\,
\lr{
(\p_x^2\WW, U_x)J(U)U_x, \WW
}
+
e_1\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW, \WW
}
\nonumber
\\
&\quad
+
e_1\,
\lr{
T_3(U)\p_x\WW,\WW
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
-\frac{e_1}{2}\,
\lr{
(\p_x^2\WW, U_x)J(U)U_x, \WW
}
+
e_1\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW, \WW
}
+C\,\widetilde{D}(t)^2.
\label{eq:ma8}
\end{align}
For $R_{82}$, the integration by parts and the same argument as above
lead to
\begin{align}
R_{82}
&=
\frac{e_2}{8}
\lr{
J(U)\p_x^2\WW, \p_x^2\left\{|U_x|^2Z\right\}
}
\nonumber
\\
&=
\frac{e_2}{8}
\lr{
J(U)\p_x^2\WW, |U_x|^2\p_xZ_x+4(\p_xU_x,U_x)Z_x+\mathcal{O}(|Z|)
}
\nonumber
\\
&\leqslant
\frac{e_2}{8}
\lr{
|U_x|^2J(U)\p_x^2\WW, \WW
}
+
\frac{e_2}{2}
\lr{
J(U)\p_x^2\WW,(\p_xU_x,U_x)Z_x
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
\frac{e_2}{8}
\lr{
\p_x
\left\{|U_x|^2J(U)\p_x\WW\right\}, \WW
}
-
\frac{e_2}{4}
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW, \WW
}
\nonumber
\\
&\quad
-
\frac{e_2}{8}
\lr{
|U_x|^2\p_x(J(U))\p_x\WW, \WW
}
-
\frac{e_2}{2}
\lr{
J(U)\p_x\WW,(\p_xU_x,U_x)\WW
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&\leqslant
-
\frac{3e_2}{4}
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW, \WW
}
+C\,\widetilde{D}(t)^2.
\label{eq:ma9}
\end{align}
Therefore, from \eqref{eq:ma8} and \eqref{eq:ma9},
it follows that $R_8=R_{81}+R_{82}$ is bounded as follows:
\begin{align}
R_8
&\leqslant
-\frac{e_1}{2}\,
\lr{
(\p_x^2\WW, U_x)J(U)U_x, \WW
}
+
\left(e_1-\frac{3e_2}{4}\right)\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW, \WW
}
\nonumber
\\
&\quad
+C\,\widetilde{D}(t)^2.
\label{eq:ma10}
\end{align}
Consequently, by substituting \eqref{eq:ma2} and \eqref{eq:ma10}
into \eqref{eq:ma1},
we have
\begin{align}
\lr{
\p_t\WW, \widetilde{\Lambda}
}
&\leqslant
-\frac{e_1}{2}\,
\lr{
(\p_x^2\WW, U_x)J(U)U_x, \WW
}
\nonumber
\\
&\quad
+
\left(e_1-\frac{3e_2}{4}\right)\,
\lr{
(\p_xU_x,U_x)J(U)\p_x\WW, \WW
}
+C\,\widetilde{D}(t)^2.
\label{eq:tom2}
\end{align}
\par
Collecting the information
\eqref{eq:TW1}, \eqref{eq:WWt}, \eqref{eq:tom1}, \eqref{eq:tom2},
and \eqref{eq:e1e2},
we conclude
\begin{align}
\frac{1}{2}
\frac{d}{dt}
\|\TW\|_{L^2}^2
&\leqslant
(c+aS-e_1)\,
\lr{
(\p_x^2\WW,U_x)J(U)U_x,
\WW
}
\nonumber
\\
&\ \
+
\left(
\frac{aS+6b+7c}{2}+e_1-e_2
\right)
\lr{
(\p_xU_x,U_x)
J(U)\p_x\WW,
\WW
}
+C\,\widetilde{D}(t)^2
\nonumber
\\
&=
C\,\widetilde{D}(t)^2,
\nonumber
\end{align}
which is the desired result \eqref{eq:mainineq}.
\end{proof}
{\bf Acknowledgements.} \\
The author would like to thank Hiroyuki Chihara
for valuable comments and encouragement.
Thanks to his comments in \cite{chihara2},
the proofs of Theorems~\ref{theorem:uniqueness}
and \ref{theorem:existence}
have been made more comprehensible.
This work is supported by
JSPS Grant-in-Aid for
Young Scientists (B) \#24740090.
\section{Introduction}
Dense packings of hard spheres are an important starting point for
the study of simple liquids, metallic glasses, colloids, biological
systems, and granular matter \cite{Bernal,Finney,Scott,Jodrey,
Tobochnik, Lubachevsky, Torquato2005,chaudhuri10}. Of particular interest
is the densest possible packing that still possesses random
structure, ``random close packing'' (RCP), which is important for
physics and engineering. For example, the viscosity of dense
particle suspensions diverges when the particles approach the
RCP state \cite{Krieger}. In a classic experiment, Bernal and
Mason obtained the RCP volume fraction $\phi_{RCP} \approx
0.637$ by compressing and shaking a rubber balloon full of
metal ball bearings for a sufficiently long time to
achieve maximum density \cite{Bernal}. Scott and Kilgour also
reported $\phi_{RCP} \approx 0.637$ by pouring balls into a
large vibrating container \cite{Scott}. Their results were
sensitive to the experimental method, for example both the
frequency and amplitude of vibration. Likewise in computer
simulations, the value of $\phi_{RCP}$ depends on the protocol.
$\phi_{RCP}$ is between 0.642 and 0.648 with a rate-dependent
densification algorithm \cite{Jodrey}, 0.68 with a Monte Carlo
method \cite{Tobochnik}, and 0.644 with the Lubachevsky-Stillinger
packing algorithm \cite{Lubachevsky,Torquato2005}. All of these
results are for monodisperse spheres, in other words, spheres with
identical diameters.
The variety of results for $\phi_{RCP}$, in addition to being due to
the method of preparing the RCP state, perhaps also comes from the
poor definition of RCP \cite{Torquato2000,radin08}. The phrase ``random
close packing'' is composed of two terms, ``random'' and ``close
packing,'' which are inherently in conflict with each other.
An ideal \textit{random} state would have no correlation between
particles, but the constraint that particles cannot overlap already
diminishes the randomness of a physical packing. Furthermore, to
get a \textit{close packing} the most efficient method is to pack
particles into a crystalline array, which is highly non-random
\cite{Hales}. For example, a random arrangement of spheres
can be made denser if it partially includes dense crystalline
regions, but then it is less random \cite{Davis,Pouliquen}.
In 2000 Torquato and coworkers proposed the ``maximally random
jammed (MRJ)'' state as a more precise definition of RCP. MRJ states
are defined as the least locally ordered structures which are
also jammed so that no particles can move \cite{Torquato2000}.
A strictly jammed state should be incompressible and unshearable
\cite{Torquato2003}, while other definitions of jammed states
can involve external forces \cite{Cates} or experimental time
scales \cite{Liu}; the latter can involve questions of glassiness.
Returning to strictly jammed states, one method of quantifying
jamming is by considering the isothermal compressibility $K_T$,
which is determined by the structure factor at wave number $q=0$,
$K_T = 1/\rho (\partial \rho/ \partial p) = S(0)/\rho k_BT$
where $\rho$, $p$, $k_B$ and $T$ are density of the material,
pressure, Boltzmann constant, and temperature, respectively. Thus,
a strictly jammed state requires $K_T = S(0) = 0$ since this state
should be incompressible. Indeed, prior simulation works for the
strict jammed state of hard spheres show $S(0) \approx 0$ to within
numerical resolution \cite{Torquato2005,Torquato2003,Silbert}.
The observation $S(q \rightarrow 0) \rightarrow 0$ has been termed
``hyperuniformity,'' in that the density looks increasingly
uniform when considered on longer length scales
\cite{Torquato2005}.
The first physics study of the internal structure of a random close
packed system that we are aware of is the work of Smith, Foote,
and Busang \cite{smith29}. In 1929, they studied the packing of
shot and used acid to mark the contacts between spheres, reporting
the contact numbers for 1,562 particles taken from the interior
of a sample with 2,400 particles. In the 1960's, Bernal first
studied 500 particles taken from the interior of a sample with 5,000
particles \cite{Bernal}, and later 1,000 particles \cite{bernal64}.
In more recent times, 16,000 spheres were studied by Slotterback
{\it et al.} who used an index-matching fluid and laser-sheet
illumination to find the positions \cite{Losert}. Aste {\it et al.}
used x-ray tomography to study several different granular packings
containing 90,000 particles in an interior region \cite{Aste}.
These experiments provide useful data for testing theories and
studying the properties of RCP packings on large length scales.
In this article, we use a sedimented dense colloidal suspension as
an experimental realization of a RCP material, in the loose sense
of RCP rather than the strict sense of a MRJ state. We study the
detailed structure of our sample with confocal microscopy, which can
determine the three-dimensional positions of the particle centers
to high accuracy. By carefully imaging overlapping regions,
we observe a large volume containing over 450,000 particles. Our
data set is available online \cite{epaps}.
The data are used to determine which features of our realistic RCP
system are similar to the stricter ideal MRJ packing. The sample
satisfies several criteria for randomness, for example, having
only a tiny fraction of particles having even short-range order.
However, in contrast to simulated MRJ packings, we find the
isothermal compressibility is not zero, thus suggesting that in
at least this particular experimental realization of a RCP system,
there are differences with simulations.
It is important to note that our colloidal experiment differs
in several particulars from both granular experiments (such
as the early ones with ball bearings \cite{Bernal,Scott})
and simulations. First, the particles are not all identical;
they have a polydispersity of 5\% in their diameters. Second,
as the RCP state is formed by sedimentation, the particles have a
chance to diffuse due to Brownian motion. In some situations this
motion can help particles rearrange into crystalline packings, if
the sample has a volume fraction in the range $0.49 \le \phi \le
0.58$ \cite{Alder,Pusey}. While our experimental preparation method
avoids full crystallization, it is plausible that the sample could
be more ordered as a result of subtle rearrangements as particles
sediment toward their final positions. However, conventionally
such sedimented colloidal samples are thought of as RCP states.
Our primary motivation is to use our sample to discern properties
of the RCP state, and test the applicability of ideas derived
from simulation.
\section{Method}
\subsection{Sample preparation}
We use poly(methyl methacrylate) (PMMA) particles sterically
stabilized with poly-12-hydroxystearic acid \cite{Antl}.
To visualize the particles, they are dyed with rhodamine 6G
\cite{Dinsmore}. The mean diameter $d$ of our
particles is $d = 2.53$~$\mu$m with an uncertainty 1\%.
Additionally the particles have a polydispersity of $\sim 5$\%.
According to prior simulations, the volume fraction
for random close packing $\phi_{RCP}$ is between 0.64 and 0.66
for a 5\% polydisperse system \cite{Farr,schaertl94,hermes10},
which is almost the same as $\phi_{RCP}$ for monodisperse spheres.
References \cite{chaudhuri10,Torquato2000,hermes10} point out that the specific
value often depends on the simulation details.
We use a fast laser scanning confocal microscope (VT-Eye, Visitech)
which yields clear images deep inside our dense samples. Despite
the high density, the colloidal particles can be easily discerned as
shown in Fig.~\ref{Image}. We acquire three-dimensional (3D) scans
of our sample yielding a $62.7 \times 65.4 \times 30$~$\mu$m$^3$
observation volume for each image. As the sample is jammed,
particles do not move and we can scan slowly to achieve very
clean images: each 3D scan takes about 30~s.
Within each 3D image, particles are identified within
0.03~$\mu$m in $x$ and $y$, and within 0.05~$\mu$m in $z$
\cite{Dinsmore,Crocker96}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6cm]{fig1.eps}
\end{center}
\caption{Confocal micrograph of the colloidal sediment in ($x$, $y$) plane.
The image was taken 30 $\mu$m inside the
sample. The scale bar represents 10 $\mu$m.
The arrow indicates the direction of gravity during sedimentation.
}
\label{Image}
\end{figure}
The PMMA particles are initially suspended in a mixture of 85\%
cyclohexylbromide and 15\% decalin by weight. This mixture closely
matches both the density and refractive index of the particles
\cite{Dinsmore}. Then, to induce the particles to sediment, we add
a small amount of decalin to slightly decrease the density of the
dispersion fluid.
We can quantify the importance of sedimentation by computing the
nondimensional Peclet number. This is the ratio of the time for
a particle to diffuse its own radius $d/2$ to the time for it
to fall a distance $d/2$. The diffusion time scale is $\tau_D
= d^2/(8D)$, using the diffusion constant $D$, which for our
particles and solvent is $D = 0.1$~$\mu$m$^2$/s. This gives
us $\tau_D = 6$~s. The sedimentation time scale is $\tau_S =
d/(2 v_{\rm sed})$. We observe the height of the sediment as a
function of time in a macroscopic sample of dilute particles, and
find $v_{\rm sed} = 0.035$~$\mu$m/s, giving us $\tau_S = 32$~s.
Thus Pe $\approx 0.2$, suggesting that particles can diffuse long
distances while they sediment; an alternate implication is that
hydrodynamic interactions between particles due to sedimentation
are perhaps less important than diffusion \cite{Segre}. Prior to
the start of sedimentation, the initial volume fraction is about
0.30 (in the stable liquid phase). We stir the particles by
ultrasonic wave before sedimentation to avoid the Rayleigh-Taylor
instability \cite{Wysocki}.
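The Peclet estimate above can be reproduced with a few lines of arithmetic. This is a sketch using the input values quoted in the text; the quoted $\tau_D = 6$~s and $\tau_S = 32$~s differ slightly from the values below, presumably because they were computed from unrounded inputs.

```python
# Peclet-number estimate for the sedimenting particles.
# Input values are the ones quoted in the text.
d = 2.53       # particle diameter [um]
D = 0.1        # diffusion constant [um^2/s]
v_sed = 0.035  # sedimentation speed [um/s]

def peclet(d, D, v_sed):
    """Ratio of the time to diffuse a radius d/2 to the time to settle d/2."""
    tau_D = d**2 / (8 * D)    # diffusion time scale (convention of the text)
    tau_S = d / (2 * v_sed)   # sedimentation time scale
    return tau_D / tau_S, tau_D, tau_S

Pe, tau_D, tau_S = peclet(d, D, v_sed)
```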
During sedimentation, the sample
passes through the volume fraction range where crystals can be
nucleated, approximately $0.51 < \phi < 0.60$ for our sample with
5\% polydispersity \cite{pusey09,fasolo04}. We do not observe
crystals in our final sample, and the most likely explanation is
that sedimentation happens faster than nucleation, which is quite
slow for polydisperse samples \cite{pusey09,auer01,schope07}.
For samples with $Pe < 0.1$, we do observe crystallization,
although we have not carefully studied the critical $Pe$ for
which crystallization is suppressed; see Ref.~\cite{hermes10} for
further discussion.
Given that diffusion is faster than sedimentation, the sample
readily equilibrates, at least at low volume fractions as the
sedimentation starts. Hence,
we believe that our final state is well-defined and
insensitive to the initial state.
During sedimentation, the Stokes drag force acting on the
particles is given by $F = 3 \pi \eta d v$, with viscosity
$\eta=2.18$~mPa$\cdot$s and $v=v_{\rm sed}$. The buoyant weight
of the particles is given by $W_b = \Delta \rho \pi d^3 g/6$
with $g$ the acceleration due to gravity and $\Delta \rho$ being
the density difference. Balancing the gravitational force with
the drag force, we can estimate the density difference as $\Delta
\rho = 0.038$~g/cm$^3$. For reference, the particle density is
1.2340 g/cm$^3$. Balancing the gravitational energy $W_b h$ with
the thermal energy $k_B T$ lets us determine the scale height $h =
1.8$~$\mu$m (using $k_B$ as Boltzmann's constant). The small scale
height suggests that in the final sedimented state, there will be
no density stratification except right at the interface between
the dense sediment and the remaining solvent; that interface will
have a thickness $O(h)$.
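The force balance above can be written out explicitly. This is a sketch with our own choice of temperature (295~K); since $v_{\rm sed}$ was measured in a dilute sample, only the order of magnitude of $\Delta\rho$ and $h$ should be expected to match the quoted values.

```python
import math

# Stokes drag vs. buoyant weight for a settling sphere, using the values
# quoted in the text. The temperature here (295 K) is an assumption.
eta = 2.18e-3        # solvent viscosity [Pa s]
d = 2.53e-6          # particle diameter [m]
v_sed = 0.035e-6     # sedimentation speed [m/s]
g = 9.81             # gravitational acceleration [m/s^2]
kT = 1.38e-23 * 295  # thermal energy [J]

# 3*pi*eta*d*v_sed = (pi/6) * d^3 * drho * g  =>  density mismatch:
drho = 18 * eta * v_sed / (g * d**2)     # [kg/m^3]; divide by 1000 for g/cm^3
W_b = drho * math.pi * d**3 * g / 6      # buoyant weight [N]
h = kT / W_b                             # gravitational scale height [m]
```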
During the sedimentation process, it takes about 2 days for the
sample to initially sediment to the bottom and form a glassy
state. However, the sedimentation speed is slow at high $\phi$
\cite{Paddy,Marconi}. Thus, we wait 90 days to complete the
sedimentation before we put the sample on
the microscope. We also re-checked the
sample 300 days after the initial sediment, and found the same
results as a 90 day old sample.
We use the convention that the $y$ direction is the axis
corresponding to gravity during the sedimentation process (see
Fig.~\ref{Image}).
The sample chamber is made from glass slides and
coverslips, sealed with UV-curing epoxy (Norland), with the sample dimensions
being $x=6$~mm, $y=20$~mm, and $z=0.14$~mm. When we measure the
structure, we lay our sample on the microscope; that is, the optical
$z$ axis is parallel to gravity and the microscope looks into the
thinnest dimension of the sample chamber (for ease of viewing).
In the highly concentrated sample, any subsequent gravity-induced
particle rearrangements are much slower than our measuring time.
In particular, we do not observe any particle flow in the sample,
and the structure does not change at all during measurement.
Near the flat coverslip of our sample chamber, particles layer
against the wall \cite{Ken,nugent07prl}. To avoid the influence of
this, we take our 3D images about 1 mm above the bottom of the
sample chamber along the $y$ axis and about 15 $\mu$m above the
glass slide along the $z$ axis. Simulations show that wall effects decay fairly
rapidly ($\sim 4$ diameters = 10 $\mu$m) \cite{Ken}, and in our data
we see no density fluctuations as a function of the distance $z$
away from the coverslip.
Of course, sedimentation with hydrodynamic interactions and
Brownian motion is not a protocol followed in simulations of RCP
states. The algorithm developed by Lubachevsky and Stillinger
considers hard particles moving ballistically \cite{Lubachevsky}. The particles
start very small, and continue interacting as they are gradually
swelled until the system jams.
The method of O'Hern and co-workers is similar, starting with
small particles that grow, but their particles are not infinitely
hard, nor do they have velocities \cite{OHern}. Rather, the
simulation proceeds until the particles are maximally swelled but
non-overlapping, thus giving the final hard-sphere state.
Tobochnik and Chapin devised a similar algorithm which used Monte
Carlo moves to eliminate overlaps \cite{Tobochnik}.
These ``expand and eliminate the overlap'' methods
are similar to an earlier method due to Jodrey and Tory which
slowly shrank spheres, sliding pairs of spheres linearly to minimize
their overlap, until all spheres had no overlaps \cite{Jodrey}.
These methods all have the strength that the RCP state is generated
isotropically, in contrast to our experiment where gravity sets
a direction. (As discussed below, we do not see anything special
about the direction of gravity in our data.) Our experimental
method does have the feature that our spheres never overlap, in
contrast to algorithms where overlaps are allowed at
intermediate stages \cite{Jodrey,OHern,Tobochnik,hermes10}, although it is not
obvious that intermediate stage overlaps would cause substantially
different results in the final state. In some ways our
experimental protocol is similar to a method by Visscher and
Bolsterli from 1972 \cite{visscher}. Their algorithm dropped
particles one at a time at random positions until they collided with
the floor or a previously placed particle; the falling particle then
rolled downhill until it reached a locally stable position. In our experiment,
all the particles fall
simultaneously, and also their Brownian motion gives them the
ability to find better packings than the Visscher and Bolsterli
algorithm.
\subsection{Connection of 3D images}
To take a large ensemble, we scan a grid of 3D images with a
small amount of overlap in $x$ and $y$. We compute particle
positions from each image and then we connect one image to
an adjacent overlapping image. Particles are considered as
superimposed when $|\vec{r}_{ij} - \vec{r}_{lk}| <$ 0.2 $\mu$m
where $\vec{r}_{ij}$ is the position of particle $i$ in image $j$
and $\vec{r}_{lk}$ is the position of particle $l$ in adjacent
image $k$. To achieve this, we apply small displacement shifts
$\Delta x$, $\Delta y$ and $\Delta z$ to one image, and look for the
fraction $f$ of superimposed particles within the overlapped zones.
Figure \ref{connect}(a) shows $f$ in a ($\Delta x$, $\Delta y$)
plane with the resolution of one pixel accuracy, and we find one
spot where $f \sim 1$. The secondary ring around the central
spot corresponds to the first peak of the pair correlation function,
where some coincidences between particle positions are expected.
While Fig.~\ref{connect}(a) shows $f$ in a two-dimensional plane,
we calculate $f$ using shifts in the $z$ direction as well.
We next apply sub-pixel displacement shifts around the spot
in Fig.~\ref{connect}(a) to better resolve the peak.
Finally, we calculate the sum of the squared distances
between the positions of the overlapped particles within
each region, and find the global choices of shift values
that minimize the overall squared error, to provide the most
accurate shift factors for the overlap. Using the shift factors,
we then link up the particle positions in adjacent sections.
The coincident particles are replaced by their average position.
Figure \ref{connect}(b) shows particle positions at $5 < z <
5.2$ $\mu$m after connecting 4 separate overlapping images.
The particle positions are well-superimposed in the overlapping
regions.
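A minimal version of this shift search might look as follows. This is a sketch, not the code used for the analysis; the 0.2~$\mu$m match tolerance is taken from the text, while the candidate-shift grid and function names are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_shift(pos_a, pos_b, candidate_shifts, tol=0.2):
    """Shift pos_b by each candidate and return the (shift, f) pair that
    maximizes the fraction f of shifted particles landing within tol
    (micrometers) of some particle of pos_a."""
    tree = cKDTree(pos_a)
    best_s, best_f = None, -1.0
    for shift in candidate_shifts:
        dist, _ = tree.query(pos_b + shift, distance_upper_bound=tol)
        f = np.mean(np.isfinite(dist))   # unmatched particles get dist = inf
        if f > best_f:
            best_s, best_f = shift, f
    return best_s, best_f
```

In practice one would scan a pixel-resolution grid of shifts first and then refine to sub-pixel values around the peak, as described above.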
Our sample chamber contains approximately 500,000,000
particles.
Using the overlapping image method,
we obtain a large 3D data set with size
492 $\mu$m $\times$ 513 $\mu$m $\times$ 28 $\mu$m, containing more
than 500,000 particles. Due to artifacts when identifying
particles near the image edges, we clip the data evenly from the
boundaries and our final data set is $V=492$~$\mu$m $\times$ 513 $\mu$m
$\times$ 23.5 $\mu$m, containing $N=453,136$ particles.
This gives us $\phi_{RCP} = N (\pi d^3/6) / V = 0.646 \pm
0.020$, with the error bars due to the 1\% uncertainty of the
mean particle diameter. Our value is in agreement with
simulations that considered polydispersity
\cite{Farr,schaertl94}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{fig2.eps}
\end{center}
\caption{(color online) (a) The image plot of the fraction of
successfully superimposed particles $f$ in a plane of ($\Delta x$,
$\Delta y$). The dark central region corresponds to $f$ = 1, meaning that
all possible particle overlaps are successful. (b) The circles,
triangles, squares and crosses correspond to particle positions
obtained from 4 separate 3D images. (c) The local number of
particles observed $N$ normalized by the average, as a function
of each axis after connecting the images. We add an offset to the data of $y$
and $z$ so they can be seen clearly.
}
\label{connect}
\end{figure}
We also examine the average number of particles $N$ observed as
a function of $x$, $y$, and $z$. To do this as a function of
$x$, we count the particles which are located between $x$ and
$x+0.2$ $\mu$m for a sequence of $x$ values; a similar procedure
is used for $N(y)$ and $N(z)$. The number of particles $N(x)$
as a function of $x$ is fairly flat, as are $N(y)$ and $N(z)$,
as shown in Fig.~\ref{connect}(c).
However, there are small residual oscillations in $x$ with the
standard deviation of $N(x)/\langle N \rangle$ being 0.027 and
a period of approximately 33~$\mu$m$\approx 13d$.
This is an artifact of our connection algorithm, as we connect
the images along the $y$ direction first, then we connect them along
the $x$ direction. If we change this order, we find $N(x)$ becomes
flat and $N(y)$ undulates. To evaluate effects of this oscillation,
we calculate the structure factor and the pair correlation function
using both connection orders ($x$ first or $y$ first), and find
almost identical results. Thus, we ignore these oscillations.
\subsection{Detection of ordered particles} \label{sec:loc}
We use a rotationally invariant local bond order parameter $d_6$
to look for crystalline particles \cite{Steinhardt,Wolde,
Gasser}. The idea is to calculate for each particle a complex
vector $q_{6m}(i)$, whose components $m$ depend on the
orientation of the neighbors of particle $i$ relative to $i$.
Each of the 13 components of the vector is given by:
\begin{equation}
q_{6m}(i) = \frac{1}{N_b}{\sum^{N_b}_{j=1} Y_{6m}(\hat{r}_{ij})},
\end{equation}
where $N_b$ is the number of nearest neighbor particles for
particle $i$, $\hat{r}_{ij}$ is the unit vector pointing from
particle $i$ to its $j$th neighbor, and $Y_{lm}$ is a spherical
harmonic function. The $q_{6m}$ parameters are the
coefficients for the spherical harmonics in an expansion of the
vector directions $\hat{r}_{ij}$, and thus capture a sense of the
structure around particle $i$. The $l=6$ harmonics are used as
it is known that on a local level, hexagonal symmetry is often
present due to packing constraints \cite{Steinhardt,Wolde}.
The neighbors of a particle are defined as those with centers
separated by less than $1.4d$ (which is the location of the
first minimum of the pair correlation function).
These 13-dimensional complex vectors are then
normalized to unity using
\begin{equation}
\hat{q}_{6m}(i) = \frac{q_{6m}(i)}{(\sum_{m}q_{6m}(i)\cdot
q^*_{6m}(i) )^{0.5}}.
\end{equation}
Then, we compute $d_6$ as:
\begin{equation}
d_6(i,j) = \sum_{m=-6}^6 \hat{q}_{6m}(i)\cdot \hat{q}^*_{6m}(j).
\end{equation}
$d_6(i,j)$ is a normalized quantity correlating the local
environments of neighboring $i$ and $j$ particles. $d_6(i,j)$ is a
scalar and its range is $-1 \le d_6(i,j) \le 1$;
$d_6(i,j)=1$
would correspond to two particles that have identical local
environments, at least identical in the sense captured by the
$q_{6m}$ data. Two neighboring
particles are termed ``ordered neighbors'' if $d_6(i,j) > 0.5$.
The number of ordered neighbors $N_o^i$ is determined for each
particle. $N_o^i$ measures the amount of similarity of structure
around neighboring particles. $N_o^i$=0 corresponds to random
structure around particle $i$, while a large value of $N_o^i$
means that particle $i$ and its neighbor particles have similar
surroundings. Following prior work, particles with $N_o^i \ge 8$
are classed as crystalline particles, and the other particles are
liquid-like particles \cite{Wolde}.
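The three definitions above can be condensed into a short sketch. This is our own illustration, assuming the neighbor positions have already been found with the $1.4d$ cutoff; note that \texttt{scipy}'s spherical-harmonic convention takes the azimuthal angle first.

```python
import numpy as np
from scipy.special import sph_harm

def q6m_hat(center, neighbors):
    """Normalized bond-order vector q^hat_{6m} of one particle."""
    r = np.asarray(neighbors, float) - np.asarray(center, float)
    r /= np.linalg.norm(r, axis=1)[:, None]          # unit bond vectors
    theta = np.arctan2(r[:, 1], r[:, 0])             # azimuthal angle
    phi = np.arccos(np.clip(r[:, 2], -1.0, 1.0))     # polar angle
    m = np.arange(-6, 7)
    q = sph_harm(m[:, None], 6, theta[None, :], phi[None, :]).mean(axis=1)
    return q / np.sqrt(np.vdot(q, q).real)           # normalize to unity

def d6(q_i, q_j):
    """d_6(i,j) = sum_m q_{6m}(i) q*_{6m}(j); real and in [-1, 1]."""
    return np.vdot(q_j, q_i).real
```

Two particles whose neighbor geometries are identical give $d_6 = 1$, the limiting case described above.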
We also compute the $\hat{W}_l^i$ parameter to specify local structures:
face centered cubic (fcc),
icosahedral structure (icos), hexagonal close packed (hcp)
and body centered cubic (bcc) \cite{Steinhardt}.
The $\hat{W}_l^i$ parameter is defined as
\begin{eqnarray}
\bar{Q}^i_{lm} &\equiv& \langle Y_{lm}(\hat{r}_{ij}) \rangle \\
W_l^i &=& \sum_{m_1,m_2,m_3} \left (
\begin{array}{ccc}
l & l & l \\
m_1 & m_2 & m_3
\end{array}
\right ) \bar{Q}_{lm_1}^i\bar{Q}_{lm_2}^i\bar{Q}_{lm_3}^i
\end{eqnarray}
where $\langle \rangle$ corresponds to the average over neighboring
particles $j$, $m_1 + m_2 + m_3 = 0$, and
\begin{eqnarray}
\hat{W}_l^i &\equiv& W_l^i / \left ( \sum^l_{m=-l} |\bar{Q}^i_{lm}|^2 \right )^{3/2}.
\end{eqnarray}
The coefficients
\begin{eqnarray}
\left (
\begin{array}{ccc}
l & l & l \\
m_1 & m_2 & m_3
\end{array}
\right ) \nonumber
\end{eqnarray}
are Wigner $3j$ symbols. Similar to the $q_{6m}$ parameters
discussed above, the $\hat{W}_l$ parameters are able to capture a
sense of the local ordering with $l$-fold symmetry, and have been
used before to help classify local structure; see
Ref.~\cite{Steinhardt}.
The values of $\hat{W}_l^i$ for ideal
structures are listed in Table \ref{Wl} \cite{Steinhardt}.
These ideal structures are unrealistic for experimental data,
so we generate 50,000 representations of each ordered structure
and perturb their positions by 5 \% of the particle diameter,
to match the polydispersity of our experimental particle sizes.
This gives us a distribution of $\hat{W}_l^i$ for each ordered
structure (Table \ref{Wl2}). Within our experimental data,
we calculate $\hat{W}_l^i$ ($l$ = 4, 6, 8) for each particle.
A particle is classed as an ordered particle when $\hat{W}_4$,
$\hat{W}_6$ and $\hat{W}_8$ of a particle are simultaneously within
the ranges of one structure shown in Table \ref{Wl2}. Otherwise,
particles are classed as random particles.
\begin{table}
\begin{center}
\caption{The values of $\hat{W}_l$ ($l$ = 4, 6, 8) for
ideal structures of fcc, icosahedron, hcp and bcc \cite{Steinhardt}.
We add the notation of (i) for each ideal structure.
}
\begin{ruledtabular}
\begin{tabular}{lccc}
& $\hat{W}_4$ & $\hat{W}_6$ & $\hat{W}_8$ \\
\hline
fcc(i) & -0.159316 & -0.013161 & 0.058454 \\
icos(i) & & -0.169754 & \\
hcp(i) & 0.134097 & -0.012442 & 0.051259 \\
bcc(i) & 0.159317 & 0.013161 & -0.058455 \\
\end{tabular}
\end{ruledtabular}
\label{Wl}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{The ranges of values of $\hat{W}_l$ ($l$ = 4, 6, 8) for
structures with 5\% perturbations
from ideal structures \cite{Steinhardt}.
We add the notation of (p) for the perturbed structures.
}
\begin{ruledtabular}
\begin{tabular}{lccc}
& $\hat{W}_4$ & $\hat{W}_6$ & $\hat{W}_8$ \\
\hline
fcc(p) & -0.085 $\sim$ -0.169 & -0.0109 $\sim$ -0.0193 & -0.0180 $\sim$ 0.0640 \\
icos(p) & -0.050 $\sim$ 0.200 & -0.171 $\sim$ -0.162 & -0.090 $\sim$ 0.090 \\
hcp(p) & 0.067 $\sim$ 0.138 & -0.036 $\sim$ -0.004 & 0.000 $\sim$ 0.080 \\
bcc(p) & 0.152 $\sim$ 0.161 & -0.015 $\sim$ 0.021 & -0.072 $\sim$ 0.060 \\
\end{tabular}
\end{ruledtabular}
\label{Wl2}
\end{center}
\end{table}
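As an illustration of the $\hat{W}_l$ construction, the sketch below evaluates $\hat{W}_6$ for an ideal icosahedron; the vertex coordinates and the use of \texttt{sympy} for the Wigner $3j$ symbols are our own choices, not part of the analysis code. Up to numerical precision it should recover the icos(i) entry of Table \ref{Wl}.

```python
import numpy as np
from scipy.special import sph_harm
from sympy.physics.wigner import wigner_3j

def w_hat(l, bonds):
    """Rotationally invariant W^hat_l of a set of bond vectors."""
    r = bonds / np.linalg.norm(bonds, axis=1)[:, None]
    theta = np.arctan2(r[:, 1], r[:, 0])
    phi = np.arccos(np.clip(r[:, 2], -1.0, 1.0))
    # Q_lm averaged over the bonds, m = -l ... l
    Q = np.array([sph_harm(m, l, theta, phi).mean() for m in range(-l, l + 1)])
    W = 0.0
    for m1 in range(-l, l + 1):
        for m2 in range(-l, l + 1):
            m3 = -m1 - m2                 # Wigner 3j vanishes unless m1+m2+m3=0
            if -l <= m3 <= l:
                W += float(wigner_3j(l, l, l, m1, m2, m3)) * \
                     (Q[m1 + l] * Q[m2 + l] * Q[m3 + l]).real
    return W / np.sum(np.abs(Q)**2)**1.5  # normalize per the definition

# Icosahedron: cyclic permutations of (0, +-1, +-phi), phi the golden ratio.
gr = (1 + np.sqrt(5)) / 2
ico = np.array([(0.0, s1, s2 * gr) for s1 in (1, -1) for s2 in (1, -1)])
ico = np.vstack([ico, ico[:, [2, 0, 1]], ico[:, [1, 2, 0]]])
```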
\subsection{Calculating the structure factor}
\label{sec:sq}
We compute the structure factor $S(\vec{q})$ via a direct Fourier
transform of the particle position, $S(\vec{q}) =
N^{-1}|\sum^N_{i=1} \exp(i \vec{q} \cdot \vec{r}_i)|^2$
where $\vec{r}_i$ is the particle position. $S(q)$ is the average
of $S(\vec{q})$ over all wave vectors with $q = |\vec{q}|$.
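The direct Fourier transform defined above is straightforward to implement (a sketch; the function name is ours):

```python
import numpy as np

def structure_factor(positions, qvecs):
    """S(q) = N^{-1} |sum_i exp(i q . r_i)|^2 for each row of qvecs."""
    phases = np.exp(1j * qvecs @ positions.T)        # shape (n_q, N)
    return np.abs(phases.sum(axis=1))**2 / positions.shape[0]
```

For a Bragg wave vector of a perfect lattice this returns $S = N$, while far from any Bragg condition the phases add incoherently and $S$ is of order one.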
Our large images have two advantages for calculating the structure
factor $S(q)$. The first is a high resolution with respect to $q$,
as the resolution is given by $\delta q = 2 \pi / L$ where $L$ is
the image size. Our sample size is 492 $\mu$m $\times$ 513 $\mu$m
$\times$ 23.5 $\mu$m and this yields $\delta q = 0.0128$~$\mu m^{-1}$.
The second advantage of a large image is the reduction of boundary
effects. Our experimental data set does not obey periodic boundary
conditions, unlike most simulations. Thus, we need to use a
window function to minimize the influence of the data cutting off
at the boundaries, or we need to periodically replicate the data.
Both these procedures increase $S(q)$ only near $q=0$, but it is
precisely $S(q=0)$ that is of interest.
Larger images allow us to
go to smaller $q$ with less problems.
We checked the Hann window,
the Hamming window, the Blackman window, and also using no window function.
We find that $S(q)$ varies with this choice for $q < 0.55$~$\mu m^{-1}$, corresponding to $qd/2\pi
= 0.2$. That is, our results for $q > 0.55$~$\mu m^{-1}$ are
independent of the choice of window function. In what follows, we
do not use a window function, and will focus on the results for
small $q$, but considering only $q > 0.55$~$\mu m^{-1}$.
\section{Results}
\subsection{Minimal local ordering}
First, we investigate the randomness of our sample.
We compute the fraction of ordered neighbors in the sediment
of our colloidal suspension using the $d_6$ parameter described in
Sec.~\ref{sec:loc}.
Fig.~\ref{local}(a) shows the probability of finding particles
with a given number of ordered neighbors $N_o$.
Following prior work, particles with $N_o^i \ge 8$ are classed
as crystalline particles, and the other particles are liquid-like
particles \cite{Wolde}. We find the fraction of crystalline
particles is below 0.03, and that these
particles are well dispersed throughout the sample,
as shown in Fig.~\ref{local}(b). At most, we see small
crystallites composed of 3 or 4 crystalline particles which
are nearest neighbors. Furthermore, the fraction of particles
with $N_o$ below 3 is over 0.8. This means that for over 80\%
of the particles, the local arrangement is not similar to that
of their nearest neighbors. The effects of the spatial
distribution of crystalline particles on the structure
will be discussed below. We consider our system to be
essentially randomly packed, since the crystalline particles
constitute a quite low fraction and are well dispersed.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig3.eps}
\end{center}
\caption{(color online). (a) The probability of the number of
ordered neighbors $N_o$. When a particle has $N_o^i \ge 8$, it
is classed as crystalline. The circles correspond to the local volume
fraction calculated from the Voronoi cell volume, averaged over
all particles with the given value of $N_o$.
(b) The spatial distribution of crystalline
particles in 3D image. This image is 115 $\mu$m $\times$ 115
$\mu$m $\times$ 23.5 $\mu$m. The other particles are not drawn,
to better show the crystalline particles.
}
\label{local}
\end{figure}
We also compute the fraction of specific local ordered structures:
fcc, icosahedron structure (icos), hcp and bcc. The importance
of those structures was emphasized over 50 years ago by Frank
\cite{Frank}; for example, the icosahedral arrangement has a
significantly lower energy than an hcp or fcc cluster for simple
Lennard-Jones potentials. To specify local ordered structure,
we compute $\hat{W}_l^i$ (l = 4, 6, 8) parameters for each
particle \cite{Steinhardt} (see Sec.~\ref{sec:loc}). We find
that the fraction of particles that are fcc, icosahedron, hcp
and bcc are 0.0020, 0.0001, 0.0066 and 0.0014, respectively.
The sum of those fractions is $\sim$ 0.01, and this is consistent
with the result of the $N_o^i$ analysis. Again, this suggests that
the sample is randomly packed. In addition, it is interesting
that the icosahedron is the least common structure we observe in our
packing of hard-sphere-like particles, whereas the icosahedral structure
is the most stable local structure for Lennard-Jones potentials \cite{Frank}.
This is consistent with many prior observations, and recent
simulations suggest that icosahedral structures are indeed not as
relevant for random close packed spheres as polytetrahedrons are
more favored local structures \cite{vanmeel09}. We also find that
the fraction of hcp ordering in our sample is higher than that of fcc.
\subsection{Voronoi cell volume distribution}
Next, we study the local volume fraction of the sediment
at the particle length scale.
We compute the Voronoi decomposition which is
a unique partitioning of space. Each particle is within its
own Voronoi polyhedron, and the Voronoi polyhedron is the region
of space which is closer to the given particle than any other particle
\cite{Preparata}.
We calculate a volume for each Voronoi cell except for
those cells located
on the boundaries, which have incorrectly defined volumes.
We compute the local volume fraction for each particle as
$\phi_i = \pi \langle d \rangle^3 /(6 V_i)$,
where $\phi_i$ and $V_i$ are the local volume fraction and
Voronoi cell volume for particle $i$, respectively.
We use the average diameter $\langle d \rangle$
since we cannot detect each particle's diameter.
The circles in Fig.~\ref{local}(a) show the average local volume fraction
as a function of $N_o$.
We find that the local volume fraction is almost constant at $N_o \le$ 2,
but increases with larger $N_o$.
This result means that the few highly ordered particles have a higher
local volume fraction than random particles.
This is natural, since ordered phases such as the fcc crystal are the most
densely packed, and this tendency is suggested by previous reports \cite{Perera, Luchnikov, Eric2002}.
Next, we compute a distribution of Voronoi cell volume.
Aste and coworkers proposed a universal function of the
distribution of Voronoi cell volumes \cite{Aste}, and the form is
described as
\begin{eqnarray}
P(V,k) = \frac{k^k}{(k-1)!}\frac{(V-V_{min})^{k-1}}{(\langle V \rangle - V_{min})^k}
\exp \left (-k\frac{V-V_{min}}{\langle V \rangle - V_{min}} \right )
\label{voleq}
\end{eqnarray}
where $\langle V \rangle$ is the average of the Voronoi cell
volumes. It is worth noting that the only adjustable parameter in
Eq.~\ref{voleq} is $k$, other than $V_{min}$ which is constrained.
$k$ is termed the ``shape parameter'' and corresponds to the
number of elementary cells composing the Voronoi cell \cite{Aste}.
For instance, the value of $k$ is 1 in an fcc crystal, while $k$ is
close to the number of nearest neighbor particles (about 12 or 13)
in a random structure \cite{Aste}. We choose $V_{min} = 0.694 d^3$,
which is the smallest Voronoi cell that can be built in a packing
of monodisperse spheres \cite{Aste}. Figure \ref{voronoi}(a)
shows a distribution of the Voronoi cell volumes as a function
of $(V-V_{min})/(\langle V \rangle - V_{min})$. The shape of
the distribution is asymmetric and not Gaussian, that is, the
distribution is narrow at small volumes and broad at large volumes.
We fit the distribution of Voronoi cell volumes with Eq.~\ref{voleq}
and obtain $k$ = 13.1 (the solid line in Fig.~\ref{voronoi}(a)).
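Since Eq.~\ref{voleq} is a gamma distribution in the shifted variable $V - V_{min}$, the shape parameter can also be estimated directly from the first two moments of the measured cell volumes; a minimal sketch (illustrative, not our fitting code):

```python
import numpy as np

def estimate_k(volumes, v_min):
    """Eq. (voleq) states V - V_min ~ Gamma(shape=k, scale=(<V>-V_min)/k),
    so the method of moments gives k = mean^2 / variance."""
    x = np.asarray(volumes) - v_min
    return x.mean()**2 / x.var()
```

For well-sampled data, this moment estimate and a direct nonlinear fit of the histogram, as done for Fig.~\ref{voronoi}(a), should agree.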
The tail of the distribution is broader than the fitting line,
perhaps due to the 5\% polydispersity of our particles. The $k$
value was investigated in experiments using small glass beads
($\sim$ 250 $\mu$m) in water \cite{Aste}, acrylic spheres with
different preparation methods \cite{Aste} and larger glass beads
($\sim$ 3 mm) in oil \cite{Losert}. These similar experiments
found 11 $\le k \le$ 13 \cite{Aste} and $k$ = 14.2 $\pm$
0.6 \cite{Losert} for random sphere packings; $k$ varies between
experiments since it depends slightly on the polydispersity.
Our experimentally observed value of $k$ = 13.1 is consistent
with those prior experiments. This is further evidence that the
arrangement of our sample is random. We note that a universal
value of $k$ has been proposed for the distribution of Voronoi cell
volumes in random sphere packings, with the evidence coming from
experiments with non-Brownian particles \cite{Aste, Losert}.
Our agreement with the prior work suggests that our close-packed
sediment is not strongly different despite the Brownian motion
that the particles have during sedimentation.
Within each Voronoi cell, we now consider the positions of particles
relative to the Voronoi cell ``center of mass.''
We compute a vector $\vec{\Delta r}_i \equiv
\vec{r}_i - \vec{g}_i$ where $\vec{r}_i$ is the position vector
for particle $i$ and $\vec{g}_i$ is the position vector for
the center of mass of the Voronoi cell which includes particle $i$.
Figure \ref{voronoi}(b) shows the distribution of each axis component
of $\vec{\Delta r}_i$. We find almost all particles are located at the
centers of their Voronoi cells within the resolution of particle tracking
($\sim$ 0.05 $\mu$m), even along the direction of gravity.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{fig4.eps}
\end{center}
\caption{(color online) (a) Distribution of the Voronoi cell volumes plotted
as a function of
$(V-V_{min})/(\langle V \rangle - V_{min})$. The solid
line is a fitting line with Eq.~\ref{voleq} using
$k=13.1$.
(b) Distribution of the position differences
between the particle position and the center of mass of the Voronoi cell.
Almost all particles are located at the center of mass of the Voronoi cell.
The values on the horizontal axis only go from -0.04 to +0.04 $\mu$m,
much less than the particle diameter $d=2.53$~$\mu$m.
}
\label{voronoi}
\end{figure}
\subsection{Density fluctuations}
We next check whether the sediment is
in a ``strictly
jammed state'' or not. As we mentioned above, $S(0)$ = 0 is
required in strict jamming states since
strict jamming states should be incompressible
(equivalently,
hyperuniform \cite{Torquato2005}).
To obtain the $S(0)$ value, we directly calculate $S(\vec{q})$ from the
particle positions. The inset in Fig.~\ref{sq}(a) shows an image
plot of the structure factor in the ($q_x$, $q_y$) plane, where $q_x$
and $q_y$ are the $x$ and $y$ components of $\vec{q}$. $S(\vec{q})$
is quite isotropic even though $y$ is the direction of gravity,
again further implying our sediment is randomly packed.
We average $S(\vec{q})$ over $q = |\vec{q}|$ and obtain $S(q)$,
shown in Fig.~\ref{sq}(a). Figure \ref{sq}(b) shows $S(q)$
near $q$=0 (circles). $S(q)$ increases near $q$ = 0 because of
computational artifacts (see Sec.~\ref{sec:sq}); we find
that $S(q)$ is reliable over $qd/2\pi \ge 0.2$, indicated
by the vertical dashed line in the figure. We fit $S(q)$
with a linear function between $0.2 \le qd/2\pi \le 0.5$ and obtain
$S(0) = 0.049 \pm 0.008$ by extrapolation. $S(q)$ is also well fitted by a
parabolic function between $0.2 \le qd/2\pi \le 0.5$ and $S(0)$ is
almost the same. Our data are
insufficiently strong to determine if the linear fit or
parabolic fit is more reasonable \cite{Torquato2005}. Our uncertainty (0.008) is
determined by trying the different Fourier transform windowing functions, in
combination with linear or parabolic fits: all possible combinations yield
values within the range $S(0) = 0.049 \pm 0.008$, and thus we state with confidence
that $S(0) \ne 0$.
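The direct calculation of $S(\vec{q})$ from particle positions is a plain Fourier sum over the tracked coordinates; a minimal sketch follows (illustrative code; the analysis above additionally applies windowing functions to control finite-size artifacts):

```python
import numpy as np

def structure_factor(points, q_vectors):
    """S(q) = |sum_j exp(i q . r_j)|^2 / N for each supplied wavevector;
    averaging over shells of constant |q| then yields S(q)."""
    pts = np.asarray(points)
    phases = np.exp(1j * q_vectors @ pts.T)  # shape (n_q, N)
    return np.abs(phases.sum(axis=1))**2 / pts.shape[0]
```

For wavevectors commensurate with the observation box, an ideal gas of points gives $S(q) = 1$ on average, a useful sanity check of the normalization.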
Donev, Stillinger and Torquato obtained
$S(0)$ = 6.1 $\times$ $10^{-4}$ by numerical simulation with one
million monodisperse particles \cite{Torquato2005} and
our experimental value of $S(0)$ is about 100 times larger than the simulation result,
a significant difference well beyond the uncertainty of our data.
This implies that our sample is much more compressible than the
structure found in simulation.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig5.eps}
\end{center}
\caption{(color online)
(a) The structure factor $S(q)$ as a function of wavenumber $q$.
The inset is the quarter image of the structure factor in a plane of
($q_x$, $q_y$). (b) The expanded view of $S(q)$ (circles)
and $S_c(q)$ (squares) near $q$ = 0.
$S(q)$ increases near $q$ = 0 because of computational
artifacts due to the finite size of our data set;
the data are reliable for $qd/2\pi > 0.2$, indicated by
the vertical dashed line. The solid line is a fitting
line with a linear function over the data with $0.2 < qd/2\pi < 0.5$. We obtain $S(0)$ =
0.049 by extrapolating the fitting line to $q$ = 0.
On the other hand, the
structure factor for the crystalline particles $S_c(q)$ decreases with larger $q$ and
it is well fitted by an Ornstein-Zernike function (dashed line) over $0.2 < qd/2\pi < 0.6$.
}
\label{sq}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig6.eps}
\end{center}
\caption{The pair correlation function $g(r)$ of the sediment.
The inset is an enlarged view at 2 $< r/d <$ 6.
The solid line is a fitting line with Eq.~\ref{gre} over $r/d > 2$.
}
\label{gr}
\end{figure}
To support this result, we use a real space function: the pair
correlation function $g(r)$ shown in Fig.~\ref{gr}. $g(r)$ at
$r/d >$ 2 is well fitted by an exponentially damped oscillatory
function \cite{Perry, Torquato2002} described as: \begin{eqnarray}
g(r) \sim \frac{C}{r}\exp(-r/\xi)\cos[K_0(r-r_0)] +1 \label{gre}
\end{eqnarray} where $C$, $\xi$ and $K_0$ correspond to an
amplitude, a characteristic length of spatial correlation and
the wavenumber of the oscillations, respectively, and $r_0$ is a
phase offset. From the fitting,
we obtain $C$ = 2.27 $\pm$ 0.08, $\xi = 1.50d \pm 0.03d$
and $K_0 = 7.55/d \pm 0.01/d$. Again, we compare with the
simulation of a strictly jammed state which yields $\xi = 1.83d$
and $K_0 = 7.58/d$ \cite{Torquato2005}. Though $K_0$ is similar
between experiment and simulation, the length scale $\xi$ from
our experiment is
shorter than that of the simulation. The decay of $g(r)$ in an experiment
is related to the broadness of each peak: $g(r)$ decays
quickly when the peaks are slightly broadened. Broad peaks mean that
the distances between particle pairs are distributed. Hence, the
rapid decay of $g(r)$ is also connected with the density
fluctuations, supporting the result
$S(0) \neq 0$. Thus, we conclude that the
arrangement in our experiment is not a strictly jammed state.
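The fit of Eq.~\ref{gre} is a standard nonlinear least-squares problem; the sketch below demonstrates it on synthetic data generated from Eq.~\ref{gre} itself, with parameter values chosen near our fitted ones for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def g_model(r, C, xi, K0, r0):
    """Exponentially damped oscillatory form of Eq. (gre); r in units of d."""
    return (C / r) * np.exp(-r / xi) * np.cos(K0 * (r - r0)) + 1.0

# synthetic g(r) data (for demonstration; not the experimental curve)
r = np.linspace(2.0, 8.0, 400)
rng = np.random.default_rng(3)
data = g_model(r, 2.27, 1.50, 7.55, 1.0) + 0.002 * rng.standard_normal(r.size)

# oscillatory models need reasonable initial guesses to converge
popt, pcov = curve_fit(g_model, r, data, p0=(2.0, 1.5, 7.5, 1.0))
```

Because the model is oscillatory, the initial guess for $K_0$ must be within roughly half a period of the true value; otherwise the fit can lock onto a secondary minimum.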
It is important to note that uncertainty in
particle positions will broaden the first peak of $g(r)$, but
this does not strongly affect $g(r)$ for larger $r$ as those
uncertainties do not accumulate over large distances. That is,
the measured separation $r_{ij}$ between two particles
has an uncertainty of $\pm 0.06$~$\mu$m, which is somewhat
significant when $r_{ij}$ is small and less important when
$r_{ij}$ is large.
There are several possible explanations for this observed
``softness.'' One possibility is the polydispersity of colloidal
size which is a crucial reality for experimental situations, and a
difference with the simulations to which we are comparing our data.
When particle sizes differ slightly, the minimum distance
between two particles deviates from $\langle d \rangle$, and
the first peak of $g(r)$ is slightly broadened, consistent
with Fig.~\ref{gr}. These small differences add up over long
distances and may induce long-wavelength fluctuations. In
particular, local fluctuations in composition (regions with
slightly more large or more small particles) are coupled to the number
fluctuations.
A polydispersity of 5\% such as we have in our
experiment results in $S(q\rightarrow 0) = 0.04 \pm 0.01$
based on simulations \cite{ludovic}, consistent with our data.
Unfortunately, we cannot determine the individual particle sizes
as the resolution of optical microscopy blurs particle images on
the same scale as the variability of particle size. Furthermore,
images of neighboring particles overlap, again due to the finite optical
resolution. This makes determining individual particle size
problematic, and prevents us from disentangling the influence of
polydispersity from our data.
A second possibility is that our sample is not at random close
packing due to friction effects between the particles, which is
quite important for granular packings. It is known that granular
packings are often looser than RCP, with volume fractions as
low as $\phi \approx 0.58$, termed ``random loose packing''
\cite{Onoda}. By vibrating the system, the packing fraction can
be increased, perhaps coming close to RCP \cite{Bernal,knight95}.
In our experiment, particles move by Brownian motion, and this
may let them find the RCP state. Furthermore, the particles are
sterically stabilized to prevent them from sticking together.
In general, friction is not a concept that is usually applied to
colloidal particles. However, we cannot completely rule out the
possibility of occasional attractive interactions
between our particles, which might result in friction-like behavior
and thus a slightly looser packing. Small amounts of static
friction gave nonzero $S(0)$ values in simulations \cite{Silbert}.
A third possibility is based on the $N_o$ dependence of the local
volume fraction (Fig.~\ref{local}(a)), that is, particles in more
ordered local environments are packed better. The crystalline
particles, which have high local volume fraction, are distributed
throughout the sample (see Fig.~\ref{local}(b)). To quantify the
spatial distribution of the crystalline particles, we calculate
a crystalline structure factor $S_c(q)$ as $S_c(\vec{q}) =
{N}^{-1}|\sum^{N}_{i=1} W_i \exp(i \vec{q} \cdot \vec{r}_i)|^2$
where $W_i$ = 1 when particle $i$ is classified as crystalline
and $W_i$ = 0 otherwise. $S_c(q)$ is the average of $S_c(\vec{q})$
over $q = |\vec{q}|$. The square symbols in Fig.~\ref{sq}(b)
correspond to $S_c(q)$, and we find that $S_c(q)$ can be fitted with
an Ornstein-Zernike function [dashed line in Fig.~\ref{sq}(b)]. This
fit indicates that the typical length scale between the crystalline
particles is $12.8d \pm 1.8d$. Thus, the spatial distribution of
crystalline particles can induce density fluctuations with long
wavelength and it might be another reason for our observation that
$S(0) \neq 0$. It is worth noting that the small but nonzero
fraction of the crystallites (less than 3\%) is crucial in this
conjecture. It is possible that these tiny crystallites are due
to Brownian motion during the sedimentation. We are unaware of
any measurements of tiny crystallites in simulations of random
close packing, although one recent study of a binary mixture of
spheres used the same order parameter that we do and found the
average number of ordered bonds (our $N_o$) was small \cite{Ken}.
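The crystalline structure factor is the same Fourier sum restricted by the indicator weights $W_i$; a minimal sketch (illustrative code, with names of our choosing):

```python
import numpy as np

def crystalline_structure_factor(points, is_crystalline, q_vectors):
    """S_c(q) = |sum_i W_i exp(i q . r_i)|^2 / N, with W_i = 1 for
    particles classified as crystalline and W_i = 0 otherwise."""
    W = np.asarray(is_crystalline, dtype=float)
    phases = np.exp(1j * q_vectors @ np.asarray(points).T)  # (n_q, N)
    return np.abs((phases * W).sum(axis=1))**2 / W.size
```

With all $W_i = 1$ this reduces to the full $S(\vec{q})$, which provides a simple consistency check.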
\section{Conclusions}
We use confocal microscopy to study both the local and long-range
structure of a random close packed colloidal suspension. We find
that the fraction of crystalline particles is at most 3\%, and
furthermore that almost no regions in the sample have icosahedral
order (less than 0.01\%). These observations suggest that the
sample is randomly packed. This is further supported by the
distribution of Voronoi volumes, which is well fit by a prediction
based on a model of random packing.
We also compute the static structure factor $S(q)$ and find that
$S(0) = 0.05$, in contrast to simulations which found $S(0) =
6 \times 10^{-4}$ \cite{Torquato2005}. $S(0)$ is proportional to
the isothermal compressibility, implying that the simulated states
are essentially incompressible (to within numerical precision),
while our experimental sample is compressible. This may be due
to the presence of tiny crystalline regions in our sample, which
are associated with slightly higher local density (and thus give
rise to long wavelength density fluctuations). Alternatively,
it may be due to the polydispersity of particle sizes in the
experiment ($\sim 5$\%). This softness ($S(0) \neq 0$) is
crucial to how the sample would respond to an external force,
for example, shear stress. The viscosity and elasticity of
a sample are extremely sensitive to density near the jamming point
\cite{Olsson, OHern}. Near the jammed state, small fluctuations in
density result in large fluctuations of viscosity and elasticity,
which can lead to shear instability or cracking \cite{Furukawa}.
This suggests that real-world RCP materials may possess nontrivially
different properties from idealized simulations. Our work points
to polydispersity and sample preparation as the possible origin of
these differences, both of which are worthy of further exploration
in both simulation and experiment.
\section*{Acknowledgments}
We thank L.~Berthier and P.~Charbonneau for helpful discussions,
and we thank G.~Cianci and K.~Edmond for making our colloids.
R.~K.~was supported by a JSPS Postdoctoral Fellowship for Research Abroad.
E.~R.~W.~was supported by a grant from the National Science
Foundation (DMR-0804174).
\section{Introduction}
The Bekenstein-Hawking area-entropy law $S_{\rm BH}=\text{Area}/4$ strongly suggests that black holes admit a dual description as a quantum system with $e^{S_{\rm BH}}$ microstates.
A natural question is then: \textit{Where in the bulk geometry do these microstates reside?}
Of course, there may be either zero or several different consistent answers to this question and they may vary depending on the various masses and charges of the black hole, and according to whether the asymptotic region is flat, dS or AdS.
While taking cues from the large body of knowledge about black holes in AdS, we are here primarily interested in astrophysical
black holes, which are well approximated by the asymptotically flat Kerr metric.
An obvious first guess is that the black hole entropy literally counts all the possible microstates that could lie inside its event horizon.
This guess runs into the ``bag of gold'' problem \cite{Wheeler} and has not been usefully developed thus far.
Another natural guess is that the microstates lie within a few Planck lengths of the horizon.
One of the earliest attempts to reproduce the black hole entropy along these lines \cite{Zurek1985} derived an area law by counting high-frequency but highly redshifted near-horizon modes below a Planckian cutoff.
This resonates with the membrane paradigm \cite{Thorne1986}, in which black hole dynamics is succinctly described by a membrane possessing $\mathcal{O}(1)$ degrees of freedom per Planck area that naturally account for the entropy.
However, there are other indications that the dual quantum state occupies a region that extends over multiple Schwarzschild radii outside the horizon.
In CFT descriptions of black holes with or without string theory, greybody factors needed for the agreement of microscopic and macroscopic scattering are determined by solving differential equations over this larger region \cite{Maldacena1997}.
It is only once these factors have been included that the CFT and semiclassical emission rates agree.
Moreover, recent analyses of information flow using quantum extremal surfaces indicate that infalling bodies are stripped of their quantum information content several Schwarzschild radii prior to reaching the horizon \cite{Penington2020,Almheiri2019,Almheiri2020}.
The question of where the quantum dual resides has taken on renewed interest in light of the unprecedentedly high resolutions recently obtained in black hole imaging \cite{EHT2019a}.
We are now able to directly view the region outside a real black hole in our sky.
The accessible region notably contains the photon ring, which possesses a remarkable subring substructure.
Significantly, this substructure displays highly intricate but universal properties, which follow from general relativity alone and which are insensitive to the detailed composition of the black hole atmosphere \cite{Gralla2019,Johnson2020,Himwich2020,GrallaLupsasca2020a,Hadar2021}.
Wonderfully, this intricate structure is potentially measurable and is a prime target for future space-based VLBI missions \cite{Johnson2020,Gralla2020,GrallaLupsasca2020c,GLM2020,Chael2021}.
In this paper, we argue that the photon ring is indeed a part of the holographic dual for an astrophysical black hole.
Specifically, it encodes Ruelle resonances of the quantum dual (equivalently, quasinormal modes of the black hole) that characterize the late-time decay of a perturbation back to thermal equilibrium.
Intuitively, it is easy to understand this relation.
Black holes are surrounded by photon shells containing the unstably bound orbits whose images produce the photon ring (and its subrings) seen by a distant observer.
Any object thrown at the black hole must cross this photon shell, wherein it generally excites nearly bound photons that leak out of the shell very slowly.
Therefore, the last thing one expects to see as the black hole settles back to its thermal ground state is the photons that have orbited many times before escaping to infinity \cite{Ames1968,Cardoso2021}.\footnote{\label{fn:AdS}In the case of black holes in AdS$_D$ for $D>3$, which we do not consider herein, the photons may bounce around the AdS$_D$ barrier a number of times after leaking off the photon shell, thus only reaching infinity much later and thereby introducing a new time scale.
This obscures the direct relation between the photon ring Lyapunov exponents and their dual Ruelle resonances for the case of AdS spacetimes, and the arguments that we give here are not directly applicable to AdS black holes.
See \cite{Chan1997,Festuccia2008,Cardoso2009} for extensive discussions.}
These trajectories provide eikonal approximations to the quasinormal modes (QNMs) whose damped ringing signals the approach back to equilibrium.
Anticipated measurements of both the photon ring and quasinormal ringdown therefore provide a potentially fertile point of contact between the significant but largely disparate developments over the last several decades in observational and quantum-theoretical black hole physics.
Fully consistent microscopic descriptions of the quantum duals have been given for certain black holes in string theory.
Several largely interrelated proposals have also been made for dual descriptions of astrophysical Kerr black holes such as M87* (see, e.g., \cite{Compere2012}).
However, none of them are complete and the subject remains open and active.
In this paper, we do not assume any specific model.
Rather, our goal is to translate the data from the photon ring into a form that can be used to constrain candidate models for quantum duals to M87*.
Interestingly, our analysis implies a very specific but highly universal form of the \textit{high}-frequency, \textit{short}-distance Ruelle spectrum of the dual theory where the eikonal approximation is valid.
This contrasts with previous macroscopic analyses (which typically probe long-wavelength hydrodynamic properties of the microscopic dual \cite{Bredberg2011,Minwalla2012}) and identifies a second universal regime in the spectrum of quantum systems dual to black holes.
While we refrain from commenting in this paper on how our results bear on proposed duals, we stress that so far no existing proposal has explained the universal structure of the photon ring.
Finding such a universal explanation within a quantum dual is an open challenge for theorists.
A central role in our analysis is played by what we call the \textit{near-ring region}, which is present for any Kerr black hole.
It is an analog of the more familiar near-horizon regions of extremal black hole spacetimes, which are notable for their emergent conformal symmetries.
However, the near-ring region differs in that it is not a region of the {\it spacetime}, but rather of the {\it phase space} of particles or fields propagating on the spacetime geometry.\footnote{Of course, the emergence of conformal symmetry in regions of phase space rather than spacetime is common in condensed matter systems.}
Moreover, it is present for generic black holes, and its existence is not confined to near-extremal ones.
However, the near-ring region does resemble the extremal near-horizon region in that it is characterized by emergent conformal symmetry.
To be more precise, in the special case of a Schwarzschild black hole of mass $M$, the near-ring region for either a massless wave or a photon geodesic with real part of the frequency $\omega_R$ and angular momentum $\ell$ is defined by\footnote{Assuming that the holographic dual lives on a space with time and angular directions, these conditions suggest that the near-ring region is relevant to short distances in the dual theory and that the photon shell itself should perhaps be thought of as the holographic plate.}
\begin{align}
\label{eq:Intro}
\text{NEAR-RING REGION:}\qquad
\begin{cases}
\ab{r-3M}\ll M&\qquad\text{(near-peak)},\\
\displaystyle\frac{\ell}{\omega_R}-3\sqrt 3M\ll M&\qquad\text{(near-critical)},\vspace{3pt}\\
\displaystyle\frac{1}{\omega_R}\ll {M}&\qquad\text{(high-frequency)}.
\end{cases}
\end{align}
Since this definition involves the frequency and angular momentum of the excitations, it pertains to a subregion of phase space, and not of spacetime itself.
The general definition of the Kerr near-ring region is given in section~\ref{sec:Kerr} below.
We show herein that photon trajectories in the near-ring region are acted on by an emergent\footnote{$\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ actually admits an extension as a group action over the entire phase space, but it appears to be of practical utility only in the near-ring region where the expressions become very simple and local.} conformal symmetry group which we refer to as $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$.
Geodesics for which the LHS of the first two inequalities in \eqref{eq:Intro} are strictly zero are bound orbits comprising the codimension-2 submanifold of the (affinely parameterized) null geodesic phase space known as the photon shell.
This shell is an invariant subspace of the $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ action, (i.e., it is a fixed point), and the near-ring region is its near-neighborhood.
There exists a scaling subgroup of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ that drives all photon geodesics to the fixed point.
Near-ring photons that are not exactly on the fixed point can orbit the black hole multiple times before escaping and potentially arriving at a distant telescope.
Such photons form an important part of the black hole image known as the photon ring, whose observation is proposed in \cite{Johnson2020,GLM2020}.
We show that the successive subrings of the photon ring, labelled by orbit number, transform into one another under a discrete scaling subgroup of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ that preserves the position of the observer's telescope.
This discrete subgroup is generated by finite scaling transformations $e^{-\gamma H_0}$, where $H_0$ is the $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ dilation generator and $\gamma$ a Lyapunov exponent controlling the demagnification of successive subring images.
Hence, a future measurement of these Lyapunov exponents would constitute a detection of a qualitatively new kind of emergent symmetry of nature.
We hope that this will serve as a further incentive to the ongoing efforts to measure the fine structure of the photon ring.
It is also of interest to consider massless wave propagation in the near-ring region \eqref{eq:Intro}.
It is well-known that in this approximation, the massless wave equation reduces to the Schr\"{o}dinger equation for the upside-down harmonic oscillator.
This system has a well-known conformal symmetry\footnote{Arising as a square of the harmonic oscillator algebra.} which we denote $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$, and the eikonal QNMs fall into highest-weight representations of this symmetry algebra.
Our construction is highly reminiscent of the observations made for unwarped \cite{Birmingham2002} and warped \cite{Chen2009} BTZ black holes that quasinormal modes form representations of the AdS$_3$ conformal group.
This hints that $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ may actually act on the full set of quantum black hole microstates, although we certainly do not establish any such concrete connection here.
Solutions to the massless wave equation are related, in the eikonal (geometric optics) approximation, to congruences of null geodesics.
The eikonal approximation is valid in the subset of the near-ring region where $\omega_R(r-3M)^2\gg M$.
We show explicitly that the tower of quasinormal modes can be constructed from the so-called homoclinic geodesics, for which $\frac{\ell}{\omega_R}=3\sqrt 3M$ exactly, and we provide a simple new geometric derivation of the overtone wavefunctions that applies to all black hole spacetimes and that follows straightforwardly from properties of the photon ring.
We have found two\footnote{We will also encounter a third conformal group of potential interest, denoted $\widehat{\mathsf{SL}}(2,\mathbb{R})$ and discussed in section \ref{sec:Schwarzschild}, which is more closely tied to isometries.} conformal groups, $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ and $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$, both of which emerge only in the near-ring region.
$\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ acts naturally on geodesics, while $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ acts naturally on quasinormal modes.
They clearly bear some relation to each other, but from what we have understood so far the precise connection is rather subtle, and we leave this question unanswered for now.
Conformal symmetries have played a central role in the microscopic accounting for the Bekenstein-Hawking black hole entropy \cite{Strominger1996,Strominger1998,Guica2009}.
It is natural to ask if the near-ring emergent conformal symmetries described here could potentially also play such a role.
We'd like to know the answer!
One might ask why we are so interested in the small and peculiar near-ring region of a black hole.
The answer for observers is that this is the region that dominates the portion of the black hole image belonging to the black hole itself and not to the circulating plasma.
The answer for pure theorists is that the emergent symmetries in this region may provide important clues for the construction of the holographic duals of real-world black holes.
More generally, regions with emergent symmetries are almost always of special interest.
We hope to have made some observations that will constrain and provide a jumping-off point for future attempts at a bottom-up construction of the holographic duals of astrophysical black holes, and to have deepened the theoretical understanding of the structure being probed by recent and continuing spectacular advances in observational black hole astrophysics.
The outline of the remainder of this paper is as follows.
Section~\ref{sec:Schwarzschild} contains an analysis of the Schwarzschild photon ring, its relation to the eikonal quasinormal mode spectrum, and the emergence of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ and $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ symmetries in the near-ring region.
The Kerr analysis is technically more complicated but conceptually identical and appears in section~\ref{sec:Kerr}.
We close in section~\ref{sec:KerrHologram} with a discussion of the relation between classical photon ring Lyapunov exponents of the Kerr geometry and quantum Ruelle exponents of the purported holographic dual.
\section{The Schwarzschild black hole}
\label{sec:Schwarzschild}
In this section, we consider the photon ring and quasinormal mode spectrum of the four-dimensional Schwarzschild black hole.
In coordinates $(t,r,x^a)$ with $x^a=(\theta,\phi)$, its line element is
\begin{align}
ds^2=-\pa{1-\frac{2M}{r}}\mathop{}\!\mathrm{d} t^2+\pa{1-\frac{2M}{r}}^{-1}\mathop{}\!\mathrm{d} r^2+r^2\gamma_{ab}\mathop{}\!\mathrm{d} x^a\mathop{}\!\mathrm{d} x^b\;,
\end{align}
where $\gamma_{ab}\mathop{}\!\mathrm{d} x^a\mathop{}\!\mathrm{d} x^b=\mathop{}\!\mathrm{d}\theta^2+\sin^2{\theta}\mathop{}\!\mathrm{d}\phi^2$ is the round metric on the sphere and $f(r)=1-\frac{2M}{r}$ is the blackening factor.
\subsection{The near-ring region}
The Schwarzschild geometry is static and spherically symmetric, so geodesic motion is confined to lie in a plane that we will take to be equatorial; all other trajectories can be obtained by symmetry transformations.
For a null geodesic $(t(s),r(s),x^a(s))$ parameterized by affine time $s$,
\begin{align}
-f(r)\pa{\frac{dt}{ds}}^2+\frac{1}{f(r)}\pa{\frac{dr}{ds}}^2+r^2\gamma_{ab}\frac{dx^a}{ds}\frac{dx^b}{ds}=0\;.
\end{align}
Throughout this paper, we always consider affinely parameterized geodesics.
This associates an energy (or frequency) to a photon traveling along such a geodesic.
The energy and angular momentum in the equatorial plane are
\begin{align}
E=f(r)\frac{dt}{ds}\;,\qquad
L=r^2\frac{d\phi}{ds}\;,
\end{align}
so the null geodesic equation takes the form
\begin{align}
\label{eq:SchGeo}
-\pa{\frac{dr}{ds}}^2+\mathcal{V}(r)=0\;,\qquad
\mathcal{V}(r)=E^2-f(r)\frac{L^2}{r^2}\;.
\end{align}
We will only consider geodesics with $L>0$.
Those with $L<0$ can be obtained by a rotation.
Spherical photon orbits require $\mathcal{V}(r)=\mathcal{V}'(r)=0$.
The second condition reads
\begin{align}
\mathcal{V}'(r)=-f'(r)\frac{L^2}{r^2}+f(r)\frac{2L^2}{r^3}
=0\;,
\end{align}
so such an orbit can only exist at the critical orbital radius
\begin{align}
\label{eq:SchCriRad}
\tilde{r}=3M\;.
\end{align}
The condition $\mathcal{V}(\tilde{r})=0$ then requires the energy-rescaled angular momentum $\lambda=\frac{L}{E}$ to take the critical value
\begin{align}
\label{eq:SchCriMom}
\tilde{\lambda}=3\sqrt{3}M\;.
\end{align}
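For completeness, the intermediate algebra is elementary: the condition $\mathcal{V}'(r)=0$ is equivalent to $rf'(r)=2f(r)$, which for $f(r)=1-\frac{2M}{r}$ reads
\begin{align}
\frac{2M}{r}=2\pa{1-\frac{2M}{r}}
\qquad\Longrightarrow\qquad
r=3M\;,
\end{align}
while $\mathcal{V}(\tilde{r})=0$ then gives $E^2=f(3M)\frac{L^2}{(3M)^2}=\frac{L^2}{27M^2}$, so that $\lambda=\frac{L}{E}=3\sqrt{3}M$.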
The angular velocity $\tilde{\Omega}$ and orbital half-period $\tau$ of the bound photon orbit are therefore
\begin{align}
\label{eq:SchOrb}
\tilde{\Omega}=\frac{1}{3\sqrt{3}M}\;,\qquad
\tau=3\sqrt{3} M\pi\;.
\end{align}
We will consider nearly bound null geodesics with small radial deviation $\mathop{}\!\delta r=r-\tilde{r}$.
More specifically, we are interested in the near-ring region defined \textit{in phase space} by
\begin{align}
\label{eq:SchGeoNearRing}
\text{NEAR-RING REGION:}\qquad
\begin{cases}
\ab{\mathop{}\!\delta r}\ll M&\qquad\text{(near-peak)}\;,\\
|\lambda-\tilde{\lambda}|\ll M&\qquad\text{(near-critical)}\;,\\
\displaystyle\frac{1}{E}\ll M&\qquad\text{(high-energy)}\;.
\end{cases}
\end{align}
The first condition zooms in on the bound orbit in spacetime, while the second condition zooms in on the bound orbit in momentum space.
Taken together, these conditions scale into the region of phase space known as the photon shell, defined as the locus $\mathop{}\!\delta r=0=\lambda-\tilde{\lambda}$.\footnote{This is distinct from the photon ring itself, which is usually defined as the ring image produced by near-shell photons when they reach a telescope at infinity.
Sometimes, however, the term ``photon ring'' is used with a more general meaning.}
The last condition is required in order to relate solutions of the wave equation to geodesic congruences and will only become important in the discussion of quasinormal modes in the next section.
Linearizing about the near-ring region, one finds that
\begin{align}
\frac{d\mathop{}\!\delta r}{ds}=\sqrt{\mathcal{V}(\tilde{r}+\mathop{}\!\delta r)}
\approx\sqrt{\frac{1}{2}\mathcal{V}''(\tilde{r})}\mathop{}\!\delta r
=\frac{L}{(3M)^2}\mathop{}\!\delta r\;,
\end{align}
which implies that slightly perturbed near-critical orbits diverge exponentially in coordinate time:
\begin{align}
\frac{d\mathop{}\!\delta r}{dt}\approx\frac{\mathop{}\!\delta r}{3\sqrt{3}M}\;.
\end{align}
The Lyapunov exponent is therefore
\begin{align}
\label{eq:SchLyp}
\mathop{}\!\delta r(t)\approx e^{\gamma_L t}\mathop{}\!\delta r_0\;,\qquad
\gamma_L=\frac{1}{3\sqrt{3}M}\;.
\end{align}
Together with $\phi\approx\tilde{\Omega}t$, this gives the solution of the Schwarzschild null geodesic equation in the near-ring region \eqref{eq:SchGeoNearRing}.
We have derived the explicit form of the geodesics only in the near-ring region; a generic geodesic will leave this region in finite time.
The full solution is of course more complicated but, for given energy and angular momentum, it is fully determined by the first-order radial ODE \eqref{eq:SchGeo}.
Nevertheless, most of the motion of interest to us here occurs in the near-ring region.
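As an illustration of the exponential peeling \eqref{eq:SchLyp}, one can integrate the exact radial equation $\frac{d\mathop{}\!\delta r}{dt}=f\sqrt{\mathcal{V}}/E$ at critical angular momentum and fit the growth rate of $\log\mathop{}\!\delta r$. A minimal sketch of ours (units $M=E=1$), using the factored form $\mathcal{V}=\mathop{}\!\delta r^2(9M+\mathop{}\!\delta r)/r^3$, exact at $\lambda=\tilde{\lambda}$, to avoid cancellation at small $\mathop{}\!\delta r$:

```python
import math

M, E = 1.0, 1.0  # units M = E = 1; angular momentum fixed at L = lambda_crit

def drdt(dr):
    """d(delta r)/dt = f * sqrt(V) / E for an outgoing near-critical ray."""
    r = 3.0 * M + dr
    f = (M + dr) / r                      # f = 1 - 2M/r
    V = dr * dr * (9.0 * M + dr) / r**3   # exact factored radial potential
    return f * math.sqrt(V) / E

# RK4 integration in coordinate time from delta r = 1e-8 to 1e-5.
dr, t, h = 1e-8, 0.0, 0.01
dr0 = dr
while dr < 1e-5:
    k1 = drdt(dr)
    k2 = drdt(dr + 0.5 * h * k1)
    k3 = drdt(dr + 0.5 * h * k2)
    k4 = drdt(dr + h * k3)
    dr += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    t += h

gamma_num = math.log(dr / dr0) / t              # measured growth rate
gamma_L = 1.0 / (3.0 * math.sqrt(3.0) * M)      # predicted Lyapunov exponent
```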
\subsection{Conformal symmetry of the quasinormal mode spectrum}
In this subsection, we solve for the eikonal quasinormal modes in the Schwarzschild near-ring region and show that they form a shadow pair of highest-weight representations of the emergent near-ring $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ symmetry.
In the linearized approximation to black hole dynamics, scalar perturbations obey the wave equation\footnote{We could also consider the moderately more complicated case of photons and gravitons, but the effects of spin are subleading in the eikonal and near-ring limits of interest to us here, so \eqref{eq:Wav} suffices for our purposes.}
\begin{align}
\label{eq:Wav}
\nabla^2\Phi(x)=0\;.
\end{align}
The Schwarzschild black hole has a canonically normalized Killing vector $\mathop{}\!\partial_t$ that we use to define frequencies.
Spherical symmetry allows for a mode decomposition in terms of spherical harmonics $Y_{\ell m}(\theta,\phi)$:
\begin{align}
\label{eq:SchAns}
\Phi(t,r,\theta,\phi)=\int\mathop{}\!\mathrm{d}\omega\sum_{\ell=0}^\infty\sum_{m=-\ell}^{\ell}c_{\ell m}(\omega)\Phi_{\ell m\omega}(t,r,\theta,\phi)\;,\qquad
\Phi_{\ell m\omega}(t,r,\theta,\phi)=e^{-i\omega t}\frac{\psi_{\ell\omega}(r)}{r}Y_{\ell m}(\theta,\phi)\;.
\end{align}
The radial part of the wave equation takes the form
\begin{align}
\label{eq:SchRad}
\br{\mathop{}\!\partial_{r_*}^2+V(r_*)}\psi_{\ell\omega}(r_*)=0\;,
\end{align}
where $r_*=r+2M\log\pa{\frac{r}{2M}-1}$ is the tortoise coordinate and the wave potential is given by
\begin{align}
\label{eq:SchWavPot}
V(r_*)=\omega^2-f(r)\br{\frac{\ell(\ell+1)}{r^2}+\frac{2M}{r^3}}\;.
\end{align}
Quasinormal modes correspond to solutions of \eqref{eq:SchRad} that obey an ingoing boundary condition $\Phi\sim e^{-i\omega r_*}$ at the horizon ($r_*\to-\infty$) and an outgoing boundary condition $\Phi\sim e^{i\omega r_*}$ at spatial infinity ($r_*\to\infty$).
The imposition of two boundary conditions on this second-order ODE defines a ``shooting problem'' that results in a discrete spectrum
\begin{align}
\omega=\omega_R+i\omega_I\;,
\end{align}
which is in general complex due to the non-Hermitian (dissipative) boundary condition.
The associated solutions are interpreted as short-lived resonances whose lifetimes are typically set by the temperature of the black hole.
They come in families $\psi_{\ell n}(r)$ with discrete frequencies $\omega_{\ell n}$ labelled by an overtone number $n$, with higher overtones decaying exponentially faster.
Exact solutions to this problem are rare (although exceptions exist in spacetimes with $\mathsf{SL}(2,\mathbb{R})$ isometries) and one is typically forced to resort to numerical approximation schemes.
However, there is a ``near-ring'' limit that can be understood analytically and that connects directly with the geometry of the photon shell and its (nearly) bound geodesics reviewed in the previous section.
When $\omega_R$ and $\ell$ are both large and of comparable magnitude, the wave potential \eqref{eq:SchWavPot} may be approximated by
\begin{align}
V(r)\approx\omega_R^2-f(r)\frac{\ell^2}{r^2}\;,
\end{align}
which matches the geodesic potential \eqref{eq:SchGeo} provided that we identify $E=\omega_R$ and $L=\ell$.
This observation is the basis for the geometric optics approximation discussed in the next section, and applies everywhere in the spacetime.
Here we want to solve the wave equation in the near-ring region, which we define in analogy to \eqref{eq:SchGeoNearRing} as
\begin{align}
\label{eq:SchWavNearRing}
\text{NEAR-RING REGION:}\qquad
\begin{cases}
\ab{\mathop{}\!\delta r}\ll M&\qquad\text{(near-peak)}\;,\\
\displaystyle\ab{\frac{\ell}{\omega_R}-\tilde{\lambda}}\ll M&\qquad\text{(near-critical)}\;,\vspace{3pt}\\
\displaystyle\frac{1}{\omega_R}\ll M&\qquad\text{(high-frequency)}\;.
\end{cases}
\end{align}
This defines a region of the phase space of waves on Schwarzschild rather than simply a region of spacetime.
The wave potential \eqref{eq:SchWavPot} in this near-ring region is
\begin{align}
V(\mathop{}\!\delta r)\approx\frac{\omega_R^2}{3M^2}\mathop{}\!\delta r^2+2i\omega_R\omega_I\;,
\end{align}
and the radial ODE \eqref{eq:SchRad} reduces to
\begin{align}
\br{\mathop{}\!\partial_{r_*}^2+\frac{\omega_R^2}{3M^2}\mathop{}\!\delta r^2+2i\omega_R\omega_I}\psi(\mathop{}\!\delta r)=0\;.
\end{align}
Near the critical radius $\tilde{r}=3M$, $\mathop{}\!\partial_{r_*}\approx\tilde{f}\mathop{}\!\partial_{\mathop{}\!\delta r}$ with $\tilde{f}=1-\frac{2M}{\tilde{r}}=\frac{1}{3}$, so we may rewrite this as
\begin{align}
\mathcal{H}\psi=i\omega_I\psi\;,\qquad
\mathcal{H}=-\frac{1}{2\omega_R}\br{\mathop{}\!\partial_x^2+\gamma_L^2\omega_R^2x^2}\;,\qquad
x=r_*-\tilde{r}_*
=\frac{\mathop{}\!\delta r}{\tilde{f}}\;,
\end{align}
which we recognize as the time-independent Schr\"odinger equation for eigenstates $\psi$ of the inverted harmonic oscillator with associated eigenvalues $i\omega_I$.
The eigenvalues are imaginary because the boundary conditions are non-Hermitian.
Following \cite{Subramanyan2021,Raffaelli2022}, we now define the operators
\begin{align}
\label{eq:SLR}
a_\pm=\frac{e^{\pm\gamma_Lt}}{\sqrt{2\gamma_L\omega_R}}\pa{\mp i\mathop{}\!\partial_x-\gamma_L\omega_Rx}\;,\qquad
L_0&=-\frac{i}{4}\pa{a_+a_-+a_-a_+}
=\frac{i}{2\gamma_L}\mathcal{H}\;,\qquad
L_\pm=\pm\frac{a_\pm^2}{2}\;.
\end{align}
The $a_\pm$ generate the Heisenberg algebra $\br{a_+,a_-}=iI$, while the $L_m$ obey the exact $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ commutation relations:
\begin{align}
\br{L_0,L_\pm}=\mp L_\pm\;,\qquad
\br{L_+,L_-}=2L_0\;.
\end{align}
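These commutation relations can be verified numerically. Below is a finite-difference sketch of ours (assuming numpy is available; units $\gamma_L=\omega_R=1$, evaluated at $t=0$) that applies the operators to a Gaussian test function and checks the algebra away from the grid boundary:

```python
import numpy as np

# Units gamma_L = omega_R = 1 at t = 0, so a_pm = (∓ i d/dx - x)/sqrt(2).
N, Lbox = 8001, 6.0
x = np.linspace(-Lbox, Lbox, N)
h = x[1] - x[0]

def a(sign, f):
    """a_+ (sign=+1) or a_- (sign=-1) acting on a grid function f."""
    return (-1j * sign * np.gradient(f, h) - x * f) / np.sqrt(2.0)

def L0(f):
    return -0.25j * (a(+1, a(-1, f)) + a(-1, a(+1, f)))

def Lp(f):
    return 0.5 * a(+1, a(+1, f))

def Lm(f):
    return -0.5 * a(-1, a(-1, f))

psi = np.exp(-x**2 / 2.0)      # smooth test function, negligible at the edges

# [L0, L+] = -L+, [L0, L-] = +L-, [L+, L-] = 2 L0:
c1 = L0(Lp(psi)) - Lp(L0(psi)) + Lp(psi)
c2 = L0(Lm(psi)) - Lm(L0(psi)) - Lm(psi)
c3 = Lp(Lm(psi)) - Lm(Lp(psi)) - 2.0 * L0(psi)
err = max(np.abs(c[20:-20]).max() for c in (c1, c2, c3))
```

The residual `err` is limited only by the $O(h^2)$ truncation of the central differences.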
These operators are defined everywhere but are of interest only in the near-ring region where $L_0$ is proportional to the Hamiltonian.
Eigenstates of $L_0$ satisfy $L_0\psi_h=h\psi_h$,
so we identify $\omega_I=-2\gamma_Lh$.
The mode ansatz \eqref{eq:SchAns} reduces in the near-ring region \eqref{eq:SchWavNearRing} to
\begin{align}
\Phi_{\ell m\omega}(t,r,\theta,\phi)\approx e^{-i\omega_Rt}\frac{\Phi_h(t,x)}{r}Y_{\ell m}(\theta,\phi)\;,\qquad
\Phi_h(t,x)=e^{\omega_It}\psi_h(x)
=e^{-2\gamma_Lht}\psi_h(x)\;.
\end{align}
The operator $L_0$ obviously has no normalizable ground state, but it can still have a discrete spectrum if the boundary conditions are chosen appropriately.
The quasinormal mode boundary condition is equivalent to the imposition of a highest-weight condition $L_+\Phi_h=0$ on the fundamental mode.
There are two solutions with $h=\frac{1}{4}$ and $h=\frac{3}{4}$:
\begin{align}
\Phi_\frac{1}{4}(t,x)&=e^{-\frac{1}{2}\gamma_Lt}\psi_\frac{1}{4}(x)\;,
&&\psi_\frac{1}{4}(x)=e^{\frac{i}{2}\gamma_L\omega_Rx^2}\;,\\
\Phi_\frac{3}{4}(t,x)&=e^{-\pa{1+\frac{1}{2}}\gamma_Lt}\psi_\frac{3}{4}(x)\;,
&&\psi_\frac{3}{4}(x)=xe^{\frac{i}{2}\gamma_L\omega_Rx^2}\;.
\end{align}
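Direct differentiation confirms these weights: with $\gamma_L=\omega_R=1$ one finds $\psi_{1/4}''+x^2\psi_{1/4}=i\psi_{1/4}$ and $\psi_{3/4}''+x^2\psi_{3/4}=3i\psi_{3/4}$, so $\mathcal{H}\psi_h=-2ih\,\psi_h$. A numerical spot check of ours:

```python
import cmath

# Units gamma_L = omega_R = 1: H = -(1/2)(d^2/dx^2 + x^2),
# with eigenvalue i*omega_I = -2i*gamma_L*h on the weight-h mode.
def psi_quarter(x):
    return cmath.exp(0.5j * x * x)            # h = 1/4 candidate

def psi_three_quarters(x):
    return x * cmath.exp(0.5j * x * x)        # h = 3/4 candidate

def H_over_psi(psi, x, h=1e-4):
    """(H psi)(x) / psi(x) via a central second difference."""
    d2 = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / h**2
    return -0.5 * (d2 + x * x * psi(x)) / psi(x)

ev_quarter = H_over_psi(psi_quarter, 0.8)                  # expect -0.5j
ev_three_quarters = H_over_psi(psi_three_quarters, 0.8)    # expect -1.5j
```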
Higher overtones are then obtained as $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$-descendants:
\begin{align}
\Phi_{h,N}(t,x)=L_-^N\Phi_h(t,x)
=e^{-2\gamma_L(h+N)t}\psi_{h+N}(x)
\propto e^{-2\gamma_L(h+N)t}D_{2(h+N)-\frac{1}{2}}\pa{\sqrt{-2i\gamma_L\omega_R}x}\;,
\end{align}
where $D_n(x)$ denotes the $n^\text{th}$ parabolic cylinder function.
The two towers of $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$-descendants obtained from the primary states with $h=\frac{1}{4}$ and $h=\frac{3}{4}$ are the QNM overtones in the near-ring region \eqref{eq:SchWavNearRing}, with the $n^\text{th}$ overtone corresponding to the state with
\begin{align}
N=\frac{1}{2}\pa{n+\frac{1}{2}}-h\;,\qquad
h=\begin{cases}
\frac{1}{4}&\qquad\text{if $n$ is even}\; ,\\
\frac{3}{4}&\qquad\text{if $n$ is odd}\; .
\end{cases}
\end{align}
In other words, the QNM overtones fall into two irreps of the $\mathsf{SL}(2,\mathbb{R})$ generated by \eqref{eq:SLR}.
The Casimir is
\begin{align}
\mathcal{C}\Phi_{h,N}\equiv\pa{-L_0^2+\frac{L_+L_-+L_-L_+}{2}}\Phi_{h,N}
=h(1-h)\Phi_{h,N}\; ,
\end{align}
so the two representations that appear are shadows of each other with Casimir
\begin{align}
\mathcal{C}=\frac{3}{16}\; .
\end{align}
These two irreps combine to form a single irrep of the Heisenberg algebra generated by $a_\pm$ and $I$.
Algebraically, it is also possible to trivially combine these two representations into a single highest-weight $h=\frac{1}{2}$ representation of a different $\widehat{\mathsf{SL}}(2,\mathbb{R})$.
One defines
\begin{align}
\hat{L}_0=2L_0\;,
\end{align}
along with
\begin{align}
\hat{\Phi}_{\frac{1}{2}+n}=
\begin{cases}
\Phi_{\frac{1}{4}+\frac{n}{2}}&\qquad\text{if $n$ is even}\;,\\
\Phi_{\frac{3}{4}+\frac{n-1}{2}}&\qquad\text{if $n$ is odd}\;.
\end{cases}
\end{align}
The action of all three $\hat{L}_m$ is then simply defined by the commutation relations $[\hat{L}_m,\hat{L}_n]=(m-n)\hat{L}_{m+n}$.
However, with the exception of $\hat{L}_0$, we have so far been unable to explicitly represent this $\widehat{\mathsf{SL}}(2,\mathbb{R})$ in terms of a differential operator or an action on phase space.
We leave this to future work.
We note that at the edges $\mathop{}\!\delta r\to\pm\infty$ of the near-peak region,
\begin{align}
D_n\pa{\sqrt{-2i\gamma_L\omega_R}x}\stackrel{x\to\pm\infty}{\sim}x^ne^{\frac{i}{2}\gamma_L\omega_Rx^2}\;.
\end{align}
Therefore, the $n^\text{th}$ overtone behaves near the edges as
\begin{align}
\label{eq:SchOvt}
\Phi_{\ell mn}(t,r,\theta,\phi)\stackrel{x\to\pm\infty}{\sim}e^{-\pa{n+\frac{1}{2}}\gamma_L t}x^ne^{-i\omega_R\pa{t-\frac{1}{2}\gamma_Lx^2}}Y_{\ell m}(\theta,\phi)\;,\qquad
\omega_R=\frac{\ell+\frac{1}{2}}{\tilde{\lambda}}\;.
\end{align}
Recalling that $\tilde{\Omega}=\frac{1}{\tilde{\lambda}}=\frac{1}{3\sqrt{3}M}=\gamma_L$, the eikonal QNM spectrum of a Schwarzschild black hole is
\begin{align}
\omega_{\ell n}=\pa{\ell+\frac{1}{2}}\tilde{\Omega}-i\pa{n+\frac{1}{2}}\gamma_L \;.
\end{align}
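The bookkeeping relating overtone number, conformal weight, and descendant level can be spot-checked in a few lines (our own sketch; units $M=1$):

```python
import math

M = 1.0
gamma_L = 1.0 / (3.0 * math.sqrt(3.0) * M)   # Lyapunov exponent
Omega = gamma_L                              # = orbital angular velocity here

def omega_eikonal(ell, n):
    """Eikonal QNM frequency omega_{ell n} = (ell+1/2) Omega - i (n+1/2) gamma_L."""
    return complex((ell + 0.5) * Omega, -(n + 0.5) * gamma_L)

def weight_and_level(n):
    """SL(2,R)_QN weight h and descendant level N of the n-th overtone."""
    h = 0.25 if n % 2 == 0 else 0.75
    N = 0.5 * (n + 0.5) - h
    return h, N

# Decay rate -Im(omega) must equal 2 gamma_L (h + N), with N a non-negative integer:
table = [(n, *weight_and_level(n)) for n in range(8)]
```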
We have given the analytic form of the QNMs only in the near-ring region \eqref{eq:SchWavNearRing}.
The full solutions with the same conserved quantities $(\omega_R,\omega_I,\ell,m)$ can be extended everywhere in the spacetime by solving the exact radial ODE \eqref{eq:SchRad}.
The full wavefunctions still obey QNM boundary conditions since the photon shell is the only place where the radial momentum can change sign.
Outside the near-ring region, these solutions still form $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ multiplets but they are no longer related by the simple operators given in \eqref{eq:SLR}, which fail to commute with the wave equation and do not map solutions to solutions.
Although not present globally, the $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ symmetry of the wave equation in the near-ring region is enough to derive a conformal multiplet structure for the full spacetime QNM solutions.
\subsection{Quasinormal modes from geometric optics}
\label{sec:QNMGeo}
In the eikonal limit of large frequencies, the geometric optics approximation relates solutions of the massless wave equation to null geodesic congruences.
Applied to the null congruences in the neighborhood of the photon shell---the ``eikonal near-ring'' region---this approximation has been shown in a variety of circumstances \cite{Goebel1972,Ferrari1984,Mashhoon1985,Iyer1987a,Iyer1987b,Seidel1990,Decanini2003,Dolan2009,Cardoso2009,Dolan2010,Dolan2011,Decanini2010,Yang2012} to reproduce the eikonal QNM frequencies, with the real and imaginary parts respectively given in terms of the orbital frequency and Lyapunov exponents of (nearly) bound orbits.
We will see below that the region of validity of the eikonal approximation overlaps with a subset of the near-ring region.
In this short-wavelength, large-$\omega$ limit, one approximates a solution to the wave equation \eqref{eq:Wav} in terms of a rapidly oscillating phase $S(x)$ and a slowly varying amplitude $A(x)$:
\begin{align}
\Phi(x)\sim A(x)e^{iS(x)}\;.
\end{align}
In terms of the gradient of the phase
\begin{align}
p_\mu=\mathop{}\!\partial_\mu S(x)\;,
\end{align}
the wave equation takes the form
\begin{align}
\label{eq:ShortWave}
-p_\mu p^\mu A+i\pa{2p^\mu\nabla_\mu A+\nabla_\mu p^\mu A}+\nabla^2A=0\;,
\end{align}
and one attempts to solve this equation order by order in $p_\mu$.
The leading-order term implies that $p_\mu$ is a null vector satisfying the geodesic equation:
\begin{align}
p_\mu p^\mu=0\;,\qquad
p^\mu\nabla_\mu p_\nu=0\;.
\end{align}
The second equation follows from the first since $p_\mu$ is a gradient, and it implies that $s$ defined by \begin{align}
\mathop{}\!\partial_s=p^\mu\mathop{}\!\partial_\mu
\end{align}
is an affine parameter.
The first equation also implies that the phase $S(x)$ is a solution of the Hamilton-Jacobi equation for the covariant geodesic Hamiltonian $\frac{1}{2}g^{\mu\nu}p_\mu p_\nu$.
The subleading term in \eqref{eq:ShortWave} relates the expansion $\theta=\nabla_\mu p^\mu$ of the null congruence to the directional derivative of the wave amplitude along the geodesic:
\begin{align}
\label{eq:GeoAmp}
p^\mu\mathop{}\!\partial_\mu\log{A(x)}=-\frac{1}{2}\theta(x)\;.
\end{align}
For a congruence with positive expansion, the amplitude of the wave therefore decays exponentially with affine time:
\begin{align}
\label{eq:AmpDec}
\mathop{}\!\partial_s\log{A(x)}=-\frac{1}{2}\theta(x)
\qquad\implies\qquad
A\sim A_0e^{-\frac{1}{2}\theta s}\;.
\end{align}
To summarize, given a (rotation-free) null congruence with local tangent $p^\mu(x)$ and expansion $\theta(x)$, one can construct an approximate solution to the wave equation whose wavefronts (level sets of constant phase $S$) propagate along null geodesics.
Quasinormal modes correspond to trajectories that are asymptotically bound to the photon shell, and the exponential divergence of rays near the shell determines the exponential decay of the quasinormal modes with time.
In the case of null equatorial Schwarzschild geodesics, the Hamilton-Jacobi principal function is
\begin{align}
S(t,r_*,\phi)=-Et+L\phi\pm\int^{r_*}\sqrt{\mathcal{V}(r_*')}\mathop{}\!\mathrm{d} r_*'\;.
\end{align}
For a null congruence to satisfy the quasinormal mode boundary conditions, the potential $\mathcal{V}(r_*)$ must have a double root, i.e., the geodesics must have momentum satisfying the critical condition \eqref{eq:SchCriMom} and be asymptotically bound to the photon shell.
Evaluating the radial integral from the critical radius \eqref{eq:SchCriRad} then gives
\begin{align}
\tilde{S}(t,\mathop{}\!\delta r,\phi)=E\pa{-t+\tilde{\lambda}\phi+\int_0^{\mathop{}\!\delta r}\sqrt{\frac{\pa{9M+\mathop{}\!\delta r'}\mathop{}\!\delta r'^2}{\pa{3M+\mathop{}\!\delta r'}\pa{M+\mathop{}\!\delta r'}^2}}\mathop{}\!\mathrm{d}\mathop{}\!\delta r'}
\stackrel{\ab{\mathop{}\!\delta r}\to 0}{\approx}E\pa{-t+\tilde{\lambda}\phi+\frac{\sqrt{3}}{2M}\mathop{}\!\delta r^2}\;.
\end{align}
Noting that $\frac{\sqrt{3}}{2M}\mathop{}\!\delta r^2=\frac{1}{2}\gamma_Lx^2$ and identifying $E=\omega_R$ and $L=m$ as in the previous section, we recognize this as the phase of \eqref{eq:SchOvt} with $m=+\ell$.
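The quadratic approximation to the radial phase integral is accurate up to relative corrections of order $\mathop{}\!\delta r/M$, as a simple quadrature check of ours ($M=1$) confirms:

```python
import math

M = 1.0

def integrand(dr):
    """Exact integrand of the radial phase integral in S-tilde."""
    return math.sqrt((9.0 * M + dr) * dr**2 / ((3.0 * M + dr) * (M + dr)**2))

def phase(dr_max, n=20000):
    """Composite Simpson rule on [0, dr_max] (n must be even)."""
    h = dr_max / n
    s = integrand(0.0) + integrand(dr_max)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * integrand(k * h)
    return s * h / 3.0

dr = 0.01 * M
exact = phase(dr)
approx = math.sqrt(3.0) / (2.0 * M) * dr**2   # near-ring value sqrt(3) dr^2/(2M)
```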
The critical geodesics have an expansion
\begin{align}
\tilde{\theta}=\nabla^2\tilde{S}
\stackrel{\ab{\mathop{}\!\delta r}\to 0}{\approx}3\gamma_L\omega_R\,,
\end{align}
and hence, a simple solution describing the wave amplitude \eqref{eq:GeoAmp} of the QNM null congruence is $A_0(t,\mathop{}\!\delta r,\phi)=e^{-\frac{1}{2}\gamma_Lt}$.
The eikonal approximation is valid as long as $\gamma_L\omega_R\mathop{}\!\delta r^2\gg 1$, which for Schwarzschild amounts to the condition $\omega_R\mathop{}\!\delta r^2\gg M$.
As such, the eikonal region overlaps with the near-ring region \eqref{eq:SchWavNearRing} when $\frac{M}{\omega_R}\ll\mathop{}\!\delta r^2\ll M^2$.
According to \eqref{eq:AmpDec}, any quantity which is constant along the null congruence can be used to produce new solutions to the wave equation from a given seed solution.
In other words, if a function $u(x)$ satisfies
\begin{align}
\label{eq:GeoCon}
p^\mu\mathop{}\!\partial_\mu u=0\;,
\end{align}
and moreover if
\begin{align}
\label{eq:A0}
\Phi_0(x)=A_0(x)e^{iS(x)}
\end{align}
is a solution to \eqref{eq:ShortWave}, then
\begin{align}
\Phi_n(x)=u^n(x)A_0(x)e^{iS(x)}
\end{align}
is also an approximate solution to the wave equation.
In all the examples that we consider (including the one above), the minimal solution to the amplitude equation \eqref{eq:GeoAmp} takes the form $A_0\sim e^{-\frac{1}{2}\gamma_Lt}$, and
a straightforward analysis of the geodesic equation indicates that the quantity
\begin{align}
u(x)=e^{-\gamma_Lt}\mathop{}\!\delta r
\end{align}
is constant on the unstable homoclinic orbit, and therefore obeys \eqref{eq:GeoCon}.
Hence, we can immediately write down a family of solutions associated to the same critical orbit with phase $\tilde{S}$, but differing in the amplitude:
\begin{align}
\Phi_n(x)\sim e^{-\pa{n+\frac{1}{2}}\gamma_L t}\mathop{}\!\delta r^ne^{i\tilde{S}(x)}\;.
\end{align}
This eikonal QNM approximation agrees with the near-ring approximation \eqref{eq:SchOvt} in their overlap region.
\subsection{Observable conformal symmetry of the photon ring}
\label{subsec:SchScr}
In this subsection, we identify another emergent near-ring conformal symmetry, denoted $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$, which acts on null geodesics rather than waves.
A discrete subgroup of this scaling symmetry preserves the endpoints of null geodesics terminating at a fixed telescope and maps successive subrings to one another in black hole images.
This emergent scaling symmetry is potentially observable with upcoming space-based VLBI missions.
As before, spherical symmetry allows us to restrict our attention to geodesics in the equatorial plane.
Let $\Gamma$ denote the four-dimensional phase space of colored, equatorial null geodesics in Schwarzschild with coordinates $(r,\phi,p_r,p_\phi)$ and canonical symplectic form.
Time evolution is generated by the Hamiltonian
\begin{align}
\label{eq:SchH}
H(r,p_r,p_\phi)=\sqrt{\pa{1-\frac{2M}{r}}\br{\frac{p_\phi^2}{r^2}+\pa{1-\frac{2M}{r}}p_r^2}}\;,
\end{align}
which is obtained by solving the null condition $g^{\mu\nu}p_\mu p_\nu=0$ for $p_t=-H$.
Inverting \eqref{eq:SchH} gives
\begin{align}
\label{eq:SchPr}
p_r(r,H,L)=\pm\frac{\sqrt{\mathcal{V}(r)}}{f(r)}\;,\qquad \mathcal{V}(r)=H^2-f(r)\frac{L^2}{r^2}\;,
\end{align}
where $L=p_\phi$.
The coordinate transformation $(r,\phi,p_r,p_\phi)\to(T,\Phi,H,L)$ defined by
\begin{align}
\mathop{}\!\mathrm{d} T&=\frac{H}{f(r)\sqrt{\mathcal{V}(r)}}\mathop{}\!\mathrm{d} r\;,\qquad
\mathop{}\!\mathrm{d}\Phi=\mathop{}\!\mathrm{d}\phi-\frac{L}{r^2\sqrt{\mathcal{V}(r)}}\mathop{}\!\mathrm{d} r\;,
\end{align}
is canonical since it preserves the symplectic form
\begin{align}
\Omega=\mathop{}\!\mathrm{d} p_r\wedge\mathop{}\!\mathrm{d} r+\mathop{}\!\mathrm{d} p_\phi\wedge\mathop{}\!\mathrm{d}\phi
=\mathop{}\!\mathrm{d} H\wedge\mathop{}\!\mathrm{d} T+\mathop{}\!\mathrm{d} L\wedge\mathop{}\!\mathrm{d}\Phi\;.
\end{align}
These action-angle variables lead to trivial equations of motion:
\begin{align}
\dot{H}=\cu{H,H}=0\;,&&
\dot{L}=\cu{L,H}=0\;,&&
\dot{\Phi}=\cu{\Phi,H}=0\;,&&
\dot{T}=\cu{T,H}=1\;.
\end{align}
The first two equations indicate that the phase space $\Gamma$ foliates into superselection sectors of fixed $(H,L)$, which are conserved momenta.
The third equation implies that the Hamiltonian flow sends a photon with initial coordinates $(r_s,\phi_s,H,L)$ to final coordinates $(r_o,\phi_o,H,L)$ according to the rule
\begin{align}
\label{eq:Azimuth}
\Delta\phi=\phi_o-\phi_s
=\fint_{\phi_s}^{\phi_o}\mathop{}\!\mathrm{d}\phi
=\fint_{r_s}^{r_o}\frac{L}{r^2\sqrt{\mathcal{V}(r)}}\mathop{}\!\mathrm{d} r\;,
\end{align}
where the slash indicates that the integral is to be evaluated along the photon trajectory.
The last equation identifies $T$ as the variable conjugate to energy, i.e., time.
Hence, the time elapsed during evolution from a state $(r_s,\phi_s,H,L)$ to $(r_o,\phi_o,H,L)$ is
\begin{align}
\label{eq:TimeLapse}
T=\fint_{r_s}^{r_o}\frac{H}{f(r)\sqrt{\mathcal{V}(r)}}\mathop{}\!\mathrm{d} r\; .
\end{align}
Equations \eqref{eq:Azimuth} and \eqref{eq:TimeLapse} are the usual solution to the null geodesic equation in Schwarzschild.
These integrals can be evaluated explicitly in terms of elliptic functions \cite{GrallaLupsasca2020b}, but their detailed form will not be needed here.
The important point to note is that the integral \eqref{eq:TimeLapse} diverges logarithmically when $\mathcal{V}(r)$ has a double root.
The time function therefore diverges for a homoclinic trajectory with one endpoint on the photon shell, so $T$ is only a local coordinate on phase space.
Since we are concerned with optical images, we focus on geodesics that begin and end at null infinity, always remaining outside the sphere of bound photon orbits at $\tilde{r}=3M$.
These have
\begin{align}
\label{eq:SchHhat}
\hat{H}\equiv H-\frac{\ab{L}}{3\sqrt{3}M}<0\;.
\end{align}
A distant observer at large radius $r_o\to\infty$ receives these geodesics with impact parameter
\begin{align}
\label{eq:ImpactParameter}
b=\frac{\ab{L}}{H}>3\sqrt{3}M\;.
\end{align}
Their radius of closest approach is reached when the radial momentum \eqref{eq:SchPr} vanishes.
This occurs at the largest root of the radial potential $\mathcal{V}(r)$ \cite{Gates2020}:
\begin{align}
\label{eq:RadialTurningPoint}
r_{\rm min}=\frac{2b}{\sqrt{3}}\cos\br{\frac{1}{3}\arccos\pa{-\frac{3\sqrt{3}M}{b}}}
>3M\;.
\end{align}
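The closed form \eqref{eq:RadialTurningPoint} can be verified by substituting it back into the radial potential; a short check of ours ($M=1$, $H=1$, $b=\ab{L}$):

```python
import math

M = 1.0

def r_min(b):
    """Largest root of the radial potential, valid for b > 3*sqrt(3)*M."""
    return (2.0 * b / math.sqrt(3.0)) * math.cos(
        math.acos(-3.0 * math.sqrt(3.0) * M / b) / 3.0)

def radial_potential(r, b):
    """V(r)/H^2 = 1 - f(r) b^2/r^2, with impact parameter b = |L|/H."""
    return 1.0 - (1.0 - 2.0 * M / r) * b**2 / r**2

bs = [5.3, 7.0, 20.0, 1000.0]
radii = [r_min(b) for b in bs]
residuals = [abs(radial_potential(r, b)) for r, b in zip(radii, bs)]
```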
Geodesics with $\hat{H}=0$ are homoclinic and asymptote to the closed photon orbit at $\tilde{r}=3M$ in the far past and/or future.
Their impact parameter $b=3\sqrt{3}M$ defines the critical curve in the observer sky.
Using the fact that the coordinates $(T,\Phi,H,L)$ are canonical, it is straightforward to check that the functions
\begin{align}
\label{eq:Generators}
H_+=\hat{H}\;,\qquad
H_0=-\hat{H} T\;,\qquad
H_{-}=\hat{H} T^2\;,
\end{align}
obey the $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ algebra.
This conformal algebra commutes with $L=p_\phi$ and therefore acts within superselection sectors $\Gamma_L$ of fixed angular momentum.
However, as indicated by \eqref{eq:Generators}, it does modify the energy (or photon color) $H=\hat{H}+\frac{\ab{L}}{3\sqrt{3}M}$, and hence also the impact parameter \eqref{eq:ImpactParameter} and the radius of closest approach \eqref{eq:RadialTurningPoint}.
The $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ group action on the unbound elements of $\Gamma_L$ is transitive: finite $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ transformations can be used to map any such geodesic to any other.
Of course, one could replace $\hat{H}$ in this construction with any function of the form $H-g(L)$, but only the choice \eqref{eq:SchHhat} leads to dilations that scale into the photon shell as in \eqref{eq:FiniteDilation} below.
We are particularly interested in $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$-invariant submanifolds of phase space, which are sets of points $(\tilde{r},\tilde{\phi},\tilde{p}_r,\tilde{p}_\phi)$ left invariant by all three $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ generators:
\begin{align}
\left.\cu{H_m,x}\right|_{(\tilde{r},\tilde{\phi},\tilde{p}_r,\tilde{p}_\phi)}=0\;,\qquad
\forall x\in\cu{r,\phi,p_r,p_\phi}\;,\quad
\forall m\in\cu{-1,0,1}\;.
\end{align}
The locus of such points in phase space constitutes the photon shell.
For Schwarzschild, it consists of a single photon sphere.
Since we have specialized to the equatorial plane, we find
\begin{align}
\tilde{r}=3M\;,\qquad
\tilde{\phi}\in[0,2\pi)\;,\qquad
\tilde{p}_r=0\;,\qquad
\tilde{p}_\phi=\pm3\sqrt{3}MH\;,
\end{align}
with the sign corresponding to the prograde/retrograde circular orbit, parameterized by $\tilde{\phi}$.
Importantly, the homoclinic orbits control the long-time behavior of the flow generated by $H_0$.
Under these scalings, $\hat{H}$ flows according to $\mathop{}\!\partial_\alpha\hat{H}=\{H_0,\hat{H}\}=-\hat{H}$, so that after a finite dilation $e^{-\alpha H_0}$,
\begin{align}
\label{eq:FiniteDilation}
\hat{H}(0)\to\hat{H}(\alpha)=e^{-\alpha}\hat{H}(0)\;.
\end{align}
For large $\alpha$, $\hat{H}$ becomes small and, correspondingly, $T\to\infty$.
Introducing a dimensionless radius $r=3M\pa{1+R}$, the point of closest approach to the photon shell given in \eqref{eq:RadialTurningPoint} becomes
\begin{align}
R_{\rm min}^2=-\frac{2\sqrt{3}\hat{H} M}{L}+\ldots
\end{align}
to leading order as $\hat{H}\to0$.
It follows that under $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ dilations,
\begin{align}
\label{eq:RadialDeviation}
\mathop{}\!\partial_\alpha\ln{R_{\rm min}}=-\frac{1}{2}\;.
\end{align}
For $\alpha\to\infty$, $R_{\rm min}\to0$ and $\hat{H}\to0$ so any geodesic approaches the bound orbit at $\tilde{r}=3M$.
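Both the leading-order relation for $R_{\rm min}$ and the scaling exponent $-\frac{1}{2}$ are easy to confirm from the exact turning point; a numerical sketch of ours ($M=1$, in the sector $L=3\sqrt{3}M$):

```python
import math

M = 1.0
L = 3.0 * math.sqrt(3.0) * M                  # fixed angular momentum sector

def r_min(b):
    """Largest root of the radial potential for impact parameter b > bc."""
    return (2.0 * b / math.sqrt(3.0)) * math.cos(
        math.acos(-3.0 * math.sqrt(3.0) * M / b) / 3.0)

def R_min(Hhat):
    """Dimensionless deviation R of the turning point from r = 3M."""
    H = Hhat + L / (3.0 * math.sqrt(3.0) * M)
    return r_min(L / H) / (3.0 * M) - 1.0

Hhat = -1.0e-4
R1 = R_min(Hhat)
# Leading-order prediction R_min^2 = -2*sqrt(3)*Hhat*M/L:
R_pred = math.sqrt(-2.0 * math.sqrt(3.0) * Hhat * M / L)

# One unit of dilation rescales Hhat by e^{-1} and R_min by e^{-1/2}:
alpha = 1.0
R2 = R_min(math.exp(-alpha) * Hhat)
exponent = -math.log(R2 / R1) / alpha         # should approach 1/2
```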
A geodesic that begins and ends at null infinity will do so for any $\alpha$, but since
\begin{align}
\label{eq:WindingDivergence}
\Delta\phi=2\int_{r_{\rm min}}^\infty\frac{L}{r^2\sqrt{\mathcal{V}(r)}}\mathop{}\!\mathrm{d} r
=\log\pa{\frac{1}{R_{\rm min}^2}}+2\log\br{12\pa{2-\sqrt{3}}}+\mathcal{O}\!\pa{R_{\rm min}}
\end{align}
to leading order as $R_{\rm min}\to0$, the number of times $w=\Delta\phi/(2\pi)$ that it orbits the black hole will grow without bound according to\footnote{The variable $w$ interpolates between an integer-spaced set of windings that are inequivalent to the image label $n\in\mathbb{N}$ employed in previous treatments of the photon ring \cite{Johnson2020,Himwich2020,GrallaLupsasca2020a,Hadar2021}.
Rather, the direct ($n=0$) image arises from the unique light ray connecting the source to the observer after executing no more than half a turn around the black hole, i.e., with winding $-\frac{1}{2}\leq w_0\leq\frac{1}{2}$.
Light rays with $n>0$ correspond to relativistic images with
$\Delta w=w-w_0=\sign(w_0)\br{(-1)^n(\frac{n}{2}+\frac{1}{4})-\frac{1}{4}}=\sign(w_0)\cu{-1,1,-2,2,-3,\ldots}$.}
\begin{align}
\label{eq:WindingExponent}
\mathop{}\!\partial_\alpha w=\frac{1}{2\pi}\;.
\end{align}
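Both the logarithmic divergence \eqref{eq:WindingDivergence} and the unit growth of $\Delta\phi$ per unit dilation can be checked by direct quadrature of \eqref{eq:Azimuth}. In this sketch of ours ($M=1$, sector $L=3\sqrt{3}M$), the substitution $r=r_{\rm min}+u^2$ regularizes the turning point:

```python
import math

M = 1.0
bc = 3.0 * math.sqrt(3.0) * M                  # critical impact parameter
L = bc                                         # fixed angular momentum sector

def r_min(b):
    """Largest root of the radial potential for b > bc."""
    return (2.0 * b / math.sqrt(3.0)) * math.cos(math.acos(-bc / b) / 3.0)

def delta_phi(b, u_cut=300.0, n=60000):
    """Total sweep 2 * int_{r_min}^inf b dr / (r^2 sqrt(V)).  The substitution
    r = r_min + u^2 regularizes the turning point; the large-r tail beyond
    r = r_min + u_cut^2 is added analytically as 2b/r."""
    rm = r_min(b)
    Vp = 2.0 * b**2 * (rm - 3.0 * M) / rm**4   # V'(r_min) > 0
    def g(u):
        if u == 0.0:
            return 4.0 * b / (rm**2 * math.sqrt(Vp))   # limiting value
        r = rm + u * u
        V = 1.0 - (1.0 - 2.0 * M / r) * b**2 / r**2
        return 4.0 * b * u / (r**2 * math.sqrt(V))
    h = u_cut / n
    s = g(0.0) + g(u_cut)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * g(k * h)
    return s * h / 3.0 + 2.0 * b / (rm + u_cut**2)

def sweep(Hhat):
    """Delta phi for fixed L and energy H = Hhat + L/(3 sqrt(3) M)."""
    H = Hhat + L / (3.0 * math.sqrt(3.0) * M)
    return delta_phi(L / H)

Hhat = -1.0e-4
dphi1 = sweep(Hhat)
dphi2 = sweep(math.exp(-1.0) * Hhat)           # one unit of dilation

Rm = r_min(L / (1.0 + Hhat)) / (3.0 * M) - 1.0
pred = math.log(1.0 / Rm**2) + 2.0 * math.log(12.0 * (2.0 - math.sqrt(3.0)))
```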
The emergence of an enhanced $\mathsf{SL}(2,\mathbb{R})$ symmetry at a fixed point of a scaling transformation is a ubiquitous phenomenon in a wide variety of physical systems.
It is characterized by critical exponents that quantify how various physical quantities scale in the approach to the fixed point.
In the present example, these exponents include the scaling of the radial deviation from the photon shell \eqref{eq:RadialDeviation} or equivalently the orbit number \eqref{eq:WindingExponent}.
Here we are interested in how the critical exponents can be measured astronomically.
In order to discuss this, we introduce a source star at $(r_s,\phi_s)$ and a telescope at $(r_o,\phi_o)$ and consider the (colored) geodesics connecting the two.
There are infinitely many such geodesics, labeled by the number of times $w$ they wind around the black hole en route from star to telescope.
Since each of these geodesics has the same endpoints, they all share the same net angular shift $\Delta\phi$ modulo $2\pi$.
Consider the superselection sector $\Gamma_L=(r,\phi,p_r,p_\phi=L)$ of geodesics with fixed angular momentum $L$.
Demanding that a geodesic in $\Gamma_L$ originate at the star and end at the telescope cuts this three-dimensional subspace of $\Gamma$ down to an infinite but discrete set of geodesics labelled by $w$ and denoted $\Gamma_{\rm obs}$.\footnote{Fixing one endpoint of the geodesic at the star leaves one degree of freedom in $\Gamma_L$: the energy $H$ parameterizing the emission direction via \eqref{eq:SchPr}.
Requiring the geodesic to reach the telescope imposes a condition on $H$ via \eqref{eq:Azimuth}.
Given \eqref{eq:WindingDivergence}, this condition admits a discrete spectrum of solutions labeled by $w$.}
Since $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ acts transitively on $\Gamma_L$, any two points in $\Gamma_{\rm obs}$ can be related by an $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ transformation.
However, in general, the set of such transformations does not form a discrete subgroup of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$.
Such a discrete subgroup emerges near the fixed point for $w\gg1$, or equivalently small $R_{\rm min}$.\footnote{A similar argument applies to orbits that wind in the other direction with $w\ll-1$.}
If we act on such a geodesic with a small finite dilation $e^{-\alpha H_0}$, then according to \eqref{eq:WindingExponent}, $\Delta\phi\to\Delta\phi+\alpha$.
It follows that, for the $w$-independent dilation
\begin{align}
\label{eq:Dilation}
D_0=e^{-2\pi H_0}\;,
\end{align}
we obtain an $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ element which maps $\Gamma_{\rm obs}$ to itself (for large $w\gg1$) with
\begin{align}
w\to w+1\;.
\end{align}
The semigroup formed by products of $D_0$ is an emergent discrete scaling symmetry of the photon ring.
\begin{figure}[h]
\centering
\includegraphics[scale=.5]{SchwarzschildScreen.pdf}
\caption{
Action of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ dilations on the image plane of an observer at a large distance from a Schwarzschild black hole.
}
\label{fig:Schwarzschild}
\end{figure}
The action of $H_0$ on the observer screen is illustrated in Fig.~\ref{fig:Schwarzschild}.
The red critical curve corresponds to photons with $\hat{H}=0$ that asymptote to bound photon orbits at $\tilde{r}=3M$.
Photons in the interior of this curve have $\hat{H}>0$ and are captured by the black hole, while those in its exterior have $\hat{H}<0$ and are deflected back to null infinity.
If the color (energy) of observed photons is fixed, then every choice of a geodesic plane and angular momentum $L$ defines a unique point on the screen (i.e., a direction of approach to the telescope).
However, if $E$ is not fixed, then each choice of a geodesic plane and angular momentum $L$ corresponds to a segment of a ray on the image (either the green or blue segment according to whether $\hat{H}\gtrless0$).
Moving along such a segment scales $\hat{H}$, keeping the endpoint of the geodesic on the observer screen fixed but varying the other endpoint in the bulk.
Only when $\hat{H}$ takes certain discrete values does a photon reach a given source star.
Near the critical curve ($R_{\rm min}\to0$), such photons wind multiple times $\ab{w}\gg1$ around the black hole, and successive images of the star are squeezed exponentially closer to the critical curve by a demagnification factor $e^{-\gamma_L\tau}=e^{-\pi}$, where $\gamma_L$ is the Lyapunov exponent \eqref{eq:SchLyp} and $\tau$ the half-orbital period \eqref{eq:SchOrb}.
These images decompose into two families of photons with different signs of the angular momentum.
Each family consists of a representation of the discrete subgroup of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ generated by the finite dilation $D_0$ [Eq.~\eqref{eq:Dilation}].
The preceding discussion did not depend sensitively on the radius of the emitting star, which could be replaced by any kind of light source outside the photon ring.
In fact, it also applies to light sources appearing inside the photon ring, except that those images arise from photons emitted on geodesics with $\hat{H}>0$ that emanate from the past horizon before escaping to future null infinity.
The discrete dilation $D_0$ acts on all geodesics that hover near the photon shell and increases their winding number by one.
Hence, acting on the photon ring image at the telescope, it maps the $w^\text{th}$ photon subring to the $(w+1)^\text{th}$ subring.\footnote{Equivalently, in the convention of \cite{Johnson2020}, it maps the $n^\text{th}$ subring to the $(n+2)^\text{th}$ subring.
The full photon ring decomposes into two representations of the discrete subgroup of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ generated by $D_0$, corresponding to two sets of lensed images appearing on diametrically opposed positions on the photon ring.}
The radial separation of successive images on the observer screen directly measures the critical exponent $\gamma=\gamma_L\tau$ and therefore the imaginary part of the eikonal quasinormal mode spectrum.
A space-VLBI experiment could therefore directly measure a part of the QNM spectrum that is not accessible by other means, and thereby discover an entirely new emergent symmetry in nature!
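The extraction of $\gamma$ from such a measurement can be illustrated with a tiny synthetic example (a Python sketch; the amplitude $C=0.35$ is an arbitrary illustrative number, while $\rho_c=\sqrt{27}\,M$ is the Schwarzschild critical impact parameter with $M=1$): successive subring radii approach the critical curve geometrically, so the log-ratio of two successive radial offsets recovers $\gamma=\pi$.

```python
import math

# Synthetic radial positions rho_w of successive photon subring images on the
# screen: each extra winding squeezes the image toward the critical-curve
# radius rho_c by e^{-gamma}, with gamma = pi for Schwarzschild.
rho_c = math.sqrt(27)          # critical impact parameter 3*sqrt(3) M  (M = 1)
C, gamma_true = 0.35, math.pi  # C is an arbitrary illustrative amplitude
rho = [rho_c + C*math.exp(-gamma_true*w) for w in range(1, 5)]

# "Measuring" two successive subring radii recovers the critical exponent:
gamma_measured = math.log((rho[0] - rho_c)/(rho[1] - rho_c))
assert abs(gamma_measured - math.pi) < 1e-9
```

The sketch is circular by construction (the data are generated by the model itself); its point is only that the demagnification factor is directly read off from two image radii.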
\section{Kerr black holes}
\label{sec:Kerr}
We now turn to the astrophysically interesting case of Kerr black holes, which possess a much richer spectrum of bound orbits and quasinormal modes.
The relationship between the Lyapunov exponents of the photon shell and the QNM frequencies in the geometric optics limit was worked out in a series of papers \cite{Goebel1972,Ferrari1984,Mashhoon1985,Iyer1987a,Iyer1987b,Seidel1990,Decanini2003,Dolan2009,Cardoso2009,Dolan2010,Dolan2011,Decanini2010}, culminating in \cite{Yang2012}.
We review and elaborate on this correspondence in sections \ref{subsec:KerrShell} and \ref{subsec:KerrQNM}.
In section \ref{subsec:KerrOvertones}, we describe a novel and very simple geometric optics derivation of the Kerr QNM overtone wavefunctions, and in section \ref{subsec:KerrScreen} we describe the emergent conformal symmetry of the optical image of astrophysical black holes such as M87*.
\subsection{The near-ring region}
\label{subsec:KerrShell}
The Kerr photon shell is the region of phase space corresponding to (unstably) bound photon orbits that neither escape to infinity nor fall across the event horizon (see, e.g., Fig.~2 of \cite{Johnson2020} for an illustration).
For Schwarzschild, the photon shell can be described in configuration space as a two-sphere at $\tilde{r}=3M$.
For a spinning black hole, this sphere radially fattens into a three-dimensional shell.
In Boyer-Lindquist coordinates, each bound orbit has a fixed radial coordinate $r$ but executes complicated motion in the angular directions.
The Kerr metric for mass $M$ and angular momentum $J=aM$ (with $0\le a\le M$) is
\begin{subequations}
\begin{gather}
ds^2=-\frac{\Delta}{\Sigma}\pa{\mathop{}\!\mathrm{d} t-a\sin^2{\theta}\mathop{}\!\mathrm{d}\phi}^2+\frac{\Sigma}{\Delta}\mathop{}\!\mathrm{d} r^2+\Sigma\mathop{}\!\mathrm{d}\theta^2+\frac{\sin^2{\theta}}{\Sigma}\br{\pa{r^2+a^2}\mathop{}\!\mathrm{d}\phi-a\mathop{}\!\mathrm{d} t}^2\;,\\
\Delta=r^2-2Mr+a^2,\qquad
\Sigma=r^2+a^2\cos^2{\theta}\;.
\end{gather}
\end{subequations}
This geometry has two independent Killing vectors $\mathop{}\!\partial_\phi$ and $\mathop{}\!\partial_t$ and an independently conserved Killing tensor
\begin{align}
\label{eq:KerrKilling}
K_{\mu\nu}=-{J_\mu}^\lambda J_{\lambda\nu}\;,\qquad
J=a\cos{\theta}\mathop{}\!\mathrm{d} r\wedge\pa{\mathop{}\!\mathrm{d} t-a\sin^2{\theta}\mathop{}\!\mathrm{d}\phi}+r\sin{\theta}\mathop{}\!\mathrm{d}\theta\wedge\br{\pa{r^2+a^2}\mathop{}\!\mathrm{d}\phi-a\mathop{}\!\mathrm{d} t}\;.
\end{align}
The corresponding conserved quantities are
\begin{align}
E=p_\mu\mathop{}\!\partial_t^\mu
=-p_t\;,\qquad
L=p_\mu\mathop{}\!\partial_\phi^\mu
=p_\phi\;,\qquad
k=K^{\mu\nu}p_\mu p_\nu\;.
\end{align}
It is often more convenient to work with the Carter constant $Q=k-\pa{L-aE}^2$.
The local tangent to a null geodesic is completely determined by the conserved quantities $E$, $L$ and $Q$ as
\begin{align}
\label{eq:KerrPlow}
p(x^\mu,E,L,Q)=-E\mathop{}\!\mathrm{d} t\pm_r\frac{\sqrt{\mathcal{R}(r)}}{\Delta(r)}\mathop{}\!\mathrm{d} r\pm_\theta\sqrt{\Theta(\theta)}\mathop{}\!\mathrm{d}\theta+L\mathop{}\!\mathrm{d}\phi \;.
\end{align}
The signs $\pm_r$ and $\pm_\theta$ determine the radial and polar directions of travel and the potentials take the form
\begin{subequations}
\begin{align}
\label{eq:KerrRadialPotential}
\mathcal{R}(r)&=\br{E\pa{r^2+a^2}-aL}^2-\Delta(r)\br{Q+\pa{L-aE}^2}\;,\\
\label{eq:KerrPolarPotential}
\Theta(\theta)&=Q+L^2+a^2E^2\cos^2\theta-\frac{L^2}{\sin^2\theta}\;.
\end{align}
\end{subequations}
Null geodesics with $Q>0$ oscillate in the $\theta$-direction between the zeroes $\theta_\pm$ of the angular potential \eqref{eq:KerrPolarPotential}.
Bound photon orbits require $\mathcal{R}(r)=\mathcal{R}'(r)=0$ and lie at fixed orbital radius $r=\tilde{r}$ in the range $\tilde{r}\in\br{\tilde{r}_-,\tilde{r}_+}$ with
\begin{align}
\tilde{r}_\pm=2M\br{1+\cos\pa{\frac{2}{3}\arccos\pa{\pm\frac{a}{M}}}}\;.
\end{align}
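As a quick numerical check of the limits of this formula (an illustrative Python sketch in units $M=1$, not part of the derivation):

```python
import math

def shell_radii(a, M=1.0):
    """Edges r~_- <= r~_+ of the Kerr photon shell,
    r~_pm = 2M[1 + cos((2/3) arccos(pm a/M))]."""
    rp = 2*M*(1 + math.cos((2/3)*math.acos(+a/M)))
    rm = 2*M*(1 + math.cos((2/3)*math.acos(-a/M)))
    return rm, rp

# Schwarzschild limit: the shell degenerates to the photon sphere at 3M.
rm, rp = shell_radii(0.0)
assert abs(rm - 3.0) < 1e-12 and abs(rp - 3.0) < 1e-12
# Extremal limit: r~_- -> M (prograde) and r~_+ -> 4M (retrograde).
rm, rp = shell_radii(1.0)
assert abs(rm - 1.0) < 1e-12 and abs(rp - 4.0) < 1e-12
```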
Their energy-rescaled angular momentum $\lambda=\frac{L}{E}$ and Carter constant $\eta=\frac{Q}{E^2}$ are fixed by the orbital radius:
\begin{align}
\label{eq:CriticalParameters}
\tilde{\lambda}=a+\frac{\tilde{r}}{a}\left[\tilde{r}-\frac{2\Delta(\tilde{r})}{\tilde{r}-M}\right]\;, \qquad
\tilde{\eta}=\frac{\tilde{r}^3\br{4a^2M-\tilde{r}\pa{\tilde{r}-3M}^2}}{a^2\pa{\tilde{r}-M}^2}\;.
\end{align}
On the boundaries $r=\tilde{r}_\pm$, the orbits are equatorial with $\tilde{\eta}=0$, but the bound geodesics in the interior of the photon shell have $\tilde{\eta}>0$ and oscillate in the $\theta$-direction between polar angles given by
\begin{align}
\label{eq:CriticalPoles}
\tilde{\theta}_\pm=\arccos\pa{\mp\sqrt{\tilde{u}_+}}\;,\qquad
\tilde{u}_\pm=\frac{-\tilde{r}^4+3M^2\tilde{r}^2-2a^2M\tilde{r}\pm2\tilde{r}\sqrt{M\Delta(\tilde{r})\pa{2\tilde{r}^3-3M\tilde{r}^2+a^2M}}}{a^2\pa{\tilde{r}-M}^2}
\gtrless0\;.
\end{align}
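These algebraic statements are easy to verify numerically. The following sketch (illustrative Python with $M=1$, $E=1$; the sample values $a=0.9$, $\tilde{r}=2$ are arbitrary) checks that the critical parameters make $\tilde{r}$ a double root of $\mathcal{R}$ and that $\Theta$ vanishes at $\tilde{\theta}_\pm$:

```python
import math

M = 1.0

def Delta(r, a):
    return r*r - 2*M*r + a*a

def critical_params(rt, a):
    """Energy-rescaled critical impact parameters (lambda~, eta~)."""
    lam = a + (rt/a)*(rt - 2*Delta(rt, a)/(rt - M))
    eta = rt**3*(4*a*a*M - rt*(rt - 3*M)**2)/(a*a*(rt - M)**2)
    return lam, eta

def R(r, rt, a):
    """Radial potential on the critical parameters of the orbit at rt (E = 1)."""
    lam, eta = critical_params(rt, a)
    return ((r*r + a*a) - a*lam)**2 - Delta(r, a)*(eta + (lam - a)**2)

def Theta(th, rt, a):
    """Polar potential on the critical parameters (E = 1)."""
    lam, eta = critical_params(rt, a)
    return eta + lam*lam + a*a*math.cos(th)**2 - lam*lam/math.sin(th)**2

a, rt = 0.9, 2.0               # a sample bound orbit inside the photon shell
# Double root: R(r~) = R'(r~) = 0 (derivative by central difference).
assert abs(R(rt, rt, a)) < 1e-8
eps = 1e-5
assert abs((R(rt + eps, rt, a) - R(rt - eps, rt, a))/(2*eps)) < 1e-5

# The polar turning points theta~_pm are zeroes of Theta.
up = (-rt**4 + 3*M*M*rt*rt - 2*a*a*M*rt
      + 2*rt*math.sqrt(M*Delta(rt, a)*(2*rt**3 - 3*M*rt*rt + a*a*M)))/(a*a*(rt - M)**2)
th_plus = math.acos(-math.sqrt(up))
assert abs(Theta(th_plus, rt, a)) < 1e-6
```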
Following \cite{Johnson2020}, we will refer to one such complete oscillation (e.g., from $\tilde{\theta}_-$ back to itself) as one orbit, since the photon typically returns to a point near, but not identical to (since the angle $\phi$ also shifts), its initial position. Note that $\tilde{\theta}_\pm$ tend to the north/south poles (i.e., the bound orbit passes over the poles) if and only if $\tilde{\lambda}=0$.
The angular momentum vanishes at the orbital radius $\tilde{r}_0\in\br{\tilde{r}_-,\tilde{r}_+}$ given by
\begin{align}
\tilde{r}_0=M+2M\triangle\cos\br{\frac{1}{3}\arccos\pa{\frac{1-\frac{a^2}{M^2}}{\triangle^3}}}\;,\qquad
\triangle=\sqrt{1-\frac{a^2}{3M^2}}\;.
\end{align}
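A numerical check of the two statements above (illustrative Python, $M=1$; the spin $a=0.9$ is arbitrary): the angular momentum vanishes at $\tilde{r}_0$, and there the polar turning points reach the poles, $\tilde{u}_+\to1$.

```python
import math

M = 1.0

def lam_tilde(rt, a):
    """Critical angular momentum lambda~(r~)."""
    Delta = rt*rt - 2*M*rt + a*a
    return a + (rt/a)*(rt - 2*Delta/(rt - M))

def r0_tilde(a):
    """Radius r~_0 of the polar (lambda~ = 0) bound orbit."""
    tri = math.sqrt(1 - a*a/(3*M*M))
    return M + 2*M*tri*math.cos((1/3)*math.acos((1 - a*a/(M*M))/tri**3))

a = 0.9
r0 = r0_tilde(a)
# The angular momentum indeed vanishes at r~_0 ...
assert abs(lam_tilde(r0, a)) < 1e-10
# ... and the polar turning points reach the poles there: u~_+ -> 1.
Delta0 = r0*r0 - 2*M*r0 + a*a
up = (-r0**4 + 3*M*M*r0*r0 - 2*a*a*M*r0
      + 2*r0*math.sqrt(M*Delta0*(2*r0**3 - 3*M*r0*r0 + a*a*M)))/(a*a*(r0 - M)**2)
assert abs(up - 1.0) < 1e-10
```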
The bound geodesics are unstable in the sense that any small perturbation will push them into the black hole or towards infinity where they can reach a telescope.
The observed photon shell image arises from photons traveling on such ``nearly bound'' geodesics.
Consider two such nearby geodesics, one of which is exactly bound at $\tilde{r}$, with the other initially differing only by an infinitesimal radial separation $\mathop{}\!\delta r_0$.
After $n$ half-orbits the separation grows as
\begin{align}
\mathop{}\!\delta r_n=e^{\gamma n}\mathop{}\!\delta r_0\;,
\end{align}
where the Lyapunov exponent $\gamma$ is a function on the space of bound orbits given by
\begin{align}
\label{eq:KerrLyp}
\gamma=\sqrt{\frac{\mathcal{R}''(\tilde{r})}{2E^2}}\tilde{G}_\theta\;,\qquad
\tilde{G}_\theta=\int_{\tilde{\theta}_-}^{\tilde{\theta}_+}\frac{E}{\sqrt{\Theta(\theta)}}\mathop{}\!\mathrm{d}\theta
=\frac{2}{\sqrt{-\tilde{u}_-a^2}}K\!\pa{\frac{\tilde{u}_+}{\tilde{u}_-}}\;.
\end{align}
The elliptic integral arises from averaging over an angular period.
The half-period of a bound orbit at $r=\tilde{r}$ is \cite{GrallaLupsasca2020a}
\begin{align}
\tau=\tilde{r}^2\pa{\frac{\tilde{r}+3M}{\tilde{r}-M}}\tilde{G}_\theta+a^2\tilde{G}_t\;,\qquad
\tilde{G}_t=\int_{\tilde{\theta}_-}^{\tilde{\theta}_+}\frac{E\cos^2{\theta}}{\sqrt{\Theta(\theta)}}\mathop{}\!\mathrm{d}\theta
=2\sqrt{-\frac{\tilde{u}_-}{a^2}}\br{E\!\pa{\frac{\tilde{u}_+}{\tilde{u}_-}}-K\!\pa{\frac{\tilde{u}_+}{\tilde{u}_-}}}\;.
\end{align}
The period-averaged radial deviation as a function of Boyer-Lindquist time becomes \cite{Yang2012}
\begin{align}
\label{eq:KerrRadialDeviation}
\mathop{}\!\delta r(t)=e^{\gamma_Lt}\mathop{}\!\delta r_0\;,\qquad \gamma_L=\frac{\gamma}{\tau}\;.
\end{align}
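The Schwarzschild limit provides a nontrivial check of these formulas: as $a\to0$ with $\tilde{r}\to3M$, one should recover $\gamma\to\pi$ and $\gamma_L\to1/(\sqrt{27}\,M)$. A self-contained Python sketch (our own AGM evaluation of $K$ and $E$ in the parameter convention $K(m)$, an implementation choice rather than anything from the text; units $M=1$, $E=1$):

```python
import math

def ellipK(m):
    """Complete elliptic integral K(m), parameter convention, via the AGM."""
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = 0.5*(a + b), math.sqrt(a*b)
    return math.pi/(2*a)

def ellipE(m):
    """Complete elliptic integral E(m) via the AGM c_n sum (A&S 17.6.4)."""
    a, b = 1.0, math.sqrt(1.0 - m)
    s, n = 0.5*m, 0                # running sum of 2^{n-1} c_n^2
    while abs(a - b) > 1e-15:
        c = 0.5*(a - b)
        a, b = 0.5*(a + b), math.sqrt(a*b)
        n += 1
        s += 2**(n - 1)*c*c
    return ellipK(m)*(1 - s)

M = 1.0

def lyapunov_and_period(rt, a):
    """(gamma, tau) for the bound orbit at r~ = rt, with E = 1."""
    Delta = rt*rt - 2*M*rt + a*a
    lam = a + (rt/a)*(rt - 2*Delta/(rt - M))
    eta = rt**3*(4*a*a*M - rt*(rt - 3*M)**2)/(a*a*(rt - M)**2)
    s = 2*rt*math.sqrt(M*Delta*(2*rt**3 - 3*M*rt*rt + a*a*M))
    num = -rt**4 + 3*M*M*rt*rt - 2*a*a*M*rt
    up, um = (num + s)/(a*a*(rt - M)**2), (num - s)/(a*a*(rt - M)**2)
    m = up/um
    Gth = 2/math.sqrt(-um*a*a)*ellipK(m)
    Gt = 2*math.sqrt(-um/(a*a))*(ellipE(m) - ellipK(m))
    def R(r):                      # critical radial potential
        return (((r*r + a*a) - a*lam)**2
                - (r*r - 2*M*r + a*a)*(eta + (lam - a)**2))
    h = 1e-4                       # R''(r~) by a central second difference
    Rpp = (R(rt + h) - 2*R(rt) + R(rt - h))/(h*h)
    gamma = math.sqrt(0.5*Rpp)*Gth
    tau = rt*rt*(rt + 3*M)/(rt - M)*Gth + a*a*Gt
    return gamma, tau

# Schwarzschild limit: gamma -> pi and gamma_L = gamma/tau -> 1/sqrt(27).
gamma, tau = lyapunov_and_period(3.0, 1e-3)
assert abs(gamma - math.pi) < 1e-4
assert abs(gamma/tau - 1/math.sqrt(27)) < 1e-5
```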
\subsection{Conformal symmetry of the quasinormal mode spectrum}
\label{subsec:KerrQNM}
We now turn to a discussion of the quasinormal modes of the Kerr black hole.
The wave equation \eqref{eq:Wav} in this geometry separates due to the existence of the Killing tensor \eqref{eq:KerrKilling}.
Solutions to the massless scalar wave equation therefore take the form
\begin{align}\label{eq:KerrQNMansatz}
\Phi(t,r,\theta,\phi)=\int\mathop{}\!\mathrm{d}\omega\sum_{\ell=0}^\infty\sum_{m=-\ell}^{\ell}c_{\ell m}(\omega)\Phi_{ \ell m \omega}(t,r,\theta,\phi)\;, \qquad
\Phi_{\ell m\omega}(t,r,\theta,\phi)=e^{-i\omega t}R_{\ell m\omega}(r)S_{\ell m\omega}(\theta)e^{im\phi}\;,
\end{align}
where the $S_{\ell m\omega}(\theta)$ denote the spheroidal harmonics.
These special functions satisfy the equation
\begin{align}
\label{eq:KerrAngularEq}
\br{\frac{1}{\sin{\theta}}\mathop{}\!\partial_\theta\pa{\sin{\theta}\mathop{}\!\partial_\theta}+a^2\omega^2\cos^2\theta -\frac{m^2}{\sin^2\theta}+A_{\ell m}}S_{\ell m\omega}(\theta)=0\;,
\end{align}
where the $A_{\ell m}(a\omega)$ are angular separation constants, which are quantized by requiring $S_{\ell m\omega}(\theta)$ to be regular at the poles.
Note that the potential in this ODE matches the geodesic polar potential \eqref{eq:KerrPolarPotential} provided that we identify
\begin{align}
\label{eq:GeoIdf}
\omega=E\;,\qquad
m=L\;,\qquad
A_{\ell m}=Q+m^2\;.
\end{align}
The $S_{\ell m\omega}(\theta)$ are labelled by the integers $\ell$ and $m$ with $-\ell\le m\le\ell$ and depend explicitly on the quantity $a^2\omega^2$.
For a round sphere (as in Schwarzschild), the separation constants are $A_{\ell m}=\ell(\ell+1)$, but there is no analytic formula for the $A_{\ell m}$ in generic Kerr and they must be computed numerically.
The constants $A_{\ell m}=A_{\ell m}^R+iA_{\ell m}^I$ are in general complex (as is $\omega=\omega_R+i\omega_I$), but in the eikonal approximation the real parts dominate the imaginary parts.
Since the angular separation constants enter the radial equation, a separate WKB analysis must also be performed on \eqref{eq:KerrAngularEq}.
Defining $e^y=\tan{\frac{\theta}{2}}$, so that $\mathop{}\!\partial_y=\sin{\theta}\mathop{}\!\partial_\theta$, the equation becomes
\begin{align}
\br{\frac{1}{\sin^2{\theta}}\mathop{}\!\partial_y^2+a^2\omega_R^2\cos^2\theta -\frac{m^2}{\sin^2\theta}+A^R_{\ell m}}S_{\ell m\omega}(y)=-i\pa{2\omega_R\omega_Ia^2\cos^2{\theta}+A^I_{\ell m}}S_{\ell m\omega}(y)\;.
\end{align}
We assume that $A^R_{\ell m}\gg A^I_{\ell m}$ as well as $\omega_R\gg\omega_I$, with $A_{\ell m}^R\sim \omega_R^2\sim m^2\sim\pa{A_{\ell m}^I}^2$ of comparable magnitudes. In this regime, a WKB analysis of the equation then yields \cite{Yang2012,Yang2014}
\begin{align}
\label{eq:SepConst}
A^I_{\ell m}=-2a^2\omega_R\omega_I\frac{\tilde{G}_t}{\tilde{G}_\theta}\;.
\end{align}
The radial part of the scalar wave equation takes the form
\begin{align}
\label{eq:KerrRadialOperator}
\br{\Delta(r)\mathop{}\!\partial_r\pa{\Delta(r)\mathop{}\!\partial_r}+V(r)}R_{\ell m\omega}(r)=0\;,
\end{align}
where the wave potential is given by
\begin{align}
V(r)=\br{\omega\pa{r^2+a^2}-am}^2-\Delta(r)\pa{A_{\ell m}+a^2\omega^2-2am\omega}\;.
\end{align}
Again, note that the potential in this ODE matches the geodesic radial potential \eqref{eq:KerrRadialPotential} provided that we make the same identifications as in \eqref{eq:GeoIdf}.
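This matching can be checked mechanically (illustrative Python; the random sample points and seed are arbitrary, $M=1$): with $\omega=E$, $m=L$, $A_{\ell m}=Q+m^2$, one has $A_{\ell m}+a^2\omega^2-2am\omega=Q+(L-aE)^2$, so $V(r)$ and $\mathcal{R}(r)$ agree identically.

```python
import random

M = 1.0

def R_geo(r, a, E, L, Q):
    """Geodesic radial potential R(r), Eq. (KerrRadialPotential)."""
    Delta = r*r - 2*M*r + a*a
    return (E*(r*r + a*a) - a*L)**2 - Delta*(Q + (L - a*E)**2)

def V_wave(r, a, w, m, A):
    """Radial wave potential V(r) with separation constant A_{lm}."""
    Delta = r*r - 2*M*r + a*a
    return (w*(r*r + a*a) - a*m)**2 - Delta*(A + a*a*w*w - 2*a*m*w)

random.seed(0)
for _ in range(100):
    r = random.uniform(1.5, 10.0)
    a = random.uniform(0.1, 0.99)
    E, L, Q = random.uniform(0.5, 2), random.uniform(-5, 5), random.uniform(0, 30)
    # omega = E, m = L, A_{lm} = Q + m^2  =>  V(r) = R(r) identically.
    assert abs(V_wave(r, a, E, L, Q + L*L) - R_geo(r, a, E, L, Q)) < 1e-8
```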
In terms of the tortoise coordinate defined by $\frac{dr_*}{dr}=\frac{r^2+a^2}{\Delta(r)}$ and the rescaled radial function $\psi_{\ell m\omega}(r)=\sqrt{r^2+a^2}R_{\ell m\omega}(r)$, this radial ODE becomes
\begin{align}
\label{eq:KerrExact}
\br{\mathop{}\!\partial_{r_*}^2+\frac{V(r)}{\pa{r^2+a^2}^2}-\frac{g(r)\Delta(r)}{\pa{r^2+a^2}^4}}\psi_{\ell m\omega}(r_*)=0\;,\qquad
g(r)=2Mr^3+a^2r(r-4M)+a^4 \;.
\end{align}
We are interested in the geometric optics limit, where analytic formulas are available.
This limit is $\ell\to\infty$ with
\begin{align}
\label{eq:KerrEikonal}
\mu\equiv\frac{m}{\ell+\frac{1}{2}}\in\pa{-1,1}\;,\qquad
\Omega_R(\mu)\equiv\frac{\omega_R}{\ell+\frac12}\;,\qquad
A(\mu)\equiv\frac{A_{\ell m}^R}{\pa{\ell+\frac{1}{2}}^2}\;,\qquad
\frac{A_{\ell m}^I}{\pa{\ell+\frac{1}{2}}}\;,
\end{align}
all held fixed. As demonstrated in \cite{Yang2012}, at leading order in this limit, a Kerr quasinormal mode with real energy $\omega_R$, angular momentum $m$, and real separation constant $A_{\ell m}^R$ can be associated with a homoclinic congruence of null geodesics with energy $E=\omega_R$, angular momentum $L=m$, and Carter constant $Q=A_{\ell m}^R-m^2$, that asymptote to bound orbits at photon shell radius $\tilde{r}$ such that\footnote{This is an excellent approximation to the exact bijection between $\mu$ and $\tilde{r}$ obtained from the Bohr-Sommerfeld quantization condition $\int_{\tilde{\theta}_-}^{\tilde{\theta}_+}\sqrt{a^2\omega^2\cos^2{\theta}-\frac{m^2}{\sin^2{\theta}}+A_{\ell m}}\mathop{}\!\mathrm{d}\theta=\pi\pa{\ell+\frac{1}{2}-\ab{m}}$ \cite{Yang2012}, which in the eikonal limit implies that $\mu(\tilde{r})=\pa{\sign(\tilde{r}_0-\tilde{r})+\frac{\tilde{J}_\theta}{\pi\tilde{\lambda}}}^{-1}$.
Here $\tilde{J}_\theta\equiv\frac{1}{E}\int_{\tilde{\theta}_-}^{\tilde{\theta}_+}\sqrt{\Theta(\theta)}\mathop{}\!\mathrm{d}\theta$ is the action integral for the polar motion, which can be evaluated in terms of elliptic integrals as \cite{Kapec2020}
\begin{align*}
\tilde{J}_\theta=\int_{\tilde{\theta}_-}^{\tilde{\theta}_+}\sqrt{\tilde{\eta}^2+\tilde{\lambda}^2+a^2\cos^2{\theta}-\frac{\tilde{\lambda}^2}{\sin^2{\theta}}}\mathop{}\!\mathrm{d}\theta
=\frac{2}{\sqrt{-\tilde{u}_-a^2}}\br{a^2\pa{1-\tilde{u}_+}K\!\pa{\frac{\tilde{u}_+}{\tilde{u}_-}}-a^2\tilde{u}_-E\!\pa{\frac{\tilde{u}_+}{\tilde{u}_-}}-\tilde{\lambda}^2\Pi\!\pa{\tilde{u}_+;\frac{\tilde{u}_+}{\tilde{u}_-}}}.
\end{align*}
}
\begin{align}
\label{eq:Bijection}
\mu(\tilde{r})=\sign(\tilde{r}_0-\tilde{r})\sin{\tilde{\theta}_\pm}
=\sign(\tilde{r}_0-\tilde{r})\sqrt{1-\tilde{u}_+(\tilde{r})}
\in\br{-1,1}\;.
\end{align}
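The claimed bijection is easy to probe numerically. This Python sketch ($M=1$; the spin $a=0.9$ and the sampling grid are arbitrary choices) checks that $\mu(\tilde{r})$ decreases strictly monotonically across the shell and stays inside $(-1,1)$:

```python
import math

M = 1.0

def mu_of_rt(rt, a):
    """mu(r~) = sign(r~_0 - r~) sqrt(1 - u~_+(r~)), Eq. (Bijection)."""
    tri = math.sqrt(1 - a*a/(3*M*M))
    r0 = M + 2*M*tri*math.cos((1/3)*math.acos((1 - a*a/(M*M))/tri**3))
    Delta = rt*rt - 2*M*rt + a*a
    up = (-rt**4 + 3*M*M*rt*rt - 2*a*a*M*rt
          + 2*rt*math.sqrt(M*Delta*(2*rt**3 - 3*M*rt*rt + a*a*M)))/(a*a*(rt - M)**2)
    return math.copysign(1.0, r0 - rt)*math.sqrt(max(1.0 - up, 0.0))

a = 0.9
rm = 2*M*(1 + math.cos((2/3)*math.acos(-a/M)))    # r~_-
rp = 2*M*(1 + math.cos((2/3)*math.acos(+a/M)))    # r~_+
grid = [rm + (rp - rm)*i/200 for i in range(1, 200)]
mus = [mu_of_rt(r, a) for r in grid]
# mu runs monotonically from +1 at r~_- to -1 at r~_+ ...
assert all(m2 < m1 for m1, m2 in zip(mus, mus[1:]))
# ... staying inside the open interval (-1, 1) in the shell interior.
assert all(-1 < mu < 1 for mu in mus)
```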
This correspondence determines the QNM wavefronts, while the subleading equation \eqref{eq:GeoAmp} determines their amplitude in terms of the expansion of the null congruence.
The amplitude decay (overtone structure) is determined by expanding \eqref{eq:KerrExact}, again assuming that $A^R_{\ell m}\gg A^I_{\ell m}$ and $\omega_R\gg\omega_I$ with $A_{\ell m}^R\sim \omega_R^2\sim m^2\sim\pa{A_{\ell m}^I}^2$ of comparable magnitudes:
\begin{align}
&\br{\mathop{}\!\partial_{r_*}^2+\frac{\br{\omega_R\pa{r^2+a^2}-am}^2-\Delta(r)\pa{A_{\ell m}^R+a^2\omega_R^2-2am\omega_R}}{(r^2+a^2)^2}}\psi_{\ell m\omega}(r_*)\notag\\
&\qquad\qquad=-i\br{2\omega_R\omega_I-\frac{2am\omega_I}{r^2+a^2}-\frac{\Delta(r)\pa{A_{\ell m}^I+2a^2\omega_R\omega_I-2am\omega_I}}{\pa{r^2+a^2}^2}}\psi_{\ell m\omega}(r_*)\;.
\end{align}
This is still an intractable ODE.
However, it simplifies dramatically in the Kerr near-ring region defined for each orbital radius $\tilde{r}\in\br{\tilde{r}_-,\tilde{r}_+}$ and radial deviation $\mathop{}\!\delta r=r-\tilde{r}$ by
\begin{align}
\label{eq:NearRingRegionKerrQNMs}
\text{NEAR-RING REGION:}\qquad
\begin{cases}
\ab{\mathop{}\!\delta r}\ll M&\qquad\text{(near-peak)},\\
\displaystyle\ab{\frac{m}{\omega_R}-\tilde{\lambda}}\ll M&\qquad\text{(near-critical in $m$ and $\phi$)},\vspace{3pt}\\
\displaystyle\ab{\frac{A_{\ell m}^R-m^2}{\omega_R^2}-\tilde{\eta}}\ll M&\qquad\text{(near-critical in $\ell$ and $\theta$)},\vspace{3pt}\\
\displaystyle\frac{1}{\omega_R}\ll M&\qquad\text{(high-frequency)},
\end{cases}
\end{align}
where, at leading order, we can approximate the LHS by a quadratic potential in $\mathop{}\!\delta r$ while treating the radius on the RHS as a constant $r=\tilde{r}$.
In this way, we recover an inverted harmonic oscillator eigenvalue problem as in section~\ref{sec:Schwarzschild}. Using \eqref{eq:SepConst}, this becomes
\begin{align}
\frac{1}{2\omega_R}\br{\mathop{}\!\partial_{r_*}^2+\frac{1}{2}\frac{\mathcal{R}''(\tilde{r})}{\pa{\tilde{r}^2+a^2}^2}\mathop{}\!\delta r^2}\psi_{\ell m\omega}(r_*)
&=-i\omega_I\br{1-\frac{a\tilde{\lambda}}{\tilde{r}^2+a^2}+\frac{a\Delta(\tilde{r})}{\pa{\tilde{r}^2+a^2}^2}\pa{\tilde{\lambda}-a+a\frac{\tilde{G}_t}{\tilde{G}_\theta}}}\psi_{\ell m\omega}(r_*)\\
&=-i\omega_I\br{\frac{\Delta(\tilde{r})}{\pa{\tilde{r}^2+a^2}^2}\frac{\tau}{\tilde{G}_\theta}}\psi_{\ell m\omega}(r_*)\;.
\end{align}
Noting that near the ring $\mathop{}\!\partial_{r_*}=\frac{\Delta(\tilde{r})}{\tilde{r}^2+a^2}\mathop{}\!\partial_r$, this can be rewritten as
\begin{align}
\mathcal{H}\psi=i\omega_I\psi\;,\qquad
\mathcal{H}=-\frac{\tilde{G}_\theta\pa{\tilde{r}^2+a^2}^2}{2\omega_R\tau\Delta(\tilde{r})}\br{\mathop{}\!\partial_x^2+\pa{\frac{\gamma}{\tilde{G}_\theta}\frac{\Delta(\tilde{r})}{\pa{\tilde{r}^2+a^2}^2}}^2\omega_R^2x^2}\;,\qquad
x=r_*-\tilde{r}_*
=\frac{\tilde{r}^2+a^2}{\Delta(\tilde{r})}\mathop{}\!\delta r\;.
\end{align}
If we impose quasinormal mode boundary conditions, then we find that the eigenvalues of $\mathcal{H}$ must be imaginary with
\begin{align}
\omega_I=-\pa{n+\frac{1}{2}}\frac{\gamma}{\tau}\;,
\end{align}
in complete agreement with the overtone structure expected from \eqref{eq:KerrRadialDeviation}.
Note that in Kerr, the specific normalization of the Hamiltonian (which is determined by the eikonal form of the angular separation constants) accounts for the $\theta$-averaged motion that defines the Lyapunov exponent.
We next define $k=\frac{\Delta(\tilde{r})}{\tilde{G}_\theta\pa{\tilde{r}^2+a^2}^2}$ and the operators
\begin{align}
\label{eq:KerrGenerators}
a_\pm=\frac{e^{\pm\gamma_Lt}}{\sqrt{2k\gamma\omega_R}}\pa{\mp i\mathop{}\!\partial_x-k\gamma\omega_Rx}\;,\qquad
L_0&=-\frac{i}{4}\pa{a_+a_-+a_-a_+}
=\frac{i}{2\gamma_L}\mathcal{H}\;,\qquad
L_\pm=\pm\frac{a_\pm^2}{2}\;.
\end{align}
As in Schwarzschild, the $a_\pm$ obey the Heisenberg algebra $\br{a_+,a_-}=iI$, while the $L_m$ obey the exact $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$ commutation relations.
Of course, the $L_m$ only commute with the wave equation in the near-ring region, in a particular superselection sector labelled by $(\omega_R,\ell,m)$.
Eigenstates of $L_0$ satisfy $L_0\psi_h=h\psi_h$, so we identify $\omega_I=-2\gamma_L h$.
In the near-ring region, the mode ansatz \eqref{eq:KerrQNMansatz} reduces to
\begin{align}
\Phi_{\ell m\omega}(t,r,\theta,\phi)\approx e^{-i\omega_Rt}\frac{\Phi_h(t,x)}{\sqrt{r^2+a^2}}S_{\ell m\omega}(\theta)e^{im\phi}\;,\qquad
\Phi_h(t,x)=e^{\omega_It}\psi_h(x)
=e^{-2\gamma_L ht}\psi_h(x)\;.
\end{align}
In this framework, the quasinormal mode boundary condition is equivalent to the imposition of a highest-weight condition $L_+\psi_h=0$ on the fundamental mode.
There are two solutions with $h=\frac{1}{4}$ and $h=\frac{3}{4}$:
\begin{align}
\label{eq:KerrHighestWeight}
\Phi_\frac{1}{4}(t,x)&=e^{-\frac{1}{2}\gamma_Lt}\psi_\frac{1}{4}(x)\;,
&&\psi_\frac{1}{4}(x)=e^{\frac{i}{2}k\gamma\omega_Rx^2}\;,\\
\Phi_\frac{3}{4}(t,x)&=e^{-\pa{1+\frac{1}{2}}\gamma_L t}\psi_\frac{3}{4}(x)\;,
&&\psi_\frac{3}{4}(x)=xe^{\frac{i}{2}k\gamma\omega_Rx^2}\;.
\end{align}
Higher overtones are then obtained as $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$-descendants:
\begin{align}
\Phi_{h,N}(t,x)=L_-^N\Phi_h(t,x)
=e^{-2\gamma_L(h+N)t}\psi_{h+N}(x)
\propto e^{-2\gamma_L(h+N)t}D_{2(h+N)-\frac{1}{2}}\pa{\sqrt{-2ik\gamma\omega_R}x}\;,
\end{align}
where $D_n(x)$ denotes the $n^\text{th}$ parabolic cylinder function.
Thus, the QNM overtones in the Kerr near-ring region \eqref{eq:NearRingRegionKerrQNMs} fall into two irreps of the $\mathsf{SL}(2,\mathbb{R})$ generated by \eqref{eq:KerrGenerators}, obtained from the primary states with $h=\frac{1}{4}$ and $h=\frac{3}{4}$.
The Casimir is
\begin{align}
\mathcal{C}\Phi_{h,N}\equiv\pa{-L_0^2+\frac{L_+L_-+L_-L_+}{2}}\Phi_{h,N}=h(1-h)\Phi_{h,N}\;,
\end{align}
so, as in Schwarzschild, the two representations that appear are shadows of each other with Casimir
\begin{align}
\mathcal{C}=\frac{3}{16}\;.
\end{align}
Finally, we note that at the edges $x\to\pm\infty$ of the near-peak region,
\begin{align}
D_n\pa{\sqrt{-2ik\gamma\omega_R}x}\stackrel{x\to\pm\infty}{\sim}x^ne^{\frac{i}{2}k\gamma\omega_Rx^2}\;.
\end{align}
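For real argument (the argument above is complex, so this checks the real-argument analogue), the asymptotics can be verified from the closed form $D_n(z)=2^{-n/2}e^{-z^2/4}H_n(z/\sqrt{2})$, a standard identity valid for integer $n$, with $H_n$ the physicists' Hermite polynomials:

```python
import math

def hermite(n, y):
    """Physicists' Hermite polynomial H_n(y) by the three-term recurrence."""
    h0, h1 = 1.0, 2.0*y
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0*y*h1 - 2.0*k*h0
    return h1

def Dn(n, z):
    """Parabolic cylinder D_n(z) for integer n >= 0:
    D_n(z) = 2^{-n/2} exp(-z^2/4) H_n(z/sqrt(2))."""
    return 2.0**(-0.5*n)*math.exp(-0.25*z*z)*hermite(n, z/math.sqrt(2.0))

# Large-|z| behavior: D_n(z) ~ z^n exp(-z^2/4), with a relative correction
# of order n(n-1)/(2 z^2)  (about 2% for n = 3, z = 12).
n, z = 3, 12.0
ratio = Dn(n, z)/(z**n*math.exp(-0.25*z*z))
assert abs(ratio - 1.0) < 0.03
```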
Therefore, the $n^\text{th}$ overtone behaves as
\begin{align}
\label{eq:KerrOvt}
\Phi_{\ell mn}(t,r,\theta,\phi)\stackrel{x\to\pm\infty}{\sim}
e^{-\pa{n+\frac{1}{2}}\gamma_Lt}x^ne^{-i\omega_R\pa{t-\frac{1}{2}k\gamma x^2}}S_{\ell m\omega}(\theta)e^{im\phi}\;,\qquad
\omega_R=\pa{\ell+\frac{1}{2}}\Omega_R(\mu)\;.
\end{align}
To summarize, the QNM spectrum $\omega_{\ell mn}$ is labelled by the overtone number $n\in\mathbb{N}$, which is quantized by the quasinormal boundary condition, as well as by the spheroidal number $\ell$ and the angular frequency $m$.
The eikonal limit of the spectrum takes the form
\begin{align}
\label{eq:KerrSpectrum}
\omega_{\ell\mu n}\stackrel{\ell\to\infty}{\approx}\pa{\ell+\frac{1}{2}}\Omega_R(\mu)-i\pa{n+\frac{1}{2}}\gamma_L(\mu)\;,
\end{align}
and the geometric optics approximation determines $\Omega_R(\mu)$ and $\gamma_L(\mu)$ in terms of the geometric data of the photon shell: inverting the bijective function \eqref{eq:Bijection} for $\tilde{r}(\mu)$, we have
\begin{align}
\Omega_R(\mu)=\frac{\mu}{\tilde{\lambda}(\tilde{r}(\mu))}\;,\qquad
\gamma_L(\mu)=\frac{\gamma(\tilde{r}(\mu))}{\tau(\tilde{r}(\mu))}\;.
\end{align}
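As a sanity check on this dispersion relation (illustrative Python, $M=1$; the sample radii are arbitrary points inside the slow-spin photon shell), in the $a\to0$ limit the real part of the spectrum becomes $\mu$-independent, $\Omega_R(\mu)\to1/\sqrt{27}$, the Schwarzschild value:

```python
import math

M = 1.0

def Omega_R_of_rt(rt, a):
    """Omega_R = mu/lambda~ evaluated on the photon-shell orbit at r~ = rt."""
    Delta = rt*rt - 2*M*rt + a*a
    lam = a + (rt/a)*(rt - 2*Delta/(rt - M))
    tri = math.sqrt(1 - a*a/(3*M*M))
    r0 = M + 2*M*tri*math.cos((1/3)*math.acos((1 - a*a/(M*M))/tri**3))
    up = (-rt**4 + 3*M*M*rt*rt - 2*a*a*M*rt
          + 2*rt*math.sqrt(M*Delta*(2*rt**3 - 3*M*rt*rt + a*a*M)))/(a*a*(rt - M)**2)
    mu = math.copysign(1.0, r0 - rt)*math.sqrt(max(1.0 - up, 0.0))
    return mu/lam

# For a -> 0 the real part of the spectrum is mu-independent:
# omega_R ~ (l + 1/2)/sqrt(27), i.e., Omega_R(mu) -> 1/sqrt(27) for all mu.
a = 1e-3
for rt in (2.9995, 3.0005, 3.0008):
    assert abs(Omega_R_of_rt(rt, a) - 1/math.sqrt(27)) < 1e-3
```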
The higher overtones are $\mathsf{SL}(2,\mathbb{R})_{\rm QN}$-descendants and the fundamental mode is highest-weight in the near-ring region.
It is a challenge for any proposed holographic dual to Kerr to reproduce the dispersion relation \eqref{eq:KerrSpectrum}.
\subsection{Quasinormal modes from geometric optics}
\label{subsec:KerrOvertones}
Next, we turn to a brief discussion of the QNM wavefunctions and their description within the geometric optics approximation.
The Hamilton-Jacobi principal function for null geodesics in Kerr is
\begin{align}
\label{eq:HamiltonJacobiKerr}
S(t,r_*,\theta,\phi)=-Et+L\phi+\int^{r_*}\frac{\sqrt{\mathcal{R}(r_*')}}{\pa{r_*'^2+a^2}}\mathop{}\!\mathrm{d} r_*'+\int^\theta\sqrt{\Theta(\theta')}\mathop{}\!\mathrm{d}\theta'\;,
\end{align}
which reproduces \eqref{eq:KerrPlow} via $p_\mu=\mathop{}\!\partial_\mu S(x)$.
As is apparent from \eqref{eq:HamiltonJacobiKerr}, the angular modes $S_{\ell m\omega}(\theta)$ of the QNM wavefunctions are oscillatory in the region between the angular turning points of the corresponding bound null geodesic at $r=\tilde{r}(\mu)$ and decay exponentially toward the poles.
This behavior corresponds to the angular trapping of photon shell geodesics, which are confined to oscillate about the equatorial plane.
In the near-ring region, the radial potential in \eqref{eq:HamiltonJacobiKerr} exhibits a double zero and integrates to the phase in \eqref{eq:KerrHighestWeight}.
The leading ($\theta$-averaged) solution to \eqref{eq:GeoAmp} is simply $A_0=e^{-\frac{1}{2}\gamma_Lt}$, which when combined with the critical phase $\tilde{S}$ in \eqref{eq:HamiltonJacobiKerr}, gives an excellent approximation to the fundamental quasinormal mode.
As discussed in section \ref{sec:QNMGeo}, it is possible to construct additional solutions to the geometric optics wave equation given the fundamental seed solution $\Phi_0(x)=A_0(x)e^{i\tilde{S}(x)}$ and a solution to the homogeneous amplitude equation
\begin{align}
p^\mu\mathop{}\!\partial_\mu u=0\;,
\end{align}
in this case averaged over the polar motion.
According to \eqref{eq:KerrRadialDeviation}, the quantity $u=e^{-\gamma_Lt}(r-\tilde{r})$ is conserved along the homoclinic trajectories after an appropriate averaging over the polar motion.
We can therefore write down a family of solutions associated to the same critical orbit with phase $\tilde{S}$, but differing in the amplitude
\begin{align}
\Phi_n(x)\sim e^{-\pa{n+\frac{1}{2}}\gamma_Lt}\pa{r-\tilde{r}}^ne^{i\tilde{S}(x)}\;.
\end{align}
This eikonal QNM approximation agrees with the near-ring approximation \eqref{eq:KerrOvt} in their overlap region.
\subsection{Observable conformal symmetry of the photon ring}
\label{subsec:KerrScreen}
In this subsection, we identify the emergent near-ring conformal symmetry $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ for the Kerr black hole.
This structure is far richer in Kerr than in Schwarzschild, and a measurement of its critical exponents would provide a sensitive probe of spinning black holes.
The intricacy of the Kerr lens will lead us to consider stationary, axisymmetric, fixed-$\theta$ source rings rather than pointlike sources, in order to allow a simplified study of the consequences of the conformal symmetry on black hole images.
For time-averaged images, this restriction is well-motivated observationally.
Let $\Gamma$ denote the six-dimensional phase space of colored null geodesics in Kerr, spanned by $(r,\theta,\phi,p_r,p_\theta,p_\phi)$ with canonical symplectic form.
Time evolution is generated by the Hamiltonian
\begin{gather}
\label{eq:KerrH}
H(r,\theta,p_r,p_\theta,p_\phi)=\br{\frac{\pa{r^2+a^2}^2}{\Delta(r)}-a^2\sin^2{\theta}}^{-1}\pa{\frac{2Mar}{\Delta(r)}p_\phi+\sqrt{G}}\; ,\\
G=\pa{\frac{2Mar}{\Delta(r)}p_\phi}^2+\br{\frac{\pa{r^2+a^2}^2}{\Delta(r)}-a^2\sin^2{\theta}}\br{\Delta(r)p_r^2+p_\theta^2+\pa{\frac{1}{\sin^2{\theta}}-\frac{a^2}{\Delta(r)}}p_\phi^2}\;,
\end{gather}
which is obtained by solving the null condition $g^{\mu\nu}p_\mu p_\nu=0$ for $p_t=-H$.
The Carter constant
\begin{align}
\label{eq:RadialCarter}
Q(r,\theta,p_r,p_\theta,p_\phi)
&=-\Delta(r)p_r^2+\frac{\br{H\pa{r^2+a^2}-ap_\phi}^2}{\Delta(r)}-\pa{p_\phi-aH}^2\\
\label{eq:AngularCarter}
&=p_\theta^2-a^2H^2\cos^2{\theta}+p_\phi^2\cot^2{\theta}
\end{align}
commutes with the Hamiltonian \eqref{eq:KerrH} and is therefore conserved along each photon trajectory, as is the angular momentum $L=p_\phi$.
Inverting \eqref{eq:RadialCarter} and \eqref{eq:AngularCarter} respectively gives
\begin{align}
\label{eq:RadialMomentum}
p_r(r,H,L,Q)&=\frac{\pm\sqrt{\mathcal{R}(r)}}{\Delta(r)}\;,&&
\mathcal{R}(r)=\br{H\pa{r^2+a^2}-aL}^2-\Delta(r)\br{Q+\pa{L-aH}^2}\;,\\
\label{eq:PolarMomentum}
p_\theta(\theta,H,L,Q)&=\pm\sqrt{\Theta(\theta)}\;,&&
\Theta(\theta)=Q+L^2+a^2H^2\cos^2{\theta}-\frac{L^2}{\sin^2{\theta}}\;.
\end{align}
The coordinate transformation $(r,\theta,\phi,p_r,p_\theta,p_\phi)\to\pa{T,\Phi,\Psi,H,L,Q+L^2}$ defined by
\begin{align}
\mathop{}\!\mathrm{d} T&=\frac{Hr^2\Delta+2Mr\br{H\pa{r^2+a^2}-aL}}{\Delta(r)\sqrt{\mathcal{R}(r)}}\mathop{}\!\mathrm{d} r+\frac{a^2H\cos^2{\theta}}{\sqrt{\Theta(\theta)}}\mathop{}\!\mathrm{d}\theta\;,\\
\mathop{}\!\mathrm{d}\Phi&=\mathop{}\!\mathrm{d}\phi-\br{\frac{a\pa{2HMr-aL}}{\Delta(r)\sqrt{\mathcal{R}(r)}}\mathop{}\!\mathrm{d} r+\frac{L\csc^2{\theta}}{\sqrt{\Theta(\theta)}}\mathop{}\!\mathrm{d}\theta}\;,\\
\mathop{}\!\mathrm{d}\Psi&=-\frac{1}{2}\br{\frac{1}{\sqrt{\mathcal{R}(r)}}\mathop{}\!\mathrm{d} r-\frac{1}{\sqrt{\Theta(\theta)}}\mathop{}\!\mathrm{d}\theta}\;,
\end{align}
is canonical since it preserves the symplectic form
\begin{align}
\Omega=\mathop{}\!\mathrm{d} p_r\wedge\mathop{}\!\mathrm{d} r+\mathop{}\!\mathrm{d} p_\theta\wedge\mathop{}\!\mathrm{d}\theta+\mathop{}\!\mathrm{d} p_\phi\wedge\mathop{}\!\mathrm{d}\phi
=\mathop{}\!\mathrm{d} H\wedge\mathop{}\!\mathrm{d} T+\mathop{}\!\mathrm{d} L\wedge\mathop{}\!\mathrm{d}\Phi+\mathop{}\!\mathrm{d}\pa{Q+L^2}\wedge\mathop{}\!\mathrm{d}\Psi\;.
\end{align}
These canonical coordinates lead to trivial equations of motion for the Hamiltonian \eqref{eq:KerrH}:
\begin{align}
\dot{H}&=\cu{H,H}=0\;,
&&\dot{L}=\cu{L,H}=0\;,
&&\dot{Q}=\cu{Q,H}=0\;,\\
\dot{\Psi}&=\cu{\Psi,H}=0\;,
&&\dot{\Phi}=\cu{\Phi,H}=0\;,
&&\dot{T}=\cu{T,H}=1\;.
\end{align}
The first three equations indicate that the phase space $\Gamma$ foliates into superselection sectors of fixed $(H,L,Q)$, which are conserved momenta.
The fourth and fifth equations imply that
the Hamiltonian flow sends a photon with initial coordinates $(r_s,\theta_s,\phi_s,H,L,Q)$ to final coordinates $(r_o,\theta_o,\phi_o,H,L,Q)$ according to the rule
\begin{gather}
\label{eq:MinoTime}
\fint_{r_s}^{r_o}\frac{\mathop{}\!\mathrm{d} r}{\sqrt{\mathcal{R}(r)}}=\fint_{\theta_s}^{\theta_o}\frac{\mathop{}\!\mathrm{d}\theta}{\sqrt{\Theta(\theta)}}\;,\\
\label{eq:KerrAzimuth}
\Delta\phi=\phi_o-\phi_s
=\fint_{\phi_s}^{\phi_o}\mathop{}\!\mathrm{d}\phi
=\fint_{r_s}^{r_o}\frac{a\pa{2HMr-aL}}{\Delta(r)\sqrt{\mathcal{R}(r)}}\mathop{}\!\mathrm{d} r+\fint_{\theta_s}^{\theta_o}\frac{L\csc^2{\theta}}{\sqrt{\Theta(\theta)}}\mathop{}\!\mathrm{d}\theta\;,
\end{gather}
where the slash indicates that an integral is to be evaluated along the photon trajectory.
Finally, the last equation identifies $T$ as the variable conjugate to energy, i.e., time.
Hence, the time elapsed during evolution from a state $(r_s,\theta_s,\phi_s,H,L,Q)$ to $(r_o,\theta_o,\phi_o,H,L,Q)$ is
\begin{align}
\label{eq:KerrTimeLapse}
T&=\fint_{r_s}^{r_o}\frac{Hr^2\Delta+2Mr\br{H\pa{r^2+a^2}-aL}}{\Delta(r)\sqrt{\mathcal{R}(r)}}\mathop{}\!\mathrm{d} r+\fint_{\theta_s}^{\theta_o}\frac{a^2H\cos^2{\theta}}{\sqrt{\Theta(\theta)}}\mathop{}\!\mathrm{d}\theta\;.
\end{align}
Equations \eqref{eq:MinoTime}, \eqref{eq:KerrAzimuth} and \eqref{eq:KerrTimeLapse} are the solution to the null geodesic equation in Kerr.
These integrals can be evaluated explicitly in terms of elliptic functions \cite{GrallaLupsasca2020b}, but their detailed form will not be needed here.
As in Schwarzschild, the salient feature of \eqref{eq:KerrTimeLapse} is the logarithmic divergence along the homoclinic trajectories associated to the double zero of the radial potential.
This $T$ is a local coordinate, and it becomes singular in the vicinity of a hyperbolic fixed point.
Bound photon orbits occur in the range $\tilde{r}_-\leq\tilde{r}\leq\tilde{r}_+$, in which $\dot{r}$ and $\dot{p}_r$ can vanish simultaneously.
The conserved quantities associated to these orbits are determined by the conditions $\mathcal{R}(\tilde{r})=\mathcal{R}'(\tilde{r})=0$ that define the photon shell in phase space.
The energy-rescaled critical parameters $(\tilde{\lambda},\tilde{\eta})$ are given by the relations \eqref{eq:CriticalParameters}, which can be inverted to obtain $\tilde{r}(L,Q)$, and thence the zero-point energy $\tilde{H}(L,Q)$.
As in the Schwarzschild case, we now define $\hat{H}=H-\tilde{H}$ and consider the unbound geodesics with $\hat{H}<0$ that begin and end at null infinity, always remaining outside the black hole.
A distant observer at large radius $r_o\to\infty$ receives such a geodesic with impact parameters $(\lambda,\eta)$ at the position $(\alpha,\beta)$ on the observer sky given by
\begin{align}
\label{eq:BardeenCoordinates}
\alpha=-\frac{\lambda}{\sin{\theta_o}}\;,\qquad
\beta=\pm\sqrt{\eta+a^2\cos^2{\theta_o}-\lambda^2\cot^2{\theta_o}}\;.
\end{align}
For such a (non-homoclinic) geodesic, the radius of closest approach to the black hole is attained when the radial momentum \eqref{eq:RadialMomentum} vanishes.
This occurs at the largest real root of the quartic potential $\mathcal{R}(r)$, which is given explicitly in Eq.~(95d) of \cite{GrallaLupsasca2020b}.
Geodesics with $\hat{H}=0$ asymptote to bound photon orbits at $r=\tilde{r}(L,Q)$ in the far past or future.
Their impact parameters $(\tilde{\alpha},\tilde{\beta})$, obtained by substituting \eqref{eq:CriticalParameters} into \eqref{eq:BardeenCoordinates}, define the Kerr critical curve $\mathcal{C}(\tilde{r})$ in the observer sky.
Since the coordinates $\pa{T,\Phi,\Psi,H,L,Q+L^2}$ are canonical, the functions\footnote{One can generalize $\hat{H}$ to any function of the form $H-g(L,Q)$ for some $g$, but only the choice in \eqref{eq:GeneratorsKerr} leads to dilations that scale onto homoclinic orbits.
One could also add a Casimir $\frac{\mathcal{C}(L,Q)}{\hat{H}}$ to $H_-$, but we will not use special conformal transformations explicitly here.}
\begin{align}
\label{eq:GeneratorsKerr}
H_+=\hat{H}\;,\qquad
H_0=-\hat{H} T\;,\qquad
H_-=\hat{H} T^2\;,
\end{align}
obey the $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ algebra.
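Explicitly, since $\tilde{H}$ depends only on $L$ and $Q$, canonicity implies (with the bracket convention $\{T,\hat{H}\}=1$) that the algebra closes after a one-line computation:
\begin{align}
\{H_0,H_\pm\}=\mp H_\pm\;,\qquad
\{H_+,H_-\}=\hat{H}\,\{\hat{H},T^2\}=-2\hat{H}T=2H_0\;.
\end{align}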
This $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ commutes with both $L$ and $Q$ and therefore acts within superselection sectors $\Gamma_{L,Q}$ of fixed angular momentum and Carter constant.
However, the flow generated by $H_0$ does modify the energy (or photon color) $H=\hat{H}+\tilde{H}(L,Q)$ and therefore
acts on the impact parameters as well as the radius of closest approach.
The action on $\Gamma_{L,Q}$ is transitive: finite $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ transformations can be used to map any unbound geodesic in $\Gamma_{L,Q}$ to any other.
In Kerr, the $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$-invariant locus in phase space is the photon shell.
As in Schwarzschild, the large finite dilation \eqref{eq:FiniteDilation} scales down $\hat{H}$ and scales up $T$, pushing any trajectory asymptotically onto the homoclinic orbits at large times.
The dimensionless deviation $R_{\rm min}$ of the radius of closest approach, $r=\tilde{r}\pa{1+R_{\rm min}}$, becomes
\begin{align}
R_{\rm min}^2=-\frac{\pa{\tilde{r}+3M}\pa{\tilde{r}-M}\Delta(\tilde{r})}{2\tilde{H}\tilde{r}\br{\tilde{r}\pa{\tilde{r}^2-3M\tilde{r}+3M^2}-a^2M}}\hat{H}\;,
\end{align}
so \eqref{eq:RadialDeviation} still holds.
In Kerr, it is convenient to characterize the approach to criticality by the (fractional) half-orbit number $n_{\rm orb}$ (not to be confused with the QNM overtone number), which diverges as $\alpha\to\infty$ at a rate set by the inverse of the Lyapunov exponent \eqref{eq:KerrLyp}
\begin{align}
\label{eq:half-orbit dilation}
\mathop{}\!\partial_\alpha n_{\rm orb}=\frac{1}{\gamma}\;.
\end{align}
\begin{figure}[htp!]
\centering
\includegraphics[width=.49\textwidth]{KerrScreen1.pdf}\quad
\includegraphics[width=.49\textwidth]{KerrScreen2.pdf}
\caption{
Action of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ dilations on the image plane of an observer at a large distance from a Kerr black hole, located in the equatorial plane (first panel) or at an inclination of $60^\circ$ (second panel).
The critical curve (red) corresponds to photons with $\hat{H}=0$ that asymptote to bound photon orbits at $\tilde{r}$.
Photons in its interior have $\hat{H}>0$ and are captured by the black hole, while those in its exterior have $\hat{H}<0$ and are deflected back to null infinity.
If the photon energy (color) $E$ is fixed, then every choice of $(L,Q)$ defines a point $(\alpha,\beta)$ with coordinates fixed by $\lambda=\frac{L}{E}$ and $\eta=\frac{Q}{E^2}$.
Otherwise, every $(L,Q)$ defines a ray, with each ray corresponding to the equivalence class of points $(\alpha,\beta)$ whose conserved quantities $(\lambda,\eta)$ are related by energy rescaling.
The last two panels show the action of $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ dilations in the phase space of conserved quantities $(\lambda,\eta)$.
Only the equatorial observer can see the entire photon shell in the sky (first panel), and therefore all of the $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ orbits in phase space (all of the third panel).
A non-equatorial observer sees only the subshell of the full photon shell for which $\beta^2(\tilde{r})>0$.
For the observer at an inclination of $60^\circ$, this corresponds to the unshaded region in the fourth panel.
Some $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ flows cross into the shaded region of phase space, which includes the part of the photon shell that is inaccessible to the observer.
The corresponding dilation flows on the observer screen (second panel) vanish into the horizontal $\alpha$ axis.
}
\label{fig:Kerr}
\end{figure}
For the simple and observationally relevant case of stationary, axisymmetric source rings of fixed polar angle $\theta$, the emissivity is independent of $\phi_s$ and emission time $t_s$.
We can then easily repeat the construction of section \ref{subsec:SchScr} for the Kerr black hole.
Considering only geodesics that connect the (axisymmetric) source ring to the telescope cuts the five-dimensional space $\Gamma_{L,Q}\times S^1$ to an infinite, discrete set of circles $S^1$, which we will again call $\Gamma_{\rm obs}$.
Using \eqref{eq:half-orbit dilation}, we identify the $\mathsf{SL}(2,\mathbb{R})_{\rm PR}$ element
\begin{align}
D_0=e^{-\gamma H_0}\;,
\end{align}
already discussed above, which maps $\Gamma_{\rm obs}$ to itself while taking $n_{\rm orb}\to n_{\rm orb}+1$.
The semigroup formed by products of $D_0$ is an emergent discrete scaling symmetry of the photon ring.
\section{Quantum Ruelle resonances = classical Lyapunov exponents}
\label{sec:KerrHologram}
In known examples, the $e^{S_{\rm BH}}$ black hole microstates are described by approximately thermal high-energy states in a lower-dimensional quantum field theory, and the response of the black hole to small perturbations is described by linear response theory in the dual quantum mechanics.
This dictionary maps the quasinormal ringing of the black hole atmosphere to the damped oscillations of a perturbed thermal state as it relaxes towards equilibrium \cite{Horowitz2000,Son2002,Birmingham2002,Birmingham2003,Polchinski2015}.
Operating under the relatively mild assumption that the holographic principle applies to asymptotically flat black holes, we expect that Kerr black holes like M87* can be described by a quantum system that we will refer to as the quantum dual.
This \textit{quantum-mechanical} system, if it exists, is constrained by a number of universal features of \textit{classical} black hole physics.
Absent the ability to derive this dual directly from a microscopic theory of quantum gravity, we can instead attempt to infer its properties indirectly from the bottom-up by throwing objects at the black hole and measuring the universal aspects of its response.
As discussed in sections \ref{sec:Schwarzschild} and \ref{sec:Kerr}, the high-frequency part of the Ruelle spectrum has a universal form when expressed in terms of the geometric data of the photon shell.
Following known examples of the holographic dictionary, these QNMs are interpreted as poles in the real-time (retarded) thermal two-point functions in the quantum dual.
In other words, we {\it assume} that these frequencies characterize the long-time ($\Delta t\equiv t-t'\gg T_H^{-1}$) correlations of operators in the quantum dual, which obey
\begin{align}
\av{\mathcal{O}_{\ell,m}(t)\mathcal{O}_{\ell,-m}(t')}\sim\sum_n e^{-i\omega_{\ell mn}\Delta t}\;.
\end{align}
The brackets denote a thermal average at the Kerr temperature $T_H$ and angular potential $\Omega_H$ in the dual quantum theory\footnote{For black holes in asymptotically flat space, thermal traces (and the partition function itself) do not converge due to negative specific heats and superradiant instabilities, so this expression must be interpreted with some care.}
\begin{align}
\av{X}={\rm Tr}\!\br{e^{-\frac{\omega-m\Omega_H}{T_H}}X}\;.
\end{align}
The explicit form of the Ruelle spectrum \eqref{eq:KerrSpectrum} has some salient features.
In a quantum theory, the integers $\ell$ are presumably cut off before $\ell\sim\frac{M}{M_{\rm Planck}}$ when the real parts of the frequencies reach the Planck scale.
As the (rescaled) momentum $\mu=\frac{m}{\ell}$ around the circle runs from $-1$ to $1$, the (rescaled) frequency $\Omega_R(\mu)$ remains positive and increases monotonically
\begin{align}
-\frac{1}{\tilde{\lambda}(\tilde{r}_+)}
\le\Omega_R(\mu)
\le\frac{1}{\tilde{\lambda}(\tilde{r}_-)}\;.
\end{align}
The dispersion relation $\Omega_R(\mu)$, although universal in Einstein gravity, is complicated and no proposed dual to the Kerr black hole has been able to account for it.
\section*{Acknowledgements}
This work is supported by the Center of Mathematical Sciences and Applications and the Black Hole Initiative at Harvard University, as well as DOE grant DE-SC0007870.
A.L. gratefully acknowledges Will and Kacie Snellings for their generous support.
\section{Background}
\label{sec:bg}
In this section, we describe the read and write paths involved in a key-value store, and the anti-entropy protocols which are used to implement eventual consistency by reconciling divergent distributed replicas.\XSpace{ Readers familiar with key-value store system internals can skip this section without loss of continuity.} Key-value stores persist pairs of keys and values, and usually have two basic operations: get(key) for retrieving the value corresponding to a key, and put(key, value) for storing the value of a particular key\XSpace{\footnote{We use both read/write and get/put terms to mean data fetch and data update operations.}}. Key-value stores typically use consistent hashing~\cite{Karger:1997:CHR:258533.258660} to distribute keys to servers, and each key is replicated across multiple servers for fault tolerance. When a client issues a put or get operation, it first interacts with a server, e.g., the server closest to the client. This server acts as a \emph{coordinator}: it coordinates the client and replica servers to complete the put and get operations. The CAP theorem~\cite{lg:cap} implies that during network partitions (where servers are split into two groups with no intercommunication), a key-value store must choose either strong consistency (linearizability)~\cite{her:lin} or availability. Even when the network is not partitioned, the system is sometimes configured to favor latency over consistency~\cite{abadi:cap}. As a result, popular key-value stores \XSpace{like Cassandra~\cite{laksh:cass} and Riak~\cite{riak}}expose tunable \emph{consistency levels}.
These consistency levels control the number of servers the coordinator needs to hear from before declaring success on reads and writes.
For instance, a write threshold of one allows the system to return with success after writing to just one replica.
When the sum of read and write thresholds is greater than the number of replicas, the system will ensure strong consistency.
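This threshold arithmetic can be sketched in a few lines (the function and parameter names are ours, not from any particular system): a read quorum of size $R$ and a write quorum of size $W$ over $N$ replicas are guaranteed to overlap in at least one replica, and hence every read sees the latest acknowledged write, exactly when $R+W>N$.

```c
/* A read quorum of size r and a write quorum of size w over n replicas
 * must share at least one replica -- so reads see the latest
 * acknowledged write -- exactly when r + w > n. */
int quorums_intersect(int r, int w, int n) {
    return r + w > n;
}
```

With $N=3$, the common setting $R=W=2$ satisfies the condition, while the weakest setting $R=W=1$ discussed above does not.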
In general, a consistency model can be characterized by its restrictions on operation ordering. The strongest models, e.g., linearizability~\cite{her:lin}, severely restrict possible operation orderings that can lead to correct behavior. Eventual consistency, in contrast, is one of the weakest consistency models. Informally, it guarantees that, if no further updates are made to a given data item, reads to that item will eventually return the same value~\cite{V09}. Thus, until some undefined time in the future when the system is supposed to converge, the user can never rule out the possibility of data inconsistency.\XSpace{Despite the weak guarantees, many applications have been successfully built on top of eventually consistent stores.}
To achieve high availability and reliability, key-value stores typically replicate data on multiple servers. For example, each key can be replicated on $N$ servers, where $N$ is a configurable parameter. In the weakest consistency setting (with read and write thresholds of one), each get and put operation only touches one replica (e.g., the one closest to the coordinator). Thus, in the worst-case scenario where all puts go to one server, and all get operations are served by a different server, the replicas will never converge to the same value. To ensure convergence to the same value, production key-value stores employ \emph{anti-entropy} protocols~\cite{Demers:1987:EAR:41840.41841}. An anti-entropy protocol operates by comparing replicas and reconciling differences. The three main anti-entropy protocols are read-repair, hinted-handoff, and node-repair. While the first two are \emph{real-time} protocols involved in, respectively, the read and write paths, the third one is an offline protocol, which runs periodically\XSpace{ (e.g., during non-peak load hours)} to repair out-of-sync nodes (e.g., when a node rejoins after recovering from a crash). Here, we only consider the real-time anti-entropy protocols. \XSpace{Node-repair is mostly an offline process whose correctness lies solely in the semantics of the merge, so we do not consider it in this paper.}
Read-repair~\cite{DeCandia07} is a real-time anti-entropy mechanism that ensures that all replicas have (eventually) the most recent version of a value for a given key.
In a typical read path, the coordinator forwards read requests to all replicas, and waits for a consistency level ($CL$ out of $N$) number of replicas to reply. If read-repair is enabled, the coordinator checks all the read responses (from the nodes currently alive), determines the most recent read value\XSpace{\footnote{Determining the most recent version of data to push to out of date replicas is implementation dependent. For Cassandra, the replica value with highest client timestamp wins. Riak uses vector clocks to decide the winner, and can deduce multiple winners in case of concurrent writes.}}, and finally pushes the latest version to all out of date replicas.
Hinted-handoff~\cite{DeCandia07}, unlike read-repair, is part of the write path. It offers full write availability in case of failures, and can improve consistency after temporary network failures. When the coordinator finds that one of the replicas responsible for storing an update is temporarily down (e.g., based on failure detector predictions), it stores a hint meta-data for the down node for a configurable duration of time. Once the coordinator detects that the down node is up, it will attempt to send the stored hint to that recovered node. Thus hinted-handoff ensures that no writes are lost, even in the presence of temporary node failures. In other words, this mechanism is used to ensure that eventually all writes are propagated to all the replicas responsible for the key.
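A corresponding sketch of the write path with hints follows (again with illustrative names; production systems additionally bound how long hints are retained).

```c
#define N_REPLICAS 3
#define MAX_HINTS  16

typedef struct {
    int  replica;    /* intended destination */
    long timestamp;
    int  value;
} Hint;

static long replica_ts[N_REPLICAS];
static int  replica_val[N_REPLICAS];
static int  up[N_REPLICAS] = {1, 1, 1};

static Hint hints[MAX_HINTS];
static int  n_hints = 0;

/* Coordinator write: deliver to live replicas, and store a hint for
 * each replica the failure detector currently reports as down. */
void put(long ts, int value) {
    for (int i = 0; i < N_REPLICAS; i++) {
        if (up[i]) {
            if (ts > replica_ts[i]) { replica_ts[i] = ts; replica_val[i] = value; }
        } else if (n_hints < MAX_HINTS) {
            hints[n_hints++] = (Hint){i, ts, value};
        }
    }
}

/* Called when the coordinator detects that a replica has recovered:
 * replay and discard all hints destined for it. */
void handoff(int replica) {
    up[replica] = 1;
    int kept = 0;
    for (int i = 0; i < n_hints; i++) {
        Hint h = hints[i];
        if (h.replica == replica) {
            if (h.timestamp > replica_ts[replica]) {
                replica_ts[replica]  = h.timestamp;
                replica_val[replica] = h.value;
            }
        } else {
            hints[kept++] = h;
        }
    }
    n_hints = kept;
}
```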
\section{Conclusions}
\label{sec:conclusion}
We have shown how formal properties of production key-value (NoSQL) storage systems can be inferred by finding properties
that are provable for a high-level distributed protocol model of the implementation.
Furthermore, we have proposed a modeling technique using programs in concurrent C, which makes models executable and testable, and
enables mostly automated formal verification using the \textsc{Vcc}\xspace tool.
We have verified both eventual delivery of the hinted-handoff protocol under transient failures and eventual consistency of the read-repair protocol when an arbitrary
number of reads is issued. We also discovered several surprising counterexamples during the verification of related conjectures based on online documentation, and this experience helped us develop a firm understanding of when and how these protocols guarantee eventual consistency. To the best of our knowledge, this is the first time these anti-entropy protocols have been verified exhaustively using deductive verification.
We believe the methodology proposed in this work is promising and applying our methodology to a larger class of production protocols (e.g., blockchain, Google's Cloud Spanner) is interesting future work.
\section{Verification of the Anti-entropy protocols}
\label{sec:verification}
In this section, we describe our verification methodology, and our verification of the hinted-handoff and read-repair anti-entropy protocols using the program model. We use the \emph{deductive verification} style for proving programs correct.
For sequential programs, this style is close to Hoare logic style reasoning~\cite{Apt81}.
It proceeds by the programmer annotating each method with pre/post
conditions and each loop with an invariant expressing the desired program properties. Furthermore, in order to prove that functions terminate, the user provides \emph{ranking functions} for loops (and recursive calls) that map states to natural numbers and must strictly decrease with each iteration. Reasoning that annotations are
correct is done \emph{mostly} automatically using SMT solvers\XSpace{, with very little help from the user}.
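As a toy illustration of this annotation style, consider a loop summing the first $n$ naturals; the comments below stand in for the pre/post conditions, loop invariant, and ranking function that would be supplied to the verifier (in \textsc{Vcc}\xspace these would be written as annotation clauses rather than comments).

```c
/* Sum of the first n naturals, annotated in the deductive style.
 * The comments play the role of the verifier's pre/post conditions,
 * loop invariant, and ranking (termination) function. */
int sum_to(int n) {          /* requires: n >= 0               */
    int s = 0, i = 0;        /* ensures:  result == n*(n+1)/2  */
    while (i < n) {
        /* invariant: 0 <= i <= n && s == i*(i+1)/2 */
        /* decreases: n - i   (the ranking function) */
        i++;
        s += i;
    }
    return s;
}
```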
There are several different approaches to verify concurrent programs, especially for modular verification.
We use the \textsc{Vcc}\xspace tool~\cite{Cohen09} to verify our models. \textsc{Vcc}\xspace is a verifier for concurrent C programs\XSpace{~\footnote{Even though our model and invariants apply to unbounded number of instances, verification of C programs, strictly speaking, assumes integer manipulations to \texttt{MAX\_INT} (i.e., typically $2^{32}$ on 32-bit architectures).}}. The basic approach we take to verify our models
is to treat each concurrent thread as a sequential thread for verification purposes, but where every access to a shared variable is preceded and succeeded by a \emph{havoc} that entirely destroys the structures shared with other threads. However, this havoc-ing is guarded by an \emph{invariant} for the global structures that the user provides. Furthermore, we check that whenever
a thread changes a global structure, it maintains this global invariant.
This approach to verification is similar to \emph{rely-guarantee} reasoning~\cite{Jones83},
where all threads rely and guarantee to maintain the global invariant on
the shared structures.
Another key aspect of the verification process is writing the \emph{specification}.
Though the specification is written mainly as assertions and demanding that certain functions terminate, specifications are often described accurately\XSpace{ and naturally} using \emph{ghost code}~\cite{Apt81,Cohen09}. Ghost code is code written purely for verification purposes (it does not get executed) and is written as instructions
that manipulate ghost variables. It is syntactically constrained so that real code can never see the ghost state. Hence this ensures that the ghost code cannot affect the real code. We use ghost code to model the taint-based specification
for eventual delivery (see Section~\ref{subsec:hh}). It is important that the protocol does not see the tainted write, because we do not want a flow of information between the executable program and the specification. We also use ghost code to maintain abstractions of concrete data structures (e.g., a set abstracts an array).
We performed the program model verification on an Intel Core i7 laptop with 8 GB of RAM, running Windows 8 and using Visual Studio 2012 with \textsc{Vcc}\xspace v2.3 as a plugin. Our model\footnote{The code is available at \url{https://github.com/palmskog/evcons-model}} consists of about 1500 lines of code and annotations, where about 900 lines are executable C code and the rest are annotations not seen by the C compiler. \XSpace{The annotations comprise ghost code (20\%) and invariants (80\%). The total time taken for the verification of the whole model is around a minute.}
\XSpace{We extensively used testing, especially in early stages, to assert invariants
that we believe held in the system at various points in the code. Prior to verification, which requires strong inductive invariants, testing allowed
us to gain confidence in the proof we were building (as well as the model
we were constructing). These invariants then were the foundation on which
the final proof was built upon.}
\vspace{-0.3cm}
\subsection{Verifying the Hinted-handoff Protocol}
\label{subsec:hh}
As explained in Section~\ref{sec:main-result}, verification that the hinted-handoff protocol maintains strong eventual consistency under transient failures and for idempotent operation-based CRDT reduces to verification of eventual delivery. Recall that eventual delivery is the property that every successful write eventually gets delivered to every replica at least once.
We model eventual delivery using a ghost field \emph{taint}, which records a particular (exactly one) write operation issued to the coordinator. We assert that this taint will eventually propagate to each replica's local store. Intuitively, the write that was chosen to be
tainted will taint the value written, and this taint will persist as the
value moves across the network, including when it is stored in the hint store
and the pending store, before being written to the local store.
Taints persist and do not disappear when they reach the local store.
Hence, demanding that the local stores eventually get tainted captures the
property that the chosen write is eventually delivered at least once to every local store.
Note that the tainted values are system-agnostic ghost fields, and hence proving the above property for an arbitrary write ensures that \emph{all writes} are eventually delivered.
To prove the specification, we introduce 3 ghost fields: (a) \textit{ls\_tainted\_nodes}, the set of replicas that have updated their local store with the tainted write, (b) \textit{hs\_tainted\_nodes}, the set of replicas for which the coordinator has stored the tainted write operation as a hint in its hint store, and (c) \textit{ps\_tainted\_nodes}, the set of replicas for which the tainted write has been issued, but where its delivery is pending in the network.
We add ghost code to maintain the semantics of the taint in various functions,
including \textit{put}, \textit{network} and \textit{handoff\_hint}.
Every time any of these functions transfers values, we ensure that the taints
also get propagated. When a value is written to a local store, the store is considered tainted if it either already had a tainted value
or the new value being written is tainted; otherwise, it is untainted.
\XSpace{(In fact, the taint-based store can itself be seen as an operation based CRDT
which never loses the taints.) }Furthermore, we add ghost code to update the ghost fields described above.
For eventual delivery, we want to prove that, when all replicas remain available and all the read/write operations have stopped, the tainted write operation is eventually propagated to the local stores of all the replicas.
We prove eventual taintedness of stores by proving a global \emph{termination property}.
We model the point where inputs stop arriving using a variable $\textit{stop}$ and by making all nodes alive while
disabling the functions $\textit{get}$ and $\textit{put}$. We then prove, using a
ranking function, that the model will eventually reach a state where all nodes corresponding to
the tainted key are tainted.
We first specify a (safety) invariant for the shared state and specify a ranking function
on the state for arguing eventual taintedness. The invariant of the shared state asserts that for every replica
responsible for the tainted key, either its local store is tainted or there is a
tainted write pending in the network for it, or there is a hint in the
corresponding coordinator which has a tainted write for it.
More precisely, for each replica responsible for the tainted key, we demand
that the replica is in one of the ghost sets\XSpace{, namely,
\textit{ps\_tainted\_nodes}, \textit{hs\_tainted\_nodes},
and \textit{ls\_tainted\_nodes}}:
$\forall\, r.~ (~r \in \textit{ps\_tainted\_nodes~} ~\vee ~ r \in \textit{hs\_tainted\_nodes~} ~ \vee ~ r \in \textit{ls\_tainted\_nodes}~),$
\noindent where the quantification is over replicas $r$ responsible for the tainted key.
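This per-replica disjunction can be sketched as a simple check over the ghost sets (the bit-array encoding and the bound are ours).

```c
#define N 5  /* replicas responsible for the tainted key */

/* Ghost sets encoded as bit arrays over the responsible replicas. */
int in_ps[N], in_hs[N], in_ls[N];

/* The shared-state invariant: every responsible replica has the tainted
 * write either pending in the network, hinted at the coordinator, or
 * already applied to its local store. */
int invariant_holds(void) {
    for (int r = 0; r < N; r++)
        if (!(in_ps[r] || in_hs[r] || in_ls[r]))
            return 0;
    return 1;
}
```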
The ranking function quantifies, approximately, the \emph{time} it would take for the system to reach a consistent state. In our case, the ranking function $| \textit{hint\_store} | +$ $2\cdot|\textit{pending\_store}|$ suffices. We prove
that the rank decreases every time one of the protocol functions executes.
\XSpace{Intuitively, a taint that is pending in the network may take two steps to get to the local store, since it may first be transferred
to the hint store and then to the local store, while tainted messages in the hint store take one step to the local store.}
Finally, we assert and prove that when the rank is zero, all nodes corresponding to the tainted key are tainted.
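The decrease argument can be illustrated concretely (the sizes are illustrative): moving a tainted write from the pending store directly to a local store lowers the rank by two, from the pending store to the hint store by one, and from the hint store to a local store by one.

```c
/* Rank = |hint_store| + 2*|pending_store|: a pending write may need two
 * hops (network -> hint store -> local store), a hint only one. */
int rank(int n_hints, int n_pending) {
    return n_hints + 2 * n_pending;
}
```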
\section{Introduction}
Distributed systems are complex software systems that pose myriad challenges to formal verification. Some systems are constructed from rigorously described distributed algorithms~\cite{Paxos}, which requires bridging a substantial gap from an abstract algorithm to executable code~\cite{PaxosMadeLive}. The implementation of distributed protocols developed this way is, of course, not usually formally proven
to be a refinement of the textbook algorithm, though some research on developing the implementation using \emph{formally verified refinements} has been done~\cite{Hawblitzel15}. However, most \emph{production systems} have not been built through a top-down approach from well-understood and proven-on-paper algorithms, but rather have been developed in an ad-hoc fashion, from scratch (e.g., on whiteboards), undergoing iteration and revision over a long period of time. A large bulk of today's open-source distributed systems software fits this category, the most prominent among these being key-value/NoSQL storage systems.
Consequently, to understand the formal properties guaranteed by these systems, we need to build
high-level protocol models and infer the properties guaranteed by these protocols using the implementation and its available documentation.
In this paper, we build models and using them derive formal properties for two core distributed protocols used in \emph{eventually consistent} distributed key-value stores: the \emph{hinted-handoff protocol} and the \emph{read-repair} protocol. Eventually consistent key-value stores originate from the Dynamo system by Amazon~\cite{DeCandia07} and are currently implemented in production key-value stores such as Cassandra\footnote{\url{http://cassandra.apache.org}} and Riak\footnote{\url{http://basho.com/products/riak-kv/}}. They are used widely today, e.g., Amazon relies on Dynamo for its shopping cart, and Cassandra is used by Netflix and many other companies. Yet, none of these systems were built from rigorously proven distributed algorithms.
Our approach is to model the high-level protocols in these implementations, and then derive formal properties that the models guarantee
by finding properties that are formally verifiable for the models. We derive formal properties for the two protocols mentioned above and
verify them (against the model) by using a novel methodology called \emph{certified program models}, where a high-level distributed algorithm is modeled using \emph{programs} written in traditional systems languages, e.g., C with concurrency,
and then certified to be correct against their specifications using program verification. The program models capture not only the behavior of distributed processes and their memory and secondary storage states, but also network communication, delays, and failures, using non-determinism where necessary.
Modeling and verification using certified program models has several salient aspects.
First, program models are \emph{executable} and can be validated for conformance to the system using testing,
where the programmer can write test harnesses that control inputs as well as physical events such as node and network failures,
and test using mature systematic testing tools for concurrent software, e.g., \textsc{Chess}\xspace~\cite{Musuvathi08}.
Moreover, program models permit accurate modeling of specifications of protocols using \emph{ghost state} as well as
assertions in powerful logics. Finally, program models lend themselves well to \emph{program verification techniques}, especially using tools such as \textsc{Vcc}\xspace~\cite{Cohen09} that automate large parts of the reasoning using logical constraint solvers.
Our experience in this work shows that certified program models are an appealing sweet spot for verifying distributed protocols:
they facilitate executable models that capture arbitrarily parameterized protocols while remaining amenable to mostly automated verification.
\vspace{-0.5cm}
\subsection{Key-value Stores and Eventual Consistency}
Key-value storage systems arose out of the CAP theorem/conjecture, which was postulated by Brewer and proved by Gilbert and Lynch~\cite{lg:cap}. The conjecture states that a distributed storage system can choose at most two out of three important char\-ac\-te\-ris\-tics---strong data Consistency (i.e., linearizability or sequential consistency), Availability of data (to reads and writes), and Partition-tolerance. Hence, achieving strong consistency while at the same time providing availability in a partitioned system with failures is impossible.
While traditional databases preferred consistency and availability, the new generation of key-value systems are designed to be partition-tolerant, both within a datacenter as well as across multiple data-centers. As a result, a key-value system is forced to chose between one of either strong consistency or availability---the latter option provides lower latencies for reads and writes~\cite{Abadi12}.
Key-value systems that prefer availability include Cassandra, Riak, and Dynamo~\cite{DeCandia07}, and support weak models of consistency (e.g., eventual consistency). Other key-value systems, e.g., Bigtable~\cite{DBLP:journals/tocs/ChangDGHWBCFG08}, instead prefer strong consistency, and may be unavailable under failure scenarios.
One popular weak consistency notion is eventual consistency, which roughly speaking, says that {\it if no further updates are made to a given data item, all replicas will eventually hold the same value (and a read would then produce this value)}~\cite{DeCandia07}. Eventual consistency is a \emph{liveness property}, not a safety property~\cite{Bailis:2013:ECT:2460276.2462076}.
The precise notion of what eventual consistency means in these protocols (the precise assumptions under which they hold, the failure models, the
assumptions on the environment, etc.) are not well understood, let alone proven. Programmers also do not understand the subtleties of eventually
consistent stores; for instance, default modes in Riak and Cassandra can permanently lose writes---this has been exploited
in an attack involving Bitcoin\footnote{\url{http://hackingdistributed.com/2014/04/06/another-one-bites-the-dust-flexcoin/}}.
\vspace{-0.3cm}
\subsection{Contributions}
\label{sec:contribs}
The primary contribution of this paper is to precisely reason about the guarantees of eventual consistency that
two core protocols used in production implementations of key-value stores provide.
More specifically, we model and verify the correctness of the \emph{hinted-handoff} protocol and the \emph{read-repair} protocol, which are anti-entropy mechanisms first proposed in the Amazon Dynamo system~\cite{DeCandia07}, and later implemented in systems such as Riak and Cassandra.
We build program models for these protocols in concurrent C that we verify for eventual consistency.
The programs use \emph{threads} to model concurrency, where each get/put operation as well as the asynchronous calls they make
are modeled using concurrently running threads.
The state of the processes, such as stores at replicas and the hinted-handoff tables, are modeled
as shared arrays. Communication between processes is also modeled using data-structures: the network is simulated using
a set that stores pending messages to replicas, with an independent thread sending them to their destinations.
Failures and non-determinism of message arrivals, etc., are also captured programmatically using
non-determinism (modeled using stubs during verification and using random coin-tosses during testing).
In particular, system latency is captured by background threads that are free to execute any time,
modeling arbitrarily long delays.
In the case of the \emph{hinted-handoff protocol}, we prove that this protocol working alone guarantees eventual consistency provided there
are only transient faults. In fact, we prove a stronger theorem by showing that for any operation-based (commutative) conflict-free replicated data-type implementing a register, the protocol ensures \emph{strong eventual consistency}---this covers a variety of schemes that systems, including Riak and Cassandra, use to resolve conflicts when implementing a key-value store. Strong eventual consistency guarantees not only eventual consistency, but also that the store always contains a value which is a function of the set of updates it has received, independent of the order in which they were received. We prove this by showing that the hinted-handoff protocol (under only transient failures) ensures
\emph{eventual delivery} of updates---when this is combined with an idempotent and commutative data structure like a CmRDT~\cite{Shapiro-Tech-Report11}, it ensures strong eventual consistency.
We model the eventual delivery property in the program model using a ghost \emph{taint} that taints a particular write at a coordinator
(unbeknownst to the protocol), and asserts that the taint propagates eventually to every replica.
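The taint mechanism can be sketched as follows. This is a minimal illustration, not the model's actual code: the names, the fixed replica count, and the sequential driver are our assumptions.

```c
#include <stdbool.h>

#define NUM_REPLICAS 3   /* fixed here only for illustration */

/* Ghost state: the tracked write is tainted at the coordinator, and
 * the taint travels with every delivery of that write to a replica. */
static bool taint_source = false;
static bool tainted[NUM_REPLICAS];

/* Deliver the tracked write to a replica; the taint piggybacks on it. */
static void deliver_tracked_write(int replica) {
    if (taint_source)
        tainted[replica] = true;
}

/* Eventual delivery holds once every replica has been tainted;
 * this is the (liveness) assertion checked in the model. */
static bool all_tainted(void) {
    for (int r = 0; r < NUM_REPLICAS; r++)
        if (!tainted[r])
            return false;
    return true;
}
```

In the actual model the deliveries are performed by concurrently running threads; the assertion is that `all_tainted` eventually holds on every execution.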
Like eventual consistency, eventual delivery is also a \emph{liveness property}. It is established by finding a ranking function
that models abstractly the time needed to reach a consistent state, and a slew of corresponding safety properties to prove this program correct.
For the \emph{read-repair protocol}, the literature and documentation of the above systems indicated
that a read-repair (issued during a read) would eventually bring the nodes
that are alive to a consistent state. However, while attempting to prove this property, we realized that
no invariant could establish it, and that the property is in fact false.
In fact, a single read is insufficient to reach eventual consistency. Hence, we prove a more complex property: beyond a point, if a set of nodes are alive and they all
stay alive, and if all requests stop except for an unbounded sequence of reads to a key, then the live nodes that are responsible
for the key will eventually converge. In other words, one read is not enough for convergence, and the system needs a long series of reads.
Note that the certification that the program models satisfy their specification
is for an \emph{unbounded} number of threads, which model an unbounded number of replicas,
keys, values, etc., model arbitrarily long input sequences of updates and reads to the keys, and model the concurrency prevalent in the system
using parallelism in the program. The verification is hence a \emph{complete} verification of the models, in contrast to approaches using under-approximations to systematically test a bounded-resource system~\cite{maudeCass,Newcombe15}. \XSpace{In particular, Amazon has reported modeling of distributed protocols using TLA, a formal system, and used model-checking (systematic testing) on bounded instances of the TLA system to help understand the protocols, check their properties, and help make design decisions.} Our approach is to model protocols using C programs, which we believe are much simpler for systems engineers to use to model protocols, and, being executable, are easy to test using test harnesses. Most importantly, we have proved the entire behavior of the protocol correct \XSpace{(as opposed to the work using TLA)} using the \XSpace{state-of-the-art} program verification framework \textsc{Vcc}\xspace~\cite{Cohen09} that automates several stages of the reasoning.
\paragraph{Paper Organization.} The rest of the paper is structured as follows. Section~\ref{sec:bg} gives more details on key-value stores, eventual consistency, and the read-repair and hinted-handoff anti-entropy protocols\XSpace{ (readers familiar with these topics can choose to skip this section)}. We state our main results in Section~\ref{sec:main-result}, where we describe the precise properties we prove for the protocol models as well as some properties that we expected to be true initially, but which we learned were not true in general. Section~\ref{sec:model} describes our program models of protocols in detail, including the testing approach we used to check that our model was reasonable. The verification process, including background on program verification, the invariants and ranking functions required for proving the properties is covered in Section~\ref{sec:verification}. Section~\ref{sec:related} describes related work and Section~\ref{sec:conclusion} concludes.
\section{Program Models for the Protocols}
\label{sec:model}
In this section we describe how we model the anti-entropy protocols used in eventually consistent key-value stores.
The architecture of our model is depicted in Figure~\ref{fig:arch}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=16cm, height=6cm]{arch.pdf}
\caption{Architecture of the model with boxes for functions, ellipses for data structures and arrows for communication.}
\label{fig:arch}
\end{center}
\vspace{-1cm}
\end{figure}
\subsection{Program Model Overview}
Our C program model represents replicas as concurrently executing threads that communicate by asynchronous message passing. Each replica uses several functions (\textit{get}, \textit{put}, \textit{write\_ls}, \textit{read\_ls}, etc.) to update its state and communicate. Replica state is kept in globally shared array data structures (\textit{local\_store}, \textit{hint\_store}, etc.). Furthermore, in order to model asynchronous messaging, we maintain a data structure \textit{pending\_store}, which represents messages in the network that have not yet been delivered. The functions in our model include:
\begin{itemize}
\item The \textit{get} and \textit{put} functions at coordinators, which form
the interface to clients for reading and writing key-value pairs.
\item An internal function \textit{handoff\_hint} for each replica that runs all the time and removes hints from the hinted-handoff table and propagates them to the appropriate replicas (provided they are alive).
\item An internal function \textit{read\_repair} which, as part of the read path, waits for all the replicas to reply and, on detecting replicas with inconsistent values, writes the consistent value to those replicas.
\item Internal functions \textit{read\_ls} and \textit{write\_ls}, that read from and write to the local stores (provided they are alive).
\item An internal function \textit{network} that is invoked repeatedly and delivers messages in the pending store to replicas.
\item An internal function \textit{permanent\_failures}, which is invoked repeatedly when permanent failures are modeled, and can remove elements from the pending set (modeling loss of messages), restore any local store to its default value (modeling store crashes), and destroy hinted-handoff tables.
\end{itemize}
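A minimal sketch of the shared state and of one delivery step of the \textit{network} function is shown below. The array dimensions, the struct layout, and the exact field names are our assumptions; only the data-structure names mirror the description above.

```c
#include <stdbool.h>

#define MAX_REPLICAS 4
#define MAX_KEYS     8
#define MAX_PENDING  32

/* A pending write message in the simulated network. */
typedef struct { bool in_use; int replica, key, value; } msg_t;

static int   local_store[MAX_REPLICAS][MAX_KEYS]; /* per-replica stores   */
static msg_t pending_store[MAX_PENDING];          /* undelivered messages */

/* Enqueue a write for a replica (called from the put path). */
static bool send_write(int replica, int key, int value) {
    for (int i = 0; i < MAX_PENDING; i++)
        if (!pending_store[i].in_use) {
            pending_store[i] = (msg_t){ true, replica, key, value };
            return true;
        }
    return false;                       /* pending store full */
}

/* One step of the network thread: pick a pending message and
 * deliver it to the target replica's local store. */
static bool network_step(void) {
    for (int i = 0; i < MAX_PENDING; i++)
        if (pending_store[i].in_use) {
            local_store[pending_store[i].replica][pending_store[i].key] =
                pending_store[i].value;
            pending_store[i].in_use = false;
            return true;
        }
    return false;                       /* nothing pending */
}
```

In the model itself these functions run as concurrent threads guarded by locks, so deliveries interleave arbitrarily with \textit{get}/\textit{put} invocations.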
Note that modeling these functions using fine-grained concurrency ensures the possibility of
arbitrary interleaving of function invocations as well as arbitrary delays.
Also, transient failures, where nodes fail but resume later with the correct
state, can be seen as delays in processes, and hence are captured in this
concurrency model. The thread that delivers messages in the pending set captures arbitrary delays in the network.
The \textit{read\_ls} and \textit{write\_ls} functions are modeled abstractly
as idempotent CRDTs by defining them as stubs which maintain specified properties. When
testing, these functions need to be instantiated to particular conflict-resolution strategies (e.g., MV or LWW).
When a client issues a get request for a key in our model, the request is routed to the coordinator that is determined for this key according to an abstract map (hence capturing all possible hashing schemes). Every key-value pair is replicated across multiple nodes, where the number of nodes that contain the key-value pair is determined by a replication factor. The coordinator maintains a preference list of replicas that contain data values for keys that are mapped to it. Along the read path, the coordinator asynchronously issues the read request to all replica threads (an asynchronous call to a replica is depicted in Figure~\ref{fig:arch} as an arrow from the \textit{get} function to \textit{read\_ls}). As shown in Figure~\ref{fig:arch}, the coordinator blocks for a non-deterministic amount of time or until it receives enough responses (the arrow directed from \textit{read\_ls} to \textit{get}) as specified by the read consistency level $R$. After receiving responses from $R$ replicas, it returns the read value(s) to the client. If read-repair is enabled, the coordinator also spawns a background thread (depicted as a call to \textit{read\_repair} from \textit{get} in Figure~\ref{fig:arch}) which will wait for responses from the other replicas \XSpace{(it already knows about responses from the $R$ replicas)} for a non-deterministic amount of time. This thread determines the most recent data value of all the values stored in the various replicas, and writes it to the replicas with stale values.
When a client issues a \textit{put} request to store a key-value pair, the request is routed to the appropriate coordinator. The coordinator asynchronously issues write requests to all replica threads in its preference list. The coordinator then blocks for a non-deterministic amount of time or until it receives enough responses, as specified by the write consistency level $W$. To model arbitrary network delays and replica failures, write operations to these replicas are inserted by the coordinator into the pending store (in Figure~\ref{fig:arch}, this is depicted as an arrow from \textit{put} to \textit{pending\_store}).
If the coordinator receives responses from $W$ replicas, it informs the client about the successful \textit{put} operation.
A background \textit{network} thread models arbitrary network delays and failure scenarios as it removes write operations from the pending store data structure and either updates the local store of the appropriate replica with the write or simply loses the operation.
When the hinted-handoff protocol is enabled and read-repair is disabled, we assume that the write operations are not lost. In this scenario, when losing/removing the write operation from the pending store, the \textit{network} thread inserts the operation as a hint in the hinted-handoff table of the appropriate coordinator.
The \textit{permanent\_failures} thread does not execute in this case and data in the global data structures is not lost.
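The write path described above can be sketched as follows. This is a simplified single-key illustration with names of our choosing: acks are counted synchronously here, whereas the model counts them asynchronously via the pending store.

```c
#include <stdbool.h>

#define MAX_REPLICAS 4

static bool replica_alive[MAX_REPLICAS];
static int  local_store[MAX_REPLICAS];   /* one key for simplicity   */
static int  hint_store[MAX_REPLICAS];    /* pending hint per replica */
static bool has_hint[MAX_REPLICAS];

/* Try to deliver a write; if the replica is down and hinted handoff
 * is enabled, record a hint instead of losing the write. */
static bool deliver_or_hint(int replica, int value, bool hinted_handoff) {
    if (replica_alive[replica]) {
        local_store[replica] = value;
        return true;                      /* counts as an ack */
    }
    if (hinted_handoff) {
        hint_store[replica] = value;
        has_hint[replica] = true;
    }
    return false;
}

/* put succeeds iff at least W replicas acked the write. */
static bool put(int value, int n_replicas, int W, bool hinted_handoff) {
    int acks = 0;
    for (int r = 0; r < n_replicas; r++)
        if (deliver_or_hint(r, value, hinted_handoff))
            acks++;
    return acks >= W;
}

/* handoff_hint: when a replica comes back, replay its hint. */
static void handoff_hint(int replica) {
    if (has_hint[replica] && replica_alive[replica]) {
        local_store[replica] = hint_store[replica];
        has_hint[replica] = false;
    }
}
```

A write that succeeds with $W=2$ out of three replicas leaves a hint for the failed replica, and \textit{handoff\_hint} later brings that replica up to date, which is exactly the eventual-delivery behavior the proof targets.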
\section{Related Work}
\label{sec:related}
Amazon's use of formal techniques~\cite{Newcombe15} for increasing confidence in the correctness of their production distributed systems is in the same spirit as our work.\XSpace{ The engineers at Amazon have successfully used TLA+~\cite{tla} to formally specify the design of various components of distributed systems.}
Like our programs, TLA+ models by Amazon engineers are executable and can be explored using model checking to uncover subtle bugs.
\XSpace{The formal TLA+ specifications are executable like our program models and these design specifications have been systematically explored using a model checker to uncover several subtle bugs in these systems.}
\XSpace{However, instead of modeling distributed algorithms in TLA+, we model them as C programs. }Newcombe et al.~\cite{Newcombe15} acknowledge that modeling systems in a high-level language like C, as we do, increases the productivity of the engineers. More importantly, in addition to model checking up to certain trace lengths, a model written in C lends itself to mostly automated verification using tools like \textsc{Vcc}\xspace that utilize automated constraint solvers, and that can verify unbounded instances of the system.
There have also been efforts towards formally modeling key-value stores like Cassandra using rewriting logic in Maude~\cite{maudeCass}. However, this model checking approach is either not exhaustive or is exhaustive on bounded instances, while ours is exhaustive on unbounded instances.
Recently, there has been work on programming languages that ease development of distributed systems, in particular, with respect to consistency properties at the application level~\cite{Burckhardt-ECOOP-12} and
fail-free idempotence~\cite{Ramalingam13}. Also, work by Sivaramakrishnan et al.~\cite{Kaki15} introduces a declarative programming model for eventually consistent data stores, which includes a \emph{contract} language that can express fine-grained application-level consistency properties.\XSpace{ Lesani et al.~\cite{Lesani16} developed a framework for modular verification of causal
consistency for replicated key-value stores.}
Hawblitzel et al.~\cite{Hawblitzel15} use TLA-style refinement to verify an implementation of Paxos consensus in Dafny, while Woos et al.~\cite{Woos2016}, verify an implementation of the Raft consensus algorithm in the Coq proof assistant; both approaches capture node and network behavior in a programming language setting, like we do, but focus on top-down system development.
Burckhardt et al.~\cite{Burckhardt-POPL-14} explore logical
mechanisms for specifying and verifying properties over replicated data types,\XSpace{ In particular, Burckhardt in his book~\cite{Burckhardt-Book} gives an extensive treatment of the principles of consistency of distributed systems.} and propose a framework for specifying replicated data types using relations over events and verifying their implementation using replication-aware simulations.
\XSpace{Deductive verification using automatic tools, such as VCC~\cite{Cohen09} have been extensively used for verifying systems in domains other than distributed systems.}\XSpace{Some of the examples are: verifying a hypervisor for the isolation property~\cite{vcc-hypervisor}, verifying operating systems like Verve~\cite{verve} and ExpressOS~\cite{Mai13} for security, verifying the L4 microkernel for functional correctness~\cite{sel4} and verifying high-level applications for end-to-end security~\cite{Ironclad}. Recently, a blend of TLA-style state-machine refinement and Hoare-logic verification was used in \cite{Hawblitzel15} for verification of distributed systems.}
\vspace{-0.3cm}
\subsection{Verifying the Read-repair Protocol}
As explained in Section~\ref{sec:main-result}, we want to verify that the read-repair protocol maintains eventual consistency in the presence of permanent failures (as stated in Result~\ref{result2} in Section~\ref{sec:readrepair}).
We prove this result both when hinted-handoff is turned on as well as when
it is disabled (we capture whether hinted-handoff is enabled/disabled using
a macro directive, and prove both versions correct).
For simplicity of presentation, we only explain here the case when the hinted-handoff protocol is disabled.
Recall that permanent failures can (a) modify the local store by setting them to default values, (b) remove an operation from the pending store, and (c) destroy the hint store.
For eventual consistency, we want to prove that when all the write operations have successfully returned to the client, then after only a finite number of read operations on a key, the read-repair mechanism ensures that the set of $R$ available replicas will converge. When the writes stop and only reads of a particular key occur\XSpace{ (infinitely often)},
we disallow the scheduling of $\textit{put}$ and disallow $\textit{get}$ on all but the tainted key,
and also bring all nodes alive and disable the methods that model failure of nodes.
We verify eventual consistency by specifying safety invariants and a ranking function.
The main component of the ranking function is the size of the pending store,
$|\textit{pending\_store}|$, which decreases whenever the network executes.
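The ranking-function argument in miniature: the rank is the number of pending messages, and each network step strictly decreases it until the pending store is empty. This toy sketch uses our own names and a flat flag array in place of the model's actual data structure.

```c
#include <stdbool.h>

#define MAX_PENDING 32

static bool pending[MAX_PENDING];   /* in_use flags of the pending store */

/* The ranking function: |pending_store|. */
static int rank_fn(void) {
    int n = 0;
    for (int i = 0; i < MAX_PENDING; i++)
        if (pending[i])
            n++;
    return n;
}

/* A network step delivers (removes) one pending message, if any,
 * so it strictly decreases rank_fn() whenever the rank is nonzero. */
static bool network_step(void) {
    for (int i = 0; i < MAX_PENDING; i++)
        if (pending[i]) {
            pending[i] = false;
            return true;
        }
    return false;
}
```

Once the rank hits zero there are no undelivered messages left, and the next read-repair can bring the live replicas to a common value.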
\XSpace{Intuitively, once the pending messages in the network get delivered, the subsequent
execution of $\textit{get}$ will issue a read-repair that will bring the nodes of the
tainted key to convergence.}
\vspace{-0.3cm}
\subsection{Program Model Testing and Validation}
\label{sec:test}
We tested our program model to make sure that it corresponds to actual systems. For our tests, we implemented the stubs that model failures and non-determinism in message arrivals. In particular, we used random coin-tosses instead of non-deterministic choices as in the verification model. We also provided concrete implementations for conflict-resolution strategies for operations on CRDTs in the form of LWW and MV. We then wrote a test harness that arbitrarily issues put and get operations for various key-value pairs, and checked that the results of these operations could be realized by the actual eventually consistent key-value stores. We also used \textsc{Chess}\xspace~\cite{Musuvathi08}, which is a systematic testing tool for concurrent programs, to systematically enumerate all possible thread schedules. Using \textsc{Chess}\xspace, we were able to ensure that our model realized strange but possible behaviors of the eventually-consistent stores.
We exhaustively tested many scenarios. Here, we discuss a configuration with three replicas, where the write consistency level is set to two, and the read consistency level is set to one. One interesting scenario is where the client successfully performs a write operation on a key with a value $0$, followed by an unsuccessful write on the same key with a value $1$. A subsequent read of the key returns the value $1$. This is a counterintuitive scenario, but it can manifest in a real system because failures are not guaranteed to leave the stores unaffected and an unsuccessful write can still write to some of the replicas.
In another scenario, the client successfully performs two consecutive write operations to a key with values $0$ and $1$. Subsequently, one read returns the value $1$, while a subsequent read returns the stale value $0$. This behavior can happen in a real system where the client gets staler values over time. In particular, this scenario occurs when the two replicas store the value $1$ after the second write operation (remember the write consistency level is two) and the third replica still stores the stale value $0$.
Finally, we consider a scenario where there are four consecutive successful writes to a key with values $0$, $1$, $2$, and $3$. (As above, consider a configuration with three replicas but where both read and write consistency levels are set to one.) If the subsequent two reads for the same key return values $2$ followed by $1$, then a following third read cannot return the value $0$. This scenario cannot happen because the three replicas must have values $1$, $2$, and $3$ at the time of the last read (the reader is invited to work this case out on paper). We used \textsc{Chess}\xspace to confirm the realizability of the first three scenarios, and the infeasibility of the last scenario.\XSpace{ \textsc{Chess}\xspace took from less than a second to up to 10 minutes to exhaustively explore all interleavings corresponding to these four test harnesses.} We were also able to observe some of these scenarios in a real installation of Cassandra.
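The first scenario above (three replicas, $W=2$, $R=1$) can be replayed deterministically in a small sketch; all names here are ours, and LWW timestamps stand in for the conflict-resolution strategy. The "unsuccessful" write still lands on one replica, so a read with $R=1$ that happens to contact that replica returns its value.

```c
/* Timestamped cell per replica (LWW conflict resolution). */
#define N 3

static int           value[N];
static unsigned long ts[N];

/* Deliver a timestamped write to one replica (last write wins). */
static void write_replica(int r, int v, unsigned long t) {
    if (t > ts[r]) {
        value[r] = v;
        ts[r] = t;
    }
}

/* Read with R = 1: return the value of the single replica contacted. */
static int read_one(int r) {
    return value[r];
}
```

This matches the counterintuitive behavior observed under \textsc{Chess}\xspace: failures are not guaranteed to leave the stores unaffected, so a failed write can still become visible.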
\vspace{-0.3cm}
\section{Characterizing and Proving Eventual Consistency}
\label{sec:main-result}
\XSpace{The goal of this paper is to prove eventual consistency of the hinted-handoff
and read-repair protocols that systems like Cassandra and Riak implement,
delineating precisely the conditions under which they hold. Our effort spanned a period of 15 months, with about 6 person months of effort for modeling and verification.}
To model and verify the read-repair and hinted-handoff protocols, we first \emph{abstract} away from the particular
instantiation of these protocols in these systems, and also abstract away
from the various options they provide to users to modify the behavior of the system. For instance, in Riak, using one set of options, every write is tagged with a vector clock at the client, and every replica responsible for it maps it to a \emph{set of values}, one for each last concurrent write that it has received. When a read is issued, Riak can return the set of \emph{all} last concurrently written values to it (these values are called ``siblings'' in Riak). However, in Cassandra, vector clocks are not used; instead each client labels every write with a timestamp, and despite there being drift among the clocks of clients, each replica stores only the last write according to this timestamp.\XSpace{ Furthermore, these policies can be changed; e.g., in Riak, a user can set options to mimic the Cassandra model.}
We capture these instantiations by generalizing the semantics of how the store is maintained. For the hinted-handoff protocol, we prove eventual consistency under the assumption that the stores are maintained using some \emph{idempotent} operation-based commutative replicated data-type (CRDT)~\cite{Shapiro-Tech-Report11,Shapiro11} that implements a \emph{register}, while for read-repair, we prove eventual consistency assuming an arbitrary form of conflict resolution. We consider two failure modes: (a) \emph{transient failure} where failed nodes and network edges remember their pre-crash state when they come back; and (b) \emph{permanent failure} where failed nodes or network edges lose memory and start with some default state when they come back.
\XSpace{Let us first discuss the failure models we consider, which are part of the assumptions needed to prove properties of protocols.}
\subsection{Properties of the Hinted-Handoff Protocol}
\label{sec:hh}
The hinted-handoff protocol is an opportunistic anti-entropy mechanism that happens during writes. When a write is issued, and the asynchronous call to write to certain replicas fail (either explicitly or due to a timeout), the coordinator knows that these replicas could be out of sync, and hence stores these update messages in a hinted-handoff table locally to send them later to the replicas when they come back alive. However, if there is a memory wipe or a permanent failure, the hinted-handoff table will be lost, and all replicas may not receive the messages. In practice, the read-repair and node-repair protocols protect against such failures.
Our main abstraction of the key-value store is to view the underlying protocol as implementing a \emph{register} using
an operation-based conflict-free replicated datatype (CRDT)~\cite{Shapiro11}, also called a commutative replicated data-type (CmRDT). As in the work of Shapiro et al.~\cite{Shapiro-Tech-Report11}, a register is a memory cell storing opaque content. A register can be queried using a read operation get, and updated with a value $v$ using a write operation put($v$). The semantics of non-concurrent put operations corresponds to the expected sequential semantics. However, when concurrent put operations do not commute, the two common conflict resolution approaches are (a) one operation takes precedence over the other, and (b) both operations are retained. When the former approach is used, the register is said to be a \emph{last write wins} (LWW) register, and when the latter approach is used, it is said to be a \emph{multi-valued} (MV) register. When implementing a simple key-value store, the vector-clock based updates in Riak can be seen as an MV register, while the simpler timestamp based updates in Cassandra can be seen as an LWW register~\cite{Shapiro-Tech-Report11}.
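A hedged sketch of an LWW register follows: because the larger timestamp wins regardless of arrival order, concurrent puts commute. Timestamps are plain integers supplied by the caller here; real systems attach them at the client.

```c
typedef struct {
    int           value;
    unsigned long ts;     /* timestamp of the winning write */
} lww_reg_t;

/* Apply a timestamped write: the later timestamp wins, so two
 * concurrent writes leave the same state in either delivery order. */
static void lww_apply(lww_reg_t *r, int value, unsigned long ts) {
    if (ts > r->ts) {
        r->value = value;
        r->ts = ts;
    }
}
```

An MV register would instead keep a set of values for concurrent (vector-clock-incomparable) writes; the commutativity argument is the same.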
\XSpace{(However, since a global wall-clock time is not available, in general, this strategy in Cassandra can \emph{lose} updates~\cite{maybe-cassandra}). The CmRDTs for both LWW and MV registers are in fact idempotent---the systems tags each write with a timestamp,
and the conflict-resolution will ignore the future deliveries of
a message with same time-stamp (see~\cite{Shapiro-Tech-Report11}, Section~\ref{sec:readrepair}).}
We also assume another property of these CmRDTs, namely idempotency---we assume that all messages are tagged with a unique id, and
when a message is delivered multiple times, the effect on the store is the same as when exactly one message is delivered. Let us call such registers idempotent CRDTs\XSpace{\footnote{Standard definitions of operation-based CRDTs do not guarantee
idempotency---instead they assume the environment delivers every message
precisely once to each replica~\cite{Shapiro11}\XSpace{(see~\cite{Shapiro11}, text after Definition~5)}. Note that state-based CRDTs are usually defined to be idempotent.}}.
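Idempotency via unique write ids can be sketched as follows (a single-cell store; the names and the bounded id space are our illustration). Redelivering a message with an id that has already been applied is a no-op, so "delivered at least once" has the same effect on the store as "delivered exactly once".

```c
#include <stdbool.h>

#define MAX_IDS 64

static bool delivered[MAX_IDS];  /* ids of writes already applied */
static int  store;               /* single-cell store for illustration */

/* Apply a write at most once per unique id; duplicate deliveries
 * of the same message leave the store unchanged. */
static void apply_idempotent(int id, int value) {
    if (delivered[id])
        return;                  /* duplicate delivery: no-op */
    delivered[id] = true;
    store = value;
}
```

This is the property that lets eventual delivery (at least once) combine with a CmRDT register to yield strong eventual consistency.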
The main property we prove about the hinted-handoff protocol is called \emph{eventual delivery}.
This property says that every successful write eventually gets delivered to every replica at least once
(under assumptions on the kinds of possible failures and on replicas being eventually alive, etc.).
Hence, instead of eventual consistency, we argue eventual delivery, which in fact
is the precise function of these protocols, as they are agnostic of the conflict
resolution mechanism that is actually implemented in the system.
Furthermore, assuming that each replica actually implements an idempotent operation-based CRDT register,
and update procedures for these datatypes are terminating,
eventual delivery ensures eventual consistency, and in fact \emph{strong eventual consistency}~\cite{Shapiro11}.
\XSpace{(the proof in~\cite{Shapiro11} proves that when there
are reliable broadcast channels that ensure messages are delivered exactly once, CRDTs give
strong eventual consistency; this proof is readily adapted to show that when messages
are delivered at least once, idempotent CmRDTs give strong eventual consistency)}
Strong eventual consistency guarantees not only eventual consistency, but also that the store always
contains a value that is a function of the set of updates it has received, independent of the order in which they were received.
\XSpace{Our first result is that a system running only hinted-handoff-based repair provides eventual delivery of updates to all replicas, provided there are only transient faults.}
\begin{theorem}
The hinted-handoff protocol ensures eventual delivery of updates to all replicas, provided there are only transient faults. More precisely, if there is any successful write, then assuming that all replicas recover at some point, and read and write requests stop coming at some point, the write will eventually get propagated to every replica.
\label{result1}
\end{theorem}
We formally prove the above result (and Theorem~\ref{result2} below) for arbitrary system configurations using program verification techniques on the program model; see Section~\ref{sec:model} and Section~\ref{sec:verification} for details. The following is an immediate corollary from the properties of eventual delivery and idempotent CRDTs:
\begin{corollary}
A system following the hinted-handoff protocol, where
each replica runs an operation-based idempotent CRDT mechanism that has terminating updates, is strong\-ly eventually consistent, provided there are only transient faults.\XSpace{\footnote{
The corollary may lead us to think that we can use any
operation-based CmRDT for counters at stores to obtain strong
eventually consistent counters in the presence of transient failures.
However, CmRDTs for counters are in fact \emph{not idempotent} (and
the CmRDT counters in~\cite{Shapiro11} assume that the system will
deliver messages precisely once, which hinted handoff cannot guarantee).}}
\label{corollary1}
\end{corollary}
\subsection{Properties of the Read-repair Protocol}
\label{sec:readrepair}
Our second result concerns the read-repair protocol. Read-repair is expected to be resilient to memory-crash failures, but it guarantees eventual consistency on a key only if reads continue to be issued to the key. Again, we abstract away from the conflict resolution mechanism,
and we assume that the coordinator, when doing a read and getting different replies from replicas, propagates
\emph{some} consistent value back to all the replicas.
This also allows our result to accommodate anti-entropy mechanisms~\cite{Demers:1987:EAR:41840.41841} that are used instead of read-repair, in a reactive manner after a read. Note that this result holds irrespective of the hinted-handoff protocol being enabled or disabled.
It is commonly believed\footnote{\url{http://wiki.apache.org/cassandra/ReadRepair}} that when a read happens, the read repair protocol will repair the live nodes at the time of the read (assuming they stay alive), bringing them to a common state.
We modeled the read-repair protocol and tried to prove this property, but we failed to come up with appropriate
invariants that would ensure the property. This led us to conclude that the property is not always true.
To see why, consider the timeline in Figure~\ref{fig:msc1}. In this scenario, the client issues a put request with the value $2$, which is routed by the coordinator to all three replicas---$A$, $B$, and $C$ (via messages $w_{A}(2), w_{B}(2)$, and $w_{C}(2)$). The replica $C$ successfully updates its local store with this value. Consider the case when the write consistency is one and the put operation succeeds (in spite of the message $w_{B}(2)$ being lost and the message $w_{A}(2)$ being delayed). Now assume that the replica $C$ crashes, and the last write (with value $2$) is in \emph{none} of the alive replicas---$A$ and $B$. If we consider the case where $B$ has the latest write (with value $1$) among these two live nodes, a subsequent read-repair would write the value $1$ read from $B$ to $A$'s store (via message $rrw_{A}(1)$ in Figure~\ref{fig:msc1}). But before this write reaches $A$, $A$ could get a pending message from the network ($w_{A}(2)$) and update its value to a more recent value---$2$. In this situation, after replica $A$ has updated its value to $2$, the two alive replicas ($A$ and $B$) do not have consistent values. Due to the lack of hints or processes with hints having crashed, $B$ may never receive the later write (message $w_{B}(2)$).
\vspace{-0.5cm}
\begin{figure}
\begin{center}
\includegraphics[scale=0.2]{timeline1.pdf}
\caption{A timeline showing that a single read-repair operation does not guarantee convergence of live replicas. In the figure, $w_{r}$ is a write message to replica $r$, $rd_{r}$ is a message from replica $r$ to the coordinator on the read path, and $rrw_{r}$ is the read-repair message to replica $r$. Time in the figure advances from top to bottom. The messages along the read(-repair) path are shown as dotted lines and along the write path as solid lines.}
\label{fig:msc1}
\end{center}
\vspace{-0.9cm}
\end{figure}
Based on these insights, we prove a more involved property of read-repair:
\begin{theorem}
After any sequence of reads and writes, if all operations stop\XSpace{\footnote{The assumption
that updates stop coming is part of the original definition of eventual consistency~\cite{bayou}. There are other formalizations without this assumption~\cite{Kaki15}; however, the read-repair protocol does \emph{not} satisfy eventual consistency without it.}}
except for an infinite sequence of reads of a key, then assuming the set $R$ of replicas is alive at the time of the first such read
and thereafter, the replicas in $R$ will eventually converge to the same value.
\label{result2}
\end{theorem}
We prove the above result, again using program verification on the program model. Intuitively, as long as an unbounded number of reads of the key happen, the system will ensure that the subset of live replicas
responsible for the key eventually converge to the same value. A read-repair may not bring the live replicas into sync
if there are pending messages in the system. However, there is only a finite amount of \emph{lag} in the system
(pending messages, hints, etc.), and once the system is given enough time to finish this pending work, a read-repair will
succeed in synchronizing these replicas.
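This intuition can be replayed in a toy sketch (a simplification of the Figure~\ref{fig:msc1} scenario; the $(value, timestamp)$ replica states and the single pending message are illustrative assumptions):

```python
# Toy model of two live replicas and one delayed write, as in Fig. 1.
# A replica state is a (value, timestamp) pair; a read-repair installs
# the state with the highest timestamp on both live replicas.
A = (1, 1)
B = (1, 1)
pending = [("A", (2, 2))]  # delayed write w_A(2), still in the network

def read_repair(a, b):
    latest = max(a, b, key=lambda s: s[1])
    return latest, latest

A, B = read_repair(A, B)       # repair runs before the delayed write lands
_target, msg = pending.pop()   # delayed w_A(2) finally reaches A
if msg[1] > A[1]:
    A = msg
diverged = (A != B)            # live replicas disagree again
A, B = read_repair(A, B)       # a later read, after the lag has drained
converged = (A == B == (2, 2))
```

A repair that runs before the lag drains leaves the live replicas divergent; once no message is pending, the next read-repair synchronizes them, as Theorem~\ref{result2} asserts.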
\XSpace{It is tempting to think that one could implement any CRDT and reach eventual consistency of the CRDT store using solely read-repair,
similar to the Corollary we obtained for Theorem~\ref{result1}. However, this is tricky when clients send operations to do on the
CRDT and the conflict-resolution in read-repair happens using state-based merges.
For instance, assume that we implement a counter CRDT, where state-merges take the maximum of the counters, and operations increment the counter~\cite{Shapiro11}. Then we could
have the following scenario: there are $7$ increments given by
clients, and the counter at replica $A$ has the value $7$ and replica $B$ has $5$ (with two increments yet to
reach $B$), and where a read-repair merges the values at these replicas to $7$, after which the two pending increments arrive at $B$, incrementing it to $9$ (followed by another read-repair where $B$ also gets updated to $9$).
Note that consistency is achieved (respecting our Theorem~\ref{result2}), but the counter stores the wrong value.
Systems such as Riak implement CRDTs~\cite{riak-crdts}
using these underlying protocols by \emph{not} propagating operations (like increments) across replicas, but rather incrementing one replica and passing the \emph{state} to the other replicas, and hence implement a purely state-based CRDT~\cite{RiakCRDT}.}
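The counter scenario can be replayed in a few lines (a minimal sketch; the replica variables and the max-based state merge follow the text):

```python
# Max-merge read-repair applied to op-based counter increments loses count.
A, B = 0, 0
for _ in range(7):      # all 7 client increments reach replica A
    A += 1
for _ in range(5):      # only 5 reach B; 2 are still in flight
    B += 1
A = B = max(A, B)       # read-repair: state-based merge, both become 7
B += 2                  # the two delayed increments now arrive at B
A = B = max(A, B)       # second read-repair, both become 9
```

The replicas converge, consistent with Theorem~\ref{result2}, yet the converged value is $9$ rather than the correct count of $7$.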
\begin{abstract}
The efficiency and operating range of a photonic crystal laser is improved by passivating the InGaAs quantum well (QW) gain medium and GaAs membrane using an (NH$_4$)S treatment. The passivated laser shows a four-fold reduction in nonradiative surface recombination rate, resulting in a four-fold reduction in lasing threshold. A three-level carrier dynamics model explains the results and shows that lasing threshold is as much determined by surface recombination losses as by the cavity quality factor ($Q$). Surface passivation therefore appears crucial in operating such lasers under practical conditions.
\end{abstract}
\maketitle
\clearpage
Photonic crystals (PCs) allow unprecedented control over the radiative properties of integrated emitters. By defining small mode-volume, high-quality factor ($Q$) cavities in PCs, enhanced light-matter interaction becomes possible. This property has opened possibilities in fields including cavity quantum electrodynamics, detection, and light sources. Lasers in particular stand to gain through dramatically improved lasing threshold, modulation rate, cost, and large-scale device integration. From their first demonstration\cite{Painter99science}, PC lasers have most commonly relied on QWs for optical gain. However, QWs limit PC laser performance in many material systems because of large nonradiative (NR) surface recombination. This problem is particularly damaging in PC structures where embedded QWs expose a large surface area. Here we address the NR recombination problem by surface passivation. We show that (NH$_4$)S-mediated surface passivation of PC laser structures lowers the NR recombination rate by more than $4\times$ and leads to $4\times$ reduction of lasing threshold. The increased efficiency extends the operating range from cryogenic to practical regimes, enabling room-temperature lasing at THz-modulation rates, as described in \cite{Englund2007APL_2}. A three-level rate equations model fits our experimental data well and suggests that surface passivation is crucial for PC lasers in InGaAs/GaAs and other material systems with fast NR surface recombination.
The PC nanocavity lasers consist of 172 nm-thick GaAs slabs patterned with $9\times9$ arrays of single-hole cavities defined in a square-lattice PC, similar to those described in Ref.\cite{Altug2006Nature}. A central stack of four 8-nm In$_{0.2}$Ga$_{0.8}$As QWs, spaced by 8-nm GaAs barriers, forms the gain medium.
The samples were passivated using a solution of 7\% (NH$_4$)S in water. The treatment removes contamination and oxides from the GaAs and In$_{0.2}$Ga$_{0.8}$As surfaces and caps the fresh surface with sulfur atoms \cite{Oigawa1991JJAP}. Samples were first cleaned in Leksol, acetone, and ethanol, then submerged in the (NH$_4$)S solution for 15 minutes at 35$^\circ$C, and finally air-dried, following Ref.\cite{Petrovykh2002SS}. We measured the radiative and NR properties, as well as the lasing characteristics, before and after surface passivation.
Before presenting the experimental results, we describe the carrier dynamics at low temperature ($\sim10$K) using a three-level rate model. Letting $N_E$ represent the pump level carrier concentration (populated above the GaAs-bandgap using a laser with power $L_{in}$), $N_G$ the QW lasing level carrier concentration (resonant with the cavity frequency), and $P$ the coupled cavity photon density, we have\footnote{$V_a$: pump active volume; $\omega_p$: cavity angular frequency; $\tau_p=Q/\omega_p$: cavity ring-down time; $G(N)$: gain; $\Gamma\approx 0.16$: gain confinement factor for cavity mode with $4\times$ 8nm QWs; $\eta$: pump energy absorption ratio; $\tau_r$: SE lifetime in unpatterned QW; $\tau_{PC,nr}$: NR lifetime in PC; $\tau_{E,f},\tau_{E,r},\tau_{E,nr}$: lifetimes of pump-level relaxation, SE, and NR transitions.}
\begin{eqnarray}
\label{eq:laser_rate}
\D{P}{t} &=& \Gamma G(N_G) P + \fr{F_{cav} N_G}{\tau_{r}} - \fr{P}{\tau_p} \\ \nonumber
\D{N_G}{t} &=& \fr{N_E}{\tau_{E,f}} - N_G \left( \fr{F_{cav} +F_{PC}}{\tau_r}+\fr{1}{\tau_{PC,nr}} \right) - \Gamma G(N_G) P \\ \nonumber
\D{N_E}{t} &=& \eta \fr{\msc{L}{in}}{\hbar \omega_p V_a} - N_E \left( \fr{1}{\tau_{E,r}}+\fr{1}{\tau_{E,nr}}+\fr{1}{\tau_{E,f}} \right)
\end{eqnarray}
In the center equation, the total lasing-level decay rate $d N_G/dt$ is separated into cavity decay, PC leaky-mode decay, and NR loss: $1/\tau_{G} = (F_{cav} +F_{PC})/\tau_r + 1/\tau_{PC,nr}$. Here, $F_{PC}\approx 0.2$ expresses the spontaneous emission (SE) rate quenching inside the PC bandgap compared to the SE rate $1/\tau_r$ in the bulk QW (following simulations in \cite{Englund05PRL}), while $F_{cav}$ denotes the SE rate enhancement into the lasing mode.
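A minimal numerical sketch of Eqs.~\ref{eq:laser_rate} illustrates the soft threshold behavior. The lifetimes and factors below are taken from the text; the linear gain form $G(N_G)=g_0(N_G/N_{tr}-1)$, the gain slope, and the cavity ring-down time $\tau_p\approx 1.4$\,ps are assumptions for illustration, with carrier densities normalized to $N_{tr}$:

```python
# Forward-Euler integration of the three-level rate model (normalized units):
# lifetimes in ps, N_G and N_E in units of N_tr, P in photons per mode volume.
GAMMA, G0 = 0.16, 50.0         # confinement factor; ASSUMED gain slope (1/ps)
F_CAV, F_PC = 33.0, 0.2        # SE enhancement into cavity / PC quenching
TAU_R, TAU_NR = 605.0, 149.0   # radiative / nonradiative lifetimes (ps)
TAU_P, TAU_EF = 1.4, 6.0       # ASSUMED cavity ring-down; pump relaxation (ps)

def steady_state_P(pump, T=500.0, dt=0.01):
    """Integrate dP/dt, dN_G/dt, dN_E/dt with linear gain G0*(N_G - 1)."""
    P = NG = NE = 0.0
    for _ in range(int(T / dt)):
        gain = GAMMA * G0 * (NG - 1.0)
        dP = gain * P + F_CAV * NG / TAU_R - P / TAU_P
        dNG = (NE / TAU_EF
               - NG * ((F_CAV + F_PC) / TAU_R + 1.0 / TAU_NR)
               - gain * P)
        dNE = pump - NE / TAU_EF
        P, NG, NE = P + dt * dP, NG + dt * dNG, NE + dt * dNE
    return P

loss = (F_CAV + F_PC) / TAU_R + 1.0 / TAU_NR         # lasing-level loss rate
pump_th = (1.0 + 1.0 / (GAMMA * G0 * TAU_P)) * loss  # rough threshold pump
P_below = steady_state_P(0.2 * pump_th)
P_above = steady_state_P(3.0 * pump_th)
```

Pumping $3\times$ above the estimated threshold yields a steady-state photon number more than an order of magnitude larger than at $0.2\times$ threshold; the soft transition (rather than a sharp kink) reflects the large spontaneous-emission coupling $F_{cav}$.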
\begin{figure}
\includegraphics[width=3in]{Fig1.jpg}
\caption{{\footnotesize Low-temperature photoluminescence measurements on unpatterned and PC regions. (a) PL from the bulk sample (after passivation). (b) Expanded view of PL from untreated PC region shows short lifetime $\tau_{PC} \approx 33.8$\unit{ps}; data is fitted by the rate model of Eqs.\ref{eq:laser_rate}. (c) PL measurements for the untreated (blue) and passivated (red) samples, from the PC and unpatterned regions.}}
\label{fig:Fig2}
\end{figure}
We estimate the lifetime constants in Eqs. \ref{eq:laser_rate} from time-resolved photoluminescence (PL) recorded with a streak camera (Hamamatsu N5716-03) in a confocal microscope setup \cite{2007.OpEx.Englund}. The measurements are performed separately on PC mirrors and bulk regions with 3.5 ps-long excitation pulses at 780\unit{nm} wavelength and 82 MHz repetition rate (Fig.\ref{fig:Fig2}(b,c)). Samples were cooled to 10 K in a liquid-helium continuous-flow cryostat so that both unpassivated and passivated samples could be brought into lasing for comparison. From a fit of Eqs.\ref{eq:laser_rate} to the rise-time of PL from the untreated sample, shown in Fig.\ref{fig:Fig2}(b), we estimate the relaxation time from the pump level into the lasing level at $\tau_{E,f}\sim$6\unit{ps}. The passivated sample also gives $\tau_{E,f}\sim$6\unit{ps}.
Fig.\ref{fig:Fig2}(c) shows the reduction in NR surface recombination after passivation: the PL decay lifetime from the PC mirror region is extended to $\tau_{PC}\sim142$\unit{ps} from $\tau_{PC}\sim 33.8$\unit{ps} before passivation, while the decay lifetime from the bulk QW remains nearly unchanged at $\tau_{bulk}\sim 571-614$\unit{ps} at 10$\mu$W pump power. These data are analyzed using the bottom two equations of Eqs. \ref{eq:laser_rate} applied to the PC and bulk regions, i.e., $1/\tau_{i} =1/\tau_{i,nr}+F_{i}/\tau_r$ with $i$ denoting bulk or PC ($F_{bulk}=1$). Assuming $\tau_{nr,bulk}\gg \tau_r$, the lifetime data then let us estimate the unpatterned bulk SE lifetime $\tau_{r}\approx$ 654 (605)\unit{ps} and the NR lifetime $\tau_{PC,nr} \approx$ 35.5 (149)\unit{ps} in the PC mirrors before (after) passivation. We assume an equal NR lifetime across the cavity and the surrounding PC mirrors, since the diffusion length of the rate-limiting holes is $\sim 3\mu$m, greatly exceeding the cavity size.
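The decomposition $1/\tau_{i} = F_i/\tau_r + 1/\tau_{i,nr}$ can be checked against the quoted lifetimes (a sketch using the values in the text; small differences reflect rounding of the inputs):

```python
# PC nonradiative lifetime from the measured decay time via
# 1/tau_PC = F_PC/tau_r + 1/tau_PC_nr, with F_PC = 0.2.
F_PC = 0.2

def tau_pc_nr(tau_pc, tau_r):
    return 1.0 / (1.0 / tau_pc - F_PC / tau_r)

tau_nr_before = tau_pc_nr(33.8, 654.0)   # ps, untreated sample
tau_nr_after = tau_pc_nr(142.0, 605.0)   # ps, passivated sample
improvement = tau_nr_after / tau_nr_before
```

This reproduces $\tau_{PC,nr}\approx 149$\unit{ps} after passivation and $\approx 34$\unit{ps} before (the text quotes 35.5\unit{ps}), i.e. a more-than-four-fold reduction of the NR rate.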
To put this reduction in NR loss rate into perspective and compare it to reports on other types of structures, we extract the surface recombination velocity $S$ that describes the recombination at the QW surface. From the lifetime data in Fig.\ref{fig:Fig2}(c), it is clear that most NR recombination results from the PC holes. The effect of passivation is therefore to reduce $S$, and a simple model allows us to quantify by how much (the pump power is small enough to neglect Auger recombination). The diffusion and recombination of the QW carrier concentration $N_G$, uncoupled to the PC cavities, are described by the equation (following Ref.~\cite{Hayes1988APL})
\begin{equation}
\label{eq:diffusion}
\Dp{N_G}{t}=D \nabla^{2} N_G - N_G\fr{F_{PC}}{\tau_r},
\end{equation}
where $D$ is the ambipolar diffusion coefficient. Surface recombination enters through the boundary condition $D \Dp{N_G}{r} + S N_G=0$. Assuming an isotropic minority-carrier density over the PC period $a=315$\unit{nm}, the total recombination rate of the PC depends only on the exposed QW surface area. This area is unchanged if the PC is replaced with an array of mesas whose radius equals the PC hole radius $r$. Eq.\ref{eq:diffusion} is then easily solved in cylindrical coordinates\cite{Hayes1988APL}, giving the total recombination rate $1/\tau_{PC}=F_{PC}/ \tau_{r}+1/\tau_{PC,nr}=F_{PC}/\tau_{r}+2 S/r$, i.e., $\tau_{PC,nr}=r/2S$. We then find that $S\approx 1.7\cdot 10^{5}$\unit{cm/s} ($4.0 \cdot 10^{4} $\unit{cm/s}) for the original (passivated) structure. This value of the surface recombination velocity is somewhat lower than previous room-temperature measurements on similar InGaAs/GaAs structures \cite{Wenzel2004SST,Hu1994JAP}, which put it between $\sim 1\cdot 10^{5}$ and $5\cdot 10^{6}$\unit{cm/s}. This is expected, since $S \propto v_{th}\approx \sqrt{3k T/m^{*}}$, the thermal velocity, which is $\sim 6\times$ smaller at 10K \cite{Sze1981}. Our observation of a four-fold lowering of $S$ with surface passivation is similar to other reports with (NH$_4$)S \cite{Boroditsky2000JAP}. However, better passivation results could probably be achieved with (NH$_4$)S$_x$, $x>1$, for which up to a 50$\times$ improvement was reported \cite{Wenzel2004SST}.
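The conversion $\tau_{PC,nr}=r/2S$ can be sketched numerically. The hole radius $r$ is not stated in the text; $r\approx120$\unit{nm} is an assumed, typical value for this lattice period, chosen because it reproduces both quoted velocities:

```python
# Surface recombination velocity from tau_PC,nr = r/(2S)  =>  S = r/(2*tau).
# NOTE: the hole radius is not stated in the text; r = 120 nm is an assumed,
# typical value for a square-lattice PC with period a = 315 nm.
R_HOLE_CM = 120e-7                  # hole radius in cm

def surface_velocity(tau_nr_ps):
    """Return S in cm/s for a nonradiative lifetime given in ps."""
    return R_HOLE_CM / (2.0 * tau_nr_ps * 1e-12)

S_before = surface_velocity(35.5)   # untreated:  ~1.7e5 cm/s
S_after = surface_velocity(149.0)   # passivated: ~4.0e4 cm/s
```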
With this understanding of the carrier dynamics in the PC, we now consider the coupled cavity array laser. Microscope images show that only 7-9 cavities simultaneously lase in a single mode as fabrication inaccuracies lifted the cavity array's resonance degeneracies. Fig.\ref{fig:Fig4}(c) shows that the passivation treatment slightly blue-shifts the cavity resonance and raises $Q$ by $\sim 1.5\times$ by cleaning and thinning the membrane, as observed in digital cavity etching \cite{Hennessy2005APL}. The figure also shows the passivated structure when pumped $2\times$ above threshold; $Q$ is then raised to $2670$ due to gain. We estimate the average SE enhancement factor $F_{cav}$ of emission coupled to the PC cavities from a lifetime measurement of the non-lasing cavity measured at $\sim 1/2$ lasing threshold, giving $\tau_{\mbox{\small{cav}}}\approx 17$\unit{ps}. The relation for the cavity-coupled SE rate,
\begin{equation}
\fr{1}{\tau_{\mbox{\small{cav}}}} = \fr{F_{cav}+F_{PC}}{\tau_{r}} + \fr{1}{\tau_{PC,nr}},
\end{equation}
gives $F_{cav}\approx 33$.
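Solving the cavity-coupled SE relation for $F_{cav}$ with the passivated-sample lifetimes gives a value close to the quoted one (a sketch; the exact number depends on which $\tau_r$, $\tau_{PC,nr}$ pair is inserted):

```python
# Average SE enhancement factor from the cavity-coupled SE relation,
# 1/tau_cav = (F_cav + F_PC)/tau_r + 1/tau_PC_nr (passivated-sample values).
F_PC = 0.2
TAU_CAV, TAU_R, TAU_NR = 17.0, 605.0, 149.0   # ps
F_cav = TAU_R * (1.0 / TAU_CAV - 1.0 / TAU_NR) - F_PC
```

With these inputs the relation gives $F_{cav}\approx 31$, close to the quoted $\approx 33$; the residual difference is within the spread of the extracted $\tau_r$ and $\tau_{PC,nr}$ values.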
Fig.\ref{fig:Fig4}(a) shows the lasing curves for the original and passivated structures and indicates a four-fold reduction in threshold. This reduction in the pump power $\msc{L}{in}$ directly follows from Eqs. \ref{eq:laser_rate}: for threshold, we solve Eqs. \ref{eq:laser_rate} in steady-state with $P V_{mode}=1$ (an average of one photon inside the resonant mode) and $N_G\rightarrow N_{tr}$, the transparency carrier concentration where QW gain cancels absorption. Neglecting the slow pump-level radiative recombination $\tau_{E,r}$, this gives \vspace{6pt} \\
$L_{in,th}=$
\begin{eqnarray}
\label{eq:laser_threshold}
\fr{\hbar \omega_p}{\tau_p \eta} \fr{V_a}{V_{mode}} \left[ N_{tr} V_{mode} \left(F_{PC} \fr{ \tau_p}{\tau_r}+\fr{\tau_p}{\tau_{nr}} \right) + 1 \right]\left(1+\fr{\tau_{E,f}}{\tau_{E,nr}}\right)
\end{eqnarray}
For typical parameters, $N_{tr}\approx 10^{18}$\unit{cm$^{-3}$} \cite{1995Coldren} and $V_{mode} \approx 6 (\lambda/n)^{3}$, the first term in the brackets dominates. Within this term, the nonradiative part $\tau_p/\tau_{nr}$ dominates the radiative one $F_{PC} \tau_p/\tau_r$. Thus, in PC lasers using InGaAs QWs, or other gain media with similar surface recombination velocity, the threshold is largely determined by NR recombination losses at the QW and GaAs membrane surfaces. After passivation, Eq.\ref{eq:laser_threshold} predicts a threshold reduction by a factor of 4.1 from the original value if the NR pump-level loss rate $1/\tau_{E,nr}$ is assumed much smaller than the relaxation rate $1/\tau_{E,f}$ into the lasing level (otherwise the predicted reduction is even larger). We measured a decrease by a factor of $3.7$, which shows good agreement with the predicted value. The differential quantum efficiency, on the other hand, is nearly unaffected by the NR recombination rate, as can easily be derived from the rate equations (the physical reason is that once lasing begins, the stimulated emission rate is much faster than the NR loss rate).
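The predicted threshold reduction follows from the dominant bracketed term of Eq.\ref{eq:laser_threshold} (a sketch neglecting the ``$+1$'' term and the change in $\tau_p$, which together account for the small difference from the quoted factor 4.1):

```python
# Dominant-term estimate of the threshold ratio before/after passivation:
# L_th is proportional to F_PC/tau_r + 1/tau_nr once the "+1" term and the
# change in tau_p are neglected.
F_PC = 0.2

def lasing_level_loss(tau_r, tau_nr):
    return F_PC / tau_r + 1.0 / tau_nr   # per ps

reduction = lasing_level_loss(654.0, 35.5) / lasing_level_loss(605.0, 149.0)
```

This gives a reduction factor of about 4.0, consistent with the quoted prediction of 4.1 (which retains the neglected terms) and with the measured factor of 3.7.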
One of the most remarkable aspects of the PC nanocavity laser is the extremely fast modulation rate. In Fig.\ref{fig:Fig4}(b), we present streak camera measurements of the lasing response to 3.5-ps-long pump pulses. The low-temperature measurements for the passivated and unpassivated samples were obtained at the same average pump power of $\sim$28\unit{$\mu$W} (3.5\unit{ps}, 13 ns repetition), and the normalized lasing response is compared in the red and blue plots. After passivation, the laser responds somewhat faster, with an exponential decay time of $6.1$\unit{ps}. We attribute this speed-up largely to the relatively higher pump power above threshold (due to lower NR loss and higher cavity $Q$). Faster time response is possible at higher pump power, as noted in \cite{Altug2006Nature}. The rate model of Eqs.\ref{eq:laser_rate} explains these time-response measurements well, as shown in the continuous-line fits.
\begin{figure}
\includegraphics[width=3in]{Fig2new.jpg}
\caption{{\footnotesize Cavity resonances below and above lasing threshold. (a) Lasing curves for unpassivated and passivated structures at low temperature (10K) with pulsed excitation (3.5\unit{ps}, 13\unit{ns}-rep.). Passivation reduces threshold from 24\unit{$\mu$W} to 6 \unit{$\mu$W} averaged power (measured before an objective lens focusing to a $\sim 3\mu$m-radius-spot). (b) Laser time response for untreated (blue) and treated (red) samples at 10K; Eqs.\ref{eq:laser_rate} fit the data well. The treated laser shows an exponential decay time of 6.1\unit{ps} (thick fit). Some deviations at longer times are caused by background PL from regions not coupled to the resonant mode. (c) Cavity resonances below and above lasing. Passivation lowers the resonance wavelength and slightly increases $Q$, as seen in the untreated (blue) and treated (red) cavities spectra at $1/2$ threshold pump power. Top spectrum (red): lasing of passivated structure, pumped $2\times$ above threshold. }}
\label{fig:Fig4}
\end{figure}
In conclusion, we have demonstrated the threshold-lowering effect of a surface-passivation treatment of InGaAs QWs in a PC coupled nanocavity array laser. The 4-fold reduction of NR surface recombination lowers the threshold pump power to 27\% of its original value. Our three-level laser model agrees well with the experimental observations and shows that NR recombination strongly affects lasing when the NR loss rate is faster than the modified SE rate in the PC. In this regime, reducing the NR surface recombination rate lowers the lasing threshold as much as lowering the cavity loss rate $1/\tau_p$ would, but has the advantage of not slowing the lasing modulation rate. Using a carrier diffusion model, we calculate a drop in the QW surface recombination velocity from $S\approx 1.7\cdot 10^{5}$\unit{cm/s} to $4.0 \cdot 10^{4} $\unit{cm/s} after passivation; comparing this to the literature, we believe that our results could be improved by applying better surface passivation techniques \cite{Wenzel2004SST,Petrovykh2005}. The increased efficiency achieved in our lasers alleviates heating problems, which opens the door to room-temperature and CW operation \cite{Englund2007APL_2} and brings PC lasers closer to practical applications.
The authors thank Dr. D.Y. Petrovykh for his helpful comments. This work was supported by the MARCO IFC Center, NSF Grants ECS-0424080 and ECS-0421483, the MURI Center (ARO/DTO Program No.DAAD19-03-1-0199), as well as the NDSEG Fellowship (D.E.).
\section{Introduction}
Magnetoelectric multiferroic materials which exhibit simultaneously ferroelectric and
magnetic order are promising for new generation of random access memories (RAM), where
the information can be written by electric field and read non-destructively by magnetic
sensing. Such memories avoid the weak points of the ferroelectric RAMs (destructive
reading causes fatigue) as well as of magnetic RAMs (high electric current is needed for
overwriting, which rules out high integration of magnetic RAMs). Unfortunately, there are
not many magnetoelectric multiferroics known up to now and only a few of them have both
magnetic and ferroelectric critical temperatures above room temperature. Therefore there
is nowadays an intensive search for magnetoelectric multiferroic materials with high
magnetization and spontaneous polarization above room
temperature.\cite{fiebig05,cheong07}
Baettig and Spaldin predicted from \textit{ab initio} calculations that the chemically
ordered double perovskite Bi$_{2}$FeCrO$_{6}$ (BFCO) will have, at zero temperature, a
polarization of $\sim$80\,$\mu$C/cm$^{2}$ and a magnetization of $\sim$160 emu/cm$^{3}$
(2 $\mu_{B}$ per formula unit).\cite{baettig05a} Such properties far exceed the
properties of any known multiferroic. Nechache \textit{et al.} for the first time
experimentally prepared epitaxial thin film of BFCO which exhibited at room temperature
(RT) a polarization of 2.8\,$\mu$C/cm$^{2}$ and a saturated magnetization of
0.26$\mu_{B}$ per unit cell.\cite{nechache06,nechache07} Recently Kim \textit{et al.}\cite{kim07}
reported a remanent polarization of 60\,$\mu$C/cm$^{2}$ at 77\ensuremath{\,\mbox{K}}\, for their
BiFe$_{0.5}$Cr$_{0.5}$O$_{3}$ solid-solution epitaxial films, and Alexe even measured
70-80\,$\mu$C/cm$^{2}$ at RT on the films grown by Nechache \textit{et al.}\cite{alexe07} The
magnetic T$_{N}$\, and ferroelectric T$_{c}$\, phase transition temperatures in ordered BFCO are
not known up to now. Baettig \textit{et al.}\cite{baettig05b} predicted from first
principles a N\'{e}el temperature T$_{N}$\, near 100\ensuremath{\,\mbox{K}}, which was not confirmed in the
experiments performed by Nechache \textit{et al.}, who observed magnetic order at RT.
Very recently Suchomel \textit{et al.}\cite{suchomel07} prepared BFCO ceramics and
observed a magnetic phase transition below 130\ensuremath{\,\mbox{K}}, but it is worth noting that their
ceramics exhibited chemical disorder of the Fe$^{3+}$ and Cr$^{3+}$ cations on the
perovskite B site, which reduces T$_{N}$. Thus it is not excluded that T$_{N}$\, can be higher in
ordered samples, as is the case for the ordered BFCO films reported by Nechache
\textit{et al.}\cite{nechache07b}
Determination of the ferroelectric phase transition temperature is not possible from
low-frequency dielectric measurements because the DC conductivity of the BFCO film is
too high. The extrinsic leakage conductivity does not play an appreciable role in the THz
dielectric response of the film; therefore high-frequency dielectric studies are
advantageous. For this purpose we performed infrared (IR) measurements, including
investigation in the difficult-to-access far-infrared (FIR) range below 200\ensuremath{\,\mbox{cm}^{-1}}, which can give
information about the complete phonon contributions to the static permittivity (note that
in the case of displacive ferroelectrics only polar phonons are responsible for the
dielectric anomaly near T$_{c}$). Moreover, IR spectra usually change at the
ferroelectric (structural) phase transition temperature due to the change of selection
rules for IR active polar phonons. Therefore the IR spectra (including FIR) of the BFCO
film can help to estimate its T$_{c}$\, as well as the symmetry of the high-temperature
phase.
IR studies of ferroelectric thin films are rather rare in the literature and up to now
almost only FIR transmission spectra of the films deposited on FIR-transparent substrates
like Si, sapphire or MgO were investigated. FIR transmission can give results only in a
limited frequency range determined by the transparency window of the substrates, which is
mostly very narrow, particularly at high temperatures (e.g. sapphire is partially
transparent only below 150\ensuremath{\,\mbox{cm}^{-1}}\, at 900\ensuremath{\,\mbox{K}}).\cite{kamba05} IR reflectance can yield results
in a much broader spectral range, but its sensitivity is limited a) by the thickness of
the film, b) by the strengths of polar phonons in the IR spectra and c) by the IR
properties of the substrate. Our experience shows that the substrates with buffer
electrodes are not suitable due to the negative permittivity of the buffer layers, which
reduces the sensitivity of the method. Therefore, dielectric substrates, which do not
show any strongly temperature-dependent IR reflectivity spectra, are the most suitable
for reflectance studies of thin films. Nevertheless, IR reflectance spectra of the thin
films deposited on the substrate are strongly influenced by the substrate, since the thin
films are partially transparent for the IR wavelength. Therefore both IR spectra of the
bare substrate and of the film on the substrate should be measured at the same
temperatures and the film properties are evaluated from the spectra fits to such a
multilayer system. This method was used only twice in the literature for room or
low-temperature IR studies of SrTiO$_{3}$ films.\cite{almeida06,yamada06} Here, we will
use this method for the first time above room temperature and up to 900\ensuremath{\,\mbox{K}}.
In this paper we shall show that the static permittivity of the BFCO thin film,
determined from the polar phonon contributions, increases monotonically on heating to
900\ensuremath{\,\mbox{K}}, due to the phonon softening. Some phonon anomalies, probably connected with a
magnetic phase transition, were observed near 600\ensuremath{\,\mbox{K}}, but no dramatic changes, such as
those usually related to a ferroelectric phase transition, were observed. Therefore, it
seems that the phase transition to the paraelectric phase in BFCO thin film occurs above
the highest investigated temperature of 900\ensuremath{\,\mbox{K}}. We shall also report on our study of the
magnetic properties of the BFCO film up to 1000\ensuremath{\,\mbox{K}}\, and we shall show that the magnetic
phase transition occurs between 600 and 800\ensuremath{\,\mbox{K}}.
\section{Experimental}
BFCO films were grown directly on (100)-oriented SrTiO$_{3}$ substrates doped with 0.5
wt\% of Nb (abbreviated STO:Nb) as well as on a (100)-oriented LaAlO$_{3}$ substrate,
better suited for IR measurements. An epitaxial 210 nm thick film deposited on the
conducting STO:Nb substrate was used for M\"{o}ssbauer spectroscopy and vibrating sample
magnetometry studies. XRD and TEM data have shown that the thin film is epitaxial. Weak
superlattice spots showed evidence of partial chemical order of the Fe and Cr
cations.\cite{nechache07b}
The SrTiO$_{3}$ substrate exhibits strongly temperature-dependent FIR spectra due to
the presence of an optic soft mode, which leads to large inaccuracies in the evaluation of the
FIR properties of the thin film. We therefore studied FIR spectra of the BFCO film
deposited on non-conducting LaAlO$_{3}$ substrates of size $5\times10\times0.5$\,mm. Since
polar phonons are rather weak in BFCO, we investigated a 600\,nm thick film. The relaxed
film on LaAlO$_{3}$ substrate (orientation (001)) was epitaxial with orientation (100),
but it revealed only very weak superlattice spots, so its chemical order in the B site
was only partial. XRD analysis of the film on LaAlO$_{3}$ revealed a slight Bi-deficiency
in the BFCO phase as well as about 5\% of Cr-doped Bi$_{2}$O$_{3}$ secondary phase, which
were not observed for the films on SrTiO$_{3}$.
The unpolarized FIR and IR reflectance spectra were taken using a Bruker IFS 113v FTIR
spectrometer at temperatures between 10 and 900\ensuremath{\,\mbox{K}}\, with the resolution of 2\ensuremath{\,\mbox{cm}^{-1}}. An
optistat CF cryostat from Oxford Instruments equipped with polyethylene windows was used
for cooling the sample down to 10\ensuremath{\,\mbox{K}}, while a commercial high temperature cell SPECAC P/N
5850 was used for heating it up to 900\ensuremath{\,\mbox{K}}. A helium-cooled Si bolometer operating at
1.6\ensuremath{\,\mbox{K}}\, was used as a detector for the low-temperature measurements, while pyroelectric DTGS
detectors were used for the IR measurements above RT.
Magnetic properties of the BFCO films on substrates 3x3 mm in size were investigated
using a PPMS 14 vibrating-sample magnetometer (Quantum design) between 3 and 1000\ensuremath{\,\mbox{K}}.
The M\"{o}ssbauer spectrum measurement was carried out in the Conversion electron
M\"{o}ssbauer spectroscopy (CEMS) mode with $^{57}$Co diffused into an Rh matrix as a
source moving with a constant acceleration. The spectrum was accumulated for 7 days.
Classical M\"{o}ssbauer spectroscopy in transmission mode could not be used due to the
small volume of the investigated thin films. The Wissel spectrometer was calibrated by
means of a standard $\alpha$-Fe foil, and the isomer shift was expressed with respect to
this standard at 293\ensuremath{\,\mbox{K}}. The fitting of the spectra was performed using the NORMOS
program. The CEMS method requires a conducting sample, therefore we investigated the thin
film deposited on a conducting STO:Nb substrate, while the films deposited on
non-conducting LaAlO$_{3}$ were more suitable for the IR studies.
\section{Results and discussion}
Fig.~\ref{Fig1} shows IR reflectance spectra of both a pure LaAlO$_{3}$ substrate and a
BFCO thin film (deposited on LaAlO$_{3}$) at selected temperatures between 10 and 900\ensuremath{\,\mbox{K}}.
Only a small temperature dependence of the reflectivity spectra of the LaAlO$_{3}$
substrate can be seen, mostly due to an increase in phonon damping with temperature. Also
the sharp peaks near 500 and 600\ensuremath{\,\mbox{cm}^{-1}}\, gradually disappear due to a second-order
structural phase transition in LaAlO$_{3}$ from the trigonal to the cubic phase at
800\ensuremath{\,\mbox{K}}.\cite{mueller68}
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Fig1.eps}
\end{center}
\caption{(color online) (a) Infrared reflectivity spectra of LaAlO$_{3}$ substrate and (b) reflectance spectra
of Bi$_{2}$FeCrO$_{6}$
film (600 nm thick) deposited on LaAlO$_{3}$. The arrows show anomalous phonons. Note that
1\ensuremath{\,\mbox{cm}^{-1}}\, corresponds to 30\,GHz.}
\label{Fig1}
\end{figure}
IR reflectance spectra of the semitransparent BFCO film deposited on the opaque
LaAlO$_{3}$ substrate exhibit more pronounced changes with temperature, mainly near 100,
250 and 550\ensuremath{\,\mbox{cm}^{-1}}\, (marked by arrows in Fig.~\ref{Fig1}). For the detailed analysis we
evaluated the complex permittivity
$\varepsilon^{*}(\omega)=\varepsilon'(\omega)-i\varepsilon''(\omega)$ spectra of the film
(see Fig.\ref{Fig2}) using the following procedure: The reflectivity R($\omega$) of the
bare substrate at each temperature was first fitted using the formula
\begin{equation}\label{refl}
R(\omega)=\left|\frac{\sqrt{\varepsilon^{*}(\omega)}-1}{\sqrt{\varepsilon^{*}(\omega)}+1}\right|^2
,\end{equation}
where for the $\varepsilon^{*}$($\omega$) the factorized form of the complex permittivity
\cite{gervais83} was used
\begin{equation}\label{eps4p}
\varepsilon^{*}(\omega)=\varepsilon_{\infty}\prod_{j}\frac{\omega^{2}_{LOj}-\omega^{2}+i\omega\gamma_{LOj}}{\omega^{2}_{TOj}-\omega^{2}+i\omega\gamma_{TOj}}\,.
\end{equation}
$\omega_{TOj}$ and $\omega_{LOj}$ denote the frequencies of the j-th transverse and
longitudinal polar phonon, respectively, and $\gamma$$_{TOj}$ and $\gamma$$_{LOj}$ denote
their corresponding damping constants. The high-frequency permittivity
$\varepsilon_{\infty}$ results from the electron absorption processes and from the phonon
contribution above 600\ensuremath{\,\mbox{cm}^{-1}}. Then the spectrum of the two-slab system (film + substrate)
was fitted using the full formula for the coherent reflectance of a two-layer
system\cite{Born,Heavens} where the oscillator parameters of the substrate were fixed, in
order to determine the oscillator parameters of the polar phonons in the film. We note
that for the film IR spectra fits we used the classical Lorentz model of the damped
harmonic oscillators instead of Eq.~(\ref{eps4p})
\begin{equation}
\label{eps3p}
\varepsilon^*(\omega)
= \varepsilon_{\infty} + \sum_{j=1}^{n}
\frac{\Delta\varepsilon_{j}\omega_{TOj}^{2}} {\omega_{TOj}^{2} -
\omega^2+\textrm{i}\omega\gamma_{TOj}} \, ,
\end{equation}
where $\Delta\varepsilon_{j}$ means the contribution of the j-th mode to the static
permittivity. The remaining parameters in Eq.~(\ref{eps3p}) have the same meaning as in
Eq.~(\ref{eps4p}). Eq.~(\ref{eps4p}) is more suitable for reflectivity fits of phonon
spectra with a large TO-LO splitting, when the two kinds of phonon modes have different
damping. Such a model was necessary for a good fit of the LaAlO$_{3}$ substrate.
Eq.~(\ref{eps3p}) is more appropriate for fitting the reflectivity spectra of phonons
with a small TO-LO splitting and/or transmission spectra, which do not show anomalies
at LO frequencies. It has fewer parameters and gives physically acceptable results, while
the former model can sometimes yield unphysical negative dielectric losses when the
parameters are not properly chosen.
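The evaluation procedure can be sketched numerically. The oscillator parameters and thickness below are illustrative assumptions, not the fitted BFCO/LaAlO$_{3}$ values; the two-layer expression is the standard coherent normal-incidence reflectance of a film on a semi-infinite substrate, written in the $\varepsilon^{*}=\varepsilon'-i\varepsilon''$ sign convention of the text:

```python
import cmath

def eps_lorentz(nu, eps_inf, oscillators):
    """Classical Lorentz model, eps* = eps' - i*eps'' convention;
    nu in cm^-1, oscillators = [(delta_eps, nu_TO, gamma), ...]."""
    eps = complex(eps_inf)
    for d_eps, nu_to, gamma in oscillators:
        eps += d_eps * nu_to**2 / (nu_to**2 - nu**2 + 1j * nu * gamma)
    return eps

def reflectance_film_on_substrate(nu, eps_film, eps_sub, d_cm):
    """Coherent normal-incidence reflectance of a film (thickness d_cm, cm)
    on a semi-infinite substrate; the conjugated phase factor matches the
    eps' - i*eps'' convention (absorbing layers attenuate the wave)."""
    n1, n2 = cmath.sqrt(eps_film), cmath.sqrt(eps_sub)
    r01 = (1 - n1) / (1 + n1)
    r12 = (n1 - n2) / (n1 + n2)
    phase = cmath.exp(-2j * 2 * cmath.pi * nu * n1 * d_cm)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return abs(r) ** 2

# Illustrative parameters (NOT the fitted BFCO / LaAlO3 values):
film_osc = [(2.0, 150.0, 10.0), (1.0, 300.0, 15.0)]
sub_osc = [(15.0, 180.0, 5.0)]
nu = 250.0                 # probe wavenumber, cm^-1
d = 600e-7                 # 600 nm film thickness, in cm
eps_f = eps_lorentz(nu, 5.0, film_osc)
eps_s = eps_lorentz(nu, 4.0, sub_osc)
R_two_layer = reflectance_film_on_substrate(nu, eps_f, eps_s, d)
```

Two built-in checks follow from the formulas: at $\nu=0$ the Lorentz model returns $\varepsilon_\infty+\sum_j\Delta\varepsilon_j$, and if the film permittivity equals that of the substrate, the two-layer reflectance collapses to the bare-substrate value of Eq.~(\ref{refl}).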
Complex dielectric spectra of BFCO film obtained from the above described fit of the IR
reflectance spectra displayed on Fig.~\ref{Fig1} are plotted in Fig.~\ref{Fig2}. The
temperature dependence of TO phonon frequencies is plotted in Fig.~\ref{Fig3}. One can
clearly see the shift of most of the phonon modes to lower frequencies on heating (phonon
softening). It causes the gradual increase of the static permittivity with rising
temperature (see inset in Fig.~\ref{Fig2}).
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Fig2.eps}
\end{center}
\caption{(color online) Complex dielectric spectra of the BFCO film obtained at selected temperatures
from the fit of the reflectance spectra in Fig.~\ref{Fig1}. Frequencies of the peaks
in $\varepsilon''$($\omega$) spectra roughly correspond
to the TO phonon frequencies.
Note the continuous increase of the static permittivity $\varepsilon'$(0) with rising temperature (see inset).}
\label{Fig2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Fig3.eps}
\end{center}
\caption{Temperature dependence of the transverse polar phonon eigen-frequencies in the
BFCO film deposited on LaAlO$_{3}$.}
\label{Fig3}
\end{figure}
Six modes (seven below 200\ensuremath{\,\mbox{K}}) were necessary for the fits. Let us compare
the number of observed polar modes with the prediction of factor group analysis. BFCO
crystallizes in the rhombohedral space group $R3-C^{4}_{3}$. Bi, Fe and Cr cations have
the site symmetry $C_{3}$(1), while oxygen ions have the $C_{1}$ site
symmetry.\cite{baettig05a,baettig05b} Factor group analysis of the lattice vibrations
based on tables published by Rousseau et al. \cite{rousseau81} yields the following optic
phonons
\begin{eqnarray}
\Gamma_{R3} = 9A(z,x^{2}+y^{2},z^{2}) + 9 E(x,y,x^{2}-y^{2},xy,xz,yz).
\label{eq:rhombo}
\end{eqnarray}
It means that the 9$A$ and 9$E$ modes are both Raman and IR active. The analysis
also yields additional 1$A$+1$E$ acoustic modes. The modes with the $A$ symmetry are active in
spectra with the electric vector \textbf{E} of the IR wave parallel to the $z$ axis,
while the $E$ modes are active in \textbf{E}$\parallel$ $x,y$ spectra. The remaining
symbols ($z^{2}$, $xy$, etc.) in parentheses in Eq.~(\ref{eq:rhombo})
denote the components of the Raman tensors in which the phonons are Raman active. Our epitaxial
BFCO film is (001) oriented and since we measure the in-plane response, we see mostly the
$E$ symmetry modes in our FIR spectra. We resolved 7 modes in the low-temperature spectra
although 9 $E$ modes are allowed. This is quite reasonable, if we take into account that
some of the modes have small intensity or may overlap with other modes. It is worth
noting that the TO phonon frequencies in BFCO correspond very well to the $E$ symmetry
TO phonon frequencies in chemically and structurally related
BiFeO$_{3}$.\cite{kamba07,hermet07,lobo07}
One mode near 50\ensuremath{\,\mbox{cm}^{-1}}\, disappears from the FIR spectra above 200\ensuremath{\,\mbox{K}}\, (see black solid
squares in Fig.~\ref{Fig3}). Such a change could be a hint of a structural phase
transition, but the FIR spectra near 50\ensuremath{\,\mbox{cm}^{-1}}\, are rather noisy at high temperatures, so we
cannot exclude that the mode is present in the spectra also at higher temperatures, but
we do not resolve it due to the lower sensitivity of the high-temperature FIR experiment. The
absence of any other phonon anomalies at higher frequencies likewise gives no evidence for a phase
transition near 200\ensuremath{\,\mbox{K}}.
Interesting phonon anomalies are seen near 600\ensuremath{\,\mbox{K}}\, (see Fig.~\ref{Fig3}). Some mode
frequencies show relatively large temperature changes, and the splitting of the modes near 490
and 550\ensuremath{\,\mbox{cm}^{-1}}\, almost disappears above 600\ensuremath{\,\mbox{K}}. However, it is important to stress that
no mode from the doublet vanishes above 600\ensuremath{\,\mbox{K}}; both modes remain in the spectra with
similar frequencies near 520\ensuremath{\,\mbox{cm}^{-1}}\, up to 900\ensuremath{\,\mbox{K}}. Suchomel \textit{et al.}\cite{suchomel07}
observed decomposition of BFCO ceramics on heating above 400\ensuremath{\,{}^\circ}\!C. This effect could
also be responsible for the phonon anomalies seen near 600\ensuremath{\,\mbox{K}}\, in our film, but we have
to emphasize that we did not observe any decomposition in our sample (placed in a vacuum
chamber of the spectrometer), because the IR spectra and magnetic properties (see below)
were reproducible before and after the thermal cycling.
Phonon anomalies near T$_{N}$=640\,K, similar to ours in Fig.~\ref{Fig3}, were observed in
Raman spectra of BiFeO$_{3}$ \cite{haumont06} and they were explained by spin-phonon
coupling. We will discuss this possibility below together with the magnetic data.
The phonon frequency changes seen near 600\ensuremath{\,\mbox{K}}\, can be a consequence of some phase
transition, but probably not a ferroelectric one because we see a gradual increase of the
static permittivity $\varepsilon'$(0) (from phonon contributions) on heating (see inset
in Fig.~\ref{Fig2}), while for a ferroelectric transition a maximum in
$\varepsilon'(T)$ should be seen near T$_{c}$. It seems that the ferroelectric phase
transition in BFCO lies above 900\ensuremath{\,\mbox{K}}, like for BiFeO$_{3}$. Note that BiFeO$_{3}$ has a
rhombohedral $R3c$ structure with a structural phase transition (according to earlier
literature) to cubic paraelectric $Pm\bar{3}m$ phase near 1120\ensuremath{\,\mbox{K}}. Very recent structural
studies\cite{scott07} revealed an intermediate orthorhombic phase with the space group
$C_{2v}^{1}-P2mm$ or $C_{2v}^{11}-C2mm$ at temperatures between $\sim$1100 and
$\sim$1200\ensuremath{\,\mbox{K}}\, and probably only above $\sim$1200\ensuremath{\,\mbox{K}}\, BiFeO$_{3}$ transforms into the
cubic and simultaneously conducting phase.\cite{scott07}
If we assume that the structural phase sequence in BFCO is the
same as in BiFeO$_{3}$, then the following factor group analysis
of the optic phonons applies in the orthorhombic phase
\begin{equation}
\Gamma_{P2mm} = 10A_{1}(z,x^{2})+4A_{2}(xy)+7B_{1}(x,xz)+6B_{2}(y,yz) .
\label{eq:ortho}
\end{equation}
It means that instead of the 18 modes in the rhombohedral
structure, 23 IR active modes (13 in \textbf{E}$\parallel$ $x,y$)
should be seen in the IR spectra of the orthorhombic phase.
Unfortunately, no new mode appears in our FIR spectra, so we do
not see any evidence for a phase transition into the orthorhombic
phase at temperatures below 900\ensuremath{\,\mbox{K}}.
On the other hand, in the cubic paraelectric phase the following optic
modes are expected:
\begin{multline}
\label{eq:cubic}
\mathrm{\Gamma_{Pm\bar{3}m} = 4 F_{1u}(x) + 2 F_{2u}(-) + 1 A_{1g}(x^{2}+y^{2}+z^{2})} \\
\mathrm{ + 1 E_{g}(x^{2}+y^{2}-2z^{2}, \sqrt{3}x^{2}-\sqrt{3}y^{2})} \\
\mathrm{ + 2 F_{2g}(xy,yz,xz).}
\end{multline}
Thus only 4 phonons of $F_{1u}$ symmetry should be seen in the FIR spectra and 4
modes ($A_{1g}$, $E_{g}$ and $F_{2g}$ symmetries) in Raman spectra. We see 6 modes in
Fig.~\ref{Fig3}. It means that BFCO probably remains in the rhombohedral phase in the
whole investigated temperature range up to 900\ensuremath{\,\mbox{K}}\, and the structural phase transition
occurs, as in BiFeO$_{3}$, only at higher temperatures. The absence of a phase
transition from the ferroelectric to the paraelectric phase below 900\ensuremath{\,\mbox{K}}\, is also supported
by the observed gradual increase of the static permittivity with rising temperature (see inset
in Fig.~\ref{Fig2}). Further investigations, such as high-temperature structural or second
harmonic generation measurements, are needed to reveal T$_{c}$ and the symmetry of the
high-temperature phase(s).
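As a cross-check of the mode counts quoted above, the factor-group tallies can be verified programmatically. The lists below simply transcribe Eqs.~(\ref{eq:rhombo}), (\ref{eq:ortho}) and (\ref{eq:cubic}); the sketch itself is ours and not part of the original analysis.

```python
# Optic-phonon irreducible representations transcribed from the three
# factor-group analyses: (label, count, IR-active, visible for E || x,y).
rhombohedral_R3 = [("A", 9, True, False), ("E", 9, True, True)]
orthorhombic_P2mm = [
    ("A1", 10, True, False),  # polarized along z
    ("A2", 4, False, False),  # Raman-only
    ("B1", 7, True, True),    # polarized along x
    ("B2", 6, True, True),    # polarized along y
]
cubic_Pm3m = [
    ("F1u", 4, True, True),   # the only IR-active species in the cubic phase
    ("F2u", 2, False, False), ("A1g", 1, False, False),
    ("Eg", 1, False, False), ("F2g", 2, False, False),
]

def count_ir_modes(reps):
    """Return (total IR-active modes, modes visible for E || x,y)."""
    total = sum(n for _, n, ir, _ in reps if ir)
    in_plane = sum(n for _, n, ir, xy in reps if ir and xy)
    return total, in_plane

assert count_ir_modes(rhombohedral_R3) == (18, 9)    # 18 IR, 9 in-plane (E)
assert count_ir_modes(orthorhombic_P2mm) == (23, 13)  # 23 IR, 13 in-plane
assert count_ir_modes(cubic_Pm3m) == (4, 4)           # 4 F_1u phonons
```

The assertions reproduce the numbers stated in the text: 18 IR modes (9 in-plane) for $R3$, 23 (13 in-plane) for the orthorhombic phase, and 4 for the cubic phase.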
\begin{figure}
\begin{center}
\includegraphics[width=85mm]{Fig4.eps}
\end{center}
\caption{(color online) Magnetic hysteresis loops of the B-site ordered BFCO thin film deposited
on STO:Nb substrate measured at selected temperatures up to 1000\ensuremath{\,\mbox{K}}.
Inset shows the room-temperature CEMS M\"{o}ssbauer spectrum of the same film together with its fit.}
\label{Fig4}
\end{figure}
Let us discuss the magnetic properties of the BFCO thin films, which we investigated by means
of vibration magnetometry and CEMS M\"{o}ssbauer spectroscopy. The BFCO film deposited on
the LaAlO$_{3}$ substrate, originally investigated by IR spectroscopy, exhibits a strong
diamagnetic signal from the substrate, and magnetic hysteresis loops were only
revealed at low temperatures below 20\ensuremath{\,\mbox{K}}\, (not shown here). This can be explained by a weak
B-site order, which suppresses the magnetic phase transition temperature. On the other
hand, the well-ordered BFCO thin film on the STO:Nb substrate exhibits nice magnetic
hysteresis loops not only at RT but also at higher temperatures (see Fig.~\ref{Fig4}).
The negative slope of the magnetization at higher magnetic fields seen above 800\ensuremath{\,\mbox{K}}\, can be
explained by a diamagnetic contribution of the STO:Nb substrate, but below 600\ensuremath{\,\mbox{K}}\, the
open hysteresis loop is clearly seen. The value of saturated magnetization is typical for
antiferromagnets with weak ferromagnetism induced by a canted spin structure and the
value of RT spontaneous magnetization corresponds well to the previously published
results.\cite{nechache06,nechache07,nechache07b} The low value ($\sim$ 0.3
$\mu_{B}$/f.u.) of the saturation magnetization of the film, compared with the expected
theoretical value of 2 $\mu_{B}$/f.u.,\cite{baettig05a} could be explained by (i) the
Fe-Cr ordering, which may be only partial, (ii) the partial chemical disorder that
generates an antiferromagnetic antisite contribution (Fe-Fe, Cr-Cr), and/or (iii) the
partial relaxation of the strain in the film leading to a more distorted structure.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Fig5.eps}
\end{center}
\caption{(color online) Temperature dependence of the magnetization in BFCO/STO:Nb film at various magnetic fields.}
\label{Fig5}
\end{figure}
Fig.~\ref{Fig5} shows the temperature dependence of the magnetization at various magnetic
fields. The magnetic experiments above and below 300\ensuremath{\,\mbox{K}}\, were performed separately, which
is probably the reason for the change of slope seen at 300\ensuremath{\,\mbox{K}}\, in the temperature dependence
of the magnetization measured at 1500\,Oe. The magnetization remains nonzero up to 1000\ensuremath{\,\mbox{K}}\,
in the field of 1500\,Oe, but the hysteresis loop is very slim at temperatures above
800\ensuremath{\,\mbox{K}}. It is difficult to determine the magnetic phase transition temperature exactly from
Figs.~\ref{Fig4} and \ref{Fig5}, but it seems that it could lie somewhere between 600 and
800\ensuremath{\,\mbox{K}}. It is worth noting that Baettig \textit{et al.}\cite{baettig05b} predicted $T_{N}$
in BFCO only near 100\ensuremath{\,\mbox{K}}, which can correspond to the recent result of Suchomel \textit{et al.},
\cite{suchomel07} who claim, based on the transmission M\"{o}ssbauer spectrum of BFCO
ceramics, that $T_{N}$ lies below 130\ensuremath{\,\mbox{K}}. This low $T_{N}$ can be explained by
chemical disorder in the Fe and Cr cations in the [111] direction,\cite{suchomel07} while
our thin film is at least partially chemically ordered (we observed satellite peaks both
in the XRD and in the selected-area electron diffraction patterns taken by TEM).
The inset of Fig.~\ref{Fig4} shows the room-temperature CEMS M\"{o}ssbauer spectrum of
the BFCO film. Surprisingly, only a doublet is seen, which is typical for the
\textit{paramagnetic} state of Fe$^{3+}$ ions in the sample, while a sextet is expected
in a magnetically ordered state. This is rather puzzling because clear magnetic
hysteresis loops are seen in the same sample by vibration magnetometry (Fig.~\ref{Fig4}).
Conversion Electron M\"{o}ssbauer spectroscopy of $^{57}$Fe is based on the detection of
electrons with the energy of 7.3 keV which were knocked out of the K shell of the
$^{57}$Fe atom after re-emission of the gamma quantum originally resonantly absorbed by
the $^{57}$Fe nucleus. The release is almost instantaneous (within 10$^{-7}$ s) and has a
rather high probability. Most of these electrons are again absorbed in the material, but
some of them, depending on the depth at which the electron emission occurs and on the
electron work function, reach the surface of the sample and are finally detected. The
depth from which the information is collected by the CEMS method is usually $\sim$ 200 nm,
depending on the absorption properties of the material; the electrons emitted from deeper
regions of the sample do not reach the surface. Our thin film is only 210 nm thick, which
means that we should see the CEMS signal from the whole volume of the film. However, the
film is not only magnetic but also ferroelectric, and the electric field in the ferroelectric
domains should substantially influence the work function of the electrons. The
ferroelectric domain structure of BFCO is complex and assuming it is similar to that
reported for BiFeO$_{3}$,\cite{chu07} the polarization (and related internal electric
field) is oriented at 41.8$^{\circ}$ or even 131.8$^{\circ}$ to the surface normal of the (001)
oriented thin film. Therefore, the emitted electrons are turned back into the film and
most of them lose their energy, are absorbed and do not leave the film. Only electrons
emitted from a very thin surface layer ($\sim$ 10 nm) may reach the surface and are
detected in the CEMS experiment. The thin film surface layer is most likely non-magnetic
(probably due to chemical disorder of Fe and Cr cations at the surface), therefore only a
doublet is observed in our CEMS M\"{o}ssbauer spectra shown in the inset of
Fig.~\ref{Fig4}, although the volume of the film is magnetically ordered, as seen from
the magnetic hysteresis loops measured by vibrating-sample magnetometry.
Nevertheless, we have to stress that we repeated the CEMS M\"{o}ssbauer experiment also
with another BFCO film (thickness 86 nm, STO:Nb substrate), which exhibited strong
satellites in the XRD (i.e. a higher Fe and Cr chemical order than in the previous
sample), as well as a broad magnetic hysteresis loop, and still we found only a doublet in
the CEMS spectra, typical of a paramagnetic state. Finally, we note that the doublet cannot
originate from the substrate, because the M\"{o}ssbauer spectrum is sensitive only to
Fe cations, which are not present in the STO:Nb substrate.
\begin{table}
\caption{Comparison of the fit parameters of the M\"{o}ssbauer spectra in Fig.~\ref{Fig4} and
in Ref.~\cite{suchomel07}.}
\begin{tabular}{|l l l l |}\hline
& Our data& & Suchomel et al.\cite{suchomel07}\\
&(CEMS) & & (Trans. mode) \\\hline
Isomer shift $\delta$ &0.39 mm/s& & 0.39 mm/s \\
Quadrupole splitting $\Delta E_{q}$ &0.52 mm/s & & 0.48 mm/s \\
"Peak width"-FWHM ($\Gamma$) &0.39 mm/s & & N/A\\
\hline
\end{tabular}
\label{moesbauer}
\end{table}
Parameters of the M\"{o}ssbauer spectra fit are summarized in Table~\ref{moesbauer}. From
the Fe isomer shift $\delta$ the valency of the iron can be clearly estimated. The
Fe$^{3+}$ cations in oxidic compounds have their isomer shift in the range of 0.1-0.5
mm/s, while the Fe$^{2+}$ cations show $\delta$ in the range of 0.8-1.5
mm/s.\cite{menil85} Our obtained value $\delta$=0.39\,mm/s confirms the absence of
Fe$^{2+}$ states and the presence of only Fe$^{3+}$ states in the investigated film.
M\"{o}ssbauer spectra also allow determining the site symmetry for Fe$^{3+}$ cations in
the structure. According to Refs.\cite{menil85,parmentier99}, the usual isomer shift
values for Fe$^{3+}$ in the case of the spectra measured at RT are as follows: 0.10-0.30
mm/s for Fe$^{3+}$ in a tetrahedral site, 0.28-0.50 mm/s for Fe$^{3+}$ in an octahedral
site. When we compare the above-mentioned ranges with our $\delta$=0.39 mm/s, we can
confirm that Fe$^{3+}$ in BFCO is in the octahedral position.
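These assignment ranges can be encoded in a small lookup table. The numbers below are transcribed from the text (after Refs.~\cite{menil85,parmentier99}); the helper itself is only an illustrative sketch of the assignment logic, not part of the original analysis.

```python
# Room-temperature isomer-shift ranges (mm/s) for Fe in oxidic compounds,
# transcribed from the text (after Menil and Parmentier et al.).
RANGES = {
    ("Fe3+", "tetrahedral"): (0.10, 0.30),
    ("Fe3+", "octahedral"): (0.28, 0.50),
    ("Fe2+", "any site"): (0.80, 1.50),
}

def assign_site(delta):
    """Return all (valence, site) labels consistent with isomer shift delta."""
    return [key for key, (lo, hi) in RANGES.items() if lo <= delta <= hi]

# The measured delta = 0.39 mm/s of the BFCO film:
print(assign_site(0.39))  # -> [('Fe3+', 'octahedral')]
```

Note that the tetrahedral and octahedral Fe$^{3+}$ ranges overlap between 0.28 and 0.30 mm/s, so a shift in that narrow window would be ambiguous; the measured 0.39 mm/s is unambiguous.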
When we compare our fitting parameters in Table~\ref{moesbauer} with the parameters
obtained from the M\"{o}ssbauer spectra (measured in the transmission mode) of disordered
ceramics published by Suchomel \textit{et al.},\cite{suchomel07} we can state that both
the ceramics and the surface layer of our thin film have the same or a similar non-magnetic
structure at RT, although the magnetic measurements by vibrating-sample magnetometry give
evidence for a magnetic order in the thin film far above RT.
In the light of our high-temperature magnetic data we can suggest that the phonon
anomalies seen near 600\ensuremath{\,\mbox{K}}\, are due to a magnetic phase transition. Near this temperature
a sudden drop of the permittivity is seen, which is typical for spin-phonon
coupling.\cite{smolenskii82} Nevertheless, further magnetic, structural and dielectric
studies are necessary to confirm this suggestion.
\section{Conclusion}
The complex dielectric response of a BFCO film was determined by a novel method: IR
reflectance of a BFCO thin film deposited on a LaAlO$_{3}$ substrate. Most of the polar
phonons seen in the IR spectra reveal gradual softening on heating from 20 to 900\ensuremath{\,\mbox{K}},
which causes a progressive increase of the static permittivity with increasing
temperature. Therefore we speculate that the ferroelectric phase transition lies
(similarly to the related BiFeO$_{3}$) above 900\ensuremath{\,\mbox{K}}, although some phonon anomalies,
probably connected with the magnetic phase transition, were observed near 600\ensuremath{\,\mbox{K}}. Magnetic
properties of the BFCO thin film were investigated between 6 and 1000\ensuremath{\,\mbox{K}}\, and revealed
that the BFCO film is a good high-temperature multiferroic with a magnetic phase
transition that we assume to take place somewhere between 600 and 800\ensuremath{\,\mbox{K}}. The conversion
electron M\"{o}ssbauer spectrum did not reveal magnetic order in the BFCO thin film, in
contrast to vibration magnetometry, because the electric field present in the ferroelectric
domains extends the path of the emitted electrons and prevents their detection from most of
the volume of the thin film. Therefore, probably only electrons from the thin non-magnetic
skin layer of the film are detected. Further structural, magnetic and dielectric
high-temperature studies on well B-site-ordered samples are in progress.
\begin{acknowledgments}
The work was supported by the Grant Agency of the Czech Republic (Project No.
202/06/0403) and Grant Agency of Academy of Science of the Czech Republic (Project No.
KJB100100704) and AVOZ10100520. Part of this research was supported by grants from the
Natural Sciences and Engineering Research Council of Canada (NSERC) as well as from the
Fond Qu\'{e}b\'{e}cois de la Recherche sur la Nature et les Technologies (FQRNT). The
authors thank J. \v{S}ebek for valuable discussions as well as J. Petzelt and T.W.
Johnston for critical reading of the manuscript.
\end{acknowledgments}
\section{INTRODUCTION}
Efficient modeling and control of complex systems in the presence of
uncertainties is important for modern engineering. This is
especially true in the domain of intelligent systems that are
designed to operate in uncertain environments. Uncertainties in such
systems are usually quantitative relations (maps) between measured
signals
\[
\begin{split}
& x_1(t),x_2(t),\dots,x_d(t) \mapsto f(x_1(t),x_2(t),\dots,x_d(t)),\\
& \ \ \ \ \ \ \ \ \ x_i:\mathbb{R}\rightarrow\mathbb{R},
\end{split}
\]
and the number of these signals may be large.
Physical models of such relations $f(\cdot)$ are not always
available, and it is quite common to use mathematical substitutes
such as, e.g., superpositions of (basis) functions that are capable
of approximating an a priori unknown $f(\cdot)$ with the required
precision. Thus, successful modeling and control in the domain of
intelligent systems are critically dependent on the availability of
adequate and efficient function approximators which can take care of
various uncertainties in the system.
In the domain of modeling and control of intelligent systems the
multilayer perceptrons (MLP) and radial basis functions (RBF)
networks are popular function approximators \cite{Haykin99}. The MLP
uses a basis in the form of the sigmoids with {\em global} support.
For one-hidden layer MLP, its output is determined by
\begin{equation}\label{eq:mlp}
f_n(x)=\sum_{i=1}^n c_i \frac{1}{1+e^{(w_i^{T}x+b_i)}}
\end{equation}
Typically, both nonlinear ($w_i$ and $b_i$) and linear ($c_i$)
parameters, or weights, are subject to training on data specific to
the problem at hand (full network training).
The RBF networks use a basis in the form of the Gaussians with {\em
local} (but not compact) support:
\begin{equation}\label{eq:rbf}
f_n(x)=\sum_{i=1}^n c_i e^{(-\|w_i^{T}x+b_i\|^2)}
\end{equation}
Though all parameters may be trained in principle, typically only
linear weights $c_i$ of the RBF network are trained. The locations
and the widths of the Gaussians are usually set on a uniform or
nonuniform grid covering the operating domain of the system.
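A minimal sketch of this fixed-basis usage follows (ours, not from any cited work): Gaussian centers fixed on a uniform grid, a fixed width, and only the linear weights $c_i$ of (\ref{eq:rbf}) fitted by ordinary least squares.

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    """Gaussian features g_i(x) = exp(-||x - c_i||^2 / width^2)."""
    d2 = ((x[:, None, :] - centers[None, :, :])**2).sum(axis=2)
    return np.exp(-d2 / width**2)

def fit_rbf(x, y, centers, width):
    """Train only the linear weights c_i, as is typical for RBF networks."""
    G = rbf_design_matrix(x, centers, width)
    c, *_ = np.linalg.lstsq(G, y, rcond=None)
    return c

def predict_rbf(x, centers, width, c):
    return rbf_design_matrix(x, centers, width) @ c

# 1-D example: 15 centers on a uniform grid over the operating domain [0, 1].
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2 * np.pi * x[:, 0])
centers = np.linspace(0.0, 1.0, 15)[:, None]
c = fit_rbf(x, y, centers, width=0.15)
err = np.max(np.abs(predict_rbf(x, centers, 0.15, c) - y))
```

Because only the $c_i$ enter linearly, training reduces to a single least-squares solve; the price of freezing the nonlinear parameters, as discussed below, is the dimension-dependent lower bound on the achievable approximation error.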
The popularity of approximators (\ref{eq:mlp}), (\ref{eq:rbf}) is
not only due to their approximation capabilities (see e.g.,
\cite{Cybenko}, \cite{Hornik90}, \cite{Park93}) and their homogeneous
structure but also due to efficiency of approximators
(\ref{eq:mlp}), (\ref{eq:rbf}) in high dimensions. In particular, if
{\it all parameters $w_i$, $c_i$, $b_i$ } are allowed to vary, the
rate of convergence of the approximation error of a target function
$f\in\mathcal{C}^{0}[0,1]^d$ as a function of $n$
(the number of elements in the network) is shown to be independent
of the input dimension $d$ \cite{Jones:1992}, \cite{Barron}.
Furthermore, the achievable rate of convergence of the $L_2$-norm of
$f(x)-f_n(x)$ is shown to be of order $O(1/n)$.
Despite these advantageous features of approximators (\ref{eq:mlp}),
(\ref{eq:rbf}), viz. favorable independence of the convergence rates
on the input dimension of the function to be approximated, the issue
is how to achieve the convergence rate of order $O(1/n)$ in
practice. Even though \cite{Jones:1992}, \cite{Barron} offer a
constructive procedure for optimal selection of basis functions,
each step of these procedures involves a nonlinear optimization
routine searching for the best possible values of $w_i$, $b_i$ (see
Section II for details). It is also shown in \cite{Barron} that if
only linear parameters of (\ref{eq:mlp}), (\ref{eq:rbf}) are
adjusted the approximation error {\it cannot be made} smaller than
$1/n^{2/d}$ uniformly for functions satisfying the same smoothness
constraints.
The necessity to adjust nonlinear parameters of (\ref{eq:mlp}),
(\ref{eq:rbf}), restricts practical application of these models to
those problems in which such optimization is feasible. Adaptive
control with nonlinearly parameterized models remains a challenging
issue; see e.g., \cite{tpt2002_at}, \cite{tpt2003_tac},
\cite{tpvl2007_tac}, \cite{Annaswamy99}.
Though published quite a while ago, the paper by Igelnik and Pao
\cite{igelpao95} (see also comments \cite{lichow97}) has recently received numerous citations in a
variety of intelligent control publications; see, e.g.,
\cite{he05}--\cite{Liuetal07}. The paper advocates the use of
random basis in the MLP (\ref{eq:mlp}) and RBF (\ref{eq:rbf})
networks. That is, the nonlinear parameters $w_i$ and $b_i$ are to
be set randomly at the initialization, rather than through training.
The only trainable parameters are those which enter the network
equation linearly ($c_i$).
The paper \cite{igelpao95} provides mathematical justification to
the use of linear-in-parameters function approximators for modeling
and control, crucially simplifying analysis of properties of the
closed-loop control system featuring such approximators. While
analysis simplification is attractive, it entails a number of issues
which are important to consider whenever planning to apply such
random basis function approximators in practice. We show that the
rate of convergence of order $O(1/n)$ for such approximators is
achievable only for large $n$, and it is probabilistic in nature.
The latter feature may require introduction of a supervisory
mechanism in the control system to re-initialize the network if the
required accuracy is not met.
The paper is organized as follows. In Section \ref{sec:Analysis} we
analyze the reasoning in \cite{igelpao95} and compare these results
with \cite{Jones:1992}, \cite{Barron}. We show that, although these
results may seem inconsistent (i.e., the lower approximation bound
$1/n^{2/d}$ derived in \cite{Barron} for any linear-in-parameter
approximator vs. the rate of convergence of order $O(1/n)$ and
independent of $d$ in \cite{igelpao95}), they are derived for
different asymptotics (for every $n$ in \cite{Jones:1992},
\cite{Barron} vs. for large $n$ in \cite{igelpao95}) and use
different convergence criteria (deterministic in \cite{Jones:1992},
\cite{Barron} vs. statistical in \cite{igelpao95}). Implications of
our analysis are illustrated in Section \ref{sec:Example} with a
simple example, followed by a discussion in Section
\ref{sec:Discussion}. Section \ref{sec:Conclusion} concludes the
paper.
\section{Function Approximation Concepts}\label{sec:Analysis}
In this section we review and compare two results for function
approximation with neural networks. The first result is the
so-called {\it greedy approximation}, upon which Barron's famous
construction is based \cite{Barron}. In this framework a function is
approximated by a sequence of linear combinations of basis
functions. Each basis function is to satisfy certain optimality
condition, and as a result the overall rate of convergence is
optimized as well.
The second result is the random basis function approximator also
known as the Random Vector Functional-Link (RVFL) network
\cite{igelpao95} in which the basis functions are randomly chosen,
and only their linear parameters are optimized.
Both results enjoy the convergence rates that do not depend on the
input dimension $d$ of the target functions. However, there are
differences important for practical use of these results.
First, as we show below, the number of practically required
approximation elements (network size) that guarantees a given
approximation quality differs substantially. Second, the quality
criteria are also different: in the framework of greedy
approximation this is merely the $L_2$-norm which is a deterministic
functional, whereas in the RVFL framework the criterion is {\it
statistical}.
\subsection{Approximation problem}
Consider the following class of problems. Let $f:[0,1]^d\subset
\mathbb{R}^d\rightarrow\mathbb{R}$ be a continuous function, and
\[
\|f\|^2=\langle f,f \rangle=\int_{[0,1]^d} f(x)f(x) d x,
\]
be the $L_2$-norm of $f$. Suppose that
$g:\mathbb{R}\rightarrow\mathbb{R}$ is a function such that
\[
\|g(\cdot)\|\leq M, \ M\in\mathbb{R}_{>0},
\]
and that
\[
f\in \overline{\mathrm{convex} \ \mathrm{hull}} \ \{g(w^{T}x + b)\}, \ w\in\mathbb{R}^d, \ b\in\mathbb{R}.
\]
In other words, there is a sequence of $w_i$, $b_i$, and $c_i$ such that
\[
f(x)=\sum_{i=1}^{\infty} c_i g(w_i^{T}x + b_i), \ \sum_{i=1}^{\infty}c_i=1.
\]
Let
\begin{equation}\label{eq:approximation}
f_n(x)=\sum_{i=1}^n c_i g(w_i^{T}x+b_i)
\end{equation}
be a superposition of functions $g(w_i^{T}x+b_i)$. The question is
how many elements do we need to pick in (\ref{eq:approximation}) to
ensure that the approximation error does not exceed a certain
specified value?
\subsection{Greedy approximation and Jones Lemma}
In order to answer the question above one needs first to determine
the error of approximation. It is natural for functions from $L_2$
to define the approximation error as follows:
\begin{equation}\label{eq:error_greedy}
e_n=\|f_n - f\|
\end{equation}
The classical Jones iteration \cite{Jones:1992} (refined later by
Barron \cite{Barron}) provides us with the following estimate of
achievable convergence rate:
\begin{equation}\label{eq:rate:greedy}
\begin{split}
e_n^2&\leq \frac{M'^2 e_0^2}{n e_0^2 + M'^2}, \ M'\in\mathbb{R}_{>0},\\
M'&> \sup_{g} \|g\|+\|f\|.
\end{split}
\end{equation}
The rate of convergence depends on $d$ only through the $L_2$-norms
of $f_0$, $g$, and $f$. The iteration itself is deterministic and
can be described as follows:
\begin{equation}\label{eq:Jones_iteration}
\begin{split}
f_{n+1}&=(1-\alpha_n)f_n + \alpha_n g_n\\
\alpha_n&= \frac{e_n^2}{M''^2 + e_n^2}, \ \ M''>M'
\end{split}
\end{equation}
where $g_n$ is chosen such that the following condition holds
\begin{equation}\label{eq:Jones_iteration:g_n}
\langle f_n-f,g_n-f\rangle < \frac{((M'')^2-(M')^2) e_n^2}{2(M'')^2},
\end{equation}
This choice is always possible (see \cite{Jones:1992} for
details).
According to (\ref{eq:rate:greedy}) the rate of convergence of such
approximators is estimated as
\[
e_n^2 = O(1/n).
\]
This convergence estimate is {\it guaranteed} because it is the
upper bound for the approximation error at the $n$th step of
iteration (\ref{eq:Jones_iteration}).
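A small numerical sketch of the greedy mechanism follows. As simplifications (ours), the candidate $g_n$ is found by exhaustive search over a finite random dictionary of scaled sigmoids rather than via condition (\ref{eq:Jones_iteration:g_n}), and $\alpha_n$ is taken as the exact line-search minimizer instead of the schedule in (\ref{eq:Jones_iteration}); the sketch illustrates the mechanism, not the constants.

```python
import numpy as np

# Discretize [0, 1]; inner products become mean-value sums.
t = np.linspace(0.0, 1.0, 400)
f = np.sin(2 * np.pi * t)          # illustrative target function
dot = lambda u, v: np.mean(u * v)  # discrete <u, v>

# Finite dictionary of scaled, symmetrized sigmoids g(w t + b).
rng = np.random.default_rng(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
atoms = []
for w, b in zip(rng.uniform(-30, 30, 300), rng.uniform(-15, 15, 300)):
    s = sig(w * t + b)
    atoms += [2.0 * s, -2.0 * s]

f_n = np.zeros_like(t)
errors = []
for n in range(30):
    r = f_n - f
    # Pick the atom most (anti)correlated with the residual: a practical
    # surrogate for the optimality condition on g_n.
    g = min(atoms, key=lambda a: dot(r, a - f))
    # Exact line search for the convex-combination step size.
    num, den = dot(r, f_n - g), dot(f_n - g, f_n - g)
    alpha = float(np.clip(num / den, 0.0, 1.0)) if den > 0 else 0.0
    f_n = (1 - alpha) * f_n + alpha * g
    errors.append(np.sqrt(dot(f_n - f, f_n - f)))
```

With the line search the discrete $L_2$ error is non-increasing by construction at every step, mirroring the deterministic, per-$n$ character of the guarantee (\ref{eq:rate:greedy}).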
\subsection{Approximation with randomly chosen basis functions}
We now turn our attention to the result in \cite{igelpao95}. In this
approximator the original function $f(\cdot)$ is assumed to have the
following integral representation\footnote{We keep the original
notation of \cite{igelpao95} which uses both $\omega$ and $w$ for
the sake of consistency.}
\begin{equation}\label{eq:approximation:2}
f(x)=\lim_{\alpha\rightarrow\infty}\lim_{\Omega\rightarrow\infty} \int_{W^d} F_{\alpha,\Omega}(\omega)g(\alpha w^{T}x+b)d \omega,
\end{equation}
where $g:\mathbb{R}\rightarrow\mathbb{R}$ is a non-trivial function from $L_2$:
\[
0<\int_{\mathbb{R}}g^2(s)ds < \infty,
\]
where
$\omega=(y,w,u)\in\mathbb{R}^{d}\times\mathbb{R}^d\times[-\Omega,\Omega]$, $\Omega\in\mathbb{R}_{>0}$, $W^d=[-2d\Omega;2d\Omega]\times I^d \times V^d$, $V^d=[0;\Omega]\times[-\Omega;\Omega]^{d-1}$, $b=-(\alpha w^{T} y + u)$ and
\[
F_{\alpha,\Omega}(\omega)\sim \frac{\alpha\prod_{i=1}^d w_i}{\Omega^d 2^{d-1}} f(y).
\]
See \cite{igelpao95}, \cite{lichow97} for more detailed description.
Function $g(\cdot)$ induces a parameterized basis. Indeed, if we were
to evaluate integral (\ref{eq:approximation:2}) in quadratures for
sufficiently large values of $\alpha$ and $\Omega$, we would then
express $f(x)$ by the following sums of parameterized $g(\alpha
w^{T}x+b)$ \cite{igelpao95}:
\begin{equation}\label{eq:approximation:R}
f_n(x)\approx \sum_i c_{i} g(\alpha w_i^{T}x + b_i), \ b_i=-
\alpha(w_i^{T} y_i + u_i)
\end{equation}
The summation in (\ref{eq:approximation:R}) is taken over points
$\omega_i$ in $W^d$, and $c_i$ are weighting coefficients.
Variables $\alpha$ in (\ref{eq:approximation:2}) and $\alpha_n$ in
(\ref{eq:Jones_iteration}) play different roles in the two
approximation schemes. In (\ref{eq:Jones_iteration}) the value of
$\alpha_n$ is set to ensure that the approximation error is
decreasing with every iteration, and in (\ref{eq:approximation:2})
it stands for a scaling factor of random sampling.
The main idea of \cite{igelpao95} is to approximate integral
representation (\ref{eq:approximation:2}) of $f(x)$ using the Monte-Carlo integration method as
\begin{equation}\label{eq:MC_approximation}
\begin{split}
f(x)&\sim \frac{4 \Omega^d}{n} \lim_{\alpha\rightarrow\infty}\lim_{\Omega\rightarrow\infty} \sum_{k=1}^n F_{\alpha,\Omega}(\omega_k)g(\alpha w^{T}_k x+b_k)\\
&=\frac{4}{n} \lim_{\alpha\rightarrow\infty}\lim_{\Omega\rightarrow\infty} \sum_{k=1}^n c_{k,\Omega}(\alpha,\omega_k)g(\alpha w^{T}_kx+b_k)\\
&=f_{n,\omega,\Omega}(x),
\end{split}
\end{equation}
where the coefficients $c_{k,\Omega}(\alpha,w_k)$ are defined as
\begin{equation}\label{eq:MC_coefficients}
c_{k,\Omega}(\alpha,w_k)\sim \frac{\alpha\prod_{i=1}^d w_{k,i}}{2^{d-1}} f(y_k)
\end{equation}
and $\omega_k=(y_k,w_k,u_k)$ are randomly sampled in $W^d$ (domain
of parameters, i.e., weights and biases of the network).
When the number of samples, $n$, i.e., {\em the network size}, is
large, then the expectation $E_\omega(n,x)$
\[
\begin{split}
E_\omega(n,x)&=f(x)-\frac{4}{n}\sum_{k=1}^n c_{k,\Omega}(\alpha,\omega_k)g(\alpha w^{T}_kx+b_k)
\end{split}
\]
converges to zero as $n\rightarrow\infty$ (Theorem 1 in \cite{igelpao95}):
\[
\lim_{n\rightarrow\infty} E_\omega(n,x) =0.
\]
The advantage of the Monte-Carlo integration, and hence the
approximation techniques that are based upon this method, is its
order of convergence for large $n$. It is known that if $W^d$ is
bounded (i.e., its volume is bounded) then {\it the variance} of the
estimate (\ref{eq:MC_approximation}) is bounded pointwise from
above:
\begin{equation}\label{eq:rate:MC}
\begin{split}
\mathrm{Var}_\omega
(n,x)&\leq |W^d|\frac{\sigma^2_f(x)}{n} \quad \mathrm{for\ large\ } n,
\end{split}
\end{equation}
where
\[
\sigma^2_f(x)=\int_{W^d} (c_{\Omega}(\alpha,\omega)g(\alpha w^{T}x+b)-f(x))^2d\omega.
\]
In this sense the order of Monte-Carlo approximation for a large
number of processing elements of the approximator (network nodes)
$n$ may be made similar to that of the greedy approximation.
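In practice the random-basis construction is usually implemented by drawing $w_i$, $b_i$ once at initialization and fitting only the linear weights by least squares, rather than via the quadrature weights (\ref{eq:MC_coefficients}). The following sketch (ours) illustrates that common simplification.

```python
import numpy as np

def rvfl_fit(x, y, n, scale=10.0, seed=0):
    """Random sigmoid basis: w_i, b_i are drawn once at initialization;
    only the linear weights c_i are trained (least squares)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-scale, scale, size=(n, x.shape[1]))
    b = rng.uniform(-scale, scale, size=n)
    G = 1.0 / (1.0 + np.exp(-(x @ W.T + b)))
    c, *_ = np.linalg.lstsq(G, y, rcond=None)
    return W, b, c

def rvfl_predict(x, W, b, c):
    return (1.0 / (1.0 + np.exp(-(x @ W.T + b)))) @ c

# Illustrative 1-D target; the error shrinks as the network size n grows.
x = np.linspace(0.0, 1.0, 300)[:, None]
y = np.sin(2 * np.pi * x[:, 0])
rms = {}
for n in (5, 50, 500):
    W, b, c = rvfl_fit(x, y, n)
    rms[n] = np.sqrt(np.mean((rvfl_predict(x, W, b, c) - y)**2))
```

Re-running with a different seed changes the achieved error at fixed $n$: the basis, and hence the approximation quality, is a random variable, which is precisely the statistical character discussed next.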
\subsection{Comparison}
There are, however, important points that make this method different
from the greedy approximation:
\begin{itemize}
\item Approximation ``error'' (\ref{eq:rate:MC}) is statistical, whereas the approximation
error (\ref{eq:error_greedy}) is deterministic. This means that
$f_{n,\omega,\Omega}(x)$ is not guaranteed to be close to $f(x)$
for {\it every} randomly drawn sample of size $n$.
We can, however, conclude from the Chebyshev inequality that, for sufficiently small $\gamma=|W^d|\sigma^2_f(x)/(n\varepsilon^2)$,
the probability that $f_{n,\omega,\Omega}(x)$ is close to $f(x)$ approaches $1$:
\[
\begin{split}
&P\left(\left|\frac{4}{n}\sum_{k=1}^n c_{k,\Omega}(\alpha,\omega_k)g(\alpha w^{T}_kx+b_k)-f(x) \right|<\varepsilon\right)\\
& \ \ \ \ \ \ \ \ \geq 1-\gamma
\end{split}
\]
\item For the Monte-Carlo based scheme (\ref{eq:MC_approximation})--(\ref{eq:rate:MC}) to converge, one needs
to ensure that $W^d$ is bounded. This, however, conflicts with the
requirement that $\Omega\rightarrow\infty$
(\ref{eq:approximation:2}). Hence the class of functions to which
the scheme applies is restricted. In order to mitigate this
restriction, it is proposed to consider functions $g(\cdot)$ with
compact support, and for this class of functions
dimension-independent (statistical) rate of convergence
(\ref{eq:rate:MC}) is guaranteed.
\item The relatively fast rate of convergence (\ref{eq:rate:MC}) is guaranteed only for large $n$.
\end{itemize}
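The Chebyshev bound in the first item above can be inverted to size the network for a prescribed confidence level. The helper below is our illustration (the name and interface are hypothetical, not from the paper), using $P(|f_n(x)-f(x)|\ge\varepsilon)\le \mathrm{Var}/\varepsilon^2$ with $\mathrm{Var}=v/n$ for a per-node variance $v$:

```python
import math

def chebyshev_nodes(variance_per_node, eps, gamma):
    # Smallest n with (variance_per_node / n) / eps**2 <= gamma,
    # i.e. P(|f_n(x) - f(x)| >= eps) <= gamma by the Chebyshev inequality.
    return math.ceil(variance_per_node / (gamma * eps ** 2))

n_needed = chebyshev_nodes(1.0, 0.5, 0.25)   # 1 / (0.25 * 0.25) = 16 nodes
```

Note that halving $\varepsilon$ quadruples the required $n$, reflecting the $O(1/n)$ statistical rate.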
These points are summarized in Table 1.
\begin{table}
\caption{Greedy vs Random Vector Functional-Link approximators}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}|c|c|c|}
\hline
& & \\
Feature & Greedy Approximation & Random \\
& & \\
\hline
& & \\
Quality criterion & Deterministic: & Statistical:\\
& & \\
& $e_n=\|f_n-f\|$ & $ e_n=\sqrt{\mathrm{Var}_{\omega}(n,x)}$ \\
& & \\
\hline
& & \\
Convergence rate & $e_n^2\leq O(1/n)$ & $e_n^2\leq O(1/n)$ \\
& & \\
& for all $n\geq 1$ & for large $n$\\
& & \\
\hline
\end{tabular*}
\end{table}
\section{Example}\label{sec:Example}
In order to illustrate the main difference between greedy and RVFL
approximators, we consider the following example in which a simple
function is approximated by both methods, greedy approximation
(\ref{eq:approximation})--(\ref{eq:Jones_iteration}) and
approximation based on the Monte-Carlo integration
(\ref{eq:approximation:2})--(\ref{eq:rate:MC}). Let $f(x)$ be
defined as follows:
\[
f(x)=0.2 e^{-(10x-4)^2} + 0.5e^{-(80x-40)^2}+0.3e^{-(80x-20)^2}
\]
The function $f(x)$ is shown in Fig. \ref{fig:example}, top
panel. Clearly, $f(\cdot)$
belongs to the convex hull of $G$, and hence to its closure.
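For reference, the target function is simple to code directly; the snippet below (our illustration) evaluates $f$ and confirms that the dominant bump near $x=0.5$ has height close to $0.5$:

```python
import math

def target(x):
    # the three-bump target function of this example section
    return (0.2 * math.exp(-(10 * x - 4) ** 2)
            + 0.5 * math.exp(-(80 * x - 40) ** 2)
            + 0.3 * math.exp(-(80 * x - 20) ** 2))

peak = target(0.5)   # 0.5 + 0.2*exp(-1) + (negligible) ~ 0.574
```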
First, we implemented greedy approximation
(\ref{eq:approximation})--(\ref{eq:Jones_iteration}) in which we
searched for $g_n$ in the following set of functions
\[
G=\{e^{-(w^T x +
b)^2}\},
\]
where $w\in[0,200]$, $b\in[-100,0]$. The procedure for constructing $f_n$ was as follows.
Assuming $f_0(x)=0$, $e_0=-f$ we started with searching for $w_1$, $b_1$ such that
\begin{equation}\label{eq:exaple_greedy:1}
\begin{split}
&\langle 0-f(x),g(w_1x+b_1)-f(x)\rangle=\\
&-\langle f(x),g(w_1x+b_1)\rangle+\|f(x)\|^2 < \varepsilon,
\end{split}
\end{equation}
where $\varepsilon$ was set to be small ($\varepsilon=10^{-6}$ in
our case). When searching for a solution of
(\ref{eq:exaple_greedy:1}) (which exists because the function $f$ is
in the convex hull of $G$ \cite{Jones:1992}), we did not utilize any
specific optimization routine. We sampled the space of parameters
$w_i$, $b_i$ randomly and picked the first values of $w_i$, $b_i$
which satisfy (\ref{eq:exaple_greedy:1}). Integral
(\ref{eq:exaple_greedy:1}) was evaluated in quadratures over a
uniform grid of $1000$ points in $[0,1]$.
The values of $\alpha_1$ and the function $f_1$ were chosen in
accordance with (\ref{eq:Jones_iteration}) with $M''=2$, $M'=1.5$
(these values are chosen to assure $M''>M'> \sup_{g} \|g\|+\|f\|$).
The iteration was repeated, resulting in the following sequence of
functions
\[
\begin{split}
f_n(x)&=\sum_{i=1}^n c_i g(w_i^{T}x+b_i), \\ c_i&=\alpha_i(1-\alpha_{i+1})(1-\alpha_{i+2})\cdots(1-\alpha_{n})
\end{split}
\]
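The iteration can be mimicked numerically. The sketch below is a simplified stand-in for the procedure described above (random search over $G$, grid evaluation of the norms), not the authors' exact implementation: at each step it takes the convex update $f_n=(1-\alpha)f_{n-1}+\alpha g$ that most reduces the grid $L^2$ error among random candidates $g\in G$:

```python
import math
import random

GRID = [i / 399 for i in range(400)]   # uniform grid on [0, 1]

def target(x):
    return (0.2 * math.exp(-(10 * x - 4) ** 2)
            + 0.5 * math.exp(-(80 * x - 40) ** 2)
            + 0.3 * math.exp(-(80 * x - 20) ** 2))

def basis(w, b):
    # an element of G = { exp(-(w x + b)^2) } evaluated on the grid
    return [math.exp(-(w * x + b) ** 2) for x in GRID]

def sq_error(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

def greedy_step(fn, fvals, rng, tries=150):
    # convex update f_n = (1 - alpha) f_{n-1} + alpha g over random candidates
    best, best_err = fn, sq_error(fn, fvals)
    for _ in range(tries):
        g = basis(rng.uniform(0, 200), rng.uniform(-100, 0))
        for alpha in (0.05, 0.1, 0.2, 0.4, 0.7, 1.0):
            cand = [(1 - alpha) * u + alpha * v for u, v in zip(fn, g)]
            err = sq_error(cand, fvals)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err

rng = random.Random(1)
fvals = [target(x) for x in GRID]
initial_err = sq_error([0.0] * len(GRID), fvals)
fn, errs = [0.0] * len(GRID), []
for _ in range(5):
    fn, e = greedy_step(fn, fvals, rng)
    errs.append(e)
```

By construction the error sequence is non-increasing, mirroring the monotone decrease observed in the experiments.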
Evolution of the normalized approximation error
\begin{equation}\label{eq:exaple_error_normalized}
\bar{e}_n=\frac{e_n^2}{\|f\|^2}=\frac{\|f_n-f\|^2}{\|f\|^2}
\end{equation}
for $100$ trials is shown in Fig. \ref{fig:example} (middle panel).
Each trial consisted of $100$ iterations
(\ref{eq:approximation})--(\ref{eq:Jones_iteration}), thus leading
to the networks of $100$ elements at the $100$th step. We observe
that the values of $\bar{e}_n$ monotonically decrease as $O(1/n)$,
with the behavior of this approximation procedure consistent across
trials.
Second, we implemented an approximator based on the Monte-Carlo
integration. At the $n$th step of the approximation procedure we
pick randomly an element from $G$, where $w\in[0,200]$,
$b\in[-200,200]$ (uniform distribution). After an element is
selected, we add it to the current pool of basis functions
\[
P_{n-1}=\{g(w_1^Tx+b_1),\dots,g(w_{n-1}^Tx+b_{n-1})\}.
\]
Then the weights $c_i$ in the superposition
\[
f_n=\sum_{i=1}^n c_i g(w_i^Tx+b_i)
\]
are optimized so that $\|f_n-f\|\rightarrow\min$. Evolution of the
normalized approximation error $\bar{e}_n$
(\ref{eq:exaple_error_normalized}) over $100$ trials is shown in
Fig. \ref{fig:example} (bottom panel). As can be observed from the
figure, even though the values of $\bar{e}_n$ form a monotonically
decreasing sequence, they are far from $1/n$, at least for $1\leq
n\leq 100$. Behavior across trials is not consistent, at least for
the networks smaller than $100$ elements, as indicated by a
significant spread among the curves.
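One source of this inconsistency is ill-conditioning: two randomly drawn nodes may be nearly coincident, making the least-squares problem for the linear weights nearly singular. The toy computation below (ours, not from the paper) compares the Gram determinant of a well-separated pair of Gaussian basis functions with that of a nearly coincident pair:

```python
import math

XS = [i / 99 for i in range(100)]   # evaluation grid on [0, 1]

def gram(nodes):
    # Gram matrix of the basis functions g(x) = exp(-(w x + b)^2) on the grid
    gs = [[math.exp(-(w * x + b) ** 2) for x in XS] for (w, b) in nodes]
    return [[sum(gi[k] * gj[k] for k in range(len(XS))) / len(XS)
             for gj in gs] for gi in gs]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

det_separated = det2(gram([(10, -2), (10, -6)]))        # centers 0.2 and 0.6
det_coincident = det2(gram([(10, -4.0), (10, -4.01)]))  # centers 0.001 apart
```

The nearly coincident pair yields a Gram determinant several orders of magnitude smaller, so the normal equations for the linear weights become numerically fragile.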
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.85\columnwidth]{function_example.eps}
\vskip 6mm
\includegraphics[width=0.85\columnwidth]{error_greedy_trajectories.eps}
\vskip 6mm
\includegraphics[width=0.85\columnwidth]{error_RVFL_trajectories.eps}
\end{center}
\caption{Practical speed of convergence of function approximators
that use greedy algorithm (middle panel) and Monte-Carlo based
random choice of basis functions (bottom panel). The target function
is shown on the top panel. }\label{fig:example}
\end{figure}
Overall comparison of these two methods is provided in Fig.
\ref{fig:example:2}, in which the errors $\bar{e}_n$ are presented
in the form of a box plot. Black solid curves depict the median of
the error as a function of the number of elements, $n$, in the
network; blue boxes contain $50\%$ of the data points in all trials;
``whiskers'' delimit the areas containing $75\%$ of data, and red
crosses show the remaining part of the data. As we can see from
these plots, random basis function approximators, such as the RVFL
networks, mostly do not match performance of greedy approximators
for networks of reasonable size. Perhaps employing integration
methods with variance minimization could improve the performance.
This, however, would amount to using prior knowledge about the
target function $f$, making it difficult to apply the RVFL networks
to problems in which the function $f$ is uncertain.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.85\columnwidth]{function_example_greedy.eps}
\vskip 6mm
\includegraphics[width=0.85\columnwidth]{function_example_random.eps}
\vskip 6mm
\includegraphics[width=0.85\columnwidth]{function_example_random_2.eps}
\end{center}
\caption{Box plots of convergence rates for function approximators
that use greedy algorithm (top panel) and Monte-Carlo random choice
of basis functions (middle and bottom panels). The middle panel
corresponds to the case in which the basis functions leading to
ill-conditioning were discarded. The bottom panel shows performance
of the MLP trained by the method in \cite{Feldkamp98b} which is
effective at counteracting ill-conditioning while adjusting the
linear weights only. The red curve shows the upper bound for
$\bar{e}_n$ calculated in accordance with (\ref{eq:rate:greedy}). We
duplicated the average performance of the greedy algorithm (grey
solid curve) in the middle and bottom panels for convenience of
comparison.}\label{fig:example:2}
\end{figure}
Now we demonstrate performance of an MLP trained to approximate this
target function. The NN is trained by a gradient based method
described in \cite{Feldkamp98b}. At first, the full network training
is carried out for several network sizes $n=20$, $40$, $60$, $80$
and $100$ and input samples randomly drawn from $x\in [0,1]$. The
values of $\bar{e}_n$ are $1.5\cdot10^{-4}$ for all the network
sizes (as confirmed in many training trials repeated to assess
sensitivity to weight initialization). This suggests that training
and performance of much smaller networks should be examined. The
networks with $n=2,4,6,8,10$ are trained, resulting in
$\bar{e}_n=0.5749,0.1416,0.0193,0.0011,0.0004$, respectively,
averaged over $100$ trials per the network size. Next, we train
only the linear weights ($c_i$ in (\ref{eq:mlp})) of the MLP, fixing
the nonlinear weights $w_i$ and $b_i$ to random values. The results
for $\bar{e}_n$ averaged over $100$ trials are shown in Fig.
\ref{fig:example:2}, bottom panel (black curve). Remarkably, the
results of random basis network with $n=100$ are worse than those of
the MLP with $n\ge 4$ and full network training. These results
indicate that both the greedy and the Monte-Carlo approximation
results shown in Fig. \ref{fig:example:2} are quite conservative.
Furthermore, the best of those two, i.e., the greedy
approximation's, can be dramatically improved by a practical
gradient based training.
\addtolength{\textheight}{-1cm}
\section{Discussion}\label{sec:Discussion}
We just analyzed theoretically and illustrated on a simple example
what may happen if the basis for function approximation is chosen at
random. We wish to discuss recent result presented in \cite{he05}
regarding the use of the random basis function approximators. We
choose this work because it is representative of a recent trend in
neural network control literature exemplified by
\cite{ok07}--\cite{Liuetal07}. In this trend, the purpose of one or
several neural networks implementing a random basis is to account for
(and ideally cancel asymptotically) an unknown bounded modeling
nonlinearity.
While ensuring that the tracking errors are bounded asymptotically,
the main theorem in \cite{he05} and its proof do not imply
performance improvement.
Instead the proof attempts to relate design parameters $\gamma_i$
with magnitudes of disturbances and weights of neural networks.
Though the disturbance magnitude may indeed be known a priori, one
cannot assume sufficiently small bounds on the values of the weights
because the weights may need to be large in order to compensate for
residual modeling errors from randomly assigned basis functions.
Furthermore, the larger the weights or the farther the system of
basis functions from an orthogonal one (ill-conditioning), the more
time is needed for an adaptive system to converge into the desired
domain; see, e.g., \cite{French2000}. In fact, in the example of
Section \ref{sec:Example} we observed values of the hidden layer
weights as large as $200$. However, large bounds on the weights
force the control system designer to decrease design parameters
$\gamma_i$ which, in turn, results in an increase of the region of
uniform ultimate boundedness (UUB) (determined by equations
(A.5-A.9) in \cite{he05}). Ironically, the region of UUB in
\cite{he05} may not shrink to zero even in the ideal case of zero
disturbances.
The dependence of the UUB region on such uncontrollable quantities as
the weights makes it impossible to provide practically valuable
guarantees of closed-loop system performance.
\section{Conclusion}\label{sec:Conclusion}
In this work we demonstrate that, despite increasing popularity of
random basis function networks in control literature, especially in
the domain of intelligent/adaptive control, one needs to pay special
attention to practical aspects that may affect performance of these
systems in applications.
First, as we analyzed in Section II and showed in our example,
although the rate of convergence of the random basis function
approximator is qualitatively similar to that of the greedy
approximator, the rate of the random basis function approximator is
achievable only when the number of elements in the network is
sufficiently large. Second, approximators which are motivated by the
Monte-Carlo integration method offer only {\it statistical} measure
of approximation quality.
In other words, small approximation errors are guaranteed here in
{\it probability}. This means that, for practical adaptive control
in which the RVFL networks are to model or compensate system
uncertainties, employment of a re-initialization with a supervisory
mechanism monitoring quality of the RVFL network is necessary.
Unlike network training methods that adjust both the linear and
nonlinear weights of the network, such a mechanism may have to be
made robust against numerical problems (ill-conditioning) which often
occur in the Monte-Carlo method.
Our conclusion about the random basis function approximators is also
consistent with the following intuition. If the approximating
elements (network nodes) are chosen at random and not subsequently
trained, they are usually not placed in accordance with the density
of the input data. Though computationally easier than training the
nonlinear parameters, training of the linear parameters alone becomes
ineffective at reducing errors ``inherited'' from the nonlinear part of the
approximator. Thus, in order to improve the effectiveness of the random
basis function approximators one could combine unsupervised
placement of network nodes according to the input data density with
subsequent supervised or reinforcement learning of the linear
parameters of the approximator. However, such a combination of
methods is non-trivial because in adaptive control and modeling one
often has to be able to allocate approximation resources adaptively
-- and the full network training seems to be the natural way to
handle such adaptation.
\section*{Acknowledgment}
The authors are grateful to Prof. A.N. Gorban for useful comments and numerous technical discussions during preparation of this work. The first author's research was
supported by a Royal Society International Joint Project grant, and
partially supported by RFBR grant 8-08-00103-a.
\bibliographystyle{plain}
\section{Introduction and main results}
The motion of particles in an ideal fluid in $\mathbb R^3$ is described by its velocity field $\mathbf{v}(\boldsymbol x,t)$ which satisfies the Euler equation
\begin{align}\label{1-1}
\begin{cases}
\partial_t\mathbf v+(\mathbf v\cdot \nabla)\mathbf v=-\nabla P,
\\
\nabla\cdot\mathbf v=0,
\end{cases}
\end{align}
for some pressure function $P(\boldsymbol x,t)$. Corresponding to $\mathbf{v}$ is its vorticity vector defined by $\pmb{\omega}:=\nabla\times\mathbf{v}$. Taking curl of the first equation in Euler equation \eqref{1-1}, H. Helmholtz obtained the equation for vorticity
\begin{align}\label{1-2}
\begin{cases}
\partial_t \pmb{\omega}+(\mathbf{v}\cdot\nabla)\pmb{\omega}=(\pmb{\omega}\cdot\nabla)\mathbf{v},
\\
\mathbf{v}=\nabla\times(-\Delta)^{-1}\pmb{\omega}.
\end{cases}
\end{align}
We refer to \cite{Che, MB} for more detail about this system.
We are interested in solutions of the Euler equation whose vorticities are large and uniformly concentrated near an evolving smooth curve embedded in $\mathbb{R}^3$. This type of solution, a \emph{vortex filament}, has been a subject of active study for a long time. By the first Helmholtz theorem, a vortex in $\mathbb{R}^3$ must form a loop with compact support. The simplest vortex loop is a circular \emph{vortex ring}, whose analysis traces back to the works of Helmholtz \cite{Hel} in 1858 and Lord Kelvin \cite{Tho} in 1867. Vortex rings are an intriguing marvel of fluid dynamics that can be easily observed experimentally, e.g. when smoke is ejected from a tube, a bubble rises in a liquid, or ink is dropped into another fluid. We refer the reader to \cite{Akh, MGT, Sa92} for some good historical reviews of the achievements in experimental, analytical, and numerical studies of vortex rings.
Helmholtz observed that vortex rings have an approximately steady form and travel with a large constant velocity along the axis of the ring. In 1970, Fraenkel \cite{Fra1} (see also \cite{Fra2}) provided the first constructive proof of the existence of a vortex ring concentrated around a torus of fixed radius $r^*$ with a small, nearly singular cross-section $\varepsilon>0$, traveling with constant speed $\sim|\ln\varepsilon|$, rigorously establishing the behavior predicted by Helmholtz (see Figure \ref{fig1}(a), where the cross-section is depicted much `fatter' than in reality, so as to show the streamline pattern clearly). Indeed, Lord Kelvin and Hicks showed that such a vortex ring moves at the approximate velocity (see \cite{Lam,Tho})
\begin{equation}\label{KH}
\frac{\kappa}{4\pi r^*}\left(\ln\frac{8r^*}{\varepsilon}-\frac{1}{4}\right),
\end{equation}
where $\kappa$ denotes its circulation. Fraenkel's result is consistent with the Kelvin--Hicks formula \eqref{KH}.
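As a numerical sanity check (our illustration, with arbitrarily chosen $\kappa$ and $W$), the Kelvin--Hicks speed \eqref{KH} is dominated by its logarithmic term as $\varepsilon\to 0$, consistent with a ring of radius $r^*=\kappa/(4\pi W)$ traveling at speed $W|\ln\varepsilon|$:

```python
import math

def kelvin_hicks_speed(kappa, r_star, eps):
    # translational speed of a thin vortex ring (Kelvin--Hicks formula)
    return kappa / (4 * math.pi * r_star) * (math.log(8 * r_star / eps) - 0.25)

kappa, W = 1.0, 2.0
r_star = kappa / (4 * math.pi * W)
# ratio of the full formula to the leading term W * |ln eps|
ratios = [kelvin_hicks_speed(kappa, r_star, 10.0 ** (-k)) / (W * k * math.log(10))
          for k in (2, 4, 8)]
```

With $r^*=\kappa/(4\pi W)$ the prefactor $\kappa/(4\pi r^*)$ equals $W$ exactly, so the ratio increases toward $1$ as $\varepsilon\to 0$.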
Roughly speaking, vortex rings can be characterized simply as an axi-symmetric flow with a (thin or fat) toroidal vortex tube. Here the word `toroidal' means topologically equivalent to a torus. In the usual cylindrical coordinate frame $\{\mathbf{e}_r, \mathbf{e}_\theta, \mathbf{e}_z\}$, the velocity field $\mathbf{v}$ of an axi-symmetric flow can be expressed in the following way
\begin{equation*}
\mathbf{v}=v^r(r,z)\mathbf{e}_r+v^\theta(r,z)\mathbf{e}_\theta+v^z(r,z)\mathbf{e}_z.
\end{equation*}
The component $v^\theta$ in the $\mathbf{e}_\theta$ direction is usually called the swirl velocity. If an axi-symmetric flow is non-swirling (i.e., $v^\theta \equiv 0$), then the vorticity admits its angular component $\omega^\theta$ only, namely, $\pmb{\omega}=\omega^\theta \mathbf{e}_\theta$. Let $\zeta=\omega^\theta/r$ be the potential vorticity. Then the vorticity equation \eqref{1-2} is reduced to an active scalar equation for $\zeta$
\begin{equation}\label{1-3}
\partial_t \zeta+\mathbf{v}\cdot\nabla\zeta=0,\ \ \ \mathbf{v}=\nabla\times(-\Delta)^{-1}\left(r\zeta \right).
\end{equation}
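For completeness, here is a sketch of the standard computation behind this reduction, using the $\theta$-independence of the flow and the cylindrical identity $\partial_\theta\mathbf{e}_r=\mathbf{e}_\theta$:

```latex
\begin{align*}
(\pmb{\omega}\cdot\nabla)\mathbf{v}
 &=\frac{\omega^\theta}{r}\,\partial_\theta\bigl(v^r\mathbf{e}_r+v^z\mathbf{e}_z\bigr)
  =\frac{v^r\omega^\theta}{r}\,\mathbf{e}_\theta,
 \qquad\text{so}\qquad
 \partial_t\omega^\theta+(\mathbf{v}\cdot\nabla)\omega^\theta
  =\frac{v^r}{r}\,\omega^\theta,
\\
\frac{D}{Dt}\Bigl(\frac{\omega^\theta}{r}\Bigr)
 &=\frac{1}{r}\,\frac{D\omega^\theta}{Dt}
  -\frac{\omega^\theta}{r^{2}}\,\frac{Dr}{Dt}
  =\frac{v^r\omega^\theta}{r^{2}}-\frac{v^r\omega^\theta}{r^{2}}=0,
\end{align*}
```

since $Dr/Dt=v^r$ along particle trajectories; this is exactly the transport equation for $\zeta=\omega^\theta/r$.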
We shall refer to an axi-symmetric non-swirling flow as a `\emph{vortex ring}' if there is a toroidal region inside of which $\boldsymbol{\omega}\not = 0$ (the core), and outside of which $\boldsymbol{\omega}= 0$.
By a \emph{steady vortex ring} we mean a vortex ring that moves vertically at a constant speed forever without changing its shape or size. In other words, a steady vortex ring is of the form
\begin{equation}\label{1-4}
\zeta(\boldsymbol x,t)=\zeta(\boldsymbol x+t\mathbf{v}_\infty),
\end{equation}
where $\mathbf{v}_\infty=-W\mathbf{e}_z$ is a constant propagation speed. Substituting \eqref{1-4} into \eqref{1-3}, we arrive at a stationary equation
\begin{equation}\label{1-5}
\left(\mathbf{v}_\infty+\mathbf{v}\right)\cdot\nabla\zeta=0,\ \ \ \mathbf{v}=\nabla\times(-\Delta)^{-1}\left(r\zeta \right).
\end{equation}
In 1894, Hill \cite{Hil} found an explicit solution of \eqref{1-5} supported in a sphere (Hill's spherical vortex; see Figure \ref{fig1}(b)). In 1972, Norbury \cite{Nor72} provided a constructive proof for the existence of steady vortex rings with constant $\zeta$ that are close to Hill's vortex but are homeomorphic to a solid torus; he also presented numerical results for the existence of a family of steady vortex rings of small cross-section \cite{Nor73}. General existence results for steady vortex rings with a given vorticity function were first established by Fraenkel--Berger \cite{BF1} in 1974. Following these pioneering works, the existence and abundance of steady vortex rings have been rigorously established; see \cite{AS, AT88, Bad, CWZ, DW2,FT,MGT, Ni, VS, YJ} and the references therein.
Compared with the results on existence, rather limited work has been done on the uniqueness of steady vortex rings. In 1986, Amick--Fraenkel \cite{AF} proved, by the method of moving planes, that Hill's vortex is the unique solution when viewed in a natural weak formulation; in 1988 they \cite{AF88} also established local uniqueness for Norbury's nearly spherical vortex. However, to the best of our current knowledge, the uniqueness of steady vortex rings of small cross-section is still open. The first goal of this paper is to give an answer to this question.
The stability of steady flows is a classical object of study in fluid dynamics. Very recently, Choi \cite{Choi20} established the orbital stability of Hill's vortex. We would like to mention that Hill's vortex is not exactly a steady vortex ring, since its vortex core is a ball, not a topological torus. It is still not clear whether stable steady vortex rings exist. Recent numerical computations in \cite{Pro} revealed that while `thin' vortex rings remain neutrally stable to axi-symmetric perturbations, they become linearly unstable to such perturbations when they are sufficiently `fat'. By virtue of our local uniqueness result, we will establish the orbital stability of a family of steady vortex rings of small cross-section, which is the second main goal of this paper.
\begin{center}\label{fig1}
\begin{tikzpicture}
\draw[line width=0.8pt,<-]
(0,5) coordinate --(0,-5);
\draw[line width=0.8pt]
(0,0) coordinate--(0.95,0);
\draw[line width=0.8pt]
(2.95,0) coordinate--(6.5,0);
\draw (1,1.4)--(1.5,1);
\draw[line width=1pt,->] (6.4,0)--(6.5,0);
\draw[line width=1pt,->] (3.6,0.1)--(3.6,0);
\draw[line width=1pt,->] (0.55,-0.1)--(0.55,0);
\draw[line width=1pt,->] (1.8,1.75)--(1.9,1.75);
\draw[line width=1pt,->] (1.8,-1.75)--(1.7,-1.75);
\draw[line width=1pt,->] (2.25,0.1)--(2.25,0);
\draw[line width=1pt,->] (1.25,-0.1)--(1.25,0);
\draw[line width=1pt,->] (1.75,0.55)--(1.85,0.55);
\draw[line width=1pt,->] (1.75,-0.55)--(1.65,-0.55);
\draw[line width=1pt,->] (4.25,0.1)--(4.25,0);
\draw[line width=1pt,->] (2.35,4)--(2.35,3.9);
\draw[line width=1pt,->] (2.35,-4)--(2.35,-4.1);
\draw[line width=1pt,->] (0.8,-4)--(0.8,-4.1);
\draw[line width=1pt,->] (0.8,4)--(0.8,3.9);
\draw[line width=1pt,->] (0.25,0.1)--(0.25,0);
\draw[line width=0.8pt]
(1.75,0) ellipse (0.5 and 0.55);
\draw [line width=0.8pt](1.8,1.75)arc(90:270:1.25 and 1.75);
\draw [line width=0.8pt](1.8,-1.75)arc(270:450:1.8 and 1.75);
\draw [line width=1.75pt](1.85,1.05)arc(90:270:0.9 and 1.05);
\draw [line width=1.75pt](1.85,-1.05)arc(270:450:1.125 and 1.05);
\draw[line width=0.8pt] plot[smooth] coordinates{(0.8,5) (0.8,2.8) (0.45,1.3) (0.25,0) (0.45,-1.3) (0.8,-2.8) (0.8,-5)};
\draw[line width=0.8pt] plot[smooth] coordinates{(2.35,5) (2.5,3) (3.75,1.5) (4.25,0) (3.75,-1.5) (2.5,-3) (2.35,-5)};
\draw[line width=0.8pt,<-]
(8,5) coordinate --(8,-5);
\draw[line width=0.8pt]
(10.5,0) coordinate--(14.5,0);
\draw[line width=1pt,->] (14.4,0)--(14.5,0);
\draw (10.9,1.7)--(10.3,1);
\draw[line width=1pt,->] (11.25,0.1)--(11.25,0);
\draw[line width=1pt,->] (9.35,4)--(9.35,3.9);
\draw[line width=1pt,->] (9.35,-4)--(9.35,-4.1);
\draw[line width=1pt,->] (10.1,0.1)--(10.1,0);
\draw[line width=1pt,->] (8,-0.1)--(8,0);
\draw[line width=1pt,->] (8.4,-0.1)--(8.4,0);
\draw[line width=1pt,->] (8,3.5)--(8,3.4);
\draw[line width=1pt,->] (8,-3.5)--(8,-3.6);
\draw[line width=1pt,->] (9.85,0.1)--(9.85,0);
\draw[line width=1pt,->] (9.15,-0.1)--(9.15,0);
\draw [line width=0.8pt](8.9,1.55)arc(90:270:0.5 and 1.55);
\draw [line width=0.8pt](8.9,-1.55)arc(270:450:1.2 and 1.55);
\draw[line width=0.8pt]
(9.5,0) ellipse (0.35 and 0.65);
\draw[line width=1.75pt](8,2.5) .. controls (11.25,2) and (11.25,-2) .. (8,-2.5);
\draw[line width=0.8pt] plot[smooth] coordinates{(9.35,5) (9.5,3) (10.75,1.5) (11.25,0) (10.75,-1.5) (9.5,-3) (9.35,-5)};
\filldraw
(-0.25,5) node[below] {$z$}
(6.75,0) node{$r$}
(1,1.3) node[above]{$\partial \Omega$}
(11,1.6) node[above]{$\partial \Omega$};
\filldraw
(7.75,5) node[below] {$z$}
(14.75,0) node{$r$};
\filldraw
(11,-5.5) node{(b) Streamline pattern for Hill's vortex.};
\filldraw
(3,-5.5) node{(a) Streamline pattern for vortex ring}
(3,-6) node{of small cross-section.};
\filldraw
(7.25,-6.8) node{\textbf{Fig.1. Two types of vortex in axi-symmetric flow.}};
\filldraw
(1.75,0) circle(0.035)
(9.5,0) circle(0.035);
\end{tikzpicture}
\end{center}
We shall focus on steady vortex rings for which $\zeta$ is a constant throughout the core. As remarked by Fraenkel \cite{Fra2}, this simplest of all admissible vorticity distributions has been a favourite for over a century. Now, we turn to state our main results. To this end, we need to introduce some notation. We shall say that a scalar function $\vartheta:\mathbb{R}^3\to \mathbb{R}$ is axi-symmetric if it has the form $\vartheta(\boldsymbol x)=\vartheta(r,z)$, and a subset $\Omega\subset \mathbb{R}^3$ is axi-symmetric if its characteristic function $\boldsymbol 1_\Omega$ is axi-symmetric. The cross-section parameter $\sigma$ of an axi-symmetric set $\Omega\subset \mathbb{R}^3$ is defined by
\begin{equation*}
\sigma(\Omega):=\frac{1}{2}\cdot\sup\left\{\boldsymbol \delta_{z}(\boldsymbol x,\boldsymbol y)\,\,|\,\,\boldsymbol x,\boldsymbol y\in \Omega \right\},
\end{equation*}
where the axisymmetric distance $\boldsymbol \delta_z$ is given by
\begin{equation*}
\boldsymbol \delta_z(\boldsymbol x,\boldsymbol y):=\inf\left\{|\boldsymbol x-Q(\boldsymbol y)|\,\,\,\,| \,\, \ Q \ \text{is a rotation around}\ \mathbf{e}_z\right\}.
\end{equation*}
Let $\mathcal{C}_r=\{\boldsymbol x\in\mathbb{R}^3 \mid x_1^2+x_2^2=r^2,\ x_3=0\}$ be a circle of radius $r$ in the plane perpendicular to $\mathbf{e}_z$. For an axi-symmetric set $\Omega\subset \mathbb{R}^3$, we define the axi-symmetric distance between $\Omega$ and $\mathcal C_r$ as follows
\begin{equation*}
\text{dist}_{\mathcal{C}_r}(\Omega)=\sup_{\boldsymbol x\in \Omega}\inf_{\boldsymbol x'\in{\mathcal{C}_r}}|\boldsymbol x-\boldsymbol x'|.
\end{equation*}
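Both distances reduce to explicit two-dimensional formulas in the $(r,z)$ half-plane, because the infimum over rotations is attained when the two points lie in the same meridian half-plane. The helper below (our illustration) makes this concrete:

```python
import math

def delta_z(x, y):
    # axi-symmetric distance: min |x - Q(y)| over rotations Q about e_z,
    # attained when both points lie in the same meridian half-plane
    rx, ry = math.hypot(x[0], x[1]), math.hypot(y[0], y[1])
    return math.hypot(rx - ry, x[2] - y[2])

def dist_to_circle(x, r):
    # pointwise distance from x to the circle C_r in the plane {x3 = 0};
    # dist_{C_r}(Omega) is the supremum of this quantity over x in Omega
    return math.hypot(math.hypot(x[0], x[1]) - r, x[2])

d1 = delta_z((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))   # same radius, same height -> 0
d2 = dist_to_circle((3.0, 4.0, 12.0), 5.0)       # radius 5, height 12 -> 12
```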
The circulation of a steady vortex ring $\zeta$ is given by
\begin{equation*}
\frac{1}{2\pi}\int_{\mathbb{R}^3}\zeta(\boldsymbol x)d\boldsymbol x.
\end{equation*}
A steady vortex ring $\zeta$ is said to be \emph{centralized} if $\zeta$ is symmetric non-increasing in $z$, namely,
\begin{equation*}
\begin{split}
& \zeta(r,z)=\zeta(r,-z),\ \ \text{and} \\
& \zeta(r,z)\ \text{is a non-increasing function of}\ z\ \text{for}\ z>0,\,\,\,\text{for each fixed}\ r>0.
\end{split}
\end{equation*}
Our first main result concerns the existence of steady vortex rings of small cross-section for which $\zeta$ is constant throughout the core. The existence of such solutions was proved in \cite{CWZ,Fra2,FT} by different methods. However, we will construct steady vortex rings from a new perspective, that of the Stokes stream function, which not only leads to a desired estimate for the cross-section, but also sheds light on our approach to uniqueness.
\begin{theorem}[Existence]\label{thm1}
Let $\kappa$ and $W$ be two positive numbers. Then there exists a small number $\varepsilon_0>0$ such that, for every $\varepsilon\in (0,\varepsilon_0]$ there is a centralized steady vortex ring $\zeta_\varepsilon$ with fixed circulation $\kappa$ and translational velocity $W\ln \varepsilon\,\mathbf{e}_z$. Moreover,
\begin{itemize}
\item [(i)]$\zeta_\varepsilon=\varepsilon^{-2}\boldsymbol 1_{\Omega_\varepsilon}$ for some axi-symmetric topological torus $\Omega_\varepsilon\subset \mathbb{R}^3$.
\item [(ii)]It holds $C_1\varepsilon \le \sigma\left(\Omega_\varepsilon\right)<C_2\varepsilon$ for some constants $0<C_1<C_2$.
\item [(iii)]As $\varepsilon\to 0$, $\mathrm{dist}_{\mathcal C_{r^*}}( \Omega_\varepsilon)\to0$ with $r^*:={\kappa}/{4\pi W}$.
\end{itemize}
\end{theorem}
Our existence result is established by an improved Lyapunov--Schmidt reduction argument, building on the planar vortex patch problem in \cite{CPY}. Compared with the method taken in \cite{CPY}, our approach appears to be the first application of the reduction argument to a non-uniformly elliptic operator. To obtain the desired estimates, we use an equivalent integral formulation of the problem, and introduce a weighted $L^\infty$ norm to handle the degeneracy at infinity and the singularity near the $z$-axis. Another difficulty in our construction is the lack of compactness, which arises from working in the whole space $\mathbb R^3$. To overcome it, we use several techniques so that versions of the Ascoli--Arzel\`a theorem can be applied to recover compactness.
There are similar existence results for different types of steady vortex rings in \cite{AS, Bad,CWZ,DV, Fra1, Fra2, FT}. For instance, de Valeriola et al. \cite{DV} constructed vortex rings with $C^{1,\alpha}$ regularity via the mountain pass theorem, and recently Cao et al. \cite{CWZ} studied the desingularization of vortex rings by solving variational problems for the potential vorticity $\zeta$. However, in the absence of a comprehensive uniqueness theory, the relations between the solutions with fixed vorticity distributions constructed by these various methods remain unclear. Our second main result addresses this question.
\begin{theorem}[Uniqueness]\label{thm2}
Let $\kappa$ and $W$ be two positive numbers. Let $\{\zeta^{(1)}_\varepsilon\}_{\varepsilon>0}$ and $\{\zeta^{(2)}_\varepsilon\}_{\varepsilon>0}$ be two families of centralized steady vortex rings with fixed circulation $\kappa$ and translational velocity $W\ln \varepsilon\,\mathbf{e}_z$. If, in addition,
\begin{itemize}
\item [(i)]$\zeta^{(1)}_\varepsilon=\varepsilon^{-2}\boldsymbol 1_{\Omega^{(1)}_\varepsilon}$ and $\zeta^{(2)}_\varepsilon=\varepsilon^{-2}\boldsymbol 1_{\Omega^{(2)}_\varepsilon}$ for certain axi-symmetric topological tori $\Omega^{(1)}_\varepsilon$, $\Omega^{(2)}_\varepsilon\subset \mathbb{R}^3$.
\item [(ii)]As $\varepsilon\to 0$, $\sigma\left(\Omega^{(1)}_\varepsilon\right)+\sigma\left(\Omega^{(2)}_\varepsilon\right)\to 0$.
\item [(iii)]There exists a $\delta_0>0$ such that $\Omega^{(1)}_\varepsilon \cup \Omega^{(2)}_\varepsilon\subset \left\{\boldsymbol x\in \mathbb{R}^3\mid \sqrt{x_1^2+x_2^2}\ge \delta_0 \right\}$ for all $\varepsilon>0$.
\end{itemize}
Then there exists a small $\varepsilon_0>0$ such that $\zeta^{(1)}_\varepsilon\equiv\zeta^{(2)}_\varepsilon$ for all $\varepsilon \in (0,\varepsilon_0]$.
\end{theorem}
To obtain the uniqueness, we first give a rough estimate for vortex rings by blow-up analysis. Then we improve the estimate step by step, and obtain an accurate version of the Kelvin--Hicks formula \eqref{KH}. In fact, our estimate is slightly stronger than Fraenkel's in \cite{Fra1}, thanks to a careful study of the vortex boundary and a bootstrap procedure. With this delicate estimate in hand, a local Pohozaev identity can be used to derive a contradiction if there are two different vortex rings satisfying the assumptions of Theorem \ref{thm2}. It is notable that the methods in \cite{AF,AF88} depend strongly on the specific distribution of vorticity in the cross-section, whereas our method has much broader applicability and provides a general approach to the uniqueness of `thin' vortices in the axi-symmetric case.
Using the uniqueness result in Theorem \ref{thm2}, we can further show that the solutions constructed in Theorem \ref{thm1} are orbitally stable in the Lyapunov sense. Recalling \eqref{1-3}, for an axi-symmetric flow without swirl, the vorticity equation \eqref{1-2} reduces to the active scalar equation for the potential vorticity $\zeta=\omega^\theta/r$:
\begin{equation}\label{1-7}
\begin{cases}
\partial_t \zeta+\mathbf{v}\cdot\nabla\zeta=0,\,\qquad\,\,\,\,\, \, \, \, \boldsymbol x\in \mathbb{R}^3,\ \ t>0, \\
\mathbf{v}=\nabla\times(-\Delta)^{-1}\left(r\zeta \right),\,\,\,\,\,\,\,\boldsymbol x\in \mathbb{R}^3,\ \ t>0,\\
\zeta|_{t=0} =\zeta_0,\,\qquad\qquad\qquad\, \boldsymbol x\in \mathbb{R}^3.
\end{cases}
\end{equation}
The existence and uniqueness of solutions $\zeta(\boldsymbol x,t)$ can be studied analogously to the two-dimensional case. We refer to \cite{B96,Choi20, MB, Nobi, Sain, Ukh} for some discussion in this direction. Let $BC([0,\infty);X)$ denote the space of all bounded continuous functions from $[0,\infty)$ into a Banach space $X$. Define the weighted space $L^1_\text{w}(\mathbb{R}^3)$ by $L^1_\text{w}(\mathbb{R}^3)=\{\vartheta: \mathbb{R}^3 \to \mathbb{R}\ \text{measurable} \mid r^2\vartheta\in L^1(\mathbb{R}^3)\}$. We introduce the kinetic energy of the fluid
\begin{equation*}
E[\zeta]:=\frac{1}{2}\int_{\mathbb{R}^3}|\mathbf{v}(\boldsymbol x)|^2d\boldsymbol x,\ \ \ \mathbf{v}=\nabla\times(-\Delta)^{-1}\left(r\zeta \right),
\end{equation*}
and its impulse
\begin{equation*}
\mathcal{P}[\zeta]=\frac{1}{2}\int_{\mathbb{R}^3}r^2\zeta(\boldsymbol x)d \boldsymbol x=\pi\int_\Pi r^3 \zeta drdz.
\end{equation*}
The following result has been established; see, e.g., Lemma 3.4 in \cite{Choi20}.
\begin{proposition}\label{Pro1}
For any non-negative axi-symmetric function $\zeta_0\in L^1\cap L^\infty\cap L^1_\mathrm{w}(\mathbb{R}^3)$ satisfying $r\zeta_0\in L^\infty(\mathbb{R}^3)$, there exists a unique weak solution $\zeta\in BC([0,\infty);L^1\cap L^\infty\cap L^1_\mathrm{w}(\mathbb{R}^3))$ of \eqref{1-7} for the initial data $\zeta_0$ such that
\begin{equation*}
\begin{split}
\zeta(\cdot,t)&\ge 0\ \ \text{and is axi-symmetric}, \\
\|\zeta(\cdot,t)\|_{L^p(\mathbb{R}^3)} & =\|\zeta_0\|_{L^p(\mathbb{R}^3)},\ \ 1\le p\le \infty, \\
\mathcal{P}[\zeta(\cdot,t)] &= \mathcal{P}[\zeta_0], \\
E[\zeta(\cdot,t)] &=E[\zeta_0],\ \ \ \text{for all}\ t>0,
\end{split}
\end{equation*}
and, for any $0<\upsilon_1<\upsilon_2<\infty$ and for each $t>0$,
\begin{equation*}
\int_{\{\boldsymbol x\in\mathbb{R}^3\mid \upsilon_1<\zeta(\boldsymbol x,t)<\upsilon_2\}}\zeta(\boldsymbol x,t)d\boldsymbol x=\int_{\{\boldsymbol x\in\mathbb{R}^3\mid \upsilon_1<\zeta_0(\boldsymbol x)<\upsilon_2\}}\zeta_0(\boldsymbol x)d\boldsymbol x.
\end{equation*}
\end{proposition}
Our result on nonlinear orbital stability is as follows.
\begin{theorem}[Stability]\label{thm4}
The steady vortex ring $\zeta_\varepsilon$ in Theorem \ref{thm1} is stable up to translations in the following sense:
For any $\eta>0$, there exists $\delta_1>0$ such that for any non-negative axi-symmetric function $\zeta_0$ satisfying $\zeta_0, r\zeta_0\in L^\infty(\mathbb{R}^3)$ and
\begin{equation*}
\|\zeta_0-\zeta_\varepsilon\|_{L^1\cap L^2(\mathbb{R}^3)}+\|r^2(\zeta_0-\zeta_\varepsilon)\|_{L^1(\mathbb{R}^3)}\le \delta_1,
\end{equation*}
the corresponding solution $\zeta(\boldsymbol x,t)$ of \eqref{1-7} for the initial data $\zeta_0$ satisfies
\begin{equation*}
\inf_{\tau\in \mathbb{R}}\left\{\|\zeta(\cdot-\tau \mathbf{e}_z,t)-\zeta_\varepsilon\|_{L^1\cap L^2(\mathbb{R}^3)}+\|r^2(\zeta(\cdot-\tau \mathbf{e}_z,t)-\zeta_\varepsilon)\|_{L^1(\mathbb{R}^3)} \right\}\le \eta
\end{equation*}
for all $t>0$. Here, $\|\cdot\|_{L^1\cap L^2(\mathbb{R}^3)}$ means $\|\cdot\|_{L^1(\mathbb{R}^3)}+\|\cdot\|_{L^2(\mathbb{R}^3)}$.
\end{theorem}
The paper is organized as follows. In Section \ref{sec2}, we construct vortex rings of small cross-section by a Lyapunov--Schmidt reduction argument. In Section \ref{sec3}, we carefully study the asymptotic behavior of vortex rings as their cross-sections shrink, and prove the uniqueness result in Theorem \ref{thm2}. The nonlinear orbital stability of vortex rings of small cross-section is proved in Section \ref{sec4} based on a variational method. In Appendices \ref{appA} and \ref{appB}, we discuss the symmetry and boundary shape of the cross-section. In Appendix \ref{appC}, we give several estimates for the local Pohozaev identity, which are used to prove uniqueness in Section \ref{sec3}.
\bigskip
\section{Existence}\label{sec2}
\subsection{Formulation of the problem}
The main objective of this paper is to deal with steady vortex rings, which are actually traveling-wave solutions for \eqref{1-7}.
Thanks to the continuity equation in \eqref{1-1}, we can find a Stokes stream function $\Psi$ such that
\begin{equation*}
\mathbf{v}=\frac{1}{r}\left(-\frac{\partial\Psi}{\partial z}\mathbf{e}_r+\frac{\partial\Psi}{\partial r}\mathbf{e}_z\right).
\end{equation*}
In terms of the Stokes stream function $\Psi$, the problem of steady vortex rings can be reduced to a steady problem on the meridional half plane $\Pi=\{(r,z)\mid r>0\}$ of the form:
\begin{numcases}
{ }
\label{2-1} \mathcal{L}\Psi =0, \,\ \, \ \ \ \ \ \ \ \ \, &\text{in}\ $\Pi \setminus A$, \\
\label{2-2} \mathcal{L}\Psi=\lambda f_0(\Psi), \ \ \ \ &\text{in}\ $A$,\\
\label{2-3} \Psi(0,z)=-\mu \le 0, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \, &\\
\label{2-4} \Psi=0, \ \ \ \ &\text{on}\ $\partial A$,\\
\label{2-5} \frac{1}{r}\frac{\partial \Psi}{\partial r} \to -\mathscr{W}\ \text{and}\ \frac{1}{r}\frac{\partial \Psi}{\partial z} \to 0,\ \ \text{as}\ \ r^2+z^2\to \infty,&
\end{numcases}
where
\begin{equation*}
\mathcal{L}:=-\frac{1}{r}\frac{\partial}{\partial r}\Big(\frac{1}{r}\frac{\partial}{\partial r}\Big)-\frac{1}{r^2}\frac{\partial^2}{\partial z^2}.
\end{equation*}
Here the positive vorticity function $f_0$ and the vortex-strength parameter $\lambda>0$ are prescribed; $A$ is the (a priori unknown) cross-section of the vortex ring; $\mu$ is called the flux constant, measuring the flow rate between the $z$-axis and $\partial A$; the constant $\mathscr{W}>0$ is the translational speed, and the condition \eqref{2-5} means that the limit of the velocity field $\mathbf{v}$ at infinity is $-\mathscr{W} \mathbf{e}_z$. For a detailed derivation of this system, we refer to \cite{AF, Choi20, BF1} and the references therein.
By the maximum principle, we see that $\Psi>0$ in $A$ and $\Psi<0$ in $\Pi\backslash \bar{A}$. Therefore the cross-section $A$ is given by
\begin{equation*}
A=\left\{(r,z)\in \Pi \mid \Psi(r,z)>0 \right\}.
\end{equation*}
It is convenient to write
\begin{equation*}
\Psi(r,z)=\psi(r,z)-\frac{1}{2}\mathscr{W}r^2-\mu,
\end{equation*}
where $\psi$ is the stream function due to vorticity. In addition, it is also convenient to define
\[
f(\tau)=\left\{
\begin{array}{ll}
0, & \tau\le 0, \\
f_0(\tau), & \tau>0,
\end{array}
\right.
\]
so that $\lambda f(\Psi)$ is exactly the potential vorticity $\zeta$. We can now rewrite \eqref{2-1}-\eqref{2-5} as
\begin{numcases}
{(\mathscr{P})\ \ \ }
\label{2-6} \mathcal{L}\psi =\lambda f(\psi-\frac{1}{2}\mathscr{W}r^2-\mu),\ \ \ \text{in}\ \Pi, & \\
\label{2-7} \psi(0,z)=0, &\\
\label{2-8} \psi,\ \ {|\nabla \psi|}/{r} \to 0 \ \ \text{as}\ \ r^2+z^2\to \infty.&
\end{numcases}
In the following, we will focus on the construction of $\psi$ satisfying $(\mathscr{P})$.
In order to simplify notations, we will use
$$\mathbb R^2_+=\{\boldsymbol x=(x_1,x_2) \ | \ x_1>0\}$$
to denote the meridional half plane $\Pi$, and write the elliptic operator $-\mathcal L$ as
\begin{equation}\label{delta*}
\Delta^*:=\frac{1}{x_1}\text{div}\left(\frac{1}{x_1}\nabla\right).
\end{equation}
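Expanding the divergence in \eqref{delta*} shows that $\Delta^*$ is precisely $-\mathcal L$ written in the variables $(x_1,x_2)=(r,z)$:
\begin{align*}
\Delta^*\psi
&=\frac{1}{x_1}\,\partial_{x_1}\Big(\frac{1}{x_1}\,\partial_{x_1}\psi\Big)
 +\frac{1}{x_1}\,\partial_{x_2}\Big(\frac{1}{x_1}\,\partial_{x_2}\psi\Big)\\
&=\frac{1}{x_1}\,\partial_{x_1}\Big(\frac{1}{x_1}\,\partial_{x_1}\psi\Big)
 +\frac{1}{x_1^2}\,\partial_{x_2}^2\psi
 =-\mathcal L\psi,
\end{align*}
since $x_1$ is independent of $x_2$. In particular, \eqref{2-6} with $\lambda=\varepsilon^{-2}$ takes the form $-\varepsilon^2\Delta^*\psi=f(\psi-\frac{1}{2}\mathscr{W}r^2-\mu)$.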
We will use $\varepsilon:=\lambda^{-1/2}$ as the parameter instead of $\lambda$ in the rest of this paper. Since we are concerned with steady vortex rings for which $\zeta$ is a constant throughout the core, we will choose the vorticity function $f$ in \eqref{2-6} to have the following form
\[
f(\tau)=\left\{
\begin{array}{ll}
0, & \tau\le0, \\
1, & \tau>0,
\end{array}
\right.
\]
and the cross-section of the vortex ring is
$$A_\varepsilon=\left\{\boldsymbol x\in \mathbb R^2_+ \ \big| \ \psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\right\}$$ for some flux constant $\mu_\varepsilon>0$.
Here we let $\mathscr{W}$ equal $W\ln(1/\varepsilon)$ in accordance with the Kelvin--Hicks formula \eqref{1-3}. The fact that $\mu_\varepsilon>0$ means that $A_\varepsilon$ does not touch the $x_2$-axis. Thus we can rewrite $(\mathscr{P})$ as
\begin{equation}\label{2-9}
\begin{cases}
-\varepsilon^2{\Delta^*}\psi_\varepsilon=
\boldsymbol1_{\left\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\right\}}, & \text{in} \ \mathbb R^2_+,
\\
\psi_\varepsilon=0, & \text{on} \ x_1=0,
\\
\psi_\varepsilon, \ |\nabla\psi_\varepsilon|/x_1\to0, &\text{as} \ |\boldsymbol x |\to \infty.
\end{cases}
\end{equation}
Since the problem is invariant in $x_2$-direction, we may assume
\begin{equation}\label{2-10}
\psi_\varepsilon(x_1,x_2)=\psi_\varepsilon(x_1,-x_2)
\end{equation}
by the method of moving planes in Appendix \ref{appA} (see also Lemma 2.1 in \cite{AF88}); this also means that the steady vortex ring $\zeta_\varepsilon$ corresponding to $\psi_\varepsilon$ is centralized, see \cite{AF88}.
The existence result in Theorem \ref{thm1} can be deduced from the following proposition.
\begin{proposition}\label{prop2-1}
For every $\kappa>0$ and $W>0$, there exists an $\varepsilon_0>0$ such that for each $\varepsilon\in (0,\varepsilon_0]$, problem \eqref{2-9} has a solution $\psi_\varepsilon$ satisfying \eqref{2-10}. Moreover,
\begin{itemize}
\item[(i)] The cross-section $A_\varepsilon$ is a convex domain, and satisfies
\begin{equation*}
B_{\sqrt{\frac{\kappa}{z_1\pi}}\varepsilon(1-L_1\varepsilon|\ln\varepsilon|)}(\boldsymbol z)\subset A_\varepsilon\subset B_{\sqrt{\frac{\kappa}{z_1\pi}}\varepsilon(1+L_2\varepsilon|\ln\varepsilon|)}(\boldsymbol z),
\end{equation*}
where $L_1$, $L_2$ are two positive constants independent of $\varepsilon$, and $\boldsymbol z=(z_1,0)$ is on $x_1$-axis with the estimate
$$z_1-\frac{\kappa}{4\pi W}=O\left(\frac{1}{|\ln\varepsilon|}\right).$$
\item[(ii)]
As $\varepsilon\to 0$, it holds
\begin{equation*}
\kappa_\varepsilon:=\varepsilon^{-2}\int_{A_\varepsilon}x_1d\boldsymbol x\to \kappa.
\end{equation*}
\end{itemize}
\end{proposition}
\begin{remark}
Notice that in Proposition \ref{prop2-1}, the circulation parameter $\kappa_\varepsilon$ is not fixed; it only has the limiting behavior described in property (ii). To obtain a family of vortex rings with fixed circulation $\kappa$ as in Theorem \ref{thm1}, we can rescale $\psi_\varepsilon$ as follows
$$\bar \psi_\varepsilon(\boldsymbol x):=\frac{\kappa_\varepsilon^2}{\kappa^2}\cdot \psi_\varepsilon\left(\frac{\kappa}{\kappa_\varepsilon}\cdot \boldsymbol x\right).$$
Then $\bar \psi_\varepsilon(\boldsymbol x)$ is the solution to
\begin{equation*}
-\bar\varepsilon^2\Delta^*\bar\psi_\varepsilon=\boldsymbol 1_{\{\bar\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\bar\mu_\varepsilon\}},
\end{equation*}
where
$$\bar\varepsilon=\frac{\kappa_\varepsilon}{\kappa}\cdot \varepsilon, \quad \text{and} \quad \bar\mu_\varepsilon=\frac{\kappa^2}{\kappa_\varepsilon^2}\cdot \mu_\varepsilon.$$
It is easy to verify that
$$\int_{\mathbb R^2_+} x_1\boldsymbol 1_{\{\bar\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\bar\mu_\varepsilon\}}d\boldsymbol x=\kappa,$$
and the vortex ring $\bar\zeta_\varepsilon$ corresponding to $\bar\psi_\varepsilon$ satisfies all assumptions in Theorem \ref{thm1}.
\end{remark}
For the study of steady vortex rings of small cross-section, our main tool is the Green's representation of the Stokes stream function $\psi_\varepsilon$. More precisely, $\psi_\varepsilon$ satisfies the integral equation
\begin{equation}\label{2-11}
\psi_\varepsilon(\boldsymbol x)=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}G_*(\boldsymbol x,\boldsymbol x') \boldsymbol 1_{A_\varepsilon}(\boldsymbol x')d\boldsymbol x',
\end{equation}
where ${G_*}(\boldsymbol x,\boldsymbol x')$ is the Green's function for $-{\Delta^*}$ with the boundary conditions in \eqref{2-9}. Using the Biot--Savart law in $\mathbb R^3$ and a coordinate transformation, we can derive an explicit formula for ${G_*}(\boldsymbol x,\boldsymbol x')$:
\begin{equation*}
{G_*}(\boldsymbol x,\boldsymbol x')=\frac{x_1x_1'^2}{4\pi}\int_{-\pi}^\pi\frac{\cos\theta d\theta}{\left[(x_2-x_2')^2+x_1^2+x_1'^2-2x_1 x_1'\cos\theta\right]^{\frac{1}{2}}}.
\end{equation*}
Then, denoting
\begin{equation}\label{rho}
\rho(\boldsymbol x, \boldsymbol x')=\frac{(x_1-x_1')^2+(x_2-x_2')^2}{x_1x_1'},
\end{equation}
we have the following asymptotic estimates
\begin{equation}\label{2-12}
{G_*}(\boldsymbol x,\boldsymbol x^\prime)=
\frac{x_1^{1/2}x_1^{\prime 3/2}}{4\pi}\left(\ln\left(\frac{1}{\rho}\right)
+2\ln 8-4+O\left(\rho\ln\frac{1}{\rho}\right)\right),\quad \text{as} \ \rho\to 0,
\end{equation}
and
\begin{equation}\label{2-13}
{G_*}(\boldsymbol x,\boldsymbol x^\prime) =\frac{x_1^{1/2}x_1^{\prime 3/2}}{4}\left(\frac{1}{\rho^{3/2}}+O(\rho^{-5/2})\right), \quad \text{as} \ \rho\to \infty,
\end{equation}
which can be found in \cite{Fen,Fra2,Lam,Sve}. Actually, the theory of elliptic integrals can be used to obtain a more precise expansion of ${G_*}$ in terms of $\rho$.
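The expansion \eqref{2-12} can also be checked numerically. Completing the square, $x_1^2+x_1'^2-2x_1x_1'\cos\theta=(x_1-x_1')^2+4x_1x_1'\sin^2(\theta/2)$, so that $G_*=\frac{x_1^{1/2}x_1'^{3/2}}{4\pi}F(\rho)$ with $F(\rho)=\int_{-\pi}^{\pi}\cos\theta\,\big(\rho+4\sin^2(\theta/2)\big)^{-1/2}d\theta$. The following short script (purely illustrative, not part of the proof) compares $F(\rho)$ with the claimed leading behavior $\ln(1/\rho)+2\ln 8-4$:

```python
import math

def F(rho, n=200_000):
    # Midpoint rule for F(rho) = \int_{-pi}^{pi} cos(t) / sqrt(rho + 4 sin^2(t/2)) dt;
    # the integrand is smooth for rho > 0, with a peak of width ~ sqrt(rho) at t = 0.
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        t = -math.pi + (i + 0.5) * h
        total += math.cos(t) / math.sqrt(rho + 4.0 * math.sin(0.5 * t) ** 2)
    return h * total

def F_leading(rho):
    # Leading term of (2-12): ln(1/rho) + 2 ln 8 - 4.
    return math.log(1.0 / rho) + 2.0 * math.log(8.0) - 4.0

for rho in (1e-3, 1e-4):
    print(rho, F(rho), F_leading(rho))  # discrepancy of order rho * ln(1/rho)
```

The two values agree to a few parts in a thousand already at $\rho=10^{-4}$, consistent with the remainder $O(\rho\ln\frac1\rho)$.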
To simplify integral equation \eqref{2-11}, we let $\boldsymbol z=(z_1,0)$ with $z_1>0$ determined later, and split ${G_*}$ as
\begin{equation*}
{G_*}(\boldsymbol x,\boldsymbol x')=z_1^2G(\boldsymbol x,\boldsymbol x')+H(\boldsymbol x,\boldsymbol x'),
\end{equation*}
where
\begin{equation*}
G(\boldsymbol x,\boldsymbol x')=\frac{1}{4\pi}\ln\frac{(x_1+x_1')^2+(x_2-x_2')^2}{(x_1-x_1')^2+(x_2-x_2')^2},
\end{equation*}
is the Green's function for $-\Delta$ in the right half-plane, and $H(\boldsymbol x,\boldsymbol x')$ is a relatively regular function.
By the definitions of ${G_*}$ and $G$, it is obvious that $H(\boldsymbol x,\boldsymbol z)$ belongs to $C^\alpha(\mathbb R^2_+)$ as a function of $\boldsymbol x$ for every $\alpha\in (0,1)$. A slightly more careful estimate shows that $H(\boldsymbol x,\boldsymbol z)$ is quasi-Lipschitz near $\boldsymbol z$, namely, for any $\boldsymbol x^{(1)},\boldsymbol x^{(2)}$ in a neighborhood $D\subset \mathbb R^2_+$ of $\boldsymbol z$, there exists a constant $C(D)$ such that
$$|H(\boldsymbol x^{(1)},\boldsymbol z)-H(\boldsymbol x^{(2)},\boldsymbol z)|\le C(D)\cdot|\boldsymbol x^{(1)}-\boldsymbol x^{(2)}|\left(1+\big|\ln|\boldsymbol x^{(1)}-\boldsymbol x^{(2)}|\big|\right).$$
Our construction is divided into several steps, following the Lyapunov--Schmidt reduction scheme. We first give a family of approximate solutions for $\psi_\varepsilon$, so that \eqref{2-9} is transformed into a semilinear problem for the error term $\phi_\varepsilon$. Then, we establish the linear theory for the corresponding projected problem. The existence and limiting behavior of $\psi_\varepsilon$ are obtained by the contraction mapping theorem and a one-dimensional reduction in the last part of our proof.
\bigskip
\subsection{Approximate solutions}
To give suitable approximate solutions to \eqref{2-9} and \eqref{2-10}, let us consider the following problem
\begin{equation*}
\begin{cases}
-\varepsilon^2\Delta V_{\boldsymbol z,\varepsilon}(\boldsymbol x)=z_1^2\boldsymbol{1}_{B_s(\boldsymbol z)}, \ \ \ & \text{in} \ \mathbb R^2,\\
V_{\boldsymbol z,\varepsilon}(\boldsymbol x)=\frac{a}{2\pi}\ln\frac{1}{\varepsilon}, &\text{on} \ \partial B_s(\boldsymbol z),
\end{cases}
\end{equation*}
where $\boldsymbol z=(z_1,z_2)\in \mathbb R^2$ with $z_1\neq 0$, $a$ is a parameter to be determined later, and $s>0$ is sufficiently small that $B_s(\boldsymbol z)\cap \{\boldsymbol x=(x_1,x_2)\in\mathbb R^2 \ | \ x_1=0\}=\varnothing$. Recalling the planar Rankine vortex, we can write $V_{\boldsymbol z,\varepsilon}$ explicitly as
\begin{equation}\label{approxsolu}
V_{\boldsymbol z,\varepsilon}(\boldsymbol x)=\left\{
\begin{array}{lll}
\frac{a}{2\pi}\ln\frac{1}{\varepsilon}+\frac{z_1^2}{4\varepsilon^2}(s^2-|\boldsymbol x-\boldsymbol z|^2), \ \ \ \ \ & |\boldsymbol x-\boldsymbol z|\le s,\\
\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\cdot\frac{\ln|\boldsymbol x-\boldsymbol z|}{\ln s},&|\boldsymbol x-\boldsymbol z|\ge s.
\end{array}
\right.
\end{equation}
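One can verify \eqref{approxsolu} directly: writing $\varrho=|\boldsymbol x-\boldsymbol z|$ and using the radial Laplacian $\Delta=\partial_\varrho^2+\varrho^{-1}\partial_\varrho$, we get
\begin{equation*}
-\varepsilon^2\Delta V_{\boldsymbol z,\varepsilon}
=-\varepsilon^2\cdot\frac{z_1^2}{4\varepsilon^2}\,\Delta\big(s^2-\varrho^2\big)
=z_1^2,\qquad \varrho<s,
\end{equation*}
while $\ln\varrho$ is harmonic for $\varrho>s$; both branches take the value $\frac{a}{2\pi}\ln\frac{1}{\varepsilon}$ at $\varrho=s$, and the matching of the normal derivatives at $\varrho=s$ is the remaining $C^1$ requirement.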
To make $V_{\boldsymbol z,\varepsilon}$ a $C^1$ function, we impose the gradient condition on $\partial B_s(\boldsymbol z)$
\begin{equation}\label{2-14}
\mathcal N:=\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\cdot\frac{1}{s|\ln s|}=\frac{s}{2\varepsilon^2}\cdot z_1^2,
\end{equation}
where $\mathcal N$ is the value of $|\nabla V_{\boldsymbol z,\varepsilon}|$ at $|\boldsymbol x-\boldsymbol z|= s$. From \eqref{2-14}, we see that $s$ depends asymptotically linearly on $\varepsilon$:
$$s=\left(\sqrt{\frac{a}{\pi z_1^2}}+o_\varepsilon(1)\right)\varepsilon.$$
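This asymptotic relation can be checked numerically by iterating \eqref{2-14} to a fixed point (a small illustrative script; the values $a=z_1=1$ are arbitrary test values, not from the construction):

```python
import math

def core_radius(eps, a=1.0, z1=1.0, iters=60):
    # Fixed-point iteration for (2-14):
    #   (a / 2pi) ln(1/eps) / (s |ln s|) = s z1^2 / (2 eps^2),
    # rearranged as  s = sqrt( a eps^2 ln(1/eps) / (pi z1^2 |ln s|) ).
    s = eps  # initial guess of the right order of magnitude
    for _ in range(iters):
        s = math.sqrt(a * eps**2 * math.log(1.0 / eps)
                      / (math.pi * z1**2 * abs(math.log(s))))
    return s

eps = 1e-12
s = core_radius(eps)
print(s / (math.sqrt(1.0 / math.pi) * eps))  # tends to 1 as eps -> 0
```

The printed ratio is close to $1$, confirming $s=\big(\sqrt{a/(\pi z_1^2)}+o_\varepsilon(1)\big)\varepsilon$; the $o_\varepsilon(1)$ correction comes from $|\ln s|/|\ln\varepsilon|\to 1$ only logarithmically.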
In our construction, $V_{\boldsymbol z,\varepsilon}(\boldsymbol x)$ will be used as the building block of approximate solutions. To further explain our strategy, for general $\boldsymbol x=(x_1,x_2)\in \mathbb R^2_+$ we denote $\boldsymbol {\bar x}=(-x_1, x_2)$ as the reflection of $\boldsymbol x$ with respect to $x_2$-axis, and let
\begin{equation*}
\begin{split}
\mathcal V_{\boldsymbol z,\varepsilon}(\boldsymbol x):&=V_{\boldsymbol z,\varepsilon}(\boldsymbol x)-V_{\boldsymbol{\bar z},\varepsilon}(\boldsymbol x)\\
&=\frac{1}{2\pi\varepsilon^2}\int_{\mathbb R^2_+}z_1^2\ln\left(\frac{1}{|\boldsymbol x-\boldsymbol x'|}\right)\boldsymbol{1}_{B_s(\boldsymbol z)}(\boldsymbol x')d\boldsymbol x'- \frac{1}{2\pi\varepsilon^2}\int_{\mathbb R^2_+}z_1^2\ln\left(\frac{1}{|\boldsymbol x-\boldsymbol{\bar x}'|}\right)\boldsymbol{1}_{B_s(\boldsymbol z)}(\boldsymbol x')d\boldsymbol x'\\
&=\frac{z_1^2}{\varepsilon^2}\int_{\mathbb R^2_+}G(\boldsymbol x,\boldsymbol x') \boldsymbol 1_{B_s(\boldsymbol z)}(\boldsymbol x')d\boldsymbol x'
\end{split}
\end{equation*}
be an approximation of the singular part of $\psi_\varepsilon$, where $\boldsymbol z=(z_1,0)$ will be determined in the last part of the construction (note that we introduce a conjugate part $V_{\boldsymbol{\bar z},\varepsilon}$ to obtain the desired boundary condition). Then $\mathcal V_{\boldsymbol z,\varepsilon}(\boldsymbol x)$ is the unique solution to the following problem
\begin{equation*}
\begin{cases}
-\varepsilon^2\Delta \mathcal{V}_{\boldsymbol z,\varepsilon}(\boldsymbol x)=z_1^2\boldsymbol{1}_{B_s(\boldsymbol z)}, \ \ \ & \text{in} \ \mathbb R^2_+,\\
\mathcal{V}_{\boldsymbol z,\varepsilon}=0, &\text{on} \ x_1=0,\\
\mathcal{V}_{\boldsymbol z,\varepsilon}, \ |\nabla\mathcal{V}_{\boldsymbol z,\varepsilon}|/x_1\to0, &\text{as} \ |\boldsymbol x|\to \infty.
\end{cases}
\end{equation*}
To approximate the regular part of $\psi_\varepsilon$, let
\begin{equation*}
\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol x)=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}H(\boldsymbol x,\boldsymbol x')\boldsymbol{1}_{B_s(\boldsymbol z)}(\boldsymbol x')d\boldsymbol x'.
\end{equation*}
According to the definition of $H(\boldsymbol x,\boldsymbol x')$, it is obvious that $\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol x)$ solves
\begin{equation*}
\begin{cases}
-\varepsilon^2{\Delta^*}\left(\mathcal V_{\boldsymbol z,\varepsilon}+\mathcal H_{\boldsymbol z,\varepsilon}\right)=z_1^2\boldsymbol{1}_{B_s(\boldsymbol z)}, \ \ \ &\text{in} \ \mathbb R^2_+,\\
\mathcal{H}_{\boldsymbol z,\varepsilon}=0, &\text{on} \ x_1=0,\\
\mathcal{H}_{\boldsymbol z,\varepsilon}, \ |\nabla\mathcal{H}_{\boldsymbol z,\varepsilon}|/x_1\to0, &\text{as} \ |\boldsymbol x|\to \infty.
\end{cases}
\end{equation*}
Moreover, using the definition of $H(\boldsymbol x,\boldsymbol x')$ and standard elliptic estimates, we have
\begin{equation*}
\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol x)-\frac{s^2\pi}{\varepsilon^2}H(\boldsymbol x,\boldsymbol z)=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}\left(H(\boldsymbol x,\boldsymbol x')-H(\boldsymbol x,\boldsymbol z)\right)\boldsymbol{1}_{B_s(\boldsymbol z)}(\boldsymbol x')d\boldsymbol x'=O(\varepsilon),
\end{equation*}
and
\begin{equation*}
\partial_1\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol x)=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}\partial_{x_1}H(\boldsymbol x,\boldsymbol x')\boldsymbol{1}_{B_s(\boldsymbol z)}(\boldsymbol x')d\boldsymbol x'=O(\varepsilon|\ln\varepsilon|).
\end{equation*}
After all this preparation, we write a solution $\psi_\varepsilon$ to \eqref{2-9} as
\begin{equation*}
\psi_\varepsilon(\boldsymbol x)=\mathcal V_{\boldsymbol z,\varepsilon}+\mathcal H_{\boldsymbol z,\varepsilon}+\phi_\varepsilon,
\end{equation*}
where $\phi_\varepsilon(\boldsymbol x)$ is an error term with boundary conditions
\begin{equation*}
\begin{cases}
\phi_\varepsilon=0, & \text{on}\ x_1=0,
\\
\phi_\varepsilon, \ |\nabla\phi_\varepsilon|/x_1\to0, &\text{as} \ |\boldsymbol x |\to \infty,
\end{cases}
\end{equation*}
and symmetry condition
\begin{equation*}
\phi_\varepsilon(x_1,x_2)=\phi_\varepsilon(x_1,-x_2).
\end{equation*}
Then we can derive the equation for $\phi_\varepsilon$ by direct computation:
\begin{equation*}
\begin{split}
0&=-x_1\varepsilon^2{\Delta^*}\left(\mathcal V_{\boldsymbol z,\varepsilon}+\mathcal H_{\boldsymbol z,\varepsilon}+\phi_\varepsilon\right)-x_1\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}\\
&=x_1\left(-\varepsilon^2{\Delta^*}(\mathcal V_{\boldsymbol z,\varepsilon}+\mathcal H_{\boldsymbol z,\varepsilon})-\boldsymbol1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}\right)\\
& \ \ \ +\varepsilon^2\left(-x_1{\Delta^*}\phi_\varepsilon-\frac{2}{sz_1}\phi_\varepsilon(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}\right)\\
& \ \ \ -\bigg(x_1\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-x_1\boldsymbol1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}-\frac{2}{sz_1}\phi_\varepsilon(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}\bigg)\\
&=\varepsilon^2\mathbb L_\varepsilon\phi_\varepsilon-\varepsilon^2R_\varepsilon(\phi_\varepsilon),
\end{split}
\end{equation*}
where $\mathbb L_\varepsilon$ is a linear operator defined by
\begin{equation}\label{2-15}
\mathbb L_\varepsilon\phi =-x_1{\Delta^*}\phi -\frac{2}{sz_1}\phi (s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s},
\end{equation}
and
\begin{equation*}
R_\varepsilon(\phi)=\frac{1}{\varepsilon^2}\bigg(x_1\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-x_1\boldsymbol1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}-\frac{2}{sz_1}\phi(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}\bigg)
\end{equation*}
is the nonlinear perturbation.
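The $\delta$-term in \eqref{2-15} is the formal linearization of the indicator function: since $|\nabla V_{\boldsymbol z,\varepsilon}|=\frac{sz_1^2}{2\varepsilon^2}$ on $\partial B_s(\boldsymbol z)$ by \eqref{2-14}, one has, at least formally,
\begin{equation*}
x_1\Big(\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}+\phi>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}
-\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}\Big)
\approx \frac{x_1\,\phi}{|\nabla V_{\boldsymbol z,\varepsilon}|}\,\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}
\approx \frac{2\varepsilon^2}{sz_1}\,\phi\,\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s},
\end{equation*}
which, after multiplying by $\varepsilon^{-2}$, produces exactly the coefficient $\frac{2}{sz_1}$ appearing in $\mathbb L_\varepsilon$.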
To make $R_\varepsilon(\phi_\varepsilon)$ as small as possible, we take
$$\mu_\varepsilon=\frac{z_1}{2\pi}\cdot\kappa\ln\frac{1}{\varepsilon}-\frac{W}{2}z_1^2\ln\frac{1}{\varepsilon}$$
and choose the parameter $a$ such that
\begin{equation}\label{2-16}
\frac{a}{2\pi}\ln\frac{1}{\varepsilon}=\mu_\varepsilon+\frac{W}{2}z_1^2\ln\frac{1}{\varepsilon}-\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol z)+V_{\boldsymbol{\bar z},\varepsilon}(\boldsymbol z).
\end{equation}
For simplicity in further discussion, we will denote
\begin{equation*}
\mathbf U_{\boldsymbol z,\varepsilon}(\boldsymbol x)=\mathcal V_{\boldsymbol z,\varepsilon}(\boldsymbol x)+\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol x)-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}-\mu_\varepsilon.
\end{equation*}
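The role of the choice \eqref{2-16} is transparent: near $\boldsymbol z$, freezing the slowly varying terms $V_{\boldsymbol{\bar z},\varepsilon}$, $\mathcal H_{\boldsymbol z,\varepsilon}$ and $x_1^2$ at $\boldsymbol x=\boldsymbol z$ and using \eqref{2-16} gives, heuristically,
\begin{equation*}
\mathbf U_{\boldsymbol z,\varepsilon}(\boldsymbol x)\approx V_{\boldsymbol z,\varepsilon}(\boldsymbol x)-\frac{a}{2\pi}\ln\frac{1}{\varepsilon},
\end{equation*}
so that the level set $\{\mathbf U_{\boldsymbol z,\varepsilon}>0\}$ is close to $\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}=B_s(\boldsymbol z)$, and the two indicator functions in $R_\varepsilon(\phi)$ nearly cancel.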
Problem \eqref{2-9}--\eqref{2-10} is then transformed into finding, for each $\varepsilon\in (0,\varepsilon_0)$ with $\varepsilon_0$ sufficiently small, a pair $(\boldsymbol z,\phi_\varepsilon)$ such that
\begin{equation}\label{Eqforperturbation}
\begin{cases}
\mathbb L_\varepsilon\phi_\varepsilon=R_\varepsilon(\phi_\varepsilon), & \text{in} \ \mathbb R^2_+,
\\
\phi_\varepsilon=0, & \text{on} \ x_1=0,
\\
\phi_\varepsilon, \ |\nabla\phi_\varepsilon|/x_1\to0, &\text{as} \ |\boldsymbol x |\to \infty.
\end{cases}
\end{equation}
\bigskip
\subsection{The linear theory}
To solve \eqref{Eqforperturbation} we first need to study the properties of the linear operator $\mathbb L_\varepsilon$ and the corresponding projected problem. Fix a point $\boldsymbol z=(z_1,0)\in \mathbb R^2$ with $z_1\neq 0$. Let $\mathcal K$ be the operator defined on the whole plane $\mathbb R^2$ by
\begin{equation*}
\mathcal K v:=-\frac{1}{z_1}\Delta v-\varepsilon^{-2}z_1\boldsymbol{1}_{\{v>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}},
\end{equation*}
where $a$ is the same parameter as in the approximate solutions. A direct calculation yields its linearized operator $\mathbb L$:
\begin{equation}\label{linear}
\mathbb L\phi:=-\frac{1}{z_1}\Delta\phi-\frac{2}{sz_1}\phi(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}
\end{equation}
with $\phi(s,\theta)=\phi(z_1+s\cos\theta,s\sin\theta)$. In view of the nondegeneracy property of $\mathbb L$ established in \cite{CPY}, we have
\begin{equation*}
\text{ker}(\mathbb L)=\text{span} \left\{\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1},\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_2}\right\},
\end{equation*}
where
\begin{equation*}
\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_m}=\left\{
\begin{array}{lll}
-\frac{z_1^2}{2\varepsilon^2}(x_m-z_m), \ \ \ \ \ & |\boldsymbol x-\boldsymbol z|\le s,\\
-\frac{a|\ln \varepsilon|}{2\pi|\ln s|}\frac{x_m-z_m}{|\boldsymbol x-\boldsymbol z|^2}, & |\boldsymbol x-\boldsymbol z|\ge s.
\end{array}
\right.
\end{equation*}
Recall that $\mathbb L_\varepsilon$ is defined on $\mathbb R^2_+$ and that $\phi_\varepsilon$ is even with respect to the $x_1$-axis. When $\varepsilon$ is chosen sufficiently small, the kernel of $\mathbb L$ can be approximated by
\begin{equation*}
Z_{\boldsymbol z, \varepsilon}=\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1},
\end{equation*}
where $\chi_\varepsilon$ is a smooth truncation function satisfying
\begin{equation}\label{truncation}
\chi_\varepsilon(\boldsymbol x)=\left\{
\begin{array}{lll}
1, \ \ \ \ \ & |\boldsymbol x-\boldsymbol z|\le \delta_\varepsilon,\\
0, & |\boldsymbol x-\boldsymbol z|\ge 2\delta_\varepsilon
\end{array}
\right.
\end{equation}
for $\delta_\varepsilon=\varepsilon|\ln\varepsilon|$. Moreover, we assume that $\chi_\varepsilon$ is radially symmetric with respect to $\boldsymbol z$ and
\begin{equation*}
|\nabla \chi_\varepsilon|\le \frac{2}{\delta_\varepsilon}, \quad\quad |\nabla^2 \chi_\varepsilon|\le \frac{2}{\delta_\varepsilon^2}.
\end{equation*}
To solve \eqref{Eqforperturbation}, we will first consider the following projected problem
\begin{equation}\label{2-17}
\begin{cases}
\mathbb L_\varepsilon\phi=\mathbf h(\boldsymbol x)-\Lambda x_1 {\Delta^*}Z_{\boldsymbol z, \varepsilon}, \ \ \ &\text{in} \ \ \mathbb R^2_+,\\
\int_{\mathbb R^2_+} \frac{1}{x_1} \nabla\phi\cdot\nabla Z_{\boldsymbol z, \varepsilon}d\boldsymbol x=0,\\
\phi= 0, \ \ &\text{on} \ x_1=0,\\
\phi,\ |\nabla\phi|/x_1\to0, \ \ &\text{as} \ |\boldsymbol x|\to \infty,
\end{cases}
\end{equation}
where $\phi$ is even with respect to the $x_1$-axis, $\text{supp} \, \mathbf h\subset B_{2s}(\boldsymbol z)$, and $\Lambda$ is the projection coefficient such that
$$\int_{\mathbb R^2_+}Z_{\boldsymbol z, \varepsilon}(\mathbb L_\varepsilon\phi-\mathbf h+\Lambda x_1 {\Delta^*}Z_{\boldsymbol z, \varepsilon})d\boldsymbol x=0.$$
Let
\begin{equation}\label{rho2}
\rho_1(\boldsymbol x):=\frac{(1+|\boldsymbol x-\boldsymbol z|^2)^{\frac{3}{2}}}{1+x_1^2} \ \ \ \text{and} \ \ \ \rho_2(\boldsymbol x):=\left(\frac{1}{x_1}+1\right).
\end{equation}
We define the weighted $L^\infty$ norm of $\phi$ by
\begin{equation}\label{2-18}
||\phi||_*:=\sup_{\boldsymbol x\in\mathbb R^2_+}\rho_1(\boldsymbol x)\rho_2(\boldsymbol x)|\phi(\boldsymbol x)|.
\end{equation}
We have the following a priori estimate for solutions of the projected problem \eqref{2-17}.
\begin{lemma}\label{lem2-2}
Assume that $\mathbf h$ satisfies $\mathrm{supp}\, \mathbf h\subset B_{2s}(\boldsymbol z)$ and $$\varepsilon^{1-\frac{2}{p}}\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}<\infty$$
with $p\in (2,+\infty]$. Then there exist a small $\varepsilon_0>0$, a large constant $L>0$ and a positive constant $c_0$ such that for any $\varepsilon\in(0,\varepsilon_0]$ and any solution pair $(\phi,\Lambda)$ to \eqref{2-17}, one has
\begin{equation}\label{2-19}
\|\phi\|_*+\varepsilon^{1-\frac{2}{p}}\|\nabla\phi\|_{L^p(B_{Ls}(\boldsymbol z))}\le c_0\varepsilon^{1-\frac{2}{p}}\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))},
\end{equation}
and
\begin{equation}\label{2-20}
|\Lambda|\le c_0\varepsilon^{2-\frac{2}{p}}\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}.
\end{equation}
\end{lemma}
\begin{proof}
We first estimate the coefficient $\Lambda$ by an energy method: multiplying the first equation in \eqref{2-17} by $Z_{\boldsymbol z, \varepsilon}$ and integrating by parts, we obtain
\begin{equation}\label{2-21}
\Lambda\int_{\mathbb R^2_+} \frac{1}{x_1}\nabla Z_{\boldsymbol z, \varepsilon}\cdot\nabla Z_{\boldsymbol z, \varepsilon}d\boldsymbol x=\int_{\mathbb R^2_+}Z_{\boldsymbol z, \varepsilon}\mathbb L_\varepsilon\phi d\boldsymbol x-\int_{\mathbb R^2_+} Z_{\boldsymbol z, \varepsilon}\mathbf h d\boldsymbol x.
\end{equation}
Recall the definition of $Z_{\boldsymbol z, \varepsilon}$. For the integral on the left-hand side of \eqref{2-21}, we have
\begin{equation*}
\begin{split}
&\quad\int_{\mathbb R^2_+} \frac{1}{x_1}\nabla\left(\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\cdot\nabla \left(\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)d\boldsymbol x\\
&=\int_{\mathbb R^2_+} \frac{\chi_\varepsilon^2}{z_1}\cdot\left(\nabla\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)^2d\boldsymbol x+\int_{\mathbb R^2_+} \frac{2\chi_\varepsilon\nabla\chi_\varepsilon}{z_1}\cdot\left(\nabla\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}d\boldsymbol x\\
&\quad+\int_{\mathbb R^2_+} \frac{(\nabla\chi_\varepsilon)^2}{z_1}\cdot\left(\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)^2d\boldsymbol x+\frac{C}{\varepsilon^2}\cdot \delta_\varepsilon\\
&=\frac{C_Z}{\varepsilon^2}\cdot (1+o_\varepsilon(1)),
\end{split}
\end{equation*}
where $C_Z>0$ is some constant independent of $\varepsilon$. We let $\chi^*(\boldsymbol x)$ be a smooth truncation function taking the value $1$ in $B_{2s}(\boldsymbol z)$, and $0$ in $\mathbb R^2_+\setminus B_{Ls}(\boldsymbol z)$. Then the following estimate holds:
\begin{equation*}
\begin{split}
&\quad\left\|\nabla\left(\chi^*\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\right\|_{L^{p'}(B_{Ls}(\boldsymbol z))}\\
&\le \left\|\left(\nabla \chi^*\right)\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right\|_{L^{p'}(B_{Ls}(\boldsymbol z))}+\left\|\chi^*\cdot\left(\nabla\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\right\|_{L^{p'}(B_{Ls}(\boldsymbol z))} \\
&\le\frac{C}{\varepsilon}\left(\int_{2s}^{Ls}\frac{\tau}{\tau^{p'}}d\tau\right)^{\frac{1}{p'}}+\frac{C}{\varepsilon^2}\left(\int_0^{s}\tau d\tau\right)^{\frac{1}{p'}}+\left(\int_{s}^{Ls}\frac{\tau}{\tau^{2p'}}d\tau\right)^{\frac{1}{p'}}\\
&\le C\varepsilon^{\frac{2}{p'}-2}.
\end{split}
\end{equation*}
Since $\text{supp}\, \mathbf h \subset B_{2s}(\boldsymbol z)$, for the second term on the right-hand side of \eqref{2-21}, we have
\begin{equation*}
\begin{split}
\left|\int_{\mathbb R^2_+}\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\cdot\mathbf h d\boldsymbol x\right|&=\left|\int_{\mathbb R^2_+}\chi^*\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1} \cdot\mathbf h d\boldsymbol x \right|\\
&\le C\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}\left\|\nabla\left(\chi^*\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\right\|_{L^{p'}(B_{Ls}(\boldsymbol z))}\\
&\le C\varepsilon^{\frac{2}{p'}-2} \|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))},
\end{split}
\end{equation*}
where a Poincar\'{e} inequality
$$\left\|\chi^*\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right\|_{L^{p'}(B_{Ls}(\boldsymbol z))}\le C\varepsilon\left\|\nabla\left(\chi^*\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\right\|_{L^{p'}(B_{Ls}(\boldsymbol z))}$$
is used.
For the first term on the right-hand side of \eqref{2-21}, it holds that
\begin{equation*}
\begin{split}
&\quad\int_{\mathbb R^2_+}\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\cdot\mathbb L_\varepsilon\phi d\boldsymbol x=\int_{\mathbb R^2_+} \phi\cdot\mathbb L_\varepsilon\left(\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)d\boldsymbol x\\
&=\int_{\mathbb R^2_+}\frac{1}{x_1} \nabla\phi\cdot\nabla\left(\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)d\boldsymbol x-\frac{2}{sz_1}\int_{|\boldsymbol x-\boldsymbol z|=s}\phi\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\\
&=-\int_{\mathbb R^2_+}\phi\cdot\nabla\left(\frac{1}{x_1}\right)\cdot\nabla\left(\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)d\boldsymbol x-\int_{\mathbb R^2_+}\phi\left(\frac{1}{x_1}-\frac{1}{z_1}\right)\Delta\left(\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)d\boldsymbol x\\
& \ \ \ -\int_{\mathbb R^2_+}\frac{\phi}{x_1}\cdot\left(2\nabla\chi_\varepsilon\cdot \nabla\left(\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)+(\Delta\chi_\varepsilon)\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)d\boldsymbol x,
\end{split}
\end{equation*}
where we have used the fact that $\partial V_{\boldsymbol z,\varepsilon}/\partial x_1$ is in the kernel of $\mathbb L$. Notice that the terms in the above identity satisfy the following estimates
\begin{equation*}
\int_{\mathbb R^2_+}\left|\nabla\left(\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\right|d\boldsymbol x\le C|\ln\varepsilon|,
\end{equation*}
\begin{equation*}
\int_{\mathbb R^2_+}\left|\left(\frac{1}{x_1}-\frac{1}{z_1}\right)\Delta\left(\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\right|d\boldsymbol x\le s\cdot 2\pi s\cdot \frac{1}{s^2}\le C,
\end{equation*}
\begin{equation*}
\int_{\mathbb R^2_+}\left|\nabla\chi_\varepsilon\cdot\left(\nabla\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right)\right|d\boldsymbol x\le \frac{C}{\delta_\varepsilon}\cdot\int_{\delta_\varepsilon}^{2\delta_\varepsilon}\frac{1}{\tau}d\tau\le \frac{C}{\delta_\varepsilon},
\end{equation*}
\begin{equation*}
\int_{\mathbb R^2_+}\left|(\Delta\chi_\varepsilon)\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\right|d\boldsymbol x\le \frac{C}{\delta_\varepsilon^2}\cdot\int_{\delta_\varepsilon}^{2\delta_\varepsilon}d\tau\le \frac{C}{\delta_\varepsilon}.
\end{equation*}
As a result, it holds
\begin{equation*}
\begin{split}
\left|\int_{\mathbb R^2_+}\chi_\varepsilon\cdot\frac{\partial V_{\boldsymbol z,\varepsilon}}{\partial x_1}\cdot \mathbb L_\varepsilon\phi d\boldsymbol x\right|&\le C(|\ln\varepsilon|+\delta_\varepsilon^{-1})\|\phi\|_{L^\infty(B_{2\delta_\varepsilon}(\boldsymbol z))}\\
&\le C(|\ln\varepsilon|+\delta_\varepsilon^{-1})\|\phi\|_*.
\end{split}
\end{equation*}
Combining all of the above estimates for \eqref{2-21}, we derive
\begin{equation}\label{2-24}
|\Lambda|\le C\varepsilon^2(|\ln\varepsilon|+\delta_\varepsilon^{-1})\cdot\|\phi\|_*+C\varepsilon^{\frac{2}{p'}}\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}.
\end{equation}
By the explicit formula for $Z_{\boldsymbol z, \varepsilon}$ in $B_{Ls}(\boldsymbol z)$, it holds
$$||x_1{\Delta^*} Z_{\boldsymbol z, \varepsilon}||_{W^{-1,p}(B_{Ls}(\boldsymbol z))}\le C\|\nabla Z_{\boldsymbol z, \varepsilon}\|_{L^{p}(B_{Ls}(\boldsymbol z))}=C\varepsilon^{\frac{2}{p}-2}.$$
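This duality bound can be seen as follows. Assuming, consistently with the weak formulation used above, that $x_1{\Delta^*}Z_{\boldsymbol z, \varepsilon}=\mathrm{div}\left(x_1^{-1}\nabla Z_{\boldsymbol z, \varepsilon}\right)$, for every $\varphi\in W_0^{1,p'}(B_{Ls}(\boldsymbol z))$ we have
\begin{equation*}
|\langle x_1{\Delta^*} Z_{\boldsymbol z, \varepsilon},\varphi\rangle|=\left|\int_{B_{Ls}(\boldsymbol z)}\frac{1}{x_1}\nabla Z_{\boldsymbol z, \varepsilon}\cdot\nabla\varphi\, d\boldsymbol x\right|\le C\|\nabla Z_{\boldsymbol z, \varepsilon}\|_{L^{p}(B_{Ls}(\boldsymbol z))}\|\nabla\varphi\|_{L^{p'}(B_{Ls}(\boldsymbol z))},
\end{equation*}
where we use that $x_1^{-1}$ is bounded on $B_{Ls}(\boldsymbol z)$, since $\boldsymbol z$ has a positive distance from the $x_2$-axis.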
So we finally deduce from \eqref{2-24} that
\begin{equation*}
\begin{split}
||\Lambda x_1{\Delta^*} Z_{\boldsymbol z, \varepsilon}||_{W^{-1,p}(B_{Ls}(\boldsymbol z))}&\le C|\Lambda|\cdot \varepsilon^{\frac{2}{p}-2}\\
&\le C\varepsilon^{\frac{2}{p}}(|\ln\varepsilon|+\delta_\varepsilon^{-1})\cdot\|\phi\|_*+C\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}.
\end{split}
\end{equation*}
We now prove \eqref{2-19}. Suppose, to the contrary, that there exist a sequence $\{\varepsilon_n\}$ tending to $0$ and functions $\phi_n$ such that
\begin{equation}\label{2-25}
\|\phi_n\|_*+\varepsilon_n^{1-\frac{2}{p}}\|\nabla\phi_n\|_{L^p(B_{Ls}(\boldsymbol z))}=1,
\end{equation}
and
\begin{equation*}
\varepsilon_n^{1-\frac{2}{p}}\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}\le \frac{1}{n}.
\end{equation*}
Then we can write
\begin{equation*}
\begin{split}
-\text{div}\left(\frac{1}{x_1}\nabla\phi_n(\boldsymbol x)\right)&=\frac{2}{sz_1}\boldsymbol{\delta}_{|\boldsymbol x-\boldsymbol z|=s}\phi_n(s,\theta)+\mathbf h-\Lambda x_1{\Delta^*} Z_{\boldsymbol z, \varepsilon}\\
&=\frac{2}{sz_1}\boldsymbol{\delta}_{|\boldsymbol x-\boldsymbol z|=s}\phi_n(s,\theta)+f_n
\end{split}
\end{equation*}
with $\text{supp}\, f_n\subset B_{2\delta_{\varepsilon_n}}(\boldsymbol z)$. For a general function $v$, we define its rescaled version centered at $\boldsymbol z$ as:
\begin{equation*}
\tilde v(\boldsymbol y):=v(s\boldsymbol y+\boldsymbol z).
\end{equation*}
Note that the parameter $s$ also depends on $\varepsilon_n$. Denoting $D_n=\{\boldsymbol y \ | \ s\boldsymbol y+\boldsymbol z\in\mathbb R^2_+\}$, we obtain
\begin{equation*}
\int_{D_n}\frac{1}{sy_1+z_1}\cdot\nabla\tilde\phi_n\cdot\nabla\varphi d\boldsymbol y=2\int_{|\boldsymbol y|=1}\frac{1}{z_1}\tilde\phi_n\varphi+\langle\tilde f_n,\varphi\rangle, \quad \forall \, \varphi\in C_0^\infty(D_n),
\end{equation*}
where for each $p\in(2,\infty]$, it holds
\begin{equation*}
\|\tilde f_n\|_{W^{-1,p}(B_{L}(\boldsymbol 0))}\le C\varepsilon_n^{1-\frac{2}{p}}\left(\varepsilon_n^{\frac{2}{p}}(|\ln\varepsilon_n|+\delta_{\varepsilon_n}^{-1})\cdot\|\phi_n\|_*+\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}\right)= o_n(1).
\end{equation*}
Hence $\tilde\phi_n$ is bounded in $C_{\text{loc}}^\alpha(\mathbb{R}^2)$ for some $\alpha>0$, and, up to a subsequence, $\tilde\phi_n$ converges uniformly on every compact subset of $\mathbb{R}^2$ to some $\phi^*\in L^\infty(\mathbb R^2)\cap C(\mathbb R^2)$, which satisfies
\begin{equation*}
-\Delta \phi^*=2\phi^*(1,\theta)\boldsymbol{\delta}_{|\boldsymbol y|=1}, \ \ \ \text{in} \ \mathbb R^2,
\end{equation*}
and $\phi^*$ can be written as
$$\phi^*=C_1\frac{\partial w}{\partial y_1}+C_2\frac{\partial w}{\partial y_2}$$
with
\begin{equation*}
w(\boldsymbol y)=\left\{
\begin{array}{lll}
\frac{1}{4}(1-|\boldsymbol y|^2), \ \ \ \ \ & |\boldsymbol y|\le 1,\\
\\
\frac{1}{2}\ln\frac{1}{|\boldsymbol y|}, & |\boldsymbol y|\ge 1.
\end{array}
\right.
\end{equation*}
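One can check directly that these derivatives solve the limit problem: $w\in C^1(\mathbb R^2)$ (both expressions vanish at $|\boldsymbol y|=1$ and have radial derivative $-1/2$ there), and
\begin{equation*}
-\Delta w=\boldsymbol 1_{B_1(\boldsymbol 0)} \quad \text{in} \ \mathbb R^2.
\end{equation*}
Differentiating in $y_1$ in the sense of distributions gives
\begin{equation*}
-\Delta\left(\frac{\partial w}{\partial y_1}\right)=\frac{\partial}{\partial y_1}\boldsymbol 1_{B_1(\boldsymbol 0)}=-\cos\theta\,\boldsymbol{\delta}_{|\boldsymbol y|=1}=2\,\frac{\partial w}{\partial y_1}(1,\theta)\,\boldsymbol{\delta}_{|\boldsymbol y|=1},
\end{equation*}
since $\partial w/\partial y_1=-\frac{1}{2}\cos\theta$ on $|\boldsymbol y|=1$; the same computation applies to $\partial w/\partial y_2$ with $\cos\theta$ replaced by $\sin\theta$.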
Since $\phi^*$ is even with respect to the $y_1$-axis, it holds $C_2=0$. Then, from the second equation in \eqref{2-17}, we have
\begin{equation*}
\int_{\mathbb R^2}\nabla\phi^*\cdot\nabla\frac{\partial w}{\partial y_1}\,d\boldsymbol y=0.
\end{equation*}
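Indeed, substituting $\phi^*=C_1\,\partial w/\partial y_1$ into this relation yields
\begin{equation*}
C_1\int_{\mathbb R^2}\left|\nabla\frac{\partial w}{\partial y_1}\right|^2 d\boldsymbol y=0,
\end{equation*}
and the integral is finite and strictly positive, since $|\nabla(\partial w/\partial y_1)|=O(|\boldsymbol y|^{-2})$ at infinity and $\partial w/\partial y_1$ is not constant.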
Thus we get $C_1=0$, and hence $\phi_n\to 0$ uniformly in $B_{Ls}(\boldsymbol z)$ as $n\to\infty$.
To derive the estimate for $||\phi_n||_*$, we use a comparison principle. Note that $\phi_n$ satisfies
\begin{equation*}
\begin{cases}
\phi_n(\boldsymbol x)=0, & \text{on}\ x_1=0,
\\
\phi_n, \ |\nabla\phi_n|/x_1\to0, &\text{as} \ |\boldsymbol x |\to \infty.
\end{cases}
\end{equation*}
Moreover, $\phi_n\to 0$ in $B_{Ls}(\boldsymbol z)$ as $n\to\infty$, and $x_1{\Delta^*}\phi_n=0$ in $\mathbb R^2_+\setminus B_{Ls}(\boldsymbol z)$. By letting
$$\bar \phi_n(\boldsymbol x):= ||\phi_n||_{L^\infty(B_{Ls}(\boldsymbol z))}\cdot {G_*}(\boldsymbol x,\boldsymbol z),$$
we have
\begin{equation*}
\begin{cases}
\bar \phi_n-\phi_n\ge0, & \text{on}\ x_1=0,
\\
\bar \phi_n-\phi_n\ge0, &\text{as} \ |\boldsymbol x |\to \infty,
\end{cases}
\end{equation*}
and
\begin{equation*}
x_1^2{\Delta^*}\bar \phi_n-x_1^2{\Delta^*}\phi_n=\Delta(\bar \phi_n-\phi_n)+x_1\nabla\left(\frac{1}{x_1}\right)\cdot\nabla(\bar \phi_n-\phi_n)=0, \ \ \ \text{in} \ \mathbb R^2_+\setminus B_{Ls}(\boldsymbol z).
\end{equation*}
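The identity above is a consequence of the product rule, assuming (consistently with the weak formulation used throughout) that $x_1{\Delta^*}v=\mathrm{div}\left(x_1^{-1}\nabla v\right)$:
\begin{equation*}
x_1^2{\Delta^*}v=x_1\,\mathrm{div}\left(\frac{1}{x_1}\nabla v\right)=x_1\left(\frac{1}{x_1}\Delta v+\nabla\left(\frac{1}{x_1}\right)\cdot\nabla v\right)=\Delta v+x_1\nabla\left(\frac{1}{x_1}\right)\cdot\nabla v,
\end{equation*}
applied here to $v=\bar\phi_n-\phi_n$.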
Since $x_1\nabla(1/x_1)$ is locally bounded on $\mathbb R^2_+\setminus B_{Ls}(\boldsymbol z)$, we can use the strong maximum principle to deduce $\phi_n\le\bar\phi_n$ on $\mathbb R^2_+\setminus B_{Ls}(\boldsymbol z)$; applying the same argument to $-\phi_n$ gives $|\phi_n|\le\bar\phi_n$ on $\mathbb R^2_+\setminus B_{Ls}(\boldsymbol z)$. By the definition of $\bar\phi_n(\boldsymbol x)$, we have actually shown that
\begin{equation}\label{2-26}
||\phi_n||_*\le ||\phi_n||_{L^\infty(B_{Ls}(\boldsymbol z))}=o_n(1).
\end{equation}
On the other hand, for any $\tilde\varphi\in C_0^\infty(D_n)$ it holds
\begin{equation*}
\begin{split}
&\left|\int_{D_n}\frac{1}{sy_1+z_1}\cdot\nabla\tilde\phi_n\cdot\nabla\tilde\varphi d\boldsymbol y\right|=\left|2\int_{|\boldsymbol y|=1}\frac{1}{z_1}\tilde\phi_n\tilde\varphi+\langle\tilde f_n,\tilde\varphi\rangle \right|\\
& \ \ \ \ \ \ \ \ \ =o_n(1)\cdot\|\tilde\varphi\|_{W^{1,1}(B_L(\boldsymbol 0))}+o_n(1)\cdot\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}\\
& \ \ \ \ \ \ \ \ \ =o_n(1)\cdot\left(\int_{B_L(\boldsymbol 0)}|\nabla\tilde\varphi|^{p'}\right)^{\frac{1}{p'}},
\end{split}
\end{equation*}
which leads to
\begin{equation}\label{2-27}
\varepsilon_n^{1-\frac{2}{p}}\|\nabla\phi_n\|_{L^p(B_{Ls}(\boldsymbol z))}\le C||\nabla\tilde\phi_n||_{L^p(B_L(\boldsymbol 0))}=o_n(1).
\end{equation}
Combining \eqref{2-26} and \eqref{2-27}, we get a contradiction to \eqref{2-25}. Hence \eqref{2-19} holds, and \eqref{2-20} is a consequence of \eqref{2-19} and \eqref{2-24}.
\end{proof}
Using Lemma \ref{lem2-2}, we obtain the following result.
\begin{lemma}\label{lem2-3}
Suppose that $\mathrm{supp}\, \mathbf h\subset B_{2s}(\boldsymbol z)$ and $$\varepsilon^{1-\frac{2}{p}}\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}<\infty$$
with $p\in(2,+\infty]$. Then there exists a small $\varepsilon_0>0$ such that for any $\varepsilon\in(0,\varepsilon_0]$, \eqref{2-17} has a unique solution $\phi_\varepsilon=\mathcal T_\varepsilon \, \mathbf h$, where $\mathcal T_\varepsilon$ is a linear operator of $\mathbf h$. Moreover, there exists a constant $c_0>0$ independent of $\varepsilon$, such that
\begin{equation}\label{2-28}
\|\phi_\varepsilon\|_*+\varepsilon^{1-\frac{2}{p}}\|\nabla\phi_\varepsilon\|_{L^p(B_{Ls}(\boldsymbol z))}\le c_0\varepsilon^{1-\frac{2}{p}}\|\mathbf h\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))},
\end{equation}
where $L>0$ is a large constant.
\end{lemma}
\begin{proof}
Let $H_a(\mathbb R^2_+)$ be the Hilbert space consisting of functions satisfying the boundary conditions
\begin{equation*}
\begin{cases}
u=0, & \text{on}\ x_1=0,
\\
u, \ |\nabla u|/x_1\to0, &\text{as} \ |\boldsymbol x |\to \infty,
\end{cases}
\end{equation*}
and endowed with the inner product
\begin{equation*}
[u,v]_{H_a(\mathbb R^2_+)}=\int_{\mathbb R^2_+} \frac{1}{x_1}\nabla u\cdot\nabla v d\boldsymbol x.
\end{equation*}
To obtain compactness of the relevant operator on $\mathbb R^2_+$, we also introduce another weighted $L^\infty$ norm:
\begin{equation*}
||\phi||_{*,\nu}:=\sup_{\boldsymbol x\in\mathbb R^2_+}\rho_1(\boldsymbol x)^{1-\nu}\rho_2(\boldsymbol x)^{1-\nu}|\phi(\boldsymbol x)|,
\end{equation*}
where $0<\nu<1/4$ is a small number, and $\rho_1,\rho_2$ are defined in \eqref{rho2}. We introduce
two spaces. The first one is
\begin{equation*}
E_\varepsilon:=\left\{u\in H_a(\mathbb R^2_+)\,\,\, \big|\, \,\, ||u||_{*,\nu}<\infty, \ u(x_1,x_2)=u(x_1,-x_2), \ \int_{\mathbb R^2_+}\frac{1}{x_1}\nabla u\cdot\nabla Z_{\boldsymbol z,\varepsilon}d\boldsymbol x=0\right\}
\end{equation*}
with norm $||\cdot||_{*,\nu}$, and the second one is
\begin{equation*}
F_\varepsilon:=\left\{\mathbf h^* \in W^{-1,p}(B_{Ls}(\boldsymbol z)) \,\,\, \big| \,\,\, \ \mathbf h^*(x_1,x_2)=\mathbf h^*(x_1,-x_2), \ \int_{\mathbb R^2_+}Z_{\boldsymbol z,\varepsilon}\mathbf h^*d\boldsymbol x=0\right\}
\end{equation*}
with $p\in (2,+\infty]$. Then for $\phi_\varepsilon\in E_\varepsilon$, problem \eqref{2-17} has the equivalent operator form
\begin{equation*}
\begin{split}
\phi_\varepsilon&=(-x_1{\Delta^*})^{-1}P_\varepsilon\left(\frac{2}{sz_1}\phi_\varepsilon(s,\theta)\boldsymbol{\delta}_{|\boldsymbol x-\boldsymbol z|=s}\right)+(-x_1{\Delta^*})^{-1} P_\varepsilon \mathbf h\\
&=\mathcal K\phi_\varepsilon+(-x_1{\Delta^*})^{-1} P_\varepsilon \mathbf h,
\end{split}
\end{equation*}
where
\begin{equation*}
(-x_1{\Delta^*})^{-1}u:=\int_{\mathbb R^2_+}{G_*}(\boldsymbol x,\boldsymbol x')x_1'^{-1}u(\boldsymbol x')d\boldsymbol x',
\end{equation*}
and $P_\varepsilon$ is the projection operator onto $F_\varepsilon$. Since $Z_{\boldsymbol z,\varepsilon}$ has compact support due to the truncation \eqref{truncation}, by the definition of ${G_*}(\boldsymbol x,\boldsymbol x')$, we see that $\mathcal K$ maps $E_\varepsilon$ to $E_\varepsilon$.
To show that $\mathcal K$ is a compact operator, we let $K_n:=\{\boldsymbol x\in\mathbb R^2\, | \,\, 1/n<x_1<n, \ |x_2|<n \}$ with $n\in \mathbb N^*$. It is obvious that $K_n\to \mathbb R^2_+$ as $n\to +\infty$. Recall the asymptotic estimates for the Green's function ${G_*}$ given in \eqref{2-12} and \eqref{2-13}. For any small $\epsilon>0$, we can find an $N$ sufficiently large such that for all $n>N$, it holds
$$\rho_1(\boldsymbol x)^{1-\nu}\rho_2(\boldsymbol x)^{1-\nu}|\mathcal Ku(\boldsymbol x)|<\epsilon, \ \ \ u\in E_\varepsilon \ \text{with} \ ||u||_{*,\nu}\le 1, \ \ \ \boldsymbol x\in \mathbb R^2_+\setminus K_n.$$
While for $\boldsymbol x\in K_n$, standard elliptic estimates show that the $C^\alpha$ norm of $\mathcal Ku$ is bounded, and hence the functions $\mathcal Ku$ are uniformly bounded and equi-continuous in $K_n$. By the Arzel\`a--Ascoli theorem, we conclude that $\mathcal K$ is indeed a compact operator. It is also noteworthy that this approach to recovering compactness is generally applicable in the `gluing method', see \cite{DW,DW2}.
Using the Fredholm alternative, \eqref{2-17} has a unique solution if the homogeneous equation
\begin{equation*}
\phi_\varepsilon=\mathcal K\phi_\varepsilon
\end{equation*}
has only the trivial solution in $E_\varepsilon$, which follows from Lemma \ref{lem2-2}. Now we let
$$\mathcal T_\varepsilon:=(\text{Id}-\mathcal K)^{-1}(-x_1{\Delta^*})^{-1} P_\varepsilon,$$
and the estimate \eqref{2-28} holds by Lemma \ref{lem2-2}. The proof is thus complete.
\end{proof}
\bigskip
\subsection{The reduction and one-dimensional problem}
Recall that our aim is to solve \eqref{Eqforperturbation}. However, since the linear operator $\mathbb L_\varepsilon$ has a nontrivial kernel, we have to settle for second best and first deal with the projected problem in the space $E_\varepsilon$. Using the linear operator $\mathcal T_\varepsilon$ given in Lemma \ref{lem2-3}, we consider
\begin{equation}\label{2-29}
\phi_\varepsilon=\mathcal T_\varepsilon R_\varepsilon(\phi_\varepsilon)
\end{equation}
with
\begin{equation*}
R_\varepsilon(\phi_\varepsilon)=\frac{1}{\varepsilon^2}\bigg(x_1\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-x_1\boldsymbol1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}-\frac{2}{sz_1}\phi_\varepsilon(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}\bigg)
\end{equation*}
for each small $\varepsilon\in (0,\varepsilon_0]$. In the following lemma, we give a delicate estimate for the error term $R_\varepsilon(\phi_\varepsilon)$, so that the contraction mapping theorem can be applied to obtain the existence of $\phi_\varepsilon$ in $E_\varepsilon$.
\begin{lemma}\label{lem2-4}
There exists a small $\varepsilon_0>0$ such that for any $\varepsilon\in(0,\varepsilon_0]$, there is a unique solution $\phi_\varepsilon\in E_\varepsilon$ to \eqref{2-29}, which satisfies
\begin{equation}\label{2-30}
\|\phi_\varepsilon\|_*+\varepsilon^{1-\frac{2}{p}}\|\nabla\phi_\varepsilon\|_{L^p(B_{Ls}(\boldsymbol z))}= O(\varepsilon|\ln\varepsilon|)
\end{equation}
with the norm $|| \cdot ||_*$ defined in \eqref{2-11}, $p\in(2,+\infty]$.
\end{lemma}
\begin{proof}
Denote $\mathcal G_\varepsilon:=\mathcal T_\varepsilon R_\varepsilon$, and define a neighborhood of the origin in $E_\varepsilon$ by
\begin{equation*}
\mathcal B_\varepsilon:=E_\varepsilon\cap \left\{\phi \ | \ \|\phi\|_*+\varepsilon^{1-\frac{2}{p}}\|\nabla\phi\|_{L^p(B_{Ls}(\boldsymbol z))}\le \varepsilon|\ln\varepsilon|^2, \ p\in(2,\infty]\right\}.
\end{equation*}
We will show that $\mathcal G_\varepsilon$ is a contraction map from $\mathcal B_\varepsilon$ to $\mathcal B_\varepsilon$, so that a unique fixed point $\phi_\varepsilon$ can be obtained by the contraction mapping theorem. Indeed, letting $\mathbf h=R_\varepsilon(\phi)$ for $\phi\in\mathcal B_\varepsilon$, and noticing that $R_\varepsilon(\phi)$ satisfies the assumptions on $\mathbf h$ in Lemma \ref{lem2-3} by Appendix \ref{appB}, we have
\begin{equation*}
\|\mathcal T_\varepsilon R_\varepsilon(\phi)\|_*+\varepsilon^{1-\frac{2}{p}}\|\nabla\mathcal T_\varepsilon R_\varepsilon(\phi)\|_{L^p(B_{Ls}(\boldsymbol z))}\le c_0\varepsilon^{1-\frac{2}{p}}\|R_\varepsilon(\phi) \|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}.
\end{equation*}
To begin with, we show that $\mathcal G_\varepsilon$ maps $\mathcal B_\varepsilon$ continuously into itself. We use $\tilde v(\boldsymbol y)$ to denote $v(s\boldsymbol y+\boldsymbol z)$. For each $\varphi\in C_0^\infty(B_{Ls}(\boldsymbol z))$, in view of Lemmas \ref{B2} and \ref{B3} in Appendix \ref{appB}, we have
\begin{equation*}
\begin{split}
\langle R_\varepsilon(\phi)&,\varphi \rangle=\frac{s^2}{\varepsilon^2}\int_{B_L(\boldsymbol 0)}(sy_1+z_1)\left(\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}\right)\tilde\varphi d\boldsymbol y\\
& \ \ \ -\frac{2}{z_1}\int_0^{2\pi}\tilde\phi\tilde\varphi(1,\theta)d\theta\\
&=(1+O(\varepsilon))\cdot z_1 \cdot\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi}}t\tilde\varphi(t,\theta)dtd\theta-\frac{2}{z_1}\int_0^{2\pi}\tilde\phi\tilde\varphi(1,\theta)d\theta\\
&=\frac{s^2}{\varepsilon^2}\cdot z_1\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi}}t\tilde\varphi(1,\theta)dtd\theta-\frac{2}{z_1}\int_0^{2\pi}\tilde\phi\tilde\varphi(1,\theta)d\theta\\
& \ \ \ +\frac{s^2}{\varepsilon^2}\cdot z_1\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi}}t(\tilde\varphi(t,\theta)-\tilde\varphi(1,\theta))dtd\theta+O(\varepsilon)\cdot\int_0^{2\pi}|\tilde\varphi|d\theta\\
&=\frac{s^2}{\varepsilon^2}\cdot z_1\int_0^{2\pi}\left(\frac{\tilde\phi(1,\theta)}{s\mathcal N}+O(\varepsilon|\ln\varepsilon|)\right)\tilde\varphi(1,\theta)d\theta+O(\varepsilon)\cdot\int_0^{2\pi}|\tilde\varphi|d\theta\\
& \ \ \ +\frac{s^2}{\varepsilon^2}\cdot z_1\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi}}t\int_1^t\frac{\partial \tilde\varphi(s,\theta)}{\partial s}dsdtd\theta-\frac{2}{z_1}\int_0^{2\pi}\tilde\phi\tilde\varphi(1,\theta)d\theta\\
&\le\frac{s^2}{\varepsilon^2}\cdot z_1\int_0^{2\pi}|t_\varepsilon+t_{\varepsilon,\tilde\phi}|\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi}}\left|\frac{\partial \tilde\varphi(s,\theta)}{\partial s}\right|dsd\theta+O(\varepsilon|\ln\varepsilon| )\cdot\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}\\
&=O(\varepsilon|\ln\varepsilon| )\cdot\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))},
\end{split}
\end{equation*}
where we have used the definition of $\mathcal N$ in \eqref{2-14}. Thus we have
\begin{equation*}
\varepsilon^{1-\frac{2}{p}}\|R_\varepsilon(\phi)\|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}=O(\varepsilon|\ln\varepsilon|),
\end{equation*}
which yields
\begin{equation*}
\|\mathcal T_\varepsilon R_\varepsilon(\phi)\|_*+\varepsilon^{1-\frac{2}{p}}\|\nabla\mathcal T_\varepsilon R_\varepsilon(\phi)\|_{L^p(B_{Ls}(\boldsymbol z))}=O(\varepsilon|\ln\varepsilon|)<\varepsilon|\ln\varepsilon|^2
\end{equation*}
by Lemma \ref{lem2-3}. Arguing in the same way, we can deduce
\begin{equation*}
\varepsilon\|\nabla\phi\|_{L^\infty(B_{Ls}(\boldsymbol z))}=O(\varepsilon|\ln\varepsilon|)<\varepsilon|\ln\varepsilon|^2
\end{equation*}
from the estimate
\begin{equation*}
\varepsilon\|R_\varepsilon(\phi) \|_{W^{-1,\infty}(B_{Ls}(\boldsymbol z))}=O(\varepsilon|\ln\varepsilon|).
\end{equation*}
Thus the operator $\mathcal G_\varepsilon$ indeed maps $\mathcal B_\varepsilon$ into itself continuously.
In the next step, we verify that $\mathcal G_\varepsilon$ is a contraction mapping under the norm
\begin{equation*}
\|\cdot\|_{\mathcal G_\varepsilon}=\|\cdot\|_*+\varepsilon^{1-\frac{2}{p}}\|\cdot\|_{W^{1,p}(B_{Ls}(\boldsymbol z))}, \quad p\in(2,+\infty].
\end{equation*}
We already know that $\mathcal B_\varepsilon$ is closed under this norm. Let $\phi_1$ and $\phi_2$ be two functions in $\mathcal B_\varepsilon$. From Lemma \ref{lem2-3}, it holds
\begin{equation}\label{2-32}
\|\mathcal G_\varepsilon\phi_1-\mathcal G_\varepsilon\phi_2\|_{\mathcal G_\varepsilon}\le C\varepsilon^{1-\frac{2}{p}}\|R_\varepsilon(\phi_1)-R_\varepsilon(\phi_2) \|_{W^{-1,p}(B_{Ls}(\boldsymbol z))},
\end{equation}
where
\begin{equation*}
\begin{split}
R_\varepsilon(\phi_1)&-R_\varepsilon(\phi_2)\\
&=\frac{1}{\varepsilon^2}\bigg(x_1\boldsymbol1_{\{\mathbf U_{\boldsymbol z,\varepsilon}+\phi_1>0\}}-x_1\boldsymbol1_{\{\mathbf U_{\boldsymbol z,\varepsilon}+\phi_2>0\}}-\frac{2}{sz_1}(\phi_1(s,\theta)-\phi_2(s,\theta))\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}\bigg).
\end{split}
\end{equation*}
For $m=1,2$, let
\begin{equation*}
S_{m1}:=\{y\,\,|\,\, \tilde{\mathbf U}_{\boldsymbol z,\varepsilon}+\tilde\phi_m>0\}\cap B_{L}(\boldsymbol 0),
\end{equation*}
and
\begin{equation*}
S_{m2}:=\{y\,\,|\,\, \tilde{\mathbf U}_{\boldsymbol z,\varepsilon}+\tilde\phi_m<0\}\cap B_{L}(\boldsymbol 0).
\end{equation*}
Then it holds
\begin{equation*}
\boldsymbol1_{\{\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}+\tilde\phi_1>0\}}-\boldsymbol1_{\{\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}+\tilde\phi_2>0\}}=0, \ \ \ \ \ \ \text{in}\ \ (S_{11}\cap S_{21})\cup(S_{12}\cap S_{22}).
\end{equation*}
Indeed, on $S_{11}\cap S_{21}$ both indicator functions equal $1$, while on $S_{12}\cap S_{22}$ both vanish, so the difference is supported in $(S_{11}\cap S_{22})\cup(S_{12}\cap S_{21})$. According to Lemma \ref{B3}, for each $\tilde\varphi\in C_0^\infty(B_L(\boldsymbol 0))$, we have
\begin{equation*}
\begin{split}
&\frac{s^2}{\varepsilon^2}\int_{B_L(\boldsymbol 0)}(sy_1+z_1)\left(\boldsymbol1_{\{\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}+\tilde\phi_1>0\}}-\boldsymbol1_{\{\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}+\tilde\phi_2>0\}}\right)\tilde\varphi d\boldsymbol y\\
&=\frac{s^2}{\varepsilon^2}\left(\int_{S_{11}\cap S_{22}}(sy_1+z_1)\tilde\varphi d\boldsymbol y-\int_{S_{12}\cap S_{21}}(sy_1+z_1)\tilde\varphi d\boldsymbol y\right)\\
&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_2}}^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}(sy_1+z_1)t\tilde\varphi dtd\theta\\
&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}(t_{\varepsilon,\tilde\phi_1}-t_{\varepsilon,\tilde\phi_2})(sy_1+z_1)\tilde\varphi(1,\theta)d\theta+\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_2}}^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}(sy_1+z_1)t(\tilde\varphi(t,\theta)-\tilde\varphi(1,\theta))dtd\theta\\
&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}(t_{\varepsilon,\tilde\phi_1}-t_{\varepsilon,\tilde\phi_2})(sy_1+z_1)\tilde\varphi(1,\theta)d\theta+O\left((\varepsilon|\ln\varepsilon|^2) ^{\frac{1}{p}}\right)\cdot\max_{\theta\in(0,2\pi]}|t_{\varepsilon,\tilde\phi_1}-t_{\varepsilon,\tilde\phi_2}|\cdot\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}\\
&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}(t_{\varepsilon,\tilde\phi_1}-t_{\varepsilon,\tilde\phi_2})(sy_1+z_1)\tilde\varphi(1,\theta)d\theta+o_\varepsilon(1)\cdot\|\tilde\phi_1-\tilde\phi_2\|_{L^\infty(B_L(\boldsymbol 0))}\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))},
\end{split}
\end{equation*}
where we have used the fact
\begin{equation*}
|t_{\varepsilon,\tilde\phi_1}-t_{\varepsilon,\tilde\phi_2}|\le C\|\tilde\phi_1-\tilde\phi_2\|_{L^\infty(B_L(\boldsymbol 0))}.
\end{equation*}
To handle the first term in the above identity, we let $\phi_*:=\tilde\phi_1-\tilde\phi_2$, and
\begin{equation*}
\begin{split}
\boldsymbol y_{\varepsilon,m}:&=(1+t_\varepsilon(\theta)+t_{\varepsilon,\tilde\phi_m}(\theta))(\cos\theta,\sin\theta)\\
&\in\{\boldsymbol y \,\,\,|\,\, \ \tilde{\mathbf U}_{\boldsymbol z,\varepsilon}(\boldsymbol y)+\tilde\phi_m(\boldsymbol y)=\mu_\varepsilon\}\cap B_{2L}(\boldsymbol 0).
\end{split}
\end{equation*}
Then it holds
\begin{equation*}
\begin{split}
\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}(\boldsymbol y_{\varepsilon,1})&-\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}(\boldsymbol y_{\varepsilon,2})=\tilde\phi_2(\boldsymbol y_{\varepsilon,2})-\tilde\phi_1(\boldsymbol y_{\varepsilon,1})\\
&=\tilde\phi_2(\boldsymbol y_{\varepsilon,1})-\tilde\phi_1(\boldsymbol y_{\varepsilon,1})+\int_{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_2}}\frac{\partial\tilde\phi_2(t,\theta)}{\partial t}dt\\
&=\phi_*(1,\theta)+\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}\frac{\partial\phi_*(t,\theta)}{\partial t}dt+\int_{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_2}}\frac{\partial\tilde\phi_2(t,\theta)}{\partial t}dt.
\end{split}
\end{equation*}
By the expansion
\begin{equation*}
\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}(\boldsymbol y_{\varepsilon,1})-\tilde{\mathbf U}_{\boldsymbol z,\varepsilon}(\boldsymbol y_{\varepsilon,2})=-\frac{1}{s\mathcal N}(\boldsymbol y_{\varepsilon,1}-\boldsymbol y_{\varepsilon,2})+O(\varepsilon|\ln\varepsilon|^2),
\end{equation*}
we have
\begin{equation*}
\begin{split}
t_{\varepsilon,\tilde\phi_1}&-t_{\varepsilon,\tilde\phi_2}=|\boldsymbol y_{\varepsilon,1}-\boldsymbol y_{\varepsilon,2}|\\
&=-s\mathcal N(1+o_\varepsilon(1))\cdot\left(\phi_*(1,\theta)+\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}\frac{\partial\phi_*(t,\theta)}{\partial t}dt+\int_{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_2}}\frac{\partial\tilde\phi_2(t,\theta)}{\partial t}dt\right).
\end{split}
\end{equation*}
Then using the definition of $\mathcal N$ in \eqref{2-14}, one can deduce
\begin{equation*}
\begin{split}
\frac{s^2}{\varepsilon^2}\int_0^{2\pi}(t_{\varepsilon,\tilde\phi_1}&-t_{\varepsilon,\tilde\phi_2})(sy_1+z_1)\tilde\varphi(1,\theta)d\theta=\frac{2}{z_1}(1+o_\varepsilon(1))\cdot\int_0^{2\pi}(\tilde\phi_1-\tilde\phi_2)\tilde\varphi(1,\theta)d\theta\\
& \ \ \ -\frac{2}{z_1}(1+o_\varepsilon(1))\cdot\left(\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}\frac{\partial\phi_*(t,\theta)}{\partial t}dt+\int_{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_1}}^{1+t_\varepsilon+t_{\varepsilon,\tilde\phi_2}}\frac{\partial\tilde\phi_2(t,\theta)}{\partial t}dt\right)\\
&=\frac{2}{z_1}\int_0^{2\pi}(\tilde\phi_1-\tilde\phi_2)\tilde\varphi(1,\theta)d\theta+o_\varepsilon(1)\cdot\|\tilde\phi_1-\tilde\phi_2\|_{L^\infty(B_L(\boldsymbol 0))}\\
& \ \ \ +\left(O\left((\varepsilon|\ln\varepsilon|^2)^{\frac{1}{p}}\right)+
\|\tilde\phi_2\|_{W^{1,p}(B_L(\boldsymbol 0))}\right)\cdot\|\tilde\phi_1
-\tilde\phi_2\|_{L^\infty(B_L(\boldsymbol 0))}\cdot\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}.
\end{split}
\end{equation*}
Finally, we conclude that
\begin{equation*}
\varepsilon^{1-\frac{2}{p}}\|R_\varepsilon(\phi_1)-R_\varepsilon(\phi_2) \|_{W^{-1,p}(B_{Ls}(\boldsymbol z))}=o_\varepsilon(1)\cdot \|\phi_1-\phi_2\|_{\mathcal G_\varepsilon},
\end{equation*}
which yields
\begin{equation*}
\|\mathcal G_\varepsilon\phi_1-\mathcal G_\varepsilon\phi_2\|_{\mathcal G_\varepsilon}=o_\varepsilon(1)\cdot \|\phi_1-\phi_2\|_{\mathcal G_\varepsilon}
\end{equation*}
from \eqref{2-32}. Hence we have shown that $\mathcal G_\varepsilon$ is a contraction map from $\mathcal B_\varepsilon$ into itself.
Using the contraction mapping theorem, we now can claim that there is a unique $\phi_\varepsilon\in \mathcal B_\varepsilon$ such that $\phi_\varepsilon=\mathcal G_\varepsilon\phi_\varepsilon$, which satisfies \eqref{2-30}. Since $\|\phi_\varepsilon\|_{\mathcal G_\varepsilon}$ is bounded by a constant $C$ independent of $\boldsymbol z$, we conclude that $\phi_\varepsilon$ is continuous with respect to $\boldsymbol z$ in the norm $\|\cdot\|_{\mathcal G_\varepsilon}$.
\end{proof}
\bigskip
From the above lemma, the problem of solving \eqref{Eqforperturbation} is now reduced to a one-dimensional problem: finding a sufficient condition to ensure
\begin{equation*}
\Lambda=0,
\end{equation*}
which will also determine the location of $\boldsymbol z=(z_1,0)$, a crucial parameter in the approximate solutions. In the next lemma, we derive a condition equivalent to $\Lambda=0$, which enables us to prove the existence of $\psi_\varepsilon$.
\begin{lemma}\label{lem2-5}
If $\boldsymbol z=(z_1, 0)$ satisfies
\begin{equation}\label{2-35}
\varepsilon^2\int_{\mathbb R^2_+}\frac{1}{x_1}\nabla\psi_\varepsilon\cdot\nabla Z_{\boldsymbol z,\varepsilon}d\boldsymbol x
-\int_{A_\varepsilon}x_1\cdot Z_{\boldsymbol z,\varepsilon}d\boldsymbol x=0,
\end{equation}
then $\psi_\varepsilon$ is a solution to \eqref{2-9} and \eqref{2-10}.
\end{lemma}
\begin{proof}
If the assumption \eqref{2-35} holds, then from \eqref{Eqforperturbation} we have
\begin{equation*}
\varepsilon^2\Lambda\int_{\mathbb R^2_+}\frac{1}{x_1}\nabla Z_{\boldsymbol z,\varepsilon}\cdot\nabla Z_{\boldsymbol z,\varepsilon}d\boldsymbol x=0.
\end{equation*}
Proceeding as in the proof of Lemma \ref{lem2-2}, we deduce
\begin{equation*}
\varepsilon^2\int_{\mathbb R^2_+}\frac{1}{x_1}\nabla Z_{\boldsymbol z,\varepsilon}\cdot\nabla Z_{\boldsymbol z,\varepsilon}d\boldsymbol x=C_Z+o_\varepsilon(1).
\end{equation*}
Hence $\Lambda=0$ when $\varepsilon$ is sufficiently small, which implies that $\psi_\varepsilon$ is a solution to \eqref{2-9} and \eqref{2-10}.
\end{proof}
\bigskip
Taking advantage of the above characterization, we are now in a position to prove Proposition \ref{prop2-1}.
{\bf Proof of Proposition \ref{prop2-1}:}
We will show that condition \eqref{2-35} is equivalent to
\begin{equation}\label{2-36}
z_1-\frac{\kappa}{4\pi W}=O\left(\frac{1}{|\ln\varepsilon|}\right).
\end{equation}
Since $\phi_\varepsilon\in E_\varepsilon$, we have
\begin{equation*}
\int_{\mathbb R^2_+}\frac{1}{x_1}\nabla\phi_\varepsilon\cdot\nabla Z_{\boldsymbol z,\varepsilon}d\boldsymbol x=0.
\end{equation*}
Hence it holds
\begin{equation*}
\begin{split}
&\quad\varepsilon^2\int_{\mathbb R^2_+}\frac{1}{x_1}\nabla\psi_\varepsilon\cdot\nabla Z_{\boldsymbol z,\varepsilon}d\boldsymbol x
-\int_{A_\varepsilon}x_1\cdot Z_{\boldsymbol z,\varepsilon}d\boldsymbol x\\
&=\varepsilon^2\int_{\mathbb R^2_+}\frac{1}{x_1}\nabla(\mathcal V_{\boldsymbol z,\varepsilon}+\mathcal H_{\boldsymbol z,\varepsilon})\cdot\nabla Z_{\boldsymbol z,\varepsilon}d\boldsymbol x
-\int_{A_\varepsilon}x_1\cdot Z_{\boldsymbol z,\varepsilon}d\boldsymbol x\\
&=\int_{B_{Ls}(\boldsymbol z)}x_1(\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}-\boldsymbol 1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}})\cdot Z_{\boldsymbol z,\varepsilon}d\boldsymbol x.
\end{split}
\end{equation*}
By denoting
$$\tilde Z_{\boldsymbol z,\varepsilon}=Z_{\boldsymbol z,\varepsilon}(s\boldsymbol y+\boldsymbol z),$$
direct computation yields
$$\|\tilde Z_{\boldsymbol z,\varepsilon}\|_{W^{1,p'}(B_L(\boldsymbol 0))}=O(\varepsilon^{-1}).$$
Note that
\begin{equation*}
\begin{split}
&\quad\frac{2}{z_1}\int_0^{2\pi}\tilde\phi_\varepsilon(1,\theta)\tilde Z_{\boldsymbol z,\varepsilon}d\theta\\
&=\int_{\mathbb R^2_+}\frac{1}{sy_1+z_1}\cdot\nabla\tilde\phi_\varepsilon\cdot\nabla \tilde Z_{\boldsymbol z,\varepsilon}d\boldsymbol x+O_\varepsilon(1)\cdot\left(\|\phi_\varepsilon\|_*+\varepsilon\|\nabla\phi_\varepsilon\|_{L^\infty(B_{Ls}(\boldsymbol z))}\right)\\
&=O_\varepsilon(1)\cdot\left(\|\phi_\varepsilon\|_*+\varepsilon\|\nabla\phi_\varepsilon\|_{L^\infty(B_{Ls}(\boldsymbol z))}\right),
\end{split}
\end{equation*}
due to the nondegeneracy property of the operator $\mathbb L$ defined in \eqref{linear}. Then, arguing as in the proof of Lemma \ref{lem2-5}, we can deduce
\begin{equation*}
\begin{split}
&\quad\int_{B_{Ls}(\boldsymbol z)}x_1(\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}-\boldsymbol 1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}})\cdot Z_{\boldsymbol z,\varepsilon}d\boldsymbol x\\
&=-\frac{s^2}{\varepsilon^2}\int_{B_L(\boldsymbol 0)}(sy_1+z_1)\left(\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}\right)\tilde Z_{\boldsymbol z,\varepsilon}d\boldsymbol y\\
&=-\frac{s^2}{\varepsilon^2}\cdot z_1\int_0^{2\pi}\left(\frac{\tilde\phi(1,\theta)}{s\mathcal N}+s\cos\theta\cdot \left(\frac{s^2}{4\varepsilon^2}\cdot z_1\ln\frac{1}{\varepsilon}-Wz_1\ln\frac{1}{\varepsilon}\right)+o(\varepsilon)\right)\tilde Z_{\boldsymbol z,\varepsilon}d\theta+O_\varepsilon(1)\\
&=\frac{\pi}{2}\cdot\frac{s^4}{\varepsilon^4}\cdot z_1^3 \left(\frac{s^2}{4\varepsilon^2}\cdot z_1\ln\frac{1}{\varepsilon}-Wz_1\ln\frac{1}{\varepsilon}\right)+O_\varepsilon(1).
\end{split}
\end{equation*}
Since $s^2\pi z_1/\varepsilon^2=\kappa+O(1/|\ln\varepsilon|)$ holds by our choice of $a$ in \eqref{2-16}, condition \eqref{2-35} yields
$$\frac{\kappa}{4\pi}-Wz_1=O\left(\frac{1}{|\ln\varepsilon|}\right).$$
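In particular, since $W$ is a fixed positive constant, this relation locates $z_1$ at leading order:
$$z_1=\frac{\kappa}{4\pi W}+O\left(\frac{1}{|\ln\varepsilon|}\right).$$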
We can then solve the above equation for $z_1$ and obtain at least one $z_1$ satisfying \eqref{2-36}. In view of Lemma \ref{lem2-5}, we obtain the existence of $\psi_\varepsilon$ for every $\varepsilon\in(0,\varepsilon_0]$. Moreover, the estimates for $A_\varepsilon$ can be deduced from Lemma \ref{lem2-4} and Appendix \ref{appB}. Thus the proof of Proposition \ref{prop2-1} is complete.
\qed
\bigskip
\section{Uniqueness}\label{sec3}
In this section, we will prove the local uniqueness of a vortex ring of small cross-section for which $\zeta$ is constant throughout the core. Moreover, we assume that the cross-section $A_\varepsilon$ is simply-connected and has a positive distance from the $x_2$-axis, so that it is given by
$$A_\varepsilon=\left\{\boldsymbol x\in \mathbb R^2_+ \,\,\, \big| \,\, \psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\right\},$$
where $\mu_\varepsilon>0$ has a positive lower bound independent of $\varepsilon$. Using the notation of Section 2, the Stokes stream function $\psi_\varepsilon$ satisfies
\begin{equation}\label{3-1}
\begin{cases}
-\varepsilon^2{\Delta^*}\psi_\varepsilon=\boldsymbol1_{A_\varepsilon}, & \text{in} \ \mathbb R^2_+,
\\
\psi_\varepsilon=0, & \text{on} \ x_1=0,
\\
\psi_\varepsilon, \ |\nabla\psi_\varepsilon|/x_1\to0, &\text{as} \ |\boldsymbol x |\to \infty.
\end{cases}
\end{equation}
To discuss the uniqueness of vortex rings of small cross-section, we will fix the circulation
\begin{equation}\label{3-2}
\kappa=\frac{1}{\varepsilon^2}\int_{A_\varepsilon}x_1d\boldsymbol x,
\end{equation}
and the parameter $W$ in the translational velocity $W\ln \varepsilon\, \mathbf e_z$. Since $\psi_\varepsilon$ completely determines the vortex ring $\zeta_\varepsilon$, the uniqueness result in Theorem \ref{thm2} can be concluded from the following proposition.
\begin{proposition}\label{prop3-1}
Let $\kappa$ and $W$ be two fixed positive constants. Suppose that the cross-section $A_\varepsilon$ is simply-connected, has a positive distance from the $x_2$-axis, and satisfies
$$\mathrm{diam}\, A_\varepsilon\to 0, \quad \mathrm{as} \ \ \varepsilon\to 0.$$
Then for each $\varepsilon\in (0,\varepsilon_0]$ with $\varepsilon_0>0$ sufficiently small, equation \eqref{3-1} together with \eqref{3-2} has a unique solution $\psi_\varepsilon$ up to translations in the $x_2$-direction.
\end{proposition}
To study the local behavior of $\psi_\varepsilon$ near $A_\varepsilon$, we denote
$$\sigma_\varepsilon:=\frac{1}{2}\text{diam}\, A_\varepsilon$$
as the cross-section parameter. By our assumptions, $\sigma_\varepsilon\to 0$ as $\varepsilon\to 0$. Intuitively, the maximum point of $\psi_\varepsilon$ in $A_\varepsilon$ gives the exact location of the cross-section. We can therefore choose a point $\boldsymbol p_\varepsilon\in A_\varepsilon$ satisfying
$$\psi_\varepsilon(\boldsymbol p_\varepsilon)=\max_{\boldsymbol x\in A_\varepsilon}\psi_\varepsilon(\boldsymbol x),$$
which is always possible by the maximum principle for $-{\Delta^*}$. In view of Lemma \ref{A1} in Appendix \ref{appA}, the set $A_\varepsilon$ must be symmetric with respect to some horizontal line $x_2=h$. Using the translation invariance of \eqref{3-1} in the $x_2$-direction, we may always assume that $A_\varepsilon$ is even symmetric with respect to the $x_1$-axis (i.e., $(x_1,x_2)\in A_\varepsilon$ if and only if $(x_1,-x_2)\in A_\varepsilon$). Then, by the integral equation
\begin{equation*}
\psi_\varepsilon(\boldsymbol x)=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}{G_*}(\boldsymbol x,\boldsymbol x') \boldsymbol 1_{A_\varepsilon}(\boldsymbol x')d\boldsymbol x',
\end{equation*}
we see that $\psi_\varepsilon$ attains its maximum on the $x_1$-axis, and
$$\psi_\varepsilon(\boldsymbol x)-\frac{W}{2}\ln\frac{1}{\varepsilon}x_1^2<0, \quad \text{for}\ \ x_1 \ \text{sufficiently large}. $$
Thus we may assume that $\boldsymbol p_\varepsilon=(p_\varepsilon,0)$, where $p_\varepsilon$ satisfies $c_1<p_\varepsilon<c_2$ for two positive constants $c_1$, $c_2$.
Now, by letting $\boldsymbol z=(z_1,0)$ with $z_1>0$, we decompose the Green's function ${G_*}$ of $-{\Delta^*}$ subject to the boundary conditions in \eqref{3-1} as
\begin{equation*}
{G_*}(\boldsymbol x,\boldsymbol x')=z_1^2G(\boldsymbol x,\boldsymbol x')+H(\boldsymbol x,\boldsymbol x'),
\end{equation*}
where $G(\boldsymbol x,\boldsymbol x')$ is the Green's function of $-\Delta$ on the half plane, and $H(\boldsymbol x,\boldsymbol x')$ is the remaining regular part. At this stage, we only assume $|z_1-p_\varepsilon|=o(\varepsilon)$. A more accurate description of $\boldsymbol z$ will be given in the second part of our proof.
Applying this decomposition of ${G_*}(\boldsymbol x,\boldsymbol x')$, we can split the stream function $\psi_\varepsilon$ as $\psi_{1,\varepsilon}+\psi_{2,\varepsilon}$, where
\begin{equation*}
\psi_{1,\varepsilon}(\boldsymbol x)=\frac{z_1^2}{\varepsilon^2}\int_{\mathbb R^2_+}G(\boldsymbol x,\boldsymbol x')\boldsymbol{1}_{A_\varepsilon}(\boldsymbol x')d\boldsymbol x',
\end{equation*}
and
\begin{equation*}
\psi_{2,\varepsilon}(\boldsymbol x)=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}H(\boldsymbol x,\boldsymbol x')\boldsymbol{1}_{A_\varepsilon}(\boldsymbol x')d\boldsymbol x'.
\end{equation*}
According to \eqref{3-1}, $\psi_{1,\varepsilon}(\boldsymbol x)$ solves the problem
\begin{equation*}
\begin{cases}
-\varepsilon^2\Delta \psi_{1,\varepsilon}(\boldsymbol x)=z_1^2\boldsymbol{1}_{A_\varepsilon}, \ \ \ &\text{in} \ \mathbb R^2_+,\\
\psi_{1,\varepsilon}=0, &\text{on} \ x_1=0,\\
\psi_{1,\varepsilon}, \ |\nabla\psi_{1,\varepsilon}|/x_1\to0, &\text{as} \ |\boldsymbol x|\to \infty,
\end{cases}
\end{equation*}
and $\psi_{2,\varepsilon}(\boldsymbol x)$ satisfies
\begin{equation*}
\begin{cases}
-\varepsilon^2{\Delta^*}(\psi_{1,\varepsilon}(\boldsymbol x)+\psi_{2,\varepsilon}(\boldsymbol x))=\boldsymbol{1}_{A_\varepsilon}, \ \ \ &\text{in} \ \mathbb R^2_+,\\
\psi_{2,\varepsilon}=0, &\text{on} \ x_1=0,\\
\psi_{2,\varepsilon}, \ |\nabla\psi_{2,\varepsilon}|/x_1\to0, &\text{as} \ |\boldsymbol x|\to \infty,
\end{cases}
\end{equation*}
We see that the above two equations constitute a coupled system for $\psi_{1,\varepsilon}$ and $\psi_{2,\varepsilon}$, which seems more complicated than \eqref{3-1}. However, it should be noted that $\psi_{1,\varepsilon}$ solves a semilinear Laplace equation, while $\psi_{2,\varepsilon}$ is more regular than $\psi_{1,\varepsilon}$, with its $L^\infty$ norm bounded independently of $\varepsilon$. These fine properties enable us to decouple $\psi_{1,\varepsilon}$ and $\psi_{2,\varepsilon}$ at main order, and to use the local Pohozaev identity in Appendix \ref{appC} to analyse the asymptotic behavior.
To prove the uniqueness, the key idea is to derive the main parts of $\psi_\varepsilon$ and $\nabla\psi_\varepsilon$ as precisely as possible, which will be achieved through several steps of approximation and bootstrap arguments. In this process, we also obtain a relationship among $\kappa$, $W$, $\sigma_\varepsilon$ and $z_1$, namely, an accurate version of the Kelvin--Hicks formula \eqref{KH}.
\begin{proposition}\label{prop3-2}
For steady vortex rings of small cross-section depicted in Proposition \ref{prop3-1}, the parameters $\kappa$, $W$, $\sigma_\varepsilon$, and $z_1$ satisfy
\begin{equation*}
Wz_1\ln\frac{1}{\varepsilon}=\frac{\kappa}{4\pi}\left(\ln\frac{8z_1}{\sigma_\varepsilon}-\frac{1}{4}\right)+O(\varepsilon^2|\ln\varepsilon|), \quad \mathrm{as} \ \varepsilon \to 0.
\end{equation*}
\end{proposition}
In \cite{Fra1}, Fraenkel obtained a slightly weaker form of the above estimate, with error term $O(\varepsilon^2|\ln\varepsilon|^2)$. We reach the level $O(\varepsilon^2|\ln\varepsilon|)$ because a better $\boldsymbol z$ is chosen, namely the center of $V_{\boldsymbol z,\varepsilon}$ in the approximate solution. Indeed, if we replace $\boldsymbol z$ with $\boldsymbol p_\varepsilon$ in the above formula, then the error term is the same as in \cite{Fra1}.
Our approach to uniqueness is divided into several parts. In the first part of the proof, we give a coarse estimate for $\psi_\varepsilon$ and $A_\varepsilon$. We then improve this estimate by constructing approximate solutions and handling the error terms carefully, which can be regarded as an inverse of the Lyapunov--Schmidt reduction carried out in Section 2. The uniqueness of $\psi_\varepsilon$ is obtained by contradiction in the last part of this section.
\bigskip
\subsection{Asymptotic estimates for vortex ring}
The purpose of this part is to derive an asymptotic estimate for $\psi_\varepsilon$, and to obtain the following necessary condition on the location of $A_\varepsilon$, which is a coarse version of Kelvin--Hicks formula in Proposition \ref{prop3-2}.
\begin{proposition}\label{prop3-3}
As $\varepsilon\to 0$, it holds
\begin{equation*}
Wp_\varepsilon\ln\frac{1}{\varepsilon}-\frac{\kappa}{4\pi}\ln\frac{8p_\varepsilon}{\sigma_\varepsilon}+\frac{\kappa}{16\pi}=o_\varepsilon(1).
\end{equation*}
\end{proposition}
To prove Proposition \ref{prop3-3}, we will begin with the estimate for $\psi_{1,\varepsilon}$ away from the cross-section $A_\varepsilon$. In the following, we always assume that $L>0$ is a large constant.
\begin{lemma}\label{lem3-4}
For every $\boldsymbol x\in \mathbb R^2_+\setminus\{\boldsymbol x\ | \ \mathrm{dist}(\boldsymbol x,A_\varepsilon)\le L\sigma_\varepsilon\}$, we have
\begin{equation*}
\psi_{1,\varepsilon}(\boldsymbol x)=\frac{\kappa}{2\pi}\cdot p_\varepsilon\ln \frac{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|}{|\boldsymbol x-\boldsymbol p_\varepsilon|}+O\left(\frac{ \sigma_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|}\right),
\end{equation*}
and
\begin{equation*}
\nabla \psi_{1,\varepsilon}(\boldsymbol x)=-\frac{\kappa}{2\pi}\cdot p_\varepsilon\frac{\boldsymbol x-\boldsymbol p_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|^2}+\frac{\kappa}{2\pi}\cdot p_\varepsilon\frac{\boldsymbol x-\boldsymbol {\bar p_\varepsilon}}{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|^2}+O\left(\frac{ \sigma_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|^2}\right).
\end{equation*}
\end{lemma}
\begin{proof}
For every $\boldsymbol x\in \mathbb R^2_+\setminus\{\boldsymbol x \ | \ \mathrm{dist}(\boldsymbol x,A_\varepsilon)\le L\sigma_\varepsilon\}$, it holds $\boldsymbol x \notin A_\varepsilon$. Recall the notation $\bar{\boldsymbol x}=(-x_1,x_2)$. For each $\boldsymbol x'\in A_\varepsilon$ we have
\begin{equation*}
|\boldsymbol x-\boldsymbol x'|=|\boldsymbol x-\boldsymbol p_\varepsilon|-\langle\frac{\boldsymbol x-\boldsymbol p_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|},\boldsymbol x'-\boldsymbol p_\varepsilon\rangle+O\left(\frac{|\boldsymbol x'-\boldsymbol p_\varepsilon|^2}{|\boldsymbol x-\boldsymbol p_\varepsilon|}\right),
\end{equation*}
and
\begin{equation*}
|\boldsymbol x-\boldsymbol {\bar x}'|=|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|-\langle\frac{\boldsymbol x-\boldsymbol {\bar p_\varepsilon}}{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|},\boldsymbol {\bar x}'-\boldsymbol {\bar p_\varepsilon}\rangle+O\left(\frac{|\boldsymbol x'-\boldsymbol p_\varepsilon|^2}{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|}\right).
\end{equation*}
Hence we deduce
\begin{equation*}
\begin{split}
\psi_{1,\varepsilon}&(\boldsymbol x)=\frac{z_1^2}{2\pi\varepsilon^2}\int_{A_\varepsilon}\ln \frac{|\boldsymbol x-\boldsymbol {\bar x}'|}{|\boldsymbol x-\boldsymbol x'|}d\boldsymbol x'\\
&=\frac{\kappa}{2\pi}\cdot p_\varepsilon\ln \frac{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|}{|\boldsymbol x-\boldsymbol p_\varepsilon|}+\frac{p_\varepsilon^2}{2\pi\varepsilon^2}\int_{A_\varepsilon}\ln \frac{|\boldsymbol x-\boldsymbol p_\varepsilon|}{|\boldsymbol x-\boldsymbol x'|}d\boldsymbol x'-\frac{p_\varepsilon^2}{2\pi\varepsilon^2}\int_{A_\varepsilon}\ln \frac{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|}{|\boldsymbol x-\boldsymbol {\bar x}'|}d\boldsymbol x'+O\left(\frac{ \sigma_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|}\right)\\
&=\frac{\kappa}{2\pi}\cdot p_\varepsilon\ln \frac{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|}{|\boldsymbol x-\boldsymbol p_\varepsilon|}+O\left(\frac{ \sigma_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|}\right),
\end{split}
\end{equation*}
where we use the circulation constraint \eqref{3-2} and $|\boldsymbol x-\boldsymbol p_\varepsilon|<|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|$. Similarly, from the relations
\begin{equation*}
\frac{\boldsymbol x-\boldsymbol p_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|^2}-\frac{\boldsymbol x-\boldsymbol x'}{|\boldsymbol x-\boldsymbol x'|^2}=O\left(\frac{ \sigma_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|^2}\right),
\end{equation*}
and
\begin{equation*}
\frac{\boldsymbol x-\boldsymbol {\bar p_\varepsilon}}{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|^2}-\frac{\boldsymbol x-\boldsymbol {\bar x}'}{|\boldsymbol x-\boldsymbol {\bar x}'|^2}=O\left(\frac{ \sigma_\varepsilon}{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|^2}\right),
\end{equation*}
we obtain
\begin{equation*}
\nabla \psi_{1,\varepsilon}(\boldsymbol x)=-\frac{\kappa}{2\pi}\cdot p_\varepsilon\frac{\boldsymbol x-\boldsymbol p_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|^2}+\frac{\kappa}{2\pi}\cdot p_\varepsilon\frac{\boldsymbol x-\boldsymbol {\bar p_\varepsilon}}{|\boldsymbol x-\boldsymbol {\bar p_\varepsilon}|^2}+O\left(\frac{ \sigma_\varepsilon}{|\boldsymbol x-\boldsymbol p_\varepsilon|^2}\right).
\end{equation*}
Thus the proof is complete.
\end{proof}
Compared with the main term $\psi_{1,\varepsilon}$, the secondary term $\psi_{2,\varepsilon}$ is more regular, as the following estimate shows, and we can therefore estimate it on the whole right half-plane.
\begin{lemma}\label{lem3-5}
For $\boldsymbol x\in \mathbb R^2_+$, it holds
\begin{equation*}
\psi_{2,\varepsilon}(\boldsymbol x)=\frac{\kappa}{p_\varepsilon} H(\boldsymbol x,\boldsymbol z)+O(\sigma_\varepsilon|\ln \sigma_\varepsilon|).
\end{equation*}
\end{lemma}
\begin{proof}
Using the definition of $H(\boldsymbol x,\boldsymbol x')$ and standard elliptic estimates, we have
\begin{equation*}
\begin{split}
\psi_{2,\varepsilon}(\boldsymbol x)-\frac{\kappa}{p_\varepsilon} H(\boldsymbol x,\boldsymbol z)&=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}\left(H(\boldsymbol x,\boldsymbol x')-H(\boldsymbol x,\boldsymbol z)\right)\boldsymbol{1}_{A_\varepsilon}d\boldsymbol x'+O(\sigma_\varepsilon)\\
&=O(\sigma_\varepsilon|\ln \sigma_\varepsilon|),
\end{split}
\end{equation*}
which is the desired result.
\end{proof}
Next we turn to study the local behavior of $\psi_{1,\varepsilon}$ near $\boldsymbol p_\varepsilon$.
\begin{proposition}\label{prop3-6}
$\psi_{1,\varepsilon}$ has the following asymptotic behavior as $\varepsilon\to 0$,
\begin{equation*}
\psi_{1,\varepsilon}(\boldsymbol x)=\frac{\sigma_\varepsilon^2}{\varepsilon^2}\cdot p_\varepsilon^2\left(w\left(\frac{\boldsymbol x-\boldsymbol p_\varepsilon}{\sigma_\varepsilon}\right)+o_\varepsilon(1)\right)+\mu_\varepsilon+\frac{W}{2}p_\varepsilon^2\ln\frac{1}{\varepsilon}-\frac{\kappa}{p_\varepsilon} H(\boldsymbol p_\varepsilon,\boldsymbol z),\ \ \ \boldsymbol x\in B_{L\sigma_\varepsilon}(\boldsymbol p_\varepsilon),
\end{equation*}
\begin{equation*}
\frac{\kappa}{2\pi}\cdot p_\varepsilon\ln \left(\frac{1}{\sigma_\varepsilon}\right)-\frac{\kappa}{2\pi}\cdot p_\varepsilon\ln\frac{1}{2p_\varepsilon}+\frac{\kappa}{p_\varepsilon} H(\boldsymbol p_\varepsilon,\boldsymbol z)-\frac{W}{2}p_\varepsilon^2\ln\frac{1}{\varepsilon}-\mu_\varepsilon=o_\varepsilon(1),
\end{equation*}
and
\begin{equation*}
\frac{|A_\varepsilon|}{\sigma_\varepsilon^2}\rightarrow \pi,
\end{equation*}
where
\begin{equation*}
w(\boldsymbol y)=\begin{cases}
\frac{1}{4}(1-|\boldsymbol y|^2),\quad &|\boldsymbol y|\le 1,\\
\frac{1}{2}\ln \frac{1}{|\boldsymbol y|},\quad &|\boldsymbol y|\geq 1.
\end{cases}
\end{equation*}
\end{proposition}
In order to prove Proposition \ref{prop3-6}, we first establish the following lemma, which shows that the kinetic energy of the flow in the vortex core is bounded.
\begin{lemma}\label{lem3-7}
As $\varepsilon\to 0$, it holds
\begin{equation*}
\frac{1}{\varepsilon^2}\int_{A_\varepsilon}x_1\left(\psi_\varepsilon-\frac{W}{2}\ln\frac{1}{\varepsilon}x_1^2-\mu_\varepsilon\right)_+d\boldsymbol x=O_\varepsilon(1).
\end{equation*}
\end{lemma}
\begin{proof}
We take $\psi_+=\left(\psi_\varepsilon-\frac{W}{2}\ln\frac{1}{\varepsilon}x_1^2-\mu_\varepsilon\right)_+$ as the upper truncation of $\psi_\varepsilon$. From equation \eqref{3-1}, it holds
\begin{equation*}
\begin{cases}
-\varepsilon^2{\Delta^*} \psi_+(\boldsymbol x)=\boldsymbol{1}_{A_\varepsilon}, \ \ \ \text{in} \ A_\varepsilon,\\
\psi_+(\boldsymbol x)=0, \ \ \ \text{on} \ \partial A_\varepsilon.
\end{cases}
\end{equation*}
Thus we can integrate by parts to obtain
\begin{equation*}
\int_{A_\varepsilon}\frac{1}{x_1}|\nabla\psi_+|^2d\boldsymbol x=\frac{1}{\varepsilon^2}\int_{A_\varepsilon}x_1\psi_+d\boldsymbol x\le \frac{C|A_\varepsilon|^{1/2}}{\varepsilon^2}\left( \int_{A_\varepsilon}|\psi_+|^2d\boldsymbol x\right)^{1/2},
\end{equation*}
where we use the restriction $c_1<p_\varepsilon<c_2$. By Sobolev embedding, it holds
\begin{equation*}
\left( \int_{A_\varepsilon}|\psi_+|^2d\boldsymbol x\right)^{1/2}\le C\int_{A_\varepsilon}|\nabla\psi_+|d\boldsymbol x.
\end{equation*}
Hence we deduce
\begin{equation*}
\int_{A_\varepsilon}\frac{1}{x_1}|\nabla\psi_+|^2d\boldsymbol x\le\frac{C|A_\varepsilon|^{1/2}}{\varepsilon^2}\int_{A_\varepsilon}|\nabla\psi_+|d\boldsymbol x\le\frac{C|A_\varepsilon|}{\varepsilon^2}\left( \int_{A_\varepsilon}|\nabla\psi_+|^2d\boldsymbol x\right)^{1/2}.
\end{equation*}
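Since $x_1$ is bounded above and below on $A_\varepsilon$, the quadratic term in the last display can be absorbed: dividing through by $\big( \int_{A_\varepsilon}\frac{1}{x_1}|\nabla\psi_+|^2d\boldsymbol x\big)^{1/2}$ gives
\begin{equation*}
\left( \int_{A_\varepsilon}\frac{1}{x_1}|\nabla\psi_+|^2d\boldsymbol x\right)^{1/2}\le\frac{C|A_\varepsilon|}{\varepsilon^2}.
\end{equation*}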
Using the circulation constraint \eqref{3-2}, we finally obtain
\begin{equation*}
\frac{1}{\varepsilon^2}\int_{A_\varepsilon}x_1\psi_+d\boldsymbol x=\int_{A_\varepsilon}\frac{1}{x_1}|\nabla\psi_+|^2d\boldsymbol x=O_\varepsilon(1),
\end{equation*}
which is the estimate we need by the definition of $\psi_+$.
\end{proof}
Now we introduce a scaling version of $\psi_{1,\varepsilon}$ by letting
\begin{equation*}
w_\varepsilon(\boldsymbol y)=\frac{1}{p_\varepsilon^2}\cdot\frac{\varepsilon^2}{\sigma_\varepsilon^2}\left(\psi_{1,\varepsilon}(\sigma_\varepsilon \boldsymbol y+\boldsymbol p_\varepsilon)+\frac{\kappa}{p_\varepsilon} H(\boldsymbol p_\varepsilon,\boldsymbol z)-\frac{W}{2}p_\varepsilon^2\ln\frac{1}{\varepsilon}-\mu_\varepsilon\right),
\end{equation*}
so that $w_\varepsilon$ satisfies
\begin{equation}\label{3-3}
-\Delta w_\varepsilon=\boldsymbol 1_{\{w_\varepsilon>0\}}+f(\sigma_\varepsilon \boldsymbol y+\boldsymbol p_\varepsilon, w_\varepsilon), \ \ \ \text{in}\ \mathbb{R}^2,
\end{equation}
with
\begin{equation*}
f(\boldsymbol x,w):=\frac{z_1^2}{p_\varepsilon^2}\cdot \boldsymbol 1_{\{\psi_\varepsilon(\boldsymbol x)-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}-\mu_\varepsilon>0\}}-\boldsymbol 1_{\{w>0\}},
\end{equation*}
and $w_\varepsilon(\boldsymbol y)=O(\sigma_\varepsilon|\ln\sigma_\varepsilon|)$, if $\sigma_\varepsilon \boldsymbol y+\boldsymbol p_\varepsilon\in \partial A_\varepsilon$.
Intuitively, the limiting equation for $w_\varepsilon$ as $\varepsilon\to 0$ is $-\Delta w=\boldsymbol 1_{\{w>0\}}$. To show the convergence, we first give a uniform bound for $w_\varepsilon$ in the $L^\infty$ norm.
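Indeed, the explicit radial profile $w$ appearing in Proposition \ref{prop3-6} solves this equation: since $\Delta |\boldsymbol y|^2=4$ in dimension two, it holds
$$-\Delta\left(\frac{1}{4}(1-|\boldsymbol y|^2)\right)=1 \ \ \text{in} \ B_1(\boldsymbol 0), \qquad -\Delta\left(\frac{1}{2}\ln\frac{1}{|\boldsymbol y|}\right)=0 \ \ \text{in} \ \mathbb R^2\setminus \overline{B_1(\boldsymbol 0)},$$
and the two pieces match in $C^1$ across $|\boldsymbol y|=1$ (both radial derivatives equal $-\frac{1}{2}$ there), so that $-\Delta w=\boldsymbol 1_{\{w>0\}}$ in $\mathbb R^2$ with $\{w>0\}=B_1(\boldsymbol 0)$.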
\begin{lemma}\label{lem3-8}
For any $R>0$, there exists a constant $C_R>0$ independent of $\varepsilon$ such that $$||w_\varepsilon||_{L^\infty(B_R(\boldsymbol 0))}\leq C_R.$$
\end{lemma}
\begin{proof}
It follows from Lemma \ref{lem3-7} and the assumption on $p_\varepsilon$ that
\begin{align*}
O_\varepsilon(1)&=\frac{1}{\varepsilon^2}\int_{A_\varepsilon}x_1\left(\psi_\varepsilon-\frac{W}{2}\ln\frac{1}{\varepsilon}x_1^2-\mu_\varepsilon\right)_+d\boldsymbol x\\
&=\frac{\sigma_\varepsilon^4}{\varepsilon^4}\cdot \left(p_\varepsilon^3+O(\sigma_\varepsilon)\right)\cdot\int_{\mathbb R^2} (w_\varepsilon)_+ d\boldsymbol y+O(\sigma_\varepsilon|\ln\sigma_\varepsilon|).
\end{align*}
Notice that $\kappa=\varepsilon^{-2}\cdot p_\varepsilon|A_\varepsilon|+o_\varepsilon(1) \leq C\varepsilon^{-2}\sigma_\varepsilon^2$. We deduce
$$\int_{\mathbb R^2} (w_\varepsilon)_+d\boldsymbol y\le C.$$
By Moser iteration, we then obtain
$$||(w_\varepsilon)_+||_{L^\infty(B_R(\boldsymbol 0))}\leq C.$$
To prove that the $L^\infty$ norm of $w_\varepsilon$ is bounded, we consider the following problem.
\begin{equation*}
\begin{cases}-\Delta w_1=\boldsymbol 1_{\{w_\varepsilon>0\}}+f(\sigma_\varepsilon \boldsymbol y+\boldsymbol p_\varepsilon, w_\varepsilon),\quad &\text{in} \ B_R(\boldsymbol 0),\\
w_1=0,&\text{on} \ \partial B_R(\boldsymbol 0).\end{cases}
\end{equation*}
It is obvious that $|w_1|\le C$. Let $w_2:=w_\varepsilon-w_1$. Then $w_2$ is harmonic in $B_R(\boldsymbol 0)$, and since $\sup_{B_R(\boldsymbol 0)} w_\varepsilon\ge 0$, it satisfies
$$\sup_{B_R(\boldsymbol 0)} w_2\ge \sup_{B_R(\boldsymbol 0)} w_\varepsilon-C\geq -C.$$
On the other hand, we infer from $||(w_\varepsilon)_+||_{L^\infty(B_R(\boldsymbol 0))}\leq C$ that
$$\sup_{B_R(\boldsymbol 0)} w_2\leq \sup_{B_R(\boldsymbol 0)} w_\varepsilon+C\leq M,$$
for some constant $M$. Hence $M-w_2$ is a positive harmonic function. Using the Harnack inequality, we have
$$\sup_{B_R(\boldsymbol 0)}( M- w_2)\leq C_0\inf_{B_R(\boldsymbol 0)}(M- w_2)= C_0\big(M-\sup_{B_R(\boldsymbol 0)} w_2\big)\leq C_0(M+C)\le C.$$
Since $\sup_{B_R(\boldsymbol 0)}( M- w_2)=M-\inf_{B_R(\boldsymbol 0)} w_2$, we deduce
$$\inf_{B_R(\boldsymbol 0)} w_2\geq -C,$$
which implies the boundedness of $w_\varepsilon$.
\end{proof}
The limiting function for $w_\varepsilon$ as $\varepsilon\to 0$ is established in the following lemma.
\begin{lemma}\label{lem3-9}
As $\varepsilon\to0$, it holds
$$w_\varepsilon\to w$$
in $C_{\mathrm{loc}}^1(\mathbb{R}^2)$ for some radial function $w$.
\end{lemma}
\begin{proof}
For $\boldsymbol y\in B_R(\boldsymbol 0)\setminus B_L(\boldsymbol 0)$, we infer from Lemma \ref{lem3-4} and Lemma \ref{lem3-5} that
\begin{align*}
w_\varepsilon(\boldsymbol y)&=\frac{1}{p_\varepsilon^2}\cdot\frac{\varepsilon^2}{\sigma_\varepsilon^2}\cdot\left(\psi_{1,\varepsilon}(\sigma_\varepsilon \boldsymbol y+\boldsymbol p_\varepsilon)+\frac{\kappa}{p_\varepsilon} H(\boldsymbol p_\varepsilon,\boldsymbol z)-\frac{W}{2}p_\varepsilon^2\ln\frac{1}{\varepsilon}-\mu_\varepsilon\right)\\
&=\frac{|A_\varepsilon|\cdot(1+O(\sigma_\varepsilon))}{\sigma^2_\varepsilon}\cdot\left(\frac{1}{2\pi}\ln \left(\frac{1}{|\sigma_\varepsilon\boldsymbol y|}\right)-\frac{1}{2\pi}\ln\frac{1}{|\sigma_\varepsilon\boldsymbol y+\boldsymbol {\bar p_\varepsilon}-\boldsymbol p_\varepsilon|}\right.\\
&\quad\left.+\frac{1}{p_\varepsilon^2} H(\boldsymbol p_\varepsilon,\boldsymbol z)
-\frac{W}{2\kappa}\cdot p_\varepsilon\ln\frac{1}{\varepsilon}-\frac{\mu_\varepsilon}{p_\varepsilon\kappa}+O\left(\frac{1}{L}\right)\right)\\
&=\frac{|A_\varepsilon|\cdot(1+O(\sigma_\varepsilon))}{\sigma^2_\varepsilon}\cdot\frac{1}{2\pi}\ln \frac{1}{|\boldsymbol y|}\\
&\quad+\frac{|A_\varepsilon|\cdot(1+O(\sigma_\varepsilon))}{\sigma^2_\varepsilon}\cdot\left(\frac{1}{2\pi}\ln \left(\frac{1}{\sigma_\varepsilon}\right)-\frac{1}{2\pi}\ln\frac{1}{|\sigma_\varepsilon\boldsymbol y+\boldsymbol {\bar p_\varepsilon}-\boldsymbol p_\varepsilon|}\right.\\
&\quad\left.+\frac{1}{p^2_\varepsilon} H(\boldsymbol p_\varepsilon,\boldsymbol z)-\frac{W}{2\kappa}\cdot p_\varepsilon\ln\frac{1}{\varepsilon}-\frac{\mu_\varepsilon}{p_\varepsilon\kappa}+O\left(\frac{1}{L}\right)\right).
\end{align*}
Since $|A_\varepsilon|/\sigma^2_\varepsilon\le C$ and $||w_\varepsilon||_{L^\infty(B_R(\boldsymbol 0))}\leq C_R$ by Lemma \ref{lem3-8}, we may assume
$$|A_\varepsilon|/\sigma^2_\varepsilon\to\boldsymbol t,$$
and
\begin{equation*}
\begin{split}
\frac{|A_\varepsilon|\cdot(1+O(\sigma_\varepsilon))}{\sigma^2_\varepsilon}\cdot\bigg(\frac{1}{2\pi}\ln \left(\frac{1}{\sigma_\varepsilon}\right)&-\frac{1}{2\pi}\ln\frac{1}{|\sigma_\varepsilon\boldsymbol y+\boldsymbol {\bar p_\varepsilon}-\boldsymbol p_\varepsilon|}\\
&+\frac{1}{p^2_\varepsilon} H(\boldsymbol p_\varepsilon,\boldsymbol z)-\frac{W}{2\kappa}\cdot p_\varepsilon\ln\frac{1}{\varepsilon}-\frac{\mu_\varepsilon}{p_\varepsilon\kappa}\bigg)\to \tau,
\end{split}
\end{equation*}
for some $\boldsymbol t\in[0,+\infty)$ and $\tau\in(-\infty,+\infty)$.
By \eqref{3-3}, we may further assume that $w_\varepsilon\to w$ in $C_{\text{loc}}^1(\mathbb{R}^2)$ and $w$ satisfies
\begin{equation*}
\begin{cases}
-\Delta w=\boldsymbol1_{\{w>0\}},\quad &\text{in}\,B_R(\boldsymbol 0),\\
w=\frac{\boldsymbol t}{2\pi}\ln \frac{1}{|\boldsymbol y|}+\tau+O\left(\frac{1}{L}\right), &\text{in} \, B_R(\boldsymbol 0)\setminus B_L(\boldsymbol 0).
\end{cases}
\end{equation*}
Moreover, $w$ will satisfy the integral equation
$$w(\boldsymbol y)=\frac{1}{2\pi}\int_{\mathbb{R}^2} \ln\left(\frac{1}{|\boldsymbol y-\boldsymbol y'|}\right) \boldsymbol 1_{\{w>0\}}(\boldsymbol y')d\boldsymbol y'+\tau.$$
Then the method of moving planes shows that $w$ is radial and decreasing (see e.g. \cite{T}), which completes the proof of this lemma.
\end{proof}
\bigskip
\noindent{\bf Proof of Proposition \ref{prop3-6}:}
By the definition of $\sigma_\varepsilon$, there exists a $\boldsymbol y_\varepsilon$ with $|\boldsymbol y_\varepsilon|=1$ and $\sigma_\varepsilon \boldsymbol y_\varepsilon+\boldsymbol p_\varepsilon\in \partial A_\varepsilon$. Thus it holds
\begin{equation*}
w(\boldsymbol y)=\begin{cases}
\frac{1}{4}(1-|\boldsymbol y|^2),\quad &|\boldsymbol y|\le 1,\\
\frac{1}{2}\ln \frac{1}{|\boldsymbol y|},\quad &|\boldsymbol y|\ge 1.
\end{cases}
\end{equation*}
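Matching this profile with the expansion of $w$ in $B_R(\boldsymbol 0)\setminus B_L(\boldsymbol 0)$ obtained in the proof of Lemma \ref{lem3-9}, we find
$$\frac{\boldsymbol t}{2\pi}\ln \frac{1}{|\boldsymbol y|}+\tau+O\left(\frac{1}{L}\right)=\frac{1}{2}\ln \frac{1}{|\boldsymbol y|}, \ \ \ L\le|\boldsymbol y|\le R.$$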
We further have that $\boldsymbol t=\pi$ and $\tau+O(1/L)=0$. Since $\tau$ does not depend on $L$, while the $O(1/L)$ term tends to $0$ as $L\to+\infty$, we must have $\tau=0$. The proof of Proposition \ref{prop3-6} is hence complete. \qed
\bigskip
\noindent{\bf Proof of Proposition \ref{prop3-3}:}
Now we can apply the local Pohozaev identity \eqref{C-1} in Appendix \ref{appC} to $\psi_{1,\varepsilon}$ and obtain
\begin{equation*}
\begin{split}
&\quad-\int_{\partial B_\delta(\boldsymbol z)} \frac{\partial\psi_{1,\varepsilon}}{\partial \nu}\frac{\partial\psi_{1,\varepsilon}}{\partial x_1}dS+ \frac{1}{2}\int_{\partial B_\delta(\boldsymbol z)} |\nabla\psi_{1,\varepsilon}|^2 \nu_1dS\\
&=-\frac{z_1^2}{\varepsilon^2}\int_{B_\delta(\boldsymbol z)} \partial_1\psi_{2,\varepsilon}(\boldsymbol x)\cdot\boldsymbol 1_{A_\varepsilon}(\boldsymbol x)d\boldsymbol x+\frac{z_1^2}{\varepsilon^2}\int_{B_\delta(\boldsymbol z)} Wx_1\ln\frac{1}{\varepsilon}\cdot\boldsymbol 1_{A_\varepsilon}(\boldsymbol x)d\boldsymbol x,
\end{split}
\end{equation*}
where $\delta$ is a small positive number. Since $|A_\varepsilon|/\sigma^2_\varepsilon\to \pi$ as $\varepsilon\to 0$ and $|z_1-p_\varepsilon|=o(\varepsilon)$, we see from the isoperimetric inequality that $A_\varepsilon$ tends to a disc centered at $\boldsymbol z$ of radius $s_0:=(\frac{\kappa}{\pi z_1})^{1/2}\varepsilon$, in the sense that $\sigma_\varepsilon/s_0\to 1$ and $|A_\varepsilon\Delta B_{s_0}(\boldsymbol z)|=o(\varepsilon^2)$.
Using Lemma \ref{C4}, we have
\begin{equation*}
Wp_\varepsilon\ln\frac{1}{\varepsilon}-\frac{\kappa}{4\pi}\ln\frac{8p_\varepsilon}{\sigma_\varepsilon}+\frac{\kappa}{16\pi}=o_\varepsilon(1).
\end{equation*}
So we have finished the proof of Proposition \ref{prop3-3}. \qed
\bigskip
\subsection{Refined estimates and revised Kelvin--Hicks formula }
For the uniqueness of $\psi_\varepsilon$, we need to improve the results in Propositions \ref{prop3-3} and \ref{prop3-6}. So we reconsider the problem \eqref{3-1}
\begin{equation*}
\begin{cases}
-\varepsilon^2{\Delta^*}\psi_\varepsilon=\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}, & \text{in} \ \mathbb R^2_+,
\\
\psi_\varepsilon=0, & \text{on} \ x_1=0,
\\
\psi_\varepsilon, \ |\nabla\psi_\varepsilon|/x_1\to0, & \text{as} \ |\boldsymbol x |\to \infty
\end{cases}
\end{equation*}
together with circulation constraint \eqref{3-2}
\begin{equation*}
\frac{1}{\varepsilon^2}\int_{A_\varepsilon}x_1d\boldsymbol x=\kappa.
\end{equation*}
To obtain a more accurate estimate for $\psi_\varepsilon$, we will construct a series of approximate solutions $\Phi_{\boldsymbol z,\varepsilon}$ and calculate their differences from $\psi_\varepsilon$. Let us recall the definitions of the functions $\mathcal V_{\boldsymbol z,\varepsilon}$ and $\mathcal H_{\boldsymbol z,\varepsilon}$, whose properties are discussed in the second part of Section 1. We choose approximate solutions to \eqref{3-1} and \eqref{3-2} of the form
\begin{equation*}
\Phi_{\boldsymbol z,\varepsilon}(\boldsymbol x)=\mathcal V_{\boldsymbol z,\varepsilon}(\boldsymbol x)+\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol x),
\end{equation*}
where the parameters $\boldsymbol z$, $s$ and $a$ in $\Phi_{\boldsymbol z,\varepsilon}(\boldsymbol x)$ satisfy
\begin{equation}\label{3-4}
\partial_1\Phi_{\boldsymbol z,\varepsilon}(\boldsymbol p_\varepsilon)=0,
\end{equation}
\begin{equation}\label{3-5}
\frac{a}{2\pi}\ln\frac{1}{\varepsilon}=\mu_\varepsilon+\frac{W}{2}z_1^2\ln\frac{1}{\varepsilon}-\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol z)+V_{\boldsymbol{\bar z},\varepsilon}(\boldsymbol z),
\end{equation}
and
\begin{equation}\label{3-6}
\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\cdot\frac{1}{s|\ln s|}=\frac{s}{2\varepsilon^2}\cdot z_1^2.
\end{equation}
As in \eqref{2-14} in Section 2, we also denote
\begin{equation}\label{3-7}
\mathcal N:=\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\cdot\frac{1}{s|\ln s|}=\frac{s}{2\varepsilon^2}\cdot z_1^2
\end{equation}
as the value of $|\nabla V_{\boldsymbol z,\varepsilon}|$ at $|\boldsymbol x-\boldsymbol z|= s$. Notice that the first condition \eqref{3-4} is equivalent to
\begin{equation*}
\frac{|z_1- p_\varepsilon|}{2\varepsilon^2}+O(\varepsilon)=\partial_1 V_{\boldsymbol{\bar z},\varepsilon}(\boldsymbol p_\varepsilon)-\partial_1\mathcal H_{\boldsymbol z,\varepsilon}(\boldsymbol p_\varepsilon)+O(\varepsilon),
\end{equation*}
where the right hand side blows up at order $O(|\ln\varepsilon|)$. By the asymptotic estimates given in Proposition \ref{prop3-6}, we then obtain
\begin{equation*}
|z_1-p_\varepsilon|=O(\varepsilon^2|\ln\varepsilon|),
\end{equation*}
\begin{equation*}
\frac{a}{2\pi}\ln\frac{1}{\varepsilon}=\mu_\varepsilon+\frac{W}{2}p_\varepsilon^2\ln\frac{1}{\varepsilon}+O_\varepsilon(1),
\end{equation*}
and
\begin{equation*}
|\sigma_\varepsilon-s|=o(\varepsilon).
\end{equation*}
As in Section 2, we denote the difference between $\psi_\varepsilon$ and $\Phi_{\boldsymbol z,\varepsilon}$ as the error term
\begin{equation*}
\phi_\varepsilon(\boldsymbol x):=\psi_\varepsilon(\boldsymbol x)-\Phi_{\boldsymbol z,\varepsilon}(\boldsymbol x).
\end{equation*}
Hence our task in this part is to improve the estimate for $\phi_\varepsilon$.
Recall the definition of $\|\cdot\|_*$ norm in \eqref{2-18}. With the result in Proposition \ref{prop3-6}, we have the following lemma concerning $\phi_\varepsilon$.
\begin{lemma}\label{lem3-10}
As $\varepsilon\to 0$, it holds
$$\|\phi_\varepsilon\|_*=o_\varepsilon(1).$$
\end{lemma}
\begin{proof}
In view of Proposition \ref{prop3-6} and our assumptions \eqref{3-4}--\eqref{3-6}, it is obvious that
\begin{equation*}
||\phi_\varepsilon||_{L^\infty(B_{Ls}(\boldsymbol z))}=o_\varepsilon(1)
\end{equation*}
for some $L>0$ large.
For those $\boldsymbol x$ far away from $B_{Ls}(\boldsymbol z)$, on the other hand, it holds
\begin{equation*}
\phi_\varepsilon(\boldsymbol x)=\frac{1}{\varepsilon^2}\int_{\mathbb R^2_+}{G_*}(\boldsymbol x,\boldsymbol x')(\boldsymbol 1_{A_\varepsilon}(\boldsymbol x')-\boldsymbol 1_{B_s(\boldsymbol z)}(\boldsymbol x'))d\boldsymbol x'.
\end{equation*}
Since
\begin{equation*}
\frac{1}{\varepsilon^2}||\boldsymbol 1_{A_\varepsilon}-\boldsymbol 1_{B_s(\boldsymbol z)}||_{L^1(B_{Ls}(\boldsymbol z))}=o_\varepsilon(1),
\end{equation*}
we can use the pointwise bound
\begin{equation*}
\left(\frac{1}{x_1}+1\right){G_*}(\boldsymbol x,\boldsymbol x')\le C\cdot \frac{1+x_1^2}{(1+|\boldsymbol x-\boldsymbol z|^2)^{\frac{3}{2}}},
\end{equation*}
and Young's inequality to derive
\begin{equation*}
||\phi_\varepsilon||_*=o_\varepsilon(1),
\end{equation*}
which yields the conclusion.
\end{proof}
By a linearization procedure, we see that $\phi_\varepsilon$ satisfies the equation
\begin{equation*}
\mathbb L_\varepsilon\phi_\varepsilon=R_\varepsilon(\phi_\varepsilon),
\end{equation*}
where $\mathbb L_\varepsilon$ is the linear operator defined by
\begin{equation*}
\mathbb L_\varepsilon\phi_\varepsilon=-x_1{\Delta^*}\phi_\varepsilon-\frac{2}{sz_1}\phi_\varepsilon(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s},
\end{equation*}
and
\begin{equation*}
R_\varepsilon(\phi_\varepsilon)=\frac{1}{\varepsilon^2}\bigg(x_1\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-x_1\boldsymbol1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}-\frac{2}{sz_1}\phi_\varepsilon(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}\bigg).
\end{equation*}
By Lemma \ref{B4} in Appendix \ref{appB}, it holds
\begin{equation*}
R_\varepsilon(\phi_\varepsilon)=0, \ \ \ \text{in} \ \left(\mathbb R^2_+\setminus B_{2s}(\boldsymbol z)\right) \cup B_{s/2}(\boldsymbol z).
\end{equation*}
To derive a better estimate for $\phi_\varepsilon$, let us first establish the following lemma about the linear operator $\mathbb L_\varepsilon$.
\begin{lemma}\label{lem3-11}
Suppose that $\mathrm{supp} \,\mathbb L_\varepsilon\phi_\varepsilon\subset B_{2s}(\boldsymbol z)$. Then for any $p\in(2, +\infty]$, there exist a constant $c_0>0$ and an $\varepsilon_0>0$ small such that for any $\varepsilon\in (0,\varepsilon_0]$, it holds
\begin{equation*}
\quad \varepsilon^{1-\frac{2}{p}} ||\mathbb{L}_\varepsilon \phi_\varepsilon||_{W^{-1,p}(B_{Ls}(\boldsymbol z))}+\varepsilon^2||\mathbb{L}_\varepsilon \phi_\varepsilon||_{L^\infty(B_{s/2}(\boldsymbol z))}\ge c_0 \left(\varepsilon^{1-\frac{2}{p}} ||\nabla \phi_\varepsilon||_{L^{p}(B_{Ls}(\boldsymbol z))}+||\phi_\varepsilon||_*\right)
\end{equation*}
with $L>0$ a large constant.
\end{lemma}
\begin{proof}
We will argue by contradiction. Suppose on the contrary that there exists $\varepsilon_n\to 0$ such that $\phi_n:=\phi_{\varepsilon_n}$ satisfies
\begin{equation*}
\varepsilon_n^{1-\frac{2}{p}} ||\mathbb{L}_{\varepsilon_n} \phi_n||_{W^{-1,p}(B_{Ls}(\boldsymbol z))}+\varepsilon_n^2||\mathbb{L}_{\varepsilon_n} \phi_n||_{L^\infty(B_{s/2}(\boldsymbol z))}\le \frac{1}{n},
\end{equation*}
and
\begin{equation}\label{3-8}
\varepsilon_n^{1-\frac{2}{p}} ||\nabla \phi_n||_{L^{p}(B_{Ls}(\boldsymbol z))}+||\phi_n||_*=1.
\end{equation}
By letting $f_n=\mathbb{L}_{\varepsilon_n}\phi_n$, we have
\begin{equation*}
-{\Delta^*} \phi_n=\frac{2}{sz_1}\phi_n(s,\theta)\boldsymbol\delta_{|\boldsymbol x-\boldsymbol z|=s}+f_n.
\end{equation*}
Here, we write $\tilde v(\boldsymbol y):=v(s \boldsymbol y+\boldsymbol z)$ for a generic function $v$. Then the above equation has the weak form
\begin{equation*}
\int_{\mathbb R^2_+}\frac{1}{sy_1+z_1}\cdot\nabla\tilde\phi_n\cdot\nabla\varphi d\boldsymbol y=2\int_{|\boldsymbol y|=1}\frac{1}{z_1}\tilde\phi_n\varphi+\langle\tilde f_n,\varphi\rangle, \ \ \ \ \ \ \forall \, \varphi\in C_0^\infty(\mathbb R^2).
\end{equation*}
Since the right hand side of the equation is bounded in $W_{\text{loc}}^{-1,p}(\mathbb{R}^2)$, $\tilde \phi_n$ is bounded in $W^{1,p}_{\text{loc}}(\mathbb{R}^2)$ and hence bounded in $C_{\text{loc}}^\alpha (\mathbb{R}^2)$ for some $\alpha>0$ by Sobolev embedding. We may assume that $\tilde \phi_n$ converges uniformly in any compact subset of $\mathbb{R}^2$ to $\phi^*\in L^\infty(\mathbb{R}^2)\cap C(\mathbb{R}^2)$, and the limiting function $\phi^*$ satisfies
\begin{equation*}
-\Delta \phi^* =2\phi^*(1,\theta)\boldsymbol \delta_{|\boldsymbol y|=1}, \quad \text{in} \ \mathbb{R}^2.
\end{equation*}
Therefore, we conclude from the nondegeneracy of the limiting operator and the symmetry with respect to the $x_1$-axis that
\begin{equation*}
\phi^*=C_1\cdot\frac{\partial w}{\partial y_1}
\end{equation*}
with $C_1$ a constant, and
\begin{equation*}
w(\boldsymbol y)=\left\{
\begin{array}{lll}
\frac{1}{4}(1-|\boldsymbol y|^2), \ \ \ \ \ & |\boldsymbol y|\le 1,\\
\frac{1}{2}\ln\frac{1}{|\boldsymbol y|}, & |\boldsymbol y|\ge 1.
\end{array}
\right.
\end{equation*}
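One can check directly that $\frac{\partial w}{\partial y_1}$ solves the limiting equation. Indeed, since $\Delta\frac{1}{4}(1-|\boldsymbol y|^2)=-1$ in $\mathbb{R}^2$, $\frac{1}{2}\ln\frac{1}{|\boldsymbol y|}$ is harmonic for $|\boldsymbol y|>1$, and the radial derivatives of the two branches both equal $-\frac{1}{2}$ at $|\boldsymbol y|=1$, the function $w$ satisfies
\begin{equation*}
-\Delta w=\boldsymbol 1_{B_1(\boldsymbol 0)}, \quad \text{in} \ \mathbb{R}^2.
\end{equation*}
Differentiating with respect to $y_1$ and using $\partial_1\boldsymbol 1_{B_1(\boldsymbol 0)}=-\cos\theta\,\boldsymbol\delta_{|\boldsymbol y|=1}$ together with $\frac{\partial w}{\partial y_1}(1,\theta)=-\frac{1}{2}\cos\theta$, we obtain
\begin{equation*}
-\Delta \frac{\partial w}{\partial y_1}=2\frac{\partial w}{\partial y_1}(1,\theta)\boldsymbol \delta_{|\boldsymbol y|=1}, \quad \text{in} \ \mathbb{R}^2,
\end{equation*}
so that $\frac{\partial w}{\partial y_1}$ indeed solves the equation satisfied by $\phi^*$.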
On the other hand, since $\varepsilon_n^2|f_n|\leq 1/n$ in $B_{s/2}(\boldsymbol z)$ and $|\tilde \phi_n|\leq 1$, we know that $\tilde \phi_n$ is bounded in $W^{2,p}({B_{1/4}(\boldsymbol 0)})$. Thus we may assume $\tilde \phi_n\to \phi^*$ in $C^1({B_{1/4}(\boldsymbol 0)})$. Since $\partial_1 \tilde \phi_n (\frac{\boldsymbol p_{\varepsilon_n}-\boldsymbol z}{s})=s\partial_1 \phi_n(\boldsymbol p_{\varepsilon_n})=0$ by \eqref{3-5} and $\frac{\boldsymbol p_{\varepsilon_n}-\boldsymbol z}{s}\to 0$, it holds $\partial_1 \phi^*(0)=0$. This implies $C_1=0$ and hence $\phi^*\equiv 0$.
Therefore, we have proved $\phi_n=o_n(1)$ in $B_{Ls}(\boldsymbol z)$ for any $L>0$ fixed. Then, using the strong maximum principle and an argument similar to that in the proof of Lemma \ref{lem2-2}, we can derive
\begin{equation}\label{3-9}
||\phi_n||_*\le C||\phi_n||_{L^\infty(B_{Ls}(\boldsymbol z))}=o_n(1).
\end{equation}
We now turn to the norm $||\nabla \phi_n||_{L^{p}(B_{Ls}(\boldsymbol z))}$. For any $\tilde\varphi\in C_0^\infty (B_{L}(\boldsymbol 0))$, it holds
\begin{equation}\label{3-10}
\begin{split}
&\left|\int_{D_n}\frac{1}{sy_1+z_1}\cdot\nabla\tilde\phi_n\cdot\nabla\tilde\varphi d\boldsymbol y\right|=\left|2\int_{|\boldsymbol y|=1}\frac{1}{z_1}\tilde\phi_n\tilde\varphi+\langle\tilde f_n,\tilde\varphi\rangle \right|\\
& \ \ \ \ \ \ \ \ \ =o_n(1)\cdot\|\tilde\varphi\|_{W^{1,1}(B_L(\boldsymbol 0))}+o_n(1)\cdot\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}\\
& \ \ \ \ \ \ \ \ \ =o_n(1)\cdot\left(\int_{B_L(\boldsymbol 0)}|\nabla\tilde\varphi|^{p'}\right)^{\frac{1}{p'}}.
\end{split}
\end{equation}
Thus we have
\begin{equation*}
\varepsilon_n^{1-\frac{2}{p}}\|\nabla\phi_n\|_{L^p(B_{Ls}(\boldsymbol z))}\le C||\nabla\tilde\phi_n||_{L^p(B_L(\boldsymbol 0))}=o_n(1).
\end{equation*}
We see that \eqref{3-9} and \eqref{3-10} contradict \eqref{3-8}, and hence the proof of Lemma \ref{lem3-11} is finished.
\end{proof}
Now we are in a position to improve the estimate for the error term $\phi_\varepsilon$.
\begin{lemma}\label{lem3-12}
For $p\in (2,+\infty]$ and $\varepsilon\in (0,\varepsilon_0]$ with $\varepsilon_0>0$ small, it holds
\begin{equation}\label{3-11}
||\phi_\varepsilon||_*+\varepsilon^{1-\frac{2}{p}} ||\nabla \phi_\varepsilon||_{L^{p}(B_{Ls}(\boldsymbol z))}=O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\varepsilon \boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right),
\end{equation}
with $\mathcal W(x)$ defined in \eqref{B-1} of Appendix \ref{appB}, and
$$\boldsymbol\gamma_\varepsilon:=\|\phi_\varepsilon\|_{L^\infty(B_{Ls}(\boldsymbol z))}+s\mathcal W(s).$$
\end{lemma}
\begin{proof}
In view of Lemma \ref{lem3-11}, it is sufficient to verify that
\begin{equation*}
\begin{split}
&\quad\varepsilon^{1-\frac{2}{p}} ||R_\varepsilon(\phi_\varepsilon)||_{W^{-1,p}(B_{Ls}(\boldsymbol z))}+\varepsilon^2||R_\varepsilon(\phi_\varepsilon)||_{L^\infty(B_{s/2}(\boldsymbol z))}\\
&=O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\varepsilon \boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right).
\end{split}
\end{equation*}
Notice that we have
\begin{equation*}
R_\varepsilon(\phi_\varepsilon)\equiv 0,\quad \text{in}\ B_{s/2}(\boldsymbol z).
\end{equation*}
So it remains to estimate $\varepsilon^{1-\frac{2}{p}} ||R_\varepsilon(\phi_\varepsilon)||_{W^{-1,p}(B_{Ls}(\boldsymbol z))}$.
We will make an appropriate scaling, and use $\tilde v(\boldsymbol y)$ to denote $v(s\boldsymbol y+\boldsymbol z)$. For each $\varphi\in C_0^1(B_{Ls}(\boldsymbol z))$, we have
\begin{equation*}
\begin{split}
\langle R_\varepsilon(\phi_\varepsilon),\varphi \rangle&=\frac{s^2}{\varepsilon^2}\int_{B_L(\boldsymbol 0)}(sy_1+z_1)\left(\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}\right)\tilde\varphi d\boldsymbol y\\
& \ \ \ -\frac{2}{z_1}\int_0^{2\pi}\tilde\phi_\varepsilon\tilde\varphi(1,\theta)d\theta.
\end{split}
\end{equation*}
Let $\boldsymbol y_\varepsilon(\theta)=((1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon})\cos\theta, (1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon})\sin\theta)$ be as in the notation of Lemma \ref{B4}. We deduce that
\begin{equation*}
\begin{split}
&\frac{s^2}{\varepsilon^2}\int_{B_L(\boldsymbol 0)}(sy_1+z_1)\left(\boldsymbol1_{\{\psi_\varepsilon-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon\}}-\boldsymbol 1_{\{V_{\boldsymbol z,\varepsilon}>\frac{a}{2\pi}\ln\frac{1}{\varepsilon}\}}\right)\tilde\varphi d\boldsymbol y\\
&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}} z_1 t \tilde \varphi(t,\theta) dt d\theta+O(\varepsilon)\cdot |t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}|^{\frac{1}{q'}}\cdot\|\tilde \varphi\|_{L^q(B_L(\boldsymbol 0))}\\
&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}} z_1 t \tilde \varphi(1,\theta) dt d\theta+\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}} z_1t (\tilde \varphi(t,\theta)-\tilde \varphi(1,\theta)) dt d\theta\\
& \ \ \ +O(\varepsilon)\cdot|t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}|^{\frac{1}{2}+\frac{1}{p}}\cdot\|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}\\
&=I_1+I_2+O_\varepsilon\left(\varepsilon \boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right)\cdot \|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))},
\end{split}
\end{equation*}
where we have used the Sobolev embedding and chosen $q=\frac{2p'}{2-p'}$. It follows from Lemma \ref{lem3-10} and Lemma \ref{B4} that
\begin{equation*}
\begin{split}
I_1&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}}z_1 t \tilde \varphi(1,\theta) dt d\theta\\
&=\frac{2}{z_1}\int_0^{2\pi}\left(\tilde\phi_\varepsilon(\boldsymbol y_\varepsilon(\theta))+O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\|\tilde\phi_\varepsilon\|_{L^\infty(B_L(\boldsymbol 0))}^2\right)\right) \tilde \varphi(1,\theta) d\theta\\
&=\frac{2}{z_1}\int_{|\boldsymbol y|=1} \tilde\phi_\varepsilon\tilde \varphi d\theta+\frac{2}{z_1}\int_0^{2\pi} (\tilde\phi_\varepsilon(\boldsymbol y_\varepsilon(\theta))-\tilde\phi_\varepsilon(1,\theta))\tilde \varphi d\theta\\
&\quad+O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+o_\varepsilon(1)\cdot\|\tilde\phi_\varepsilon\|_{L^\infty(B_L(\boldsymbol 0))}\right) \int_{|\boldsymbol y|=1}\tilde \varphi(1,\theta) d\theta\\
&=\frac{2}{z_1}\int_{|\boldsymbol y|=1} \tilde\phi_\varepsilon\tilde \varphi d\theta +\frac{2}{z_1}\int_0^{2\pi}\int_{1}^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}} \frac{\partial \tilde\phi_\varepsilon(s,\theta)}{\partial s}\tilde \varphi ds d\theta\\
&\quad+O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+o_\varepsilon(1)\cdot\|\tilde\phi_\varepsilon\|_{L^\infty(B_L(\boldsymbol 0))}\right)\int_{|\boldsymbol y|=1}\tilde \varphi(1,\theta) d\theta\\
&=\frac{2}{z_1}\int_{|\boldsymbol y|=1} \tilde\phi_\varepsilon\tilde \varphi d\theta\\
&\quad+O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+o_\varepsilon(1)\cdot\|\tilde\phi_\varepsilon\|_{L^\infty(B_L(\boldsymbol 0))}+o_\varepsilon(1)\cdot\|\nabla\tilde\phi_\varepsilon\|_{L^p(B_L(\boldsymbol 0))}\right)\cdot||\tilde \varphi||_{W^{1,p'}(B_L(\boldsymbol 0))}.
\end{split}
\end{equation*}
Using Lemma \ref{B4}, we can also deduce that
\begin{equation*}
\begin{split}
I_2&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}} z_1t (\tilde \varphi(t,\theta)-\tilde \varphi(1,\theta)) dt d\theta\\
&=\frac{s^2}{\varepsilon^2}\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}}z_1t \int_1^t \frac{\partial \tilde \varphi(s,\theta)}{\partial s} dsdt d\theta\\
&\leq \frac{s^2}{\varepsilon^2}\int_0^{2\pi}z_1|t_\varepsilon(\theta)+t_{\varepsilon,\tilde \phi_\varepsilon}(\theta)|\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}} \left|\frac{\partial \tilde \varphi(s,\theta)}{\partial s} \right|ds d\theta\\
&=O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+O_\varepsilon(1)\cdot\|\tilde\phi_\varepsilon\|_{L^\infty(B_L(\boldsymbol 0))}\right)\int_0^{2\pi}\int_1^{1+t_\varepsilon+t_{\varepsilon,\tilde \phi_\varepsilon}}\left|\frac{\partial \tilde \varphi(s,\theta)}{\partial s} \right|ds d\theta\\
&=o_\varepsilon(1)\cdot O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+||\tilde\phi_\varepsilon||_{L^\infty(B_L(\boldsymbol 0))}\right)\cdot||\tilde \varphi||_{W^{1,p'}(B_L(\boldsymbol 0))}.
\end{split}
\end{equation*}
Combining the above estimates, we arrive at
\begin{equation*}
\begin{split}
&\quad\langle R_\varepsilon(\phi_\varepsilon), \varphi\rangle\\
&=O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\varepsilon\boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right)\cdot||\tilde \varphi||_{W^{1,p'}(B_L(\boldsymbol 0))}\\
&\quad+o_\varepsilon(1)\cdot\left(\|\tilde\phi_\varepsilon\|_{L^\infty(B_L(\boldsymbol 0))}+\|\nabla\tilde\phi_\varepsilon\|_{L^p(B_L(\boldsymbol 0))}\right)\cdot||\tilde \varphi||_{W^{1,p'}(B_L(\boldsymbol 0))},
\end{split}
\end{equation*}
which implies
\begin{equation*}
\begin{split}
&\quad\varepsilon^{1-\frac{2}{p}} ||R_\varepsilon(\phi_\varepsilon)||_{W^{-1,p}(B_{Ls}(\boldsymbol z))}\\
&=O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\varepsilon\boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right)\\
&\quad+o_\varepsilon(1)\cdot\left(\|\phi_\varepsilon\|_*+\varepsilon^{1-\frac{2}{p}}\|\nabla\phi_\varepsilon\|_{L^p(B_{Ls}(\boldsymbol z))}\right).
\end{split}
\end{equation*}
Thus from the above discussion, we finally obtain
\begin{align*}
&\quad||\phi_\varepsilon||_*+\varepsilon^{1-\frac{2}{p}} ||\nabla \phi_\varepsilon||_{L^{p}(B_{Ls}(\boldsymbol z))}\\
&=O_\varepsilon\left(s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\varepsilon \boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right),
\end{align*}
which is exactly the desired estimate.
\end{proof}
With the refined estimate of $\phi_\varepsilon$ in hand, we can improve the estimate for $\tilde{\mathbf\Gamma}_{\varepsilon,\tilde \phi_\varepsilon}$ in Lemma \ref{B4} as follows.
\begin{lemma}\label{lem3-13}
The set
$$\tilde{\mathbf\Gamma}_{\varepsilon,\tilde \phi_\varepsilon}:=\left\{\boldsymbol y \ | \ \psi_\varepsilon(s\boldsymbol y+\boldsymbol z)-\frac{W}{2}(s y_1+ z_1)^2\ln\frac{1}{\varepsilon}\cdot \mathbf{e}_1=\mu_\varepsilon\right\}$$
is a continuous closed convex curve in $\mathbb{R}^2$, whose point at angle $\theta\in (0,2\pi]$ can be written as
\begin{equation*}
\begin{split}
\boldsymbol y_\varepsilon(\theta)&=(1+t_\varepsilon(\theta)+t_{\varepsilon,\tilde \phi_\varepsilon}(\theta))(\cos\theta,\sin\theta)\\
&=(\cos\theta,\sin\theta)+O_\varepsilon\left(s\mathcal W(s)+\varepsilon \boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right)
\end{split}
\end{equation*}
with
$$\boldsymbol\gamma_\varepsilon=\|\phi_\varepsilon\|_{L^\infty(B_{Ls}(\boldsymbol z))}+s\mathcal W(s).$$
\end{lemma}
Using a bootstrap method, we can further improve the estimate for $\phi_\varepsilon$ and $|A_\varepsilon\Delta B_s(\boldsymbol z)|$ to our desired level.
\begin{lemma}\label{lem3-14}
For $p\in (2,+\infty]$, it holds
$$||\phi_\varepsilon||_*+\varepsilon^{1-\frac{2}{p}} ||\nabla \phi_\varepsilon||_{L^{p}(B_{Ls}(\boldsymbol z))}=O(\varepsilon^2|\ln\varepsilon|).$$
Moreover, we have
$$|A_\varepsilon\Delta B_{s_0}(\boldsymbol z)|=O(\varepsilon^4|\ln\varepsilon|),$$
and
$$\mathcal W(s)=O(\varepsilon^2|\ln\varepsilon|).$$
\end{lemma}
\begin{proof}
In the first step, we note that $\mathcal W(s)=O(|\ln\varepsilon|)$ by the definition of $\mathcal W(x)$ in \eqref{B-1}. Hence from \eqref{3-11}, we can deduce
\begin{equation*}
||\phi_\varepsilon||_*+\varepsilon^{1-\frac{2}{p}} ||\nabla \phi_\varepsilon||_{L^{p}(B_{Ls}(\boldsymbol z))}=O(\varepsilon|\ln\varepsilon|).
\end{equation*}
Note that $s_0=(\frac{\kappa}{z_1\pi})^{1/2}\varepsilon$. By the circulation constraint \eqref{3-2} and Lemma \ref{B3}, we have
\begin{equation*}
\begin{split}
\frac{s_0^2}{\varepsilon^2}\cdot z_1\pi&=\frac{s^2}{2\varepsilon^2}\int_0^{2\pi}z_1\left(1+t_\varepsilon(\theta)+t_{\varepsilon,\tilde \phi_\varepsilon}(\theta)\right)^2 d\theta\\
&\quad+\frac{s^3}{3\varepsilon^2}\int_0^{2\pi}\left(1+t_\varepsilon(\theta)+t_{\varepsilon,\tilde \phi_\varepsilon}(\theta)\right)^3\cos\theta d\theta\\
&=\frac{s^2}{\varepsilon^2}\cdot z_1\pi+O_\varepsilon(|t_\varepsilon(\theta)+t_{\varepsilon,\tilde \phi_\varepsilon}(\theta)|).
\end{split}
\end{equation*}
Hence it holds
$$\frac{|s_0-s|}{\varepsilon}=O_\varepsilon\left(||\phi_\varepsilon||_{L^\infty(B_{Ls}(\boldsymbol z))}+s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|\right).$$
Using Lemma \ref{lem3-13}, we then derive
$$|A_\varepsilon\Delta B_{s_0}(\boldsymbol z)|=O(\varepsilon^3|\ln\varepsilon|).$$
In view of Lemma \ref{C4} in Appendix \ref{appC}, it holds
\begin{equation}\label{West}
\begin{split}
\mathcal W(s)&=Wz_1\ln\frac{1}{\varepsilon}-\frac{\kappa}{4\pi}\ln\frac{8z_1}{s_0}+\frac{\kappa}{16\pi}\\
&\quad+O_\varepsilon\left(||\phi_\varepsilon||_{L^\infty(B_{Ls}(\boldsymbol z))}+s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\varepsilon \boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right)\\
&=O(\varepsilon|\ln\varepsilon|).
\end{split}
\end{equation}
So we have improved the estimate for $\mathcal W(s)$ from $O(|\ln\varepsilon|)$ to $O(\varepsilon|\ln\varepsilon|)$.
In the second step, we combine the above estimates with \eqref{3-11} to obtain
\begin{equation*}
\|\phi_\varepsilon\|_{L^\infty(B_{Ls}(\boldsymbol z))}\le ||\phi_\varepsilon||_*=O_\varepsilon\left(\varepsilon^2|\ln\varepsilon|+\varepsilon\|\phi_\varepsilon\|_{L^\infty(B_{Ls}(\boldsymbol z))}^{\frac{1}{2}+\frac{1}{p}}\right), \quad \forall \, p\in (2,+\infty].
\end{equation*}
Now we claim
\begin{equation}\label{phiest}
\|\phi_\varepsilon\|_{L^\infty(B_{Ls}(\boldsymbol z))}=O(\varepsilon^2|\ln\varepsilon|).
\end{equation}
Suppose not. Then there exists a sequence $\{\varepsilon_n\}$ tending to $0$ such that $\|\phi_{\varepsilon_n}\|_{L^\infty(B_{Ls}(\boldsymbol z))}>n\varepsilon_n^2|\ln\varepsilon_n|$. Since it holds
\begin{equation*}
\begin{split}
\varepsilon_n\|\phi_{\varepsilon_n}\|_{L^\infty(B_{Ls}(\boldsymbol z))}^{\frac{1}{2}+\frac{1}{p}}&=\varepsilon_n\left(n\varepsilon_n^2|\ln\varepsilon_n|\right)^{\frac{1}{p}-\frac{1}{2}}\cdot\left(n\varepsilon_n^2|\ln\varepsilon_n|\right)^{\frac{1}{2}-\frac{1}{p}}\|\phi_{\varepsilon_n}\|_{L^\infty(B_{Ls}(\boldsymbol z))}^{\frac{1}{2}+\frac{1}{p}}\\
&\le \varepsilon_n\left(n\varepsilon_n^2|\ln\varepsilon_n|\right)^{\frac{1}{p}-\frac{1}{2}}\|\phi_{\varepsilon_n}\|_{L^\infty(B_{Ls}(\boldsymbol z))},
\end{split}
\end{equation*}
we can choose $p>2$ sufficiently close to $2$ so that $\varepsilon_n\left(n\varepsilon_n^2|\ln\varepsilon_n|\right)^{\frac{1}{p}-\frac{1}{2}}=o_{\varepsilon_n}(1)$. According to \eqref{3-11}, we have
\begin{equation*}
\|\phi_{\varepsilon_n}\|_{L^\infty(B_{Ls}(\boldsymbol z))}=O(\varepsilon_n^2|\ln\varepsilon_n|)+o_{\varepsilon_n}(1)\cdot \|\phi_{\varepsilon_n}\|_{L^\infty(B_{Ls}(\boldsymbol z))},
\end{equation*}
which is a contradiction to $\|\phi_{\varepsilon_n}\|_{L^\infty(B_{Ls}(\boldsymbol z))}>n\varepsilon_n^2|\ln\varepsilon_n|$, and verifies \eqref{phiest}.
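The smallness of the prefactor used above can be checked by counting powers: for any fixed $p>2$,
\begin{equation*}
\varepsilon_n\left(n\varepsilon_n^2|\ln\varepsilon_n|\right)^{\frac{1}{p}-\frac{1}{2}}=n^{\frac{1}{p}-\frac{1}{2}}\cdot\varepsilon_n^{\frac{2}{p}}\cdot|\ln\varepsilon_n|^{\frac{1}{p}-\frac{1}{2}}=o_{\varepsilon_n}(1),
\end{equation*}
since $\frac{2}{p}>0$ while $\frac{1}{p}-\frac{1}{2}<0$.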
In the last step, we use \eqref{3-11} again, and improve the estimate for $\phi_\varepsilon$ to
\begin{equation*}
||\phi_\varepsilon||_*+\varepsilon^{1-\frac{2}{p}} ||\nabla \phi_\varepsilon||_{L^{p}(B_{Ls}(\boldsymbol z))}=O\left(\varepsilon^2|\ln\varepsilon|+\varepsilon(\varepsilon^2|\ln\varepsilon|)^{\frac{1}{2}+\frac{1}{p}}\right)=O(\varepsilon^2|\ln\varepsilon|).
\end{equation*}
Note that we have obtained $\mathcal W(s)=O(\varepsilon|\ln\varepsilon|)$ in \eqref{West}. Proceeding as in the first step, we deduce
$$|A_\varepsilon\Delta B_{s_0}(\boldsymbol z)|=O(\varepsilon^4|\ln\varepsilon|),$$
and
$$\mathcal W(s)=O(\varepsilon^2|\ln\varepsilon|).$$
Hence the proof is complete.
\end{proof}
\bigskip
Now we can obtain the Kelvin--Hicks formula in Proposition \ref{prop3-2}.
\noindent{\bf Proof of Proposition \ref{prop3-2}:}
It holds $|A_\varepsilon\Delta B_{s_0}(\boldsymbol z)|=O(\varepsilon^4|\ln\varepsilon|)$ by Lemma \ref{lem3-14}. Using Lemma \ref{C4}, we obtain
\begin{equation}\label{3-12}
Wz_1\ln\frac{1}{\varepsilon}-\frac{\kappa}{4\pi}\ln\frac{8z_1}{s_0}+\frac{\kappa}{16\pi}=O(\varepsilon^2|\ln\varepsilon|).
\end{equation}
On the other hand, we have
\begin{equation}\label{3-13}
\frac{|s_0-s|}{\varepsilon}=O_\varepsilon\left(||\phi_\varepsilon||_{L^\infty(B_{Ls}(\boldsymbol z))}+s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|\right)=O(\varepsilon^2|\ln\varepsilon|),
\end{equation}
and
$$\frac{|s-\sigma_\varepsilon|}{\varepsilon}=O_\varepsilon\left(||\phi_\varepsilon||_{L^\infty(B_{Ls}(\boldsymbol z))}+s\mathcal W(s)+\varepsilon^2|\ln\varepsilon|+\varepsilon \boldsymbol \gamma_\varepsilon^{\frac{1}{2}+\frac{1}{p}}\right)=O(\varepsilon^2|\ln\varepsilon|)$$
by Lemma \ref{lem3-13}. Thus we have verified Proposition \ref{prop3-2}.
\qed
\bigskip
\subsection{The uniqueness}
To show the uniqueness of $\psi_\varepsilon$ satisfying \eqref{3-1} and \eqref{3-2}, we first refine the estimate for the cross-section $A_\varepsilon$. Notice that the value of $s$ depends on $\varepsilon$ and $z_1$ by \eqref{3-6}. The following result is a direct consequence of Lemma \ref{lem3-14} and Proposition \ref{prop3-2}.
\begin{lemma}\label{lem3-15}
For each $\varepsilon\in(0,\varepsilon_0]$ with $\varepsilon_0>0$ sufficiently small, let $x^*$ be the unique zero of
$$ g(x)=Wx\ln\frac{1}{\varepsilon}-\frac{\kappa}{4\pi}\left(\ln\frac{8x}{s_0(x)}-\frac{1}{4}\right), \quad x>0,$$
with $s_0(x)=(\frac{\kappa}{\pi x})^{1/2}\varepsilon$. Then we have
\begin{equation*}
|z_1-x^*|=O(\varepsilon^2),
\end{equation*}
and
\begin{equation*}
s(z_1)=s(x^*)+O(\varepsilon^3|\ln\varepsilon|).
\end{equation*}
\end{lemma}
\begin{proof}
Direct computation yields $g'(x^*)=(W+o_\varepsilon(1))\cdot|\ln\varepsilon|$. By \eqref{3-12}, we have
\begin{equation*}
|z_1-x^*|=O(\varepsilon^2).
\end{equation*}
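The direct computation of $g'$ can be spelled out as follows. Since $s_0(x)=(\frac{\kappa}{\pi x})^{1/2}\varepsilon$, we have
\begin{equation*}
g(x)=Wx\ln\frac{1}{\varepsilon}-\frac{\kappa}{4\pi}\left(\ln 8+\frac{3}{2}\ln x-\frac{1}{2}\ln\frac{\kappa}{\pi}+\ln\frac{1}{\varepsilon}-\frac{1}{4}\right),
\end{equation*}
and hence
\begin{equation*}
g'(x)=W\ln\frac{1}{\varepsilon}-\frac{3\kappa}{8\pi x}.
\end{equation*}
Since $g(x^*)=0$ with $x^*$ bounded away from $0$ forces $x^*=\frac{\kappa}{4\pi W}+o_\varepsilon(1)$, the second term above is $O_\varepsilon(1)$, which gives $g'(x^*)=(W+o_\varepsilon(1))\cdot|\ln\varepsilon|$.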
To derive the estimate for $s$, we can use the definition $s_0(x)=(\frac{\kappa}{\pi x})^{1/2}\varepsilon$ and the above estimate for $z_1$ to obtain
$$s_0(z_1)=s_0(x^*)+O(\varepsilon^3).$$
Since $|s(x)-s_0(x)|=O(\varepsilon^3|\ln\varepsilon|)$ from \eqref{3-13}, we then conclude
\begin{equation*}
s(z_1)=s(x^*)+O(\varepsilon^3|\ln\varepsilon|)
\end{equation*}
by the triangle inequality.
\end{proof}
Suppose, on the contrary, that there are two different solutions $\psi_\varepsilon^{(1)}$ and $\psi_\varepsilon^{(2)}$ that are even symmetric with respect to the $x_1$-axis and solve \eqref{3-1} and \eqref{3-2}. Define
\begin{equation*}
\Theta_\varepsilon(\boldsymbol x):=\frac{\psi_\varepsilon^{(1)}(\boldsymbol x)-\psi_\varepsilon^{(2)}(\boldsymbol x)}{||\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}||_{L^\infty(\mathbb R^2_+)}}.
\end{equation*}
Then, $\Theta_\varepsilon$ satisfies $||\Theta_\varepsilon||_{L^\infty(\mathbb R^2_+)}=1$ and
\begin{equation*}
\begin{cases}
-\varepsilon^2x_1{\Delta^*}\Theta_\varepsilon=f_\varepsilon(\boldsymbol x), & \text{in} \ \mathbb R^2_+,
\\
\Theta_\varepsilon=0, & \text{on} \ x_1=0,
\\
\Theta_\varepsilon, \ |\nabla\Theta_\varepsilon|/x_1\to0, &\text{as} \ |\boldsymbol x |\to \infty,
\end{cases}
\end{equation*}
where
\begin{equation*}
f_\varepsilon(\boldsymbol x)=\frac{x_1\left(\boldsymbol 1_{\{\psi_\varepsilon^{(1)}-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon^{(1)}\}}-\boldsymbol 1_{\{\psi_\varepsilon^{(2)}-\frac{W}{2}x_1^2\ln\frac{1}{\varepsilon}>\mu_\varepsilon^{(2)}\}}\right)}{\varepsilon^2||\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}||_{L^\infty(\mathbb R^2_+)}}.
\end{equation*}
We see that $f_\varepsilon(\boldsymbol x)=0$ in $\mathbb R^2_+\setminus B_{Ls^{(1)}}(\boldsymbol z^{(1)})$ for some large $L>0$ due to Lemma \ref{lem3-15}.
In the following, we will obtain a series of estimates for $\Theta_\varepsilon$ and $f_\varepsilon$, and then derive a contradiction by a local Pohozaev identity whenever $\psi_\varepsilon^{(1)} \not\equiv \psi_\varepsilon^{(2)}$. For simplicity, we always use $|\,\cdot\,|_\infty$ to denote $||\,\cdot\,||_{L^\infty(\mathbb R^2_+)}$, and abbreviate the parameters $s^{(1)}$ as $s$ and $\boldsymbol z^{(1)}$ as $\boldsymbol z$.
\begin{lemma}\label{lem3-16}
For $p\in (2,\infty]$ and any large $L>0$, it holds
$$||s^2 f_\varepsilon(s \boldsymbol y+\boldsymbol z)||_{W^{-1,p}(B_L(\boldsymbol 0))}=O_\varepsilon(1).$$
Moreover, as $\varepsilon\to 0$, for all $\tilde\varphi\in C_0^\infty(\mathbb{R}^2)$ it holds
\begin{equation*}
\int_{\mathbb{R}^2} s^2f_\varepsilon(s\boldsymbol y+\boldsymbol z)\tilde\varphi d\boldsymbol y= \frac{2}{z_1}\int_{|\boldsymbol y|=1} \left(b_\varepsilon\cdot\frac{\partial w}{\partial y_1}+O(\varepsilon)\right) \tilde\varphi,
\end{equation*}
where $b_\varepsilon$ is bounded independently of $\varepsilon$, and $w$ is defined by
\begin{equation*}
w(\boldsymbol y)=\left\{
\begin{array}{lll}
\frac{1}{4}(1-|\boldsymbol y|^2), \ \ \ \ \ & |\boldsymbol y|\le 1,\\
\frac{1}{2}\ln\frac{1}{|\boldsymbol y|}, & |\boldsymbol y|\ge 1.
\end{array}
\right.
\end{equation*}
\end{lemma}
\begin{proof}
Let
$$\tilde{\mathbf\Gamma}_\varepsilon^{(i)}:=\left\{\boldsymbol y\,\,\, | \,\, \psi_\varepsilon^{(i)}(s\boldsymbol y+\boldsymbol z^{(i)})-\frac{W}{2}(s y_1+ z_1^{(i)})^2\ln\frac{1}{\varepsilon}\cdot \boldsymbol{e}_1=\mu_\varepsilon^{(i)}\right\}, \ \ \ i=1,2.$$
We take
$$\boldsymbol y_\varepsilon^{(1)}=\left(1+ t_\varepsilon^{(1)}(\theta)\right)(\cos\theta,\sin\theta)\in \tilde{\mathbf \Gamma}_\varepsilon^{(1)}$$
with $| t_\varepsilon^{(1)}(\theta)|=O(\varepsilon^2|\ln\varepsilon|)$ by Lemma \ref{lem3-14}. Similarly, there is a $t_\varepsilon^{(2)}$ satisfying $| t_\varepsilon^{(2)}(\theta)|=O(\varepsilon^2|\ln\varepsilon|)$ such that
$$\boldsymbol y_\varepsilon^{(2)}=\left(1+ t_\varepsilon^{(2)}(\theta)\right)(\cos\theta,\sin\theta)\in \tilde{\mathbf \Gamma}_\varepsilon^{(2)}.$$
In the following we will take $\boldsymbol z^{(1)}$ and $\boldsymbol z^{(2)}$ to be the same point $\boldsymbol z=\boldsymbol z^{(1)}$. The cost is some loss in the estimate of $t_\varepsilon^{(2)}(\theta)$: since $|z_1^{(i)}-x^*|=O(\varepsilon^2)$ by Lemma \ref{lem3-15}, after letting $\boldsymbol z^{(2)}$ coincide with $\boldsymbol z^{(1)}$ we only have
$$| t_\varepsilon^{(2)}(\theta)|=O(\varepsilon).$$
Using the definition of $\tilde{\mathbf\Gamma}_\varepsilon^{(i)}$ and the estimate
$$\mathcal W(s)=O(\varepsilon^2|\ln\varepsilon|)$$
obtained from Lemma \ref{lem3-14}, we have
\begin{equation*}
\begin{split}
&\quad \psi_\varepsilon^{(1)}\left(s\boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)-\psi_\varepsilon^{(2)}\left(s \boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)\\
&=\psi_\varepsilon^{(1)}\left(s\boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)-\psi_\varepsilon^{(1)}\left(s \boldsymbol y_\varepsilon^{(1)}+\boldsymbol z\right)+\psi_\varepsilon^{(1)}\left(s\boldsymbol y_\varepsilon^{(1)}+\boldsymbol z\right)-\psi_\varepsilon^{(2)}\left(s \boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)\\
&=\psi_\varepsilon^{(1)}\left(s\boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)-\psi_\varepsilon^{(1)}\left(s \boldsymbol y_\varepsilon^{(1)}+\boldsymbol z\right)-\left(\mu_\varepsilon^{(2)}-\mu_\varepsilon^{(1)}\right)\\
&\quad-W\left(sy_{1,\varepsilon}^{(2)}+z_1\right)^2\ln\frac{1}{\varepsilon}+W\left(sy_{1,\varepsilon}^{(1)}+z_1\right)^2\ln\frac{1}{\varepsilon}\\
&= (-s\mathcal N+O(\varepsilon^2|\ln\varepsilon|))\left( t_\varepsilon^{(2)}(\theta)- t_\varepsilon^{(1)}(\theta)\right)-\left(\mu_\varepsilon^{(2)}-\mu_\varepsilon^{(1)}\right),
\end{split}
\end{equation*}
with
$$\mathcal N=\frac{s}{2\varepsilon^2}\cdot z_1^2$$
being, by \eqref{3-7}, the value of $|\nabla V_{\boldsymbol z,\varepsilon}|$ at $|\boldsymbol x-\boldsymbol z|=s$. Thus it holds
\begin{equation}\label{3-14}
\begin{split}
t_\varepsilon^{(2)}(\theta)- t_\varepsilon^{(1)}(\theta)&=\left(-s\mathcal N+O(\varepsilon^2|\ln\varepsilon|)\right)^{-1}\\
&\quad\times\left(\psi_{\varepsilon}^{(1)}\left(s\boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)-\psi_{\varepsilon}^{(2)}\left(s \boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)+\left(\mu_\varepsilon^{(2)}-\mu_\varepsilon^{(1)}\right)\right).
\end{split}
\end{equation}
On the other hand, the circulation constraint \eqref{3-2} yields
\begin{equation*}
\begin{split}
\kappa&=\frac{s^2}{2\varepsilon^2}\int_0^{2\pi}z_1\left(1+t_\varepsilon^{(1)}(\theta)\right)^2 d\theta+\frac{s^3}{3\varepsilon^2}\int_0^{2\pi}\left(1+ t_\varepsilon^{(1)}(\theta)\right)^3\cos\theta d\theta\\
&=\frac{s^2}{2\varepsilon^2}\int_0^{2\pi}z_1\left(1+t_\varepsilon^{(2)}(\theta)\right)^2 d\theta+\frac{s^3}{3\varepsilon^2}\int_0^{2\pi}\left(1+t_\varepsilon^{(2)}(\theta)\right)^3\cos\theta d\theta,
\end{split}
\end{equation*}
and hence
$$\int_0^{2\pi} z_1\left( t_\varepsilon^{(2)}(\theta)- t_\varepsilon^{(1)}(\theta)\right) \left(1+\frac{1}{2} t_\varepsilon^{(1)}(\theta)+\frac{1}{2}t_\varepsilon^{(2)}(\theta)+O(\varepsilon)\right) d\theta=0.$$
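The identity above comes from the elementary factorization
\begin{equation*}
\left(1+t_\varepsilon^{(2)}(\theta)\right)^2-\left(1+t_\varepsilon^{(1)}(\theta)\right)^2=\left(t_\varepsilon^{(2)}(\theta)-t_\varepsilon^{(1)}(\theta)\right)\left(2+t_\varepsilon^{(1)}(\theta)+t_\varepsilon^{(2)}(\theta)\right)
\end{equation*}
applied to the quadratic terms, while the difference of the cubic terms carries an extra factor $s=O(\varepsilon)$ and is absorbed into the $O(\varepsilon)$ remainder.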
It follows that
\begin{equation*}
\begin{split}
&\int_0^{2\pi}(s\mathcal N+O(\varepsilon^2|\ln\varepsilon|))\left(\psi_\varepsilon^{(1)}\left(s\boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)-\psi_\varepsilon^{(2)}\left(s \boldsymbol y_\varepsilon^{(2)}+\boldsymbol z\right)\right) \left(2+t_\varepsilon^{(1)}(\theta)+ t_\varepsilon^{(2)}(\theta)+O(\varepsilon)\right) d\theta\\
&=\left(\mu_\varepsilon^{(1)}-\mu_\varepsilon^{(2)}\right)\int_0^{2\pi}(s\mathcal N+O(\varepsilon^2|\ln\varepsilon|))\left(2+ t_\varepsilon^{(1)}(\theta)+t_\varepsilon^{(2)}(\theta)+O(\varepsilon)\right) d\theta,
\end{split}
\end{equation*}
which implies
\begin{equation*}
\frac{|\mu_\varepsilon^{(1)}-\mu_\varepsilon^{(2)}|}{|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}=O_\varepsilon(1),
\end{equation*}
and
\begin{equation*}
\frac{|t_\varepsilon^{(2)}(\theta)-t_\varepsilon^{(1)}(\theta)|}{|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}=O_\varepsilon(1)
\end{equation*}
by \eqref{3-14}.
We then define the normalized difference of $\psi_\varepsilon^{(i)}-\mu_\varepsilon^{(i)}$ as
\begin{equation*}
\Theta_{\varepsilon,\mu}:=\frac{\left(\psi_\varepsilon^{(1)}-\mu_\varepsilon^{(1)}\right)-\left(\psi_\varepsilon^{(2)}-\mu_\varepsilon^{(2)}\right)}{|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}.
\end{equation*}
Recall that for a generic function $v$ we denote $\tilde v(\boldsymbol y)=v(s\boldsymbol y+\boldsymbol z)$, and set $D_s=\{\boldsymbol y \ | \ s\boldsymbol y+\boldsymbol z\in\mathbb R^2_+\}$. Then $\tilde\Theta_{\varepsilon,\mu}$ satisfies the equation
\begin{equation*}
-\text{div}\left(\frac{\nabla\tilde\Theta_{\varepsilon,\mu}}{sy_1+z_1}\right)=\tilde f_\varepsilon(\boldsymbol y), \quad \text{in} \ D_s.
\end{equation*}
For any $\varphi \in C_0^\infty(B_{Ls}(\boldsymbol z))$ and $p'\in [1,2)$, we have
\begin{equation*}
\begin{split}
&\quad \int_{\mathbb{R}^2} s^2 f_\varepsilon (s \boldsymbol y+ \boldsymbol z) \tilde\varphi d\boldsymbol y\\
&=-\frac{s^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_0^{2\pi} \int_{1+t_\varepsilon^{(1)}}^{1+ t_\varepsilon^{(2)}} (z_1+st\cos\theta)t\tilde\varphi(t,\theta) dt d\theta\\
&=-\frac{s^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{|\boldsymbol y|=1} z_1 \tilde\varphi(\boldsymbol y)( t_\varepsilon^{(2)}(\theta)- t_\varepsilon^{(1)}(\theta)) \left(1+\frac{1}{2} t_\varepsilon^{(1)}(\theta)+\frac{1}{2} t_\varepsilon^{(2)}(\theta)\right) d\boldsymbol y\\
&\quad -\frac{s^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{|\boldsymbol y|=1} \int_{1+ t_\varepsilon^{(1)}}^{1+t_\varepsilon^{(2)}} (z_1+t\cos\theta)t\left[\tilde\varphi(t\boldsymbol y)-\tilde\varphi(\boldsymbol y)\right] dt d\boldsymbol y\\
&\quad+o_\varepsilon\left(\int_{|\boldsymbol y|=1} |\tilde\varphi(\boldsymbol y)| d\boldsymbol y\right)\\
&= -\frac{s^2(1+o_\varepsilon(1))}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{|\boldsymbol y|=1} \int_{1+ t_\varepsilon^{(1)}}^{1+ t_\varepsilon^{(2)}}\int_0^1 z_1t(t-1)\nabla\tilde\varphi((1+\sigma (t-1))\boldsymbol y)\cdot \boldsymbol y d\sigma dt d\boldsymbol y\\
&\quad+O_\varepsilon\left(\int_{|\boldsymbol y|=1} |\tilde\varphi(\boldsymbol y)| d\boldsymbol y\right)\\
&=o_\varepsilon\left(\frac{||\nabla\tilde\varphi||_{L^1(B_2(\boldsymbol 0))}}{|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{1+ t_\varepsilon^{(1)}}^{1+ t_\varepsilon^{(2)}} tdt \right)+O_\varepsilon\left(\int_{|\boldsymbol y|=1} |\tilde\varphi(\boldsymbol y)| d\boldsymbol y\right)\\
&=O_\varepsilon\left(\int_{|\boldsymbol y|=1} |\tilde\varphi(\boldsymbol y)| d\boldsymbol y+||\nabla \tilde\varphi||_{L^1(B_2(\boldsymbol 0))}\right) \\
&=O_\varepsilon(||\tilde\varphi||_{W^{1,p'}(B_2(\boldsymbol 0))}).
\end{split}
\end{equation*}
So for $p\in(2,+\infty]$, we obtain
$$||s^2f_\varepsilon(s \boldsymbol y+\boldsymbol z)||_{W^{-1,p}(B_L(\boldsymbol 0))}=O_\varepsilon(1).$$
By standard elliptic estimates, $\tilde\Theta_{\varepsilon,\mu}$ is bounded in $W^{1,p}_{\text{loc}}(\mathbb{R}^2)$ for $p\in[2,+\infty)$ and hence in $C^\alpha_{\text{loc}}(\mathbb{R}^2)$. For further use, we let
\begin{equation}\label{projection}
\tilde\Theta_{\varepsilon,\mu}^*:=\tilde\Theta_{\varepsilon,\mu}-b_\varepsilon\frac{\partial w}{\partial y_1}
\end{equation}
with $w$ defined in the statement of the lemma, and
\begin{equation*}
b_\varepsilon=\left(\int_{B_L(\boldsymbol 0)} \tilde\Theta_{\varepsilon,\mu}(\boldsymbol y)\cdot(-\Delta)\frac{\partial w}{\partial y_1} d\boldsymbol y\right)\left(\int_{B_L(\boldsymbol 0)}\frac{\partial w}{\partial y_1}\cdot(-\Delta)\frac{\partial w}{\partial y_1} d\boldsymbol y\right)^{-1}
\end{equation*}
the projection coefficient, which is bounded independently of $\varepsilon$. Then for any $\varphi \in C_0^\infty(B_{Ls}(\boldsymbol z))$, $\tilde\Theta_{\varepsilon,\mu}^*$ satisfies
\begin{equation}\label{3-15}
\begin{split}
&\quad\int_{B_L(\boldsymbol 0)} \frac{1}{sy_1+z_1}\cdot\nabla \tilde\Theta_{\varepsilon,\mu}^*\cdot\nabla\tilde\varphi d\boldsymbol y-\frac{2}{z_1}\int_{|\boldsymbol y|=1}\tilde\Theta_{\varepsilon,\mu}^*\tilde\varphi\\
&=-b_\varepsilon\left(\int_{B_L(\boldsymbol 0)} \frac{1}{sy_1+z_1}\cdot\nabla\left(\frac{\partial w}{\partial y_1}\right)\cdot\nabla\tilde\varphi d\boldsymbol y-\frac{2}{z_1}\int_{|\boldsymbol y|=1}\frac{\partial w}{\partial y_1}\tilde\varphi\right)\\
&\quad+\left(\int_{B_L(\boldsymbol 0)} s^2\tilde f_\varepsilon\tilde\varphi d\boldsymbol y-\frac{2}{z_1}\int_{|\boldsymbol y|=1}\tilde\Theta_{\varepsilon,\mu}\tilde\varphi\right)\\
&=I_1+I_2.
\end{split}
\end{equation}
Since the kernel of
\begin{equation*}
\mathbb L^*v=-\Delta v-2v(1,\theta)\boldsymbol{\delta}_{|\boldsymbol y|=1}, \ \ \ \text{in} \ \mathbb R^2
\end{equation*}
is spanned by
$$\left\{\frac{\partial w}{\partial y_1},\frac{\partial w}{\partial y_2}\right\},$$
we deduce $I_1=O(\varepsilon)\cdot \|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}$. For the term $I_2$, using \eqref{3-14} and the estimate $| t_\varepsilon^{(2)}(\theta)|=O(\varepsilon)$, we have
\begin{equation*}
\begin{split}
I_2&=-\frac{s^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_0^{2\pi} \int_{1+ t_\varepsilon^{(1)}}^{1+ t_\varepsilon^{(2)}} (z_1+t\cos\theta)t\tilde\varphi(t,\theta) dt d\theta-\frac{2}{z_1}\int_{|\boldsymbol y|=1}\tilde\Theta_{\varepsilon,\mu}\tilde\varphi\\
&=-\frac{s^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_0^{2\pi} \int_{1+ t_\varepsilon^{(1)}}^{1+ t_\varepsilon^{(2)}} (z_1+t\cos\theta)t(\tilde\varphi(t,\theta)-\tilde\varphi(1,\theta)) dt d\theta\\
&\quad -\frac{s^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_0^{2\pi} \int_{1+ t_\varepsilon^{(1)}}^{1+ t_\varepsilon^{(2)}} (z_1+t\cos\theta)t\tilde\varphi(1,\theta) dt d\theta-\frac{2}{z_1}\int_{|\boldsymbol y|=1}\tilde\Theta_{\varepsilon,\mu}\tilde\varphi\\
&=-\frac{s^2(1+o_\varepsilon(1))}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{|\boldsymbol y|=1}\int_{1+ t_\varepsilon^{(1)}}^{1+ t_\varepsilon^{(2)}}\int_0^1 z_1 t(t-1)\nabla\tilde\varphi((1+\sigma (t-1))\boldsymbol y)\cdot \boldsymbol y d\sigma dt d\boldsymbol y\\
&\quad+\left(\frac{2}{z_1}+O(\varepsilon^2|\ln\varepsilon|)\right)\int_{|\boldsymbol y|=1}\tilde\Theta_{\varepsilon,\mu}(1+O_\varepsilon( t_\varepsilon^{(2)}))\tilde\varphi\\
&\quad-\frac{2}{z_1}\int_{|\boldsymbol y|=1}\tilde\Theta_{\varepsilon,\mu}\tilde\varphi+O(\varepsilon)\cdot \|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}\\
&=O(\varepsilon)\cdot \|\tilde\varphi\|_{W^{1,p'}(B_L(\boldsymbol 0))}.
\end{split}
\end{equation*}
Actually, we can regard the left-hand side of \eqref{3-15} as the weak form of the linear operator
$$\mathbb L^*_sv=-\text{div}\left(\frac{\nabla v}{sy_1+z_1}\right)-\frac{2}{z_1}v(1,\theta)\boldsymbol{\delta}_{|\boldsymbol y|=1}$$
acting on $\tilde\Theta_{\varepsilon,\mu}^*$. Since both $\tilde\Theta_{\varepsilon,\mu}$ and $\tilde\Theta_{\varepsilon,\mu}^*$ are even with respect to the $x_1$-axis, the kernel of $\mathbb L^*_s$ is then approximated by $\partial w/\partial y_1$. Consequently, if a function $v^*\in W^{-1,p}(B_L(\boldsymbol 0))$ with $p\in(2,+\infty]$ satisfies the orthogonality condition
$$\int_{B_L(\boldsymbol 0)} v^*\cdot(-\Delta)\frac{\partial w}{\partial y_1} d\boldsymbol y=0,$$
then the following local coercive estimate holds:
$$\|v^*\|_{L^\infty(B_L(\boldsymbol 0))}+\|\nabla v^*\|_{L^p(B_{L}(\boldsymbol 0))}\le C\|\mathbb L^*_sv^*\|_{W^{-1,p}(B_L(\boldsymbol 0))}, \quad \forall \, p\in(2,+\infty],$$
which is verified in the proof of Lemma \ref{lem2-2}. Since $\tilde\Theta_{\varepsilon,\mu}^*$ satisfies the orthogonality condition by the projection \eqref{projection}, we deduce from the estimates for $I_1$ and $I_2$ that
\begin{equation*}
\|\tilde\Theta_{\varepsilon,\mu}^*\|_{L^\infty(B_L(\boldsymbol 0))}+\|\nabla\tilde\Theta_{\varepsilon,\mu}^*\|_{L^p(B_{L}(\boldsymbol 0))}=O(\varepsilon), \quad \forall \, p\in(2,+\infty].
\end{equation*}
Now we arrive at a conclusion: by the definition of $\tilde\Theta_{\varepsilon,\mu}^*$ in \eqref{projection}, for each $p\in(2,+\infty]$, we have
\begin{equation*}
\tilde\Theta_{\varepsilon,\mu}=b_\varepsilon\frac{\partial w}{\partial y_1}+O(\varepsilon), \quad \text{in} \ W^{1,p}(B_{L}(\boldsymbol 0)),
\end{equation*}
and for all $\tilde\varphi\in C_0^\infty(\mathbb{R}^2)$, we have
\begin{equation*}
\int_{\mathbb{R}^2} s^2f_\varepsilon(s\boldsymbol y+\boldsymbol z)\tilde\varphi d\boldsymbol y= \frac{2}{z_1}\int_{|\boldsymbol y|=1} \left(b_\varepsilon\cdot\frac{\partial w}{\partial y_1}+O(\varepsilon)\right) \tilde\varphi,
\end{equation*}
where $b_\varepsilon$ is bounded independently of $\varepsilon$. This completes the proof of Lemma \ref{lem3-16}.
\end{proof}
\bigskip
To make use of the local Pohozaev identity in Appendix \ref{appC} and obtain a contradiction, we let
\begin{equation*}
\xi_\varepsilon(\boldsymbol x):=\frac{\psi_{1,\varepsilon}^{(1)}(\boldsymbol x)-\psi_{1,\varepsilon}^{(2)}(\boldsymbol x)}{|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}
\end{equation*}
be the normalized difference of $\psi_{1,\varepsilon}^{(1)}(\boldsymbol x)$ and $\psi_{1,\varepsilon}^{(2)}(\boldsymbol x)$. Then $\xi_\varepsilon$ has the following integral representation:
\begin{equation}\label{3-16}
\xi_\varepsilon=z_1^2\int_{\mathbb R^2_+} G(\boldsymbol x,\boldsymbol x')\cdot x_1'^{-1}f_\varepsilon(\boldsymbol x')d\boldsymbol x'.
\end{equation}
By the asymptotic estimate for $f_\varepsilon(s\boldsymbol y+\boldsymbol z)$ in Lemma \ref{lem3-16}, it holds
\begin{equation*}
\frac{\psi_{2,\varepsilon}^{(1)}(\boldsymbol x)-\psi_{2,\varepsilon}^{(2)}(\boldsymbol x)}{|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}=\int_{\mathbb R^2_+} H(\boldsymbol x,\boldsymbol x')\cdot x_1'^{-1}f_\varepsilon(\boldsymbol x')d\boldsymbol x'=o_\varepsilon(1).
\end{equation*}
So we see that $\xi_\varepsilon$ is the main part of $\Theta_\varepsilon$, and $||\xi_\varepsilon||_{L^\infty(\mathbb R^2_+)}=1-o_\varepsilon(1)$. To derive a contradiction and obtain uniqueness, it suffices to show that $||\xi_\varepsilon||_{L^\infty(\mathbb R^2_+)}=o_\varepsilon(1)$.
To handle the boundary terms in the local Pohozaev identity, we need the following lemma concerning the behavior of $\xi_\varepsilon$ away from $\boldsymbol z$.
\begin{lemma}\label{lem3-17}
For any large $L>0$, it holds
\begin{equation}\label{3-17}
\xi_\varepsilon(\boldsymbol x)=\mathbf B_\varepsilon\cdot \frac{sz_1^2}{2\pi}\frac{x_1-z_1}{|\boldsymbol x-\boldsymbol z|^2}+\mathbf B_\varepsilon\cdot \frac{sz_1^2}{2\pi}\frac{x_1+z_1}{|\boldsymbol x-\boldsymbol {\bar z}|^2}+\mathbf B_\varepsilon\cdot \frac{sz_1}{2\pi} \ln \frac{|\boldsymbol x-\boldsymbol {\bar z}|}{|\boldsymbol x-\boldsymbol z|}+O(\varepsilon^2),
\end{equation}
in $C^1(\mathbb R^2_+\setminus B_{\delta/2}(\boldsymbol z))$, where $\delta>0$ is the small constant in \eqref{C-1}, and
\begin{equation*}
\mathbf B_\varepsilon:=\frac{1}{s}\int_{B_{2s}(\boldsymbol z)} (x_1-z_1) x_1^{-1}f_\varepsilon(\boldsymbol x) d\boldsymbol x
\end{equation*}
is bounded independently of $\varepsilon$.
\end{lemma}
\begin{proof}
Since $\xi_\varepsilon$ is symmetric with respect to the $x_1$-axis, for $\boldsymbol x\in \mathbb{R}^2_+\setminus B_{\delta/2}(\boldsymbol z)$ we have
\begin{equation*}
\begin{split}
\xi_\varepsilon(\boldsymbol x)&=\frac{z_1^2}{2\pi}\int_{\mathbb{R}^2_+} x_1'^{-1}\ln \left(\frac{|\boldsymbol x-\boldsymbol {\bar x}'|}{|\boldsymbol x-\boldsymbol x'|} \right)f_\varepsilon(\boldsymbol x')d\boldsymbol x'=\frac{z_1^2}{2\pi}\int_{B_{Ls}(\boldsymbol z)} x_1'^{-1}\ln \left(\frac{|\boldsymbol x-\boldsymbol {\bar x}'|}{|\boldsymbol x-\boldsymbol x'|}\right) f_\varepsilon(\boldsymbol x')d\boldsymbol x'\\
&=\frac{z_1}{2\pi}\ln \frac{1}{|\boldsymbol x-\boldsymbol z|} \int_{B_{Ls}(\boldsymbol z)} f_\varepsilon(\boldsymbol x')d\boldsymbol x'+ \frac{z_1^2}{2\pi}\int_{B_{Ls}(\boldsymbol z)}x_1'^{-1}\ln \left(\frac{|\boldsymbol x-\boldsymbol z|}{|\boldsymbol x-\boldsymbol x'|}\right) f_\varepsilon(\boldsymbol x')d\boldsymbol x'\\
&\quad-\frac{z_1}{2\pi}\ln \frac{1}{|\boldsymbol x-\boldsymbol {\bar z}|} \int_{B_{Ls}(\boldsymbol z)} f_\varepsilon(\boldsymbol x')d\boldsymbol x'-\frac{z_1^2}{2\pi}\int_{B_{Ls}(\boldsymbol z)}x_1'^{-1}\ln \left(\frac{|\boldsymbol x-\boldsymbol {\bar z}'|}{|\boldsymbol x-\boldsymbol {\bar x}'|} \right)f_\varepsilon(\boldsymbol x')d\boldsymbol x'\\
&\quad-\frac{z_1}{2\pi} \ln \frac{|\boldsymbol x-\boldsymbol {\bar z}|}{|\boldsymbol x-\boldsymbol z|} \int_{B_{Ls}(\boldsymbol z)}(x_1-z_1) x_1^{-1}f_\varepsilon(\boldsymbol x)d\boldsymbol x\\
&=-\frac{z_1^2}{4\pi}\int_{B_{Ls}(\boldsymbol z)}x_1'^{-1}\ln\left(1+ \frac{2(\boldsymbol x-\boldsymbol z)\cdot (\boldsymbol z-\boldsymbol x')}{|\boldsymbol x-\boldsymbol z|^2} +\frac{ |\boldsymbol z-\boldsymbol x'|^2}{|\boldsymbol x-\boldsymbol z|^2}\right)f_\varepsilon(\boldsymbol x')d\boldsymbol x'\\
&\quad +\frac{z_1^2}{4\pi}\int_{B_{Ls}(\boldsymbol z)}x_1'^{-1}\ln\left(1+ \frac{2(\boldsymbol x-\boldsymbol {\bar z})\cdot (\boldsymbol {\bar z}-\boldsymbol {\bar x}')}{|\boldsymbol x-\boldsymbol {\bar z}|^2} +\frac{ |\boldsymbol {\bar z}-\boldsymbol {\bar x}'|^2}{|\boldsymbol x-\boldsymbol {\bar z}|^2}\right)f_\varepsilon(\boldsymbol x')d\boldsymbol x'\\
&\quad-\frac{z_1}{2\pi} \ln \frac{|\boldsymbol x-\boldsymbol {\bar z}|}{|\boldsymbol x-\boldsymbol z|} \int_{B_{Ls}(\boldsymbol z)}(x_1-z_1) x_1^{-1}f_\varepsilon(\boldsymbol x)d\boldsymbol x\\
&=\mathbf B_\varepsilon\cdot \frac{sz_1^2}{2\pi}\frac{x_1-z_1}{|\boldsymbol x-\boldsymbol z|^2}+\mathbf B_\varepsilon\cdot \frac{sz_1^2}{2\pi}\frac{x_1+z_1}{|\boldsymbol x-\boldsymbol {\bar z}|^2}+\mathbf B_\varepsilon\cdot \frac{sz_1}{2\pi} \ln \frac{|\boldsymbol x-\boldsymbol {\bar z}|}{|\boldsymbol x-\boldsymbol z|}+O(\varepsilon^2).
\end{split}
\end{equation*}
Moreover, $\mathbf B_\varepsilon$ is bounded independently of $\varepsilon$ since $||s^2 f_\varepsilon(s \boldsymbol y+\boldsymbol z)||_{W^{-1,p}(B_L(\boldsymbol 0))}=O_\varepsilon(1)$ for $p\in [2,\infty)$. Then we can verify \eqref{3-17} in $C^1(\mathbb R^2_+\setminus B_{\delta/2}(\boldsymbol z))$ by the same argument.
\end{proof}
If we apply \eqref{C-1} in Appendix \ref{appC} to $\psi_{1,\varepsilon}^{(1)}$ and $\psi_{1,\varepsilon}^{(2)}$ separately and calculate their difference, we can obtain the following local Pohozaev identity:
\begin{equation}\label{3-18}
\begin{split}
&\quad-\int_{\partial B_\delta(\boldsymbol z)}\frac{\partial\xi_\varepsilon}{\partial\nu}\frac{\partial\psi_{1,\varepsilon}^{(1)}}{\partial x_1}dS-\int_{\partial B_\delta(\boldsymbol z)}\frac{\partial \psi_{1,\varepsilon}^{(2)}}{\partial\nu }\frac{\partial\xi_\varepsilon}{\partial x_1}dS+\frac{1}{2}\int_{\partial B_\delta(\boldsymbol z)}\langle\nabla(\psi_{1,\varepsilon}^{(1)}+\psi_{1,\varepsilon}^{(2)}),\nabla\xi_\varepsilon\rangle\nu_1dS\\
&=-\frac{z_1^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{B_\delta(\boldsymbol z)} \left(\partial_1\psi_{2,\varepsilon}^{(1)}\cdot\boldsymbol 1_{A_\varepsilon^{(1)}}-\partial_1\psi_{2,\varepsilon}^{(2)}\cdot\boldsymbol 1_{A_\varepsilon^{(2)}}\right)d\boldsymbol x.
\end{split}
\end{equation}
The proof of the uniqueness of a vortex ring with small cross-section is based on a careful estimate for each term in \eqref{3-18}.
\bigskip
\noindent{\bf Proof of Proposition \ref{prop3-1}:}
Using the asymptotic estimates for $\psi_{1,\varepsilon}$ in Lemma \ref{C2} and for $\xi_\varepsilon$ in Lemma \ref{lem3-17}, we see that
\begin{equation}\label{3-19}
\begin{split}
&\quad\int_{\partial B_\delta(\boldsymbol z)}\frac{\partial\xi_\varepsilon}{\partial\nu}\frac{\partial\psi_{1,\varepsilon}^{(1)}}{\partial x_1}dS+\int_{\partial B_\delta(\boldsymbol z)}\frac{\partial \psi_{1,\varepsilon}^{(2)}}{\partial\nu }\frac{\partial\xi_\varepsilon}{\partial x_1}dS-\frac{1}{2}\int_{\partial B_\delta(\boldsymbol z)}\langle\nabla(\psi_{1,\varepsilon}^{(1)}+\psi_{1,\varepsilon}^{(2)}),\nabla\xi_\varepsilon\rangle\nu_1dS\\
&=O(\varepsilon)\cdot \mathbf B_\varepsilon+O(\varepsilon^2).
\end{split}
\end{equation}
To deal with the right hand side of \eqref{3-18}, we write
\begin{equation*}
\begin{split}
&\quad\frac{z_1^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{B_\delta(\boldsymbol z)} \left(\partial_1\psi_{2,\varepsilon}^{(1)}\cdot\boldsymbol 1_{A_\varepsilon^{(1)}}-\partial_1\psi_{2,\varepsilon}^{(2)}\cdot\boldsymbol 1_{A_\varepsilon^{(2)}}\right)d\boldsymbol x\\
&=\frac{z_1^2}{\varepsilon^2|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{B_\delta(\boldsymbol z)} \left(\partial_1\psi_{2,\varepsilon}^{(1)}(\boldsymbol 1_{A_\varepsilon^{(1)}}-\boldsymbol 1_{A_\varepsilon^{(2)}})+\boldsymbol 1_{A_\varepsilon^{(2)}}(\partial_1\psi_{2,\varepsilon}^{(1)}-\partial_1\psi_{2,\varepsilon}^{(2)})\right)d\boldsymbol x\\
&=G_1+G_2,
\end{split}
\end{equation*}
and
\begin{equation*}
G_1=\frac{z_1^2}{\varepsilon^2}\int_{B_\delta(\boldsymbol z)} x_1^{-1}f_\varepsilon(\boldsymbol x)\int_{B_\delta(\boldsymbol z)} \partial_{x_1}H(\boldsymbol x,\boldsymbol x')\cdot\boldsymbol 1_{A_\varepsilon^{(1)}}d\boldsymbol x'd\boldsymbol x=G_{11}+G_{12}+G_{13}+G_{14},
\end{equation*}
where
\begin{equation*}
G_{11}=\frac{z_1^2}{4\pi\varepsilon^2}\cdot\ln\left(\frac{1}{s}\right)\cdot\int_{B_\delta(\boldsymbol z)} x_1^{-3/2}f_\varepsilon(\boldsymbol x)\int_{A_\varepsilon^{(1)}}x_1'^{3/2} d\boldsymbol x'd\boldsymbol x,
\end{equation*}
\begin{equation*}
G_{12}=\frac{z_1^2}{4\pi\varepsilon^2}\cdot\int_{B_\delta(\boldsymbol z)} x_1^{-3/2}f_\varepsilon(\boldsymbol x)\int_{A_\varepsilon^{(1)}}x_1'^{3/2} \ln\left(\frac{s}{|\boldsymbol x-\boldsymbol x'|}\right)d\boldsymbol x'd\boldsymbol x,
\end{equation*}
\begin{equation*}
G_{13}=-\frac{z_1^2}{2\pi\varepsilon^2}\cdot\int_{B_\delta(\boldsymbol z)} x_1^{-1}f_\varepsilon(\boldsymbol x)\int_{A_\varepsilon^{(1)}}\left(x_1^{1/2}x_1'^{3/2}-z_1^2\right)\cdot\frac{x_1-x_1'}{|\boldsymbol x-\boldsymbol x'|^2}d\boldsymbol x'd\boldsymbol x,
\end{equation*}
and $G_{14}$ is a regular term. Using the circulation constraint \eqref{3-2} and Lemma \ref{lem3-16}, we have
\begin{equation*}
\begin{split}
G_{11}&=\frac{z_1^2}{4\pi}\cdot\ln\left(\frac{1}{s}\right)\cdot\int_{B_\delta(\boldsymbol z)} x_1^{-3/2}f_\varepsilon\cdot \frac{1}{\varepsilon^2}\int_{\Omega_\varepsilon^{(1)}}x_1'\left(z_1^{1/2}+O(\varepsilon)\right)d\boldsymbol x'd\boldsymbol x\\
&=\frac{\kappa z_1^2}{4\pi}\cdot \left(z_1^{1/2}+O(\varepsilon)\right)\cdot\ln\left(\frac{1}{s}\right)\cdot\int_{B_\delta(\boldsymbol z)} x_1^{-3/2}f_\varepsilon(\boldsymbol x)d\boldsymbol x\\
&=\frac{\kappa z_1^2}{4\pi}\cdot \left(z_1^{1/2}+O(\varepsilon)\right)\cdot\ln\left(\frac{1}{s}\right)\cdot\int_{B_\delta(\boldsymbol z)} f_\varepsilon\cdot\left(z_1^{-3/2}-\frac{3}{2z_1^{5/2}}\cdot(x_1-z_1)+O(\varepsilon^2)\right)d\boldsymbol x\\
&=\frac{\kappa z_1^2}{4\pi}\cdot \left(z_1^{1/2}+O(\varepsilon)\right)\cdot\ln\left(\frac{1}{s}\right)\cdot\int_{\mathbb R^2}\left(-\frac{3}{2z_1^{5/2}}\cdot sy_1+O(\varepsilon^2)\right)s^2f_\varepsilon(s\boldsymbol y+\boldsymbol z)d\boldsymbol y\\
&=\frac{\kappa z_1^2}{4\pi}\cdot \left(z_1^{1/2}+O(\varepsilon)\right)\cdot\ln\left(\frac{1}{s}\right)\cdot\int_{|\boldsymbol y|=1}\left(-\frac{3}{2z_1^{5/2}}\cdot sy_1+O(\varepsilon^2)\right)\left(b_\varepsilon\cdot\frac{y_1}{z_1|\boldsymbol y|^2}+O(\varepsilon)\right)d\boldsymbol y\\
&=-\frac{3\kappa}{8z_1}\cdot b_\varepsilon s\ln\left(\frac{1}{s}\right)+O(\varepsilon).
\end{split}
\end{equation*}
For the term $G_{12}$, it holds
\begin{equation*}
\begin{split}
G_{12}&=\frac{z_1^2}{4\pi\varepsilon^2}\int_{B_\delta(\boldsymbol z)} \left(z_1^{-3/2}+O(\varepsilon)\right)f_\varepsilon\int_{A_\varepsilon^{(1)}}\left(z_1^{3/2}+O(\varepsilon)\right) \ln\left(\frac{s}{|\boldsymbol x-\boldsymbol x'|}\right)d\boldsymbol x'd\boldsymbol x\\
&=\frac{z_1^2}{4\pi\varepsilon^2}\int_{B_\delta(\boldsymbol z)} f_\varepsilon\int_{B_s(\boldsymbol z)} \ln\left(\frac{s}{|\boldsymbol x-\boldsymbol x'|}\right)d\boldsymbol x'd\boldsymbol x+O(\varepsilon)\\
&=\frac{z_1^2s^2}{4\pi\varepsilon^2}\int_{|\boldsymbol y|=1}\left(b_\varepsilon\cdot\frac{y_1}{z_1|\boldsymbol y|^2}+O(\varepsilon)\right)\left(\int_{B_1(\boldsymbol 0)} \ln\left(\frac{1}{|\boldsymbol y-\boldsymbol y'|}\right)d\boldsymbol y'\right)+O(\varepsilon)\\
&=O(\varepsilon),
\end{split}
\end{equation*}
where we have used the formula for the Rankine vortex:
\begin{equation*}
\frac{1}{2\pi}\int_{B_1(\boldsymbol 0)} \ln\left(\frac{1}{|\boldsymbol y-\boldsymbol y'|}\right)d\boldsymbol y'=\left\{
\begin{array}{lll}
\frac{1}{4}(1-|\boldsymbol y|^2), \ \ \ \ \ & |\boldsymbol y|\le 1,\\
\frac{1}{2}\ln\frac{1}{|\boldsymbol y|}, & |\boldsymbol y|\ge 1.
\end{array}
\right.
\end{equation*}
Similarly, for $G_{13}$ we have
\begin{equation*}
\begin{split}
G_{13}&=-\frac{z_1^2}{4\pi\varepsilon^2}\int_{B_\delta(\boldsymbol z)} f_\varepsilon\int_{A_\varepsilon^{(1)}}\left((x_1-z_1)+3(x_1'-z_1)\right) \cdot\frac{x_1-x_1'}{|\boldsymbol x-\boldsymbol x'|^2}d\boldsymbol x'd\boldsymbol x+O(\varepsilon)\\
&=-\frac{z_1^2}{4\pi\varepsilon^2}\int_{B_\delta(\boldsymbol z)} f_\varepsilon\int_{B_s(\boldsymbol z)}\left((x_1-z_1)+3(x_1'-z_1)\right) \cdot\frac{x_1-x_1'}{|\boldsymbol x-\boldsymbol x'|^2}d\boldsymbol x'd\boldsymbol x+O(\varepsilon).
\end{split}
\end{equation*}
Notice that
\begin{equation*}
g(\boldsymbol y)=\int_{B_1(\boldsymbol 0)}\left(y_1+3y'_1\right) \cdot\frac{y_1-y_1'}{|\boldsymbol y-\boldsymbol y'|^2}d\boldsymbol y'
\end{equation*}
is a bounded function, even with respect to $y_1=0$, while $\partial w/\partial y_1$ is odd with respect to $y_1=0$. Hence it holds that
\begin{equation*}
G_{13}=-\frac{z_1^2s^2}{4\pi\varepsilon^2}\int_{|\boldsymbol y|=1}\left(\frac{2}{z_1}\cdot b_\varepsilon\cdot\frac{\partial w}{\partial y_1}+O(\varepsilon)\right)g(\boldsymbol y)+O(\varepsilon)=O(\varepsilon).
\end{equation*}
For the regular term $G_{14}$, it is easy to verify that $G_{14}=O(\varepsilon)$. Summarizing all the estimates above, we get
\begin{equation}\label{3-20}
G_1=-\frac{3\kappa}{8z_1}\cdot b_\varepsilon s\ln\left(\frac{1}{s}\right)+O(\varepsilon).
\end{equation}
Then we turn to deal with $G_2$. Using Fubini's theorem, we have
\begin{equation*}
\begin{split}
G_2&=\frac{z_1^2}{\varepsilon^4|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{A_\varepsilon^{(2)}}\left(\int_{A_\varepsilon^{(1)}}\partial_{x_1}H(\boldsymbol x,\boldsymbol x')d\boldsymbol x'-\int_{A_\varepsilon^{(2)}}\partial_{x_1}H(\boldsymbol x,\boldsymbol x')d\boldsymbol x'\right)d\boldsymbol x\\
&=\frac{z_1^2}{\varepsilon^4|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{B_\delta(\boldsymbol z)} \left(\boldsymbol 1_{A_\varepsilon^{(1)}}-\boldsymbol 1_{A_\varepsilon^{(2)}}\right)\int_{A_\varepsilon^{(2)}}\partial_{x_1}H(\boldsymbol x,\boldsymbol x')d\boldsymbol x'd\boldsymbol x\\
&=\frac{z_1^2}{\varepsilon^4|\psi_\varepsilon^{(1)}-\psi_\varepsilon^{(2)}|_\infty}\int_{B_\delta(\boldsymbol z)} \left(\boldsymbol 1_{A_\varepsilon^{(1)}}-\boldsymbol 1_{A_\varepsilon^{(2)}}\right)\partial_1\psi_{2,\varepsilon}^{(2)}d\boldsymbol x.
\end{split}
\end{equation*}
Due to the dual formulation of $G_1$ and $G_2$, we claim
\begin{equation}\label{3-21}
G_2=-\frac{3\kappa}{8z_1}\cdot b_\varepsilon s\ln\left(\frac{1}{s}\right)+O(\varepsilon).
\end{equation}
Now, from \eqref{3-19}, \eqref{3-20} and \eqref{3-21}, we have
\begin{equation}\label{3-22}
\frac{3\kappa}{4z_1}\cdot b_\varepsilon s\ln\left(\frac{1}{s}\right)=O(\varepsilon).
\end{equation}
Since $z_1$ is near $x^*>0$ defined in Lemma \ref{lem3-15}, and $s\ln\left(1/s\right)=O(\varepsilon|\ln\varepsilon|)$, we deduce from \eqref{3-22} that
\begin{equation*}
b_\varepsilon =O\left(\frac{1}{|\ln\varepsilon|}\right).
\end{equation*}
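The passage from \eqref{3-22} to this bound uses only the elementary scaling $s\ln(1/s)\asymp\varepsilon|\ln\varepsilon|$. A quick numerical sketch (taking $s=\varepsilon$ as a hypothetical instance of that scaling; not a claim about the paper's precise definition of $s$) illustrates the resulting decay of the admissible $b_\varepsilon$.

```python
import math

# Illustrative check: with the hypothetical choice s = eps, the quantity
# s * ln(1/s) equals eps * |ln(eps)| exactly, so a bound of the form
# b_eps * s * ln(1/s) = O(eps) forces b_eps = O(1/|ln(eps)|).
for eps in [1e-2, 1e-4, 1e-8]:
    s = eps
    lhs_scale = s * math.log(1.0 / s)   # coefficient multiplying b_eps
    b_bound = eps / lhs_scale           # largest b_eps allowed by O(eps)
    assert abs(b_bound - 1.0 / abs(math.log(eps))) < 1e-12
    print(eps, b_bound)
```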
According to Lemma \ref{lem3-16}, we can also use the fact that for fixed $\boldsymbol y\in\mathbb R^2$ it holds
$$\frac{1}{2\pi}\ln\left(\frac{1}{|\boldsymbol y-\cdot|}\right)\in W_{\text{loc}}^{1,p'}(\mathbb R^2),\quad \forall \, p'\in [1,2),$$
and deduce
\begin{equation*}
\begin{split}
\tilde\xi_\varepsilon(\boldsymbol y)&=\frac{z_1}{2\pi}\int_{\mathbb R^2_+} \ln\left(\frac{1}{s|\boldsymbol y-\boldsymbol y'|}\right)\cdot \left(1-\frac{sy_1'}{z_1}\right)s^2f_\varepsilon(s\boldsymbol y'+\boldsymbol z)d\boldsymbol y'+O\left(\frac{1}{|\ln\varepsilon|}\right)\\
&=\frac{1}{\pi}\int_{|\boldsymbol y'|=1} \ln\left(\frac{1}{|\boldsymbol y-\boldsymbol y'|}\right)\cdot \left(1-\frac{sy_1'}{z_1}\right)\left(b_\varepsilon\cdot\frac{\partial w(\boldsymbol y')}{\partial y_1}+O(\varepsilon)\right)\\
&\quad+\frac{1}{\pi}\ln\left(\frac{1}{s}\right)\cdot\int_{|\boldsymbol y'|=1}\left(b_\varepsilon\cdot\frac{\partial w(\boldsymbol y')}{\partial y_1}+O(\varepsilon)\right)+O\left(\frac{1}{|\ln\varepsilon|}\right)\\
&=O\left(\frac{1}{|\ln\varepsilon|}\right).
\end{split}
\end{equation*}
Thus we conclude $||\xi_\varepsilon||_{L^\infty(\mathbb R^2_+)}=O(1/|\ln\varepsilon|)$, which contradicts $||\xi_\varepsilon||_{L^\infty(\mathbb R^2_+)}=1-o_\varepsilon(1)$. By the discussion given before Lemma \ref{lem3-17}, we have verified the uniqueness of $\psi_\varepsilon$ for $\varepsilon>0$ small, which means that the vortex ring $\zeta_\varepsilon$ satisfying the assumptions of Proposition \ref{prop3-1} is unique.\qed
\bigskip
\section{Stability}\label{sec4}
In this section, we study the nonlinear orbital stability of the steady vortex ring $\zeta_\varepsilon$ constructed in Theorem \ref{thm1} and provide the proof of Theorem \ref{thm4}. The key idea is to build a bridge between the existence result of \cite{Bad, CWZ}, based on the variational method, and the uniqueness result established in the preceding section, in order to apply the concentration-compactness principle of Lions \cite{Lions} to a maximizing sequence.
\subsection{Variational setting}
Let $\kappa$ and $W$ be as in Theorem \ref{thm1}. We now show that ${\zeta}_\varepsilon$ admits a variational characterization.
We set the space of admissible functions
\begin{equation*}
\mathcal{A}_\varepsilon:=\left\{\zeta\in L^\infty(\mathbb{R}^3)\mid \zeta: \text{axi-symmetric},\ 0\le \zeta\le 1/\varepsilon^2, \ \|\zeta\|_{L^1(\mathbb{R}^3)}\le2\pi\kappa \right\}.
\end{equation*}
We shall consider the maximization problem:
\begin{equation}\label{4-1}
\mathcal{E}_{\varepsilon}=\sup_{\zeta\in \mathcal{A}_\varepsilon}\left(E[\zeta]-W\ln\frac{1}{\varepsilon}\mathcal{P}[\zeta] \right).
\end{equation}
Denote by $\mathcal{S}_{\varepsilon}\subset \mathcal{A}_{\varepsilon}$ the set of maximizers of \eqref{4-1}. Note that any $z$-directional translation of $\zeta\in \mathcal{S}_{\varepsilon}$ is still in $ \mathcal{S}_{\varepsilon}$.
The following result is essentially contained in \cite{Bad, CWZ}.
\begin{proposition}\label{pro4-1}
If $\varepsilon>0$ is sufficiently small, then $\mathcal{S}_{\varepsilon}\neq \emptyset$ and each maximizer $\hat{\zeta}_\varepsilon \in \mathcal{S}_{\varepsilon}$ is a steady vortex ring with circulation $\kappa$ and translational velocity $W\ln \varepsilon\,\mathbf{e}_z$. Furthermore,
\begin{itemize}
\item [(i)]$\hat{\zeta}_\varepsilon=\varepsilon^{-2}\boldsymbol 1_{\hat{\Omega}_\varepsilon}$ for some axi-symmetric topological torus $\hat{\Omega}_\varepsilon\subset \mathbb{R}^3$.
\item [(ii)]One has $C_1\varepsilon \le \sigma\left(\hat{\Omega}_\varepsilon\right)<C_2\varepsilon$ for some constants $0<C_1<C_2$.
\item [(iii)]As $\varepsilon\to 0$, $\mathrm{dist}_{\mathcal C_{r^*}}(\hat{\Omega}_\varepsilon)\to0$ with $r^*={\kappa}/{4\pi W}$.
\end{itemize}
\end{proposition}
If $\zeta\in \mathcal{S}_\varepsilon$ for $\varepsilon>0$ small, then by Steiner symmetrization it must be symmetric with respect to some horizontal line $x_2=h$, and it can be centralized by a unique translation in the $z$-direction that makes it a centralized steady vortex ring. We shall denote its centralized version by $\zeta^\#$. We also set $\mathcal{S}^\#_{\varepsilon}:=\{\zeta^\#\mid \zeta\in \mathcal{S}_{\varepsilon}\}$. In view of Theorem \ref{thm2}, we see that $\mathcal{S}^\#_{\varepsilon}=\{{\zeta}_\varepsilon\}$ for all $\varepsilon>0$ small.
The following elementary estimates can be found in Lemma 2.3 of \cite{Choi20}.
\begin{lemma}\label{le4-2}
There exists a positive number $C$ such that
\begin{equation*}
\begin{split}
& |E[\zeta]|\le E[|\zeta|]\le C\left(\|r^2\zeta\|_{L^1(\mathbb R^3)}+\|\zeta\|_{L^1\cap L^2(\mathbb R^3)} \right)\|r^2\zeta\|_{L^1(\mathbb R^3)}^{1/2}\|r^2\zeta\|_{L^1(\mathbb R^3)}^{1/2}, \\
& |E[\zeta_1]-E[\zeta_2]|\le C\left(\|r^2(\zeta_1+\zeta_2)\|_{L^1(\mathbb R^3)}+\|\zeta_1+\zeta_2\|_{L^1\cap L^2(\mathbb R^3)} \right)\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \ \ \times\|r^2(\zeta_1-\zeta_2)\|_{L^1(\mathbb R^3)}^{1/2}\|r^2(\zeta_1-\zeta_2)\|_{L^1(\mathbb R^3)}^{1/2},
\end{split}
\end{equation*}
for any axi-symmetric $\zeta$, $\zeta_1$, $\zeta_2\in \left(L^1\cap L^2\cap L^1_\mathrm{w}\right)(\mathbb R^3)$.
\end{lemma}
\bigskip
\subsection{Reduction to absurdity}
We are now in a position to prove Theorem \ref{thm4}.
\bigskip
\noindent{\bf Proof of Theorem \ref{thm4}:}
We argue by contradiction. Suppose that there exist a positive number $\eta_0$, a sequence $\{\zeta_{0,n}\}_{n=1}^\infty$ of non-negative axi-symmetric functions, and a sequence $\{t_n\}_{n=1}^{\infty}$ of non-negative numbers such that, for each $n\ge 1$, we have $\zeta_{0,n}$, $(r\zeta_{0,n})\in L^\infty(\mathbb{R}^3)$,
\begin{equation*}
\|\zeta_{0,n}-\zeta_\varepsilon\|_{L^1\cap L^2(\mathbb{R}^3)}+\|r^2(\zeta_{0,n}-\zeta_\varepsilon)\|_{L^1(\mathbb{R}^3)}\le \frac{1}{n^2},
\end{equation*}
and
\begin{equation*}
\inf_{\tau\in \mathbb{R}}\left( \|\zeta_n(\cdot-\tau \mathbf{e}_z,t_n)-\zeta_\varepsilon\|_{L^1\cap L^2(\mathbb{R}^3)}+\|r^2(\zeta_n(\cdot-\tau \mathbf{e}_z,t_n)-\zeta_\varepsilon)\|_{L^1(\mathbb{R}^3)}\right)\ge \eta_0,
\end{equation*}
where $\zeta_n(\boldsymbol x,t)$ is the global-in-time weak solution of \eqref{1-7} for the initial data $\zeta_{0,n}$ obtained by Proposition \ref{Pro1}. Using Lemma \ref{le4-2}, we get
\begin{equation*}
\lim_{n\to \infty}E[\zeta_{0,n}]=E[\zeta_\varepsilon].
\end{equation*}
Thus, we have
\begin{equation*}
\begin{split}
& \lim_{n\to \infty} \mathcal{P}[\zeta_{0,n}]=\mathcal{P}[\zeta_\varepsilon],\ \ \lim_{n\to \infty}E[\zeta_{0,n}]= E[\zeta_\varepsilon],\\
& \lim_{n\to \infty}\|\zeta_{0,n}\|_{L^p(\mathbb{R}^3)}=\|\zeta_{\varepsilon}\|_{L^p(\mathbb{R}^3)},\ \ \forall\, 1\le p\le 2.
\end{split}
\end{equation*}
Let us write $\zeta_n=\zeta_n(\cdot,t_n)$. By virtue of the conservation laws, we conclude that
\begin{equation}\label{4-5}
\begin{split}
& \lim_{n\to \infty} \mathcal{P}[\zeta_{n}]=\mathcal{P}[\zeta_\varepsilon],\ \ \lim_{n\to \infty}E[\zeta_{n}]=E[\zeta_\varepsilon],\\
& \lim_{n\to \infty}\|\zeta_{n}\|_{L^p(\mathbb{R}^3)}=\|\zeta_{\varepsilon}\|_{L^p(\mathbb{R}^3)},\ \ \forall\, 1\le p\le 2.
\end{split}
\end{equation}
Note that
\begin{equation*}
\int_{\left\{\boldsymbol x\in \mathbb{R}^3\mid |\zeta_n(\boldsymbol x)-1/\varepsilon^2|\ge 1/n\right\}}\zeta_n d\boldsymbol x=\int_{\left\{\boldsymbol x\in \mathbb{R}^3\mid |\zeta_{0,n}(\boldsymbol x)-1/\varepsilon^2|\ge 1/n\right\}}\zeta_{0,n}d\boldsymbol x.
\end{equation*}
Set $D(n):=\left\{\boldsymbol x\in \mathbb{R}^3\mid |\zeta_{0,n}(\boldsymbol x)-1/\varepsilon^2|\ge 1/n \right\}$ and $Q:=\text{supp}\, \zeta_\varepsilon$. We check that
\begin{equation*}
\begin{split}
\int_{D(n)}\zeta_{0,n}d\boldsymbol x & =\|\zeta_{0,n} \|_{L^1\left(D(n)\cap Q\right)}+\|\zeta_{0,n} \|_{L^1\left(D(n)\backslash Q \right)} \\
& \le \|\zeta_{0,n}-\zeta_\varepsilon \|_{L^1\left(D(n)\cap Q \right)}+\|\zeta_\varepsilon \|_{L^1\left(D(n)\cap Q \right)}+\|\zeta_{0,n}-\zeta_\varepsilon \|_{L^1\left(D(n)\backslash Q \right)}\\
&\le \|\zeta_{0,n}-\zeta_\varepsilon \|_{L^1(\mathbb{R}^3)}+\|\zeta_\varepsilon \|_{L^1\left(D(n)\cap Q\right)}\\
&\le \|\zeta_{0,n}-\zeta_\varepsilon \|_{L^1(\mathbb{R}^3)}+|D(n)\cap Q|\le (n+1)\|\zeta_{0,n}-\zeta_\varepsilon \|_{L^1(\mathbb{R}^3)}\le \frac{n+1}{n^2}\to 0
\end{split}
\end{equation*}
as $n\to \infty$, where we used the fact that
\begin{equation*}
\frac{1}{n} |D(n)\cap Q|\le \|\zeta_{0,n}-\zeta_\varepsilon \|_{L^1\left(D(n)\cap Q \right)}\le \|\zeta_{0,n}-\zeta_\varepsilon \|_{L^1(\mathbb{R}^3)}.
\end{equation*}
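The fact just used is a discrete Chebyshev-type bound: $\lambda\,|\{|f-g|\ge\lambda\}|\le\|f-g\|_{L^1}$. The following sketch verifies it on synthetic grid data (all values and names here are hypothetical, chosen only for illustration).

```python
import random

# Chebyshev-type bound on a uniform grid with cell volume h:
#   lam * |{ |f - g| >= lam }| <= ||f - g||_{L^1},
# since on the super-level set the integrand |f - g| is at least lam.
random.seed(0)
h = 0.01          # cell volume
lam = 0.2         # threshold playing the role of 1/n
f = [random.uniform(0.0, 1.0) for _ in range(1000)]
g = [0.5] * 1000  # constant profile playing the role of zeta_eps on Q

measure = h * sum(1 for a, b in zip(f, g) if abs(a - b) >= lam)
l1_norm = h * sum(abs(a - b) for a, b in zip(f, g))
assert lam * measure <= l1_norm
print(lam * measure, l1_norm)
```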
Set
\begin{equation*}
\mathcal{A}^*_\varepsilon:=\left\{\zeta\in\mathcal{A}_\varepsilon\mid \mathcal{P}[\zeta]=\mathcal{P}[\zeta_\varepsilon] \right\}.
\end{equation*}
It is easy to see that
\begin{equation*}
E[\zeta_\varepsilon]=\max_{\zeta\in \mathcal{A}^*_\varepsilon}E[\zeta]\ \ \ \text{and}\ \ \ \mathcal{S}_{\varepsilon}=\left\{\zeta\in \mathcal{A}^*_\varepsilon \mid E[\zeta]
=E[\zeta_\varepsilon]\right\}.
\end{equation*}
Therefore, we can now apply Theorem 3.1 in \cite{Choi20}, a consequence of the concentration-compactness principle, to obtain a subsequence (still indexed by $n$) and $\{\tau_n\}_{n=1}^\infty\subset \mathbb{R}$ such that
\begin{equation*}
\|r^2\left(\zeta_{n}(\cdot-\tau_n\mathbf{e}_z)-\zeta_\varepsilon\right)\|_{L^1(\mathbb{R}^3)}\to 0, \quad \text{as}\ n\to \infty.
\end{equation*}
Recalling \eqref{4-5}, we can further deduce that
\begin{equation*}
\|\zeta_{n}(\cdot-\tau_n\mathbf{e}_z)-\zeta_\varepsilon\|_{L^2(\mathbb{R}^3)}\to 0, \quad \text{as}\ n\to \infty.
\end{equation*}
By H\"older's inequality, we get
\begin{equation*}
\lim_{n\to\infty}\int_Q \zeta_{n}(\boldsymbol x-\tau_n\mathbf{e}_z)d\boldsymbol x=\int_Q \zeta_\varepsilon(\boldsymbol x) d\boldsymbol x,
\end{equation*}
which implies
\begin{equation*}
\lim_{n\to\infty}\int_{\mathbb{R}^3\backslash Q}\zeta_n(\boldsymbol x-\tau_n\mathbf{e}_z)d\boldsymbol x=\lim_{n\to\infty}\int_{\mathbb{R}^3}\zeta_n(\boldsymbol x-\tau_n\mathbf{e}_z)d\boldsymbol x-\lim_{n\to\infty}\int_{Q}\zeta_n(\boldsymbol x-\tau_n\mathbf{e}_z)d\boldsymbol x=0.
\end{equation*}
It follows that
\begin{equation*}
\begin{split}
\|\zeta_{n}(\cdot-\tau_n\mathbf{e}_z)-\zeta_\varepsilon\|_{L^1(\mathbb{R}^3)} & =\|\zeta_{n}(\cdot-\tau_n\mathbf{e}_z)-\zeta_\varepsilon\|_{L^1(Q)}+\|\zeta_{n}(\cdot-\tau_n\mathbf{e}_z)-\zeta_\varepsilon\|_{L^1(\mathbb{R}^3\backslash Q)} \\
& \le |Q|^{1/2}\|\zeta_{n}(\cdot-\tau_n\mathbf{e}_z)-\zeta_\varepsilon\|_{L^2(\mathbb{R}^3)}+\|\zeta_{n}(\cdot-\tau_n\mathbf{e}_z)\|_{L^1(\mathbb{R}^3\backslash Q)}\to 0
\end{split}
\end{equation*}
as $n\to \infty$. In sum, we have
\begin{equation*}
0 =\lim_{n\to\infty}\Big(\|\zeta_n(\cdot-\tau_n \mathbf{e}_z,t_n)-\zeta_\varepsilon\|_{L^1\cap L^2(\mathbb{R}^3)}+\|r^2(\zeta_n(\cdot-\tau_n \mathbf{e}_z,t_n)-\zeta_\varepsilon)\|_{L^1(\mathbb{R}^3)}\Big)\ge \eta_0>0,
\end{equation*}
which is a contradiction. The proof is thus complete.
\qed
\bigskip
\bigskip
\bigskip
\section{Introduction}
Any realistic quantum computer has errors. Principally these errors
come in two varieties: random decoherence and systematic errors.
Systematic errors can arise from imperfections and inhomogeneities
in the construction or implementation of demanding experiments. Both
systematic errors and errors due to decoherence may be corrected,
although it is considerably easier to correct systematic errors.
A pertinent example of systematic error is the strength of the
exchange interaction oscillation in solid state silicon based
architectures \cite{KHD+03,KHS02, WHP+03,WHK+04, WH05, Ket06}. The
six conduction-band minima in silicon generate inter-valley
electronic interference. Discouragingly this causes oscillation in
magnitude of the exchange splitting between two neighboring donors.
The strength of the interaction between qubits therefore sensitively
depends on the exact positioning of donors. In this paper we
demonstrate that, in principle, such a systematic error in the
strength or form of interaction between two qubits \emph{is}
correctable.
Systematic errors may be corrected using composite pulses, in which
a single operation is replaced by several imperfect pulses in such a
way that systematic errors in each pulse cancel each other. Freeman
\cite{Fre97} and Levitt's review \cite{Lev86} and the references
therein provide an excellent introduction. More recently Jones
\cite{Jon03} notes that single qubit composite pulses can be modified
to apply to the Ising interaction. In particular he presents a two
qubit pulse sequence based on those by Wimperis \cite{Wim84} for the
construction of a CNOT gate in NMR.
This paper applies to any architecture with the ability to apply
single qubit rotations and a coupling between the two qubits.
Therefore many leading quantum computing architectures - including
solid state architectures - can, in principle, correct for an
unknown coupling between qubits. This addresses a common problem
across many architectures, where composite pulses have begun to be
applied (for example in ion traps \cite{RHR+04, BCS+04, GRL+04} and
Josephson Junctions \cite{CIA+04}). As an example, we explicitly
consider electron spin in the Kane architecture \cite{Kan98}.
Using the method presented here, it is not necessary to know either
the \emph{strength} or the \emph{form} of the coupling. We will not
assume that the error is in the strength of the interaction alone. In
fact, we will demonstrate that it is possible to create a high
fidelity CNOT gate from a largely random Hamiltonian.
A key benefit of composite pulses is that the error does not need to
be perfectly characterised. Characterising the strength and form of
the interaction to a high degree of accuracy is a challenging
task. Even with an accurate characterisation of the Hamiltonian, the
pulse sequences given in this paper outperform a naive implementation
of the CNOT gate. Although we never \emph{learn} the exact
Hamiltonian, we arrange that systematic errors cancel themselves.
The composite CNOT gate construction follows the following steps:
\begin{enumerate}
\item Isolate a single term: In this step, a single coupling term is
isolated from the interaction Hamiltonian.
\item Create a composite control sign gate: In this step, pulses
adapted from NMR correct for systematic errors in the strength of
the coupling.
\item Finally, apply single qubit unitaries.
\end{enumerate}
A completely general two-qubit Hamiltonian may be expanded in the
Pauli basis as
\begin{equation}
H_2 = \sum_{i,j = \{I,X,Y,Z\}} J_{ij} \sigma_i \sigma_j,
\end{equation}
where $\sigma_i$ are the Pauli matrices and, as throughout the
paper, the tensor product is implied. This Hamiltonian includes both
coupling between the qubits and single qubit terms. The coupling
energies between the qubits are given by the constants $J_{ij}$
($i\ne I, j\ne I$). We do not assume that we know either the
strength of the single qubit terms, or the coupling terms. There
will be a coupling energy which we believe is greatest. Without loss
of generality, let us assume that this term is $J_{ZZ}$. Any single
two qubit term is sufficient.
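The Pauli-basis expansion above is easy to assemble numerically. The following \texttt{numpy} sketch is ours, not part of the paper; the dictionary-of-couplings interface and the read-back identity $J_{ab}=\mathrm{Tr}[(\sigma_a\otimes\sigma_b)H_2]/4$ are standard but illustrative choices.

```python
import numpy as np

# Pauli matrices; labels follow H_2 = sum_ij J_ij sigma_i (x) sigma_j.
I = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1]).astype(complex)
PAULI = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

def two_qubit_hamiltonian(J):
    """Assemble H_2 from a dict of real couplings, e.g. J[('Z', 'Z')].
    Tensor products are written out explicitly with np.kron."""
    H = np.zeros((4, 4), dtype=complex)
    for (a, b), val in J.items():
        H += val * np.kron(PAULI[a], PAULI[b])
    return H

# Example: Heisenberg coupling plus a single-qubit term on qubit 1.
J = {('X', 'X'): 0.3, ('Y', 'Y'): 0.3, ('Z', 'Z'): 0.3, ('Z', 'I'): 1.0}
H2 = two_qubit_hamiltonian(J)

# Real coefficients give a Hermitian Hamiltonian, and each coupling can
# be read back via J_ab = Tr[(sigma_a (x) sigma_b) H_2] / 4.
assert np.allclose(H2, H2.conj().T)
assert np.isclose(np.trace(np.kron(Z, Z) @ H2).real / 4, 0.3)
```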
It is well known that it is possible to isolate a particular term of
the interaction using a technique called \emph{term isolation}
\cite{BDN+04}. In our case, it is possible to isolate the $J_{ZZ}$
term. Consider the pulse sequence
\begin{eqnarray}
Q(t) &=& \Zp{1} \Zp{2} \ V_{t/4} \ \Zp{1} \ V_{t/4} \ \Zp{1} \Zp{2} \
V_{t/4} \ \Zp{1} \ V_{t/4}, \label{eqn:refocus1}\\
V_t &=& \Xp{1} \Xp{2} \ \exp\left(i H_2 \frac{t}{2}\right)
\Xp{1} \Xp{2} \exp\left(i H_2 \frac{t}{2}\right)
\end{eqnarray}
Here, as throughout this paper, a single qubit rotation of an angle
$\theta$ around the z-axis of the $i^{th}$ qubit is denoted by
\begin{equation}
\Rz{\theta}{i} = \exp\left(i \frac{\theta}{2} \sigma_{Z}^{(i)}\right),
\end{equation}
and similarly for rotations around the x and y-axes. This pulse
sequence isolates a single coupling term:
\begin{equation}
Q(t) \approx \exp(i J_{ZZ}t \ \Z\Z). \label{eqn:Q}\\
\end{equation}
Eq. (\ref{eqn:Q}) is only correct to first order, because not all
terms in the Hamiltonian, $H_2$, commute. However, it may be made
arbitrarily accurate by applying the pulses, $\Xp{1}\Xp{2}$,
$\Zp{1}\Zp{2}$ and $\Zp{1}$, $k$ times more frequently:
\begin{equation}
Q_k(t) = Q(t/k)^k
\end{equation}
Term isolation is not uniformly valid. If there is no coupling of the
specified type (that is $J_{ZZ} =0$) then the qubits will be decoupled
by the pulse sequence, and no term isolated. Also, to perform term
isolation it is necessary that the single qubit rotations are
implemented much faster than the typical timescale of the coupling
between qubits. This requires either fast single qubit rotations, or
the ability to turn the interaction between qubits on and off.
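The refocusing sequence $Q_k(t)$ can be checked numerically. The sketch below is ours, not from the paper: it draws a random two-qubit Hamiltonian with weak couplings, implements the $\pi$-pulses in the rotation convention above (so $\Zp{1}=i\sigma_Z\otimes I$, and global phases drop out of the trace fidelity), and verifies that the first-order error shrinks like $O(1/k)$.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1]).astype(complex)
P = [I2, X, Y, Z]

def expi(H, t):
    """exp(i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w * t)) @ V.conj().T

# Random two-qubit Hamiltonian with weak couplings (scale 0.1), J_ZZ included.
J = 0.1 * rng.uniform(-1, 1, (4, 4))
H2 = sum(J[a, b] * np.kron(P[a], P[b]) for a in range(4) for b in range(4))
Jzz = J[3, 3]

# pi-pulses: R_z(pi) = iZ and R_x(pi) = iX in the convention above.
Z1, Z2 = np.kron(1j * Z, I2), np.kron(I2, 1j * Z)
X12 = np.kron(1j * X, 1j * X)

def V(t):        # V_t = X1 X2 e^{iHt/2} X1 X2 e^{iHt/2}
    return X12 @ expi(H2, t / 2) @ X12 @ expi(H2, t / 2)

def Q(t):        # Eq. (refocus1)
    return Z1 @ Z2 @ V(t/4) @ Z1 @ V(t/4) @ Z1 @ Z2 @ V(t/4) @ Z1 @ V(t/4)

def Qk(t, k):    # apply the pulses k times more frequently
    return np.linalg.matrix_power(Q(t / k), k)

t = 1.0
target = expi(np.kron(Z, Z), Jzz * t)      # exp(i J_ZZ t ZZ)
fid = abs(np.trace(target.conj().T @ Qk(t, 40))) / 4
assert fid > 0.98      # only the II and ZZ terms survive the averaging
```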
If the interaction Hamiltonian is \emph{known} to have a simpler form,
then a single coupling term may be isolated more simply and
effectively. For the Heisenberg interaction:
\begin{equation}
H_H = J(\X \X + \Y\Y + \Z\Z),
\end{equation}
all terms commute, and therefore $J_{ZZ}$ can be isolated using just two
steps:
\begin{equation}
\exp(i J_{ZZ}t \Z\Z) = \Zp{1} \ \exp \left(i H_H t \right) \ \Zp{1} \
\exp \left(i H_H t \right). \label{eqn:heisenberg} \\
\end{equation}
Eq. \eqref{eqn:heisenberg} is exact up to a global phase, and would only need to be
carried out once. For many systems, such as the nuclei and electron
spins in the Kane architecture, or quantum dots, this much simpler
form of term isolation may be used.
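With the rotation convention above (so that $\Zp{1}=i\sigma_Z\otimes I$), the $\Z\Z$ phase accumulates over both free-evolution periods, so splitting the total time $t$ into two halves reproduces $\exp(iJt\,\Z\Z)$ up to a global phase: conjugation by $\Zp{1}$ flips the sign of the $\X\X$ and $\Y\Y$ terms, which then cancel. A quick numerical check of this reading (ours, not from the paper):

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1]).astype(complex)

def expi(H, t):
    """exp(i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w * t)) @ V.conj().T

J, t = 0.37, 1.3                       # arbitrary coupling and time
HH = J * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))
Z1 = np.kron(1j * Z, I2)               # R_z(pi) on qubit 1

# Two free evolutions of t/2 interleaved with Z1 pulses: XX and YY change
# sign under conjugation by Z1 and cancel; the ZZ term survives in full.
U = Z1 @ expi(HH, t / 2) @ Z1 @ expi(HH, t / 2)
target = expi(J * np.kron(Z, Z), t)    # exp(i J t ZZ)
fid = abs(np.trace(target.conj().T @ U)) / 4
assert np.isclose(fid, 1.0)            # exact, up to a global phase
```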
This completes the first step: to isolate a single coupling term.
For a completely general two qubit Hamiltonian, it is always
possible to isolate a single coupling term. The strength of this
term remains unknown, but as this paper now shows, systematic errors
in the strength $J_{ZZ}$ can be corrected.
The exact coupling strength, $J_{ZZ}$, is not known. In general we
will predict a certain value, $J_P$. Unless the gate is perfectly
characterised, we will make some fractional error, $\Delta$, defined
as $J_{ZZ} = (1+\Delta){J_{P}}$. Therefore, when we attempt to
create the gate
\begin{equation}
\theta_0 = \exp\left(i \frac{\theta}{2} \Z\Z\right),
\end{equation}
by setting $t=\frac{\theta}{2J_P}$ we will systematically over-rotate
or under-rotate, \emph{actually} creating the gate
\begin{equation}
\theta^{[1]}_0 = \left(\theta(1+\Delta)\right)_0 \approx Q(t).
\end{equation}
As noted above, Jones \cite{Jon03} showed that single qubit composite
pulses can be modified to apply to the Ising interaction; in
particular, a two qubit pulse sequence based on BB1 \cite{Wim84} is
presented. The symmetrized version of the pulse is
\begin{equation}
\theta_0^{[2]} = (\theta/2)^{[1]}_0 \ \pi^{[1]}_{\phi} \
2\pi^{[1]}_{3\phi} \ \pi^{[1]}_{\phi} \ (\theta/2)^{[1]}_0,
\label{eqn:comp1}
\end{equation}
where this pulse is made up of imperfect gates,
\begin{equation}
\theta_\phi = \Ry{\phi}{2} \ \theta_0 \ \Ry{-\phi}{2}
\end{equation}
and in order to cancel first and second order terms, $\phi =
\arccos\left(-\frac{\theta}{4\pi}\right)$.
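The pulse of Eq. (\ref{eqn:comp1}) can be tested directly. The following sketch is ours: it implements $\theta_\phi$ with a systematic over-rotation of the $\Z\Z$ angle by $(1+\Delta)$, treats the single-qubit $y$-rotations as perfect, and compares the trace fidelity of the composite pulse against the bare pulse at $\Delta=20\%$.

```python
import numpy as np

I2 = np.eye(2); Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
ZZ = np.kron(Z, Z)

def expi(H, t):
    """exp(i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w * t)) @ V.conj().T

theta, delta = np.pi / 2, 0.2      # target angle; 20% systematic error

def pulse(th, phi):
    """Imperfect theta_phi: the ZZ angle is scaled by (1+delta); the
    y-rotations on qubit 2 are assumed perfect."""
    Ry2 = np.kron(I2, expi(Y, phi / 2))
    return Ry2 @ expi(ZZ, th * (1 + delta) / 2) @ Ry2.conj().T

naive = pulse(theta, 0)
phi = np.arccos(-theta / (4 * np.pi))
bb1 = (pulse(theta/2, 0) @ pulse(np.pi, phi) @ pulse(2*np.pi, 3*phi)
       @ pulse(np.pi, phi) @ pulse(theta/2, 0))      # Eq. (comp1)

target = expi(ZZ, theta / 2)       # ideal theta_0
fid = lambda U: abs(np.trace(target.conj().T @ U)) / 4
assert fid(bb1) > fid(naive)       # composite pulse beats the bare one
assert fid(bb1) > 0.999            # first and second order in delta cancel
```

The same algebra works because $\Z\Z$ and $\Z\X$ anticommute and square to the identity, so the two-qubit rotations obey exactly the single-qubit BB1 group structure.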
An alternative pulse which gives the same increase in fidelity when
the uncertainty in $J_{ZZ}$ is the only source of error, but
which allows us to refocus an additional time, is given by
\begin{equation}
\theta_0^{[2]} = (\theta/2)_0^{[1]} \ \frac{\pi}{2}^{[1]}_\phi \
\frac{\pi}{2}^{[1]}_{-\phi} \ \Zp{2} \ \frac{\pi}{2}^{[1]}_\phi \
\frac{\pi}{2}^{[1]}_{-\phi} \ (\theta/2)^{[1]}_0 \ \Zp{2}.
\label{eqn:comp2}
\end{equation}
Pulse schemes on a single qubit may be made arbitrarily accurate
\cite{BHC04}. This is also true of two qubit pulses. One
straightforward way to do this is to feed the pulse back into
itself. If we implement the pulse sequence,
\begin{equation}
\theta_0^{[2*]} = \left(\Xp{2} \ \Xp{2} \ \Zp{2} \
(\theta/16)^{[2]}_0 \ \Zp{2}\ (\theta/16)^ {[2]}_0 \right)^{8},
\label{eqn:comp3}
\end{equation}
then by feeding this pulse back into the right hand side of Eq.
\eqref{eqn:comp2}, we obtain a pulse which is correct to higher
order. In principle there is no limit to the order which is
achievable.
The average fidelity, for the purposes of this paper, is defined as
\begin{equation}
F^2 = \frac{|\Tr{U_I^{\dagger}U}|}{\Tr{U^{\dagger}U}}
\end{equation}
where $U_I$ is the actually implemented operation, and $U$ is the
intended rotation.
Using each of the three pulse sequences, we attempted to create the
entangling component of the CNOT gate,
$\left(\frac{\pi}{2}\right)_0$. Fig. \ref{fig:graph} shows the
fidelity of each pulse sequence, plotted against the error, $\Delta$,
in the strength of the interaction. The solid line shows the
fidelity without any correction. The first dotted line shows the
fidelity of the composite pulse described in Eq. (\ref{eqn:comp1})
or Eq. (\ref{eqn:comp2}). The composite pulse provides an
improvement over the fidelity of the uncorrected pulse for errors in
$J_{ZZ}$ of up to $\pm 100\%$. The higher order pulse described using Eq.
(\ref{eqn:comp3}) is shown as the second dotted line. It shows an
improvement over both the uncorrected pulse and the first composite
pulse between $\Delta=-100\%$ and $\Delta=100\%$.
If we wish to have an error of $1 \times 10^{-4}$ then without
correction we require a $\Delta$ of less than 1\%. For the composite
pulse scheme described by Eq. (\ref{eqn:comp1}) or
Eq. (\ref{eqn:comp2}), we may tolerate an error, $\Delta$ of
approximately 22\%. In the higher order composite pulse described
using Eq. (\ref{eqn:comp3}), an error $\Delta$ of approximately 41\%
still achieves a fidelity of $99.99\%$.
\begin{figure}
\begin{center}
\includegraphics[height=\columnwidth, angle=-90]{ho2.eps}
\caption{This plot shows the fidelity of several methods of creating a
CNOT gate, with a systematic error in the strength of the
coupling.} \label{fig:graph}
\end{center}
\end{figure}
This concludes the second step. A systematic error in the interaction
strength, $\Delta$ may be corrected using two-qubit extensions of well
known composite pulses. These pulses may be made arbitrarily accurate
by concatenation.
For the final step, we simply note that a CNOT gate may be written, up to a global phase, as
\begin{equation}
\CNOT = H^{(2)} \ \Rz{-\frac{\pi}{2}}{1} \Rz{-\frac{\pi}{2}}{2}
\exp\left(i \frac{\pi}{4}\ \Z\Z\right) \ H^{(2)}.
\end{equation}
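This decomposition is quickly verified by matrix multiplication. The check below is ours; note that with the convention $\Rz{\theta}{i}=\exp(i\frac{\theta}{2}\sigma_Z)$ used above, the $z$-rotations enter with angle $-\pi/2$, and equality holds up to the global phase $e^{-i\pi/4}$.

```python
import numpy as np

I2 = np.eye(2); Z = np.diag([1, -1]).astype(complex)
Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def expi(A, t):
    """exp(i A t) for Hermitian A, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(1j * w * t)) @ V.conj().T

H2 = np.kron(I2, Had)                    # Hadamard on qubit 2
Rz1 = np.kron(expi(Z, -np.pi / 4), I2)   # R_z(-pi/2) on qubit 1
Rz2 = np.kron(I2, expi(Z, -np.pi / 4))   # R_z(-pi/2) on qubit 2
U = H2 @ Rz1 @ Rz2 @ expi(np.kron(Z, Z), np.pi / 4) @ H2

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
phase = U[0, 0] / CNOT[0, 0]             # the residual global phase
assert np.isclose(abs(phase), 1.0)
assert np.allclose(U, phase * CNOT)      # equal up to a global phase
```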
A robust CNOT gate may be constructed applying all three steps. The
first step isolates the $\Z\Z$ term, regardless of the form of the
Hamiltonian. The second step corrects for any error in the strength of
this term, and finally the third step applies single qubit unitaries
to complete the \emph{robust} CNOT. Using this robust CNOT gate, we
now describe two examples.
One of the current concerns about the viability of the construction
of an exchange based solid state quantum computer is oscillations in
the strength of the exchange interaction \cite{KHS02, KHDS02,
Ket06}. For an arbitrarily placed donor, the strength of the
exchange interaction is unknown. Even the variation of the donor's
position by a single lattice site can change the strength of the
exchange interaction dramatically. The placement introduces an
unknown systematic error in the strength of the exchange
interaction. Fortunately, that is \emph{exactly} the type of error
which is corrected in this paper. It does not matter that we do not
know the strength of the interaction, or that the exchange
interaction may differ from site to site.
For the Kane quantum computer the single-qubit Hamiltonian is given
by
\begin{equation}
H_{Q}=\mu_{B}B\sigma_{e}^{z}-g_{n}\mu_{n}B\sigma_{n}^{z}+A(V_{A})
{\sigma}_{e}\cdot{\sigma}_{n},\label{eqnE:HQ}
\end{equation}
where $B$ is the strength of the constant magnetic field,
$\sigma^{z}$ is the Pauli Z matrix with subscripts $e$ referring to
electrons and $n$ referring to the nucleus and $A(V_A)$ is the
strength of the hyperfine interaction. This allows single qubit
operation of the computer using either the nuclear spin as a qubit
\cite{Kan98} or electron spin \cite{Hil05}. The exchange coupling
between electrons, whose strength $J_i$ can vary considerably,
leads to the Hamiltonian
\begin{equation}
H = \sum_i J_i \sigma_i \cdot \sigma_{i+1} + H_Q^{(i)}.
\end{equation}
This Hamiltonian allows for the single qubit operation of the
computer, but it is not immediately clear if two qubit operations
can be implemented while the strengths of the exchange interactions
remain unknown.
We now consider the pulses and overhead required to create a
composite CNOT gate using electron spin. We use some simplifying
assumptions. We assume that any single qubit rotation by $\pi$
requires 40ns, as does the Hadamard gate. We also assume that two
qubit rotations by $\pi/8$ require only 1ns. The typical `square
root of swap' CNOT gate requires a total of 6 single qubit gates,
and 2 two-qubit gates. The total time required for a CNOT gate is
approximately $140 ns$. Jones' pulse sequence requires more pulses.
A total of 16 single qubit gates, and 8 two qubit gates are
required. A CNOT gate requires approximately $460 ns$ to complete.
The composite pulse is short compared to the long
decoherence time of donors in Si: $1 \times 10^{5}$ operations may
be performed during the $60$ ms dephasing time measured in bulk Si.
The gate therefore falls below the fault tolerant threshold \cite{Got97}.
Assuming perfect single qubit gates, the fidelities for these two
sequences exactly mimic those shown in Fig. \ref{fig:graph}. The
solid, uncorrected curve is extremely sensitive to errors in the
strength of the interaction, and therefore also to the exact placement
of the donor. The fidelity of the composite pulse follows the first
dotted line in Fig. \ref{fig:graph}. This curve is much less sensitive
to errors. As noted above, this pulse improves over the naive case for
$\Delta=-1$ to $\Delta=1$.
We now consider a largely random coupling between qubits. Remarkably,
regardless of our incomplete knowledge of the system, in many cases,
we can still create a high fidelity CNOT gate. To demonstrate this, we
consider the effect of random systematic error on the fidelity of a
CNOT gate. We will assume that the interaction Hamiltonian is given by
\begin{eqnarray}
H_R &=& J\left(\X\X + \Y \Y + \Z\Z \right) \nonumber \\
& & + R \sum_{i,j = \{I,X,Y,Z\}} J_{ij}^{r} \sigma_i \sigma_j.
\end{eqnarray}
The coefficients $J_{ij}^{r}$ are chosen uniformly at random between
-1 and 1. The factor $R$ gives the strength of the random term in the
Hamiltonian.
The first three terms in this Hamiltonian give a simple Heisenberg
interaction. We did not need to choose the Heisenberg interaction. Any
coupling can be chosen, for example an Ising interaction, a
dipole-dipole interaction, or an XY interaction. Each different
coupling requires only trivial modifications. The non-random term in
the Hamiltonian represents the interaction or combination of
interactions expected to be present in the quantum system. The random
term in the Hamiltonian contributes to both single qubit terms, and
two qubit terms. The random term models our incomplete knowledge, not
only of the strength of the interaction, but also of the form of the
interaction.
This demonstration uses the well known `square root of swap'
construction of the CNOT gate \cite{LD98}. Fig. \ref{fig:random} shows
the average and minimum fidelity of this construction, for different
values of $R/J$. The fidelity for each value of $R/J$ is calculated
for 1000 different random Hamiltonians. When the random contribution
of the Hamiltonian is large ($R/J = \pm 1$), the uncorrected CNOT gate
is useless. It has an average fidelity of approximately $50 \%$. This
is worse than if no interaction had been applied at all. Even in the
worst case of minimum fidelity, the composite pulse has a fidelity
superior to the uncorrected case.
\begin{figure}
\begin{center}
\includegraphics[angle=-90, width=\columnwidth]{cnotRandom1000bMin.eps}
\caption{Graph showing the fidelity of uncorrected and composite
pulses to a Hamiltonian with a random component.} \label{fig:random}
\end{center}
\end{figure}
If the `square root of swap' construction is replaced by the
composite pulse described in this paper, on average, a high fidelity
CNOT gate may be constructed. The mean fidelity, averaged over 1000
different random Hamiltonians for each value of $R/J$, is shown as
the dotted line in Fig. \ref{fig:random}. We also found the minimum
fidelity for the composite pulse, and this is also plotted in Fig.
\ref{fig:random}. Even when the random contribution is as large as
the exchange coupling, $R/J =1$, the average fidelity of the
composite gate is approximately $95\%$.
To obtain these results, we combined decoupling with a
composite pulse scheme. First, the pulse used in Fig.
\ref{fig:random} was obtained using Eq. (\ref{eqn:refocus1}). Term
isolation was applied using $k=20$ repetitions for each gate.
Second, Eq. (\ref{eqn:comp2}) was used to correct the strength of the
$\Z\Z$ term. As Fig. \ref{fig:random} shows, a large increase in
fidelity is obtained. Even when the error is large and the coupling
between the qubits is essentially random, using the pulse schemes
presented in this paper, it is possible to produce a high fidelity
CNOT gate.
We have presented a method for creating CNOT gates which corrects
for systematic errors in both the form and strength of the
interaction between qubits. We have applied the composite pulse to a
model electron spin architecture, showing that the slower gate times
still fall below the fault tolerant
threshold. We also considered random systematic errors, showing that
the form of the systematic error is not important. The pulse scheme
presented here has broad applicability. Any system which implements
single qubit operations and has a direct coupling between two
qubits may implement the composite pulses presented here.
In this paper we have shown that, regardless of an incomplete
knowledge of the strength or form of the interaction between two
qubits, in many cases it is possible to construct a CNOT gate which
has arbitrarily high fidelity.
I wish to thank Gerard Milburn, Hsi-Sheng Goan and Lloyd Hollenberg
for discussions and support in preparing this paper.
\section{Introduction}
$\AT$-free graphs are those that do not have an asteroidal triple
\,\textemdash\, that is \,\textemdash\,
$\AT$-free graphs do \,not\, have
three vertices \:of \,which \,every pair is connected by a path \,that
\,avoids \,the \,neighborhood \,of \,the \,third.
\bigskip
Broersma \,et al.~\cite{kn:broersma}
\,introduced the following `betweenness relation'
to characterize $\AT$-free graphs. Let $G$ be an $\AT$-free graph.
For a vertex $x$ \:let $N(x)$ denote its neighborhood and let
$N[x]$ denote its closed neighborhood, \,$N(x) \cup \{x\}$.
A vertex $z$ is \,\underline{between}\, $x$ and $y$ if there is a path from
\,$z$\, to \,$x$\, \,that avoids \,$N[y]$ \,and \,similarly \:
there is a path from \,$z$\, to \,$y$\, \,that avoids \,$N[x]$.
\par
Let \,$\I(x,y)$\, stand for the set of vertices that are between $x$ and $y$.
\,Then \,a graph is $\AT$-free \,if and only if
\,for any three vertices $x$, $y$ and $z$ the following property holds.
\[z \,\in\, \I(x,y) \quad \Rightarrow \quad x \,\notin\, \I(z,y)\]
\bigskip
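Both the asteroidal-triple test and the betweenness sets $\I(x,y)$ are easy to compute by brute force; the following Python sketch (ours, not part of the paper) checks connectivity in $G-N[\cdot]$ by search, and confirms that $C_6$ contains an asteroidal triple while $P_5$ does not.

```python
from itertools import combinations

def closed_nbhd(G, v):             # G: dict vertex -> set of neighbors
    return G[v] | {v}

def connected_avoiding(G, s, t, forbidden):
    """Is there a path from s to t in G - forbidden?"""
    if s in forbidden or t in forbidden:
        return False
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in G[u] - forbidden - seen:
            seen.add(w); stack.append(w)
    return False

def interval(G, x, y):
    """I(x,y): vertices z with a z-x path avoiding N[y]
    and a z-y path avoiding N[x]."""
    return {z for z in G if z not in (x, y)
            and connected_avoiding(G, z, x, closed_nbhd(G, y))
            and connected_avoiding(G, z, y, closed_nbhd(G, x))}

def has_asteroidal_triple(G):
    return any(
        connected_avoiding(G, x, y, closed_nbhd(G, z)) and
        connected_avoiding(G, x, z, closed_nbhd(G, y)) and
        connected_avoiding(G, y, z, closed_nbhd(G, x))
        for x, y, z in combinations(G, 3))

def path(n):       # P_n: 0-1-...-(n-1)
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def cycle(n):      # C_n
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

assert not has_asteroidal_triple(path(5))   # paths are AT-free
assert has_asteroidal_triple(cycle(6))      # C_6 contains the AT {0,2,4}
assert interval(path(5), 0, 4) == {2}       # only the middle vertex is between
```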
A \,set system \,is a pair $(E,\mathcal{C})$ \;where $E$ is a finite set and
$\mathcal{C}$ is a collection of subsets of $E$.
The elements of $\mathcal{C}$ will be called \,\underline{convex}.
The problem to determine
\;for which set systems a greedy algorithm optimizes
linear objective functions
\;has a long history \,\textemdash\, for a brief overview see \,e.g.\ Helman
\,et al.~\cite{kn:helman}. \;For set systems that are convex geometries
\:Kashiwabara \,and \,Okamoto~\cite{kn:kashiwabara} \:characterize
\,linear programming problems
\,for which a greedy algorithm
finds an optimum.
\bigskip
A \,\underline{convex geometry}\, is a set system $(E,\mathcal{C})$ that
satisfies the following properties.
\begin{enumerate}[\rm (1.)]
\item $E \in \mathcal{C}$ and $\varnothing \in \mathcal{C}$.
\item $\mathcal{C}$ is closed under intersections.
\item
The \underline{anti-exchange property} holds, that is,
for all convex sets $Y \in \mathcal{C}$ \;and \;$x,z \notin Y$, \;$x \neq z$
\begin{gather*}
z \,\in\, \sigma(x+Y) \quad \Rightarrow\quad x \,\notin\, \sigma(z+Y) \\
\text{where we write}
\quad \sigma(U)=\;\bigcap \;\;\{\:J\:|\: J \in \mathcal{C}
\quad \text{and}\quad
U \subseteq J\;\} \quad\text{for a subset $U$ of $E$.}
\end{gather*}
\end{enumerate}
\newpage
\noindent For \:some \:interesting \:`prospective applications'
\:of convex geometries \:in \:cloud computing
\;we \:refer \:to \:Kordecki~\cite{kn:kordecki}.
\bigskip
Let $G$ be an $\AT$-free graph. Define a set system \:on $V=V(G)$
\:as the collection of convex sets in $G$ \,\textemdash\,
where a set $X \subseteq V$ \:is \:convex \,if
it contains \:with any two of its elements \:the elements
that are between them.
\par
In the following section we show that
\,the collection of convex sets
\,in an $\AT$-free graph \,constitutes \,a \,convex \,geometry.
This completes the
result of Alc\'on \,et al.~\cite{kn:alcon}\, who proved a similar result
for interval graphs.
\section{$\AT$-free convex geometries}
A set system $(E,\mathcal{C})$ \:which satisfies $E,\varnothing \in \mathcal{C}$
\:and \:which is closed under intersections \:is called
an alignment \:by \:Edelman and Jamison~\cite{kn:edelman}. They show that an alignment
satisfies the anti-exchange property \:if and only if \:either one
\:of the following two properties holds.
\begin{enumerate}[\rm (1.)]
\item For any $C \in \mathcal{C}$ and $y \notin C$ \:the element
$y$ is \,\underline{extreme}\, in $\sigma(C + y)$
\,\textemdash\, that is \,\textemdash\,
$\sigma(C + y) \setminus y$ is convex.
\item Any convex set $X$, $X\neq E$, has an element $y \notin X$
such that $X + y$ is convex.
\end{enumerate}
\bigskip
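Property (2.) can be observed by brute force on a small $\AT$-free graph. The sketch below is ours: it enumerates the convex sets of $P_5$ under the betweenness relation defined in the introduction and verifies that every proper convex set extends by one element to another convex set.

```python
from itertools import combinations

def closed_nbhd(G, v):
    return G[v] | {v}

def connected_avoiding(G, s, t, forbidden):
    if s in forbidden or t in forbidden:
        return False
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in G[u] - forbidden - seen:
            seen.add(w); stack.append(w)
    return False

def interval(G, x, y):
    return {z for z in G if z not in (x, y)
            and connected_avoiding(G, z, x, closed_nbhd(G, y))
            and connected_avoiding(G, z, y, closed_nbhd(G, x))}

def is_convex(G, S):
    return all(interval(G, x, y) <= S for x, y in combinations(S, 2))

P5 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
V = set(P5)
convex_sets = [set(S) for r in range(6) for S in combinations(V, r)
               if is_convex(P5, set(S))]

# Property (2.): every convex set X != V extends to a convex set X + y.
for Xs in convex_sets:
    if Xs != V:
        assert any(is_convex(P5, Xs | {y}) for y in V - Xs)
```

Note that convex sets here need not induce connected subgraphs: $\{0,2,4\}$ is convex in $P_5$ since $\I(0,4)=\{2\}$ and the other intervals are empty.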
The following definition allows us to characterize convex geometries
with a third property
(which is equivalent to the
characterization~\cite[Theorem~2.3]{kn:edelman}).
\begin{definition}
Let \,$(E,\mathcal{C})$\, be an alignment, \;let \,$X,Y \subseteq E$ \,and \,let
\:$Y=\{y_1,\dots,y_k\}$. \,Let \,$k \geq 2$.
\;The set \,$Y$ \,\underline{induces a cycle}\, on \,$X$\, if
\:for \:all \:$i \in [k]$
\[y_{i+1} \in \sigma(y_i+X) \quad \text{\rm{if \,$i < k$ \;and}}\quad
y_1 \in \sigma(y_k+X).\]
\end{definition}
\bigskip
\begin{lemma}
\label{property (2.3)}
An alignment \,$(E,\mathcal{C})$\, is a convex geometry \,if and only if
\,any set \,$Y$\, \:which induces a cycle on a set \,$X$
\:is contained in \,$\sigma(X)$.
\end{lemma}
\bigskip
All proofs we skip here can be found in the appendix. We proceed to prove \,that the convex sets in an $\AT$-free graph constitute
a convex geometry
\,via the following three lemmas.
(Some easily-made drawings might be helpful to the reader.)
\begin{lemma}
\label{prop 3.3}
Let $G$ be an $\AT$-free graph. Any four vertices satisfy the following
property.
\begin{equation}
u \in \I(v,x) \quad \text{\rm{and}}\quad
v \in \I(u,y) \quad \Rightarrow \quad u \in \I(x,y)
\end{equation}
\end{lemma}
\bigskip
\begin{lemma}
\label{lm 3.4}
Let \,$G$\, be an \,$\AT$-free graph.
Any five vertices satisfy the following
property.
\begin{multline}
a \in \I(x,b) \quad \text{\rm{and}} \quad b \in \I(y,z)
\quad \text{\rm{and}} \quad a \notin N[y] \cup N[z] \quad \Rightarrow \\
a \in \I(x,y) \quad\text{\rm{or}}\quad a \in \I(x,z) \quad\text{\rm{or}}\quad
a \in \I(y,z)
\end{multline}
\end{lemma}
\bigskip
(In~\cite{kn:chvatal} \:Chv\'atal\, describes a subclass of convex geometries by a property similar to Lemma~\ref{lm 3.4}.) A component of a graph is a maximal subset of vertices \;of \:which
\;every pair is connected by a path.
\begin{lemma}
\label{lemma 3.5}
Let \,$G$\, be an \,$\AT$-free graph. Let \,$X,Y \subseteq V$ \,and let
\:$Y=\{y_1,\dots,y_k \}$.
Assume that \,$Y \cap \sigma(X)= \varnothing$\, and that no subset of \,$Y$\,
induces a cycle on \,$X$. \:If \,\textemdash\, for all \,$i \in [k-1]$
\,\textemdash\, there exist $x_i \in X$ such that
\:$y_{i+1} \in \I(y_i,x_i)$ \:then
\[Y \,\subseteq\, N[y_1] \,\cup\, C
\quad \text{\rm{where $C$ is a component of the graph $G-N[y_1]$.}}\]
\end{lemma}
\bigskip
\begin{theorem}
Let \,$G$\, be an \,$\AT$-free graph.
\,The convex sets in \,$G$\, constitute a
convex geometry \,on \,$V(G)$.
\end{theorem}
\section{Generating $\AT$-free orders}
When $(E,\mathcal{C})$ is a convex geometry then
$(E,\Bar{\mathcal{C}})$ is an \,\underline{antimatroid}\,
and this defines all
antimatroids \,\textemdash\, here we write
\[\Bar{\mathcal{C}}=\{\,E \setminus C\;|\; C \in \mathcal{C}\,\}.\]
Crapo~\cite{kn:crapo} \,characterizes formal languages
that are antimatroids \,as \,follows.
\begin{definition}
A language \,$L$\, is an antimatroid
\,if \,its \,words \:satisfy \:the following properties.
\begin{enumerate}[\rm (1.)]
\item Every symbol of the alphabet occurs in at least one word.
\item Every word of \,$L$\, contains at most one copy of
every symbol in the alphabet.
\item Every prefix of a word in \,$L$ \,is \,in \,$L$.
\item If \,$s,t \in L$ \,and \,if \,$s$ contains at least one
symbol that is not in \,$t$
\,then \,there is a symbol \,$x \in s$ such that \,$tx \in L$.
\end{enumerate}
\end{definition}
\bigskip
\,\textemdash\, Observe that \,\textemdash\, when $L$ is
the language \:whose words are prefixes of $\AT$-free orders of a graph
\:then \:$L$ \:is \:an \:antimatroid.
The \underline{basic words} of $L$ are those of maximal
length \;which \:are \:the \:$\AT$-free orders.
\begin{definition}
A linear order \:$<$ \:of the vertices
of a graph \:is an
\:$\AT$-free order \:if any three vertices satisfy the following property.
\[z \in \I(x,y) \quad \Rightarrow\quad
x < z \quad \text{\rm{or}}\quad
y < z\]
\end{definition}
A graph is $\AT$-free \,if and only if\, it has
an $\AT$-free order~\cite{kn:corneil}.
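The defining property of an $\AT$-free order is easily checked by brute force; the Python sketch below (ours) tests, for every triple, that a vertex between $x$ and $y$ never precedes both of them. On $P_5$ the path order qualifies, while any order placing the middle vertex first does not.

```python
def closed_nbhd(G, v):
    return G[v] | {v}

def connected_avoiding(G, s, t, forbidden):
    if s in forbidden or t in forbidden:
        return False
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in G[u] - forbidden - seen:
            seen.add(w); stack.append(w)
    return False

def between(G, z, x, y):
    """z is between x and y in the sense of the introduction."""
    return (z not in (x, y)
            and connected_avoiding(G, z, x, closed_nbhd(G, y))
            and connected_avoiding(G, z, y, closed_nbhd(G, x)))

def is_at_free_order(G, order):
    pos = {v: i for i, v in enumerate(order)}
    return all(pos[x] < pos[z] or pos[y] < pos[z]
               for x in G for y in G for z in G
               if x != y and between(G, z, x, y))

P5 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
assert is_at_free_order(P5, [0, 1, 2, 3, 4])      # the path order works
assert not is_at_free_order(P5, [2, 0, 1, 3, 4])  # 2 is between 0 and 4
```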
\bigskip
Pruesse \,and \,Ruskey~\cite{kn:pruesse2,kn:pruesse1} considered the problem of
producing a Gray code for the
basic words of an antimatroid.
\par
Let $L$ be an antimatroid.
Consider the graph whose vertices are the basic words of $L$
\;two vertices being adjacent when one is obtained
from the other by a
transposition of an adjacent pair.
The \,\underline{prism}\, is obtained from two copies
($+$ and $-$) of this graph \:and the addition of edges joining
$+/-$ copies of similar vertices.
\:Pruesse \:and \:Ruskey \:show that this prism is
Hamiltonian \:for \:all \:antimatroids.
\:Their generic algorithm \:generates
\:all the basic words of $L$ \:in the order of a Hamiltonian traversal
of the prism \,\textemdash\, whilst reporting only
the (transpositions in the) $+$ copies.
\bigskip
Assume \,that a graph \,$G$\, is connected and $\AT$-free.
Let $\omega$ be a vertex \:such \:that
the number of vertices \:in the largest component of $G-N[\omega]$
\:is \:as \:large \:as \:possible.
\:Let \:$C$ be a largest component of $G-N[\omega]$
\:and \:let \:$S=N(C)$ and $\Omega=V\setminus (C \cup S)$.
\,When $G$ is not a clique, \,$\{\Omega,S,C\}$ is a
partition of $V(G)$.
\par
By our choice of $\omega$ \:every vertex of $\Omega$ is
adjacent to every vertex of $S$ \,\textemdash\, that is
\,\textemdash\, $\Omega$ is a module, \:hence, \:convex.
Since $G$ is $\AT$-free \:$C \cup S$ is convex as well.
\bigskip
Consider $\AT$-free orders for $G[\Omega]$ and
$G[C \cup S]$ \,\textemdash\, say
\[a=a_1\,\dots \,a_n \quad\text{and}\quad
b=b_1\,\dots\,b_m.\]
Notice that the linear order
\[\beta=a_1\,\dots\,a_n\,b_1\,\dots\,b_m\]
is an $\AT$-free order for $G$ \,\textemdash\, we
call \:$\beta$ \:a \:\underline{canonical order}
\:when $a$ and $b$ are that of $G[\Omega]$ and $G[C \cup S]$.
\;It is easy to obtain
a canonical order in polynomial time.
\bigskip
Consider an arbitrary order $\sigma=v_1\,\dots\,v_n$ of the
vertices of $G$. We write $\sigma_1$ for the linear order
induced on $\Omega$
and $\sigma_2$ for the linear order induced on $C \cup S$.
Observe that
$\sigma$ is an $\AT$-free order \:if \:and \:only \:if
\begin{enumerate}[\rm 1.]
\item $\sigma_1$ and $\sigma_2$ are
$\AT$-free orders of $\Omega$ and $C \cup S$
\item for $\omega \in \Omega$ and $x,y \in C\cup S$
\[x \in \I(\omega,y) \quad \text{and}\quad
\sigma_2^{-1}(x) < \sigma_2^{-1}(y) \quad \Rightarrow \quad
\sigma^{-1}(\omega) < \sigma^{-1}(x)\]
\end{enumerate}
that is \,\textemdash\, \:all \:vertices of $\Omega$ should appear
\:before \:the first element of a pair $x,y \in C \cup S$
that satisfies $x \in \I(\omega,y)$ \:and \:that \:is
\:in \:a \:`wrong' \:order
\,\textemdash\, namely \,\textemdash\,
$\sigma^{-1}(y) > \sigma^{-1}(x)$.
\bigskip
In \:proving \:that \:the prism of $G$ is Hamiltonian \:we
may assume \,\textemdash\, by induction \,\textemdash\, that the prisms of
\,$\Omega$\, and \,$C \cup S$ \:are \:Hamiltonian.
\,\textemdash\, Furthermore \,\textemdash\, we may
assume that $\{+\beta,-\beta\}$
is an edge of
\:both \:Hamiltonian \:cycles.
\:A Hamiltonian cycle in the prism of $G$
\:that uses the edge $\{+\beta,-\beta\}$ \:is
easily obtained from this~\cite[Theorem~3.3]{kn:pruesse1}.
\par
It
\:follows \:that
\:the
$\AT$-free orders of \,$G$\, can be
generated \:such \:that \:each order differs from its predecessor
\:by \:at \:most \:one \:or \:two \:adjacent \:transpositions.
\par
It remains to establish the timebound.
\bigskip
\begin{theorem}
The \:$\AT$-free orders \:of an \:$\AT$-free graph can be generated \:in
\:con\-stant \:amortized \:time.
\end{theorem}
\begin{proof}
Pruesse and Ruskey developed a generic algorithm
to produce all basic words of an antimatroid~\cite{kn:pruesse2,kn:pruesse1}.
The amortized time complexity is determined by
an antimatroid-specific
transposition oracle,
which answers whether two adjacent elements
in a basic word may swap places to produce
another basic word.
\medskip
\noindent
We use the notation introduced above.
For an $\AT$-free order $\sigma$ with an induced
order $\sigma_2$ on $C\cup S$ and $x \in C$,
define $h(x)$ as follows.
\[h(x) = \# \;\{\,z\;|\; z \in C \quad \text{and}\quad
x \in \I(\omega,z) \quad \text{and}\quad
\sigma_2^{-1}(x) < \sigma_2^{-1}(z)\;\}\]
Then $\sigma$ is an $\AT$-free order if and only if
$\sigma_1$ and $\sigma_2$ are $\AT$-free orders and
$\sigma^{-1}(\omega) < \sigma^{-1}(x)$ when $h(x) \geq 1$.
We show that $h$ can be maintained
during a swap of two adjacent elements in $\sigma$.
Notice that $h$ is easily computable for a canonical order.
\medskip
\noindent
Sawada~\cite[Theorem~15~ff.]{kn:sawada}
introduces the counter $\numbad(x,y)$ for ordered pairs
$x$ and $y$ as the number of vertices $z$ with $x \in \I(y,z)$
and $\sigma^{-1}(z) > \sigma^{-1}(x)$. Two elements
$v_j$ and $v_{j+1}$ can be swapped to produce a new
$\AT$-free order only if
$\numbad(v_{j+1},v_j)=0$. Sawada shows that
$\numbad$ can be maintained
during the generation of $\AT$-free orders in constant
amortized time~\cite[Theorem~13 and Observation~1]{kn:sawada}.
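On a small hand-made instance one can compute both counters directly. The following sketch is purely illustrative: the interval function $\I$, the vertex names, and the orders are hypothetical stand-ins, not data from the paper.

```python
# Toy check of the counters h(x) and numbad(x, y) defined above.
# The interval lookup below is a hypothetical stand-in for I(u, v).

def make_interval(table):
    """Return I(u, v) as a symmetric lookup into `table`."""
    def I(u, v):
        return table.get(frozenset((u, v)), set())
    return I

def h(x, omega, C, sigma2, I):
    """h(x) = #{ z in C : x in I(omega, z) and sigma2^-1(x) < sigma2^-1(z) }."""
    pos = {v: i for i, v in enumerate(sigma2)}
    return sum(1 for z in C if x in I(omega, z) and pos[x] < pos[z])

def numbad(x, y, vertices, sigma, I):
    """Sawada's counter: #{ z : x in I(y, z) and sigma^-1(z) > sigma^-1(x) }."""
    pos = {v: i for i, v in enumerate(sigma)}
    return sum(1 for z in vertices if x in I(y, z) and pos[z] > pos[x])

# A small hand-made instance: Omega = {w}, C = {a, b, c}, S empty.
I = make_interval({frozenset(('w', 'b')): {'a'},
                   frozenset(('w', 'c')): {'a', 'b'}})
sigma  = ['w', 'a', 'b', 'c']   # candidate AT-free order of G
sigma2 = ['a', 'b', 'c']        # induced order on C (here C = C ∪ S)
C = {'a', 'b', 'c'}

for x in sorted(C):
    assert h(x, 'w', C, sigma2, I) == numbad(x, 'w', sigma, sigma, I)
print(h('a', 'w', C, sigma2, I))  # 2: both b and c witness a "bad" pair for a
```

On this instance the two counters agree for every $x \in C$, matching the identity used to conclude the proof.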
\medskip
\noindent
Notice that $h(x)=\numbad(x,\omega)$. This proves the theorem.
\hfill{$\blacksquare$}\end{proof}
\section{Concluding remark}
The family of ideals in a poset constitutes a convex geometry on the elements
of the poset. This convex geometry is usually referred to as a poset shelling.
A convex geometry is a poset shelling if and only if its family of
convex sets is closed
under unions~\cite{kn:korte}.
(See~\cite{kn:kashiwabara} for other characterizations.)
\bigskip
The family of convex sets of the $\AT$-free graph shown in the figure
is not a poset shelling. To see this, let $A=\{y_1,z_2,u\}$ and
let $B=\{y_2,z_1,u\}$. Then $A$ and $B$ are convex but their union
is not, since $u^{\prime} \in \I(z_1,z_2)$.
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\thicklines
\begin{picture}(50,50)
\put(10,30){\circle*{1}}
\put(20,40){\circle*{1}}
\put(30,10){\circle*{1}}
\put(30,20){\circle*{1}}
\put(30,30){\circle*{1}}
\put(40,40){\circle*{1}}
\put(50,30){\circle*{1}}
\put(30,10){\line(0,1){20}}
\put(30,30){\line(-1,1){10}}
\put(30,30){\line(1,1){10}}
\put(20,40){\line(1,0){20}}
\put(30,20){\line(-2,1){20}}
\put(30,20){\line(2,1){20}}
\put(10,30){\line(1,1){10}}
\put(50,30){\line(-1,1){10}}
\put(18,42){$y_2$}
\put(40,42){$y_1$}
\put(8,28){$z_2$}
\put(50.5,28){$z_1$}
\put(29,7.5){$u$}
\put(31,29){$u^{\prime}$}
\end{picture}
\end{center}
\caption{The family of convex
sets in this graph is not a poset shelling.}
\end{figure}
\section{Introduction}
In recent decades, micro- and nanomechanical systems have attracted widespread interest in science and technology \cite{Cleland_2003, Schmid_2016}.
They constitute outstanding sensors for force \cite{reinhardt_ultralow-noise_2016}, mass \cite{Chaste_2012}, radiation \cite{Yi_2013}, and temperature \cite{Chien_2018}, to name just a few examples.
Simultaneously, they are promising building blocks for future quantum technologies, such as microwave- or spin-to-optical quantum transducers \cite{Midolo_2018, Kurizki_2015} or quantum memories \cite{pechal_superconducting_2018}.
Low thermomechanical noise, and correspondingly long mechanical coherence times are crucial for these applications, and are typically limited by energy dissipation from the mode of interest.
Considerable effort has therefore gone into designing mechanical resonators with minimal dissipation, leading to great advances in recent decades.
Beyond mitigating all external losses, e.\ g.\ to the surrounding gas or the device substrate, important progress was made in suppressing loss to internal degrees of freedom, such as two-level systems.
By storing the majority of the mechanical mode's energy in a loss-less potential, dissipation dilution \cite{Gonz_lez_1994} has emerged as a successful strategy in this endeavour.
It can be utilized in highly-stressed nanomechanical string and membrane systems, whereby the elongation energy assumes the role of the loss-less potential \cite{Huang_1998, Verbridge_2007, Unterreithmeier_2010, Schmid_2011, Fedorov_2019}.
We have recently introduced an extension of this approach --- soft-clamping --- to engineer mechanical resonance modes particularly conducive to dissipation dilution in phononic crystal membranes \cite{Tsaturyan_2017}.
Soft-clamping has allowed realizing nanomechanical resonators with the highest $Q$-factors ($>10^8$) and $Qf$-products ($>10^{15}\,\mathrm{Hz}$) yet observed at room temperature \cite{Tsaturyan_2017, Ghadimi_2018, Reetz_2019, Catalini_2020}.
For the vast majority of mechanical systems, it is sufficient to consider dissipation in the linear regime, i.\ e.\ when the quality factor is independent of the displacement amplitude \cite{Cleland_2003}.
Some instances of nonlinear dissipation in nanomechanical systems have been reported, for example in nano-resonators made from diamond \cite{Imboden_2013}, carbon nanotubes and graphene sheets \cite{Eichler_2011}, but without providing a clear explanation as to the origin of this effect.
Here, we investigate nonlinear effects in soft-clamped membrane resonators with very high Q-factors.
Whereas the Duffing frequency shift has been observed in stressed nanomechanical resonators before \cite{Fong_2012, Hocke_2014,Defoort_2012},
we focus on nonlinear damping here \cite{Zaitsev_2011, Catalini_2020}.
Starting from a full 3D model (von K\'{a}rm\'{a}n theory), we derive analytical expressions for both the Duffing frequency shift and nonlinear damping, similar to what has been derived for a string [2].
This analysis furthermore reveals strong connections to dissipation dilution, both being linked to geometric nonlinearities.
A series of systematic experiments yields good quantitative agreement with the model for low-order soft-clamped modes.
We thereby establish not only a means to quantitatively predict nonlinear losses --- as relevant e.\ g.\ for parametric sensing protocols \cite{Ko_ata_2020} --- but also introduce a new experimental tool for assessing dissipation dilution.
\section{Model}
We describe the motion of a thin membrane of thickness $h$. The $xy$-plane coincides with the plane of the undeformed membrane.
The displacement of the mass element located at position $\mathbf{r}(x, y, z)$ is quantified by the vector $u_i$, where Latin indices represent the three directions $x, y, z$. The membrane's deformation due to the motion of the mass elements is expressed by the strain tensor $\varepsilon_{ij}=\left(\partial_j u_i+\partial_i u_j+\partial_i u_z\partial_j u_z\right)/2$.
The deformation induces stresses within the structure, described by the stress tensor $\sigma_{ij}$. We consider elastic materials, for which the induced stresses are linear in the strain tensor and Hooke's law holds \cite{Landau1970}.
For thin membranes with no external loads, stress components associated with the $z$-direction are negligible, i.e. $\sigma_{iz}=0$.
The membrane's displacement can then be decomposed into the out-of-plane displacement, $u_z(x,y,z)\equiv w(x,y)$, and the in-plane displacement $u_\alpha(x,y,z)=v_\alpha(x,y)-z\partial_\alpha w(x,y)$, where Greek indices represent the in-plane coordinates $x$ and $y$, and $v_\alpha$ is the in-plane displacement. %
For amplitudes relevant to this work, the in-plane displacement components are negligible with respect to the out-of-plane components (confirmed with FEM simulations). Therefore, we apply the so-called out-of-plane approximation and neglect them \cite{Atalaya_2008}. Furthermore, our model includes a static in-plane deformation, $\varepsilon_0 (x,y)$, representing the static strain required for dissipation dilution. The shear components of the stress tensor, $(\sigma_0)_{xy}$, are negligible in the structures considered. Within these approximations, we can write the strain and stress tensors, according to von K\'{a}rm\'{a}n theory \cite{Landau1970,Atalaya_2008}, as
\begin{align}
\varepsilon_{\alpha\beta}&=\varepsilon_0\delta_{\alpha\beta}-z\partial_{\alpha\beta}w+\frac{1}{2}\partial_{\alpha}w\partial_{\beta}w,\\
\sigma_{\alpha\beta}&=\frac{E}{1-\nu^2}\left[(1-\nu)\varepsilon_{\alpha\beta}+\nu\varepsilon_{\gamma\gamma}\delta_{\alpha\beta}\right],
\end{align}
where $E$ is the Young's modulus, $\nu$ the Poisson's ratio, $\delta_{\alpha\beta}$ the Kronecker delta and the repeated Greek indices are summed over.
To introduce dissipation, we assume a time delay $\tau$ in the stress-strain relation, as a phenomenological model for microscopic relaxation processes intrinsic to the resonator material \cite{Schmid_2016}. For a small and constant time delay (relative to the mechanical period), the stress tensor can be approximated as $\sigma(t)=H[\varepsilon(t+\tau)]\approx H[\varepsilon(t)]+\tau H[\dot{\varepsilon}(t)]$, where $H$ is a linear functional expressing Hooke's law.
The dissipation arises from the additional term proportional to $\tau$. The equation of motion for out-of-plane displacement is
\begin{subequations}
\label{eq:eom_vk}
\begin{align}
\rho h \ddot{w}-\partial_{\alpha\beta}M_{\alpha\beta}-\partial_{\beta}\left(N_{\alpha\beta}\partial_{\alpha}w\right)=0,\label{eq:eom_1}\\
\partial_\beta N_{\alpha\beta}=0,
\end{align}
\end{subequations}
where the stress resultants $N_{\alpha\beta}$ and $M_{\alpha\beta}$ are the shear force components and bending momenta, respectively. They are given by $N_{\alpha\beta}=\int \sigma_{\alpha\beta}dz$ and $M_{\alpha\beta}=\int z\sigma_{\alpha\beta}dz$, where each integral is performed over the membrane thickness. As both stress resultants contain terms proportional to the time delay $\tau$, they generate both linear and nonlinear dissipative processes, as we shall see.
Typically, equations~\eqref{eq:eom_vk} are difficult to solve due to nonlinearity in the displacement $w$. For small $w$, we neglect nonlinear terms and retrieve a solvable linear equation, yielding a set of normal modes $w_\eta(x, y, t)=\phi_\eta(x, y)u_\mu(t)\delta_{\eta\mu}$, where $\eta$ indexes different vibrational modes and we have separated the time-dependent mode amplitude $u_\mu(t)$ from the dimensionless transverse spatial profile $\phi_\eta(x, y)$.
We use this set of transverse modes as a basis to expand a solution of the full nonlinear equation of motion, that is $w(x, y, t)=\phi_\eta(x, y)u_\eta(t)$.
We insert this ansatz in Eq.~\eqref{eq:eom_1}, then project it onto a single transverse mode, $\phi_i$, by applying the Galerkin method together with a single-mode approximation that neglects intermodal coupling \cite{Younis_2011}.
Finally, we obtain the following effective nonlinear equation for the temporal mode $u_i$
\begin{equation}\label{eq:eff_eom_singlemode}
\ddot{u}_i+\Gamma_i \dot{u}_i+\gamma_i^\mathrm{nl}u_i^2\dot{u}_i+\Omega_i^2u_i+\beta_i u_i^3=0,
\end{equation}
which describes a damped Duffing resonator \cite{Hocke_2014, Fong_2012}, including a nonlinear damping term \cite{Catalini_2020, Zaitsev_2011, Polunin_2016, Antoni_2012,Gusso_2020}.
The effective parameters in Eq.~\eqref{eq:eff_eom_singlemode} are defined as
\begin{subequations}
\label{eq:main}
\begin{align}
\Omega_i^2&=m_\text{eff}^{-1}\int\phi_i\left[D\partial_{\alpha\alpha\beta\beta}\phi_i-h\sigma_0\partial_{\alpha\alpha}\phi_i\right]dA,\label{eq:omega}\\
\beta_i&=\frac{k_1}{2}\int\phi_i\left(\partial_{\alpha\beta}\phi_i\partial_{\alpha}\phi_i\partial_{\beta}\phi_i+k_2\partial_{\alpha\alpha}\phi_i\partial_{\beta}\phi_i\partial_{\beta}\phi_i\right)dA\label{eq:beta},\\
\Gamma_i&=\tau D\,m_\mathrm{eff}^{-1} \int\phi_i\partial_{\alpha\alpha\beta\beta}\phi_i dA,\label{eq:gamma}\\
\gamma_i^\mathrm{nl}&=k_1\tau\int\phi_i\left(\partial_{\alpha\beta}\phi_i\partial_{\alpha}\phi_i\partial_{\beta}\phi_i+k_2\partial_{\alpha\alpha}\phi_i\partial_{\beta}\phi_i\partial_{\beta}\phi_i\right)dA\label{eq:gammanl},
\end{align}
\end{subequations}
where the integrals extend over the whole membrane surface,
$m_\text{eff}=\rho h \int \phi_i^2 dA$ is the effective mass,
$D=Eh^3/(12(1-\nu^2))$ the flexural rigidity and we have introduced the constants $k_1=-hE/(m_\mathrm{eff}(1-\nu^2))$ and $k_2=\nu/(1-\nu)$. Notably, the two nonlinear terms are purely geometric effects and they are not introduced by the material itself.
As expected, we find both the linear ($\Gamma_i$) and nonlinear ($\gamma_i^{\mathrm{nl}}$) dissipation proportional to the lag time $\tau$.
In the context of dissipation dilution, the linear dissipation is commonly expressed as
\begin{equation}
\Gamma_i=\frac{1}{D_{Q,i}}\frac{\Omega_i}{Q_\mathrm{intr}},
\end{equation}
where $D_{Q,i}\gg 1$ is the dissipation dilution factor, determined by the geometry of mode $i$ \cite{Gonz_lez_1994, Huang_1998, Verbridge_2007, Unterreithmeier_2010, Schmid_2011, Tsaturyan_2017, Fedorov_2019}.
The resonator's material properties enter via the intrinsic quality factor
$Q_\mathrm{intr}=(\Omega_i \tau)^{-1}$, which we use interchangeably with the material's loss angle $\theta_\mathrm{lin}=Q_\mathrm{intr}^{-1}$.
In dissipation-diluted devices, one expects to measure enhanced (linear) quality factors
\begin{equation}\label{eq:diluted_q}
Q_\mathrm{meas}:=\frac{\Omega_i}{\Gamma_i} = D_{Q,i} Q_\text{intr}=D_{Q,i} \theta_\text{lin}^{-1},
\end{equation}
compared to resonators made from the same material in absence of dissipation dilution (e.\ g.\ unstressed).
In this setting, $D_{Q,i}$ and $Q_\text{intr}$ are not separately accessible through measurement.
Turning to nonlinear effects, we note that the Duffing frequency shift ($\beta_i$) and the nonlinear damping ($\gamma_i^\mathrm{nl}$) depend identically on the mode pattern.
Yet their ratio depends on the lag time $\tau$ and thence on the intrinsic loss.
We therefore introduce the nonlinear loss angle
\begin{equation}\label{eq:def_theta_nl}
\theta_\mathrm{nl}:=\frac{\gamma_i^{\mathrm{nl}}\Omega_i}{2\beta_i},
\end{equation}
which notably depends only on quantities which can be experimentally measured through large-amplitude excitation (see below).
Importantly, eqs.~\eqref{eq:main} suggest
\begin{equation}\label{eq:nl_eq_lin_hyp}
\theta_\mathrm{nl}=\Omega_i \tau=\theta_\mathrm{lin}
\end{equation}
which would imply access to the intrinsic linear damping through measurement of a device's nonlinear properties.
\section{Experimental results}
Our experimental subjects are highly-stressed 3.6~mm~$\times$~3.6~mm soft-clamped $\text{Si}_3\text{N}_4$ membrane resonators \cite{Tsaturyan_2017}, shown in Fig.~\ref{f:fig2}b, operated at room temperature and at pressures below $10^{-7}$~mbar, rendering gas damping negligible.
Since radiation losses of the vibrational defect modes are shielded by the honeycomb phononic crystal pattern, we expect intrinsic material dissipation to dominate and the theoretical framework derived above to apply.
Nonlinear phenomena manifest during free decay as amplitude-dependent damping and an amplitude-dependent shift of the mechanical resonance frequency.
To observe these effects, we employ ringdown techniques and measure mechanical displacement with a fiber-based optical Mach-Zehnder interferometer (Fig.~\ref{f:fig2}a).
After exciting a mode to a large amplitude, we stop driving and monitor the decaying displacement with a heterodyne detector, from which we extract both the displacement amplitude $A_i$ and the phase, and thus the instantaneous frequency $\Omega_i'$.
The displacement amplitude and frequency evolve according to \cite{Catalini_2020}:
\begin{align}
\delta \Omega_i(t)&\equiv \Omega_i-\Omega_i'=\frac{3}{4}\omega_i^{sD}A_i^2(t), \label{eq:backbone}\\
A_i(t)&=\frac{A_{i_0}e^{-\frac{\Gamma_i}{2}t}}{\sqrt{1+\frac{\gamma_i^\mathrm{nl}}{4\Gamma_i}A_{i_0}^2\left(1-e^{-\Gamma_i t}\right)}}. \label{eq:nl-decay}
\end{align}
Equation~\eqref{eq:backbone} is the standard backbone equation \cite{Fong_2012,Hocke_2014}, while eq.~\eqref{eq:nl-decay} describes non-exponential decay, induced by the nonlinear damping \cite{Polunin_2016,Zaitsev_2011}.
We have also introduced the Duffing shift per displacement, $\omega_i^\text{sD}:=\beta_i/(2\Omega_i)$, such that the nonlinear loss angle defined in eq.~\eqref{eq:def_theta_nl} becomes $\theta_\text{nl}^{-1} = 4 \omega_i^\text{sD}/\gamma_i^\text{nl}$.
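To make the roles of the fit parameters concrete, the following sketch evaluates the backbone and nonlinear-decay relations for purely illustrative (not measured) parameter values, and checks the linear limit $\gamma_i^\mathrm{nl}\to 0$.

```python
# Numerical sketch of the ringdown relations; parameter values are
# illustrative only, not fitted data from the experiment.
import math

def amplitude(t, A0, Gamma, gamma_nl):
    """Amplitude during a nonlinear ringdown, eq. (nl-decay)."""
    num = A0 * math.exp(-Gamma * t / 2.0)
    den = math.sqrt(1.0 + (gamma_nl / (4.0 * Gamma)) * A0**2
                    * (1.0 - math.exp(-Gamma * t)))
    return num / den

def freq_shift(A, omega_sD):
    """Instantaneous frequency shift, eq. (backbone): (3/4) * omega_sD * A^2."""
    return 0.75 * omega_sD * A**2

A0, Gamma = 100e-9, 2 * math.pi * 0.1      # 100 nm start, ~0.1 Hz linewidth
gamma_nl, omega_sD = 1e12, 1e13            # illustrative nonlinear parameters

# For gamma_nl -> 0 the decay reduces to the linear exponential:
t = 3.0
assert abs(amplitude(t, A0, Gamma, 0.0)
           - A0 * math.exp(-Gamma * t / 2)) < 1e-12

# Nonlinear damping always reduces the amplitude relative to the linear decay:
assert amplitude(t, A0, Gamma, gamma_nl) < amplitude(t, A0, Gamma, 0.0)

# The calibration-free nonlinear loss angle, theta_nl = gamma_nl/(4 omega_sD):
theta_nl = gamma_nl / (4.0 * omega_sD)
print(theta_nl)  # 0.025
```

Note that $\theta_\text{nl}$ depends only on the ratio $\gamma_i^\text{nl}/\omega_i^\text{sD}$, so any common calibration factor in the displacement cancels.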
\begin{figure}[htb]
\center
\includegraphics[scale=1]{fig2_v12_lc.pdf}
\caption{(a) Optical interferometer for displacement measurement (AOM: acousto-optic modulator, PZT: piezoelectric actuator). The detection is realized with a heterodyne receiver.
(b) Soft-clamped membrane pattern.
(c) Nonlinear amplitude decay for 19-nm-thick membrane and corresponding fit. Linear exponential decay (gray) is extrapolated to highlight deviation arising from nonlinear damping. (d) Duffing frequency shift as a function of displacement amplitude and corresponding fit. The abscissa is the fit result from (c).}
\label{f:fig2}
\end{figure}
In Fig.~\ref{f:fig2}c and d, we show an example of nonlinear amplitude decay and frequency shift as a function of displacement amplitude. The nonlinear parameters are extracted from the best fit.
Importantly, the values of both $\omega_i^\text{sD}$ and $\gamma_i^\text{nl}$ depend on the displacement calibration \cite{Catalini_2020}; their ratio $\theta_\text{nl}$, however, is independent of it.
Notice that deviation from linear decay starts appearing for displacement amplitudes comparable to the membrane thickness \cite{Gusso_2020}, as expected when the bending at the membrane edges has been eliminated (soft-clamping).
From the amplitude decay fit, we extract the linear damping rate $\Gamma_i$ and calculate $Q_\text{meas}$.
We perform 5 ringdown measurements on every individual mode, then average the results.
Ringdowns are discarded if the relative error on a fit parameter, evaluated at a 95~\% confidence interval, exceeds $10$~\%.
For each membrane, we characterize the nonlinear parameters of four defect modes lying in the bandgap from $1.30$~MHz to $1.55$~MHz (see supplementary). Statistics are collected from 6 to 12 nominally identical membranes.
Due to the presence of outliers within this ensemble, we evaluate and report the median and median absolute deviation as robust estimators of the ensemble statistics \cite{Pham_Gia_2001}.
The measurement protocol stated above is repeated for membranes of different thickness, with all nonlinear parameters reported in Fig.~\ref{f:fig3}.
(The values obtained for mode 3 of two membranes were discarded since a negative Duffing shift was observed.)
We simulate the parameters according to eqs.~\eqref{eq:beta} and \eqref{eq:gammanl}, where the transverse profile of each mode is obtained by FEM simulations.
For modes 1 and 2 we find very good agreement for all investigated thicknesses in the range 19-100~nm.
For thicker membranes, we observe excess nonlinear damping in modes 3 and 4, though the Duffing nonlinearity still matches predictions.
Possible origins of this deviation are discussed below.
\begin{figure*}[htb]
\center
\includegraphics[scale=1]{fig3_v12_lc.pdf}
\caption{Nonlinear parameters. (a)-(d) Measured nonlinear parameters as a function of membrane thickness $h$. Blue (green) points are the Duffing (nonlinear damping) parameters, $\omega_i^{sD}$ ($\gamma_i^{nl}$), estimated from the median of each statistical ensemble. Dashed lines represent simulated values for the Duffing (blue) and nonlinear damping (green) parameters.}
\label{f:fig3}
\end{figure*}
By comparing nonlinear and linear losses experimentally, we now examine the hypothesis of eq.~\eqref{eq:nl_eq_lin_hyp}.
In Fig.~\ref{f:fig4}, the measured linear quality factors $Q_\mathrm{meas}$ are plotted against extracted $\theta_\text{nl}^{-1}$.
Per our hypothesis, the measured quality factor cannot exceed the bound $Q_\mathrm{meas}\leq D_Q \theta_\mathrm{nl}^{-1}$, which follows from eq.~\eqref{eq:diluted_q} if eq.~\eqref{eq:nl_eq_lin_hyp} is correct. The dilution factor for each mode is obtained from FEM simulations, as described earlier \cite{Tsaturyan_2017}. This relation should hold independent of material quality, which may vary between fabrication runs.
Linear loss angles of silicon nitride thin films have been extracted from a large number of resonator data reported in the literature \cite{Villanueva_2014}.
The expected range is represented by the gray area, where the gray line is the average value.
If the hypothesis eq.~\eqref{eq:nl_eq_lin_hyp} holds, $\theta_\text{nl}$ (the abscissa in Fig.~\ref{f:fig4}) should also fall in this range.
Although each point should ideally fall between the two lines, additional losses can be introduced during fabrication and handling.
The data points would then have larger loss angle but still lie on the oblique line.
On the other hand, imperfect dissipation dilution (e.~g.~ damaged structures, residual gas damping, or radiation loss) would lead to data points lying below the oblique line.
The regions not explained by these mechanisms are shown as hatched. We observe that the vast majority of the measurements are within the expected region. For the first two modes, most points lie close to the intersection, corroborating our hypothesis. The points' location with respect to the intersection then gives an indication of possible imperfections in the sample.
Nonlinear loss angle measurements for different thicknesses are shown in Fig.~\ref{f:fig4}e--h.
We compare this to the phenomenological model of the linear loss angle as a function of membrane thickness:
\begin{equation}\label{eq:qintr}
\theta_\text{lin}(h)=Q_\mathrm{intr}^{-1}(h)=Q_\mathrm{vol}^{-1}+(\beta_s h)^{-1},
\end{equation}
where $Q_\mathrm{vol}=(2.8\pm 0.2)\times10^4$ and $\beta_s=(60\pm 40)\,\mathrm{nm}^{-1}$ \cite{Villanueva_2014}.
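Evaluating this model with the central literature values quoted above (uncertainties ignored) gives a feel for the expected intrinsic quality factors; the thicknesses chosen below simply span the range studied in this work.

```python
# Phenomenological thickness model, eq. (qintr), with central literature
# values; surface loss dominates for thin membranes.
Q_VOL = 2.8e4        # volume-loss quality factor
BETA_S = 60.0        # surface-loss parameter, in 1/nm

def Q_intr(h_nm):
    """Intrinsic Q from 1/Q_intr = 1/Q_vol + 1/(beta_s * h)."""
    return 1.0 / (1.0 / Q_VOL + 1.0 / (BETA_S * h_nm))

for h in (19, 50, 100):
    print(h, round(Q_intr(h)))
# -> 19 1095, 50 2710, 100 4941
```

The corresponding loss angles $\theta_\text{lin}(h)=Q_\mathrm{intr}^{-1}(h)$ set the scale against which the measured $\theta_\text{nl}$ are compared below.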
Measured $\theta_\text{nl}$ for modes 1 and 2 agree with the model for $\theta_\text{lin}$, within error, supporting our hypothesis. However, the data for modes 3 and 4 show an evident deviation at larger thickness. This is consistent with the excess nonlinear damping observed in Fig.~\ref{f:fig3}c and d. The source of this excess nonlinear damping is unclear.
Since the isolation provided by the phononic shield for these two modes is in general worse \cite{Tsaturyan_2017}, we speculate that this can lead to nonlinear energy exchange mediated by vibrational modes of the supporting silicon frame \cite{patil_thermomechanical_2015}.
\begin{figure*}[htb]
\center
\includegraphics[scale=1]{Fig4_v11_lc.pdf}
\caption{Nonlinear loss angles. (a)-(d) Measured quality factors against $\theta_\text{nl}^{-1}$. The gray line is the expected $\theta_\text{lin}^{-1}$ \cite{Villanueva_2014} and gray area the uncertainty in that value. The black line is the limit $Q_\text{meas}\leq D_Q \theta_\text{lin}^{-1}$ with a dissipation dilution factor $D_Q$ obtained from simulations. The hatched area is inaccessible under the most obvious sources of excess dissipation. (e)-(h) Measured $\theta_\text{nl}^{-1}$ as a function of the membrane thickness.
The stars represent the median of the points shown in the panels above. The gray line is $\theta_\text{lin}^{-1}$ as a function of thickness, as expressed in eq.~\eqref{eq:qintr}, whereas the gray area reflects the uncertainty in the parameters.
\label{f:fig4}
\end{figure*}
Lastly, we characterize the intrinsic losses as a function of the temperature (cf.\ Fig.\ \ref{f:fig5}).
Ringdown measurements were performed on mode 1 of a 19-nm-thick membrane inside a dilution refrigerator, at temperatures ranging from $20$~mK to $1$~K.
We use $100$~nW of optical power at $830$~nm impinging on the membrane, to minimize heating from absorption of optical radiation \cite{Page_2020}.
An additional reference measurement is taken at room temperature.
As reported previously \cite{yuan_silicon_2015, faust_signatures_2014,fischer_optical_2016,Page_2020}, we observe an increase of linear quality factor with decreasing temperature.
We find that $\theta_\text{nl}$ decreases in unison with the linear loss over a temperature span of nearly four orders of magnitude, in line with our hypothesis.
Our new analysis also gives two insights of potential use for further experimental optimization: (a) the saturation of the decrease in $\theta_\text{nl}$ at around 100~mK suggests that the sample might not thermalize properly to lower temperatures; (b) near $10^9$, the measured linear $Q$-factors stop following $\theta_\text{nl}$, suggesting the presence of additional linear, undiluted dissipation.
\begin{figure}[ht!]
\center
\includegraphics[scale=.9]{fig5_v04.pdf}
\caption{Nonlinear loss angles at different temperatures. (a) Measured $\theta_\text{nl}$ as a function of the cryostat temperature, $T_\text{mxc}$. The gray line is a polynomial fit serving as a guide to the eye.
(b) Measured quality factors versus $\theta_\text{nl}$, taken at room temperature (orange) and cryogenic temperatures (blue). The gray line is the expectation value of $\theta_\text{lin}^{-1}$ at room temperature, and the gray area reflects uncertainty \cite{Villanueva_2014}. The black line is the simulated quality factor, from the dissipation dilution factor $D_{Q_1}$. Error bars are the mean absolute deviation among 3 repetitions.}
\label{f:fig5}
\end{figure}
\section{Conclusion}
Our work sheds light on the origin of nonlinear damping in dissipation-diluted nanomechanical resonators.
We have developed an analytic theory based on a continuum elastic model for large deflections of a thin membrane.
The geometric nonlinearity arising from the material elongation modifies both the conservative and the dissipative dynamics, in the form of a Duffing frequency shift and nonlinear damping.
We observe these nonlinear effects in soft-clamped, ultracoherent membrane resonators and find good agreement with our model.
We introduce the nonlinear loss angle $\theta_\text{nl}$ and show that it can be extracted from ringdown measurements without displacement calibration.
Our model hypothesizes $\theta_\text{nl}$ is equal to the linear loss angle, which is otherwise not separately accessible by measurement.
We find substantial evidence supporting this hypothesis across a wide array of mode shapes, geometric parameters, and temperatures.
These insights deepen our understanding of nonlinear behaviour in this important class of nanomechanical resonators, and can guide design of future generations of ultracoherent mechanical sensors \cite{Tsaturyan_2017,Ghadimi_2018, Reetz_2019}, especially with regard to sensing protocols \cite{Ko_ata_2020}.
Finally, the tools developed here yield additional insight in the performance and loss contributions of dissipation-diluted resonators.
\section*{Acknowledgments}
The authors acknowledge Y.~Tsaturyan for sample fabrication and D.~Mason for assistance with the interferometric nonlinear transduction. This work was supported by the Swiss National Science Foundation (grant nr. 177198), the European Research Council project Q-CEOM (grant nr. 638765), the Danish National Research Foundation (Center of Excellence “Hy-Q”), the EU H2020 FET proactive project HOT (grant nr. 732894), and the Novo Nordisk Foundation (grant nr. NNF20OC0061866).
\section{Introduction}
With increasing urbanization, traffic congestion is a significant and costly problem \cite{guerrini2014traffic,mcnew2014}.
While early works proposed to optimize traffic light controllers based on expert knowledge and traditional model-based planning \cite{porche1999adaptive,gershenson2004self,cools2013self}, there are promising recent results on applying flexible model-free methods in reinforcement learning (RL) \cite{sutton2018reinforcement} and deep RL, such as DQN in particular \cite{mnih2015human}, to find optimal policies for traffic light controllers that dynamically respond to real-time traffic conditions \cite{abdulhai2003reinforcement,genders2016using,li2016traffic,wei2018intellilight}.
These works model a single traffic light as a Markov decision process (MDP) equipped with a discrete action space (e.g. signal phase change) and a continuous state space (e.g. vehicle waiting time, queue length), and train a policy to optimize the expected return of an expert-designed reward function.
However, the single-agent RL perspective on traffic control optimization fails to account for the fundamental issue that optimizing global traffic flow over a densely connected road network is a cooperative multi-agent problem, in which independently learning agents have difficulty finding globally optimal solutions.
For example, if an intersection with low vehicle density in the North-South direction selfishly lets East-West traffic flow with little interruption to maximize its own performance, it will cause severe issues for any adjacent intersection that has heavy North-South traffic.
Instead, all traffic light agents must act cooperatively to optimize the global traffic condition while optimizing their own individual reward based on local observations.
On the other hand, existing work that adopt the multi-agent perspective on traffic signal optimization either fall back to independent learning \cite{liu2017cooperative} or resort to centralized optimization of coordinated agents \cite{arel2010reinforcement,van2016coordinated}.
Independent learners \cite{tan1993multi} optimize only their own reward based on local observations, cannot optimize for global criteria (e.g., different priorities for different intersections), and face a nonstationary environment due to other learning agents, which violates the stationarity assumptions of RL algorithms.
While centralized training can leverage global information, it requires maximization over a combinatorially-large joint action space and hence is difficult to scale.
Motivated by these challenges, our paper focuses on deep multi-agent reinforcement learning (MARL) for traffic signal control with the following specific contributions:
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{method/MATL_architecture.pdf}
\caption[one figure.]{QCOMBO architecture combining independent learning of $Q^n(o^n,a^n)$ with centralized training of $Q(s,\abf)$ via a novel consistency loss $L(Q,\lbrace Q^n \rbrace_n)$}
\label{Fig:QCOMBO}
\end{figure}
\textbf{1. Novel objective function combining independent and centralized training.}
We propose QCOMBO, a Q-learning based method with a new objective function that combines the benefits of both independent and centralized learning (\Cref{Fig:QCOMBO}).
The key insight is to learn a global action-value function using the global reward, employ agent-specific observations and local rewards for fast independent learning of local utility functions, and enforce consistency between local and global functions via a novel regularizer.
Global information shapes the learning of local utility functions that are used for efficient action selection.
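The combined objective can be sketched schematically as a weighted sum of a global TD loss, per-agent local TD losses, and the consistency regularizer. The sketch below is an illustration only: the mixing weight `lam` and the sum-of-utilities form of the regularizer are assumptions for exposition, and the paper's exact loss may differ.

```python
# Schematic sketch of a QCOMBO-style objective in a scalar toy setting:
# global TD loss + per-agent local TD losses + consistency regularizer.

def td_loss(q, target):
    return (q - target) ** 2

def qcombo_loss(Q_global, q_locals, td_target_global, td_targets_local, lam):
    L_global = td_loss(Q_global, td_target_global)
    L_local = sum(td_loss(q, t) for q, t in zip(q_locals, td_targets_local))
    # Consistency regularizer: the global value should track the combined
    # local utilities (sum-of-utilities form assumed here for illustration).
    L_cons = (Q_global - sum(q_locals)) ** 2
    return L_global + L_local + lam * L_cons

loss = qcombo_loss(Q_global=1.0, q_locals=[0.4, 0.5],
                   td_target_global=1.2, td_targets_local=[0.5, 0.5],
                   lam=0.1)
print(round(loss, 3))  # (1.0-1.2)^2 + (0.01 + 0.0) + 0.1*(0.1)^2 = 0.051
```

In practice the Q-functions are neural networks and the gradient of the regularizer flows into the local utilities, which is how global information shapes decentralized action selection.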
\textbf{2. Evaluation of state-of-the-art MARL algorithms on traffic signal optimization.}
Recent cooperative MARL algorithms
specifically tackle the case when all agents share a single global reward \cite{foerster2018counterfactual,sunehag2017value,rashid2018a}.
However, as these methods were not designed for settings with individual rewards, it is an open question whether their performance can be surpassed by leveraging such agent-specific information.
While they have shown promise on video game tasks, to the best of our knowledge they have not been tested on the important real-world problem of optimizing traffic signal over a network.
Hence we conducted extensive experiments comparing our algorithm versus independent Q-learning (IQL), independent actor-critic (IAC), COMA \cite{foerster2018counterfactual}, VDN \cite{sunehag2017value} and QMIX \cite{rashid2018a} on a variety of road networks with varying traffic conditions.
\textbf{3. Generalizability of traffic light control policies.}
To the best of our knowledge, we conduct the first investigation on the generalizability and transferability of deep MARL policies for traffic signal control.
Reinforcement learning methods are especially suitable for dynamic traffic light control since the transfer of a policy learned in simulation to real-world execution is arguably more feasible than in other domains (e.g. robotics).
Similar to domains where deep RL excels \cite{mnih2015human}, each traffic light has a small set of discrete actions for a signal phase change, which poses negligible issues for sim-to-real transfer.
Given improvements in sensor technology, measurements of traffic conditions can be increasingly accurate and real-world measurements can approach ideal simulated data.
Powerful model-free RL methods also do not require an accurate transition model that predicts traffic flow.
Hence, there is strong motivation to investigate whether a decentralized policy trained with simulated traffic approximating real-world conditions can be transferred to larger networks and different traffic conditions without loss of performance.
\section{Related Work}
Early work demonstrated the application of RL to single traffic light control \cite{abdulhai2003reinforcement}.
The success of deep RL has spurred recent works that incorporate high dimensional state information into a more realistic problem definition \cite{genders2016using,mousavi2017traffic,wei2018intellilight,liang2018deep},
which we further extend
to define the observation and action spaces of our new multi-agent setting.
Various choices of the reward function were proposed for training a single traffic light agent
\cite{balaji2010urban,chin2011q,mousavi2017traffic}.
We extended the definition of a single-agent reward \cite{wei2018intellilight}
by defining the global reward as a weighted sum of individual rewards using the PageRank algorithm \cite{page1999pagerank}.
Previous work on multi-agent traffic light control mostly relied on independent Q-learning (IQL) with heuristics to account for nonstationarity and coordination, such as:
single-agent Q-learning for a central intersection surrounded by non-learning agents \cite{arel2010reinforcement};
applying a Q function learned on a sub-problem to the full problem with max-plus action-selection \cite{van2016coordinated};
training only one agent during each episode while fixing other agents' policies \cite{liu2017cooperative};
sharing information among neighboring Q-learning agents \cite{el2013multiagent,liu2017intelligent}.
These approaches do not account for the importance of macroscopic measures of traffic flow \cite{geroliminis2007macroscopic}.
In contrast, our formulation explicitly shapes the learning of individual agents via a global reward.
Recent work proposed more sophisticated deep MARL algorithms for cooperative multi-agent problems with a global reward \cite{sunehag2017value,foerster2018counterfactual,rashid2018a}, under the paradigm of centralized training with decentralized execution \cite{bernstein2002complexity}.
However, these methods only learn from a global reward without using available individual rewards,
which motivates our proposal for a simple yet effective way to combine individual and centralized training.
To the best of our knowledge, these algorithms have yet to be evaluated and compared in the real-world problem of multi-agent traffic light control, which we do as part of our main contributions.
\section{Markov game model of multi-agent traffic light control}
We formulate the multi-agent traffic light control problem as a partially-observed Markov game $\langle S, \lbrace O \rbrace^n, \lbrace A \rbrace^n, P, R, N, \gamma \rangle$, consisting of $N$ agents labeled by $n \in [1..N]$, defined as follows:
\textbf{Agents}
$n \in [1..N]$.
Each agent controls the phase of one traffic light at an intersection.
\textbf{Observation space}
$O^n$.
Since all traffic lights have the same measurement capabilities, all agents' observation \textit{spaces} $O := O^1 = \dotsm = O^N$ have the same definition.
Each agent's individual observation \textit{vector} $o^n \in O$ depends on its own local traffic flow,
with the following components: $q^n\in\R^l$, $v^n\in\R^l$, $wt^n\in\R^l$, $delay^n\in\R^l$ (for $l$ incoming lanes at a traffic light), $ph^n\in\R^2$, and $d^n\in\R$, defined as:
\begin{itemize}[leftmargin=*]
\item $q^n$: the queue length on each incoming lane, defined as the total number of halting vehicles (speed below 0.1m/s);
\item $v^n$: the number of vehicles on each incoming lane;
\item $wt^n$: the average waiting time of all vehicles on each incoming lane, defined as the time in minutes a vehicle has spent at a speed below 0.1m/s since it was last faster than 0.1m/s;
\item $delay^n$: the average delay of all vehicles on each incoming lane, where the delay of a lane equals $1 - (\text{average vehicle speed})/(\text{maximum allowed vehicle speed})$;
\item $ph^n$: the traffic light's current phase, indicating the status of the east-west and north-south directions, represented by a one-hot variable $ph^n: EW\times NS\mapsto \lbrace 0, 1 \rbrace^2$;
\item $d^n$: the phase duration in seconds since the last phase change.
\end{itemize}
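Concretely, the local observation is the concatenation of these components. The sketch below assembles $o^n$ from hypothetical measurements, assuming $l=4$ incoming lanes; all values and the helper name are illustrative, not from our implementation:

```python
import numpy as np

def build_observation(q, v, wt, delay, phase_ew, duration):
    """Concatenate per-lane measurements (q, v, wt, delay, each in R^l),
    the one-hot phase ph in {0,1}^2, and phase duration d in R into o^n."""
    ph = np.array([1.0, 0.0]) if phase_ew else np.array([0.0, 1.0])
    o = np.concatenate([q, v, wt, delay, ph, [duration]])
    assert o.shape == (4 * len(q) + 3,)
    return o

# Hypothetical measurements for one intersection with l = 4 incoming lanes:
q = np.array([3., 0., 5., 1.])          # halting vehicles per lane
v = np.array([7., 2., 9., 4.])          # vehicles per lane
wt = np.array([0.5, 0., 1.2, 0.1])      # avg waiting time (minutes)
delay = np.array([0.4, 0.1, 0.7, 0.2])  # 1 - speed / speed limit
o_n = build_observation(q, v, wt, delay, phase_ew=True, duration=12.0)
print(o_n.shape)  # (19,)
```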
Prior work used an image representation of the positions of all vehicles near a traffic light and required convolutional networks \cite{wei2018intellilight}.
In contrast, we show this is not necessary and hence significantly reduce computational cost.
\textbf{Global state space}
$S$ contains all global information including every route and traffic light.
Global state $s \in S$ is the concatenation of all local observation vectors.
\textbf{Action space}
$A^n$.
Since all traffic controllers have the same capabilities, $A := A^1 = \dotsm = A^N$.
Extension to controllers of different types is easily done by learning separate $Q$ functions for each type.
The action $a^n$ of each agent is a binary variable indicating whether the traffic light keeps its current phase or switches to the other phase.
This definition is sufficient for policy learning because the agent's current phase is included in its own observation vector.
The game has joint action space $\A \equiv A^1\times\dotsm\times A^N$.
Agents produce joint action $\abf := (a^1,\dotsc,a^N) \in \A$ at each time step.
Let $\abf^{-n}$ denote all actions \textit{except} that of agent $n$.
\textbf{Individual reward}
$R^n(s,\abf) : S \times \A \mapsto \R$.
We base our individual reward on previous work that used weighted route features as the reward in the single-agent traffic light setting \cite{van2016coordinated,wei2018intellilight}.
There are seven features used to calculate the individual reward (\Cref{app:reward-definition}).
The individual reward is a combination of these meaningful features that capture many intuitive metrics of desirable and undesirable traffic conditions.
\textbf{Global reward}
$R^g$, defined as a weighted sum of individual rewards.
We explored different methods to compute these weights.
We used the PageRank algorithm to compute weights on each individual reward \cite{page1999pagerank}, since traffic intersections with higher risk of congestion are generally located in the central areas of the map that have higher interactions with surrounding traffic, and therefore should receive higher priority.
Hence $R^g(s,\abf) := \sum_{n=1}^N k_n R^n(s,\abf)$, where $k_n = \text{PageRank}(n)$.
While we considered using the traffic flow conditions under a fixed control policy to compute the weights for each traffic light, this is not a good choice since an arbitrary suboptimal nominal policy may produce a poor estimate of the weights.
In contrast, the PageRank algorithm accounts for the topological structure of the transportation network, addresses the connectivity and interaction between agents, and assigns higher weights to the rewards of highly-connected traffic lights.
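As an illustration of this weighting scheme, the following pure-Python power-iteration sketch computes PageRank weights from a hypothetical $2\times2$ intersection adjacency and forms $R^g = \sum_n k_n R^n$; the damping factor and iteration count are conventional defaults, not values from our implementation:

```python
import numpy as np

def pagerank_weights(adj, damping=0.85, iters=100):
    """Power iteration for PageRank over the intersection graph.
    adj[i][j] = 1 if intersection i has a road to intersection j.
    Assumes no dangling nodes (every intersection has an outgoing road)."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    # Column-stochastic transition matrix (out-links normalized per row).
    M = (A / A.sum(axis=1, keepdims=True)).T
    k = np.full(n, 1.0 / n)
    for _ in range(iters):
        k = (1 - damping) / n + damping * M @ k
    return k / k.sum()  # normalize so weights sum to 1

# 2x2 grid of intersections: each node linked to its two grid neighbours.
adj = [[0, 1, 1, 0],
       [1, 0, 0, 1],
       [1, 0, 0, 1],
       [0, 1, 1, 0]]
k = pagerank_weights(adj)                   # symmetric grid -> uniform 0.25
r_local = np.array([-1.0, -2.0, -0.5, -1.5])  # individual rewards R^n
r_global = float(k @ r_local)               # R^g = sum_n k_n R^n
```

On this symmetric grid the weights are uniform; on real road networks, highly connected central intersections receive larger weights.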
\textbf{Evaluation criteria.}
Given a reward function designed with sufficient expert domain knowledge and specified with enough precision to disambiguate different traffic states, we can investigate the performance of state-of-the-art MARL algorithms by directly evaluating them using the cumulative reward and components of the reward.
Hence we do not resort to manual inspection of policy behavior, in contrast to previous work where certain states were aliased (i.e. produced the same reward) and manual inspection of the policy was required \cite{wei2018intellilight}.
\section{Multi-agent Reinforcement Learning Algorithms}
In this section, we give an overview of early and recent MARL algorithms, focusing on their respective strengths and weaknesses, with details in \Cref{app:algs}.
We use them as baselines for our experiments.
\textbf{IQL and IAC.}
Independent Q-learning (IQL) and independent actor-critic (IAC) have demonstrated surprisingly strong performance in complex multi-agent systems \cite{tan1993multi,tampuu2017multiagent,foerster2018counterfactual,rashid2018a}.
For each agent $n$, IQL directly applies single-agent Q-learning to learn a local utility function $Q^n(o^n,a^n)$, which is not a true action-value function because the presence of other learning agents results in a nonstationary environment from any single agent's perspective.
Similarly, IAC directly applies the single-agent policy gradient \cite{sutton2000policy} to train an actor-critic pair for each agent, resulting in actors $\pi^n(a^n|o^n)$ and critics $Q^n(o^n,a^n)$.
While IQL and IAC agents display strong ability to optimize individual rewards \cite{tan1993multi,yang2018cm3}, the lack of global information and a mechanism for cooperation means they are likely to settle for sub-optimal solutions.
\textbf{COMA.}
In cooperative MARL with a single global reward, COMA \cite{foerster2018counterfactual} estimates a centralized action-value function $Q^{\pibf}(s,\abf)$ to compute a counterfactual advantage function for a multi-agent policy gradient.
Their formulation is a low variance gradient estimate, as the advantage function evaluates the contribution of an agent's chosen action $a^n$ versus the average of all possible counterfactuals $\hat{a}^n$, keeping other agents' $a^{-n}$ fixed.
However, since the only learning signal comes from the global reward and individual agents are not directly trained to improve local performance, COMA may exhibit slower training in cooperative traffic light control.
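The counterfactual advantage can be sketched for a single agent with two actions, keeping the other agents' actions fixed; all Q values and the policy below are illustrative:

```python
import numpy as np

def coma_advantage(Q_s, pi_n, a_n):
    """Counterfactual advantage for agent n with a^-n held fixed:
    A^n = Q(s, (a^n, a^-n)) - sum_{a'} pi^n(a'|o^n) Q(s, (a', a^-n)).
    Q_s[a'] holds Q(s, (a', a^-n)) for each counterfactual a' of agent n."""
    baseline = float(np.dot(pi_n, Q_s))
    return float(Q_s[a_n] - baseline)

Q_s = np.array([2.0, 4.0])     # Q(s, (a', a^-n)) for a' in {0, 1}
pi_n = np.array([0.25, 0.75])  # agent n's policy over its two actions
adv = coma_advantage(Q_s, pi_n, a_n=1)   # chosen action a^n = 1
```

Here the baseline is $0.25\cdot 2 + 0.75\cdot 4 = 3.5$, so the chosen action's advantage is $0.5$.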
\textbf{VDN.}
While IQL agents cannot learn to cooperate for a global reward, it is also not feasible to learn a single optimal action-value function $Q^*(s,\abf)$ since the maximization step requires searching over $|\Acal|^N$ joint actions.
Instead, VDN \cite{sunehag2017value} learns a joint action-value function that decomposes as $Q^{\text{VDN}}(s,\abf) := \sum_{n=1}^N Q^n(o^n,a^n)$, so that agents act greedily with respect to their own utility functions, while global reward is used for overall training.
However, there is no guarantee in general settings that the true optimal $Q^*(s,\abf)$ can be decomposed as a linear combination of individually-optimized utilities, which could limit VDN's performance.
\textbf{QMIX.}
QMIX \cite{rashid2018a} generalizes VDN by representing the optimal action-value function as a nonlinear function $Q^*(s,\abf) = F(Q^1,\dotsc,Q^N)$ of individual utility functions, while ensuring that the combination of individual $\argmax$ on each $Q^n$ yields the same joint action as a global $\argmax$ on $Q^*(s,\abf)$.
This is achieved by enforcing positive weights in the nonlinear mixing network $F$.
Despite being a more expressive model than VDN, the stability of QMIX depends on appropriate choices of mixing network architecture, for which there is little theoretical guidance, and QMIX also relies on global reward without using local reward for training.
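To illustrate the monotonicity constraint, the toy mixer below forces positive weights with an absolute value (our simplification; actual QMIX generates state-dependent weights with a hypernetwork) and checks that decentralized greedy actions coincide with the joint $\argmax$:

```python
import numpy as np

def monotonic_mix(qs, w, b):
    """Toy QMIX-style mixer: positive weights guarantee dQ_tot/dQ^n >= 0,
    so per-agent argmax on each Q^n matches the joint argmax on Q_tot."""
    return float(np.abs(w) @ qs + b)

# Two agents, two actions each: utility tables Q^n(o^n, a^n).
Q1 = np.array([1.0, 3.0])   # agent 1 prefers action 1
Q2 = np.array([2.0, 0.5])   # agent 2 prefers action 0
w, b = np.array([-0.7, 1.3]), 0.2   # raw mixing weights (sign removed by abs)

# Greedy decentralized actions:
a_dec = (int(Q1.argmax()), int(Q2.argmax()))
# Exhaustive joint argmax of the mixed value:
joint = {(a1, a2): monotonic_mix(np.array([Q1[a1], Q2[a2]]), w, b)
         for a1 in range(2) for a2 in range(2)}
a_joint = max(joint, key=joint.get)
print(a_dec == a_joint)  # True: monotonic mixing preserves greedy consistency
```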
\section{Method}
We propose QCOMBO, a novel combination of centralized and independent learning with coupling achieved via a new consistency regularizer.
We optimize a composite objective consisting of three parts: an individual term based on the loss function of independent DQN, a global term for learning a global action-value function, and a shaping term that minimizes the difference between the weighted sum of individual Q values and the global Q value.
This algorithm ensures that agents cooperate to maximize the global reward, which is difficult for independent learning agents to achieve, and also maintain the ability to optimize their individual performance using agent-specific observations and rewards, which is more efficient than a purely centralized approach.
\subsection{Individual Part}
Using individual observations and rewards for each agent is computationally efficient, since local observations generally have lower dimension than global state information, and also algorithmically efficient, since it avoids the difficult credit assignment problem of decomposing a single global reward signal for each agent to learn.
Furthermore, by optimizing individual utility functions $Q^n$ instead of a global optimal Q function, we reduce the maximization problem at each step of Q-learning from $O(|\Acal|^N)$ to $O(N|\Acal|)$.
Parameterizing the local utility function for agent $n$ with parameter $\theta^n$, we minimize the loss
\begin{align}
\mathcal{L}(\theta^n) &= \frac{1}{N} \sum_{n=1}^N \E_{\pibf} \Bigl[ \frac{1}{2}(y^n_t - Q^n(o^n_{t}, a^n_t; \theta^n))^2 \Bigr] \label{eq:loss-Q-individual} \\
y^n_t &= r^n_t + \gamma \max_{\ahat^n}Q^n(o^n_{t+1},\ahat^n; \hat{\theta}^n) \label{eq:td-target-individual}
\end{align}
Since the agent population is homogeneous (i.e. all agents have the same observation and action spaces), we improve memory and computational efficiency by employing \textit{parameter-sharing} \cite{foerster2018counterfactual} among all agents, which means $\theta := \theta^n, \forall n \in [1..N]$.
Agents still act differently since they receive different observations, and we further give an agent indicator as input for agent disambiguation.
$\hat{\theta}$ are the parameters of a target network \cite{mnih2015human}.
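A tabular sketch of the individual TD target and loss, with one shared Q-table standing in for the parameter-shared network (all values illustrative):

```python
import numpy as np

gamma = 0.9

def td_target(r, o_next, Q_target):
    """y^n_t = r^n_t + gamma * max_a Q^n(o^n_{t+1}, a) under target params."""
    return r + gamma * Q_target[o_next].max()

def individual_loss(batch, Q, Q_target):
    """Mean over transitions of 0.5 * (y^n - Q^n(o^n, a^n))^2, with one
    shared table Q standing in for the parameter-shared network."""
    errs = [td_target(r, o2, Q_target) - Q[o][a] for (o, a, r, o2) in batch]
    return 0.5 * float(np.mean(np.square(errs)))

# Tabular stand-in: 3 observations x 2 actions, shared across agents.
Q = np.array([[0.0, 1.0], [0.5, 0.2], [2.0, 0.0]])
Q_tgt = Q.copy()                             # target parameters \hat{theta}
batch = [(0, 1, -1.0, 2), (1, 0, -0.5, 0)]   # (o, a, r, o') per agent
loss = individual_loss(batch, Q, Q_tgt)
```

For the first transition, $y = -1 + 0.9\cdot 2 = 0.8$; for the second, $y = -0.5 + 0.9\cdot 1 = 0.4$, giving a loss of $0.0125$.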
\subsection{Global Part}
Without global information, independently learning agents face a nonstationary environment due to the presence of other learning agents, and they may have insufficient information to find cooperative optima.
On the other hand, training an optimal global Q function is not scalable, since the Q-learning step would require maximization over $|\Acal|^N$ possible joint actions for $N$ agents.
To address this dilemma, our key insight is that we can learn the global Q function \textit{under the joint policy induced by all agents' local utility functions}, rather than learn the \textit{optimal} global Q function, and use it to shape the learning of individual agents via information in global state $s$ and global reward $R^g$.
Specifically, the joint policy defined by $\abf \sim \pibf(\abf|s) = \lbrace \argmax_{a^n} Q^n(o^n,a^n) \rbrace_{n=1}^N$ is associated with a global action-value function (letting $R^g_t := R^g(s_t,\abf_t)$):
\begin{align}
Q^{\pibf}(s,\abf) := \Ebb_{\pibf} \Bigl[ \sum_{t=0}^{\infty} \gamma^t R^g_t \mid s_0=s,\abf_0=\abf \Bigr]
\end{align}
Parameterizing $Q^{\pibf}_w(s,\abf)$ with $w$, we minimize the loss:
\begin{align}
&\Lcal(w) = \Ebb_{\pibf} \Bigl[ \frac{1}{2}\bigl( y_t - Q^{\pibf}_w(s_t,\abf_t) \bigr)^2 \Bigr] \label{eq:loss-Q-global} \\
&y_t = R^g_t + \gamma Q^{\pibf}_{\hat{w}}(s', \abf')\vert_{a'^n = \argmax_{a^n} Q^n_{\hat{\theta}}(o'^n,a^n)} \label{eq:td-target-global}
\end{align}
where we let $(\cdot)' := (\cdot)_{t+1}$.
Crucially, action selection for computing the TD target \eqref{eq:td-target-global} uses the greedy action from local utility functions and does not use the global Q function.
The collection of local utility functions induces a joint policy $\pibf$ that generates data for off-policy learning of the global action-value function $Q^{\pibf}$.
$\hat{w}$ are target network parameters.
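The decentralized action selection inside the global TD target can be sketched with tabular stand-ins for $Q^{\pibf}_{\hat w}$ and the local target utilities (all values illustrative):

```python
import numpy as np

gamma = 0.9

def global_td_target(r_g, s_next, obs_next, Q_global_tgt, Q_local_tgt):
    """y_t = R^g_t + gamma * Q^pi_{w_hat}(s', a'), where each a'^n is the
    greedy action of agent n's local target utility -- the global Q is
    never maximized over the exponentially large joint action space."""
    a_next = tuple(int(Q_local_tgt[o].argmax()) for o in obs_next)
    return r_g + gamma * Q_global_tgt[s_next][a_next]

# 2 agents, 2 actions: Q^pi(s, (a^1, a^2)) indexed by state, then joint action.
Q_global_tgt = {1: np.array([[0.0, 1.0],
                             [3.0, 0.5]])}
Q_local_tgt = np.array([[0.2, 0.9],    # obs 0: greedy action 1
                        [1.5, 0.1]])   # obs 1: greedy action 0
y = global_td_target(r_g=-2.0, s_next=1, obs_next=(0, 1),
                     Q_global_tgt=Q_global_tgt, Q_local_tgt=Q_local_tgt)
```

Here the greedy joint action is $(1,0)$, so $y = -2 + 0.9\cdot 3 = 0.7$.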
\subsection{Combined objective}
If each agent greedily optimizes its own local utility function, the global return can be suboptimal.
For example, if agent $n$ (with low weight $k^n$) has no flow in the N-S direction while adjacent agent $m$ (with high weight $k^m$) has heavy flow in the N-S direction, the individual optimal policy for $n$ is to let E-W traffic flow continuously to $m$, which negatively impacts conditions at $m$ and leads to low global reward.
This is supported by experimental results in a $1\times2$ network.
To address the suboptimality of independent learning, we propose a new consistency regularization loss
\begin{align}\label{eq:regularizer}
\Lcal_{reg} := \Ebb_{\pibf} \Bigl[ \frac{1}{2}\bigl( Q^{\pibf}_w(s,\abf) - \sum_{n=1}^N k^n Q^n_{\theta}(o^n,a^n) \bigr)^2 \Bigr]
\end{align}
between global $Q^{\pibf}_w$ and individual utility functions $Q^n_{\theta}$.
Since $Q^{\pibf}_w$ is the true global action-value function with respect to the induced joint policy, this regularization brings the weighted sum of individual utility functions closer to global expected return, so that the optimization of individual utility functions is influenced by the global objective rather than purely determined by local information.
Hence the regularizer prevents any individual agent from attaining high individual performance at the cost of collective performance.
The complete QCOMBO architecture (\Cref{Fig:QCOMBO}) combines the individual loss \eqref{eq:loss-Q-individual}, global loss \eqref{eq:loss-Q-global}, and consistency regularizer \eqref{eq:regularizer} into the overall objective:
\begin{equation}\label{eq:overall}
\begin{split}
\Lcal_{tot}(w, \theta) &= \Lcal(w) + \Lcal(\theta) + \lambda \Lcal_{reg}
\end{split}
\end{equation}
where $\lambda$ controls the extent of regularization.
Whenever any agent attains high individual reward at the cost of global performance, which is likely when minimizing only the individual loss, the consistency loss increases to reflect the inconsistency between individual and global performance; it then decreases once global information influences the learning of the individual $Q^n$.
Our experiments provide evidence of this dynamic learning process that balances individual and global learning (\Cref{fig:consistency-loss}).
Since the third term is a regularizer, which in general is not necessarily zero at convergence, \eqref{eq:overall} does not force $Q^{\pibf}_w$ to equal the weighted sum of all $Q_{\theta}$.
Even at equality, agents still retain cooperation and are not independent because $Q^{\pibf}_w$ is trained using the total reward and \eqref{eq:regularizer} weighs each agent by $k^n$.
Optimizing $\Lcal(w)$ by itself does not enable action selection due to combinatorial explosion of the joint action space; optimizing $\Lcal(\theta)$ alone amounts to IQL; and, crucially, removing our novel regularization term from \eqref{eq:overall} would decouple the global and individual losses and reduce \eqref{eq:overall} to IQL.
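Assembled on a single sample, the three terms of the objective look as follows (Q values, targets, and weights are illustrative):

```python
import numpy as np

def qcombo_loss(y_g, q_global, y_ind, q_ind, k, lam):
    """L_tot = L(w) + L(theta) + lambda * L_reg on one sample.
    y_g, q_global : global TD target and Q^pi_w(s, a)
    y_ind, q_ind  : per-agent TD targets and Q^n(o^n, a^n)
    k             : PageRank weights; lam : regularization strength."""
    L_w = 0.5 * (y_g - q_global) ** 2
    L_theta = np.mean(0.5 * (np.asarray(y_ind) - np.asarray(q_ind)) ** 2)
    L_reg = 0.5 * (q_global - np.dot(k, q_ind)) ** 2
    return float(L_w + L_theta + lam * L_reg)

k = np.array([0.5, 0.5])    # symmetric 2-agent weights
loss = qcombo_loss(y_g=1.0, q_global=0.8,
                   y_ind=[0.4, 0.6], q_ind=[0.5, 0.3], k=k, lam=0.1)
```

With these numbers, $L(w)=0.02$, $L(\theta)=0.025$, and $\lambda L_{reg}=0.1\cdot 0.08=0.008$, for a total of $0.053$.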
At each training step, we interleave the updates to $\theta$ and $w$ to exchange information between the global and individual parts, allowing each agent to learn a policy that considers the effects of other agents' learning.
The parameters $w$ and $\theta$ are updated by gradient descent (derived in \Cref{app:gradients}):
\begin{equation}
\begin{split}
&\nabla_{w}\mathcal{L}_{tot} =
-E_{\pibf} \Bigl[ \bigl[ Y -(1 + \lambda)Q^{\pibf}_w(s_t, \abf_t) \\
&+ \lambda \sum_{n}k^{n}Q^{n}_{\theta}(o^n_t,a^n_t) \bigr] \nabla_{w} Q^{\pibf}_w(s_t, \abf_t) \Bigr] \\
&Y := R^g_t + \gamma Q^{\pibf}_{\hat{w}}(s', \abf')\vert_{\abf' = \lbrace \argmax_{a^n} Q^n_{\hat{\theta}}(o'^n,a^n) \rbrace_{n=1}^N}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
& \nabla_{\theta}\mathcal{L}_{tot} =
-\Ebb_{\pibf} \Bigl[ \sum_{n=1}^N \Bigl( Z_1 + \lambda k^n Z_2 \Bigr) \nabla_{\theta} Q^n_{\theta}(o^n_t,a^n_t) \Bigr] \\
&Z_1 := \frac{1}{N} \left( R^n_t + \gamma \max_{\ahat^n} Q^n_{\hat{\theta}}(o'^n,\ahat^n) - Q^n_{\theta}(o^n_t,a^n_t) \right) \\
&Z_2 := Q^{\pibf}_w(s_t,\abf_t) - \sum_{m=1}^N k^m Q^m_{\theta}(o^m_t,a^m_t)
\end{split}
\end{equation}
\section{Experimental Setup}
We evaluated the performance of our method against a large set of baselines (described in \Cref{app:algs}) on multiple road networks under a variety of traffic conditions in the SUMO simulator \cite{wu2017flow,SUMO2018}.
We describe all key experimental setup details in this section.
\Cref{sec:results} provides detailed analysis of each algorithm's performance.
For each algorithm, we report the mean of five independent runs, with standard deviation reported in \Cref{table_conditions}.
\subsection{Environment}
We used the Flow framework \cite{wu2017flow} with the SUMO simulator.
We used homogeneous vehicles of the same type and size in all experiments.
Extension to heterogeneous vehicles requires no modification to our algorithm, but only redefinition of observation vectors (e.g., a longer vehicle takes up two units in queue length).
Road networks are defined as the intersection of $m$ horizontal and $n$ vertical roads (e.g. a $1\times2$ network has one horizontal route intersecting two vertical routes).
Each traffic light situated at an intersection is a learning agent.
Each road between two intersections is 400m long and has two lanes with opposite directions of travel.
Hence each agent has one incoming and one outgoing lane for each edge.
Vehicles are emitted at the global outer boundaries of each edge with random starting lane and fixed entering speed.
We used different traffic flow programs, which vary the number of vehicles per hour over specific time periods.
\Cref{table:config} contains all traffic configurations.
At each time step, the traffic light is green exclusively for either the horizontal or the vertical direction.
\begin{figure}
\begin{minipage}[c]{0.44\textwidth}
\minipage{0.4\textwidth}
\subfigure[][]{
\label{fig:intersection}
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/1.png}}
\endminipage\hfill
\minipage{0.4\textwidth}
\subfigure[][]{
\label{fig:1by2}
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/1-2.png}}
\endminipage\hspace{0.1cm}
\vfill
\minipage{0.4\textwidth}
\subfigure[][]{
\label{fig:2by2}
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/2-2.png}}
\endminipage\hfill
\minipage{0.4\textwidth}
\subfigure[][]{
\label{fig:6by6}
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/6-6.png}}
\endminipage\hspace{0.05cm}
\caption[A set of four subfigures.]{Grid topologies used:
\subref{fig:intersection} single traffic light;
\subref{fig:1by2} 2 traffic lights;
\subref{fig:2by2} $2\times2$ traffic lights;
\subref{fig:6by6} $6\times6$ traffic lights.}%
\label{Fig:duel}%
\end{minipage}
\hfill
\begin{minipage}[c]{0.53\textwidth}%
\begin{tabular}[c]{@{}c@{}|@{}c@{}|@{}c@{}@{}c@{}}
\hline
Program&Time Period& \multicolumn{2}{c}{Num Vehicles/Hour}\\
And Topology&A:train&Horizontal&Vertical\\
&B:test&(Bot to Top)&(Left to Right)\\\hline
$1\times2$ two Agents & A:0-12000 & 700 & 10, 620 \\ \hline
$2\times2$ Same State & A:0-12000 & 700, 700 & 700, 700 \\ \hline
Generalization & A:0-12000 & 700, 280 & 10, 620 \\
different flow 1 & B:0-2000 & 700, 280 & 10, 620 \\
$2\times2$ & B:2000-4000 & 1000, 800 & 900, 700 \\
& B:4000-6000 & 1400, 1000 & 400, 900 \\ \hline
Generalization & A:0-12000 & 1000, 580 & 110, 920 \\
different flow 2 & B:0-2000 & 1000, 580 & 110, 920 \\$2\times2$
& B:2000-4000 & 1000, 800 & 900, 700 \\
& B:4000-6000 & 1400, 1000 & 400, 900 \\ \hline
Generalization & A:0-12000 & 700, 700 & 700, 700 \\
different flow 3 & B:0-2000 & 700, 700 & 700, 700 \\
$2\times2$ & B:2000-4000 & 1000, 800 & 900, 700 \\
& B:4000-6000 & 1400, 1000 & 400, 900 \\ \hline
Train on $2\times2$ & A:0-12000 & 700, 700 & 700, 700 \\
test on $6\times6$ & B:0-12000 & 700,280, & 10,620, \\
& &260,240,&620,50,\\
& &780,200&90,700\\
\hline
\end{tabular}\\
{\captionof{table}{Network topologies, flow schedules and flow \\rates in all training and testing configurations}
\label{table:config}}
\end{minipage}
\vspace{-7pt}
\end{figure}
\textbf{Uniform flow on symmetric $2\times2$ network.}
The first road network is a symmetric environment with $N=4$ traffic light agents defined by a $2\times2$ network (\Cref{fig:2by2}).
Vehicle emissions from all route boundaries are equal and uniform throughout the entire training horizon.
Due to symmetry, all agents' rewards are weighted equally, and all agents should learn the same policy or value functions.
\textbf{Non-uniform traffic flow on $1\times2$ network.}
The second traffic control scenario has more challenging traffic dynamics.
One horizontal route and two vertical routes form two intersections (\Cref{fig:1by2}).
The horizontal route has 700 vehicles/hour, the left vertical route has only 10 vehicles/hour, while the right vertical route has 620 vehicles/hour.
This is a common real-world scenario where one main arterial road with dense traffic is adjacent and parallel to a smaller local road with sparse traffic.
The greedy strategy for the left traffic light is to let vehicles in the E-W direction pass almost all the time, which would lead to high incoming E-W traffic for the second traffic light that already has heavy N-S traffic.
Hence cooperative learning is necessary for the left light to close the E-W route to some extent, to optimize global performance.
\textbf{Generalization To Different Flows.}
We investigated how well policies learned by each algorithm in one traffic condition generalize to different traffic conditions without further training.
This is crucial for real-world applicability since training on every possible traffic condition is not feasible.
In the first experiment, we trained a policy under a \textit{static} traffic flow in the $2\times2$ network, but tested it on three consecutive equal-duration time periods with \textit{different vehicle densities}.
Denoting the flow as a vector $f:=(\text{bot, up, left, right}) \in \Rbb^4$ specifying the number of vehicles per hour on each vertical and horizontal route, the traffic flows in the second and third test periods are $f_{t_2}:= (1000,800,900,700)$ and $f_{t_3}:=(1400,1000,400,900)$ (\Cref{table:config}, third row).
The first test period has the same traffic flow as the training phase.
We further investigated the extent to which generalization performance is affected by the specific traffic condition used in training.
Specifically, we trained separate QCOMBO policies with \textit{different flows} in the same network topology (one policy per training flow), and tested all policies under the \textit{same} flow.
We denote the $i$-th train-test combination ($i = 1,2,3$) as a 4-component sequence $F^i = \lbrace f^i_{t_0},f^i_{t_1},f_{t_2},f_{t_3} \rbrace$, where the first flow is the training flow followed by three test flows, and $f_{t_2}$ and $f_{t_3}$ are shared by all train-test combinations.
Then the three train-test programs are:
$F^1= \lbrace f^1_{t_0},f^1_{t_1}, f_{t_2},f_{t_3} \rbrace$,
$F^2=\lbrace f^2_{t_0}, f^2_{t_1},f_{t_2},f_{t_3}\rbrace$,
$F^3=\lbrace f^3_{t_0},f^3_{t_1},f_{t_2},f_{t_3}\rbrace$,
where $f^1_{t_0}= f^1_{t_1}:=(700,280,10,620)$, $f^2_{t_0}= f^2_{t_1}:=(1000,580,110,920)$, $f^3_{t_0}= f^3_{t_1}:=(700,700,700,700)$ are training flows and the testing flow in the first $1/3$ period.
Three programs use the same testing flow for the second and third periods ($f_{t_2}, f_{t_3}$), specified in \Cref{table:config}.
Time periods $t_0,t_1,t_2,t_3$ last 10000, 1000, 1000, and 1000 steps, respectively.
\textbf{Generalization to larger networks.}
Directly training on simulations of large real-world traffic networks may not be computationally practical due to the combinatorially-large state and joint-action spaces, regardless of independent or centralized training.
Previous approaches first formulated simple models for regional traffic and then realigned to the whole system \cite{esser2000large}, or relied on transfer planning \cite{van2016coordinated}.
In contrast, we investigated the feasibility of training on a sub-network and directly transferring the learned policies without further training into a larger network.
Direct deployment in larger systems is possible because QCOMBO learns decentralized policies.
We constructed a $6\times6$ traffic network with 36 traffic lights (\Cref{fig:6by6}) and nonuniform traffic flows, which severely reduces the possibility that any traditional hand-designed traffic control plan can be the optimal policy.
Policies trained via QCOMBO in the $2\times2$ network were directly tested on the $6\times6$ network.
\subsection{Algorithm implementations}
We implemented all algorithms using deep neural networks as function approximators.
We ensure that all policy, value, and action-value functions have the same neural network architecture among all algorithms---to the extent allowed by each algorithm (e.g. hypernetwork architecture required by QMIX)---for fair comparison.
The individual utility functions $Q^n$ of IDQN, VDN, QMIX, and QCOMBO are represented by fully-connected three-layer neural networks, where the last layer has $|A|$ output nodes.
QMIX has a two-level hypernetwork that generates the weight matrix and bias vector from the global state $s$, which are combined with each $Q^n$ to produce a single state-action value $Q(s,\abf)$.
VDN takes the sum of the $Q^n$ to calculate the total $Q$, which is trained by minimizing a squared TD-error loss.
The critic of IAC is a value function $V$, which is used to estimate the TD error.
COMA uses a centralized $Q$ minus a counterfactual baseline to compute the COMA policy gradient.
The critics of COMA and IAC have the same three-layer neural network structure, similar to the Q-functions in IDQN, VDN, QMIX.
Both IAC and COMA use the same actor network to approximate $\pi(a_t|o_t)$, which is also a three-layer fully connected neural network that takes the agent's observation, with a softmax activation in the last layer.
We give each agent's observation, last action, and one-hot agent label as input to the utility functions ($Q^n$ of IDQN, QMIX, and VDN; $V^n$ of IAC).
We give the global state, all other agents' actions, and agent labels as inputs to $Q$ of COMA.
Motivated by the possibility of periodic behavior of traffic, we also experimented with RNN and GRU cells for $Q^n$ of IDQN, VDN, QMIX and QCOMBO, and the $\pi(a_t|o_t)$ of IAC and COMA.
\Cref{app:architecture} contains more architecture details.
\section{Results}
\label{sec:results}
\begin{figure*}[t]
\hspace*{0.7cm}%
\minipage[c]{0.4\textwidth}
\includegraphics[scale=0.7]{figures/exps/label.png}
\endminipage\vfill
\minipage{0.25\textwidth}
\subfigure[][$2\times2$]{%
\label{fig:reward_four_agents}%
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/Train_2,2_Test_2,2_train_global_rewards.png}}%
\endminipage\hfill%
\minipage{0.25\textwidth}
\subfigure[][2 traffic lights]{%
\label{fig:reward_two_agents}%
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/Train_1,2_Test_1,2_train_global_rewards.png}}%
\endminipage\hfill%
\minipage{0.25\textwidth}
\subfigure[][$2\times2$ using RNN]{%
\label{fig:reward_four_agents_rnn}%
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/Train_2,2_Test_2,2_rnn_train_global_rewards.png}}%
\endminipage\hfill%
\minipage{0.25\textwidth}
\subfigure[][2 lights using RNN]{%
\label{fig:reward_two_agents_rnn}%
\includegraphics[width=\linewidth,height=1.2in]{figures/exps/Train_1,2_Test_1,2_rnn_train_global_rewards.png}} %
\endminipage\hfill
\vspace{-10pt}
\caption[]{
\subref{fig:reward_four_agents}-\subref{fig:reward_two_agents_rnn} show rewards under different topology among algorithms}%
\label{fig:rewards}%
\end{figure*}
To analyze differences in algorithms, we show full learning curves for all algorithms in addition to reporting final performance.
This is crucial because previous work on deep RL for traffic signal control noted the potential for instability during training \cite{van2016coordinated}.
Our learning curves were generated by conducting evaluation measurements periodically during training (i.e. test the current policy for 400 steps).
Since it takes time to populate the road networks, our learning curves start after 1000 simulation time steps.
For every experiment, we ran a static policy (change light phase every 30s), and a random policy (keep or change the current phase every 5s) so that improvements due to learning can be clearly seen.
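The two reference controllers can be sketched as simple functions of the simulation clock. The sketch below is only an illustration of the protocol described above: the 30\,s and 5\,s intervals come from the text, while the phase encoding (0 for E-W green, 1 for N-S green) and the function signatures are assumptions, not the actual implementation.

```python
# Hedged sketch of the two baseline controllers described above.
# Phase encoding (0 = E-W green, 1 = N-S green) and signatures are
# illustrative assumptions.
import random

def static_policy(t, period=30):
    """Change the light phase every `period` seconds of simulation time."""
    return (t // period) % 2

def random_policy(current_phase, rng=random):
    """At each 5 s decision step, keep or flip the current phase at random."""
    return current_phase if rng.random() < 0.5 else 1 - current_phase
```

Plotting these two baselines alongside the learned policies makes improvements due to learning directly visible.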
\begin{table*}[t]
\centering
\captionof{table}{ Final Traffic Condition After Learning}
\label{table_conditions}
\begin{tabular}{c|@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}}
\hline
\multicolumn{1}{l}{} & \multicolumn{3}{c}{$2\times2$ balanced} & \multicolumn{3}{c}{$1\times2$ unbalanced} \\ \cline{2-7}
\multicolumn{1}{l}{} & Queue Length& Wait Time& Vehicle Delay & Queue Length& Wait Time& Vehicle Delay\\ \hline
QCOMBO & \textbf{1.80}(0.05) & \textbf{0.03}(0.00) & 3.30(0.01) & \textbf{1.12} (0.96) & \textbf{0.14}(0.20) & \textbf{2.08} (0.11) \\
IDQN & 23.92(24.98) & 191.98(270.65) & 3.44(0.29) & 29.78(22.97) & 44.63(52.58) & 2.37 (0.95) \\
IAC & 41.89(1.71) & 9.16 (1.29) & 3.45(0.02) & 34.37 (33.20) & 42.13 (78.92) & 2.27 (1.14) \\
COMA & 2.10(0.30) & 0.05 (0.01) & \textbf{3.26}(0.03) & 7.72(8.53) & 0.84 (1.06) & 2.19(1.25) \\
QMIX & 25.10(32.10) & 115.72(163.57) & 3.57(0.31) & 16.91(19.11) & 40.57 (58.34) & 2.82 (0.12) \\
VDN & 38.12(29.13) & 106.34(105.83) & 3.45(0.30) & 10.29(15.08) & 16.13 (34.09) & 2.91 (0.47) \\
Random & 26.57(0.00) & 2.49(0.00) & 3.65 (0.00) & 25.15(7.77) & 4.96(1.37) & 2.81(0.49) \\
Static & 36.14(0.00) & 6.90(0.00) & 3.51 (0.00) & 27.59(4.51) & 4.18(1.60) & 2.83(0.02) \\ \hline
\end{tabular}
\vspace{-10pt}
\end{table*}
\subsection{Static Traffic Flows}
\Cref{fig:rewards} shows the global reward in both the $2\times2$ and $1\times2$ road networks using both fully-connected and RNN neural networks.
Across all flow and network configurations, QCOMBO attained globally optimal performance and was the most stable of all algorithms.
The policy-based methods COMA and IAC, which mitigate state aliasing and are more robust to small changes in observation,
showed lower variance and higher average reward than the value-based methods (IDQN, VDN and QMIX).
In the $2\times2$ environment (\Cref{fig:reward_four_agents}), QCOMBO and COMA converged to globally optimal policies.
Both maintained good traffic conditions throughout learning, as reflected by the queue length, waiting time, and vehicle delay in \Cref{table_conditions}.
VDN performed better than the random and static policies, but worse than QCOMBO and COMA, as it can only learn a restricted set of linearly-combined Q functions.
IDQN and IAC were not stable and failed to reach a global optimum, showing that learning cooperation from the global reward is necessary.
QMIX diverged possibly due to the difficulty of stabilizing its hypernetwork.
All algorithms (with the exception of QMIX) improved with the use of RNN for policy or value functions (\Cref{fig:reward_four_agents_rnn}), giving strong evidence that RNNs are especially suitable for handling history information and periodicity of traffic.
Performance differences between algorithms are more apparent in the $1\times2$ environment with non-uniform traffic flow (\Cref{fig:reward_two_agents}).
QCOMBO converged to the optimal policy, exceeding the performance of all other algorithms.
VDN began with high performance but struggled to maintain a good policy as more vehicles entered the road.
QCOMBO's higher performance and stability over IDQN shows that the new regularization loss in QCOMBO helps to stabilize learning of independent utility functions by limiting their deviation from the centralized action-value function.
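The regularizer referred to here can be sketched in a few lines. This is only an illustration of the idea of penalizing the gap between a global action value and a weighted sum of per-agent utilities; the function name, the weight vector and the scalar interface are assumptions, not QCOMBO's actual implementation.

```python
# Illustrative sketch (not the paper's implementation) of a consistency
# regularizer: penalize the squared gap between the global action value
# and a weighted combination of per-agent utilities.
def consistency_loss(q_global, q_individual, weights, lam=1.0):
    combined = sum(w * q for w, q in zip(weights, q_individual))
    return lam * (q_global - combined) ** 2

# Example: two agents whose weighted utilities already match the global value,
# so the regularizer vanishes.
loss = consistency_loss(q_global=3.0, q_individual=[2.0, 4.0], weights=[0.5, 0.5])
```

In practice such a term would be added to the usual temporal-difference losses, pulling the independent utilities toward consistency with the centralized value.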
The benefit of centralized learning (e.g. QCOMBO and COMA) over independent learning (e.g. IDQN, IAC) is more apparent than in the $2\times2$ environment, since the non-uniform flow increases the impact of each agent on other agents' performance, resulting in a greater need for cooperation.
IDQN, VDN and IAC exhibited oscillations, similar to behavior reported in \cite{van2016coordinated}.
QMIX suffers from convergence issues when using a fully-connected network (RNN results shown in \Cref{fig:reward_two_agents_rnn}).
We further explain differences in algorithm performance in $1 \times 2$ by analyzing actions (E-W or N-S phase) selected by the learned policies.
In order to achieve cooperation, the left traffic light should open N-S and close E-W periodically to reduce incoming E-W traffic for the right light, which already experiences heavy N-S traffic.
\Cref{fig:phases} shows that QCOMBO and COMA achieve cooperation by turning off E-W traffic periodically (with low frequency, since it receives higher E-W than N-S traffic).
However, IDQN greedily keeps E-W open for long durations, which is not globally optimal; IAC switches between the two phases almost equally; VDN switches to N-S more often than necessary; QMIX incorrectly chooses N-S more frequently than E-W.
\subsection{Generalizing To Dynamic Traffic Flows}
\begin{figure*}[t]
\minipage{0.16\textwidth}
\label{fig:policy_qcombo}%
\includegraphics[scale=0.25]{figures/exps/Train_1,2_Test_1,2_train_cur_Model-QCombo_center-0.png}
\endminipage\hfill
\minipage{0.16\textwidth}
\label{fig:policy_coma}%
\includegraphics[scale=0.25]{figures/exps/Train_1,2_Test_1,2_train_cur_Model-COMA_center-0.png}
\endminipage\hfill
\minipage{0.16\textwidth}
\label{fig:policy_iqdn}%
\includegraphics[scale=0.25]{figures/exps/Train_1,2_Test_1,2_train_cur_Model-IDQN_center-0.png}
\endminipage\hfill
\minipage{0.16\textwidth}
\label{fig:policy_iac}%
\includegraphics[scale=0.25]{figures/exps/Train_1,2_Test_1,2_train_cur_Model-IAC_center-0.png}
\endminipage\hfill
\minipage{0.16\textwidth}
\label{fig:policy_vdn}%
\includegraphics[scale=0.25]{figures/exps/Train_1,2_Test_1,2_train_cur_Model-VDN_center-0.png}
\endminipage\hfill
\minipage{0.16\textwidth}
\label{fig:policy_qmix}%
\includegraphics[scale=0.25]{figures/exps/Train_1,2_Test_1,2_train_cur_Model-QMix_center-0.png}
\endminipage\hfill
\vspace{-7pt}
\caption[A set of six subfigures.]{Traffic light phase selected by the left agent in the $1\times2$ topology}
\label{fig:phases}%
\end{figure*}
\begin{figure}[t]
\centering
\minipage{0.33\textwidth}
\subfigure[][]{
\label{fig:algos_flows}
\includegraphics[width=\linewidth,height=1.4in]{figures/exps/Train_2,2_Test_2,2_test_global_rewards.png}}
\endminipage\hfill
\minipage{0.33\textwidth}
\subfigure[][]{\label{fig:QCOMB_flows}
\includegraphics[width=\linewidth,height=1.4in]{figures/exps/Train_2,2_Test_2,2_test_different_flows.png}}
\endminipage\hfill
\minipage{0.33\textwidth}
\subfigure[][]{\label{fig:largesystem}
\includegraphics[width=\linewidth,height=1.4in]{figures/exps/Train_6,6_Test_6,6_Sub_test_global_rewards.png}}
\endminipage
\vspace{-10pt}
\caption[]{Additional generalization results:
\subref{fig:algos_flows} generalization to different traffic flows across algorithms;
\subref{fig:QCOMB_flows} impact of training conditions on performance in new test conditions; \subref{fig:largesystem} policy trained in the $2\times2$ network generalizes to the $6\times6$ network}%
\label{fig:generalization}%
\end{figure}
QCOMBO displayed the highest generalization performance when trained on one traffic condition and deployed on two different test conditions (\Cref{fig:algos_flows}).
As reflected by the decrease in performance of all policies when traffic flow changes at time step 2000, the test conditions were more difficult.
While COMA and QCOMBO have equal performance on the training flow (first 1000 steps), QCOMBO generalized much more gracefully to the test conditions as the consistency regularizer prevents overfitting to local training conditions.
VDN performed worse on the training flow than COMA but generalized better, despite experiencing a large drop in the second test flow when the vehicle density increases.
IAC showed high variance on the test conditions, while IDQN and QMIX could not adapt to the new flows due to low training performance.
\Cref{fig:QCOMB_flows} shows results of the second generalization experiment, where we evaluated QCOMBO policies trained on three different traffic flows, using the same test program.
The first 1000 steps have the same flow as during training, while the new flows appear at $t=2000$ and $t=3000$.
QCOMBO policies show flow invariance during the $t=2000$ and $t=3000$ periods: although trained on different traffic conditions, their generalization performance on new unseen conditions exhibits only small variability.
This gives evidence that performance of QCOMBO on test conditions does not heavily depend on specific choices of training conditions.
\subsection{Generalization To Larger Traffic Topology}
We directly applied the QCOMBO policy trained in the $2\times2$ traffic network to the $6\times6$ network with 36 traffic light agents, which poses a significant generalization challenge due to the decreased observability for any particular agent and the different traffic flow induced by the different network topology.
\Cref{fig:largesystem} shows that QCOMBO's policy is able to maintain high and stable test performance, with almost negligible difference from its training performance.
Surprisingly, it sometimes attains even higher reward than a policy that was trained specifically on the $6\times6$ environment.
This shows that centralized training with few agents can still produce policies that generalize well to larger settings, mitigating concerns about scalability of centralized training.
This is strong evidence that a policy trained on a subset of a city road network can be deployed with little loss of performance at a larger scale.
\subsection{Conclusion}
We proposed QCOMBO, a novel MARL algorithm for traffic light control.
QCOMBO combines the benefits of independent and centralized training, with a novel objective function that enforces consistency between a global action-value function and the weighted sum of individual optimal utility functions.
We conducted detailed empirical evaluation of state-of-the-art MARL algorithms (IDQN, IAC, VDN, COMA, and QMIX) on network traffic light control under different map topologies and traffic flows, and showed that QCOMBO is a simple yet competitive approach to this real-world problem.
Experiments also indicate that QCOMBO can be generalized with limited loss of performance to large traffic networks.
Our work gives strong evidence for the feasibility of training cooperative policies for generalizable, scalable and intelligent traffic light control.
\bibliographystyle{natbib}
\section{Introduction}
Graphene is a two dimensional sheet that constitutes the building unit of all graphitic forms of matter, such as graphite, carbon nanotubes and carbon fibers. Lee et al. (\cite{Leeetal2008}) use a nanoindentation experiment in an atomic force microscope to measure the elastic properties and intrinsic strength of graphene. Using second order elasticity they evaluate Young's modulus, the second order elastic constant, as well as graphene's breaking strength. Their analysis models graphene as an isotropic body in one dimension, due to symmetry in the loading.
Generalization of their approach to two dimensions is done by Cadelano et al. (\cite{Cadelanoetal2009}). These authors view graphene as an isotropic body and utilize an energy cubic in the strains (second order elasticity in the words of Murnaghan and Rivlin \cite{Rivlin1963,Murnaghan1951}). Utilizing tight-binding atomistic simulations they calculate Young's modulus, the Poisson ratio, as well as higher order constants for graphene. While their approach is interesting and novel, it lacks a treatment of bending effects. It also models graphene as an isotropic body; dependence on the zig-zag and the armchair direction is not incorporated into the constitutive law through dependence on a structural tensor. Fifth order models for graphene are presented by Wei et al. (\cite{Weietal2009}). These authors utilize an energy that is of fifth order in the strains. Using density functional theory for simple loading histories they evaluate higher order constants for graphene. Their approach includes neither bending effects nor anisotropy; graphene is modeled as an isotropic body.
To introduce anisotropy for a free-standing monolayer graphene as well as for incorporating bending effects we recently proposed a finite elasticity model for graphene (\cite{Sfyris-Galiotis2014}). Viewing graphene as a two dimensional 2-lattice, we obtain its arithmetic symmetries (\cite{Fadda-Zanzotto2000,Pitteri-Zanzotto2003}). Confined to weak transformation neighborhoods (\cite{Ericksen1979,Pitteri1985}) and invoking the Cauchy-Born rule (\cite{Ericksen2008}), we arrive to the classical symmetries continuum mechanics uses. We lay down the complete and irreducible representation (\cite{Zheng1994,Zheng1997}) for an energy depending on the Cauchy-Green deformation tensor, the curvature tensor as well as the shift vector. Cauchy-Green's surface tensor is a measure of in-plane motions, the curvature tensor measures out-of-plane motions, while dependence on the shift vector stems from viewing graphene as a 2-lattice. Dependence of the energy on the curvature tensor is motivated by the fundamental works of Murdoch and Cohen, Steigmann and Ogden (\cite{Murdoch-Cohen1979,Steigmann-Ogden1999}). We note that E and Ming (\cite{E-Ming2007}) report dependence on the energy on the shift vector for graphene as well. The need for introducing the shift vector as an independent variable in continuum modeling of graphene is also apparent in the approach of Zhu and Huang (\cite{Zhou-Huang2008}). Additionally, the corrugation vector that is introduced in the homogenization scheme of Davini (\cite{Davini2014}) is very close in spirit to the shift vector of our approach.
In \cite{Sfyris-Galiotis2014} anisotropy is introduced through a sixth-order structural tensor which describes the zig-zag and armchair directions of graphene. This model predicts 13 independent material moduli, in contrast to the seemingly endless Taylor expansion models in terms of the strains adopted in third and fifth order elasticity \cite{Cadelanoetal2009,Weietal2009}. It is worth mentioning that bending effects are considered in the work of Wei et al. (\cite{Weietal2013}). These authors utilize an energy depending on one in-plane measure and two out-of-plane measures: bending rigidity and Gaussian bending stiffness. These two quantities are work conjugate to the mean and the Gaussian curvature, respectively. Using density functional calculations for single wall carbon nanotubes, they evaluate the bending rigidity and Gaussian bending stiffness for a monolayer graphene. Their calculations are based on assuming infinitely long, constant radius carbon nanotubes, so that they can relate the energy per atom of the carbon nanotube to the energy per atom of the graphene sheet.
Another interesting study incorporating bending effects is that of Lu and Huang (\cite{Lu-Huang2009}). Using von Karman kinematical assumptions together with a measure of curvature they provide stress-strain curves using the virial theorem and molecular calculations. In-plane constants are calculated together with the bending stiffness, which is work-conjugate to the curvature. Mixed atomistic-continuum methods are reported by Arroyo and Belytschko (\cite{Arroyo-Belytschko2002,Arroyo-Belytschko2004}) based on the earlier notion of the quasicontinuum (\cite{Tadmoretal1996,Tadmoretal1999}). Arroyo and Belytschko provide a finite continuum theory derived from an interatomic potential; the material moduli are expressed in explicit form in terms of the interatomic potential. They also provide a generalization of the Cauchy-Born rule appropriate for modeling surfaces.
The present work is the linearized counterpart of our previous contributions (\cite{Sfyris-Galiotis2014,Sfyrisetal2014}). Linearization is understood at two levels: material linearity as well as geometrical linearity. Geometrical linearity means confinement to small deformations; mathematically, this means that higher order terms of the displacement gradient are negligible. Material linearity means that the energy is a quadratic function of the strain tensor, the curvature tensor, the shift vector, as well as of combinations of them. Anisotropy is introduced by requiring the tensors of material constants to be invariant under rotations by $60^\circ$: graphene's symmetry. This reduces the independent moduli to 9.
We then examine what this framework gives for simple loading histories. Initially, we treat the case of in-plane deformations only. We thus disregard out-of-plane effects by setting the curvature tensor equal to zero. In this case we need not take into account the equations of moment of momentum. Assuming the form of the displacement components that corresponds to axial tension/compression, we solve for the components of the shift vector. It turns out that the shift vector's components are homogeneous; they depend on the loading parameter as well as on the material constants. The same homogeneity of the shift vector components holds true for the case of biaxial tension/compression and for the simple shear case. An analogous procedure is followed in the nonlinear counterpart of the present theory (\cite{Sfyrisetal2014}). The results there (\cite{Sfyrisetal2014}) are obtained using the same procedure; nevertheless, they are much more complicated than the results of this study. This is due to the linearity assumptions, which greatly simplify the analysis here. This is especially apparent in the equations describing the shift vector. In the linearized problem they are algebraic equations of the first order, while in the nonlinear case they are algebraic equations of the fifth order. This order reduction simplifies the analysis and facilitates analytical results.
This difference in the algebraic nature of the equations ruling the shift vector permits closed form solutions for the buckling/wrinkling case as well, in contrast to the nonlinear case. By making a suitable assumption for the buckling mode (\cite{Punteletal2011}), we solve for the components of the shift vector. These expressions are substituted into one of the momentum equations. From this equation we obtain the form of the function appearing in the buckling mode. Then, this final expression is substituted into all the other field equations, thereby rendering constraints that the material parameters, the loading constant and the constants of integration should satisfy so that all field equations are satisfied.
The paper is structured as follows. Section 2 recalls the modeling of graphene as a 2-lattice, as well as the passage to the continuum theory. The field equations as well as the constitutive laws that introduce material linearity are given there. Section 3 deals with evaluating the number of independent constants of the constitutive law. Following standard approaches on the topic (see e.g. \cite{Nye1969}), we postulate invariance of the material tensors under rotations by $60^\circ$: this is graphene's symmetry group. This reduces the number of independent moduli to 9.
Section 4 collects the field equations written in terms of the kinematical measures. Section 5 deals with in-plane motions only. We disregard out-of-plane motions, so the equations of moment of momentum are redundant, as is the curvature tensor. Making suitable assumptions for the displacement field describing axial and biaxial tension/compression, we evaluate the components of the shift vector so that all field equations are satisfied. Section 6 deals with buckling/wrinkling: we study in-plane deformations that ultimately lead to wrinkling/buckling. Evaluating the components of the curvature tensor that correspond to such a displacement, we search for the components of the shift vector. When the latter are substituted into the momentum equations we obtain an equation for evaluating the function present in the buckling/wrinkling mode. We solve for this function and then make sure that all other field equations are satisfied. The paper ends at Section 7 with some concluding remarks.
As far as notation is concerned, Greek indices range from 1 to 2. The common dot product is denoted by $\cdot$, the tensor product by $\otimes$, and the cross product of the three dimensional space by $\times$. Summation over repeated indices is assumed throughout the paper. Initially, graphene is assumed to be a flat surface; namely, a plane.
\section{Graphene as a 2-lattice}
Following the classification of 2-lattices by Fadda and Zanzotto (\cite{Fadda-Zanzotto2000}), we treat a monolayer graphene as a hexagonal monoatomic 2-lattice with unit cell of the form of Figure 1.
\begin{figure}[!htb]
\centering
\includegraphics{Image2.eps}
\caption{The unit cell of a hexagonal 2-lattice (\cite{Fadda-Zanzotto2000}). }
\label{fig:digraph}
\end{figure}
The lattice and shift vectors are depicted in Figure 2
\begin{figure}[!htb]
\centering
\includegraphics{Image1.eps}
\caption{The lattice and shift vectors of graphene. }
\label{fig:digraph}
\end{figure}
and defined as
\begin{equation}
{\bf e}_1=(\sqrt{3} l, 0), \ \ {\bf e}_2=\left( \frac{\sqrt{3}}{2} l, \frac{3}{2} l \right), \ \ {\bf p}=\left( \frac{\sqrt{3}}{2} l, \frac{1}{2} l \right),
\end{equation}
$l$ being the lattice size, namely the interatomic distance at equilibrium, which is approximately 1.42 Angstrom. The two simple hexagonal lattices are
\begin{eqnarray}
&&L_1 (l)=\{ {\bf x} \in \mathcal R^2: {\bf x}=n^1 {\bf e}_1 + n^2 {\bf e}_2, \ \ (n^1, n^2) \in \mathcal Z^2 \}, \nonumber\\
&&L_2 (l)={\bf p}+L_1(l).
\end{eqnarray}
The arithmetic symmetry group (\cite{Ericksen1979,Pitteri-Zanzotto2003}) of graphene is then described by the matrices
\begin{equation}
\begin{pmatrix}
-1 & -1 & -1 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix},
\begin{pmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix},
\begin{pmatrix}
-1 & -1 & -1 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix},
\end{equation}
\begin{equation}
\begin{pmatrix}
1 & 0 & 0 \\
-1 & -1 & -1 \\
0 & 0 & 1
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix},
\begin{pmatrix}
0 & 1 & 0 \\
-1 & -1 & -1 \\
0 & 0 & 1
\end{pmatrix}.
\end{equation}
The eigenvalues of these matrices are $1, -1, e^{2 i \pi/3}, e^{-2 i \pi/3}$, so they describe the identity transformation, reflection transformations, and rotations by $120^\circ$, $-120^\circ$, respectively.
At the continuum level, topologically, graphene is modeled as a two dimensional smooth surface embedded in a three dimensional Euclidean space. Position vectors on the reference configuration $\mathcal B_R$ of the referential surface are parametrized by two surface coordinates $\Theta^{\alpha}, \alpha=1, 2$ as (\cite{Ciarlet2005})
\begin{equation}
{\bf X}={\bf X}(\Theta^{\alpha}).
\end{equation}
After the deformation the surface occupies the current configuration $\mathcal B_C$, described by the position vector
\begin{equation}
{\bf x}={\bf x}(\Theta^{\alpha}).
\end{equation}
Covariant surface base vectors are then defined as
\begin{equation}
{\bf A}_{\alpha}={\bf X}_{, \alpha }, \ \ {\boldsymbol \alpha}_{\alpha}={\bf x}_{, \alpha },
\end{equation}
for $\mathcal B_R$ and $\mathcal B_C$, respectively. Contravariant base vectors are given as
\begin{equation}
{\bf A}_{\alpha} \cdot {\bf A}^{\beta}=\delta_{\alpha}^{\beta}, \ \ {\boldsymbol \alpha}_{\alpha} \cdot {\boldsymbol \alpha}^{\beta}=\delta_{\alpha}^{\beta},
\end{equation}
$\delta_{\alpha}^{\beta}$ being the two dimensional Kronecker delta.
The surface deformation gradient ${\bf F}_S$ reads
\begin{equation}
{\bf F}_S={\boldsymbol \alpha}_{\alpha} \otimes {\bf A}^{\alpha},
\end{equation}
while the surface right Cauchy-Green deformation tensor takes the form
\begin{equation}
{\bf C}_S={\bf F}_S^T \cdot {\bf F}_S.
\end{equation}
This tensor is related to the surface strain tensor by the formula
\begin{equation}
{\bf e}=\frac{1}{2} \left( {\bf C}_S-{\bf I} \right),
\end{equation}
$\bf I$ being the two dimensional unit tensor.
Geometrical linearity (small deformations) is introduced by defining the displacement vector ${\bf u}={\bf x}-{\bf X}$. Then, the deformation gradient reads ${\bf F}_S={\bf I}+\nabla_S {\bf u}$, with $\nabla_S()$ being the surface gradient defined as $\nabla_S()=\nabla()-{\bf n}({\bf n} \cdot \nabla())$, where $\bf n$ is the outward unit normal of the surface. Using this relation in eq. (10) together with eq. (11), one finally obtains for the strain tensor
\begin{equation}
e_{\alpha \beta}=\frac{1}{2}(u_{\alpha, \beta}+u_{\beta , \alpha}),
\end{equation}
when the higher order terms $u_{\gamma , \alpha} u_{\gamma , \beta}$ are neglected due to the linearity assumption. The geometrically linear case utilizes the strain tensor $\bf e$ of eq. (12), which measures the in-plane deformations graphene suffers. Essentially, in this case the reference configuration $\mathcal B_R$ and the current configuration $\mathcal B_C$ are very close to one another, so there is no need to distinguish between them.
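As a quick numerical sanity check of this linearization (an illustrative sketch, not part of the paper), one can verify that the exact Green strain $\frac{1}{2}({\bf F}_S^T {\bf F}_S - {\bf I})$ differs from the symmetrized displacement gradient of eq. (12) only by a term quadratic in the displacement gradient:

```python
# Numerical sanity check (illustrative): for a small in-plane displacement
# gradient H = grad(u), the exact Green strain (1/2)(F^T F - I) with
# F = I + H differs from the linearized strain (1/2)(H + H^T) only by the
# quadratic term (1/2) H^T H, which is negligible for small H.
import numpy as np

H = 1e-4 * np.array([[1.0, 2.0],
                     [0.5, -1.0]])       # small displacement gradient
F = np.eye(2) + H                        # surface deformation gradient
exact = 0.5 * (F.T @ F - np.eye(2))      # Green strain
linear = 0.5 * (H + H.T)                 # eq. (12)

# The discrepancy is exactly (1/2) H^T H, of order |H|^2.
assert np.allclose(exact - linear, 0.5 * H.T @ H)
assert np.max(np.abs(exact - linear)) < 1e-7
```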
Out-of-plane deformations are described by the surface curvature tensor
\begin{equation}
{\bf b}=b_{\alpha \beta} {\boldsymbol \alpha}^{\alpha} \otimes {\boldsymbol \alpha}^{\beta},
\end{equation}
which is the second fundamental form of the surface. Taking into account bending effects for a monolayer graphene modeled as a surface, requires dependence of the energy on the curvature (\cite{Steigmann-Ogden1999,Murdoch-Cohen1979,Cohen-DeSilva1966}). Thus, for a monolayer graphene at the continuum level we assume an energy of the form (\cite{Sfyris-Galiotis2014,Sfyrisetal2014})
\begin{equation}
W=W({\bf e}, {\bf b}, {\bf p}).
\end{equation}
Dependence on the shift vector, $\bf p$, at the continuum level, results from the fact that at the crystalline level graphene is a 2-lattice. Now, we confine ourselves to weak transformation neighborhoods (\cite{Pitteri-Zanzotto2003}) and assume validity of the Cauchy-Born rule (\cite{Ericksen2008}). With these assumptions enforced we may utilize the classical symmetries employed by continuum mechanics.
Material linearity is introduced by quadratic dependence of the energy
\begin{eqnarray}
W({\bf e}, {\bf b}, {\bf p})&&=\frac{1}{2} C^1_{ijkl} e_{ij} e_{kl} +\frac{1}{2} C^2_{ij} p_i p_j +\frac{1}{2} C^3_{ijk} e_{ij} p_k \nonumber\\
&&+\frac{1}{2} C^4_{ijkl} b_{ij} b_{kl} +\frac{1}{2} C^5_{ijkl} e_{ij} b_{kl} +\frac{1}{2} C^6_{ijk} b_{ij} p_k.
\end{eqnarray}
Tensors ${\bf C}^1, {\bf C}^4, {\bf C}^5$ are fourth order tensors, ${\bf C}^3, {\bf C}^6$ are third order tensors, while ${\bf C}^2$ is a second order tensor: all these tensors are tensors of material parameters. The components of ${\bf C}^1$ describe pure in-plane moduli, those of ${\bf C}^4$ pure out-of-plane moduli, while those of ${\bf C}^5$ mixed in-plane with out-of-plane moduli. Components of ${\bf C}^3, {\bf C}^6$ describe the effect of strain and curvature, respectively, on the shift vector. Finally, ${\bf C}^2$ gives the material modulus related with the shift vector's motions, solely.
The field equations for such a problem are the momentum equation, the moment of momentum equation as well as the equations ruling the shift vector. For the momentum equation we have (\cite{Chhapadiaetal2011,Sfyris-Galiotis2014}) when body forces and inertia are absent
\begin{equation}
\boldsymbol \sigma^{\textrm{bulk}} \cdot {\bf n}+\nabla_S {\boldsymbol \sigma}=0,
\end{equation}
where $\boldsymbol \sigma$ is Cauchy's stress tensor for the surface, while $\boldsymbol \sigma^{\textrm{bulk}}$ is the stress tensor of the bulk material. Here the sheet of graphene is assumed to be free-standing, so $\boldsymbol \sigma^{\textrm{bulk}}$ is set equal to zero. Since we confine ourselves to small deformations, we need not distinguish between different stress measures for the surface. The moment of momentum equation in the absence of body couples, inertia and bulk material reads
\begin{equation}
\textrm{div} {\bf m}-\nabla ({\boldsymbol \sigma} \times {\bf u})=0,
\end{equation}
where $\bf m$ is the surface couple stress tensor. For the shift vector the field equation reads (\cite{Pitteri-Zanzotto2003,E-Ming2007})
\begin{equation}
\frac{\partial W}{\partial {\bf p}}={\bf 0}.
\end{equation}
From the physical point of view, the momentum equation is the force balance for the surface, while the moment of momentum equation renders the couple balance for the surface. The shift vector adjusts according to eq. (18) in order for equilibrium to be reached (\cite{Pitteri-Zanzotto2003}).
\section{Constitutive relations}
To obtain the exact form of the constitutive relations we need to evaluate the independent components of the tensors ${\bf C}^1,..., {\bf C}^6$. The symmetries of graphene (see eqs. (3, 4)) dictate that they should be invariant under rotations by $60^\circ$. Certainly, the arithmetic symmetries of eqs. (3, 4) pertain to the atomistic point of view. Passage to the continuum requires confinement to weak transformation neighborhoods as well as enforcement of the Cauchy-Born rule (see \cite{Sfyris-Galiotis2014} and references therein).
For evaluating the independent constants, we start with the following systems for tensors of fourth, third and second order, respectively (\cite{Nye1969})
\begin{eqnarray}
\tilde{C}_{ijkl}&&=a_{ip} a_{jq} a_{kr} a_{ls} C_{pqrs}, \\
\tilde{C}_{ijk}&&=a_{ip} a_{jq} a_{kr} C_{pqr}, \\
\tilde{C}_{ij}&&=a_{ip} a_{jq} C_{pq},
\end{eqnarray}
where the tensor $\bf a$ describes rotation by $60^\circ$ and has the following matrix form
\begin{equation}
[a_{ij}]=\begin{pmatrix}
\frac{1}{2} & \frac{\sqrt{3}}{2} \\
-\frac{\sqrt{3}}{2} & \frac{1}{2} \\
\end{pmatrix}.
\end{equation}
Using eq. (22) in eqs. (19-21) one obtains systems for the components of the material moduli. Then, setting (\cite{Nye1969})
\begin{equation}
\tilde{C}_{ijkl}=C_{ijkl}, \ \ \tilde{C}_{ijk}=C_{ijk}, \ \ \tilde{C}_{ij}=C_{ij},
\end{equation}
invariance of the material moduli under rotations by $60^\circ$ is enforced.
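The invariance condition of eq. (23) can also be checked numerically. The sketch below (an illustration, not part of the paper's derivation) verifies that the matrix of eq. (22) is a $60^\circ$ rotation (${\bf a}^6 = {\bf I}$) and that, for a second order tensor, $\tilde{C}_{ij} = a_{ip} a_{jq} C_{pq} = C_{ij}$ holds for a multiple of the identity but fails for a generic symmetric tensor, consistent with the single independent modulus found below.

```python
# Numerically probe invariance under the 60-degree rotation of eq. (22).
import numpy as np

s = np.sqrt(3) / 2
a = np.array([[0.5,  s],
              [-s,  0.5]])   # rotation by 60 degrees, eq. (22)

assert np.allclose(np.linalg.matrix_power(a, 6), np.eye(2))   # a^6 = I
assert np.allclose(np.linalg.matrix_power(a, 3), -np.eye(2))  # a^3 = -I

def transform(C):
    # C~_ij = a_ip a_jq C_pq, i.e. a C a^T
    return a @ C @ a.T

# An isotropic second-order tensor is invariant under the rotation...
C_iso = 2.7 * np.eye(2)
assert np.allclose(transform(C_iso), C_iso)

# ...while a generic symmetric tensor is not: its deviatoric part rotates.
C_generic = np.array([[1.0, 0.3],
                      [0.3, -0.5]])
assert not np.allclose(transform(C_generic), C_generic)
```

The same kind of check, applied componentwise to eqs. (19) and (20), reproduces the counts of independent moduli quoted below.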
For the components of the fourth order tensors one finally obtains two independent moduli each (\cite{Guinovartetal2001}). For the third order constants the independent modulus is one, as has been evaluated by Nye (\cite{Nye1969}, p. 124, Table 8) for piezoelectric problems. For the second order tensor, one component is independent, as one can verify. All in all, the constitutive expressions for the surface stress then read
\begin{eqnarray}
\sigma_{11}&&=c_1 e_{11}+c_2 e_{22} + c_3 b_{11} + c_4 b_{22} -c_5 p_2, \\
\sigma_{22}&&=c_2 e_{11} + c_1 e_{22}+c_4 b_{11}+c_3 b_{22}+c_5 p_2, \\
\sigma_{12}&&=\frac{c_1-c_2}{2} e_{12}+\frac{c_3-c_4}{2} b_{12}-2 c_5 p_1,
\end{eqnarray}
stemming from the expression
\begin{equation}
{\boldsymbol \sigma}=\frac{\partial W}{ \partial {\bf e}}=C^1_{ijkl} e_{kl}+C^3_{ijk} p_k+C^5_{ijkl} b_{kl}.
\end{equation}
The constants $c_1, c_2$ are the independent moduli of the tensor ${\bf C}^1$; $c_3, c_4$ are related with ${\bf C}^5$, while $c_5$ stems from ${\bf C}^3$.
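Eqs. (24-26) can be collected into a single constitutive map. The sketch below is a direct transcription of these equations with purely illustrative numerical values for the moduli; it is not part of the paper.

```python
# Direct transcription of the in-plane stress law, eqs. (24)-(26).
# Inputs: strain e and curvature b (2x2 nested lists), shift vector p,
# and moduli c1..c5; the numbers in the example are purely illustrative.
def surface_stress(e, b, p, c1, c2, c3, c4, c5):
    s11 = c1*e[0][0] + c2*e[1][1] + c3*b[0][0] + c4*b[1][1] - c5*p[1]
    s22 = c2*e[0][0] + c1*e[1][1] + c4*b[0][0] + c3*b[1][1] + c5*p[1]
    s12 = 0.5*(c1 - c2)*e[0][1] + 0.5*(c3 - c4)*b[0][1] - 2.0*c5*p[0]
    return s11, s22, s12

# Pure uniaxial strain with no curvature and no shift:
# s11 = c1*e11 and s22 = c2*e11, as eqs. (24, 25) require.
s11, s22, s12 = surface_stress(e=[[0.01, 0.0], [0.0, 0.0]],
                               b=[[0.0, 0.0], [0.0, 0.0]],
                               p=[0.0, 0.0],
                               c1=100.0, c2=30.0, c3=0.0, c4=0.0, c5=0.0)
```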
For the surface couple stress the constitutive law reads
\begin{equation}
{\bf m}=\frac{\partial W}{\partial {\bf b}}=C^4_{ijkl} b_{kl} +C^5_{ijkl} e_{kl} +C^6_{ijk} p_k.
\end{equation}
So, we obtain
\begin{eqnarray}
m_{11}&&=c_6 b_{11}+c_7 b_{22}+c_3 e_{11}+c_4 e_{22}-c_8 p_2, \\
m_{22}&&=c_7 b_{11}+c_6 b_{22}+c_4 e_{11}+c_3 e_{22}+c_8 p_2, \\
m_{12}&&=\frac{c_6-c_7}{2} b_{12}+\frac{c_3-c_4}{2} e_{12}-2c_8 p_1.
\end{eqnarray}
The material parameters $c_6, c_7$ are related to $C^4$, while $c_8$ is related to $C^6$.
For the components related with the shift vector we have
\begin{equation}
\frac{\partial W}{\partial p_i}=C^2_{ij} p_j+C^3_{ijk} e_{jk} +C^6_{ijk} b_{jk}.
\end{equation}
So, we finally take
\begin{eqnarray}
\frac{\partial W}{\partial p_1}=&&c_9 p_1-2 c_5 e_{12}-2c_8 b_{12}, \\
\frac{\partial W}{\partial p_2}=&&c_9 p_2 -c_5 e_{11}+c_5 e_{22}-c_8 b_{11}+c_8 b_{22},
\end{eqnarray}
where $c_9$ is the independent moduli related to $C^2$.
\section{Field equations}
Using eqs. (24-26) to eq. (16) we obtain for the momentum equation
\begin{eqnarray}
&&c_1 u_{1,11}+c_2 u_{2,21}+c_3 b_{11,1}+c_4 b_{22,1}-c_5 p_{2,1}+\frac{c_1-c_2}{4} (u_{1,22}+u_{2,12}) \nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{c_3-c_4}{2} b_{12,2}-2 c_5 p_{1,2}=0, \\
&&\frac{c_1-c_2}{4} (u_{1,21}+u_{2,11})+\frac{c_3-c_4}{2} b_{12,1} -2 c_5 p_{1,1} +c_2 u_{1,12}+c_1 u_{2,22} \nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +c_4 b_{11,2}+c_3 b_{22,2}+c_5 p_{2,2}=0.
\end{eqnarray}
After substituting eqs. (33, 34) into eq. (18), the equations ruling the shift vector become
\begin{eqnarray}
&& c_9 p_1 -c_5 (u_{1,2} +u_{2,1}) -2 c_8 b_{12}=0, \\
&& c_9 p_2 -c_5 u_{1,1} +c_5 u_{2,2} -c_8 b_{11}+c_8 b_{22}=0.
\end{eqnarray}
For the moment of momentum equations we substitute eqs. (24-26), (29-31) into eq. (17) and obtain
\begin{eqnarray}
&& \left[ c_6 b_{11}+c_7 b_{22}+c_3 u_{1,1}+c_4 u_{2,2}-c_8 p_2 \right]_{,1} +\left[ \frac{c_6-c_7}{2}b_{12}+\frac{c_3-c_4}{4} (u_{1,2}+u_{2,1})-2 c_8 p_1 \right]_{,2} \nonumber\\
&& -\left( u_1 [\frac{c_1-c_2}{4}(u_{1,2}+u_{2,1})+\frac{c_3-c_4}{2} b_{12}-2c_5 p_1] \right)_{,1} \nonumber\\
&& -\left( u_1 [\frac{c_1-c_2}{4}(u_{1,2}+u_{2,1})+\frac{c_3-c_4}{2} b_{12}-2c_5 p_1] \right)_{,2} \nonumber\\
&& +\left( u_2 [c_1 u_{1,1}+c_2 u_{2,2}+c_3 b_{11}+c_4 b_{22}-c_5 p_2] \right)_{,1} \\
&& +\left( u_2 [c_1 u_{1,1}+c_2 u_{2,2}+c_3 b_{11}+c_4 b_{22}-c_5 p_2] \right)_{,2}=0 \nonumber
\end{eqnarray}
and
\begin{eqnarray}
&& \left[ \frac{c_6-c_7}{2} b_{12}+\frac{c_3-c_4}{4} (u_{1,2}+u_{2,1})-2 c_8 p_1 \right]_{,1} + \left[ c_7 b_{11}+c_6 b_{22}+c_4 u_{1,1}+c_3 u_{2,2}+c_8 p_2 \right]_{,2} \nonumber\\
&& -\left( u_1 [ c_2 u_{1,1} +c_1 u_{2,2}+c_4 b_{11}+c_3 b_{22}+c_5 p_2 ] \right)_{,1} \nonumber\\
&& -\left( u_1 [ c_2 u_{1,1} +c_1 u_{2,2}+c_4 b_{11}+c_3 b_{22}+c_5 p_2 ] \right)_{,2} \nonumber\\
&& +\left( u_2 [ \frac{c_1-c_2}{4} (u_{1,2}+u_{2,1})+\frac{c_3-c_4}{2} b_{12}-2c_5 p_1 ] \right)_{,1} \\
&& +\left( u_2 [ \frac{c_1-c_2}{4} (u_{1,2}+u_{2,1})+\frac{c_3-c_4}{2} b_{12}-2c_5 p_1 ] \right)_{,2}=0. \nonumber
\end{eqnarray}
Eqs. (35-40) are the counterparts of eqs. (16-18), written in terms of the kinematical measures ${\bf u}, {\bf p}, {\bf b}$, which are the unknown functions.
\section{In-plane motions only}
When only in-plane motions are considered, the curvature tensor should be set equal to zero. Also, the moment of momentum equations need not be taken into account. The field equations for this case therefore read
\begin{eqnarray}
&& c_1 u_{1,11} +c_2 u_{2,21}-c_5 p_{2,1} +\frac{c_1-c_2}{4} [u_{1,22}+u_{2,12}]-2c_5 p_{1,2}=0, \\
&& \frac{c_1-c_2}{4} [u_{1,21}+u_{2,11}]-2c_5 p_{1,1}+c_2 u_{1,12}+c_1 u_{2,22}+c_5 p_{2,2} =0, \\
&& c_9 p_1 -c_5 (u_{1,2}+u_{2,1})=0, \\
&& c_9 p_2 -c_5 u_{1,1}+c_5 u_{2,2} =0.
\end{eqnarray}
The first two are the momentum equations while the rest are the equations ruling the auxiliary variables.
\subsection{Axial tension/compression}
For modeling axial tension/compression we assume for the displacement field
\begin{equation}
u_1=\epsilon \Theta^1, \ \ u_2 = \Theta^2.
\end{equation}
This field of displacement models axial tension/compression in the $\Theta^1$ direction. When the loading constant $\epsilon$ is greater than zero we speak of tension, while when it is negative we speak of compression. The necessary derivatives for this case read
\begin{equation}
u_{1,1}=\epsilon, \ \ u_{1,2}=u_{2,1}=0, \ \ u_{2,2}=1.
\end{equation}
The equations of the shift vector render
\begin{eqnarray}
&& c_9 p_1 =0 \rightarrow p_1=0, \\
&& c_9 p_2-c_5 \epsilon +c_5 =0 \rightarrow p_2 =\frac{c_5(\epsilon-1)}{c_9}.
\end{eqnarray}
So, the outcome for one dimensional tension/compression is a homogeneous solution for the shift vector. With eqs. (47, 48) the momentum equations (eqs. (41, 42)) are satisfied trivially, as one can infer by direct substitution. Thus, the pair $(p_1, p_2)=\left( 0, \frac{c_5(\epsilon-1)}{c_9} \right)$ qualifies as a solution for the problem at hand when loading is of the form of eq. (45).
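The direct substitution invoked above can be reproduced symbolically. The sympy sketch below is a check, not part of the derivation; it assumes the in-plane equations (41)-(44) exactly as written.

```python
import sympy as sp

c1, c2, c5, c9, eps = sp.symbols('c1 c2 c5 c9 epsilon')
T1, T2 = sp.symbols('Theta1 Theta2')

# Displacement field of eq. (45) and candidate shift vector of eqs. (47, 48)
u1, u2 = eps*T1, T2
p1, p2 = sp.Integer(0), c5*(eps - 1)/c9

# Shift-vector equations (43), (44)
eq43 = c9*p1 - c5*(sp.diff(u1, T2) + sp.diff(u2, T1))
eq44 = c9*p2 - c5*sp.diff(u1, T1) + c5*sp.diff(u2, T2)

# Momentum equation (41); eq. (42) vanishes for the same reason:
# all second derivatives of u and all derivatives of the constant p vanish
eq41 = (c1*sp.diff(u1, T1, 2) + c2*sp.diff(sp.diff(u2, T2), T1)
        - c5*sp.diff(p2, T1)
        + (c1 - c2)/4*(sp.diff(u1, T2, 2) + sp.diff(sp.diff(u2, T1), T2))
        - 2*c5*sp.diff(p1, T2))

assert sp.simplify(eq43) == 0
assert sp.simplify(eq44) == 0
assert sp.simplify(eq41) == 0
```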
\subsection{Biaxial tension/compression}
For modeling tension/compression in both directions we set for the displacement field
\begin{equation}
u_1 =\epsilon_1 \Theta^1, \ \ u_2 =\epsilon_2 \Theta^2.
\end{equation}
For the necessary derivatives we evaluate
\begin{equation}
u_{1,1} =\epsilon_1, \ \ u_{1,2}=0=u_{2,1}, \ \ u_{2,2}=\epsilon_2.
\end{equation}
The equations ruling the shift vector take then the form
\begin{eqnarray}
&& c_9 p_1 =0 \rightarrow p_1 =0, \\
&& c_9 p_2 -c_5 \epsilon_1 +c_5 \epsilon_2=0 \rightarrow p_2 =\frac{c_5 (\epsilon_2-\epsilon_1)}{c_9}.
\end{eqnarray}
Therefore, for biaxial loading as well we obtain homogeneous solutions for the components of the shift vector, and the momentum equations are satisfied trivially. Collectively, the pair $(p_1, p_2)=\left( 0,\frac{c_5 (\epsilon_2-\epsilon_1)}{c_9} \right)$ qualifies as a solution when the loading is given by eq. (49).
\subsection{Simple shear}
Simple shear is described by a displacement field given by
\begin{equation}
u_1 =\Theta^1 +\epsilon \Theta^2, \ \ u_2 =\Theta^2.
\end{equation}
For the derivatives we evaluate
\begin{equation}
u_{1,1}=1, \ \ u_{1,2}=\epsilon, \ \ u_{2,1}=0, \ \ u_{2,2}=1.
\end{equation}
The equations ruling the shift vector are then
\begin{eqnarray}
&& c_9 p_1 -c_5 \epsilon =0 \rightarrow p_1 =\frac{c_5 \epsilon}{c_9}, \\
&& c_9 p_2 -c_5+c_5=0 \rightarrow p_2 =0.
\end{eqnarray}
The momentum equations are then satisfied trivially, since the solution in terms of the shift vector is homogeneous. All in all, the pair $(p_1, p_2)=\left( \frac{c_5 \epsilon}{c_9}, 0 \right)$ qualifies as a solution when simple shear is given by eq. (53). It is interesting to note that simple shear in the other direction will lead to the same result in terms of the components of the shift vector.
\section{Out-of-plane motions}
\subsection{Introducing wrinkling/buckling}
In order to model wrinkling/buckling we need to assume that the out of plane displacement is given by the following expression (\cite{Timoshenko,Punteletal2011})
\begin{equation}
u_3=u_3(\Theta^1, \Theta^2)=\textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f(\Theta^2),
\end{equation}
$n$ being the number of sinusoidal waves in the $\Theta^1$ direction and $f$ an arbitrary function (see Figure 7 for a schematic guide to this kind of deformation).
\begin{figure}[!htb]
\centering
\includegraphics{Figure6.eps}
\caption{Wrinkling/buckling described by eq. (57) (figure taken from \cite{Androulidakisetal2014}).}
\label{fig:digraph}
\end{figure}
The parametric form of a surface having the above expression as displacement is
\begin{equation}
{\bf u}(\Theta^1, \Theta^2)=\left( \Theta^1, \Theta^2, \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f(\Theta^2) \right).
\end{equation}
For our framework, $\bf b$ is the second fundamental form of the surface, so we evaluate for its components
\begin{eqnarray}
&& b_{11}={\bf u}_{,11} \cdot {\bf n} \\
&& b_{12}=b_{21}={\bf u}_{,12} \cdot {\bf n} \\
&& b_{22}={\bf u}_{,22} \cdot {\bf n}.
\end{eqnarray}
The outward unit normal, $\bf n$, of the surface is defined by
\begin{equation}
{\bf n}=\frac{{\bf u}_{,1} \times {\bf u}_{,2} }{|{\bf u}_{,1} \times {\bf u}_{,2}| }.
\end{equation}
These measures of the surface are important since they participate in the field equations (35-40) when out-of-plane motions are taken into account.
\subsection{Tension/Compression}
Axial tension/compression resulting in wrinkling/buckling is described by the parametric form of the surface
\begin{equation}
{\bf u}(\Theta^1, \Theta^2)= \left( \epsilon \Theta^1, \Theta^2, \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f(\Theta^2) \right).
\end{equation}
Such an assumption means that tension/compression in the plane results in wrinkling/buckling, i.e. out of plane motion. The phenomenon is not treated as dynamic: tension/compression is applied initially and finally leads to wrinkling/buckling. The method is semi-inverse: we assume the form that the solution has in its final state. Tension will finally produce wrinkling of the material, while compression will lead to buckling. Certainly, one expects different behaviour for these two kinds of loading. Such a hardening response cannot be captured by the model in its present form; generalizations should be made which are outside the scope of this work.
For the above given surface the outward unit normal has components
\begin{equation}
{\bf n}= \left( -\frac{n \pi}{2 L_1} \textrm{sin} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f(\Theta^2), -\epsilon \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f'(\Theta^2), \epsilon \right),
\end{equation}
where we assume its Euclidean length to be unity:
\begin{equation}
||{\bf n}||=\sqrt{\left[ -\frac{n \pi}{2 L_1} \textrm{sin} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f(\Theta^2) \right]^2+\left[ -\epsilon \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f'(\Theta^2) \right]^2+\epsilon^2}=1.
\end{equation}
For the components of the second fundamental form we then obtain
\begin{eqnarray}
&& b_{11}=-\epsilon \frac{n^2 \pi^2}{4 L_1^2} \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f(\Theta^2), \\
&& b_{12}=b_{21}=-\epsilon \frac{n \pi}{2 L_1} \textrm{sin} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f'(\Theta^2), \\
&& b_{22}=\epsilon \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f''(\Theta^2).
\end{eqnarray}
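Eqs. (66)-(68) follow mechanically from eqs. (59)-(62). The sympy sketch below recomputes them from the parametric surface of eq. (63), using the unnormalized normal ${\bf u}_{,1} \times {\bf u}_{,2}$ together with the unit-length assumption of eq. (65); it is an illustrative check only.

```python
import sympy as sp

T1, T2, eps, n, L1 = sp.symbols('Theta1 Theta2 epsilon n L_1', positive=True)
f = sp.Function('f')
phase = n*sp.pi*T1/(2*L1)

# Parametric surface of eq. (63)
u = sp.Matrix([eps*T1, T2, sp.cos(phase)*f(T2)])

# Normal u_,1 x u_,2; eq. (65) assumes its Euclidean length is unity
N = u.diff(T1).cross(u.diff(T2))

# Components of the second fundamental form, eqs. (59)-(61)
b11 = u.diff(T1, 2).dot(N)
b12 = u.diff(T1).diff(T2).dot(N)
b22 = u.diff(T2, 2).dot(N)

# Compare with eqs. (66)-(68)
assert sp.simplify(b11 + eps*(n*sp.pi/(2*L1))**2*sp.cos(phase)*f(T2)) == 0
assert sp.simplify(b12 + eps*(n*sp.pi/(2*L1))*sp.sin(phase)*f(T2).diff(T2)) == 0
assert sp.simplify(b22 - eps*sp.cos(phase)*f(T2).diff(T2, 2)) == 0
```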
Under these assumptions the equations ruling the shift vector, eqs. (37, 38), can be solved as
\begin{eqnarray}
&& p_1 =-2 \frac{c_8}{ c_9} \epsilon \frac{n \pi}{2 L_1} \textrm{sin} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f'(\Theta^2), \\
&& p_2 =\frac{1}{c_9} \left( c_5 \epsilon -c_5 -c_8 \left[ \epsilon \frac{n^2 \pi^2}{4 L_1^2} \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f(\Theta^2)+ \epsilon \textrm{cos} \left( \frac{n \pi \Theta^1}{2 L_1} \right) f''(\Theta^2) \right] \right).
\end{eqnarray}
With these expressions for the components of the shift vector, the second of the momentum equations, eq. (36), renders one differential equation for the function $f$, after elimination of the cosine term, in the form
\begin{equation}
A f''(\Theta^2)+B f'(\Theta^2)=0,
\end{equation}
where $A=-[\frac{c_3-c_4}{2} - 4 \frac{c_5 c_8}{c_9}] \epsilon \frac{n^2 \pi^2}{4 L_1^2}-[c_4+\frac{c_5 c_8}{c_9}] \epsilon \frac{n^2 \pi^2 }{4 L_1^2}$ and $B=[c_3-\frac{c_5 c_8 }{c_9 }] \epsilon$. Solving eq. (71) we obtain
\begin{equation}
f(\Theta^2)=-\frac{A}{B} e^{-\frac{A}{B} \Theta^2}h_1 +h_2,
\end{equation}
where $h_1, h_2$ are constants of integration. When this expression is substituted into the first of the momentum equations, eq. (35), one finally obtains
\begin{equation}
-A \epsilon \frac{n^3 \pi^3}{8 L_1^3} \left( \frac{A}{B} e^{-\frac{A}{B} \Theta^2}h_1 +h_2 \right) +(C+B) \epsilon \frac{n \pi}{2 L_1} \frac{A^3}{B^3} e^{-\frac{A}{B} \Theta^2}h_1=0,
\end{equation}
where $C=\frac{c_3-c_4}{2}-\frac{4 c_5 c_8}{c_9}$. The latter equation should be viewed as a constraint on the material parameters, through the quantities $A, B, C$, the loading constant, $\epsilon$, and the constants of integration, $h_1, h_2$, required for the first of the momentum equations to be fulfilled. To this constraint two additional constraint equations should be added; these stem from the moment of momentum equations (eqs. (39, 40)) by substituting eqs. (63, 69, 70, 72). This results, as for eq. (73), in two equations that the loading parameter, the material constants and the integration constants should satisfy in order for the displacement field of eq. (63) to be a solution of the problem at hand. We refrain from writing down these two additional constraints, but we mention that they can be obtained by direct substitution of eqs. (63, 69, 70, 72) into eqs. (39, 40).
\section{Conclusions}
This work constitutes an extension of \cite{Sfyris-Galiotis2014} in the direction of giving some closed form solutions for a free standing monolayer graphene. The approach is valid for the geometrically and materially linear framework at the level of the continuum.
We start by presenting the framework of \cite{Sfyris-Galiotis2014} suitable for the geometrically and materially linear regime. For the case of in-plane motions we examine one dimensional tension/compression along both directions of the surface, as well as biaxial tension/compression and simple shear. The outcome consists of homogeneous solutions for the components of the shift vector that depend on the material parameters and the loading constant. For modeling out-of-plane motions we describe how wrinkling/buckling can be introduced into the framework. We evaluate explicitly the components of the shift vector as well as those of the curvature tensor so that all field equations are satisfied.
As for future directions, we consider that the investigation of thin graphene sheets on substrates constitutes a highly challenging theoretical and experimental problem. The linearized equations presented here, together with the incorporation of substrate effects into the model, will make the present approach more relevant to actual experimental set-ups such as [1].
\section{Acknowledgements}
This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program ``Education and Lifelong Learning'' of the National Strategic Reference Framework (NSRF) - Research Funding Program: ERC-10 ``Deformation, Yield and Failure of Graphene and Graphene-based Nanocomposites''. The financial support of the European Research Council through the project ERC AdG 2013 (``Tailor Graphene'') is gratefully acknowledged.
\section{Introduction}
There are two classes of models proposed for the formation of massive stars (e.g., Krumholz 2005). The first posits that massive stars are assembled in a manner analogous to their lower mass counterparts: from infalling material located in a rotating protostellar core channeled starward through a circumstellar accretion disk, with the flow of material from the inner regions of the accretion disk toward the stellar surface mediated by a magnetic field rooted in the star. We choose the shorthand ``magnetospherically-mediated accretion'' (henceforth MMA) to describe this scenario. Bonnell, Vine, \& Bate (2004) propose the following variant: that massive stars form preferentially in cluster-forming environments, starting their lives as lower mass cores whose mass then increases as a result of competitive accretion of surrounding molecular gas. In the case of mass buildup through competitive accretion, transfer of material from the protostellar core to the central forming star would presumably still follow the general precepts of MMA. A second, very different class of model posits that massive stars start their lives as lower mass protostellar cores in clusters, and then grow via mergers of cores located in regions where the density of cores is high (Bally \& Zinnecker 2005 and references therein). We refer to this class of model as ``core merger'' models or CM. Observational tests aimed at determining which of these contrasting models, MMA or CM, dominates in nature have proven impossible to date, primarily because high mass stars form on very short time scales in highly obscured regions at distances large enough so that, with the spatial resolutions achieved by current generation telescopes, crowding renders study of individual forming objects difficult.
Direct tests of MMA for massive stars would involve measurements similar to those carried out to establish MMA as the likely assembly mechanism for low mass stars: of stellar magnetic fields (e.g. Johns-Krull, Valenti, \& Koresko 1999), and of line profiles diagnostic of the temperature-density-velocity fields expected were material channeled starward from the inner regions of the accretion disk by magnetic fields along ``funnel flows'' (e.g., Hartmann, Hewett, \& Calvet, 1994; Muzerolle, Calvet, \& Hartmann 1998). Unfortunately, rotational and turbulent broadening among higher mass stars precludes direct observation of Zeeman broadening for magnetically sensitive lines (rotational broadening typically exceeds Zeeman broadening by more than a factor of 10). Moreover, searches for line profile morphologies consistent with magnetic funnel flows have yielded no convincing evidence of such flows for stars with masses much in excess of 3 {M$_{\sun}$} (e.g., Muzerolle et~al.\ 2004). An alternative, but more indirect test of MMA might be provided by observation of highly collimated outflows presumably launched in the inner regions of the accretion disk, perhaps near the point where the dynamical pressure of accretion is matched by the magnetic pressure of the stellar field (e.g., Shu et~al.\ 1995). Such collimated outflows are common among low mass stars ($M <$ 3 {M$_{\sun}$}) which exhibit funnel flow profiles and for which direct measurements of magnetic field strengths are available, that is, objects where MMA finds strong observational support. Collimated outflows have also been observed in association with even more massive (up to $\sim$10 {M$_{\sun}$}) stars surrounded by circumstellar accretion disks (Cesaroni et~al.\ 2005; Beuther \& Shepherd 2005, and references therein), perhaps suggesting that MMA is operative for stars as massive as 10 {M$_{\sun}$}.
Bally \& Zinnecker (2005) propose two possible tests for CM. The first involves surveys aimed at detecting direct evidence of collisions and mergers of low mass protostellar cores via observation of infrared ``flares''. The second is indirect, namely a change in the character of stellar outflows from the highly collimated outflows characteristic of lower mass objects ($M <$ 10 {M$_{\sun}$}) presumably undergoing MMA, to much more poorly collimated outflows among higher mass objects. Unfortunately, no surveys aimed at detecting flares in regions of high mass star formation have yet been carried out. Moreover, there is thus far only very limited evidence regarding the degree of collimation, or lack thereof, for outflows from forming stars with masses larger than $\sim$ 10 {M$_{\sun}$}.
Given the challenge of distinguishing observationally between MMA and CM models for high mass star formation, alternative clues are potentially valuable. One such clue may be provided by observation of stellar angular momenta among young, high mass stars. For example, Bally \& Zinnecker (2005) suggest that the product of the merger of two stars should be rotating rapidly: a result of converting the orbital angular momenta of merging protostars into the spin angular momentum of the resulting higher mass, merged object. In contrast, stars formed via MMA should be rotating relatively slowly, at speeds well below breakup (e.g., Shu et~al.\ 1994).
In a recent paper discussing the rotation properties among young stars in Orion, Wolff, Strom, \& Hillenbrand (2004; hereafter WSH) used observations of {$v \sin i$}\ to derive values of the projected specific angular momentum ({Jsini/M}) for pre-main-sequence stars still on convective tracks; these stars have recently completed their main accretion phase and should provide the best directly measurable estimate of the initial values of angular momentum. WSH found that the upper envelope of {Jsini/M} varies slowly with mass ($J/M \sim M^{0.25}$) over the mass range 0.1-3 {M$_{\sun}$} and lies well below the angular momentum associated with stars having rotation speeds near breakup (see section 3 below). These authors suggest that MMA can in principle account for these results for stars in this mass range.
In this paper we extend measurements of {$v \sin i$} to O and B0 stars in extremely young clusters and associations and ask whether there is a discontinuity in rotation properties between lower mass stars and stars with masses $M \gg$ 3 {M$_{\sun}$}, as might be expected if MMA dominates the formation of low mass stars, while CM plays a role in the formation of a significant fraction of high mass stars.
\section{Observations}
\label{sec:obs}
The O stars for this study were selected from the associations studied by
Massey \& Thompson (1991), Hillenbrand et~al.\ (1993), and Massey, Johnson,
\& DeGioia-Eastwood (1995). Because our proposed search for a discontinuity
in rotation properties as a function of stellar mass requires our estimating
rotation rates for stars as soon as possible after they have formed, we have
limited our sample to stars very close to the zero-age main sequence (ZAMS),
i.e., to luminosity classes III-V. The stars were observed with the Hydra
multi-object spectrograph at the
WIYN telescope on Kitt Peak.
Observations (the ``high resolution sample'') of nine O stars in NGC 6611 were
obtained in May 2005 using the Hydra multiobject fiber spectrograph on WIYN as part of a more extensive study of stellar rotation in this cluster. In conjunction with an Echelle grating and an order separating filter, these observations with WIYN-Hydra yielded spectra with R $\sim$ 20,000 spanning the wavelength range 4450 to 4590 $\AA$. In addition to our NGC 6611 program stars, 12 stars with spectral types from O4 to B0 with known values of {$v \sin i$} (Penny 1996; Howarth et al. 1997) were observed on the same nights with the same instrumental configuration to serve as rotation standard stars. Our approach was to measure the FWHM of He I 4471 $\AA$ and to derive a relationship between FWHM and published {$v \sin i$} values (Penny 1996; Howarth et~al.\ 1997) for the twelve standard stars. We note that the {$v \sin i$} values adopted from these two published studies result from application of a cross-correlation technique to the rich near-ultraviolet metallic spectra of late-O and early-B stars observed with IUE. By using stars
from these two previous studies as standards, we
are placing our own values of {$v \sin i$} on a known system. We note that
different calibrations yield slightly different answers. For example,
the Penny and Howarth et al. values are larger by about 10\% than the
values determined from visual examination of observed line profiles by Conti \& Ebbets (1977) from spectra in the optical region of the spectrum. In turn, these latter values are systematically smaller by about 30 km/sec than the values
derived in the pioneering study of Slettebak (1956).
Forty-two additional stars were observed over
the wavelength range 4070-4580 $\AA$ at a resolution of $\sim$ 0.75 $\AA$
(``low resolution sample''). These observations enable measurement of
projected rotational velocities, {$v \sin i$} $>$ 50 km/sec, and upper limits to {$v \sin i$} for stars rotating more slowly than 50 km/sec. The spectra were
extracted and calibrated in wavelength by Kim Venn. The lines used to
measure {$v \sin i$} were He I 4471 $\AA$ or He I 4387 $\AA$ or both, depending
on the spectral type and the strength of the lines.
In order to calibrate {$v \sin i$} for our ``low resolution sample'', we established the relationship between FWHM and {$v \sin i$} for He I 4471 $\AA$ and He I 4387
$\AA$ by using the {$v \sin i$} values determined as above from the 9 O stars
in NGC 6611 that were observed at both resolutions. To strengthen the
calibration, we also added 4 stars in
NGC 6611 with spectral types B0.5 that were also observed at both high and
low spectral resolutions; the values of {$v \sin i$} for these 13 stars range
from 30 to 400 km/sec. (The measurements of stars in NGC 6611 with types B0.5 and later will be reported along with data for B stars in several other associations in a subsequent paper.) Independent measures of the line widths for stars observed more than once during our run indicate that the values of {$v \sin i$} are internally consistent to within about 20 km/sec. Given the uncertainties in the calibration and the difficulty of measuring extremely broad lines, we report the overall uncertainty in a measured value of {$v \sin i$} as $\sim$10\% or 20 km/sec, whichever is larger. This uncertainty is typical of {$v \sin i$} studies for stars in this spectral type range. For example, the average difference in the {$v \sin i$} values measured by both Penny (1996) and Howarth et~al.\ (1997) for the 9 of our standard stars that are common to both of the published studies is 19 km/sec. As a further check on our values, we have compared our results with 11 stars for which data are also available on either WEBDA (http://www.univie.ac.at/webda/webda.html) or from Huang \& Gies (2006). We find that the standard deviation of the difference between our results and the literature values is 13 km/sec.
The stars observed, along with spectral types, etc., are listed in Table 1. The first column lists the name of the cluster or association; the second gives the name of the star, if any; the third lists the number on the WEBDA system; the fourth and fifth columns give the stellar position; the sixth lists the spectral type, the seventh the log of the effective temperature, the eighth the bolometric luminosity, the ninth the stellar mass, the tenth the apparent rotational velocity, and the eleventh the log of the specific angular momentum. The derivation of the quantities in the last 5 columns is described in section 3.1.
\section{Analysis}
\subsection{Comparing Observed Rotational Velocities with Equatorial Breakup Velocities}
In order to make our first assessment of whether forming stars of all masses rotate similarly or whether there is a discontinuity at some characteristic mass, we need to establish a simple procedure for comparing rotation properties of young high and low mass stars. Our first approach is to compare the observed rotational velocities of the stars in our sample with the critical velocity for stars of the same mass on the birthline. We will argue below that the observed values of {$v \sin i$} for the very young stars in our sample have changed little since the time that the main accretion phase ended, and so the current values of {$v \sin i$} are representative of their initial values. The goal of this comparison is to determine whether the ratio of the observed initial values of {$v \sin i$} to the critical velocities on the birthline varies systematically with mass. This simple approach is motivated by the fact that explicit calculation of stellar angular momentum, which is a more fundamental quantity, requires a number of assumptions about stellar structure and the internal distribution of angular momentum as well as accurate estimates of stellar parameters. The apparent rotational velocity, however, is a directly measured quantity, while the breakup velocity on the birthline can be estimated from stellar models. Therefore, we begin the analysis by comparing the observed values of {$v \sin i$} with the critical velocities calculated from models of newly formed stars.
Models predict that stars with masses comparable to the O-stars in our sample are deposited directly on the zero age main sequence at the end of the assembly process. For a time-averaged mass accretion rate of $dM_{acc}/dt = 10^{-5}$ {M$_{\sun}$}/year, which observations suggest is typical for intermediate mass stars (Palla \& Stahler 1992), stars with $M > 8$ {M$_{\sun}$} should lie on the main sequence when accretion stops. If $dM_{acc}/dt = 10^{-4}$ {M$_{\sun}$}/year, which is more typical of the accretion rates advocated by McKee \& Tan (2003) for 20-30 {M$_{\sun}$} stars, then stars with $M >$ 15 {M$_{\sun}$} should lie on the main sequence when accretion stops. Therefore, we expect that all of the O stars in our sample were already on the main sequence when accretion ended.
For condensed stars, such as O stars on the main sequence, the breakup velocity $v_c$ is given by (Townsend, Owocki, \& Howarth 2004)
$v_c = (2GM/3R_p)^{0.5}$,
where M is the mass of the star and $R_p$ is the polar radius at the
breakup velocity. The factor 2/3
takes into account the difference in the polar and equatorial radii for the
case of critical rotation.
For the O stars, most authors calculate the critical velocity by including
a correction factor for radiation pressure:
$v_c^{2} = (GM/R_e)(1-\Gamma_{rad})$,
where $\Gamma_{rad}$, the ratio of the luminosity to the Eddington luminosity, is given by Mihalas (1978):
$\Gamma_{rad} = 2.6 \times 10^{-5}\, L/M$.
Maeder and Meynet (2000) point out that this relationship is valid only if we
assume that the brightness of a rotating star is uniform over its surface,
which is inconsistent with von Zeipel's theorem. They argue that radiation
pressure can in fact be ignored for stars with Eddington factors less than
0.639, which is true for the stars treated here.
For present purposes, we will also assume that the polar radius of a rapidly rotating star can be taken to be approximately equal to the radius of a non-rotating star. We have used the models of Schaller et~al.\ (1992) to obtain the parameters for ZAMS O stars. The critical velocities calculated with no allowance for radiation
pressure agree well with the calculations by Meynet et~al.\ (2006).
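For a feel for the numbers, the breakup formula above is simple to evaluate. In the sketch below the 30 {M$_{\sun}$} mass and the polar radius of 6.6 solar radii are assumed, illustrative values only; they are not read from the Schaller et~al.\ tracks.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m

def v_crit_kms(mass_msun, polar_radius_rsun):
    """Breakup velocity v_c = (2GM/(3 R_p))^{0.5} for a condensed star,
    with radiation pressure ignored (Eddington factor below 0.639)."""
    M = mass_msun * M_SUN
    Rp = polar_radius_rsun * R_SUN
    return math.sqrt(2.0*G*M / (3.0*Rp)) / 1.0e3  # km/sec

# Assumed illustrative ZAMS parameters for a ~30 Msun O star;
# the result is of order several hundred km/sec
vc = v_crit_kms(30.0, 6.6)
```

Values of this order sit well above the observed {$v \sin i$} distribution discussed below.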
For an accretion rate of $10^{-5}$ {M$_{\sun}$}/yr, stars with masses less than 8
{M$_{\sun}$} have not yet reached the ZAMS when
the main accretion phase ends. To estimate the critical rotational velocities
for these lower mass stars when accretion stops and they are deposited on the birthline, we have used the PMS models of Swenson et~al.\ (1994) and the birthline of Palla and Stahler (Palla, private communication) for an accretion rate of $10^{-5}$ {M$_{\sun}$}/yr to determine the values of M and R on the birthline. The Swenson models extend
only to 5 {M$_{\sun}$}. We have used the models of Siess, Dufour, \& Forestini
(2000) to calculate values for stars of 3-7 {M$_{\sun}$}; there is good agreement in the overlap region.
We have been unable to find calculations of the critical
velocity derived from models of PMS stars. However, it appears that
the 2/3 factor used
for high mass stars is a reasonable first approximation even for PMS stars.
Herbst and Mundt (2005)
have used the equilibrium shapes of a rotating polytrope of index 1.5
to estimate that the ratio of the polar to equatorial radius for the most
rapidly
rotating ($P=0.6$ days) low-mass PMS stars is $\sim$ 0.75. We have also derived
the true rotational velocities for the 5 PMS stars in Orion with periods
less than 0.9 days (Stassun et al. 1999) and masses between 0.15 and 0.27
{M$_{\sun}$}. If we assume that these stars are rotating at breakup, we find that
the lower bound for the factor that should be used to calculate
the critical velocity is 0.62. Because these values bracket the factor
of 2/3 calculated for high mass stars, we have assumed that this same factor
can also be applied to low-mass PMS stars.
The results are plotted in Figure 1. For low and intermediate mass stars, we have used the values of {$v \sin i$} derived by Rhode, Herbst \& Mathieu (2001) and WSH for stars in the Orion association, most of which are less than $10^6$ years old. (We note that rotation periods have been derived from photometry of many more low mass stars in Orion but cannot be measured for the O stars, which lack the spot-modulated light curves characteristic of their lower mass brethren. Therefore, we choose here to confine our comparison to {$v \sin i$}, which can be measured for stars of all masses.)
For the high mass stars in our current sample, we use the values of {$v \sin i$} reported in Table 1. We derive masses for the O stars from the evolutionary tracks of Schaller et~al.\ (1992). In order to do so, we need to know the effective temperature ($T_{eff}$) and bolometric luminosity ($L_{bol}$). We have used the calibrations given by Massey et~al.\ (2005) to estimate $T_{eff}$ from spectral types given in the papers by Massey \& Thompson (1991), Hillenbrand et~al.\ 1993, and Massey et~al.\ (1995); given $T_{eff}$, the bolometric correction can also be obtained from Massey et~al.\ (2005). The distances to each association have been taken from the papers that provided the spectral types, and intrinsic colors as a function of spectral type given by Fitzgerald (1970) have been used to derive to the reddening to each star.
Figure 1 shows that nearly all of the stars are rotating at velocities that are less than half the critical velocity on the birthline, and most are rotating at velocities that are less than 30\% of the critical velocity. We note that in
this figure, we have plotted $v_c$ but have measured {$v \sin i$}. A statistical
correction can be made for a group of stars whose inclinations are randomly
distributed by multiplying the average {$v \sin i$} by $4/\pi$ to obtain
the average true velocity, which we will call $v_{obs}$ (Chandrasekhar \&
Munch 1950; Gaige 1993).
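The $4/\pi$ correction follows from averaging $\sin i$ over randomly oriented rotation axes; a short numerical check (a sketch using only the geometry stated above):

```python
import math

# For randomly oriented rotation axes the inclination i has density
# p(i) = sin(i) on [0, pi/2], so the mean projection factor is
# <sin i> = int sin^2(i) di / int sin(i) di = pi/4,
# i.e. <v> = (4/pi) <v sin i>.
def mean_projection_factor(n=200000):
    """Midpoint-rule integration of <sin i> for an isotropic distribution."""
    di = (math.pi / 2.0) / n
    num = sum(math.sin((k + 0.5) * di) ** 2 for k in range(n)) * di
    den = sum(math.sin((k + 0.5) * di) for k in range(n)) * di
    return num / den

# mean_projection_factor() is ~0.785 = pi/4, confirming the 4/pi factor.
```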
If we subdivide the low mass stars into three groups according to mass, we find that the median value of $v_{obs}/v_c$ $\sim$ 0.14 for the lowest mass group,
0.10 for the next lowest mass group, and 0.11 for stars with masses near that of the Sun (see Table 2). If we divide the high mass stars into two groups, the medians are 0.13 for stars with masses between 8 and 25 {M$_{\sun}$} and 0.20 for the stars with masses greater than 25 {M$_{\sun}$}.
The discerning reader will note a ``gap'' in the observed data between 3 and 8 {M$_{\sun}$}. This gap results from the fact that, while there are many observations of stars near the ZAMS in this mass range, there are very few observations of stars young enough to be near their expected birthline radii. Rotation rates for ZAMS stars in this mass range may therefore not be representative of their initial values. Rather, such stars are initially deposited on pre-main sequence radiative tracks and will spin up as they contract, resulting in both higher critical velocities and higher rotation rates on the main sequence than would have been
observed on the birthline.
The results from our limited sample appear to be representative of the results from other studies. Stassun et~al.\ (1999) also find only a few exceptional low mass PMS stars in Orion with periods close to the critical value, and our results for the low mass Orion sample plotted in Figure 1 are representative of the range of apparent rotational velocities observed for other groups of PMS stars of similar mass. More extensive studies of O stars have found that about 95\% of main sequence O stars have values of {$v \sin i$} that are less than 300 km/sec (Penny 1996; Howarth et~al.\ 1997), with a tail extending to about 400 km/sec. Because the effects of limb darkening are severe in stars rotating near the breakup velocity, model calculations for B stars show that line widths may become insensitive to rotation speeds higher than about 90\% of the critical velocity for stars viewed equator-on (Townsend et~al.\ 2004). Only about 5\% of the O stars, however, are rotating even as rapidly as half the critical velocity, and so our sample should not be affected by excessive limb darkening.
Was the rotation of our sample stars also low relative to the critical velocity immediately after accretion stopped? Or has significant angular momentum loss occurred since the stars were deposited on the birthline or, equivalently for the high mass stars, since they first arrived on the ZAMS?
For stars with masses $M >$ 15 {M$_{\sun}$}, stellar winds are predicted to carry away angular momentum following deposition on the ZAMS. Moreover, as massive stars evolve, the radius increases, thus driving surface rotation speeds toward lower values. The amount of angular momentum lost depends on the age of the star,
and unfortunately it is difficult to estimate the ages of massive stars with
an accuracy of 1-2 Myr. The
values of T$_{eff}$ and L$_{bol}$ derived from the non-LTE atmospheric models
displace even the youngest O stars to the right of the ZAMS predicted by
models of the
interior (Schaller et al. 1992) by 2000-4000 K (Repolust,
Puls, \& Herrero 2004; Martins, Schaerer, \& Hillier 2005). As an
alternative method of estimating ages,
we note that there is evidence that all of our clusters except CygOB2 contain
PMS stars
with masses of 3-5 {M$_{\sun}$} and temperatures T$_{eff} < 10000$\,K.
Stellar models (e.g. Siess \& Forestini 2000) predict that these stars are
less than 2.6 Myr old. Since the formation of stars in these regions appears to
be contemporaneous within 2-3 Myr with the most massive stars being the
youngest (cf. Massey et al. 1995), it is reasonable to assume that the stars in
our sample are less than 2.5 Myr old. The observations of Cyg OB2 do not go
faint enough to reach the PMS stars, but Massey et al. have estimated from
the massive stars that the age of Cyg OB2 is similar to that of the other
associations in our sample.
Only 4 of the stars in our sample have masses greater than 40 {M$_{\sun}$}.
Quantitative estimates from extant models predict that the surface rotation rate will be reduced by only about 1/3 of the initial value during the first 2.5 Myr for stars rotating at 300 km/sec and with masses of $\sim$40 {M$_{\sun}$} (Meynet \& Maeder 2000). For most of the stars in our sample, which are of lower mass, the
effects of winds and radius changes will be smaller. Therefore, we can assume that the currently observed values of {$v \sin i$} for the O stars in our sample differ by not more than 100 km/sec, and typically by much less, from the initial
values just after the star was fully assembled. For stars more massive
than 40 {M$_{\sun}$}, the angular momentum loss is predicted to be larger and,
given the uncertainty
in ages, we can say little about what their initial angular momentum might
have been.
Some low mass stars must lose angular momentum as they evolve toward the ZAMS along convective tracks (Rebull, Wolff \& Strom 2004; Herbst \& Mundt 2005). For this reason, the values of {$v \sin i$} for our sample stars could in principle be somewhat lower than their initial values when they were deposited on the birthline. For example, Covey et~al.\ (2005) report that Class I/flat spectrum objects rotate at an average of $\sim$38 km/sec, or about twice as fast on average as Classical T Tauri stars. An increase in the average rotation rate of a factor of two would, however, still result in a representative median value of $v_{obs}/v_c \sim 0.2$.
What we can conclude from the data in Figure 1, therefore, is that throughout the mass range from 0.1 to 50 {M$_{\sun}$}\ the median value of $v_{obs}$ is
$\sim$ 10-20\% of the critical velocity; this ratio does not change significantly with mass.
\subsection{Comparing Stellar Angular Momenta for Stars of Differing Mass}
A more fundamental quantity than rotation speed is the angular momentum per unit mass.
The specific angular momentum of a star is given by
\[ J/M = I\Omega/M. \]
Therefore, in order to calculate the specific angular momentum of a star we require values for the moment of inertia (I), the angular rotational velocity, the stellar radius, and the mass. We also assume that the observed surface rotation rate is representative of the rotation of the star as a whole.
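A minimal sketch of this bookkeeping (the gyration factor $k^2$ below is an assumption for illustration; the paper itself uses moments of inertia from stellar models):

```python
# Illustrative J/M calculation, NOT the paper's model-based values: we
# parametrize I = k^2 M R^2 with an ASSUMED gyration factor k2 (centrally
# condensed stars have k2 well below the uniform-sphere value of 0.4).
# With Omega = v_eq / R,  J/M = I * Omega / M = k2 * R * v_eq.
RSUN_CM = 6.957e10  # cm

def specific_angular_momentum_cgs(radius_rsun, v_eq_kms, k2=0.06):
    """J/M in cm^2 s^-1 under the assumed I = k2 * M * R^2 parametrization."""
    return k2 * (radius_rsun * RSUN_CM) * (v_eq_kms * 1.0e5)

# At fixed k2, J/M scales linearly with both radius and equatorial velocity.
```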
Most published models do not provide moments of inertia, but Maeder (private communication) has provided us with values for models of stars with masses of 3, 5, 9, and 15 {M$_{\sun}$} on the ZAMS. Meynet et~al.\ (2006) have shown that log I scales as log M for massive main sequence stars, and we have used this relationship to extrapolate the values of I from the models we have to the higher mass stars.
For the highest mass star for which we have a model (15 {M$_{\sun}$}), the computed tracks predict a change of log I of only 0.11 dex by the time that such a star has completed nearly half its evolution toward core hydrogen exhaustion. Since this is a small change, we neglect evolutionary effects in I for the relatively unevolved stars in our sample.
The calculated values of J/M are plotted as a function of mass in Figure 2. As seen in this Figure, J/M varies slowly with mass over the range 0.2-50 {M$_{\sun}$}. A fit by eye suggests that over this entire range the upper bound scales as $J/M \sim M^{0.3}$, which is very close to the exponent of 0.25 derived by WSH for stars with $M <$ 3 {M$_{\sun}$}. The scatter below the upper bound can be attributed to several factors, including differences in viewing angle, loss of angular momentum as low- and intermediate-mass stars evolve down their convective tracks, and, depending on the mechanism that determines the initial angular momentum, possibly differences in stellar properties (e.g., magnetic field strength, accretion rate during the stellar assembly phase).
Note that we have included in this plot young stars in the mass range 3-8 {M$_{\sun}$} for which {$v \sin i$} values are reported by WSH. Such objects were excluded from Figure 1 on the grounds that these objects are considerably removed from their birthline locations. We include them here because it {\it appears} that stars in this mass range likely conserve angular momentum as they evolve. We do not expect intermediate mass stars to have magnetic fields or strong winds to carry away angular momentum (see also WSH). If this reasoning is correct, then the angular momenta reported here {\it may} accurately reflect their initial values.
\section{Summary}
Data for O-type stars have been combined with data in the literature to show that over a mass range of a factor of 250 (0.2-50 {M$_{\sun}$}), the specific angular momentum J/M of stars varies slowly and continuously with mass ($J/M \sim M^{0.3}$). Nearly all stars in this mass range are rotating at rates that are no more than 30\% of the critical velocity calculated from models of stars along the birthline. We conclude that a single mechanism must be at work to keep rotation rates low and at similar values for stars of all masses at birth, this despite the rapid accretion of high angular momentum material from a disk during the stellar assembly phase.
In the context of MMA and CM models, our results would appear to rule out CM models, which we would naively expect to produce more rapid rotation if massive stars formed through mergers. If such mergers take place, they would appear to be the exception rather than the rule. Rather, the continuity of angular momentum properties across the whole mass range from M stars to O stars argues for a common formation mechanism to masses as high as 50 {M$_{\sun}$} and adds one more piece to the mounting evidence (cf.\ WSH) that magnetically-mediated accretion through a disk is the main mechanism by which stars of all masses form.
With the present data, we cannot yet extend this conclusion to still higher masses. We note, however, that Penny (1996) observed several stars with estimated masses larger than 60 {M$_{\sun}$} and found none with {$v \sin i$} $>$ 200 km/sec (well below the estimated escape velocity of $>$ 800 km/sec). It is unclear whether this means that no massive stars rotate extremely rapidly or whether their strong stellar winds have carried away significant amounts of angular momentum. Observations of very young stars with well established ages are needed to determine the angular momentum properties of stars in the range 50-100 {M$_{\sun}$}.
\acknowledgements
We thank Diane Harmer of NOAO, who provided generous assistance both in preparing for our WIYN-Hydra observing run and at the telescope, and the referee, Georges Meynet, for his thorough reading of the manuscript and a number of helpful suggestions.
\clearpage
\section{Introduction}
\label{sec:intro}
The recent discovery by \citet{b05} of the hypervelocity star SDSS J090745.0+024507
(hereafter the HVS) has lent credence to the prediction of \citet{h88} that
dynamical encounters with the black hole in the Galactic centre
can eject stars with velocities up to several thousand kilometers per second.
At a distance of 40-70$\kpc$, the HVS has a heliocentric radial velocity
of $853 \pm 12\kms$ in a direction of $174^{\circ}$ from the Galactic centre
\citep{b05}.
Once corrected for solar motion and galactic rotation, this translates into
a velocity of about $730\kms$ relative to the Local Standard of Rest.
This velocity, which represents the lower limit of the total space velocity,
is significantly higher than that of any other runaway or high-velocity star
in the Galaxy \citep{s91}.
We investigate the origin of the extreme velocity of the HVS
by means of kinematic analyses, binary evolution calculations and
numerical scattering experiments.
There are three possible explanations for the velocity of the HVS:
(i) ejection upon supernova explosion of the companion in a binary system;
(ii) dynamical ejection after a close encounter with main sequence stars or
stellar mass compact objects;
(iii) dynamical ejection from the Galactic centre as a result of an encounter
with the supermassive black hole.
The paper is organised as follows: in \S2 we analyse the possibility
of an origin in the Galactic disk and an ejection by supernova explosion,
in \S3 we back-trace the orbit of the star in the Galactic potential
and constrain the value of its proper motion and in \S4 we describe
numerical scattering experiments of 3-body encounters involving
main sequence (MS) stars, the supermassive black hole (SMBH)
and intermediate-mass black holes (IMBHs).
\section{The origin of the high velocity of the HVS}
\label{sec:origin}
To address the possibility that the HVS is a high-velocity runaway resulting
from the disintegration of a binary system, we turned to population
synthesis. Making use of the {\tt SeBa} stellar evolution package \citep{p01}, we
generated two sets of $10^6$ binaries each. In the first set, primary masses
ranged from $100\msun$ down to the hydrogen burning limit.
In the second set, the minimum primary mass was increased to $8\msun$, in
order to get a larger sample of events for statistical purposes.
The mass ratio in both cases was chosen from the distribution of
\citet{h92}, whilst the initial orbital separation was drawn from
that of \citet{d91} and truncated at $10^5$ R$_{\odot}$.
The distribution of natal kick speeds for neutron stars was taken from
\citet{p90}, with a dispersion of $300\kms$.
We assumed that the escape speed was the orbital speed of the secondary
immediately before the binary disintegrated.
In addition, we only considered secondary stars with a mass below $10\msun$,
given the known constraints on the mass of the HVS. From this
synthesis experiment, we obtained maximum escape speeds of $\sim 70 \kms$,
an order of magnitude below the space velocity of the HVS.
Based upon this, we reject the high-velocity runaway hypothesis.
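The escape-speed bookkeeping above can be sketched numerically (an illustration with Keplerian two-body formulas, not the {\tt SeBa} synthesis itself): a secondary escaping a disrupted binary carries roughly its pre-disruption orbital speed.

```python
import math

# Sketch of the runaway escape speed, NOT the SeBa population synthesis:
# if the binary disintegrates, the secondary escapes with roughly its
# pre-disruption orbital speed about the centre of mass,
#   v_2 = (m_1 / (m_1 + m_2)) * sqrt(G (m_1 + m_2) / a).
G = 6.674e-11      # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
AU = 1.496e11      # m

def secondary_orbital_speed_kms(m1_msun, m2_msun, a_au):
    m_tot = (m1_msun + m2_msun) * MSUN
    v_rel = math.sqrt(G * m_tot / (a_au * AU))   # relative orbital speed
    return (m1_msun / (m1_msun + m2_msun)) * v_rel / 1e3

# Even a tight 10 + 3 Msun binary at a = 1 AU yields only ~80 km/s,
# an order of magnitude below the HVS space velocity.
```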
The possibility that the system is a binary and that its high speed is the
result of an asymmetric supernova kick can be discounted because, if we
assume that the visible companion is on the main sequence, a kick magnitude
in excess of $2000\kms$ would be required. While this cannot be ruled out
physically, the probability of the system remaining bound in such an event
is negligible.
The only alternative explanation for the high velocity of the HVS
is an ejection during a dynamical encounter.
Stellar encounters involving MS stars, neutron stars or stellar mass black
holes eject stars with maximum velocities on the order of the orbital velocity
in binary systems \citep{gpe04, sp93}.
\citet{b05} assume therefore that the HVS was ejected from the
Galactic centre by a strong encounter with the SMBH.
This possibility was first proposed by \citet{h88}, who considered encounters
of binaries with a SMBH, and then further developed by \citet{yt03},
who found that encounters between binaries and the SMBH or between
single stars and a binary black hole represent the most efficient channel
to eject hypervelocity stars.
\section{The trajectory of the HVS in the Galaxy}
\label{sec:orbit}
In the previous section we argue that the HVS must have been
ejected from the Galactic centre by a dynamical encounter
with the SMBH. This implies that the HVS originated
in the Milky Way's central region.
Despite the incompleteness of available kinematic data, we trace back
the orbit of the HVS in the Galactic potential.
The distance of the object is still uncertain, as it depends on its
spectral type and evolutionary state.
\citet{b05} estimate a distance of $71\kpc$ for a B9.2 MS star and
$39\kpc$ for a blue horizontal branch star, with an average value
$d=55\kpc$. We consider all three values in our analysis.
We first assume that the star has no proper motion and trace its trajectory
backward in time until it crosses the disc, using Paczynski's (1990) model
for the potential of the Galaxy.
The orbit, shown in Fig. \ref{fig:orbit} (dashed line) for the case
$d=55\kpc$, does not pass through the Galactic centre region.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{figure1.eps}
\end{center}
\caption{Trajectory of the HVS integrated backward in time
in the Galactic potential without any proper motion component (dashed line)
and with a proper motion of about 1.8$\masyr$ (solid line).
The integration is stopped upon passage through the disc.
The full dot represents the present position of the star at a distance of
$55\kpc$ while the grey dots represent a schematic model of the Galaxy.}
\label{fig:orbit}
\end{figure}
We then randomly generate the two proper motion components
in the range 1-3$\masyr$ and derive the three-dimensional positions and
velocities in the galactocentric reference frame \citep{js87}.
For each distance value we generate about 5000 sets of initial conditions,
integrate the trajectories and determine the minimum distance $d_{\rm min}$ to the
Galactic centre. We find that there is at least one
combination of proper motion components for each distance such that
$d_{\rm min}< 5\,\rm pc$.
In Tab. \ref{tab:pm} we report, for each distance $d$, the average values
$\mu_{\alpha}$ and $\mu_{\delta}$ of the proper motion components
such that $d_{\rm min}<$10\,pc, the corresponding heliocentric velocity
$V_{\rm SUN}$ and the velocity $V_{\rm ej}$ at a distance $d_{\rm min}$
from the centre, corrected for galactic rotation.
\begin{table}
\caption{Kinematic parameters of the HVS which minimize the distance
from the Galactic centre. For each distance value, we report the average
proper motion in right ascension $\mu_{\alpha}$ and declination $\mu_{\delta}$
such that $d_{\rm min}<$10\,pc, the corresponding heliocentric velocity
$V_{\rm SUN}$ and the velocity $V_{\rm ej}$ at a distance $d_{\rm min}$
from the centre corrected for galactic rotation.}
\label{tab:pm}
\begin{center}
\begin{tabular}{ccccc}
\hline
$d$ & $\mu_{\alpha}$ & $\mu_{\delta}$ & $V_{\rm SUN}$ & $V_{\rm ej}$ \\
(kpc) & (mas/yr) & (mas/yr) & (km/s) & (km/s)\\
\hline
39 & -1.4916$\,\pm\,$0.0007 & 2.1679$\,\pm\,$0.0009 & 982$\,\pm\,$1 & 1246$\,\pm\,$144\\
55 & -0.9769$\,\pm\,$0.0004 & 1.5379$\,\pm\,$0.0003 & 976$\,\pm\,$1 & 1266$\,\pm\,$66\\
71 & -0.7214$\,\pm\,$0.0002 & 1.1917$\,\pm\,$0.0003 & 973$\,\pm\,$1 & 1249$\,\pm\,$142\\
\hline
\end{tabular}
\end{center}
\end{table}
The ejection velocity $V_{\rm ej}$ is about $1250\kms$, independent of the
assumed distance.
In the case $d=55\kpc$, the mean values $\mu_{\alpha}$ and $\mu_{\delta}$ given
in Tab. \ref{tab:pm} result in the orbit shown in Fig. \ref{fig:orbit} (solid line).
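The tabulated heliocentric velocities can be checked with the standard conversion $v_t\,[\kms] = 4.74\,\mu\,[{\rm mas/yr}]\,d\,[{\rm kpc}]$ (a quick consistency sketch, not the full galactocentric transformation used in the integrations):

```python
import math

# Consistency check on the tabulated heliocentric velocities: the transverse
# velocity is v_t [km/s] = 4.74 * mu[mas/yr] * d[kpc], combined in
# quadrature with the measured radial velocity of 853 km/s.
def heliocentric_speed_kms(mu_alpha_masyr, mu_delta_masyr, d_kpc,
                           v_rad_kms=853.0):
    mu_total = math.hypot(mu_alpha_masyr, mu_delta_masyr)
    v_t = 4.74 * mu_total * d_kpc
    return math.hypot(v_t, v_rad_kms)

# For d = 55 kpc and (mu_alpha, mu_delta) = (-0.98, 1.54) mas/yr this gives
# ~976 km/s, matching the V_SUN column of the table.
```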
We can use this analysis to calculate the minimum velocity
with which the HVS should have been ejected from the disk in the case
of the binary supernova scenario. Assuming a random proper motion in the range
1-3$\masyr$, the minimum velocity at disk crossing (corrected for galactic rotation)
is about $500\kms$, much larger than the average recoil velocity
from a supernova explosion (see \S~2). This result supports the scenario
of an origin in the Galactic centre.
\section{Three-body scatterings with the supermassive black hole}
\label{sec:scatter}
We now explore the hypothesis of a dynamical ejection from the Galactic centre
by means of numerical simulations of three-body scatterings with the SMBH.
In Fig. \ref{fig:enc} we show three examples of encounters
involving MS stars, the SMBH and an IMBH.
\begin{figure*}
\begin{center}
\includegraphics[width=5.5cm]{figure2a.ps}
\includegraphics[width=5.5cm]{figure2b.ps}
\includegraphics[width=5.5cm]{figure2c.ps}
\end{center}
\caption{Examples of three-body encounters involving the SMBH.
The positions are in units of the initial binary semi-major axis.
(Left) An encounter between a binary of $3\msun$ MS stars and the
SMBH resulting in the the ejection of one star while the other star
remains bound to the SMBH (exchange).
(Middle) An encounter between a star and a black hole binary resulting in a
preservation.
(Right) An encounter between the SMBH and a binary containing a $3\msun$
MS star and an IMBH resulting in the breakup of the binary and the ejection
of the MS star (ionization).}
\label{fig:enc}
\end{figure*}
The experiments are carried out with the {\tt sigma3} package included in the
STARLAB\footnote{\tt http://www.manybody.org/manybody/starlab.html} software
environment \citep{mh96,p01}. For each simulation we specify: the masses of
the three stars, the semi-major axis and eccentricity of the binary and the
relative velocity at infinity between the binary's centre of mass and the
single star. In order to classify possible collisions and mergers, physical
radii are also specified for the stars. Additional parameters like the
orbital phase of the binary and its orientation relative to the incoming star
are randomly drawn from uniform distributions \citep{hb83}. The initial
eccentricity is drawn from a thermal distribution \citep{h75}. The impact
parameter $b$ is randomized according to an equal probability distribution for
$b^2$ in the range $[0-b_{\rm max}]$. The maximum value $b_{\rm max}$ is
determined automatically for each experiment (see \citet{gpe04} for a description).
Energy conservation is usually better than one part in $10^6$ and,
in case the error exceeds $10^{-5}$, the encounter is rejected.
The accuracy in the integrator is chosen in such a way that at most 5\%
of the encounters are rejected.
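The impact-parameter randomization described above can be sketched as follows (a minimal illustration, not the {\tt sigma3} implementation):

```python
import math
import random

# Sampling b with equal probability per unit b^2 on [0, b_max], as in the
# scattering setup above: if u ~ U(0,1), then b = b_max * sqrt(u) has
# density p(b) db proportional to b db, i.e. uniform in b^2.
def sample_impact_parameter(b_max, rng):
    return b_max * math.sqrt(rng.random())

rng = random.Random(42)
samples = [sample_impact_parameter(1.0, rng) for _ in range(200000)]
mean_b2 = sum(b * b for b in samples) / len(samples)
# For a uniform-in-b^2 distribution on [0, 1], the mean of b^2 is 1/2.
```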
In all the experiments, we consider MS stars of mass
$m=3\msun$ (as indicated by \citet{b05} for a B9 star) and radius $R =
2.4\rsun$, an IMBH of mass $\mimbh=3000\msun$ and a SMBH of mass $\msmbh =
3.5\times10^6 \msun$ \citep{g03, s03}. The relative velocity at infinity
between the single star and the binary's centre of mass is set equal to the
dispersion velocity in the Galactic centre ($\sim 100\kms$). The ejection
velocities of escapers are taken at the distance at which the integrator
stops \citep{mh96}. The effect of the black hole potential is negligible
at this distance.
\subsection{Encounters between a binary of MS stars and the SMBH}
\label{sec:nb}
Close encounters between a binary and a very massive object can
(i) break up the binaries and eject the two components with high speed
({\it ionizations}),
(ii) eject one star and leave the second star bound to the SMBH
({\it exchange}) (see the left panel of Fig. \ref{fig:enc}).
We perform scattering experiments between the SMBH and binaries of MS stars.
In all the runs, the binary stars have equal mass $m$
and the semi-major axis is varied in the range 0.05\,AU$ < a <$ 1\,AU.
In Fig. \ref{fig:nbvel} we show the average ejection velocity $V_{\rm ej}$
of escapers as a function of the initial binary semi-major axis.
Sufficiently high ejection velocities ($V_{\rm ej} \simgreat 1250\kms$)
are obtained for $a\simless$ 0.3\,AU. The dotted line represents the
theoretical prediction by \citet{yt03} with an ejection speed parameter
$v'_{\rm BH} = 130\kms$ (see Eq. 20).
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{figure3.ps}
\end{center}
\caption{Average recoil velocity of escapers as a function of the initial
binary semi-major axis in the interaction of a stellar binary with the SMBH.
The error bars indicate the 2$\sigma$ deviation from the mean.
The squares indicate the velocity $V_{\rm max}$ for which 1\% of the
encounters have $V_{\rm ej} > V_{\rm max}$. The triangles indicate
the velocity $V_{\rm min}$ for which 1\% of the encounters have
$V_{\rm ej} < V_{\rm min}$. The horizontal line marks the $1250\kms$
ejection velocity of the HVS while the dotted line gives the
theoretical estimate by \citet{yt03}.}
\label{fig:nbvel}
\end{figure}
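The $a$-dependence in the figure follows the familiar tidal-disruption scaling; a hedged order-of-magnitude sketch (this is a generic Hills-type estimate, not Yu \& Tremaine's exact Eq.~20):

```python
import math

# Order-of-magnitude sketch of the ejection-velocity scaling, NOT
# Yu & Tremaine's exact formula: a binary of total mass m_b and separation
# a, tidally broken up by a black hole of mass M, ejects one component at
# roughly v_ej ~ sqrt(2 G m_b / a) * (M / m_b)**(1/6).
G = 6.674e-11      # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
AU = 1.496e11      # m

def hills_v_ej_kms(m_binary_msun, a_au, m_bh_msun):
    m_b = m_binary_msun * MSUN
    a = a_au * AU
    m_bh = m_bh_msun * MSUN
    return math.sqrt(2.0 * G * m_b / a) * (m_bh / m_b) ** (1.0 / 6.0) / 1e3

# A 3+3 Msun binary with a = 0.1 AU meeting a 3.5e6 Msun SMBH gives a few
# thousand km/s, and v_ej falls off as a**-0.5, consistent with the trend
# in the figure (the numerical results are lower by an O(1) factor).
```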
In Fig. \ref{fig:nbbranch} we show the fraction of encounters resulting
in ionization, exchange of the secondary star or merger for the range
of $a$ under consideration.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{figure4.ps}
\end{center}
\caption{Branching ratios as a function of the initial binary semi-major
axis for encounters between a stellar binary and the SMBH.
The empty symbols indicate the total fraction of ionizations,
exchanges and mergers while the full dots indicate the fraction
of encounters which produce escaping stars with a velocity larger
than $1250\kms$.}
\label{fig:nbbranch}
\end{figure}
The fraction of encounters resulting in an exchange of the secondary
star increases slowly from about 25\% for $a = 0.05$\,AU to about 30\% for
$a = 1$\, AU. Since the binary components have equal masses,
the probability of ejection for the two stars is equal in exchange encounters.
Ionizations are rare and only occur for $a \simgreat$ 0.2\,AU,
which marks the transition between hard and soft binaries \citep{h75}.
As a result, high ejection velocities are mostly produced in exchange encounters.
The total fraction of scatterings whose outcome is a star escaping
with a velocity larger than $1250\kms$ decreases rapidly with increasing $a$,
and therefore binaries with semi-major axis 0.05\,AU $\simless a \simless$
0.3\,AU are the most suitable progenitors of hypervelocity stars.
If we consider that about 20\% of the simulated encounters
result in the ejection of a hypervelocity star over the suggested
range of orbital separations, the ejection rate provided by \citet{yt03}
can be refined to $\sim 3\times10^{-6} \left(\eta/0.1\right) \rm yr^{-1}$,
where $\eta$ is the binary fraction.
The fraction of physical collisions and mergers decreases steadily with
increasing $a$ from $\sim$20\% to $\sim$2\%; these events mainly involve the
binary components. Collisions with the SMBH occur in less than 1\% of the cases.
The merger products can remain bound to the SMBH or escape its gravitational potential.
We perform additional scattering experiments of encounters between binaries
and the SMBH to study the properties of the merger products.
We consider equal mass binaries with mass $m=1.5\msun$ (in such a way that the
mass of any possible merger is $\sim 3\msun$) and radius $R=1.4\rsun$.
Escapers are ejected with velocities larger than $1000\kms$ if the initial
semi-major axis is in the range 0.03\,AU $\simless a \simless$ 0.05\,AU.
In this range, only about 6\% of the encounters result in a binary merger with
escape of the collision product. We therefore consider this scenario
inefficient for the production of hypervelocity stars.
\subsection{Encounters between a single star and a binary black hole}
We now consider the hypothesis that the SMBH is in a binary with an IMBH
and interacts with single stars (see the central panel of Fig. \ref{fig:enc}).
The semi-major axis of the black hole binary is taken in the range
2\,AU $< a < 1000$\,AU.
All the simulated encounters result in a preservation of the black hole
binary, during which the single star gains kinetic energy and escapes.
In Fig. \ref{fig:bbhvel} we show the average ejection velocity
of the escaping star as a function of $a$.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{figure5.ps}
\end{center}
\caption{Average recoil velocity of a single MS star escaping from
a black hole binary as a function of the binary's initial semi-major axis.
The horizontal line marks the $1250\kms$ ejection velocity of the HVS while
the dotted line represents the analytical estimate by \citet{yt03}.}
\label{fig:bbhvel}
\end{figure}
The maximum velocity obtained in these encounters is about
$1000\kms$, barely sufficient to explain the velocity of the HVS.
The comparison with the theoretical estimate (dotted line) by \citet{yt03}
reveals a discrepancy of about a factor of 2 in the ejection velocities.
The numerical results also flatten at large initial semi-major axes and
deviate from the expected $V_{\rm ej} \propto a^{-1/2}$ scaling.
We have performed additional calculations with $1\msun$ incoming stars but
the average $V_{\rm ej}$ appears to be insensitive to the mass of the single
star.
\subsection{Encounters between a SMBH and a main sequence star in a binary with an IMBH}
\label{sec:break}
It has been proposed \citep{hm03} that the Galactic centre
is populated by IMBHs, with masses in the range $100-10000\msun$.
These IMBHs may form in young dense star clusters as a
result of runaway collisions \citep{spz04} and sink toward the Galactic centre
together with their parent cluster due to dynamical friction.
While the cluster dissolves in the Galactic potential, the IMBHs continue
to spiral in, possibly with a stellar companion, until they eventually
interact with the SMBH \citep{e01}.
If this is the case, IMBHs must play a role in the dynamical encounters taking
place in the few inner parsecs of the Galaxy.
We consider encounters between the SMBH and a binary consisting of a MS star
and an IMBH.
We select the binary semi-major axis in the range 0.1\,AU$< a < 100$\,AU,
with the lower limit set by the IMBH's tidal radius
$R_t = R \left(\mimbh/m\right)^{1/3} = 24\rsun$ for the adopted masses and radii.
We focus on the encounters whose final outcome is the break-up of the binary
with subsequent ejection of star $m$ to infinity, while the IMBH
can either remain bound to the SMBH or escape (see the right panel of Fig. \ref{fig:enc}).
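The tidal-radius lower limit quoted above is easy to verify numerically:

```python
# Quick check of the tidal-radius bound quoted in the text:
# R_t = R * (M_IMBH / m)**(1/3), with R = 2.4 Rsun, M_IMBH = 3000 Msun
# and m = 3 Msun for the adopted parameters.
def tidal_radius_rsun(r_star_rsun, m_imbh_msun, m_star_msun):
    return r_star_rsun * (m_imbh_msun / m_star_msun) ** (1.0 / 3.0)

# tidal_radius_rsun(2.4, 3000.0, 3.0) = 24 Rsun ~ 0.11 AU, consistent
# with the 0.1 AU lower limit on the binary semi-major axis.
```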
Figure \ref{fig:bkvel} reports the velocity of the escaping star
after the encounter as a function of the initial $a$.
This type of encounter can easily eject stars with velocities of
thousands of kilometers per second.
A theoretical estimate derived from \citet{yt03} (see Eq. 1--3)
is shown with a dotted line. The numerical results agree well
with the theoretical estimate.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{figure6.ps}
\end{center}
\caption{Average recoil velocity of escapers as a function of the initial
binary semi-major axis in the interaction between a normal star in a binary
with an IMBH and the SMBH. The error bars indicate a 2$\sigma$ deviation from the mean.
The squares and triangles are defined as in Fig. \ref{fig:nbvel}.
The horizontal line marks the $1250\kms$
ejection velocity of the HVS while the dotted line gives the
theoretical estimate by \citet{yt03}.}
\label{fig:bkvel}
\end{figure}
In Fig. \ref{fig:bkbranch} we show the fraction of encounters resulting
in ionization, exchange of the secondary star or merger for the range
of $a$ under consideration.
For $a \simless$ 0.3\,AU the binary is hard and the total cross-section is dominated
by exchange encounters. In this case, the escaping star gains energy at the
expense of the IMBH, which becomes bound to the SMBH.
For larger semi-major axes, the binary is soft and tends to be ionized.
The highest recoil velocities are generally obtained in exchange
encounters.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{figure7.ps}
\end{center}
\caption{Branching ratios as a function of the initial
binary semi-major axis in the interaction between a normal star in a binary
with an IMBH and the SMBH.}
\label{fig:bkbranch}
\end{figure}
Although this scenario can eject stars with hypervelocities over a wide range
of orbital separations, only rare, highly eccentric orbits result in prompt
ionization. Prompt ionization only occurs for low angular momentum
orbits, which comprise a fraction of about $10^{-5}$ if we assume an isotropic
velocity distribution. Adopting a formation rate of $\sim 10^{-7} \rm
yr^{-1}$ for IMBHs \citep{spz05}, the ejection rate can be as small as
$\eta \times 10^{-11} \rm yr^{-1}$, where $\eta$ represents the fraction of IMBHs
with a stellar companion.
Low eccentricity orbits result in a slow inspiral of the binary
due to dynamical friction, but, in this case, it is not clear whether
the stellar density is sufficiently high to drag the binary to the tidal
radius of the SMBH. If an encounter with the SMBH does take place,
we expect the ejection velocity to be much lower than in the case of
a prompt ionization.
\section{Summary and Conclusions}
\label{sec:disc}
We have investigated the origin of the extreme velocity of the HVS
by means of kinematic analyses, binary evolution calculations
and numerical simulations of three-body encounters.
By tracing the trajectory of the HVS in the Galactic potential
(using the available measurements for the distance and the radial velocity),
we have shown that a proper motion of $\sim 2\masyr$ is required
for the star to have come within a few parsecs from the SMBH.
We confirm the prediction by \citet{h88} and \citet{yt03} that dynamical
encounters with the SMBH can eject hypervelocity stars to the Galactic halo;
the HVS is likely the first discovered object of this kind.
The most promising scenarios are dynamical encounters of MS stars with
the SMBH, possibly in a binary with an IMBH.
In particular, the encounter between a SMBH and a stellar binary
or between a single star and a binary black hole provides enough kinetic
energy to eject hypervelocity stars over a restricted range of initial
semi-major axes.
We have also investigated the more exotic encounter between a SMBH
and an IMBH orbited by a stellar companion.
Although this type of encounter can be very energetic and does not require
any constraint on the binary's hardness, it has a very low probability.
The possibility that the HVS is the product of a merger induced by the SMBH
is intriguing but not very likely.
\section{Acknowledgments}
We thank Clovis Hopman and Melvin Davies for interesting discussions
on hypervelocity stars and the anonymous referee for brief
and insightful comments on the manuscript.
This work was supported by the Netherlands Organization
for Scientific Research (NWO), the Royal Netherlands Academy
of Arts and Sciences (KNAW) and the Netherlands Research School
for Astronomy (NOVA).
\section{Introduction}
Thin layers of fluids on solid substrates display surprisingly rich dynamics, due to the interplay of a variety of forces at many lengthscales~\cite{simpson1982, 1985gennes, oron1997, craster2009, 2009bonn}. Much progress has been achieved on the study of thin fluid films. For low Reynolds number flows, they are well characterized by the lubrication approximation of the Navier-Stokes equation. This elegant approximate formalism allows for tractable analysis of a wide range of fluid dynamics problems on many lengthscales, such as liquids spreading on flat surfaces~\cite{ehrhard1991}, inclined surfaces~\cite{huppert1982},
convergent viscous gravity currents~\cite{dijksman2015},
spin coating applications~\cite{schwartz2004, wu2006}, flow of granular suspensions~\cite{ancey2013} and geophysical~\cite{balmforth2000} contexts, with ``thin'' here meaning that the height $h$ of the film is small relative to the typical lateral lengthscale. The spatiotemporal evolution of the height field $h(x,y,t)$ is of a very general form, essentially a nonlinear conservation equation, describing how a profile in $h$ evolves in time. The evolution is driven by various forces, including gravitational, surface tension and, in a rotating system, centrifugal forces. If all the forces are sustained at constant levels, the film may approach a steady state profile.
\begin{figure}[tbp]
\includegraphics[width=14cm]{1_fig_setup.pdf}
\caption{\label{fig:setup_mech} (a) Schematic drawing of the container with the interferometry setup and all the relevant parameters: the rotation speed $\Omega$ in radians per second, the initial filling height $H_0$, the radius of the container $R$, the dynamic viscosity and surface tension of the fluid, $\eta, \gamma$ respectively. (b) The location of the heating element underneath the container, and the double-walled rotating axis that doubles as a cooling tube.
}
\end{figure}
In a rotating container, the free surface of a fluid will develop a parabolic profile to balance gravitational pressure and centrifugal forces, with the amplitude of the profile increasing with the rotation rate~\cite{linden1984,lubarda2013,dijksman2015,bostwick2017}. For sufficiently large rotation rates, $\Omega> \Omega_c$, the profile will be truncated by the bottom of the container \cite{linden1984, dijksman2015}. Related flows are observed in other studies using a stationary container with a rotating bottom plate~\cite{bergmann2011,tophoj2013}. For a fluid that wets the container walls, a thin film of fluid will remain in the center of the container; we call this the central thin film (CTF), whose radius depends on the rotation rate~\cite{dijksman2015}. Away from the center, the height profile remains parabolic. After sufficient time, the full height profile $h$ will converge to an equilibrium, due to conservation of mass in the container.
In the absence of other influences, the CTF will be of nearly uniform thickness. The thinning dynamics of the film is described in the classic work of Emslie, Bonner and Peck~\citep{Emslie1958} (EBP), who considered the simplest case of centrifugal forcing of a viscous fluid. From the balance of viscous and centrifugal forcing, they derived that $h(t) \propto t^{-1/2}$, a progressively slowing thinning behavior. This behavior has been confirmed and expanded by many~\cite{acrivos1960, meyerhofer1978, flack1984, masahiro1987, kitamura2001} and is of fundamental importance in all spin coating techniques, for example in lithographic microchip production and in making the next generation of photovoltaics~\cite{delbos2012}.
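The EBP law can be recovered from the observation that, for a radially uniform film, the balance of viscous and centrifugal stresses reduces to $dh/dt = -2\rho\Omega^2 h^3/(3\eta)$, which has a closed-form solution decaying as $t^{-1/2}$ at late times. A minimal numerical sketch (the parameter values are illustrative, chosen to roughly match the experiments described below):

```python
import math

# EBP balance for a radially uniform film: dh/dt = -2*rho*Omega^2*h^3/(3*eta).
# Illustrative values: 1 Pa.s PDMS spun at 1 revolution per second.
rho, eta = 965.0, 1.0          # density [kg/m^3], dynamic viscosity [Pa.s]
omega = 2.0 * math.pi          # rotation rate [rad/s]
k = 2.0 * rho * omega**2 / (3.0 * eta)

def h_ebp(t, h0=2.9e-3):
    """Exact solution of dh/dt = -k*h^3: h(t) = h0/sqrt(1 + 2*k*h0^2*t)."""
    return h0 / math.sqrt(1.0 + 2.0 * k * h0**2 * t)

# At late times h ~ (2*k*t)**-0.5: the initial thickness drops out,
# and quadrupling t halves the thickness.
```

At late times the initial thickness becomes irrelevant and $h \propto t^{-1/2}$, the EBP scaling referred to throughout this paper.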
Here we show that the addition of a surface tension gradient stress, also known as Marangoni stress, drives nontrivial spatio-temporal dynamics in a thin fluid film generated by spin coating. The inward Marangoni stress we create produces an accumulation of mass in the center of the film. We quantify how the thinning dynamics is affected by Marangoni stresses, and how the equilibrium profile is determined by the balance of Marangoni forcing with the various other forces acting on the thin film. We observe that Marangoni driving can even qualitatively change the structure of the surface at the edge of the CTF.
The paper first describes the experimental (Sec.~\ref{sec:exp}) and theoretical approach (Sec.~\ref{sec:theo}) used, and then presents the results in four main divisions: Sec.~\ref{sec:isothinning} shows how we can recover the classic EBP scaling dynamics; Sec.~\ref{sec:marathinning} describes how Marangoni forcing changes the EBP scaling. We also obtain results on the final equilibrium profile of the thin film spot after thinning has ceased: for the isothermal case, these results are described in Sec.~\ref{sec:isoequil}; the Marangoni effects on the equilibrium profile are described in Sec.~\ref{sec:maraequil}.
\begin{figure}
\includegraphics[width=16cm]{2_fig_fringeevo.pdf}
\caption{\label{fig:maraqual} Fringe pattern of the CTF after steady rotation for 1~hour at $\Omega = 2\pi$ at isothermal conditions (leftmost panel). At $t=0$~min, heating/cooling is turned on. Even though the CTF has not equilibrated to its steady state, the Marangoni forcing immediately induces strong height variations in the CTF. Image crop measures about 4.6~cm in width.}
\end{figure}
\section{Experimental setup}\label{sec:exp}
The experimental system consists of an initially
uniform layer of fluid of thickness $\sim 3$~mm in a shallow cylindrical container -- see Fig.~\ref{fig:setup_mech}. The container is spun up using a stepper motor to rotation speed $\Omega = 2\pi$~radians per second~(rps) unless otherwise noted. The container measures 13~cm in diameter and 2~cm in height. On the bottom of the container, a 4'' diameter silicon wafer (University Wafers) is placed; the wafer is fixed to the base through the deposition of a small ($\lesssim 1$~ml) amount of fluid between wafer and the bottom of the container. The suction force that keeps the wafer stuck to the container relies on the hydrodynamic drag on the thin film between wafer and container, and remains even after complete submersion of the wafer. The container is filled with a volume $V$ of fluid which gives an initial filling height $H_0 = V/(\pi R^2)$ with $R$ the radius of the container. We use polydimethylsiloxane (PDMS) for all experiments described in this work; this fluid completely wets the silicon wafer. The fluid wetting, combined with the fact that the thin films explored in this work are never thinner than several microns, eliminates the necessity of extreme wafer cleanliness. The properties of PDMS are density $\rho = 965$~kg/m$^3$ and $\gamma = 0.02$~N/m. The thermal sensitivity of the surface tension for PDMS is $d\gamma/d\Theta \approx 6\times10^{-5} \mbox{N}/(^\circ\mbox{K$\cdot$m})$~\cite{1947fox, Bhatia1985}. We use a range of dynamic viscosities of 10-10000~mPa$\cdot$s; in all cases the viscosity and rotation rates used ensure that Coriolis forces do not play a role, since $\eta \gg \rho\Omega^2h$~\cite{Emslie1958}. The transparency of the PDMS and reflectivity of the silicon wafer allow for a laser-assisted alignment of the gravity-leveled fluid surface and the silicon wafer in the container, whose orientation can be tuned by set screws.
Interferometry provides access to the spatio-temporal features of the thin film dynamics~\cite{toolan2012} -- see Fig.~\ref{fig:setup_mech}a. Wafer illumination is provided with a uniform sodium light via a beam splitter. The spatial structure of the interference pattern of reflected and incoming light waves is recorded with a digital camera. In particular, height dynamics at any location can be measured through the rate at which interferometric fringes evolve. The fringe succession rate can be measured with the varying intensity of the interference pattern, which we record by camera. The magnitude of the interference signal (the pixel values in the recording) is irrelevant, but the periodicity of the signal indicates the passing of fringes, which represent a height reduction (or increase) of known amplitude. Here we use this technique to measure the thin film dynamics in the center of the container at $r=0$. We will neglect the weak temperature dependence of the viscosity and the index of refraction (required for the fringe-based height measurements) of PDMS.
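The conversion from fringe count to height change follows directly from the reflection interference condition; a small helper illustrates it, using the sodium wavelength and PDMS refractive index quoted below:

```python
# Each full fringe corresponds to a film-height change of lambda/(2*n)
# in reflection interferometry; sodium light and PDMS values from the text.
LAMBDA = 588e-9   # sodium D-line wavelength [m]
N_PDMS = 1.4      # refractive index of PDMS

def fringe_height_step():
    """Height change per passing fringe (~210 nm for these values)."""
    return LAMBDA / (2.0 * N_PDMS)

def height_change(num_fringes):
    """Total thinning implied by a counted number of passed fringes."""
    return num_fringes * fringe_height_step()
```

The periodicity of the recorded intensity signal then directly converts a fringe count into a height change, without needing the absolute intensity.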
\par
In isothermal experiments, the container is uniformly heated to a temperature of 24$^{\circ}$C to fix the temperature dependent viscosity $\eta$ and surface tension $\gamma$ of the fluid. Temperature control is implemented by running water at a set temperature through the double-walled rotating axis. To establish Marangoni forcing, we cool the center of the container by running cooling water through the double-walled rotating axis while heating the outside ring of the container with a foil heater (Minco), positioned underneath the container (Fig.~\ref{fig:setup_mech}b). The Marangoni forcing has a very significant effect on $h(r)$. The qualitative effect of the initiation of heating on an almost flat CTF is shown in Fig.~\ref{fig:maraqual}. After isothermal spin-up of about 1~hour, we turn on Marangoni forcing. While the container is establishing its equilibrium radial temperature profile, we see strong evolution of the fringes in the CTF region.
The power for the foil heater is supplied through a slip ring (Moog) on the rotating axis. In the thermal gradient experiments, the level of cooling, set by the thermal bath that circulates the water (Neslab RTE7), is always run at maximum capacity. The maximum thermal gradient is then achieved at the largest heating power we can provide, which is 100~W. The edge temperature at 100~W heating is about 60$^{\circ}$C. For a complete description of the setup, see \cite{Mukhopadhyay2009}. This heating mechanism gives an azimuthally symmetric thermal profile in the bottom of the container, and hence on the silicon wafer, as shown in Fig.~\ref{fig:setup_IR}a. The temperature gradient is opposite in direction compared to the thermal profile considered in~\cite{dandapat1994}.
Fig.~\ref{fig:setup_IR}b shows the temperature profile on a silicon wafer at maximum heating/cooling capacity in an empty container. We measure the profile with an infrared (IR) camera (FLIR A325). The infrared measurements require a known emissivity for the substrate, and low reflection of spurious infrared radiation into the camera. The measurements are thus performed with a layer of spray paint (Krylon flat white 1502) on the silicon wafer and in an empty container. We determined the infrared emissivity of the spray painted silicon wafer by calibrating the response at known temperature, similar to \cite{boreyko}. The spray painted wafer was subsequently placed in the container in the same way an uncoated wafer would be mounted in an experiment with an actual fluid present. The IR data is available only out to the edge of the silicon wafer, at approximately $0.8R$.
\section{Governing model}\label{sec:theo}
\subsection{Temperature profile}
\par
Due to the high thermal conductivity of the wafer, we expect the temperature profile to be maintained in the steady state set by the balance of the outer heating and central cooling. At the outer edge, the heating effectively sets the temperature at the boundary, $r=R$. On the interior, the temperature should satisfy the steady axisymmetric heat equation, with a heat sink, $q$, accounting for the influence of the cooling.
To represent these idealized conditions, we write the steady state heat equation in cylindrical coordinates,
\begin{equation}
0 = \frac{\kappa}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\Theta}{\partial r}\right) -q,
\end{equation}
in which $\kappa$ is the heat conductivity. We model the cooling as being a uniform constant value over a small inner region, $0\le r\le r_1$, and zero outside.
Consequently, the temperature distribution can be described over the central and outer annular regions in terms of a parabolic profile and the classic axisymmetric steady state solution,
\begin{equation}
\Theta(r)= \Theta_0 + {q\over 4\kappa}
\begin{cases}
r^2 & 0\le r\le r_1\\
r_1^2 +2r_1^2\log(r/r_1) & r_1\le r\le R
\end{cases}
\label{v1temp}
\end{equation}
where $\Theta_0$ is the temperature at the origin. Fig.~\ref{fig:setup_IR} shows that
equation (\ref{v1temp}) matches the experimental profile well. Over a significant portion of the domain, the profile has a nearly uniform radial temperature gradient of about $7.4^\circ$K/cm. For PDMS~\cite{1947fox, Bhatia1985}, this translates to a maximum surface tension gradient of about $4.4\times 10^{-2}$~N/m$^2$.
It is important to note that it is not appropriate to approximate $\Theta(r)$ by a linear profile, because this imposes an unphysical gradient in the solution at the origin, which can yield spurious behaviors.
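As a quick consistency check, the piecewise profile \eqref{v1temp} and its gradient can be verified to be continuous at $r=r_1$, as required for matching the temperature and the radial heat flux. A sketch, with the source strength $q/(4\kappa)$ set to one in arbitrary units:

```python
import math

# Continuity check for the piecewise steady-state profile (Eq. v1temp):
# both the temperature and the radial heat flux must match at r = r1.
# q/(4*kappa) is set to 1 (arbitrary units); r1 from the fit, in cm.
Q4K, R1 = 1.0, 1.95

def theta(r):
    """Piecewise solution: parabolic core, logarithmic annulus."""
    if r <= R1:
        return Q4K * r**2
    return Q4K * (R1**2 + 2.0 * R1**2 * math.log(r / R1))

def dtheta(r):
    """Radial gradient of the piecewise solution."""
    return 2.0 * Q4K * r if r <= R1 else 2.0 * Q4K * R1**2 / r
```

Both `theta` and `dtheta` agree from either side of $r_1$, confirming that the annular logarithmic branch carries exactly the heat collected by the central sink.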
\par
Fitting \eqref{v1temp} to the actual temperature profile data as shown in Fig.~\ref{fig:setup_IR}, we obtain an inner region where the parabolic profile applies, with $r_1\approx 1.95$~cm ($r_1\approx 0.3R$, scaled relative to $R$). This radius is much larger than the radius of the tubing $r_i\approx 0.17 R$ indicated in Fig.~\ref{fig:setup_IR}b. We conclude that the assumption of uniform central cooling is not exactly satisfied and will replace \eqref{v1temp} with a qualitatively equivalent but less restrictive empirical profile.
\par
The qualitative form of the temperature profile given by (\ref{v1temp}) should be mostly insensitive to variations in the properties of the central cooling, but we have not attempted to calibrate those values precisely. It will be convenient to replace this profile with an empirical fit to a single smooth function on $0\le r\le R$, given by
\begin{equation}
\Theta(r)=\Theta_0 +B\left(1 - \exp(-Cr^2)\right),
\label{v2temp}
\end{equation}
which has $\Theta'(0)=0$ at the origin. Here $B, C$ are dimensional fitting constants: $C$ relates to the effective width of the central cooling while $B$ scales with overall temperature rise to the outer edge of the container. The product $BC$ corresponds to the ratio of the source strength to conductivity from \eqref{v1temp}, $BC=q/(4\kappa)$, and the effective linear temperature gradient is given by the maximum slope, $\Theta'_{\max}=B\sqrt{2C/e}$.
This profile fits the experimental measurements well and is very close to (\ref{v1temp}) over most of the domain (see Fig.~\ref{fig:setup_IR}b). For our experimental setup, by fitting to the profile in Fig.~\ref{fig:setup_IR} we determined $C\approx 0.0935$~cm$^{-2}$.
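The quoted maximum slope follows from differentiating \eqref{v2temp}: $\Theta'(r)=2BCr\,e^{-Cr^2}$ is maximized at $r=1/\sqrt{2C}$, which gives $\Theta'_{\max}=B\sqrt{2C/e}$. A quick numerical verification ($C$ is the fitted value; $B$ is chosen arbitrarily for illustration):

```python
import math

# Maximum gradient of Theta(r) = Theta0 + B*(1 - exp(-C*r^2)):
# Theta'(r) = 2*B*C*r*exp(-C*r^2), maximized at r = 1/sqrt(2*C),
# giving Theta'_max = B*sqrt(2*C/e). B is illustrative; C is the fit value.
B, C = 40.0, 0.0935   # B in K (arbitrary), C in 1/cm^2 (fitted)

def slope(r):
    """Radial temperature gradient of the empirical Gaussian profile."""
    return 2.0 * B * C * r * math.exp(-C * r**2)

r_star = 1.0 / math.sqrt(2.0 * C)        # location of the steepest gradient
slope_max_formula = B * math.sqrt(2.0 * C / math.e)
```

Sampling `slope` over the domain confirms that no point exceeds the closed-form maximum.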
\begin{figure}[!tbp]
\includegraphics[width=14cm]{3_fig_IR.pdf}
\caption{\label{fig:setup_IR} (a) Infrared (IR) false color top view of the container showing the temperature profile of the base obtained at the largest thermal gradient possible. Color indicates temperature, ranging from $20^{\circ}$C to $60^{\circ}$C. The temperature profile on the dashed line is shown in (b) for the entire container, as a function of the radial position in units of the container radius $R$ (blue solid line). The black short-long dashed line indicates a linear fit with slope $7.4^{\circ}$K/cm. The actual thermal profile is well fitted to a smooth profile based on a Gaussian (red dash-dotted curve; see text) that approximates the piecewise-defined steady state solution (green solid curve). The arrow and dashed vertical line indicate the radius of the cooling tubes at the base of the container.}
\end{figure}
\subsection{Lubrication model}
The time dependent film height $h(r,t)$ in the rotating container is described with the time dependent axisymmetric lubrication approximation that includes surface tension, surface tension gradients, gravity, centrifugal force and disjoining pressure~\cite{schwartz2004,wu2005,wu2006,wu2007,Mukhopadhyay2009},
\begin{equation}
- \frac{1}{3\eta r} \frac{\partial}{\partial r} \left\{
{\rho\Omega^2}r^2h^3 +
\frac{3rh^2}{2}\frac{d\gamma}{dr} -
rh^3\frac{\partial}{\partial r} \left[ \rho g h -
\frac{A}{h^3} \right]
+ \gamma rh^3 \frac{\partial}{\partial r} \left[
\frac{1}{r} \frac{\partial}{\partial r}\left(r
\frac{\partial h}{\partial r} \right)\right] \right\} =
\frac{\partial h}{\partial t} ~,
\label{eq:filmfull}
\end{equation}
where $h$ is the height of the axisymmetric surface depending on the radial coordinate, $r$, and time, $t$. The influence of the wetting properties of the container's base is given by the contribution of the disjoining pressure, $\Pi=A/h^3$, with a negative Hamaker constant for complete wetting.
\par
To incorporate thermal Marangoni stresses, we use the temperature profile $\Theta(r)$ to write
$$\frac{d\gamma}{dr} = \frac{d\gamma}{d\Theta} \frac{d\Theta}{dr}=\tau \frac{d\Theta}{dr}$$
with $\tau$ being a material parameter that captures the temperature dependence of the surface tension of PDMS. We assume a linear dependence of the surface tension $\gamma$ on temperature $\Theta$~\cite{ehrhard1991}; the literature suggests $\tau \approx 6\times 10^{-5}$~N/$^\circ$K$\cdot$m~\cite{1947fox, Bhatia1985}.
\par
We nondimensionalize Eq.~\ref{eq:filmfull} with $h=H_0\tilde{h}, r=R\tilde{r}$ and set the timescale $T=\eta R^2/(\rho g H_0^3)$ based on the balance between viscous and gravity-driven effects. With these choices and after dropping the tildes on all nondimensionalized variables, the scaled equation is:
\begin{equation}
-\frac{1}{3r} \frac{\partial}{\partial r}\left\{
\mbox{Fr}^2\, r^2h^3 -
\mbox{Ma}\,\frac{3rh^2}{2}\phi(r)-
rh^3\frac{\partial h}{\partial r} -
\mbox{Ha}\,\frac{3r}{h}\frac{\partial h}{\partial r}
+\frac{rh^3}{\mbox{Bo}} \frac{\partial}{\partial r}
\left[
\frac{1}{r} \frac{\partial}{\partial r}\left(r
\frac{\partial h}{\partial r} \right)\right]
\right\}=\frac{\partial h}{\partial t},
\label{eq:filmfullscale}
\end{equation}
where the nondimensionalized temperature gradient function is
\begin{equation}
\phi(r)=2cr \exp(-c r^2)\qquad \mbox{with $c=3.95$},
\label{phiEqn}
\end{equation}
where $c=CR^2$ (analogous to a Damk\"ohler or Thiele parameter for the dimensionless ratio of a reaction rate to a diffusivity), with the other dimensionless parameters being
\begin{equation}
\mbox{Fr}^2={\Omega^2 R^2\over g H_0}\qquad
\mbox{Ma}={\tau B\over \rho g H_0^2}\qquad
\mbox{Ha}={|A|\over \rho g H_0^4}\qquad
\mbox{Bo}={\rho g R^2\over \gamma},
\end{equation}
respectively: a rotational Froude number, a modified Marangoni number (the ratio of thermally driven surface tension gradients to gravity), a dimensionless Hamaker parameter, and a Bond number based on the size of the container.
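For reference, these dimensionless groups can be evaluated directly from the experimental values quoted above; a sketch, in which the Marangoni temperature scale $B$ is taken as an illustrative 40$^\circ$K (the other values are from the text):

```python
import math

# Dimensionless groups for the experimental parameters quoted in the text.
# B (the temperature rise scale) is an illustrative assumption here.
rho, g = 965.0, 9.8          # density [kg/m^3], gravity [m/s^2]
gamma, tau = 0.02, 6e-5      # surface tension [N/m], |dgamma/dTheta| [N/(K m)]
R, H0 = 0.065, 2.9e-3        # container radius, initial fill height [m]
omega = 2.0 * math.pi        # rotation rate [rad/s]
A = -7.6e-21                 # Hamaker constant [J]
B = 40.0                     # illustrative temperature scale [K]

Fr2 = omega**2 * R**2 / (g * H0)     # rotational Froude number squared
Ma = tau * B / (rho * g * H0**2)     # modified Marangoni number
Ha = abs(A) / (rho * g * H0**4)      # dimensionless Hamaker parameter
Bo = rho * g * R**2 / gamma          # Bond number
```

With these values $\mbox{Fr}^2 \approx 5.9$, $\mbox{Bo} \approx 2\times 10^3$ and $\mbox{Ha} \approx 10^{-14}$, consistent with the numerically convenient value quoted below.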
\par
We use a second-order-accurate implicit finite difference scheme to solve the time dependent axisymmetric lubrication equation \eqref{eq:filmfullscale} subject to no-flux boundary conditions. In addition, we also solve for the steady-state profiles with a different quad-precision numerical code to give an independent check on the accuracy of the computational results for large times. We use values consistent with PDMS wherever they are constants; $\rho = 965$~kg/m$^3$ and $A = -7.6\times10^{-21}$J for a numerically convenient $\mbox{Ha} = 10^{-14}$.
\begin{figure}[!tbp]
\includegraphics[width=14cm]{4_fig_isotherm.pdf}
\caption{\label{fig:thin_iso} (a) Typical interferometric signal at $r=0$ for a thinning experiment. The increasing period length of the intensity modulation in the interferometric signal signifies a decreasing thinning rate. (b) The result of numerically solving Eq.~\ref{eq:filmfull} with the parameters from a 1~Pa$\cdot$s spinning experiment. The solid black line has a slope corresponding to the EBP scaling of $t^{-1/2}$. The arrow points to the plateau in $h(r=0,t)$ when the equilibrium profile was reached. (c) Experimental data on the increase of the fringe succession time with time, measured for various fluid viscosities, in units of mPa$\cdot$s: (10, red $\triangle$), (100, indigo $+$), (1000, green $\circ$), (10000, light blue $\times$) at $\Omega = 2\pi$~rps. The green line corresponds to the same data as shown in panel (b), in the fringe succession time representation. (d) Rescaling the time axis with the EBP scaling of $\eta^{-1/3}$, we can collapse all data from panel (c). In the fringe succession representation, the expected EBP scaling corresponds to $t^{3/2}$ (see text) as observed (black line).}
\end{figure}
\section{Results}
\subsection{Isothermal thinning dynamics}\label{sec:isothinning}
We study the thinning dynamics for the system with $H_0 = 2.9$~mm, $\Omega = 2\pi$~rps, and PDMS oil with viscosities in the range $\eta = 10-10000$~mPa$\cdot$s with the fringe-passing technique described in Fig.~\ref{fig:thin_iso} and via solving Eq.~\ref{eq:filmfull}. Under steady rotation the thickness of the central film $h(r=0,t)$ will decrease until an equilibrium state is reached. Due to the thinning of the CTF during rotation, viscous forces increase progressively as the shear rates increase in the thinning layer, while the centrifugal force remains constant. The equilibrium solution is thus approached only very slowly. In our rotating container, we track the evolution of the thin film by recording the succession of fringes at $r=0$, the center of the container. Each successive fringe implies a thinning of the central thin film by $\Delta h_f = \lambda/(2n) = 210$~nm with $\lambda$ being the wavelength of the sodium light ($\lambda\approx 588$~nm) and $n$ the index of refraction of PDMS ($n\approx 1.4$). A typical intensity profile for the center of the container is shown in Fig.~\ref{fig:thin_iso}a: clearly the fringe succession period $\Delta T=t_{k+1}-t_k$, corresponding to
$h(0,t_{k+1})=h(0,t_k)-\Delta h_f$, grows over time. Solving Eq.~\ref{eq:filmfull} gives the expected EBP thinning dynamics with a scaling of $h(r=0,t) \propto t^{-1/2}$ in the CTF as shown in Fig.~\ref{fig:thin_iso}b. Eventually the numerical solution reaches a steady state profile in which the final equilibrium height is set by the disjoining pressure. Numerical data shown in aforementioned panel is obtained for the experimental conditions of a thinning experiment with a 1~Pa$\cdot$s fluid. We perform equivalent experiments for a range of different viscosities by counting the number of fringe successions as a function of time to obtain scaling of $h(r=0,t)$. All experiments are done at a rotation rate of $2\pi$~rps.
\par
We cannot track all the passing fringes, as recording the entire experiment on video would result in prohibitively large data sets. Instead, we record short periods of the thinning process at several stages during the thinning process. Each recording is long enough to observe the length of a fringe passing period, so from every recording we can estimate the thinning rate. However, as we do not record the total number of passed fringes, we cannot get an accurate measure of the total change in the height profile. We can only record the thinning \emph{rate}. The minimum fringe passing time is of the order of several frames in the 30 frames per second video imaging, limiting the thinning rate measurements in the early stages of the thinning process. The manual recording of thinning dynamics makes the thinning rate measurements irregularly spaced in time at later times.
\par
Results are shown in Fig.~\ref{fig:thin_iso}c. The slowdown in thinning is clearly observed by the gradual increase in $\Delta T/\Delta h_f$. The numerical solution shown in Fig.~\ref{fig:thin_iso}b, from which we compute the derivative $\Delta T/\Delta h_f$ from the $h(r=0,t)$ data, also coincides with the corresponding experimental data without any free fitting parameters. Experimental data for fluids with different viscosities can be collapsed by rescaling the time axis with $\eta^{-1/3}$ as shown in Fig.~\ref{fig:thin_iso}d, also consistent with EBP scaling. The rescaling shows clearly that the time between fringe successions grows as $\Delta T \propto t^{3/2}$, which implies that $h(t) \propto t^{-1/2}$, also consistent with the EBP scaling. These results show that our experimental and numerical methods in the rotating container geometry are effective in capturing the classic thinning dynamics. Interestingly, they indicate that the accumulation of fluid at the edge of the container during its rotation does not noticeably affect the EBP scaling for the thinning of the CTF.
\begin{figure}[tbp]
\includegraphics[width=10cm]{5_fig_maraexp.pdf}
\caption{\label{fig:thin_maraA} Experimental (symbols) and numerical (solid curves) data on thin film height dynamics at $r=0$ during spinning with $\Omega = 2\pi$~rps and subject to different thermal gradients:
($7.4^\circ$K/cm, red $\circ$),
($6^\circ$K/cm, indigo $+$),
($1.8^\circ$K/cm, green $\triangle$).
For the numerics, we used $B=39,13,3.25$ corresponding to a maximum thermal gradient of: ($6^\circ$K/cm, red), ($2^\circ$K/cm, blue) and ($0.5^\circ$K/cm, green) to obtain best fits. Inset: digital image data, showing the circular fringes of the fluid hump in the center. Image width is approximately 3~mm. }
\end{figure}
\begin{figure}
\includegraphics[width=10cm]{6_fig_predprof.pdf}
\caption{\label{fig:thin_maraB} Numerical profiles $h(r,t)$ corresponding to the $2^\circ$K/cm case from Fig.~\ref{fig:thin_maraA} at times $0.01, 0.1, 1, 10,\cdots$, being effectively converged to a steady state by $t=10^5$~s.
Arrows indicate the direction of motion of the evolving surface and dashed curves give predictions of the steady state profile to be constructed in sections \ref{sec:isoequil} and
\ref{sec:maraequil}.
The inset gives the profiles on a linear scale,
showing the semi-parabolic profile rapidly attained near the outer wall, to be described in
Section \ref{sec:isoequil}. }
\end{figure}
\subsection{Marangoni effect in thinning dynamics}\label{sec:marathinning}
We can now determine the effect of adding thermal Marangoni forces to spin coating applications. We probe the thinning dynamics for a $\eta = 100$~mPa$\cdot$s silicone oil spun at $\Omega = 2\pi$~rps with $H_0 = 2.9$~mm. In three different experiments, we provided equilibrated, steady thermal gradient profiles of 7.4, 6 and 1.8$^\circ$K/cm. The thinning dynamics in the representation $\Delta T/\Delta h_f$ are shown in Fig.~\ref{fig:thin_maraA}. The early time thinning behavior displays the classic EBP scaling with $\Delta T/\Delta h_f \propto t^{3/2}$, corresponding to $h \propto t^{-1/2}$. After some time however, the thinning dynamics slows down substantially, leading to what looks like a divergence of $\Delta T/\Delta h_f$. We check that our numerical simulations provide the same perspective. We can indeed quantitatively capture the experimental observations with the numerics -- see Fig.~\ref{fig:thin_maraA}.
\par
Note that given our experimental settings, we allow for a small variation in the numerical value of the thermal stress gradient strength $B$, as we can only image the thermal profile at the base and have to assume a homogeneous temperature in the thin fluid layer. The divergence in the thinning time is accompanied by the appearance of a set of rings in the center of the rotating container (Fig.~\ref{fig:thin_maraA} inset). This suggests that the thermal Marangoni stress, which is directed towards the center of the container, draws fluid inward and serves to increase the height in the center of the container. The numerical simulations again confirm the experimental picture. In Fig.~\ref{fig:thin_maraB} we show computed $h(r)$ profiles at several times in the dynamics subject to a thermal gradient of $2^\circ$K/cm. After approximately $10^4$ seconds, the profile has effectively reached a steady shape with a central fluid hump. With the appearance of the hump, deviation from EBP dynamics and indeed convergence to a finite-thickness steady state are expected.
\par
The Marangoni effect on the thinning behavior can be understood from a simplified version of \eqref{eq:filmfullscale}. From Fig.~\ref{fig:thin_maraB} we observe that during most of the dynamics the height profile in the center of the container remains nearly flat,
$h(r,t)\approx \bar{h}(t)$. Therefore, capillarity can be neglected since the curvature will be small. We will also neglect disjoining pressure for short to moderate times while the CTF remains relatively thick. Consequently, the
$r\to 0$ limit yields the leading order equation
\begin{equation}
{d\bar{h}\over dt} = -{2\over 3} \mbox{Fr}^2 \bar{h}^3 \left(1 - {3c\mbox{Ma} \over \mbox{Fr}^2 \, \bar{h}}\right),
\label{hbarODE}
\end{equation}
where $c$ is the parameter from \eqref{phiEqn} relating to the variance of the temperature profile about the origin. When $\bar{h}$ is relatively large, the factor in parentheses will be close to unity and the solution will be
\begin{equation}
\bar{h}(t)\sim \left(1+ {\textstyle{4\over 3}} \mbox{Fr}^2 \,t\,\right)^{-1/2},
\label{hbarsol}
\end{equation}
which corresponds to the EBP scaling, see Fig.~\ref{fig:longtime_mara}a.
For longer times, as $\bar{h}$ becomes smaller,
the influence of the Marangoni stress is to slow the EBP thinning rate and to establish an equilibrium film thickness where Marangoni stresses and centrifugal forcing balance,
\begin{equation}
\bar{h}_*= {3c\mbox{Ma} \over\mbox{Fr}^2},
\label{hbarstar}
\end{equation}
see Fig.~\ref{fig:longtime_mara}. Consequently,
Eqn \eqref{hbarsol} gives an estimate of the (dimensionless) time when a near-equilibrium thickness has been reached, $t_* = 3(h_*^{-2}-1)/(4\mbox{Fr}^2)$; for small Ma, so that $\bar{h}_* \ll 1$, this
yields $t_*\sim (\mbox{Fr/Ma})^2/(12c^2)$. For our experimental settings at relatively large $\mbox{Ma}$, this yields dimensionless $t_*\sim 10^4$ and larger; as the time scale $T=\eta R^2/(\rho g H_0^3)$ is approximately 1.65~s, this equilibration estimate is consistent with Fig.~\ref{fig:thin_maraA}. Since the timescales needed to explore the thinning dynamics become prohibitively large for smaller $\mbox{Ma}$, and experimental control of the temperature gradient is not ideal with smaller temperature gradients, we will explore the long time behavior only numerically.
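The crossover from EBP thinning to the Marangoni-set plateau in \eqref{hbarODE} is easy to reproduce numerically; a forward-Euler sketch, with parameter values that are illustrative but of the order of the experiments:

```python
# Forward-Euler integration of the reduced film ODE (Eq. hbarODE):
# dh/dt = -(2/3)*Fr2*h^3*(1 - h_star/h), with h_star = 3*c*Ma/Fr2.
# Parameter values are illustrative, of the order of the experiments.
Fr2, Ma, c = 5.9, 0.03, 3.95
h_star = 3.0 * c * Ma / Fr2       # Marangoni-set equilibrium thickness

h, t, dt = 1.0, 0.0, 1e-2
while t < 500.0:
    h += dt * (-2.0 / 3.0) * Fr2 * h**3 * (1.0 - h_star / h)
    t += dt
# h decays from 1 following the EBP law, then plateaus near h_star
```

The solution first follows the EBP decay \eqref{hbarsol} and then saturates at the equilibrium thickness \eqref{hbarstar}, mirroring the plateaus in Fig.~\ref{fig:longtime_mara}a.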
\par
Note that Fig.~\ref{fig:longtime_mara}b shows deviations from the linear scaling with respect to $(\mbox{Ma/Fr}^2)$ for small $h$ (and very small Ma); we will see that this occurs when the disjoining pressure is no longer negligible compared to Ma and establishes a minimum thickness for the CTF layer.
\begin{figure}
\includegraphics[width=14cm]{7_fig_marascale.pdf}
\caption{\label{fig:longtime_mara} Numerical simulations for (a) $h(r = 0,t)$ for a range of Marangoni strengths; the color coding is the same as in (b). The standard EBP scaling of -0.5 is satisfied initially (black line, Eq.~\ref{hbarsol}); eventually an equilibrium height $h_{\mathrm{eq}}$ is reached as indicated by the plateaus, \eqref{hbarstar}. (b) the equilibrium heights $h_{\mathrm{eq}}(r = 0)$ for a range of Marangoni strengths $\mbox{Ma}/\mbox{Fr}^2$; the indicated linear scaling in $\mbox{Ma}$ is consistent with the prediction from
\eqref{hbarstar}.}
\end{figure}
\subsection{Isothermal equilibrium profiles}\label{sec:isoequil}
\par
To better frame the influence of the Marangoni stresses on the steady CTF profile, we first review the behavior of isothermal free-surface fluids in rotating containers \cite{linden1984}. Assuming $h=O(1)$, which allows us to neglect disjoining pressure effects, for
small rotation rates the steady free surface will have a central depression that is paraboloidal,
\begin{equation}
h(r) =1 +{\mbox{Fr}^2\over 2}\left(r^2 -{1\over 2}\right)
+ \mbox{Fr}^2\left({2\over \mbox{Bo}} -
{ I_0(r\sqrt{\mbox{Bo}}\,)\over \sqrt{\mbox{Bo}}\, I_1(\sqrt{\mbox{Bo}}\,)}\right).
\label{heq8}
\end{equation}
This result incorporates the contribution of surface tension through a term involving the ratio of modified Bessel functions $I_i$. Surface tension has a weak influence on the form of the solution, yielding a boundary layer of width $O(1/\sqrt{\mbox{Bo}}\,)\to 0$ at the outer wall of the container that allows the solution to satisfy a contact angle condition, here taken to be $h'(1)=0$.
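As a consistency check, the profile \eqref{heq8} conserves the dimensionless fluid volume, $\int_0^1 h(r)\,2r\,dr=1$, for any Fr and Bo: both the parabolic term and the Bessel-function correction integrate to zero. A short Python sketch verifying this numerically (the parameter values are illustrative; the modified Bessel functions are evaluated from their power series):

```python
# Check that the profile (heq8) conserves the dimensionless volume:
# \int_0^1 h(r) 2 r dr = 1 for any Fr, Bo. Parameter values are illustrative.
import math

def bessel_i(n, x, terms=30):
    """Modified Bessel function I_n(x) from its power series."""
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def h(r, Fr2, Bo):
    sb = math.sqrt(Bo)
    return (1.0 + 0.5 * Fr2 * (r * r - 0.5)
            + Fr2 * (2.0 / Bo - bessel_i(0, r * sb) / (sb * bessel_i(1, sb))))

def volume(Fr2, Bo, n=2000):
    dr = 1.0 / n
    s = 0.0
    for i in range(n + 1):                 # trapezoidal rule on [0, 1]
        r = i * dr
        w = 0.5 if i in (0, n) else 1.0
        s += w * h(r, Fr2, Bo) * 2.0 * r * dr
    return s

print(volume(0.5, 4.0), volume(2.0, 25.0))   # both close to 1
```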
\par
For higher rotation rates \cite{linden1984}, a central bare spot will form,
\begin{equation}
h(r)\approx
\begin{cases}
0 & 0 \le r < r_c\\
{\textstyle {1\over 2}}\mbox{Fr}^2(r^2 -r_c^2) &
r_c < r< 1
\end{cases}
\label{eq9}
\end{equation}
where the radius of the bare spot is given by
\begin{equation}
r_c \approx \sqrt{1 -{2 \over \mbox{Fr}}}\ge 0\qquad \mbox{for $\mbox{Fr}\ge 2$.}
\label{eq13rc}
\end{equation}
Note that for large rotation rates, this shows that the fluid volume is forced into a narrow layer at the outer walls of the container, of width $1-r_c\sim 1/\mbox{Fr}=O(\Omega^{-1})\to 0$ as $\Omega\to\infty$. Similar expressions for a ``fluid hole'' were derived in \cite{bostwick2017}, but there the influence of gravity was neglected. This expression for the radius of the hole gives a very good estimate of the critical Froude number corresponding to the critical rotation rate for the onset of formation of a hole, $\mbox{Fr}_c=2$, or equivalently, $\Omega_c=2\sqrt{gH_0}/R$.
\par
In reality, on a completely wetting substrate, the ``bare spot'' will not be dry and will retain an adsorbed thin film due to the intermolecular forces with the substrate; this describes our CTF. Neglecting the influences of capillarity and gravity for very thin films, by balancing the centrifugal effects with the disjoining pressure, we can obtain an approximate steady height profile for the CTF region,
\begin{equation}
\mbox{Fr}^2 r^2 h^3- \mbox{Ha} {3r\over h} {dh\over dr}=0\qquad \to\qquad
h(r)= \left( h_0^{-3} - {\mbox{Fr}^2\over 2 \mbox{Ha}}r^2\right)^{-1/3},
\end{equation}
where $h_0=h(0)$ is the height at the origin. This solution can be used to
produce a scaling relation for the curvature of equilibrium solutions at $r=0$,
\begin{equation}
h''(0)= {\mbox{Fr}^2\over 3\mbox{Ha}}h(0)^4> 0.
\label{eq15hpp}
\end{equation}
The scaling of $h(0)$ for $\mbox{Fr}>2$ is not clear analytically; based on the numerical simulations, we have fit the data to an empirical relation of the form $h(0)\propto (\mbox{Fr}-2)^{-0.25}/\ln(\mbox{Fr})^{0.72}$, see Fig.~\ref{fig:wm0hpp}a.
In summary, when Marangoni forcing is absent, the curvature of the film at the origin is positive for all rotation rates, but above the critical Froude number, the central curvature becomes much smaller and depends sensitively on the wetting properties of the substrate. While Fig.~\ref{fig:wm0hpp}a shows that the simplified prediction for the critical Froude number agrees very well with the full simulations, Fig.~\ref{fig:thin_maraB} shows that
neglecting capillary effects and the disjoining pressure does affect the profile near the predicted CTF radius given by \eqref{eq13rc}.
\begin{figure}
\includegraphics[width=3.5in]{8a_wm0h.pdf}
\includegraphics[width=3.5in]{8b_wm0hpp.pdf}
\caption{Numerically computed properties of the isothermal equilibrium solutions of Eqn \eqref{eq:filmfullscale} on log-log plots: (a) Thickness of the film at the center, $h(r=0)$, showing excellent agreement with the predicted dependence on rotation rate, $h(0)=1-\mbox{Fr}^2/4$ for $\mbox{Fr}<2$ from \eqref{heq8} (red curve), and an empirical fit (dashed) for higher rotation rates; (b) The curvature at the center of the container, $h''(r=0)$, with analytically predicted behaviors for low rotation rate ($h''(0)=\mbox{Fr}^2$) and \eqref{eq15hpp} for larger
Fr.}
\label{fig:wm0hpp}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=6in]{9_fig_marasurf.pdf}
\caption{\label{fig:profile_mara} (a) Computed $h_{\mathrm{eq}}(r)$ for a range of Marangoni strengths equivalent to Fig.~\ref{fig:longtime_mara}. The color scale is the same in all panels. (b) shows a close-up on the CTF region for small heights (and small Ma) from the profiles in panel (a). We observe that the curvature at $r=0$ changes sign as the limit $\mbox{Ma}\rightarrow 0$ is approached.}
\end{figure}
\subsection{Marangoni effects on the equilibrium profile}\label{sec:maraequil}
\par
To explore the approach to the equilibrium profile at finite Marangoni number, we again use the experimental values for all parameters in the numerical exploration, picking 100~mPa\,s for the viscosity and $2\pi$~rps for the spinning rate. We observe that the thinning in the center of the container initially follows the standard EBP scaling and then reaches a minimum equilibrium thickness, set by the strength of the Marangoni stresses, over the entire range of Marangoni strengths explored: see Fig.~\ref{fig:longtime_mara}a.
\par
As described earlier, centrifugal effects scaled by the Froude number work to force fluid out of the central region, with the disjoining pressure and gravity opposing this outflow. Thermocapillary effects due to the imposed temperature gradients will promote opposing inward flows. To explore the full range of behaviors that can occur from different balances of these effects, we use numerical simulations to compute the steady state solutions of \eqref{eq:filmfull} over a range of Marangoni numbers.
\par
Fig.~\ref{fig:profile_mara} shows steady profiles in the central thin film region for a range of Marangoni numbers, with other parameters fixed in the regime with $\mbox{Fr}>2$. Fig.~\ref{fig:profile_mara}b shows that for very small Ma, the central thin film will have positive curvature; this is to be expected from the result \eqref{eq15hpp} for $\mbox{Ma}=0$. However, we observe that for stronger thermal forcings, Marangoni stresses are sufficiently strong to draw in fluid to form a central ``hump'' with a local maximum, $h''(0)<0$.
\par
When the film is thin, smooth, and slowly varying, for large Bond numbers we can neglect surface tension in the central region and approximate \eqref{eq:filmfull} by a first-order equation for the steady CTF profile with no flux through the origin,
\begin{equation}
{dh\over dr} = \left(r\mbox{Fr}^2-{3\mbox{Ma}\over 2h} \phi(r)\right)\bigg/\left( 1+ {3\mbox{Ha}\over h^4}\right).
\label{eq16}
\end{equation}
This equation on $0\le r < r_c$ must be asymptotically matched to an interior layer at $r_c$ that captures capillary effects at the contact line and allows for
matching to the outer solution \eqref{eq9} on $r_c<r\le 1$. In general, determining the value of $h(0)$ will depend on this matching process, but we will show that over a range of larger Ma, a simpler solution can be obtained.
\par
Fig.~\ref{fig:longtime_mara}b shows that for $\mbox{Ma}\to 0$, a minimum film thickness will be set as a function of Ha via the influence of the disjoining pressure. While the curvature of the CTF changes sign with Ma, the central height $h(0)$, is always monotone increasing with Ma. Eqn \eqref{eq16} gives a good approximation of the CTF profile up to a transitional range in Ma where surface tension starts to play a more important role in setting the structure of the film at $r_c$, see Fig.~\ref{fig:profile_mara}b.
\par
For Ma above this range, surface tension is still important locally at $r_c$, but in the CTF region, the centrifugal and thermocapillary influences dominate in \eqref{eq16} to balance and
give an explicit leading order estimate of the height profile in terms of the scaled gradient of the temperature profile,
\begin{equation}
h(r)={3\mbox{Ma}\over 2 \mbox{Fr}^2} {\phi(r)\over r} \qquad \mbox{on $0\le r< r_c$.}
\label{eq17}
\end{equation}
This well-defined hump profile gives $h(0)\sim 3c(\mbox{Ma/Fr}^2)$ and
$h''(0)\sim 3c^2(\mbox{Ma/Fr}^2)$, yielding the linear scaling regimes seen in
Figures~\ref{fig:longtime_mara}b and \ref{fig:Mahhpp}a. For even larger Marangoni numbers, this scaling ends when the thermal stresses are able to pull in fluid from the outer region, significantly degrading the semi-parabolic profile in \eqref{eq9}.
\par
As suggested by the variation in the forms of the height profiles for small Ma shown in Fig.~\ref{fig:profile_mara}b, Figure~\ref{fig:Mahhpp}b indicates that the central curvature has a nontrivial dependence on the system parameters in \eqref{eq16} and on capillarity, yielding the non-uniform behavior shown. In computations of the steady solutions with higher rotation rates, it was found that $h''(0)$ could become
non-monotone with respect to $\mbox{Ma}$ at the transition from $\mbox{Ha}$- to $\mbox{Ma}$-dominated behavior occurring near Ma/Fr$^2\approx 10^{-5}$.
\begin{figure}
\includegraphics[width=6in]{10_fig_maracurv.pdf}\caption{(a) The absolute value of the curvature of the steady state film height at the origin, $h''_{\mathrm{eq}}(r=0)$. Diamonds indicate positive value (a local minimum); filled circles represent negative curvature (a central hump) obtained from long-time runs of the dynamic problem \eqref{eq:filmfullscale}. The solid line was obtained by solving the steady state version of the equation over a range of Ma values. Scaling regimes are indicated with the thin solid lines and accompanying scaling exponents. The green, blue and red vertical line indicate the experiments at low, intermediate and high Ma respectively from Fig.~\ref{fig:thin_maraA}. (b) Zoomed-in view on a linear scale of the small Ma regime, showing how the curvature switches from positive to negative. }
\label{fig:Mahhpp}
\end{figure}
\section{Conclusions} We performed experiments and examined numerical solutions on thin film dynamics and steady state profiles in a rotating container in the presence of a thermal surface tension gradient force. We find that such thermal Marangoni forces can significantly affect the profile thickness and spatial height variations of the central thin film that develops at large enough rotation rates. Most notably, we find that the equilibrium CTF height scales linearly with $\mbox{Ma}$. Once equilibrated, the CTF height profile follows the thermal profile. In the limit of small $\mbox{Ma}$, reaching equilibrium takes progressively longer and the steady state reached is set by competition between the Marangoni stresses and the disjoining pressure.
We foresee that this feature can be used in various applications. The dramatic changes for small Ma shown in Figures~\ref{fig:profile_mara} and \ref{fig:Mahhpp} will be observable in experimental fringe patterns for the CTF. We expect that this can be developed further to yield a method for determining properties of the disjoining pressure (and characterizing the wetting properties of the substrate) by tuning the Marangoni number over a small range. The slow thinning dynamics need not be prohibitive: spin coating is done with rotation speeds that are orders of magnitude larger than used in this study, and the viscosity of the PDMS liquids (used for their low volatility) can also be orders of magnitude lower.
The sensitivity of the spatiotemporal thin film dynamics to surface tension gradients can be of great interest in fields where functional nanometer-scale thin films are produced with spin coating techniques~\cite{jiang2004}. Even thin film fluid deposition methods used in 3D printing can be improved with thermal gradient technology to design features smaller than the thickness of the fluid layer. If a temperature field is not the most natural control method, surface tension gradients can also be induced with other modes of
forcing like electric fields~\cite{electro2005} or light~\cite{shin1999}.
Fundamentally, there is also interest in exploring whether the competing centrifugal versus thermocapillarity influences can give rise to undercompressive shocks and fingering instabilities, as in the studies by Bertozzi and collaborators for planar thin films \cite{bertozzi1,bertozzi2,bertozzi3}.
\acknowledgements We thank Joshua Bostwick, Omar Matar, Howard Stone, Dominic Vella, Detlef Lohse and Jacco Snoeijer for stimulating discussions. Frans Leermakers helped with the numerical simulations, and Chuan-Hua Chen and Jonathan Boreyko allowed and helped us to use their infrared camera. This project was funded by NSF DMS0968252.
\section{Details of the eigenvalue equation}
In what follows we derive the one dimensional eigenvalue equation that is solved in the main paper. A similar analysis of the Feynman rules for the OTOC was done previously, {\it e.g.}, in \cite{Chowdhury2017, patel2017quantum, steinberg2019}.
We will present explicit derivations for each diagram and describe how the final structure is obtained. The analysis is first done for the single patch theory, and we later on generalize to two antipodal patches, and show that our results and conclusions do not change.
The object of interest is the regulated squared anticommutator of the fermion operators
\begin{eqnarray}\label{eq:OTOC}
\mathcal{C}_{\bm x} (t,0) = \frac1{N^2}\,\theta(t) \sum_{n,m=1}^N\text{Tr} \left[e^{-\beta H/2}\{\psi_n({\bf x},t),\psi_m^\dagger(0)\}e^{-\beta H/2}\{\psi_n({\bf x},t),\psi_m^\dagger(0)\}^\dagger\right],
\end{eqnarray}
where $\psi({\bm x},t) = U_I^\dagger \psi_0({\bm x},t)U_I$ and $U_I$ is the interaction picture time evolution operator. The notation $\psi(0)\equiv \psi({\bf0},0)$ is introduced for simplicity. In the following, we drop the subscript $``0"$ on $\psi$, assuming that the fields are free.
A Taylor expansion of the evolution operator up to second order reads
\begin{eqnarray}\label{eq:UI}
U_I = \exp\left(\frac{i}{N}\sum_{ijl}\int_{\bm y}\int_0^t ds\,g_{ijl}\psi_{i}^\dagger({\bm y},s)\psi_{j}({\bm y},s)\phi_{l}({\bm y},s)\right)= 1 + \frac{i}{N}\sum_{ijl}\int_{{\bm y}}\int_0^t ds\, g_{ijl}\psi_{i}^\dagger({\bm y},s)\psi_{j}({\bm y},s)\phi_{l}({\bm y},s)\\ + \left(\frac{i}{N}\right)^2\sum_{\text{all indices}}\int_{{\bm y},{\bm y}'} \int_0^t ds\int_0^s ds' g_{ijl}g_{i'j'l'}\phi_{l}({\bm y},s)\phi_{l'}({\bm y}',s')\psi_{i}^\dagger({\bm y},s)\psi_{j}({\bm y},s)\psi_{i'}^\dagger({\bm y}',s')\psi_{j'}({\bm y}',s')+\dots.\nonumber
\end{eqnarray}
An expansion of its conjugate is therefore given by
\begin{eqnarray}\label{eq:UId}
U_I^\dagger = \exp\left(-\frac{i}{N}\sum_{ijl}\int_{\bm y}\int_0^t ds\,g_{ijl}\psi_{i}^\dagger({\bm y},s)\psi_{j}({\bm y},s)\phi_{l}({\bm y},s)\right)= 1 - \frac{i}{N}\sum_{ijl}\int_{{\bm y}}\int_0^t ds\, g_{ijl}\psi_{i}^\dagger({\bm y},s)\psi_{j}({\bm y},s)\phi_{l}({\bm y},s)\\ + \left(\frac{-i}{N}\right)^2\sum_{\text{all indices}}\int_{{\bm y},{\bm y}'} \int_0^t ds\int_0^s ds' g_{ijl}g_{i'j'l'}\phi_{l}({\bm y},s)\phi_{l'}({\bm y}',s')\psi_{i}^\dagger({\bm y},s)\psi_{j}({\bm y},s)\psi_{i'}^\dagger({\bm y}',s')\psi_{j'}({\bm y}',s')+\dots.\nonumber
\end{eqnarray}
We use the following definitions of retarded Green's functions and the symmetrized Wightman functions for fermions and bosons:
\begin{eqnarray}\label{eq:RGF}
G^R({\bf x},t) \delta_{a,b}&=& -i\theta(t)\,\langle\{\psi_a({\bf x},t),\psi_b^\dagger(0)\}\rangle,\\
D^R({\bf x},t)\delta_{a,b} &=& -i\theta(t) \langle[\phi_a({\bf x},t),\phi_b(0)]\rangle,\\
G^W({\bf x},t)\delta_{a,b} &=& \text{Tr}[\rho^{1/2} \psi_a({\bf x},t) \rho^{1/2} \psi_b^\dagger(0)],\\
D^W({\bf x},t) \delta_{a,b}&=& \text{Tr}[\rho^{1/2} \phi_a({\bf x},t) \rho^{1/2} \phi_b(0)].\label{eq:WGF}
\end{eqnarray}
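As an elementary illustration of these definitions, consider a single free fermionic mode $H=\epsilon\,c^\dagger c$, for which $G^R(t)=-i\theta(t)e^{-i\epsilon t}$ and the symmetric $\rho^{1/2}$ insertions produce $G^W(t)=e^{-i\epsilon t}/(2\cosh\frac{\beta\epsilon}{2})$, the same $1/\cosh$ thermal factor that appears in the Wightman propagators used below. A minimal Python sketch with explicit $2\times2$ matrices (parameter values are arbitrary):

```python
# Single fermionic mode H = eps * c^dag c in the basis {|0>, |1>}:
# check G^W(t) = Tr[rho^{1/2} psi(t) rho^{1/2} psi^dag]
#              = e^{-i eps t} / (2 cosh(beta eps / 2)).
import cmath, math

eps, beta, t = 0.7, 2.0, 1.3

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

c = [[0, 1], [0, 0]]                      # annihilation operator
c_dag = [[0, 0], [1, 0]]
Z = 1.0 + math.exp(-beta * eps)
rho_half = [[1.0 / math.sqrt(Z), 0],
            [0, math.exp(-beta * eps / 2) / math.sqrt(Z)]]
# Heisenberg evolution: e^{iHt} c e^{-iHt} = e^{-i eps t} c for this H
psi_t = [[cmath.exp(-1j * eps * t) * c[i][j] for j in range(2)] for i in range(2)]

GW = trace(mat_mul(mat_mul(rho_half, psi_t), mat_mul(rho_half, c_dag)))
exact = cmath.exp(-1j * eps * t) / (2.0 * math.cosh(beta * eps / 2))
print(GW, exact)   # agree
```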
~\\
{\bf Leading contribution.}
The zeroth order Taylor expansion of the evolution operator \eqref{eq:UI}, along with the above definition of the retarded Green's function for fermions \eqref{eq:RGF}, gives us the leading contribution to the expression for the squared anticommutator
\begin{eqnarray}
\label{eq:C0}
\mathcal{C}^{(0)}_{\bm x} (t,0) = \frac1{N^2} \theta(t)\sum_{n,m=1}^N \text{Tr} \left[\rho^{1/2}\{\psi_m({\bm x},t),\psi_n^\dagger(0)\}\rho^{1/2} \{\psi_m({\bm x},t),\psi_n^\dagger(0)\}^\dagger\right] \nonumber\\= \frac1{N}\,(i G^R({\bf x},t))(-i G^{R*}({\bf x},t)) = \frac1{N}\, |G^R({\bf x},t)|^2.
\end{eqnarray}
Diagrammatically, we represent it simply as
\begin{eqnarray}
\mathcal{C}^{(0)}_{\bm x} (t,0) = \begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick , latex-] (25pt,10pt)--(0pt,10pt);
\draw[thick] (40pt,10pt)--(20pt,10pt);
\draw[thick , -latex] (40pt,-10pt)--(18pt,-10pt);
\draw[thick] (0pt,-10pt)--(20pt,-10pt);
\end{tikzpicture}.
\end{eqnarray}
\\
{\bf First order.}
To derive the eigenvalue equation at first order, we note that only products of an even number of the coupling constants $g_{ijl}$ give a non-zero result. The combination of the first-order term on one of the time folds with the expansion \eqref{eq:UI}-\eqref{eq:UId} to higher orders on the other time fold leads to self-energy corrections to the Green's function. Thus we are left with the expansion \eqref{eq:UI}-\eqref{eq:UId} to first order on both time folds, and we obtain
\begin{eqnarray}\nonumber
\mathcal{C}^{(1)}_{\bm x} (t,0) = \frac{1}{N^4}\theta(t)\int_{s,s'}\int_{\bf y,y'}\sum_{\text{all indices}}\, \text{Tr} [\rho^{1/2} g_{ijl} \phi_l({\bf y},s) \{[\psi_m({\bf x},t),\psi_i^\dagger({\bf y},s)\psi_j({\bf y},s)] ,\psi_n^\dagger(0)\}\\
\times\rho^{1/2}g_{i'j'l'} \,\,\phi_{l'}({\bf y'},s') \{[\psi_m({\bf x},t),\psi_{i'}^\dagger({\bf y'},s')\psi_{j'}({\bf y'},s')],\psi_n^\dagger(0)\}^\dagger]\nonumber
\end{eqnarray}
Using definitions \eqref{eq:RGF}-\eqref{eq:WGF}, and counting the powers of $N$ for each term, we obtain the following expression
\begin{eqnarray}\nonumber
\mathcal{C}^{(1)}_{\bm x} (t,0)
=\frac{g^2}{N} \int_{s,s'}\int_{{\bf y,y'}} G^R({\bf x}-{\bm y},t-s)G^R({\bm y},s)D^W({\bm y}-{\bm y}',s-s')G^{R*}({\bm x}-{\bm y}',t-s')G^{R*}({\bm y}',s'),\nonumber
\end{eqnarray}
which gives the ``rung" diagram
\begin{eqnarray}\label{eq:firstorder}
\mathcal{C}^{(1)}_{\bm x} (t,0) = \begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick , latex-] (10pt,10pt)--(0pt,10pt);
\draw[thick , latex-] (35pt,10pt)--(10pt,10pt);
\draw[thick] (40pt,10pt)--(0pt,10pt);
\draw[thick , -latex] (40pt,-10pt)--(28pt,-10pt);
\draw[thick , -latex] (20pt,-10pt)--(2pt,-10pt);
\draw[thick] (0pt,-10pt)--(40pt,-10pt);
\draw[thick, decorate, decoration={snake, segment length=11pt,amplitude=2pt}] (20pt,-10pt) -- (20pt,10pt);
\draw[dashed] (20pt, 10pt) .. controls (12pt,0pt) .. (20pt, -10pt);
\end{tikzpicture}.
\end{eqnarray}
\\
{\bf Second order.}
At second order we find three terms, but only one of them is both nonzero at large $N$ and not simply two rungs of type \eqref{eq:firstorder}. This term is a ``box" diagram similar to the one found in \cite{patel2017quantum} and reads
\begin{eqnarray}\nonumber
\mathcal{C}^{(2)}_{\bm x} (t,0)=\frac{g^4}{N}\int_{\{{\bm y}\}}\int_0^t ds_1\int_0^{s_1}ds_2\int_0^tds_3\int_0^{s_3}ds_4 G^R({\bm x}-{\bm y}_1,t-s_1)D^R({\bm y}_1-{\bm y}_2,s_1-s_2)G^W({\bm y}_1-{\bm y}_3,s_1-s_3)\\
\times G^{W}({\bm y}_4-{\bm y}_2,s_4-s_2)G^R({\bm y}_2,s_2)G^{R*}({\bm x}-{\bm y}_3,t-s_3)D^{R*}({\bm y}_3-{\bm y}_4,s_3-s_4)G^{R*}({\bm y}_4,s_4)\nonumber
\end{eqnarray}
The diagram is
\begin{eqnarray}\label{eq:box}
\mathcal{C}^{(2)}_{\bm x} (t,0) = \begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, latex-] (9pt,10pt)--(0pt,10pt);
\draw[thick ] (13pt,10pt)--(7pt,10pt);
\draw[thick] (50pt,10pt)--(37pt,10pt);
\draw[thick , latex-] (48pt,10pt)--(37pt,10pt);
\draw[thick, decorate, decoration={snake, segment length=11pt,amplitude=1.5pt}] (37pt,10pt)--(13pt,10pt);
\draw[thick ] (13pt,-10pt)--(0pt,-10pt);
\draw[thick, -latex] (10pt,-10pt)--(3pt,-10pt);
\draw[thick ] (50pt,-10pt)--(37pt,-10pt);
\draw[thick , -latex] (50pt,-10pt)--(40pt,-10pt);
\draw[thick, decorate, decoration={snake, segment length=11pt,amplitude=1.5pt}] (37pt,-10pt)--(13pt,-10pt);
\draw[thick] (13pt,-10pt) -- (13pt,10pt);
\draw[thick] (37pt,-10pt) -- (37pt,10pt);
\draw[dashed] (13pt, 10pt) .. controls (7pt,0pt) .. (13pt, -10pt);
\draw[dashed] (37pt, 10pt) .. controls (43pt,0pt) .. (37pt, -10pt);
\end{tikzpicture}.
\end{eqnarray}
Expanding the unitary operator \eqref{eq:UI} to higher orders, in the large $N$ limit we find no new types of diagrams contributing to \eqref{eq:OTOC} other than boxes and rungs. \\
{\bf The eigenvalue equation.}
We now evaluate the ladder sum and work with the following function:
\begin{eqnarray}
\mathcal{C} (\omega) = \frac1N \int\frac{d^3k}{(2\pi)^3} \mathcal{C}(k,\omega),
\end{eqnarray}
where $\mathcal{C}(k,\omega)$ is the Fourier transform of $\mathcal{C}_{\bm{x}}(t,0)$ with respect to $\bm{x}$ and $t$.
The Bethe-Salpeter equation (Fig. \ref{fig:Bethe_Salpeter} of the main text) then reads
\begin{eqnarray}
\mathcal{C}(k,\omega)=G^R(k)G^{R*}(k-\omega)\left[1+ \int \frac{d^3 k'}{(2\pi)^3} (g^2 D^W(k-k') + K_2(k,k',\omega))\mathcal{C}(k',\omega)\right].
\end{eqnarray}
Each of the terms corresponds to the Fourier transform of one of the perturbative contributions found above.
Exponential growth of the squared anticommutator in the chaotic regime is expected if the ladder sum is invariant under the addition of one more rung, in which case we can drop the inhomogeneous term and write
\begin{eqnarray}\label{eq:BS}
\mathcal{C}(k,\omega)=G^R(k)G^{R*}(k-\omega) \int \frac{d^3 k'}{(2\pi)^3} (g^2 D^W(k-k') + K_2(k,k',\omega))\mathcal{C}(k',\omega),
\end{eqnarray}
where the function $K_2(k,k',\omega)$ is the kernel of the ``box" diagram found above \eqref{eq:box}, which in Fourier space reads
\begin{eqnarray}
K_2 (k,k',\omega) = g^4\int \frac{d^3k_1}{(2\pi)^3}D^R(k_1) D^{R*}(k_1-\omega)G^W(k-k_1) G^W(k'-k_1).
\end{eqnarray}
We can now explicitly compute each term in \eqref{eq:BS}. The fermion and boson Green's functions in the single patch theory \cite{esterlis2021} are given by
\begin{eqnarray}
&&G^R({\bm k},\omega) = \frac1{ k_x + k_y^2 - \Sigma^R(\omega)},\\
&&G^W({\bm k},\omega)=-\frac{\text{Im}\Sigma^R(\omega)}{\cosh\frac{\beta\omega}{2}\left([(k_x+k_y^2) + \text{Re}\Sigma^R(\omega)]^2 + [\text{Im}\Sigma^R(\omega)]^2\right)},\\
&&D^R({\bf q},\Omega\neq0) = \frac{|q_y|}{|q_y|^3 + m^2 - ic_b\Omega}, \\
&&D^W({\bf q},\Omega) = \frac{c_b \Omega|q_y|}{ \sinh\frac{\beta\Omega}{2}((|q_y|^3 +m^2)^2+ c_b^2 \Omega^2)},
\end{eqnarray}
where we added a small mass term in the boson Green's function as an IR regulator that will eventually be carefully taken to zero, and
\begin{eqnarray}
\Sigma^R(\omega) = ic_fT^{2/3}H_{1/3}\left(\frac{-i\omega-\pi T}{2\pi T}\right).
\end{eqnarray}
The function $H_{1/3} (z) =\zeta(1/3) - \zeta(1/3,z+1)$ is the Harmonic number function, as noted in the main text, and the constants take the values $c_f = 2^{5/3}g^{4/3}/(3\sqrt{3})$ and $c_b=g^2/(8\pi)$ in the single patch theory.
Since our theory has a sliding symmetry along the patch Fermi surface, the function $\mathcal{C}(k,\omega)$ depends on momentum only through the combination $k_x+k_y^2$, i.e. $\mathcal{C}(k,\omega) = \mathcal{C}(k_x+k_y^2,k_0,\omega)$. Our goal in what follows is to express the equation in terms of a momentum-independent eigenfunction \cite{patel2017quantum}
\begin{eqnarray}
\tilde{\mathcal{C}}(k_0',\omega) = \int \frac{d k_x'}{2\pi}{\mathcal{C}}(k_x',k_0',\omega),
\end{eqnarray}
which we will see is possible to do because of the special momentum dependence of $\mathcal{C}$ induced by the sliding symmetry.
We now evaluate each term in turn. The diagonal term in the equation is simply
\begin{eqnarray}\nonumber
\int \frac{dk_x}{2\pi} G^R(k)G^{R*}(k-\omega) \,= \frac{i}{\Sigma^R(k_0)-\Sigma^{R*}(k_0-\omega)} \\=\frac{1}{c_f T^{2/3}}\frac{1}{H_{1/3}\left(\frac{-ik_0-\pi T}{2\pi T}\right) + H_{1/3}\left(\frac{-i(\omega-k_0)-\pi T}{2\pi T}\right)}.
\end{eqnarray}
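The $k_x$ integral above is a contour integral: the poles of $G^R$ and $G^{R*}$ sit on opposite sides of the real $k_x$ axis, so closing the contour picks up a single residue. The underlying identity, $\int \frac{dx}{2\pi}\frac{1}{(x-a)(x-b)} = \frac{i}{a-b}$ for $\mathrm{Im}\,a>0>\mathrm{Im}\,b$, can be checked numerically; the values of $a$ and $b$ below are arbitrary illustrations:

```python
# Verify  int dx/(2 pi) 1/((x - a)(x - b)) = i/(a - b)  for Im a > 0 > Im b,
# the identity behind the k_x integral of G^R G^{R*}. a, b are illustrative.
import math

a = 0.3 + 0.8j      # pole in the upper half plane
b = -0.5 - 0.6j     # pole in the lower half plane

L, n = 2000.0, 400000
dx = 2.0 * L / n
s = 0.0 + 0.0j
for i in range(n + 1):           # trapezoidal rule on [-L, L]
    x = -L + i * dx
    w = 0.5 if i in (0, n) else 1.0
    s += w * dx / ((x - a) * (x - b))
numeric = s / (2.0 * math.pi)
exact = 1j / (a - b)
print(numeric, exact)   # agree up to the truncated-tail error ~ 1/L
```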
The ``first order" term can be written as
\begin{eqnarray}
&&g^2 \int \frac{d^3 k'}{(2\pi)^3} D^W(k-k') \mathcal{C}(k',\omega) \\&&={g^2 }c_b \int \frac{dk_0'dk'_y}{(2\pi)^2} \frac{(k_0-k_0')|k_y'|}{ (k_y'^3+m^2)^2 + c_b^2 (k_0-k_0')^2}\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\sinh\frac{k_0-k_0'}{2T}}. \nonumber
\end{eqnarray}
The second term in the eigenvalue equation, {\it i.e.} the $K_2$-term, can be evaluated as in Ref. \cite{patel2017quantum}:
\begin{eqnarray}\label{eq:K2}
\int \frac{d^3k'}{(2\pi)^3}K_2 (k,k',\omega) \mathcal{C}(k',\omega)\,\,&&= g^4\int \frac{d^3k_1d^3k'}{(2\pi)^6}D^R(k_1) D^{R*}(k_1-\omega)G^W(k-k_1) G^W(k'-k_1) \mathcal{C}(k',\omega)\nonumber\\
&& = \frac{g^{4/3}4\pi^{4/3}}{3\sqrt{3}}
\int\frac{dk_0' d k_{01}}{(2\pi)^2} \frac{(i k_{01} + (-i k_{01})^{2/3}(i(k_{01}-\omega)))}{ k_{01}(i(k_{01}-\omega))^{1/3}(2 k_{01}-\omega)}\,\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\cosh{\frac{k_0-k_{01}}{2T}}\cosh{\frac{k_0'-k_{01}}{2T}}}.
\end{eqnarray}
The integration over frequencies has to be done numerically.
Combining the above equations, we obtain the eigenvalue equation that we solve in the main Letter:
\begin{eqnarray}\label{eq:eigen}
&&\left[c_f T^{2/3} \left(H_{1/3}\left(\frac{-ik_0-\pi T}{2\pi T}\right) + H_{1/3}\left(\frac{-i(\omega-k_0)-\pi T}{2\pi T}\right)\right)+2\mu(T)\right] \tilde{\mathcal{C}}(k_0,\omega) \\ \nonumber
&& = {g^2} \int \frac{dk_0'dk'_y}{(2\pi)^2} \frac{c_b(k_0-k_0')|k_y'|}{ (|k_y'|^3+m^2)^2 + c_b^2 (k_0-k_0')^2}\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\sinh\frac{k_0-k_0'}{2T}}\\ \nonumber
&&+\,\frac{g^{4/3}4\pi^{4/3}}{3\sqrt{3}}
\int\frac{dk_0' d k_{01}}{(2\pi)^2} \frac{(i k_{01} + (-i k_{01})^{2/3}(i(k_{01}-\omega)))}{ k_{01}(i(k_{01}-\omega))^{1/3}(2 k_{01}-\omega)}\,\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\cosh{\frac{k_0-k_{01}}{2T}}\,\cosh{\frac{k_0'-k_{01}}{2T}}}.
\end{eqnarray}
We then numerically solve the equation following Appendix D in \cite{patel2017quantum}.
Our numerical procedure is as follows. We first perform the integration over $k_y$ with a small finite mass term $m=0.02$. We then discretize the integration over $k_0'$, as well as the variable $k_0$, in the region $k_0,k_0' \in [-15,15]$ with step size $dk_0 = 0.005$. We solve the eigenvalue equation for different $\omega$ on the positive imaginary axis, and look for the $\omega$ at which an eigenvalue of equation \eqref{eq:eigen} is closest to zero. Whenever we obtain a solution with multiple eigenvalues, we choose the largest one, since it corresponds to the Lyapunov exponent $\lambda_L$. As a check, we find the value $\lambda_L\approx2.48T$ at $p_x=0$, which is exactly the result found in \cite{patel2017quantum}. \\
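The structure of this procedure can be illustrated schematically: discretizing the frequency integrals turns the integral equation into a finite matrix eigenproblem, whose dominant eigenvalue can be extracted, e.g., by power iteration. The Python sketch below applies this to a toy symmetric kernel with the same $1/\cosh$ thermal structure; it is not the actual kernel of \eqref{eq:eigen}, and the grid sizes and parameters are illustrative:

```python
# Schematic of the numerical procedure: discretize a kernel eigenproblem
# on a frequency grid and extract the dominant eigenvalue by power
# iteration. Toy symmetric kernel, NOT the actual kernels of (eq:eigen).
import math

k0_max, n = 15.0, 121
dk = 2.0 * k0_max / (n - 1)
grid = [-k0_max + i * dk for i in range(n)]

def kernel(k0, k0p, T=1.0):
    # smooth localized kernel mimicking the 1/cosh thermal factors
    return (1.0 / math.cosh(k0 / (2 * T))
            / math.cosh((k0 - k0p) / (2 * T))
            / math.cosh(k0p / (2 * T)))

K = [[kernel(ki, kj) * dk for kj in grid] for ki in grid]

v = [1.0] * n                    # power iteration
lam = 0.0
for _ in range(150):
    w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = max(abs(x) for x in w)
    v = [x / lam for x in w]
print(lam)   # dominant eigenvalue of the discretized kernel
```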
\textbf{Antipodal patches.}
We now consider the evolution of the OTOC in the non-chiral theory with two antipodal patches. We will show that our conclusions about maximal quantum chaos are unaffected in the large $N$ limit by the interactions induced between the two patches.
The action for the two patch model (\eqref{otocL} of the main text) tells us that the fermions in the $\pm$ patches disperse as $\pm k_x + k_y^2$ respectively. We therefore expect that the eigenfunctions for the two patches $\pm$ are given by $\mathcal{C}_\pm(k,k_0,\omega) = \mathcal{C}(\pm k_x+k_y^2,k_0,\omega)$, which is obvious at the non-interacting level ({\it i.e.} \eqref{eq:C0}) and can easily be shown to be self-consistent when interactions are included in the Bethe-Salpeter equation.
In addition to the $K_2$ term in the eigenvalue equation, we now have a term that couples fermions from opposite patches at the two ends of the ``box" diagrams. For the $+$ patch, it is given by \eqref{eq:boxpm},
\begin{eqnarray}\label{eq:boxpm}
\mathcal{C}^{'(2)}_{\bm x} (t,0) = \begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, latex-] (9pt,10pt)--(0pt,10pt);
\node at (8pt, 18pt){$+$};
\node at (8pt,-18pt){$+$};
\draw[thick] (13pt,10pt)--(7pt,10pt);
\draw[thick] (50pt,10pt)--(37pt,10pt);
\draw[thick, latex-] (48pt,10pt)--(37pt,10pt);
\draw[thick, decorate, decoration={snake, segment length=11pt,amplitude=1.5pt}] (37pt,10pt)--(13pt,10pt);
\draw[thick] (13pt,-10pt)--(0pt,-10pt);
\draw[thick, -latex] (10pt,-10pt)--(3pt,-10pt);
\draw[thick] (50pt,-10pt)--(37pt,-10pt);
\draw[thick, -latex] (50pt,-10pt)--(40pt,-10pt);
\node at (42pt, 18pt){$-$};
\node at (42pt,-18pt){$-$};
\draw[thick, decorate, decoration={snake, segment length=11pt,amplitude=1.5pt}] (37pt,-10pt)--(13pt,-10pt);
\draw[thick] (13pt,-10pt) -- (13pt,10pt);
\draw[thick] (37pt,-10pt) -- (37pt,10pt);
\draw[dashed] (13pt, 10pt) .. controls (7pt,0pt) .. (13pt, -10pt);
\draw[dashed] (37pt, 10pt) .. controls (43pt,0pt) .. (37pt, -10pt);
\end{tikzpicture},
\end{eqnarray}
which yields
\begin{eqnarray}\label{eq:K3}
\int \frac{d^3k'}{(2\pi)^3}K'_2 (k,k',\omega) \mathcal{C}_{-}(k',\omega)\,\,&& = \int \frac{d^3k'}{(2\pi)^3}K'_2 (k,k'_0,k'_x,k'_y,\omega) \mathcal{C}(-k'_x+k'^{2}_y,k'_0,\omega).
\end{eqnarray}
Then, by exploiting the sliding symmetry and shifting $k'_x \rightarrow k'_x + k'^{2}_y$, we can complete the internal momentum
($k_{x1},k_{y1}$) integrals just like Ref. \cite{patel2017quantum} did with the $K_2$ term, followed by the $(k'_x,k'_y)$ integrals. This yields
\begin{eqnarray}
\int \frac{d^3k'}{(2\pi)^3}K'_2 (k,k',\omega) \mathcal{C}_{-}(k',\omega)\,\, &&= g^4\int \frac{d^3k_1d^3k'}{(2\pi)^6}D^R(k_1) D^{R*}(k_1-\omega)G^W_+(k-k_1) G^W_-(k'-k_1) \mathcal{C}_{-}(k',\omega)\nonumber\\
&&= \frac{g^{4/3}4\pi^{4/3}}{2^{4/3} 3\sqrt{3}} \int\frac{dk_0' d k_{01}}{(2\pi)^2} \frac{(i k_{01} + (-i k_{01})^{2/3}(i(k_{01}-\omega)))}{ k_{01}(i(k_{01}-\omega))^{1/3}(2 k_{01}-\omega)}\,\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\cosh{\frac{k_0-k_{01}}{2T}}\cosh{\frac{k_0'-k_{01}}{2T}}},
\end{eqnarray}
where $G^W_\pm$ are the Wightman fermion Green's functions for the $\pm$ patches respectively. This is just proportional to the action of the $K_2$ term on $\mathcal{C}_{+}$ \eqref{eq:K2}. The sum of the action of the $K_2$ and $K'_2$ terms therefore yields the RHS of (\ref{eq:K2}) divided by a factor of $2^{1/3}$. However, since the two patch value of $c_b$ is twice its one patch value, and the two patch value of $c_f$ is $1/2^{1/3}$ times its one patch value, this extra factor of $1/2^{1/3}$ cancels out in the equivalent of \eqref{eq:eigen} as $m$ is taken to $0$, leaving us with the same final one dimensional integral equation to be solved numerically.
Therefore, at $ip_x=0$, there is no difference between the solutions for the one-patch and two-patch OTOCs. For non-zero external momentum, we can see that if we apply $+ip_x$ to the $+$ patch and $-ip_x$ to the $-$ patch, corresponding to a pair of chiral scrambling modes traveling with the same speed but in opposite directions (one coming from each patch), we get the same $ip_x$ dependence for the OTOC as well. The only difference is that the value of $ip_x$ is rescaled by a factor of $2^{1/3}$, and therefore the butterfly velocity is rescaled by a factor of $1/2^{1/3}$. However, since this merely corresponds to a rescaling of the $x$-axes of Figs. \ref{fig:LambdaPx},~\ref{fig:LambdaPxZ} of the main text, our conclusions regarding maximal chaos remain unchanged.
\section{Generalization of the ladder identity}
In this section we discuss the generalization of the single mode ansatz \cite{kitaev2018, gu2019, gu2021twoway} to the spatially dependent case, and show that the result obtained by Gu and Kitaev \cite{gu2019} holds.
The ``regular'' OTOC we are considering consists of connected and disconnected parts
\begin{eqnarray}\label{eq:defOTOC}
\text{OTOC}_{{{\bm x}}}(t_1,t_2,t_3,t_4) = \frac1{N^2} \sum_{n,m=1}^N \,( \langle \rho^{1/4} \psi_n({{\bm x}},t_1)\rho^{1/4}\psi_m^\dagger({\bf0},t_3) \rho^{1/4}\psi^\dagger_n({{\bm x}},t_2)\rho^{1/4}\psi_m({\bf0},t_4)\rangle \nonumber \\
+ \langle\psi_n({{\bm x}},t_1) \psi^\dagger_n({{\bm x}},t_2)\rangle\langle \psi_m^\dagger({\bf0},t_3)\psi_m({\bf0},t_4)\rangle).
\end{eqnarray}
We consider the Fourier transform of the connected part:
\begin{eqnarray}
\text{OTOC}_{{{\bm x}}}(t_1,t_2,t_3,t_4) = \int_{{\bm p},{\bm k},{\bm k}'}\,e^{i{{\bm p}}{{\bm x}}} \,\text{OTOC}_{{{\bm p}}}(t_1,t_2,t_3,t_4;{\bm k},{\bm k}')
\end{eqnarray}
and use the assumption that the single mode ansatz works for every momentum eigenmode. The OTOC then has the following form
\begin{eqnarray}\label{eq:ansatz}
\text{OTOC}_{{\bm p}}(t_1,t_2,t_3,t_4;{\bm k},{\bm k}') \approx \frac{e^{\kappa({{\bm p}}) (t_1+t_2-t_3-t_4)/2}}{C({{\bm p}})} \Upsilon_{{\bm p}}^R(t_{12},{{\bm k}})\Upsilon_{{\bm p}}^A(t_{34},{{\bm k}'}),
\end{eqnarray}
where we assume that the times are well separated, $s=(t_1+t_2-t_3-t_4)/2 \gg \kappa({{\bm p}})^{-1}$; the exponent represents the ``scramblon'', the momentum-dependent function $C({{\bm p}})$ is to be determined below from the consistency condition, and $\Upsilon_{{\bm p}}^{A,R}(t_{ij},{\bm k})$ are the momentum-dependent vertex functions. The diagram that represents the ansatz is shown in Fig.~\ref{fig:ansatzsup}.
\begin{figure}
\center{\includegraphics[width=2.6in]{single_mode_ansatz.png}}
\caption{Diagram representing the ansatz \eqref{eq:ansatz}. The wavy line represents the scramblon. The figure is adapted from \cite{gu2019} for the momentum-dependent case.}
\label{fig:ansatzsup}
\end{figure}
Let us consider the following form of the regular OTOC, where we distinguish two ladders somewhere in the middle. We can then write down the self-consistency condition as follows
\begin{eqnarray}\label{eq:OTOCBOX}
\text{OTOC}_{{\bm p}}(t_1,t_2,t_3,t_4;{\bm k},{\bm k}')\,\,
\approx \int_{t_5,t_6,t_7,t_8;{\bf q},{\bf q}'}
\text{OTOC}_{{\bm p}}^R(t_1,t_2,t_5,t_6;{\bm k},{\bf q}) \times
\begin{tikzpicture}[baseline={([yshift=5pt]current bounding box.center)}]
\draw[thick, dashed] (40pt,-15pt) -- (40pt,15pt);
\draw[thick] (40pt,15pt)--(70pt,15pt);
\draw[thick] (40pt,-15pt)--(70pt,-15pt);
\draw[thick, dashed] (70pt,-15pt) -- (70pt,15pt);
\filldraw (40pt,-15pt) circle (1pt);
\filldraw (40pt,15pt) circle (1pt);
\filldraw (70pt,-15pt) circle (1pt);
\filldraw (70pt,15pt) circle (1pt);
\node at (40pt,22pt) {\scriptsize $5$};
\node at (40pt,-22pt) {\scriptsize $6$};
\node at (70pt,22pt) {\scriptsize $7$};
\node at (70pt,-22pt) {\scriptsize $8$};
\node at (55pt,-40pt) {\scriptsize $\text{BOX}$};
\end{tikzpicture} \times \text{OTOC}_{{\bm p}} (t_7,t_8,t_3,t_4;{\bf q}',{\bm k}').\nonumber
\end{eqnarray}
\begin{figure}
\includegraphics[width=3in]{Keldysh_contour.png}
\caption{Double Keldysh contour for the equation \eqref{eq:OTOCBOX}.}
\label{fig:Keldysh}
\end{figure}
The points $t_5$ and $t_6$ have the same imaginary-time components as $t_1$ and $t_2$. We therefore need to shift the coordinates on the Keldysh contour by $\pm i\pi/2$ in either direction, as shown in Fig.~\ref{fig:Keldysh}. There are two choices
\begin{eqnarray}
\text{OTOC}_{{{\bm p}},A} &=& \text{OTOC}_{{\bm p}}\left(t_1,t_2,t_5 - i\frac\pi2,t_6- i\frac\pi2;{\bm k},{{\bm k}'}\right) \approx \frac{e^{\kappa({{\bm p}}) (t_1+t_2-t_5-t_6 + i\pi)/2}}{C({{\bm p}})} \Upsilon_{{\bm p}}^R(t_{12},{{\bm k}})\Upsilon_{{\bm p}}^A(t_{56},{{\bm k}'}),\nonumber\\
\text{OTOC}_{{{\bm p}},A'} &=&- \text{OTOC}_{{\bm p}}\left(t_1,t_2,t_6 + i\frac\pi2,t_5 + i\frac\pi2;{\bm k},{{\bm k}'}\right) \approx \frac{e^{\kappa({{\bm p}}) (t_1+t_2-t_5-t_6 - i\pi)/2}}{C({{\bm p}})} \Upsilon_{{\bm p}}^R(t_{12},{{\bm k}})\Upsilon_{{\bm p}}^A(t_{56},{{\bm k}'}),\nonumber
\end{eqnarray}
that we need to add together. The other OTOC in \eqref{eq:OTOCBOX} is regular because the points $t_3$, $t_4$ and $t_7$, $t_8$ have different imaginary parts, and we obtain
\begin{eqnarray}
\text{OTOC}_{{{\bm p}}}(t_7,t_8,t_3,t_4;{\bm k},{{\bm k}'}) \approx \frac{e^{\kappa({{\bm p}}) (t_7+t_8-t_3-t_4)/2}}{C({{\bm p}})} \Upsilon_{{\bm p}}^R(t_{78},{{\bm k}})\Upsilon_{{\bm p}}^A(t_{34},{{\bm k}'}).
\end{eqnarray}
Plugging in the ansätze for each OTOC on both sides of \eqref{eq:OTOCBOX}, we can schematically write
\begin{eqnarray}
&&\frac{e^{\kappa({{\bm p}}) \frac{(t_1+t_2-t_3-t_4)}2}}{C({{\bm p}})} \Upsilon_{{\bm p}}^R(t_{12},{{\bm k}})\Upsilon_{{\bm p}}^A(t_{34},{{\bm k}'})
= N\frac{2\cos\frac{\kappa({{\bm p}})\pi}{2}}{C^2({{\bm p}})}e^{\kappa({{\bm p}}) \frac{(t_1+t_2-t_3-t_4)}2}\nonumber\\&&\times\int_{t_5,t_6,t_7,t_8;{\bf q},{\bf q}'} e^{-\kappa({{\bm p}}) \frac{(t_5+t_6)}2}\Upsilon_{{\bm p}}^R(t_{12},{{\bm k}})\Upsilon_{{\bm p}}^A(t_{56},{{\bf q}})\cdot
\text{BOX}(t_5,t_6,t_7,t_8;{\bf q},{\bf q}') \cdot e^{\kappa({{\bm p}}) \frac{(t_7+t_8)}2}\Upsilon_{{\bm p}}^R(t_{78},{{\bf q}'})\Upsilon_{{\bm p}}^A(t_{34},{{\bm k}'}), \nonumber\\\nonumber
\end{eqnarray}
where the dot product denotes integration over the intermediate times and momenta. The factor of $N$ comes from the definition of the OTOC and needs to be compensated for, since there are two OTOCs on the RHS. Simplifying the equation, we obtain
\begin{eqnarray}
C({{\bm p}}) &&= 2N{\cos\frac{\kappa({{\bm p}})\pi}{2}}\int_{t_5,t_6,t_7,t_8;{\bf q},{\bf q}'}e^{\kappa({{\bm p}}) (t_7+t_8-t_5-t_6)/2}\Upsilon_{{\bm p}}^A(t_{56},{{\bf q}})\cdot
\text{BOX}(t_5,t_6,t_7,t_8;{\bf q},{\bf q}') \cdot \Upsilon_{{\bm p}}^R(t_{78},{{\bf q}'}). \nonumber
\end{eqnarray}
We are interested in the fact that the momentum-dependent coefficient $C({\bm p})$ is proportional to
\begin{eqnarray}
\frac{C({{\bm p}})}N \sim \cos\frac{\kappa({{\bm p}})\pi}{2},
\end{eqnarray}
which becomes zero when $\kappa({\bm p}) = 1$. In our case $\lambda_L({\bm p}) = 2\pi T \kappa({\bm p})$, which means that $\lambda_L = 2\pi T$ corresponds to a pole in the regular OTOC \eqref{eq:defOTOC}.
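The nature of this zero can be made concrete with a short numerical sketch (an illustrative snippet, not part of the original derivation): $\cos(\kappa\pi/2)$ has a simple zero at $\kappa = 1$, so the prefactor $1/C({\bm p})$ in the ansatz \eqref{eq:ansatz} develops a simple pole exactly when $\lambda_L({\bm p}) = 2\pi T$.

```python
import math

# Single-mode-ansatz prefactor: C(p)/N is proportional to cos(kappa(p)*pi/2).
# With lambda_L(p) = 2*pi*T*kappa(p), kappa = 1 corresponds to lambda_L = 2*pi*T.
def C_over_N(kappa):
    return math.cos(kappa * math.pi / 2.0)

# The zero at kappa = 1 is simple: the slope there is -pi/2, so 1/C(p) -- and
# hence the regular OTOC -- has a simple pole at maximal chaos.
slope = (C_over_N(1.0 + 1e-7) - C_over_N(1.0 - 1e-7)) / 2e-7
```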
\section{Varying the dynamic critical exponent}
In this section we find a generalized eigenvalue equation when the dynamic critical exponent lies in the range $2<z\leq 3$. The retarded boson Green's function and boson Wightman function have the following forms:
\begin{eqnarray}
D^R({\bf q},\Omega\neq 0) = \frac{1 }{ |q_y|^{z-1} + ic_b\frac{\Omega}{|q_y|}},\\
D^W({\bf q},\Omega) = \frac{c_b \Omega |q_y|}{\sinh\frac{\beta\Omega}2 (|q_y|^{2z} + c_b^2\Omega^2)},
\end{eqnarray}
where $c_b=g^2/8\pi$ as before. The fermion self-energy changes to
\begin{eqnarray}\label{eq:SelfEnFermi}
&&\Sigma(i\omega_n) \,= g^2 \frac{T}{2} \sum_{\Omega_m\neq0}\int_{q_y} \frac{\text{sgn}(\omega_n+\Omega_m)}{|q_y|^{z-1} + \frac{g^2}{8\pi}\frac{|\Omega_m|}{|q_y|}}\\
&&=2^{2-\frac6z}g^{\frac4z} T\pi^{1-\frac2z} \frac{1}{z\sin{\frac{2\pi}{z}}} \sum_{\Omega_m\neq0} \Omega_m^{\frac2z-1}\text{sgn}(\omega_n+\Omega_m)\nonumber\\
&&= \,\text{sgn}(\omega_n) 2^{2-\frac4z}g^{\frac4z} T^{\frac2z} \frac{1
}{z\sin{\frac{2\pi}{z}}} H_{1-\frac2z}\left(\frac{|\omega_n|-\pi T}{2\pi T}\right). \nonumber
\end{eqnarray}
In a more compact form, the self-energy is
\begin{eqnarray}
&&\Sigma(i\omega_n) \,=
\text{sgn}(\omega_n)c_{f,z}T^{\frac2z} H_{1-\frac2z}\left(\frac{|\omega_n|-\pi T}{2\pi T}\right), \\
&&c_{f,z} = 2^{2-\frac4z}g^{\frac4z}\frac{1}{z\sin(\frac{2\pi}z)},\label{eq:cfz}
\end{eqnarray}
and the fermion Green's function reads
\begin{eqnarray}
&&G({\bm k},i\omega_n) = \frac1{k_x+k_y^2 - i c_{f,z} \text{sgn}(\omega_n) T^{2/z} H_{1-\frac2z}\left(\frac{|\omega_n|-\pi T}{2\pi T}\right) - i\text{sgn}(\omega_n)\mu(T) }, \\
&&\mu(T) = \frac{g^2T}{2z\sin\left(\frac{2\pi}{z}\right) m^{2-\frac{4}{z}}},
\end{eqnarray}
where $\mu(T)$, as before, is a term generated by a finite but small boson mass $m$.
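As a quick consistency check on the prefactor \eqref{eq:cfz}, the short sketch below (illustrative code, not part of the derivation) evaluates $c_{f,z}$ numerically; note that it is positive throughout the range $2<z\leq3$, since $\sin(2\pi/z)>0$ there.

```python
import math

# Self-energy prefactor from Eq. (cfz):
#   c_{f,z} = 2^(2 - 4/z) * g^(4/z) / (z * sin(2*pi/z)).
def c_f(z, g):
    return 2.0 ** (2.0 - 4.0 / z) * g ** (4.0 / z) / (z * math.sin(2.0 * math.pi / z))

# For 2 < z <= 3, 2*pi/z lies in [2*pi/3, pi), so sin(2*pi/z) > 0 and c_{f,z} > 0.
```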
The derivation of the eigenvalue equation is the same as before, and it can be written as
\begin{eqnarray}
&&\left[T^{2/z}c_{f,z}\left(H_{1-2/z}\left(\frac{-ik_0-\pi T}{2\pi T}\right) + H_{1-2/z}\left(\frac{-i(\omega-k_0)-\pi T}{2\pi T}\right)\right) + 2\mu(T) - p_x\right] \tilde{\mathcal{C}}(k_0,\omega)\nonumber\\
&&=g^2\int \frac{d k_0' dk'_y}{(2\pi)^2} \frac{c_b(k_0-k_0')|k_y'|}{ (|k_y'|^{z} + m^2)^2 + c_b^2 (k_0-k_0')^2}\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\sinh\frac{k_0-k_0'}{2T}}\nonumber\\
&& - \frac{ig^4}{{8}z\sin({\frac{2\pi}z})c_b^{2-\frac2z}}
\int\frac{dk_0' d k_{01}}{(2\pi)^2} \frac{(-ik_{01})^{\frac2z-1} - (i(k_{01}-\omega))^{\frac2z-1}}{ 2k_{01}-\omega}\,\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\cosh{\frac{k_0-k_{01}}{2T}}\cosh{\frac{k_0'-k_{01}}{2T}}}.\label{eq:eigenz}
\end{eqnarray}
Setting $z=3$, we recover the result of the previous section.
The numerical approach to solving this equation is the same as in the previous section: we vary the external momentum $p_x$ on the imaginary axis and look for a value of $k_0$ at which the equation has a solution. We present the result of $\lambda_L(|p_x|)/T$ in the main Letter, and show that any theory with the dynamic critical exponent in the region $2<z\leq3$ is maximally chaotic.
\subsection{Quasiparticles}
Here, we would like to explore a regime where quasiparticles appear, specifically when the dynamic critical exponent $z$ is in the region $1<z<2$. We explicitly derive and solve the eigenvalue equation, and find and compare the saddle-point and pole values. We show that the saddle point gives the dominant contribution to the OTOC, and the theories are therefore not maximally chaotic as per the criteria of Ref. \cite{gu2019}.
We first compute the fermion self-energy with the dynamic critical exponent in the region $1<z<2$. Compared to the previous case, we now set the boson mass to zero but have to include a finite UV cutoff $\Lambda$ when integrating over $k_y$. The fermion self-energy reads
\begin{eqnarray}
&&\Sigma(i\omega_n) \,= i g^2 \frac{T}{2} \sum_{\Omega_m}\int_{-\Lambda}^{\Lambda}\frac{dq_y}{2\pi} \frac{|q_y|\text{sgn}(\omega_n+\Omega_m)}{|q_y|^{z} + c_b|\Omega_m|}\\
&&=ig^2 \frac{T}{2\pi} \sum_{\Omega_m}\text{sgn}(\omega_n+\Omega_m) \left[\frac{\Lambda^{2-z}}{2-z} + \frac{\pi c_b^{\frac2z-1}|\Omega_m|^{\frac2z-1}}{z\sin\left(\frac{2\pi}{z}\right)}\right]\nonumber\\
&&= i\omega_n \frac{g^2}{2\pi^2 } \frac{\Lambda^{2-z}}{2-z} +ig^2 {T}^{\frac2z} \text{sgn}(\omega_n) \frac{(2\pi c_b)^{\frac2z-1} }{z\sin\left(\frac{2\pi}{z}\right)} H_{1-\frac2z}\left(\frac{|\omega_n|-\pi T}{2\pi T}\right),\nonumber
\end{eqnarray}
where $\Lambda$ is assumed to be large. The retarded self-energy is therefore
\begin{eqnarray}\label{eq:SigmaQP}
\Sigma^R(\omega) \,&&=\omega \frac{g^2}{2\pi^2 } \frac{\Lambda^{2-z}}{2-z} +ig^2 {T}^{\frac2z} \frac{(2\pi c_b)^{\frac2z-1} }{z\sin\left(\frac{2\pi}{z}\right)} H_{1-\frac2z}\left(\frac{-i\omega-\pi T}{2\pi T}\right).
\end{eqnarray}
The diagonal term in the eigenvalue equation reads
\begin{eqnarray}\label{eq:diagQP}
\tilde{f}_0(k_0,\omega)=\int \frac{dk_x}{2\pi} G^R(k)G^{R*}(k-\omega) \,\,&&= \frac{i}{ \Sigma^{R}(k_0)-\Sigma^{R*}(k_0-\omega)+\omega},
\end{eqnarray}
where we have retained the bare $k_0$ term in the fermion Green's functions since, unlike in the previously considered $2<z\le3$ case, it is no longer dominated by the self-energy.
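The $k_x$ integral in \eqref{eq:diagQP} is a standard contour integral: with one pole in each half-plane, $\int \frac{dk_x}{2\pi}\,\frac{1}{(k_x-p)(k_x-\bar q)} = \frac{i}{p-\bar q}$ for $\mathrm{Im}\,p>0>\mathrm{Im}\,\bar q$, obtained by closing the contour in the upper half-plane. The sketch below checks this identity numerically; the pole positions are illustrative placeholders, not the actual self-energies.

```python
import math

# Contour identity behind the k_x integral in Eq. (diagQP):
#   int dk_x/(2*pi) * 1/((k_x - p)(k_x - qbar)) = i/(p - qbar)
# for Im(p) > 0 > Im(qbar). The pole positions below are illustrative
# placeholders, not the paper's actual self-energy values.
p, qbar = 0.3 + 0.7j, -0.2 - 0.5j

def f(x):
    return 1.0 / (2.0 * math.pi * (x - p) * (x - qbar))

# Map the real line to (-pi/2, pi/2) via x = tan(t), dx = dt/cos(t)^2,
# and integrate with the midpoint rule.
n = 50000
h = math.pi / n
numeric = sum(
    f(math.tan(t)) / math.cos(t) ** 2
    for t in (-math.pi / 2 + (k + 0.5) * h for k in range(n))
) * h

exact = 1j / (p - qbar)  # = (1.2 + 0.5j) / 1.69 for these poles
```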
Combining \eqref{eq:SigmaQP} and \eqref{eq:diagQP}, we obtain
\begin{eqnarray}\label{eq:diagTerm}
\tilde{f}_0(k_0,\omega) = \frac{1}{T^{2/z}c_{f,z}\left(H_{1-2/z}\left(\frac{-ik_0-\pi T}{2\pi T}\right) + H_{1-2/z}\left(\frac{-i(\omega-k_0)-\pi T}{2\pi T}\right)\right) - i\omega(1+ \frac{g^2}{2\pi^2} \frac{\Lambda^{2-z}}{2-z})}.
\end{eqnarray}
The $K_1$ term is simply
\begin{eqnarray}
K_1 \tilde{\mathcal{C}} = g^2\int \frac{d k_0' dk'_y}{(2\pi)^2} \frac{c_b|k_y'|(k_0-k_0')}{ |k_y'|^{2z} + c_b^2 (k_0-k_0')^2}\frac{\tilde{\mathcal{C}}(k_0',\omega)}{\sinh\frac{k_0-k_0'}{2T}} = \frac{g^2c_b^{\frac2z-1} }{2z\sin\frac\pi{z}}\int \frac{d k_0'}{2\pi} \frac{(k_0-k_0')^{\frac2z-1}}{\sinh\frac{k_0-k_0'}{2T}} \tilde{\mathcal{C}}(k_0',\omega),
\end{eqnarray}
and we note that it is free of IR divergences. We also note that the $K_2$ term is the same as in the previous case \eqref{eq:eigenz} for $2<z\le 3$, and the eigenvalue equation becomes
\begin{eqnarray}
&&\left[T^{2/z}c_{f,z}\left(H_{1-2/z}\left(\frac{-ik_0-\pi T}{2\pi T}\right) + H_{1-2/z}\left(\frac{-i(\omega-k_0)-\pi T}{2\pi T}\right)\right) -i\omega\left(1+\frac{g^2}{2\pi^2}\frac{\Lambda^{2-z}}{2-z}\right) - p_x\right] \tilde{\mathcal{C}}(k_0,\omega)\nonumber\\
&&=\frac{g^2c_b^{\frac2z-1} }{2z\sin\frac\pi{z}}\int \frac{d k_0'}{2\pi} \left[\frac{(k_0-k_0')^{\frac2z-1}}{\sinh\frac{k_0-k_0'}{2T}} - \frac{ig^2}{8\cos({\frac{\pi}z})c_b}
\int\frac{d k_{01}}{2\pi} \frac{(-ik_{01})^{\frac2z-1} - (i(k_{01}-\omega))^{\frac2z-1}}{( 2k_{01}-\omega)\cosh{\frac{k_0-k_{01}}{2T}}\cosh{\frac{k_0'-k_{01}}{2T}}}\,\right] \tilde{\mathcal{C}}(k_0',\omega)\label{eq:eigenQP},
\end{eqnarray}
where the constants $c_{f,z}$ and $c_b$ are defined above.
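The $k_y'$ integral performed in the $K_1$ term above relies on the elementary identity $\int_0^\infty y\,dy/(y^{2z}+a^2) = \pi a^{2/z-2}/(2z\sin\frac{\pi}{z})$, valid for $z>1$, which, with $a = c_b(k_0-k_0')$, produces the $(k_0-k_0')^{2/z-1}$ kernel. A short numerical sketch checking the identity (the parameter values are illustrative):

```python
import math

# Identity behind the K_1 kernel: for z > 1 and a > 0,
#   int_0^inf y dy / (y^(2z) + a^2) = pi * a^(2/z - 2) / (2 z sin(pi/z)).
z, a = 1.5, 2.0  # illustrative values

def closed_form(z, a):
    return math.pi * a ** (2.0 / z - 2.0) / (2.0 * z * math.sin(math.pi / z))

# Midpoint rule after mapping (0, inf) to (0, 1) via y = u/(1-u), dy = du/(1-u)^2.
n = 200000
h = 1.0 / n
numeric = 0.0
for k in range(n):
    u = (k + 0.5) * h
    y = u / (1.0 - u)
    numeric += y / (y ** (2.0 * z) + a * a) / (1.0 - u) ** 2 * h
```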
\begin{figure}
(a)\includegraphics[width=3in]{lambdaLPx_z15.pdf}\ \ \ \ \
(b)\includegraphics[width=3in]{saddlePt.pdf}
\caption{Analysis of the eigenvalues numerically obtained from \eqref{eq:eigenQP} for $z=1.5$, $g=0.5$, and $\Lambda=50$. (a) Lyapunov exponent as a function of the external momentum $ip_x$ on the imaginary axis. The saddle point $|p_s|$ gives the dominant contribution since $|p_s|<|p_1|$ \cite{gu2019}. In particular, we obtained $|p_s| = 4.28$ and $|p_1| = 7.84$. The butterfly velocity is therefore $v_B = (d\lambda_L(p_x)/d|p_x|)_{p_x=p_s} = 0.8\,v_F$, where $v_F = 1$ is the Fermi velocity. The purple line is a fit $\lambda_L = 0.04 +0.78|p_x| +0.002|p_x|^2$ to the numerical data. The accuracy of the results is within $\sim 5\%$. (b) Finding the saddle point, which is given by the root of the plotted function.}
\label{fig:Lambdaz15}
\end{figure}
We solve the equation for parameters $z=1.5$, $g=0.5$, $T = 1$, and the UV cutoff $\Lambda = 50$. We find that ballistic growth is present, with Lyapunov exponent $\lambda_L = 3.42$ and butterfly velocity $v_B =0.8\,v_F$. Since the solution is numerical, we obtain the result with some error, which we estimate to be $\sim 5 \%$. We show the behavior of the eigenvalues upon changing the external momentum on the imaginary axis, $|p_x|$, in Fig.~\ref{fig:Lambdaz15}. We find that the saddle-point value is $|p_s| = 4.28$, and that the pole is at $|p_1| = 7.84$, which is much greater than the saddle point.
We also note that the leading behavior of the eigenvalues is set by the linear-in-frequency term in the diagonal part of equation \eqref{eq:eigenQP}. Therefore, to first order, we can estimate the butterfly velocity as
\begin{eqnarray}
v_B \approx \frac{1}{1+\frac{g^2}{2\pi^2}\frac{\Lambda^{2-z}}{2-z}} v_F,
\end{eqnarray}
where $v_F = 1$ is the Fermi velocity. For the chosen parameters, this estimate gives $v_B \approx 0.85 \, v_F$, which is close to the actual numerical solution obtained above. We therefore argue that, within the class of problems we are interested in, any theory with the dynamic critical exponent in the region $1<z<2$, {\it i.e.}, one that has quasiparticles, is not maximally chaotic.
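The quoted estimate can be checked with a few lines of arithmetic (an illustrative snippet; $v_F=1$ as in the text):

```python
import math

# First-order butterfly-velocity estimate from the linear-in-frequency term:
#   v_B ~ v_F / (1 + g^2/(2*pi^2) * Lambda^(2-z)/(2-z)).
def v_b_estimate(g, z, lam, v_f=1.0):
    return v_f / (1.0 + g * g / (2.0 * math.pi ** 2) * lam ** (2.0 - z) / (2.0 - z))

# Parameters used in the text, z = 1.5, g = 0.5, Lambda = 50, give
# v_B ~ 0.85 v_F, close to the full numerical result of 0.8 v_F.
```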
\section{Introduction}
Magnetic activity in late-type main-sequence stars is observable evidence of stellar magnetic fields. The generation and intensification of surface magnetic fields in solar-type stars are generally due to a complex dynamo mechanism, whose efficiency is determined by the interaction between differential rotation and subphotospheric convection in the stellar interior, and in which meridional circulation plays an important role \citep{2015SSRv..196..303B,2017LRSP...14....4B,2020LRSP...17....4C}.
Magnetic fields reach the stellar surface and manifest themselves in a variety of phenomena that we call stellar activity: starspots, chromospheric plages, heating of the chromosphere and corona, and impulsive flares. Starspots are a manifestation of magnetic field lines passing through the stellar photosphere and obstructing the convective upwelling of hot plasma, producing cool spots that are darker than the surrounding photosphere. Chromospheric plage regions correspond to enhanced network magnetic field and facula regions in the photosphere, which might surround sunspots but are not necessarily associated with them. Heating of the stellar chromosphere and corona generates chromospheric emission lines. Impulsive flares are visible in all regions of the spectrum and are due to the reconnection of magnetic field lines \citep{1975ApJ...200..747S,1989ApJ...337..964S,2006RPPh...69..563S,2017SCPMA..60a9601C,2018ApJS..236....7H}.
M stars are small cool main-sequence stars with effective temperatures in the range 2400 - 3800 K and radii between 0.10 and 0.63 R$_\odot$; they represent 75\% of the stars in the solar neighbourhood \citep{2002AJ....124.2721R, 2006AJ....132.2360H}. They are known to generate the strongest photospheric magnetic fields among main-sequence stars \citep{1985ApJ...299L..47S, 2009ApJ...692..538R,2017NatAs...1E.184S}, showing magnetic activity as spots, flares, plages, and other brightness inhomogeneities.
In recent years the exoplanet community has started to monitor samples of M dwarfs, aiming to search for habitable planets around these stars.
From an observational point of view, the chances of finding an Earth-like planet in the habitable zone increase as the host star's mass decreases. Therefore, M dwarfs are extremely interesting targets for planet discovery \citep{2012A&A...541A...9G}. However, magnetic activity increases with decreasing stellar mass \citep{1996AJ....112.2799H,2008AJ....135..785W,2017ApJ...834...85N}.
Stellar activity affects the search for exoplanets: in some cases the radial-velocity periodicity induced by stellar activity and rotation may produce spurious signals that mimic planetary signals. This was the case, for example, of AD Leonis, for which \citet{2018AJ....155..192T} proposed the existence of a planet, while \citet{2013A&A...552A.103R} and \citet{2013A&A...549A.109B} interpreted the RV signal present in the AD Leo spectra as being due to magnetic activity; this interpretation has also been recently confirmed by \citet{2020A&A...638A...5C}, who used a multiwavelength approach (visible and near-infrared) to show that the signal is of stellar origin. Therefore, a detailed study of magnetic activity in active M stars could improve our capability of modelling the signal generated by magnetic activity and increase our chances of finding new exoplanet candidates.
In addition, stars with high levels of magnetic activity show flares more frequently than inactive stars \citep{2009AJ....138..633K}. The large amounts of energy released by flares could potentially affect the structure and temperature regime of exoplanetary atmospheres, thereby affecting the size of the habitable zone \citep{2007AsBio...7..185L}. It is therefore crucial to better understand and quantify the activity of M dwarfs in terms of strength and variability.
Chromospheric activity is usually observed in the cores of the \ion{Ca}{ii} H\&K lines and the \ion{H}{i} Balmer lines. Other common optical activity indicators include lines such as the Na D$_{1,2}$ doublet, the \ion{Mg}{i} b triplet, or the \ion{Ca}{ii} infrared triplet.
A simultaneous analysis of the different indicators of magnetic activity could increase our knowledge of the chromospheric structure and the radial-velocity variations \citep[e.g.][]{2000A&AS..146..103M, 2013A&A...558A.141S, 2017A&A...598A..27M,2018A&A...616A.155L,2019A&A...627A.118M}. The common approach is to study the relationship between pairs of fluxes of different lines.
In this paper we aim to understand the behaviour of stellar chromospheres for M stars with high levels of activity.
To this end, we focus our study on one M dwarf, AD Leonis, a very close active star, which was analysed through spectroscopic monitoring in the optical band. We present an analysis of fluxes and profiles of the main optical activity indicators such as chromospheric lines of \ion{H}{i}, \ion{He}{i}, \ion{Na}{i}, and \ion{Ca}{ii}.
This paper is organised as follows. We describe the target in Sect. \ref{sec:adleo} and the observations in Sect. \ref{sec:obs}. We detail our procedure in Sect. \ref{sec:analysis}. Section \ref{sec:flux-flux} presents the analysis of the different spectral lines sensitive to the activity. A flare analysis is discussed in Sect. \ref{section:flare}. Our conclusions follow in Sect. \ref{sec:summary}.
\section{AD Leonis}\label{sec:adleo}
AD Leonis (AD Leo, GJ 388, BD +20 2465) is classified as dM4.5e \citep{2018AJ....155..192T} and is located in the immediate solar neighbourhood, at a distance of $\sim 4.97$ pc \citep{2018A&A...616A...1G}. \citet{2012ApJ...758...56S} estimated a radial velocity of 12.5 $\pm$ 0.2 km s$^{-1}$.
\citet{2013A&A...549A.109B} estimated a mass of 0.42 M$_\odot$ and a luminosity of 0.023 L$_\odot$.
The star has a radius of 0.436 $\pm$ 0.049 R$_\odot$ and effective temperature of 3414 $\pm$ 100 K \citep{2016ApJ...822...97H}. \citet{2012A&A...538A..25N} estimated the metallicity of AD Leo to be [Fe/H] = 0.07, while \citet{2012ApJ...748...93R} gave a value of 0.28 $\pm$ 0.17.
Based on spectropolarimetry, \citet{2008MNRAS.390..567M} reported a stellar rotation period of 2.2399 $\pm$ 0.0006 days; they also gave alternative solutions at periods of 2.2264 and 2.2537 days.
The strongest evidence in favour of the short rotation period of AD Leo comes from the Microvariability and Oscillations of Stars (MOST) photometric observations. MOST observations were reported to contain strong evidence for a periodicity of 2.23$^{+0.36}_{-0.27}$ days \citep{2012PASP..124..545H} caused by `spots distributed at different longitudes or, possibly, that the modulation is caused by varying surface coverage of a large polar spot or a spot that is viewed nearly pole on'. This suggests a young age, estimated to be 25-300 Myr by \citet{2009ApJ...699..649S}.
\citet{2016ApJ...822...97H} reported a value of $v$ sin $i$ for AD Leo equal to 2.63 km s$^{-1}$, which yields a projected rotation period of 8.38$^{+1.2}_{-1.1}$ days. Thus, since the rotation period of the star is 2.23 days, the star is oriented nearly pole-on, with an inclination of $\sim 15$ degrees, confirming the value reported by \citet{2008MNRAS.390..567M} and \citet{2012AJ....143...93R}.
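These numbers are consistent with simple geometry: the projected period is $P/\sin i = 2\pi R_\star/(v\sin i)$, and combining it with the photometric period gives the inclination. A short sketch of this arithmetic (stellar parameters from Table \ref{table:target}; the solar-radius constant is the IAU nominal value):

```python
import math

R_SUN_KM = 6.957e5   # IAU nominal solar radius in km
DAY_S = 86400.0

r_star = 0.436 * R_SUN_KM   # stellar radius
v_sin_i = 2.63              # km/s
p_rot = 2.23                # photometric rotation period in days

# Projected rotation period: P / sin(i) = 2*pi*R_star / (v sin i)  ~ 8.4 d
p_proj = 2.0 * math.pi * r_star / v_sin_i / DAY_S

# Inclination from sin(i) = P_rot / (P / sin i)  ->  ~ 15 degrees (nearly pole-on)
incl = math.degrees(math.asin(p_rot / p_proj))
```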
AD Leo has been observed to be variable on longer timescales as well. \citet{2014ApJ...781L...9B} reported an approximately 7 yr activity cycle based on ASAS optical photometry and CASLEO spectroscopy. Even though the period reported in the ASAS photometry has a rather modest statistical significance with a false alarm probability (FAP) of the order of 8\%, together with the spectroscopic data it indicates the presence of an approximately seven-year activity cycle in a convincing manner.
AD Leo hosts a magnetic field with properties similar to those observed for fully convective stars \citep{2008MNRAS.390..567M}.
A high-resolution infrared spectrum of AD Leo, obtained with the Kitt Peak 4 m Fourier Transform Spectrometer, clearly shows the presence of strong magnetic fields \citep{1985ApJ...299L..47S}.
\citet{2018MNRAS.479.4836L} inspected circularly polarised spectra and estimate an average large-scale magnetic field of $\sim 300 - 330$ G. Line broadenings in unpolarised spectra, also determined by small-scale field structures, reveal instead a stronger overall magnetic field ($3100$ G, \citealt{2017NatAs...1E.184S}).
Since AD Leo is a magnetically active star, its emission from the upper layers of the atmosphere (chromosphere and corona) is intense. In particular, in the optical band AD Leo is characterised by H$\alpha$, H$\beta$, and \ion{Ca}{ii} H\&K lines in emission, with variable line profiles (shape and intensity) that depend on the activity level at the time of observation, and by the presence of phenomena directly related to stellar magnetic activity such as flares.
It is well known for its frequent \citep{1984ApJS...54..375P, 2006AJ....132.2360H} and strong flares \citep[e.g.][]{1991ApJ...378..725H} that have been observed and studied in the optical, extreme UV, and X-ray wavelength ranges \citep[e.g.][]{1995ApJ...453..464H, 1996A&A...310..245M, 2000A&A...354.1021F, 2003ApJ...597..535H, 2003A&A...411..587V}.
\begin{table}[h!]
\caption{Target characteristics}
\label{table:target}
\centering
\begin{tabular}{cc}
\toprule[0.05cm]
\toprule
\multicolumn{2}{c}{AD Leonis} \\
\midrule
\smallskip
Spectral type \ \tablefootmark{(a)} & M4.5e \\
\enskip
$\mathrm{M_{\star}}\mathrm{(M_{\odot})}$ \ \tablefootmark{(b)} & $\sim 0.42$\\
\enskip
$\mathrm{R_{\star}}\mathrm{(R_{\odot})}$ \ \tablefootmark{(c)} & $0.436 \pm 0.049$ \\
\enskip
$\mathrm{log \ g}$ & $\sim 4.8$ \\
\enskip
d (pc)\ \tablefootmark{(d)} & $\sim 4.9660 \pm 0.0017$ \\
\enskip
$\mathrm{L_{\star}}\mathrm{(L_{\odot})}$ \ \tablefootmark{(b)} & $\sim 0.023$ \\
\enskip
$\mathrm{T_{eff}}$ (K) \ \tablefootmark{(c)} & $3414 \pm 100$ \\
\enskip
$v$ sin $i$ $\mathrm{(km \ s^{-1})}$ \ \tablefootmark{(c)}& $\sim 2.63$ \\
\enskip
$\mathrm{P_{phot}}$ (d)\ \tablefootmark{(e)} & $\sim 2.23$ \\
\enskip
$\mathrm{[Fe/H]}$ \ \tablefootmark{(f)}& $0.28 \pm 0.17$ \\
\enskip
RV $\mathrm{(km \ s^{-1})}$ \ \tablefootmark{(g)}& $12.5 \pm 0.2$\\
\enskip
B$_{pol}$ (G) \ \tablefootmark{(h)} & $\sim 300 - 330$\\
B$_{unpol}$ (G) \ \tablefootmark{(i)} & $\sim 3100$\\
\bottomrule[0.05cm]
\end{tabular}
\tablefoot{\tablefoottext{a}{\citet{2018AJ....155..192T}} \tablefoottext{b}{\citet{2013A&A...549A.109B}} \tablefoottext{c}{\citet{2016ApJ...822...97H}} \tablefoottext{d}{\citet{2018A&A...616A...1G}} \tablefoottext{e}{\citet{2008MNRAS.390..567M}} \tablefoottext{f}{\citet{2012ApJ...748...93R}} \tablefoottext{g}{\citet{2012ApJ...758...56S}} \tablefoottext{h}{\citet{2018MNRAS.479.4836L}}
\tablefoottext{i}{\citet{2017NatAs...1E.184S}}.}
\end{table}
\subsection{Activity indicators}
High-resolution spectroscopy of activity diagnostics has proven to be a powerful tool for improving our understanding of stellar chromospheres; optically thick photospheric lines with broad absorption wings have core emission features that are strictly linked to the chromosphere's thermal structure. High-resolution spectra are required to resolve these emission features and to characterise their complex profiles, which often consist of emission peaks with a self-reversed dip at line centre.
In particular, we analysed the fluxes and profiles of the \ion{H}{i} Balmer series, \ion{He}{i}, \ion{Na}{i}, and \ion{Ca}{ii} H\&K.
H$\alpha$ and \ion{Ca}{ii} K are two of the strongest optical emission lines in active M dwarf chromospheres. Across the M spectral class there is a range of emission strength in \ion{Ca}{ii} K, and a wide variety of both absorption and emission in H$\alpha$. The
H$\alpha$ core appears to trace hotter regions of the chromosphere ($\ge$ 7000 K), while \ion{Ca}{ii} K is formed in the cooler regions between the temperature minimum and $\sim$ 6000 K \citep{1982ApJ...258..740G, 1985ApJ...294..626C, 2009AJ....137.3297W}. Thus, H$\alpha$ and \ion{Ca}{ii} K together offer complementary information on chromospheric structure.
The \ion{Ca}{ii} H (3968.47 \AA) and K (3933.66 \AA) lines are very useful diagnostics of the solar chromosphere.
The emission cores of the H\&K lines are weak for very quiet regions on the Sun, but can exceed the local continuum in brightness for active stars, particularly for active M dwarfs that have a weak continuum.
For FGK stars, the H\&K lines show emission cores inside very broad absorption wings because \ion{Ca}{ii} is the primary ionisation stage in the photospheres and lower chromospheres of these warm stars. For M stars, \ion{Ca}{i} is the dominant ionisation stage in the photosphere and lower chromosphere, and as a consequence the H\&K lines for these stars do not have broad absorption wings \citep{2017ARA&A..55..159L}.
Observations of the solar surface indicate that the inhomogeneities on the surface may be due to contributions from different regions and phenomena; \ion{Ca}{ii} K core emission corresponds spatially to regions of concentrated magnetic field, such as active plage regions and bright network grains, while H$\alpha$ chromospheric emission and absorption can be produced in filaments protruding from active regions, in spots across the network of the quiet Sun, and in enhanced emission from bright points during flares \citep{2008ApJ...680.1542H, 2006ASPC..354..276R, 2007ASPC..368...27R}. Consequently, examining the relationship between the \ion{Ca}{ii} and Balmer lines can throw light on the nature of magnetic structures.
\citet{2017A&A...598A..28S}, extending a previous study by \citet{2010A&A...520A..79M}, analysed the short-term chromospheric variability and the flux excess emitted in the \ion{Ca}{ii} H\&K and H$\alpha$ lines of a sample of 71 early-type M dwarfs with different levels of activity (inactive and moderately active stars). They show that the \ion{Ca}{ii} H\&K flux excesses are strongly linearly correlated. When comparing the \ion{Ca}{ii} H\&K with the H$\alpha$ chromospheric line flux they found significantly more scatter, mostly for the most active stars. The same sample of inactive and moderately active stars was analysed by \citet{2017A&A...598A..27M}, who focused on average trends.
The sodium resonance doublet is an important photospheric and chromospheric diagnostic. The typical profile of the \ion{Na}{i} doublet shows extended wings and narrow cores. Active dwarfs with H$\alpha$ in emission have been shown to exhibit distinctive core emission of probable chromospheric origin \citep[e.g.][]{1978ApJ...226..144G,1981ApJS...46..159W,1991A&AS...90..437P}. \citet{1989A&A...209..279P} was the first to detect the important chromospheric contribution in the cores of the \ion{Na}{i} D$_{1,2}$ lines for active M dwarfs.
A complete study of the formation of the \ion{Na}{i} D$_{1,2}$ lines proposed by \citet{1997A&A...322..266A} confirmed that these lines are promising diagnostics of the lower-middle chromosphere.
\citet{2009MNRAS.400..238H} also showed that the main chromospheric contribution of these indicators arises in a narrow line core, but they also noted some differences in the inner wings, suggesting that magnetic activity could also affect the upper photosphere.
The \ion{He}{i} D$_3$ line (5875.62 \AA) is also an interesting diagnostic because it is formed in the lower transition region and is mostly detected in very active stars. All these chromospheric lines are used in planet-search programmes to identify stellar activity, and they are all correlated to some extent with the RV jitter \citep[e.g.][]{2012A&A...541A...9G}.
In this paper we present a study of all these chromospheric lines and their variability due to magnetic activity, focusing our attention on a specific M dwarf, well known for its high level of magnetic activity.
\section{Observations}\label{sec:obs}
The high-resolution spectra of AD Leo analysed in this work were obtained with two different instruments.
We analysed 33 high-resolution spectra of AD Leo, obtained from January to May 2006 with HARPS \citep{2003Msngr.114...20M}, the fibre-fed echelle spectrograph installed on the 3.6 m European Southern Observatory (ESO) telescope at the La Silla Observatory, Chile.
In addition we considered 63 HARPS-N \citep{2012SPIE.8446E..1VC} spectra collected in the context of the Global Architecture of Planetary System (GAPS) programme \citep{2013A&A...554A..28C} \footnote{AD Leo was originally part of the search for planets around young stars of the GAPS 2 programme, since a candidate planet around it was proposed by \citet{2018AJ....155..192T} and then discarded by \citet{2020A&A...638A...5C}.}
HARPS-N observations were performed in two different observing seasons: from April to June 2018 and from November 2018 to January 2019. All the data used in this work are listed in Table \ref{tab:ObsData}.
The two instruments have very similar performance, with resolving powers of R $\sim$ 120000 (HARPS) and R $\sim$ 115000 (HARPS-N) and spectral coverages of 378-691 nm and 383-693 nm, respectively. The spectra are provided already reduced with the standard ESO/HARPS-N calibration pipelines.
\section{Analysis of the observations}\label{sec:analysis}
We identified a number of lines sensitive to activity, listed in Table \ref{tab:rangeline}. A strong emission is detected, even during the quiescent state of the star, for the H$\alpha$, H$\beta$, \ion{Ca}{ii} H\&K lines; an intermediate emission above the continuum is observed for the He lines (\ion{He}{i} D$_3$, \ion{He}{i} 4026 \AA \ and \ion{He}{i} 4471 \AA ); and the \ion{Na}{i} doublet (D$_1$ \& D$_2$) shows emission in the core of the line profile.
These lines arise from transitions with different excitation potentials, so their formation requires different physical conditions, occurring in different parts of the active atmosphere of AD Leo. As a result, changes in the equivalent width and/or line profile of these lines can be explained by a direct or indirect impact of magnetic activity on the whole stellar atmosphere and on its time variability.
As a measure of the chromospheric activity strength, we measured flux excesses, as described in the next sections.
To measure the emission caused by activity, we chose wavelength integration ranges broad enough to encompass the broadest emission, even in the case of a strong flaring event. These ranges were set after a visual inspection of the spectra and are reported in Table \ref{tab:rangeline} for each line we considered.
In addition, we analysed other lines known as good indicators of chromospheric activity, such as the \ion{Mg}{i} b$_1$, b$_2$, b$_4$ lines and \ion{Fe}{i} at 5270 \AA. They show the same behaviour as the other lines studied in this work, but their emission above the continuum is less intense, so they are not reported here.
\begin{table}[h]\scriptsize
\begin{center}
\caption{Rest wavelength and integration ranges for the selected lines. Blue and red integration ranges were chosen to fit the continuum.}
\label{tab:rangeline}
\begin{tabular}{lcccc}
\toprule[0.05cm]
\toprule
Line & $\lambda$ & Blue integration & W & Red integration \\
& (\AA) & ranges (\AA) & (\AA)& ranges (\AA) \\
\midrule
\ion{Ca}{ii} K & 3933.66 & 3932.20 - 3933.20 & 3933.20 - 3934.50 & 3934.50 - 3935.00 \\
\ion{Ca}{ii} H & 3968.47 & 3967.70 - 3968.00 & 3968.00 - 3969.10 & 3969.10 - 3969.30 \\
\ion{He}{i} 4026 & 4026.19 & 4025.40 - 4026.10 & 4026.10 - 4026.70 & 4026.70 - 4027.00 \\
\ion{He}{i} 4471 & 4471.48 & 4470.00 - 4471.40 & 4471.40 - 4471.85 & 4471.85 - 4473.00 \\
H$\beta$ & 4861.35 & 4858.70 - 4859.60 & 4859.60 - 4864.00 & 4864.00 - 4864.20 \\
\ion{He}{i} 5876 & 5875.62 & 5875.30 - 5875.42 & 5875.28 - 5876.80 & 5876.90 - 5877.00 \\
\ion{Na}{i} D$_2$ & 5889.95 & 5889.50 - 5889.80 & 5889.80 - 5890.70 & 5890.70 - 5891.00 \\
\ion{Na}{i} D$_1$ & 5895.92 & 5895.70 - 5895.80 & 5895.90 - 5896.50 & 5896.60 - 5896.70 \\
H$\alpha$ & 6562.79 & 6553.00 - 6555.00 & 6555.00 - 6570.00 & 6570.00 - 6572.00 \\
\bottomrule[0.05cm]
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[ht!]
\resizebox{\hsize}{!}
{\includegraphics[width=\hsize]{Curve_luce_flx_log.png}}
\caption{Line flux vs. time (MJD$_0$ is the start time of the 2006 observations). Data obtained in 2006 are shown in the left panel; data obtained in 2018 are shown in the middle and right panels. <F$_\lambda$> is the average logarithmic flux of each activity indicator for each season (for the second season of 2018 these values exclude the flare points). Black arrows mark the points corresponding to the flare event. The error bars are shown in the plots, but for most of the points they are too small to be visible.}
\label{fig:Timeseries}
\end{figure*}
\subsection{Flux rescaling}\label{sec:flux-rescaling}
The HARPS and HARPS-N spectra are not flux-calibrated and are therefore in arbitrary units.
The spectra provided by the data reduction system (DRS) show night-to-night variations in the continuum level at different wavelengths, due to atmospheric differential absorption and instrumental effects. To correct for these effects, and to scale the observed spectra to a common flux reference so that the intensities of the analysed lines can be compared, we compared them with synthetic spectra from the BT-Settl spectral library provided by \citet{2011ASPC..448...91A}\footnote{We adopt the CIFIST2011 models (\url{https://phoenix.ens-lyon.fr/Grids/BT-Settl/CIFIST2011bc/SPECTRA/}).} with $T_{\mathrm{eff}}$, log(\textit{g}), and [Fe/H] corresponding to the stellar parameters (see Table \ref{table:target}), in analogy to the procedure adopted to compute the excess fluxes by \citet{2017A&A...598A..28S}.
Both the observed and the model spectra were degraded to low resolution by convolving them with a Gaussian kernel with $\sigma = 80$ \AA, in order to avoid discrepancies between the observed and the model line profiles.
Finally, the observed-to-model flux ratio was used to rescale the observed high-resolution spectra.
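This rescaling step can be sketched as follows. This is a minimal illustration, not the actual pipeline of this work: the function name, the assumption of a common wavelength grid, and the use of `scipy.ndimage.gaussian_filter1d` for the smoothing are all assumptions for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def rescale_to_model(wave, f_obs, f_model, sigma_aa=80.0):
    """Rescale an observed spectrum to a model flux reference.

    Both spectra (assumed here to share a wavelength grid in Angstrom)
    are smoothed with a Gaussian kernel of sigma = 80 A, and the
    smoothed model-to-observed ratio rescales the high-resolution
    observed spectrum.  Illustrative sketch only.
    """
    dlam = np.median(np.diff(wave))      # wavelength step (A/pixel)
    sigma_pix = sigma_aa / dlam          # kernel width in pixels
    smooth_obs = gaussian_filter1d(f_obs, sigma_pix)
    smooth_mod = gaussian_filter1d(f_model, sigma_pix)
    return f_obs * smooth_mod / smooth_obs
```

With this construction, narrow spectral features of the observed spectrum are preserved while the broadband continuum shape is forced onto the model reference.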
The flux calibration procedure may be less accurate for the strong emission lines sensitive to magnetic activity, because the model does not take into account the chromospheric emission; therefore, to obtain a more precise calibration, these regions were excluded from the procedure.
We used the flux-calibrated spectra to calculate the flux of each line according to Eq. \eqref{eq:Fline}, with the integration ranges listed in Table \ref{tab:rangeline}. This quantity provides a measure similar to the equivalent width (EW), but less influenced by the continuum flux estimation. This is important for lines located in spectral regions where the continuum is very low, and hence its relative uncertainty is very high. The line flux is computed as
\begin{equation}
\label{eq:Fline}
F_{line} = \sum_{i=1}^{n} F_{i}\,d\lambda - \dfrac{F_{c,b}+F_{c,r}}{2}\,W
,\end{equation}
where d$\lambda$ is the width of the wavelength bin; $F_{i}$ is the observed flux in the bin $i$ of the line; $n$ is the number of bins within the line region, defined as $W/d\lambda$; F$_{c,b}$ and F$_{c,r}$ are the flux values measured at the extremes of the integration range on the blue and red side of the line, respectively; and $W$ is the wavelength range used for the integration, corresponding to the full line width (see Table \ref{tab:rangeline}).
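As a minimal numerical sketch of Eq. (1) (the function and argument names are illustrative, and the side continua are approximated here by the mean flux in each side range rather than by the linear fit described below):

```python
import numpy as np

def line_flux(wave, flux, line_range, cont_blue, cont_red):
    """Integrated line flux above the continuum, following Eq. (1).

    line_range, cont_blue, cont_red are (min, max) wavelength pairs in
    Angstrom, as in Table 1; illustrative sketch only.
    """
    dlam = np.median(np.diff(wave))                      # bin width
    in_line = (wave >= line_range[0]) & (wave < line_range[1])
    f_cb = flux[(wave >= cont_blue[0]) & (wave < cont_blue[1])].mean()
    f_cr = flux[(wave >= cont_red[0]) & (wave < cont_red[1])].mean()
    W = line_range[1] - line_range[0]                    # full line width
    return flux[in_line].sum() * dlam - 0.5 * (f_cb + f_cr) * W
```

For a featureless spectrum the continuum term cancels the integral, so the line flux is zero by construction, as expected for a measure of the emission excess.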
We performed several tests to find the most accurate method for determining the continuum flux F$_c$. We chose to fit the continuum (in the blue and red integration ranges defined in Table \ref{tab:rangeline}) with a linear function. This method shows that the continuum flux is, to a good approximation, constant over the considered range in most of the analysed spectra; however, some spectra show a slope, which the linear fit allows us to take into account. The error of the continuum flux, $\delta$F$_{c,i}$, was estimated by applying standard error propagation to the uncertainties of the fit parameters.
There is no obvious estimate for the statistical error of the observed flux, $\delta$F$_{i}$. The spectrum is affected by numerous minor lines that are not variable in time and that characterise every part of the spectrum. Since these lines are too numerous to be isolated, and since they affect both the continuum and the line profile, we assume that $\delta F_{i}$ is the standard deviation with respect to the continuum flux, calculated over the $N$ points outside the line (Eq. \ref{eq:errflusso}):
\begin{equation}\label{eq:errflusso}
\delta F_{i} = \sqrt{\dfrac{\sum_i \big(F_{i} - F_{c,i}\big)^2}{N-1}}
.\end{equation}
The $F_{line}$ uncertainty was estimated using Eq. \ref{eq:errflussoline}, assuming $d\lambda = 0.01$ \AA:
\begin{equation}\label{eq:errflussoline}
\delta F_{line} = \sqrt{\sum_i\bigg(\delta F_{i} \ d\lambda\bigg)^2 + \bigg(\delta F_{c,b}^2+\delta F_{c,r}^2\bigg)\bigg(\dfrac{W}{2}\bigg)^2+\delta F_{range}^2}~.
\end{equation}Here $\delta$F$_{range}$ takes into account the possible effects due to the selection of the ranges used to estimate the continuum ($\delta W$). This value was calculated as half the difference between the maximum and minimum values of the continuum flux obtained with three different ranges for the continuum measurements.
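The error budget of Eq. (3) can be sketched as follows, under the simplifying assumption of a single per-bin error $\delta F_i$ for all $n$ bins of the line (all names are illustrative):

```python
import numpy as np

def line_flux_error(dF_i, dF_cb, dF_cr, dF_range, n, W, dlam=0.01):
    """Uncertainty on the line flux, following Eq. (3).

    Assumes the per-bin flux error dF_i (Eq. 2) is the same for all n
    bins of the line, so the sum over bins reduces to n*(dF_i*dlam)**2.
    Illustrative sketch only.
    """
    term_bins = n * (dF_i * dlam) ** 2             # line-region term
    term_cont = (dF_cb**2 + dF_cr**2) * (W / 2.0) ** 2  # continuum term
    return np.sqrt(term_bins + term_cont + dF_range**2)
```

The continuum term usually dominates for broad lines, since it scales with $(W/2)^2$ while the line-region term scales only linearly with $n$.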
\subsection{Time series and line flux variability}
Figure \ref{fig:Timeseries} shows the time variability of the integrated line flux of the analysed activity indicators. The left panel shows the HARPS data obtained in 2006; the middle and right panels show the two observing seasons of the HARPS-N dataset obtained in 2018.
Several conclusions can be drawn from this figure. First, we can confirm that the surface flux of the analysed lines is variable on both short (hours, days) and long (months, years) timescales throughout the observed period. Second, during the second season of 2018 (right panel) a flare is observed: two points, corresponding to two observations obtained two hours apart, highlight this phenomenon and allow us to follow its evolution. A more detailed analysis of the flare is given in Sect. \ref{section:flare}. Three other possible flare events are detected during 2006 (left panel).
Moreover, from the time series of the analysed activity indicators we can assert that AD Leo was more active in 2018 than in 2006. Unexpectedly, despite the lower activity level, the time series of \ion{Ca}{ii} H\&K shows a higher flux in 2006 (see the average logarithmic flux <F$_\lambda$> in Fig. \ref{fig:Timeseries} and the histograms in Fig. \ref{fig:Corr20062018}).
\section{Flux-flux relationship}\label{sec:flux-flux}
In the following we analyse the relationships between the chromospheric fluxes of different activity indicators.
We assess the presence of correlations using Spearman's rank-order correlation coefficient ($\rho$).
Figure \ref{fig:Corr20062018} shows the correlations between the fluxes obtained from observations in 2018 (dark blue points) and those in 2006 (red points), and the results of the statistical tests separated for the two seasons are provided in Table \ref{tab:correlation}.
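The statistical test can be sketched with `scipy.stats.spearmanr`, which returns both $\rho$ and the two-sided p-value of the null hypothesis; the flux arrays below are synthetic placeholders, not the measured values of this work.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative flux-flux correlation check: Spearman's rank coefficient
# rho and its two-sided p-value (a small p-value indicates a significant
# correlation).  The flux values below are synthetic placeholders.
f_halpha = np.array([1.0, 1.3, 1.1, 1.8, 2.4, 2.0])
f_hbeta  = np.array([0.5, 0.7, 0.6, 0.9, 1.3, 1.1])

rho, p_value = spearmanr(f_halpha, f_hbeta)
```

Being rank-based, Spearman's $\rho$ captures any monotonic relation between the fluxes without assuming linearity, which is appropriate for flux-flux relations spanning different formation regions.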
\begin{figure*}
\centering
\includegraphics[scale=0.35]{Corr_Dati_2006_2018_contract.pdf}
\caption{Correlation plot of the fluxes (logarithmic scale) of the different activity indicators. The diagonal panels show the flux histogram of each indicator. Most of the indicators show a significant correlation in both datasets; the correlations with \ion{Ca}{ii} are less significant and more scattered. The figure also shows that the star was less active in 2006 (red points) than in 2018 (blue points). However, the \ion{Ca}{ii} flux is higher in 2006 than in 2018. This result can be interpreted as a larger surface coverage of plages and filaments during the 2006 observations.}
\label{fig:Corr20062018}
\end{figure*}
\begin{table}\scriptsize
\begin{center}
\caption{Statistical analysis of the chromospheric activity indicator fluxes. The third and fourth columns report the Spearman coefficient $\rho$ and the probability of the null hypothesis for the 2006 dataset; the last two columns report the same values for the 2018 dataset. Weak correlations are given in brackets; values showing no correlation are in boldface.}
\label{tab:correlation}
\begin{tabular}{llcccc}
\toprule[0.05cm]
\toprule
X-index & Y-index & $\rho_{2006}$\tablefootmark{a} & P$_{\rho_{2006}}$\tablefootmark{b}& $\rho_{2018}$ & P$_{\rho_{2018}}$ \\
\midrule
H$\alpha$ & H$\beta$ & 0.965 & 0.000006 \% & 0.941 & 0 \% \\
H$\alpha$ & \ion{Ca}{ii} H & 0.518 & 0.34 \% & 0.490 & 0.01 \% \\
H$\alpha$ & \ion{Ca}{ii} K & (0.438) & (1.33 \%) & \textbf{0.233} & \textbf{6.4 \%} \\
H$\alpha$ & \ion{Ca}{ii} & 0.494 & 0.5 \% & 0.409 & 0.12 \% \\
H$\alpha$ & \ion{He}{i} 4026 & 0.816 & 0.0004 \% & 0.769 & 0 \% \\
H$\alpha$ & \ion{He}{i} 4471 & 0.920 & 0.000018 \% & 0.831 & 0 \% \\
H$\alpha$ & \ion{He}{i} 5876 & 0.953 & 0.000006 \% & 0.802 & 0 \% \\
H$\alpha$ & \ion{Na}{i} & 0.932 & 0.000012 \% & 0.585 & 0.0003 \% \\
H$\beta$ & \ion{Ca}{ii} H & 0.532 & 0.26 \% & 0.459 & 0.027 \% \\
H$\beta$ & \ion{Ca}{ii} K & (0.433) & (1.43 \%) & \textbf{0.212} & \textbf{9.26 \%} \\
H$\beta$ & \ion{Ca}{ii} & 0.489 & 0.6 \% & 0.390 & 0.2 \% \\
H$\beta$ & \ion{He}{i} 4026 & 0.821 & 0.0003 \% & 0.831 & 0 \% \\
H$\beta$ & \ion{He}{i} 4471 & 0.952 & 0.000006 \% & 0.886 & 0 \% \\
H$\beta$ & \ion{He}{i} 5876 & 0.948 & 0.000006 \% & 0.901 & 0 \% \\
H$\beta$ & \ion{Na}{i} & 0.933 & 0.000012 \% & 0.601 & 0.00018 \% \\
\ion{Ca}{ii} H & \ion{Ca}{ii} K & 0.928 & 0.000018 \% & 0.771 & 0 \% \\
\ion{Ca}{ii} H & \ion{He}{i} 4026 & 0.597 & 0.07 \% & 0.338 & 0.73 \% \\
\ion{Ca}{ii} H & \ion{He}{i} 4471 & 0.543 & 0.21 \% & 0.379 & 0.26 \% \\
\ion{Ca}{ii} H & \ion{He}{i} 5876 & 0.517 & 0.3 \% & 0.408 & 0.12 \% \\
\ion{Ca}{ii} H & \ion{Na}{i} & 0.517 & 0.43 \% & 0.520 & 0.004 \% \\
\ion{Ca}{ii} K & \ion{He}{i} 4026 & 0.511 & 0.38 \% & \textbf{0.123} & \textbf{33 \%} \\
\ion{Ca}{ii} K & \ion{He}{i} 4471 & (0.422) & (1.69 \%) & \textbf{0.187} & \textbf{14 \%} \\
\ion{Ca}{ii} K & \ion{He}{i} 5876 & (0.409) & (2.1 \%) & \textbf{0.191} & \textbf{13 \%} \\
\ion{Ca}{ii} K & \ion{Na}{i} & (0.404) & (2.2 \%) & 0.327 & 0.94 \% \\
\ion{Ca}{ii} & \ion{He}{i} 4026 & 0.591 & 0.08 \% & \textbf{0.268} & \textbf{3.37 \%} \\
\ion{Ca}{ii} & \ion{He}{i} 4471 & 0.494 & 0.52 \% & 0.336 & 0.76 \% \\
\ion{Ca}{ii} & \ion{He}{i} 5876 & 0.479 & 0.69 \% & 0.344 & 0.63 \% \\
\ion{Ca}{ii} & \ion{Na}{i} & 0.478 & 0.69 \% & 0.479 & 0.014 \% \\
\ion{He}{i} 4026 & \ion{He}{i} 4471 & 0.86 & 0.00009 \% & 0.844 & 0 \% \\
\ion{He}{i} 4026 & \ion{He}{i} 5876 & 0.842 & 0.00019 \% & 0.808 & 0 \% \\
\ion{He}{i} 4026 & \ion{Na}{i} & 0.777 & 0.0011 \% & 0.397 & 0.16 \% \\
\ion{He}{i} 4471 & \ion{He}{i} 5876 & 0.943 & 0.000012 \% & 0.919 & 0 \% \\
\ion{He}{i} 4471 & \ion{Na}{i} & 0.887 & 0.00005 \% & 0.478 & 0.015 \% \\
\ion{He}{i} 5876 & \ion{Na}{i} & 0.947 & 0.000006 \% & 0.536 & 0.002 \% \\
\ion{Na}{i} D1 & \ion{Na}{i} D2 & 0.867 & 0.00001 \% & 0.893 & 0 \% \\
\bottomrule[0.05cm]
\end{tabular}
\end{center}
\tablefoot{\tablefoottext{a}{Rank correlation for two populations} \tablefoottext{b}{P-value denotes the two-sided significance of its deviation from 0 by random chance, i.e. a small value indicates significant correlation}.}
\end{table}
It can be seen that most of the indicators show a significant correlation (P $<1\%$) in both datasets. The \ion{Ca}{ii} K line has a peculiar behaviour, with a weak correlation ($1 \% <$ P $<3\%$) with the other indicators in the 2006 season and no correlation (P $>3\%$) in 2018. The correlation between most of the analysed lines implies that they have a similar origin and are likely formed from the same material or in the same region of the stellar atmosphere.
We evaluated the same correlations excluding the points corresponding to the flare to verify their impact, and found that they do not influence the correlations among the indices.
Finally, we verified that the correlation among the activity indices for the whole dataset is maintained when we join the data obtained 12 years apart, with the only exception of the \ion{Ca}{ii} H\&K index, which shows no correlation with the other indices.
In addition, we estimated the Balmer decrement (H$\alpha$/H$\beta$), an indicator of the physical conditions of the emitting regions \citep[e.g.][]{1979ApJ...230..581L,1991PhDT.........9C}. \citet{2017A&A...598A..27M} showed the Balmer decrement as a function of the effective temperature and overplotted the typical values of solar plages. Our result ($\sim 1.76$) is compatible with the values of solar plages, suggesting that AD Leo is dominated by them.
\subsection{\ion{Ca}{ii} H\&K versus H$\mathrm{\alpha}$}
The comparison between the H$\alpha$ (or H$\beta$) and \ion{Ca}{ii} H\&K fluxes shows that the correlation between these two indicators is less significant and more scattered than the correlations between the other lines.
This result is consistent with the hypothesis that the phenomena producing the two lines are connected, but that the material generating them lies in different regions of the atmosphere. We also tested the correlations between the indicators excluding the measurements taken during the flare; these tests return values only slightly more significant than the previous ones.
This result is consistent with those presented by \citet{2017A&A...598A..28S}, who show that H$\alpha$ and \ion{Ca}{ii} H\&K are correlated and that the correlation is more scattered for the most active stars. Specifically, in Fig. \ref{fig:rico_scandariato} the blue bubbles represent an envelope of the results obtained by \citet{2017A&A...598A..28S}, while the orange points are the values obtained for AD Leo in this study.
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\hsize]{ADLeoinScandariato.png}
\caption{Plot of F$_{HK}$ vs. F$_{H\alpha}$. The blue bubbles map the region populated by the stars analysed by \citet{2017A&A...598A..28S}; the orange points are the flux values obtained for AD Leo in this study.}
\label{fig:rico_scandariato}%
\end{figure}
Furthermore, although all the other activity indicators are more intense in 2018, the flux of \ion{Ca}{ii} H\&K is higher in 2006 than in 2018. Considering the model of \citet{2009A&A...501.1103M}, according to which the \ion{Ca}{ii} core emission is connected to active plage regions and bright network grains, while the H$\alpha$ line is produced by all the inhomogeneities present on the stellar surface, our result can be interpreted as a larger surface coverage of plages and filaments during the 2006 observations.
Even though the Balmer decrement suggests that AD Leo is dominated by plages, this ratio does not allow us to distinguish between the two observing seasons.
\section{Flare analysis}
\label{section:flare}
Solar and stellar flares are observable evidence of magnetic energy released on short timescales. The magnetic reconnection plays a key role in the reconfiguration of the magnetic field lines and the conversion of magnetic energy into kinetic and thermal energies of plasma \citep{1996ApJ...459..330F, 2000mare.book.....P}. The impulsive X-ray and UV emission associated with stellar flares can affect the stellar atmosphere.
The most extreme solar flare recorded to have hit Earth occurred in 1859 \citep{1859MNRAS..20...13C,1859MNRAS..20...15H} and released an energy of $10^{32}$ erg.
Stellar flares are expected to be generated by the same mechanism as solar flares, over a wider range of radiated energies and timescales \citep[e.g.][]{2010ARA&A..48..241B,2018MNRAS.475.2842D}. Over short timescales of minutes to a few hours, they emit energies ranging from $10^{23}$ erg (nanoflares) \citep[e.g.][]{2000ApJ...529..554P} to $10^{33}-10^{38}$ erg (superflares) \citep[e.g.][]{2013ApJS..209....5S}.
In the standard solar flare model, flares are produced by accelerated non-thermal electrons that propagate downward and heat the chromosphere. As a consequence, the heated chromospheric material moves upward (evaporation), filling the coronal loop above. This material then cools down, radiating away its excess energy, and finally moves downward (condensation), returning to the lower layers of the stellar atmosphere \citep{1998ApJ...494L.113Y}. Because of the high temperatures and large motions of the flaring material, chromospheric emission lines during flares appear much broader than in the quiescent state of the star.
In the right panel of Fig. \ref{fig:Timeseries}, black arrows indicate two consecutive points, obtained during the second observing season of 2018, where the flux of all the activity indicators is significantly higher than in the quiescent state of the star. It is therefore reasonable to assume the presence of a flare.
Since the two spectra were obtained two hours apart, we can roughly follow the temporal evolution of the flare. We can suppose that the first observation corresponds to the maximum phase of the flare, while the second point, with a lower flux than the first, was obtained during the decay phase.
The observed profiles of some spectral lines sensitive to stellar activity are broadened during the flare. This can be due to the motion of material inside the magnetic loop.
We considered a number of lines where the broadening is more evident (H$\alpha$, H$\beta$, \ion{He}{i} 4471 \AA, \ion{He}{i} 5876 \AA) and fitted each profile with two Gaussian components \citep[see][]{2006A&A...452..987C,2018A&A...615A..14F}. The Balmer lines show a self-reversal absorption in the core, but this feature was not taken into account because it does not contribute significantly to the following analysis of the flare. The fit with two components provides a reasonably good description of the line profile even in the most asymmetric cases. In general, the Balmer lines display two distinct phases, called the impulsive and the gradual phases, with broader profiles during the impulsive phase and narrower profiles during the gradual phase. We do not consider the \ion{Ca}{ii} H\&K lines, even though they are strong emission lines, because they are not significantly influenced by the flare and do not show broadening. Since the flare is supposed to be generated in regions different from the plages, the fact that the \ion{Ca}{ii} lines are not broadened is consistent with the hypothesis that this indicator is influenced by the presence of plages and that AD Leo is dominated by them.
The results of the fit (the redshift and the sigma) for the narrow and the broad components are provided in Table \ref{tab:valorishift}.
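The two-component decomposition can be sketched as follows. This is an illustrative example with a synthetic, noiseless H$\alpha$-like profile; the amplitudes, shifts, and widths below are placeholders, not the fitted values reported in Table \ref{tab:valorishift}.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458  # speed of light in km/s

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of a narrow and a broad Gaussian emission component."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

# Synthetic H-alpha-like profile: narrow component nearly at rest,
# broad component redshifted by 0.6 A (illustrative values).
lam0 = 6562.79
x = np.linspace(lam0 - 15, lam0 + 15, 600)
y = two_gauss(x, 1.0, lam0 + 0.01, 0.65, 0.3, lam0 + 0.6, 3.4)

# Initial guesses separate the two components by their widths.
p0 = [1.0, lam0, 0.7, 0.3, lam0, 3.0]
popt, _ = curve_fit(two_gauss, x, y, p0=p0)

# Convert the broad-component centroid shift and width to velocities.
dv_broad = (popt[4] - lam0) / lam0 * C_KMS       # km/s
sigma_broad = popt[5] / lam0 * C_KMS             # km/s
```

Converting the fitted centroid shift and $\sigma$ to velocity units, as in the last two lines, yields quantities directly comparable to the $\delta v$ and $\sigma(\delta v)$ columns of the table.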
\begin{table}[h]\footnotesize
\begin{center}
\caption{Fitted values of the redshift $\delta v$ and width $\sigma(\delta v)$ of the narrow and broad components for the ID 79 and ID 80 spectra taken during the flare. The errors resulting from the fit are $\le 0.1\%$.}
\label{tab:valorishift}
\begin{tabular}{lcccc}
\toprule[0.05cm]
\toprule
\multicolumn{5}{c}{ID obs 79} \\
\midrule
\multirow{3}*{Line} & \multicolumn{2}{c}{Narrow} & \multicolumn{2}{c}{Broad} \\
\cmidrule(lr){2-5}
& $\delta v$ & $\sigma(\delta v)$ & $\delta v$ & $\sigma(\delta v)$ \\
& (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) \\
\midrule
H$\alpha$ & 0.55 & 31.10 & 1.77 & 155.38 \\
H$\beta$ & 1.50 & 26.51 & 4.31 & 129.03 \\
\ion{He}{i} 4471 & 0.68 & 7.93 & 15.77 & 16.73 \\
\ion{He}{i} 5876 & 2.06 & 8.21 & 10.13 & 15.49 \\
\midrule[0.04cm]
\multicolumn{5}{c}{ID obs 80} \\
\midrule
\multirow{3}*{Line} & \multicolumn{2}{c}{Narrow} & \multicolumn{2}{c}{Broad} \\
\cmidrule(lr){2-5}
& $\delta v$ & $\sigma(\delta v)$ & $\delta v$ & $\sigma(\delta v)$ \\
& (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) \\
\midrule
H$\alpha$ & 0.68 & 28.82 & 29.52 & 170.99 \\
H$\beta$ & 1.48 & 22.34 & 34.95 & 86.30 \\
\ion{He}{i} 4471 & 0.96 & 6.38 & 21.49 & 11.00 \\
\ion{He}{i} 5876 & 0.28 & 8.72 & 15.78 & 9.35 \\
\bottomrule[0.05cm]
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.8\hsize]{Flare_comp.png}
\caption{Spectrum ID 79 for the flare's maximum phase (red dotted line) and spectrum ID 80 for the decay phase (blue dotted line). Gaussian fits with broad and narrow components, in orange and purple, respectively, for ID 79, and in light blue and green for ID 80. The black dashed line shows the centre of each broad component.}
\label{fig:fit}%
\end{figure*}
Figure \ref{fig:fit} shows the fits performed on the spectra obtained during the flare. The red dotted line corresponds to the spectrum obtained during the maximum phase of the flare (ID 79), and the blue dotted line to the spectrum obtained during the decay phase (ID 80). The orange and light blue Gaussians represent the broad components for observations ID 79 and ID 80, respectively, while the purple (ID 79) and green (ID 80) Gaussians are the narrow components obtained from the fit.
The spectra in Fig. \ref{fig:fit} show that the broadening of Balmer lines is larger than that of the helium lines.
The broad components of the Balmer and helium lines are more redshifted than the narrow components. \citet{1988A&A...193..229D} observed a similar effect during a flare on YZ CMi and suggested the presence of material inside the loop corresponding to different flare kernels that brighten successively, one after another. Each downflow would produce a redshifted contribution to the Balmer lines.
Moreover, Fig. \ref{fig:fit} shows a symmetric broadening during the decay phase (light blue component) for H$\alpha$ and H$\beta$, with $\sigma$ of the order of hundreds of km s$^{-1}$.
This symmetric broadening can be interpreted as the presence of material inside the magnetic loop that undergoes blueshift and redshift simultaneously. The exposure time (900 s) of the HARPS-N observations, shorter than the evolution time of the flare, leads us to exclude the possibility that we are monitoring the same material first rising inside the loop and then falling.
This result can be explained instead as the presence of turbulent motion that can be dominant with respect to the coherent motion of the material (uphill or downhill) \citep[see][]{1999ASPC..158..226M,2005A&A...436..677F}.
H$\alpha$ monitors the lower regions of the magnetic loop; in these regions, due to the high density of the material, the turbulent motion can dominate over the coherent motion of the material, which instead follows the magnetic field lines.
Globally, the lines are shifted by the coherent motion, but the broadening due to turbulence is much larger and dominates the shape of the line.
In contrast, Fig. \ref{fig:fit} (right panels) shows an asymmetric broadening of the helium lines, with velocities of the order of tens of km s$^{-1}$. This asymmetric broadening might also be present in the Balmer lines, but it is clearly smaller than the symmetric broadening of H$\alpha$ and H$\beta$ and therefore cannot be detected. We can suppose that the helium lines monitor a region of the loop higher than that monitored by H$\alpha$. While in the lower chromospheric regions the kinetic energy density of the turbulent motion is probably comparable to the magnetic energy density, in the upper regions the magnetic energy density dominates, making the motion of the plasma less turbulent and inducing it to move along the magnetic field lines.
This effect leads to a decrease in the line broadening and emphasises the radial velocity shift.
In addition, despite the low temporal resolution, we identified a delay in the flare response of the \ion{Ca}{ii} H\&K and \ion{He}{i} 4026 \AA \ lines with respect to the Balmer lines. The moment at which a line reaches its maximum is related to the temperature characterising its formation, and therefore also to the height at which the line forms. We can thus suppose that this delay, also observed by \citet{2006A&A...452..987C}, confirms that these lines monitor regions of the stellar atmosphere different from those of the Balmer lines.
We also estimated the luminosity and the energy released during the flare. According to our data, the line luminosity during the flare is significantly higher than in the quiescent state of the star. The energy released ($\sim 10^{30}$ erg to $\sim 1.4 \times 10^{32}$ erg for the Balmer lines) is consistent with a particularly intense flare event, stronger than the flares detected by \citet{2006A&A...452..987C}, who obtained released energies of the order of $10^{29}$ erg. In support of our results, \citet{2019arXiv191109922G}, observing AD Leo for 222 hours with the echelle spectrograph of the 2 m Alfred Jensch telescope in Tautenburg, detected 22 flares, the largest of which emitted $2.9 \times 10^{31}$ erg in H$\alpha$ and $1.8 \times 10^{32}$ erg in H$\beta$. \citet{2020arXiv200306163M}, analysing more than 2000 spectra of AD Leo collected with the same telescope in the context of the flare-search programme of the Th\"{u}ringer Landessternwarte, also detected numerous flares; the largest one emitted $8.32 \times 10^{31}$ erg in H$\beta$ and $2.12 \times 10^{32}$ erg in H$\alpha$. The energies from both studies are comparable to the energy released by our flare.
A more detailed analysis of the flare is described in Appendices \ref{appendice_delay} and \ref{appendice_energy}.
\section{Summary and conclusions}\label{sec:summary}
In this paper we analysed spectra of AD Leo from two datasets, HARPS and HARPS-N, obtained 12 years apart. We measured the line profiles and intensities of activity-sensitive indicators, namely H$\alpha$, H$\beta$, \ion{Ca}{ii} H\&K, \ion{He}{i} at 4026 \AA, 4471 \AA, and 5876 \AA, and the \ion{Na}{i} doublet. We derived the fluxes of these lines and evaluated the correlations between them.
By analysing the time variability of the fluxes, we found a higher level of activity during 2018 than in 2006, except for the \ion{Ca}{ii} H\&K indicator, which shows a higher flux in 2006. As suggested by \citet{2008ApJ...680.1542H} and \citet{2006ASPC..354..276R, 2007ASPC..368...27R}, the \ion{Ca}{ii} core emission originates from regions of concentrated magnetic field, such as active plages and bright network grains. Accordingly, the long-term variability of \ion{Ca}{ii} suggests that the star had a larger coverage of plages during the 2006 observations than in 2018.
Furthermore, the Balmer decrements (H$\alpha$/H$\beta$) calculated for the three observing seasons are compatible with the typical values of solar plages shown by \citet{2017A&A...598A..27M}, confirming that the stellar surface is probably covered by a distribution of plages.
We searched for correlations among the activity indicators measured in this work. All lines show a good correlation with each other, except the \ion{Ca}{ii} lines, particularly the K line, indicating that the processes and regions of formation of this line differ from those of the other lines.
Many studies \citep[e.g.][]{2009AJ....137.3297W,2007A&A...469..309C} suggest that there is a correlation between the H$\alpha$ and \ion{Ca}{ii} K fluxes obtained for samples of stars of different spectral types. However, \citet{2007A&A...469..309C} noted that `when we investigate this relation for individual observations of a particular star, the general trend is lost and each star shows a particular behaviour, ranging from tight correlations with different slopes, to anti-correlations, including cases where no correlations are found'. \citet{2009AJ....137.3297W} compared the equivalent width of H$\alpha$ to the \ion{Ca}{ii} K surface flux measured for a sample of M stars. They found a positive correlation between these indicators when comparing different stars, with a wide scatter for the more active stars. Furthermore, they obtained multiple measurements of the EWs of the Balmer lines and \ion{Ca}{ii} K in AD Leo and showed that for individual active stars these two lines are not necessarily correlated in time-resolved observations.
Our flux values obtained for \ion{Ca}{ii} H\&K and H$\alpha$ follow the extrapolation of the trend shown in Fig. 10 of \citet{2017A&A...598A..27M}, confirming that the same trend continues at a high activity level.
We also detected the presence of a flare during the second season of HARPS-N data.
\citet{2006A&A...452..987C} monitored AD Leo during four nights in 2001 and observed a large number of short and weak flares occurring very frequently. We measured the EWs\footnote{The EWs were measured with a procedure similar to that used for the fluxes (see Sect. \ref{sec:flux-rescaling}), except for the normalisation, and the results are provided in Appendix \ref{appendice:tabelle}.} of the analysed lines to compare our results with the published ones. The range of EW values that we obtained over the entire observing time, identified as the `quiescent' state of the star, is consistent with the variability of the Balmer line EWs obtained by \citet{2006A&A...452..987C}.
Moreover, the surface fluxes of the Balmer lines at flare maximum (F$_{max}$) obtained by \citet{2006A&A...452..987C} are an order of magnitude lower than our results (see Table \ref{tab:luminositàenergiabrillamento}). This, together with our low temporal resolution, implies that we are unable to resolve less intense flares and that what we call the quiescent state is in fact the superposition of several weak flares. The flare that we observed is a stronger and rarer event. In this work we presented a detailed analysis of the profiles of selected emission lines to study the dynamic processes occurring during this phenomenon. In particular, we analysed the profiles of H$\alpha$, H$\beta$, and \ion{He}{i} at 4471 \AA \ and 5876 \AA \ from two spectra collected during the flare, obtained two hours apart, which show a significant broadening, while no evidence of broadening is present in the \ion{Ca}{ii} lines.
We fitted the profiles with the combination of a broad and a narrow Gaussian component, finding that the broader one is redshifted with a velocity of the order of tens of km s$^{-1}$. This redshift can be interpreted as material flowing downward along the magnetic loop, in agreement with the solar flare model. Globally, the shape of these lines, especially the Balmer lines, is symmetrically broadened, with $\sigma$ of the order of hundreds of km s$^{-1}$. Since H$\alpha$ monitors the lower regions of the magnetic loop, we can suppose that in this region, because of the high density of the material, turbulent motion dominates over the coherent motion of the material along the magnetic field lines. Consequently, we can suppose that the Balmer lines are also redshifted by the coherent motion of the material, but that this redshift is masked by the much larger turbulent broadening that dominates the line shapes.
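The velocities quoted above follow from the classical Doppler conversion of the fitted wavelength shifts and widths. The sketch below illustrates the orders of magnitude involved; the H$\alpha$ rest wavelength is standard, but the shift and width values are purely illustrative, not our fitted ones.

```python
# Doppler conversion of a fitted Gaussian shift and width (in Angstrom)
# into velocities in km/s; the numerical inputs below are illustrative only.
C_KM_S = 2.998e5  # speed of light in km/s

def wavelength_to_velocity(delta_lambda, lambda0):
    """Velocity (km/s) corresponding to a wavelength offset delta_lambda
    at rest wavelength lambda0 (both in the same units)."""
    return C_KM_S * delta_lambda / lambda0

LAMBDA_HALPHA = 6562.8  # H-alpha rest wavelength in Angstrom
shift = wavelength_to_velocity(0.9, LAMBDA_HALPHA)  # an illustrative 0.9 A redshift
sigma = wavelength_to_velocity(6.5, LAMBDA_HALPHA)  # an illustrative 6.5 A Gaussian sigma
print(f"redshift ~ {shift:.0f} km/s, broadening sigma ~ {sigma:.0f} km/s")
```

A shift of order 1 \AA\ thus corresponds to tens of km s$^{-1}$, while a width of a few \AA\ already implies hundreds of km s$^{-1}$, consistent with the two components described above.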
\balance
\bibliographystyle{aa}
One of the key science goals for the SKA is to provide a nearly complete census of radio pulsars in the Milky Way and its Globular Clusters
\citep{keane2014,hessels2014}. The large number of pulsar discoveries and the unprecedented timing precision of the instrument will enable a broad spectrum of science,
ranging from characterization of stochastic gravitational wave (GW) signals \citep{janssen2014} to probing all possible outcomes of massive-star evolution \citep{tauris2014}.
As the SKA will be detecting its first light, a fleet of sensitive telescopes will be gathering photons at all wavelengths and second generation GW detectors will be
making sensitive observations. This multi-wavelength, multi-messenger frontier in astronomy will make it possible to tackle many remaining open problems in neutron
star (NS) physics, and to study in unprecedented detail the Galactic structure and content, the nature of the strong interaction, strong-field gravity and the large-scale
structure of the Universe.
Figure\,\ref{fig1} sketches an approximate timeline (design--construction--operations) for observatories that will be of particular importance for pulsar science. At radio and
sub-mm wavelengths, ALMA\footnote{Atacama Large Millimeter/submillimeter Array, http://www.almaobservatory.org/} and EHT\footnote{Event-Horizon Telescope,
http://www.eventhorizontelescope.org/}/BlackHoleCam\footnote{http://www.space.com/24002-black-hole-image-event-horizon.html}, will study the pulsar emission
mechanism and magnetic structure and, jointly with pulsar-timing observations, probe the nature of strong-field gravity around the super-massive black hole (SMBH)
at the center of the Milky Way.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.36]{SKA_timeline.pdf}
\caption{Approximate timeline for observatories of interest for synergistic neutron star science with the SKA.}
\label{fig1}
\end{center}
\end{figure}
Ground-based and space-borne instruments such as GAIA\footnote{Global Astrometric Interferometer for Astrophysics, http://sci.esa.int/gaia/}, LSST\footnote{Large
Synoptic Survey Telescope, http://www.lsst.org/lsst/}, JWST \footnote{James Webb Space Telescope, http://www.jwst.nasa.gov/} and E-ELT \footnote{The European
Extremely Large Telescope, https://www.eso.org/public/teles-instr/e-elt/} will provide a thorough census of the Galaxy's stellar content, including the companions of
pulsars discovered and monitored with the SKA. Similarly, studies of NS X-ray binaries with next-generation X-ray telescopes such as NICER\footnote{Neutron star Interior Composition Explorer; http://heasarc.gsfc.nasa.gov/docs/nicer/}, eROSITA
\footnote{extended ROentgen Survey with an Imaging Telescope Array, http://www.mpe.mpg.de/eROSITA} and LOFT\footnote{Large Observatory For X-ray Timing;
http://sci.esa.int/loft} and precise timing of binary millisecond pulsars will constrain the super-dense matter equation-of-state (EoS) \citep{watts2014} and further
enrich the ensemble of laboratories for strong-field gravity \citep{shao2014}. In $\gamma-$rays, pulsar searches in \emph{Fermi}\footnote{http://fermi.gsfc.nasa.gov}
and CTA\footnote{Cherenkov Telescope Array, https://www.cta-observatory.org} unidentified sources will allow a better understanding of the NS environment, pulsar
emission and binary evolution. Beyond the electromagnetic spectrum, Advanced-LIGO\footnote{https://www.advancedligo.mit.edu}, VIRGO\footnote{http://www.ego-gw.it/public/about/whatis.aspx}, eLISA\footnote{https://www.elisascience.org} and the Pulsar Timing Array
(PTA) monitored by the SKA will open a new window to the GW Universe.
In this chapter we elaborate on a selected number of topics, for which coordination between different observatories will provide the greatest benefits. The text is
organized as follows: Section\,2 covers the Galactic structure and content, focusing on studies of the Milky Way's kinematics and multi-wavelength pulsar surveys.
Section\,3 discusses the added benefits for stellar evolution and NS population studies. In section\,4 we elaborate on the NS EoS and the nature of the Strong
Interaction and in section\,5 on NS-related transient phenomena. Finally, section\,6 covers the multi-messenger efforts in the GW detection era and section\,7
concludes with some final remarks. Given space limitations, this chapter is not meant to be an in-depth review of each topic nor does it exhaust the complete potential
of the multi-wavelength approach. For details on specific topics the reader is encouraged to consult the other chapters of this book, cited throughout the text.
\section{The Galactic Structure and Content}
\subsection{Targeted multi-wavelength searches for pulsars}
One of the great achievements of \emph{Fermi} was the discovery by its main instrument, the Large Area Telescope (LAT), of a large number of $\gamma$-ray sources
with no previously known counterparts, the so-called unassociated sources. The 2FGL catalogue \citep{2FGL}, a catalogue of \emph{Fermi} LAT sources based on two
years of data, contained 1873 sources in total, of which about 30\% have not yet been associated with a known class of $\gamma$-ray source; the recently published 3FGL catalogue,
which is based on four years of LAT data, contains close to 3000 sources. Many of these could be $\gamma-$ray pulsars, the
most numerous class of Galactic $\gamma-$ray sources. Indeed, searches for pulsations at the locations of LAT unassociated sources with pulsar-like $\gamma-$ray
emission properties have led to the discovery of many new pulsars, either by directly blind searching the photon data \citep{pletsch2012}, or by conducting deep radio
observations \citep{ray2012}.
Radio searches for pulsations in LAT sources have led to the discovery of a very large number of previously unknown MSPs \citep{ray2012} -- currently about 25\% of all
known in the Galaxy\footnote{See http://astro.phys.wvu.edu/GalacticMSPs/GalacticMSPs.txt for an up-to-date list of known Galactic disk MSPs.}. A good fraction of
these new pulsars would probably have \emph{eventually} been found in standard radio-pulsar surveys. However, the LAT accelerated their discovery by
showing radio telescopes where to look. Until the end of its mission, \emph{Fermi} will continue to discover new sources and the SKA will be extremely useful in the
quest for the identification of the unassociated ones. Similarly, other instruments covering different wavelengths that have started operating (e.g., GAIA or ALMA) or
will start in the future (e.g., LSST, eROSITA, or the CTA), will find new sources across the spectrum that could be searched for pulsars with the SKA. To give some
examples: faint, variable optical stars detected by GAIA could point to white dwarfs orbiting unknown radio pulsars \citep[e.g.][]{Antoniadis2013}, or to ``black widow''
systems, with millisecond pulsars ablating their companion stars with their strong particle winds, generating optical emission modulated at the orbital period
\citep[e.g.][]{Romani2012a}. Another example is the possibility of searching for young radio-emitting pulsars with the SKA at the locations of supernova remnants, pulsar
wind nebulae (PWNe) or unassociated sources discovered by CTA, in its future surveys of the very high energy $\gamma$-ray sky \citep{Dubus2013}.
\subsection{Probing the Dynamics and Structure of the Galaxy}
Regardless of the discovery method, the measured positions, distances and proper motions of radio pulsars have made it possible to broadly outline the structure of the Galactic
disk and, among other things, have unveiled the presence of a warp \citep{yusifov2004}. However, to date, out of $\sim$2200 known radio pulsars, only $\sim$150 (i.e. about 7\%)
have measured proper motions, and even fewer have measured parallaxes, which are fundamental for determining their actual location in the Galaxy.
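The link between these measurables and a pulsar's motion through the Galaxy is the standard conversion from proper motion and distance to transverse velocity; the numbers in the sketch below are illustrative, not measurements.

```python
# Transverse velocity from proper motion and distance, using the standard
# relation v_t [km/s] = 4.74 * mu [mas/yr] * d [kpc].

def transverse_velocity_km_s(mu_mas_yr, d_kpc):
    """Transverse velocity (km/s) for proper motion mu (mas/yr) at distance d (kpc)."""
    return 4.74 * mu_mas_yr * d_kpc

# e.g. an illustrative pulsar with mu = 20 mas/yr at d = 1.5 kpc:
print(f"v_t ~ {transverse_velocity_km_s(20.0, 1.5):.0f} km/s")
```

This is why both the proper motion and the parallax (or another distance estimate) are needed before a pulsar's space velocity can be placed in a Galactic context.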
The large number of pulsar discoveries expected by the SKA, both in Phase-1 and later in Phase-2 \citep{keane2014,hessels2014}, and the manifold increase in sensitivity, will
improve the accuracy of previous results and allow searches for new features, like humps or depletions in the Galactic disk, that might have escaped previous surveys.
Furthermore, binary pulsars with optical counterparts, where both the proper motion and the systemic radial velocity can be measured, will provide their full
three-dimensional motion in the Galaxy \citep{Lazaridis2009}, enabling the reconstruction of the Milky Way's structure and possibly unveiling thin- and thick-disk pulsar
populations. Similarly, precise knowledge of pulsar proper motions \citep{smits2011} will make it possible to discriminate pulsars born in the warped regions of the Galaxy
from those born in the Galactic plane. In all these respects, the SKA will complement the work done by GAIA, which will sample different stellar populations.
\subsection{The Interstellar and Intergalactic Medium}
In addition to the above, effects like dispersion measure, Faraday rotation, scattering and scintillation, routinely measured in pulsar observations, provide valuable
information about the interstellar medium (ISM). Since pulsars are typically very faint, so far mostly those located in the vicinity of the Sun have been used for
studies of the ISM \citep{NE2001}. The superb sensitivity of the SKA in Phase\,II will extend precision ISM studies to greater distances \citep{han2014} and further enable
studies of the intergalactic medium (IGM) through the detection of pulsars and fast radio bursts (FRBs) in other galaxies.
If the distance to the host galaxy is known from, e.g., optical and near-infrared studies of standard candles and Cepheid variables, the electron column density and the magnetic field parallel to the line of sight to the pulsar can be calculated. Distances to nearby galaxies have historically been difficult to measure with precision \citep{Jacoby1992}, but new and more powerful optical and infrared telescopes such as the E-ELT and JWST will significantly improve distance measurements.
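The quantities involved can be stated compactly. With the electron density $n_e$ (in cm$^{-3}$) and the line-of-sight magnetic field $B_\parallel$ (in $\mu$G) integrated along the path length $l$ (in pc), the dispersion and rotation measures are
\begin{equation}
\mathrm{DM} = \int_0^{d} n_e \,\mathrm{d}l, \qquad
\mathrm{RM} = 0.81 \int_0^{d} n_e B_{\parallel} \,\mathrm{d}l ,
\end{equation}
with DM in pc\,cm$^{-3}$ and RM in rad\,m$^{-2}$, so that their ratio yields the electron-density-weighted mean field along the line of sight,
\begin{equation}
\langle B_{\parallel} \rangle = 1.232\, \frac{\mathrm{RM}}{\mathrm{DM}} \;\mu\mathrm{G}.
\end{equation}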
\section{Extreme Astrophysics and Stellar Evolution}
\subsection{Studies of the pulsar emission mechanism across the electromagnetic spectrum}
While the general concept of pulsar electromagnetic emission is fairly well established, the complex details of the radiation processes, such as the exact emission
heights and the relative importance of the various emission mechanisms, are still unclear. Different approaches and models attempting to
explain the geometry and physical processes responsible for pulsar emission rely on a broad range of assumptions that lead to different predictions
\citep{LyneGrahamSmith2012}. The different techniques
utilized to study the physical processes in the emission regions, e.g., profile-shape studies and polarimetry, make use of information at
multiple frequencies across the radio spectrum \citep{LyneGrahamSmith2012}. As the radiation from pulsars is broadband, and presumably produced by several distinct
radiative mechanisms, coverage of multiple wavelengths across the electromagnetic spectrum is required for a complete analysis of the problem. With the SKA and
the new ground- and space-borne observatories covering virtually the entire electromagnetic spectrum with unprecedented sensitivity and time resolution, a much
better understanding of the emission processes of pulsars is possible. This new era of cutting-edge instrumentation may answer many long-standing questions,
some of which have remained open for more than 45 years.
Beyond that, population analyses of $\gamma-$ray pulsars will yield information on the fraction of ``Geminga-like'' pulsars: pulsars that are only visible in
high-energy observations or have extremely low radio luminosities, presumably because the radio emission beams do not cross, or only graze, our line of sight. The ratio of
radio-loud to radio-quiet pulsars is a key observable of high-energy emission models \citep{WattersRomani2011} and the SKA's great sensitivity will be particularly
useful for constraining this ratio, and thus understanding pulsar emission across the spectrum.
\subsection{Pulsar Wind Nebulae}
There will also be significant synergies between the SKA
and upcoming high-energy facilities like the Cherenkov Telescope
Array (CTA) regarding the study of PWNe.
Powered by the rotational energy of their central NS, these
objects are detected across the electromagnetic spectrum, and
currently dominate the Galactic population of TeV $\gamma$-ray
sources \citep[e.g.][]{hartman1999}. The radio emission from PWNe is believed to be synchrotron
emission from the electrons and positrons created in the pulsar
magnetosphere interacting with the PWN's magnetic field, while its
$\gamma$-ray emission is believed to result from inverse Compton scattering
of background photons by these same high-energy leptons. Detecting
both the radio and $\gamma$-ray emission from a PWN allows one to
measure the electron particle spectrum, magnetic field strength, and
energy density of the background photon field -- critical for
understanding the generation and acceleration of leptons in these
objects \citep{gaensler2006,gelfand2009}. However, such an analysis is currently possible for only a
few sources, since many TeV PWNe remain undetected at radio wavelengths.
This is likely because a low magnetic field strength and a large
angular size yield a radio surface brightness too low to be
detected with current facilities. However, the significant
improvement in sensitivity of the SKA, especially on large angular
scales, will allow us to detect radio emission from existing TeV PWNe
as well as any new PWN candidate detected by CTA. Additionally, the
SKA will also have the sensitivity to discover PWNe and pulsars in
currently unidentified TeV sources \citep{gel}. Together, the CTA and the SKA have
the potential to revolutionize our understanding of these sources.
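The magnetic-field estimate mentioned above rests on the standard result that, for a single lepton population scattering in the Thomson regime, the synchrotron and inverse-Compton luminosities scale with the magnetic and photon energy densities:
\begin{equation}
\frac{L_{\rm sync}}{L_{\rm IC}} \simeq \frac{U_B}{U_{\rm ph}} = \frac{B^2/8\pi}{U_{\rm ph}},
\end{equation}
so that a measured radio-to-$\gamma$-ray luminosity ratio, together with the known background photon field (e.g. the CMB and Galactic infrared emission), directly constrains $B$.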
\subsection{A multi-wavelength view of stellar evolution}
The advent of the sensitive multi-wavelength surveys described above will uncover a diverse population of binary NSs. This rich NS ensemble will greatly increase the
chances of revising, and ultimately unifying, the stellar formation and evolution paradigm \citep{tauris2014}. For example, the SKA will enable extremely precise
mass measurements for a large number of pulsars in binaries, allowing a thorough statistical study of the mass-transfer mechanics and the distribution of NS masses at
birth \citep{kiziltan2013,ozel2012}. At the same time, sensitive optical instruments such as GAIA, LSST and E-ELT will measure the radial velocities, atmospheric
composition, proper motions and parallaxes for several pulsar companions with optical counterparts, providing further information for the evolutionary history of the
systems \citep[e.g.][]{antoniadis2012}. Furthermore, as described above, a joint optical/radio effort might increase the chances of finding unique systems. To give just one
example, intermediate-mass binary pulsars may be ideal places to look for faint, ultra-cool white dwarfs, which constrain the star formation history of the Milky Way
and might contribute to its ``dark'' baryonic content \citep{kaplan2014}.
Similarly, X-ray observatories such as LOFT may help to identify more low-mass X-ray binary/MSP transition objects, which would clarify the details of
pulsar recycling models \citep{archibald2009,Patruno2014,tauris2014}.
Deep radio observations of pulsars discovered in X-ray or $\gamma$-ray blind searches are also key for understanding the NS luminosity distribution. Unlike
high-energy signals, radio waves are dispersed by free electrons along the propagation path. The dispersion measure of pulsars inferred from radio observations yields their
(approximate) distance \citep{NE2001}, which is impossible to determine from the high-energy observations alone.
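As an illustration of how the dispersion measure translates into a distance, the sketch below uses a single assumed mean electron density; real analyses use a full Galactic electron-density model such as NE2001, so both $\langle n_e\rangle$ and the example numbers here are purely indicative.

```python
# First-order pulsar distance from the dispersion measure (DM), plus the
# frequency-dependent arrival delay that makes DM measurable. The mean
# electron density below is an assumed, illustrative value; real work
# uses a full Galactic electron-density model such as NE2001.

MEAN_NE = 0.03  # assumed mean Galactic electron density in cm^-3

def dm_distance_kpc(dm_pc_cm3, mean_ne=MEAN_NE):
    """Approximate distance (kpc): d = DM / <n_e>, converted from pc to kpc."""
    return dm_pc_cm3 / mean_ne / 1000.0

def dispersion_delay_ms(dm_pc_cm3, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (ms) between a low and a high observing frequency (GHz)."""
    return 4.15 * dm_pc_cm3 * (f_lo_ghz**-2 - f_hi_ghz**-2)

print(f"DM = 100 pc cm^-3 -> d ~ {dm_distance_kpc(100.0):.1f} kpc")
print(f"delay between 1.2 and 1.5 GHz: {dispersion_delay_ms(100.0, 1.2, 1.5):.0f} ms")
```

The delay formula is also what survey back-ends invert when searching over trial DM values in real time.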
\section{Nuclear Physics and the Strong Interaction}
NSs are extremely compact objects: denser than any other object in the present-day Universe, and than anything that has existed since $\sim 3$\,ms after the Big Bang
\citep{lattimer2012}. Owing to their extreme properties, they are of fundamental importance for studying the nature of the Strong Interaction, which dictates the
behaviour of matter at densities reaching and exceeding the nuclear saturation density \citep{watts2014}. The EoS describing the bulk properties of matter can
theoretically be inferred from first-principle QCD calculations. Practically however, the complicated many-body interactions at play render this approach unfeasible.
Over the past few decades, numerous different approximations have been developed, leading to diverse EoS predictions that span a large space of parameters
\citep{lattimer2012}.
The EoS of cold nuclear matter, and the way it joins up with the EoS of hot matter, uniquely determine several NS observables such as the NS mass-radius relation,
moment of inertia, cooling rate, maximum spin and maximum mass above which NSs collapse to black holes. For the first time, these observables will be significantly
constrained in a range of NS populations with the SKA and next-generation X-ray observatories \citep{watts2014}. The SKA will measure masses for several hundreds of
binary pulsars and significantly increase the chances for finding rapidly spinning pulsars \citep{watts2014,hessels2014}. Furthermore it will provide, for the first time, a
direct measurement of the moment-of-inertia for pulsars in relativistic double-NS systems like J0737$-$3039 \citep{watts2014,keane2014,shao2014}. At the same time
X-ray missions such as LOFT and Athena will provide simultaneous mass and radius measurements for a handful of NSs residing in X-ray binaries and potentially
measure the cooling rates of nearby, thermally emitting NSs such as Cas\,A \citep{watts2014}.
Multi-wavelength targeted survey approaches may also significantly speed up the search for EoS-constraining pulsars by telling us where to look: fast-spinning
pulsars, for example, are energetic and most likely radiate the bulk of their spin-down energy in the form of $\gamma-$rays \citep{keane2014}. Furthermore, there has
been increasing evidence that ``black-widow'' and ``redback'' binary pulsars that have optical, X-ray and $\gamma$-ray counterparts, might host massive NSs
\citep{vankerkwijk2011, romani2012}. Today, precise mass measurements in these systems are challenging, mostly due to sensitivity limits of radio and optical
telescopes.
\section{Transient Phenomena and the Dynamic Sky}
\subsection{Synergies between the SKA and optical telescopes}
The identification of radio transients, such as Rotating Radio Transients (RRATs) and microquasars, through the interaction with the LSST and LOFT
\citep{Feroci2012,Lazio2014}, or any other {X}-ray sky monitor to fly in the 2020s, will be one of the main science goals of the SKA \citep{fender2014}. Thanks to their
shared location in the southern hemisphere, the synergies between LSST and the SKA will be crucial to elucidating the nature of thousands of transients in the restless radio and
optical sky.
With its continuous monitoring of 20000 square degrees of the sky and its different observing cadence, LSST will discover thousands of transient events on time scales
ranging from tens of seconds to hours over 9 decades in flux \citep{lsst}. Furthermore, the LSST will be able to respond to targets-of-opportunity (ToOs) from other
facilities with a reaction time of 60\,s in its Rapid Response Mode. Due to its large field of view of almost 10 square degrees, the LSST will provide colour information in
six bands for several fast transients at any given time, following the light-curve evolution before, during, and after the event, and will provide quick localisation for follow-ups
with other facilities \citep{lsst}. Spectral information in the optical (including photometric redshifts for AGNs) will be crucial to complement the spectral coverage in the radio provided by the SKA and to elucidate the nature of the transient, discriminating, e.g., a Galactic microquasar from an AGN.
Inversely, an SKA trigger of a fast radio transient for LSST follow-up may be crucial for determining its nature.
RRATs and other sorts of bursting NSs would probably be undetectable in optical integrations much longer than the length of the radio burst (typically a fraction of a
second for the RRATs \citep{keane2011}), with the signal from a possible optical burst (assuming that it lasts as long as the radio burst) being washed out. While the
non-detection of a candidate RRAT in the optical down to mag$\sim$24.5 (the typical sensitivity of a $2\times15$\,s-long LSST snapshot integration) would still
provide information to help identify its nature, more intriguing synergies would emerge with the E-ELT, if it is equipped with suitable instruments to fully exploit its potential
in time-domain astronomy down to ms time scales. The detection of optical bursts from RRATs in coincidence with the radio bursts, a goal
that has so far eluded us, will shed light on the origin of these events and will make it possible to test the proposed models by, e.g., comparing the optical and radio fluence, the
profile of the burst light curve, and the characteristics of the radio and optical pulsations (typically detected during a RRAT burst), including possible time lags.
\subsection{Transients with the SKA and X-ray Telescopes}
The real-time identification of Galactic X-ray transients and follow-up of variable sources discovered by LOFT, eROSITA and, a posteriori, by \emph{Fermi} is another
field that will greatly benefit from the synergy with the SKA. For example LOFT, with its Wide Field Monitor (WFM), which will cover at least 50\% of the sky
simultaneously in the 2-50 keV energy band, will be a perfect discovery machine of X-ray transients, such as magnetars, which have been well known and extensively
studied for the past three decades. However, well known does not necessarily mean well understood. Since the discovery of the prototype source back in 1979, only $\sim$30
magnetars (including candidates) have been found, and a better understanding would clearly benefit from the discovery of more sources. The
recent discovery of both radio-loud \citep[e.g.][]{shannon2013} and low-magnetic-field magnetars \citep{rea2010} has triggered a profound rethinking of the nature of
these objects. Most magnetars have been identified by their transient X-ray emission. Previous large field-of-view telescopes, such as the All-Sky Monitor aboard RXTE,
were instrumental in this role, spotting several candidates. With a factor of 20 larger collecting area, the WFM will discover many more magnetar candidates, triggering
alerts for other new facilities, including the SKA. Following up magnetars in the radio during (but not only during) their bursting phase is key to resolving the long-standing
dichotomy between radio-quiet and radio-loud magnetars and to peering deep into their very nature.
Other types of erratic variability in radio pulsars can be explored by LOFT and the SKA. A few pulsars tend to emit Giant Radio Pulses (GRPs) and, so far, similar
phenomena at different energies \citep[Giant Optical Pulses;][]{strader2013} have been observed to occur simultaneously in the Crab pulsar. Do they occur in the X-rays as
well? Only the SKA and LOFT will be able to answer this question. Phenomena discovered only recently in radio pulsars are the mode switches observed synchronously in
X-rays and in the radio, as e.g. in PSR B0943+10 \citep{Hermsen2013}. How many other radio pulsars show a similar behaviour? The SKA and LOFT will certainly find more
of such cases. On longer time scales, the nature of the many variable X-ray sources that eROSITA will discover in its 4-year all-sky survey (8 scans in total) could be
clarified by the SKA and optical facilities.
\subsection{Fast Radio Bursts}
Finally, an emerging field where synergy may prove useful is the discovery and characterization of FRBs.
FRBs are isolated, impulsive bursts of radio emission with very short durations of a few milliseconds. An initial discovery by \cite{Lorimer2007} was followed by
a small number of detections \citep{Keane2012,Thornton2013,Spitler2014} which dissipated the initial doubts about the astrophysical origin of the bursts. The sources
of FRBs remain elusive but their large measured dispersion measures and sky locations suggest a possible extragalactic origin. If this is the case, apart from the
importance of discovering a likely new population of astronomical sources at cosmological distances, they will become important tools for the study of the intergalactic
medium \citep{Ginzburg1973}. The inherently short duration of FRBs makes their detection very difficult in current radio surveys, where a big single-dish telescope with its
small field of view is scanning the whole sky on many-year timescales. Additionally, the commonly adopted off-line processing of survey data usually leads to detection
of events well after they have occurred on the sky. What makes these phenomena even more intriguing is that no long-lasting counterpart in radio or other wavelengths
is found in the direction of the detected bursts. This could mean that any additional radiation from the burst at other wavelengths is also short-lived. The SKA's wide
field of view will make it possible to monitor big portions of the sky at once, significantly increasing the detection rate for these FRBs, which are thought to occur several thousand
times per day over the whole sky \citep{Thornton2013,Spitler2014}. In addition, the proposed real-time processing back-end for the survey data in search of short radio bursts
will enable the necessary rapid follow-up of the detected events. A real-time warning from an SKA detection could trigger rapid ToO observations at other wavelengths in
the burst direction. The SKA in combination with the observatories at all other wavelengths will be key to resolve the mystery of the origin of FRBs and let us study in
detail the sources and the physical processes producing these unexpected bursts of radiation.
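The gain from a wide field of view can be quantified with a simple scaling of the all-sky event rate; both the rate and the field of view in the sketch below are illustrative assumptions, not instrument specifications.

```python
# Expected FRB detections per day of observing: scale an assumed all-sky
# event rate by the fraction of the sky covered by the instantaneous
# field of view. All numbers are illustrative.

FULL_SKY_DEG2 = 41253.0  # total solid angle of the sky in square degrees

def detections_per_day(all_sky_rate_per_day, fov_deg2):
    """Events per day seen by an instrument with field of view fov_deg2."""
    return all_sky_rate_per_day * fov_deg2 / FULL_SKY_DEG2

# e.g. 5000 FRBs per sky per day and an assumed ~20 deg^2 field of view:
print(f"~{detections_per_day(5000.0, 20.0):.1f} detectable FRBs per day")
```

Even a modest field of view thus yields events on daily rather than yearly time scales, which is what makes the real-time trigger chain described above worthwhile.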
\section{Strong-Field Gravity and the Large-Scale Structure}
\subsection{Studies of the Galactic-Centre black hole across the spectrum}
Observations of the orbits of the so-called ``S-stars'' have provided detailed information about the object in the centre of our Galaxy. The observations provide the
most convincing case for it being a super-massive black hole \citep{Genzel2010,MeliaFalcke2001}. First detected in the radio as a point source named Sgr A* (Sagittarius
A*), the source is now being studied also at near-infrared and X-ray wavelengths. Tracing the orbits of the S-stars, one can derive the distance to the Galactic Centre (8
kpc) and the mass of the black hole (4 million solar masses). By finding (even normal) pulsars orbiting Sgr A*, its spin and quadrupole moment can be determined
with high precision \citep{eatough2014}. These measurements can be compared with constraints to be derived from high-precision optical astrometry of the inner-most
stars. By measuring, for instance, the mass of Sgr A* with radio pulsars to a precision of one solar mass \citep{eatough2014}, we can determine the distance to the Galactic
Centre from the optical observations with a precision of about 1 pc, providing a firm anchor for our understanding of Galactic dynamics.
\subsection{Multi-Messenger Gravitational Wave Science}
Precision timing with the SKA will start an era of GW astronomy with pulsars. Phase I of the SKA will virtually guarantee the detection of a stochastic GW
background emerging from the population of super-massive binary black holes present during early galaxy formation \citep{janssen2014}. Phase
II of the SKA will allow studying this background in great detail, thereby, for instance, providing insight into the fundamental properties of gravitons such as their spin
and mass \citep{lee2010}. In general, the SMBH population that is detectable with a PTA experiment is of higher mass than that of the sources detectable with the
space-based GW detector eLISA. With PTAs being sensitive to SMBHs with $10^7$ solar masses and orbital periods between 10 and 20 years, SKA observations provide a truly
complementary window to the SMBH population. Moreover, observations with the SKA can provide measurements of the amplitude and spectral shape of the GW
background, which encodes information about galaxy mergers and SMBH accretion processes. As pointed out for instance by \cite{Sesana2013}, the amplitude of the
signal tracks the number of mergers that have occurred, integrated over the redshift range, while the spectral shape should contain a break frequency above which the contribution of
individual systems becomes important. Indeed, some individual sources may produce a signal that is significantly larger than indicated by the average spectrum,
allowing the detection of a single source which will eventually evolve into the eLISA frequency band.
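The amplitude and spectral shape discussed here are conventionally parameterized as a power law in the characteristic strain,
\begin{equation}
h_c(f) = A \left( \frac{f}{\mathrm{yr}^{-1}} \right)^{-2/3},
\end{equation}
where the $-2/3$ index is the expectation for a population of circular, GW-driven SMBH binaries; departures from this slope at low frequencies encode environmental coupling of the binaries, while the high-frequency steepening marks the regime where individual systems dominate the signal.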
In a comprehensive review, Burke-Spolaor \cite{bur13} discusses in more detail
the possibilities, and the importance, of electromagnetic identification of the sources of GWs,
which could be continuous-wave, burst or GW-memory sources. PTAs are already being
used to disprove the identification of supermassive black hole binaries in the local neighbourhood \cite{jllw04}.
As the sensitivity of PTAs improves dramatically with the SKA, improved limits and ultimately detections will
be possible. Localising these sources via an electromagnetic counterpart, as may be possible
with current optical and X-ray surveys and with future facilities like LSST, IXO/Athena, and Astro-H,
combined with the GW signature, will allow significantly more detailed information about the binary system to be
determined.
For compact relativistic binaries with sufficiently small orbital periods, it is possible that GWs may be directly detectable with eLISA. The orbital frequency of the Double
Pulsar, for instance, is $1.16 \times 10^{-4}$ Hz, so that we can expect to observe a strain of about $5 \times 10^{-21}$ at $2.3 \times 10^{-4}$ Hz
\citep{KramerStairs2008}. Realistically, a detection may be hampered by the large expected background of double-white-dwarf systems with similar orbital periods
\citep{Nelemans2001}. However, if the orbital ephemerides are well known from radio timing with the SKA, and because the systems should also produce power at the
next orbital harmonic, it should be possible to detect the appropriate sources in a coherent search that takes advantage of the known direction to the source (S.
Sigurdsson \& C. Miller, private communication). If a detection is made, it is possible to combine the observations obtained from the radio with those obtained with eLISA.
This combination should in principle be able to provide the exact distance to the source, the true inclination angle of the system (rather than either sine or cosine of the
inclination angle) and the masses. Therefore, the system should be vastly over-determined, allowing unprecedented tests of theories of gravity.
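As a quick check of the figures quoted above (a minimal sketch: for a circular binary the dominant quadrupole GW emission is at twice the orbital frequency, which is why the expected strain is quoted at roughly double the orbital frequency):

```python
# Dominant quadrupole GW emission of a circular binary is at twice the
# orbital frequency.  Double Pulsar numbers as quoted in the text.
f_orb = 1.16e-4        # orbital frequency (Hz)
f_gw = 2.0 * f_orb     # dominant GW frequency (Hz); ~2.3e-4 Hz as quoted
```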
Isolated neutron stars may also be the source of GWs if they are deformed in such a way as to
make them non-axisymmetric. The GW amplitude will depend on the size of the asymmetry,
which in turn is strongly dependent on the nature of the equation of state of the neutron star and the strength
of the internal magnetic fields. Significant and important limits on the degree of deformation and the fraction of
the spin-down energy loss of pulsars that might manifest as GW emission have been obtained
with current-generation GW observatories \cite[e.g.][]{aaa14,ligo14}; however, the SKA will be operating at the same
time as the much more sensitive Advanced LIGO and Virgo detectors. Undertaking these
searches for GWs typically involves long integration times, and it is therefore necessary to have
good models of the rotational history of the pulsars. The SKA will help by discovering many more pulsars, thus
improving the chances of finding even just one deformed source, and also by enabling the monitoring of even
larger numbers of pulsars. Moreover, we will also be sensitive to systems that might exhibit GW
bursts. This is only the tip of what might be possible through synergies between the ground-based gravitational
wave observatories and the SKA.
\section{Conclusions}
In this chapter we elaborated on various aspects of the multi-wavelength, multi-messenger NS science that will be enabled in the SKA era.
Current simulations show that even SKA\,I can discover a total of about 10000 normal pulsars and perhaps as many as 1800 millisecond pulsars (MSPs), with SKA\,I-LOW surveying the sky at Galactic latitudes $|b| \geq 5^{\circ}$ and SKA\,I-MID surveying the sky at Galactic latitudes $|b| \leq 10^{\circ}$
\citep{keane2014}. SKA\,II will provide a complete census of radio pulsars and, with its Aperture Array systems, should allow for an optimum combination of sensitivity, field-of-view, and
number of beams, enabling exceptional cadence on a very large number of sources. These key features will allow for significant advances in our
understanding of NSs, which could be further accelerated by coordinated efforts with other next-generation telescopes. As demonstrated here, synergies across the
electromagnetic-spectrum and beyond could provide a better
understanding of (I) the Galactic structure and content, (II) extreme astrophysics and stellar evolution, (III) nuclear physics and the strong interaction, (IV) transient
phenomena and (V) strong-field gravity and the large-scale structure of the Universe.
\bibliographystyle{apj}
\section{Introduction\label{sec:intro}}
Buoyant jets are found in a variety of natural and engineering contexts, including industrial burners \citep{Christopher2018,Hayden2019}, hydrothermal vents \citep{Gaskin2001}, and volcanic plumes \citep{Campion2018}. Despite their obvious differences, each of these flows exhibits a similar ``puffing'' phenomenon that results from the continuous injection of less dense fluid into a reservoir of more dense, ambient fluid. The associated flux of both momentum and buoyancy through the inlet leads to the formation of vortical structures that are periodically shed, yielding the puffing behavior. The resulting puffing frequency, $f$, can be characterized using the Strouhal number, $\mathrm{St}_\ell=f\ell/V_0$, and the balance between momentum and buoyancy inlet fluxes is characterized by the Richardson number, $\mathrm{Ri}_\ell =(1-\rho_0/\rho_\infty)g\ell/V_0^2$, where $\ell$ and $V_0$ are, respectively, the characteristic dimension and inlet velocity of the jet, $\rho_0$ is the density of the inlet fluid, $\rho_\infty$ is the ambient density, and $g$ is the gravitational acceleration.
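These two dimensionless groups are simple to compute; a minimal sketch (the function names are ours, and $g = 9.81$~m/s$^2$ is assumed):

```python
def strouhal(f, ell, V0):
    """Puffing Strouhal number St = f * ell / V0."""
    return f * ell / V0

def richardson(rho0, rho_inf, ell, V0, g=9.81):
    """Inlet Richardson number Ri = (1 - rho0/rho_inf) * g * ell / V0**2."""
    return (1.0 - rho0 / rho_inf) * g * ell / V0**2
```

For hot air at $T_0 = 1000$~K issuing into 300~K ambient air at the same pressure, the ideal-gas law gives $\rho_0/\rho_\infty = T_\infty/T_0 = 0.3$.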
Buoyant jet puffing has been examined in detail over the past three decades, beginning with \citet{Hamins1992} and \citet{Cetegen1996}. In both of these studies, round helium buoyant jets were studied over a range of Richardson and Reynolds, $\mathrm{Re}_\ell = V_0\ell/\nu_0$, numbers, where $\nu_0$ is the kinematic viscosity of the inlet fluid. \citet{Hamins1992} found that the Strouhal and Richardson numbers were related as $\mathrm{St}_D \sim \mathrm{Ri}_D^{0.38}$, where $D$ is the inlet diameter. \citet{Cetegen1996} similarly determined the relation $\mathrm{St}_D = 0.8\mathrm{Ri}_D^{0.38}$ for $\mathrm{Ri}_D < 100$ and $\mathrm{St}_D = 2.1\mathrm{Ri}_D^{0.28}$ for $100 < \mathrm{Ri}_D < 500$. In a study of planar (i.e., high aspect ratio rectangular) buoyant jets, \citet{Cetegen1998} found the alternative relation $\mathrm{St}_W = 0.55\mathrm{Ri}_W^{0.45}$, where $W$ is the inlet width. The differences in round and planar buoyant jet scaling relations were attributed to differences in mixing rates and buoyancy fluxes between the different inlet shapes. Additionally, the puffing Strouhal number was found to be independent of Reynolds number \citep{Cetegen1997}.
Although these scaling relations are useful when studying round or planar buoyant jets, they are not directly applicable to inlets of different shapes. This can pose challenges in many practical applications involving complex inlets (e.g., industrial burners or volcanic plumes). Recently, \citet{Bharadwaj2019} experimentally studied a series of buoyant jets from rectangular inlets with different aspect ratios, and found that all puffing Strouhal numbers could be described using a single Richardson number scaling relation when the characteristic length, $\ell$, was taken to be the hydraulic diameter of the inlet, and the characteristic velocity was taken to be the effective velocity of a jet with the same total mass flow rate issuing through a round exit with a diameter equal to the hydraulic diameter. However, the question remains whether there exists a single universal scaling relation based on the hydraulic diameter (or radius) that can be used to relate Strouhal and Richardson numbers for all inlet geometries, and not just rectangles with different aspect ratios.
In the present study, we take the first steps towards determining whether such a universal relation exists by using adaptive mesh numerical simulations to examine high-temperature buoyant jets for circular, rectangular (with three different aspect ratios), triangular, and annular buoyant jet inlet geometries. A range of Richardson numbers are examined for each geometry, and we show that a single scaling relation based on the hydraulic radius accurately describes all available experimental and computational data for Richardson numbers spanning over four orders of magnitude, even without the use of the effective velocity proposed by \citet{Bharadwaj2019}.
\section{Description of Adaptive Mesh Numerical Simulations}
The numerical code used to perform the adaptive mesh simulations is \texttt{PeleLM}, a low-Mach reacting flow code \citep{Almgren1998,Day2000,Bell2005,Nonaka2012,Nonaka2018} that solves the Navier-Stokes equations, as well as additional equations for enthalpy and species conservation. All fluid transport properties are calculated as mixture-averaged coefficients assuming a mixture of perfect gases. A second-order Godunov scheme is used for advection and a semi-implicit discretization is used for diffusion. The overall numerical method is second-order accurate in both space and time, and the time-step is dynamically determined according to an advective Courant-Friedrichs-Levy condition. A more comprehensive description of the numerical approach, algorithm, and methods is available in \citet{Nonaka2018} and \citet{Wimer2019}.
The simulations were each performed in a 1~m$^3$ domain using adaptive mesh refinement (AMR) to reduce the computational cost. The adaptive grid approach implemented in \texttt{PeleLM} is described in detail by \citet{Day2000}, and the governing equations were solved on a series of uniform, nested grids with no subgrid-scale modeling. The computational domain for each simulation consisted of a 128$^3$ base grid with two levels of AMR. The grid was re-meshed every time step to maintain sufficiently fine resolution in regions with large vorticity and density gradient magnitudes, and to reduce resolution in regions where magnitudes were small. With the two levels of AMR, the smallest physical scale resolved in the simulations was 1.95~mm, which was recently shown to be sufficiently fine for accurately capturing the puffing frequency of a large-scale helium plume using an identical numerical approach and similar physical setup \citep{Wimer2019}.
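The quoted 1.95~mm resolution follows directly from the grid parameters; a quick check (assuming a refinement ratio of 2 per AMR level, which is not stated explicitly above):

```python
domain = 1.0      # domain edge length (m)
base = 128        # base-grid cells per side
levels = 2        # AMR levels above the base grid
ratio = 2         # assumed refinement ratio per level

dx_mm = domain / (base * ratio**levels) * 1000.0  # finest cell size (mm)
# dx_mm = 1.953125, i.e., the 1.95 mm quoted in the text
```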
The bottom boundary in each of the simulations consisted of a high temperature jet inlet and a mild air co-flow. Open boundary conditions were used on the four sides and at the top of the domain. The jet inlet and co-flow were specified using Dirichlet boundary conditions, and the open boundaries were specified using a divergence-constrained velocity projection that allowed for both fluid inflow and outflow. In each of the simulations, hot air with temperature $T_0 = 1000$~K flowed through the jet inlet, and the inlet velocity, $V_0$, was varied to span a range of Richardson numbers (see Table \ref{tab:sims}). The co-flow was uniform and the same in all simulations, with velocity $V_\text{coflow} = 0.05$~m/s and temperature $T_\text{coflow} = 300$~K. The gravitational acceleration was directed downwards with magnitude $g=9.81$~m/s$^2$ towards the bottom boundary along the vertical axis.
\begin{table}
\begin{center}
\begin{tabular}{cccc}
Inlet Shape & Dimensions & $AR$ & Velocities $V_0$ (m/s)\\[3pt]
Circle & 0.154~m (diameter) & -- & 0.25 \\
Circle & 0.274~m (diameter) & -- & (0.125, 0.25, 0.5, 1) \\
Rectangle & $0.137\times 0.137$~m$^2$ & 1 & (0.25, 0.5, 1) \\
Rectangle & $0.075\times 0.25$~m$^2$ & 10/3 & (0.125, 0.25, 0.5, 1) \\
Rectangle & $0.0433\times 0.433$~m$^2$ & 10 & (0.125, 0.25) \\
Equil.\ Triangle & $0.208$~m (side) & -- & (0.125, 0.25, 0.5, 1) \\
Annulus & 0.1755/0.234~m (in/out) & 3/4 & (0.25, 0.5, 1, 1.5) \\
\end{tabular}
\caption{Summary of simulations performed, indicating the shape of the inlet, its dimensions and aspect ratio ($AR$), and the inlet velocities, $V_0$, examined for each shape. In all simulations, the inlet temperature was $T_0 = 1000$~K and the 1~m$^3$ domain was discretized with a $128^3$ base grid and two levels of AMR, providing an effective grid resolution of 1.95~mm.}
\label{tab:sims}
\end{center}
\end{table}
Seven different jet inlet shapes were simulated (as also summarized in Table \ref{tab:sims}): circles with two different diameters, rectangles with three different aspect ratios, an equilateral triangle, and an annulus. The aspect ratios of the rectangles are computed as $AR_\text{rect}=L/W$, where $L$ is the length and $W$ is the width, with $L>W$ and $1 \le AR_\text{rect} < \infty$. Three rectangular aspect ratios were considered here, corresponding to $AR_\text{rect} = 1,10/3,10$. The aspect ratio for the annulus is given by $AR_\text{annu} = D_\text{in}/D_\text{out}$, where $D_\text{in}$ is the inner diameter and $D_\text{out}$ is the outer diameter, with $D_\text{in} < D_\text{out}$ and $0 \le AR_\text{annu} < 1$. An annulus of aspect ratio $AR_\text{annu} = 3/4$ was examined here. The simulations were all performed using inlets of equal area and inlet temperature, ensuring that the momentum and buoyancy fluxes were identical for all simulations with the same inlet velocity, $V_0$. Each of the simulations was performed for 20~s, and statistics were computed over the last 10~s to allow for the decay of initial transients.
\section{Results}
Figure~\ref{fig:volumes} shows time series of temperature isosurfaces for each of the inlet geometries, with $V_0=0.25$~m/s in each case. For the circular inlet in Figure \ref{fig:volumes}(a), the hot air entering the domain is accelerated upwards along the centerline due to buoyancy. This subsequently causes entrainment of more dense cold air from the sides, pinching the hot gases towards the center of the inlet and creating a shear layer between the hot gases and cooler ambient air. This shear layer, combined with the difference in density across the layer, leads to an axisymmetric Kelvin-Helmholtz (KH) instability around the circumference of the inlet. This instability rolls up into a toroidal vortex ring that eventually pinches off and is shed, as indicated by the sequence of toroidal temperature isosurfaces rising above the inlet in Figure \ref{fig:volumes}(a).
\begin{figure}
\centering\includegraphics[width=\textwidth]{figure1.png}
\caption{Isosurfaces of the temperature field as a function of time (snapshots are separated by 0.05~s) for (a) a circular buoyant jet with $D = 0.154$~m, (b-d) rectangular buoyant jets with $AR_\mathrm{rect} = 1,10/3,10$, (e) an equilateral triangle buoyant jet with side $0.208$~m, and (f) an annular buoyant jet with $AR_\mathrm{annu} =3/4$. The inlet velocity in each case was $V_0=0.25$~m/s.}
\label{fig:volumes}
\end{figure}
Figure~\ref{fig:volumes}(b) shows that the flow evolution for an $AR_\text{rect}=1$ rectangular inlet is similar to that of the circular inlet, although the resulting structures differ. Hot air is still accelerated upwards upon entering the domain due to the buoyant force, again leading to entrainment of cold air from the sides and pinching of the hot gases towards the centroid of the square. However, the entrained flow along the sides of the inlet reaches the centroid before the flow originating from the corners. This leads to the roll-up and shedding of vortical structures (again indicated by the temperature isosurfaces) that are less coherent than in the circular case shown in Figure \ref{fig:volumes}(a).
As the aspect ratio of the rectangle increases to $AR_\text{rect}=10/3$ and $10$, shown in Figures~\ref{fig:volumes}(c) and (d), respectively, the shed vortical structures become increasingly elongated. Since the aspect ratio is greater than one in these cases, the long sides of the rectangle are closer to the centroid than the short sides and, as a result, the long-sided shear layer pinches off before the short-sided shear layer has time to develop. This causes the shedding of elongated vortices that are reminiscent of stretched vortex rings.
For the equilateral triangle inlet in Figure~\ref{fig:volumes}(e), the entrained flow from the sides reaches the centroid before the flow from the corners. This results in the formation of a three-sided vortical structure similar to the four-sided vortical structure of the square buoyant jet in Figure~\ref{fig:volumes}(b).
Finally, Figure~\ref{fig:volumes}(f) shows the structure and development of the buoyant jet for the annular inlet. As the hot air enters the domain, ambient air is again entrained, not towards the centerline of the shape, but rather towards the average of the inner and outer radii (i.e., directly above each section of the inlet). This location of converging flow and subsequent puffing causes ambient air to be entrained in the negative radial direction at locations greater than the outer radius, and in the positive radial direction at locations less than the inner radius. Consequently, the shear layer, roll-up, and subsequent shedding of vortical structures are all centered above the average of the inner and outer radii. This results in toroidal vortices within the inner radius that create a net downward motion at the center of the annulus and counteract the upward acceleration of fluid due to buoyancy.
For each of the inlet shapes examined here, the sequence of vertical buoyancy-driven flow, entrainment of ambient air, pinching of hot gases, development of the KH instability, and subsequent shedding of vortical structures is repeated indefinitely, resulting in characteristic puffing frequencies for each case. From a quantitative perspective, the dominant puffing frequency can be determined using a number of methods, and here we use a Fast Fourier Transform (FFT) of the vertical velocity time series at a point above the centroid of the circular, rectangular, and triangular buoyant jets, and above the average radius for the annular buoyant jet. The dominant frequency was found not to depend on height above the inlet, and a height equal to the hydraulic radius $R_\mathrm{h}=A/P$ was thus used for each case, where $A$ is the geometric area of the inlet and $P$ is its perimeter.
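The frequency-extraction step can be sketched as follows; a synthetic signal stands in for the simulated vertical-velocity probe, and the 1.3~Hz puffing frequency is purely illustrative:

```python
import numpy as np

dt = 1.0e-3                                  # sampling interval (s)
t = np.arange(0.0, 10.0, dt)                 # 10 s window after transients
rng = np.random.default_rng(0)
w = np.sin(2.0 * np.pi * 1.3 * t) + 0.1 * rng.standard_normal(t.size)

# Power spectral density via a real FFT; remove the mean first.
W = np.fft.rfft(w - w.mean())
freqs = np.fft.rfftfreq(t.size, d=dt)
psd = np.abs(W) ** 2

f_dominant = freqs[np.argmax(psd)]           # dominant puffing frequency (Hz)
```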
\begin{figure}
\centering\includegraphics[width=\textwidth]{figure2.png}
\caption{Power spectral densities (PSDs) of the vertical velocity for (a) circular, (b-d) rectangular (with $AR_\text{rect} = 1,10/3,10$), (e) triangular, and (f) annular buoyant jets. Inlet velocities, $V_0$, are indicated in the legends in units of m/s. In panel (a), solid lines denote results for the $D=0.274$~m inlet and the dashed line corresponds to the $D=0.154$~m inlet. PSDs are computed one hydraulic radius above the inlet centroid for the circular, rectangular, and triangular inlets, and above the average radius for the annular inlet.}
\label{fig:ffts}
\end{figure}
The resulting power spectral densities, shown in Figure~\ref{fig:ffts}, display distinct peaks for each of the inlet shapes and velocities. Peak frequencies increase with inlet velocity for all shapes, and a comparison of the three rectangular cases in Figures~\ref{fig:ffts}(b-d) shows that the peak frequencies also increase as the aspect ratio increases. The highest peak frequencies are obtained for the annular case shown in Figure~\ref{fig:ffts}(f). Notably, despite the self-interacting nature of the annular buoyant jet and the subsequent difference in structure compared to the other inlet shapes, the flow still exhibits a characteristic puffing behavior. This indicates that the puffing phenomenon is largely independent of the inlet shape, and is instead determined by the underlying dynamics associated with buoyancy, entrainment, pinching, KH roll-up, and vortex shedding.
\section{Scaling and Universality}
Three non-dimensional groups can be formed from the dominant puffing frequency $f$ and the parameters that describe the buoyant jet inlet (i.e., $V_0$, $\ell$, $\rho_0$, and $\nu_0$); namely, $\mathrm{St}_\ell$, $\mathrm{Ri}_\ell$, and $\mathrm{Re}_\ell$. Prior work on round and planar buoyant jets has shown that $\mathrm{St}_\ell$ is independent of $\mathrm{Re}_\ell$ \citep{Cetegen1997} and, following previous studies, we thus seek a scaling relation of the form $\mathrm{St}_\ell=b\mathrm{Ri}_\ell^a$, where $a$ and $b$ are constants that will be determined from the present simulations and prior experimental data.
It is worth noting that, in order for $f$ to increase with inlet velocity $V_0$ (as is suggested for all inlet shapes by Figure \ref{fig:ffts}), the exponent $a$ must be smaller than $1/2$, since $\mathrm{St}_\ell = f\ell/V_0$ and $\mathrm{Ri}_\ell \propto V_0^{-2}$ together imply $f \propto V_0^{1-2a}$. Prior empirical scaling relations have all been consistent with this constraint; for round buoyant jets, \citet{Cetegen1996} found $a=0.38$ for $1 < \mathrm{Ri}_D < 100$ and $a=0.28$ for $\mathrm{Ri}_D > 100$, and for planar buoyant jets, \citet{Cetegen1998} found $a=0.45$ for $1 < \mathrm{Ri}_W < 100$, with no secondary scaling relation reported.
Prior scaling relations between $\mathrm{St}_\ell$ and $\mathrm{Ri}_\ell$ have primarily used a characteristic length scale $\ell$ associated with the diameter or width of the inlet. As an initial step, here we follow a similar approach and examine the scaling of $\mathrm{St}_\ell$ using width-based length scales for each inlet shape. For the circular inlets, $\ell$ is taken to be the diameter, $D$. For the three rectangular inlets, the smallest side length (i.e., the width, $W$) is used for $\ell$, since this is the scale most relevant to the pinching and subsequent shedding of vortical structures. For equilateral triangles, similar to squares, the length of one side is used for $\ell$. Finally, for the annulus, $\ell$ is taken to be the difference between the outer and inner radii; i.e., $\ell=(D_\text{out} - D_\text{in})/2$.
\begin{figure}
\centering\includegraphics[width=\textwidth]{figure3.png}
\caption{Strouhal number as a function of Richardson number for circular, rectangular ($AR_\mathrm{rect} =1,10/3,10$), triangular (equilateral), and annular ($AR_\mathrm{annu} = 3/4$) buoyant jets. Panel (a) shows results for characteristic lengths $\ell$ based on the diameter (circles), width (rectangles), side (triangles), and difference between outer and inner radii (annuli). Panel (b) shows results when the characteristic length is taken to be the hydraulic radius, $R_\mathrm{h}$. Scaling relations from \citet{Cetegen1996} and \citet{Cetegen1998} are also shown in panel (a).}
\label{fig:StVRiTraditional}
\end{figure}
Figure~\ref{fig:StVRiTraditional}(a) shows $\mathrm{St}_\ell$ versus $\mathrm{Ri}_\ell$ for each of the simulations performed in the present study, as well as for the round helium plume case from \citet{Wimer2019}. Scaling relations from prior experimental studies of round \citep{Cetegen1996} and planar \citep{Cetegen1998} buoyant jets are also shown. The circular buoyant jet results follow the round jet scaling relations from \citet{Cetegen1996}, including the transition at $\mathrm{Ri}_D \approx 100$. The rectangular buoyant jet results fall between prior round and planar scaling relations, with the lower aspect ratio rectangles more closely resembling the round scaling and the higher aspect ratio rectangles more closely resembling the planar scaling. The latter correspondence occurs because, as the rectangle becomes longer relative to its width (i.e., as aspect ratio increases), locations above the centroid appear more and more two-dimensional as corner effects become increasingly negligible. The triangular buoyant jet results follow a similar relation to that for the circular buoyant jets, but with larger values of $\mathrm{St}_\ell$. Interestingly, the annular buoyant jets also closely follow the round jet scaling slope, but with smaller values of $\mathrm{St}_\ell$.
Although scaling relations for each of the different inlet shapes could be obtained separately using least-squares fits to the data in Figure \ref{fig:StVRiTraditional}(a), we instead seek a single scaling relation between appropriately defined Strouhal and Richardson numbers for any inlet shape. In particular, here we introduce Strouhal and Richardson numbers, denoted $\mathrm{St}_{R_\mathrm{h}}$ and $\mathrm{Ri}_{R_\mathrm{h}}$, respectively, that use the hydraulic radius of the inlet, $R_\mathrm{h}$, as the characteristic length $\ell$. As an example, the hydraulic radius of a circle with diameter $D$ is $R_\mathrm{h} = \pi (D/2)^2 /(\pi D) = D/4$, and the functional forms of $R_\mathrm{h}$ for other inlet shapes are similarly straightforward to determine.
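For reference, the hydraulic radii of the inlet shapes used here follow directly from area and perimeter (a minimal sketch; the function names are ours):

```python
import math

def rh_circle(D):
    # pi (D/2)^2 / (pi D) = D/4
    return D / 4.0

def rh_rectangle(L, W):
    return (L * W) / (2.0 * (L + W))

def rh_equilateral_triangle(s):
    area = math.sqrt(3.0) / 4.0 * s**2
    return area / (3.0 * s)           # = s / (4 sqrt(3))

def rh_annulus(D_in, D_out):
    area = math.pi / 4.0 * (D_out**2 - D_in**2)
    perim = math.pi * (D_in + D_out)
    return area / perim               # = (D_out - D_in) / 4
```

Note that for a square of side $W$, $R_\mathrm{h} = W/4$, the same as for a circle of diameter $W$, and for the annulus $R_\mathrm{h}$ depends only on the gap width $D_\text{out}-D_\text{in}$.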
It should be noted that there is substantial reason to suspect the importance of the hydraulic radius in the formulation of a geometry-independent scaling relation for the puffing Strouhal number. In particular, \citet{Bharadwaj2019} recently showed that Strouhal and Richardson numbers based on the hydraulic diameter resulted in a single scaling relation for a series of rectangular buoyant jets with different aspect ratios. From a physical standpoint, the inlet area and perimeter used to determine the hydraulic radius are also dynamically significant; the area determines, in part, the total buoyancy flux introduced into the domain, and the shear layer that leads to KH roll-up and subsequent vortex shedding forms along the inlet perimeter.
Figure~\ref{fig:StVRiTraditional}(b) shows the resulting relationship between $\mathrm{St}_{R_\mathrm{h}}$ and $\mathrm{Ri}_{R_\mathrm{h}}$ for each of the inlet shapes and velocities. Most notably, all results now fall close to a single scaling relation that was determined using a least-squares fit to be $\mathrm{St}_{R_\mathrm{h}}=e^{-0.97}\mathrm{Ri}_{R_\mathrm{h}}^{0.40}$, with an $r^2 = 0.981$ coefficient of correlation. The secondary scaling associated with circular buoyant jets is eliminated, and all round buoyant jets, including the helium plume simulation from \citet{Wimer2019}, follow the same scaling relation. Additionally, the various rectangular, triangular, and annular results all also follow this relation.
\begin{figure}
\includegraphics[width=\textwidth]{figure4.png}
\caption{Strouhal number as a function of Richardson number for the present simulations and prior experiments from \citet{Hamins1992}, \citet{Cetegen1996}, \citet{Cetegen1998}, and \citet{Bharadwaj2019} [see legend in panel (b)]. Definitions of $\ell$ in (a) and (b) are the same as those in Figure~\ref{fig:StVRiTraditional}. In panel (b), we show the least squares fit to all data, as well as the proposed relationship from Eq.\ \eqref{eq:law}. In panel (c), we show residuals with respect to Eq.\ \eqref{eq:law}.}
\label{fig:StVRiHydraulicRadius}
\end{figure}
The single scaling relation measured using the present simulations can also be extended to include prior experimental data for round, planar, and rectangular inlets. In particular, Figure~\ref{fig:StVRiHydraulicRadius} shows data from the present simulations along with all prior experimental data from \citet{Hamins1992}, \citet{Cetegen1996}, and \citet{Cetegen1998}, as well as recent experimental data from \citet{Bharadwaj2019}. Figure \ref{fig:StVRiHydraulicRadius}(a) shows that, once again, when we use a width- or diameter-based formulation for $\ell$, there is no single scaling relation that accurately describes all available data. However, when the Strouhal and Richardson numbers are instead computed using the hydraulic radius, they all collapse along a single line. The least-squares fit to all data [not including the \citet{Bharadwaj2019} data, for reasons explained later] is $\mathrm{St}_{R_\mathrm{h}} = e^{-1.004}\mathrm{Ri}_{R_\mathrm{h}}^{0.406}$ with $r^2 = 0.984$, which is close to the scaling relation obtained for the simulation data alone [see Figure \ref{fig:StVRiTraditional}(b)].
Based on the current simulation and prior experimental results shown in Figure \ref{fig:StVRiHydraulicRadius}(b), as well as the similarity of the computed scaling coefficients to rational numbers, we propose the following shape-independent scaling relation between $\mathrm{St}_{R_\mathrm{h}}$ and $\mathrm{Ri}_{R_\mathrm{h}}$:
\begin{equation}\label{eq:law}
\mathrm{St}_{R_\mathrm{h}} = e^{-1}\mathrm{Ri}_{R_\mathrm{h}}^{2/5}\,.
\end{equation}
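Under this relation, the puffing frequency follows directly from the inlet parameters; a minimal sketch (the function name is ours, and the density ratio assumes ideal gases at equal pressure):

```python
import math

def predicted_frequency(V0, rho_ratio, R_h, g=9.81):
    """f = St * V0 / R_h, with St = e^{-1} Ri^{2/5} based on R_h.
    rho_ratio is rho_0 / rho_infinity at the inlet."""
    Ri = (1.0 - rho_ratio) * g * R_h / V0**2
    St = math.exp(-1.0) * Ri**0.4
    return St * V0 / R_h

# Example: circular inlet with D = 0.274 m (R_h = D/4) and V0 = 0.25 m/s;
# T0 = 1000 K into 300 K ambient gives rho_0/rho_infinity = 0.3.
f = predicted_frequency(0.25, 0.3, 0.274 / 4.0)   # ~3.0 Hz
```

Because $f \propto V_0^{1-2a} = V_0^{1/5}$ under this relation, the predicted frequency increases only weakly with inlet velocity.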
The coefficient of correlation between this relation and the simulation and experimental data is $r^2 = 0.984$. The accuracy of the proposed relation compared to the data can also be determined by examining the residuals of the data with respect to the proposed fit, as shown in Figure~\ref{fig:StVRiHydraulicRadius}(c). For the proposed relation to be considered an acceptable fit to the data, the residuals should be scattered about zero and show no discernible trends with respect to the dependent variable (i.e., $\mathrm{Ri}_{R_\mathrm{h}}$). Figure~\ref{fig:StVRiHydraulicRadius}(c) shows that the residuals for prior experimental \citep{Hamins1992,Cetegen1996,Cetegen1998} and the present simulation data are all scattered about zero, indicating the lack of any persistent bias in Eq.\ \eqref{eq:law} with respect to this data.
Rectangular inlet results from \citet{Bharadwaj2019} are also included in Figure~\ref{fig:StVRiHydraulicRadius}, but there is a consistent bias between this data and both the proposed scaling relation in Eq.\ \eqref{eq:law} and the least-squares fit in Figure~\ref{fig:StVRiHydraulicRadius}(b). This results in exclusively negative values of the residuals for data from the study by \citet{Bharadwaj2019}, as shown in Figure~\ref{fig:StVRiHydraulicRadius}(c). Thus, although the data from this study is quite precise, as indicated by the small scatter in the corresponding residuals in Figure~\ref{fig:StVRiHydraulicRadius}(c), the data appears to be inconsistent with other experimental and simulation results, as well as with the relation in Eq.\ \eqref{eq:law}. It is also worth noting that \citet{Bharadwaj2019} used an effective velocity instead of $V_0$ in their scaling relations, but in the present study this correction was found to be unnecessary for obtaining good agreement between the proposed relation in Eq.\ \eqref{eq:law} and all other simulation and experimental data.
\section{Conclusions\label{sec:conclusions}}
Adaptive mesh numerical simulations of buoyant jets have been performed for seven different inlet shapes: two circular inlets (corresponding to different diameters), three rectangular inlets (corresponding to different aspect ratios), an equilateral triangle inlet, and an annular inlet. A range of inlet velocities were simulated for each geometry, giving a range of inlet Richardson numbers for each case, and the resulting puffing Strouhal numbers were measured above the jet inlets.
Fundamental aspects of the flow evolution were found to be similar for each of the inlet shapes, including the buoyant rise of hot inlet gases, entrainment of cold ambient gases, pinching of the hot gases towards a central location above each inlet, formation of a KH instability along the interface between the hot and cold gases, and the subsequent pinch-off and shedding of vortical structures. This process was observed for each inlet shape, but the shed vortices were found to become less coherent as the complexity of the inlet increased. We found that, in general, the puffing frequency increased with inlet velocity and, for the rectangular cases, aspect ratio.
Using width-based puffing Strouhal and inlet Richardson numbers, our results are consistent with prior experimental studies of round and planar jets, with the highest aspect ratio rectangular cases examined here closely corresponding to prior planar results. However, when using such width-based non-dimensional parameters, we did not find a single scaling relation that accurately describes the Strouhal-Richardson relationship for all inlet shapes.
By contrast, if we instead use Strouhal and Richardson numbers based on the hydraulic radius, we do recover a single scaling relation, given in Eq.\ \eqref{eq:law}, that accurately describes the present computational results and prior planar and round experimental data. Recent experimental results for rectangular inlets \citep{Bharadwaj2019} were found to deviate from this scaling relation, but we also did not require any modification to the inlet velocity used to construct the Strouhal and Richardson numbers in order to obtain consistent agreement between the proposed scaling relation and all other data. The scaling relation in Eq.\ \eqref{eq:law} was found to closely describe both experimental and simulation results for Richardson numbers spanning four orders of magnitude.
It is tempting to propose Eq.\ \eqref{eq:law} as a universal scaling relation for buoyant jet puffing that can be applied to any inlet shape. However, a broader range of inlet shapes, spanning a wider range of Richardson numbers, must be examined before this claim can be made with confidence. In particular, complex inlet shapes without any symmetries, or those with narrow connection points, require further study. The rational exponents in Eq.\ \eqref{eq:law} are also suggestive of a theoretically-based scaling relation, and further study may provide a rigorous justification for the proposed exponents. Finally, we found pronounced differences between results from \citet{Bharadwaj2019} and all other experimental and computational results. It is not clear why this discrepancy exists, particularly since the present rectangular results [similar to the geometries studied by \citet{Bharadwaj2019}] are consistent with all other computational and experimental data. Consequently, this difference requires further study and resolution in future work.
\acknowledgments
Helpful discussions with Drs.\ Marcus S.~Day, Andrew Nonaka, and Werner J.A.~Dahm are gratefully acknowledged. NTW, GBR, and PEH were supported, in part, by the Strategic Environmental Research and Development Program under grant W912HQ-16-C-0026. CL was supported by the National Science Foundation Graduate Fellowship Program. Gift support from the 3M Company is also gratefully acknowledged.
\bibliographystyle{jfm}
\section{Introduction}
In this work, we will consider the compressible isentropic Euler equations with time-dependent damping
\begin{subequations}\label{main}
\begin{empheq}[left=\empheqlbrace]{align}
&
\rho_t+\nabla\cdot(\rho u)=0,
\quad
(t, x) \in (0, T)\times\mathbb{R}^n,
\label{eq:main-1}
\\
&
(\rho u)_t +\nabla\cdot(\rho u\otimes u)+\nabla p+\frac{\mu \, \rho u}{(1+t)^{\lambda}} = 0,
\quad (t, x) \in (0, T)\times\mathbb{R}^n,
\label{eq:main-2}
\\
&
u(0, x)=\varepsilon u_0(x), \quad \rho(0, x)= \overline{\rho} +\varepsilon \rho_0(x),
\label{eq:main_data}
\end{empheq}
\end{subequations}
where $\rho \colon [0,T) \times \mathbb{R}^{n} \to \mathbb{R}$, $u \colon [0,T) \times \mathbb{R}^{n} \to \mathbb{R}^n$ and $p \colon [0,T) \times \mathbb{R}^{n} \to \mathbb{R}$ stand for the density, velocity and pressure of the fluid respectively, $n\in\{1,2,3\}$ is the spatial dimension, while $\tfrac{\mu}{(1+t)^\lambda}$ is a time-dependent frictional coefficient, with $\mu\ge 0$ and $\lambda\ge 1$.
The initial values are a small perturbation of constant states (with $\overline{\rho}>0$), where the \lq\lq smallness'' is quantified by the parameter $\varepsilon>0$, and the perturbations $\rho_0, u_0\in C_0^{\infty}(\mathbb{R}^n)$ satisfy
\begin{equation}\label{data}
\text{supp}\,(\rho_0-\overline{\rho}), \, \text{supp}\, u_0 \subseteq \{x \in \mathbb{R}^n \colon |x|\le R\}
\end{equation}
for some positive constant $R$. We assume that the pressure satisfies the state equation for the gas
\begin{equation*}
p \equiv p(\rho) = A \rho^{\gamma},
\end{equation*}
where $A>0$ is a constant and $\gamma>1$ is the adiabatic index.
However, in what follows it is not restrictive to assume $R=\overline{\rho}=1$ and $A=\frac{1}{\gamma}$.
Our study concerns the estimates for the \emph{lifespan} $T_\varepsilon$ of the solution, defined as the largest time such that the solution exists and is $C^1$ on $[0,T_\varepsilon) \times \mathbb{R}^n$.
\subsection{The undamped case}\label{subsec:undamped}
Setting $\mu=0$, the system \eqref{main} reduces to the classic Euler equations, a fundamental system in fluid dynamics consisting of a set of quasilinear hyperbolic equations governing the adiabatic, inviscid flow of an ideal fluid. Generally speaking, the compressible Euler equations develop shock waves in finite time for general initial data (see \cite{SiderisThomasesWang2003} and references therein).
The study of singularity formation for the compressible Euler equations was initiated by Sideris in \cite{Sideris1985}, where several blow-up results were established for the $3$-dimensional Euler equations modeling a polytropic, ideal fluid with both large data and small initial perturbations.
In the subsequent works \cite{Sideris1991, Sideris1992} by the same author, the following lifespan estimate
\begin{equation}\label{L3d}
\exp\left(C_1\varepsilon^{-1}\right)
\le
T_\varepsilon
\le
\exp\left(C_2\varepsilon^{-2}\right)
\end{equation}
was established in $3D$, where the lower bound was obtained under the assumption that the initial velocity is irrotational and the upper bound holds for $\gamma=2$. In particular, Sideris improved on the generic lower bound
\begin{equation}\label{eq:generic-bound}
T_\varepsilon \ge C \varepsilon^{-1}
\end{equation}
which typically holds for symmetric hyperbolic systems in any number of space dimensions (see \cite{Kato1975} and \cite{Majda1984} for the general theory).
Here and in the following, we will use $C, C_1, C_2$ to denote some generic positive constants independent of $\varepsilon$, the value of which may change in different places.
Rammaha proved in \cite{Rammaha1989} the formation of singularity in finite time for the $2D$ case, and furthermore, for $\gamma=2$, obtained the following upper bound of the lifespan for small perturbations:
\begin{equation*
T_\varepsilon \le C\varepsilon^{-2}.
\end{equation*}
Later, Alinhac showed in \cite{Alinhac1993} that the lifespan for rotationally invariant data in $2D$ satisfies
\begin{equation*
\lim_{\varepsilon\rightarrow 0} \varepsilon^2 T_{\varepsilon}=C.
\end{equation*}
The compressible Euler equations in two space dimensions were reconsidered by Sideris in \cite{Sideris1997}, where different lower bounds for the lifespan were established under different assumptions on the initial data. In \cite{LaiXiangZhou2022}, Lai, Xiang and Zhou studied the generalized Riemann problem governed by the compressible isothermal ($p=\rho$) Euler equations in $2D$ and $3D$, establishing blow-up results for the fan-shaped wave structure solution. Recently, Jin and Zhou in \cite{JinZhou2020} derived, for $\gamma=2$, the following upper bound for the lifespan estimate:
\begin{equation}\label{L123-upper}
T_\varepsilon
\le
\left\{
\begin{aligned}
& C\varepsilon^{-1} &&\text{if $n=1$,}\\
& C\varepsilon^{-2} &&\text{if $n=2$,}\\
& \exp(C\varepsilon^{-1}) &&\text{if $n=3$,}
\end{aligned}
\right.
\end{equation}
which in particular improves the upper bound in \eqref{L3d}, and shows the optimality of the lifespan estimate in $3$ dimensions when $\gamma=2$. To be precise, only the $3D$ case is treated explicitly in their work, but their method and Lemma\til2.1 therein also work in lower dimensions.
On the other hand, putting together \eqref{eq:generic-bound}, which holds from the general theory, and the results in \cite{Alinhac1993,Sideris1997} and in \cite{Sideris1991}, we find that the lower bounds corresponding to \eqref{L123-upper} hold for any $\gamma>1$, namely
\begin{equation}\label{L123-lower}
T_\varepsilon
\ge
\left\{
\begin{aligned}
& C\varepsilon^{-1} &&\text{if $n=1$,}\\
& C\varepsilon^{-2} &&\text{if $n=2$,}\\
& \exp(C\varepsilon^{-1}) &&\text{if $n=3$.}
\end{aligned}
\right.
\end{equation}
In particular, we can see that the lifespan estimates in \eqref{L123-upper} are optimal at least for $\gamma=2$ in $1D$ and $3D$, and for any $\gamma>1$ if $n=2$ (due to \cite{Alinhac1993}). Actually, we anticipate that they are optimal for any $\gamma>1$ also in the $1D$ case, thanks to the results in \cite{Sugiyama2018}, which we will discuss properly in the next subsection.
\subsection{The damped case}
Let us turn our attention to the problem in the presence of a time-dependent damping term.
Wang and Yang in \cite{WangYang2001} proved global existence in $\mathbb{R}^n$ for the system \eqref{main} with a positive constant damping (i.e. $\mu>0$, $\lambda=0$) and small initial perturbation of some constant state. Sideris, Thomases and Wang \cite{SiderisThomasesWang2003} restudied the same case in $\mathbb{R}^3$, showing that the damping term can prevent the development of singularities in finite time if the initial perturbations are small, but on the contrary it is not strong enough to prevent the blow-up for large initial data, even if they are smooth.
Hou and Yin in \cite{HouYin2017}, and Hou, Witt and Yin in \cite{HouWittYin2018} studied the system \eqref{main} in $2D$ and $3D$, proving: global existence for $0\le\lambda<1$, $\mu>0$ and for $\lambda=1$, $\mu>3-n$; finite time blow-up for $\lambda>1$, $\mu>0$ and for $\lambda=1$, $0<\mu\le1$, $n=2$. Also, they established the following upper bound for the lifespan estimate:
\begin{equation}\label{L1}
T_\varepsilon\le \exp\left(C\varepsilon^{-2}\right)
\end{equation}
for $\gamma=2$ (see the details in \cite[p. 2511]{HouYin2017} for $n=2$ and in \cite[p. 416]{HouWittYin2018} for $n=3$). In \cite{Pan2016-JMAA, Pan2016-NA}, Pan studied the corresponding problem in $1$-dimension, showing that: if $0\le\lambda<1$, $\mu>0$ or if $\lambda=1$, $\mu>2$, there exists a global solution; if $\lambda=1$, $0\le\mu\le2$ or if $\lambda>1$, $\mu\ge0$, the $C^1$-solutions will blow up in finite time. Moreover, in the latter case, the same lifespan estimate as \eqref{L1} was established in \cite{Pan2016-JMAA} for $\gamma=2$.
From these results, one infers that the point $(\lambda,\mu)=(1,3-n)$ is critical for the problem.
Notice that both the blow-up results in \cite{Pan2016-JMAA, Pan2016-NA} and \cite{HouYin2017, HouWittYin2018} are established exploiting the method developed by Sideris in \cite{Sideris1985}: a $1D$ semilinear-type wave equation is constructed for some average quantity related to the density, and then d'Alembert's formula is used to establish an ordinary differential inequality.
In the paper \cite{Sugiyama2018}, Sugiyama studied the system \eqref{main} in $1D$, in its equivalent form obtained by changing the Eulerian coordinates into Lagrangian ones. Explicitly, \cite{Sugiyama2018} considers the so-called $p$-system
\begin{equation}\label{eq:psystem}
\left\{
\begin{aligned}
&
v_t-u_x=0,
\\
&
u_t + p_x + a(t,x) u = 0,
\\
&
u(0, x)=\varepsilon u_0(x), \quad v(0, x)= 1 +\varepsilon v_0(x),
\end{aligned}
\right.
\end{equation}
where $p \equiv p(v) = \tfrac{v^{-\gamma}}{\gamma}$ and $v=1/\rho$ is the specific volume.
For the space-independent damping $a(t,x)=\frac{\mu}{(1+t)^\lambda}$, making use of Riemann invariants, the author was able to establish the estimates
\begin{equation}\label{eq:1Dlifespan}
T_\varepsilon
\le
\left\{
\begin{aligned}
& C \varepsilon^{-1} &&\text{for $\lambda>1$ and $\mu\ge0$,}
\\
& C \varepsilon^{-\tfrac{2}{2-\mu}} &&\text{for $\lambda=1$ and $0\le\mu<2$,}
\\
& \exp(C \varepsilon^{-1}) &&\text{for $\lambda=1$ and $\mu=2$,}
\end{aligned}
\right.
\end{equation}
not only from above, but also from below, namely obtaining the sharp lifespan estimates for small $\varepsilon$, provided that $\lim_{x \to -\infty} (u_0(x), v_0(x)) = (u_-, v_-) \in \mathbb{R}^2$. For the study of problem \eqref{eq:psystem}, see also \cite{ChenLiLiMeiZhang2020} and the references therein.
To the best of our knowledge, except for the above results in $1D$ by Sugiyama, the lifespan estimates in $2D$ and $3D$ for \eqref{main} still seem to be unclear, especially when $\gamma\neq2$, even in the undamped case for $n=3$.
The method in \cite{Pan2016-JMAA, Pan2016-NA, HouYin2017, HouWittYin2018} successfully proves the blow-up of the solutions, but provides, as an upper bound for the lifespan, the estimate \eqref{L1} for $\gamma\ge 2$, which unfortunately does not seem to be optimal: indeed, it is reasonable to expect the bound to be closely related to the dimension $n$ and to the damping strength.
Furthermore, in the literature cited so far, often only the case $\gamma=2$ is explicitly considered. The modifications necessary to generalize the proofs to any $\gamma>1$ are then indicated, with the observation that the lifespan estimates may change, but it is not clear how. This happens in \cite{Sideris1985,JinZhou2020} for $n=3$ and in \cite{Rammaha1989} for $n=2$ in the undamped case, and similarly in \cite{HouYin2017} and \cite{Pan2016-JMAA} for the case with a time-dependent damping term.
Nevertheless, the lower bounds of the lifespan in \eqref{L123-lower} are independent of $\gamma$ in the undamped case, and they are optimal for $n\in\{1,2\}$ and any $\gamma>1$, as we observed at the end of the previous subsection.
Thus, it is natural to believe that this phenomenon should also hold for the bounds of the lifespan in the damped case, as is actually verified in \cite{Sugiyama2018} for $n=1$. Our results go exactly in this direction, see Remark~\ref{rmk:gammaindep}.
\section{Aims and results}
In this paper, we are going to employ an argument based on the manipulation of suitable multipliers to absorb the damping term, together with a variation of a lemma on differential inequalities established by Li and Zhou \cite{LiZhou1995}.
Similar techniques were applied in the context of the blow-up study for semilinear damped wave equations with small initial data --- as we will show, the density satisfies this kind of equation (see Section~\ref{sec:reformulation}).
On the damped wave equation there exists a huge and extensive literature. Since the cases treated in this work correspond to the scattering case (when $\lambda>1$) and to the scale-invariant case (when $\lambda=1$) of the damped wave equation according to Wirth's classification (see \cite{Wirth2004,Wirth2006,Wirth2007}), we cite only the works \cite{LiuWang2020,LaiTakamura2018,WakasaYordanov2019} and
\cite{DAbbicco2015,DAbbiccoLucenteReissig2015,DAbbiccoLucente2015,Palmieri2019,KatoSakuraba2019,Wakasugi2014,KatoTakamuraWakasa2019,ImaiKatoTakamuraWakasa2020,LaiTakamuraWakasa2017,IkedaSobajima2018,TuLin2017,LinTu2019} for the two cases respectively, but we refer the reader to the Introduction of \cite{LaiSchiavoneTakamura2020} and references therein for a comprehensive presentation.
The novelty of our work consists in combining these techniques, which handle the linear part of the equation, with tools from Orlicz space theory, which seems to be a suitable setting to deal with the non-power-like nonlinearity appearing in \eqref{eq:wave} below in the case $\gamma\neq2$. In this way, we are able to derive upper bounds on the lifespan not only in the case $\gamma=2$, improving the existing ones, but also in the general case $\gamma\neq2$, obtaining presumably optimal results (see Remark~\ref{rmk:damped_optimality}).
To the best of our knowledge, the use of Orlicz space theory in the context of blow-up and lifespan estimates for nonlinear wave equations seems to be new, and we are confident that it can potentially be applied to other problems (see Remark~\ref{rmk:otherproblems}).
Let us present now our main results.
\begin{theorem}\label{thm1}
Let $n\in\{1,2,3\}$ and $\gamma>1$. Suppose $\lambda>1$ and $\mu\ge0$.
Assume that the initial data satisfy \eqref{data} and
\begin{equation}\label{eq:datapositivity}
\int_{\R^n} \rho_0 \phi \text{\,d}x >0,
\qquad
\int_{\R^n} u_0 \cdot \nabla\phi \text{\,d}x > 0,
\end{equation}
where $\phi$ is defined in \eqref{def:phi} below.
Then the system \eqref{main} has no global solutions, and the lifespan estimate satisfies
\begin{equation*}
T_\varepsilon \le
\left\{
\begin{aligned}
&C\varepsilon^{-\ltfrac{2}{3-n}}
&&\text{if $n \in \{1,2\}$,}
\\
&\exp\left(C \varepsilon^{-1}\right)
&&\text{if $n=3$,}
\end{aligned}
\right.
\end{equation*}
provided $\varepsilon \le \varepsilon_0$ for some positive constant $\varepsilon_0 \equiv \varepsilon_0(n,\lambda,\mu,\gamma,u_0,\rho_0)$.
\end{theorem}
\begin{theorem}\label{thm2}
Let $n\in\{1,2\}$ and $\gamma>1$. Suppose $\lambda=1$ and $0\le\mu\le 3-n$.
Assume that the initial data satisfy \eqref{data} and \eqref{eq:datapositivity}.
Then the system \eqref{main} has no global solutions, and the lifespan estimate satisfies
\begin{equation}\label{eq:thm2}
T_\varepsilon \le
\left\{
\begin{aligned}
&C\varepsilon^{-\ltfrac{2}{3-n-\mu}}
&&\text{if $\mu<3-n$,}
\\
&\exp\left(C \varepsilon^{-1}\right)
&&\text{if $\mu=3-n$,}
\end{aligned}
\right.
\end{equation}
provided $\varepsilon \le \varepsilon_0$ for some positive constant $\varepsilon_0 \equiv \varepsilon_0(n,\mu,\gamma,u_0,\rho_0)$.
\end{theorem}
\begin{remark}[\emph{About optimality in the undamped case}]
\label{rmk:undamped_optimality}
%
The results in Theorems \ref{thm1} and~\ref{thm2} hold also for the compressible Euler system without damping, setting $\mu=0$. Together with the lower bound in \eqref{L3d} by Sideris, and recalling the discussion at the end of Subsection~\ref{subsec:undamped}, we completely close the problem of the optimality of the lifespan estimates in the undamped case for any $\gamma>1$.
\end{remark}
\begin{remark}[\emph{About optimality in the damped case}]
\label{rmk:damped_optimality}
%
Compared to the result \eqref{L1} obtained in \cite{HouWittYin2018,HouYin2017,Pan2016-JMAA}, we improve the lifespan estimates for $n=2$ and $n=3$, whereas for $n=1$ we recover the upper bound in \eqref{eq:1Dlifespan} with a completely different approach.
%
Since, when $\lambda=1$, it is the size of the constant $\mu$ that determines whether there is a global solution or blow-up in finite time, it is reasonable to believe that the upper bound of the lifespan also depends on $\mu$, as in the $1D$ results \eqref{eq:1Dlifespan} obtained by Sugiyama in \cite{Sugiyama2018}.
%
Precisely in view of these results, it is natural to conjecture that the lifespan estimates in Theorems~\ref{thm1} and~\ref{thm2} should indeed be optimal, since we already know they are for $n=1$.
\end{remark}
\begin{remark}[\emph{Independence of $\gamma$}]\label{rmk:gammaindep}
In light of Remark~\ref{rmk:undamped_optimality}, we can provide a negative answer to the statements at the end of \cite{Sideris1985} and \cite{Rammaha1989}: in the undamped case, the lifespan estimates are actually independent of $\gamma$, or better, for any $\gamma>1$ they coincide with the ones in the case $\gamma=2$. This should be true also in the damped case, on the basis of Remark~\ref{rmk:damped_optimality}.
\end{remark}
\begin{remark}[\emph{Relation to Glassey's conjecture}]
As we anticipated before, we will reduce ourselves to the study of the damped wave equation \eqref{eq:wave}, and hence, roughly speaking, to the inequality
\begin{equation}\label{eq:dampedineq}
\widetilde{\rho}_{tt} - \Delta \widetilde{\rho} + \frac{\mu}{(1+t)^{\lambda}} \widetilde{\rho}_t
\ge \Delta R(\widetilde{\rho})
\end{equation}
where $\lim_{p\to0} \frac{R(p)}{|p|^2} = \frac{\gamma-1}{2}$ and $\lim_{p\to+\infty} \frac{R(p)}{p^\gamma} = \frac1\gamma$. But we can be more precise.
%
Indeed, it is interesting to observe that our problem seems to be related to Glassey's conjecture for the wave equation with a derivative-type nonlinearity, which asserts that the critical exponent for the equation
\begin{equation*}
u_{tt} - \Delta u = |u_t|^p
\end{equation*}
with small initial data is given by the Glassey exponent
\begin{equation*}
p_G(n) := 1 + \frac{2}{n-1}.
\end{equation*}
Let us consider the corresponding problem with damping term, namely
\begin{equation}\label{eq:glassey}
\left\{
\begin{aligned}
& u_{tt} - \Delta u + \frac{\mu}{(1+t)^\lambda} u_t = |u_t|^p,
\\
& u(0,x) = \varepsilon u_0(x), \quad u_t(0,x) = \varepsilon u_1(x),
\end{aligned}
\right.
\end{equation}
for which the presumably optimal blow-up results in the cases $\lambda>1$ and $\lambda=1$ are given in \cite{LaiTakamura2019} and \cite{HamoudaHamza2021} respectively (in the latter reference a more general combined nonlinearity is considered; see also \cite{LucentePalmieri2021,LaiSchiavone2022,ChenLucentePalmieri2021,HamoudaHamza2021-APAM} for related problems). For the model with $\lambda=1$ the critical exponent seems to be $p_G(n+\mu)$, in the sense that we have blow-up for any $1< p \le p_G(n+\mu)$ and global existence for $p>p_G(n+\mu)$. If we set $p=2$, this requirement is equivalent to $\mu \le 3-n$, which is exactly the condition on $\mu$ in Theorem~\ref{thm2}.
Similarly, if $\lambda>1$ the critical exponent should be $p_G(n)$, with blow-up for $1<p \le p_G(n)$. Setting $p=2$, this condition reduces to $n\in\{1,2,3\}$.
%
This analogy holds true also for the lifespan estimates: the corresponding ones proved in \cite{LaiTakamura2019} and \cite{HamoudaHamza2021}, setting $p=2$, match with the ones provided in Theorem~\ref{thm1} and~\ref{thm2}.
In other words, our problem \eqref{main} seems to share the same blow-up dynamics as the problem \eqref{eq:glassey} with the exponent fixed to be $p=2$. The appearance of this particular exponent is not surprising, in view of Remark~\ref{rmk:gammaindep} and of our particular nonlinearity. Indeed, even if the function $R$ defined in \eqref{def:R} and appearing in \eqref{eq:dampedineq} behaves like $|\!\cdot\!|^2$ for small arguments and like $|\!\cdot\!|^\gamma$ for large arguments, it will emerge from the proof of the theorems that it is actually the power $|\!\cdot\!|^2$ that essentially influences the behavior of the problem.
\end{remark}
\begin{remark}[\emph{Other problems}]
\label{rmk:otherproblems}
Let us list here some other related problems:
\mynobreakpar
\begin{itemize}
\item The method exploited in this work, based on the combination of techniques from the blow-up theory for wave equations with tools from Orlicz spaces, could also be employed in the study of blow-up phenomena for wave equations with general non-homogeneous convex nonlinearities. For example, the computations we perform can be adapted to an equation like
\begin{equation*}
u_{tt} - \Delta u + d(t) u_t + m(t) u = \min\{|u|^p, |u|^q\}
\end{equation*}
with some constants $p,q>1$ and damping and mass terms $d(t)$, $m(t)$.
\item Another potential application of our methods is to the problem corresponding to \eqref{main} in exterior domains with appropriate boundary conditions.
\item For $\gamma=2$ the problem \eqref{main} is closely related to the inviscid shallow-water equations: they indeed coincide if the pressure satisfies the law $p(\rho)=\frac{\rho^{2}}{2F^2}$, where $F$ is the Froude number (see e.g. \cite{Bresch2009,Majda1984}).
\end{itemize}
\end{remark}
\begin{notations}
In the rest of the paper, we will use $A \lesssim B$ (resp. $A \gtrsim B$) in place of $A \le C B$ (resp. $A \ge C B$), where $C$ is a positive constant independent of $\varepsilon$. We will write $A \approx B$ if $A \lesssim B$ and $A \gtrsim B$. Finally, $f(x) \sim g(x)$ for $|x| \to r \in \mathbb{R}\cup\{+\infty\}$ means that $\lim_{|x|\to r} \frac{f(x)}{g(x)}=1$.
\end{notations}
\section{Local existence, finite speed of propagation, reformulation}\label{sec:reformulation}
In this section we are going to recall the local existence and the finite speed of propagation property of the solution for \eqref{main}, after reformulating it as a symmetric hyperbolic system. Then, we will deduce the damped wave equation satisfied by the density, whose energy and weak formulations will be the starting point of our argument.
Denote the sound speed by
\begin{equation*}
\sigma(\rho)=\sqrt{p'(\rho)}=\rho^{\frac{\gamma-1}{2}}
\end{equation*}
and set $\overline{\sigma}=\sigma(1)=1$, which corresponds to the sound speed at the background density $\overline{\rho}=1$. Moreover, let
\begin{equation*}
\theta(t,x)
= \frac{2}{\gamma-1}\left(\sigma(\rho)-1\right)=\frac{2}{\gamma-1}
\left(\rho^{\frac{\gamma-1}{2}}-1\right).
\end{equation*}
Then the system \eqref{main} can be reformulated as
\begin{equation}\label{rmain}
\left\{
\begin{aligned}
&\partial_t\theta+u\cdot\nabla\theta+\left(1+\frac{\gamma-1}{2}
\theta\right)\nabla\cdot u=0,
\\
&\partial_tu+u\cdot \nabla u+\left(1+\frac{\gamma-1}{2}
\theta\right)\nabla\theta+\frac{\mu u}{(1+t)^{\lambda}}=0,
\\
&\theta(0,x)= \varepsilon \theta_{0}(x), \qquad u(0,x)=\varepsilon u_{0}(x)
\end{aligned}
\right.
\end{equation}
where
\begin{equation*}
\theta_0(x)
=
\frac{1}{\varepsilon} \cdot \frac{2}{\gamma-1}\left(\sigma(1+\varepsilon\rho_0)-1\right)
\end{equation*}
satisfies
\begin{equation*}
\theta_0 \sim \rho_0 \quad\text{for $\varepsilon\to0^+$,}
\qquad
\text{supp}\,\theta_0 \subseteq \left\{x \in \mathbb{R}^n \colon |x|\le 1\right\}.
\end{equation*}
It is easy to see that \eqref{rmain} is a symmetric hyperbolic system of the form
\begin{equation*
V_t+\sum_{i=1}^{n}a_i(V)V_{x_i}=f(t,V)
\end{equation*}
with $V=(\theta, u)^T$. According to the general theory in \cite{Kato1975, Majda1984}, the above hyperbolic system of conservation laws admits a local $C^1$-solution on the time interval $[0, T)$ for some $T>0$, provided the initial data are sufficiently regular. It also holds that
\begin{equation*}
\rho(t, x)>0
\qquad
\text{for $(t, x)\in [0, T)\times \mathbb{R}^n$,}
\end{equation*}
if the initial density satisfies $\rho(0, x)>0$ (which in our case is surely true for $\varepsilon$ small enough). Moreover, it can be proved that for any local $C^1$-solution $(\rho, u)$ of the system \eqref{main}, with initial data satisfying \eqref{data}, the following finite speed of propagation result holds:
\begin{equation}\label{eq:supp_rho}
\text{supp}\,(\rho-1), \,\, \text{supp}\, u \subseteq \{ (t,x) \in (0,T) \times \mathbb{R}^n \colon |x| \le 1+t \}.
\end{equation}
This can be shown by a method parallel to the proof of Lemma\til3.2 in \cite{SiderisThomasesWang2003}. We omit the details here, and we refer to \cite{Sideris1985,Sideris1991,SiderisThomasesWang2003} for a more exhaustive discussion.
With the local existence and the finite speed of propagation in hand, we can proceed to reformulate our equations in integral form, multiplying by a test function.
\subsection{The damped wave equation}
Suppose $(\rho,u)$ is a $C^1$-solution to \eqref{main}, satisfying \eqref{eq:supp_rho}.
Let $\Phi(t,x)$ be a real smooth function with compact support on $[0,T)\times\mathbb{R}^n$.
Multiplying equation \eqref{eq:main-2} by $\nabla\Phi$, integrating on the strip $[0,t) \times \mathbb{R}^n$ of the space-time, with $t\in[0,T)$, and then by parts with respect to the time,
we reach
\begin{equation}\label{eq:energy2}
\begin{split}
&\int_{\R^n} \rho u \cdot \nabla \Phi \text{\,d}x
-
\int_0^t \int_{\R^n}
\rho u \cdot \nabla\Phi_s
-
d(s) \rho u \cdot\nabla\Phi
\text{\,d}x\text{\,d}s
\\
=&\,
\varepsilon \int_{\R^n} (1+\varepsilon\rho_0) u_0 \cdot\nabla\Phi(0,x) \text{\,d}x
\\
&-
\int_0^t \int_{\R^n}
\left[ \nabla \cdot (\rho u \otimes u) +
\nabla\frac{\rho^\gamma-1}{\gamma} \right]
\cdot \nabla\Phi
\text{\,d}x\text{\,d}s,
\end{split}
\end{equation}
where for short we set
\begin{equation*}
d(s) := \frac{\mu}{(1+s)^\lambda} .
\end{equation*}
Consider now equation \eqref{eq:main-1}. After a multiplication by $\Phi_s(s,x)$, integrating on $[0,t)\times\mathbb{R}^n$ and then by parts with respect to the space, we have
\begin{equation}\label{eq:energy1}
\int_0^t \int_{\R^n}
\rho u \cdot \nabla \Phi_s \text{\,d}x\text{\,d}s
=
\int_0^t \int_{\R^n} (\rho-1)_s \Phi_s \text{\,d}x\text{\,d}s.
\end{equation}
Multiplying \eqref{eq:main-1} by $\Phi$ instead, integrating only on the space and then by parts, we obtain that
\begin{equation}\label{eq:rho-u-nablaphi}
\int_{\R^n} \rho u \cdot \nabla \Phi \text{\,d}x = \int_{\R^n} (\rho-1)_t \Phi \text{\,d}x .
\end{equation}
Finally, multiplying \eqref{eq:main-1} by $d(s)\Phi(s,x)$, integrating on the space-time and then by parts, we get
\begin{equation}\label{eq:damping_part}
\begin{split}
\int_0^t \int_{\R^n}
d(s) \rho u \cdot \nabla\Phi \text{\,d}x\text{\,d}s
=
\int_0^t \int_{\R^n}
d(s) (\rho-1)_s \Phi
\text{\,d}x\text{\,d}s
.
\end{split}
\end{equation}
Now, inserting \eqref{eq:energy1}, \eqref{eq:rho-u-nablaphi} and \eqref{eq:damping_part} into \eqref{eq:energy2}, we obtain
\begin{equation}\label{eq:wave-energy}
\begin{split}
& \int_{\R^n} (\rho-1)_t \Phi \text{\,d}x
- \int_0^t \int_{\R^n} (\rho-1)_s \Phi_s \text{\,d}x\text{\,d}s
\\
&+ \int_0^t \int_{\R^n} \nabla(\rho-1) \cdot \nabla\Phi \text{\,d}x\text{\,d}s
+ \int_0^t \int_{\R^n} d(s) (\rho-1)_s \Phi \text{\,d}x\text{\,d}s
\\
=&\,
-
\varepsilon \int_{\R^n}
\nabla\cdot ((1+\varepsilon\rho_0) u_0)
\Phi(0,x) \text{\,d}x
\\
&
- \int_0^t \int_{\R^n} \left[\nabla \cdot (\rho u \otimes u)\right] \cdot \nabla\Phi \text{\,d}x\text{\,d}s
- \int_0^t \int_{\R^n} \nabla R(\rho-1) \cdot \nabla \Phi \text{\,d}x\text{\,d}s
\end{split}
\end{equation}
where $R \colon [-1, +\infty) \to [0, +\infty)$ is defined by
\begin{equation}\label{def:R}
R(p) := \frac{(p+1)^\gamma-1}{\gamma} - p.
\end{equation}
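Since $R(0)=0$, $R'(p)=(p+1)^{\gamma-1}-1$ vanishes at $p=0$, and $R''(p)=(\gamma-1)(p+1)^{\gamma-2}$, a Taylor expansion (a standard computation, recorded here for later reference) gives

```latex
\begin{align*}
R(p) &= \frac{\gamma-1}{2}\,p^2 + O(p^3)
&&\text{as } p\to0,
\\
R(p) &\sim \frac{p^\gamma}{\gamma}
&&\text{as } p\to+\infty,
\end{align*}
```

so the nonlinearity behaves quadratically near the background state and like a $\gamma$-power for large deviations of the density.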
Notice that equation \eqref{eq:wave-energy} can be regarded as the energy solution formulation of the nonlinear damped wave equation
\begin{equation}\label{eq:wave}
(\rho-1)_{tt} - \Delta(\rho-1) + d(t) (\rho-1)_t = \nabla\cdot\left[\nabla\cdot(\rho u\otimes u)\right] + \Delta R(\rho-1)
\end{equation}
with initial data
\begin{align*}
[\rho-1]_{t=0} &= \varepsilon \rho_0 ,
\\
[(\rho-1)_t]_{t=0} &= - \varepsilon \nabla\cdot ((1+\varepsilon\rho_0) u_0) .
\end{align*}
Finally, we will also need the weak solution formulation of \eqref{eq:wave}. Integrating by parts several times in \eqref{eq:wave-energy} and letting $t \to T$, we obtain
\begin{equation}\label{eq:wave-weak}
\begin{split}
&\int_0^T \int_{\R^n} (\rho-1)
\left[ \Phi_{tt} - \Delta\Phi - d(t) \Phi_t - d_t(t) \Phi
\right]
\text{\,d}x\text{\,d}t
\\
=&\,
\varepsilon \int_{\R^n}
\left[
d(0) \rho_0 \Phi(0,x) + (1+\varepsilon\rho_0) u_0 \cdot \nabla\Phi(0,x) - \rho_0 \Phi_t(0,x)
\right]
\text{\,d}x
\\
& +
\int_0^T \int_{\R^n}
\text{tr}[(\rho u \otimes u) \nabla^2\Phi]
\text{\,d}x\text{\,d}t
+
\int_0^T \int_{\R^n}
R(\rho-1) \Delta\Phi
\text{\,d}x\text{\,d}t
\end{split}
\end{equation}
where $\nabla^2\Phi$ is the Hessian matrix of $\Phi$.
\section{Some elements of Orlicz space theory}
Here we briefly recap some tools from Orlicz space theory, together with some properties that we will employ extensively throughout this work. As references on Orlicz spaces, see for example the classic books \cite{KrasnoselskiiRutickii1961} and \cite{RaoRen1991}.
Let us consider a continuous, even, convex function $\Upupsilon \colon \mathbb{R} \to [0,+\infty)$ satisfying the conditions
\begin{equation*}
\lim_{p\to0} \frac{\Upupsilon(p)}{p} = 0,
\qquad
\lim_{|p|\to +\infty} \frac{\Upupsilon(p)}{|p|} = +\infty.
\end{equation*}
By definition (see \cite[\S I.1.5]{KrasnoselskiiRutickii1961}), $\Upupsilon$ is an $N$-function. The $N$-function complementary to $\Upupsilon$, denoted by $\Upupsilon^*$, is obtained by taking the Legendre transform of $\Upupsilon$ (\cite[Equation (2.9)]{KrasnoselskiiRutickii1961}), namely
\begin{align}\label{def:complY}
\Upupsilon^*(q)
:= \sup_{p \ge 0} \, \{ p|q| - \Upupsilon(p) \}.
\end{align}
The following useful inequalities (\cite[Equation (2.10)]{KrasnoselskiiRutickii1961}) hold for any $p\ge0$:
\begin{equation}\label{eq:2.10}
p \le \Upupsilon^{-1}(p) \cdot (\Upupsilon^*)^{-1}(p) \le 2p .
\end{equation}
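As an illustrative sanity check (not taken from the cited sources), consider the model $N$-function $\Upupsilon(p)=\frac{p^2}{2}$: it is self-complementary, $\Upupsilon^*=\Upupsilon$, with $\Upupsilon^{-1}(p)=\sqrt{2p}$ for $p\ge0$, so that

```latex
\begin{equation*}
\Upupsilon^{-1}(p)\cdot(\Upupsilon^*)^{-1}(p)
= \sqrt{2p}\cdot\sqrt{2p}
= 2p ,
\end{equation*}
```

and the upper bound in \eqref{eq:2.10} is attained, while the lower bound $p\le 2p$ is clear.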
We are now in a position to define the Orlicz space associated with $\Upupsilon$. Let us denote by $\widetilde{L^\Upupsilon}(\mathbb{R}^n)$ the Orlicz class of functions $u \colon \mathbb{R}^n \to \mathbb{R}$ for which
\begin{equation*}
\rho(u;\Upupsilon) := \int_{\mathbb{R}^n} \Upupsilon(u(x)) \text{\,d}x < \infty.
\end{equation*}
Then, we denote by $L^\Upupsilon(\mathbb{R}^n)$ the set of all functions $u$ satisfying the condition
\begin{equation*}
\int_{\mathbb{R}^n} u(x) v(x) \text{\,d}x < \infty
\end{equation*}
for all $v \in \widetilde{L^{\Upupsilon^*}}(\mathbb{R}^n)$. The set $L^\Upupsilon(\mathbb{R}^n)$ is a complete normed linear space, called the Orlicz space, when equipped with the Orlicz norm
\begin{equation*}
\n{u}_{(L^\Upupsilon)} := \sup_{\rho(v;\Upupsilon^*) \le 1}
\left| \int_{\mathbb{R}^n} u(x) v(x) \text{\,d}x \right|
.
\end{equation*}
The set $L^\Upupsilon(\mathbb{R}^n)$ can also be transformed into a Banach space by using the Luxemburg norm, defined as
\footnote{A small remark on the notation: compared with the monograph \cite{KrasnoselskiiRutickii1961} by Krasnosel'skii and Rutickii, for cosmetic reasons we swap the symbols used for the Orlicz and Luxemburg norms, since we will employ only the latter norm in our computations.}
%
\begin{equation*}
\n{u}_{L^\Upupsilon} :=
\inf
\left\{
k > 0
\colon
\int_{\mathbb{R}^n} \Upupsilon \left(\frac{u(x)}{k}\right) \text{\,d}x \le 1
\right\},
\end{equation*}
which is equivalent to the Orlicz norm, since (\cite[Equation (9.24)]{KrasnoselskiiRutickii1961})
\begin{equation}\label{eq:equivorlux}
\n{u}_{L^\Upupsilon} \le \n{u}_{(L^\Upupsilon)} \le 2 \n{u}_{L^\Upupsilon}.
\end{equation}
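For intuition, the Luxemburg norm of a simple step function can be computed by bisection on the modular. In the model case $\Upupsilon(p)=p^2/2$ (again purely illustrative) and $u = c\,\mathbf{1}_E$ with $|E|=m$, the infimum has the closed form $c\sqrt{m/2}$, against which the bisection can be checked:

```python
import math

# Luxemburg norm of the step function u = c * 1_E with |E| = m, for the
# model N-function Y(p) = p^2/2 (illustration only). The modular is
# m * (c/k)^2 / 2, so the norm has the closed form c * sqrt(m/2).
def modular(k, c, m):
    return m * (c / k) ** 2 / 2.0

def luxemburg_norm(c, m, lo=1e-9, hi=1e9, iters=200):
    # The modular is decreasing in k, so bisect for the smallest k with
    # modular(k) <= 1.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if modular(mid, c, m) <= 1.0:
            hi = mid
        else:
            lo = mid
    return hi

norm = luxemburg_norm(c=3.0, m=2.0)
exact = 3.0 * math.sqrt(2.0 / 2.0)  # c * sqrt(m/2) with c = 3, m = 2
```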
%
Last but not least, a key tool we need from Orlicz space theory is the following H\"older inequality.
\begin{lemma}[{\cite[Theorem\til9.3]{KrasnoselskiiRutickii1961}}]
\label{lem:holder}
The inequality
\begin{equation*}
\left|
\int_{\mathbb{R}^n} u(x) v(x) \text{\,d}x
\right|
\le \n{u}_{(L^\Upupsilon)} \n{v}_{(L^{\Upupsilon^*})}
\end{equation*}
holds for any pair of functions $u \in L^\Upupsilon(\mathbb{R}^n)$, $v \in L^{\Upupsilon^*}(\mathbb{R}^n)$.
\end{lemma}
The above holds for any $N$-function, but it is now time to specialize the definition of $\Upupsilon$ to our purposes.
For us, the role of $\Upupsilon \colon \mathbb{R} \to [0,+\infty)$ will be held by
\begin{equation}\label{def:Y}
\Upupsilon(p) := \frac{(|p|+1)^\gamma-1}{\gamma} - |p|,
\end{equation}
whose complementary function, as one can check from formula \eqref{def:complY}, is
\begin{equation*}
\Upupsilon^*(q) = \frac{(|q|+1)^{\gamma'}-1}{\gamma'} - |q|,
\end{equation*}
where $\gamma' = \frac{\gamma}{\gamma-1}$ is the conjugate exponent of $\gamma$, i.e. $\frac{1}{\gamma}+\frac{1}{\gamma'}=1$.
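One can verify numerically that the stated $\Upupsilon^*$ indeed agrees with the Legendre transform \eqref{def:complY}, approximating the supremum by a grid search (a rough sketch; the grid bounds are chosen ad hoc so that the maximizer $p=(1+|q|)^{1/(\gamma-1)}-1$ falls inside the grid):

```python
# Check that Y*(q) = ((|q|+1)^{g'}-1)/g' - |q| matches the Legendre
# transform sup_{p>=0} { p|q| - Y(p) }, approximated on a fine grid.
def Y(p, g):
    return ((abs(p) + 1.0) ** g - 1.0) / g - abs(p)

def Ystar_formula(q, g):
    gp = g / (g - 1.0)
    return ((abs(q) + 1.0) ** gp - 1.0) / gp - abs(q)

def Ystar_grid(q, g, pmax=50.0, n=100000):
    # the supremum is attained at p = (1+|q|)^{1/(g-1)} - 1 < pmax here
    return max(pmax * i / n * abs(q) - Y(pmax * i / n, g) for i in range(n + 1))

g = 1.5
errs = [abs(Ystar_grid(q, g) - Ystar_formula(q, g)) for q in [0.3, 1.0, 2.0]]
```

For $\gamma=2$ the formulas reproduce the self-complementary pair $\Upupsilon(p)=\Upupsilon^*(p)=p^2/2$.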
Note that, in the simplest case $\gamma=2$, we have $\Upupsilon(p) = \frac{|p|^2}{2}$. In the general case $\gamma>1$, it is clearly seen that
$\Upupsilon(p) \sim \frac{|p|^\gamma}{\gamma}$
when $|p| \to +\infty$.
On the other hand, as $p \to 0$, Taylor's theorem gives
\begin{equation*}
\Upupsilon(p) = \frac{\gamma-1}{2} |p|^2 + o(|p|^2).
\end{equation*}
Therefore we can deduce
\begin{equation}\label{eq:approxY}
\Upupsilon(p)
\approx
\left\{
\begin{aligned}
& |p|^2 &&\text{if $|p| \le 1$,}
\\
& |p|^\gamma &&\text{if $|p| > 1$,}
\end{aligned}
\right.
\end{equation}
where the implicit constants in the symbol \lq\lq$\approx$'' depend only on $\gamma$. This relation is helpful in order to highlight the asymptotic behavior of $\Upupsilon$, and we will repeatedly use it to approximate the value of $\Upupsilon$.
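The two-regime behavior \eqref{eq:approxY} can itself be tested numerically: the ratios $\Upupsilon(p)/p^2$ for $|p|\le1$ and $\Upupsilon(p)/|p|^\gamma$ for $|p|\ge1$ stay within fixed positive bounds (here probed only at the sample value $\gamma=1.5$ and a handful of points):

```python
def Y(p, g):
    return ((abs(p) + 1.0) ** g - 1.0) / g - abs(p)

g = 1.5
# Y(p) ~ (g-1)/2 * p^2 near 0 and ~ p^g / g at infinity, so both ratio
# families should be pinned between positive constants depending only on g.
small_ratios = [Y(p, g) / p ** 2 for p in [0.01, 0.1, 0.5, 1.0]]
large_ratios = [Y(p, g) / p ** g for p in [1.0, 2.0, 10.0, 100.0]]
```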
We need to introduce also the following function $\Upxi \colon \mathbb{R} \to [0,+\infty)$, which roughly speaking inverts the asymptotic behaviors of $\Upupsilon(p)$ near $p=0$ and near $p=\infty$:
\begin{equation}\label{def:X}
\Upxi(p)
:=
\left\{
\begin{aligned}
& \frac{1}{\Upupsilon(1/p)}
&&\text{if $p \neq 0$,}
\\
& 0
&&\text{if $p=0$,}
\end{aligned}
\right.
\approx
\begin{cases}
|p|^\gamma &\text{if $|p| \le 1$,}
\\
|p|^2 &\text{if $|p| > 1$.}
\end{cases}
\end{equation}
It is also useful to observe that, for $1<\gamma\le2$, $\Upupsilon$ is super-multiplicative (up to a multiplicative constant), while on the contrary it is sub-multiplicative (again up to a multiplicative constant) when $\gamma \ge 2$.
\begin{lemma}\label{lem:supersubmolt}
Consider $\Upupsilon$ defined in \eqref{def:Y}.
If $1 < \gamma \le 2$, then
\begin{equation}\label{eq:superm}
\Upupsilon(pq) \gtrsim \Upupsilon(p) \, \Upupsilon(q)
\end{equation}
for any $p,q \in \mathbb{R}$, from which in particular
\begin{equation}\label{eq:subm}
\Upupsilon(pq) \lesssim \Upupsilon(p) \, \Upxi(q),
\qquad
\Upxi(pq) \lesssim \Upxi(p) \, \Upxi(q),
\end{equation}
for any $p,q \in \mathbb{R}$.
If $\gamma \ge 2$, then
\begin{equation}\label{eq:subm2}
\Upupsilon(pq) \lesssim \Upupsilon(p) \, \Upupsilon(q)
\end{equation}
for any $p,q \in \mathbb{R}$, from which in particular
\begin{equation*}\label{eq:superm2}
\Upupsilon(pq) \gtrsim \Upupsilon(p) \, \Upxi(q),
\qquad
\Upxi(pq) \gtrsim \Upxi(p) \, \Upxi(q),
\end{equation*}
for any $p,q \in \mathbb{R}$.
All the implicit constants depend only on $\gamma$.
\end{lemma}
\begin{proof}
The proof is elementary: we just need to use \eqref{eq:approxY}. Without loss of generality we can assume $|p|\le|q|$. First suppose $1<\gamma\le2$. There are four cases:
\begin{enumerate}[label=(\roman*)]
\item if $|p|\le |q| \le1$, then in particular $|pq|\le1$, thus $$\Upupsilon(p)\Upupsilon(q) \approx |p|^2 |q|^2 \approx \Upupsilon(pq);$$
\item if $|p|\le 1 < |q|$ and $|pq|\le1$, then $$\Upupsilon(p)\Upupsilon(q) \approx |p|^2 |q|^\gamma \approx \Upupsilon(pq) |q|^{-(2-\gamma)} \le \Upupsilon(pq);$$
\item if $|p|\le 1 < |q|$ and $|pq|>1$, then $$\Upupsilon(p)\Upupsilon(q) \approx |p|^2 |q|^\gamma \approx \Upupsilon(pq) |p|^{2-\gamma} \le \Upupsilon(pq);$$
\item if $1<|p|\le |q|$, then in particular $|pq|>1$, thus
$$\Upupsilon(p)\Upupsilon(q) \approx |p|^\gamma |q|^\gamma \approx \Upupsilon(pq).$$
\end{enumerate}
The relation on the right of \eqref{eq:subm} follows directly from \eqref{eq:superm} and the definition \eqref{def:X} of $\Upxi$, whereas the relation on the left follows by substituting $p$ and $q$ with $pq$ and $1/q$ respectively in \eqref{eq:superm}. The case $\gamma>2$ is completely analogous.
\end{proof}
\begin{remark}
Note that the super-/sub-multiplicative relations above cannot in general be reversed if $\gamma\neq2$, in the sense that \eqref{eq:subm2} does not hold if $1<\gamma<2$, whereas \eqref{eq:superm} does not hold if $\gamma>2$. In the special case $\gamma=2$, instead, we have $\Upupsilon(pq)=2\Upupsilon(p)\Upupsilon(q)$ for any $p,q\in\mathbb{R}$.
\end{remark}
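Both directions of the lemma, as well as the exact identity of the remark for $\gamma=2$, are easy to probe numerically on a small grid (an illustrative sketch; the grid and the bounds $1$ and $10$ are ad hoc, not the sharp constants):

```python
# Grid check of super-multiplicativity (gamma <= 2), sub-multiplicativity
# (gamma >= 2), and the exact identity Y(pq) = 2 Y(p) Y(q) at gamma = 2.
def Y(p, g):
    return ((abs(p) + 1.0) ** g - 1.0) / g - abs(p)

grid = [0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0]

def ratios(g):
    return [Y(p * q, g) / (Y(p, g) * Y(q, g)) for p in grid for q in grid]

super_min = min(ratios(1.5))  # bounded away from 0: Y(pq) >~ Y(p) Y(q)
sub_max = max(ratios(3.0))    # bounded above:       Y(pq) <~ Y(p) Y(q)
gamma2_gap = max(abs(Y(p * q, 2.0) - 2.0 * Y(p, 2.0) * Y(q, 2.0))
                 for p in grid for q in grid)
```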
An immediate implication of Lemma~\ref{lem:supersubmolt} is that, if $p \lesssim q$, then $\Upupsilon(p) \lesssim \Upupsilon(q)$ and $\Upxi(p) \lesssim \Upxi(q)$.
Another consequence is the following observation (which we promote to the status of a lemma for convenience).
\begin{lemma}\label{lem:luxest}
Consider $\Upupsilon$ defined in \eqref{def:Y} and let $u\in L^{\Upupsilon}(\mathbb{R}^n)$. If
\begin{equation*}
\int_{\R^n} \Upupsilon\left(\frac{u(x)}{k}\right) \text{\,d}x \le \kappa_0
\end{equation*}
for some $k>0$ and $\kappa_0>0$, then
\begin{equation*}
\n{u}_{L^{\Upupsilon}} \le c_{\gamma, \kappa_0} k,
\end{equation*}
where $c_{\gamma, \kappa_0}$ is a positive constant depending only on $\gamma$ and $\kappa_0$.
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:supersubmolt} there exists some positive constant $d_\gamma$, depending only on $\gamma$, such that $\Upupsilon(pq) \le d_\gamma \Upupsilon(p) \, \Upxi(q)$ if $1<\gamma\le2$. Set
\begin{equation*}
c_{\gamma, \kappa_0} := \left[\Upxi^{-1}\left(\frac{1}{d_\gamma \kappa_0}\right)\right]^{-1}.
\end{equation*}
Then
\begin{equation*}
1
\ge
\int_{\R^n}
\Upupsilon\left(\frac{u}{k}\right)
\frac{1}{\kappa_0}
\text{\,d}x
=
d_\gamma
\int_{\R^n} \Upupsilon\left(\frac{u}{k}\right) \Upxi\left(\frac{1}{c_{\gamma, \kappa_0}}\right)
\text{\,d}x
\ge
\int_{\R^n}
\Upupsilon\left(\frac{u}{c_{\gamma, \kappa_0} k}\right)
\text{\,d}x
\end{equation*}
and hence, by definition of Luxemburg norm, we have that $c_{\gamma,\kappa_0} k$ is an upper bound for $\n{u}_{L^{\Upupsilon}}$. Employing instead $\Upupsilon(pq) \le \widetilde{d_\gamma} \Upupsilon(p) \, \Upupsilon(q)$, one can prove the case $\gamma\ge2$ in the same way.
\end{proof}
\begin{remark}
Here and in the following, when we write $\Upupsilon^{-1}$, $(\Upupsilon^*)^{-1}$ or $\Upxi^{-1}$, clearly we mean the inverse function of $\Upupsilon$, $\Upupsilon^*$ or $\Upxi$ respectively, when restricted on the non-negative interval $[0,+\infty)$.
\end{remark}
We are now ready to prove our theorems.
\section{Proof of Theorem~\ref{thm1}}\label{sec:proof1}
\subsection{The test function}
Let us define the positive function
\begin{equation}\label{def:phi}
\phi(x) :=
\left\{
\begin{aligned}
&
\frac{1}{n\omega_n}
\int_{\mathbb{S}^{n-1}} e^{x\cdot\sigma} \text{d}\sigma&\text{if $n \ge2$},
\\
& \frac{e^x+e^{-x}}{2} &\text{if $n = 1$},
\end{aligned}
\right.
\end{equation}
where $\omega_n := \frac{\pi^{n/2}}{\Gamma\left(\frac{n}{2}+1\right)}$ is the volume of the $n$-dimensional unit ball, so that $n\omega_n$ is the surface measure of the unit sphere $\mathbb{S}^{n-1}$. This function was first introduced by Yordanov and Zhang in \cite{YordanovZhang2006} to prove the blow-up of wave equations.
The function $\phi$ is radial, strictly increasing with respect to $|x|$, solves $\Delta\phi=\phi$, and moreover
\begin{equation*}
\phi(0)=1,
%
\qquad
%
\phi(x) \sim
\frac{(2\pi)^{\frac{n-1}{2}}}{n \omega_n}
|x|^{-\frac{n-1}{2}} e^{|x|}
\quad\text{when $|x|\to+\infty$,}
\end{equation*}
therefore $\phi$ satisfies, for any $x\in\mathbb{R}^n$, the relation
\begin{equation}\label{testpro}
\phi(x) \approx \jap{x}^{-\frac{n-1}{2}} e^{|x|}.
\end{equation}
Here and in the following, $\jap{\cdot}:=(1+|\!\cdot\!|^2)^{1/2}$ denotes, as customary, the Japanese bracket.
All the above properties of $\phi$ can easily be deduced by rewriting this function in closed form. Indeed, using $n$-dimensional spherical coordinates and choosing the polar axis parallel to $x$, one can prove that actually
\begin{equation*}
\phi(x) =
\frac{(2\pi)^{\frac{n}{2}}}{n \omega_n} |x|^{1-\frac{n}{2}} I_{\frac{n}{2}-1}(|x|),
\end{equation*}
where $I_\nu(z)$ is the modified Bessel function of the first kind.\footnote{Curiously, as far as we know, this closed expression seems never to have been reported in the related literature.}
The properties of $\phi$ listed above then follow from those of the Bessel function (see e.g. \cite[Sections\til9.6 and 9.7]{AbramowitzStegun1964}).
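For instance, in dimension $n=2$ the closed form reduces to $\phi(x)=I_0(|x|)$ (since $(2\pi)^{n/2}/(n\omega_n)=1$ and $|x|^{1-n/2}=1$), which can be verified numerically against the defining spherical mean (a quick sketch, with the Bessel function summed as its power series):

```python
import math

# For n = 2: phi(x) = (1/(2*pi)) * int_0^{2pi} e^{|x| cos(theta)} d(theta)
# should equal I_0(|x|).
def phi_mean(r, n_theta=20000):
    h = 2.0 * math.pi / n_theta
    s = sum(math.exp(r * math.cos(i * h)) for i in range(n_theta))
    return s * h / (2.0 * math.pi)

def bessel_I0(r, terms=60):
    # I_0(r) = sum_k (r/2)^{2k} / (k!)^2
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (r / 2.0) ** 2 / ((k + 1) ** 2)
    return total

rel_errs = [abs(phi_mean(r) - bessel_I0(r)) / bessel_I0(r) for r in [0.5, 2.0, 5.0]]
```

The equispaced rule is spectrally accurate on periodic integrands, so the two evaluations agree essentially to machine precision.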
Let us consider then the positive function
\begin{equation*}
\psi(t,x) := e^{-t} \phi(x),
\end{equation*}
which satisfies $\psi_t=-\psi$, $\Delta \psi = \psi$,
and the following bounds we will use later:
\begin{equation}\label{est:psi}
\psi(t,x) \lesssim \jap{t}^{-\frac{n-1}{2}}
\end{equation}
for $|x| \le 1+t$, and
\begin{equation}\label{est:intpsi}
\int_{|x| \le 1+t} \psi^b(x) \text{\,d}x \approx \jap{t}^{\frac{n-1}{2}(2-b)}
\end{equation}
for any $b \in (0,2]$ and $t\ge0$, with implicit constants depending on $n$ and $b$.
The estimate \eqref{est:psi} comes directly from \eqref{testpro}.
For \eqref{est:intpsi}, observe that from one side we have
\begin{equation*}
\begin{split}
\int_{|x|\le1+t} \psi^b(x) \text{\,d}x
&\gtrsim
\int_{|x| \le 1+t} \jap{x}^{-\frac{n-1}{2}b} e^{b|x|-bt} \text{\,d}x
\\
&\gtrsim
\int_{\frac{1+t}{2}}^{1+t} \jap{r}^{\frac{n-1}{2}(2-b)} e^{br-bt} \text{\,d}r
\\
&\gtrsim
\jap{t}^{\frac{n-1}{2}(2-b)},
\end{split}
\end{equation*}
while on the other side
\begin{equation*}
\begin{split}
\int_{|x|\le1+t} \psi^b(x) \text{\,d}x
&\lesssim
\int_{|x| \le 1+t}
\jap{x}^{-\frac{n-1}{2}b} e^{b|x|-bt} \text{\,d}x
\\
&\lesssim
\int_{0}^{1+t} \jap{r}^{\frac{n-1}{2}(2-b)} e^{br-bt} \text{\,d}r
\\
&\lesssim
\jap{t}^{\frac{n-1}{2}(2-b)}.
\end{split}
\end{equation*}
Now, let $\chi$ be a smooth compactly supported cut-off function such that $\chi(t,x)\equiv1$ on the support of $\rho-1$. Choosing $\Phi=\psi\chi$ as test function in \eqref{eq:wave-energy}, using \eqref{eq:supp_rho}, integrating by parts with respect to the space variables and differentiating with respect to time, we arrive at
\begin{equation}\label{eq:F-1}
\begin{multlined}
F'' + 2F' + \frac{\mu}{(1+t)^{\lambda}} (F'+F)
\\=
\int_{\R^n}
\text{tr}\left[(\rho u \otimes u) \nabla^2\psi \right]
\text{\,d}x
+
\int_{\R^n}
R(\rho-1) \psi
\text{\,d}x,
\end{multlined}
\end{equation}
where we define the functional $F \equiv F(t)$ as
\begin{equation*}
F(t) := \int_{\R^n} (\rho-1) \psi \text{\,d}x.
\end{equation*}
Observe that from the conditions \eqref{eq:datapositivity} on the initial data we have
\begin{align}\label{est:datapos}
\begin{split}
F(0)
&= \varepsilon \int_{\R^n} \rho_0 \phi \text{\,d}x > 0,
\\
F'(0) + F(0)
&=
-\varepsilon \int_{\R^n} \nabla \cdot ((1 + \varepsilon\rho_0) u_0) \phi \text{\,d}x
\\
&=
\varepsilon \int_{\R^n} (1 + \varepsilon\rho_0) u_0 \cdot \nabla \phi \text{\,d}x > 0.
\end{split}
\end{align}
The last inequality holds true for $\varepsilon$ small enough, since
\begin{equation*}
\lim_{\varepsilon\to0^+} \varepsilon^{-1} \left[ F'(0) + F(0) \right] = \int_{\R^n} u_0 \cdot \nabla \phi \text{\,d}x > 0.
\end{equation*}
Our next step is to bound from below the nonlinear term in \eqref{eq:F-1}.
\subsection{Estimate for the nonlinear term}\label{subsec:estnonlin}
First of all, we can drop the first term on the right-hand side of \eqref{eq:F-1}, since it is nonnegative. Indeed, for $n\ge2$,
\begin{equation*}
\begin{split}
\text{tr}\left[(\rho u \otimes u) \nabla^2\psi \right]
&=
\frac{1}{n\omega_n}
\int_{\mathbb{S}^{n-1}} \rho
\,
\text{tr}\left[(u\otimes u)(\sigma \otimes \sigma)\right]
\,
e^{x\cdot\sigma-t} \text{\,d}\sigma
\\
&=
\frac{1}{n\omega_n}
\int_{\mathbb{S}^{n-1}} \rho (u\cdot\sigma)^2 e^{x\cdot\sigma-t} \text{\,d}\sigma
\\
&\ge0
\end{split}
\end{equation*}
whereas of course
\begin{equation*}
\text{tr}\left[(\rho u \otimes u) \nabla^2\psi \right]
= \rho u^2 \, \frac{e^{x-t} + e^{-x-t}}{2} \ge 0
\end{equation*}
in the $1$-dimensional case.
Moreover, the functions $\Upupsilon$ and $R$, defined respectively in \eqref{def:Y} and \eqref{def:R}, coincide on $[0,+\infty)$, but are different on $[-1,0)$. However, since $R(p), \Upupsilon(p) \sim \frac{\gamma-1}{2} |p|^2$ for $p \to 0$ and $R, \Upupsilon$ are bounded on $[-1,0)$, it is easily checked that
\begin{equation*}
R(\rho-1) \approx \Upupsilon(\rho-1)
\end{equation*}
for $\rho\ge0$.
Therefore, from \eqref{eq:F-1} we get
\begin{equation}\label{eq:F-2}
F'' + 2F' + \frac{\mu}{(1+t)^\lambda}( F' + F )
\gtrsim
\int_{\R^n} \Upupsilon(\rho-1) \, \psi \text{\,d}x .
\end{equation}
If $\gamma>2$, from \eqref{eq:approxY} it is easily seen that $\Upupsilon(p) \gtrsim |p|^2$. Thus, the case $\gamma>2$ can be reduced to the case $\gamma=2$, so in the following we will assume $1<\gamma\le2$.
We want to prove now that
\begin{equation}\label{est:nonlin0}
\int_{\R^n} \Upupsilon(\rho-1) \psi \text{\,d}x \gtrsim \jap{t}^{-\frac{n-1}{2}} \Upupsilon(F),
\end{equation}
which inserted in \eqref{eq:F-2} would imply
\begin{equation}\label{eq:F-3}
F'' + 2F' + \frac{\mu}{(1+t)^\lambda}( F' + F )
\gtrsim
\jap{t}^{-\frac{n-1}{2}} \Upupsilon(F).
\end{equation}
Now the Orlicz space machinery comes into play.
From the inequality \eqref{eq:2.10}, the support property \eqref{eq:supp_rho}, the equivalence relation \eqref{eq:equivorlux} and the H\"older inequality in Lemma~\ref{lem:holder}, we obtain, for a fixed $\alpha \in (0,2)$ to be chosen later, that
\begin{equation}\label{est:nonlin1}
\begin{split}
|F(t)|
&\le
\int_{\R^n} |\rho-1| \psi \text{\,d}x
\\
&\approx
\int_{\R^n}
\left[
\psi^{1-\alpha}
(\Upupsilon^*)^{-1}\left( \psi^{\alpha} \right)
\right]
%
\left[
|\rho-1|
\Upupsilon^{-1}\left( \psi^{\alpha} \right)
\right]
\text{\,d}x
\\
&\approx
\int_{\R^n}
\psi^{1-\frac\alpha2}
\left[
|\rho-1|
\Upupsilon^{-1}\left( \psi^{\alpha} \right)
\right]
\text{\,d}x
\\
&\lesssim
\n{\psi^{1-\frac\alpha2}}_{L^{\Upupsilon^*}(|x|\le1+t)}
\n{|\rho-1| \Upupsilon^{-1}\left( \psi^{\alpha} \right)}_{L^\Upupsilon(\mathbb{R}^n)}
.
\end{split}
\end{equation}
In the penultimate relation we used
\begin{equation*}
\psi^{1-\alpha} (\Upupsilon^*)^{-1}\left( \psi^{\alpha} \right)
\approx
\psi^{1-\frac\alpha2}
\end{equation*}
which follows from $\psi\lesssim1$ (due to \eqref{est:psi}) and the fact that $(\Upupsilon^*)^{-1}(p) \approx |p|^{1/2}$ if $p$ is small.
Let us consider $\n{\psi^{1-\frac\alpha2}}_{L^{\Upupsilon^*}(|x|\le1+t)}$. Note that, thanks to \eqref{est:psi} and \eqref{est:intpsi}, we have
\begin{equation*}
\frac{\psi^{1-\alpha/2}}{\left(\int_{|x|\le 1+t} \psi^{2-\alpha} \text{\,d}x \right)^{1/2}}
\lesssim
\jap{t}^{-\frac{n-1}{2}}
\lesssim
1
\end{equation*}
for $|x|\le 1+t$,
and therefore, since $\Upupsilon^*(p) \approx |p|^2$ for small $|p|$,
\begin{equation*}
\bigintss_{|x| \le 1+t}
\Upupsilon^*\left( \frac{\psi^{1-\alpha/2}}{ \left( \int_{|x|\le 1+t} \psi^{2-\alpha} \text{\,d}x \right)^{1/2} } \right)
\text{\,d}x
\lesssim
1.
\end{equation*}
Hence, by Lemma~\ref{lem:luxest} and using again \eqref{est:intpsi}, we obtain
\begin{equation*}
\n{\psi^{1-\frac\alpha2}}_{L^{\Upupsilon^*}(|x|\le1+t)}
\lesssim
\left( \int_{|x|\le 1+t} \psi^{2-\alpha} \text{\,d}x \right)^{1/2}
\lesssim
\jap{t}^{\frac{n-1}{2}\cdot\frac{\alpha}{2}}
.
\end{equation*}
Let us insert this information back into \eqref{est:nonlin1} to obtain
\begin{equation}\label{est:nonlin2}
|F(t)| \lesssim \n{ |\rho-1| \Psi }_{L^\Upupsilon(\mathbb{R}^n)},
\qquad
\Psi \equiv \Psi(t,x) := \jap{t}^{\frac{n-1}{2}\cdot\frac{\alpha}{2}} \Upupsilon^{-1}(\psi^\alpha)
,
\end{equation}
where we used the homogeneity of the norm. From \eqref{est:psi} we easily have
\begin{equation*}
\Psi
\approx
\jap{t}^{\frac{n-1}{2}\cdot\frac{\alpha}{2}} \psi^{\alpha/2}
\lesssim
\jap{t}^{\frac{n-1}{2}\cdot\frac{\alpha}{2}}
\jap{t}^{-\frac{n-1}{2}\cdot\frac{\alpha}{2}}
=
1
\end{equation*}
for $|x|\le 1+t$. From \eqref{def:X}, it follows that
\begin{equation*}
\Upxi(\Psi)
\approx
\Psi^\gamma
\approx
\jap{t}^{\frac{n-1}{2}\cdot\frac{\gamma}{2}\alpha} \psi^{\frac{\gamma}{2}\alpha}.
\end{equation*}
Employing the above estimate, the relations in \eqref{eq:subm} and the compactness of the support of $\rho-1$, we obtain
\begin{equation*}
\begin{split}
\int_{\R^n} \Upupsilon\left( \frac{|\rho-1| \Psi}{k} \right)
\text{\,d}x
&\lesssim
\int_{|x| \le 1+t}
\Upupsilon(\rho-1) \, \Upxi\left(\frac{\Psi}{k}\right)
\text{\,d}x
\\
&\lesssim
\int_{|x| \le 1+t}
\frac{\Upupsilon(\rho-1)}{\Upupsilon(k)}
\, \Upxi(\Psi)
\text{\,d}x
\\
&\lesssim
\int_{\R^n}
\frac{\Upupsilon(\rho-1) \psi^{\frac{\gamma}{2} \alpha} }{\jap{t}^{-\frac{n-1}{2}\cdot\frac{\gamma}{2}\alpha} \Upupsilon(k)}
\text{\,d}x
\end{split}
\end{equation*}
for any $k>0$. Plugging into the above inequality the choices
\begin{equation*}
\alpha = \frac{2}{\gamma},
\qquad
k= \Upupsilon^{-1}\left( \jap{t}^{ \frac{n-1}{2} } \int_{\R^n} \Upupsilon(\rho-1) \psi \text{\,d}x \right),
\end{equation*}
we deduce
\begin{equation*}
\bigintss_{\mathbb{R}^n} \Upupsilon\left( \frac{|\rho-1| \Psi}{\Upupsilon^{-1}\left( \jap{t}^{ \frac{n-1}{2} } \int_{\R^n} \Upupsilon(\rho-1) \psi \text{\,d}x \right)} \right)
\text{\,d}x
\lesssim
1
\end{equation*}
and so, by Lemma~\ref{lem:luxest}, we have
\begin{equation*}
\n{|\rho-1|\Psi}_{L^\Upupsilon(\mathbb{R}^n)}
\lesssim
\Upupsilon^{-1}\left( \jap{t}^{\frac{n-1}{2}} \int_{\R^n} \Upupsilon(\rho-1) \psi \right),
\end{equation*}
inserting which into \eqref{est:nonlin2} gives the desired inequality \eqref{est:nonlin0}.
Before going on, we list some observations.
\begin{remark}
The choice $\alpha= 2/\gamma$ is of course forced by the fact that we need to recover the expression $\int_{\R^n} \Upupsilon(\rho-1) \psi \text{\,d}x$, and this explains the perhaps strange factorization $\psi \approx \psi^{1-\alpha} (\Upupsilon^*)^{-1} (\psi^\alpha) \Upupsilon^{-1}(\psi^\alpha)$ employed in \eqref{est:nonlin1}. Note also that $\alpha=2/\gamma$ lies in $(0,2)$ for any $\gamma>1$.
\end{remark}
\begin{remark}
In \eqref{est:nonlin2}, taking advantage of the homogeneity of the norm to squeeze the estimate for $\n{\psi^{1-\frac\alpha2}}_{L^{\Upupsilon^*}(|x|\le 1+t)}$ inside $\n{|\rho-1| \Psi}_{L^\Upupsilon(\mathbb{R}^n)}$ is a necessary step in order not to lose information. Indeed, otherwise we would have the estimate
\begin{equation}\label{eq:rmkF}
|F(t)| \lesssim \jap{t}^{\frac{n-1}{2} \cdot \frac{\alpha}{2}} \n{|\rho-1| \Upupsilon^{-1}(\psi^\alpha)}_{L^\Upupsilon(\mathbb{R}^n)}
\end{equation}
and, proceeding as above, with $\Psi$ replaced by $\Upupsilon^{-1}(\psi^\alpha) \approx \psi^{\alpha/2}$, we would obtain
\begin{equation*}
\n{|\rho-1| \Upupsilon^{-1}(\psi^\alpha)}_{L^\Upupsilon(\mathbb{R}^n)} \lesssim \Upupsilon^{-1} \left( \int_{\R^n} \Upupsilon(\rho-1) \psi \text{\,d}x \right)
\end{equation*}
with again the choice $\alpha=2/\gamma$. At this point, since $\Upupsilon$ is not sub-multiplicative for $1<\gamma<2$, when we return to \eqref{eq:rmkF} we would be forced to employ again \eqref{eq:subm}, getting
\begin{equation*}
\Upupsilon(F) \lesssim \jap{t}^\frac{n-1}{\gamma} \int_{\R^n} \Upupsilon(\rho-1) \psi \text{\,d}x,
\end{equation*}
which is a worse estimate than \eqref{est:nonlin0} for $1<\gamma<2$.
\end{remark}
\subsection{The multiplier}\label{subsec:multiplier}
Now that we have \eqref{eq:F-3} at our disposal, let us consider the multiplier
\begin{equation}\label{def:m}
\mathcal{m} \equiv \mathcal{m}(t) := \exp\left( \mu \frac{(1+t)^{1-\lambda}}{1-\lambda} \right)
\end{equation}
introduced in \cite{LaiTakamura2018}, which solves the ordinary differential equation
\begin{equation}\label{eq:odem}
\frac{\mathcal{m}'(t)}{\mathcal{m}(t)} = \frac{\mu}{(1+t)^\lambda}
\end{equation}
and satisfies the bounds
\begin{equation}\label{eq:m-bound}
0 < \mathcal{m}(0) \le \mathcal{m}(t) \le 1
\end{equation}
for $\mu\ge0$ and $\lambda>1$. Its role is to \lq\lq absorb'' the damping term. Indeed, adding $\frac{\mu}{(1+t)^{\lambda}} F$ to both sides of \eqref{eq:F-3} and multiplying by $\mathcal{m}(t)$, we get
\begin{equation}\label{eq:diffmF}
\frac{\text{d}}{\text{d}t} \left\{ \mathcal{m}(t) \, [F'(t)+2F(t)] \right\}
\gtrsim
\frac{\mu}{(1+t)^{\lambda}} \mathcal{m}(t) F(t) + \jap{t}^{-\frac{n-1}{2}} \mathcal{m}(t) \Upupsilon(F(t)).
\end{equation}
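Both the ODE \eqref{eq:odem} and the bounds \eqref{eq:m-bound} are easy to confirm numerically via a central finite difference (the sample values $\mu=2$, $\lambda=3/2$ below are chosen arbitrarily):

```python
import math

# Check that m(t) = exp(mu*(1+t)^{1-lam}/(1-lam)) satisfies
# m'/m = mu*(1+t)^{-lam} and 0 < m(0) <= m(t) <= 1 for lam > 1, mu >= 0
# (sample parameters only).
mu, lam = 2.0, 1.5

def m(t):
    return math.exp(mu * (1.0 + t) ** (1.0 - lam) / (1.0 - lam))

h = 1e-6
ode_errs = [
    abs((m(t + h) - m(t - h)) / (2.0 * h * m(t)) - mu * (1.0 + t) ** (-lam))
    for t in [0.0, 1.0, 5.0, 20.0]
]
ts = [0.0, 0.5, 1.0, 10.0, 100.0]
bounds_ok = all(0.0 < m(t) <= 1.0 for t in ts)
monotone_ok = all(m(a) <= m(b) for a, b in zip(ts, ts[1:]))
```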
We will soon prove that $F$ is actually non-negative, so that we can also drop the first term on the right-hand side.
After an integration with respect to the time we have
\begin{equation*}
\begin{split}
\mathcal{m}(t) [F'(t)+2F(t)]
\gtrsim&\,
\mathcal{m}(0) [F'(0)+2F(0)]
\\
&+
\int_0^t
\frac{\mu}{(1+s)^{\lambda}} \mathcal{m}(s)F(s)
\text{\,d}s
\\
&+
\int_0^t
\jap{s}^{-\frac{n-1}{2}} \mathcal{m}(s)\Upupsilon(F(s)) \text{\,d}s.
\end{split}
\end{equation*}
Multiplying now by $\frac{e^{2t}}{\mathcal{m}(t)}$, integrating again with respect to time, and then multiplying by $e^{-2t}$, we arrive at
\begin{equation}\label{eq:F-4}
\begin{split}
F(t)
\gtrsim&\,
F(0) e^{-2t}
+
\mathcal{m}(0)[F'(0)+2F(0)]
e^{-2t}
\int_0^t \frac{e^{2s}}{\mathcal{m}(s)} \text{\,d}s
\\
&+
e^{-2t}
\int_0^t \frac{e^{2s}}{\mathcal{m}(s)} \int_0^s \frac{\mu}{(1+r)^{\lambda}} \mathcal{m}(r)F(r)\text{\,d}r\text{\,d}s
\\
&+
e^{-2t}
\int_0^t \frac{e^{2s}}{\mathcal{m}(s)} \int_0^s \jap{r}^{-\frac{n-1}{2}} \mathcal{m}(r) \Upupsilon(F(r)) \text{\,d}r\text{\,d}s
.
\end{split}
\end{equation}
From this expression we can prove that $F(t) >0$ for $t\ge0$ by a standard comparison argument. Indeed, since $F(0)>0$ by our assumptions on the initial data and $F$ is continuous, we know that $F(t)>0$ at least for small $t\ge0$. Assume by contradiction that $t_0>0$ is the smallest zero of $F$; then, setting $t=t_0$ in \eqref{eq:F-4} and using also that $F'(0)+2F(0)\ge0$, we get $0=F(t_0) \gtrsim F(0) e^{-2 t_0}>0$ --- a contradiction. Thus $F(t)>0$ for any $t\ge0$.
Thanks to this information we can suppress the third term in \eqref{eq:F-4}, and using also $\mathcal{m}(t) \approx 1$ (due to \eqref{eq:m-bound}) we have now
\begin{equation}\label{eq:F-5}
\begin{split}
F(t)
\gtrsim&\,
F(0) e^{-2t}
+
[F'(0)+2F(0)]
\frac{1-e^{-2t}}{2}
\\
&+
e^{-2t}
\int_0^t e^{2s} \int_0^s \jap{r}^{-\frac{n-1}{2}} \Upupsilon(F(r)) \text{\,d}r\text{\,d}s.
\end{split}
\end{equation}
Morally, we would now like to \lq\lq differentiate'' the above estimate to obtain a differential inequality like \eqref{eq:F-3}, but without the damping term. Let us introduce the auxiliary function $\overline{F} \equiv \overline{F}(t)$ defined by
\begin{equation*}
\begin{split}
\overline{F}(t)
:=&\,
\frac{F(0)}{2} e^{-2t}
+
[F'(0)+2F(0)]
\frac{1-e^{-2t}}{2}
\\
&+
e^{-2t}
\int_0^t e^{2s} \int_0^s \jap{r}^{-\frac{n-1}{2}} \Upupsilon(F(r)) \text{\,d}r\text{\,d}s.
\end{split}
\end{equation*}
From its definition and \eqref{eq:F-5}, we have
\begin{equation*}
F(t) \gtrsim \frac{F(0)}{2} e^{-2t} + \overline{F}(t) \ge \overline{F}(t) > 0.
\end{equation*}
It is easy to check, by first multiplying by $e^{2t}$ and differentiating, and then multiplying by $e^{-2t}$ and differentiating again, that
\begin{equation*}
\overline{F}''(t) + 2 \overline{F}'(t) = \jap{t}^{-\frac{n-1}{2}} \Upupsilon(F(t)) \gtrsim \jap{t}^{-\frac{n-1}{2}} \Upupsilon\left(\overline{F}(t)\right).
\end{equation*}
Moreover
\begin{gather*}
\overline{F}(0) = \frac{F(0)}{2} > 0 ,
\\
\overline{F}'(0) = F'(0) + F(0) > 0 .
\end{gather*}
At this point, recalling \eqref{eq:approxY}, the conclusion of the proof follows from a straightforward application of the next lemma, which is a variation of Theorem\til3.1 in \cite{LiZhou1995} with a nonlinear term allowed to behave like two different powers according to whether its argument is small or large. Since the proof of the lemma follows step by step the one in \cite{LiZhou1995} with minor changes, we include it for the sake of completeness but postpone it to Appendix~\ref{app:proof-lizhou-var}.
\begin{lemma}\label{lem:lizhou-var}
Let $0\le \lambda \le 1$. Assume that $I \colon [0,+\infty) \to \mathbb{R}$ satisfies
\begin{equation}\label{eq:lz}
I''(t) + I'(t) \gtrsim (1+t)^{-\lambda} N(I(t))
\end{equation}
where $N(p), N'(p)>0$ for $p>0$ and
\begin{gather*}
N(p) \approx
\begin{cases}
p^{1+\alpha} &\text{if $0\le p \le 1$,}
\\
p^{1+\beta} &\text{if $p > 1$,}
\end{cases}
\end{gather*}
for some $\alpha,\beta>0$.
Suppose also
\begin{equation*}
I(0) = \varepsilon >0, \qquad I'(0) \ge 0.
\end{equation*}
Then, $I(t)$ blows up in finite time. Moreover, if $\varepsilon>0$ is small enough, the lifespan $T_\varepsilon$ of $I(t)$ satisfies the upper bound
\begin{equation*}
T_\varepsilon \le
\left\{
\begin{aligned}
&C \varepsilon^{-\ltfrac{\alpha}{1-\lambda}} &&\text{if $0\le\lambda<1$,}
\\
&\exp(C \varepsilon^{-\alpha}) &&\text{if $\lambda=1$,}
\end{aligned}
\right.
\end{equation*}
where $C$ is a positive constant independent of $\varepsilon$.
\end{lemma}
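Although the proof is postponed, the blow-up mechanism of the lemma is easy to visualize by integrating the equality case of \eqref{eq:lz} with a forward Euler scheme (a crude numerical sketch with ad hoc parameters; the threshold crossing below is only a numerical proxy for the blow-up time):

```python
# Forward-Euler integration of I'' + I' = (1+t)^{-lam} * N(I), the equality
# case of the lemma's inequality, with N(p) = p^{1+alpha} for p <= 1 and
# p^{1+beta} for p > 1. Solutions reach any fixed threshold in finite time,
# and smaller initial data take longer, as the lifespan bound predicts.
def N(p, alpha=1.0, beta=0.5):
    return p ** (1.0 + alpha) if p <= 1.0 else p ** (1.0 + beta)

def first_crossing(eps, lam=0.5, dt=1e-4, cap=1e6, t_max=200.0):
    I, V, t = eps, 0.0, 0.0          # V = I'
    while t < t_max:
        A = -V + (1.0 + t) ** (-lam) * N(I)   # I'' from the ODE
        I, V, t = I + dt * V, V + dt * A, t + dt
        if I > cap:
            return t                 # numerical proxy for the blow-up time
    return None                      # no blow-up observed within t_max

t_small = first_crossing(0.1)
t_large = first_crossing(0.5)
```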
\section{Proof of Theorem~\ref{thm2}}
First of all, notice that, in the case $\lambda=1$, the solution to the ODE \eqref{eq:odem} is given by
\begin{equation*}
\mathcal{m} \equiv \mathcal{m}(t) = (1+t)^{\mu},
\end{equation*}
which is unbounded for $\mu>0$. Nevertheless, the inequality \eqref{eq:diffmF} still holds with $\lambda=1$, and so does \eqref{eq:F-4}.
Hence, with the same comparison argument as in Subsection~\ref{subsec:multiplier}, we can deduce that
\begin{equation*}
F(t) := \int_{\R^n} (\rho-1) \psi \text{\,d}x >0
\end{equation*}
for $t\ge0$.
The role of $\mathcal{m}$ as a multiplier in the case $\lambda=1$ was only to prove the positivity of $F$.
With this information in hand, let us go back to \eqref{eq:F-3}, this time using as multiplier $\sqrt{\mathcal{m}(t)} = (1+t)^{\mu/2}$.
Define the functional
\begin{equation*}
G(t) := \sqrt{\mathcal{m}(t)} F(t) = (1+t)^{\mu/2} F(t),
\end{equation*}
which of course inherits from $F$ its positivity and the same blow-up dynamics. Multiplying both sides of \eqref{eq:F-3} by $\sqrt{\mathcal{m}}$, we obtain
\begin{equation}\label{eq:G1}
G'' + 2G' + \frac{\mu(2-\mu)/4}{(1+t)^2} G
\gtrsim
\jap{t}^{-\frac{n-1}{2}+\frac{\mu}{2}} \Upupsilon(F)
\gtrsim
\jap{t}^{-\frac{n+\mu-1}{2}} \Upupsilon(G),
\end{equation}
where we also used \eqref{eq:superm} and the fact that $\Upupsilon((1+t)^{-\mu/2}) \approx (1+t)^{-\mu}$. The use of $\sqrt{\mathcal{m}}$ is connected to a Liouville-type transform, often employed in the study of the scale-invariant damped wave equation. For example, D'Abbicco, Lucente and Reissig in \cite{DAbbiccoLucenteReissig2015} inaugurated a series of works by various authors where the case $\mu=2$ is considered, since this choice simplifies the analysis of the problem, relating it to the undamped wave equation. In our case, setting $\mu=2$ would eliminate the third term on the left-hand side of \eqref{eq:G1}. However, since we are dealing with $\mu\le 3-n$, we need another way to suppress the massive term in \eqref{eq:G1}.
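The effect of the weight $(1+t)^{\mu/2}$ can be double-checked numerically: for any smooth $F$, the function $G=(1+t)^{\mu/2}F$ satisfies the exact identity $G''+2G'+\frac{\mu(2-\mu)/4}{(1+t)^2}\,G=(1+t)^{\mu/2}\big[F''+2F'+\frac{\mu}{1+t}(F'+F)\big]$, which is precisely what turns the left-hand side of \eqref{eq:F-3} into that of \eqref{eq:G1}. A finite-difference sketch with an arbitrary test function (sample $\mu=1$):

```python
import math

mu = 1.0  # sample value

def F(t):                      # arbitrary smooth, positive test function
    return math.sin(t) + 2.0

def G(t):
    return (1.0 + t) ** (mu / 2.0) * F(t)

def d1(f, t, h=1e-4):
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

errs = []
for t in [0.0, 1.0, 3.0, 7.0]:
    lhs = d2(G, t) + 2.0 * d1(G, t) + mu * (2.0 - mu) / 4.0 * (1.0 + t) ** -2 * G(t)
    rhs = (1.0 + t) ** (mu / 2.0) * (
        d2(F, t) + 2.0 * d1(F, t) + mu / (1.0 + t) * (d1(F, t) + F(t))
    )
    errs.append(abs(lhs - rhs))
```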
Let us introduce a new multiplier (actually a relative of $\mathcal{m}$ in \eqref{def:m} with $\lambda=2$), defined by
\begin{equation*}
\mathcal{l} \equiv \mathcal{l}(t)
:=
\exp\left( - \frac{\mu(2-\mu)/8}{1+t} \right) .
\end{equation*}
Observe that $\mathcal{l}$ solves the ODE
\begin{equation*}
\frac{\mathcal{l}'(t)}{\mathcal{l}(t)} = \frac{\mu(2-\mu)/8}{(1+t)^2}
\end{equation*}
and satisfies the bounds
\begin{equation}\label{eq:mmbound}
0 < \mathcal{l}(0) \le \mathcal{l}(t) \le 1 ,
\end{equation}
since $0 \le \mu \le 3-n \le 2$ for $n\in\{1,2\}$.
Multiplying \eqref{eq:G1} by $\mathcal{l}(t)$ we obtain
\begin{equation*}
\mathcal{l} G'' + 2 (\mathcal{l} G)' \gtrsim \mathcal{l} \jap{t}^{-\frac{n+\mu-1}{2}} \Upupsilon(G)
\end{equation*}
and hence
\begin{equation}\label{eq:G2}
(\mathcal{l} G)'' + 2 (\mathcal{l} G)' \gtrsim \mathcal{l}''G + 2\mathcal{l}'G' + \mathcal{l} \jap{t}^{-\frac{n+\mu-1}{2}} \Upupsilon(G).
\end{equation}
We now want to get rid of the first two terms on the right-hand side of the above inequality. Let us consider another multiplier $\upvarpi$, defined by
\begin{equation*}
\upvarpi(t) := (1+t) \exp\left( \frac{\mu(2-\mu)/16}{1+t} \right),
\end{equation*}
which satisfies the ODE
\begin{equation*}
\frac{\upvarpi'(t)}{\upvarpi(t)}
=
-\frac12
\frac{\mathcal{l}''(t)}{\mathcal{l}'(t)}
=
\frac{1}{1+t} - \frac{\mu(2-\mu)/16}{(1+t)^2}
.
\end{equation*}
It is straightforward to check that
\begin{equation*}
\mathcal{l}'' G + 2 \mathcal{l}' G' = \frac{2}{\upvarpi} \frac{\text{d}}{\text{d}t} \left\{ \upvarpi \mathcal{l}' G \right\},
\end{equation*}
and so, integrating the above identity by parts, we obtain
\begin{equation*}
\int_0^t \left[\mathcal{l}'' G + 2 \mathcal{l}' G' \right] \text{\,d}s
=
2\mathcal{l}'(t)G(t) - 2\mathcal{l}'(0)G(0) + 2 \int_0^t \frac{\upvarpi'}{\upvarpi} \mathcal{l}' G \text{\,d}s
.
\end{equation*}
Noting that $G$, $\mathcal{l}'$ and $\frac{\upvarpi'}{\upvarpi}$ are positive functions, we have
\begin{equation}\label{eq:G3}
\int_0^t \left[\mathcal{l}'' G + 2 \mathcal{l}' G' \right] \text{\,d}s
\ge
- 2\mathcal{l}'(0)G(0)
=
- \frac{\mu(2-\mu)}{4} \mathcal{l}(0) F(0).
\end{equation}
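The identity $\mathcal{l}''G+2\mathcal{l}'G' = \frac{2}{\upvarpi}\frac{\text{d}}{\text{d}t}\{\upvarpi\mathcal{l}'G\}$ behind \eqref{eq:G3} only uses $\upvarpi'/\upvarpi=-\mathcal{l}''/(2\mathcal{l}')$, and can be confirmed numerically for an arbitrary smooth $G$ (a finite-difference sketch with a sample $\mu$):

```python
import math

mu = 1.0
c = mu * (2.0 - mu) / 8.0

def ell(t):
    return math.exp(-c / (1.0 + t))

def varpi(t):
    return (1.0 + t) * math.exp(c / (2.0 * (1.0 + t)))

def G(t):                      # arbitrary smooth test function
    return math.cos(t) + 2.0

def d1(f, t, h=1e-4):
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

def triple(t):                 # varpi * ell' * G, with ell' by central difference
    return varpi(t) * d1(ell, t) * G(t)

errs = [
    abs(d2(ell, t) * G(t) + 2.0 * d1(ell, t) * d1(G, t)
        - 2.0 / varpi(t) * d1(triple, t))
    for t in [0.0, 1.0, 4.0]
]
```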
Integrating \eqref{eq:G2} with respect to the time and taking into account \eqref{eq:G3}, it follows
\begin{equation}\label{eq:G4}
(\mathcal{l} G)'(t) + 2 (\mathcal{l} G)(t)
\gtrsim
g_0
+
\int_{0}^{t}
\mathcal{l}(s)
\jap{s}^{-\frac{n+\mu-1}{2}} \Upupsilon(G(s)) \text{\,d}s
,
\end{equation}
where
\begin{equation*}
\begin{split}
g_0
:=&\,
(\mathcal{l} G)'(0) + 2 (\mathcal{l} G)(0)
-
\frac{\mu(2-\mu)}{4} \mathcal{l}(0) F(0)
\\
=&\,
\mathcal{l}(0)
\left[
F'(0)
+
F(0)
+
\frac{\mu^2+2\mu+8}{8} F(0)
\right]
>
0
.
\end{split}
\end{equation*}
Multiplying \eqref{eq:G4} by $e^{2t}$, integrating and multiplying by $e^{-2t}$, we get
\begin{equation*}
\begin{split}
\mathcal{l}(t) G(t)
\gtrsim&\,
\mathcal{l}(0) G(0) e^{-2t}
+
g_0
\frac{1-e^{-2t}}{2}
\\
&+
e^{-2t}
\int_0^t
e^{2s}
\int_0^s
\mathcal{l}(r)
\jap{r}^{-\frac{n+\mu-1}{2}}
\Upupsilon(G(r))
\text{\,d}r\text{\,d}s,
\end{split}
\end{equation*}
and equivalently, using \eqref{eq:mmbound},
\begin{equation*}
\begin{split}
G(t)
\gtrsim&\,
F(0) e^{-2t}
+
\left[
F'(0)
+
\frac{\mu^2+2\mu+16}{8}
F(0)
\right]
\frac{1-e^{-2t}}{2}
\\
&+
e^{-2t}
\int_0^t
e^{2s}
\int_0^s
\jap{r}^{-\frac{n+\mu-1}{2}}
\Upupsilon(G(r))
\text{\,d}r\text{\,d}s
.
\end{split}
\end{equation*}
At this stage, the conclusion of the proof follows exactly as in Subsection~\ref{subsec:multiplier}. Namely, introduce the auxiliary function $\overline{G} \equiv \overline{G}(t)$ defined by
\begin{equation*}
\begin{split}
\overline{G}(t)
:=&\,
\frac{F(0)}{2} e^{-2t}
+
\left[
F'(0)
+
\frac{\mu^2+2\mu+16}{8}
F(0)
\right]
\frac{1-e^{-2t}}{2}
\\
&+
e^{-2t}
\int_0^t
e^{2s}
\int_0^s
\jap{r}^{-\frac{n+\mu-1}{2}}
\Upupsilon(G(r))
\text{\,d}r\text{\,d}s
\end{split}
\end{equation*}
and note that
\begin{equation*}
G(t) \gtrsim \frac{F(0)}{2} e^{-2t} + \overline{G}(t) \ge \overline{G}(t) >0.
\end{equation*}
Moreover $\overline{G}$ satisfies
\begin{equation*}
\overline{G}''(t) + 2 \overline{G}'(t)
=
\jap{t}^{-\frac{n+\mu-1}{2}} \Upupsilon(G(t))
\gtrsim
\jap{t}^{-\frac{n+\mu-1}{2}} \Upupsilon(\overline{G}(t)),
\end{equation*}
together with
\begin{equation*}
\begin{aligned}
\overline{G}(0) &= \frac{F(0)}{2} > 0
\\
\overline{G}'(0) &= F'(0) + F(0) + \frac{\mu(\mu+2)}{8}F(0) >0
.
\end{aligned}
\end{equation*}
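For completeness, these properties can be checked directly. Setting $f(r):= \jap{r}^{-\frac{n+\mu-1}{2}} \Upupsilon(G(r))$ and
\begin{equation*}
I(t) := e^{-2t} \int_0^t e^{2s} \int_0^s f(r) \text{\,d}r\text{\,d}s,
\end{equation*}
one has $I'(t)+2I(t)=\int_0^t f(r)\text{\,d}r$, hence $I''(t)+2I'(t)=f(t)$, while the remaining terms in the definition of $\overline{G}$ solve the homogeneous equation $h''+2h'=0$. Moreover, since $I(0)=I'(0)=0$,
\begin{equation*}
\overline{G}'(0)
=
-F(0)+F'(0)+\frac{\mu^2+2\mu+16}{8} F(0)
=
F'(0) + F(0) + \frac{\mu(\mu+2)}{8}F(0).
\end{equation*}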
Another application of Lemma~\ref{lem:lizhou-var} concludes our proof.
\section{Introduction}
By using the boundary state formalism, all
properties of D-branes can be extracted.
In this formalism a D-brane is completely represented in
terms of closed string states, together with the
internal fields, tension and dynamical variables of the brane. Hence,
a D-brane appears as a source for
emitting (absorbing) all closed string states.
The interaction of D-branes is obtained
by overlapping two boundary states, associated with the
branes, through the closed string propagator.
This convenient formalism has therefore been applied to various
configurations of D-branes \cite{1}-\cite{21}.
Among the different configurations of branes,
the setups with fractional D-branes exhibit some
appealing behaviors \cite{17}-\cite{24}.
Fractional branes appear
in various parts of string theory and M-theory.
For example, they are useful tools for demonstrating the
gauge/gravity correspondence \cite{24}, and
dynamical fractional branes provide an
explicit starting point for defining
Matrix theory \cite{25, 26}.
On the other hand, compactified D-branes
have considerable applications in string theory.
Besides, D-branes with background
fields possess various interesting properties.
For example, these fields drastically control the
interactions of the branes \cite{8}-\cite{15}, and they
influence the closed strings emitted and absorbed
by the branes.
Fractional branes, wrapped branes and background fields
motivated us to study a configuration of
dynamical fractional-wrapped branes with
background fields.
In this paper we use the method of boundary state to
obtain the interaction amplitude between two parallel
fractional-wrapped bosonic D$p$-branes with
background fields and dynamics.
We introduce the background field $B_{\mu\nu}$,
internal $U(1)$ gauge potentials
and internal open string tachyon fields in
the worldvolumes of the branes.
In addition, the branes of our setup are dynamical,
i.e. they rotate and move within their volumes.
For the background spacetime in the twisted
sector $\mathcal{T}$ we shall apply the following topological structure
\begin{eqnarray}
T^n \times \mathbb{R}^{1, d-n-5}
\times\mathbb{C}^{2}/\mathbb{Z}_{2}\;\;,\;\;
n \in \{0,1,\ldots,d-5\}.
\nonumber
\end{eqnarray}
An arbitrary torus from the set
$\{T^n|n =0,1,\ldots,d-5\}$ will be considered.
Therefore, our configuration represents a generalized setup.
We shall demonstrate that the twisted sector
does not contribute to the long-range force,
i.e. the interaction of the distant
branes completely comes from the untwisted sector
$\mathcal{U}$.
This paper is organized as follows. In Sec. 2, we compute
the boundary states corresponding to a
rotating and moving fractional-wrapped D$p$-brane
with background and internal fields.
In Sec. 3.1, the interaction amplitude for two parallel
D$p$-branes
will be obtained. In Sec. 3.2, the contribution of
the massless states of the closed string to the
interaction amplitude will be extracted.
Section 4 is devoted to the conclusions.
\section{The boundary states corresponding to a D$p$-brane}
We start by calculating the boundary states,
associated with a fractional-wrapped D$p$-brane.
The $d$-dimensional background spacetime
contains a toroidal compact part,
and for the twisted sector includes a non-compact orbifold part
$\mathbb{C}^{2}/\mathbb{Z}_{2}$. The $\mathbb{Z}_2$ group
acts on the orbifold directions $\{x^a|a= d-4, d-3, d-2, d-1\}$.
We begin with the string action
\begin{eqnarray}
S=&-&\frac{1}{4\pi\alpha'}\int_{\Sigma} d^2\sigma
\left(\sqrt{-g} g^{ab}G_{\mu\nu} \partial_{a}X^\mu
\partial_b X^{\nu} +\epsilon^{ab}B_{\mu\nu}\partial_a X^{\mu}
\partial_b X^{\nu}\right)
\nonumber\\
&+&\frac{1}{2\pi\alpha'}\int_{\partial\Sigma}d\sigma\left(A_{\alpha}
\partial_{\sigma}X^{\alpha}
+\omega_{\alpha\beta}J^{\alpha\beta}_{\tau}
+T^2\left( X^{\alpha}\right) \right)~,
\end{eqnarray}
where $\alpha , \beta \in \{0,1,\ldots,p\}$
represent the worldvolume directions of the brane,
the metrics of the worldsheet and spacetime are $g_{ab}$
and $G_{\mu\nu}$, $\Sigma$ indicates the worldsheet
of closed string and $\partial\Sigma $ is its boundary.
Here we take the flat spacetime with the signature
$G_{\mu\nu}=\eta_{\mu\nu}={\rm diag}(-1,1,\ldots,1 )$
and a constant Kalb-Ramond field $B_{\mu\nu}$.
The profile of the tachyon field is chosen as
$T^2(X) = \frac{1}{2}U_{\alpha \beta}X^{\alpha}X^{\beta}$
with the constant symmetric matrix $U_{\alpha \beta}$
\cite{27, 28}.
For the internal gauge potential we choose the gauge $A_{\alpha}=
-\frac{1}{2}F_{\alpha \beta}X^{\beta}$ with
a constant field strength. The tachyon field and gauge
potential belong to the spectrum of the open string theory,
thus they appropriately appear as boundary terms.
The antisymmetric constant angular velocity $\omega_{\alpha \beta}$
shows the rotation and linear motion of the brane, and
$J^{\alpha \beta}_\tau = X^\alpha \partial_\tau X^\beta
-X^\beta \partial_\tau X^\alpha$
is the angular momentum density.
Note that the rotation and linear motion of the brane
take place inside the brane volume.
In fact, the presence of the various internal
fields singles out some preferred directions in the brane,
and hence the Lorentz symmetry in the brane worldvolume
is explicitly broken.
We should mention that adding a tachyonic mode
generally breaks the conformal invariance; however, the
conformal boundary state can still be considered at the
fixed points of the orbifold. For string actions
with tachyon fields see, e.g., Ref. \cite{23} and
references therein, and also Refs. \cite{27, 29, 30, 31, 32},
some of which contain the resultant boundary states.
Setting the variation of this action to zero yields
the equation of motion of $X^\mu$ and the following equations
for the boundary state
\begin{eqnarray}
&~&\left(\mathcal{K}_{\alpha\beta}
\partial_{\tau}X^{\beta}+\mathcal{F}_{\alpha\beta}
\partial_\sigma X^{\beta}
+ B_{\alpha I}\partial_\sigma X^I
+ U_{\alpha \beta }
X^{\beta}\right)_{\tau=0}|B_x\rangle=0~,
\nonumber\\
&~&\left(X^I-y^I\right)_{\tau=0}|B_x\rangle=0~,
\end{eqnarray}
where $\mathcal{K}_{\alpha
\beta}=\eta_{\alpha\beta}+4\omega_{\alpha\beta}$,
and the total field strength is
$\mathcal{F}_{\alpha\beta}=B_{\alpha\beta}-F_{\alpha\beta}$.
The coordinates $\{x^I|I = p+1, \ldots,d-1\}$
show the directions which are perpendicular to the brane worldvolume,
and the parameters $\{y^I|I = p+1, \ldots,d-1\}$
represent the location of the brane. Since the second equation
of Eqs. (2.2) holds for all $\sigma$, it implies
$\left(\partial_\sigma X^I\right)_{\tau=0}|B_x\rangle=0$;
hence, combining Eqs. (2.2)
eliminates the third term of the first equation.
We observe that the background fields
impose mixed boundary conditions along the brane
worldvolume.
The solution of the equation of motion
for the non-orbifold directions has the form
\begin{eqnarray}
X^{\lambda}(\sigma,\tau)&=&x^{\lambda}+2\alpha'p^{\lambda}
\tau+2L^{\lambda}\sigma~
\nonumber\\
&+&\frac{i}{2}\sqrt{2\alpha'}\sum_{m\neq 0}\frac{1}{m}
\Big{(}\alpha_m^{\lambda}e^{-2im(\tau- \sigma)}
+\tilde{\alpha}_m^{\lambda}
e^{-2im(\tau+\sigma)}\Big{)}~,\;
\end{eqnarray}
where $\lambda \in \{\alpha , I\}$ for
the untwisted sector and
$\lambda \in \{\alpha , i\}$ for the twisted one.
In the twisted sector
the set $\{x^i|i=p+1,\ldots,d-5\}$
represents the non-orbifold perpendicular directions
to the brane worldvolume.
In the solution (2.3), for the non-compact coordinates,
such as the time direction,
the quantity $L^{\lambda}$ vanishes identically,
while for the circular directions we have
\begin{eqnarray}
&~&L^{\lambda}=N^{\lambda}R^{\lambda}~,\;\;\;\;\;\;
N^{\lambda}\in \mathbb{Z}~,
\nonumber\\
&~&p^{\lambda}=\dfrac{M^{\lambda}}{R^{\lambda}}~,
\;\;\;\;\;\;\;\;\; M^{\lambda}\in \mathbb{Z}~,
\end{eqnarray}
where $N^{\lambda}$ is the winding number
and $M^{\lambda}$ is the momentum number
of a closed string state, and $R^{\lambda}$
is the compactification radius
of the compact direction $x^{\lambda}$.
Now look at the orbifold directions.
The orbifold $\mathbb{C}^{2}/\mathbb{Z}_{2}$
is non-compact, thus, its fixed points define a
$(d-4)$-dimensional hyperplane
at $x^a =0$. As the D$p$-brane has to sit
on this hyperplane, and as the closed string is emitted
(absorbed) at the brane position,
the orbifold coordinates of the closed string
possess the solution
\begin{equation}
X^a(\sigma,\tau)=\frac{i}{2}\sqrt{2\alpha'}
\sum_{r\in\mathbb{Z}+\frac{1}{2}}
\frac{1}{r}\Big{(}\alpha_r^{a}
e^{-2ir(\tau- \sigma)}+\tilde{\alpha}_r^a
e^{-2ir(\tau+\sigma)}\Big{)}.
\end{equation}
In the twisted sector the solutions (2.3)
and (2.5) decompose the second
equation of (2.2) as follows
\begin{eqnarray}
&~&(X^i-y^i)_{\tau=0}|B\rangle^{\mathcal{T}}=0~,
\nonumber\\
&~&(X^a)_{\tau=0}|B\rangle^{\mathcal{T}}=0~.
\end{eqnarray}
By substituting Eqs. (2.3) and (2.5) into the boundary state equations we
obtain the following equations
\begin{eqnarray}
&~&\bigg{[}\left(\mathcal{K}_{\alpha\beta}
- \mathcal{F}_{\alpha\beta}+\dfrac{i}{2m}
U_{\alpha\beta}\right)
\alpha_{m}^{\beta}+\left(\mathcal{K}_{\alpha\beta}
+ \mathcal{F}_{\alpha\beta}
-\dfrac{i}{2m}U_{\alpha\beta}\right)
\tilde{\alpha}_{-m}^{\beta}
\bigg{]}|B_{\rm osc}\rangle^{\mathcal{T} \setminus \mathcal{U}}=0,
\nonumber\\
&~&\left( 2\alpha' \mathcal{K}_{\alpha\beta}p^{\beta}
+ 2\mathcal{F}_{\alpha\beta}
L^{\beta}+U_{\alpha\beta }x^{\beta}
\right) |B\rangle^{(0)\mathcal{T}\setminus\mathcal{U}}=0~,
\nonumber\\
&~& U_{\alpha\beta}L^{\beta}
|B\rangle^{(0)\mathcal{T}\setminus\mathcal{U}}=0,
\end{eqnarray}
for both twisted and untwisted sectors, and
\begin{eqnarray}
&~&(\alpha_{m}^{i}-\tilde{\alpha}_{-m}^{i})
|B_{\rm osc}\rangle^{\mathcal{T}}=0,
\nonumber\\
&~&(\alpha_{r}^{a}-\tilde{\alpha}_{-r}^{a})
|B_{\rm osc}\rangle^{\mathcal{T}}=0,
\nonumber\\
&~&(x^i-y^i)|B\rangle^{(0)\mathcal{T}}=0,
\nonumber\\
&~& L^{i}|B\rangle^{(0)\mathcal{T}}=0,
\end{eqnarray}
for the twisted sector, and
\begin{eqnarray}
&~&(\alpha_{m}^{I}-\tilde{\alpha}_{-m}^{I})
|B_{\rm osc}\rangle^{\mathcal{U}}=0,
\nonumber\\
&~&(x^I-y^I)|B\rangle^{(0)\mathcal{U}}=0,
\nonumber\\
&~& L^{I}|B\rangle^{(0)\mathcal{U}}=0,
\end{eqnarray}
for the untwisted sector, where we applied
$|B_x\rangle=|B\rangle^{(0)} \otimes|B_{\rm osc}\rangle $.
Since the fractional brane is stuck at the fixed
points of the orbifold, the state $|B\rangle^{(0)}$
does not receive any contribution
from the orbifold directions.
According to the third equation of Eqs. (2.7) the tachyon field
plays a crucial role for winding of closed strings
around the compact directions of the brane.
This equation implies that if the tachyon matrix
is invertible we obviously obtain the zero winding numbers
$\{N^{\bar \alpha} =0|{\bar \alpha}=1,2,\ldots,p\}$,
and hence closed strings cannot wrap
around the circular directions of the brane. If the
tachyon matrix possesses a null determinant, the
vector $\{L^{\bar \alpha}|{\bar \alpha}=1,2,\ldots,p\}$
can be nonzero, and therefore such wrapping of closed strings is
allowed. If the perpendicular direction
$x^i$ (or $x^I$) is non-compact, the last equation of Eqs. (2.8)
(or (2.9)) becomes trivial, i.e. $L^i$ (or $L^I$)
vanishes identically;
if $x^i$ (or $x^I$) is compact, we observe
that closed strings cannot wrap around it,
that is, $N^i =0$ (or $N^I =0$).
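As an illustration of this dichotomy, the winding constraint of Eqs. (2.7) can be sketched in a few lines of linear algebra; the tachyon matrices below are hypothetical sample values (for $p=2$), not taken from the text:

```python
import numpy as np

# Sketch of the constraint U_{alpha beta} L^beta = 0 from Eqs. (2.7),
# for hypothetical 3x3 tachyon matrices (p = 2).
U_invertible = np.diag([2.0, 1.5, 1.1])
U_singular = np.array([[1.0, 1.0, 0.0],
                       [1.0, 1.0, 0.0],
                       [0.0, 0.0, 2.0]])

# Invertible tachyon matrix: U l = 0 forces l = 0, so all winding numbers vanish.
assert np.linalg.matrix_rank(U_invertible) == 3
assert np.allclose(np.linalg.solve(U_invertible, np.zeros(3)), 0.0)

# Singular tachyon matrix: ker U is nontrivial, so nonzero winding vectors l
# (quantized as l^alpha = N^alpha R^alpha) are allowed.
l = np.array([1.0, -1.0, 0.0])        # a kernel vector of U_singular
assert np.allclose(U_singular @ l, 0.0)
```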
The second equation of Eqs. (2.7) leads to the following
useful relation between the eigenvalues
\begin{eqnarray}
p^{ \alpha} =-\frac{1}{2\alpha'}
\left[ \left(\mathcal{K}^{-1}U\right)^{ \alpha}_{\;\;{\beta}}
x^{\beta}+ 2\left(\mathcal{K}^{-1}
\mathcal{F}\right)^{\alpha}_{\;\;{\beta}}
\ell^{\beta}\right] ,
\end{eqnarray}
where $\ell^{\beta}$ is eigenvalue of the operator $L^{\beta}$.
We observe that any closed string state
(wrapped or unwrapped) has a spacetime momentum along the
worldvolume of the brane. This momentum includes two parts:
continuous and discrete. The former
is created by the tachyon while the latter
originates from the Maxwell field and compactification.
As we see, this momentum is partially influenced by the
rotation and linear motion of the brane.
This nonzero momentum is drastically unlike the
conventional case, in which the closed strings are
radiated perpendicular to the brane worldvolume;
for the conventional case see, e.g., Refs. \cite{7, 33, 34}.
Thus, a peculiar potential, induced by the background
fields, the brane dynamics and the compactification,
acts on the center-of-mass positions of the
emitted closed strings.
If the brane directions are non-compact,
or if they are compact but the tachyon
matrix is invertible, Eq. (2.10) reduces to
\begin{eqnarray}
p^{\alpha} =-\frac{1}{2\alpha'}
\left(\mathcal{K}^{-1}U\right)^{\alpha}_{\;\;{\beta}}
x^{\beta}.
\end{eqnarray}
By quantum mechanical techniques, especially by
using the commutation relations between $x^\alpha$
and $p^\beta$, and between $x'^\alpha$ and
$L^\beta/\alpha'$, where $x'^\alpha =x^\alpha_L-x^\alpha_R$
and $L^\alpha =\alpha'(p^\alpha_L-p^\alpha_R )$,
the zero-mode part of the boundary state in the twisted
sector takes the form
\begin{eqnarray}
|B\rangle^{(0)\mathcal{T}}&=&\frac{T_p}{2\sqrt{\det(U/2)}}
\int_{-\infty}^{\infty}
\exp\bigg{[}i\alpha' \sum_{\alpha \neq \beta}
\left(U^{-1}\mathcal{K}
+\mathcal{K}^T U^{-1}\right)_{\alpha\beta}\;
p^{\alpha}p^{\beta}
\nonumber\\
&+& \frac{i}{2}\alpha' \left(U^{-1}\mathcal{K}
+\mathcal{K}^T U^{-1}\right)_{\alpha \alpha}\;
(p^{\alpha})^2
+ 2i\left(U^{-1}
\mathcal{F}\right)_{\alpha \beta}\;\ell^{\alpha}p^{\beta}
\bigg{]}
\nonumber\\
&\times &
\prod^{d-5}_{i=p+1} \left[\delta \left({x}^{i}-y^{i}\right)
|p^{i}_{L}=p^{i}_{R}=0 \rangle \right] \prod^{p}_{\alpha =0}
\left(|p^{\alpha}\rangle dp^{\alpha}\right)~
\label{zer}.
\end{eqnarray}
The disk partition function induces the normalization
factor $1/\sqrt{\det(U/2)}$ \cite{35, 36}.
In the same sector, by
using the coherent state method \cite{37}, we obtain the following
boundary state for the closed string oscillators
\begin{eqnarray}
|B_{\rm osc}\rangle^{\mathcal{T}}
&=&\prod_{n=1}^{\infty}[\det{M_{(n)}}]^{-1}
\exp\left[{-\sum_{m=1}^{\infty}
\left(\frac{1}{m}\alpha_{-m}^{\lambda}S_{(m)\lambda\lambda'}
\tilde{\alpha}_{-m}^{\lambda'}\right)}\right]
\nonumber\\
&\times&\exp\left[-\sum_{r=1/2}^{\infty}
\left(\frac{1}{r}\alpha_{-r}^{a}\tilde{\alpha}_{-r}^{a}\right)\right]
|0\rangle_\alpha|0\rangle_{\tilde{\alpha}}~,
\label{aos}
\end{eqnarray}
where $\lambda , \lambda' \in \{\alpha , i\}$, and
the matrix $S_{(m)}$ is defined by
\begin{eqnarray}
S_{(m)\lambda\lambda'}&=&\left(Q_{(m)\alpha \beta} \equiv (M_{(m)}^{-1}
N_{(m)})_{\alpha\beta},-\delta_{ij}\right)~,
\nonumber\\
M_{(m)\alpha\beta}&=&\mathcal{K}
_{\alpha\beta}- \mathcal{F}_{\alpha\beta}
+\dfrac{i}{2m}U_{\alpha\beta}~,
\nonumber\\
N_{(m)\alpha\beta}&=&\mathcal{K}_{\alpha\beta}
+ \mathcal{F}_{\alpha\beta}-\dfrac{i}{2m}U_{\alpha\beta}~.
\end{eqnarray}
Expansion of the exponential parts of Eq. (2.13)
clarifies that the brane couples to the whole
closed string spectrum in the twisted sector.
The disk partition function gives the normalizing
factor $\prod_{n=1}^{\infty}[\det{M_{(n)}}]^{-1}$
\cite{1, 16, 36}.
More precisely, the quadratic forms of the tachyon
profile and rotating-moving term, accompanied by the
gauge $A_\alpha = -\frac{1}{2}F_{\alpha\beta}X^\beta$,
give a quadratic form to the boundary part of the action
(2.1). Thus, there exists a Gaussian path integral, which
induces the prefactors of Eqs. (2.12) and (2.13), and
also the prefactors of the next Eqs. (2.15) and (2.16).
In a similar fashion, the untwisted sector $\mathcal{U}$
has the following boundary states for the zero-mode
part and the oscillating part
\begin{eqnarray}
|B\rangle^{(0) \mathcal{U}}&=&\frac{T_p}{2\sqrt{\det(U/2)}}
\int_{-\infty}^{\infty}
\exp\bigg{[}i\alpha' \sum_{\alpha \neq \beta}
\left(U^{-1}\mathcal{K}
+\mathcal{K}^T U^{-1}\right)_{\alpha\beta}\;
p^{\alpha}p^{\beta}
\nonumber\\
&+& \frac{i}{2}\alpha' \left(U^{-1}\mathcal{K}
+\mathcal{K}^T U^{-1}\right)_{\alpha \alpha}\;
(p^{\alpha})^2
+ 2i\left(U^{-1}
\mathcal{F}\right)_{\alpha \beta}\;\ell^{\alpha}p^{\beta}
\bigg{]}
\nonumber\\
&\times& \prod^{d-1}_{I=p+1} \left[\delta \left({x}^{I}-y^{I}\right)
|p^{I}_{L}=p^{I}_{R}=0 \rangle \right] \prod^{p}_{\alpha =0}
\left(|p^{\alpha}\rangle dp^{\alpha}\right)~
\label{zer},
\end{eqnarray}
\begin{eqnarray}
|B_{\rm osc}\rangle^{\mathcal{U}}
&=&\prod_{n=1}^{\infty}[\det{M_{(n)}}]^{-1}
\exp\left[{-\sum_{m=1}^{\infty}
\left(\frac{1}{m}\alpha_{-m}^{\lambda}S_{(m)\lambda\lambda'}
\tilde{\alpha}_{-m}^{\lambda'}\right)}\right]
|0\rangle_\alpha|0\rangle_{\tilde{\alpha}}~,
\label{aos}
\end{eqnarray}
where $\lambda , \lambda' \in \{\alpha , I\}$, and
$S_{(m)\lambda\lambda'}=\left(Q_{(m)\alpha \beta},
-\delta_{IJ}\right)$.
For obtaining Eq. (2.15) we have used methods of quantum
mechanics, especially the commutation relations between the
position coordinates and their corresponding momenta,
and for Eq. (2.16) we have applied the coherent
state method. As expected, by setting all
linear and angular velocities
to zero the above boundary states reduce to those of
simpler configurations of D-branes, e.g. see Ref. \cite{12}.
Besides, by decompactifying the compact directions and
turning off the background fields and velocities we
recover even simpler boundary states,
e.g. see Refs. \cite{6, 24, 34, 38}.
Look at the first equation of Eqs. (2.7).
The coherent state method on the oscillators
$\{\alpha^\beta_m \;,\;{\tilde \alpha}^\beta_{-m}
| m=1,2,3,\ldots\}$
introduces the matrix $Q_{(m)\alpha \beta}$ in Eqs.
(2.13) and (2.16), while this method on the set
$\{{\tilde \alpha}^\beta_{m}\;,\;\alpha^\beta_{-m}
| m=1,2,3,\ldots\}$
recasts these boundary states with the matrix
$\left([Q^{-1}_{(-m)}]^T\right)_{\alpha \beta}$.
Equality of these matrices leads to the following conditions
\begin{eqnarray}
&~& \eta U -U\eta +4(\omega U+ U\omega ) =0 ,
\nonumber\\
&~& \eta \mathcal{F} - \mathcal{F} \eta
+4(\omega \mathcal{F} +\mathcal{F}\omega)=0.
\end{eqnarray}
These equations are independent of the mode numbers.
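As a sanity check, the equality $Q_{(m)} = \left([Q^{-1}_{(-m)}]^T\right)$ can be verified numerically for sample matrices satisfying the conditions (2.17). The values below are hypothetical (with $p=2$ and $\omega =0$, so that (2.17) reduces to $[\eta, U]=[\eta, \mathcal{F}]=0$):

```python
import numpy as np

# Numerical sketch of the consistency conditions (2.17), with hypothetical
# sample values; p = 2, so the worldvolume matrices are 3x3.  For simplicity
# omega = 0, hence (2.17) requires U and the total field strength F_tot to
# commute with eta = diag(-1, 1, 1).
eta = np.diag([-1.0, 1.0, 1.0])
omega = np.zeros((3, 3))
K = eta + 4 * omega                           # K_{ab} = eta_{ab} + 4 omega_{ab}
F_tot = np.array([[0.0, 0.0, 0.0],            # antisymmetric, commutes with eta
                  [0.0, 0.0, 0.7],
                  [0.0, -0.7, 0.0]])
U = np.array([[2.0, 0.0, 0.0],                # symmetric, commutes with eta
              [0.0, 1.5, 0.3],
              [0.0, 0.3, 1.1]])

def Q(m):
    """Q_(m) = M_(m)^{-1} N_(m), with M, N as in Eq. (2.14)."""
    M = K - F_tot + 1j / (2 * m) * U
    N = K + F_tot - 1j / (2 * m) * U
    return np.linalg.solve(M, N)

# Equality of the two coherent-state computations: Q_(m) = ([Q_(-m)]^{-1})^T
for m in (1, 2, 3):
    assert np.allclose(Q(m), np.linalg.inv(Q(-m)).T), f"condition fails for m={m}"
```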
Finally we shall use the following known boundary state,
corresponding to the conformal ghost fields \cite{16, 34},
\begin{equation}
|B_{\rm gh}\rangle=\exp{\left[\sum_{m=1}^{\infty}(c_{-m}\tilde{b}_{-m}
-b_{-m} \tilde{c}_{-m})\right]}\frac{c_0+\tilde{c}_0}{2}
|q=1\rangle|\tilde{q}=1\rangle~.
\end{equation}
This state is
independent of the orbifold projection, the toroidal
compactification, the rotation and linear motion of the brane,
and the background fields.
The total boundary state in the bosonic
string theory, for each sector, is given by
\begin{equation}
|B\rangle^{\mathcal{T}\setminus \mathcal{U}}_{\rm Total}
=|B_{\rm osc}\rangle^{\mathcal{T}\setminus \mathcal{U}}
\otimes|B\rangle^{(0)\mathcal{T}\setminus \mathcal{U}}
\otimes|B_{\rm gh}\rangle~.
\end{equation}
Comparing the boundary states (2.12), (2.13),
(2.15) and (2.16) with the boundary states of
a bare brane, i.e. a stationary brane without
any background and internal fields,
motivates the definition of the following
effective tension for the dressed brane
\begin{equation}
{\mathcal{T}}_p =
\frac{T_p}{\sqrt{\det(U/2)}}
\bigg{|}\prod_{n=1}^{\infty}[\det{M_{(n)}}]^{-1}\bigg{|}.
\end{equation}
\section{Interaction between two D$p$-branes}
The interactions of branes appear
in many physical phenomena and in major problems of physics.
For example, in the brane-world scenario these
interactions have been introduced as the origin
of inflation \cite{35, 39}.
Besides, the interaction and collision of two D-branes
can create a Big-Bang \cite{40}.
In addition, in the early universe these
interactions have been considered
for describing the radiation-dominated era.
Also, there are D$p$-branes that overlap with our D3-brane
and hence interact with it; these interactions induce
additional gravity within our world \cite{41, 42}.
Furthermore, the brane interactions
clarify some corners of the gauge/gravity correspondence \cite{24}.
Finally, the gravitational interaction between the branes
describes the creation of dark matter \cite{43}.
There are many other notable applications of
such interactions, e.g. see Refs. \cite{36, 44, 45, 46}.
The interaction between two D-branes can be
described by the one-loop graph of an open string worldsheet
\cite{47}-\cite{49}, or by the tree-level diagram of a closed
string worldsheet \cite{1}-\cite{21}.
In the second approach each brane couples to
all closed string states through its corresponding boundary state.
This is due to the fact that
all properties of a D-brane are encoded in its boundary state.
Thus, in the closed string channel a closed string is radiated
from one brane, propagates
toward the other brane, and finally is absorbed by it.
Therefore, to obtain the interaction amplitude of
two D$p$-branes we should calculate the overlap of their
corresponding boundary states via the closed
string propagator, i.e.,
\begin{eqnarray}
\mathcal{A}=\langle
B_1|D|B_2\rangle~,
\end{eqnarray}
where the total boundary states of the branes
should be used, and ``$D$'' is the closed string propagator,
constructed from the closed string Hamiltonian.
For the twisted sector the Hamiltonian is
\begin{eqnarray}
H^{\mathcal{T}} &=& H_{\rm ghost}+\alpha'p^{\lambda}p_{\lambda}
+ 2\left(\sum_{n=1}^{\infty}(\alpha_{-n}^{\lambda}
\alpha_{n\lambda}
+\tilde{\alpha}_{-n}^{\lambda}\tilde{\alpha}_{n\lambda})
+\sum_{r=1/2}^{\infty}
(\alpha_{-r}^{a}\alpha_{ra}
+\tilde{\alpha}_{-r}^{a}\tilde{\alpha}_{ra})\right)
\nonumber\\
&-& \frac{d-6}{6}~,\;\;\;\lambda \in \{\alpha , i\}.
\end{eqnarray}
For the untwisted sector there is
\begin{eqnarray}
H^{\mathcal{U}} = H_{\rm ghost}+\alpha'p^{\lambda}p_{\lambda}
+ 2\sum_{n=1}^{\infty}(\alpha_{-n}^{\lambda}
\alpha_{n\lambda}+\tilde{\alpha}_{-n}^{\lambda}\tilde{\alpha}_{n\lambda})
-d/6~,\;\;\;\lambda \in \{\alpha , I\}.
\end{eqnarray}
The difference between the ground state energies of
the two sectors is a consequence of the
orbifold projection on the twisted sector. These
ground state energies
have significant effects on the interaction of the branes.
\subsection{Interaction amplitude: arbitrary distance of the branes}
According to the orbifold projection
the total interaction amplitude has two parts: one part
from the untwisted sector and the other part
from the twisted sector
\begin{eqnarray}
\mathcal{A}^{\rm Total}&=&\mathcal{A}^{\mathcal{T}}
+\mathcal{A}^{\mathcal{U}}~.
\end{eqnarray}
After a lengthy calculation we obtain the following
amplitude for the twisted sector
\begin{eqnarray}
\mathcal{A}^{\mathcal{T}}&=&\frac{T_p^2\alpha'V_{p+1}}{4(2\pi)^{d-p-5}}
\frac{\prod_{n=1}^{\infty}[\det(M^\dagger_{(n)1}
M_{(n)2})]^{-1}}{\sqrt{\det{(U_1/2)}\det{(U_2/2)}}}
\int_{0}^{\infty}dt\bigg{[}e^{(d-8)t/6}\left(
\sqrt{\frac{\pi}{\alpha' t}}\right)^{d_{i_n}}
\nonumber\\
&\times& \exp\left( {-\frac{1}{4\alpha't}
\sum_{i_n}{\left(y_{1}^{i_n}-y_{2}^{i_n}\right)^2}}
\right)\prod_{i_{c}}\Theta_{3}
\left(\dfrac{y_{1}^{i_{c}}-y_{2}^{i_{c}}}
{2\pi R_{i_{c}}} \bigg{|}
\dfrac{i\alpha' t}{\pi R_{i_{c}}^{2}}\right)
~\nonumber\\
&\times& [\det \mathcal{Z}(t)]^{-1/2}
\sum_{\{N^{\alpha_c}\}}\exp\left(2 W^{\dagger}
\mathcal{Z}(t)^{-1}W\right)
~\nonumber\\
&\times& \prod_{n=1}^\infty \bigg{(}
\det[\mathbf{1}-Q^\dagger_{(n)1}
Q_{(n)2}e^{-4nt}]^{-1}~
\left(1- e^{-4nt}\right)^{p-d+7}
\left(1- e^{-2(2n-1)t}\right)^{-4}\bigg{)}\bigg{]},
\end{eqnarray}
where $V_{p+1}$ is the common worldvolume of the branes, and
\begin{eqnarray}
&~& W_{\alpha} = (U_1^{-1}\mathcal{F}_1)_{\beta_{c}
\alpha}\ell^{\beta_{c}}+(U_2^{-1}\mathcal{F}_2)
_{\beta_{c}\alpha}\ell^{\beta_{c}}~,
\nonumber\\
&~& {\mathcal{Z}(t)}_{\alpha\beta} =
\begin{cases}
2 t\alpha'\delta_{\alpha\beta}
+i \alpha'[(U_1^{-1}\mathcal{K}_1
+ \mathcal{K}^T_1 U_1^{-1})
-(U_2^{-1}\mathcal{K}_2+ \mathcal{K}^T_2
U_2^{-1})]_{\alpha\beta},
& \mbox{if }\alpha=\beta \\
2i \alpha'[(U_1^{-1}\mathcal{K}_1
+ \mathcal{K}^T_1 U_1^{-1})
-(U_2^{-1}\mathcal{K}_2+ \mathcal{K}^T_2
U_2^{-1})]_{\alpha\beta}, & \mbox{if }\alpha\neq\beta.
\end{cases}
\end{eqnarray}
Besides, we decomposed each set of the directions into
the compact and non-compact subsets, i.e.
\begin{eqnarray}
\lbrace i = p+1 , \ldots, d-5
\rbrace=\lbrace i_{n}
\rbrace\cup\lbrace i_{c}\rbrace \;\;\;\;,\;\;\;\;
\lbrace \alpha = 0, \ldots, p
\rbrace=\lbrace
\alpha_{n} \rbrace\cup\lbrace \alpha_{c} \rbrace,
\nonumber
\end{eqnarray}
where the index ``c'' (``n'') stands for
``compact'' (``non-compact'').
Thus, $d_{i_n}$ is the number of the directions $\{x^{i_n}\}$.
The factor $\prod_{n=1}^\infty(1- e^{-4nt})^{p-d+7}$
originates from the oscillators of the non-orbifold perpendicular
directions and the conformal ghosts, and the last factor
of the last line is the contribution of the orbifold directions.
The interaction amplitude in the untwisted sector is given by
\begin{eqnarray}
\mathcal{A}^{\mathcal{U}}&=&\frac{T_p^2\alpha'V_{p+1}}{4(2\pi)^{d-p-1}}
\frac{\prod_{n=1}^{\infty}[\det(M^\dagger_{(n)1}
M_{(n)2})]^{-1}}{\sqrt{\det{(U_1/2)}\det{(U_2/2)}}}
\int_{0}^{\infty}dt\bigg{[}e^{(d-2)t/6}\left(
\sqrt{\frac{\pi}{\alpha' t}}\right)^{d_{I_{n}}}
\nonumber\\
&\times& \exp\left( {-\frac{1}{4\alpha't}
\sum_{I_n}{\left(y_{1}^{I_n}-y_{2}^{I_n}\right)^2}}
\right)\prod_{I_{c}}\Theta_{3}
\left(\dfrac{y_{1}^{I_{c}}-y_{2}^{I_{c}}}
{2\pi R_{I_{c}}} \bigg{|}
\dfrac{i\alpha' t}{\pi R_{I_{c}}^{2}}\right)
~\nonumber\\
&\times& [\det \mathcal{Z}(t)]^{-1/2}
\sum_{\{N^{\alpha_c}\}}\exp\left(2 W^{\dagger}
\mathcal{Z}(t)^{-1}W\right)
~\nonumber\\
&\times& \prod_{n=1}^\infty \bigg{(}
\det[\mathbf{1}-Q^\dagger_{(n)1}
Q_{(n)2}e^{-4nt}]^{-1}~
\left(1- e^{-4nt}\right)^{p-d+3}\bigg{)}\bigg{]},
\end{eqnarray}
where $\lbrace I = p+1 , \ldots, d-1
\rbrace=\lbrace I_{n}
\rbrace\cup\lbrace I_{c}\rbrace $, and
$d_{I_n}= {\rm dim}\;\{x^{I_n}\}$.
The factor $\prod_{n=1}^\infty(1- e^{-4nt})^{p-d+3}$
originates from the oscillators of the perpendicular
directions and the conformal ghosts.
In computing the amplitudes (3.5) and (3.7)
we encounter the factor
$\prod_{\alpha =0}^p \langle p_1^\alpha | p_2^\alpha
\rangle$. This implies that a nonzero interaction
requires the equation
\begin{eqnarray}
p_1^\alpha - p_2^\alpha =0, \;\;\;\alpha=0,1,\ldots,p.
\nonumber
\end{eqnarray}
According to Eq. (2.10) this equation leads to the following
conditions
\begin{eqnarray}
&~& \det \left(\mathcal{K}_1^{-1}U_1 - \mathcal{K}_2^{-1}U_2
\right)=0,
\nonumber\\
&~& \det \left(\mathcal{K}_1^{-1}\mathcal{F}_1 -
\mathcal{K}_2^{-1}\mathcal{F}_2 \right)=0.
\end{eqnarray}
The conditions (2.17) and (3.8)
reduce $n+(p+1)(3p+2)/2$ parameters of the theory to
$n-2+p(p+1)/2$, where ``$n$'' is the dimension of the
asymmetric torus $T^n$.
The second lines of the amplitudes (3.5) and (3.7) imply that
the interaction is exponentially
damped by the squared distance of the branes.
In the last lines of these equations
the determinants come from the
oscillators of the string coordinates
$\{X^\alpha\}$. The overall factors in front of the
integrals, which include the parameters
of the system, partially specify
the strength of the interaction.
The variety of the parameters in the setup,
i.e. the matrix elements of the Kalb-Ramond tensor,
the field strengths and the tachyon matrices,
the linear and angular velocities of the branes,
the dimensions of the spacetime and the branes,
the closed string
winding and momentum numbers, the coordinates
of the branes' locations, and the radii of the
circular directions, specifies a general interaction
amplitude $\mathcal{A}^{\rm Total}=\mathcal{A}^{\mathcal{T}}
+\mathcal{A}^{\mathcal{U}}$.
The effects of the toroidal compactification have been gathered
in $i_n$, $d_{i_n}$, $I_n$, $d_{I_n}$,
the Jacobi theta function $\Theta_3$
and the worldvolume vector $W_\alpha$. Thus,
for obtaining the interaction amplitudes
in the non-compact spacetime it is sufficient to
exert the following replacements: $i_n \rightarrow i$,
$d_{i_n} \rightarrow d_i=d-p-5$,
$\Theta_3 \rightarrow 1$
and $W \rightarrow 0$ in Eq. (3.5); and $I_n \rightarrow I$,
$d_{I_n} \rightarrow d_I=d-p-1$, $\Theta_3 \rightarrow 1$
and $W \rightarrow 0$ in Eq. (3.7).
\subsection{Interaction amplitude: large distance of the branes}
The behavior of the total interaction
amplitude at large distances between the branes is very important,
since it prominently defines the long-range force of the theory,
which is determined by
\begin{eqnarray}
\mathcal{A}_{\rm long-range}^{\rm Total}
&=& \mathcal{A}_{\rm long-range}^{\mathcal{T}}
+\mathcal{A}_{\rm long-range}^{\mathcal{U}}~.
\end{eqnarray}
In fact, this picks out the contributions of the closed
string tachyon and massless states to the interaction.
For this purpose, since the states of the graviton,
Kalb-Ramond tensor and dilaton
have zero winding and zero momentum numbers we shall impose
$\ell^{\beta_c}=0$ for every $\beta_c$.
Besides, in the critical string theory, i.e. for the
dimension $d=26$, we take the limit $t\rightarrow \infty$
in the oscillating parts of the amplitudes (3.5) and (3.7).
Since the nature of an emitted (absorbed)
closed string is independent of the locations of the
interacting branes, the position factors in
Eqs. (3.5) and (3.7) do not change.
In this limit the contributions of all massive states vanish,
while the tachyon state gives a divergent contribution.
For the twisted sector the limit is
\begin{eqnarray}
&~& {\mathop{\lim }_{t\to \infty}}
e^{3t} \prod_{n=1}^\infty \bigg{(}
\det[\mathbf{1}-Q^\dagger_{(n)1}
Q_{(n)2}e^{-4nt}]^{-1}~
\left(1- e^{-4nt}\right)^{p-d+7}
\left(1- e^{-2(2n-1)t}\right)^{-4}\bigg{)}
\nonumber\\
&~& \longrightarrow
e^{3t}+\left[21-p +{\rm Tr}\left(Q^\dagger_{(n=1)1}
Q_{(n=1)2}\right)\right]e^{-t}.
\end{eqnarray}
Thus, the interaction amplitude
of the distant branes, in the twisted sector, has the following form
\begin{eqnarray}
\mathcal{A}_{\rm long-range}^{\mathcal{T}}
&=&\frac{T_p^2\alpha'V_{{p+1}}}{4(2\pi)^{21-p}}
\frac{\prod_{n=1}^{\infty}[\det(M^\dagger_{(n)1}
M_{(n)2})]^{-1}}{\sqrt{\det{(U_1/2)}\det{(U_2/2)}}}
\int_{0}^{\infty}dt\bigg{\{}\Big{(}
\sqrt{\frac{\pi}{\alpha' t}}\Big{)}^{d_{i_n}}
~\nonumber\\
&\times& [\det \mathcal{Z}(t)]^{-1/2}
\exp\left( {-\frac{1}{4\alpha't}
\sum_{i}{\left(y_{1}^{i_n}-y_{2}^{i_n}\right)^2}}
\right)\prod_{i_{c}}\Theta_{3}
\left(\dfrac{y_1^{i_{c}}-y_2^{i_{c}}}{2\pi R_{i_{c}}}
\bigg{|} \dfrac{i\alpha' t}{\pi R_{i_{c}}^{2}}\right)
~\nonumber\\
&\times & {\mathop{\lim }_{t\to \infty}}
\left(e^{3t}+\left[21-p +{\rm Tr}
\left(Q^\dagger_{(n=1)1}
Q_{(n=1)2}\right)\right]e^{-t}\right) \bigg{\}}.
\label{tg}
\end{eqnarray}
Due to the negative mass squared of the tachyon,
the divergent part in the last line represents
the exchange of the tachyonic state.
The last bracket in Eq. (3.11) clarifies that
in the twisted sector the $\mathbb{Z}_2$
projection strongly damps the long-range force.
This is due to the fact that this
projection modifies the zero-point energy of the
Hamiltonian of this sector.
In fact, the twisted spectrum of the closed string
does not have any massless state, but contains a
tachyonic state with a modified imaginary mass.
Therefore, the vanishing long-range force in this
sector is an expected result. However, we
calculated this force to exhibit its damping form
and the divergent form of the tachyon exchange.
We should also calculate the long-time behavior
of the interaction amplitude in the untwisted sector.
By considering the following limit in the
26-dimensional spacetime
\begin{eqnarray}
&~& {\mathop{\lim }_{t\to \infty}}
e^{4t} \prod_{n=1}^\infty \bigg{(}
\det[\mathbf{1}-Q^\dagger_{(n)1}
Q_{(n)2}e^{-4nt}]^{-1}~
\left(1- e^{-4nt}\right)^{p-23}\bigg{)}
\nonumber\\
&~& \longrightarrow
e^{4t}+ 23-p +{\rm Tr}\left(Q^\dagger_{(n=1)1}
Q_{(n=1)2}\right),
\end{eqnarray}
the long-range force of the untwisted sector
takes the form
\begin{eqnarray}
\mathcal{A}_{\rm long-range}^{\mathcal{U}}
&=&\frac{T_p^2\alpha'V_{{p+1}}}{4(2\pi)^{25-p}}
\frac{\prod_{n=1}^{\infty}[\det(M^\dagger_{(n)1}
M_{(n)2})]^{-1}}{\sqrt{\det{(U_1/2)}\det{(U_2/2)}}}
\int_{0}^{\infty}dt\bigg{\{}\Big{(}
\sqrt{\frac{\pi}{\alpha' t}}\Big{)}^{d_{I_n}}
~\nonumber\\
&\times& [\det \mathcal{Z}(t)]^{-1/2}
\exp\left( {-\frac{1}{4\alpha't}
\sum_{i}{\left(y_{1}^{I_n}-y_{2}^{I_n}\right)^2}}
\right)\prod_{I_{c}}\Theta_{3}
\left(\dfrac{y_1^{I_{c}}-y_2^{I_{c}}}{2\pi R_{I_{c}}}
\bigg{|} \dfrac{i\alpha' t}{\pi R_{I_{c}}^{2}}\right)
~\nonumber\\
&\times & \left( {\mathop{\lim }_{t\to \infty}}
e^{4t}+ 23-p +{\rm Tr}
\left(Q^\dagger_{(n=1)1}
Q_{(n=1)2}\right)\right) \bigg{\}}.
\label{tgu}
\end{eqnarray}
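The limit used above follows from expanding each factor of the infinite product to first order in $e^{-4nt}$, so that only the $n=1$ term contributes at order $e^{-4t}$. This expansion can be checked numerically; in the following sketch the matrices $Q_{(n=1)1}$ and $Q_{(n=1)2}$ are random real placeholders (the actual matrices are fixed by Eqs. (2.14)), and we verify that $e^{4t}\big[\prod_n(\cdots)-1\big]$ approaches $23-p+{\rm Tr}(Q^\dagger_{(n=1)1} Q_{(n=1)2})$ at large $t$.

```python
import numpy as np

def product(t, Q1, Q2, p, nmax=40):
    """Truncation of the infinite product appearing in the long-time limit."""
    M = Q1.conj().T @ Q2
    val = 1.0
    for n in range(1, nmax + 1):
        q = np.exp(-4.0 * n * t)
        val *= (1.0 - q)**(p - 23) / np.linalg.det(np.eye(len(M)) - q * M)
    return val

p = 1                                            # e.g. two D1-branes
rng = np.random.default_rng(0)
Q1 = rng.normal(scale=0.3, size=(p + 1, p + 1))  # placeholder matrices
Q2 = rng.normal(scale=0.3, size=(p + 1, p + 1))

t = 4.0
lhs = np.exp(4.0 * t) * (product(t, Q1, Q2, p) - 1.0)
rhs = 23 - p + np.trace(Q1.conj().T @ Q2)
assert abs(lhs - rhs) < 1e-3                     # agreement up to O(e^{-4t})
print(lhs, rhs)
```

The same expansion with the exponent $21-p$ and the shifted zero-point energy reproduces the twisted-sector bracket of Eq. (3.11).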
Again the divergent part represents the exchange of
the tachyon state, while the remainder gives the
long-range force.
The amplitudes (3.11) and (3.13) demonstrate that
the orbifold projection does not
deform the total long-range force. In addition,
this projection introduced the divergence
$e^{3t}$ as the contribution of the
tachyon exchange in the twisted sector.
Besides, these amplitudes reveal that the compactification of the
brane directions does not play any role in the long-range
force.
According to Eqs. (2.14), the matrices
$Q_{(n)1}$ and $Q_{(n)2}$ contain the
$2(p+1)(2p+1)$ parameters
\begin{eqnarray}
\{\omega_{(l)\alpha \beta}, F_{(l)\alpha \beta},
B_{(l)\alpha \beta},
U_{(l)\alpha \beta} |\alpha , \beta =0,1,\ldots , p\},
\nonumber
\end{eqnarray}
with $l=1,2$ for the first and second interacting branes.
By adjusting these parameters we can obtain
\begin{eqnarray}
23-p +{\rm Tr}\left(Q^\dagger_{(n=1)1}
Q_{(n=1)2}\right)=0,
\end{eqnarray}
and hence we acquire a vanishing total long-range force.
In fact, for two D0-branes there are only the two parameters
$U_{(1)00}$ and $U_{(2)00}$, so this equation
cannot be satisfied. However, for the systems with
$p \geq 1$ there are enough parameters to
satisfy this equation.
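As a quick consistency check of this counting (the symmetry assignment, with $\omega$, $F$, $B$ antisymmetric and the tachyon matrix $U$ symmetric, is our reading of the parameter list above):

```python
def brane_parameters(p):
    """Independent entries per brane: three antisymmetric (p+1)x(p+1)
    matrices (omega, F, B) plus one symmetric matrix (U) -- an assumed
    symmetry assignment for illustration."""
    antisym = (p + 1) * p // 2
    sym = (p + 1) * (p + 2) // 2
    return 3 * antisym + sym

# two interacting branes double the count, matching 2(p+1)(2p+1)
for p in range(6):
    assert 2 * brane_parameters(p) == 2 * (p + 1) * (2 * p + 1)

print(2 * brane_parameters(0))   # → 2: only U_(1)00 and U_(2)00 for two D0-branes
print(2 * brane_parameters(1))   # → 12 parameters for two D1-branes
```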
For example, consider two parallel D1-branes.
For simplicity let $\omega_{(1)01}=\omega_{(2)01}=0$;
then Eq. (3.14) decomposes into the following equations
\begin{eqnarray}
&~&U_{(1)11} U_{(2)00}U_{(2)11}-U_{(1)00}U_{(1)11}
U_{(2)11}+U_{(1)00}U_{(1)11}U_{(2)00}
\nonumber\\
&~& -U_{(1)00}U_{(2)00}U_{(2)11}
+4U_{(1)11}-4U_{(2)11}+4U_{(2)00}-4U_{(1)00}
\nonumber\\
&~&-4U_{(1)11}\mathcal{F}^{2}_{(2)01}
+4U_{(2)11}\mathcal{F}^{2}_{(1)01}
-4U_{(2)00}\mathcal{F}^{2}_{(1)01}+4U_{(1)00}\mathcal{F}^{2}_{(2)01}
\nonumber\\
&~&-U_{(1)11} U^{2}_{(2)01}+U_{(2)11} U^{2}_{(1)01}
-U_{(2)00} U^{2}_{(1)01}+U_{(1)00} U^{2}_{(2)01}= 0~,
\nonumber\\
\nonumber\\
&~&-12U_{(1)11}U_{(2)11}-48\mathcal{F}^{2}_{(1)01}
\mathcal{F}^{2}_{(2)01}-4U_{(1)01}U_{(2)01}
-16\mathcal{F}_{(1)01}\mathcal{F}_{(2)01}
\nonumber\\
&~&-12U^{2}_{(1)01}\mathcal{F}^{2}_{(2)01}
-12U^{2}_{(2)01}\mathcal{F}^{2}_{(1)01}
-12U_{(1)00}U_{(2)00}+10U_{(1)11}U_{(2)00}
\nonumber\\
&~& +10U_{(1)00}U_{(2)11}+3U^{2}_{(1)01}U_{(2)00}U_{(2)11}
+3U^{2}_{(2)01}U_{(1)00}U_{(1)11}+10U^{2}_{(2)01}
\nonumber\\
&~&+10U^{2}_{(1)01}+40\mathcal{F}^{2}_{(2)01}
+40\mathcal{F}^{2}_{(1)01}-10U_{(1)00}U_{(1)11}
-10U_{(2)00}U_{(2)11}
\nonumber\\
&~&-3U^{2}_{(1)01}U^{2}_{(2)01}+12\mathcal{F}^{2}_{(1)01}
U_{(2)00}U_{(2)11}+12\mathcal{F}^{2}_{(2)01}U_{(1)00}U_{(1)11}
\nonumber\\
&~& -3U_{(1)00}U_{(1)11}U_{(2)00}U_{(2)11}-48= 0~.
\end{eqnarray}
\section{Conclusions}
We constructed the boundary states associated with a
non-stationary fractional-wrapped D$p$-brane
in the presence of the Kalb-Ramond
background field, an internal $U(1)$
gauge potential and an internal open string tachyon
field, in the twisted and untwisted sectors of
the orbifold projection.
We observed that the
emitted closed strings cannot wrap around the
compact directions that are perpendicular to the brane.
In addition, their wrapping around the compact
directions of the brane is controlled by the tachyon matrix.
Besides, each emitted closed string possesses a
momentum along the worldvolume of the brane.
This momentum depends on the position of the
closed string center of mass,
its winding numbers, and the parameters of the
setup. This noticeable result
clarifies that the background fields, accompanied
by the toroidal compactification and the linear
and angular velocities of the brane, induce a
remarkable potential on the emitted closed string.
For both the twisted and untwisted sectors the interaction amplitudes
of two dynamical fractional-wrapped D$p$-branes, in the
above-mentioned setup, were obtained. The multiplicity of the
parameters gave rise to a generalized
amplitude. The strength of the interaction
can be accurately adjusted via these parameters to any desirable value.
From the total interaction amplitude the total long-range force
was extracted. The
long-range force originates only from the untwisted sector;
that is, the orbifold projection quenches
the contribution of the massless states to this
interaction. By a specific adjustment of the
parameters we can eliminate the long-range force.
\section{Introduction}
Strong-field quantum electrodynamics (SFQED) unites special relativity, electrodynamics and quantum physics within one formalism \cite{euler_heisenberg}. As such it is capable of describing fundamental processes in physics to an astonishing level of precision, see, e.g., the Lamb shift \cite{Lamb} or the magnetic moment of the electron \cite{SchwingerMu}.
Additionally, the theory stimulated predictions on the fabric of time and space itself including the possibility of a decaying vacuum \cite{schwinger_1951}.
One of the most striking examples of such strong-field effects is the conversion of energy into massive particles, either through the interaction of multiple high-energy photons \cite{BreitWheeler, Narozhnyi, Brezin:1970xf, Reiss}, see also Refs.~\cite{PhysRevD.94.013010,PhysRevD.105.L071902,Mercuri-Baron:2021waq,Blackburn:2021cuq, Salgado:2021uua} on recent developments, or by having particles come into existence through stimulation by a high-intensity field \cite{sauter_1931}. Such an effect would be impossible in the context of Maxwell's equations, where light does not interact with light. Quantum electrodynamics changes this picture completely, as at high intensities light-by-light scattering, enabled through virtual particles acting as mediators, becomes possible \cite{euler_heisenberg, Weisskopf}.
Probing the scattering of light with light in detail in an earth-based laboratory, thereby putting these non-linear interaction terms on display, poises strong-field experiments to become some of the most anticipated experiments at modern laser facilities \cite{Marklund:2008gj, Dunne:2008kc, Heinzl:2008an, dipiazza_rmp_2012, Gies:2008wv, PhysRevLett.129.061802}. Prominent examples in the context of pair production are the SLAC E-340 experiment at FACET-II at the linear collider in Stanford (LCLS) \cite{FACET-II} and the LUXE experiment at DESY in Hamburg \cite{LUXE}, both of which will probe strong-field physics with unprecedented precision. For reference, a proof-of-principle experiment was already performed within SLAC E-144 \cite{Burke:1997ewA, Burke:1997ewB}.
In this regard, experiments in the regime of quantum plasmas or even beyond require a solid theoretical description in order to make a comparison between theoretical predictions and experimental data viable. As such, a variety of approaches have been introduced in the past; worldline instantons \cite{ SemiClassA, PhysRevD.73.065028, Affleck, PhysRevD.84.125023}, Monte-Carlo worldline methods \cite{Gies:2005bz}, directly solving the Dirac equation \cite{Aleksandrov:2016lxd, Ruf, PhysRevA.97.022515, PhysRevA.81.022122}, quantum kinetic theories \cite{Smolyansky:1997fcC,Vasak:1987umA,Kluger}, WKB approaches \cite{linder_prd_2015,Kim:2007pm,Oertel,Taya}, real-time lattice techniques \cite{Hebenstreit:2013baa, Hebenstreit:2013qxa}, techniques that embrace analogies \cite{Gies:2015hia} and even tools that benefit from a hybrid approach mixing semi-classical and computational methods \cite{li_prd_2017, Blinne:2015zpa,NewPaper}.
While undoubtedly important in gaining new knowledge, most of these methods are only applicable in specific regions of the full parameter space. For example, direct $n$-photon scatterings (at comparably low intensities) can be described perturbatively while for strong, slowly varying fields an analysis based on the Euler-Heisenberg Lagrangian yields good results \cite{euler_heisenberg, Kohlfurst:2017git}. The crux is the intermediate region where no obvious expansion parameter exists and different effects contribute equally, see Refs. \cite{Dittrich:2000zu, Dunne:2012vv, Gelis:2015kya, Fedotov} for in-depth reviews regarding our theoretical understanding of strong-field QED in general.
One approach that can probe different regimes of pair production is the Heisenberg-Wigner formalism \cite{Vasak:1987umA, NewPaper}, which takes into account the possibility of creating particle pairs at intermediate times \cite{PhysRevD.83.025011, PhysRevD.94.065005, PhysRevD.105.016021, Diez}. Additionally, the general particle dynamics is incorporated to all orders of $\hbar$ thus including not only the Lorentz force or the Stern-Gerlach force but also spin dynamics even beyond the BMT equations \cite{BB}.
Over the last years there was steady progress not only in understanding the intricacies of the derived transport equations, see the theses, Refs. \cite{Hebenstreit:2011pm, KohlfurstDiss, KohlfurstMag, DiezMag}, but also in finding and applying computational methods that render solving these equations feasible \cite{Hebenstreit, KohlfurstTech}. Within this manuscript we take another step in the direction of creating a system of equations that are easy-to-solve while maintaining all aspects of pair production in strong background fields.
As we want to deliver a complete picture of pair production for the specific class of transverse fields, we will derive the quantum kinetic transport equations for QED as well as scalar QED, see Secs. \ref{sec:dhw} and \ref{sec:fvhw}. In Sec. \ref{sec:transverse} we aim at providing general information on the structure of the transport equations in transverse fields. Most importantly, we derive the pseudo-differential operators incorporating the various features that make transverse fields so uniquely well suited for application within the Heisenberg-Wigner formalism. In the third section, Sec. \ref{sec:Appl}, we discuss an exemplary field configuration on the basis of the Heisenberg-Wigner formalism. Specifically, we display the transport equations for bi-frequent field configurations in the context of Dirac particles in Sec. \ref{sec:Appl_DHW} and for scalar particles in Sec. \ref{sec:Appl_FVHW}.
At the end we provide a brief conclusion regarding the main points of the manuscript, Sec. \ref{sec:Conclusion}, and state our opinion on possible further applications and future pathways of the Heisenberg-Wigner formalism, Sec. \ref{sec:Outlook}.
Throughout the manuscript we use natural units $c=\hbar = 1$.
\section{Dirac-Heisenberg-Wigner formalism}\label{sec:dhw}%
The Heisenberg-Wigner formalism is an approach to describe quantum physics in phase-space similar to classical kinetic theories. The main conceptual difference between classical and quantum systems is the uncertainty principle which prevents an interpretation in terms of particles or particle distributions in phase-space. Instead, the phase-space of quantum systems deals in terms of quasi-probabilities which can easily become negative. Only when these issues with a negative particle number are addressed, for example, by integrating out either momentum or spatial coordinates or by performing a convolution with a smearing function, an interpretation in terms of distribution functions is valid \cite{ShinRafelski}.
Despite these shortcomings, quantum phase-space formalisms provide an incredibly detailed insight into non-equilibrium statistical systems that are governed by principles of the quantum world. Incorporating Dirac (anti)-spinors and electromagnetic fields on the basis of the QED Lagrangian into the Heisenberg-Wigner formalism leads, for example, to a powerful relativistic, quantum kinetic theory capable of describing the interaction of photons with spin-$1/2$ fermions \cite{Vasak:1987umA, BB, PhysRevLett.98.025001, PhysRevE.104.015207, PhysRevE.102.043203, PhysRevE.100.023201}.
To expand on this point in more detail, the Dirac-Heisenberg-Wigner (DHW) formalism gives rise to transport equations that determine the time evolution of the underlying distribution functions, e.g., the particle number distribution. In this way, there is no need to keep track of all individual particles as instead of the dynamics of a wave packet representing one single particle the time-evolution of a macroscopic average over an ensemble of particles is considered.
An additional advantage of the Dirac-Heisenberg-Wigner formalism is that it not only takes into account the particle dynamics in an external field, it also grants access to the particles' charge distribution or the spin density distribution. Moreover, the conversion from photons to pairs of electrons and positrons is deeply integrated into the formalism thus no assumptions on particle creation rates have to be artificially implemented. In this way, it is possible to employ phase-space methods to actively search for unknown effects or even emergent phenomena in non-equilibrium quantum plasmas.
The downside of such an approach is its huge computational cost. While the formalism grants access to a variety of important quantities, meaningful results can only be achieved if the corresponding phase-space shows a resolution small enough to capture important quantum statistical details. Furthermore, the time-integration of the quantum system has to be evaluated accurately. Considering an, in general, $2n$-dimensional phase-space, with $n$ being the dimension of the system, solving the full transport equations remains a massive undertaking even for modern computers.
Nevertheless, in order to have a self-consistent manuscript we state the key points in the original derivation of the quantum transport equations in the following. We will, however, not go too much into detail about the significance of each element in the derivation. We recommend Ref. \cite{Vasak:1987umA} for an in-depth analysis of the transport equations. In Refs. \cite{BB,Hebenstreit,KohlfurstDiss,Vasak:1987umB} the further development of the formalism is laid out. Additional information regarding the Wigner function formalism can be found in Refs. \cite{Weinbub, Ochs:1998qj, Hidaka:2022dmn}.
The basis of any Heisenberg-Wigner formalism is given by the density operator
\begin{equation}
\hat {\mathcal C}_{\alpha \beta} \left( r , s \right) = \mathcal U \left(A,r,s \right) \left[ \bar {\Psi}_\beta \left( r - s/2 \right), {\Psi}_\alpha \left( r +
s/2 \right) \right], \label{equ:C}
\end{equation}
with the center-of-mass coordinate $r$ and the relative coordinate $s$. As we treat the electromagnetic field strength tensor $F_{\mu\nu}$, and thus the vector potential $A_{\mu}$, as a c-number instead of as an operator, no path ordering is required \cite{Vasak:1987umA}. Instead, we ensure gauge invariance through the implementation of a Wilson line factor
\begin{equation}
\mathcal U \left(A,r,s \right) = \exp \left( \mathrm{ie} \int_{-1/2}^{1/2} {\rm d}
\xi \ A \left(r+ \xi s \right) \, s \right). \label{equ:U}
\end{equation}
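The role of the Wilson line can be made explicit: under a gauge transformation $A_\mu \to A_\mu + \partial_\mu \chi$ the spinors acquire phases, $\Psi(x) \to {\rm e}^{-{\rm i}e\chi(x)} \Psi(x)$, while the exponent of the Wilson line shifts by
\begin{equation*}
 {\rm i}e \int_{-1/2}^{1/2} {\rm d}\xi \ s^\mu \partial_\mu \chi \left( r + \xi s \right) = {\rm i}e \left[ \chi \left( r+s/2 \right) - \chi \left( r-s/2 \right) \right],
\end{equation*}
which precisely cancels the phases of $\bar{\Psi} \left( r-s/2 \right)$ and $\Psi \left( r+s/2 \right)$ in the density operator \eqref{equ:C}.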
Here, the spinors $\Psi$ are determined by the Lagrangian for quantum electrodynamics describing charged spin-$1/2$ particles
\begin{multline}
{\mathcal L} \left( \Psi, \bar{\Psi}, A \right) = \\
\frac{1}{2} \left( {\rm i} \bar{\Psi} \gamma^{\mu} \mathcal{D}_{\mu} \Psi - {\rm i} \bar{\Psi} \mathcal{D}_{\mu}^{\dag} \gamma^{\mu} \Psi \right)
-m \bar{\Psi} \Psi - \frac{1}{4} F_{\mu \nu} F^{\mu \nu}. \label{equ:Lag}
\end{multline}
In Eq. \eqref{equ:Lag}, the quantity $m$ determines the masses of particle and anti-particle in the formalism (here, electrons and positrons), $\mathcal{D}_{\mu} = \partial_{\mu} +{\rm i} e A_{\mu} $ and $\mathcal{D}_{\mu}^{\dag} = \overset{\leftharpoonup} {\partial_{\mu}} -{\rm i} e A_{\mu} $ describe covariant derivatives and $\gamma^\mu$ are the Dirac matrices.
In order to obtain a kinetic formalism in familiar position- and momentum coordinates, a Fourier transform in $s$ is to be performed. As a result we obtain the covariant Wigner operator
\begin{align}
\hat{\mathcal W}_{\alpha \beta} \left( r , p \right) = \frac{1}{2} \int {\rm d}^4 s \ \mathrm{e}^{\mathrm{i} ps} \ \hat{\mathcal C}_{\alpha \beta} \left( r , s \right). \label{equ:W}
\end{align}
On the basis of the Lagrangian \eqref{equ:Lag} we obtain the (adjoint) Dirac equations determining the equations of motion for the spinors $\Psi,~ {\bar \Psi}$. Since the Wigner operator \eqref{equ:W} inherits its dynamics from the spinors it is built from, we can rewrite the (adjoint) Dirac equation, leading to
\begin{alignat}{3}
& \left( \frac{1}{2} \hat D_{\mu} - {\rm i} \hat P_{\mu} \right) \gamma^{\mu} \hat{\mathcal W} \left( r , p \right) && = - &&{\rm i} m \hat{\mathcal W} \left( r , p \right), \label{equ:W1} \\
& \left( \frac{1}{2} \hat D_{\mu} + {\rm i} \hat P_{\mu} \right) \hat{\mathcal W} \left( r , p \right) \gamma^{\mu} && = &&{\rm i} m \hat{\mathcal W} \left( r , p \right). \label{equ:W2}
\end{alignat}
In this regard, the covariant derivatives $\mathcal{D}_{\mu}$ and $\mathcal{D}_{\mu}^{\dag}$ are replaced by nonlocal, pseudo-differential operators
\begin{alignat}{4}
& \hat D_{\mu} && = \partial_{\mu}^r - e &&\int_{-1/2}^{1/2} {\rm d} \xi \ && F_{\mu \nu} \left( r - {\rm i} \xi \partial^p \right) \partial_p^{\nu}, \label{equ:D} \\
& \hat P_{\mu} && = p_{\mu} - {\rm i} e && \int_{-1/2}^{1/2} {\rm d} \xi \ \xi \ && F_{\mu \nu} \left( r - {\rm i} \xi \partial^p \right) \partial_p^{\nu}, \label{equ:P}
\end{alignat}
having the electromagnetic field strength tensor $F_{\mu \nu}$ replace the vector potential $A_{\mu}$ and formally featuring derivative operators $\partial^p$ as arguments \cite{footnote1}. The integration path is chosen such that $p$ can be identified as the kinetic momentum.
An equation of motion for the matrix-valued Wigner function $\mathcal W$, the vacuum expectation value of the Wigner operator
\begin{equation}
\mathcal W \left( r , p \right) = \langle \Omega | \hat{\mathcal W} \left( r , p \right) | \Omega \rangle,
\end{equation}
is obtained by taking the vacuum expectation values of Eqs. \eqref{equ:W1} and \eqref{equ:W2}. This step is crucial in order to have access to a system of transport equations in terms of distribution functions. Note that at this point
the consequences of treating the background fields as classical c-number fields instead of operators become obvious. While terms of the form $\langle \Omega | \hat{\mathcal W} \left( r , p \right) \ \hat F^{\mu \nu} (r) | \Omega \rangle$ would create an infinite chain of coupled equations (BBGKY hierarchy, see, e.g., Ref. \cite{Fauth}), a Hartree-type approximation as is used in this manuscript yields a truncation at first order \cite{Ochs:1998qj}.
While such measures exempt us from describing, e.g., radiative emission, they mark an important step towards constraining the total number of interactions, and hence differential equations, to a computationally manageable level. Besides, our primary interest in this manuscript is fundamental particle production rates in subcritical fields, which can still be obtained despite the decision to use c-number fields.
To keep the notation clear and simple, we expand the matrix-valued function $\mathcal W$ into Dirac bilinears
\begin{multline}
\mathcal W \left( r , p \right) = \\
\frac{1}{4} \left( \mathbbm{1} \mathbbm{S} + {\rm i} \gamma_5 \mathbbm{P} + \gamma^{\mu} \mathbbm{V}_{\mu} + \gamma^{\mu} \gamma_5 \mathbbm{A}_{\mu} + \sigma^{\mu \nu} \mathbbm{T}_{\mu \nu} \right). \label{equ:wigner}
\end{multline}
This step allows us to determine the time evolution of each individual component of the matrix, creating an easier-to-interpret system of equations. In this regard, we also have to abandon manifest covariance, as it demands a description at all times. Since such a formulation is too restrictive for our needs, we project onto equal times, $\int {\rm d}p_0 / (2 \pi)$, effectively transforming the equations of motion into an initial-value problem governed by transport equations in the equal-time Wigner components
${\mathbbm w} \left( t, \boldsymbol{x} , \boldsymbol{p} \right) = \int {\rm d}p_0 / (2 \pi) \ \mathbbm{W} \left( r , p \right)$. Ultimately, the equations of motion for the individual Wigner components read
\begin{alignat}{4}
& D_t \mathbbm{s} && && -2 \boldsymbol{\Pi} \cdot \mathbbm{t_1} &&= 0, \label{eq_DHW1} \\
& D_t \mathbbm{p} && && +2 \boldsymbol{\Pi} \cdot \mathbbm{t_2} &&= -2m\mathbbm{a}_\mathbb{0}, \\
& D_t \mathbbm{v}_\mathbb{0} &&+ \boldsymbol{D} \cdot \mathbbm{v} && &&= 0, \\
& D_t \mathbbm{a}_\mathbb{0} &&+ \boldsymbol{D} \cdot \mathbbm{a} && &&=+ 2m\mathbbm{p}, \\
& D_t \mathbbm{v} &&+ \boldsymbol{D} \ \mathbbm{v}_\mathbb{0} && +2 \boldsymbol{\Pi} \times \mathbbm{a} &&= -2m\mathbbm{t_1}, \\
& D_t \mathbbm{a} &&+ \boldsymbol{D} \ \mathbbm{a}_\mathbb{0} && +2 \boldsymbol{\Pi} \times \mathbbm{v} &&= 0, \\
& D_t \mathbbm{t_1} &&+ \boldsymbol{D} \times \mathbbm{t_2} && +2 \boldsymbol{\Pi} \ \mathbbm{s} &&= +2m\mathbbm{v}, \\
& D_t \mathbbm{t_2} &&- \boldsymbol{D} \times \mathbbm{t_1} && -2 \boldsymbol{\Pi} \ \mathbbm{p} &&= 0, \label{eq_DHW2}
\end{alignat}
with $\mathbbm{t_1} = 2 \mathbbm{t}^{i0} \boldsymbol{e}_i$ and $\mathbbm{t_2} = \epsilon_{ijk} \mathbbm{t}^{jk} \boldsymbol{e}_i$. The corresponding pseudo-differential operators $D_t ,~ \boldsymbol{D}$ and $\boldsymbol{\Pi}$ are given by
\begin{alignat}{6}
& D_t && = \partial_t &&+ e &&\int {\rm d} \xi &&\boldsymbol{E} \left( \boldsymbol{x}+{\rm i} \xi \boldsymbol{\nabla}_p,t \right) && ~ \cdot \boldsymbol{\nabla}_p, \label{eqn2_1} \\[-2mm]
& \boldsymbol{D} && = \boldsymbol{\nabla}_x &&+ e &&\int {\rm d} \xi &&\boldsymbol{B} \left( \boldsymbol{x}+{\rm i} \xi \boldsymbol{\nabla}_p,t \right) &&\times \boldsymbol{\nabla}_p, \label{eqn2_2} \\[-2mm]
& \boldsymbol{\Pi} && = \boldsymbol{p} &&- {\rm i} e &&\int {\rm d} \xi \xi &&\boldsymbol{B} \left( \boldsymbol{x}+{\rm i} \xi \boldsymbol{\nabla}_p,t \right) &&\times \boldsymbol{\nabla}_p. \label{eqn2_3}
\end{alignat}
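For slowly varying fields the nonlocal character of these operators becomes transparent in a gradient expansion. Formally Taylor expanding the field in its operator-valued argument and integrating over $\xi$ yields, for instance,
\begin{equation*}
 D_t = \partial_t + e \boldsymbol{E} \left( \boldsymbol{x},t \right) \cdot \boldsymbol{\nabla}_p - \frac{e}{24} \left[ \left( \boldsymbol{\nabla}_x \cdot \boldsymbol{\nabla}_p \right)^2 \boldsymbol{E} \left( \boldsymbol{x},t \right) \right] \cdot \boldsymbol{\nabla}_p + \ldots,
\end{equation*}
such that at lowest order $D_t$, $\boldsymbol{D}$ and $\boldsymbol{\Pi}$ reduce to their classical counterparts $\partial_t + e \boldsymbol{E} \cdot \boldsymbol{\nabla}_p$, $\boldsymbol{\nabla}_x + e \boldsymbol{B} \times \boldsymbol{\nabla}_p$ and $\boldsymbol{p}$, respectively, and relativistic Vlasov dynamics is recovered \cite{BB}.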
Vacuum initial conditions are given by
\begin{alignat}{3}
\mathbbm{s}_{\rm vac} \left(\boldsymbol{p} \right) = -\frac{2m}{\sqrt{m^2 +
\boldsymbol{p}^2}}, \, \,
\boldsymbol{\mathbbm{v}}_{\rm vac} \left(\boldsymbol{p} \right) = -\frac{2
\boldsymbol{p}}{\sqrt{m^2 + \boldsymbol{p}^2}}, \label{equ:vac}
\end{alignat}
with all other Wigner components vanishing initially \cite{BB, Hebenstreit:2011pm}.
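As a consistency check, the vacuum values \eqref{equ:vac} constitute a static solution of the field-free, spatially homogeneous limit of Eqs. \eqref{eq_DHW1}-\eqref{eq_DHW2}, in which $D_t \to \partial_t$, $\boldsymbol{D} \to 0$ and $\boldsymbol{\Pi} \to \boldsymbol{p}$. With all other components vanishing, the only nontrivial conditions are $\boldsymbol{p} \times \boldsymbol{\mathbbm{v}}_{\rm vac} = 0$ and $2 \boldsymbol{p} \, \mathbbm{s}_{\rm vac} = 2 m \boldsymbol{\mathbbm{v}}_{\rm vac}$, which a short symbolic sketch confirms:

```python
import sympy as sp

m, p1, p2, p3 = sp.symbols('m p1 p2 p3', real=True)
p = sp.Matrix([p1, p2, p3])
omega = sp.sqrt(m**2 + p.dot(p))

s_vac = -2 * m / omega        # vacuum mass density
v_vac = -2 * p / omega        # vacuum current density

# remaining nontrivial equations in the field-free, homogeneous limit:
assert sp.simplify(p.cross(v_vac)) == sp.zeros(3, 1)                 # a-equation
assert sp.simplify(2 * p * s_vac - 2 * m * v_vac) == sp.zeros(3, 1)  # t_1-equation
print("vacuum Wigner components form a static solution")
```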
Although the individual Wigner components are not directly observable, an interpretation in terms of familiar quantities is possible. As such, we have the mass density $ \mathbbm{s}$, the charge density $ \mathbbm{v}_\mathbb{0}$, the current density $ \boldsymbol{\mathbbm {v}}$, the spin density $ \boldsymbol{\mathbbm {a}}$ and the magnetic moment density $ \boldsymbol{\mathbbm {t_2}}$ \cite{BB}.
Furthermore, Noether's theorem can be applied yielding a prescription on how to obtain observables at asymptotic times $t \to \infty$, e.g., the particle distribution function \cite{Hebenstreit:2011pm}
\begin{equation}
n \left( \boldsymbol{x}, \boldsymbol{p} \right) = \frac{m \left( \mathbbm{s}-\mathbbm{s}_{\rm vac} \right) + {\boldsymbol p} \cdot \left(
\boldsymbol{\mathbbm{v}}-\boldsymbol{\mathbbm{v}}_{\rm vac} \right)}{2\sqrt{m^2+\boldsymbol{p}^2}}
\label{equ:n}
\end{equation}
or the particle momentum spectrum
\begin{equation}
n \left( \boldsymbol{p} \right) = \int \frac{{\rm d}^3 {\boldsymbol x}}{\left(2 \pi \right)^3} ~ n \left( \boldsymbol{x}, \boldsymbol{p} \right).
\label{equ:nn}
\end{equation}
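To illustrate how the spectrum \eqref{equ:nn} is obtained in practice, note that for a spatially homogeneous, linearly polarized electric field the DHW system reduces to the well-known quantum Vlasov equation (see, e.g., Refs. \cite{Smolyansky:1997fcC, Kluger, Hebenstreit:2011pm}). The following minimal sketch integrates this reduced system for a single momentum mode in a Sauter pulse $E(t) = E_0 \, {\rm sech}^2(t/\tau)$; the field parameters are arbitrary illustration values, not taken from the manuscript.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, e = 1.0, 1.0                          # electron mass and charge, m = 1 units
E0, tau = 0.5, 3.0                       # illustrative subcritical Sauter pulse
eps_perp = m                             # transverse energy, p_perp = 0 here

def rhs(t, y, q):
    """Quantum Vlasov system: f pair distribution, (u, v) quantum coherences."""
    f, u, v = y
    E = E0 / np.cosh(t / tau)**2                    # E(t) = E0 sech^2(t/tau)
    p = q + e * E0 * tau * np.tanh(t / tau)         # kinetic momentum q - eA(t)
    w = np.sqrt(eps_perp**2 + p**2)                 # one-particle energy
    W = e * E * eps_perp / w**2
    return [0.5 * W * u, W * (1.0 - 2.0 * f) - 2.0 * w * v, 2.0 * w * u]

q = 0.0                                   # canonical momentum of the mode
sol = solve_ivp(rhs, [-25 * tau, 25 * tau], [0.0, 0.0, 0.0],
                args=(q,), rtol=1e-10, atol=1e-12)
f_out = sol.y[0, -1]                      # asymptotic pair distribution f(q)
print(f_out)                              # small positive occupation, 0 < f < 1
```

Scanning $q$ over a grid yields the momentum spectrum; the full DHW system is needed once spatial inhomogeneities or magnetic fields enter.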
\section{Feshbach-Villars-Heisenberg-Wigner formalism}
\label{sec:fvhw}
To put the versatility of the Heisenberg-Wigner formalism on display we will also consider scalar quantum electrodynamics (sQED) which deals with charged particles of spin zero. In this regard, sQED is a simpler theory as there is no need to track the spin degrees of freedom or consider light-matter interactions that are facilitated by the particle spin.
Additionally, a formalism based on bosons provides us with another opportunity to dissect the intricacies of pair production.
For example, it has been shown that in order to obtain the angular momentum distribution, which is the gateway to fully understanding the momentum spectrum, a two-step calculation process might be the favorable strategy \cite{KohlfurstSpin}. As within the DHW formalism one can hardly distinguish between different electron-positron spin states, a dual calculation taking into account spin-zero pairs gives the opportunity to distinguish ortho- and para-states. This is an important distinction as, depending on the coupling of the particle spins with the background field, a particle pair is created with a particular probability and a particular orbital angular momentum, thus decisively changing the particle spectrum \cite{SeiptKing, Seipt:2020uxv}.
Similar to the overview we have given regarding the Dirac-Heisenberg-Wigner formalism, Sec. \ref{sec:dhw}, we also state the key elements in the Feshbach-Villars-Heisenberg-Wigner formalism culminating in the transport equations for particles of spin zero, see Refs. \cite{ZhuangHeinz, FVHW, BestGreiner} for detailed discussions on the subject.
We state the Lagrangian
\begin{equation}
\mathcal{L}_{\rm sQED} \left( \phi, F \right) = \left( \mathcal{D}_\mu \phi \right)^{*} \mathcal{D}^\mu \phi - m^2 \phi^{*} \phi -\frac{1}{4} F_{\mu\nu}F^{\mu\nu}\ , \label{equ:Lag_scal}
\end{equation}
with the complex scalar field $\phi$ as well as the covariant derivative $\mathcal{D}_{\mu} = \partial_{\mu} + {\rm i} e A_{\mu}$ and the vector potential $A_\mu$ connected to the field strength tensor through $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$. On the basis of Eq. \eqref{equ:Lag_scal} we obtain the Klein-Gordon equation
\begin{eqnarray}
\left( \mathcal{D}_\mu \mathcal{D}^\mu +m^2 \right) \phi = 0, \label{equ:KG}
\end{eqnarray}
defining the equation of motion for a scalar field. While the Heisenberg-Wigner formalism can also be applied on $\phi$ with respect to Eq. \eqref{equ:KG} the presence of second order derivatives in time is expected to create a challenging problem \cite{ZhuangHeinz}. Hence, at this point we switch to a two-component Feshbach-Villars field description $\Phi = \begin{pmatrix} \xi \\ \chi \end{pmatrix}$,
\begin{align}
\xi &= \frac{1}{2} \left( \phi + \frac{ {\rm i} }{m} \partial_t \phi - \frac{e A_0}{m} \phi \right), \\
\chi &= \frac{1}{2} \left( \phi - \frac{ {\rm i} }{m} \partial_t \phi + \frac{e A_0}{m} \phi \right).
\end{align}
In this way, the equation of motion is given in terms of a first-order differential equation in time
\begin{equation}
{\rm i} \partial_t \Phi = \left( \frac{1}{2m} \left(- {\rm i} \boldsymbol{\nabla} -e \boldsymbol{A} \right)^2 \left(\sigma_3 + {\rm i} \sigma_2 \right) + m \sigma_3 + e A_0 \right) \Phi, \label{equ:FVKG}
\end{equation}
where $\sigma_1$, $\sigma_2$ and $\sigma_3$ are the Pauli matrices.
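The two components are constructed such that the original field and its time derivative are recovered through
\begin{equation*}
 \xi + \chi = \phi, \qquad \xi - \chi = \frac{ {\rm i} }{m} \left( \partial_t + {\rm i} e A_0 \right) \phi,
\end{equation*}
so that inserting $\Phi$ into Eq. \eqref{equ:FVKG} and adding and subtracting the two component equations reproduces the Klein-Gordon equation \eqref{equ:KG}; no information is lost in the two-component description.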
A further convenience is that the field $\Phi$ plays the same role as the spinor $\Psi$ did in the Dirac-Heisenberg-Wigner formalism, making the derivation of the transport equations strikingly similar \cite{footnote2}.
The basic density operator is thus given here in terms of Feshbach-Villars fields
\begin{multline}
\hat {\mathcal C}^{\rm FV}_{\alpha \beta} \left( t, \boldsymbol{x} , \boldsymbol{s} \right) = \mathcal U^{\rm FV} \left(\boldsymbol{A},t, \boldsymbol{x}, \boldsymbol{s} \right) \\
\times \left[ {\Phi}^\dag_\beta \left( \boldsymbol{x} - \boldsymbol{s}/2,t \right), {\Phi}_\alpha \left( \boldsymbol{x} +
\boldsymbol{s}/2,t \right) \right],
\end{multline}
where we have the Wilson line factor
\begin{multline}
\mathcal U^{\rm FV} \left(\boldsymbol{A},t,\boldsymbol{x},\boldsymbol{s} \right) = \\
\exp \left(- \mathrm{i}e \int_{-1/2}^{1/2} {\rm d}
\xi \ \boldsymbol{A} \left(\boldsymbol{x}+ \xi \boldsymbol{s},t \right) \cdot \boldsymbol{s} \right), \label{eq:FV_W}
\end{multline}
with vector potential $\boldsymbol{A}$ for the center-of-mass coordinate $r=\left(t, \boldsymbol{x} \right)$ and the relative coordinate $\boldsymbol{s}$. Again, the background fields are considered to be c-numbers, thus no path ordering is needed. The Wilson line \eqref{eq:FV_W} has the same function as in the DHW formalism, simply ensuring gauge invariance.
Performing the Fourier transform of the density operator $\hat {\mathcal C}^{\rm FV} \left( t, \boldsymbol{x}, \boldsymbol{s} \right)$ we directly obtain the equal-time Wigner operator
\begin{equation}
\hat {\mathcal W}^{\rm FV}_{\alpha \beta} \left( t, \boldsymbol{x}, \boldsymbol{p} \right) = \frac{1}{2} \int {\rm d}^3 s \ \mathrm{e}^{-\mathrm{i} \boldsymbol{p} \cdot \boldsymbol{s}} \ \hat{\mathcal C}^{\rm FV}_{\alpha \beta} \left( t, \boldsymbol{x}, \boldsymbol{s} \right), \label{equ:Wf}
\end{equation}
with momentum $\boldsymbol{p}$. The equations of motion for the operator $\hat{\mathcal W}^{\rm FV}$ are then directly determined by the field equation \eqref{equ:FVKG} and its adjoint. Furthermore, we take the vacuum expectation value of the equation of motion, thus determining the time evolution of the equal-time Wigner function
\begin{equation}
{\mathcal W}^{\rm FV}_{\alpha \beta} \left( t, \boldsymbol{x}, \boldsymbol{p} \right) = \langle \Omega | \hat {\mathcal W}^{\rm FV}_{\alpha \beta} \left( t, \boldsymbol{x}, \boldsymbol{p} \right) | \Omega \rangle. \label{equ:WFunc}
\end{equation}
The corresponding equation reads
\begin{multline}
2m D_t {\mathcal W}^{\rm FV} \\
+ {\rm i} \left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \left( {\mathcal W}^{\rm FV} \left( \sigma_3 - {\rm i} \sigma_2 \right) - \left( \sigma_3 + {\rm i} \sigma_2 \right) {\mathcal W}^{\rm FV} \right) \\
+ \boldsymbol{\Pi} \cdot \boldsymbol{D} \left( {\mathcal W}^{\rm FV} \left( \sigma_3 - {\rm i} \sigma_2 \right) - \left( \sigma_3 + {\rm i} \sigma_2 \right) {\mathcal W}^{\rm FV} \right) \\
-2 {\rm i} m^2 \left(
{\mathcal W}^{\rm FV} \sigma_3 - \sigma_3 {\mathcal W}^{\rm FV} \right) = 0, \label{equ:WF}
\end{multline}
where we find the familiar, nonlocal, pseudo-differential operators \eqref{eqn2_1}-\eqref{eqn2_3}
\begin{alignat}{6}
& D_t && = \partial_t &&+ e &&\int {\rm d} \xi &&\boldsymbol{E} \left( \boldsymbol{x}+ {\rm i} \xi \boldsymbol{\nabla}_p,t \right) && ~ \cdot \boldsymbol{\nabla}_p, \label{eq_FVHW_Der1} \\
& \boldsymbol{D} && = \boldsymbol{\nabla}_x &&+ e &&\int {\rm d} \xi &&\boldsymbol{B} \left( \boldsymbol{x}+ {\rm i} \xi \boldsymbol{\nabla}_p,t \right) &&\times \boldsymbol{\nabla}_p, \\
& \boldsymbol{\Pi} && = \boldsymbol{p} &&- {\rm i} e &&\int {\rm d} \xi \xi &&\boldsymbol{B} \left( \boldsymbol{x}+ {\rm i} \xi \boldsymbol{\nabla}_p,t \right) &&\times \boldsymbol{\nabla}_p. \label{eq_FVHW_Der2}
\end{alignat}
In order to obtain transport equations in scalar-valued quantities an expansion of the matrix-valued Wigner function \eqref{equ:WFunc} in terms of Pauli matrices is in order
\begin{equation}
\mathcal {W}^{\rm FV} \left( t, \boldsymbol{x}, \boldsymbol{p} \right) = \frac{1}{2} \left( \mathbbm{1} \mathbbm{f} + \sigma_1 \mathbbm{g} + \sigma_2 \mathbbm{h} + \sigma_3 \mathbbm{k} \right). \label{equ:wigner_sc}
\end{equation}
In this way, we obtain the transport equations for the components of the Feshbach-Villars Wigner function
\begin{alignat}{5}
&m D_t \mathbbm{f} && +\left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \mathbbm{h} && +\boldsymbol{\Pi} \cdot \boldsymbol{D} \ \mathbbm{k} && &&= 0, \label{eq_4_1} \\
&m D_t \mathbbm{g} && -\left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \mathbbm{h} && -\boldsymbol{\Pi} \cdot \boldsymbol{D} \ \mathbbm{k} && +2m^2 \mathbbm{h} &&= 0, \\
&m D_t \mathbbm{h} && +\left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \left( \mathbbm{f}+\mathbbm{g} \right) && && -2m^2 \mathbbm{g} &&= 0, \\
&m D_t \mathbbm{k} && && +\boldsymbol{\Pi} \cdot \boldsymbol{D} \ \left( \mathbbm{f} + \mathbbm{g} \right) \hspace{-5cm} && &&= 0. \label{eq_4_2}
\end{alignat}
Similarly to the Dirac-Heisenberg-Wigner formalism, we employ vacuum initial conditions \cite{FVHW}
\begin{alignat}{3}
\mathbbm{f}_{\rm vac} \left(\boldsymbol{p} \right) = \frac{1}{2} \left( \frac{m}{\sqrt{m^2 + \boldsymbol{p}^2}} + \frac{\sqrt{m^2 + \boldsymbol{p}^2}}{m} \right), \quad \mathbbm{h}_{\rm vac} = 0, && \\
\mathbbm{g}_{\rm vac} \left(\boldsymbol{p} \right) = \frac{1}{2} \left( \frac{m}{\sqrt{m^2 + \boldsymbol{p}^2}} - \frac{\sqrt{m^2 + \boldsymbol{p}^2}}{m} \right), \quad \mathbbm{k}_{\rm vac} = 0. &&
\end{alignat}
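These vacuum components obey the simple algebraic relations $\mathbbm{f}_{\rm vac}+\mathbbm{g}_{\rm vac} = m/\sqrt{m^2+\boldsymbol{p}^2}$, $\mathbbm{f}_{\rm vac}-\mathbbm{g}_{\rm vac} = \sqrt{m^2+\boldsymbol{p}^2}/m$ and hence $\mathbbm{f}_{\rm vac}^2-\mathbbm{g}_{\rm vac}^2 = 1$, which provide a convenient sanity check for numerical implementations. A minimal sketch (natural units, numpy, and the sample grid are illustrative choices, not part of the formalism):

```python
import numpy as np

m = 1.0                                      # natural units, illustrative
p = np.linspace(-5.0, 5.0, 201)              # one momentum component
omega = np.sqrt(m**2 + p**2)                 # one-particle energy

f_vac = 0.5 * (m / omega + omega / m)
g_vac = 0.5 * (m / omega - omega / m)
```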
\pagebreak
The particle distribution function is given by
\begin{multline}
n^{FV} \left( \boldsymbol{x}, \boldsymbol{p} \right) = \\
\phantom{+} \frac{1}{2} \left( \frac{\sqrt{m^2 + \boldsymbol{p}^2}}{m} + \frac{m - \boldsymbol{\nabla}^2/(4m) }{\sqrt{m^2 + \boldsymbol{p}^2}} \right) \left( \mathbbm{f} - \mathbbm{f}_{\rm vac} \right)
\\
\hspace{1.cm} + \frac{1}{2} \left( \frac{\sqrt{m^2 + \boldsymbol{p}^2}}{m} - \frac{m + \boldsymbol{\nabla}^2/(4m)}{\sqrt{m^2 + \boldsymbol{p}^2}} \right) \left( \mathbbm{g} - \mathbbm{g}_{\rm vac} \right)
\label{equ:nF}
\end{multline}
and the particle momentum spectrum is obtained through
\begin{equation}
n^{FV} \left( \boldsymbol{p} \right) = \int \frac{{\rm d}^3 {\boldsymbol x}}{\left( 2 \pi \right)^3} ~ n^{FV} \left( \boldsymbol{x}, \boldsymbol{p} \right).
\label{equ:nnF}
\end{equation}
\section{Transverse fields}
\label{sec:transverse}
Within the Heisenberg-Wigner formalism, Eqs. \eqref{eq_DHW1}-\eqref{eq_DHW2} as well as Eqs. \eqref{eq_4_1}-\eqref{eq_4_2}, the transport equations are generally formulated on a $2n$-dimensional domain, where $n$ is the spatial dimension of the system. Hence, in order to solve the differential equations one is either forced to focus on lower-dimensional problems \cite{Hebenstreit, KohlfurstTech} or to disregard important aspects of a field's characteristics, e.g., the fact that it carries linear momentum \cite{HebenstreitRelate}.
In this manuscript, we want to discuss the class of transverse fields. To be more specific, we consider field configurations where all photons propagate in the same direction, such that the electric field direction $\hat{\boldsymbol{e}}_E$, the magnetic field direction $\hat{\boldsymbol{e}}_B$ and the fields' propagation direction $\hat{\boldsymbol{e}}_k$ satisfy the relations
\begin{equation}
\hat{\boldsymbol{e}}_E \cdot \hat{\boldsymbol{e}}_B = \hat{\boldsymbol{e}}_E \cdot \hat{\boldsymbol{e}}_k = \hat{\boldsymbol{e}}_B \cdot \hat{\boldsymbol{e}}_k = 0,\quad \hat{\boldsymbol{e}}_E \times \hat{\boldsymbol{e}}_B = \hat{\boldsymbol{e}}_k. \label{eq:Transv}
\end{equation}
These assumptions are well justified when describing ordinary laser fields, where the transversal component of the wave vector is small compared to the parallel component, ${\boldsymbol k_\perp}^2 \ll {\boldsymbol k}^2$. For example, Gaussian laser beams that are not extremely tightly focused fall into this category.
Without loss of generality we assume photon propagation in the $z$-direction, i.e., wave vector $\boldsymbol{k} = \left(0,0,k_z \right)$. Due to the relations $\boldsymbol{k} \perp \boldsymbol{E}$ and $\boldsymbol{k} \perp \boldsymbol{B}$ \eqref{eq:Transv}, this further implies that photon polarization is limited to the $xy$-plane. Consequently, vector potential and fields are given by
\begin{equation}
\boldsymbol{A} = \begin{pmatrix} A_x (t,z) \\ A_y (t,z) \\ 0 \end{pmatrix},\ \boldsymbol{E} = \begin{pmatrix} E_x (t,z) \\ E_y (t,z) \\ 0 \end{pmatrix},\ \boldsymbol{B} = \begin{pmatrix} B_x (t,z) \\ B_y (t,z) \\ 0 \end{pmatrix}, \label{equ:Field}
\end{equation}
where due to $\boldsymbol{E} = - \partial_t \boldsymbol{A}$ and $\boldsymbol{B} = \boldsymbol{\nabla}_x \times \boldsymbol{A}$ we have
\begin{align}
& E_x = -\partial_t A_x, \quad && E_y = -\partial_t A_y, \\
& B_x = -\partial_z A_y, \quad && B_y = +\partial_z A_x. \label{eq:EB}
\end{align}
The crucial observation is that, since the quantities in Eq. \eqref{equ:Field} do not depend on $x$ or $y$, we can make an ansatz for the spinors
\begin{equation}
\Psi \left(t, x,y,z \right) = {\rm e} ^{ {\rm i} q_x x + {\rm i} q_y y} \ \psi \left(t,z \right)
\end{equation}
in the QED Lagrangian \eqref{equ:Lag} as well as
\begin{equation}
\Phi \left(t, x,y,z \right) = {\rm e} ^{ {\rm i} q_x x + {\rm i} q_y y} \ \varphi \left(t,z \right)
\end{equation}
in case of scalar particles \eqref{equ:KG}.
Note that even if the vector potential or the fields show a dependence on $x$ or $y$, this ansatz can still be justified provided that the dependence is weak and can be absorbed into the overall field strength. In such a case the variables $x$ and $y$ are not treated as coordinates in physical space but as mere additional external parameters. Under such circumstances it is still possible to retain enough information to obtain an accurate prediction for the total particle yield, cf. the dipole or locally-homogeneous approximation in Appendix \ref{App:LHA}.
Factoring out $x$- and $y$-components of the wave function has a profound impact on the Wigner operator, see Eqs. \eqref{equ:W} and \eqref{equ:Wf}. The resulting operators $\hat{\mathcal W}_{\alpha \beta}^{\perp}$ and $\hat{\mathcal W}_{\alpha \beta}^{\rm FV,\perp}$ assume the forms
\begin{multline}
\hat{\mathcal W}_{\alpha \beta}^{\perp} \left( r_0, r_z , p \right) =
\frac{1}{2} \int {\rm d}^4 s \ {\rm e} ^{\mathrm{i} p_0 s_0 - \mathrm{i} \boldsymbol{p} \cdot \boldsymbol{s}} \ {\rm e} ^{ {\rm i} q_x s_x + {\rm i} q_y s_y} \\
\times {\rm e} ^{\mathrm{i}e \int_{-1/2}^{1/2} {\rm d} \xi \ \left( A_0 \left(r_0+ \xi s_0, r_z+ \xi s_z \right) s_0 - \boldsymbol{A} \left(r_0+ \xi s_0, r_z+ \xi s_z \right) \cdot \boldsymbol{s} \right)} \\
\times \left[ \bar {\psi}_\beta \left( r_0 - s_0/2, r_z - s_z/2 \right), \right. \\
\left. {\psi}_\alpha \left( r_0 + s_0/2, r_z + s_z/2 \right) \right], \label{eq:W_perp_s}
\end{multline}
\begin{multline}
\hat{\mathcal W}_{\alpha \beta}^{\rm FV, \perp} \left( t, z , p \right) = \\
\frac{1}{2} \int {\rm d}^3 s \ {\rm e} ^{- \mathrm{i} \boldsymbol{p} \cdot \boldsymbol{s}} \ {\rm e} ^{ {\rm i} q_x s_x + {\rm i} q_y s_y}
\ {\rm e} ^{-\mathrm{i}e \int_{-1/2}^{1/2} {\rm d} \xi \ \boldsymbol{A} \left(t, z+ \xi s_z \right) \cdot \boldsymbol{s} } \\
\times \left[ \bar {\varphi}_\beta \left( t, z - s_z/2 \right), {\varphi}_\alpha \left( t, z + s_z/2 \right) \right], \label{eq:W_perp_sc}
\end{multline}
where we have the relative coordinates $s = \left(s_0, s_x, s_y, s_z \right)$ and the temporal and spatial center-of-mass coordinates $(r_0,r_z)$ and $(t,z)$, respectively. The altered dependencies on the relative coordinates in Eqs. \eqref{eq:W_perp_s}-\eqref{eq:W_perp_sc} make it possible to evaluate the Fourier transforms with respect to $s_x$ and $s_y$ analytically. In this way, we find that the Wigner operators for transverse fields scale as
\begin{multline}
\hat{\mathcal W}_{\alpha \beta}^{\perp} \left( r_0, r_z , p \right) \propto \\
\delta \left(p_x - q_x + e \int_{-1/2}^{1/2} {\rm d} \xi \ A_x \left(r_0 + \xi s_0, r_z + \xi s_z \right) \right) \\
\times \delta \left(p_y - q_y + e \int_{-1/2}^{1/2} {\rm d} \xi \ A_y \left(r_0 + \xi s_0, r_z + \xi s_z \right) \right), \label{equ:Wperp}
\end{multline}
\begin{multline}
\hat{\mathcal W}_{\alpha \beta}^{\rm FV,\perp} \left( t, z , p \right) \propto \\
\delta \left(p_x - q_x + e \int_{-1/2}^{1/2} {\rm d} \xi \ A_x \left(t, z + \xi s_z \right) \right) \\
\times \delta \left(p_y - q_y + e \int_{-1/2}^{1/2} {\rm d} \xi \ A_y \left(t, z + \xi s_z \right) \right). \label{equ:Wperp_sc}
\end{multline}
The Dirac delta functions essentially encode minimal coupling within the Wigner function approach; cf., to lowest order,
\begin{equation}
p_x = q_x - e A_x(t,z), \quad p_y = q_y - e A_y(t,z).
\end{equation}
Equations \eqref{equ:Wperp}-\eqref{equ:Wperp_sc} clearly demonstrate that transverse fields, when evaluated in terms of the general field notation, Eqs. \eqref{eq_DHW1}-\eqref{eq_DHW2} or Eqs. \eqref{eq_4_1}-\eqref{eq_4_2}, are subject to constraints. As such, evaluating the full transport equations is inefficient. Thus we seek a way to incorporate the constraint equations into the transport equations. This, however, is not a trivial task, as the time evolution of the Wigner components is given in phase space, $\boldsymbol{x}$ and $\boldsymbol{p}$, while Eqs. \eqref{equ:Wperp} and \eqref{equ:Wperp_sc} suggest a relation between ordinary spatial coordinates $z$ and relative coordinates $s_z$.
\subsection{Minimal coupling within the transport equations}
\label{sec:MinCoup}
There are two possible pathways to obtain a lower-dimensional domain on which to solve the system of transport equations. Option I is to write down the Wigner operator in the form of a transverse Wigner operator, as in Eq. \eqref{equ:Wperp} or Eq. \eqref{equ:Wperp_sc}, and derive a system of transport equations from it. The alternative option is to identify the transformation that allows us to rewrite the system of transport equations, Eqs. \eqref{eq_DHW1}-\eqref{eq_DHW2} or \eqref{eq_4_1}-\eqref{eq_4_2}, and especially the differential operators, Eqs. \eqref{eqn2_1}-\eqref{eqn2_3} or \eqref{eq_FVHW_Der1}-\eqref{eq_FVHW_Der2}, into a more compact form.
In this manuscript we will pursue the second option. More specifically, we will incorporate minimal coupling for the special class of fields given by Eq. \eqref{equ:Field}. The decisive point in understanding the procedure is to realize that it is not the transport equations that have to be altered but the differential operators.
To be more specific, for fields of the form of Eq. \eqref{equ:Field} the pseudo-differential operators take on the form
\begin{align}
& D_t && = \partial_t && + e \int {\rm d} \xi && \\
& && \times \left( E_x \left( z+ {\rm i} \xi \partial_{p_z},t \right) ~ \partial_{p_x} + E_y \left( z+ {\rm i} \xi \partial_{p_z},t \right) ~ \partial_{p_y} \right), \hspace{-10cm} && \notag \\ \label{eqn3_1}
& D_1 && = &&+ e \int {\rm d} \xi \ && B_y \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_z}, \\
& D_2 && = &&- e \int {\rm d} \xi \ && B_x \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_z}, \\
& D_3 && = \partial_z && + e \int {\rm d} \xi \\
& && \times \left( B_x \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_y} - B_y \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_x} \right), \hspace{-10cm} \notag \\
& \Pi_1 && = p_x &&- {\rm i} e \int {\rm d} \xi \ \xi \ && B_y \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_z}, \\
& \Pi_2 && = p_y &&+ {\rm i} e \int {\rm d} \xi \ \xi \ && B_x \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_z}, \\
& \Pi_3 && = p_z && - {\rm i} e \int {\rm d} \xi \ \xi && \\
& && \times \left( B_x \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_y} - B_y \left( z+ {\rm i} \xi \partial_{p_z},t \right) \partial_{p_x} \right). \hspace{-10cm} && \notag \label{eqn3_2}
\end{align}
The derivative operators $\partial_{p_z}$ appearing in the field arguments are only formal. Terms of the form $z+ {\rm i} \xi \partial_{p_z}$ should instead be viewed as couplings with respect to the relative coordinate. The identity transformation
\pagebreak
\begin{multline}
\mathcal{F}_{p_z}^{-1} \Big\{ \mathcal{F}_{p_z} \big\{ \partial_{p_z}\ f(p_z) \big\} \Big\} = \\
\mathcal{F}_{p_z}^{-1} \Big\{ - {\rm i} s_z \ \mathcal{F}_{p_z} \big\{ f(p_z) \big\} \Big\} = \partial_{p_z}\ f(p_z),
\end{multline}
where we first perform a Fourier transform from $p_z$ to $s_z$ and then an inverse Fourier transform from $s_z$ to $p_z$, reveals this connection.
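In a discretized setting this identity can be checked directly with a fast Fourier transform. The sketch below is illustrative (grid size, domain and the Gaussian test function are arbitrary choices); note that numpy's transform convention yields a multiplier $+\mathrm{i} s_z$ instead of the $-\mathrm{i} s_z$ appearing in the convention of the text:

```python
import numpy as np

# Periodic grid in p_z and its Fourier-conjugate s_z grid.
N, L = 256, 20.0
dp = L / N
p = -L / 2 + dp * np.arange(N)
s = 2 * np.pi * np.fft.fftfreq(N, d=dp)

f = np.exp(-p**2)                 # test function, negligible at the edges
# Derivative via multiplication in Fourier space (numpy convention: +i*s).
df_fourier = np.fft.ifft(1j * s * np.fft.fft(f)).real
df_exact = -2.0 * p * np.exp(-p**2)
```

The spectral derivative agrees with the analytic one to near machine precision, which is exactly what makes the operator sandwich $\mathcal{F}^{-1}_{p_z} \{ {\cal O}\, \mathcal{F}_{p_z} \{ \cdot \} \}$ used below numerically attractive.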
It is only at this stage, after performing the Fourier transform from $p_z$ to $s_z$ but before evaluating the inverse Fourier transform, that a coupling in spatial coordinates and relative coordinates makes sense. To be more specific, we introduce the mappings
\begin{alignat}{6}
& p_x && = q_x - e && \int {\rm d} \xi \ && A_x \left( z + \xi s_z ,t \right), \label{eq:px} \\
& p_y && = q_y - e && \int {\rm d} \xi \ && A_y \left( z + \xi s_z ,t \right), \label{eq:py}
\end{alignat}
coupling the transverse momenta $p_x$, $p_y$ to the vector potential and thus to the relative coordinate $s_z$. Establishing such a connection facilitates a switch in representation from kinetic momenta $p_x$, $p_y$ to canonical momenta $q_x$, $q_y$. As a result, applying the mapping to the differential operators, in combination with the switch to canonical momenta and the corresponding transformation of the derivatives, yields pseudo-differential operators $\cal O$ that all operate according to the same principle,
\begin{equation}
\mathcal{F}^{-1}_{p_z} \left\{ {\cal O} \ \mathcal{F}_{p_z} \left\{ \mathbbm{w} \right\} \right\}.
\end{equation}
In detail, the new operators are given by
\begin{alignat}{7}
& D_t && = \partial_t, && && && \label{eqn5_1} \\
& D_1 && = && &&- {\rm i} e &&\int {\rm d} \xi \ && B_y \left( z+ \xi s_z ,t \right) ~ s_z , \\
& D_2 && = && && + {\rm i} e &&\int {\rm d} \xi \ && B_x \left( z+ \xi s_z ,t \right) ~ s_z , \\
& D_3 && = \partial_z && && && \\
& \Pi_1 && = q_x && && && \\
& - e \int {\rm d} \xi \ A_x \left( z+ \xi s_z ,t \right) - e\int {\rm d} \xi \ \xi \ B_y \left( z+ \xi s_z ,t \right) ~ s_z, \hspace{-8cm} && && && \notag \\
& \Pi_2 && = q_y && && && \\
& - e \int {\rm d} \xi \ A_y \left( z+ \xi s_z ,t \right) + e \int {\rm d} \xi \ \xi \ B_x \left( z+ \xi s_z ,t \right) ~ s_z, \hspace{-8cm} && && && \notag \\
& \Pi_3 && = - {\rm i} \partial_{s_z}, && && && && \label{eqn5_2} \hspace{4cm}
\end{alignat}
where all traces of the derivative operators $\partial_{p_x}$ and $\partial_{p_y}$ have been successfully removed. In particular, the time derivative $D_t$ as well as the derivatives in the direction of propagation, $D_3$ and $\Pi_3$, are now given in terms of simple local differential operators without an integral part.
Furthermore, due to the special form of the terms on the right-hand side we can even integrate out the dependence on the parameter $\xi$. To this end we substitute the terms $B_x$ and $B_y$ in Eqs. \eqref{eqn5_1}-\eqref{eqn5_2} by $-\partial_z A_y$ and $\partial_z A_x$, respectively. Integration by parts then leads to
\pagebreak
\begin{alignat}{6}
& && && - {\rm i} e &&\int_{-1/2}^{1/2} {\rm d} \xi \ && \partial_z A_x \left( z+ \xi s_z ,t \right) ~ s_z = \hspace{2cm} \\
& && && && \hspace{1cm} - {\rm i} e \left\{ A_x \left( z + \frac{s_z}{2},t \right) - A_x \left( z- \frac{s_z}{2},t \right) \right\}, \hspace{-8cm} && \notag \\
& && && - {\rm i} e &&\int_{-1/2}^{1/2} {\rm d} \xi \ && \partial_z A_y \left( z+ \xi s_z ,t \right) ~ s_z = \hspace{2cm} \\
& && && && \hspace{1cm} - {\rm i} e \left\{ A_y \left( z+ \frac{s_z}{2},t \right) - A_y \left( z- \frac{s_z}{2},t \right) \right\}, \hspace{-8cm} && \notag
\end{alignat}
\begin{alignat}{6}
& && - e && \int_{-1/2}^{1/2} {\rm d} \xi \ \Big( A_x \left( z+ \xi s_z ,t \right) &&+ \xi \ && \partial_z A_x \left( z+ \xi s_z ,t \right) ~ s_z \Big) \notag \\
& && && = - \frac{e}{2} \left\{ A_x \left( z+ \frac{s_z}{2},t \right) + A_x \left( z- \frac{s_z}{2},t \right) \right\}, \hspace{-10cm} && && \\
& && - e && \int_{-1/2}^{1/2} {\rm d} \xi \ \Big( A_y \left( z+ \xi s_z ,t \right) &&+ \xi \ && \partial_z A_y \left( z + \xi s_z ,t \right) ~ s_z \Big) \notag \\
& && && = - \frac{e}{2} \left\{ A_y \left( z+ \frac{s_z}{2},t \right) + A_y \left( z- \frac{s_z}{2},t \right) \right\}. \hspace{-10cm} && &&
\end{alignat}
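Both identities follow from recognizing the integrands as total $\xi$-derivatives of $A\left(z+\xi s_z,t\right)$ and $\xi A\left(z+\xi s_z,t\right)$, respectively. They can also be verified numerically; the following sketch uses an illustrative potential $A(z)=\sin(z)$ and arbitrary sample points (none of these choices are part of the formalism):

```python
import numpy as np

# Illustrative smooth potential A(z) = sin(z), with dA/dz = cos(z).
A, dA = np.sin, np.cos
z, s_z = 0.7, 1.3                     # arbitrary sample point
xi = np.linspace(-0.5, 0.5, 4001)     # integration variable

# First identity: the integrand is d/dxi of A(z + xi*s_z).
lhs1 = np.trapz(dA(z + xi * s_z) * s_z, xi)
rhs1 = A(z + s_z / 2) - A(z - s_z / 2)

# Second identity: the integrand is d/dxi of xi * A(z + xi*s_z).
lhs2 = np.trapz(A(z + xi * s_z) + xi * dA(z + xi * s_z) * s_z, xi)
rhs2 = 0.5 * (A(z + s_z / 2) + A(z - s_z / 2))
```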
To conclude, we have shown how to replace the integral part $\int {\rm d}\xi$ of the differential operators by coupling terms in Fourier-transformed space. In this way, electric and magnetic fields are replaced by the vector potential. Of course, performing a Fourier transform still amounts to computing an integral each time the operators are applied. However, no further numerical integration of, e.g., the electric field has to be performed. This opens up interesting possibilities with regard to an analytical evaluation of the equations.
\subsection{Minimal coupling within the DHW formalism}
\label{sec:MinCoupDHW}
In regard to the DHW formalism, the complete set of transport equations for transverse fields is given by
\begin{alignat}{5}
& \partial_t \mathbbm{s} && && -2 \boldsymbol{q} \cdot \boldsymbol{\mathbbm{t}}_\mathbbm{1} && -2 \boldsymbol{\Pi} \cdot \boldsymbol{\mathbbm{t}}_\mathbbm{1} &&= 0, \label{eq_5_1} \\
& \partial_t \mathbbm{p} && && +2 \boldsymbol{q} \cdot \boldsymbol{\mathbbm{t}}_\mathbbm{2} && +2 \boldsymbol{\Pi} \cdot \boldsymbol{\mathbbm{t}}_\mathbbm{2} &&= -2m\mathbbm{a}_\mathbb{0}, \\
& \partial_t \mathbbm{v}_\mathbb{0} &&+ \boldsymbol{D} \cdot \boldsymbol{\mathbbm{v}} && && &&= 0, \\
& \partial_t \mathbbm{a}_\mathbb{0} &&+ \boldsymbol{D} \cdot \boldsymbol{\mathbbm{a}} && && &&= +2m\mathbbm{p}, \\
& \partial_t \boldsymbol{\mathbbm{v}} &&+ \boldsymbol{D} \ \mathbbm{v}_\mathbb{0} && +2 \boldsymbol{q} \times \boldsymbol{\mathbbm{a}} && +2 \boldsymbol{\Pi} \times \boldsymbol{\mathbbm{a}} &&= -2m\boldsymbol{\mathbbm{t}}_\mathbbm{1}, \\
& \partial_t \boldsymbol{\mathbbm{a}} &&+ \boldsymbol{D} \ \mathbbm{a}_\mathbb{0} && +2 \boldsymbol{q} \times \boldsymbol{\mathbbm{v}} && +2 \boldsymbol{\Pi} \times \boldsymbol{\mathbbm{v}} &&= 0, \\
& \partial_t \boldsymbol{\mathbbm{t}}_\mathbbm{1} &&+ \boldsymbol{D} \times \boldsymbol{\mathbbm{t}}_\mathbbm{2} && +2 \boldsymbol{q} \ \mathbbm{s} && +2 \boldsymbol{\Pi} \ \mathbbm{s} &&= +2m\boldsymbol{\mathbbm{v}}, \\
& \partial_t \boldsymbol{\mathbbm{t}}_\mathbbm{2} &&- \boldsymbol{D} \times \boldsymbol{\mathbbm{t}}_\mathbbm{1} && -2 \boldsymbol{q} \ \mathbbm{p} && -2 \boldsymbol{\Pi} \ \mathbbm{p} &&= 0, \label{eq_5_2}
\end{alignat}
where we have used vector notation for the momenta, $\boldsymbol{q} = \left( q_x , q_y , q_z \right)$. N.B.: As the vector potential in the direction of $z$ vanishes, we write, for the sake of aesthetics, $p_z=q_z$. The corresponding differential operators are given by
\begin{alignat}{6}
& D_1 \mathbbm{w} && = \hspace{6.55cm} \label{eqn6_1} \\
& - {\rm i} e \ \mathcal{F}^{-1}_{q_z} \left\{ \left[ A_x \left( z+ \frac{s_z}{2},t \right) - A_x \left( z- \frac{s_z}{2},t \right) \right] \mathcal{F}_{q_z} \left\{ \mathbbm{w} \right\} \right\} \hspace{-10cm} && \notag \\ \notag \\
& D_2 \mathbbm{w} && = \\
& - {\rm i} e \ \mathcal{F}^{-1}_{q_z} \left\{ \left[ A_y \left( z+ \frac{s_z}{2},t \right) - A_y \left( z- \frac{s_z}{2},t \right) \right] \mathcal{F}_{q_z} \left\{ \mathbbm{w} \right\} \right\} \hspace{-10cm} && \notag \\
& \Pi_1 \mathbbm{w} && = \\
&- \frac{e}{2} \ \mathcal{F}^{-1}_{q_z} \left\{ \left[ A_x \left( z+ \frac{s_z}{2},t \right) + A_x \left( z- \frac{s_z}{2},t \right) \right] \mathcal{F}_{q_z} \left\{ \mathbbm{w} \right\} \right\} \hspace{-10cm} && \notag \\
& \Pi_2 \mathbbm{w} && = \\
&- \frac{e}{2} \ \mathcal{F}^{-1}_{q_z} \left\{ \left[ A_y \left( z+ \frac{s_z}{2},t \right) + A_y \left( z- \frac{s_z}{2},t \right) \right] \mathcal{F}_{q_z} \left\{ \mathbbm{w} \right\} \right\} \hspace{-10cm} && \notag
\end{alignat}
where $\mathbbm{w} = \mathbbm{w} \left(z,\boldsymbol{q},t \right)$ is a placeholder for any of the Wigner components. Additionally, we have
\begin{equation}
D_3 = \partial_z \quad {\text{and}} \quad \Pi_3= 0.
\label{eqn6_2}
\end{equation}
Vacuum initial conditions are given by
\begin{alignat}{7}
& \mathbbm{s}_{\rm vac} \left(\boldsymbol{q} \right) = -\frac{2m}{\sqrt{m^2 +
\boldsymbol{q}^2}}, \qquad &&
\boldsymbol{\mathbbm{v}}_{\rm vac} \left(\boldsymbol{q} \right) = -\frac{2
\boldsymbol{q}}{\sqrt{m^2 + \boldsymbol{q}^2}}, \notag \\
& \mathbbm{p}_{\rm vac} = 0, \qquad
\mathbbm{v}_\mathbb{0} {}_{\rm vac} = 0, \qquad
\mathbbm{a}_\mathbb{0} {}_{\rm vac} = 0, \hspace{-3cm} \\
& \boldsymbol{\mathbbm{a}}_{\rm vac} = \boldsymbol{0}, \qquad
\boldsymbol{\mathbbm{t}}_\mathbbm{1} {}_{\rm vac} = \boldsymbol{0}, \qquad
\boldsymbol{\mathbbm{t}}_\mathbbm{2} {}_{\rm vac} = \boldsymbol{0}. \hspace{-3cm} \notag
\end{alignat}
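As a consistency check, these vacuum components satisfy $\mathbbm{s}_{\rm vac}^2 + \boldsymbol{\mathbbm{v}}_{\rm vac}^2 = 4$ for every momentum, since $m^2 + \boldsymbol{q}^2$ cancels against the common energy denominator. A minimal numerical sketch (the parameter values are illustrative):

```python
import numpy as np

m = 1.0
q = np.array([0.3, -1.2, 2.0])        # illustrative canonical momentum
omega = np.sqrt(m**2 + q @ q)         # one-particle energy

s_vac = -2.0 * m / omega              # vacuum scalar component
v_vac = -2.0 * q / omega              # vacuum vector component
```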
The particle distribution function and particle spectrum at asymptotic times ($t \to \infty$) become
\begin{equation}
n \left( z, \boldsymbol{q} \right) = \frac{m \left( \mathbbm{s}-\mathbbm{s}_{\rm vac} \right) + {\boldsymbol q} \cdot \left(
\boldsymbol{\mathbbm{v}}-\boldsymbol{\mathbbm{v}}_{\rm vac} \right)}{2\sqrt{m^2+\boldsymbol{q}^2}}
\label{equ:n2}
\end{equation}
and
\begin{equation}
n \left( \boldsymbol{q} \right) = \int \frac{{\rm d} z}{2 \pi} ~ n \left( z, \boldsymbol{q} \right),
\label{equ:nn2}
\end{equation}
respectively.
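The extraction step, Eqs. \eqref{equ:n2}-\eqref{equ:nn2}, can be sketched as follows for a single momentum point $\boldsymbol{q}=(0,0,q_z)$. The "solution" used here is just the unchanged vacuum, so the particle number must vanish identically; grid sizes and parameters are illustrative:

```python
import numpy as np

# Illustrative grids; single momentum point q = (0, 0, q_z).
m, q_z = 1.0, 0.5
z = np.linspace(-10.0, 10.0, 101)
omega = np.sqrt(m**2 + q_z**2)

# Vacuum values of the relevant Wigner components (z-independent).
s_vac = -2.0 * m / omega * np.ones_like(z)
v_vac = -2.0 * q_z / omega * np.ones_like(z)

# Placeholder "solution": the unchanged vacuum, so n must vanish.
s_sol, v_sol = s_vac.copy(), v_vac.copy()

n_zq = (m * (s_sol - s_vac) + q_z * (v_sol - v_vac)) / (2.0 * omega)
n_q = np.trapz(n_zq, z) / (2.0 * np.pi)   # spectrum at this q
```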
\subsection{Minimal coupling within the FVHW formalism}
\label{sec:MinCoupFVHW}
In the context of sQED, a focus on transverse fields simplifies the transport equations and, more specifically, the differential operators as much as in the case of the Dirac-Heisenberg-Wigner formalism. In this regard, the full differential operators take on the form
\begin{multline}
\left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \mathbbm{w} = \left( \partial_z^2/4 - \boldsymbol{q}^2 \right) \mathbbm{w} \\
+ \mathcal{F}^{-1}_{q_z} \Bigg\{ \bigg( e q_x \left[ A_x (z + \frac{s_z}{2},t) + A_x (z - \frac{s_z}{2},t) \right] \bigg. \Bigg. \\
\hspace{1.15cm} + e q_y \left[ A_y (z + \frac{s_z}{2},t) + A_y (z - \frac{s_z}{2},t) \right] \\
\hspace{1.15cm} \Bigg. \bigg. - \frac{e^2}{2} \left[ A_x^2 (z + \frac{s_z}{2},t) + A_x^2 (z - \frac{s_z}{2},t) \right. \\
\left. + A_y^2 (z + \frac{s_z}{2},t) + A_y^2 (z - \frac{s_z}{2},t) \right] \bigg) \mathcal{F}_{q_z} \left\{ \mathbbm{w} \right\} \Bigg\}, \label{equ:FVHW_DerA}
\end{multline}
\pagebreak
\begin{multline}
\boldsymbol{\Pi} \cdot \boldsymbol{D} \ \mathbbm{w} = \left( q_z \ \partial_z \right) \mathbbm{w} \\
+ \mathcal{F}^{-1}_{q_z} \Bigg\{ \bigg( - {\rm i} e q_x \left[ A_x (z + \frac{s_z}{2},t) - A_x (z - \frac{s_z}{2},t) \right] \bigg. \Bigg. \\
\hspace{1.6cm} - {\rm i} e q_y \left[ A_y (z + \frac{s_z}{2},t) - A_y (z - \frac{s_z}{2},t) \right] \bigg. \Bigg. \\
\hspace{1.5cm} \Bigg. \bigg. + \frac{ {\rm i} e^2}{2} \left[ A_x^2 (z + \frac{s_z}{2},t) - A_x^2 (z - \frac{s_z}{2},t) \right. \\
\left. + A_y^2 (z + \frac{s_z}{2},t) - A_y^2 (z - \frac{s_z}{2},t) \right] \bigg) \mathcal{F}_{q_z} \left\{ \mathbbm{w} \right\} \Bigg\}, \label{equ:FVHW_DerB}
\end{multline}
with $\mathbbm{w} = \mathbbm{w} \left(t,z,\boldsymbol{q} \right)$.
The corresponding set of transport equations \eqref{eq_4_1}-\eqref{eq_4_2} is unchanged by the transformation. Nevertheless, we display the equations of motion as well as the initial conditions here again in order to have all important quantities in one place
\begin{alignat}{5}
&m \partial_t \mathbbm{f} && +\left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \mathbbm{h} && +\boldsymbol{\Pi} \cdot \boldsymbol{D} \ \mathbbm{k} && &&= 0, \label{eq_FVHW1} \\
&m \partial_t \mathbbm{g} && -\left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \mathbbm{h} && -\boldsymbol{\Pi} \cdot \boldsymbol{D} \ \mathbbm{k} && +2m^2 \mathbbm{h} &&= 0, \\
&m \partial_t \mathbbm{h} && +\left( \frac{1}{4} \boldsymbol{D}^2 - \boldsymbol{\Pi}^2 \right) \left( \mathbbm{f}+\mathbbm{g} \right) && && -2m^2 \mathbbm{g} &&= 0, \\
&m \partial_t \mathbbm{k} && && +\boldsymbol{\Pi} \cdot \boldsymbol{D} \ \left( \mathbbm{f} + \mathbbm{g} \right) \hspace{-5cm} && &&= 0. \label{eq_FVHW2}
\end{alignat}
Vacuum initial conditions are given by
\begin{alignat}{3}
\mathbbm{f}_{\rm vac} \left(\boldsymbol{q} \right) = \frac{1}{2} \left( \frac{m}{\sqrt{m^2 + \boldsymbol{q}^2}} + \frac{\sqrt{m^2 + \boldsymbol{q}^2}}{m} \right), \quad \mathbbm{h}_{\rm vac} = 0, && \\
\mathbbm{g}_{\rm vac} \left(\boldsymbol{q} \right) = \frac{1}{2} \left( \frac{m}{\sqrt{m^2 + \boldsymbol{q}^2}} - \frac{\sqrt{m^2 + \boldsymbol{q}^2}}{m} \right), \quad \mathbbm{k}_{\rm vac} = 0, &&
\end{alignat}
and the particle distribution function takes on the form
\begin{multline}
n^{FV} \left( \boldsymbol{x}, \boldsymbol{q} \right) = \\
\phantom{+} \frac{1}{2} \left( \frac{\sqrt{m^2 + \boldsymbol{q}^2}}{m} + \frac{m - \partial_z^2/(4m)}{\sqrt{m^2 + \boldsymbol{q}^2}} \right) \left( \mathbbm{f} - \mathbbm{f}_{\rm vac} \right)
\\
\hspace{1.cm} + \frac{1}{2} \left( \frac{\sqrt{m^2 + \boldsymbol{q}^2}}{m} - \frac{m+\partial_z^2/(4m)}{\sqrt{m^2 + \boldsymbol{q}^2}} \right) \left( \mathbbm{g} - \mathbbm{g}_{\rm vac} \right).
\end{multline}
\section{Application: \ Bi-frequent fields within a periodic envelope}
\label{sec:Appl}
One possible application of the Heisenberg-Wigner formalism for transverse fields is given by the study of particle production rates in colliding, high-intensity waves. The exemplary field configuration we will discuss in this manuscript is given by
\begin{align}
\begin{split}
A_x (z \pm \frac{s_z}{2},t) &= A_{Z,x} \left( z \pm \frac{s_z}{2} \right) \times \\
& \left( A_{1,x} \left( t \right) \ \cos \left( \omega_1 t + k_1 \left( z \pm \frac{s_z}{2} \right) \right) \right. \\
& \left. + A_{2,x} \left( t \right) \ \cos \left( \omega_2 t - k_2 \left( z \pm \frac{s_z}{2} \right) \right) \right), \label{eq:ATZxPer}
\end{split} \\
\begin{split}
A_y (z \pm \frac{s_z}{2},t) &= A_{Z,y} \left( z \pm \frac{s_z}{2} \right) \times \\
& \left( A_{1,y} \left( t \right) \ \sin \left( \omega_1 t + k_1 \left( z \pm \frac{s_z}{2} \right) \right) \right. \\
& \left. + A_{2,y} \left( t \right) \ \sin \left( \omega_2 t - k_2 \left( z \pm \frac{s_z}{2} \right) \right) \right), \label{eq:ATZyPer}
\end{split}
\end{align}
with the spatial envelope functions $A_{Z,x} \left( z \right)$ and $A_{Z,y} \left( z \right)$, the temporal envelope functions $A_{1,x} \left( t \right)$, $A_{1,y} \left( t \right)$, $A_{2,x} \left( t \right)$ and $A_{2,y} \left( t \right)$ as well as photon energies $\omega_1$, $\omega_2$ and photon momenta $k_1$, $k_2$.
Note that we use spatial envelope functions $A_{Z,x} \left( z \right)$ and $A_{Z,y} \left( z \right)$ because we want to discuss field configurations where the interaction region of, e.g., two colliding pulses is such that the spatial finiteness cannot be ignored. We further assume that an expansion of the spatial envelope function in terms of $z$ would only give a crude approximation and is therefore not applicable.
The impact of the envelope is not to be underestimated, as Refs. \cite{HeinzlIlderton,KingEnv} have shown. Nevertheless, as our goal at this point is to demonstrate the power of incorporating minimal coupling at the operator level, we focus on field configurations that exhibit a periodic envelope function. The reason is that Fourier transforming a sine or cosine function yields a delta distribution. Hence, spatially oscillating field profiles are especially well suited to be incorporated into the Heisenberg-Wigner formalism; see Sec.~\ref{sec:Outlook} for an outlook on the possibilities of modeling pulse shapes, including a discussion of ways to implement non-periodic envelope functions.
In this context, we assume an envelope function of the form of
\begin{multline}
A_{Z,x} \left( z \pm \frac{s_z}{2} \right) = A_{Z,y} \left( z \pm \frac{s_z}{2} \right) = \\
\cos \left( \frac{1}{\lambda} \left( z \pm \frac{s_z}{2} \right) \right)^2. \label{eq:ATZEnv}
\end{multline}
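The decisive property of this envelope is its finite harmonic content, $\cos^2\left(z/\lambda\right) = \sum_{n=-1}^{1} \frac{1}{4}\binom{2}{n+1}\, {\rm e}^{2 {\rm i} n z/\lambda}$, which is what renders the momentum coupling discrete later on. A quick numerical check of this three-term expansion (grid and width are illustrative):

```python
import numpy as np
from math import comb

lam = 3.0                             # illustrative envelope width
x = np.linspace(-10.0, 10.0, 1001)

envelope = np.cos(x / lam) ** 2
# Three-term expansion with binomial weights comb(2, n+1)/4 = 1/4, 1/2, 1/4.
expansion = sum(comb(2, n + 1) / 4.0 * np.exp(2j * n * x / lam)
                for n in (-1, 0, 1)).real
```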
We have not specified any constraints on the width parameter $\lambda$. However, if clear periodicity of the field configuration is to be obtained,
the subcycle oscillation period has to match the period of the envelope function. In this regard, it should also be clarified that any even exponent is equally well suited for confining the field. The larger the exponent, the better the envelope function approximates a flat-top pulse and thus the more impact it has on the momentum transfer from the field to the particles.
If a particle number is to be extracted, a proper normalization has to be done after evaluating the transport equations \cite{KohlfurstDirac}.
Additionally, the field configuration \eqref{eq:ATZxPer}-\eqref{eq:ATZyPer} is formulated in a very general way. Hence, a variety of simpler configurations can be derived from the final set of equations by choosing suitable amplitudes and frequencies. For example, in the limit
\begin{align}
& A_{1,x} \left( t \right) = A_{2,x} \left( t \right) = A_{x} \left( t \right),\\
& A_{1,y} \left( t \right) = A_{2,y} \left( t \right) = A_{y} \left( t \right), \\
& \omega_1 = \omega_2 = \omega,~ k_1 = k_2 = k,~ \lambda \to \infty,
\end{align}
the coupling operators describing a standing wave configuration are recovered. These and other applications are given in the appendices; see Appendix \ref{App:LHA} for locally homogeneous fields, Appendix \ref{App:Assist} for assisting potentials, Appendix \ref{App:Stand} for a standing wave pattern and Appendix \ref{App:Bi} for the interaction of bi-frequent fields.
\subsection{Dirac-Heisenberg-Wigner formalism}
\label{sec:Appl_DHW}
Within the DHW formalism for transverse fields, employing a field shape of the form of Eqs. \eqref{eq:ATZxPer}-\eqref{eq:ATZyPer} with envelope Eq. \eqref{eq:ATZEnv}, turns the modification factors in Eqs. \eqref{eqn6_1}-\eqref{eqn6_2} into
\begin{widetext}
\begin{align}
\begin{split}
\left[ A_x \right. & \left. \left( z+ \frac{s_z}{2},t \right) - A_x \left( z- \frac{s_z}{2},t \right) \right] = \label{eq:A1A21e} \\
& +\cos \left( \frac{s_z + 2z}{2 \lambda} \right)^2 \ \left( A_{1,x} (t) \ \cos \left( \omega_1 t + k_1 z + \frac{k_1 s_z}{2} \right) + A_{2,x} (t) \ \cos \left( \omega_2 t - k_2 z - \frac{k_2 s_z}{2} \right) \right) \\
& -\cos \left( \frac{s_z - 2z}{2 \lambda} \right)^2 \ \left( A_{1,x} (t) \ \cos \left( \omega_1 t + k_1 z - \frac{k_1 s_z}{2} \right) + A_{2,x} (t) \ \cos \left( \omega_2 t - k_2 z + \frac{k_2 s_z}{2} \right) \right),
\end{split} \\
\begin{split}
\left[ A_y \right. & \left. \left( z+ \frac{s_z}{2},t \right) - A_y \left( z- \frac{s_z}{2},t \right) \right] = \label{eq:A1A22e} \\
& +\cos \left( \frac{s_z + 2z}{2 \lambda} \right)^2 \ \left( A_{1,y} (t) \ \sin \left( \omega_1 t + k_1 z + \frac{k_1 s_z}{2} \right) + A_{2,y} (t) \ \sin \left( \omega_2 t - k_2 z - \frac{k_2 s_z}{2} \right) \right) \\
& -\cos \left( \frac{s_z - 2z}{2 \lambda} \right)^2 \ \left( A_{1,y} (t) \ \sin \left( \omega_1 t + k_1 z - \frac{k_1 s_z}{2} \right) + A_{2,y} (t) \ \sin \left( \omega_2 t - k_2 z + \frac{k_2 s_z}{2} \right) \right),
\end{split} \\
\begin{split}
\left[ A_x \right. & \left. \left( z+ \frac{s_z}{2},t \right) + A_x \left( z- \frac{s_z}{2},t \right) \right] = \label{eq:A1A23e} \\
& +\cos \left( \frac{s_z + 2z}{2 \lambda} \right)^2 \ \left( A_{1,x} (t) \ \cos \left( \omega_1 t + k_1 z + \frac{k_1 s_z}{2} \right) + A_{2,x} (t) \ \cos \left( \omega_2 t - k_2 z - \frac{k_2 s_z}{2} \right) \right) \\
& +\cos \left( \frac{s_z - 2z}{2 \lambda} \right)^2 \ \left( A_{1,x} (t) \ \cos \left( \omega_1 t + k_1 z - \frac{k_1 s_z}{2} \right) + A_{2,x} (t) \ \cos \left( \omega_2 t - k_2 z + \frac{k_2 s_z}{2} \right) \right),
\end{split} \\
\begin{split}
\left[ A_y \right. & \left. \left( z+ \frac{s_z}{2},t \right) + A_y \left( z- \frac{s_z}{2},t \right) \right] = \label{eq:A1A24e} \\
& +\cos \left( \frac{s_z + 2z}{2 \lambda} \right)^2 \ \left( A_{1,y} (t) \ \sin \left( \omega_1 t + k_1 z + \frac{k_1 s_z}{2} \right) + A_{2,y} (t) \ \sin \left( \omega_2 t - k_2 z - \frac{k_2 s_z}{2} \right) \right) \\
& +\cos \left( \frac{s_z - 2z}{2 \lambda} \right)^2 \ \left( A_{1,y} (t) \ \sin \left( \omega_1 t + k_1 z - \frac{k_1 s_z}{2} \right) + A_{2,y} (t) \ \sin \left( \omega_2 t - k_2 z + \frac{k_2 s_z}{2} \right) \right).
\end{split}
\end{align}
Consequently, the Fourier transforms can be performed analytically, and we obtain a discrete coupling mechanism
\begin{alignat}{6}
& D_1 \mathbbm{w} && = \sum_{n=-1}^1 \binom{2}{n+1} \times && && && \label{eqn12b_1} \\
& && && \hspace{-0.5 cm} \Bigg\{ \Bigg. -\frac{e A_{1,x} \left( t \right) }{4} \sin \left(\omega_1 t + k_1 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) - \mathbbm{w} \left( z, q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] && && \notag \\
& && && \hspace{-0.15 cm} +\frac{e A_{2,x} \left( t \right) }{4} \sin \left(\omega_2 t - k_2 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_2}{2} - \frac{n}{\lambda} \right) - \mathbbm{w} \left( z, q_z - \frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \Bigg. \Bigg\}, && && \notag \\ \notag
%
\end{alignat}
\begin{alignat}{6}
& D_2 \mathbbm{w} && = \sum_{n=-1}^1 \binom{2}{n+1} \times && && && \label{eqn12b_3} \\
& && && \hspace{-0.5 cm} \Bigg\{ \Bigg. +\frac{e A_{1,y} \left( t \right) }{4} \cos \left(\omega_1 t + k_1 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) - \mathbbm{w} \left( z, q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] && \notag \\
& && && \hspace{-0.15 cm} -\frac{e A_{2,y} \left( t \right) }{4} \cos \left(\omega_2 t - k_2 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_2}{2} - \frac{n}{\lambda} \right) - \mathbbm{w} \left( z, q_z - \frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \Bigg. \Bigg\}, && && \notag \\
%
& \Pi_1 \mathbbm{w} && = \sum_{n=-1}^1 \binom{2}{n+1} \times && && && \label{eqn12b_4} \\
& && && \hspace{-0.5 cm} \Bigg\{ \Bigg. -\frac{e A_{1,x} \left( t \right) }{8} \cos \left(\omega_1 t + k_1 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) + \mathbbm{w} \left( z, q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] && && \notag \\
& && && \hspace{-0.15 cm} -\frac{e A_{2,x} \left( t \right) }{8} \cos \left(\omega_2 t - k_2 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_2}{2} - \frac{n}{\lambda} \right) + \mathbbm{w} \left( z, q_z - \frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \Bigg. \Bigg\}, && && \notag \\
%
& \Pi_2 \mathbbm{w} && = \sum_{n=-1}^1 \binom{2}{n+1} \times && && && \label{eqn12b_5} \\
& && && \hspace{-0.5 cm} \Bigg\{ \Bigg. -\frac{e A_{1,y} \left( t \right) }{8} \sin \left(\omega_1 t + k_1 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) + \mathbbm{w} \left( z, q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] && && \notag \\
& && && \hspace{-0.15 cm} -\frac{e A_{2,y} \left( t \right) }{8} \sin \left(\omega_2 t - k_2 z + \frac{2nz}{\lambda} \right) \ \left[ \mathbbm{w} \left( z, q_z+\frac{k_2}{2} - \frac{n}{\lambda} \right) + \mathbbm{w} \left( z, q_z - \frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \Bigg. \Bigg\}. && && \notag
%
\end{alignat}
\end{widetext}
Furthermore, we have $D_3=\partial_z$ and a vanishing operator $\Pi_3=0$.
We observe that two factors enter these discrete coupling operators. First, there is a momentum transfer coming from the plane-wave features; see the terms $\left( k_1 s_z \right)/2$ and $\left( k_2 s_z \right)/2$ in Eqs. \eqref{eq:A1A21e}-\eqref{eq:A1A24e}, which translate into momentum couplings $q_z \pm k_1/2$ and $q_z \pm k_2/2$, respectively. The second contribution is due to the envelope function, which adds an additional layer of complexity scaling as $\propto 1/\lambda$.
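The binomial weights $\binom{2}{n+1}$ with $n = -1, 0, 1$ in these sums stem from expanding the cosine-squared envelope into complex exponentials. A minimal numerical sketch (our illustration, not code from this work) verifying the underlying identity $\cos^2 x = \tfrac{1}{4}\sum_{n=-1}^{1}\binom{2}{n+1}\,{\rm e}^{2 {\rm i} n x}$:

```python
import numpy as np
from math import comb

# Expanding cos(x)^2 into complex exponentials gives exactly three
# harmonics n = -1, 0, +1 with binomial weights binom(2, n+1) / 4 --
# the same weights that appear in the discrete coupling operators.
x = np.linspace(-5.0, 5.0, 1001)
lhs = np.cos(x) ** 2
rhs = sum(comb(2, n + 1) * np.exp(2j * n * x) for n in range(-1, 2)) / 4
assert np.allclose(lhs, rhs.real)
assert np.allclose(rhs.imag, 0.0, atol=1e-9)  # the n = +/-1 terms pair up
```

The same mechanism with $\cos^4$ produces the five-term sums $\sum_{n=-2}^{2}\binom{4}{n+2}$ appearing in the quadratic couplings below.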
At this level, the differential operators can be readily applied to the transport equations, with the advantage that no numerical evaluation of Fourier transforms has to be performed at all. Conceptually, such a formulation of the Heisenberg-Wigner formalism is exceptionally intriguing, as calculations can be performed entirely in coordinate-momentum space; see Sec. \ref{sec:MinCoupDHW} for the transport equations, vacuum initial conditions as well as a definition of observables.
Nevertheless, to put the possibilities of the Heisenberg-Wigner formalism for transverse fields on full display, we further improve the representation of the transport equations. Since in Eqs. \eqref{eqn12b_1}-\eqref{eqn12b_5} the dependence on the spatial coordinate $z$ is expressed entirely in terms of sine and cosine functions, performing yet another Fourier transform transfers the system to a complete energy-momentum-based formalism. In this regard, $\boldsymbol{q}$ denotes momenta in phase space, and instead of the spatial coordinate $z$ we have the variable $K$ denoting energy-momentum channels. To be more specific, assuming that the periods of the envelope function and the subcycle oscillations match, we can expand the Wigner components in terms of a Fourier series.
This yields the coupling terms
\begin{widetext}
\begin{alignat}{6}
& && D_1 \mathbbm{w} = && \sum_{n=-1}^1 \ \binom{2}{n+1} && \ \frac{ {\rm i} e A_{1,x} \left( t \right) }{8} \times && && \label{eqn13_1} \\
& && && && \left( + \exp \left(+ {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_1 - \frac{2n}{\lambda}, q_z+\frac{k_1}{2} + \frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K -k_1 - \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. - \exp \left(- {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda}, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right) && && \notag \\
& && && - \sum_{n=-1}^1 \ \binom{2}{n+1} \ \frac{ {\rm i} e A_{2,x} \left( t \right) }{8} \times \hspace{-5cm} && && && \notag \\
& && && && \left( + \exp \left(+ {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. - \exp \left(- {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right), && && \notag \\
%
& && D_2 \mathbbm{w} = && \sum_{n=-1}^1 \ \binom{2}{n+1} && \ \frac{e A_{1,y} \left( t \right) }{8} \times && && \label{eqn13_2} \\
& && && && \left( + \exp \left(+ {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_1 - \frac{2n}{\lambda}, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K -k_1 - \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. + \exp \left(- {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda}, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right) && && \notag \\
& && && - \sum_{n=-1}^1 \ \binom{2}{n+1} \ \frac{e A_{2,y} \left( t \right) }{8} \times \hspace{-5cm} && && && \notag \\
& && && && \left( + \exp \left(+ {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. + \exp \left(- {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) - \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right), && && \notag
\end{alignat}
\begin{alignat}{6}
%
& && \Pi_1 \mathbbm{w} = && -\sum_{n=-1}^1 \ \binom{2}{n+1} && \ \frac{e A_{1,x} \left( t \right) }{16} \times && && \label{eqn13_4} \\
& && && && \left( + \exp \left(+ {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_1 - \frac{2n}{\lambda}, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K -k_1 - \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. + \exp \left(- {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda}, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right) && && \notag \\
& && && - \sum_{n=-1}^1 \ \binom{2}{n+1} \ \frac{e A_{2,x} \left( t \right) }{16} \times \hspace{-5cm} && && && \notag \\
& && && && \left( + \exp \left(+ {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. + \exp \left(- {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right), && && \notag \\
%
& && \Pi_2 \mathbbm{w} = && \sum_{n=-1}^1 \ \binom{2}{n+1} && \ \frac{ {\rm i} e A_{1,y} \left( t \right) }{16} \times && && \label{eqn13_5} \\
& && && && \left( + \exp \left(+ {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_1 - \frac{2n}{\lambda}, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K -k_1 - \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. - \exp \left(- {\rm i} \omega_1 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda}, q_z+\frac{k_1}{2} +\frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K + k_1 + \frac{2n}{\lambda} , q_z - \frac{k_1}{2} - \frac{n}{\lambda} \right) \right] \right) && && \notag \\
& && && + \sum_{n=-1}^1 \ \binom{2}{n+1} \ \frac{ {\rm i} e A_{2,y} \left( t \right) }{16} \times \hspace{-5cm} && && && \notag \\
& && && && \left( + \exp \left(+ {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K + k_2 - \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right. && && \notag \\
& && && && \hspace{0.3cm} \left. - \exp \left(- {\rm i} \omega_2 t \right) \ \left[ \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda} , q_z + \frac{k_2}{2} - \frac{n}{\lambda} \right) + \tilde{\mathbbm{w}} \left( K - k_2 + \frac{2n}{\lambda}, q_z-\frac{k_2}{2} + \frac{n}{\lambda} \right) \right] \right). && && \notag
\end{alignat}
\end{widetext}
Additionally, we have $D_3 \mathbbm{w} = {\rm i} K \tilde{\mathbbm{w}} \left( K, q_z \right)$ and $\Pi_3 = 0$. \\
The transport equations then take on the form
\begin{alignat}{5}
& \partial_t \tilde{\mathbbm{s}} && && -2 \boldsymbol{q} \cdot \tilde{\mathbbm{t}}_\mathbbm{1} && -2 \boldsymbol{\Pi} \cdot \boldsymbol{\mathbbm{t}}_\mathbbm{1} &&= 0, \label{eq_7_1} \\
& \partial_t \tilde{\mathbbm{p}} && && +2 \boldsymbol{q} \cdot \tilde{\mathbbm{t}}_\mathbbm{2} && +2 \boldsymbol{\Pi} \cdot \boldsymbol{\mathbbm{t}}_\mathbbm{2} &&= -2m \tilde{\mathbbm{a}}_\mathbb{0}, \\
& \partial_t \tilde{\mathbbm{v}}_\mathbb{0} &&+ \boldsymbol{D} \cdot \boldsymbol{\mathbbm{v}} && && &&= 0, \\
& \partial_t \tilde{\mathbbm{a}}_\mathbb{0} &&+ \boldsymbol{D} \cdot \boldsymbol{\mathbbm{a}} && && &&= +2m \tilde{\mathbbm{p}}, \\
& \partial_t \tilde{\boldsymbol{\mathbbm{v}}} &&+ \boldsymbol{D} \ \mathbbm{v}_\mathbb{0} && +2 \boldsymbol{q} \times \tilde{\boldsymbol{\mathbbm{a}}} && +2 \boldsymbol{\Pi} \times \boldsymbol{\mathbbm{a}} &&= -2m \tilde{\mathbbm{t}}_\mathbbm{1}, \\
& \partial_t \tilde{\boldsymbol{\mathbbm{a}}} &&+ \boldsymbol{D} \ \mathbbm{a}_\mathbb{0} && +2 \boldsymbol{q} \times \tilde{\boldsymbol{\mathbbm{v}}} && +2 \boldsymbol{\Pi} \times \boldsymbol{\mathbbm{v}} &&= 0, \\
& \partial_t \tilde{\mathbbm{t}}_\mathbbm{1} &&+ \boldsymbol{D} \times \boldsymbol{\mathbbm{t}}_\mathbbm{2} && +2 \boldsymbol{q} \ \tilde{\mathbbm{s}} && +2 \boldsymbol{\Pi} \ \mathbbm{s} &&= +2m \tilde{\boldsymbol{\mathbbm{v}}}, \\
& \partial_t \tilde{\mathbbm{t}}_\mathbbm{2} &&- \boldsymbol{D} \times \boldsymbol{\mathbbm{t}}_\mathbbm{1} && -2 \boldsymbol{q} \ \tilde{\mathbbm{p}} && -2 \boldsymbol{\Pi} \ \mathbbm{p} &&= 0, \label{eq_7_2}
\end{alignat}
where the Wigner components $\mathbbm{w}$ are treated according to the procedure displayed in Eqs. \eqref{eqn13_1}-\eqref{eqn13_5}. Components denoted by $\tilde{\mathbbm{w}} = \tilde{\mathbbm{w}} \left(K, q_z \right)$ have already been transformed.
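The structure of these channel couplings, where each factor $\cos(kz)$ or $\sin(kz)$ connects $\tilde{\mathbbm{w}}$ at $K \mp k$, is the elementary Fourier shift property. A short numerical sketch (ours; the grid size and mode numbers are arbitrary choices) demonstrating it:

```python
import numpy as np

# Multiplying a z-dependent field by cos(k*z) couples the Fourier
# modes K -> K -/+ k -- the mechanism that turns the transport
# equations into discrete energy-momentum channel equations.
N, L = 512, 64.0
z = np.arange(N) * (L / N)
k = 2 * np.pi * 8 / L                 # mode 8, fits the grid exactly
f = np.exp(2j * np.pi * 3 * z / L)    # a single Fourier mode, index 3
spec = np.fft.fft(np.cos(k * z) * f) / N
occupied = {m for m in range(N) if abs(spec[m]) > 1e-10}
# only modes 3 + 8 = 11 and (3 - 8) mod N = N - 5 are populated
assert occupied == {11, N - 5}
```

With a cosine-squared envelope the shift set stays finite, which is precisely why the channel description closes.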
In this representation, vacuum initial conditions are given by
\begin{alignat}{8}
& \tilde{\mathbbm{s}}_{\rm vac} \left(K, \boldsymbol{q} \right) && = -\frac{2m}{\sqrt{m^2 + \boldsymbol{q}^2}} \ \delta \left( K \right), \quad &&
\tilde{\mathbbm{p}}_{\rm vac} && = 0, && \\
& \tilde{\mathbbm{v}}_\mathbb{0} {}_{\rm vac} && = 0, \quad &&
\tilde{\mathbbm{a}}_\mathbb{0} {}_{\rm vac} && = 0, && \\
& \tilde{\boldsymbol{\mathbbm{v}}}_{\rm vac} \left(K, \boldsymbol{q} \right) && = -\frac{2
\boldsymbol{q}}{\sqrt{m^2 + \boldsymbol{q}^2}} \ \delta \left( K \right), \quad &&
\tilde{\boldsymbol{\mathbbm{a}}}_{\rm vac} && = \boldsymbol{0}, && \\
& \tilde{\boldsymbol{\mathbbm{t}}}_\mathbbm{1} {}_{\rm vac} && = \boldsymbol{0}, \quad &&
\tilde{\boldsymbol{\mathbbm{t}}}_\mathbbm{2} {}_{\rm vac} && = \boldsymbol{0}, \quad &&
\end{alignat}
thus only zero-modes in $K$ do not vanish at $t \to - \infty$.
Accordingly, the particle distribution function reads
\begin{equation}
n \left( K, \boldsymbol{q} \right) = \frac{m \left( \tilde{\mathbbm{s}}-\tilde{\mathbbm{s}}_{\rm vac} \right) + {\boldsymbol q} \cdot \left( \tilde{\boldsymbol{\mathbbm{v}}} - \tilde{\boldsymbol{\mathbbm{v}}}_{\rm vac} \right)}{2\sqrt{m^2+\boldsymbol{q}^2}}.
\end{equation}
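As a consistency check (our sketch; reading the maximal value $2$ as the two occupied spin states is our interpretation, not a statement from the derivation above), one can strip the $\delta(K)$ prefactor and verify that the vacuum components give $n = 0$, while sign-flipping them saturates the occupation:

```python
import numpy as np

# Vacuum components (delta(K) prefactor stripped): s_vac = -2m/omega,
# v_vac = -2q/omega with omega = sqrt(m^2 + q^2).  Plugging them into
# the distribution function must give n = 0; flipping their sign
# gives the maximal value n = 2.
m = 1.0
q = np.array([0.3, -0.7, 1.2])
omega = np.sqrt(m**2 + q @ q)
s_vac = -2.0 * m / omega
v_vac = -2.0 * q / omega

def n(s, v):
    return (m * (s - s_vac) + q @ (v - v_vac)) / (2.0 * omega)

assert abs(n(s_vac, v_vac)) < 1e-14        # vacuum: no particles
assert abs(n(-s_vac, -v_vac) - 2.0) < 1e-12  # fully inverted: n = 2
```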
\subsection{Feshbach-Villars-Heisenberg-Wigner formalism}
\label{sec:Appl_FVHW}
We have already seen in Sec. \ref{sec:Appl_DHW} how to adopt a complex field configuration into this new variation of the Heisenberg-Wigner formalism for Dirac particles. While the same mechanism applies to the FVHW formalism, the differential operators are much more involved; obtaining the characteristic coupling terms in the Wigner components is therefore more complicated.
Given a configuration of the form of Eqs. \eqref{eq:ATZxPer}-\eqref{eq:ATZyPer}, the differential operators \eqref{equ:FVHW_DerA}-\eqref{equ:FVHW_DerB} are converted into coupling operators of the form \cite{footnote3}
\begin{widetext}
\begin{align}
& \left( \frac{1}{4} \boldsymbol{D}^2 - \Pi^2 \right) \mathbbm{w} = \label{eq:FV_Op1} \\
& \hspace{1cm} \left( \partial_z^2/4 - \boldsymbol{q}^2 \right) \times \left[ \mathbbm{w} \left( z, q_z \right) \right] \notag \\
\begin{split}
& \hspace{1cm} +\sum_{i=1}^2 \ \sum_{n=-1}^1 \binom{2}{n+1} \ \frac{e}{4} \left( A_{i,x} \left( t \right) q_x \cos \left(\omega_i t + \chi_i k_i z + \frac{2nz}{\lambda} \right) \right. \\
& \hspace{6cm} \left. + A_{i,y} \left( t \right) q_y \sin \left(\omega_i t + \chi_i k_i z + \frac{2nz}{\lambda} \right) \right) \times \\
& \hspace{7cm} \left[ \mathbbm{w} \left( z, q_z+\frac{\chi_i k_i}{2} +\frac{n}{\lambda} \right) + \mathbbm{w} \left( z, q_z - \frac{\chi_i k_i}{2} - \frac{n}{\lambda} \right) \right] \notag
\end{split} \\
\begin{split}
& \hspace{1cm} -\sum_{i=1}^2 \ \sum_{j=1}^2 \ \sum_{\rho=-1,+1} \ \sum_{n=-2}^2 \binom{4}{n+2} \ \frac{e^2}{64} \Big( A_{i,x} \left( t \right) A_{j,x} \left( t \right) - \rho A_{i,y} \left( t \right) A_{j,y} \left( t \right) \Big) \times \\
& \hspace{2cm} \cos \Big( \left(\omega_i + \rho \omega_j \right) t + \left(\chi_i k_i + \rho \chi_j k_j \right) z + \frac{2nz}{\lambda} \Big) \times \\
& \hspace{3.8cm} \left[ \mathbbm{w} \left( z, q_z+\frac{\chi_i k_i}{2} + \frac{\rho \chi_j k_j}{2} +\frac{n}{\lambda} \right) + \mathbbm{w} \left( z, q_z - \frac{\chi_i k_i}{2} - \frac{\rho \chi_j k_j}{2} - \frac{n}{\lambda} \right) \right], \notag
\end{split}
\end{align}
\begin{align}
& \left( \boldsymbol{\Pi} \cdot \boldsymbol{D} \right) \mathbbm{w} = \label{eq:FV_Op2} \\
& \hspace{1cm} \left( q_z \ \partial_z \right) \times \left[ \mathbbm{w} \left( z, q_z \right) \right] \notag \\
\begin{split}
& \hspace{1cm} -\sum_{i=1}^2 \ \sum_{n=-1}^1 \binom{2}{n+1} \ \frac{e}{4} \left( A_{i,x} \left( t \right) q_x \sin \left(\omega_i t + \chi_i k_i z + \frac{2nz}{\lambda} \right) \right. \\
& \hspace{6cm} \left. - A_{i,y} \left( t \right) q_y \cos \left(\omega_i t + \chi_i k_i z + \frac{2nz}{\lambda} \right) \right) \times \\
& \hspace{7cm} \left[ \mathbbm{w} \left( z, q_z+\frac{\chi_i k_i}{2} +\frac{n}{\lambda} \right) - \mathbbm{w} \left( z, q_z - \frac{\chi_i k_i}{2} - \frac{n}{\lambda} \right) \right] \notag
\end{split} \\
\begin{split}
& \hspace{1cm} +\sum_{i=1}^2 \ \sum_{j=1}^2 \ \sum_{\rho=-1,+1} \ \sum_{n=-2}^2 \binom{4}{n+2} \ \frac{e^2}{64} \Big( A_{i,x} \left( t \right) A_{j,x} \left( t \right) - \rho A_{i,y} \left( t \right) A_{j,y} \left( t \right) \Big) \times \\
& \hspace{2cm} \sin \Big( \left(\omega_i + \rho \omega_j \right) t + \left(\chi_i k_i + \rho \chi_j k_j \right) z + \frac{2nz}{\lambda} \Big) \times \\
& \hspace{3.8cm} \left[ \mathbbm{w} \left( z, q_z+\frac{\chi_i k_i}{2} + \frac{\rho \chi_j k_j}{2} +\frac{n}{\lambda} \right) - \mathbbm{w} \left( z, q_z - \frac{\chi_i k_i}{2} - \frac{\rho \chi_j k_j}{2} - \frac{n}{\lambda} \right) \right], \notag
\end{split}
\end{align}
\end{widetext}
with $\chi_i = 1$ if $i=1$ and $\chi_i = -1$ if $i=2$.
Operators \eqref{eq:FV_Op1} and \eqref{eq:FV_Op2} can be used directly to determine solutions of the system of transport equations for scalar fields \eqref{eq_FVHW1}-\eqref{eq_FVHW2}.
One major difference between the DHW and the FVHW formalism becomes apparent again at this stage of the derivation. Within quantum electrodynamics, couplings between fields and particles are based on the Dirac equation, which is first order in spatial derivatives. Consequently, the coupling terms are linear in the vector potential, and a maximum of one photon per wave can be absorbed per update step. The latter is realized in the form of the terms $k_1/2$, $k_2/2$ in the momentum coordinate of the Wigner components \eqref{eqn12b_1}-\eqref{eqn12b_5}.
This behaviour is entirely different in the Feshbach-Villars formalism. Scalar quantum electrodynamics is based on the Klein-Gordon equation \eqref{equ:FVKG}, thus couplings up to second order may appear. Equations \eqref{eq:FV_Op1} and \eqref{eq:FV_Op2} display this by exhibiting not only a linear (first sum in each of the operators) but also a quadratic term (second set of sums). In this context, at each update step there arises the possibility for a two-photon exchange.
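To make this explicit with a small sketch (ours; the wave numbers are arbitrary stand-ins): the linear coupling offers only the one-photon shifts $\pm k_i/2$, whereas the quadratic $A^2$ term adds the two-photon combinations $\pm(k_i + \rho k_j)/2$:

```python
# Momentum shifts of q_z generated per update step.  The linear
# (Dirac-like) coupling moves q_z by +/- k_i/2 only; the quadratic
# (Klein-Gordon) coupling also produces the two-photon combinations
# +/- (k_i + rho*k_j)/2 with rho = +/-1.
k1, k2 = 1.0, 3.0
one_photon = {s * k / 2 for k in (k1, k2) for s in (+1, -1)}
two_photon = {s * (ki + rho * kj) / 2
              for ki in (k1, k2) for kj in (k1, k2)
              for rho in (+1, -1) for s in (+1, -1)}
assert one_photon == {0.5, -0.5, 1.5, -1.5}
# two-photon exchange reaches e.g. (k1+k2)/2 = 2 and (k2-k1)/2 = 1
assert {0.0, 1.0, -1.0, 2.0, -2.0, 3.0, -3.0} <= two_photon
```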
As with the coupling terms in the DHW formalism, the spatial coordinate $z$ appears only within sine and cosine functions. Hence, performing a Fourier transform with respect to $z$ of the complete transport equations \eqref{eq_FVHW1}-\eqref{eq_FVHW2} yields a set of coupled ordinary differential equations \\
\begin{alignat}{5}
&m \partial_t \mathbbm{\tilde f ^+} && && && +2m^2 \mathbbm{\tilde{h}} &&= 0, \\
&m \partial_t \mathbbm{\tilde f ^-} && -2 P^2 \mathbbm{\tilde h} && +2I \ \mathbbm{\tilde k} && -2m^2 \mathbbm{\tilde{h}} &&= 0, \\
\phantom{\sum_x^x}
&m \partial_t \mathbbm{\tilde{h}} && -\phantom{2} P^2 \mathbbm{\tilde f ^+} \hspace{-0.5cm} && && - \phantom{2} m^2 \mathbbm{\tilde f ^+} +m^2 \mathbbm{\tilde f ^-}&&= 0, \\
&m \partial_t \mathbbm{\tilde{k}} && && +\phantom{2} I \ \mathbbm{\tilde f ^+} \hspace{-0.5cm} && &&= 0,
\end{alignat}
where we have used the relations $\mathbbm{\tilde f^\pm}=(\mathbbm{\tilde{f}} \pm \mathbbm{\tilde{g}})$.
The corresponding vacuum initial conditions are transformed accordingly
\begin{alignat}{3}
\mathbbm{\tilde f^+}_{\rm vac} \left(K, \boldsymbol{q} \right) = \frac{m}{\sqrt{m^2 + \boldsymbol{q}^2}} \delta \left( K \right), \quad \mathbbm{\tilde{h}}_{\rm vac} = 0, && \\
\mathbbm{\tilde f^-}_{\rm vac} \left(K, \boldsymbol{q} \right) = \frac{\sqrt{m^2 + \boldsymbol{q}^2}}{m} \delta \left( K \right), \quad \mathbbm{\tilde{k}}_{\rm vac} = 0. &&
\end{alignat}
The distribution function then reads
\begin{multline}
n^{FV} \left( K, \boldsymbol{q} \right) = \\
\left( \frac{\sqrt{m^2 + \boldsymbol{q}^2}}{2m} + \frac{K^2}{8m \sqrt{m^2 + \boldsymbol{q}^2}} \right) \Big( \mathbbm{\tilde f^+} - \mathbbm{\tilde f^+}_{\rm vac} \Big) \\
+\frac{m}{2\sqrt{m^2 + \boldsymbol{q}^2}} \Big(\mathbbm{\tilde f^-} - \mathbbm{\tilde f^-}_{\rm vac} \Big).
\end{multline}
The coupling terms in transformed $qK$-space are given by
\begin{widetext}
\begin{align}
& - P^2 \mathbbm{w} = \left( \frac{1}{4} \boldsymbol{D}^2 - \Pi^2 \right) \mathbbm{w} = \left( -K^2/4 - \boldsymbol{q}^2 \right) \times \left[ \tilde{\mathbbm{w}} \left( K, q_z \right) \right] \label{eq:FV_Op3} \\
\begin{split}
& \hspace{1cm} +\sum_{i=1}^2 \ \sum_{n=-1}^1 \binom{2}{n+1} \ \sum_{\nu=-1,+1} \ \frac{e}{8}
\left( A_{i,x} \left( t \right) \ q_x - {\rm i} \nu A_{i,y} \left( t \right) \ q_y \right) \times \\
& \hspace{1.5cm} {\rm e} ^{ {\rm i} \nu \omega_i t}
\left[ \tilde{\mathbbm{w}} \left( K - \nu \left( \chi_i k_i + \frac{2n}{\lambda} \right), q_z + \left( \frac{\chi_i k_i}{2} +\frac{n}{\lambda} \right) \right) \right. \\
& \hspace{7.5cm} \left. + \tilde{\mathbbm{w}} \left( K - \nu \left( \chi_i k_i + \frac{2n}{\lambda} \right), q_z - \left( \frac{\chi_i k_i}{2} + \frac{n}{\lambda} \right) \right) \right] \notag
\end{split} \\
\begin{split}
& \hspace{1cm} -\sum_{i=1}^2 \ \sum_{j=1}^2 \ \sum_{\rho=-1,+1} \ \sum_{n=-2}^2 \binom{4}{n+2} \ \sum_{\nu=-1,+1} \ \frac{e^2}{128} \Big( A_{i,x} \left( t \right) A_{j,x} \left( t \right) - \rho A_{i,y} \left( t \right) A_{j,y} \left( t \right) \Big) \times \\
& \hspace{2cm} {\rm e} ^{ {\rm i} \nu \left( \omega_i + \rho \omega_j \right) t} \left[ \tilde{\mathbbm{w}} \left( K - \nu \left( \chi_i k_i + \rho \chi_j k_j + \frac{2n}{\lambda} \right), q_z + \left( \frac{\chi_i k_i}{2} + \frac{\rho \chi_j k_j}{2} +\frac{n}{\lambda} \right) \right) \right. \\
& \hspace{4.2cm} + \left. \tilde{\mathbbm{w}} \left( K - \nu \left( \chi_i k_i + \rho \chi_j k_j + \frac{2n}{\lambda} \right), q_z - \left( \frac{\chi_i k_i}{2} + \frac{\rho \chi_j k_j}{2} + \frac{n}{\lambda} \right) \right) \right], \notag
\end{split}
\end{align}
\begin{align}
& I \mathbbm{w} = \left( \boldsymbol{\Pi} \cdot \boldsymbol{D} \right) \mathbbm{w} = \left( {\rm i} q_z \ K \right) \times \left[ \tilde{\mathbbm{w}} \left( K, q_z \right) \right] \label{eq:FV_Op4} \\
\begin{split}
& \hspace{1cm} +\sum_{i=1}^2 \ \sum_{n=-1}^1 \binom{2}{n+1} \ \sum_{\nu=-1,+1} \ \frac{e}{8} \left( {\rm i} \nu A_{i,x} \left( t \right) q_x + A_{i,y} \left( t \right) q_y \right) \times \\
& \hspace{1.5cm} {\rm e} ^{ {\rm i} \nu \omega_i t} \left[ \tilde{\mathbbm{w}} \left( K - \nu \left(\chi_i k_i + \frac{2n}{\lambda} \right), q_z+ \left( \frac{\chi_i k_i}{2} +\frac{n}{\lambda} \right) \right) \right. \\
& \hspace{7.5cm} \left. - \tilde{\mathbbm{w}} \left( K - \nu \left( \chi_i k_i + \frac{2n}{\lambda} \right), q_z - \left( \frac{\chi_i k_i}{2} + \frac{n}{\lambda} \right) \right) \right] \notag
\end{split} \\
\begin{split}
& \hspace{1cm} -\sum_{i=1}^2 \ \sum_{j=1}^2 \ \sum_{\rho=-1,+1} \ \sum_{n=-2}^2 \binom{4}{n+2} \ \sum_{\nu=-1,+1} \ \frac{ {\rm i} e^2 \nu}{128} \Big( A_{i,x} \left( t \right) A_{j,x} \left( t \right) - \rho A_{i,y} \left( t \right) A_{j,y} \left( t \right) \Big) \times \\
& \hspace{2cm} {\rm e} ^{ {\rm i} \nu \left( \omega_i + \rho \omega_j \right) t} \left[ \tilde{\mathbbm{w}} \left( K - \nu \left( \chi_i k_i + \rho \chi_j k_j + \frac{2n}{\lambda} \right), q_z + \left( \frac{\chi_i k_i}{2} + \frac{\rho \chi_j k_j}{2} +\frac{n}{\lambda} \right) \right) \right. \\
& \hspace{4.2cm} - \left. \tilde{\mathbbm{w}} \left( K - \nu \left( \chi_i k_i + \rho \chi_j k_j + \frac{2n}{\lambda} \right), q_z - \left( \frac{\chi_i k_i}{2} + \frac{\rho \chi_j k_j}{2} + \frac{n}{\lambda} \right) \right) \right]. \notag
\end{split}
\end{align}
\end{widetext}
\section{Conclusions}
\label{sec:Conclusion}
The Heisenberg-Wigner formalism has always been hampered by the curse of dimensionality, as its core principle is to fully resolve spatial as well as momentum coordinates (any problem is posed on an at least $2n$-dimensional domain, with $n$ being the number of coordinate dependencies). This has effectively turned its biggest strength into its biggest weakness.
The method introduced in this manuscript copes with this issue, finally revealing the formalism's full potential.
Over the course of this manuscript we have introduced a class of field configurations that circumvent the dimensionality problem by separating off all momenta in whose spatial coordinates the fields are homogeneous. In this context, it is possible to reduce the transport equations to a $1+1$-dimensional domain, with the two additional momentum coordinates given as external parameters. As a result, the equations are trivially parallelizable, and the computational requirements are drastically reduced.
Moreover, we have shown that for a special class of transverse fields the pseudo-differential operators, which in their original form comprise momentum derivatives up to infinite order, are easily resolved in terms of Fourier transforms, completely eliminating the need for, e.g., operator expansions. In the same vein, integrations over an auxiliary variable, connecting the momentum variables of the formalism with the classical kinetic momentum, can be carried out analytically, further reducing the complexity of the differential operators.
Additionally, we have discussed applications of the new approach, e.g., in spatially oscillating fields. In such cases, the pseudo-spectral behaviour of the differential operators can be fully eliminated and, thus, these operators are replaced by discrete coupling terms.
\section{Outlook}
\label{sec:Outlook}
The derivations provided in this manuscript yield a computationally improved version of the Heisenberg-Wigner formalism, providing the opportunity to explore new areas of the configuration space which have not been accessible so far.
It should be emphasized, though, that we have only laid the basis for efficiently treating a complete class of problems within phase-space formalisms. In this regard, we have not even touched the subject of computation and numerical feasibility. While the new formulation of the formalism certainly is an improvement, the true strength lies in the fact that new computer code can now be devised that incorporates the conceptual differences at a fundamental level, e.g., in optimizing particle production rates \cite{PhysRevD.88.045028, DesignerFields, PhysRevA.99.022128, Gelfand-Dikii}.
\subsection{Beyond single-directional propagation}
The advantage of being able to factor out a specific coordinate persists also for field configurations where two or more transverse fields propagate in different directions. For example, for a scenario where one wave is propagating in direction $z$ and a second wave is propagating in direction $x$, the $y$-coordinate can, when factoring in minimal coupling, still be regarded as an external parameter. While it might still be a colossal undertaking to compute production rates in $4$-dimensional phase-space, fields of the form
\begin{equation}
\boldsymbol{A} = \begin{pmatrix} 0 \\ A(t,x,z) \\ 0 \end{pmatrix}
\label{equ:Field2d}
\end{equation}
have the advantage of inheriting many benefits of the systems discussed in this manuscript. One might, for example, adapt the derivative operators given in this article for quasi-$1+1$-dimensional configurations to $2$ dimensions.
\subsection{Momentum coupling in spatially non-periodic envelopes}
In the case of oscillating fields, each individual photon carries a specific energy and, thus, momentum, which can potentially act as tracers to distinguish the different particle pairs \cite{Popov}. In Sec. \ref{sec:Appl} we have shown that for special field configurations the opportunity arises to turn the set of differential equations for a continuous momentum variable $q_z$ into a set of algebraic equations that connect the Wigner coefficients only at specific, discrete points.
This procedure has been made particularly easy due to the choice of the envelope function in Eq. \eqref{eq:ATZEnv}. Through implementation of a cosine-squared function the Fourier transforms in the vector potential can be carried out analytically. Moreover, the results are given in terms of delta functions which eliminate the subsequent convolution integrals.
If a different envelope function is chosen, evaluating the derivative operators is more difficult. Nevertheless, we might still be able to find approximations that reproduce the full solution accurately even for spatially non-periodic interaction regions.
There are multiple ways to proceed, depending on the form of the envelope function. For example, assume that the functions $A_{Z,x} \left( z \right)$ and $A_{Z,y} \left( z \right)$, cf. Eq. \eqref{eq:ATZEnv}, are only slowly varying in $z$ compared to the underlying oscillating functions \cite{HeinzlKing}. Hence, the impact of the envelope on the momentum distribution is expected to be small. Consequently, we should be able to safely perform a Taylor expansion with respect to $z$ in order to obtain
\begin{align}
A_{Z,x} \left( z \pm \frac{s_z}{2} \right) &= A_{Z,x} \left( z \right) \pm \frac{s_z}{2} \ \partial_z A_{Z,x} \left( z \right) + \mathcal{O} \left( s_z^2 \right) \approx A_{Z,x} \left( z \right), \label{eq:ATZxE} \\
A_{Z,y} \left( z \pm \frac{s_z}{2} \right) &= A_{Z,y} \left( z \right) \pm \frac{s_z}{2} \ \partial_z A_{Z,y} \left( z \right) + \mathcal{O} \left( s_z^2 \right) \approx A_{Z,y} \left( z \right). \label{eq:ATZyE}
\end{align}
In this way, the envelope function is unaffected by a Fourier transform in $s_z$ but still restricts the interaction region to a finite volume.
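A quick numerical sketch (ours; the Gaussian envelope and the chosen widths are arbitrary stand-ins for a slowly varying $A_{Z}$) of the error incurred by dropping the $\partial_z$ term, which is of order $s_z/w$ for an envelope of width $w$:

```python
import numpy as np

# Slowly-varying-envelope check: replacing A(z +/- s/2) by A(z) for a
# smooth envelope of width w incurs a relative error of order s/w,
# i.e. it is safe whenever the shift s_z is much smaller than w.
w = 50.0                      # envelope width (assumed >> s)
s = 1.0                       # shift s_z from the Wigner transform
z = np.linspace(-2 * w, 2 * w, 4001)
A = lambda x: np.exp(-(x / w) ** 2)
err = np.max(np.abs(A(z + s / 2) - A(z))) / np.max(np.abs(A(z)))
assert err < 2 * s / w        # first-order bound, comfortably met
```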
\subsection{Energy-Momentum channels}
A transformation from coordinate space to energy-momentum channels as performed in Sec. \ref{sec:Appl} is not advised for general fields. The reason is that, while a transformation of the envelope functions might yield a simple analytical result, the subsequent convolution with the Wigner coefficients admits a closed analytical expression only under very specific circumstances, for example, if the envelope can be written in terms of an oscillating function.
It should be further stressed that for setups with two different oscillation frequencies, recasting the transport equations in terms of discrete channel equations might even be infeasible. This is due to the fact that information between these channels is exchanged in steps of $k_1$ and $k_2$. While this is clearly not a problem for, e.g., the scattering of a pulse beam with a frequency-doubled probe beam, for setups with wildly irregular frequencies such a flow of information could require an absurdly high number of channels to be considered.
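This channel-count argument can be illustrated with a small sketch (ours; the wave numbers and the exchange bound are arbitrary): counting the distinct momentum offsets $n_1 k_1 + n_2 k_2$ reachable by a bounded number of photon exchanges shows how commensurate frequencies keep the channel set small, while generic ratios inflate it:

```python
# Channels reachable by at most `max_exchanges` photon exchanges per
# wave, i.e. offsets n1*k1 + n2*k2.  For k2 = 2*k1 many combinations
# coincide; for an incommensurate ratio every (n1, n2) pair is a new
# channel.
def channels(k1, k2, max_exchanges):
    out = set()
    for n1 in range(-max_exchanges, max_exchanges + 1):
        for n2 in range(-max_exchanges, max_exchanges + 1):
            out.add(round(n1 * k1 + n2 * k2, 9))
    return out

commensurate = channels(1.0, 2.0, 4)     # frequency-doubled probe
irregular = channels(1.0, 2 ** 0.5, 4)   # incommensurate ratio
assert len(commensurate) == 25           # offsets -12 ... +12 only
assert len(irregular) == 81              # every (n1, n2) is distinct
```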
Nevertheless, formulating pair production within the Heisenberg-Wigner formalism in terms of coupled discrete channel equations remains an interesting concept, especially when considering the success of Furry-picture quantization \cite{Aleksandrov:2016lxd, fradkin_gitman_shvartsman, Furry:1951zz}.
\subsection{Beyond propagating wave configurations}
The exemplary field configuration discussed in this work by no means captures all possible setups the formalism is capable of describing. The toy model displayed serves only as a reference point for future studies. Such computations might include localized, non-propagating fields \cite{Hebenstreit} as well as chirped laser pulses \cite{PhysRevD.82.045007,PhysRevD.104.016009}.
\subsection{Non-abelian quantum plasmas and chiral kinetic theory}
The derivations in this manuscript have been performed with light-by-light interactions in mind. However, the Heisenberg-Wigner formalism is universally applicable and is therefore already used in various branches of physics, cf. quantum chromodynamics and quark-gluon transport theory \cite{Elze:1989un, Elze:1986hq} or chiral kinetic theory \cite{Wang:2019moi, Sheng:2017lfu, PhysRevResearch.2.023257}.
As the resulting transport equations are all structurally very similar, the findings given in this manuscript might also turn out to be useful in other areas.
\section{Acknowledgments}
We are grateful to Ralf Sch\"utzhold and Ivan Aleksandrov for the discussions that inspired this work. We also thank the Institute for Theoretical Physics at Kyoto University, as discussions during the workshop ``The Schwinger Effect and Strong-Field Physics'' were instrumental in completing this work.
\section{Introduction}
A model of two-band superfluidity was long considered merely as the next iterative step beyond the BCS theory of the superconducting state, introduced to take into account the anisotropic properties of metals and the overlap of the energy bands in the vicinity of their Fermi surface, which leads to the appearance of interband quantum electron transitions and, as a result, to an additional indirect interaction between the electrons of each band \cite{Suhl, Moskalenko}. The explosive growth in the study of multiband superconductivity began with the discovery of unconventional superconductivity with a complex structure of the superconducting order parameter (cuprates, heavy-fermion compounds, borocarbides, fullerides, strontium ruthenate, organic superconductors, iron pnictides and chalcogenides). The complex structure of the order parameter gives rise to a much richer nomenclature of topological objects and effects in unconventional superconductors in comparison with their conventional counterparts. These superconducting systems can host a variety of quantum phenomena: states that break time-reversal symmetry (BTRS), new collective modes, phase domains, vortices with fractional flux, the fractional Josephson effect \cite{S.-Z. Lin, Tanaka, Milosevic, Omelyanchouk}, and shape resonances in the superconducting properties \cite{Bianconi1, Bianconi2}.
Another intriguing aspect is the fact that compounds with unconventional superconductivity can demonstrate anomalous normal-state properties above their critical temperature, which are interpreted as the pseudogap state. The existence of a pseudogap state was first argued in the context of the crossover from BCS superconductivity to Bose-Einstein condensation in the ground state and at finite temperature \cite{NSR, Randeria} for underdoped high-$T_c$ cuprate superconductors \cite{Perali_2002, Palestini_2012, Marsiglio_2015}. In these compounds, pseudogap formation and non-Fermi-liquid behavior are well established, and unusual superconducting fluctuations have also been detected above the critical temperature. However, the pseudogap state appears at a much higher temperature than the onset temperature of superconducting fluctuations. It is still a debatable question whether the system is deep inside the crossover regime and to what extent the crossover physics can be relevant to the phase diagram of underdoped cuprate superconductors.
A magnesium diboride superconductor \cite{Bianconi3, Bianconi4} and the recently discovered family of iron-based superconductors with multiband electronic structure and multiple energy gaps offer a new platform for the experimental observation of the BCS-BEC crossover, providing an opportunity to study new problems concerning the crossover, fluctuation phenomena and the pseudogap in multi-component systems, which go beyond single-band physics \cite{Guidini}. For instance, ${\rm{BaF}}{{\rm{e}}_{\rm{2}}}{\left( {{\rm{A}}{{\rm{s}}_{{\rm{1 - x}}}}{{\rm{P}}_{\rm{x}}}} \right)_{\rm{2}}}$ may approach the BCS-BEC crossover regime near a quantum critical point \cite{Hashimoto, Shibauchi}. Another candidate is the iron chalcogenide ${\rm{F}}{{\rm{e}}_{{\rm{1 + y}}}}{\rm{S}}{{\rm{e}}_{\rm{x}}}{\rm{T}}{{\rm{e}}_{{\rm{1 - x}}}}$, in which the Fermi energy of FeSe is extremely small and can be tuned by chemical doping through the BCS-BEC crossover \cite{Lubashevsky, Okazaki, Kasahara, Kasahara1}. It was found experimentally that the dimensionless measure of the pairing strength, i.e., the ratio of the energy gap to the Fermi energy, $\Delta/E_F$ = 0.16, 0.3 and 0.5, increases monotonically with decreasing iron excess $y$, exhibiting a crossover from the BCS to the BEC regime \cite{Rinott}. The investigation of the vortex core by means of scanning tunneling microscopy (STM) shows the presence of Friedel-like oscillations, confirming the BCS-BEC crossover nature of FeSe and a peculiar absence of the pseudogap \cite{Hanaguri}.
Despite that, for most multiband superconducting systems the tuning of interband or intraband interactions is rather challenging, and their properties cannot easily be studied away from the BCS regime.
Strongly interacting superfluid systems can be replicated experimentally with ultracold atomic Fermi gases in optical lattices or in single traps confining clouds of fermionic atoms with several hyperfine states \cite{Köhl, Ospelkaus, Chin}. In such systems the interaction strength is adjusted by means of Fano-Feshbach resonances, which allow the evolution of superfluidity throughout the BCS-BEC crossover. The newly realized orbital Feshbach resonance in a $^{{\rm{173}}}{\rm{Yb}}$ Fermi gas opens a new avenue for studying a two-band Fermi system with a Josephson-like interaction between the bands, enabling the tuning of inter-orbital interactions based on the Zeeman shift of different nuclear spin states of the atoms \cite{Pagano, Höfer, Zhang}. The many-body Hamiltonian governing the physical properties of alkaline-earth Fermi gases across an orbital Feshbach resonance is similar to that of two-band s-wave superconductors, and the description of the BCS-BEC crossover in these systems requires two components of the order parameter, in contrast to a Fermi gas with a single orbital near a broad magnetic Feshbach resonance. Thus, experimental activity in this direction raises fundamentally new problems about the BCS-BEC crossover in multiband superfluids and calls for theoretical predictions of possible unusual effects \cite{Iskin1, Iskin2, Iskin3, Iskin4, Reyes1, Mondal, Tajima, Chubukov, Wolf, Salasnich}. So far, the evolution of low-energy collective excitations from the BCS to the BEC coupling regime in two-band s-wave superfluids coupled via an interband Josephson interaction at $T=0$ has been studied \cite{Iskin1}. Later, within a mean-field theory generalized to the case of two bands, the characteristics of two-band superfluidity throughout the BCS-BEC crossover were analyzed, with results reported only for coincident bands \cite{Iskin2}.
Furthermore, based on the extension of the Nozi\`{e}res-Schmitt-Rink approach \cite{NSR} for two bands, strong enhancement of the critical temperature, a significant reduction of the preformed pair region where pseudogap effects are expected, and the entanglement of two kinds of composite bosons in the strong-coupling BEC regime were predicted for a two-band attractive Fermi system in the normal state with a shallow band coupled to a weakly-interacting deeper band \cite{Tajima}.
In this paper, using a mean-field theory for a two-band superfluid with gap equations coupled to the density equation we show that a two-band superfluid Fermi gas with energy shift between the bands reveals unique features of the BCS-BEC crossover, which are not realized in the single-band system. The paper is organized as follows. In Sec. II, we present the model and the main equations of a mean-field approach for the description of the BCS-BEC crossover in a two-band system. In Sec. III, we provide the results of our numerical calculations for the energy gaps, chemical potential, particle densities and the intrapair correlation lengths and discuss unique features of the BCS-BEC crossover, in particular, a coexistence of giant Cooper pairs and bosonic condensate in the strong-coupling regime. We summarize our conclusions in Sec. IV. Two appendices with analytical calculations and technical details are reported at the end of the paper.
\section{Model and basic equations}
We consider a two-band system of interacting fermions in three dimensions (3D), where the two fermionic bands have a parabolic dispersion law
\begin{equation}
\label{eq1}
{\xi _i}\left( {\bf{k}} \right) = \frac{{{{\left| {\bf{k}} \right|}^2}}}{{2m}} - \mu + {\epsilon _i},
\end{equation}
where $\bf{k}$ is the wave-vector, $m$ is the effective mass, assumed equal for both bands, $\mu$ is the chemical potential and $\epsilon_i$ is the energy of the bottom of each band. The index $i$ = 1, 2 labels the bands, where $i$ = 1 denotes the lower band and $i$ = 2 the upper band. We set $\epsilon_1 = 0$ and $\epsilon_2 = E_g$, where the value $E_g$ defines the energy shift between the two bands of the system (Fig. 1).
\begin{figure}
\includegraphics[width=1.05\columnwidth]{fig1.pdf}
\caption{The band structure of the two-band superfluid Fermi gas under consideration ($k_z = 0$ projection). $E_g$ is the energy shift between the 1st ($i=1$) and the 2nd ($i=2$) band. $E_{{\rm F}i}$ corresponds to the Fermi energy of $i$-band in the absence of interactions.}
\label{fig1}
\end{figure}
The effective pairing interaction between fermions is approximated by a separable potential
\begin{equation}
\label{eq2}
{V_{ij}}\left( {{\bf{k}},{\bf{k'}}} \right) = -{U_{ij}}\Theta \left( {k_0 - |\bf{k}|} \right)\Theta \left( {k_0 - |\bf{k'} |} \right),
\end{equation}
where $U_{ij}$ are the strengths of the intraband (when $i=j$) and interband (when $i\neq j$) interactions, $k_0$ is the cut-off momentum, assumed to be the same for the intraband and interband pairing terms, and $\Theta \left( x \right)$ is the Heaviside function. The sign of $U_{12}$ determines the symmetry of the order parameter in the clean case. A repulsive interband interaction constant $U_{12} < 0$ leads to a ground state with a $\pi$-phase difference between the two bands, while an attractive interband interaction $U_{12} > 0$ stabilizes a ground state with a zero-phase difference between the gap functions \cite{Yerin_2007}.
The ground state of the two-band system is examined within a mean-field theory. We generalize the single-band approach to the two-band case and write the equations for the energy gaps
\begin{equation}
\label{eq3}
{\Delta _i}\left( {\bf{k}} \right) = - \frac{1}{\Omega }\sum\limits_j {\sum\limits_{{\bf{k'}}} {{V_{ij}}\left( {{\bf{k}},{\bf{k'}}} \right)} \frac{{{\Delta _j}\left( {{\bf{k'}}} \right)\tanh \frac{{{E_j}\left( {{\bf{k'}}} \right)}}{{2T}}}}{{2{E_j}\left( {{\bf{k'}}} \right)}}}.
\end{equation}
Here $\Omega$ is the volume occupied by the system under consideration, ${E_j}\left( {\bf{k}} \right) = \sqrt {\xi _j^2\left( {\bf{k}} \right) + \Delta _j^2\left( {\bf{k}} \right)} $ are the excitation branches in the superfluid state, and the gaps share the same cut-off generated by the separable interaction
\begin{equation}
\label{eq4}
{\Delta _i}\left( {\bf{k}} \right) = {\Delta _i}\Theta \left( {k_0 - |\bf{k}|} \right).
\end{equation}
The coupled equations for the energy gaps must be supplemented with the equation for the total particle density of the system, as the renormalization of the chemical potential is a key feature of the BCS-BEC crossover.
We consider the total density of particles of the two-band system in the form of an additive contribution from each band
\begin{equation}
\label{eq5-5}
n = {n_1} + {n_2},
\end{equation}
where $n_i$ is the particle density in each band
\begin{equation}
\label{eq5}
n_i = \frac{2}{\Omega } {\sum\limits_{\bf{k}} {\left[ {v_i^2\left( {\bf{k}} \right)f\left( { - {E_i\left( {\bf{k}} \right)}} \right) + u_i^2\left( {\bf{k}} \right)f\left( {{E_i\left( {\bf{k}} \right)}} \right)} \right]} },
\end{equation}
where $f(z)$ is the Fermi-Dirac distribution function. Here we introduce the weights of the occupied states via the functions $v_i(\bf{k})$ and $u_i(\bf{k})$
\begin{equation}
\label{eq_weight1}
v_i^2\left( {\bf{k}} \right) = \frac{1}{2}\left[ {1 - \frac{{{\xi _i}\left( {\bf{k}} \right)}}{{E_i\left( {\bf{k}} \right) }}} \right],
\end{equation}
\begin{equation}
\label{eq_weight2}
u_i^2\left( {\bf{k}} \right) = 1 - v_i^2\left( {\bf{k}} \right).
\end{equation}
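As an illustrative check of these coherence factors (our own sketch, in dimensionless units with momenta in $k_{Ft}$ and energies in $E_{Ft}$, and with an arbitrary sample gap), the snippet below evaluates $v_i^2$ and $u_i^2$ for the dispersion $\xi_i(\mathbf{k})$ of Eq. (\ref{eq1}) and verifies the normalization $u_i^2 + v_i^2 = 1$ together with the expected limits $v_i^2 \to 1$ deep inside the Fermi sea and $v_i^2 \to 0$ far above it.

```python
import numpy as np

def xi(k2, mu, eps_i):
    """Band dispersion xi_i(k) = k^2 - mu + eps_i (dimensionless units)."""
    return k2 - mu + eps_i

def coherence_factors(k2, mu, eps_i, delta):
    """BCS weights v_i^2(k) and u_i^2(k) for band i."""
    x = xi(k2, mu, eps_i)
    e = np.sqrt(x**2 + delta**2)  # excitation branch E_i(k)
    v2 = 0.5 * (1.0 - x / e)
    return v2, 1.0 - v2

k2 = np.linspace(0.0, 9.0, 500)  # (k/k_Ft)^2
# Illustrative parameters: lower band (eps_i = 0), mu = 1, sample gap 0.1
v2, u2 = coherence_factors(k2, mu=1.0, eps_i=0.0, delta=0.1)
print(v2[0], v2[-1])  # close to 1 deep below the Fermi level, close to 0 far above
```

The parameter values here are placeholders; in the actual calculation $\mu$ and $\Delta_i$ are determined self-consistently from Eqs. (\ref{eq3}) and (\ref{eq5-5}).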
During the calculations, $n$ will be fixed at the value $n = n_1^0 + n_2^0 = \frac{{k_{F1}^3}}{{3{\pi ^2}}} + \frac{{k_{F2}^3}}{{3{\pi ^2}}} = \frac{{k_{Ft}^3}}{{3{\pi ^2}}}$, defined via the particle densities $n_i^0$ in the absence of interactions and at zero temperature, as well as the Fermi momentum of each band, $k_{Fi}$, and the total Fermi momentum $k_{Ft}$.
According to the model of the two-band system, we also assume an energy shift between the bands, ${E_g} = \eta {E_{F2}}$, where $E_{F2}=k_{F2}^2/2m$. This implies the following relations between the Fermi momenta: ${k_{F1}} = {\left[ {1 - \frac{1}{{{{\left( {\eta + 1} \right)}^{\frac{3}{2}}} + 1}}} \right]^{\frac{1}{3}}}{k_{Ft}}$ and ${k_{F2}} = {\left[ {\frac{1}{{{{\left( {\eta + 1} \right)}^{\frac{3}{2}}} + 1}}} \right]^{\frac{1}{3}}}{k_{Ft}}$.
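These relations follow from particle conservation, $k_{F1}^3 + k_{F2}^3 = k_{Ft}^3$, combined with the equality of the chemical potentials of the two free bands, $E_{F1} = E_{F2} + E_g$, i.e. $k_{F1}^2 = (\eta+1)\,k_{F2}^2$. A minimal numerical check (our own, not part of the paper):

```python
# Verify the stated Fermi-momentum relations for an arbitrary band shift eta.
def fermi_momenta(eta, k_Ft=1.0):
    """k_F1, k_F2 in units of the total Fermi momentum k_Ft."""
    s = (eta + 1.0) ** 1.5 + 1.0
    k_F2 = (1.0 / s) ** (1.0 / 3.0) * k_Ft
    k_F1 = (1.0 - 1.0 / s) ** (1.0 / 3.0) * k_Ft
    return k_F1, k_F2

for eta in (0.5, 1.0, 3.0):
    k_F1, k_F2 = fermi_momenta(eta)
    # particle conservation: k_F1^3 + k_F2^3 = k_Ft^3
    assert abs(k_F1**3 + k_F2**3 - 1.0) < 1e-12
    # equal chemical potential of the free bands: k_F1^2 = (eta+1) k_F2^2
    assert abs(k_F1**2 - (eta + 1.0) * k_F2**2) < 1e-12

# For the value eta = 3 adopted later in the paper:
k_F1, k_F2 = fermi_momenta(3.0)
print(k_F1**3, k_F2**3)  # approximately 8/9 and 1/9
```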
For regularization we use the s-wave scattering lengths for each band $a_{ii}$ defined by the low-energy limit of the two-body problem in vacuum
\begin{equation}
\label{eq6}
\frac{m}{{4\pi {a_{ii}}}} = - \frac{1}{{{U_{ii}}}} + \sum\limits_{\bf{k}}^{{{\bf{k}}_{\bf{0}}}} {\frac{m}{{{{\bf{k}}^2}}}},
\end{equation}
where the momentum cut-off $k_0$ is taken much larger than the inverse of the average distance between particles and the sum runs over $\left| {\bf{k}} \right| \le {k_0}$. We will show that the choice of the cut-off momentum does not affect the obtained results (see Appendix A and B). For the sake of simplicity we redefine the constants $U_{ij} = {{{\tilde U}_{ij}}{{\left( {\frac{{{k_{Ft}}}}{{{k_0}}}} \right)}^2}\frac{{{E_{Ft}}}}{n}}$ of the intraband ($i=j$) and interband ($i \ne j$) couplings, where $E_{Ft}=k_{Ft}^2/2m$ is the total Fermi energy. From Eq. (\ref{eq6}) this yields the dimensionless relations between the intraband coupling coefficients and the scattering lengths for each band
\begin{equation}
\label{eq7}
{{\tilde U}_{11}}{\left( {\frac{{{k_{Ft}}}}{{{k_0}}}} \right)^2} = \frac{4}{3}{\left( {\frac{{{k_0}}}{{{k_{Ft}}}} - \frac{\pi }{{2{k_{F1}}{a_{11}}}}\frac{{{k_{F1}}}}{{{k_{Ft}}}}} \right)^{ - 1}}.
\end{equation}
Substituting Eq. (\ref{eq7}) into Eq. (\ref{eq3}) and rendering Eqs. (\ref{eq3}) and (\ref{eq5}) dimensionless in units of the total Fermi momentum and the total Fermi energy, we obtain the system of equations for the energy gaps and the particle densities, which will be solved numerically (see Appendix A).
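To give a concrete flavor of this numerical procedure, the sketch below (our own illustration) solves the analogous regularized gap and number equations for a single band at $T=0$, with $\Delta$ and $\mu$ in units of $E_F$ and momenta in units of $k_F$; the two-band system of Appendix A adds the interband coupling and a second density contribution, but the numerical strategy is the same. At unitarity, $1/(k_F a)=0$, the mean-field solution is known to give $\mu \approx 0.59\,E_F$ and $\Delta \approx 0.69\,E_F$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def gap_eq(delta, mu, inv_kfa):
    """Regularized gap equation: 1/(k_F a) = (2/pi) int_0^inf [1 - x^2/E(x)] dx."""
    integrand = lambda x: 1.0 - x**2 / np.sqrt((x**2 - mu) ** 2 + delta**2)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return (2.0 / np.pi) * val - inv_kfa

def number_eq(delta, mu):
    """Number equation: 1 = (3/2) int_0^inf x^2 [1 - (x^2 - mu)/E(x)] dx."""
    integrand = lambda x: x**2 * (1.0 - (x**2 - mu) / np.sqrt((x**2 - mu) ** 2 + delta**2))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 1.5 * val - 1.0

def solve_crossover(inv_kfa, guess=(0.7, 0.6)):
    """Solve for (Delta, mu) in units of E_F at fixed 1/(k_F a)."""
    return fsolve(lambda p: (gap_eq(p[0], p[1], inv_kfa), number_eq(p[0], p[1])), guess)

delta_u, mu_u = solve_crossover(0.0)  # unitarity
print(delta_u, mu_u)                  # mean-field values, roughly 0.69 and 0.59
```

Deep in the BEC regime the same routine gives $\mu \to -E_b/2 = -E_F/(k_F a)^2$ in these units, mirroring the behavior discussed below for the two-band system.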
Besides the energy gaps $\Delta_i$ and the particle densities $n_i$, other important characteristics of the pairing regimes throughout the BCS-BEC crossover in a two-band superfluid Fermi gas are the intrapair correlation lengths of the Cooper pairs, determined by the expression
\begin{equation}
\label{eq11}
\xi _{{\rm{pair,}}i}^2 =\frac{{\sum\limits_{\bf{k}} {{{\left| {{\nabla _{\bf{k}}}\left( {\frac{{1 - 2f\left( {{E_i}\left( {\bf{k}} \right)} \right)}}{{{E_i}\left( {\bf{k}} \right)}}} \right)} \right|}^2}} }}{{\sum\limits_{\bf{k}} {{{\left( {\frac{{1 - 2f\left( {{E_i}\left( {\bf{k}} \right)} \right)}}{{{E_i}\left( {\bf{k}} \right)}}} \right)}^2}} }},
\end{equation}
obtained from the pair correlation function, evaluated at a mean-field level for zero and finite temperature \cite{Palestini0}.
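At zero temperature, where $f(E_i)=0$, this definition reduces to a ratio of two radial integrals. The brief sketch below (our own single-band illustration, in units of $k_F^{-1}$ with $\mu$ fixed by hand rather than solved self-consistently) evaluates it and exhibits the expected weak-coupling growth of the pair size, $\xi_{\rm pair}\propto 1/\Delta$.

```python
import numpy as np
from scipy.integrate import quad

def pair_size(delta, mu):
    """Zero-temperature intrapair correlation length in units of 1/k_F.

    With f(E) = 0, Eq. (11) becomes
        xi^2 = int k^2 |d(1/E)/dk|^2 dk / int k^2 (1/E)^2 dk,
    where E(k) = sqrt((k^2 - mu)^2 + delta^2) and
    |d(1/E)/dk|^2 = 4 k^2 (k^2 - mu)^2 / E^6."""
    E = lambda x: np.sqrt((x**2 - mu) ** 2 + delta**2)
    num = quad(lambda x: 4.0 * x**4 * (x**2 - mu) ** 2 / E(x) ** 6,
               0.0, np.inf, limit=200)[0]
    den = quad(lambda x: x**2 / E(x) ** 2, 0.0, np.inf, limit=200)[0]
    return np.sqrt(num / den)

xi_a = pair_size(delta=0.10, mu=1.0)
xi_b = pair_size(delta=0.05, mu=1.0)
print(xi_a, xi_b)  # the pair size roughly doubles when the gap is halved
```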
In contrast to Ref. \cite{Iskin2}, where the ratio of the intraband coupling constants was fixed for the investigation of the BCS-BEC crossover properties, we study a two-band system with a fixed value of the scattering length in the first band, corresponding to the BCS regime, namely $1/(k_{F1}a_{11}) = -2$, and a varying scattering length in the second band, $1/(k_{F2}a_{22})$. Such a strategy allows us to avoid the convergence problem and the dependence of physical quantities on the cut-off momentum $k_0$ (see Appendix B). Throughout our investigation we fix the energy shift between the bands to $\eta = 3$, which gives $E_{\rm g}=0.75E_{\rm F1}=3E_{\rm F2}$ and the corresponding relations between the Fermi momenta of each band and the total Fermi momentum, $k_{\rm F1}=\left(8/9\right)^{1/ 3}k_{\rm Ft}$ and $k_{\rm F2}=\left(1/9\right)^{1/ 3}k_{\rm Ft}$. For a clearer presentation and interpretation of the results, we measure the energy gaps and the chemical potential in units of the total Fermi energy $E_{Ft}$.
\section{Results and discussion}
\subsection{Energy gaps, chemical potential and particle densities}
To provide a comprehensive description of the BCS-BEC crossover properties in a two-band superfluid Fermi gas, we first analyze the evolution of the energy gaps, the chemical potential and the particle densities at zero temperature based on the numerical solution of Eqs. (\ref{eq3})-(\ref{eq5}). It should be noted that, in principle, for $T=0$ the system of Eqs. (\ref{eq3})-(\ref{eq5}) can be integrated and, after lengthy but straightforward calculations, expressed via complete elliptic integrals of the first and second kinds. These analytical calculations show that, within our strategy with a fixed value of the scattering length in the first band, there is no dependence of $\Delta_i$ and $\mu$ on the cut-off momentum $k_0$, at least at zero temperature. The same statement can be extended to the case of $T_c$ (see Appendix A).
\begin{figure*}
\includegraphics[width=0.68\columnwidth]{fig2a.pdf}
\includegraphics[width=0.68\columnwidth]{fig2b.pdf}
\includegraphics[width=0.68\columnwidth]{fig2c.pdf}
\caption{(a) Energy gaps $\Delta_1$ (solid lines), $\Delta_2$ (dotted lines), (b) the chemical potential $\mu$ and (c) the chemical potential $\mu_2 = \mu - E_g$ in units of $E_{F2}$ at zero temperature as a function of $(k_{\rm F2}a_{22})^{-1}$ for different interband couplings $\tilde{U}_{12} = 0$ (black line), $\tilde{U}_{12} = 0.5$ (green line), $\tilde{U}_{12} = 1$ (brown line), $\tilde{U}_{12} = 1.5$ (magenta line), $\tilde{U}_{12} = 2$ (red line) with the fixed value of the scattering length in the first band $(k_{\rm F1}a_{11})^{-1} = -2$. The hump in the energy gap $\Delta_1$ dependences for different interband interaction coefficients is shown in the inset.
Dashed black line in (b) and (c) corresponds to the energy shift $E_g$ between bands in units of $E_{Ft}$ and $E_{F2}$ respectively. Dotted black line in (c) is the chemical potential of a single-band superfluid Fermi gas. Inset in (b) shows the comparison between the chemical potential (solid lines) and the half of the binding energy $-E_b/2$ (dotted lines) dependences in the strong-coupling limit. }
\label{fig2}
\end{figure*}
We found that in a system with vanishing interband interaction the BCS-BEC crossover is characterized by the full suppression of the first energy gap in the BEC limit and by a kink in the second gap dependence at $(k_{\rm F2}a_{22})^{-1} \approx 2$ (Fig. 2a).
The interband coupling smooths out the kink of the second gap and leads to the activation of the first gap in the BEC limit. Although Fig. 2a shows an almost constant $\Delta_1$ over the interval ${\left( {{k_{F2}}{a_{22}}} \right)^{ - 1}} \in \left[ {-3;3} \right]$, our numerical analysis reveals a very slow growth of the first gap, starting from the nonzero value $\Delta_1^{(0)} \approx 0.043$ in the BCS limit, and a very slow decrease of $\Delta_1 (1/k_{F2}a_{22})$ in the BEC limit. Moreover, the behavior of $\Delta_1 (1/k_{F2}a_{22})$ is always non-monotonic, with a very tiny hump in the BCS limit in the case of vanishing interband interaction. With further increase of the interband coupling this hump becomes more pronounced and shifts toward the BEC limit (see inset in Fig. 2a). Note that for weak interband coupling the energy gap in the first band is exponentially suppressed when the cold band is almost depleted. The overall non-monotonic behavior of $\Delta_1$ as a function of $(k_{\rm F2}a_{22})^{-1}$ indicates a first regime of weak to intermediate coupling in the hot band, in which $\Delta_1$ increases because of the effective attraction generated by the interband interaction, able to transfer attractive pairing from the hot to the cold band. On the other hand, when the coupling in the hot band becomes very strong, the depletion of the cold band starts to dominate, causing a decrease in $\Delta_1$; the hump in $\Delta_1$ is the result of this interplay.
In turn, the chemical potential of a two-band superfluid Fermi gas decreases more slowly than its single-band counterpart, even in the presence of the interband interaction (Fig. 2b). It is important to note that the single-band case differs from the two-band one with $U_{12} = 0$, because a particle transfer between the two bands occurs due to the additive structure of the density equation in Eq. (\ref{eq5}).
Based on the two-body Schr\"{o}dinger equation, we calculate the dependence of the two-body binding energy $E_b$ for different interband couplings (inset in Fig. 2b). One can see that when $U_{12}$ increases, $E_b$ also increases, and the chemical potential of the system tends to the BEC limit $-E_b/2$.
\begin{figure}
\includegraphics[width=1\columnwidth]{fig3.pdf}
\caption{Distribution of particle densities in each band $n_1/n$ (solid lines) and $n_2/n$ (dotted lines) normalized on the total particle densities as a function of $(k_{\rm F2}a_{22})^{-1}$ for different interband couplings $\tilde{U}_{12} = 0$ (black line), $\tilde{U}_{12} = 1$ (blue line), $\tilde{U}_{12} = 3$ (green line) and $\tilde{U}_{12} = 5$ (red line) with the fixed value $(k_{\rm F1}a_{11})^{-1} = -2$ and at $T = 0$.}
\label{fig3}
\end{figure}
In Figure 2c we report the behavior of $\mu_2 = \mu - E_g$ normalized to $E_{F2}$ as a function of $(k_{\rm F2}a_{22})^{-1}$ to compare with the single-band result. For vanishing or weak interband coupling ($\tilde{U}_{12} < 1$), the chemical potential can be larger than for a single-band case. This indicates Pauli-blocking effects due to the cold states as already obtained in the vicinity of the critical temperature by a Nozi\`{e}res-Schmitt-Rink approach in Ref. \onlinecite{Tajima}.
For $\tilde U_{12}=\tilde U_{21}=0$ the full suppression of $\Delta_1$ is connected with the full redistribution of particles between the bands (Fig. 3). Even though the density equation Eq. (\ref{eq5}) couples the two condensates, the first, deeper band remains in the BCS regime, and up to unitarity the particle distribution between the two bands is not important for the system. Increasing the interband interaction equalizes the particle populations in the weak-coupling regime and retards the redistribution process between the bands in the strong-coupling limit.
\begin{figure}
\includegraphics[width=0.49\columnwidth]{fig4a.pdf}
\includegraphics[width=0.49\columnwidth]{fig4b.pdf}
\includegraphics[width=0.49\columnwidth]{fig4c.pdf}
\includegraphics[width=0.49\columnwidth]{fig4d.pdf}
\caption{Evolution of energy gaps $\Delta_1$ (a), (c) and $\Delta_2$ (b), (d) at zero temperature vs. scattering lengths for fermions in the first and second bands for different interband couplings: $\tilde{U}_{12} = 0$ (a), (b) and strongly interacting bands $\tilde{U}_{12} = 5$ (c), (d).}
\label{fig4}
\end{figure}
Until now we have investigated the characteristics of the BCS-BEC crossover in a two-band fermionic system with a fixed value of the intraband coupling in the first band, $(k_{\rm F1}a_{11})^{-1} = -2$. To understand the full properties of the BCS-BEC crossover in this system at $T=0$, we consider the behavior of the energy gaps and the particle densities while varying $(k_{\rm F1}a_{11})^{-1}$. First of all, as we can see from Figs. 4a and 4b, weak intraband coupling in the first band and strong intraband coupling in the second one, namely $(k_{\rm F1}a_{11})^{-1} < 0$ and $(k_{\rm F2}a_{22})^{-1} > 2$, together with vanishing interband interaction, transform the two-band system into a single-band one with a fully suppressed first gap. The single-band scenario is realized also for $(k_{\rm F1}a_{11})^{-1} > 0$ and for the entire interval of values $(k_{\rm F2}a_{22})^{-1}$, with full suppression of the second gap. Increasing the interband interaction extends the region, and broadens its borders, where the two-band configuration is preserved (Figs. 4c and 4d).
\begin{figure}
\includegraphics[width=0.49\columnwidth]{fig5a.pdf}
\includegraphics[width=0.49\columnwidth]{fig5b.pdf}
\includegraphics[width=0.49\columnwidth]{fig5c.pdf}
\includegraphics[width=0.49\columnwidth]{fig5d.pdf}
\caption{Three-dimension plots of normalized particle densities in the first (a), (c) and in the second band (b), (d) as a function of $(k_{\rm F1}a_{11})^{-1}$ and $(k_{\rm F2}a_{22})^{-1}$ for different values of interband interaction coefficients $\tilde{U}_{12} = 0$ (a), (b) and $\tilde{U}_{12} = 5$ (c), (d) at $T = 0$.}
\label{fig5}
\end{figure}
It is worth noting that a similar transition from single-condensate to two-condensate superconductivity was revealed experimentally in the ${\rm{LaAl}}{{\rm{O}}_{\rm{3}}}{\rm{/SrTi}}{{\rm{O}}_{\rm{3}}}$ interface driven by electrostatic doping \cite{Singh1}. It was found that in such a heterostructure the superconducting gap in the first band is suppressed while the second band is populated. Within our approach, we speculate that these results can be interpreted as the transition from the BEC to the BCS regime of a two-band superfluid system close to a Lifshitz transition with vanishing interband interaction. Despite the fact that ${\rm{LaAl}}{{\rm{O}}_{\rm{3}}}{\rm{/SrTi}}{{\rm{O}}_{\rm{3}}}$ heterostructures host a two-dimensional electron liquid at the interface, some theoretical models argue for the importance of three-dimensional bands in the explanation of 2D superconductivity in these systems \cite{Fernandes1}.
Our results are also confirmed by the evolution of the particle densities: as can be seen from Figs. 5a and 5b, there is no transfer of particles for vanishing interband interaction at $(k_{\rm F1}a_{11})^{-1} > 0$, and all particles are concentrated in the deeper band, while in the opposite case, for $(k_{\rm F1}a_{11})^{-1} < 0$ and $(k_{\rm F2}a_{22})^{-1} > 2$, the bands change places: all particles migrate to the shallow band. Figures 5c and 5d show that with increasing interband coupling the populations of the two bands begin to equalize.
\subsection{Intrapair correlation lengths}
\begin{figure*}
\includegraphics[width=0.95\columnwidth]{fig6a.pdf}
\includegraphics[width=0.95\columnwidth]{fig6b.pdf}
\includegraphics[width=0.95\columnwidth]{fig6c.pdf}
\includegraphics[width=0.95\columnwidth]{fig6d.pdf}
\caption{Intrapair correlation lengths $\xi_{\tt{pair}1}$ and $\xi_{\tt{pair}2}$ for the first (solid lines) and the second band (dotted lines), respectively, at zero temperature as a function of $(k_{\rm F2}a_{22})^{-1}$ for different interband coupling strengths $\tilde{U}_{12} = 0$ (a), $\tilde{U}_{12} = 0.01$ (b), $\tilde{U}_{12} = 0.1$ (c) and $\tilde{U}_{12} = 0.5$ (d) in the case of the fixed value $(k_{\rm F1}a_{11})^{-1} = -2$. Dashed black lines in (c) and (d) correspond to $k_F\xi_{\tt{pair}} = 2\pi$ and delimit the BCS-BEC crossover regime (due to the large values of $\xi_{\tt{pair}1}$ and $\xi_{\tt{pair}2}$ in panels (a) and (b) we did not plot $k_F\xi_{\tt{pair}} = 2\pi$ there). Orange dashed lines define the value of the coupling strength in the second band at which the chemical potential of the system equals zero (see Fig. 2b), namely $(k_{\rm F2}a_{22})^{-1} \approx 1.968$ in (a) and (b), $(k_{\rm F2}a_{22})^{-1} \approx 1.965$ in (c) and $(k_{\rm F2}a_{22})^{-1} \approx 1.903$ in (d).}
\label{fig6}
\end{figure*}
Another characteristic for the description of the crossover from Cooper-pair superconductivity to Bose-Einstein condensation of bound fermion pairs is the intrapair correlation length defined by Eq. (\ref{eq11}). For a single-band system it was shown earlier that there is a universal, material-independent criterion $k_F\xi$ for following the evolution of the BCS-BEC crossover \cite{Pistolesi}. Based on Eq. (\ref{eq11}) and the definition from Ref. \onlinecite{Pistolesi}, we investigate the intrapair correlation lengths of each band, $\xi_{\tt{pair}1}$ and $\xi_{\tt{pair}2}$, as a function of the scattering length in the second band, $(k_{\rm F2}a_{22})^{-1}$.
In the case of vanishing interband interaction the $\xi_{\tt{pair}2}$ dependence has the conventional behavior of the single-band case, whereas $\xi_{\tt{pair}1}$ is almost constant up to the unitarity point and undergoes an essential discontinuity at $(k_{\rm F2}a_{22})^{-1} \approx 2$ (Fig. 6a). The origin of this discontinuity can be understood from the definition of $\xi_{\tt{pair}1}$ after a straightforward integration of Eq. (\ref{eq11}). The obtained expression diverges when the energy gap in the first band is fully suppressed, $\Delta_1 = 0$ (Fig. 2a).
Increasing the interband coupling removes this discontinuity. For a very weak interaction between the bands, a sharp peak appears in the dependence $ \xi_{\tt{pair}1} (1/k_{\rm F2}a_{22})$ (Fig. 6b). Contrary to the expectation that the strong-coupling limit would gradually suppress the intrapair correlation length in the first band, we observe a non-monotonic dependence and a significant amplification of $\xi_{\tt{pair}1}$ in the BEC regime. From the physical point of view, such results point to the formation of giant Cooper pairs in the first band coexisting with bosonic pairs in the second band. This coexistence of BCS and BEC condensates stems from the weak interband coupling, whereby the cold band serves as an almost independent reservoir of Cooper pairs. Thus, a two-band superfluid system is described by the continuous transformation from two different BCS condensates, to a system where giant Cooper pairs and a bosonic condensate coexist, and finally to a mixture of two BEC condensates (Fig. 7a). The crossover in the cold band discussed above can also be interpreted as a density-induced BCS-BEC crossover \cite{Andrenacci1999}, in which the density $n_1$ is tuned by the coupling in the hot band.
For larger values $\tilde U_{12}>0.063$ and for $(k_{\rm F1}a_{11})^{-1}=-2$, we observe the gradual disappearance of this peak and the transition to the conventional single-band behavior of the intrapair correlation length in the first band (Figs. 6c and 6d). The behavior of $\xi_{\tt{pair}2}$ qualitatively follows that of its single-band counterpart. For very strong interband interaction, the intrapair correlation lengths of the two bands coincide.
Increasing the intraband coupling in the first band at a given interband interaction strength reduces the peak. Nevertheless, for very weak interband coupling, the amplification of the intrapair correlation length can persist even for $(k_{\rm F1}a_{11})^{-1}=-0.5$ (Fig. 7a).
Using the criterion of strongly overlapping Cooper pairs for the single-band system, $k_F \xi_{\tt{pair}}>2\pi$, we can extract an interesting feature of the BCS-BEC crossover in a weakly interacting two-band Fermi gas.
In particular, for $(k_{\rm F1}a_{11})^{-1} < 0$ we obtain a rich picture of the BCS-BEC crossover evolution (Fig. 7a). Initially, for $1/(k_{\rm F2}a_{22}) \ll -1$, there is a mixture of two Cooper-pair condensates. Then, as the intraband coupling increases toward the strong-coupling regime, giant Cooper pairs form in the first band. Such pairs coexist with the BEC condensate of the second band. In the extremely strong coupling limit $1/(k_{\rm F2}a_{22}) \gg -1$ we observe a transition from giant Cooper pairs and BEC molecules into two bosonic condensates with coinciding intrapair correlation lengths.
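As a purely illustrative aid (our own sketch, not part of the original analysis), the single-band overlap criterion quoted above, $k_F \xi_{\tt{pair}}>2\pi$, can be encoded as a small classifier; the function name and the sample values below are hypothetical:

```python
import math

def strongly_overlapping(kF_xi_pair):
    """True on the BCS-like side of the criterion k_F * xi_pair > 2*pi."""
    return kF_xi_pair > 2 * math.pi

# Illustrative values only: large k_F*xi_pair -> strongly overlapping pairs,
# small k_F*xi_pair -> compact, BEC-like molecules.
print(strongly_overlapping(20.0))  # True
print(strongly_overlapping(1.0))   # False
```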
\begin{figure}
\includegraphics[width=0.49\textwidth]{fig7a.pdf}
\includegraphics[width=0.49\textwidth]{fig7b.pdf}
\caption{(a) Evolution of the intrapair correlation length in the first band $\xi_{\tt{pair1}}$ of a very weakly interacting superfluid two-band Fermi gas with $\tilde{U}_{12} = 0.001$ as a function of $(k_{\rm F2}a_{22})^{-1}$ for $(k_{\rm F1}a_{11})^{-1} = -2$ (black line), $(k_{\rm F1}a_{11})^{-1} = -1.5$ (blue line), $(k_{\rm F1}a_{11})^{-1} = -1$ (green line) and $(k_{\rm F1}a_{11})^{-1} = -0.5 $ (cyan line) at $T=0$.
(b) Temperature effect on the amplification of the intrapair correlation length $\xi_{\tt{pair}1}$ for $\tilde{U}_{12} = 0.001$ with the fixed value $(k_{\rm F1}a_{11})^{-1} = -2$. Black line is for $T=0$ (the maximum value of the intraband length corresponds to the critical temperature of the system $T_c \approx 0.67T_{Ft}$ and to the critical temperature of the first band $T_{c1} \approx 0.0007T_{Ft}$), blue line is for $T=0.005E_{Ft}$ (maximum of $\xi_{\tt{pair}1}$ corresponds to $T_c \approx 0.523T_{Ft}$ and $T_{c1} \approx 0.0043T_{Ft}$), green line - $T=0.01E_{Ft}$ ($T_c \approx 0.41T_{Ft}$ and $T_{c1} \approx 0.0092T_{Ft}$), cyan line - $T=0.015E_{Ft}$ ($T_c \approx 0.3T_{Ft}$ and $T_{c1} \approx 0.014T_{Ft}$) and red line is for $T=0.02E_{Ft}$ ($T_c \approx 0.188T_{Ft}$ and $T_{c1} \approx 0.02T_{Ft}$).}
\label{fig7}
\end{figure}
Apart from the increase of the intraband and interband coupling strengths, here we show how the temperature also affects the phenomenon of giant Cooper pair formation. A slow increase of the temperature dramatically decreases the magnitude of the effect and slightly shifts the position of the peak to smaller values of $(k_{\rm F2}a_{22})^{-1}$ (Fig. 7b). The explanation of this behavior is provided below.
\begin{figure}
\includegraphics[width=0.49\textwidth]{fig8a.pdf}
\includegraphics[width=0.49\textwidth]{fig8b.pdf}
\caption{Temperature dependence of the energy gaps $\Delta_1$ (solid line) and $\Delta_2$ (dotted line) of a two-band superfluid Fermi gas with $\tilde U_{12} = 0.001$ (black line), $\tilde U_{12} = 0.01$ (blue line) and $\tilde U_{12} = 0.1$ (red line) for $(k_{\rm F1}a_{11})^{-1} = -0.25$, $(k_{\rm F2}a_{22})^{-1} = 1$ (a) and $(k_{\rm F1}a_{11})^{-1} = 0$, $(k_{\rm F2}a_{22})^{-1} = 1.5$ (b).}
\label{fig8}
\end{figure}
Investigation of the temperature dependences of the energy gaps shows that, in the case of weak interband coupling, $\Delta_2$ exhibits anomalous behavior (Fig. 8). The occurrence of the kink is directly connected with the increase of the intrapair correlation length in the first band. Moreover, the temperature dependences of the chemical potential (not shown) also have a kink at the same values of $T$ and hence indicate a first-order phase transition in the system.
Numerical analysis shows that the effect is more pronounced when the values of the energy gaps become comparable. At the same time, we recall that the strong enhancement of $\xi_{\tt{pair}1}$ is realized in the BEC limit of the second band at $T=0$ and for vanishing $U_{12}$ (Fig. 7a). Increasing the temperature, the interband interaction, or the intraband coupling in the first band leads to a suppression of the intrapair correlation length (Figs. 6 and 7b). Thus, comparable energy gaps together with a preserved amplification of the intrapair correlation length can be achieved when the second band is in the strong-coupling regime (for values of $(k_{\rm F2}a_{22})^{-1}$ where $\xi_{\tt{pair}1}$ is strongly enhanced) and the first band is near the unitarity point $(k_{\rm F1}a_{11})^{-1} \approx 0$ (this yields ${\Delta _1} \cong {\Delta _2}$).
Other conditions either eliminate the non-monotonic dependence of ${\Delta _2}(T)$, leading to the conventional BCS-like behavior of the energy gap in the second band, or substantially decrease the magnitude of the peak. In turn, with increasing temperature, the suppression of the peak in $\xi_{\tt{pair}1}$ becomes important when the temperature is in the vicinity of the critical temperature of the first (cold) band, $T_{c1}$, which is very small for the parameters considered here (see the legend of Fig. 7b).
Based on this, we can claim that the rate at which temperature suppresses the intrapair correlation length in the first band is determined by the width of the temperature interval in which the non-BCS behavior of the second gap is realized (equivalently, in which the first energy gap is not strongly suppressed). Since Fig. 7b corresponds to a system with $(k_{\rm F1}a_{11})^{-1}=-2$, i.e. the BCS regime for the first band, the temperature interval of non-BCS dependence of ${\Delta _2}$ is small and, as a consequence, the temperature suppression of $\xi_{\tt{pair}1}$ is rapid.
It is important to note that a similar behavior of the intrapair correlation lengths as a function of temperature was revealed in a two-band superconductor with very weak interband interaction \cite{Komendova}. In the absence of coupling between the two superconducting condensates below the critical temperature, a hidden critical point appears at the critical temperature of the weaker band, corresponding to a divergence of the intrapair correlation length. In the case of weak interband interaction, the intrapair correlation length of the weaker band deviates from the conventional monotonic increase with temperature and develops a pronounced peak close to the hidden critical point. In our calculations the interband coupling also governs the effect but, in contrast to a two-band superconductor, the strong enhancement of the intrapair correlation length in one of the bands occurs in the strong-coupling limit, where the formation of giant Cooper pairs is not expected. Moreover, as shown above, for very weak interband interaction this phenomenon can be observed even at finite temperatures. We emphasize that here the hidden critical-like behavior manifests itself via the temperature dependence of the energy gap ${\Delta _1}$ in the cold band in the strong-coupling limit, whereas monotonic behavior of the energy gap was reported in the weak-coupling limit in Ref. \onlinecite{Komendova}.
We suggest that the experimental detection of giant Cooper pairs in the strong-coupling regime, and thus the verification of our prediction, can be achieved through direct imaging of vortex cores in two-component fermionic condensates or in iron-based superconductors with electron-like concentric Fermi surfaces. Another possibility to verify our predictions is a precise measurement of the temperature dependence of the energy gaps in two-band superfluid systems.
\section{Conclusions}
We have investigated the characteristics and found novel properties of the BCS-BEC crossover in a two-band superfluid Fermi system with an energy shift between the bands, within a mean-field theory and in a configuration of different pairing strengths in the two bands. We have demonstrated the richness of the BCS-BEC crossover in such a two-band system as compared to its single-band counterpart. We have found that, for vanishing interband interaction, at low temperatures and in the strong-coupling regime a two-band superfluid Fermi gas evolves into a single-band system with full suppression of the energy gap in the first (cold) band, together with a full redistribution of particles. As a result, a giant enhancement of the intrapair correlation length of Cooper pairs in the first band occurs. In the case of finite coupling between the condensates of the two bands, we have shown a non-monotonic behavior of the first energy gap with a hump, whose position is determined by the strength of the interband interaction in the second (hot) band. For weak interband coupling we have found a significant amplification of the intrapair correlation length of the first band in the BEC regime of the second band at zero and finite temperatures, which indicates the coexistence of giant Cooper pairs and a bosonic condensate in a two-band superfluid system. We have revealed that such an effect can produce an unusual non-monotonic temperature dependence of the second energy gap, with a maximum at nonzero temperature. Our predictions can be verified via STM investigations of vortex cores and the temperature behavior of the energy gaps in two-component atomic condensates and in some iron-based superconductors having electron-like or hole-like concentric bands with low filling and weak interband interaction between the bands.
\acknowledgments
This work was supported by the Italian MIUR through the PRIN 2015 program (Contract No. 2015C5SEJJ001). H. T. was supported by Grant-in-Aid for JSPS fellows (Grant No. 17J03975). We thank Alexei Vagov and Milorad V. Milo\v{s}evi\'c for discussions.
\section{Introduction}
The goal of this article is to prove that the dynamics of fermions interacting through a two-body
interaction can be transformed into a stochastic process in the Hilbert space of Hartree-Fock Bogoliubov (HFB) states.
Such a derivation is motivated by recent studies dedicated to the structure of nuclei. In nuclear physics,
mean-field theories like Hartree-Fock and HFB already provide a good approximation of static and dynamical
properties \cite{Rin80}. It also turns out that a deep understanding of nuclei requires the introduction
of correlations beyond the mean field. Large theoretical efforts are devoted, for instance, to the Generator Coordinate Method (GCM) \cite{Ben03}. In that case, some collective degrees of freedom are selected and the
correlated ground state is constructed as a superposition of mean-field states (HF or HFB).
It has been shown that the description of nuclear systems is greatly
improved if pairing correlations are already accounted for, i.e. if the GCM is performed with
HFB many-body states.
Such a method has been successfully applied to nuclear structure when only a few degrees of
freedom are selected. However, applications of GCM techniques remain numerically intractable
when many collective degrees of freedom are important.
Monte-Carlo techniques appear as an alternative way of treating correlations beyond the mean field.
The shell-model Monte-Carlo theory \cite{Koo97} is an example of such a technique.
Recently, starting from the Hartree-Fock theory,
new formulations \cite{Car01,Jul02,Lac05} have been proposed that
combine the advantages of Monte-Carlo methods and mean-field theories.
In that case, the exact evolution of fermionic (or bosonic) systems is replaced by
an ensemble of stochastic mean-field evolutions. A possible improvement of such a theory, which might
be of great interest in the nuclear context, is to include pairing correlations in the trial set of wave
functions, i.e. to consider quantum jumps between HFB states instead of HF states. A first step in
this direction has been made in ref. \cite{Mon06}, where stochastic dynamics between BCS states were
introduced.
In this article, we show that the dynamics of fermions interacting through a general two-body Hamiltonian
\begin{eqnarray}
H = \sum_{ij} \left< i \left| T \right| j \right> c^+_i c_j +
\frac{1}{4} \sum_{ijkl} \left< ij \left| v_{12} \right| lk \right> c^+_i c^+_j c_l c_k
\end{eqnarray}
can be mapped into a quantum jump process between HFB states.
Here, the $c^+_i$ are creation operators associated with a complete single-particle basis, and the
matrix elements of $v_{12}$ are antisymmetrized.
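As a hedged numerical illustration (our own sketch, not taken from the text), the operator content of the Hamiltonian above can be checked in a tiny Fock space: a Jordan-Wigner construction of $c_i$, $c^+_i$ for two modes verifies the canonical anticommutation relations, and a randomly drawn $T$ together with antisymmetrized $v_{12}$ matrix elements yields a Hermitian $H$. All numerical values are arbitrary:

```python
import numpy as np

M = 2                                   # number of single-particle modes
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # on-site fermionic annihilator

def c_op(k):
    """Jordan-Wigner annihilator for mode k: Z x ... x Z x a x 1 x ... x 1."""
    out = np.array([[1.0]])
    for o in [Z] * k + [a] + [I2] * (M - k - 1):
        out = np.kron(out, o)
    return out

c = [c_op(k) for k in range(M)]

# Canonical anticommutation relations {c_i, c_j^+} = delta_ij
for i in range(M):
    for j in range(M):
        acom = c[i] @ c[j].T + c[j].T @ c[i]
        assert np.allclose(acom, (i == j) * np.eye(2 ** M))

# Random one-body part and antisymmetrized two-body matrix elements
rng = np.random.default_rng(0)
T = rng.standard_normal((M, M)); T = (T + T.T) / 2
v = rng.standard_normal((M, M, M, M))
v = v - v.transpose(1, 0, 2, 3)         # <ij|v|lk> = -<ji|v|lk>
v = v - v.transpose(0, 1, 3, 2)         # <ij|v|lk> = -<ij|v|kl>
v = (v + v.transpose(3, 2, 1, 0)) / 2   # enforce hermiticity of H

H = sum(T[i, j] * c[i].T @ c[j] for i in range(M) for j in range(M))
H = H + 0.25 * sum(v[i, j, l, k] * c[i].T @ c[j].T @ c[l] @ c[k]
                   for i in range(M) for j in range(M)
                   for l in range(M) for k in range(M))
assert np.allclose(H, H.conj().T)       # H is Hermitian
```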
In the following, we first introduce
quantities associated with ``densities'' written as dyadics of two HFB state vectors, i.e.
$D= \left| \Psi_a \right> \left< \Psi_b \right|$. The flexibility of stochastic
methods allows one to consider densities with specific helpful
properties, which are specified in the first part of this work.
Then a TDHFB equation is derived for $D$ when correlations beyond the mean field are neglected.
Finally, the full stochastic theory that accounts for all two-body effects is derived.
\section{Preliminary results and Notations}
\label{sec:subclass}
In quantum Monte-Carlo approaches starting from an initial density $D=\left| \Psi \right> \left< \Psi \right|$, the
exact system evolution is recovered by averaging over an ensemble of densities written as a product of two different state vectors
\begin{eqnarray}
D = \left| \Psi_a \right> \left< \Psi_b \right|.
\label{eq:denshfb}
\end{eqnarray}
The use of two different states is at the heart of exact stochastic methods. The advantage of these approaches
is that the states entering $D$ generally belong to a specific class of trial wave functions.
In previous applications, these states have been chosen as Hartree-Fock states \cite{Jul02}.
Given a specific choice of trial wave functions, it turns out that the stochastic reformulation is generally not unique. This flexibility may
be used, for instance, to optimize the quantum jumps and reduce the number of paths (see for instance \cite{Lac05-2}).
Here we will consider HFB states as trial state vectors and use this flexibility in a different way.
Indeed, it turns out that the two-body problem can be reformulated as a stochastic process by imposing
additional relations between
the states $\left| \Psi_a \right>$ and $\left| \Psi_b \right>$ along each path. These additional constraints,
given below,
lead to simplified equations and derivations without restricting the exactness of the formulation.
\subsection{Choice of a subclass of densities}
Let us assume that $D$ has the form (\ref{eq:denshfb}),
where $\left| \Psi_a \right>$ and $\left| \Psi_b \right>$ can be written as products of quasi-particle operators,
i.e. $\left| \Psi_a \right> = \Pi a^+_\alpha \left| 0 \right>$ and $\left| \Psi_b \right> =
\Pi b^+_\alpha \left| 0 \right>$. In the following, $D$ will be referred to as a density, although
it does not necessarily meet all the properties required of a density matrix.
For each quasi-particle operator, two
sets of single-particle wave-functions, denoted by
$\left| \alpha_{a,b} \right>$ and $\left| \bar \alpha_{a,b} \right>$,
are introduced. They define the transformation between quasi-particle states and a complete set of particle states as
\begin{eqnarray}
\begin{array} {cc}
a_\alpha = & \sum_{i} c_i \left< \bar \alpha_a \left. \right| i \right>+
\left< i \left. \right| \alpha_a \right> c^+_i ,\\
b_\alpha = & \sum_{i} c_i \left< \bar \alpha_b \left. \right| i \right>+
\left< i \left. \right| \alpha_b \right> c^+_i .
\end{array}
\label{eq:bogo}
\end{eqnarray}
Note that we can recover the matrix notations $U,V$ often used in the HFB theory through the
relation $^{a,b}U_{i\alpha} = \left< \bar \alpha_{a,b} \left. \right| i \right>$ and
$^{a,b}V_{i\alpha} = \left< i \left. \right| \alpha_{a,b} \right>$.
As usual \cite{Rin80,Bla86}, we introduce vector notations ${\bf a} = \left\{ a, a^+\right\}$,
${\bf b} = \left\{ b, b^+\right\}$ and ${\bf c} = \left\{ c, c^+ \right\}$. Above linear
transformations can then be written as matrix transformations ${\bf a} = {\cal W}^+_a {\bf c}$ and
${\bf b} = {\cal W}^+_b {\bf c}$.
In contrast to the standard HFB theory, we do not impose that the transformations be canonical but
instead restrict ourselves to a subclass of quasi-particles and vacua having two specific properties.
We first assume that
\begin{eqnarray}
{\cal W}_a {{\cal W}_b}^+ = {{\cal W}_a}^+ {{\cal W}_b} = 1 ,
\label{eq:prop1}
\end{eqnarray}
which gives the inverse transformations ${\bf c}={\cal W}_b{\bf a} ={\cal W}_a {\bf b}$.
As a consequence, although the $a$ and $a^+$ operators (respectively the $b$ and $b^+$)
do not necessarily fulfill fermionic anti-commutation rules, because of (\ref{eq:prop1}) we have
\begin{eqnarray}
[a_\alpha, b_\beta ]_+ &=& [a^+_\alpha, b^+_\beta ]_+ = 0,~~~
[a_\alpha, b^+_\beta ]_+ = \delta_{\alpha \beta }.
\label{eq:com1}
\end{eqnarray}
The second important assumption is that $\left| \Psi_a \right>$ and $\left| \Psi_b \right>$ are both vacua
for all $a_\alpha$ and $b_\alpha$. As we will see in the following, such properties may hold without any simple
relation between the two sets of annihilation operators.
We introduce the generalized density matrix ${\cal R}_{ab}$ defined as
\begin{eqnarray}
{\cal R}_{ab} =
\left(
\begin{array} {cc}
\left< \Psi_b \left| c^+_i c_j \right| \Psi_a \right>
& \left< \Psi_b \left| c_i c_j \right| \Psi_a \right> \\
\left< \Psi_b \left| c^+_i c^+_j \right| \Psi_a \right> & \left< \Psi_b \left| c_i c^+_j \right| \Psi_a \right>
\end{array} \right).
\end{eqnarray}
From the two assumptions, it can be shown that
\begin{eqnarray}
{\cal W}^+_b {\cal R} {\cal W}_a = {\cal L} =
\left(
\begin{array} {cc}
0 &0 \\
0 &1 \\
\end{array} \right),
\label{eq:prop2}
\end{eqnarray}
or equivalently:
\begin{eqnarray}
< a_\alpha b_\beta > &=& < a^+_\alpha b_\beta >= < a^+_\alpha
b^+_\beta >=0, \nonumber \\
< a_\alpha b^+_\beta >&=&\delta_{\alpha \beta}. \nonumber
\end{eqnarray}
This again can be seen as a generalization of the HFB case and implies
${\cal R}^2_{ab} ={\cal R}_{ab}$.
In addition, the generalized density ${\cal R}_{ab}$ takes a simplified form
compared to the one generally obtained for transition densities \cite{Rin80}. Here, we have
\begin{eqnarray}
{\cal R}_{ab} =
\left(
\begin{array} {cc}
\rho_{ab} & \kappa_{ab} \\
- \kappa^*_{ab} & 1-{\rho}^T_{ab}
\end{array}
\right) ,
\end{eqnarray}
where ${\rho}^T_{ab}$ denotes the transposed matrix of $\rho_{ab}$. In the following,
to simplify notations we will omit the subscript "$_{ab}$".
Different operators matrix elements can be expressed as
\begin{eqnarray}
\rho &=& \sum_{\alpha}\left| \alpha_a \right> \left< \alpha_b \right|, \\
1-\rho &=& \sum_{\alpha} \left| \bar \alpha_a \right> \left< \bar \alpha_b \right|, \\
\kappa &=& \sum_{\alpha} \left| \alpha_a \bar \alpha_b \right>= -\sum_{\alpha}
\left| \bar \alpha_a \alpha_b \right>,
\end{eqnarray}
with the convention $\kappa_{ij} = \sum_{\alpha} \left< ij \left. \right| \alpha_a \bar \alpha_b\right>$.
Finally, we will also use the notation $\left| ^{a,b}W_\alpha \right>$ and $\left| ^{a,b}V_\alpha \right>$ (taken from
ref. \cite{Bla86}).
We have in particular
\begin{eqnarray}
{\cal R} &=& \sum_\alpha \left| {^a}W_\alpha \right> \left< {^b}W_\alpha \right|, \nonumber \\
1-{\cal R} &=& \sum_\alpha \left| {^a}V_\alpha \right> \left< {^b}V_\alpha \right|,
\end{eqnarray}
with $\left< {^a}W_\beta \left. \right| {^b}W_\alpha \right>=
\left< {^a}V_\beta \left. \right| {^b}V_\alpha \right> =\delta_{\alpha \beta }$ and
$\left< {^a}V_\beta \left. \right| {^b}W_\alpha \right>= 0$. This
completes the different properties associated with the subclass of densities considered here.
\subsection{Expression of the Hamiltonian and generalized TDHFB equation}
Using the previous properties, the action of the two-body Hamiltonian on the vacuum $\left| \Psi_a \right>$ can be
recast as
\begin{eqnarray}
H\left| \Psi_a \right> = \left\{ \left< H \right> + h_L + H^{L}_{res} \right\}\left| \Psi_a \right>,
\label{eq:hpsia}
\end{eqnarray}
where we have used the compact notation
$\left< H \right> = \left< \Psi_b \left| H \right| \Psi_a \right>$ and where
$h_L$ is a one-body effective Hamiltonian given by
\begin{widetext}
\begin{eqnarray}
h_L = \sum_{\alpha \beta} \left\{ \left< \bar \alpha_b \left| h \right| \beta_a \right> a^+_\alpha b^+_\beta
+ \frac{1}{2} \Delta_{\bar \alpha_b \bar \beta_b} a^+_\alpha a^+_\beta
- \frac{1}{2} \Delta^*_{ \alpha_a \beta_a } b^+_\alpha b^+_\beta
\right\}
\label{eq:h1}
\end{eqnarray}
\end{widetext}
Here $h$ and $\Delta$ correspond, respectively, to the matrix elements
\begin{eqnarray}
h_{ij}&=& T_{ij} + \left< i\left| Tr_{2} (v_{12}\rho_2) \right| j \right>, \\
\Delta_{ij} &=& \frac{1}{2} \sum_{kl} \left< ij \left| v_{12} \right| kl \right> \kappa_{kl},
\end{eqnarray}
which will be called mean-field and pairing field in analogy to HFB theory.
Note that expression (\ref{eq:h1}) differs from the one generally obtained in HFB theory using the
Wick theorem, because of the coexistence of two sets of quasi-particle operators.
Starting from (\ref{eq:h1}), the effective Hamiltonian can be recast as
\begin{eqnarray}
h_L = \frac{1}{2}(
\begin{array} {cc}
c^+ & ~~c
\end{array})
(1-{\cal R}) {\cal H} {\cal R}
\left(
\begin{array} {c}
c \\
c^+
\end{array}
\right) ,
\label{eq:h1rr}
\end{eqnarray}
where ${\cal H}$ stands for the generalized HFB Hamiltonian \cite{Rin80,Bla86}:
\begin{eqnarray}
{\cal H} = \left(
\begin{array} {ccc}
h & \Delta \\
-\Delta^* & -h^T
\end{array} \right).
\end{eqnarray}
We will see that expression (\ref{eq:h1rr}) is central for further developments.
The last term of equation (\ref{eq:hpsia}), called hereafter
residual Hamiltonian reads
\begin{eqnarray}
H^{L}_{res} = \frac{1}{4} \sum_{\alpha \beta \gamma \delta} \left< \bar \alpha_b \bar \beta_b \left| v_{12} \right| \delta_a \gamma_a \right>
a^+_\alpha a^+_\beta b^+_\gamma b^+_\delta.
\end{eqnarray}
Performing a similar decomposition of $\left< \Psi_b \right| H$ leads to
\begin{eqnarray}
\left< \Psi_b \right| H = \left< \Psi_b \right| \left\{ \left< H \right> + h_R + H^{R}_{res} \right\},
\end{eqnarray}
with
\begin{eqnarray}
h_R =
\frac{1}{2}(
\begin{array} {cc}
c^+ & ~~c
\end{array})
{\cal R} {\cal H} (1-{\cal R})
\left(
\begin{array} {c}
c \\
c^+
\end{array}
\right) ,
\end{eqnarray}
while
\begin{eqnarray}
H^{R}_{res} = \frac{1}{4} \sum_{\alpha \beta \gamma \delta} \left< \alpha_b \beta_b \left| v_{12} \right| \bar \delta_a \bar
\gamma_a \right> a_\alpha a_\beta b_\gamma b_\delta.
\end{eqnarray}
\subsection{Evolution of the generalized density ${\cal R}$}
Starting from the initial density (\ref{eq:denshfb}),
the evolution of the system
is first considered assuming that the effect of the residual interaction can be neglected.
After one time-step, due to the one-body nature of $h_L$, the state
$\left| \Psi_a + d\Psi_a \right> = e^{\frac{dt}{i\hbar}h_L} \left| \Psi_a \right>$
is a vacuum for the new quasi-particles
$a'_\alpha = a_\alpha + d a_\alpha =
e^{\frac{dt}{i\hbar}h_L} a_\alpha e^{-\frac{dt}{i\hbar}h_L}$. Similarly,
$\left< \Psi_b + d\Psi_b \right|= \left< \Psi_b \right| e^{-\frac{dt}{i\hbar}h_R}$
is a vacuum for the new quasi-particles
${b'_\alpha}^+ = b^+_\alpha + d b^+_\alpha =
e^{\frac{dt}{i\hbar}h_R} b^+_\alpha e^{-\frac{dt}{i\hbar}h_R}$.
Since the residual interaction is neglected, all
the information on the system is contained in the evolution of ${\cal R}$.
From standard rules of creation-annihilation operator
transformations \cite{Blo62,Bal69,Bla86}, we obtain:
\begin{eqnarray}
\left[h_L, {\bf c}\right] &=& -(1-{\cal R}) {\cal H} {\cal R}{\bf c}, \\
\left[h_R, {\bf c}\right] &=& -{\cal R}{\cal H} (1-{\cal R}) {\bf c}.
\label{eq:com}
\end{eqnarray}
With the help of the above commutation relations, we can express $e^{-\frac{dt}{i\hbar}h_L} {\bf c}
e^{+\frac{dt}{i\hbar}h_L}$ and $e^{\frac{dt}{i\hbar}h_R} {\bf c}
e^{-\frac{dt}{i\hbar}h_R}$, and deduce the evolution of ${\cal R}$.
Using the fact that initially ${\cal R}^2 = {\cal R}$ and
$\left( 1 - {\cal R} \right){\cal R} = 0$, we end with
\begin{eqnarray}
i \hbar \frac{d {\cal R}}{dt} &=& (1-{\cal R}) {\cal H} {\cal R} - {\cal R}{\cal H} (1-{\cal R}) \nonumber \\
&=& \left[{\cal H}, {\cal R} \right],
\end{eqnarray}
which is nothing but a TDHFB equation generalized to densities of the form (\ref{eq:denshfb}). Without going into further
details, it can be shown that the density ${\cal R}$
fulfills all the properties listed above and thus remains in
the subclass of densities previously described. Therefore, the considerations made
for a single time step extend to the long-time dynamics.
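The first-order consistency of the generalized TDHFB equation with the projector property ${\cal R}^2 = {\cal R}$ can be illustrated by a small numerical sketch (our own, with an arbitrary Hermitian matrix standing in for ${\cal H}$): since ${\cal R}[{\cal H},{\cal R}] + [{\cal H},{\cal R}]{\cal R} = [{\cal H},{\cal R}]$, one Euler step of $i\hbar\, d{\cal R}/dt = [{\cal H},{\cal R}]$ violates idempotency only at order $dt^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, hbar = 4, 1e-4, 1.0

# Arbitrary Hermitian matrix playing the role of the generalized HFB Hamiltonian
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Hcal = (A + A.conj().T) / 2

# A random rank-2 projector R (R^2 = R) that does not commute with Hcal
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2)))
R = Q @ Q.conj().T
assert np.allclose(R @ R, R)

# One Euler step of i*hbar dR/dt = [Hcal, R]
comm = Hcal @ R - R @ Hcal
R1 = R + (dt / (1j * hbar)) * comm

# Idempotency defect is exactly (dt/i*hbar)^2 * comm^2, i.e. O(dt^2) << dt
defect = np.linalg.norm(R1 @ R1 - R1)
print(defect)
```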
\section{Introduction of Quantum Monte-Carlo methods}
In the previous section, we have introduced general properties of densities given by eq. (\ref{eq:denshfb})
which will be helpful for the forthcoming discussion. In addition, we have shown that the dynamics reduces
to a TDHFB-like equation when the residual interaction is neglected.
The aim of this section is to show that the residual interaction can be treated by introducing
stochastic processes between densities described in section \ref{sec:subclass}.
\subsection{Separable residual interaction}
In the following discussion, we first concentrate on the evolution of $\left| \Psi_a \right>$, keeping in
mind that everything can be transposed to $\left< \Psi_b \right|$.
Following the stochastic mean-field approach \cite{Car01,Jul02,Lac05}, we assume that
the residual part of the interaction can be written as a sum of separable interactions in the particle-hole channel:
\begin{eqnarray}
\left<\bar \alpha_a \bar \beta_a \left| v_{12} \right|
\delta_b \gamma_b \right> = - \sum_{m} \left<\bar \alpha_a \left| O_m \right|
\delta_b \right> \left< \bar \beta_a \left| O_m \right|
\gamma_b \right> ,
\label{eq:sepph}
\end{eqnarray}
where $O_m$ corresponds to a set of single-particle operators.
Using relation (\ref{eq:sepph}), the residual interaction $H^{L}_{res}$ can be recast as
\begin{eqnarray}
H^{L}_{res} \left| \Psi_a \right>= \frac{1}{4} \sum_{m} B^{ph}_m B^{ph}_m \left| \Psi_a \right>,
\label{eq:hlquare}
\end{eqnarray}
where the one-body operators $B^{ph}_m$ are given by
\begin{eqnarray}
B^{ph}_m = \sum_{ \alpha \beta } \left< \bar \alpha_a \left| O_m \right| \beta_b \right>
a^+_\alpha b^+_\beta .
\end{eqnarray}
Guided by the previous section, we rewrite it as
\begin{eqnarray}
B^{ph}_m = \frac{1}{2} \left(
\begin{array} {cc}
c & c^+
\end{array}
\right) (1-{\cal R}) {\cal B}^{ph}_m {\cal R}
\left(
\begin{array} {c}
c^+ \\
c
\end{array}
\right) ,
\label{eq:bph}
\end{eqnarray}
where we have introduced the matrix ${\cal B}^{ph}_m$:
\begin{eqnarray}
{\cal B}^{ph}_m =
\left(
\begin{array} {ccc}
O_m & 0 \\
0 & -O_m^{T}
\end{array}
\right).
\end{eqnarray}
Once $H^L_{res}$ is written as (\ref{eq:hlquare}), the introduction of a stochastic process is rather
straightforward. We introduce a set of stochastic variables $d \xi^{L}_{m}$
(which follow the Ito rules of stochastic calculus \cite{Gar85}) with zero mean and variances
satisfying
\begin{eqnarray}
\overline{d \xi^{L,(n)}_{m} d \xi^{L,(n)}_{m'}} = \delta_{m m'} \frac{d t} {2 i\hbar}.
\end{eqnarray}
Here the superscript $^{(n)}$ labels a specific realization of the stochastic process. In the following,
it will sometimes be omitted to simplify the notation.
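A complex second moment such as $\overline{d\xi^2} = dt/(2i\hbar)$ can be realized in practice, for instance, by scaling a real Gaussian increment with the complex square root $\sqrt{dt/(2i\hbar)}$. The following sketch (our own illustration; this sampling recipe is one possible choice, not prescribed by the text) checks the zero mean and the complex variance empirically:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, hbar, N = 1e-3, 1.0, 200_000

# Scale a real standard normal by a complex root so that
# mean(dxi) = 0 and mean(dxi**2) = dt / (2i*hbar), as required.
scale = np.sqrt(dt / (2j * hbar))
dxi = scale * rng.standard_normal(N)

target = dt / (2j * hbar)
emp = (dxi ** 2).mean()
print(abs(emp - target) / abs(target))  # small statistical error
```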
The evolution of $\left| \Psi_a \right>$ associated with $H$ can then be written as an average
over stochastic evolutions in the Hilbert space of HFB state vectors:
\begin{eqnarray}
e^{\frac{dt}{i \hbar} H}\left| \Psi_a \right> &=& e^{\frac{d t}{i \hbar} \left<H \right>}~\overline{
e^{\frac{d t}{i \hbar} h_L + \sum_m d\xi^{L}_m B^{ph}_m} \left| \Psi_a \right> } \\
&=& \overline{\left|\Psi_a^{(n)}(t+dt) \right> }.
\end{eqnarray}
In this equation, the same conventions as in refs. \cite{Lac05,Bre02} are used, and the $\left| \Psi^{(n)}_a (t+dt) \right>$
correspond to different vacuum states. Introducing the notation
\begin{eqnarray}
dS^{L}_{ph} = \frac{d t}{i \hbar} h_L + \sum_m d\xi^{L}_m B^{ph}_m,
\end{eqnarray}
according to the Thouless theorem, each $\left| \Psi^{(n)}_a \right>$ is a vacuum for the quasi-particles
$a'_\alpha = e^{dS^{L}_{ph}} a_\alpha e^{-dS^{L}_{ph}}$.
One can finally note that the stochastic evolution of $\left| \Psi_a \right>$ should be completed
by an equivalent stochastic evolution for $\left< \Psi_b \right|$. The
associated propagator and stochastic variables are respectively denoted by
$dS^R_{ph}$ and $d\xi^R_m$.
These variables should be taken statistically independent
of $d\xi^L_m$ to properly account for the exact dynamics of $D$.
More explicitly, we have
\begin{eqnarray}
dS^R_{ph} = -\frac{d t}{i \hbar} h_R + \sum_m d\xi^{R}_m {B^{ph}_m},
\end{eqnarray}
where
\begin{eqnarray}
B^{ph}_m = \frac{1}{2} \left(
\begin{array} {cc}
c & c^+
\end{array}
\right) {\cal R}{\cal B}^{ph}_m (1-{\cal R})
\left(
\begin{array} {c}
c^+ \\
c
\end{array}
\right),
\label{eq:bphR}
\end{eqnarray}
and $\overline{d\xi^{R}_md\xi^{R}_{m'}} = -\delta_{mm'}\frac{dt}{2i\hbar}$.
\section{Nature of the stochastic process}
Similarly to the previous case, where the residual interaction was neglected, we expect that,
along each path, the stochastic evolution of the system reduces to the stochastic evolution
of ${\cal R}$. The explicit form of this stochastic evolution can be obtained
using the commutation relations \cite{Blo62,Bal69,Bla86}:
\begin{eqnarray}
e^{dS^L_{ph}}\, {\bf c}\, e^{-dS^L_{ph}} &=& e^{-(1-{\cal R})
\left[\frac{dt}{i\hbar}{\cal H} + {\cal B}^L \right]{\cal R}} {\bf c} \nonumber \\
&=& {\bf c}-(1-{\cal R}) \left[\frac{dt}{i\hbar}{\cal H} + {\cal B}^L \right]{\cal R} {\bf c},
\label{eq:commut1}
\end{eqnarray}
while
\begin{eqnarray}
e^{dS^R_{ph}}\, {\bf c}\, e^{-dS^R_{ph}} &=& e^{-{\cal R}
\left[-\frac{dt}{i\hbar}{\cal H} + {\cal B}^R \right](1-{\cal R})} {\bf c} \nonumber \\
&=& {\bf c}- {\cal R} \left[-\frac{dt}{i\hbar}{\cal H} + {\cal B}^R \right](1-{\cal R}){\bf c},
\label{eq:commut2}
\end{eqnarray}
where ${\cal B}^L$ and ${\cal B}^R$ stand for
\begin{eqnarray}
{\cal B}^{L/R} = \sum_m d\xi^{L/R}_m {\cal B}^{ph}_m.
\end{eqnarray}
Note that equations (\ref{eq:commut1}) and (\ref{eq:commut2}) are exact
thanks to the $(1-{\cal R})$ term. Proceeding as in the previous section
and using expressions (\ref{eq:commut1}) and (\ref{eq:commut2}), one gets the stochastic evolution of
${\cal R}$:
\begin{eqnarray}
d{\cal R} =\frac{dt}{i \hbar} \left[ {\cal H}, {\cal R} \right]+(1-{\cal R}){\cal B}^{L}{\cal R}
+ {\cal R}{\cal B}^{R}(1-{\cal R}) .
\label{eq:STDHFB}
\end{eqnarray}
Such a stochastic process, called hereafter stochastic TDHFB, is similar to the
stochastic mean-field dynamics \cite{Jul06}, except that
the mean-field Hamiltonian and the normal density are now replaced by the HFB Hamiltonian ${\cal H}$
and the generalized density ${\cal R}$, respectively. Starting from ${\cal R} = \sum_\alpha \left| {^a}W_\alpha \right>
\left< {^b}W_\alpha \right|$, the evolution of ${\cal R}$ can be replaced by the set of equations
\begin{eqnarray}
\left\{
\begin{array} {ccc}
\left| d{^a}W_\alpha \right> &=& \left\{ \frac{dt}{i \hbar} {\cal H}
+(1-{\cal R}){\cal B}^{L} \right\} \left| {^a}W_\alpha \right> \\
&& \\
\left< d{^b}W_\alpha \right| &=&\left< {^b}W_\alpha \right|
\left\{ -\frac{dt}{i \hbar} {\cal H} + {\cal B}^{R}(1-{\cal R}) \right\}
\end{array}
\right.
\label{eq:dw}
\end{eqnarray}
The above expressions show that, if initially fulfilled,
the property $\left< {^a}W_\beta \left. \right| {^b}W_\alpha \right>
= \delta_{\alpha \beta }$ holds all along each stochastic path. Again,
it can be shown that all the properties of the class of densities considered in section \ref{sec:subclass}
are preserved during the stochastic evolution (\ref{eq:STDHFB}). Therefore, we only have to initiate
the quantum jump process with a density which satisfies
the properties described in section \ref{sec:subclass}. This is the case if
we start from an initial HFB density $D=\left| \Psi \right> \left< \Psi \right|$, which is the
most convenient in practice.
Note finally that the explicit form of the quasi-particle
evolution can directly be obtained from eq. (\ref{eq:dw}) while the stochastic evolution
of $\rho$ and $\kappa$ can be deduced from (\ref{eq:STDHFB}).
\subsection{Alternative form and $pp$-$hh$ separable interaction}
In the previous section, we have developed quantum diffusion processes between HFB states
assuming expression (\ref{eq:sepph}). However,
recent studies in nuclear structure \cite{Dug04} support separable interactions in the particle-particle and
hole-hole channels. For completeness, we introduce a set of one-body operators $G_m$ and assume
that the residual interaction now reads
\begin{eqnarray}
\left<\bar \alpha_a \bar \beta_a \left| v_{12} \right|
\delta_b \gamma_b \right> = - \sum_{m} \left<\bar \alpha_a \left| G_m \right|
\bar \beta_b \right> \left< \delta_a \left| G_m \right|
\gamma_b \right>^*,
\label{eq:seppp}
\end{eqnarray}
where $G_m$ should be a skew-symmetric matrix (i.e. $G^T_m = -G_m$) to respect the antisymmetry of $v_{12}$.
The formulation of the two-body problem can equivalently be done starting from eq. (\ref{eq:seppp}).
The final result is that ${\cal R}$ still obeys a stochastic equation with similar form as (\ref{eq:STDHFB}), where
${\cal B}^{L/R}$ now reads
\begin{eqnarray}
{\cal B}^{L/R} = \sum_m d\eta^{L/R}_m {\cal B}^{pp}_m + i \left( d\eta^{L/R}_m \right)^* {\cal B}^{hh}_m,
\end{eqnarray}
and $d\eta_m$ corresponds to stochastic variables with mean value zero and
\begin{eqnarray}
\overline{d \eta^{L}_{m} \left( d \eta^{L}_{m'} \right)^*}
&=& \delta_{m m'} \frac{d t} {2 \hbar},
\end{eqnarray}
while all other second moments are equal to zero.
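For illustration (a numerical sketch of our own, not part of the text): complex Gaussian increments whose real and imaginary parts are independent with variance $dt/4\hbar$ reproduce the moments above, $\overline{d\eta\,(d\eta)^*}=dt/2\hbar$ and $\overline{d\eta\,d\eta}=0$:

```python
import numpy as np

rng = np.random.default_rng(1)
hbar, dt, N = 1.0, 1e-3, 200_000

# each quadrature has variance dt/4hbar, so E[|deta|^2] = dt/2hbar, E[deta^2] = 0
sigma = np.sqrt(dt / (4 * hbar))
deta = sigma * (rng.normal(size=N) + 1j * rng.normal(size=N))

m_abs = np.mean(deta * deta.conj()).real   # should approach dt/2hbar
m_sq = np.mean(deta * deta)                # should approach 0
```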
${\cal B}^{pp}_m$ and ${\cal B}^{hh}_m$ are given by
\begin{eqnarray}
{\cal B}^{pp}_m =
\left(
\begin{array} {ccc}
0 & G_m \\
0 & 0
\end{array}
\right),~~~~~~
{\cal B}^{hh}_m =
\left(
\begin{array} {ccc}
0 & 0 \\
-G^*_m & 0
\end{array}
\right).
\end{eqnarray}
\section{Conclusions}
In this work, we have shown that the exact dynamics of interacting fermions
can be replaced by a Monte-Carlo method in the Hilbert space of Hartree-Fock Bogoliubov
states. In order to prove this reformulation, we have used an intermediate
result, considering densities $D=\left| \Psi_a \right> \left< \Psi_b \right|$ with specific
properties. Neglecting the residual interaction, the evolution of $D$ leads to a TDHFB
equation for ${\cal R}$. We then have proven that the introduction of correlations beyond the
mean-field picture can be
replaced by a Stochastic TDHFB equation for ${\cal R}$, generalizing the stochastic mean-field approach \cite{Jul02}.
It should be noted that the reformulation is not unique and the selection of a sub-class of $D$ is not absolutely necessary.
However, in that case derivations and stochastic equations are more complicated.
The stochastic theory presented here is not restricted to dynamical problems and could also be useful
to study static properties of interacting systems \cite{Jul02}. In that case, real-time propagation is
replaced by imaginary-time evolution. Monte-Carlo methods have the advantage of not requiring a priori
knowledge of the relevant collective
degrees of freedom and can possibly be used as an alternative to the GCM. It should however be noted that Monte-Carlo
methods still require large numerical efforts. Work is currently in progress to combine the advantages of the GCM
and Monte-Carlo techniques.
Finally, we would like to mention that the above theory gives an indirect proof of the fact that
densities described in section \ref{sec:subclass} form an over-complete set of densities to treat
the two-body problem. This might be of great interest even for non-stochastic methods which
treat correlations beyond mean-field.
{\bf ACKNOWLEDGMENTS}
The author is grateful to Thomas Duguet and Vincent Rotival for
the careful reading of the manuscript and to Olivier Juillet for
discussions during this work.
\section{Introduction}
\label{sec:introduction}
Superconductivity, ferromagnetism, and the Kondo effect are representative correlation effects in condensed matter physics. Interestingly, any two of these three effects compete with each other:
Hampering the spin-singlet pairing in ($s$-wave) superconductors, ferromagnetism naturally suppresses superconductivity.
The Kondo effect is attributed to another kind of spin-singlet correlation between the itinerant spins in the conduction band and the localized spin on the quantum dot (or magnetic impurity), and hence is suppressed in the presence of ferromagnetism in the conduction band \cite{Lopez02a,Fiete02a,Martinek03b,Martinek03a,ChoiMS04a,Pasupathy04a,Yang11a}.
Energetically, when the exchange Zeeman splitting due to the ferromagnetism is
larger than the Kondo temperature $T_K$ (in the absence of ferromagnetism), the Kondo effect is destroyed.
The competition between the superconducting pairing correlation and the Kondo correlation even leads to a quantum phase transition:
When the superconductivity dominates over the Kondo effect (i.e., the superconducting gap energy $\Delta_0$ is larger than the normal-state $T_K$),
the ground states of the system form a doublet owing to the Coulomb blockade on the quantum dot.
In the opposite case ($\Delta_0<T_K$), the quantum dot overcomes the Coulomb blockade and resonantly transports Cooper pairs and the whole system resides in a singlet state. The quantum phase transition is manifested by the $0$-$\pi$ quantum phase transition in nano-structure Josephson junctions consisting of a quantum dot (QD) coupled to two superconducting electrodes \cite{Buitelaar02a,Avishai03a,ChoiMS04c,ChoiMS05d,Siano04a,Siano05a,Campagnano04a,Sellier05a,Cleuziou06a,Buizert07a,Grove-Rasmussen07b,ChoiMS08f,Martin-Rodero11a,Franke11a,Delagrange16a,Delagrange16b,Delagrange15a}.
In this work, we study the triad interplay of superconductivity,
ferromagnetism, and Kondo effect all together.
More specifically, we consider a quantum dot coupled to both superconducting
and fully spin-polarized \cite{endnote:2} ferromagnetic electrodes as shown
schematically in \figref{fig:system}~(a).
Similar setups have been studied in different contexts:
exchange-field-dependence of the Andreev reflection \cite{Feng2003jan},
spin-dependent Andreev reflection \cite{Cao2004dec,Weymann15a}, and subgap
states in the QD due to ferromagnetic proximity effect
\cite{Hofstetter2010jun}. The case with a superconducting and two
ferromagnetic leads was also studied to examine the crossed Andreev reflection
\cite{Zhu2001dec,Wojcik14a}.
However, these works either did not properly capture the full correlation effects (that is, the Kondo regime could not be explored) \cite{Feng2003jan,Cao2004dec,Zhu2001dec} or studied the modification of the Kondo effect due to its interplay with superconductivity and ferromagnetism \cite{Weymann15a,Wojcik14a}.
Note that in the latter works, the Kondo effect survives the relatively weak
superconductivity and/or ferromagnetism. In this work we explore novel triad
interplays in the opposite limit: Both superconductivity and ferromagnetism
are so strong that they \emph{individually} suppress the Kondo effect, but
nevertheless \emph{together} give rise to new resonant transport.
\begin{figure}[b]
\centering
\includegraphics[height=20mm]{Fig1a_fqds}\qquad
\includegraphics[height=20mm]{Fig1b_fsqd}
\caption{System configurations for (a) the spin-polarized (SP) lead-quantum
dot-superconducting (SC) lead and (b) the spin-polarized lead-quantum dot with the proximity-induced superconductivity. See the text for the definitions of the symbols.}
\label{fig:system}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=70mm]{Fig2}
\caption{(color online) Phase diagram obtained from the NRG method. The phase
boundary (thick solid line) divides the spin singlet (S) and doublet (D)
phases. The crossover boundaries (red dotted lines) further divide the
singlet phase into the superconductivity-dominant (S$_\mathrm{S}$),
mixed-valence (S$_\mathrm{M}$), and Kondo (S$_\mathrm{K}$) singlet regimes,
which are connected adiabatically. The black dashed lines are the guides
along which we examine the change of physical properties of the system.}
\label{fig:pd}
\end{figure}
We find that unlike the aforementioned pairwise competition among the three effects, the triad interplay is ``cooperative'' in a certain sense and leads to a new quantum phase transition between doublet and singlet states; see Fig.~\ref{fig:pd}.
The singlet phase is in many respects similar to the mixed-valence state, but
connected adiabatically (through crossovers) to the superconducting state in
the limit of strong coupling to the superconductor and to the `charge Kondo
state' in the limit of strong coupling to the spin-polarized electrode.
The results are obtained with the numerical renormalization group (NRG) method, and the physical explanations are supplemented by other analytic methods such as scaling theory, variational method, and bosonization.
Based on the analysis of the characteristics of the phases, we propose
three experimental methods to identify the phases, which measure the
dot density of state, the cross-current correlation, and the current response
to a small ac gate voltage (charge relaxation resistance), respectively.
The rest of the paper is organized as follows:
We describe explicitly our system and the equivalent models for it in Section~\ref{sec:model}. We report our results based on the NRG method, the quantum phase diagram of the system and the characteristic properties of the phases and crossover regions in the singlet phase in Section~\ref{paper::sec:3}. In Section~\ref{sec:discussion}, we apply several analytic methods to provide physical interpretations of the quantum phase transition and the characteristic properties of the different phases and crossover regions. In Section~\ref{sec:experiments}, we discuss possible experiments to observe our findings. Section~\ref{sec:conclusion} summarizes the work and concludes the paper.
\section{Model}
\label{sec:model}
\Figref{fig:system}~(a) shows the schematic configuration of the system of our
interest, in which an interacting quantum dot is coupled to both a
ferromagnetic lead and a superconducting lead. To stress our points, we consider the extreme case where the ferromagnetic lead is fully polarized \cite{endnote:2} and the superconductivity is very strong (the superconducting gap is the largest energy scale). Recall that with the QD coupled to either a fully polarized ferromagnet or a strong superconductor (but not both), neither charge nor spin fluctuations are allowed on the QD.
First highlighting the fully polarized ferromagnetic lead, the Hamiltonian of the system is written as
\begin{equation}
\label{paper::eq:3}
H = H_\mathrm{QD} + H_\mathrm{F} + H_\mathrm{S} + H_\mathrm{T}
\end{equation}
with
\begin{subequations}
\label{paper::eq:4}
\begin{align}
\label{eq:HQD}
H_\mathrm{QD}
& = \delta \sum_\mu (n_\mu - 1/2) + U (n_\up - 1/2) (n_\down - 1/2)
\\
\label{paper::eq:1}
H_\mathrm{F} & = \sum_k \epsilon_k c_{k\up}^\dag c_{k\up}
\\
H_\mathrm{S}
& = \sum_{k\mu} \varepsilon_k a_{k\mu}^\dag a_{k\mu}
- \sum_k (\Delta_0 a_{k\up}^\dag a_{-k\down}^\dag + h.c.) \\
H_\mathrm{T}
& = \sum_k (t_F d_\up^\dag c_{k\up} + h.c.)
+ \sum_{k\mu} (t_S d_\mu^\dag a_{k\mu} + h.c.).
\end{align}
\end{subequations}
The operator $d_\mu^\dag$ creates a dot electron with energy $\epsilon_d$ and spin
$\mu=\up,\down$; we define the number operators
$n_\mu := d_\mu^\dag d_\mu$ and $n_d := \sum_\mu n_\mu$. The dot electrons
interact with each other with the strength $U$.
As mentioned above, the ferromagnetic lead Hamiltonian $H_F$ involves only the majority spin ($\up$) electrons, which are described by the fermion operator $c_{k\up}$ with momentum $k$ and energy $\epsilon_k$.
In the superconducting lead,
the operator $a_{k\mu}$ describes the electron with momentum $k$, spin $\mu$, and single-particle energy $\varepsilon_k$, and the terms in the pairing potential $\Delta_0$ are responsible for the Cooper pairs. Since the
superconducting phase is irrelevant in this study, $\Delta_0$ is assumed to be
real and positive.
The tunnelings between the dot and the ferromagnetic/superconducting leads are
denoted by $t_{F/S}$, respectively, which are assumed to be
momentum-independent for simplicity. The tunnelings induce the hybridizations
$\Gamma_{S/F} := \pi\rho_{S/F}|t_{S/F}|^2$ between the dot and the
superconducting/ferromagnetic leads, respectively, where $\rho_{S/F}$ are the density of states at the Fermi level in the leads.
The parameter $\delta := \epsilon_d + U/2$ indicates the deviation from the particle-hole symmetry.
To make our points clearer and simplify the discussion, in this work we focus
on the particle-hole symmetric case $(\delta=0)$. While the particle-hole
asymmetry gives rise to some additional interesting features~\cite{endnote:1}, the underlying physics can be understood in terms of that in the symmetric case.
Next we exploit the strong superconductivity to further simplify our model: The
pairing gap of the superconducting lead dominates over the other energy scales
($\Delta_0\gg U,\Gamma_S,\Gamma_F$) including $\Delta_0 \gg T_K^0$, where
$T_K^0$ is the Kondo temperature in the absence of ferromagnetic lead $(t_F=0)$
and the superconductivity $(\Delta_0=0)$. In such a limit, the role of the
superconducting lead is completely manifested in the proximity induced pairing
potential on the QD. Hence, as far as the physics below the superconducting gap
is concerned, the effective low-energy Hamiltonian [see
\figref{fig:system}~(b)] can be approximated, by integrating out the
superconducting degrees of freedom, as
\begin{equation}
\label{eq:H}
H = H_\mathrm{SQD} + H_\mathrm{F} + H_\mathrm{T}
\end{equation}
with
\begin{subequations}
\label{paper::eq:2}
\begin{align}
\label{eq:HSQD}
H_\mathrm{SQD}
&= U \left(n_\up - \frac{1}{2}\right)\left(n_\down - \frac{1}{2}\right)
+ \Delta_d (d_\up^\dag d_\down^\dag + d_\down d_\up) , \\
H_\mathrm{F} & = \sum_k \epsilon_k c_{k\up}^\dag c_{k\up} \,, \\
H_\mathrm{T}
&= \sqrt{\frac{\Gamma_F}{\pi\rho_F}}\sum_k
(d_\up^\dag c_{k\up} + c_{k\up}^\dag d_\up) \,,
\end{align}
\end{subequations}
where the proximity-induced superconducting gap is given by
{$\Delta_d\sim\Gamma_S$} \cite{Volkov95a,McMillan68a}.
In this work, we focus on \eqnref{eq:H} unless specified otherwise.
In passing, the isolated QD with pairing potential \eqref{eq:HSQD} is diagonalized with the eigenstates and the corresponding energies:
\begin{subequations}
\label{eq:iQD}
\begin{align}
\ket{D_\mu^0} & = d_\mu^\dag \ket0,
& E_D^0 & = -U/4 \,,\quad (\mu=\up,\down) \\
\ket{S_\pm^0} & = \frac{1\pm d_\up^\dag d_\down^\dag}{\sqrt{2}}\ket0,
& E_{S\pm}^0 & = U/4\pm \Delta_d \,.
\end{align}
\end{subequations}
The unperturbed ground state of the QD experiences a transition from the spin
doublet state $\ket{D_\mu^0}$ to the spin singlet state $\ket{S_-^0}$ at
$\Delta_d/U=1/2$.
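These eigenvalues and the level crossing at $\Delta_d/U=1/2$ can be checked directly by diagonalizing $H_\mathrm{SQD}$ in the four-dimensional Fock space of the dot. A minimal numerical sketch (the Jordan-Wigner matrix representation and the parameter values are our choices, not from the text):

```python
import numpy as np

# Jordan-Wigner matrices for the two dot modes (up, down) in the 4-dim Fock space
a = np.array([[0., 1.], [0., 0.]])
Z = np.diag([1., -1.])
I2, I4 = np.eye(2), np.eye(4)
d_up, d_dn = np.kron(a, I2), np.kron(Z, a)
n_up, n_dn = d_up.T @ d_up, d_dn.T @ d_dn

def H_SQD(U, Delta):
    """Isolated dot with proximity-induced pairing, cf. eq. (HSQD)."""
    pair = d_up.T @ d_dn.T + d_dn @ d_up
    return U * (n_up - I4 / 2) @ (n_dn - I4 / 2) + Delta * pair

U, Delta = 0.5, 0.12                    # Delta_d/U < 1/2: doublet ground state
ev = np.sort(np.linalg.eigvalsh(H_SQD(U, Delta)))
expected = np.sort([-U/4, -U/4, U/4 - Delta, U/4 + Delta])

ev2 = np.sort(np.linalg.eigvalsh(H_SQD(U, 0.3)))  # Delta_d/U > 1/2: singlet below
```

The spectrum reproduces the doubly degenerate doublet at $-U/4$ and the singlets at $U/4\pm\Delta_d$, with $\ket{S_-^0}$ dropping below the doublet once $\Delta_d/U>1/2$.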
\subsection{Relation to Other Models}
\label{paper::sec:2.1}
Upon the Bogoliubov-de Gennes (BdG) transformation
\begin{equation}
\label{paper::eq:5}
\begin{bmatrix}
d_\up \\ d_\down^\dag
\end{bmatrix} = \frac{1}{\sqrt{2}}
\begin{bmatrix}
1 & +1 \\
1 & -1
\end{bmatrix}
\begin{bmatrix}
f_\Up \\ f_\Down^\dag
\end{bmatrix} ,
\end{equation}
the Hamiltonian~\eqref{eq:H} is rewritten as
\begin{multline}
\label{paper::eq:8}
H = \epsilon_f \sum_{\sigma=\Up,\Down} f_\sigma^\dag f_\sigma
+ U f_\Up^\dag f_\Up f_\Down^\dag f_\Down
+ \sum_k\epsilon_{k}c_{k\up}^\dag c_{k\up} \\{}
+ \sqrt\frac{\Gamma_F}{2\pi\rho_F}\sum_k
\left[c_{k\up}^\dag\left(f_\Up+f_\Down^\dag\right) + h.c.\right]
\end{multline}
with $\epsilon_f = \Delta_d-U/2$. The Hamiltonian in \eqnref{paper::eq:8}
describes a single-orbital Anderson-type impurity level $\epsilon_f$ with
onsite interaction $U$, coupled to a spin-polarized conduction band with
strength $\Gamma_F/2$.
Despite the formal similarity, there are two important distinctions between the model~\eqref{paper::eq:8} and the conventional single-impurity Anderson model:
(i) The model~\eqref{paper::eq:8} involves the pair tunneling, $c_{k\up}^\dag f_\Down^\dag$, which will turn out to play a crucial role below.
(ii) The spin index $\sigma=\Up,\Down$ for $f_\sigma$ indicates the spin direction along the spin $x$-direction whereas $\mu=\up,\down$ for $d_\mu$ along the spin $z$-direction.
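The mapping can be verified numerically for the dot part of the Hamiltonian: under the BdG rotation~\eqref{paper::eq:5}, $H_\mathrm{SQD}$ of \eqnref{eq:HSQD} equals $\epsilon_f\sum_\sigma f_\sigma^\dag f_\sigma + U f_\Up^\dag f_\Up f_\Down^\dag f_\Down$ up to the constant $U/4-\Delta_d$. A sketch (the matrix representation and parameter values are our assumptions):

```python
import numpy as np

a = np.array([[0., 1.], [0., 0.]])
Z = np.diag([1., -1.])
I4 = np.eye(4)
d_up, d_dn = np.kron(a, np.eye(2)), np.kron(Z, a)   # Jordan-Wigner dot fermions

# BdG rotation: d_up = (f_Up + f_Dn^dag)/sqrt2, d_dn^dag = (f_Up - f_Dn^dag)/sqrt2,
# inverted to  f_Up = (d_up + d_dn^dag)/sqrt2, f_Dn = (d_up^dag - d_dn)/sqrt2
f_Up = (d_up + d_dn.conj().T) / np.sqrt(2)
f_Dn = (d_up.conj().T - d_dn) / np.sqrt(2)

U, Delta = 0.5, 0.19
n_up, n_dn = d_up.conj().T @ d_up, d_dn.conj().T @ d_dn
H_sqd = (U * (n_up - I4 / 2) @ (n_dn - I4 / 2)
         + Delta * (d_up.conj().T @ d_dn.conj().T + d_dn @ d_up))

eps_f = Delta - U / 2
N_Up, N_Dn = f_Up.conj().T @ f_Up, f_Dn.conj().T @ f_Dn
H_f = eps_f * (N_Up + N_Dn) + U * N_Up @ N_Dn

diff = H_sqd - H_f        # should equal (U/4 - Delta) times the identity
```

The rotated operators satisfy the canonical anticommutation relations, and `diff` is proportional to the identity, confirming the dot part of \eqnref{paper::eq:8}.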
On the other hand, the particle-hole transformation
\begin{align}
d_1 = d_\up,
\quad
d_2 = d_\down^\dag,
\end{align}
transforms the model~\eqref{eq:H} to
\begin{multline}
\label{eq:HSQD:TLM}
H = - U (n_1 - 1/2) (n_2 - 1/2)
+ \Delta_d (d_1^\dag d_2 + d_2^\dag d_1) \\{}
+ \sum_k\epsilon_k c_{k\up}^\dag c_{k\up}
+ \sqrt{\frac{\Gamma_F}{\pi\rho_F}}\sum_k
(d_1^\dag c_{k\up} + c_{k\up}^\dag d_1) \,.
\end{multline}
In this model, the ferromagnetic lead is coupled to $d_1$ via a normal
tunneling and the pairing term has been transformed to a tunneling term between
dot orbital levels. It is known as the resonant two-level system with
attractive interaction ($-U<0$) \cite{Zitko2009feb,Zitko2011nov}.
\subsection{Methods and Physical Quantities}
\label{paper::sec:2.2}
For a non-perturbative study of the many-body effects, we adopt the well-established
numerical renormalization group (NRG) method, which provides not only
qualitatively but also quantitatively accurate results for quantum impurity systems.
Specifically, we exploit the NRG method to identify the different phases of the system as well as to investigate their quantum transport properties.
Technically, we impose additional improvements, the
generalized logarithmic discretization \cite{Campo2005sep,Zitko2009feb} with
the discretization parameter $\Lambda=2$ and the $z$-averaging
\cite{Yoshida1990may} with $N_z = 32$, on the otherwise standard
NRG procedure \cite{Wilson1975oct,Krishna-murthy80a,Bulla2008apr}.
We use the conduction band half-width $D=1$ as the unit of energy.
To identify the phases, we follow the (non-perturbative) renormalization group idea \cite{Wilson75a,Krishna-murthy80a,Krishna-murthy80b} and examine the conserved quantity
\begin{equation}
N_S = n_\up - n_\down + \sum_k c_{k\up}^\dag c_{k\up} - N_0
\end{equation}
of the ground state, where $N_0$ is the total charge number of the unperturbed
spin-polarized lead at zero temperature. Physically, $N_S$ is the
\emph{excess} spin number in the whole system.
The quantum transport properties of different phases and crossover regions are investigated by calculating the local spectral density and the charge relaxation resistance with the NRG method. The local spectral density (or local tunneling density of states) of the QD,
\begin{equation}
A_\mu(\omega)
= -\frac{1}{\pi\hbar} \mathrm{Im}[G_\mu^R(\omega)] \,,
\end{equation}
is related to the Fourier transform $G_\mu^R(\omega)$ of the retarded Green's
function $G_\mu(t)$ for spin $\mu$,
$G_\mu(t) = -i\hbar\Theta(t) \Braket{\{d_\mu(t),d_\mu^\dag(0)\}}$.
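As a minimal illustration of this convention (an example of our own, with illustrative parameter values): for a noninteracting resonant level at energy $\varepsilon$ with broadening $\Gamma$, $G^R(\omega)=\hbar^2/(\hbar\omega-\varepsilon+i\Gamma)$, and the resulting $A(\omega)$ is a Lorentzian of unit weight peaked at $\hbar\omega=\varepsilon$:

```python
import numpy as np

hbar, eps, Gamma = 1.0, 0.2, 0.05       # illustrative level energy and broadening
omega = np.linspace(-40, 40, 400_001)   # uniform frequency grid

# retarded Green's function of a noninteracting resonant level
GR = hbar**2 / (hbar * omega - eps + 1j * Gamma)
A = -GR.imag / (np.pi * hbar)           # spectral density A(omega)

norm = A.sum() * (omega[1] - omega[0])  # sum rule: integrates to 1
w_peak = omega[np.argmax(A)]            # Lorentzian peak at omega = eps/hbar
```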
The charge relaxation resistance $R_q(\omega)$ describes the response of the displacement current $I(t)$ through the QD in the presence of the ac gate voltage \cite{Buttiker1993jun,Buttiker1993sep,LeeMC2011may,LeeMC2014aug}. More explicitly, it is defined through the admittance $g(t) = (ie/\hbar) \Theta(t) \Braket{[I(t),n_d(t)]}$ by the relation
$1/g(\omega) = R_q(\omega) + i/\omega C_q(\omega)$, where $C_q(\omega)$ is the
quantum correction to the capacitance. The admittance in turn can be extracted from its
relation, $g(\omega) = i\omega (e^2/\hbar) \chi_c(\omega)$ to the dot charge
susceptibility $\chi_c(t) = -i\Theta(t)\Braket{[n_d(t),n_d]}$, which is
directly calculated with the NRG method.
\section{Results}
\label{paper::sec:3}
\Figref{fig:pd} shows the phase diagram which exhibits a quantum
phase transition between two phases, the spin singlet (S) and doublet (D)
phases, identified by the quantum number $N_S$ of the ground state calculated with the NRG method. Across the phase boundary, the quantum number $N_S$ of the ground state changes from $N_S=\pm1$ (doublet) to $N_S=0$ (singlet).
In addition, apart from the phase transition, we have found two
crossovers further distinguishing three regimes inside the singlet phase:
superconductivity-dominant ($\mathrm{S_S}$), mixed-valence (S$_\mathrm{M}$), and Kondo (S$_\mathrm{K}$) singlet regimes.
Below, we detail some interesting characteristics of each phase.
\subsection{Doublet Phase}
\label{paper::sec:3.1}
The doublet phase occupies the region of smaller $\Delta_d$ and $\Gamma_F$ of the phase diagram in Fig.~\ref{fig:pd}. The phase boundary is roughly linear for $\Gamma_F/U\ll 1/2$ as described by the equation
\begin{equation}
\label{paper::eq:6}
\Delta_d/U+\Gamma_F/U \approx 1/2 \,.
\end{equation}
Note that the ground state remains doubly degenerate with the excess spin
number $N_S=\pm1$ even in the presence of the coupling to the spin-polarized
ferromagnetic lead. This is due to the particle-hole symmetry. When the
particle-hole symmetry is broken, the degeneracy is lifted at finite
$\Gamma_F$ and the phase boundary is shifted accordingly \cite{endnote:1}.
In the doublet phase, the local spectral densities $A_\mu(\omega)$ on the QD exhibit typical charge-fluctuation peaks at
\begin{math}
|\hbar\omega| \sim E_{S\pm}^0 - E_D^0 = U/2\pm\Delta_d;
\end{math}
see \figsref{fig:doublet}~(a) and (b).
Apart from those charge-fluctuation peaks, $A_\down(\omega)$ has an additional
power-law peak at the zero frequency $\omega = 0$, $A_\down(\omega) \propto |\omega|^{-\alpha}$ [see
\figref{fig:doublet}~(b)]. This power-law peak at the zero energy suggests
that the doublet phase is `marginal' in the RG sense.
The exponent $\alpha$ is found to increase monotonically with increasing $\Gamma_F$
and $\Delta_d$, and is well fitted to $\alpha = 1 - (2/\pi)\tan^{-1}(U/2\Gamma_F)$ for small $\Delta_d$ [see \figref{fig:doublet}~(c)].
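In the atomic limit $\Gamma_F\to0$, the charge-fluctuation peak positions follow from the Lehmann representation of $A_\up(\omega)$ with the isolated-dot spectrum \eqnref{eq:iQD}. A small self-contained numerical sketch (the matrix construction and parameter values are ours), starting from the doublet member $\ket{D_\down^0}$:

```python
import numpy as np

a = np.array([[0., 1.], [0., 0.]])
Z = np.diag([1., -1.])
I4 = np.eye(4)
d_up, d_dn = np.kron(a, np.eye(2)), np.kron(Z, a)   # Jordan-Wigner dot fermions

U, Delta = 0.5, 0.12                                # doublet regime, Delta/U < 1/2
n_up, n_dn = d_up.T @ d_up, d_dn.T @ d_dn
H = (U * (n_up - I4 / 2) @ (n_dn - I4 / 2)
     + Delta * (d_up.T @ d_dn.T + d_dn @ d_up))

vac = np.zeros(4); vac[0] = 1.0
gs = d_dn.T @ vac                                   # doublet state |D_down>
E_D = gs @ H @ gs                                   # = -U/4

# electron-addition poles of A_up: E_m - E_D weighted by |<m|d_up^dag|D_down>|^2
E, V = np.linalg.eigh(H)
amp = V.T @ (d_up.T @ gs)
poles = sorted(E[m] - E_D for m in range(4) if abs(amp[m])**2 > 1e-10)
```

The two surviving poles sit at $U/2\mp\Delta_d$, in agreement with the peak positions quoted above.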
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Fig3a_D_Aup} \\
\includegraphics[width=0.8\columnwidth]{Fig3b_D_Adn} \\
\includegraphics[width=0.8\columnwidth]{Fig3c_D_alpha}
\caption{(a,b) Spin-dependent spectral densities $A_\mu(\omega)$ in the spin doublet phase, corresponding to the point $1$ in \figref{fig:pd}. Here we have used $U=0.5D$,
$\Gamma_F=0.1D$, and $\Delta_d=0.12D$. The dotted lines in (a) indicate the
frequencies $|\hbar\omega| = U/2\pm\Delta_d$. (c) The exponent $\alpha$ from
the power-law relation of $A_\down(\omega)$. The line is a fitting
curve for $\Delta_d=0$; see the text for its expression. The values
of $\Delta_d/D$ are annotated.}
\label{fig:doublet}
\end{figure}
\subsection{Singlet Phase: Superconductivity-Dominant Singlet}
\label{paper::sec:3.2}
For larger values of $\Delta_d$,\footnote{Recall that the proximity-induced pairing potential $\Delta_d\sim\Gamma_S$. Therefore, the large-$\Delta_d$ limit corresponds to the strong coupling to the superconductor in the original system in \figref{fig:system}~(a).} the system has a singlet ground state. In particular, the region of larger $\Delta_d/U$ and smaller $\Gamma_F/U$ of the phase diagram Fig.~\ref{fig:pd} is characterized by the strong Cooper pairing. This is natural, as the ground state of the unperturbed QD ($\Gamma_F=0$) is the spin singlet $\ket{S_-^0}$ composed of empty or doubly occupied states [see \eqnref{eq:iQD}] due to the proximity-induced superconductivity.
Such a superconductivity-dominant singlet region is separated from the other singlet regions by a crossover boundary, roughly described by the equation [cf.~\eqnref{paper::eq:6}]
\begin{equation}
\label{paper::eq:7}
\Delta_d/U - \Gamma_F/U \approx 1/2 \,.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Fig4a_SS_Aup}
\includegraphics[width=0.8\columnwidth]{Fig4b_SS_Adn}
\caption{Dot spectral densities in the superconductivity-dominant singlet
regime corresponding to the point $3$ in \figref{fig:pd}. Here we have used
$U=0.5D$, $\Gamma_F=0.4D$, and $\Delta_d=0.45D$. The dotted lines indicate
the frequencies $|\hbar\omega| = \Delta_d-U/2$. In (a), the spectral density vanishes at zero frequency due to a Fano-like destructive interference.}
\label{fig:SuperSinglet}
\end{figure}
Because in this regime the superconductivity prevails over all the other types of correlations, the dot spectral densities [see Figs.~\ref{fig:SuperSinglet}~(a) and (b)] are simply given by the charge fluctuation peaks at $|\hbar\omega| \sim E_D^0 - E_{S-}^0 = \Delta_d - U/2$, broadened by the weak tunnel coupling $\Gamma_F$.
However, there is one noticeable feature in the spin-up spectral density $A_\up(\omega)$. That is, $A_\up(\omega=0)=0$ exactly, which is the consequence of the Fano-like destructive interference between two kinds of
dot-lead tunneling processes. It will be discussed in detail in Section~\ref{paper::sec:4.3}.
\subsection{Singlet Phase: Mixed-Valence Singlet}
\label{paper::sec:3.3}
The most interesting singlet phase occurs near $\Delta_d/U\approx 1/2$ with
finite $\Gamma_F/U$ in the phase diagram (Fig.~\ref{fig:pd}). We call it a ``mixed-valence singlet'' region because $|\epsilon_f|\lesssim\Gamma_F$ in the model~\eqref{paper::eq:8} regarding $\epsilon_f$ and $U$ as independent parameters; see the further discussions in Section~\ref{paper::sec:4.4}. It is distinguished from the doublet phase by the true phase boundary~\eqref{paper::eq:6} and separated from the superconductivity-dominant singlet state by the crossover boundary~\eqref{paper::eq:7}; that is,
\begin{equation}
|\Delta_d/U-1/2|\approx\Gamma_F/U .
\end{equation}
It is also separated from still another singlet state for $\Gamma_F/U\gg 1$,
which is characterized by the Kondo behaviors (see also
Section~\ref{paper::sec:3.4}), by another crossover.
The two spin-dependent spectral densities $A_\mu(\omega)$ in the mixed-valence
singlet state, as shown in Fig.~\ref{fig:mixedvalence}, stand in stark contrast to each other: While $A_\down(\omega)$ for
the minority spin features a usual Lorentzian peak of width $\Gamma_-$ at the
zero frequency, $A_\up(\omega)$ for the majority spin has a Lorentzian dip of
the same width $\Gamma_-$ superimposed on a broader peak structure of width
$\Gamma_+$. Later [see Section~\ref{paper::sec:4.4}], we will attribute this
dip structure to a destructive interference between two different types of
tunneling processes based on an effective non-interacting theory.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Fig5a_SM_Aup}
\includegraphics[width=0.8\columnwidth]{Fig5b_SM_Adn}
\caption{Dot spectral densities in the mixed-valence singlet regime at
the point $4$ in \figref{fig:pd}. Here we have used $U=0.5D$,
$\Gamma_F=0.4D$, and $\Delta_d=0.19D$.}
\label{fig:mixedvalence}
\end{figure}
\subsection{Singlet Phase: Kondo Singlet}
\label{paper::sec:3.4}
When the QD couples strongly with the spin-polarized lead ($\Gamma_F/U\gg1,\Delta_d/U$), the system displays still another type of singlet correlation. We call this state a Kondo singlet state as it corresponds to the so-called `charge Kondo state' \cite{Matveev91a,Iftikhar15a}; see Section~\ref{paper::sec:4.5}. In the charge Kondo state, the excess charge on the QD plays the role of a pseudo-spin.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{Fig6a_SK_Aup}
\includegraphics[width=0.7\columnwidth]{Fig6b_SK_Adn}
\includegraphics[width=0.7\columnwidth]{Fig6c_SK_cs}
\includegraphics[width=0.7\columnwidth]{Fig6d_SK_TK}
\caption{(a,b,c) Dot spectral densities and charge susceptibility in the Kondo
regime of the singlet phase at point $2$ in \figref{fig:pd}. Here we have
used $U=0.5D$, $\Gamma_F=0.4D$, and $\Delta_d=0.125D$. (d) The width of the
central peak of $A_\down(\omega)$ and $T_K^\mathrm{boson}$ from
\eqnref{eq:tk:boson} at $\Delta_d=0.125D$.}
\label{fig:Kondo}
\end{figure}
As shown in Fig.~\ref{fig:Kondo}, the peak shapes of the spectral densities $A_\mu(\omega)$ are similar to those
in the mixed-valence singlet state described in Section~\ref{paper::sec:3.3}. The dip
structure in $A_\up(\omega)$ for the majority spin is again attributed to the
Fano-like destructive interference. However, the normalized peak height
$\pi T_KA_\down(\omega)$ for the minority spin is now unity, demonstrating the
charge Kondo effect; {the peak height of $\pi\Gamma_-A_\down(\omega=0)$
grows from zero to unity as one moves from the mixed-valence regime to the
Kondo regime} [compare \figref{fig:Kondo} (b) with \figref{fig:mixedvalence}
(b)]. Further, the peak width of $A_\down(\omega)$, or the dip width of
$A_\up(\omega)$, is identified as the charge Kondo temperature $T_K$.
The charge Kondo effect is also manifested in the charge susceptibility $\chi_c(\omega)$ of the QD shown in \figref{fig:Kondo}~(c). Its real part displays a pronounced central peak of the same width $T_K$. In the conventional (spin) Kondo effect, this susceptibility corresponds to the spin susceptibility.
\section{Discussion}
\label{sec:discussion}
The NRG calculations reported in the previous section clearly display a quantum phase transition between the spin singlet and doublet phases. Here we use some analytical but approximate methods to gain a deeper understanding of the nature of the transition and the characteristics of the different phases.
As seen in the equivalent model~\eqref{paper::eq:8}, our system is described by
a generalized form of the Anderson impurity model. The Anderson impurity model
\cite{Anderson61a} has been studied with various theoretical methods: the
variational method \cite{Varma76a}, the scaling theory
\cite{Jefferson77a,Haldane78a}, the numerical renormalization group method
\cite{Krishna-murthy80b}, and the $1/N$ expansion \cite{Ramakrishnan82a}.
Here we extend some of these methods.
\bigskip
\subsection{Mixed-Valence Transition}
\label{paper::sec:4.1}
We first examine analytically the phase boundary between the doublet and
singlet phases found in Section~\ref{paper::sec:3} based on the NRG method. Our
analysis consists of two steps depending on the relevant energy scale. At
higher energies (the band cutoff $\Lambda\gtrsim\Gamma_F$),\footnote{The band cutoff $\Lambda$ here is not to be confused with the band discretization parameter of the NRG in Section~\ref{paper::sec:2.2}.} we extend the scaling theory
\cite{Jefferson77a,Haldane78a} to integrate out the high-energy excitations. At
lower energies ($\Lambda<\Gamma_F$), we extend the variational method \cite{Varma76a}.
Following Haldane's scaling argument \cite{Jefferson77a,Haldane78a}, it is
straightforward to integrate out the high energy states in the conduction band up
to $\Gamma_F$ and keep track of the scaling of the parameters $\epsilon_f$ and
$U$ in the equivalent model~\eqref{paper::eq:8}; concerning the model~\eqref{paper::eq:8} it is convenient to regard $\epsilon_f$ and $U$ (rather than $\Delta_d$ and $U$) as independent parameters. We find that even though our
system has only a single spin channel, the anomalous tunneling term acts as
tunneling via a second spin channel, so that the scaling result is exactly the
same as that for the conventional Anderson model:
\begin{equation}
\epsilon_f(\Lambda)
= \epsilon_f^* - \frac{\Gamma_F}{\pi} \ln\frac{\Lambda}{\Gamma_F}
\end{equation}
with the scaling invariant
$\epsilon_f^*=\epsilon_f(\Lambda=\Gamma_F)$ and the band cutoff
$\Lambda$. Therefore, as in the conventional Anderson impurity model, it is possible to identify three regimes:
the empty/doubly-occupied ($|\epsilon_f^*|\gg\Gamma_F$), the
mixed-valence ($|\epsilon_f^*|\lesssim\Gamma_F$), and the local-moment
regimes ($\epsilon_f^*\ll-\Gamma_F$).
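As a simple numerical cross-check (parameter values are illustrative assumptions): integrating the flow $d\epsilon_f/d\ln\Lambda=-\Gamma_F/\pi$ from the band edge $\Lambda=D$ down to $\Lambda=\Gamma_F$ recovers the scaling invariant $\epsilon_f^*$, from which the regime can be read off:

```python
import numpy as np

D, Gamma_F = 1.0, 0.1          # half-bandwidth and hybridization (illustrative)
eps_star = -0.05               # target scaling invariant eps_f(Lambda = Gamma_F)
eps_f = eps_star - (Gamma_F / np.pi) * np.log(D / Gamma_F)   # bare level at D

# Euler-integrate d eps_f / d ln(Lambda) = -Gamma_F/pi downward in Lambda
steps = 100_000
dl = (np.log(Gamma_F) - np.log(D)) / steps   # negative: Lambda decreases
for _ in range(steps):
    eps_f += (-Gamma_F / np.pi) * dl

regime = ("mixed-valence" if abs(eps_f) <= Gamma_F
          else "local-moment" if eps_f < -Gamma_F
          else "empty/doubly-occupied")
```

Because the flow is linear in $\ln\Lambda$, the Euler integration reproduces the closed form exactly (up to floating-point error).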
For the conventional Anderson impurity model, in all these regimes
the renormalization beyond Haldane's scaling eventually flows into the spin
singlet state, so there are only crossovers between the regimes.
However, for our system the local-moment
regime does not flow into the singlet state because there is only a single spin channel and the anomalous tunneling
term prevents the formation of the conventional Kondo correlation. Therefore, a
transition takes place between the mixed-valence and local-moment regimes;
hence we call it the mixed-valence transition.
To see this more clearly,\footnote{From the numerical point of view, the disappearance of the Kondo correlation in the local-moment regime is already well implemented by the non-perturbative NRG method.} we extend the variational method. Here we focus on the case of $U\to\infty$. This condition rules out the doubly occupied state on the QD (recall that, concerning the model~\eqref{paper::eq:8}, $\epsilon_f$ and $U$ are regarded as independent parameters) and makes the variational analysis much simpler; a finite $U$ would involve more states but would not alter the main qualitative feature of the transition found in the $U\to\infty$ case.
We take a variational ansatz for the ground states in spin singlet and doublet states, respectively, up to the second order in the dot-lead tunneling
\begin{widetext}
\begin{subequations}
\label{eq:ansatz}
\begin{align}
\ket{S}
& =
\left[
\alpha_0
+ \sum_{k<k_F} \alpha_{k+} f_\Up^\dag c_{k\up}
+ \sum_{k>k_F} \alpha_{k-} c_{k\up}^\dag f_\Down^\dag
+ \sum_{k>k_F,k'<k_F} \alpha_{kk'} c_{k\up}^\dag c_{k'\up}
\right] \ket{\mathrm{FS}}_0 \\
\ket{D_\up}
& =
\left[
\beta_0 f_\Up^\dag + \sum_{k>k_F} \beta_k c_{k\up}^\dag
+ \sum_{k>k_F,k'<k_F}
\left( \beta_{kk'+} f_\Up^\dag c_{k\up}^\dag c_{k'\up}
+ \beta_{kk'-} c_{k\up}^\dag c_{k'\up}^\dag f_\Down^\dag\right)
\right]
\ket{\mathrm{FS}}_0,
\end{align}
\end{subequations}
where $\ket{\mathrm{FS}}_0$ is the unperturbed Fermi sea and $k_F$ is the Fermi wave
number.
The states satisfy the normalization condition
$\Braket{S|S} = \Braket{D_\up|D_\up} = 1$. The coefficients $\alpha$ and $\beta$ in
these two states are determined by minimizing the energy expectation values
$\Braket{S|H|S} := E_0 + \epsilon_f + \epsilon_S$ and
$\Braket{D_\up|H|D_\up} := E_0 + \epsilon_f + \epsilon_D$, where $E_0$ is
the unperturbed energy of $\ket{\mathrm{FS}}_0$. Applying the Lagrange multiplier
method under the normalization constraint, we obtain the coupled linear
equations:
\begin{subequations}
\begin{align}
\epsilon_S \alpha_0
& =
- \epsilon_f \alpha_0
+ \frac{t_F}{\sqrt2} \left(\sum_{k<k_F} \alpha_{k+} - \sum_{k>k_F} \alpha_{k-}\right)
\\
\epsilon_S \alpha_{k+}
& =
\frac{t_F}{\sqrt2} \alpha_0 - \epsilon_k \alpha_{k+}
+ \frac{t_F}{\sqrt2} \sum_{k'>k_F} \alpha_{k'k}
\\
\epsilon_S \alpha_{k-}
& =
- \frac{t_F}{\sqrt2} \alpha_0 + \epsilon_k \alpha_{k-}
+ \frac{t_F}{\sqrt2} \sum_{k'<k_F} \alpha_{kk'}
\\
\epsilon_S \alpha_{kk'}
& =
\frac{t_F}{\sqrt2} \left(\alpha_{k'+} + \alpha_{k-}\right)
+ (\epsilon_k - \epsilon_{k'} - \epsilon_f) \alpha_{kk'}
\end{align}
\end{subequations}
and
\begin{subequations}
\begin{align}
\epsilon_D \beta_0
& = \frac{t_F}{\sqrt2} \sum_{k>k_F} \beta_k
\\
\epsilon_D \beta_k
& =
\frac{t_F}{\sqrt2} \beta_0 + (\epsilon_k - \epsilon_f) \beta_k
+
\frac{t_F}{\sqrt2}
\left[\sum_{k'<k} \beta_{k'k-} - \sum_{k'>k} \beta_{kk'-} - \sum_{k'<k_F} \beta_{kk'+}\right]
\\
\epsilon_D \beta_{kk'+}
& = - \frac{t_F}{\sqrt2} \beta_k + (\epsilon_k - \epsilon_{k'}) \beta_{kk'+}
\\
\epsilon_D \beta_{kk'-}
& =
- \frac{t_F}{\sqrt2} (\beta_k - \beta_{k'}) + (\epsilon_k + \epsilon_{k'}) \beta_{kk'-}.
\end{align}
\end{subequations}
\end{widetext}
Up to the first order (by setting $\alpha_{kk'} = \beta_{kk'\pm} = 0$), the
equations for $\epsilon_S$ and $\epsilon_D$ can be obtained in closed form:
\begin{subequations}
\label{eq:vm1}
\begin{align}
\epsilon_S
& =
- \epsilon_f
- \frac{\Gamma_F}{\pi} \ln\left(1 + \frac{D}{|\epsilon_S|}\right)
\\
\epsilon_D
& =
-
\frac{\Gamma_F}{2\pi}
\ln\left(1 + \frac{D}{|\epsilon_D| - \epsilon_f}\right).
\end{align}
\end{subequations}
These equations can be solved numerically, and two different phases, in each of
which either $\epsilon_S < \epsilon_D$ or $\epsilon_S > \epsilon_D$, are
identified, as shown in \figref{fig:vm}~(a). Although closed-form equations
for $\epsilon_S$ and $\epsilon_D$ are not available once the second-order terms
are included, the full set of coupled equations can be solved numerically by
discretizing the lead dispersion. It is found that the inclusion of the
second-order terms hardly changes the phase boundary. By similar reasoning, one can see that the phase boundary remains intact upon including higher-order terms in the variational wave functions.
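As a concrete illustration (not part of the original analysis), the two first-order equations \eqref{eq:vm1} can be solved by simple fixed-point iteration; a minimal sketch, with the band half-width $D$, $\Gamma_F$, and $\epsilon_f$ set to arbitrary illustrative values:

```python
import math

def solve_fixed_point(f, x0, tol=1e-12, max_iter=10000):
    """Iterate x <- f(x) until the update falls below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

D, Gamma_F, eps_f = 1.0, 0.1, -0.05   # illustrative values

# eps_S = -eps_f - (Gamma_F/pi) ln(1 + D/|eps_S|)
eps_S = solve_fixed_point(
    lambda x: -eps_f - (Gamma_F / math.pi) * math.log(1.0 + D / abs(x)), -0.1)

# eps_D = -(Gamma_F/(2 pi)) ln(1 + D/(|eps_D| - eps_f))
eps_D = solve_fixed_point(
    lambda x: -(Gamma_F / (2 * math.pi)) * math.log(1.0 + D / (abs(x) - eps_f)), -0.1)

# For this parameter set the singlet lies lower: eps_S < eps_D.
assert eps_S < eps_D < 0
```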
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{Fig7a_pd_vm}\quad%
\includegraphics[width=0.8\columnwidth]{Fig7b_pd_vm_siam}
\caption{Phase diagrams obtained from the variational method in the
$U\to\infty$ limit (a) for our model and (b) for the conventional
single-impurity Anderson model. The solid and dotted lines are the phase
boundaries when terms up to the first and second order are taken into
account, respectively.}
\label{fig:vm}
\end{figure}
This is in stark contrast with the analogous variational analysis for the conventional
Anderson impurity model in Appendix~\ref{sec:vmsiam}: Up to the
first order the equations for $\epsilon_S$ and $\epsilon_D$ are the same as those
for our model [see \eqnref{eq:vm:siam}]. Therefore, at this order a phase transition
between the spin singlet and doublet states would also take place even in the
conventional Anderson impurity model. This apparent contradiction with the
well-known fact that the ground state of the conventional Anderson impurity
model is always a spin singlet is due to the perturbative construction of the
ansatz.
As illustrated in \figref{fig:vm}~(b),
the spin doublet region shrinks for the
conventional Anderson model when one includes the higher-order terms.
In other words, the Kondo ground state involves all the higher-order singlet states
between the dot and the lead~\cite{Gunnarsson83a}.
This difference can be inferred from a comparison between the two ansätze, \eqnsref{eq:ansatz} and (\ref{eq:ansatz:siam}).
For the spin singlet state, the number of
particle-hole excitations in the second-order term for our model is half
that for the conventional Anderson impurity model because of the difference in the number of
channels. This is not the case for the doublet state, however. This
explains why the singlet state in our model, compared to the doublet state, does not lower its energy upon
including the higher-order terms, and also why
the Kondo correlation cannot arise.
\subsection{Doublet Phase}
\label{paper::sec:4.2}
We now investigate the characteristics of the different phases (and subregions inside the singlet phase). We start with the doublet phase by applying the Schrieffer-Wolff transformation on
the assumption that $\Gamma_F \ll |\epsilon_f|, U$.
The model~\eqref{paper::eq:8} is then
transformed to an effective Kondo-like model:
\begin{equation}
\label{eq:Heff}
H \approx
H_\mathrm{eff}
= J {\mathbf{s}}\cdot{\mathbf{S}} + \sum_k \epsilon_k c_{k\up}^\dag c_{k\up} \,.
\end{equation}
Here the impurity spin-1/2 operator ${\mathbf{S}}$ is defined by
\begin{equation}
S^+ = (S^-)^\dag = \ket\Up \bra\Down \,,\quad
S^z = \ket\Up\bra\Up - \ket\Down\bra\Down \,,
\end{equation}
where
\begin{equation}
\ket\sigma = f_\sigma^\dag\ket{0} \quad (\sigma=\Up,\Down) \,.
\end{equation}
On the other hand, the conduction-band spin,
${\mathbf{s}} = \sum_{kk'\nu\nu'} \psi_{k\nu}^\dag {\boldsymbol{\tau}}_{\nu\nu'} \psi_{k'\nu'}$ is
defined over the two-component Nambu spinor $\psi_{k\nu}$ with
$\psi_{k1} = c_{k\up}$ and $\psi_{k2} = c_{k\up}^\dag$ with ${\boldsymbol{\tau}}$ being the
Pauli matrices in the Nambu space (i.e. the particle-hole isospin space). The
isotropic exchange coupling is obtained as
$\rho_F J \approx \Gamma_F/(\pi|\epsilon_f|)$. The model \eqref{eq:Heff} is
formally the same as the usual Kondo model except that the conduction
spin is replaced by the isospin in the Nambu space. This replacement, however,
makes a crucial difference in the poor man's scaling \cite{Anderson70a,Krishna-murthy80a,Krishna-murthy80b}. For example, the typical
scaling contribution to the $J_z$ term vanishes at least up to the second order:
\begin{equation}
- 2 \frac{J_\perp^2}{\epsilon_{k_3}}
\left(
c_{k_1} \dot{c}_{k_2} \dot{c}_{k_3}^\dag c_{k_4}^\dag
+
\dot{c}_{k_2} c_{k_1} \dot{c}_{k_3}^\dag c_{k_4}^\dag
\right) \ket\up \bra\up
\approx 0 \,.
\end{equation}
These results imply that unlike the true Kondo model involving real spins, the
exchange coupling in \eqnref{eq:Heff} involving particle-hole isospins is
marginal in the RG sense. Namely, it does not scale as one goes down to lower
energies. The NRG results discussed in Section~\ref{paper::sec:3.1} support
this scaling analysis.
\subsection{Singlet Phase: Superconductivity-Dominant Singlet}
\label{paper::sec:4.3}
The superconductivity-dominant singlet phase can be easily understood within the perturbative argument. When the QD is isolated ($\Gamma_F=0$), the pairing potential $\Delta_d$ dominates over the on-site interaction $U$ for $\Delta_d/U>1/2$; see Eq.~\eqref{eq:iQD}. As the tunneling coupling $\Gamma_F$ is turned on, the above feature does not change qualitatively unless $\Gamma_F$ exceeds $\Delta_d$ significantly. As $\Gamma_F/U$ grows further beyond $\Delta_d/U-1/2$, the state gradually crosses over to the mixed-valence singlet state.
\subsection{Singlet Phase: Mixed-Valence Singlet}
\label{paper::sec:4.4}
The mixed-valence singlet phase, $|\Delta_d/U-1/2|\lesssim\Gamma_F/U\lesssim 1$, is roughly similar to the mixed-valence regime of the conventional Anderson impurity model. Recall that in the equivalent model~\eqref{paper::eq:8} the impurity energy level is given by $\epsilon_f=\Delta_d-U/2$; according to the above phase boundary, $|\epsilon_f|\lesssim\Gamma_F$, hence the name mixed-valence singlet state.
The most noticeable feature of the mixed-valence singlet region is the emergence of the two energy scales $\Gamma_\pm$ in the local spectral densities, as demonstrated in Fig.~\ref{fig:mixedvalence}. To understand it, we first note that
in this phase ($\Gamma_F>|\epsilon_f|$) the charge fluctuation on the QD is large,
and at the zeroth order the effects of the on-site interaction $U$ may be ignored.
In the non-interacting picture, the dot Green's functions, given by
\begin{subequations}
\begin{align}
G_\up^R(\omega)
& = \frac{1}{\Gamma_+-\Gamma_-}
\left[
\frac{\Gamma_+}{\omega+i\Gamma_+} -
\frac{\Gamma_-}{\omega+i\Gamma_-}
\right] , \\
G_\down^R(\omega)
& = \frac{1}{\Gamma_+-\Gamma_-}
\left[
\frac{\Gamma_+}{\omega+i\Gamma_-} -
\frac{\Gamma_-}{\omega+i\Gamma_+}
\right] ,
\end{align}
\end{subequations}
clearly exhibit two energy scales
\begin{equation}
\Gamma_\pm =
\Gamma_F/2 \pm \sqrt{(\Gamma_F/2)^2 - \epsilon_f^2} \,,
\end{equation}
which represent the relaxation rates predominantly
via the normal tunneling ($c_{k\up}^\dag f_\Up$) and the pair tunneling ($c_{k\up}^\dag f_\Down^\dag$), respectively.
The normal- and pair-tunneling processes acquire a relative phase shift of $\pi$ and lead to destructive interference; recall $d_\up=(f_\Up+f_\Down^\dag)/\sqrt2$ from the transformation~\eqref{paper::eq:5}.
The destructive interference is maximal at zero frequency so
that $A_\up(\omega)$ has a dip with a width $\Gamma_-$ inside the central peak
whose width is $\Gamma_+$. For spin $\down$, two processes simply add up so
that two peaks are superposed, displaying a very sharp peak of the width
$\Gamma_-$.
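These statements can be checked directly from the non-interacting expressions above; a minimal numerical sketch, with arbitrary illustrative values satisfying $\Gamma_F/2>|\epsilon_f|$:

```python
import math

Gamma_F, eps_f = 1.0, 0.3      # illustrative values with Gamma_F/2 > |eps_f|
root = math.sqrt((Gamma_F / 2) ** 2 - eps_f ** 2)
Gp, Gm = Gamma_F / 2 + root, Gamma_F / 2 - root

# The two relaxation rates obey Gp + Gm = Gamma_F and Gp * Gm = eps_f**2.
assert abs(Gp + Gm - Gamma_F) < 1e-12
assert abs(Gp * Gm - eps_f ** 2) < 1e-12

def G_up(w):
    """Retarded dot Green's function for spin up in the non-interacting picture."""
    return (Gp / (w + 1j * Gp) - Gm / (w + 1j * Gm)) / (Gp - Gm)

def A_up(w):
    """Spectral density A_up(w) = -(1/pi) Im G_up^R(w)."""
    return -G_up(w).imag / math.pi

# Destructive interference is maximal at zero frequency: the dip reaches zero,
# while the spectral density is finite away from w = 0.
assert abs(A_up(0.0)) < 1e-12
assert A_up(Gamma_F) > 0.0
```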
While the non-interacting theory explains the features of the spectral densities
qualitatively, the NRG results in Section~\ref{paper::sec:3.3} reveal that the interaction $U$ significantly renormalizes $\epsilon_f$ and hence $\Gamma_\pm$,
such that $\Gamma_- \ll \Gamma_+ \ll \Gamma_F$.
In particular, $\Gamma_-$ decreases exponentially with decreasing
$\Gamma_F$ and vanishes at the transition point. One way to investigate such
renormalization effects is again to use the extended variational method of
Section~\ref{paper::sec:4.1} including all
orders~\cite{Varma76a,Gunnarsson83a}. This is, however, beyond the scope of the present
work, and we leave it open for future studies.
\subsection{Singlet Phase: Kondo Singlet}
\label{paper::sec:4.5}
Now we turn to the Kondo singlet regime with $\Gamma_F/U\gg 1, \Delta_d/U$.
In Section~\ref{paper::sec:2.1} we have seen that our model, \eqref{eq:H} or \eqref{paper::eq:8}, is equivalent to the resonant two-level model with negative interaction, \eqref{eq:HSQD:TLM}.
In a recent work \cite{Zitko2011nov}, in a different context, it was found that the resonant two-level model in the large-$\Gamma_F$ limit can be bosonized and thus mapped onto the anisotropic Kondo model. Interestingly, it was also shown to be related to a quantum impurity coupled to helical Majorana edge modes formed around a two-dimensional topological superconductor. Here we adopt their result to our context, referring to Ref.~\cite{Zitko2011nov} for the details of the derivation.
Following the bosonization procedure \cite{Zitko2011nov}, the interacting resonant two-level model is mapped to a bosonized form of the anisotropic Kondo model
\begin{equation}
\label{eq:KM}
H_K
=
\sum_{k\sigma} \epsilon_k c_{k\sigma}^\dag c_{k\sigma}
+ \frac{J_\perp}{2} (S^+ s^- + S^- s^+) + J_z S^z s^z
\end{equation}
with the conduction-band spin ${\mathbf{s}}$ and the impurity spin ${\mathbf{S}}$. Here the
Kondo couplings are identified as
\begin{equation}
J_\perp
= \sqrt8\Delta_d
\end{equation}
and
\begin{equation}
\sqrt2
\left[1 - \frac{2}{\pi} \tan^{-1}\frac{\pi\rho J_z}{4}\right]
= \gamma
:= 1 + \frac{2}{\pi} \tan^{-1}\frac{U}{\Gamma_F}.
\end{equation}
For sufficiently large $\Gamma_F$ compared to $U$, this Kondo model is antiferromagnetic $(J_\perp, J_z >0)$, and the effective Kondo temperature associated with the screening of the magnetic moment is, from the known results on the Kondo model,
\begin{equation}
\label{eq:tk:boson}
T_K^\mathrm{boson}
\sim \frac{\Gamma_F}{2} \left(\frac{2\Delta_d}{\Gamma_F}\right)^{\frac{2}{2-\gamma^2}}.
\end{equation}
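The identifications above can be put together in a short numerical sketch (not part of the original analysis); the parameter values are arbitrary illustrative choices in the regime $\Gamma_F \gg U$, with the density of states $\rho$ set to 1.

```python
import math

Gamma_F, U, Delta_d, rho = 1.0, 0.1, 0.2, 1.0   # illustrative values

# gamma = 1 + (2/pi) arctan(U/Gamma_F)
gamma = 1.0 + (2.0 / math.pi) * math.atan(U / Gamma_F)

# Invert sqrt(2) [1 - (2/pi) arctan(pi rho Jz / 4)] = gamma for Jz:
Jz = (4.0 / (math.pi * rho)) * math.tan((math.pi / 2.0) * (1.0 - gamma / math.sqrt(2.0)))
Jperp = math.sqrt(8.0) * Delta_d

# Antiferromagnetic couplings for Gamma_F sufficiently large compared to U:
assert Jz > 0.0 and Jperp > 0.0

# Effective Kondo temperature from the bosonization mapping:
T_K = (Gamma_F / 2.0) * (2.0 * Delta_d / Gamma_F) ** (2.0 / (2.0 - gamma ** 2))
assert 0.0 < T_K < Gamma_F
```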
As is clear from the bosonization procedure, the anisotropic Kondo model
essentially corresponds to the so-called `charge Kondo effect' with the excess
charge on the QD playing the role of the pseudo-spin
\cite{Matveev91a,Iftikhar15a}. More specifically, the charging of $d_\down$
level is mapped onto the pseudo-spin of the Kondo impurity.
Considering that the ferromagnetic lead in our original model has only a single
spin component, this Kondo model should be defined in particle-hole isospin
space of both the dot and the lead. Then, the spin-flip scattering in the
effective Kondo model can be interpreted as the particle-hole scattering in our
original model. For example, a particle injected into the lead is scattered
into a hole, accompanied by an inversion of the occupation of the $d_\down$
level. Since a change in the occupation of the $d_\down$ level is only
possible via the pair tunneling to the superconducting lead, the Kondo
correlation implies that the currents in the ferromagnetic and
superconducting leads are highly correlated.
Here it should be noted that the interpretation based on the bosonization is valid only in the large-$\Gamma_F$ limit because the bosonization procedure requires the unbounded momentum (or dispersion) of a continuum band (whose band width is $\Gamma_F$ in our case) which is to be bosonized. Hence, the mapping to the anisotropic Kondo model cannot be justified in general; in this respect our parameter regime and interpretation are different from those of Ref.~\cite{Zitko2011nov}, where the singlet and doublet phases and the phase transition between them are explained in terms of the effective Kondo model.
One piece of evidence for the limitation of the bosonization comes from a comparison between the width of the central peak of $A_\down(\omega)$, which is $T_K$ in the S$_\mathrm{K}$ regime, and the effective Kondo temperature, \eqnref{eq:tk:boson}, predicted from the bosonization [see \figref{fig:Kondo}~(d)]. The two energy scales are in good agreement with each other for $\Gamma_F/U > 1$, as expected. However, for $\Gamma_F/U \lesssim 1$, there is a large discrepancy between them. In addition, the expression~\eqref{eq:tk:boson} fails close to the transition point. This indicates that the region of the singlet phase with small $\Gamma_F$ is not a Kondo state but a mixed-valence state, as discussed in the previous section.
\section{Possible Experiments}
\label{sec:experiments}
Up to now, we have elucidated the physical nature of the two phases and, in
particular, classified the different regimes in the singlet phase, mostly based
on the dot spectral densities. One remaining question is how to make a
distinction between the different regimes in experiment. Here we suggest three possible experimental probes: spin-selective tunneling
microscopy, the current correlation between the leads, and the dynamical
response to an ac gate voltage.
The characteristics of the different phases and regimes are well reflected in the spin-dependent spectral density, which can be measured by spin-selective tunneling microscopy applied directly to the quantum dot. This corresponds to adding an auxiliary ferromagnetic lead very weakly connected to the quantum dot and measuring the differential conductance through it. By altering the polarization of the auxiliary ferromagnetic lead, one can measure the spectral density of the quantum dot for each spin and thereby identify the different phases.
Secondly, as explained in \secref{paper::sec:4.5}, the Kondo scattering in the S$_\mathrm{K}$ regime correlates the currents in the ferromagnetic and superconducting leads, resulting in a nontrivial cross-current correlation which can be measured in experiment. Obviously, the average current from the fully polarized ferromagnetic lead to the superconducting lead is still zero in the presence of the interacting quantum dot because there is no influx of spin-$\down$ electrons from the ferromagnetic lead. However, unlike in previous works on similar systems \cite{deJong1995feb,Cao2004dec}, the strong interaction in our system makes the currents correlated, even though they are zero on average.
To be sure, this cross-current correlation should also appear in the other regimes of the singlet phase, as can be inferred from the fact that the regimes are separated by crossovers rather than sharp transitions and that they feature similar spectral densities. However, in the S$_\mathrm{K}$ regime the current correlation is maximized by the enhanced particle-hole scattering due to the Kondo correlation. Therefore, we expect the amplitude of the current correlation to increase and saturate as one moves toward the S$_\mathrm{K}$ regime.
Experimentally, the current correlation should be measured under finite bias because the dc current correlation strictly vanishes at zero bias and the equilibrium low-frequency feature of the correlation is hard to measure in experiment due to decoherence effects. The calculation of the current correlation at large bias is beyond the scope of this work, so we have described this method only qualitatively.
The third experimental proposal, which is expected to identify all the phases
and regimes unambiguously, is to measure the charge relaxation resistance in
the zero-frequency limit (a current response to an ac gate
voltage). \Figref{fig:rq} shows the dependence of the zero-frequency
relaxation resistance $R_q(\omega\to0)$ on $\Delta_d$ and $\Gamma_F$.
First, it diverges in the spin doublet regime. Physically, the relaxation resistance is related to the dissipation via the charge
relaxation process of the particle-hole pairs in the lead
\cite{LeeMC2011may}. In the doublet regime, the spin-$\down$ level in the dot
is effectively decoupled from the rest of the system and is on resonance, which is
the reason for the two-fold degeneracy \cite{Zitko2011nov}. This resonance
condition enhances the generation of the particle-hole pairs greatly (indefinitely,
in the perturbative sense) \cite{LeeMC2014aug}, leading to the
diverging value.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{Fig8a_rq}
\includegraphics[width=0.8\columnwidth]{Fig8b_rq}
\caption{Zero-frequency relaxation resistance $R_q(\omega\to0)$ (a) as
functions of $\Delta_d$ at $\Gamma_F/U=0.1$ along the $bb'$ line in
\figref{fig:pd} and (b) as functions of $\Gamma_F/U$ at $\Delta_d/U=0.125$
along the $aa'$ line in \figref{fig:pd}.}
\label{fig:rq}
\end{figure}
In contrast, the resistance vanishes in the S$_\mathrm{S}$ regime. In the
presence of the superconductivity, the particle-hole pairs can be generated via
two processes: one is the charge-conserving type ($c_{k\up}^\dag f_\Up$ in
\eqnref{paper::eq:8}) and the other is the pair-tunneling type
($c_{k\up}^\dag f_\Down^\dag$). The particle-hole pair amplitudes of the two
processes are opposite in sign due to the fermion ordering
\cite{LeeMC2014aug}. Moreover, the cancellation is exact in the zero-frequency
limit of the particle-hole pairs because the weights from the intermediate
virtual states are the same for the two processes in this limit.
On the other hand, $R_q$ is observed to saturate toward $h/2e^2$ in the
S$_\mathrm{K}$ regime. For a single-channel Fermi-liquid system, the relaxation
resistance is known to have the universal value $h/2e^2$
\cite{Buttiker1993jun,Buttiker1993sep}, and for the conventional Anderson
impurity model in the Kondo regime the resistance becomes $h/4e^2$ since there
are two spin channels which behave like a composite of two parallel resistors
of resistance $h/2e^2$ \cite{LeeMC2011may}. While our system features the
Kondo correlation in this regime, the resistance is $h/2e^2$ because there is
only a single channel to generate the particle-hole pairs.
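The channel counting above amounts to simple parallel-resistor arithmetic; a minimal check of the two quoted values, in units of $h/e^2$:

```python
# Universal charge relaxation resistance per Fermi-liquid channel: h/2e^2.
# With n identical channels in parallel, R_total = (h/2e^2) / n.
def R_q(n_channels, r_channel=0.5):   # resistances in units of h/e^2
    return r_channel / n_channels

assert R_q(1) == 0.5    # single channel (our model): h/2e^2
assert R_q(2) == 0.25   # two spin channels (conventional Anderson model): h/4e^2
```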
Finally, in the S$_\mathrm{M}$ regime, $R_q$ is finite but strongly depends on
the values of the parameters: it changes continuously from $R_q=\infty$ to the
saturation values, as seen in \figref{fig:rq}. It is known
\cite{Buttiker1993jun,Buttiker1993sep} that a small mesoscopic RC circuit with a
single channel should have the universal value $R_q = h/2e^2$ at zero
temperature as long as it is in a Fermi-liquid state. The non-universal value
of $R_q$ in the S$_\mathrm{M}$ regime therefore indicates that the system is in a
non-Fermi-liquid state, which makes it distinct from the S$_\mathrm{K}$
regime. The microscopic origin of the non-universal value of $R_q$ is
explained by the fact that the two opposite effects discussed above are
partially operative simultaneously: the enhancement of the particle-hole
generation due to the high density of states of spin-$\down$ at the Fermi level
(near the spin doublet phase) and the cancellation between the
charge-conserving and pairing processes (near the S$_\mathrm{S}$ regime). The
relative strength of the two effects naturally depends on the values of the
parameters.
\section{Conclusion}
\label{sec:conclusion}
Using the NRG method, we have studied the triad interplay of
superconductivity, ferromagnetism, and the Kondo effect in a QD
coupled to both a superconducting and a spin-polarized electrode, as shown
schematically in Fig.~\ref{fig:system}~(a).
We have found that unlike the pairwise competition among the three effects,
the triad interplay is ``cooperative'' and leads to a mixed-valence quantum
phase transition between doublet and singlet states. The singlet phase is in
many respects similar to the mixed-valence state, but is connected adiabatically
through crossovers either to the superconducting state in the limit of strong
coupling to the superconductor or to the charge Kondo state in the limit of
strong coupling to the spin-polarized lead.
Physical explanations and interpretations based on analytic methods such as
bosonization, scaling theory, and variational method have been provided.
Finally, we have proposed experimental methods such as spin-selective
tunneling microscopy and measurements of the cross-current correlation and the
charge relaxation resistance in order to distinguish the different phases and
regimes.
Even though our study has identified the key characteristics of the
ferromagnet-quantum dot-superconductor system, it still leaves much room for
further study.
First, one can lift the particle-hole symmetry condition used in this
work. Then the ferromagnetic proximity effect induces an
effective Zeeman splitting (or exchange field), which would form subgap
states in the dot. Moreover, the breaking of the particle-hole symmetry for
the spin-$\down$ level is expected to induce an effective Zeeman field for the
Kondo model in the S$_\mathrm{K}$ regime, shifting the
phase boundaries \cite{endnote:1}.
Secondly, the strong-superconductivity condition
($\Delta_0 \gg U, \Gamma_S, \Gamma_F$) can also be lifted so that the spin-Kondo-dominated state ($T_K > \Delta_0$) can arise. Then the S$_\mathrm{S}$
regime will be replaced by the Kondo state. In this case, one may observe an
interesting crossover from the spin Kondo state to the charge Kondo state.
Finally, the study can go beyond the equilibrium case by applying a finite
bias which is still below the superconducting gap. As discussed in
\secref{sec:experiments}, the calculation of the cross-current correlation at
finite bias is important for experimental verification. Although the
non-equilibrium problem in the presence of strong interactions is
challenging, it is worthwhile from the experimental point of view.
\section*{Acknowledgments}
This work was supported by the National Research
Foundation (Grant Nos.~2011-0030046, NRF-2017R1E1A1A03070681, and 2018R1A4A1024157)
and the Ministry of Education (through the BK21 Plus Project) of Korea.
\bigskip\bigskip
\section{Introduction}
In the quantum spin-Hall effect, the spin of the electron is locked to its direction of propagation \cite{kane2005quantum}. The $Z_2$ index, or spin Chern number, a topological invariant of the given quantum system, is defined to determine whether a spin Hall conductance exists on the edge of the bulk material \cite{kane2005z,sheng2006quantum}. After the introduction of the $Z_2$ topological index, a variety of unidirectional edge modes in quantum systems were discovered \cite{konig2008quantum,kuemmeth2008coupling,soumyanarayanan2016emergent}. By analogy with the quantum spin-Hall effect of electrons, spin-momentum locking phenomena can also be found in photonic topological insulators \cite{khanikaev2013photonic,wu2015scheme,ma2017scattering,ozawa2019topological,hafezi2011robust}. The direction of propagation still defines the `momentum' of the light, while the concept of `spin' is not as clear-cut as that of the electron. It may refer to the bonding (antibonding) states of electric and magnetic fields \cite{khanikaev2013photonic}, left-hand (right-hand) circular polarizations of electric fields \cite{wu2015scheme}, or clockwise (anticlockwise) circulations of coupled resonator optical waveguides \cite{hafezi2011robust}.
Spin-momentum locked edge modes can also be found in trivial optical systems without topological properties, such as photonic crystal waveguides \cite{sollner2015deterministic,coles2016chirality,young2015polarization}, surface plasmon polaritons \cite{van2016universal,rodriguez2013near}, and even dielectric waveguides \cite{rodriguez2013near}. A pair of orthogonal dipoles with a $\pm\pi/2$ phase difference, representing opposite spin directions, is used to excite the unidirectional edge modes in these systems. The spin of the dipole source couples to the spin of the evanescent waves near the edges, giving rise to the spin-momentum locked edge modes.
An anti-phase boundary is created by shifting the crystal by one-half period along the propagation direction. It can be observed in electronic systems and can be treated as a defect in the crystal that breaks the translation symmetry \cite{cohen2002structure,ahn2005electronic}. Accurate atomic manipulation is required to design anti-phase boundaries in electronic systems \cite{wang2018designing,wei2014ferroelectric}. It is easier to design an anti-phase boundary in a photonic system, which may help us gain a deeper understanding of how the energy is distributed near the boundary.
In this paper, we create an anti-phase boundary in a photonic crystal structure by shifting the structure along the direction of periodicity. Unidirectional propagation of the edge modes is discovered. To the best of the authors' knowledge, spin-momentum locked edge modes have not previously been found on anti-phase boundaries in quantum or optical systems. This finding not only suggests the possible existence of propagating edge modes along anti-phase boundaries in quantum systems, but also provides a new way to design chiral waveguides in photonic crystal structures.
\begin{figure}
\centering
\subfloat[]{{\includegraphics[width=.2\textwidth]{fig1a}}\label{fig1a}}
\subfloat[]{{\includegraphics[width=.5\textwidth]{fig1b}}\label{fig1b}}
\caption{\label{fig1} (a) Unit cell of photonic crystal with $d$ the diameter of cylinders, $a_0$ the length of diamond edge, and $R$ the distance between the center of the diamond and the center of the cylinders. $\varepsilon_d$ and $\varepsilon_A$ are the relative permittivities of the cylinders and surrounding environment respectively. (b) Anti-phase boundary (red dashed line) formed by shifting the photonic crystal along $\protect\overrightarrow{a}_2$ by one-half period $t=-a_0/2$ where $\protect\overrightarrow{a}_1$ and $\protect\overrightarrow{a}_2$ are lattice vectors of the crystal. The angle between $\protect\overrightarrow{a}_1$ and $\protect\overrightarrow{a}_2$ is $\pi/3$. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{fig2}
\caption{\label{fig2} The red hexagons are unit cells of the crystal for $R=a_0/3$ while $\protect\overrightarrow{a}_1^{'}$ and $\protect\overrightarrow{a}_2^{'}$ are lattice vectors. }
\end{figure}
\section{Spin-momentum locked modes}
\begin{figure}
\centering
\subfloat[]{{\includegraphics[width=.45\textwidth]{fig3a}}\label{fig3a}}
\subfloat[]{{\includegraphics[width=.55\textwidth]{fig3b}}\label{fig3b}}
\subfloat[]{{\includegraphics[width=.4\textwidth]{fig3c}}\label{fig3c}}
\subfloat[]{{\includegraphics[width=.65\textwidth]{fig3d}}\label{fig3d}}
\caption{\label{fig3} (a) Dispersion relation of the super-cell, which is periodic in the $\protect\overrightarrow{a}_2$ direction and contains 8 unit cells on each side of the anti-phase boundary in the $\protect\overrightarrow{a}_1$ direction. The label $k_2$ denotes the projection of the k vector onto $\protect\overrightarrow{a}_2/|\protect\overrightarrow{a}_2|$. The green-shaded region is the projected band diagram of the bulk modes. Red and blue lines represent the odd modes and even modes, respectively. The diameter of the cylinders and the distance between the cylinder centers and the diamond center are $d=0.24a_0$ and $R=0.345a_0$. The relative permittivities are $\varepsilon_d=11.7$ and $\varepsilon_A=1$. (b) Real part of the $E_z$ distributions at points $P_1$, $P_2$ and $P_3$ as shown in (a). The black arrows indicate the Poynting vectors time-averaged over a period. (c) Real part distributions of $E_z$ of magnetic dipoles $(\hat{x}-i\hat{y})/\sqrt{2}$ (left) and $(\hat{x}+i\hat{y})/\sqrt{2}$ (right). The red arrows represent the time-averaged Poynting vectors. (d) $|E_z|$ plotted for the driven modes excited by magnetic dipoles $(\hat{x}-i\hat{y})/\sqrt{2}$ (left) and $(\hat{x}+i\hat{y})/\sqrt{2}$ (right), respectively. The yellow arrow indicates the location of the source, which is at the center of the unit cell. The normalized frequency of the source is chosen to be $f_0a_0/c=0.46$. }
\end{figure}
\begin{figure}
\centering
\subfloat[]{{\includegraphics[width=.45\textwidth]{fig4a}}\label{fig4a}}
\subfloat[]{{\includegraphics[width=.55\textwidth]{fig4b}}\label{fig4b}}
\subfloat[]{{\includegraphics[width=.65\textwidth]{fig4c}}\label{fig4c}}
\caption{\label{fig4} (a) Dispersion relation of the super-cell when $R=0.3a_0$. Red and blue lines represent the odd modes and even modes respectively. (b) Real part of $E_z$ distributions at points $P_1$, $P_2$ and $P_3$ as shown in (a). (c) $|E_z|$ are plotted for the driven modes excited by magnetic dipoles $(\hat{x}-i\hat{y})/\sqrt{2}$ (left) and $(\hat{x}+i\hat{y})/\sqrt{2}$ (right) respectively. The normalized frequency of the source is chosen to be $f_0a_0/c=0.473$. }
\end{figure}
\begin{figure}
\centering
\subfloat[]{{\includegraphics[width=.5\textwidth]{fig5a}}\label{fig5a}}
\subfloat[]{{\includegraphics[width=.475\textwidth]{fig5b}}\label{fig5b}}
\subfloat[]{{\includegraphics[width=.65\textwidth]{fig5c}}\label{fig5c}}
\subfloat[]{{\includegraphics[width=.65\textwidth]{fig5d}}\label{fig5d}}
\caption{\label{fig5} Dispersion relations of the super-cells with (a) $R=0.345a_0$ and (b) $R=0.3a_0$ when tuning the offset $t$ in units of $a_0$. $|E_z|$ distributions are plotted for the edge modes with $R=0.345a_0$ when (c) $k_2a_0/2\pi=0$ and (d) $k_2a_0/2\pi=0.1$. }
\end{figure}
As shown in Fig.~\ref{fig1}, an anti-phase boundary is created by shifting the photonic crystal along the direction $\overrightarrow{a}_2$ by $-a_0/2$, which is one-half period. The geometry and material parameters are given in Fig.~\ref{fig1}. Here we only investigate the transverse magnetic (TM) modes of the electromagnetic waves, where only $E_z$, $H_x$, and $H_y$ are nonzero. According to Ref.~\cite{wu2015scheme}, tuning the distance between the center of the diamond and the centers of the cylinders, $R$, changes the topological properties of the crystal. When $R<a_0/3$, the structure behaves as a topologically trivial material with $Z_2$ index equal to zero. Band folding occurs when $R=a_0/3$ since the lattice vectors of the unit cell change into $\overrightarrow{a}_1^{'}=-\overrightarrow{a}_1/3+2\overrightarrow{a}_2/3$ and $\overrightarrow{a}_2^{'}=\overrightarrow{a}_1/3+\overrightarrow{a}_2/3$ as shown in Fig.~\ref{fig2}. The size of the unit cell shrinks while the Brillouin zone expands. If the original Brillouin zone ($R\neq a_0/3$) is chosen, the bands of the expanded Brillouin zone ($R=a_0/3$) must be folded to fit in the original one, which leads to the creation of a Dirac cone at the $\Gamma$ point. Further increasing $R$ opens the band gap at the $\Gamma$ point and turns the trivial crystal into a topological insulator with nonzero $Z_2$ index. Band inversion happens at the $\Gamma$ point when $R>a_0/3$, with dipole modes in the higher band and quadrupole modes in the lower band. Unidirectional edge modes can be found at the boundary between the topological insulator ($R>a_0/3$) and trivial crystal ($R<a_0/3$).
Here we place the topological insulator with $R>a_0/3$ on both sides of the boundary as shown in Fig.~\ref{fig1b}. However, the topological properties of the photonic crystal cannot explain the edge modes discovered on the anti-phase boundary, since shifting changes neither the band diagram nor the $Z_2$ index of the crystal. As shown in Fig.~\ref{fig3a}, the odd edge modes (anti-symmetric distributions) and the even edge modes (symmetric distributions) arise from the mirror symmetry of the super-cell. The field distributions of the edge modes calculated by COMSOL are given in Fig.~\ref{fig3b}. The $E_z$ distributions at points $P_1$ and $P_2$ defined in Fig.~\ref{fig3a} are the same while the Poynting vectors are in opposite directions. Here we define the counterclockwise rotation of the Poynting vectors on the left side of the anti-phase boundary as spin-up and the clockwise rotation as spin-down. By comparing $P_1$ and $P_2$ we see that edge modes with the same frequency but opposite k vectors have different spin directions. Also, by comparing the fields at $P_2$ and $P_3$, we show that edge modes with the same k vector have opposite spin directions.
In order to excite the edge modes, a circularly polarized magnetic dipole is chosen as the source in our driven-mode simulation. By observing the Poynting vectors in Fig.~\ref{fig3c}, we conclude that the magnetic dipole $(\hat{x}-i\hat{y})/\sqrt{2}$ behaves as the spin-up source while $(\hat{x}+i\hat{y})/\sqrt{2}$ behaves as the spin-down source. The excitation frequency is chosen to be inside the band gap of the bulk modes, which only excites the odd edge modes, as we can conclude from Fig.~\ref{fig3a}. We apply the spin-up source to the shifted structure to excite the spin-up edge mode at $P_1$. Since the group velocity at $P_1$ is positive, the wave will propagate along the direction $\overrightarrow{a}_2$. The simulation result shown on the left side of Fig.~\ref{fig3d} matches this theoretical prediction. Similarly, a spin-down source will excite the edge mode propagating along $-\overrightarrow{a}_2$, as shown on the right side of Fig.~\ref{fig3d}.
Tuning the parameter $R$ to $R<a_0/3$ dramatically changes the properties of the edge modes. According to Ref.~\cite{wu2015scheme}, the band gap closes and reopens at the $\Gamma$ point when tuning $R$ from $R>a_0/3$ to $R<a_0/3$. The even edge mode rises while the odd mode declines. As shown in Fig.~\ref{fig4a}, the even mode is above the odd mode inside the band gap when $R=0.3a_0$, which is opposite to the result shown in Fig.~\ref{fig3a}. If we apply the spin-up source $(\hat{x}-i\hat{y})/\sqrt{2}$ with a normalized frequency inside the band gap, it will excite the spin-up edge mode at $P_2$ as shown in Fig.~\ref{fig4b}. Since the group velocity at $P_2$ is negative, the wave will propagate along the $-\overrightarrow{a}_2$ direction, which is verified by the left part of Fig.~\ref{fig4c}. This indicates that both topological and trivial photonic systems can form anti-phase boundaries that support spin-momentum-locked edge modes. A source of the same spin excites waves with opposite propagation directions in these two photonic crystal systems.
\section{Band inversion of edge modes when tuning the offset}
By tuning the offset $t$ defined in Fig.~\ref{fig1b}, we obtain a series of dispersion relations as shown in Fig.~\ref{fig5a} and Fig.~\ref{fig5b}. Since the mirror symmetry is broken for $t\neq-0.5a_0$, we cannot define odd or even modes with respect to the mirror plane. For the trivial unit cell, varying from the anti-phase boundary with $t=-0.5a_0$ to the two-dimensional photonic crystal with $t=0$ moves the dispersion curves closer to the projected bulk band diagram. The variation of the dispersion curves for the structure consisting of topological unit cells is more complicated. As shown in Fig.~\ref{fig5a}, the two dispersion curves converge at the $\Gamma$ point and form a degenerate point at $\Gamma$ when the offset $t=-0.2085a_0$. If we continue changing $t$ from $-0.2085a_0$ to $0$, the gap between the two edge modes reopens and increases until the two curves vanish into the bulk bands.
The band inversion occurs at the $\Gamma$ point when the offset crosses over the degenerate case $t=-0.2085a_0$. As shown in Fig.~\ref{fig5c}, the $E_z$ distributions in the higher band of the case $t=-0.2a_0$ are the same as those in the lower band when $t=-0.22a_0$. When $k_2$ is sufficiently far away from the $\Gamma$ point, the field distributions in the higher and lower bands look similar for different offsets. We can conclude that only the edge modes close to the $\Gamma$ point are inverted when $-0.5a_0<t<-0.2085a_0$, which is similar to the band inversion of the bulk modes in Ref.~\cite{wu2015scheme}.
\section{Edge modes in gradual shift structure}
\begin{figure}
\centering
\subfloat[]{{\includegraphics[width=.9\textwidth]{fig6a}}\label{fig6a}}
\subfloat[]{{\includegraphics[width=.45\textwidth]{fig6b}}\label{fig6b}}
\subfloat[]{{\includegraphics[width=.55\textwidth]{fig6c}}\label{fig6c}}
\subfloat[]{{\includegraphics[width=.65\textwidth]{fig6d}}\label{fig6d}}
\caption{\label{fig6} (a) Comparison between radical and gradual shift super-cell. A shift of $t=0.05a_0$ is set between adjacent unit cells on the two sides of the boundary marked by red dashed line. The far left with $t=0.25a_0$ and the far right with $t=-0.75a_0$ have the same pattern. (b) Dispersion relation of the gradual shift super-cell when $R=0.345a_0$. (c) $|E_z|$ distributions at points $P_1$, $P_2$ and $P_3$ as shown in (b). (d) $|E_z|$ are plotted for the driven modes excited by magnetic dipoles $(\hat{x}-i\hat{y})/\sqrt{2}$ (left) and $(\hat{x}+i\hat{y})/\sqrt{2}$ (right) respectively. The normalized frequency of the source is chosen to be $f_0a_0/c=0.464$. The source is located at the center of the unit cell with $t=0$. }
\end{figure}
We can also create an anti-phase boundary by gradually shifting the unit cells on the two sides of the boundary as shown in Fig.~\ref{fig6a}. Here the unit cell with $R=0.345a_0$ is studied. We can conclude from the dispersion relations shown in Fig.~\ref{fig5a} that edge modes which decay rapidly into the bulk can only be found when the offset between adjacent unit cells is large enough. For offsets with $|t|<0.1a_0$, the dispersion curves are so close to the bulk band diagram that their energy is not well confined to the boundary. Hence an offset of $t=0.05a_0$ is chosen between adjacent unit cells on the two sides of the anti-phase boundary to prevent the appearance of redundant edge modes. The unit cells look the same if they are far enough from the boundary, which is different from the radical shift structure, where the offset difference always exists on the two sides. In this structure, there is no long-range offset between the two sides, only a local shift in the unit cells near the boundary. The dispersion relation and field distributions are shown in Fig.~\ref{fig6b} and Fig.~\ref{fig6c} respectively, which are similar to the radical shift case shown in Fig.~\ref{fig3a} and Fig.~\ref{fig3b}. The unidirectional propagation of the edge modes can also be found when we excite with sources of different spin directions, as shown in Fig.~\ref{fig6d}.
\section{Optimization of the source}
\begin{figure}
\centering
\subfloat[]{{\includegraphics[width=.5\textwidth]{fig7a}}\label{fig7a}}
\subfloat[]{{\includegraphics[width=.5\textwidth]{fig7b}}\label{fig7b}}
\caption{\label{fig7} Directionality $D$ defined in Eq.~\ref{eq:direction} is plotted as a function of $\theta$ and $\phi$ for (a) $R=0.345a_0, f_0a_0/c=0.46$ and (b) $R=0.30a_0, f_0a_0/c=0.473$. The white dots indicate the locations of the sources $(\hat{x}-i\hat{y})/\sqrt{2}$ (upper) and $(\hat{x}+i\hat{y})/\sqrt{2}$ (lower). }
\end{figure}
By optimizing the combination of two orthogonal magnetic dipoles, we can achieve edge modes with better directionality. The magnetic dipole can be defined as:
\begin{equation}
\overrightarrow{m}=\cos\theta\hat{x}+\sin\theta\exp(-i\phi)\hat{y}
\label{eq:source}
\end{equation}
where $0<\theta<\pi/2$ and $-\pi<\phi<\pi$.
The spin-up ($(\hat{x}-i\hat{y})/\sqrt{2}$) and spin-down ($(\hat{x}+i\hat{y})/\sqrt{2}$) sources mentioned above are the particular cases where $\theta$ and $\phi$ in Eq.~\ref{eq:source} are set to $\pi/4, \pi/2$ and $\pi/4, -\pi/2$ respectively.
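As a quick numerical sanity check (not part of the simulation setup), the stated $(\theta,\phi)$ values can be plugged into Eq.~\ref{eq:source}; the helper name \texttt{magnetic\_dipole} below is purely illustrative.

```python
import cmath
import math

def magnetic_dipole(theta, phi):
    """m = cos(theta) x_hat + sin(theta) exp(-i phi) y_hat (Eq. for the source)."""
    return (math.cos(theta), math.sin(theta) * cmath.exp(-1j * phi))

# spin-up source: theta = pi/4, phi = pi/2 should give (x_hat - i y_hat)/sqrt(2)
mx, my = magnetic_dipole(math.pi / 4, math.pi / 2)
print(mx, my)  # ~ (0.7071, -0.7071j)
```

Setting $\phi=-\pi/2$ instead flips the sign of the imaginary part, recovering the spin-down source $(\hat{x}+i\hat{y})/\sqrt{2}$.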
According to Ref.~\cite{petersen2014chiral}, we can also define the directionality of the edge mode by
\begin{equation}
D=\frac{c_{+}-c_{-}}{c_{+}+c_{-}}
\label{eq:direction}
\end{equation}
where $c_{+}$ ($c_{-}$) is the line integral of the Poynting vector measured on the top (bottom) of the structure as shown in Fig.~\ref{fig4c}. If $|D|$ is close to 1, the system has good directionality, while no directionality can be observed when $D=0$. As shown in Fig.~\ref{fig7}, the signs of $D$ at the locations of the spin-up and spin-down sources are opposite for $R>a_0/3$ and $R<a_0/3$, which verifies the conclusion that waves propagate in opposite directions for the same source when we tune the $R$ of the system.
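The directionality of Eq.~\ref{eq:direction} is a one-line post-processing step once the two flux integrals are available; the sketch below assumes hypothetical precomputed values \texttt{c\_plus} and \texttt{c\_minus} and is not tied to any particular solver.

```python
def directionality(c_plus, c_minus):
    """D = (c+ - c-) / (c+ + c-): +1 means all power exits through the top
    port, -1 through the bottom port, 0 means no preferred direction."""
    return (c_plus - c_minus) / (c_plus + c_minus)

print(directionality(1.0, 0.0))   # 1.0  (perfectly unidirectional, upward)
print(directionality(0.5, 0.5))   # 0.0  (no directionality)
```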
\section{Conclusion}
Spin-momentum-locked edge modes are discovered on anti-phase boundaries formed by shifting two halves of a photonic crystal along the direction of periodicity. By applying magnetic dipole sources with different spin directions, we can excite edge modes propagating in opposite directions. The inversion of the edge modes is revealed when we adjust the distance between the center of the unit cell and the cylinders, which leads to opposite propagation directions for the same source. Also, tuning the offset of the unit cells on the two sides can cause band inversion of the edge modes for the topologically non-trivial photonic crystal system. Optimization of the source gives the edge modes better directionality and helps us further understand the system, making it more practical for unidirectional wave-propagation applications.
\section*{Acknowledgments}
This work was supported in part by Air Force Office of Scientific Research Grant No. FA9550-16-1-0093 and in part by the China Scholarship Council (No. 201706230113). The authors acknowledge
discussions with D. Bisharat.
\bibliographystyle{unsrt}
\section{Lipschitz Continuity of the Sinkhorn Potential}
In this section, we provide several lemmas to show the Lipschitz continuity (w.r.t. the underlying probability measures) of the Sinkhorn potentials and the functional gradients we derived in Proposition \ref{proposition_variation_I}.
These lemmas will be used in the convergence analysis and the mean field analysis for \sd.
\subsection{Lipschitz Continuity Study: Sinkhorn Potentials}
We first show the Lipschitz continuity of the Sinkhorn potential w.r.t. the bounded Lipschitz norm of the input measures.
The bounded Lipschitz metric of measures $d_{bl}:\mathcal{M}_1^+(\mathcal{X})\times \mathcal{M}_1^+(\mathcal{X})\rightarrow\mathbb{R}_+$ with respect to the bounded continuous test functions is defined as
\begin{equation*}
d_{bl}(\alpha, \beta) {:=} \sup_{\|\xi\|_{bl}\leq 1} |\langle \xi, \alpha\rangle - \langle \xi, \beta \rangle|,
\end{equation*}
where, given a function $\xi\in\mathcal{C}(\mathcal{X})$, we denote
\begin{align*}
\|\xi\|_{bl} {:=} \max\{\|\xi\|_\infty, \|\xi\|_{lip} \}, \quad \text{with } \|\xi\|_{lip}{:=} \sup_{x\neq y\in\mathcal{X}}\frac{|\xi(x)-\xi(y)|}{\|x-y\|}.
\end{align*}
We note that $d_{bl}$ metrizes the weak convergence of probability measures (see Theorem 1.12.4 in \cite{van1996weak}), i.e. for a sequence of probability measures $\{\alpha_n\}$, $$\lim_{n\rightarrow\infty}d_{bl}(\alpha_n, \alpha) = 0 \Leftrightarrow \alpha_n \rightharpoonup \alpha.$$
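For intuition, $d_{bl}$ between discrete measures can be lower-bounded by evaluating the supremum over a restricted family of admissible test functions. The sketch below (all names hypothetical) uses clipped linear functions $\xi(x)=\mathrm{clip}(s(x-a),-1,1)$ with $|s|\leq 1$ on one-dimensional discrete measures, so every candidate satisfies $\|\xi\|_{bl}\leq 1$; it yields only a lower bound, not the exact metric.

```python
def clip(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def dbl_lower_bound(alpha, beta, slopes=(-1.0, -0.5, 0.5, 1.0)):
    """Lower bound on d_bl(alpha, beta) for discrete 1-D measures.

    alpha, beta: dicts {support point: weight}. Test functions are
    xi(x) = clip(s * (x - a)) with |s| <= 1, hence ||xi||_bl <= 1.
    """
    anchors = sorted(set(alpha) | set(beta))
    best = 0.0
    for s in slopes:
        for a in anchors:
            val = sum(w * clip(s * (x - a)) for x, w in alpha.items()) \
                - sum(w * clip(s * (x - a)) for x, w in beta.items())
            best = max(best, abs(val))
    return best

# Dirac masses: d_bl(delta_0, delta_1) = 1 and d_bl(delta_0, delta_0.5) = 0.5,
# and this restricted family already attains both values.
print(dbl_lower_bound({0.0: 1.0}, {1.0: 1.0}))   # 1.0
print(dbl_lower_bound({0.0: 1.0}, {0.5: 1.0}))   # 0.5
```

For Dirac masses the bound is tight: the Lipschitz constraint caps $\xi(0)-\xi(t)$ at $\min(2, t)$, which the clipped linear family achieves.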
\begin{lemma} \label{lemma_Lipschitz_continuity_of_variation}
(i) Under Assumptions \ref{ass_bounded_c} and \ref{ass_bounded_infty_c_gradient}, for two given pairs of measures $(\alpha, \beta)$ and $(\alpha',\beta')$, the Sinkhorn potentials are Lipschitz continuous with respect to the bounded Lipschitz metric:
\begin{align*}
\|f_{\alpha, \beta} - f_{\alpha', \beta'}\|_\infty \leq G_{bl} [d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)], \\
\|g_{\alpha, \beta} - g_{\alpha', \beta'}\|_\infty \leq G_{bl} [d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)].
\end{align*}
where $G_{bl} = {2\gamma\exp(2M_c/\gamma)G'_{bl}}/{(1-\lambda^2)}$ with $G'_{bl} = \max\{\exp(3M_c/\gamma), {2G_c\exp(3M_c/\gamma)}/{\gamma}\}$ and $\lambda = \frac{\exp(M_c/\gamma) - 1}{\exp(M_c/\gamma) + 1}$.\\
(ii) If $(\alpha',\beta')$ are of the particular form $\alpha'={T_\phi}_\sharp\alpha$ and $\beta' = \beta$, where $T_\phi(x) = x + \phi(x)$ with $\phi\in\mathcal{H}^d$, we further have that
the Sinkhorn potentials are Lipschitz continuous with respect to the mapping $\phi$. That is, letting $G_T {:=} {2 G_c\exp(3M_c/\gamma)}/{\gamma}$, we have
\begin{align*}
\|f_{T_\sharp\alpha, \beta} - f_{\alpha, \beta}\|_\infty \leq G_{T}\|\phi\|_{2, \infty}, \\
\|g_{T_\sharp\alpha, \beta} - g_{\alpha, \beta}\|_\infty \leq G_{T}\|\phi\|_{2, \infty}.
\end{align*}
\end{lemma}
Please see the proof in Appendix \ref{proof_lemma_Lipschitz_continuity_of_variation}.
Importantly, this lemma implies that the weak convergence of $(\alpha,\beta)$ ensures the convergence of the Sinkhorn potential: $(\alpha', \beta')\rightharpoonup(\alpha, \beta)\Rightarrow (f_{\alpha',\beta'}\rightarrow f_{\alpha,\beta})$ in terms of the $L^\infty$ norm.
\begin{remark}
While we acknowledge that the factor $\exp(1/\gamma)$ is non-ideal, such a quantity constantly appears in the literature related to the Sinkhorn divergence, e.g. Theorem 5 in \cite{NIPS2019_9130} and Theorem 3 in \cite{pmlr-v89-genevay19a}. It would be an interesting future work to remove this factor.
\end{remark}
\begin{remark}
We note that Lemma \ref{lemma_Lipschitz_continuity_of_variation} is strictly stronger than pre-existing results:
(1) Proposition 13 of \cite{feydy2019interpolating} only shows that the dual potentials are continuous (not Lipschitz continuous) with the input measures, which is insufficient for the mean field limit analysis conducted in Section \ref{section_mean_field_limit}.
(2) Under the infinity norm $\|\cdot\|_\infty$, \citet{NIPS2019_9130} bound the variation of the Sinkhorn potential by the total variation distance of probability measures $(\alpha, \beta)$ and $(\alpha',\beta')$.
Such result means that strong convergence of $(\alpha,\beta)$ implies the convergence of the corresponding Sinkhorn potential.
This is strictly weaker than (i) of Lemma \ref{lemma_Lipschitz_continuity_of_variation}.
(3) Further, to prove the weak convergence of the corresponding Sinkhorn potential, Proposition E.5 of the above work \cite{NIPS2019_9130} requires the cost function $c \in \mathcal{C}^{s+1}$ with $s>d/2$, where $d$ is the problem dimension.
However, Lemma \ref{lemma_Lipschitz_continuity_of_variation} only assumes $c \in \mathcal{C}^{1}$, independent of $d$.
Hence, Lemma \ref{lemma_Lipschitz_continuity_of_variation} makes a good contribution over existing results.
\end{remark}
The continuity results in Lemma \ref{lemma_Lipschitz_continuity_of_variation} can be further extended to the gradient of the Sinkhorn potentials.
\begin{lemma} \label{lemma_Lipschitz_continuity_of_variation_gradient}
(i) Under Assumptions \ref{ass_bounded_c} and \ref{ass_bounded_infty_c_gradient}, for two given pairs of measures $(\alpha, \beta)$ and $(\alpha',\beta')$, with $G_{bl} [d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)] \leq 1$, the gradient of the Sinkhorn potentials are locally Lipschitz continuous with respect to the bounded Lipschitz metric: With $L_{bl} = 2G_cG_{bl}$,
\begin{align*}
\|\nabla f_{\alpha, \beta} - \nabla f_{\alpha', \beta'}\|_\infty \leq L_{bl} [d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)], \\
\|\nabla g_{\alpha, \beta} - \nabla g_{\alpha', \beta'}\|_\infty \leq L_{bl} [d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)].
\end{align*}
(ii) If $(\alpha',\beta')$ are of the particular form $\alpha'={T_\phi}_\sharp\alpha$ and $\beta' = \beta$ where $T_\phi(x) = x + \phi(x)$ for $\phi\in\mathcal{H}^d$, we further have that
the Sinkhorn potentials are Lipschitz continuous with respect to the mapping $\phi$: Let $G_T {:=} {2 G_c\exp(3M_c/\gamma)}/{\gamma}$ and assume $2 G_T\|\phi\|_{2,\infty}\leq 1$. We have with $L_{T} = 2G_cG_T$
\begin{align*}
\|\nabla f_{T_\sharp\alpha, \beta} - \nabla f_{\alpha, \beta}\|_\infty \leq L_{T}\|\phi\|_{2, \infty}, \\
\|\nabla g_{T_\sharp\alpha, \beta} - \nabla g_{\alpha, \beta}\|_\infty \leq L_{T}\|\phi\|_{2, \infty}.
\end{align*}
\end{lemma}
The proof is given in Appendix \ref{proof_lemma_Lipschitz_continuity_of_variation_gradient}. Lemmas \ref{lemma_Lipschitz_continuity_of_variation} and \ref{lemma_Lipschitz_continuity_of_variation_gradient} are crucial to the analysis of the finite-time convergence and the mean field limit of \sinkhorndescent.
\subsection{Lipschitz Continuity Study: \FDcamel}
\vspace{-.2cm}
From Definition \ref{definition_variation_rkhs}, the \FDs derived in Proposition \ref{proposition_variation_I} are functions in $\mathcal{H}^d$ mapping from $\mathcal{X}$ to $\mathbb{R}^d$. They are Lipschitz continuous provided that the kernel function $k$ is Lipschitz.
\begin{assumption} \label{ass_lipschitz_kernel}
The kernel function $k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}_+$ is Lipschitz continuous on $\mathcal{X}$: for any $y$ and $x, x'\in\mathcal{X}$
\begin{equation}
|k(x, y) - k(x', y)|\leq G_k\|x - x'\|.
\end{equation}
\end{assumption}
\begin{lemma} \label{lemma_lipschitz_variation_mapping}
Define the functional on RKHS $F[\psi]{:=}\OTgamma\big({(\mathcal{I}+\psi)}_\sharp\alpha, \beta\big)$.
Assume Assumptions \ref{ass_bounded_c}, \ref{ass_bounded_infty_c_gradient}, \ref{ass_bounded_infty_c_hessian}, and \ref{ass_lipschitz_kernel}.
The \FD $DF[0]\in\mathcal{H}^d$ is Lipschitz continuous: Denote $L_\psi = G_cG_k$. For any $x, x'\in\mathcal{X}$,
\begin{equation*}
\|D F[0](x) - D F[0](x')\|\leq L_\psi\|x - x'\|.
\end{equation*}
\end{lemma}
Using the above result, the functional gradient \eqref{eqn_gradient_sinkhorn_barycenter} can be shown to be Lipschitz continuous.
\begin{corollary} \label{corollary_lipschitz_variation_mapping}
Assume Assumptions \ref{ass_bounded_c}, \ref{ass_bounded_infty_c_gradient}, \ref{ass_bounded_infty_c_hessian}, and \ref{ass_lipschitz_kernel}.
Recall $L_\psi = G_cG_k$ from the above lemma.
The \FD $DS_{\alpha}[0]\in\mathcal{H}^d$ is Lipschitz continuous: For any $x, x'\in\mathcal{X}$,
\begin{equation*}
\|DS_{\alpha}[0](x) - DS_{\alpha}[0](x')\|\leq L_\psi\|x - x'\|.
\end{equation*}
\end{corollary}
\subsection{Last term convergence of \sd} \label{appendix_last_term_convergence}
With a slight change to \sd, we can claim its last term convergence: In each iteration, check if $\mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n)\leq \epsilon$. If it holds, then we have already identified an $\epsilon$ approximate stationary point and we terminate \sd; otherwise we proceed. The termination happens within $\mathcal{O}(1/\epsilon)$ loops as the nonnegative objective \eqref{eqn_sinkhorn_barycenter} is reduced at least $\mathcal{O}(\epsilon)$ per-round.
\section{Analysis}
In this section, we analyze the finite time convergence and the mean field limit of $\texttt{SD}$ under the following assumptions on the ground cost function $c$ and the kernel function $k$ of the RKHS $\mathcal{H}^d$.
\begin{assumption}\label{ass_c}
The ground cost function $c(x, y)$ is bounded, i.e. $\forall x,y \in \mathcal{X}, c(x, y) \leq M_c$; $G_c$-Lipschitz continuous, i.e. $\forall x, x', y \in \mathcal{X}, |c(x, y) - c(x', y)|\leq G_c\|x - x'\|$; and $L_c$-Lipschitz smooth, i.e. $\forall x, x', y \in \mathcal{X}, \|\nabla_1 c(x, y) - \nabla_1 c(x', y)\|\leq L_c\|x - x'\|$.
\end{assumption}
\begin{assumption}\label{ass_k}
The kernel function $k(x, y)$ is bounded, i.e. $\forall x,y \in \mathcal{X}, k(x, y) \leq D_k$; $G_k$-Lipschitz continuous, i.e. $\forall x, x', y \in \mathcal{X}, |k(x, y) - k(x', y)|\leq G_k\|x - x'\|$.
\end{assumption}
\subsection{Finite Time Convergence Analysis}
In this section, we prove that \sinkhorndescent converges to a stationary point of problem \eqref{eqn_sinkhorn_barycenter} at the rate of $\mathcal{O}(\frac{1}{t})$, where $t$ is the number of iterations.
We first introduce a discrepancy quantity.
\begin{definition}\label{definition_KSBD}
Recall the definition of the functional $\mathcal{S}_{\alpha}$ in \eqref{eqn_functional_per_iteration} and the definition of \FD in Definition \ref{definition_variation_rkhs}.
Given a probability measure $\alpha\in\mathcal{M}_1^+(\mathcal{X})$, the { Kernelized Sinkhorn Barycenter Discrepancy} (KSBD) for the Sinkhorn barycenter problem is defined as
\begin{align}\label{ExtraWang1}
\mathbf{S}(\alpha, \{\beta_i\}_{i=1}^n) {:=} \|D\mathcal{S}_{\alpha}[0]\|^2_{\mathcal{H}^d}.
\end{align}
\end{definition}
Note that in each round $t$, $\mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n)$ metrizes the stationarity of \texttt{SD}, which can be used to quantify the per-iteration improvement.
\begin{lemma}[Sufficient Descent] \label{lemma_sufficient_descent}
Recall the definition of the Sinkhorn Barycenter problem in \eqref{eqn_sinkhorn_barycenter} and the sequence of measures $\{\alpha^t\}_{t\geq 0}$
in \eqref{eqn_sequence_of_measures} generated by \texttt{SD} (Algorithm \ref{alg:Sinkhonr_Descent_Finite_Particles}).
Under Assumption \ref{ass_c}, if we have $\eta\leq \min\{{1}/({8L_fM_\mathcal{H}^2}), {1}/({8\sqrt{d}L_TM_\mathcal{H}^2})\}$, the Sinkhorn objective always decreases,
\begin{equation}
\mathcal{S}_{\gamma}(\alpha_{t+1}) - \mathcal{S}_{\gamma}(\alpha_{t}) \leq - {\eta}/{2}\cdot\mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n).
\end{equation}
See $M_\mathcal{H}$ in \eqref{eqn_RKHS_norm}, $L_f {:=} {4G_c^2}/{\gamma}+L_c$ and $L_T {:=} 2 G_c^2\exp(3M_c/\gamma)/\gamma$\footnote{We acknowledge the factor $\exp(1/\gamma)$ is non-ideal, but such a quantity constantly appears in the literature related to the Sinkhorn divergence, e.g. Theorem 5 in \citep{NIPS2019_9130} and Theorem 3 in \citep{genevay2019sample}. It would be an interesting future work to remove this factor.}.
\end{lemma}
The proof of the lemma is given in Appendix \ref{proof_lemma_sufficient_descent}.
Based on this result, we can derive the following convergence result demonstrating that \texttt{SD} converges to a stationary point in a sublinear rate.
\begin{theorem}[Convergence] \label{theorem_convergence}
Suppose \texttt{SD} is initialized with $\alpha^0\in\mathcal{M}_1^+(\mathcal{X})$ and outputs $\alpha^t\in\mathcal{M}_1^+(\mathcal{X})$ after $t$ iterations.
Under Assumption \ref{ass_c}, we have
\begin{equation}
\min_{t} \mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n) \leq {2\mathcal{S}_{\gamma}(\alpha^0)}/{(\eta t)},
\end{equation}
where $0<\eta\leq \min\{{1}/({8L_fM_\mathcal{H}^2}), {1}/({8\sqrt{d}L_TM_\mathcal{H}^2})\}$ is the step size.
\end{theorem}
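The rate in Theorem \ref{theorem_convergence} follows from telescoping the sufficient-descent inequality: summing $\mathcal{S}_\gamma(\alpha_{s+1}) - \mathcal{S}_\gamma(\alpha_s) \leq -(\eta/2)\,\mathbf{S}(\alpha^s,\cdot)$ over $s < t$ and using $\mathcal{S}_\gamma \geq 0$ gives $t\,(\eta/2)\min_s \mathbf{S}(\alpha^s,\cdot) \leq \mathcal{S}_\gamma(\alpha^0)$. The snippet below checks this purely arithmetic argument on a synthetic descent sequence (it does not simulate the actual \sd dynamics; all names are illustrative).

```python
import random

# Synthetic check: if S_{t+1} <= S_t - (eta/2) * g_t with S_t >= 0,
# then min_t g_t <= 2 * S_0 / (eta * T).
random.seed(0)
eta, T = 0.1, 200
S = 5.0            # nonnegative "objective" value
S0 = S
gs = []
for _ in range(T):
    g = random.uniform(0.0, 2 * S / eta)  # any g keeping S - (eta/2) g >= 0
    gs.append(g)
    S = S - 0.5 * eta * g                 # sufficient-descent step
bound = 2 * S0 / (eta * T)
print(min(gs) <= bound + 1e-12)  # True, by the telescoping argument
```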
With a slight change to \texttt{SD}, we can conclude its last term convergence as elaborated in Appendix \ref{appendix_last_term_convergence}.
\subsection{Mean Field Limit Analysis} \label{section_mean_field_limit}
While \sinkhorndescent accepts both discrete and continuous measures as initialization, in practice, we start from a discrete initial measure $\alpha_N^0$ with $|\mathbf{supp}(\alpha^0_N)| = N$.
If $\alpha_N^0$ is an empirical measure sampled from an underlying measure $\alpha^0_{\infty}$, we have the weak convergence at time $t=0$, i.e. $\alpha_N^0\rightharpoonup\alpha^0_{\infty}$ as $N\rightarrow \infty$.
The mean field limit analysis demonstrates that \sinkhorndescent preserves such weak convergence for any finite time $t$:
\begin{equation*}
\alpha_N^0\rightharpoonup\alpha^0_{\infty} \Rightarrow \alpha_N^{t} = {\texttt{SD}}^t(\alpha_N^0)\rightharpoonup\alpha_\infty^{t} = {\texttt{SD}}^t(\alpha^0_{\infty}),
\end{equation*}
where we use ${\texttt{SD}}^t$ to denote the output of \texttt{SD} after $t$ steps and use $\rightharpoonup$ to denote the weak convergence.
\begin{lemma}
Recall the push-forward mapping $\mathcal{T}[\alpha](x)$ in \texttt{SD} from \eqref{eqn_pushforward_mapping} and
recall $L_f$ in Lemma \ref{lemma_sufficient_descent}.
Under Assumptions \ref{ass_c} and \ref{ass_k}, for two probability measures $\alpha$ and $\alpha'$, we have
\begin{align}
d_{bl}(\mathcal{T}[\alpha]_\sharp \alpha, \mathcal{T}[\alpha']_\sharp \alpha') \leq (1+\eta C)d_{bl}(\alpha, \alpha'),
\end{align}
where $C = G_cG_k+\max\{d L_f D_k + dG_c G_k, {D_k L_{bl}}\}$ and $L_{bl}{:=} 8G_c^2\exp(6M_c/\gamma)$.
\label{lemma_large_N}
\end{lemma}
The proof is presented in Appendix \ref{proof_lemma_large_N}.
This is a discrete version of Dobrushin's estimate (Section 1.4 in \citep{golse2016dynamics}).
As a result, we directly have the following large-$N$ characterization of ${\texttt{SD}}^t(\alpha^0_N)$.
\begin{theorem}[Mean Field Limit]
Let $\alpha^0_N$ be an empirical initial measure with $|\mathbf{supp}(\alpha^0_N)| = N$ and let $\alpha^0_\infty$ be the underlying measure such that $\alpha_N^0 \rightharpoonup \alpha^0_{\infty}$.
Use ${\texttt{SD}}^t(\alpha_N^0)$ and ${\texttt{SD}}^t(\alpha^0_\infty)$ to denote the outputs of \texttt{SD} after $t$ iterations, under the initializations $\alpha_N^0$ and $\alpha^0_\infty$ respectively.
Under Assumptions \ref{ass_c} and \ref{ass_k}, for any finite time $t$, we have
\begin{equation*}
d_{bl}({\texttt{SD}}^t(\alpha_N^0), {\texttt{SD}}^t(\alpha^0_{\infty})) \leq (1+\eta C)^t d_{bl}(\alpha_N^0, \alpha_\infty^0),
\end{equation*}
and hence as $N\rightarrow\infty$ we have
\begin{equation}
\alpha_N^{t} = {\texttt{SD}}^t(\alpha_N^0) \rightharpoonup \alpha_\infty^{t} = {\texttt{SD}}^t(\alpha^0_{\infty}).
\end{equation}
\end{theorem}
\subsection{KSBD as Discrepancy Measure}
In this section, we show that, under additional assumptions, KSBD is a valid discrepancy measure, i.e. $\mathbf{S}(\alpha, \{\beta_i\}_{i=1}^n) = 0$ implies that $\alpha$ is a global optimal solution to the Sinkhorn barycenter problem \eqref{eqn_sinkhorn_barycenter}. The proof is provided in Appendix \ref{appendix_global_optimality}.
First, we introduce the following positivity condition.
\begin{definition}
A kernel $k(x, x')$ is said to be integrally strictly positive definite (ISPD) w.r.t. a measure $\alpha\in\mathcal{M}_1^+(\mathcal{X})$, if $\forall \xi:\mathcal{X}\rightarrow\mathbb{R}^d$ with $0<\int_{\mathcal{X}}\|\xi(x)\|^2 \mathbf{d} \alpha(x)<\infty$, it holds that
\begin{equation}
\int_{\mathcal{X}^2} \xi(x) k(x, x') \xi(x') \mathbf{d} \alpha(x)\mathbf{d} \alpha(x') >0. \label{eqn_kernel_positivity}
\end{equation}
\end{definition}
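The ISPD condition \eqref{eqn_kernel_positivity} can be probed numerically on an empirical measure: replacing the double integral by a double sum over samples gives the quadratic form $\frac{1}{m^2}\sum_{i,j}\xi(x_i)^\top k(x_i,x_j)\xi(x_j)$. The sketch below uses a Gaussian kernel (chosen for illustration; its Gram matrix on distinct points is strictly positive definite) and a random nonzero field $\xi$; all names are hypothetical.

```python
import math
import random

def gauss_k(x, y, bw=1.0):
    """Gaussian kernel on R^d."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * bw ** 2))

random.seed(1)
d, m = 2, 30
xs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]  # samples of alpha
xi = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]  # xi(x_i), nonzero

# Discrete analogue of the ISPD quadratic form, averaged over the
# empirical measure: (1/m^2) sum_ij k(x_i, x_j) <xi_i, xi_j>
q = sum(gauss_k(xs[i], xs[j]) * sum(u * v for u, v in zip(xi[i], xi[j]))
        for i in range(m) for j in range(m)) / m ** 2
print(q > 0)  # True: the form is strictly positive for nonzero xi
```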
\begin{theorem} \label{thm_optimality} Recall the \FD of the Sinkhorn Barycenter problem in \eqref{eqn_gradient_sinkhorn_barycenter} and KSBD in \eqref{ExtraWang1}.
Denote $\xi(x){:=} \frac{1}{n}\sum_{i=1}^{n} \big(\nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x)\big)$. We have $\int_{\mathcal{X}}\|\xi(x)\|^2 \mathbf{d} \alpha(x)<\infty$.\\
(i) If the kernel function $k(x, x')$ is ISPD w.r.t. $\alpha\in\mathcal{M}_1^+(\mathcal{X})$ and $\alpha$ is fully supported on $\mathcal{X}$, then the vanishing of KSBD, i.e. $\mathbf{S}(\alpha, \{\beta_i\}_{i=1}^n) = 0$, implies that $\alpha$ globally minimizes problem \eqref{eqn_sinkhorn_barycenter}.\\
(ii) Use ${\alpha}^t$ to denote the output of \sd after $t$ iterations.
If further one of the accumulation points of the sequence $\{\alpha^t\}$ is fully supported on $\mathcal{X}$, then $\lim_{t\rightarrow\infty} \mathcal{S}_{\gamma}(\alpha^t) = \mathcal{S}_{\gamma}(\alpha^*)$.
\end{theorem}
We show in Appendix \ref{appendix_fully_supported} that, under an absolutely continuous (a.c.) and fully supported (f.s.) initialization, $\alpha^t$ remains a.c. and f.s. for any finite $t$.
This leads to our assumption in (ii): one of the accumulation points of $\{\alpha^t\}$ is f.s.
However, to rigorously analyze the support of $\alpha^t$ in the asymptotic case ($t\rightarrow \infty$) requires a separate proof.
Establishing the global convergence of the functional gradient descent is known to be difficult in the literature, even for some much easier settings compared to our problem \eqref{eqn_sinkhorn_barycenter}.
For instance, \citep{mroueh2019sobolev,arbel2019maximum} prove the global convergence of their MMD descent algorithms. Both works require additional assumptions on the \emph{entire} measure sequence $\{\alpha^t\}$ as detailed in Appendix \ref{appendix_previous_work_assumption}. See also the convergence analysis of SVGD in \citep{lu2019scaling} under very strong assumptions of the score functions.
\section{Proof of Lemmas}
\subsection{Proof of Lemma \ref{lemma_lipschitz_sinkhorn_potential}} \label{proof_lemma_lipschitz_sinkhorn_potential}
For simplicity, we omit the subscript of the Sinkhorn potential $f_{\alpha,\beta}$ and simply use $f$.
Recall the definition of $h(x, y)$ in Lemma \ref{lemma_optimality_sinkhorn_potential_appendix}:
\begin{equation*}
h(x, y) = \exp\left(\frac{1}{\gamma}(f(x) + g(y) - c(x, y))\right).
\end{equation*}
Subtract the optimality condition \eqref{eqn_optimality_sinkhorn_potential_x} at different points $x$ and $x'$ to derive
\begin{align*}
\int_\mathcal{X} \big(h(x, y) - h(x',y)\big) \mathbf{d}\beta(y) = 0 \Rightarrow\\
\int_\mathcal{X} h(x', y) &\left(\exp(\frac{f(x) - f(x') - c(x, y) + c(x', y)}{\gamma})-1\right) \mathbf{d}\beta(y) = 0
\end{align*}
Since $\int_\mathcal{X} h(x', y) \mathbf{d}\beta(y) = 1$ (Lemma \ref{lemma_optimality_sinkhorn_potential_appendix}), we have
\begin{align*}
\int_\mathcal{X} h(x', y) \exp(\frac{f(x) - f(x') - (c(x, y) - c(x', y))}{\gamma}) \mathbf{d}\beta(y) = 1 \\
\Rightarrow \int_\mathcal{X} h(x', y) \exp(\frac{c(x', y) - c(x, y)}{\gamma}) \mathbf{d}\beta(y) &= \exp(\frac{f(x') - f(x)}{\gamma}).
\end{align*}
Further, since $h(x',y)\geq0$ and, from Assumption \ref{ass_bounded_infty_c_gradient}, $$\exp(\frac{c(x', y) - c(x, y)}{\gamma}) \leq \exp(\frac{|c(x', y) - c(x, y)|}{\gamma}) \leq \exp(\frac{G_c\|x' - x\|}{\gamma}),$$ we derive
\begin{equation*}
\frac{f(x') - f(x)}{\gamma} = \log\Big(\int_\mathcal{X} h(x', y) \exp\Big(\frac{c(x', y) - c(x, y)}{\gamma}\Big) \mathbf{d}\beta(y)\Big) \leq \frac{G_c\|x' - x\|}{\gamma},
\end{equation*}
by using $\int_\mathcal{X} h(x', y) \mathbf{d}\beta(y) = 1$ again; exchanging the roles of $x$ and $x'$ consequently leads to
\begin{equation*}
|f(x') - f(x)|\leq G_c\|x' - x\|.
\end{equation*}
\subsection{Proof of Lemma \ref{lemma_lipschitz_sinkhorn_potential_gradient}} \label{proof_lemma_lipschitz_sinkhorn_potential_gradient}
Recall the expression of $\nabla f$ in \eqref{eqn_sinkhorn_potential_gradient_x}:
\begin{align}
\nabla f(x) = \int_\mathcal{X} h(x, y)\nabla_x c(x, y) \mathbf{d}\beta(y),
\end{align}
where $h(x, y) {:=} \exp\left(\frac{1}{\gamma}(f_{\alpha,\beta}(x) + \mathcal{A}[f_{\alpha,\beta}, \alpha](y) - c(x, y))\right)$.
For any $x, x'\in\mathcal{X}$ such that $\|x - x'\|\leq\frac{\gamma}{2G_c}$, we bound
\begin{align*}
\|\nabla f(x) - \nabla f(x')\| =&\ \|\int_\mathcal{X} h(x, y)\nabla_x c(x, y) - h(x', y)\nabla_x c(x', y) \mathbf{d}\beta(y)\| \\
\leq&\ \int_\mathcal{X} \|h(x, y)\nabla_x c(x, y) - h(x', y)\nabla_x c(x', y)\| \mathbf{d}\beta(y)
\end{align*}
To bound the last integral, observe that
\begin{align*}
h(x, y)\nabla_x c(x, y) - h(x', y)\nabla_x c(x', y)\\
= h(x, y)\big(\nabla_x c(x, y) - \nabla_x c(x', y)\big) &+ \big(h(x, y) - h(x', y)\big)\nabla_x c(x', y),
\end{align*}
and therefore
\begin{align*}
\|h(x, y)\nabla_x c(x, y) - h(x', y)\nabla_x c(x', y)\|\\
\leq h(x, y)\|\nabla_x c(x, y) -\nabla_x c(x', y)\|& + |h(x, y) - h(x', y)|\|\nabla_x c(x', y)\|.
\end{align*}
For the first term, we use the Lipschitz continuity of $\nabla_x c$ from Assumption \ref{ass_bounded_infty_c_hessian} to bound
\begin{equation*}
h(x, y)\|\nabla_x c(x, y) -\nabla_x c(x', y)\| \leq L_c h(x, y)\|x-x'\|.
\end{equation*}
For the second term, observe that $\|\nabla_x c(x', y)\|\leq G_c$ from Assumption \ref{ass_bounded_infty_c_gradient} and
\begin{align*}
|h(x, y) - h(x', y)| =&\ h(x', y)|\exp(\frac{f(x) - f(x') - c(x, y) + c(x', y)}{\gamma}) - 1|\\
<&\ 2h(x', y)|\frac{f(x) - f(x') - c(x, y) + c(x', y)}{\gamma}|.
\end{align*}
Here $|\exp(z)-1|\leq 2|z|$ when $|z|\leq 1$, and $z {:=} \frac{f(x) - f(x') - c(x, y) + c(x', y)}{\gamma}$ satisfies $|z| \leq \frac{2G_c\|x-x'\|}{\gamma} \leq 1$ by the Lipschitz continuity of $f$ and $c$ together with the restriction on $\|x-x'\|$. We therefore derive
\begin{equation*}
|h(x, y) - h(x', y)|\leq \frac{2}{\gamma}h(x', y)\big[2G_c\|x-x'\|\big] = \frac{4G_c}{\gamma}h(x', y) \|x-x'\|,
\end{equation*}
and hence, multiplying by $\|\nabla_x c(x', y)\|\leq G_c$, the second term is at most $\frac{4G_c^2}{\gamma}h(x', y) \|x-x'\|$.
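The elementary inequality $|\exp(z)-1|\leq 2|z|$ for $|z|\leq 1$ invoked here is easy to confirm numerically; the supremum of the ratio on $[-1,1]$ is $e-1\approx 1.72$, attained at $z=1$ (a quick check, independent of the rest of the proof):

```python
import numpy as np

# ratio |exp(z) - 1| / |z| on [-1, 1] excluding z = 0; its supremum is e - 1 < 2
z = np.linspace(-1.0, 1.0, 200001)
z = z[np.abs(z) > 1e-9]
max_ratio = (np.abs(np.expm1(z)) / np.abs(z)).max()
```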
Using the optimality condition $\int_\mathcal{X} h(x', y) \mathbf{d}\beta(y) = 1$ and $\int_\mathcal{X} h(x, y) \mathbf{d}\beta(y) = 1$ from Lemma \ref{lemma_optimality_sinkhorn_potential}, we derive
\begin{equation*}
\|\nabla f(x) - \nabla f(x')\|\leq \int_\mathcal{X} L_c h(x, y)\|x-x'\| + \frac{4G_c^2}{\gamma}h(x', y) \|x-x'\| \mathbf{d}\beta(y) = (L_c+\frac{4G_c^2}{\gamma})\|x-x'\|.
\end{equation*}
This shows that $\nabla f$ is Lipschitz continuous with modulus $L_f {:=} L_c+\frac{4G_c^2}{\gamma}$; in particular, wherever $\nabla^2 f(x)$ exists we have $\|\nabla^2 f(x)\|\leq L_f$, which concludes the proof.
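As a numerical illustration (not part of the proof), the modulus $L_c + 4G_c^2/\gamma$ can be checked on discrete measures. The sketch assumes the quadratic cost $c(x,y)=(x-y)^2/2$ on $[0,1]$, so that $G_c = L_c = 1$; the sizes and $\gamma$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def lse(a, axis):  # numerically stable log-sum-exp along an axis
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

# quadratic cost on [0,1]: |d/dx c| <= 1 (G_c = 1) and d^2/dx^2 c = 1 (L_c = 1)
n, m, gamma = 50, 40, 0.2
x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, m)
a, b = np.full(n, 1 / n), np.full(m, 1 / m)
C = 0.5 * (x[:, None] - y[None, :]) ** 2

f, g = np.zeros(n), np.zeros(m)
for _ in range(2000):
    g = -gamma * lse((f[:, None] - C) / gamma + np.log(a)[:, None], axis=0)
    f = -gamma * lse((g[None, :] - C) / gamma + np.log(b)[None, :], axis=1)

# grad f(x_i) = sum_j h(x_i, y_j) (x_i - y_j) b_j with h = exp((f + g - c) / gamma)
h = np.exp((f[:, None] + g[None, :] - C) / gamma)
gradf = (h * (x[:, None] - y[None, :]) * b[None, :]).sum(axis=1)

# test the modulus L_c + 4 G_c^2 / gamma on pairs with |x - x'| <= gamma / (2 G_c)
D = np.abs(x[:, None] - x[None, :])
G = np.abs(gradf[:, None] - gradf[None, :])
mask = (D > 1e-12) & (D <= gamma / 2)
max_violation = (G[mask] - (1.0 + 4.0 / gamma) * D[mask]).max()
```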
\subsection{Proof of Lemma \ref{lemma_Lipschitz_continuity_of_variation}}
\label{proof_lemma_Lipschitz_continuity_of_variation}
Let $(f, g)$ and $(f', g')$ be the Sinkhorn potentials to $\OTgamma(\alpha, \beta)$ and $\OTgamma(\alpha', \beta')$ respectively.
Denote $u {:=} \exp(f/\gamma)$, $v {:=} \exp(g/\gamma)$ and $u' {:=} \exp(f'/\gamma)$, $v' {:=} \exp(g'/\gamma)$.
From Lemma \ref{lemma_sinkhorn_potential_bound}, $u$ is bounded in terms of the $L^\infty$ norm:
\begin{equation*}
\|u\|_\infty = \max_{x\in\mathcal{X}} |u(x)| = \max_{x\in\mathcal{X}} \exp(f(x)/\gamma) \leq \exp(2M_c/\gamma),
\end{equation*}
which also holds for $v, u', v'$.
Additionally, from Lemma \ref{lemma_lipschitz_sinkhorn_potential}, $\nabla u$ exists and $\|\nabla u\|$ is bounded:
\begin{equation*}
\max_x \|\nabla u(x)\| = \max_x \frac{1}{\gamma}|u(x)|\|\nabla f(x)\|\leq \frac{1}{\gamma}\|u\|_\infty\max_x\|\nabla f(x)\|\leq
{G_c\exp(2M_c/\gamma)}/{\gamma}.
\end{equation*}
Define the mapping $A_{\alpha} \mu {:=} 1/(L_\alpha \mu)$ with
\begin{equation*}
L_\alpha \mu = \int_\mathcal{X} l(\cdot, y)\mu(y)\mathbf{d} \alpha(y),
\end{equation*}
where $l(x, y) {:=} \exp(-c(x, y)/\gamma)$.
From Assumption \ref{ass_bounded_c}, we have $\|l\|_\infty\leq\exp(M_c/\gamma)$ and from Assumption \ref{ass_bounded_infty_c_gradient} we have $\|\nabla_x l(x, y)\|\leq \exp(M_c/\gamma)\frac{G_c}{\gamma}$.
From the optimality condition of $f$ and $g$, we have $v = A_{\alpha} u$ and $u = A_{\beta} v$. Similarly, $v' = A_{\alpha'} u'$ and $u' = A_{\beta'} v'$.
Further use $d_H:\mathcal{C}(\mathcal{X})\times\mathcal{C}(\mathcal{X})\rightarrow\mathbb{R}$ to denote the Hilbert metric of continuous functions, $$d_H(\mu, \nu) = \log \max_{x,x'\in\mathcal{X}}\frac{\mu(x)\nu(x')}{\mu(x')\nu(x)}.$$
Note that $d_H(\mu, \nu) = d_H(1/\mu, 1/\nu)$ if $\mu(x)>0$ and $\nu(x)>0$ $\forall x\in\mathcal{X}$ and hence $d_H(L_\alpha\mu, L_\alpha\nu) = d_H(A_\alpha\mu, A_\alpha\nu)$.
Under the above notations, we introduce the following existing result.
\begin{lemma}[Birkhoff-Hopf Theorem \cite{lemmens2012nonlinear}, see Lemma B.4 in \cite{NIPS2019_9130}]
\label{lemma_Birkhoff-Hopf}
Let $\lambda = \frac{\exp(M_c/\gamma) - 1}{\exp(M_c/\gamma) + 1}$ and $\alpha\in\mathcal{M}_1^+(\mathcal{X})$. Then for every $u, v\in\mathcal{C}(\mathcal{X})$, such that $u(x)>0, v(x)>0$ for all $x\in\mathcal{X}$, we have
\begin{equation*}
d_H(L_\alpha u, L_\alpha v)\leq \lambda d_H(u, v).
\end{equation*}
\end{lemma}
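The contraction factor $\lambda$ can be probed numerically on a discretization: for a kernel $l(x,y)=\exp(-c(x,y)/\gamma)$ with $c$ bounded by $M_c$, the Hilbert-metric ratio $d_H(L_\alpha u, L_\alpha v)/d_H(u,v)$ stays below $\lambda$. A sketch with illustrative sizes (the random cost matrix is an assumption made purely for the test):

```python
import numpy as np

rng = np.random.default_rng(2)

def d_H(u, v):  # Hilbert (projective) metric between positive vectors
    r = np.log(u) - np.log(v)
    return r.max() - r.min()

n, m, M_c, gamma = 30, 30, 1.0, 0.5
lam = (np.exp(M_c / gamma) - 1) / (np.exp(M_c / gamma) + 1)  # = tanh(M_c / (2 gamma))

C = rng.uniform(0, M_c, (n, m))   # bounded cost, 0 <= c <= M_c
K = np.exp(-C / gamma)            # kernel l(x, y)
a = np.full(m, 1 / m)             # weights of alpha

# worst observed contraction ratio over random positive inputs
worst = 0.0
for _ in range(50):
    u, v = rng.uniform(0.1, 2.0, m), rng.uniform(0.1, 2.0, m)
    Lu, Lv = K @ (u * a), K @ (v * a)
    worst = max(worst, d_H(Lu, Lv) / d_H(u, v))
```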
Note that from the definition of $d_H$, one has
\begin{align*}
\|\log\mu-\log\nu \|_\infty\leq d_H(\mu, \nu) =&\ \max_x[\log\mu(x) - \log \nu(x)]+\max_x[\log\nu(x) - \log \mu(x)]\\
\leq&\ 2\|\log\mu-\log\nu \|_\infty.
\end{align*}
In the following, we derive an upper bound on $d_H(v, v')$ and use it to analyze the Lipschitz continuity of the Sinkhorn potentials $f$ and $g$.\\
Construct $\tilde{v} {:=} A_{\alpha} u'$.
Using the triangle inequality (which holds since $v(x), v'(x), \tilde{v}(x) >0$ for all $x\in\mathcal{X}$), we have
\begin{align*}
d_H(v, v')\leq d_H(v, \tilde{v}) + d_H(\tilde{v}, v') \leq
\lambda d_H(u, u') + d_H(\tilde{v}, v'),
\end{align*}
where the second inequality is due to Lemma \ref{lemma_Birkhoff-Hopf}.
Similarly, construct $\tilde{u} {:=} A_{\beta} v'$.
Apply Lemma \ref{lemma_Birkhoff-Hopf} again to obtain
\begin{equation*}
d_H(u, u') \leq d_H(u, \tilde u) + d_H(\tilde u, u')\leq \lambda d_H(v, v') + d_H(\tilde{u}, u').
\end{equation*}
Together, we obtain
\begin{equation*}
d_H(v, v') \leq \lambda^2d_H(v, v') + d_H(\tilde{v}, v') + \lambda d_H(\tilde{u}, u') \leq \lambda^2d_H(v, v') + d_H(\tilde{v}, v') + d_H(\tilde{u}, u'),
\end{equation*}
which leads to
\begin{equation*}
d_H(v, v') \leq \frac{1}{1- \lambda^2}[d_H(\tilde{v}, v') + d_H(\tilde{u}, u')].
\end{equation*}
To bound $d_H(\tilde{v}, v')$ and similarly $d_H(\tilde{u}, u')$, observe the following:
\begin{align}
d_H(v', \tilde v) =& d_H(L_{\alpha'} u', L_{\alpha} u') \leq 2\|\log L_{\alpha'} u' - \log L_{\alpha} u'\|_\infty \notag\\
=& 2\max_{x\in\mathcal{X}} \frac{1}{a_x} \big|[L_{\alpha'} u'](x) - [L_{\alpha} u'](x)\big|\notag\\
\leq& 2\max\{\|1/L_{\alpha'} u'\|_\infty, \|1/L_{\alpha} u'\|_\infty\}\|L_{\alpha'} u' - L_{\alpha} u'\|_\infty \label{appendix_proof_i},
\end{align}
where $a_x$ lies between $[L_{\alpha'} u'](x)$ and $[L_{\alpha} u'](x)$, by the mean value theorem applied to $\log$ in the second line.
Further, in the last inequality we use $\max\{\|1/L_{\alpha'} u'\|_\infty, \|1/L_{\alpha} u'\|_\infty\} = \max\{\|A_{\alpha'} u'\|_\infty, \|A_{\alpha} u'\|_\infty\} \leq \exp(2M_c/\gamma)$.
Consequently, all we need to bound is the last term $\|L_{\alpha'} u' - L_{\alpha} u'\|_\infty$.
\noindent{\bf Result (i):}
We first note that for all $x\in\mathcal{X}$, $\|l(x, \cdot)u'(\cdot)\|_{bl}<\infty$. In terms of $\|\cdot\|_\infty$, we have
\begin{equation*}
\|l(x, \cdot)u'(\cdot)\|_\infty \leq \|l(x, \cdot)\|_\infty\|u'\|_\infty\leq \exp(3M_c/\gamma) <\infty.
\end{equation*}
In terms of $\|\cdot\|_{lip}$, we bound
\begin{align*}
\|l(x, \cdot)u'(\cdot)\|_{lip} \leq&\ \|l(x, \cdot)\|_\infty\|u'\|_{lip} + \|l(x, \cdot)\|_{lip}\|u'\|_{\infty}\\
\leq&\ \exp(M_c/\gamma){G_c\exp(2M_c/\gamma)}/{\gamma} + \exp(M_c/\gamma){G_c}\exp(2M_c/\gamma)/{\gamma} \\
=&\ {2G_c\exp(3M_c/\gamma)}/{\gamma}< \infty.
\end{align*}
Together we have $\|l(x, y)u'(y)\|_{bl} \leq \max\{\exp(3M_c/\gamma), {2G_c\exp(3M_c/\gamma)}/{\gamma}\}$.
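The sup-norm and Lipschitz-norm product rules used above are the standard algebra-of-functions inequalities; they can be confirmed on a grid (the particular stand-in functions below are assumptions for illustration only):

```python
import numpy as np

# discrete sup-norm and Lipschitz-norm on a grid
t = np.linspace(0, 1, 400)
fvals = np.exp(-t)          # stand-in for l(x, .)
gvals = np.sin(3 * t) + 2   # stand-in for u'(.)

def sup_norm(h):
    return np.abs(h).max()

def lip_norm(h):  # max slope over all grid pairs
    num = np.abs(h[:, None] - h[None, :])
    den = np.abs(t[:, None] - t[None, :])
    mask = den > 1e-12
    return (num[mask] / den[mask]).max()

# ||f g||_lip <= ||f||_inf ||g||_lip + ||f||_lip ||g||_inf
lhs = lip_norm(fvals * gvals)
rhs = sup_norm(fvals) * lip_norm(gvals) + lip_norm(fvals) * sup_norm(gvals)
```

The inequality holds pair-by-pair on the grid (add and subtract $f(x)g(y)$), so the discrete norms satisfy it as well.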
From the definition of the operator $L_{\alpha}$, we have
\begin{align*}
\|L_{\alpha'} u' - L_{\alpha} u'\|_\infty =&\ \max_x |\int_\mathcal{X} l(x, y)u'(y)\mathbf{d}\alpha'(y) - \int_\mathcal{X} l(x, y)u'(y)\mathbf{d}\alpha(y)|\\
\leq&\ \|l(x, y)u'(y)\|_{bl} d_{bl}(\alpha', \alpha).
\end{align*}
All together we derive
\begin{equation*}
d_H(v', v) \leq \frac{2\exp(2M_c/\gamma)\|l(x, y)u'(y)\|_{bl}}{1-\lambda^2} [d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)] \quad(\lambda = \frac{\exp(M_c/\gamma) - 1}{\exp(M_c/\gamma) + 1}).
\end{equation*}
Further, since $d_H(v', v) \geq \|\log v'-\log v \|_\infty = \frac{1}{\gamma}\|f'- f \|_\infty$, we have the result:
\begin{equation}
\|f'- f \|_\infty\leq \frac{2\gamma\exp(2M_c/\gamma)\|l(x, y)u'(y)\|_{bl}}{1-\lambda^2} [d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)].
\end{equation}
A similar argument yields the bound for $\|g'- g \|_\infty$.\\
\noindent{\bf Result (ii):} Recall that $\alpha' = T_\phi\sharp\alpha$ and $\beta' = \beta$ with $T_\phi(x) = x + \phi(x)$.
For simplicity we denote $f' = f_{T_\phi\sharp\alpha, \beta}$ and $g' = g_{T_\phi\sharp\alpha, \beta}$ and $f = f_{\alpha, \beta}$ and $g = g_{\alpha, \beta}$. We denote similarly $u'$, $v'$, $u$, and $v$.
Use \eqref{appendix_proof_i} and the change-of-variables formula of the push-forward measure to obtain
\begin{align*}
\|L_{T_\phi\sharp\alpha} u' - L_{\alpha} u'\|_\infty = \max_x \Big|\int_\mathcal{X} [l(x,T_\phi( y))u'(T_\phi(y)) - l(x, y)u'(y)]\mathbf{d}\alpha(y)\Big|.
\end{align*}
We now bound the integrand:
\begin{align*}
&|l(x,T_\phi( y))u'(T_\phi(y)) - l(x, y)u'(y)|\\
\leq &|l(x,T_\phi(y))u'(T_\phi(y)) - l(x,T_\phi( y))u'(y)| + |l(x, T_\phi( y))u'(y) - l(x, y)u'(y)| \\
\leq& \exp(M_c/\gamma)\cdot \frac{G_c\exp(2M_c/\gamma)}{\gamma}\|\phi(y)\| + \exp(M_c/\gamma)\frac{G_c}{\gamma} \cdot\exp(2M_c/\gamma)\cdot\|\phi(y)\|\\
\leq & \frac{2 G_c\exp(3M_c/\gamma)}{\gamma}\cdot\|\phi(y)\|,
\end{align*}
where we use the Lipschitz continuity of $u'$ for the first term and the Lipschitz continuity of $l(x, \cdot)$ for the second term. Plugging the resulting bound $\|L_{T_\phi\sharp\alpha} u' - L_{\alpha} u'\|_\infty \leq \frac{2 G_c\exp(3M_c/\gamma)}{\gamma}\|\phi\|_\infty$ into \eqref{appendix_proof_i} gives the claimed estimate in terms of $\|\phi\|_\infty$.
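Result (ii) can be probed numerically: shifting the support of $\alpha$ by a small map $T_\phi$ should move the Sinkhorn potentials by $O(\|\phi\|_\infty)$. The sketch below (illustrative sizes; $\phi(x)=\delta\sin(3x)$ is an assumed test perturbation) compares $g_{T_\phi\sharp\alpha,\beta}$ with $g_{\alpha,\beta}$ on the shared support of $\beta$, mean-centering both since potentials are only defined up to an additive constant:

```python
import numpy as np

rng = np.random.default_rng(3)

def lse(a, axis):  # numerically stable log-sum-exp along an axis
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def potential_g(x, y, a, b, gamma, iters=3000):
    C = np.abs(x[:, None] - y[None, :])
    f, g = np.zeros(len(x)), np.zeros(len(y))
    for _ in range(iters):
        g = -gamma * lse((f[:, None] - C) / gamma + np.log(a)[:, None], axis=0)
        f = -gamma * lse((g[None, :] - C) / gamma + np.log(b)[None, :], axis=1)
    return g - g.mean()  # center: potentials are defined up to a constant

n, m, gamma = 30, 25, 0.2
x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, m)
a, b = np.full(n, 1 / n), np.full(m, 1 / m)
g0 = potential_g(x, y, a, b, gamma)

def err(delta):  # sup-norm change of g under T_phi(x) = x + delta * sin(3x)
    return np.abs(potential_g(x + delta * np.sin(3 * x), y, a, b, gamma) - g0).max()

e1, e2 = err(1e-2), err(0.25e-2)
```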
\subsection{Proof of Lemma \ref{lemma_Lipschitz_continuity_of_variation_gradient}}
\label{proof_lemma_Lipschitz_continuity_of_variation_gradient}
From the restriction on $d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)$ or on the size of the mapping $\|\phi\|_\infty$, Lemma \ref{lemma_Lipschitz_continuity_of_variation} guarantees that $\frac{1}{\gamma}|f(x) + g(y) - f'(x) - g'(y)|\leq 1$.\\
Denote the Sinkhorn potentials to $\OTgamma(\alpha, \beta)$ and $\OTgamma(\alpha', \beta')$ by $(f, g)$ and $(f', g')$ respectively.
From the expression \eqref{eqn_sinkhorn_potential_gradient_x} of $\nabla f$ (and $\nabla f'$), we have
\begin{align*}
\|\nabla f(x) - \nabla f'(x)\| =& \|\int_\mathcal{X} (h(x, y) - h'(x, y))\nabla_x c(x, y) \mathbf{d}\beta(y)\|\\
=& \|\int_\mathcal{X} h'(x, y)\Big(\exp\Big(\frac{f(x) + g(y) - f'(x) - g'(y)}{\gamma}\Big)-1\Big)\nabla_x c(x, y) \mathbf{d}\beta(y)\|\\
\leq& \int_\mathcal{X} h'(x, y)\Big|\exp\Big(\frac{f(x) + g(y) - f'(x) - g'(y)}{\gamma}\Big)-1\Big|\|\nabla_x c(x, y)\| \mathbf{d}\beta(y) \\
\leq& \int_\mathcal{X} \frac{2}{\gamma}h'(x, y)|f(x) + g(y) - f'(x) - g'(y)| \|\nabla_x c(x, y)\| \mathbf{d}\beta(y),
\end{align*}
where $h'(x, y) {:=} \exp(\frac{1}{\gamma}(f'(x) + g'(y) - c(x, y)))$; the second inequality holds since $|\exp(z) - 1| < 2|z|$ when $|z|\leq 1$ and here $|z| = \frac{1}{\gamma}|f(x) + g(y) - f'(x) - g'(y)|\leq 1$. We then use the results of Lemma \ref{lemma_Lipschitz_continuity_of_variation} to bound the term $|f(x) + g(y) - f'(x) - g'(y)|$.
\noindent{\bf Result (i):} Using (i) of Lemma \ref{lemma_Lipschitz_continuity_of_variation}, we bound
\begin{equation*}
\|\nabla f(x) - \nabla f'(x)\| \leq 2G_cG_{bl}[d_{bl}(\alpha', \alpha)+d_{bl}(\beta', \beta)].
\end{equation*}
\noindent{\bf Result (ii):} Using (ii) of Lemma \ref{lemma_Lipschitz_continuity_of_variation}, we bound
\begin{equation*}
\|\nabla f(x) - \nabla f'(x)\| \leq 2 G_cG_{T}\|\phi\|_\infty.
\end{equation*}
\subsection{Proof of Proposition \ref{proposition_variation_I}}
\label{proof_proposition_variation_I}
We will compute $DF_1[0]$ based on the definition of the Fr\'echet derivatives in Definition \ref{definition_variation_rkhs}.
The computation of $DF_2[0]$ follows similarly.\\
Denote $T_\psi = \mathcal{I} + \psi$.
Note that we are interested in the case when $\psi=0$ and hence $T_{\psi+\epsilon\phi}(x) = T_{\epsilon\phi}(x) = x + \epsilon\phi(x)$.
Additionally, $T_\psi$ is the identity operator when $\psi = 0$ and hence $F_1[0] = \OTgamma(\alpha, \beta)$.
For simplicity, we drop the subscript of $T_{\epsilon\phi}$ ($\psi = 0$) and simply denote it by $T$ in the rest of the proof.
Let $f$ and $g$ be the Sinkhorn potentials to $\OTgamma(\alpha, \beta)$. By \eqref{eqn_OTepsilon_dual} and the optimality of $f$ and $g$, one has
\[
\OTgamma(\alpha, \beta) = \langle f, \alpha \rangle + \langle g, \beta \rangle.
\]
However, $f$ and $g$ are not necessarily the optimal dual variables for $\OTgamma(T_\sharp\alpha, \beta)$, so one has
\[
\OTgamma(T_\sharp\alpha, \beta) \geq \langle f, T_\sharp\alpha \rangle + \langle g, \beta\rangle - \gamma \langle h-1, T_\sharp\alpha\otimes\beta \rangle.
\]
Using the optimality from Lemma \ref{lemma_optimality_sinkhorn_potential_appendix}, we have $\int_\mathcal{X} h(x, y)\mathbf{d} \beta(y) = 1$ and hence $\langle h-1, T_\sharp\alpha\otimes\beta\rangle = 0$.
Subtracting the first equality from the last inequality, we obtain
\begin{align*}
\OTgamma(T_\sharp \alpha, \beta) - \OTgamma(\alpha, \beta) \geq \langle f, T_\sharp\alpha - \alpha\rangle.
\end{align*}
Use the change-of-variables formula of the push-forward measure to obtain
\begin{align*}
\frac{1}{\epsilon}\langle f, T_\sharp\alpha - \alpha\rangle = \frac{1}{\epsilon}\int_\mathcal{X} \big((f\circ T)(x) - f(x)\big) \mathbf{d}\alpha(x)
=\int_\mathcal{X} \langle\nabla f(x+\epsilon'\phi(x)), \phi(x)\rangle \mathbf{d}\alpha(x),
\end{align*}
where $\epsilon' \in [0, \epsilon]$ is from the mean value theorem.
Further, using the Lipschitz continuity of $\nabla f$ from Lemma \ref{lemma_lipschitz_sinkhorn_potential_gradient}, we have
\begin{equation*}
\lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon}\langle f, T_\sharp\alpha - \alpha\rangle = \int_\mathcal{X} \langle\nabla f(x), \phi(x)\rangle \mathbf{d}\alpha(x).
\end{equation*}
Since $\phi\in\mathcal{H}^d$, we have $\phi(x) = \langle\phi, k(x, \cdot)\rangle_{\mathcal{H}^d}$ and hence
\begin{equation*}
\lim_{\epsilon\rightarrow0}\frac{1}{\epsilon} \big(\OTgamma(T_\sharp \alpha, \beta) - \OTgamma(\alpha, \beta)\big) \geq \langle\int \nabla f(x)k(x, \cdot) \mathbf{d}\alpha(x), \phi\rangle_{\mathcal{H}^d}.
\end{equation*}
Similarly, let $f'$ and $g'$ be the Sinkhorn potentials to $\OTgamma(T_\sharp\alpha, \beta)$; using $f'\rightarrow f$ as $\epsilon\rightarrow 0$, we obtain an upper bound
\begin{equation*}
\lim_{\epsilon\rightarrow0}\frac{1}{\epsilon}\big(\OTgamma(T_\sharp \alpha, \beta) - \OTgamma(\alpha, \beta)\big) \leq \langle\int_\mathcal{X} \lim_{\epsilon\rightarrow 0}\nabla f'(x + \epsilon'\phi(x))k(x, \cdot) \mathbf{d}\alpha(x), \phi\rangle_{\mathcal{H}^d}.
\end{equation*}
Since $\phi\in\mathcal{H}^d$, we have $\|\phi\|_{2,\infty}\leq M_\mathcal{H}\|\phi\|_{\mathcal{H}^d} < \infty$ with $M_\mathcal{H}\in\mathbb{R}_+$ being a constant.
Using (ii) of Lemma \ref{lemma_Lipschitz_continuity_of_variation_gradient}, $\nabla f'$ converges uniformly to $\nabla f$ as the perturbation vanishes:
\begin{equation*}
\lim_{\epsilon\rightarrow 0}\|\nabla f'(x + \epsilon'\phi(x)) - \nabla f(x + \epsilon'\phi(x))\|\leq \lim_{\epsilon\rightarrow 0}\epsilon G_T\|\phi\|_{2, \infty} = 0.
\end{equation*}
Besides, using Lemma \ref{lemma_lipschitz_sinkhorn_potential_gradient} we have that $\nabla f$ is continuous and hence $\lim_{\epsilon\rightarrow 0}\nabla f(x + \epsilon'\phi(x)) = \nabla f(x)$.
Consequently we have $\lim_{\epsilon\rightarrow 0}\nabla f'(x + \epsilon'\phi(x)) = \nabla f(x)$ and hence
\begin{equation*}
\lim_{\epsilon\rightarrow0}\frac{1}{\epsilon}\big(\OTgamma(T_\sharp \alpha, \beta) - \OTgamma(\alpha, \beta)\big) = \langle\int_\mathcal{X}\nabla f(x)k(x, \cdot) \mathbf{d}\alpha(x), \phi\rangle_{\mathcal{H}^d}.
\end{equation*}
From Definition \ref{definition_variation_rkhs}, we have the result of $DF_1[0]$.
The result of $DF_2[0]$ can be obtained similarly.
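The directional-derivative formula can be validated by a finite-difference check: perturb the support of $\alpha$ along $\epsilon\phi$, recompute $\OTgamma$ through its dual value $\langle f, \alpha\rangle + \langle g, \beta\rangle$, and compare the central difference quotient with $\int_\mathcal{X} \langle\nabla f(x), \phi(x)\rangle \mathbf{d}\alpha(x)$. A sketch with illustrative sizes and an assumed direction $\phi(x)=\sin(2x)$:

```python
import numpy as np

rng = np.random.default_rng(4)

def lse(a, axis):  # numerically stable log-sum-exp along an axis
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def sinkhorn(x, y, a, b, gamma, iters=300):
    C = 0.5 * (x[:, None] - y[None, :]) ** 2
    f, g = np.zeros(len(x)), np.zeros(len(y))
    for _ in range(iters):
        g = -gamma * lse((f[:, None] - C) / gamma + np.log(a)[:, None], axis=0)
        f = -gamma * lse((g[None, :] - C) / gamma + np.log(b)[None, :], axis=1)
    return f, g, C

n, m, gamma = 30, 25, 0.2
x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, m)
a, b = np.full(n, 1 / n), np.full(m, 1 / m)
phi = np.sin(2 * x)  # fixed perturbation direction evaluated on supp(alpha)

def ot_value(xs):  # dual value <f, alpha> + <g, beta> at (near-)optimal potentials
    f, g, _ = sinkhorn(xs, y, a, b, gamma)
    return f @ a + g @ b

# analytic directional derivative: sum_i a_i <grad f(x_i), phi(x_i)>
f, g, C = sinkhorn(x, y, a, b, gamma)
h = np.exp((f[:, None] + g[None, :] - C) / gamma)
gradf = (h * (x[:, None] - y[None, :]) * b[None, :]).sum(axis=1)
analytic = (a * gradf * phi).sum()

# central finite difference of epsilon -> OT(T_eps alpha, beta)
eps = 1e-4
fd = (ot_value(x + eps * phi) - ot_value(x - eps * phi)) / (2 * eps)
```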
\subsection{Proof of Lemma \ref{lemma_large_N}}
\label{proof_lemma_large_N}
From Proposition \ref{proposition_variation_I} and \eqref{eqn_gradient_sinkhorn_barycenter}, we recall the expression of $D\mathcal{S}_{\alpha}[0]$:
\begin{equation}
D\mathcal{S}_{\alpha}[0] = \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x)] k(x, y)\mathbf{d} \alpha(x),
\end{equation}
and we have $\mathcal{T}[\alpha](x) = x - \eta D\mathcal{S}_{\alpha}[0](x)$.
Consequently, using Corollary \ref{corollary_lipschitz_variation_mapping} we have
\begin{align*}
\|\mathcal{T}[\alpha]\|_{lip} =&\ \max_{x\neq y} \frac{\|\mathcal{T}[\alpha](x) - \mathcal{T}[\alpha](y)\|}{\|x-y\|}
= \max_{x\neq y} \frac{\|x - y - \eta (D\mathcal{S}_{\alpha}[0](x) - D\mathcal{S}_{\alpha}[0](y))\|}{\|x-y\|}\\
\leq&\ 1+\eta\|D\mathcal{S}_{\alpha}[0]\|_{lip} \leq 1+\eta G_cG_k.
\end{align*}
The following lemma states that $\mathcal{T}[\alpha]$ is Lipschitz w.r.t. $\alpha$ in terms of the bounded Lipschitz norm.
\begin{lemma} \label{lemma_appendix_i}
For any $y\in\mathcal{X}$ and any $\alpha, \alpha'\in\mathcal{M}_1^+(\mathcal{X})$, we have
\begin{equation*}
\|\mathcal{T}[\alpha](y) - \mathcal{T}[\alpha'](y)\| \leq \eta \max\{d L_f D_k + dG_c G_k, {D_k L_{bl}}\}d_{bl}(\alpha', \alpha).
\end{equation*}
\end{lemma}
We defer the proof to Appendix \ref{proof_lemma_appendix_i}.
Based on this lemma, for any $h$ with $\|h\|_{bl}\leq 1$, we have
\begin{align*}
&|\langle h, \mathcal{T}[\alpha]_{\sharp}\alpha\rangle - \langle h, \mathcal{T}[\alpha']_{\sharp}\alpha'\rangle| = |\langle h\circ \mathcal{T}[\alpha],\alpha\rangle - \langle h\circ \mathcal{T}[\alpha'],\alpha'\rangle |\\
\leq& |\langle h\circ \mathcal{T}[\alpha],\alpha\rangle - \langle h\circ \mathcal{T}[\alpha],\alpha'\rangle | + |\langle h\circ \mathcal{T}[\alpha],\alpha'\rangle - \langle h\circ \mathcal{T}[\alpha'],\alpha'\rangle |.
\end{align*}
We now bound these two terms individually. For the first term,
\begin{align*}
|\langle h\circ \mathcal{T}[\alpha],\alpha\rangle - \langle h\circ \mathcal{T}[\alpha],\alpha'\rangle | \leq \|h\circ \mathcal{T}[\alpha]\|_{bl} d_{bl}(\alpha, \alpha') \\
\leq \max\{\|h\|_\infty, \|h\|_{lip}\|\mathcal{T}[\alpha]\|_{lip} \}d_{bl}(\alpha, \alpha')
&\leq (1+\eta G_cG_k)d_{bl}(\alpha, \alpha');
\end{align*}
For the second term, we use Lemma \ref{lemma_appendix_i} to derive
\begin{align*}
&\ |\langle h\circ \mathcal{T}[\alpha],\alpha'\rangle - \langle h\circ \mathcal{T}[\alpha'],\alpha'\rangle |\\
\leq&\ \| h\circ \mathcal{T}[\alpha] - h\circ \mathcal{T}[\alpha']\|_\infty \leq \|h\|_{lip}\max_{x\in\mathcal{X}}\|\mathcal{T}[\alpha](x) - \mathcal{T}[\alpha'](x)\| \\
\leq&\ \eta \max\{d L_f D_k + dG_c G_k, {D_k L_{bl}}\}d_{bl}(\alpha', \alpha).
\end{align*}
Combining the above inequalities, we have the result
\begin{equation*}
d_{bl}(\mathcal{T}[\alpha]_\sharp\alpha, \mathcal{T}[\alpha']_\sharp\alpha')\leq(1+\eta G_cG_k+\eta \max\{d L_f D_k + dG_c G_k, {D_k L_{bl}}\})d_{bl}(\alpha', \alpha).
\end{equation*}
\subsubsection{Proof of Lemma \ref{lemma_appendix_i}}\label{proof_lemma_appendix_i}
Recall the definition of $\mathcal{T}[\alpha](x) = x - \eta D\mathcal{S}_{\alpha}[0](x)$, where the functional $\mathcal{S}_{\alpha}$ is defined in \eqref{eqn_functional_per_iteration} and the Fr\'echet derivative is computed in \eqref{eqn_gradient_sinkhorn_barycenter}. For any $y\in\mathcal{X}$, we have
\begin{align*}
&\|\mathcal{T}[\alpha](y) - \mathcal{T}[\alpha'](y)\|\leq \eta\|D\mathcal{S}_{\alpha}[0](y) - D\mathcal{S}_{\alpha'}[0](y)\|\\
\leq& \eta\|\int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x)] k(x, y)\mathbf{d} \alpha(x) - \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha'(x)\| \\
\leq& \eta\|\int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x)] k(x, y)\mathbf{d} \alpha(x) - \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha(x)\| \\
&+\eta\|\int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha(x) - \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha'(x)\| \\
=& \eta\|\int_\mathcal{X} \left([\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x)] -[\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)]\right) k(x, y)\mathbf{d} \alpha(x)\| \\
&+\eta\|\int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha(x) - \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha'(x)\|.
\end{align*}
For the first term, use Lemma \ref{lemma_Lipschitz_continuity_of_variation_gradient} to bound
\begin{align*}
&\|\int_\mathcal{X} \left([\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x)] -[\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)]\right) k(x, y)\mathbf{d} \alpha(x)\|\\
=& \|\int_\mathcal{X} \left(\frac{1}{n}[\sum_{i=1}^{n}\nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha', \beta_i}(x)] - \nabla f_{\alpha, \alpha}(x) + \nabla f_{\alpha', \alpha'}(x) \right) k(x, y)\mathbf{d} \alpha(x)\|\\
\leq& {D_k L_{bl} d_{bl}(\alpha', \alpha).}
\end{align*}
For the second term, we bound
\begin{align*}
&\|\int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha(x) - \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha'(x)\|\\
\leq& \|\int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha(x) - \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)] k(x, y)\mathbf{d} \alpha'(x)\|_1 \\
\leq& \sum_{i=1}^{d} |\int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)]_i k(x, y)\mathbf{d} \alpha(x) - \int_\mathcal{X} [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)]_i k(x, y)\mathbf{d} \alpha'(x)|\\
=& \sum_{i=1}^{d} | \langle [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(\cdot) - \nabla f_{\alpha', \alpha'}(\cdot)]_i k(\cdot, y), \alpha\rangle - \langle [\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(\cdot) - \nabla f_{\alpha', \alpha'}(\cdot)]_i k(\cdot, y), \alpha'\rangle| \\
\leq& \sum_{i=1}^{d} \|[\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(\cdot) - \nabla f_{\alpha', \alpha'}(\cdot)]_i k(\cdot, y)\|_{bl} d_{bl}(\alpha', \alpha).
\end{align*}
Therefore, we only need to bound $\sum_{i=1}^{d}\|[\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)]_i k(x, y)\|_{bl}$.
In terms of $L^\infty$ norm, we have
\begin{equation*}
\sum_{i=1}^{d} \|[\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(\cdot) - \nabla f_{\alpha', \alpha'}(\cdot)]_i k(\cdot, y)\|_\infty \leq d D_k \|[\nabla f_{\alpha', \beta_i}]_i\|_\infty \leq d D_k G_c.
\end{equation*}
In terms of $\|\cdot\|_{lip}$, denote $\tilde \nabla(x) = \frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(x) - \nabla f_{\alpha', \alpha'}(x)$. For all $x, x'\in\mathcal{X}$, we have
\begin{align*}
&\ \frac{|[\tilde \nabla(x)]_i k(x, y) - [\tilde \nabla(x')]_i k(x', y)|}{\|x - x'\|} \\
\leq&\ \frac{|[\tilde \nabla(x)]_i k(x, y) - [\tilde \nabla(x')]_i k(x, y)| + |[\tilde \nabla(x')]_i k(x, y) - [\tilde \nabla(x')]_i k(x', y)|}{\|x - x'\|}\\
\leq &\ L_f D_k + G_c G_k,
\end{align*}
and hence $\sum_{i=1}^{d} \|[\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha', \beta_i}(\cdot) - \nabla f_{\alpha', \alpha'}(\cdot)]_i k(\cdot, y)\|_{lip} \leq d L_f D_k + dG_c G_k$.
All together, we have for any $y\in\mathcal{X}$
\begin{equation*}
\|\mathcal{T}[\alpha](y) - \mathcal{T}[\alpha'](y)\| \leq \eta \max\{d L_f D_k + dG_c G_k, {D_k L_{bl}}\}d_{bl}(\alpha', \alpha).
\end{equation*}
\subsection{Proof of Lemma \ref{lemma_sufficient_descent}} \label{proof_lemma_sufficient_descent}
We first recall a proposition from \cite{feydy2019interpolating}, which shows that the dual potentials are the variations of $\OTgamma$ w.r.t. the underlying probability measure.
\begin{definition}
We say $h \in\mathcal{C}(\mathcal{X})$ is the first-order variation of a functional $F:\mathcal{M}_1^+(\mathcal{X})\rightarrow\mathbb{R}$ at $\alpha\in\mathcal{M}_1^+(\mathcal{X})$ if for any displacement $\xi = \beta - \alpha$ with $\beta\in\mathcal{M}_1^+(\mathcal{X})$, we have
\begin{equation*}
F(\alpha+t\xi) = F(\alpha) + t\langle h, \xi\rangle + o(t).
\end{equation*}
Further we denote $h = \nabla_\alpha F(\alpha)$.
\end{definition}
\begin{lemma} \label{lemma_variation_OTgamma}
The first-order variation of $\OTgamma(\alpha, \beta)$ (for $\alpha\neq\beta$) with respect to the measures $\alpha$ and $\beta$ is the corresponding pair of Sinkhorn potentials, i.e.
$\nabla_{(\alpha, \beta)}\OTgamma(\alpha, \beta) = (f_{\alpha, \beta}, g_{\alpha, \beta})$.
Further, if $\alpha = \beta$, we have
$\nabla_{\alpha} \OTgamma(\alpha, \alpha) = 2f_{\alpha, \alpha}$.
\end{lemma}
Recall that $\alpha^{t+1} = \mathcal{T}[\alpha^t]_{\sharp}\alpha^t$ where the push-forward mapping is of the form $\mathcal{T}[\alpha^t](x) = x - \eta D\mathcal{S}_{\alpha^t}[0](x)$ with $D\mathcal{S}_{\alpha^t}[0]$ given in \eqref{eqn_gradient_sinkhorn_barycenter}.
Using the convexity of $\mathcal{S}_{\gamma}$ and Lemma \ref{lemma_variation_OTgamma}, we have
\begin{align*}
&\ \mathcal{S}_{\gamma}(\alpha^{t+1}) - \mathcal{S}_{\gamma}(\alpha^t) \\
\leq& \langle \nabla_{\alpha} \mathcal{S}_{\gamma}(\alpha)|_{\alpha = \alpha^{t+1}}, \alpha^{t+1} - \alpha^{t}\rangle && \text{\# convexity of $\mathcal{S}_{\gamma}$} \\
=& \langle \frac{1}{n}\sum_{i=1}^{n} f_{\alpha^{t+1}, \beta_i} - f_{\alpha^{t+1}, \alpha^{t+1}}, \mathcal{T}[\alpha^t]_{\sharp}\alpha^t - \alpha^t\rangle && \text{\# Lemma \ref{lemma_variation_OTgamma}}\\
=& \langle [\frac{1}{n}\sum_{i=1}^{n} f_{\alpha^{t+1}, \beta_i} - f_{\alpha^{t+1}, \alpha^{t+1}}]\circ \mathcal{T}[\alpha^t] - [\frac{1}{n}\sum_{i=1}^{n} f_{\alpha^{t+1}, \beta_i} - f_{\alpha^{t+1}, \alpha^{t+1}}], \alpha^t\rangle. && \text{\# change-of-variables}
\end{align*}
For succinctness, denote $\xi^{t} {:=} \frac{1}{n}\sum_{i=1}^{n}f_{\alpha^{t}, \beta_i} - f_{\alpha^{t}, \alpha^{t}}$.
Hence, we have
\begin{align*}
\mathcal{S}_{\gamma}(\alpha^{t+1}) - \mathcal{S}_{\gamma}(\alpha^t)
\leq&\ \langle \xi^{t+1}\circ \mathcal{T}[\alpha^t] - \xi^{t+1}, \alpha^t\rangle = \int \xi^{t+1}(x - \eta D\mathcal{S}_{\alpha^t}[0](x)) - \xi^{t+1}(x) \mathbf{d} \alpha^{t}(x) \\
=&\ - \eta \int \langle\nabla \xi^{t+1}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x),
\end{align*}
where the last equality is from the mean value theorem with $\eta' \in [0,\eta]$.
We now bound the integral by splitting it into three terms and analyze them one by one.
\begin{align*}
&\int_{\mathcal{X}} \langle\nabla \xi^{t+1}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x) \\
= &\int_{\mathcal{X}} \langle\nabla \xi^{t}(x), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x) && \textcircled{1}\\
&+ \int_{\mathcal{X}} \langle\nabla \xi^{t}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)) - \nabla \xi^{t}(x), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x) && \textcircled{2} \\
&+ \int_{\mathcal{X}} \langle\nabla \xi^{t+1}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)) - \nabla \xi^{t}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x). && \textcircled{3}
\end{align*}
For $\textcircled{1}$, since $D\mathcal{S}_{\alpha^t}[0] \in \mathcal{H}^d$, we have $D\mathcal{S}_{\alpha^t}[0](x) = \langle D\mathcal{S}_{\alpha^t}[0], k(x, \cdot)\rangle$ and hence
\begin{align*}
\int_\mathcal{X} \langle\nabla \xi^{t}(x), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x) =&\ \int \langle\nabla \xi^{t}(x) k(x, \cdot), D\mathcal{S}_{\alpha^t}[0]\rangle_{\mathcal{H}^d} \mathbf{d} \alpha^{t}(x) \\
=&\ \|D\mathcal{S}_{\alpha^t}[0]\|^2_{\mathcal{H}^d} = \mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n),
\end{align*}
where the last equality is from the Definition \ref{definition_KSBD} and the expression of $D\mathcal{S}_{\alpha}[0]$ in \eqref{eqn_gradient_sinkhorn_barycenter}.\\
For $\textcircled{2}$, note that the summands of $\nabla \xi^t$ are of the form $\nabla f_{\alpha, \beta}$ (or $\nabla f_{\alpha, \alpha}$), which is proved to be Lipschitz in Lemma \ref{lemma_lipschitz_sinkhorn_potential_gradient}.
Consequently, we bound
\begin{align*}
&|\int \langle\nabla \xi^{t}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)) - \nabla \xi^{t}(x), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x)| \\
\leq& \int \|\nabla \xi^{t}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)) - \nabla \xi^{t}(x)\|\|D\mathcal{S}_{\alpha^t}[0](x)\| \mathbf{d} \alpha^{t}(x) \\
\leq& \int 2L_f\eta\|D\mathcal{S}_{\alpha^t}[0](x)\|^2 \mathbf{d} \alpha^{t}(x) && \text{\# Lemma \ref{lemma_lipschitz_sinkhorn_potential_gradient}} \\
\leq& 2\eta L_f M_\mathcal{H}^2 \|D\mathcal{S}_{\alpha^t}[0]\|^2_{\mathcal{H}^d} = 2\eta L_f M_\mathcal{H}^2 \mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n). && \text{\# see \eqref{eqn_RKHS_norm}}
\end{align*}
where we use $\forall f \in \mathcal{H}^d, \exists M_\mathcal{H}>0$ s.t. $\|f(x)\|\leq M_\mathcal{H}\|f\|_{\mathcal{H}^d}, \forall x\in\mathcal{X}$ in the third inequality.\\
For $\textcircled{3}$, similar to $\textcircled{2}$, the summands of $\nabla \xi^t$ are proved to be Lipschitz in (ii) of Lemma \ref{lemma_Lipschitz_continuity_of_variation_gradient}, and hence we bound
\begin{align*}
&|\int \langle\nabla \xi^{t+1}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)) - \nabla \xi^{t}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)), D\mathcal{S}_{\alpha^t}[0](x)\rangle \mathbf{d} \alpha^{t}(x)|\\
\leq& \int \|\nabla \xi^{t+1}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x)) - \nabla \xi^{t}(x - \eta' D\mathcal{S}_{\alpha^t}[0](x))\|\|D\mathcal{S}_{\alpha^t}[0](x)\| \mathbf{d} \alpha^{t}(x) \\
\leq& \int \sqrt{d}\eta L_{T}\|D\mathcal{S}_{\alpha^t}[0]\|_{2,\infty}\|D\mathcal{S}_{\alpha^t}[0](x)\| \mathbf{d} \alpha^{t}(x) && \text{\# Lemma \ref{lemma_Lipschitz_continuity_of_variation_gradient}}\\
\leq& 2\eta \sqrt{d}L_T M_\mathcal{H}^2 \|D\mathcal{S}_{\alpha^t}[0]\|^2_{\mathcal{H}^d} = 2\eta\sqrt{d} L_T M_\mathcal{H}^2 \mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n) && \text{\# see \eqref{eqn_RKHS_norm}}
\end{align*}
Combining the bounds on $\textcircled{1}, \textcircled{2}, \textcircled{3}$, we have:
\begin{equation*}
\mathcal{S}_{\gamma}(\alpha^{t+1}) - \mathcal{S}_{\gamma}(\alpha^t) \leq - \eta(1-2\eta L_f M_\mathcal{H}^2-2\eta\sqrt{d} L_T M_\mathcal{H}^2) \mathbf{S}(\alpha^t, \{\beta_i\}_{i=1}^n),
\end{equation*}
which leads to the result when we set $\eta\leq \min\{\frac{1}{8L_fM_\mathcal{H}^2}, \frac{1}{8\sqrt{d}L_TM_\mathcal{H}^2}\}$.
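One descent step can be simulated end to end. Assuming (purely for illustration) that the objective takes the form $\mathcal{S}_{\gamma}(\alpha) = \frac{1}{n}\sum_{i=1}^n \OTgamma(\alpha,\beta_i) - \frac{1}{2}\OTgamma(\alpha,\alpha)$ up to $\alpha$-independent terms, a small step along $-D\mathcal{S}_{\alpha}[0]$ should not increase it; the sketch uses a single target ($n=1$), a Gaussian RKHS kernel, and illustrative constants:

```python
import numpy as np

def lse(a, axis):  # numerically stable log-sum-exp along an axis
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def sinkhorn(x, y, a, b, gamma, iters=400):
    C = 0.5 * (x[:, None] - y[None, :]) ** 2
    f, g = np.zeros(len(x)), np.zeros(len(y))
    for _ in range(iters):
        g = -gamma * lse((f[:, None] - C) / gamma + np.log(a)[:, None], axis=0)
        f = -gamma * lse((g[None, :] - C) / gamma + np.log(b)[None, :], axis=1)
    return f, g, C

def grad_f(x, y, a, b, gamma):  # grad of the first Sinkhorn potential on supp(alpha)
    f, g, C = sinkhorn(x, y, a, b, gamma)
    h = np.exp((f[:, None] + g[None, :] - C) / gamma)
    return (h * (x[:, None] - y[None, :]) * b[None, :]).sum(axis=1)

def objective(x, y, a, b, gamma):  # OT(alpha, beta) - 0.5 OT(alpha, alpha), n = 1
    f, g, _ = sinkhorn(x, y, a, b, gamma)
    fs, gs, _ = sinkhorn(x, x, a, a, gamma)
    return (f @ a + g @ b) - 0.5 * (fs @ a + gs @ a)

gamma, eta, sigma = 0.2, 0.02, 0.5
x = np.linspace(0.0, 0.5, 20)   # current iterate alpha^t (particles)
y = np.linspace(0.5, 1.0, 20)   # single target beta
a, b = np.full(20, 1 / 20), np.full(20, 1 / 20)

# xi(x_j) = grad f_{alpha,beta}(x_j) - grad f_{alpha,alpha}(x_j)
xi = grad_f(x, y, a, b, gamma) - grad_f(x, x, a, a, gamma)

# D S_alpha[0](x_i) = sum_j a_j xi(x_j) k(x_j, x_i), with a Gaussian kernel k
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
v = K.T @ (a * xi)

obj_before = objective(x, y, a, b, gamma)
obj_after = objective(x - eta * v, y, a, b, gamma)
```

The step moves the particles of $\alpha$ toward $\beta$, and the converged dual value $\langle f,\alpha\rangle+\langle g,\beta\rangle$ is used as the value of $\OTgamma$.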
\clearpage
\section{A Discussion on the Global Optimality}\label{appendix_global_optimality}
\subsection{Proof of {Theorem }\ref{thm_optimality}} \label{Proof_of_thm_optimality}
We first show $\int_{\mathcal{X}}\|\xi(x)\|^2 \mathbf{d} \alpha(x)<\infty$:
\begin{align*}
\int_{\mathcal{X}}\|\xi(x)\|^2 \mathbf{d} \alpha(x) &= \int_{\mathcal{X}} \Big\|\frac{1}{n}\sum_{i=1}^{n} \nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x)\Big\|^2 \mathbf{d} \alpha(x) \\
&\leq \int_{\mathcal{X}} 2\Big\|\frac{1}{n}\sum_{i=1}^{n} \nabla f_{\alpha, \beta_i}(x)\Big\|^2 + 2\|\nabla f_{\alpha, \alpha}(x)\|^2 \mathbf{d} \alpha (x) \leq 4G_f < \infty.
\end{align*}
(i) $\mathbf{S}(\alpha, \{\beta_i\}_{i=1}^n) = 0\ \&\ \mathbf{supp}(\alpha) =\mathcal{X}$ $\Rightarrow \max_{\beta\in\mathcal{M}_1^+(\mathcal{X})} \langle -\nabla_{\alpha}\mathcal{S}_{\gamma}(\alpha), \beta - \alpha\rangle \leq 0$: \\
From the integral strict positive definiteness of the kernel function $k(x, x')$, we have that $\int_{\mathcal{X}}\|\nabla\xi(x)\|^2 \mathbf{d} \alpha(x) = 0$, which implies $\nabla \xi(x) = \frac{1}{n}\sum_{i=1}^{n} \nabla f_{\alpha, \beta_i}(x) - \nabla f_{\alpha, \alpha}(x) = 0$ for all $x\in\mathbf{supp}(\alpha)$.
Further, since $\mathbf{supp}(\alpha) =\mathcal{X}$, $\xi$ is a constant function on $\mathcal{X}$.
Since we can shift the Sinkhorn potentials by a constant amount without losing their optimality, we can always ensure that $\xi$ is exactly the zero function. This implies the optimality condition of the Sinkhorn barycenter problem: $\max_{\beta\in\mathcal{M}_1^+(\mathcal{X})} \langle -\nabla_{\alpha}\mathcal{S}_{\gamma}(\alpha), \beta - \alpha\rangle \leq 0$.\\
(ii) Combining Theorem \ref{theorem_convergence} with (i), the result follows directly.
\subsection{Fully Supported Property of \texttt{SD} at Finite Time} \label{appendix_fully_supported}
Without loss of generality, suppose that $c(x,y)=\infty$ if $x\notin\mathcal{X}$. By the monotonicity established in Lemma \ref{lemma_sufficient_descent}, the support of $\alpha^t$ never grows beyond $\mathcal{X}$. Let $p^t$ be the density function of $\alpha^t$. The density $p^{t+1}$ is given by the change-of-variables formula $p^{t+1}(x) = p^t(\mathcal{T}[\alpha^{t}]^{-1}(x)) \big|\det(\nabla \mathcal{T}[\alpha^{t}]^{-1}(x))\big|$, where $\mathcal{T}[\alpha^{t}]$ is the mapping defined in \eqref{eqn_pushforward_mapping}. For a sufficiently small step size, this determinant is always positive. Consequently, $p^{t+1}(x) = 0$ would imply $p^t(\mathcal{T}[\alpha^{t}]^{-1}(x))=0$, which is impossible since $p^t$ is fully supported. Therefore, $p^{t+1}$ is also absolutely continuous and fully supported.
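The change-of-variables argument above can be sanity-checked numerically. The following minimal 1-D sketch substitutes a hypothetical affine step $T(x)=(1-\eta)x$ for $\mathcal{T}[\alpha^t]$ (an assumption for illustration only) and verifies that the pushed-forward density still integrates to one and never vanishes.

```python
import numpy as np

# 1-D sketch of the pushforward density update: for an invertible map T with
# positive Jacobian determinant, p_{t+1}(x) = p_t(T^{-1}(x)) |det grad T^{-1}(x)|,
# so p_{t+1}(x) = 0 would force p_t(T^{-1}(x)) = 0.  The map T(x) = (1 - eta) x
# is an arbitrary stand-in for one small SD step.
eta = 0.1
T_inv = lambda x: x / (1.0 - eta)          # inverse of T(x) = (1 - eta) x
jac = 1.0 / (1.0 - eta)                    # |d T^{-1}(x) / dx| > 0

p_t = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # fully supported density
p_next = lambda x: p_t(T_inv(x)) * jac     # pushforward density

xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
mass = (p_next(xs) * dx).sum()             # still a probability density
print(mass)
print(p_next(xs).min() > 0)                # still fully supported on the grid
```

The same bookkeeping applies coordinate-wise in higher dimensions, with the Jacobian determinant of $\mathcal{T}[\alpha^t]^{-1}$ in place of the scalar factor.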
\subsection{Review the Assumptions for Global Convergence in Previous Works} \label{appendix_previous_work_assumption}
We briefly describe the assumptions required by previous works \cite{arbel2019maximum,mroueh2019sobolev} to guarantee the global convergence to the MMD minimization problem.
We emphasize that both of these works make assumptions on the \emph{entire} measure sequence.
In the following, we use $\nu_p$ to denote the target measure.
Given a measure $\nu\in\mathcal{M}_1^+(\mathcal{X})$, \citet{mroueh2019sobolev} define the Kernel Derivative Gramian Embedding (KDGE) of $\nu$ by
\begin{equation}
D(\nu) {:=} \mathbb{E}_{x\sim\nu} \left([J\Phi(x)]^\top J\Phi(x)\right),
\end{equation}
where $\Phi$ is the feature map of a given RKHS and $J\Phi$ denotes its Jacobian matrix.
Further denote the classic Kernel Mean Embedding (KME) by
\begin{equation}
\mbox{\boldmath$\mu$\unboldmath}(\nu) {:=} \mathbb{E}_{x\sim\nu} \Phi(x).
\end{equation}
\sod requires the entire variable measure sequence $\{\nu_q\}_{q\geq 0}$ to satisfy, for any measure $\nu_q$ such that $\delta_{p,q}{:=}\mbox{\boldmath$\mu$\unboldmath}(\nu_q) - \mbox{\boldmath$\mu$\unboldmath}(\nu_p) \neq 0$,
\begin{equation}
D(\nu_q) \delta_{p,q} \neq 0.
\end{equation}
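To make the KME/KDGE objects concrete, here is a small illustrative sketch (our own construction, not code from \citet{mroueh2019sobolev}) that approximates the feature map with random Fourier features and evaluates the condition $D(\nu_q)\delta_{p,q}\neq 0$ on synthetic Gaussian samples; the feature dimension and sample sizes are arbitrary choices.

```python
import numpy as np

# Finite-dimensional sketch of the KME / KDGE quantities, using random Fourier
# features Phi(x) = sqrt(2/D) cos(w x + b) as an explicit approximate feature
# map of a Gaussian kernel.  We form the D x D version of the KDGE so that
# D(nu) delta typechecks in feature space.
rng = np.random.default_rng(0)
D = 64
w = rng.normal(size=D)                  # frequencies ~ spectral measure of the kernel
b = rng.uniform(0, 2 * np.pi, size=D)   # random phases

phi = lambda x: np.sqrt(2 / D) * np.cos(np.outer(x, w) + b)        # (n, D)
jphi = lambda x: -np.sqrt(2 / D) * np.sin(np.outer(x, w) + b) * w  # d phi / dx

xp = rng.normal(0.0, 1.0, size=500)     # samples of the target nu_p
xq = rng.normal(2.0, 1.0, size=500)     # samples of a variable measure nu_q

delta = phi(xq).mean(axis=0) - phi(xp).mean(axis=0)   # mu(nu_q) - mu(nu_p)
J = jphi(xq)
kdge_q = J.T @ J / len(xq)              # empirical KDGE of nu_q, shape (D, D)

print(np.linalg.norm(delta) > 0)            # the two embeddings differ
print(np.linalg.norm(kdge_q @ delta) > 0)   # the condition D(nu_q) delta != 0
```

For generic samples the condition holds, but it is an assumption on \emph{every} iterate of the sequence, which is what makes it hard to verify a priori.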
In \cite{arbel2019maximum}, \citet{arbel2019maximum} proposed two types of assumptions such that either of them leads to the global convergence of their (noisy) gradient flow algorithm.
Specifically, denote the squared weighted Sobolev semi-norm of a function $f$ in an RKHS with respect to a measure $\nu$ by $\|f\|^2_{\dot H(\nu)} = \int_\mathcal{X} \|\nabla f(x)\|^2\mathbf{d} \nu(x)$.
Given two probability measures $\nu_p$ and $\nu_q$ on $\mathcal{X}$, define the weighted negative Sobolev distance $\|\nu_p - \nu_q\|_{\dot H^{-1}(\nu)}$ by
\begin{equation}
\|\nu_p - \nu_q\|_{\dot H^{-1}(\nu)} = \sup_{f \in L_2(\nu), \|f\|_{\dot H(\nu)}\leq 1} \Big|\int_{\mathcal{X}}f(x)\mathbf{d}\nu_p(x) - \int_{\mathcal{X}}f(x)\mathbf{d}\nu_q(x)\Big|.
\end{equation}
Proposition 7 of \cite{arbel2019maximum} shows that if $\|\nu_p - \nu_q\|_{\dot H^{-1}(\nu)}$ remains bounded along the entire variable measure sequence $\{\nu_q\}$ generated by their gradient flow algorithm, then $\nu_q$ converges weakly to $\nu_p$ in the MMD sense.\\
Further, the authors also propose another noisy gradient flow algorithm and provide its global convergence guarantee under a different assumption:
Let $f_{\nu_p, \nu_q}$ be the unnormalized witness function of $\mathrm{MMD}(\nu_p, \nu_q)$. Let $\mu$ be the standard Gaussian distribution and let $\beta>0$ be a noise level.
Denote $\mathcal{D}_\beta(\nu_q) {:=} \mathbb{E}_{x\sim\nu_q, \mu}[\|\nabla f_{\nu_p, \nu_q}(x + \beta \mu)\|^2]$.
The noisy gradient flow algorithm globally converges if for all $n$ there exists a noise level $\beta_n$ such that
\begin{equation}
8\lambda^2\beta_n^2 \mathrm{MMD}(\nu_p, \nu_n) \leq \mathcal{D}_{\beta_n}(\nu_n),
\end{equation}
and $\sum_{i=0}^n \beta_i^2\rightarrow \infty$.
Here $\lambda$ is some problem dependent constant.
\section{Sinkhorn Descent as Gradient Flow}
\section{Broader Impact}
This work has the following potential positive impact in the society:
We propose the first algorithm for the Sinkhorn barycenter problem that is scalable with respect to the problem dimension $d$ (linear dependence), while existing works all have an exponential dependence on $d$.
Further, we expect that this functional gradient descent method can be applied to more general optimization problems involving distribution sampling: In principle, the negative gradient of the dual variables instructs the particles in the measure to search the landscape of the minimizer.
\bibliographystyle{abbrvnat}
\section{Preliminaries on the Sinkhorn Potentials}
The Sinkhorn potential is the cornerstone of the entropy-regularized OT problem. In the discrete case, it can be computed by a standard method from \cite{genevay2016stochastic}. In particular, when $\alpha$ is discrete, $f$ can simply be represented by a finite-dimensional vector, since only its values on $\mathbf{supp}(\alpha)$ matter. We describe this method in Appendix \ref{section_computation_of_sinkhorn_potential} for completeness.
In the following, we treat the computation of Sinkhorn potentials as a blackbox, and refer to it as $\mathcal{SP}_{\gamma}(\alpha,\beta)$.
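As a concrete illustration of such a blackbox, the following minimal log-domain Sinkhorn sketch (a standard fixed-point iteration, not our actual $\mathcal{SP}_{\gamma}$ implementation) computes the potentials of two small discrete measures; the support points, weights, $\gamma$ and iteration count are arbitrary choices.

```python
import numpy as np

# Log-domain Sinkhorn sketch of SP_gamma(alpha, beta) for discrete measures:
# alternating soft-min (log-sum-exp) updates f = A(g, beta), g = A(f, alpha).
def logsumexp(M, axis):
    m = M.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(M - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def sinkhorn_potentials(x, a, y, b, gamma, iters=300):
    C = (x[:, None] - y[None, :]) ** 2          # ground cost c(x, y)
    f, g = np.zeros(len(x)), np.zeros(len(y))
    for _ in range(iters):
        f = -gamma * logsumexp((g[None, :] - C) / gamma + np.log(b)[None, :], axis=1)
        g = -gamma * logsumexp((f[:, None] - C) / gamma + np.log(a)[:, None], axis=0)
    return f, g, C

rng = np.random.default_rng(0)
gamma = 0.1
x, y = rng.uniform(size=5), rng.uniform(size=7)
a, b = np.full(5, 1 / 5), np.full(7, 1 / 7)
f, g, C = sinkhorn_potentials(x, a, y, b, gamma)

# after the final g-update, the beta-marginal condition holds exactly:
# sum_i a_i h(x_i, y_j) = 1 for every j, where h = exp((f + g - C) / gamma)
h = np.exp((f[:, None] + g[None, :] - C) / gamma)
print(np.abs((h * a[:, None]).sum(axis=0) - 1).max())   # ~0 (machine precision)
```

In practice any solver with this interface (e.g. the stochastic semi-dual method of Appendix \ref{section_computation_of_sinkhorn_potential}) can play the role of the blackbox.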
\section{Experiments}
We conduct experimental studies to show the efficiency and efficacy of \sinkhorndescent by comparing with the recently proposed functional Frank-Wolfe method (\fw) from \citep{NIPS2019_9130}\footnote{ \citep{claici2018stochastic} is not included as it only applies to the Wasserstein barycenter problem ($\gamma = 0$).}.
Note that in round $t$, \fw requires globally minimizing the nonconvex function
$Q(x) {:=} \sum_{i=1}^{n} f_{\alpha^t, \beta_i}(x) - f_{\alpha^t, \alpha^t}(x)$
in order to choose the next Dirac measure to be added to the support.
Here, $f_{\alpha^t, \beta_i}$ and $f_{\alpha^t, \alpha^t}$ are the Sinkhorn potentials.
This operation is implemented by an exhaustive grid search so that \fw returns a reasonably accurate solution.
Consequently, \fw is computationally expensive even for low-dimensional problems, and we only compare \texttt{SD} with \fw in the first two image experiments, where $d = 2$ (the grid size used in \fw grows exponentially with $d$).\\
Importantly, the size of the support $N$ affects the computational efficiency as well as the solution quality of both methods.
A large support size usually means higher computational complexity but allows a more accurate approximation of the barycenter.
However, since \texttt{SD} and \fw have different support size patterns, it is hard to compare them directly:
The support size of \texttt{SD} is fixed after its initialization while \fw starts from an initial small-size support and gradually increases it during the optimization procedure.
We hence fix the support size of the output measure from \fw and vary the support size of \texttt{SD} for a more comprehensive comparison.
\begin{figure*}[t]
\centering
\begin{tabular}{c c c}
\includegraphics[width=.3\columnwidth]{ellipses.pdf} & \includegraphics[width=.3\columnwidth]{matching.pdf} & \includegraphics[width=.3\columnwidth]{gaussian.pdf}\\
(a) Concentric Ellipses & (b) Distribution Sketching & (c) Gaussians
\end{tabular}
\caption{$N$ is the support size. \fw is not included in (c) as it is impractical in high-dimensional problems (here, the dimension is $100$)}
\label{fig}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{tabular}{c c}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/SD/barycenter_0.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/SD/barycenter_2.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/SD/barycenter_4.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/SD/barycenter_6.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/SD/barycenter_8.png}
&
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/SD/barycenter_0.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/SD/barycenter_40.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/SD/barycenter_80.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/SD/barycenter_120.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/SD/barycenter_160.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/SD/barycenter_200.png} \\
($a_1$) \texttt{SD} on ellipses &
($b_1$) \texttt{SD} on sketching\\
left to right, using 1 to 9 \texttt{SD} steps; &
left to right, using 1 to 201 \texttt{SD} steps \\
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/FW/barycenter_410.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/FW/barycenter_430.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/FW/barycenter_450.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/FW/barycenter_470.png}
\includegraphics[height=.08\columnwidth, width=.08\columnwidth]{plots/FW/barycenter_490.png}
&
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/FW/barycenter_9900.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/FW/barycenter_11900.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/FW/barycenter_13900.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/FW/barycenter_15900.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/FW/barycenter_17900.png}
\includegraphics[height= .08\columnwidth, width=.08\columnwidth]{plots/sketching/FW/barycenter_19900.png} \\
($a_2$) \texttt{FW} on ellipses &
($b_2$) \texttt{FW} on sketching\\
left to right, using 411 to 491 \texttt{FW} steps; &
left to right, using 9901 to 19901 \texttt{FW} steps
\end{tabular}
\caption{Visual results of the ellipses and sketching problem.}
\label{fig_visual}
\end{figure*}
\paragraph{Barycenter of Concentric Ellipses}
We compute the barycenter of 30 randomly generated concentric ellipses similarly as done in \citep{cuturi2014fast,NIPS2019_9130}.
We run \fw for $500$ iterations, hence the output measure of \fw has support size $N=600$ (\fw increases its support size by $1$ in each iteration). \texttt{SD} is initialized with a discrete uniform distribution with support size $N \in \{20, 40, 80\}$. Note that in these experiments the chosen support size for \texttt{SD} is even smaller than the initial support size of \fw.
The result is reported in Figure \ref{fig}(a).
In terms of convergence rate, we observe that \texttt{SD} is much faster than \fw. Even $20$ iterations are sufficient for \texttt{SD} to find a good solution.
More importantly, in terms of the quality of the solution, \texttt{SD} with support size $N = 20$ outperforms \fw with final support size $N=600$.
In fact, \fw cannot find a solution with better quality even with a larger support size.
This phenomenon is due to an inevitable limitation of the \fw optimization procedure: Each \fw step requires globally minimizing the nonconvex function $Q(x)$ via an exhaustive grid search. This introduces an inherent error to the procedure, as the actual minimizer of $Q(x)$ may reside outside the grid points. Such error limits the accuracy of \fw even when the number of particles grows. In contrast, \texttt{SD} adjusts the particles to minimize the objective without any such inherent error. As a result, we observe that \texttt{SD} outperforms \fw in both efficiency and accuracy.
\paragraph{Distribution Sketching}
We consider a special case of the barycenter problem where we only have one source distribution, similarly as done in \citep{NIPS2019_9130}.
This problem can be viewed as approximating a given distribution with a fixed support size budget and is hence called distribution sketching.
Specifically, a natural image of a cheetah is used as the source measure in $\mathbb{R}^2$.
We run \fw for $20000$ iterations and the support size of \texttt{SD} is $N \in \{2000, 4000\}$.
The result is reported in Figure \ref{fig}(b).
Since we only have one source measure, the Sinkhorn barycenter loss is very small and hence we use a log scale for the y-axis.
We can observe that \texttt{SD} outperforms \fw in terms of the quality of the solution as well as the convergence rate.
\paragraph{Barycenter of Gaussians}
To demonstrate the efficiency of \texttt{SD} on high dimensional problems, we consider the problem of finding the barycenter of multivariate Gaussian distributions.
Concretely, we pick $5$ isotropic Gaussians in $\mathbb{R}^{100}$ with different means.
For each of them, we sample an empirical measure with $50000$ points and used the obtained empirical measures as source measures.
We initialize \texttt{SD} with an empirical measure sampled from the uniform distribution with support size $N = 5000$.
We did not compare with \fw as the global minimizer of $Q(x)$ cannot be computed in $\mathbb{R}^{100}$.
The result is reported in Figure \ref{fig}(c).
We can see that just like the previous two experiments, \texttt{SD} converges in less than $20$ iterations.
\paragraph{Visual Results on Ellipses and Sketching.}
To compare \texttt{SD} with \texttt{FW} visually, we allow \texttt{SD} and \texttt{FW} to have similar numbers of particles in the ellipses and sketching tasks, and report the results in Figure \ref{fig_visual}. Specifically, in $(a_1)$ \texttt{SD} has 500 particles while in $(a_2)$ \texttt{FW} has 511 to 591 particles (recall that the support size of \texttt{FW} grows over iterations); in $(b_1)$ \texttt{SD} has 8000 particles while in $(b_2)$ \texttt{FW} has 10001 to 20001 particles.
In all cases \texttt{FW} has at least as many particles as \texttt{SD} while taking significantly more steps.
However, the visual results produced by \texttt{SD} are clearly better than those of \texttt{FW}: in $(a_1)$, the circle is very clear in the last picture while in $(a_2)$ all pictures remain vague; in $(b_1)$, the eyes of the cheetah are clear, but in $(b_2)$ they remain blurry.
\clearpage
\section{Introduction} \label{section_introduction}
Computing a nonlinear interpolation between a set of probability measures is a foundational task
across many disciplines. This problem is typically referred to as the barycenter problem and, as it provides a meaningful metric to aggregate knowledge, it has found numerous applications, including distribution clustering \citep{ye2017fast}, Bayesian inference \citep{srivastava2015wasp}, texture mixing \citep{rabin2011wasserstein}, and graphics \citep{solomon2015convolutional}.
The barycenter problem can be naturally cast as minimization of the average distance between the target measure (barycenter) and the source measures; and the choice of the distance metric can significantly impact the quality of the barycenter \citep{feydy2019interpolating}.
In this regard, the Optimal Transport (OT) distance (a.k.a. the Wasserstein distance) and its entropy-regularized variant (a.k.a. the Sinkhorn divergence) are the most suitable geometrically-faithful metrics, with the latter being more computationally friendly. In this paper, we provide efficient and provable methods for the Sinkhorn barycenter problem.
The prior work in this domain has mainly focused on finding the barycenter by optimizing directly in the space of (discrete) probability measures. We can divide these previous methods into three broad classes depending on how the support of the barycenter is determined:\\
\indent (i) The first class assumes a fixed and prespecified support set for the barycenter and only optimizes the corresponding weights \citep{staib2017parallel,dvurechenskii2018decentralize,kroshnin2019complexity}.
Accordingly, the problem reduces to minimizing a convex objective subject to a simplex constraint. However, fixing the support without any prior knowledge creates undesired bias and affects the quality of the final solution. While increasing the support size (possibly exponentially in the dimension $d$) can help to mitigate the bias, it renders the procedure computationally prohibitive as $d$ grows.\\
\indent (ii) To reduce the bias, the second class considers optimizing the support and the weights through an alternating procedure \citep{cuturi2014fast,claici2018stochastic}.
Since the barycenter objective is not jointly convex with respect to the support and the weights, these methods in general only converge to a stationary point, which can be far from the true minimizers.\\
\indent (iii) Unlike the aforementioned classes, \citet{NIPS2019_9130} recently proposed
a conditional gradient method with a growing support set.
This method enjoys sublinear convergence to the global optimum
under the premise that a $d$-dimensional nonconvex subproblem can be globally minimized per iteration.
However, nonconvex optimization is generally intractable in high dimensional problems (large $d$) and only stationary points can be efficiently reached.
Hence, the guarantee of \citep{NIPS2019_9130} has limited applicability as the dimension grows.
In this paper, we provide a new perspective on the Sinkhorn barycenter problem: Instead of operating in the space of probability measures, we view the barycenter as the push-forward measure of a given initial measure under an unknown mapping.
We thus recast the barycenter problem as an unconstrained functional optimization over the space of mappings. Equipped with this perspective, we make the following contributions:
\begin{itemize}
\item We develop a novel functional gradient descent method, called \sinkhorndescent (\texttt{SD}), which operates by finding the push-forward mapping in a Reproducing Kernel Hilbert Space that allows the fastest descent, and consequently solves the Sinkhorn barycenter problem iteratively.
We then define the Kernelized Sinkhorn Barycenter Discrepancy (KSBD) to characterize the non-asymptotic convergence of \texttt{SD}.
In particular, we prove that KSBD vanishes under the \texttt{SD} iterates at the rate of $\mathcal{O}(\frac{1}{t})$, where $t$ is the iteration number.
\item
We prove that \texttt{SD} preserves the weak convergence of empirical measures.
Concretely, use ${\texttt{SD}}^t(\cdot)$ to denote the output of \texttt{SD} after $t$ iterations and let $\alpha_N$ be an empirical measure of $\alpha$ with $N$ samples.
We have $\lim_{N\rightarrow\infty}{\texttt{SD}}^t(\alpha_N) = {\texttt{SD}}^t(\alpha)$.
Such asymptotic analysis allows us to jointly study the behavior of \texttt{SD} under either discrete or continuous initialization.
\item
Under a mild assumption, we prove that KSBD is a valid discrepancy to characterize the optimality of the solution, i.e. the vanishing of KSBD implies the output measure of \texttt{SD} converges to the global optimal solution set of the Sinkhorn barycenter problem.
\end{itemize}
Further, we show the efficiency and efficacy of \texttt{SD} by comparing it with prior art on several problems.
We note that the computational complexity of \texttt{SD} depends \emph{linearly} on the dimension $d$.
We hence validate the scalability of \texttt{SD} by solving a $100$-dimensional barycenter problem, which cannot be handled by previous methods due to their exponential dependence on the problem dimension.
\paragraph{Notations.}
Let $\mathcal{X}\subseteq\mathbb{R}^d$ be a compact ground set, endowed with a symmetric ground metric $c:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}_+$.
Without loss of generality, we assume $c(x, y) = \infty$ if $x\notin\mathcal{X}$ or $y \notin \mathcal{X}$.
We use $\nabla_1 c(\cdot,\cdot):\mathcal{X}^2\rightarrow \mathbb{R}^d$ to denote its gradient w.r.t. its first argument.
Let $\mathcal{M}_1^+(\mathcal{X})$ and $\mathcal{C}(\mathcal{X})$ be the space of probability measures and continuous functions on $\mathcal{X}$.
We denote the support of a probability measure $\alpha \in \mathcal{M}_1^+(\mathcal{X})$ by $\mathbf{supp}(\alpha)$, and we write $\alpha$-a.e. for ``almost everywhere w.r.t. $\alpha$''.
For a vector $\mathbf{a}\in\mathbb{R}^d$, we denote its $\ell_2$ norm by $\|\mathbf{a}\|$.
For a function $f:\mathcal{X}\rightarrow\mathbb{R}$, we denote its $L^\infty$ norm by $\|f\|_\infty {:=} \max_{x\in\mathcal{X}} |f(x)|$ and denote its gradient by $\nabla f(\cdot):\mathcal{X}\rightarrow\mathbb{R}^d$.
For a vector function $f:\mathcal{X}\rightarrow\mathbb{R}^d$, we denote its $(2,\infty)$ norm by $\|f\|_{2, \infty} {:=} \max_{x\in\mathcal{X}} \|f(x)\|$.
For an integer $n$, denote $[n]{:=} \{1, \cdots, n\}$.\\
Given a Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}$ with a kernel function $k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}_+$, we say a vector function $\psi = [[\psi]_1, \cdots, [\psi]_d]$ is in $\mathcal{H}^d$ if each component $[\psi]_i$ is in $\mathcal{H}$. The space $\mathcal{H}$ has a natural inner product structure and an induced norm, and so does $\mathcal{H}^d$, i.e. $\langle f, g\rangle_{\mathcal{H}^d} = \sum_{i=1}^{d} \langle [f]_i, [g]_i\rangle_{\mathcal{H}}, \forall f, g\in\mathcal{H}^d$ and $\|f\|_{\mathcal{H}^d}^2 = {\langle f, f\rangle_{\mathcal{H}^d}}$. The reproducing property of the RKHS $\mathcal{H}$ states that for $f \in \mathcal{H}^d$, one has $[f]_i(x) = \langle [f]_i, k_x \rangle_{\mathcal{H}}$ with $k_x(y) = k(x, y)$, which by the Cauchy-Schwarz inequality implies that there exists some constant $M_{\mathcal{H}}>0$ such that
\begin{equation} \label{eqn_RKHS_norm}
\|f\|_{2, \infty} \leq M_{\mathcal{H}}\|f\|_{\mathcal{H}^d}, \forall f\in \mathcal{H}^d.
\vspace{-.1cm}
\end{equation}
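Inequality \eqref{eqn_RKHS_norm} can be checked numerically. The sketch below (a toy construction of our own) builds $f\in\mathcal{H}^d$ as a finite kernel expansion for a 1-D Gaussian kernel, for which $M_\mathcal{H}=\sup_x\sqrt{k(x,x)}=1$, and compares the two norms on a grid; the centers, coefficients and $d=3$ are arbitrary choices.

```python
import numpy as np

# Numeric sanity check of ||f||_{2,infty} <= M_H ||f||_{H^d} for a Gaussian
# kernel (M_H = sup_x sqrt(k(x,x)) = 1), with f given by a kernel expansion
# [f]_i(x) = sum_j c_{ij} k(z_j, x) so that ||[f]_i||_H^2 = c_i^T K c_i.
rng = np.random.default_rng(0)
k = lambda u, v: np.exp(-(u[:, None] - v[None, :]) ** 2)   # 1-D Gaussian kernel

z = rng.uniform(-1, 1, size=6)          # expansion centers
c = rng.normal(size=(3, 6))             # coefficients of [f]_1, [f]_2, [f]_3
K = k(z, z)                             # Gram matrix

rkhs_norm = np.sqrt(sum(ci @ K @ ci for ci in c))   # ||f||_{H^d}

xs = np.linspace(-3, 3, 1001)
F = c @ k(z, xs)                        # F[i, m] = [f]_i(xs[m]), shape (3, 1001)
sup_norm = np.sqrt((F ** 2).sum(axis=0)).max()      # ||f||_{2,infty} on the grid

print(sup_norm <= rkhs_norm)            # True: the bound holds with M_H = 1
```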
Additionally, for a functional $F: \mathcal{H}^d \to \mathbb{R}$, the Fr\'echet derivative of $F$ is defined as follows.
\begin{definition}[Fr\'echet derivative in RKHS] \label{definition_variation_rkhs}
For a functional $F:\mathcal{H}^d\rightarrow\mathbb{R}$, its Fr\'echet derivative $DF[\psi]$ at $\psi \in\mathcal{H}^d$ is a function in $\mathcal{H}^d$ satisfying the following: For any $\xi\in\mathcal{H}^d$ with $\|\xi\|_{\mathcal{H}^d}<\infty$,
$$\lim_{\epsilon\rightarrow0}\frac{F[\psi+\epsilon \xi] - F[\psi]}{\epsilon} = \langle DF[\psi], \xi\rangle_{\mathcal{H}^d}.$$
\end{definition}
Note that the Fr\'echet derivative at $\psi$, i.e. $DF[\psi]$, is a bounded linear operator from $\mathcal{H}^d$ to $\mathbb{R}$.
It can be written in the form $DF[\psi](\xi) = \langle DF[\psi], \xi\rangle_{\mathcal{H}^d}$ due to the Riesz–Fr\'echet representation theorem.
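Definition \ref{definition_variation_rkhs} can be illustrated by a finite-difference check on the toy functional $F[\psi]=\frac12\|\psi\|^2_{\mathcal{H}^d}$, whose Fr\'echet derivative is $DF[\psi]=\psi$. In the sketch below (our own construction, with $d=1$), functions are represented by kernel expansions over shared centers, so RKHS inner products reduce to quadratic forms in the Gram matrix.

```python
import numpy as np

# Finite-difference check of the Frechet derivative on F[psi] = 0.5 ||psi||^2,
# where DF[psi] = psi.  psi and xi are coefficient vectors of kernel expansions
# over shared centers z, and <u, v>_H = u^T K v with K the Gram matrix.
rng = np.random.default_rng(1)
z = rng.uniform(-1, 1, size=5)                   # shared expansion centers
K = np.exp(-(z[:, None] - z[None, :]) ** 2)      # Gram matrix of a Gaussian kernel

psi, xi = rng.normal(size=5), rng.normal(size=5) # coefficient vectors

inner = lambda u, v: u @ K @ v                   # RKHS inner product
F = lambda u: 0.5 * inner(u, u)

eps = 1e-6
fd = (F(psi + eps * xi) - F(psi)) / eps          # directional difference quotient
print(abs(fd - inner(psi, xi)))                  # ~0: matches <DF[psi], xi>_H
```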
\section{More Related Work}
\section{Implementation}
The code for reproducing the experimental results can be found at the following link: \url{https://github.com/shenzebang/Sinkhorn_Descent}.
Our implementation is based on PyTorch and geomloss\footnote{\url{https://www.kernel-operations.io/geomloss/}}.
\section{Preliminaries on the Sinkhorn Potentials}
\begin{lemma}[Lemma \ref{lemma_sinkhorn_potential_bound} elaborated] \label{lemma_optimality_sinkhorn_potential_appendix}
For a probability measure $\alpha\in\mathcal{M}_1^+(\mathcal{X})$, we write $\alpha$-a.e. for ``almost everywhere w.r.t. $\alpha$''.
The pair $(f, g)$ is a pair of Sinkhorn potentials of the entropy-regularized optimal transport problem \eqref{eqn_OTepsilon_dual} if it satisfies
\vspace{-.15cm}
\begin{equation}
f = \mathcal{A}(g, \beta)\ \ \alpha\text{-a.e.} \quad \textrm{and}\quad g = \mathcal{A}(f, \alpha)\ \ \beta\text{-a.e.},
\end{equation}
or equivalently
\vspace{-.2cm}
\begin{align}
\vspace{-.2cm}
\int_\mathcal{X} h(x, y)\mathbf{d} \beta(y) = 1\ \ \alpha\text{-a.e.}, \label{eqn_optimality_sinkhorn_potential_x}\\
\vspace{-.2cm}
\int_\mathcal{X} h(x, y)\mathbf{d} \alpha(x) = 1\ \ \beta\text{-a.e.}, \label{eqn_optimality_sinkhorn_potential_y}
\vspace{-.2cm}
\end{align}
where $h(x, y) {:=} \exp\left(\frac{1}{\gamma}(f(x) + g(y) - c(x, y))\right)$.
\end{lemma}
One can observe that the Sinkhorn potentials are not unique.
In fact, for $\alpha \neq \beta$, the pair $(f_{\alpha,\beta}, g_{\alpha,\beta})$ remains optimal under a constant shift, i.e. $(f_{\alpha,\beta}+C, g_{\alpha,\beta}-C)$ are still Sinkhorn potentials of $\OTgamma(\alpha, \beta)$ for an arbitrary finite $C\in\mathbb{R}$.
Fortunately, it is proved in \cite{cuturi2013sinkhorn} that the Sinkhorn potentials are unique up to such a scalar translation.
\\
To reduce the ambiguity, we fix an $x_o \in \mathcal{X}$ and choose $f_{\alpha,\beta}(x_o) = 0$, since otherwise we can always shift {$f_{\alpha,\beta}$ and $g_{\alpha,\beta}$} by the amount of $f_{\alpha,\beta}(x_o)$.
While it is possible that $x_o\notin\mathbf{supp}(\alpha)$, this choice of $f_{\alpha,\beta}$ is still feasible.
This is because the Sinkhorn potentials can be naturally extended to the entire $\mathcal{X}$ from Lemma \ref{lemma_optimality_sinkhorn_potential}, even though the above optimality condition characterizes the Sinkhorn potentials on $\mathbf{supp}(\alpha), \mathbf{supp}(\beta)$ only.\\
Further, this choice of $f_{\alpha,\beta}$ allows us to bound $\|f_{\alpha,\beta}\|_\infty$ given that the ground cost function $c$ is bounded on $\mathcal{X}$.
\begin{assumption}\label{ass_bounded_c}
The cost function $c(x, y)$ is bounded: $\forall x,y\in\mathcal{X}, c(x, y)\leq M_c$.
\end{assumption}
\begin{lemma}[Boundedness of the Sinkhorn Potentials] \label{lemma_sinkhorn_potential_bound}
Let $(f, g)$ be the Sinkhorn potentials of problem \eqref{eqn_OTepsilon_dual} and assume that there exists $x_o\in\mathcal{X}$ such that $f(x_o) = 0$ (otherwise shift the pair by $f(x_o)$). Then, under Assumption \ref{ass_bounded_c}, $\|f\|_\infty \leq 2M_c$ and $\|g\|_\infty \leq 2M_c$.
\end{lemma}
Next, we analyze the Lipschitz continuity of the Sinkhorn potential $f_{\alpha,\beta}(x)$ with respect to $x$.
\begin{assumption}\label{ass_bounded_infty_c_gradient}
The cost function $c$ is $G_c$-Lipschitz continuous with respect to its first argument:
$$\forall x, x' \in \mathcal{X}, |c(x, y) - c(x', y)|\leq G_c\|x - x'\|.$$
\end{assumption}
Assumption \ref{ass_bounded_infty_c_gradient} implies that $\nabla_1 c(x,y)$ exists and that $\|\nabla_1 c(x,y)\|\leq G_c$ for all $x, y\in\mathcal{X}$.
It further ensures the Lipschitz-continuity of the Sinkhorn potential.
\begin{lemma}[Proposition 12 of \cite{feydy2019interpolating}]
\label{lemma_lipschitz_sinkhorn_potential}
Under Assumption \ref{ass_bounded_infty_c_gradient}, for a fixed pair of measures $(\alpha, \beta)$, the Sinkhorn potential $f_{\alpha, \beta}:\mathcal{X}\rightarrow\mathbb{R}$ is $G_c$-Lipschitz continuous,
\begin{equation}
\forall x, x' \in \mathcal{X}, |f_{\alpha, \beta}(x) - f_{\alpha, \beta}(x')|\leq G_c\|x - x'\|.
\end{equation}
Further, the gradient $\nabla f_{\alpha, \beta}$ exists {at every point $x \in \mathcal{X}$}, and $\|\nabla f_{\alpha, \beta}(x)\|\leq G_c, \forall x\in\mathcal{X}$.
\end{lemma}
\begin{assumption}\label{ass_bounded_infty_c_hessian}
The gradient of the cost function $c$ is $L_c$-Lipschitz continuous: for all $x, x', y \in \mathcal{X}$, $$\|\nabla_1 c(x, y) - \nabla_1 c(x', y)\|\leq L_c\|x - x'\|.$$
\end{assumption}
\begin{lemma}\label{lemma_lipschitz_sinkhorn_potential_gradient}
Suppose Assumptions \ref{ass_bounded_infty_c_gradient} and \ref{ass_bounded_infty_c_hessian} hold, and denote $L_f {:=} {4G_c^2}/{\gamma}+L_c$.
For a pair of measures $(\alpha, \beta)$, the gradient of the corresponding Sinkhorn potential $f_{\alpha, \beta}:\mathcal{X}\rightarrow\mathbb{R}$ is Lipschitz continuous,
\begin{equation}
\forall x, x' \in \mathcal{X}, \|\nabla f_{\alpha, \beta}(x) - \nabla f_{\alpha, \beta}(x')\|\leq L_f\|x - x'\|.
\end{equation}
\end{lemma}
\subsection{Computation of Sinkhorn Potentials}\label{section_computation_of_sinkhorn_potential}
The Sinkhorn potential is the cornerstone of the entropy regularized OT problem $\OTgamma(\alpha,\beta)$. Hence, a key component of our method is to efficiently compute this quantity.
An efficient method is given in \cite{genevay2016stochastic} when both $\alpha$ and $\beta$ are discrete measures (discrete case), as well as when $\alpha$ is discrete but $\beta$ is continuous (semi-discrete case). More precisely,
by plugging in the optimality condition on $g$ in \eqref{eqn_optimality_sinkhorn_potential_xy}, the dual problem \eqref{eqn_OTepsilon_dual} becomes
\begin{equation} \label{eqn_regularized_OT_semi_dual}
\OTgamma(\alpha, \beta) = \max_{f\in\mathcal{C}} \langle f, \alpha\rangle + \langle\mathcal{A}(f, \alpha), \beta\rangle.
\end{equation}
Note that \eqref{eqn_regularized_OT_semi_dual} only depends on the values of $f$ on the support of $\alpha$, $\mathbf{supp}(\alpha)$, which can be represented by a finite-dimensional vector $\mathbf{f}\in\mathbb{R}^{|\mathbf{supp}(\alpha)|}$.
Viewing the discrete measure $\alpha$ as a weight vector $\omega_{\alpha}$ on $\mathbf{supp}(\alpha)$,
we have
\begin{equation*}
\OTgamma(\alpha, \beta) = \max_{\mathbf{f}\in\mathbb{R}^{|\mathbf{supp}(\alpha)|}} \left\{F(\mathbf{f}) := \mathbf{f}^\top\omega_{\alpha} + \mathbb{E}_{y\sim\beta}\left[\mathcal{A}(\mathbf{f}, \alpha)(y) \right]\right\},
\end{equation*}
that is, computing $\OTgamma(\alpha, \beta)$ is equivalent to solving a standard concave stochastic optimization problem, where the randomness comes from $\beta$ (see Proposition 2.1 in \cite{genevay2016stochastic}).
Hence, the problem can be solved using off-the-shelf stochastic optimization methods.
In the main body, this method is referred to as $\mathcal{SP}_{\gamma}(\alpha,\beta)$.
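The semi-dual maximization can be sketched with averaged stochastic gradient ascent, in the spirit of \cite{genevay2016stochastic}. The sketch below is a toy illustration rather than the implementation used in our experiments; the problem sizes, $\gamma$, and the step-size schedule are arbitrary choices.

```python
import numpy as np

# Averaged stochastic gradient ascent on the semi-dual: alpha is discrete and
# each sample y ~ beta yields the unbiased gradient
#   grad F(f) = omega_alpha - chi(y),
# where chi(y) is the softmax of (f - c(., y)) / gamma weighted by omega_alpha.
rng = np.random.default_rng(0)
gamma = 0.5
x = rng.uniform(size=6); omega = np.full(6, 1 / 6)      # support / weights of alpha
y = rng.uniform(size=8); wbeta = np.full(8, 1 / 8)      # discrete beta to sample from
cost = lambda xs, yy: (xs - yy) ** 2

def chi(f, yy):                      # softmax of (f - c(., yy)) / gamma under omega
    s = (f - cost(x, yy)) / gamma + np.log(omega)
    s = np.exp(s - s.max())
    return s / s.sum()

def grad(f, yy):                     # unbiased stochastic gradient of F
    return omega - chi(f, yy)

def F(f):                            # full-batch semi-dual objective
    A = lambda yy: -gamma * np.log(np.sum(omega * np.exp((f - cost(x, yy)) / gamma)))
    return f @ omega + sum(w * A(yy) for w, yy in zip(wbeta, y))

f, f_avg = np.zeros(6), np.zeros(6)
for t in range(1, 5001):
    f += grad(f, y[rng.integers(8)]) / np.sqrt(t)       # stochastic ascent step
    f_avg += (f - f_avg) / t                            # Polyak-Ruppert average

full_grad = sum(w * grad(f_avg, yy) for w, yy in zip(wbeta, y))
print(np.linalg.norm(full_grad))     # small: f_avg approximately maximizes F
print(F(f_avg) >= F(np.zeros(6)))    # the objective improved over f = 0
```

Since the objective is invariant under a constant shift of $\mathbf{f}$, the gradient always sums to zero and the iterates stay in the zero-mean subspace when initialized at zero.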
\subsection{{Related Work on Functional Gradient Descent}}
A related functional gradient descent type method is the Stein Variational Gradient Descent (\svgd) method by \citet{liu2016stein}.
\svgd considers the problem of minimizing the Kullback–Leibler (KL) divergence between a variable distribution and a posterior $p$.
Note that \svgd updates the positions of a set of $N$ particles using the score function of the posterior $p$, i.e. $\nabla \log p$.
Consequently, it requires the access to the target distribution function.
Later, \citet{liu2017stein} proved that \svgd has a convergence guarantee in its continuous-time limit (taking an infinitesimal step size) with an infinite number of particles ($N\rightarrow\infty$).
In comparison, \texttt{SD} is designed to solve the significantly more complicated Sinkhorn barycenter problem and has a stronger convergence guarantee.
More precisely, while \texttt{SD} updates the measure using only a sampling machinery of the target measures (no score functions), it is guaranteed to converge sub-linearly to a stationary point when $\alpha$ is a \emph{discrete} measure using \emph{discrete} time steps.
This is in sharp contrast to the results for \svgd.
In another work, \citet{mroueh2019sobolev} considers minimizing the Maximum Mean Discrepancy (MMD) between a source measure and a variable measure.
They solve this problem by incrementally following a Sobolev critic function and propose the Sobolev Descent (\sod) method.
To show the global convergence of the measure sequence generated by \sod, \citet{mroueh2019sobolev} assume that the \emph{entire} sequence satisfies certain spectral properties, which is in general difficult to verify.
Later, \citet{arbel2019maximum} consider the same MMD minimization problem from a gradient flow perspective.
They propose two assumptions that if either one holds, the MMD gradient flow converges to the global solution.
However, similar to \citep{mroueh2019sobolev}, these assumptions have to be satisfied for the \emph{entire} measure sequence.
We note that the Sinkhorn barycenter problem is a strict generalization of the above MMD minimization problem and is hence much more challenging: by setting the number of source measures to $n=1$ and letting the entropy regularization parameter $\gamma \rightarrow \infty$, problem \eqref{eqn_sinkhorn_barycenter} degenerates to the MMD special case.
Further, the MMD between two probability measures has a closed-form expression, while the Sinkhorn divergence can only be described via a set of optimization problems; consequently, the Sinkhorn barycenter problem is significantly more challenging. To guarantee global convergence, the proposed \sd algorithm only requires one of the accumulation points of the measure sequence to be fully supported on $\mathcal{X}$, with no restriction on the entire sequence.
\section{Sinkhorn Barycenter} \label{section_sinkhorn_barycenter}
We first introduce the entropy-regularized optimal transport distance and its debiased version, a.k.a. the Sinkhorn divergence.
Given two probability measures $\alpha, \beta \in \mathcal{M}_1^+(\mathcal{X})$,
use $\Pi(\alpha, \beta)$ to denote the set of joint distributions over $\mathcal{X}^2$ with marginals $\alpha$ and $\beta$. For $\pi\in\Pi(\alpha, \beta)$, use $\langle c, \pi\rangle$ to denote the integral $\langle c, \pi\rangle = \int_{\mathcal{X}^2} c(x, y)\mathbf{d}\pi(x, y)$ and use $\mathrm{KL}(\pi||\alpha\otimes\beta)$ to denote the {Kullback--Leibler divergence} between the candidate transport plan $\pi$ and the product measure $\alpha\otimes\beta$.
The entropy-regularized optimal transport distance $\OTgamma(\alpha, \beta):\mathcal{M}_1^+(\mathcal{X})\times\mathcal{M}_1^+(\mathcal{X})\rightarrow\mathbb{R}_+$ is defined as
\begin{equation}
\OTgamma(\alpha, \beta) = \min_{\pi\in\Pi(\alpha, \beta)} \langle c, \pi\rangle + \gamma \mathrm{KL}(\pi||\alpha\otimes\beta).
\label{eqn_OTepsilon}
\end{equation}
Here, $\gamma > 0$ is a regularization parameter. Note that $\OTgamma(\alpha, \beta)$ is not a valid metric, as there exists $\alpha \in \mathcal{M}_1^+(\mathcal{X})$ such that $\OTgamma(\alpha, \alpha)\neq 0$ when $\gamma > 0$.
To remove this bias, \citet{peyre2019computational} introduced the \emph{Sinkhorn divergence} $\mathbb{S}_\gamma(\alpha, \beta):\mathcal{M}_1^+(\mathcal{X})\times\mathcal{M}_1^+(\mathcal{X})\rightarrow\mathbb{R}_+$:
\begin{equation} \label{s-o}
\mathbb{S}_\gamma(\alpha, \beta) {:=} \OTgamma(\alpha, \beta) - \frac{1}{2}\OTgamma(\alpha, \alpha) - \frac{1}{2}\OTgamma(\beta, \beta),
\end{equation}
which is a {debiased version} of $\OTgamma(\alpha, \beta)$.
It is further proved that $\mathbb{S}_\gamma(\alpha, \beta)$ is nonnegative, bi-convex and metrizes the convergence in law when the ground set $\mathcal{X}$ is compact and the cost $c$ is Lipschitz.
Now given a set of probability measures $\{\beta_i\}_{i=1}^n$, the Sinkhorn barycenter is the measure $\alpha\in \mathcal{M}_1^+(\mathcal{X})$ that minimizes the average of Sinkhorn divergences
\begin{equation}
\min_{\alpha\in \mathcal{M}_1^+(\mathcal{X})} \Big( \mathcal{S}_{\gamma}(\alpha){:=}\frac{1}{n}\sum_{i=1}^{n}\mathbb{S}_\gamma(\alpha, \beta_i) \Big).
\label{eqn_sinkhorn_barycenter}
\end{equation}
We will next focus on the properties of $\OTgamma$ since $\mathcal{S}_{\gamma}(\alpha)$ is the linear combination of these terms.
\paragraph{The Dual Formulation of $\OTgamma$.} As a convex program, the entropy-regularized optimal transport problem \eqref{eqn_OTepsilon} has an equivalent dual formulation, given as follows:
\begin{align} \label{eqn_OTepsilon_dual}
\OTgamma(\alpha, \beta) = \max_{f, g\in\mathcal{C}(\mathcal{X})} \langle f, \alpha\rangle
+ \langle g, \beta\rangle - \gamma\langle \exp((f\oplus g - c)/\gamma) - 1, \alpha\otimes\beta\rangle,
\end{align}
where we denote $[f\oplus g](x, y) = f(x) + g(y)$.
The maximizers $f_{\alpha, \beta}$ and $g_{\alpha, \beta}$ of \eqref{eqn_OTepsilon_dual} are called the \emph{Sinkhorn potentials} of $\OTgamma(\alpha, \beta)$.
Define the Sinkhorn mapping $\mathcal{A}:\mathcal{C}(\mathcal{X})\times\mathcal{M}_1^+(\mathcal{X}) \rightarrow \mathcal{C}(\mathcal{X})$ by
\begin{equation} \label{eqn_Sinkhorn_mapping}
\mathcal{A}(f, \alpha)(y) = -\gamma\log\int_\mathcal{X}\exp\big(({f(x) - c(x, y)})/{\gamma}\big)\mathbf{d}\alpha(x).
\end{equation}
The following lemma states the optimality condition for the Sinkhorn potentials $f_{\alpha, \beta}$ and $g_{\alpha, \beta}$.
\begin{lemma}[Optimality \cite{peyre2019computational}] \label{lemma_optimality_sinkhorn_potential}
A pair $(f, g)$ gives the Sinkhorn potentials of the entropy-regularized optimal transport problem \eqref{eqn_OTepsilon_dual} if it satisfies
\begin{equation}
f = \mathcal{A}(g, \beta)\ \ \alpha\text{-a.e.} \quad \textrm{and}\quad g = \mathcal{A}(f, \alpha)\ \ \beta\text{-a.e.} \label{eqn_optimality_sinkhorn_potential_xy}
\end{equation}
\end{lemma}
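For two discrete measures, the optimality conditions of Lemma \ref{lemma_optimality_sinkhorn_potential} can be solved by simply iterating the Sinkhorn mapping until a fixed point is reached. The following Python sketch is illustrative (a squared-Euclidean cost is assumed, and all integrals reduce to log-sum-exp operations over the finite supports):

```python
import numpy as np

def sinkhorn_potentials(X, a, Y, b, gamma=0.5, iters=500):
    """Fixed-point iteration f <- A(g, beta), g <- A(f, alpha) from the
    optimality conditions, for discrete measures alpha=(X,a), beta=(Y,b)."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # c(x_i, y_j)
    f, g = np.zeros(len(a)), np.zeros(len(b))

    def A(h, logw, cost):
        # A(h, mu)(.) = -gamma * log sum_z w(z) exp((h(z) - c(z, .))/gamma)
        return -gamma * np.logaddexp.reduce(
            logw[:, None] + (h[:, None] - cost) / gamma, axis=0)

    la, lb = np.log(a), np.log(b)
    for _ in range(iters):
        f = A(g, lb, C.T)   # f(x) = A(g, beta)(x): integrate over y
        g = A(f, la, C)     # g(y) = A(f, alpha)(y): integrate over x
    return f, g, C
```

In practice the iteration converges linearly, so successive iterates quickly become indistinguishable.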
\section{Methodology}
We present the \sinkhorndescent (\texttt{SD}) algorithm for the Sinkhorn barycenter problem \eqref{eqn_sinkhorn_barycenter} in two steps:
We first reformulate \eqref{eqn_sinkhorn_barycenter} as an unconstrained functional minimization problem and then derive the descent direction as the negative functional gradient over an RKHS $\mathcal{H}^d$.
Operating in RKHS allows us to measure the quality of the iterates using a so-called kernelized discrepancy which we introduce in Definition \ref{definition_KSBD}. This quantity will be crucial for our convergence analysis.
The restriction of a functional optimization problem to RKHS is common in the literature as discussed in Remark \ref{remark_RKHS}.
\paragraph{Alternative Formulation.}
Instead of directly solving the Sinkhorn barycenter problem in the probability space $\mathcal{M}_1^+(\mathcal{X})$, we reformulate it as a functional minimization over all mappings on $\mathcal{X}$:
\begin{equation}
\min_{\mathcal{P}} \Big( \mathcal{S}_{\gamma}(\mathcal{P}_\sharp\alpha_0) {:=}\frac{1}{n}\sum_{i=1}^{n}\mathbb{S}_\gamma(\mathcal{P}_\sharp\alpha_0, \beta_i) \Big),
\label{eqn_sinkhorn_barycenter_transform}
\end{equation}
where $\alpha_0\in\mathcal{M}_1^+(\mathcal{X})$ is some given initial measure, and $\mathcal{P}_\sharp\alpha$ is the push-forward measure of $\alpha\in\mathcal{M}_1^+(\mathcal{X})$ under the mapping $\mathcal{P}:\mathcal{X}\rightarrow\mathcal{X}$.
When $\alpha_0$ is sufficiently regular, e.g. absolutely continuous, for any $\alpha\in\mathcal{M}_1^+(\mathcal{X})$ there always exists a mapping $\mathcal{P}$ such that $\alpha = \mathcal{P}_\sharp\alpha_0$ (see Theorem 1.33 of \citep{ambrosio2013user}).
Consequently, problems \eqref{eqn_sinkhorn_barycenter_transform} and \eqref{eqn_sinkhorn_barycenter} are equivalent with appropriate initialization.
\paragraph{Algorithm Derivation.}
\begin{algorithm}[tb]
\caption{\sinkhorndescent (\texttt{SD})}
\label{alg:Sinkhonr_Descent_Finite_Particles}
\begin{algorithmic}
\STATE {\bfseries Input:} measures $\{\beta_i\}_{i=1}^n$, a discrete initial measure $\alpha^{0}$, a step size $\eta$, and the number of iterations $S$;
\STATE {\bfseries Output:} A measure $\alpha^{S}$ that approximates the Sinkhorn barycenter of $\{\beta_i\}_{i=1}^n$;
\FOR{$t = 0$ {\bfseries to} $S-1$}
\STATE $\alpha^{t+1} := \mathcal{T}[\alpha^{t}]_\sharp\alpha^{t}$, with $\mathcal{T}[\alpha^{t}]$ defined in \eqref{eqn_pushforward_mapping};
\ENDFOR
\end{algorithmic}
\end{algorithm}
For a probability measure $\alpha$, define the functional $\mathcal{S}_\alpha:\mathcal{H}^d\rightarrow\mathbb{R}$
\begin{equation} \label{eqn_functional_per_iteration}
\mathcal{S}_\alpha[\psi] = \mathcal{S}_{\gamma}\big({(\mathcal{I} + \psi)}_\sharp\alpha\big), \quad \psi\in\mathcal{H}^d.
\end{equation}
Here $\mathcal{I}$ is the identity mapping and $\mathcal{S}_{\gamma}$ is defined in \eqref{eqn_sinkhorn_barycenter}.
Let $\alpha^t$ be the estimate of the Sinkhorn barycenter at the $t^{\textrm{th}}$ iteration.
\sinkhorndescent (\texttt{SD}) iteratively updates the measure $\alpha^{t + 1}$ as
\begin{equation}
\alpha^{t+1} = {\mathcal{T}[\alpha^{t}]}_\sharp\alpha^{t}, \label{eqn_sequence_of_measures}
\end{equation}
via the push-forward mapping (with $\eta>0$ being a step-size)
\begin{equation} \label{eqn_pushforward_mapping}
\mathcal{T}[\alpha^{t}](x) = x - \eta\cdot D\mathcal{S}_{\alpha^{t}}[0](x).
\end{equation}
Recall that $D\mathcal{S}_\alpha[0]$ is the Fr\'echet derivative of $\mathcal{S}_\alpha$ at $\psi = 0$ (see Definition \ref{definition_variation_rkhs}).
Note that $(\mathcal{I}+\psi)_\sharp \alpha = \alpha$ when $\psi = 0$.
Our choice of the negative Fr\'echet derivative in $\mathcal{T}[\alpha^{t}]$ yields the steepest descent direction for the objective $\mathcal{S}_\gamma(\alpha)$ at the current measure $\alpha = \alpha^t$.
We outline the details of \texttt{SD} in Algorithm~\ref{alg:Sinkhonr_Descent_Finite_Particles}.
Consequently, a solution of \eqref{eqn_sinkhorn_barycenter_transform} is obtained by composing finitely many such mappings and then formally passing to the limit
$\mathcal{P} = \lim_{t\rightarrow\infty} \left(\mathcal{P}^t {:=} \mathcal{T}[\alpha^t]\circ\cdots\circ \mathcal{T}[\alpha^0] \right)$.
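Schematically, the \texttt{SD} iteration \eqref{eqn_sequence_of_measures}--\eqref{eqn_pushforward_mapping} amounts to the following particle update, where \texttt{frechet\_gradient} is a hypothetical routine (not part of our formal development) returning $D\mathcal{S}_{\alpha^t}[0]$ evaluated at the particle locations, for instance assembled from \eqref{eqn_gradient_sinkhorn_barycenter}:

```python
import numpy as np

def sinkhorn_descent(particles, frechet_gradient, eta=0.05, steps=100):
    """Sketch of SD: the discrete measure alpha^t is stored as particle
    locations; each step applies the push-forward x -> x - eta * D S[0](x).
    `frechet_gradient(particles)` is assumed to return D S_{alpha}[0]
    evaluated at each particle (an array of the same shape)."""
    x = np.array(particles, dtype=float)
    for _ in range(steps):
        x = x - eta * frechet_gradient(x)  # alpha^{t+1} = T[alpha^t]_# alpha^t
    return x
```

Since the update only moves the support points, the number of particles is preserved along the iteration.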
\begin{remark} \label{remark_RKHS}
We restrict $\psi$ in \eqref{eqn_functional_per_iteration} to the space $\mathcal{H}^d$ to avoid the inherent difficulty that arises when the perturbation of the Sinkhorn potentials introduced by the mapping $(\mathcal{I}+\psi)$ can no longer be properly bounded: for $\psi \in \mathcal{H}^d$, we always have the upper bound \eqref{eqn_RKHS_norm}, which is necessary in our convergence analysis.
This restriction will potentially introduce error to the minimization of \eqref{eqn_sinkhorn_barycenter_transform}.
However, this restriction is a common practice for general functional optimization problems:
Both \svgd \citep{liu2016stein} and \sod \citep{mroueh2019sobolev} explicitly make such an RKHS restriction on their transport mappings.
\citet{arbel2019maximum} construct the transport mapping using the witness function of the Maximum Mean Discrepancy (MMD), which also lies in an RKHS.
\end{remark}
In what follows, we first derive a formula for the Fr\'echet derivative $D\mathcal{S}_{\alpha^{t}}[0]$ (see \eqref{eqn_gradient_sinkhorn_barycenter}) and then explain how it is efficiently computed.
The proof of the next proposition requires additional continuity study of the Sinkhorn potentials and is deferred to Appendix \ref{proof_proposition_variation_I}.
\begin{proposition} \label{proposition_variation_I}
Recall the Fr\'echet derivative in Definition \ref{definition_variation_rkhs}.
Given $\alpha,\beta\in\mathcal{M}_1^+(\mathcal{X})$, for $\psi\in\mathcal{H}^d$ denote $F_1[\psi] = \OTgamma\big({(\mathcal{I}+\psi)}_\sharp\alpha, \beta\big)$ and $F_2[\psi] = \OTgamma\big({(\mathcal{I}+\psi)}_\sharp\alpha, {(\mathcal{I}+\psi)}_\sharp\alpha\big)$.
Under Assumptions \ref{ass_c} and \ref{ass_k} (described below), we can compute
\begin{align}
DF_1[0](y) = \int_\mathcal{X} \nabla f_{\alpha,\beta}(x) k(x, y) \mathbf{d} \alpha(x), \quad
DF_2[0](y) = 2\int_\mathcal{X} \nabla f_{\alpha,\alpha}(x) k(x, y) \mathbf{d} \alpha(x), \label{eqn_proposition_variation_I}
\end{align}
where $\nabla f_{\alpha,\beta}$ and $\nabla f_{\alpha,\alpha}$ are the gradients of the Sinkhorn potentials of $\OTgamma(\alpha, \beta)$ and $\OTgamma(\alpha, \alpha)$ respectively, and $k$ is the kernel function of the RKHS $\mathcal{H}$.
\end{proposition}
Consequently, the \FD of the functional \eqref{eqn_functional_per_iteration} at $\psi = 0$ can be computed by
\begin{align}
D\mathcal{S}_{\alpha}[0](y)=\int_\mathcal{X} \Big[\frac{1}{n}\sum_{i=1}^{n}\nabla f_{\alpha,\beta_i}(x) -
\nabla f_{\alpha, \alpha}(x)\Big] k(x, y) \mathbf{d} \alpha(x). \label{eqn_gradient_sinkhorn_barycenter}
\end{align}
This quantity can be computed efficiently when $\alpha$ is discrete: Consider an individual term $\nabla f_{\alpha,\beta}$.
Define
$h(x, y) {:=} \exp\left(\frac{1}{\gamma}(f_{\alpha,\beta}(x) + \mathcal{A}(f_{\alpha,\beta}, \alpha)(y) - c(x, y))\right)$. Lemma \ref{lemma_optimality_sinkhorn_potential} implies \[\int_\mathcal{X} h(x, y)\mathbf{d}\beta(y) = 1.\]
Taking derivative with respect to $x$ on both sides and rearranging terms, we have
\begin{align}
\nabla f_{\alpha,\beta}(x) = \frac{\int_\mathcal{X} h(x, y) \nabla_x c(x, y) \mathbf{d}\beta(y)}{\int h(x, y)\mathbf{d}\beta(y)} = \int_\mathcal{X} h(x, y)\nabla_x c(x, y) \mathbf{d}\beta(y) \label{eqn_sinkhorn_potential_gradient_x},
\end{align}
which itself is an expectation.
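For completeness, the differentiation step can be spelled out: since $x$ enters $h(x, y)$ only through $f_{\alpha,\beta}(x) - c(x, y)$, differentiating $\int_\mathcal{X} h(x, y)\mathbf{d}\beta(y) = 1$ with respect to $x$ gives

```latex
\begin{align*}
0 = \nabla_x \int_\mathcal{X} h(x, y)\,\mathbf{d}\beta(y)
  = \frac{1}{\gamma}\int_\mathcal{X} h(x, y)\big(\nabla f_{\alpha,\beta}(x) - \nabla_x c(x, y)\big)\,\mathbf{d}\beta(y),
\end{align*}
```

and isolating $\nabla f_{\alpha,\beta}(x)$, using $\int_\mathcal{X} h(x, y)\mathbf{d}\beta(y) = 1$ once more, yields \eqref{eqn_sinkhorn_potential_gradient_x}.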
Note that to evaluate \eqref{eqn_gradient_sinkhorn_barycenter}, we only need $\nabla f_{\alpha,\beta}(x)$ on $\mathbf{supp}(\alpha)$.
Using $\mathcal{SP}_{\gamma}(\alpha,\beta)$ (see the end of Section \ref{section_sinkhorn_barycenter}), the function value of $f_{\alpha,\beta}$ on $\mathbf{supp}(\alpha)$ can be efficiently computed.
Together with the expression in \eqref{eqn_sinkhorn_potential_gradient_x}, the gradients $\nabla f_{\alpha,\beta}(x)$ at $x\in \mathbf{supp}(\alpha)$ can also be obtained by a simple Monte-Carlo integration with respect to $\beta$.
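As an illustration, this Monte-Carlo estimator of $\nabla f_{\alpha,\beta}$ on $\mathbf{supp}(\alpha)$ can be written in a few lines of Python. The sketch below assumes $c(x, y) = \|x - y\|^2$, so that $\nabla_x c(x, y) = 2(x - y)$, and takes as input potential values \texttt{f} on $\mathbf{supp}(\alpha)$ (e.g. produced by $\mathcal{SP}_{\gamma}(\alpha,\beta)$):

```python
import numpy as np

def grad_potential_mc(Xa, wa, f, Ysamples, gamma=0.5):
    """Monte-Carlo estimate of grad f_{alpha,beta} on supp(alpha), assuming
    c(x,y) = ||x-y||^2 so that grad_x c(x,y) = 2(x-y).  `f` holds the
    potential values of f_{alpha,beta} on the support points Xa of alpha."""
    # A(f, alpha)(y) for each sampled y, via logsumexp over supp(alpha)
    Cay = ((Xa[:, None, :] - Ysamples[None, :, :]) ** 2).sum(-1)   # (m, n)
    Afy = -gamma * np.logaddexp.reduce(
        np.log(wa)[:, None] + (f[:, None] - Cay) / gamma, axis=0)  # (n,)
    # h(x_i, y_j) and the Monte-Carlo average of h * grad_x c
    h = np.exp((f[:, None] + Afy[None, :] - Cay) / gamma)          # (m, n)
    grad_c = 2.0 * (Xa[:, None, :] - Ysamples[None, :, :])         # (m, n, d)
    return (h[:, :, None] * grad_c).mean(axis=1)                   # (m, d)
```

When $\alpha$ is a single atom, $h \equiv 1$ and the estimator reduces to the empirical mean of $\nabla_x c$, which provides a quick sanity check.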
In its broadest sense, the term Ramsey theory refers to any mathematical statement which says that a structure of a given kind is guaranteed to contain a large well-organised substructure. There are examples of such statements in many areas, including geometry, number theory, logic and analysis. For example, a key ingredient in the proof of the Bolzano--Weierstrass theorem in real analysis is a lemma showing that any infinite sequence must contain an infinite monotone subsequence.
A classic example from number theory, proved by van der Waerden \cite{vdW27} in 1927, says that if the natural numbers are coloured in any fixed number of colours then one of the colour classes contains arbitrarily long arithmetic progressions. This result has many generalisations. The most famous, due to Szemer\'edi \cite{Sz75}, says that any subset of the natural numbers of positive upper density contains arbitrarily long arithmetic progressions. Though proved in 1975, the influence of this result is still being felt today. For example, it was a key ingredient in Green and Tao's proof~\cite{GT08} that the primes contain arbitrarily long arithmetic progressions.
Though there are many further examples from across mathematics, our focus in this survey will be on graph Ramsey theory. The classic theorem in this area, from which Ramsey theory as a whole derives its name, is Ramsey's theorem \cite{R30}. This theorem says that for any graph $H$ there exists a natural number $N$ such that any two-colouring of the edges of $K_N$ contains a monochromatic copy of $H$. The smallest such $N$ is known as the {\it Ramsey number} of $H$ and is denoted $r(H)$. When $H = K_t$, we simply write $r(t)$.
Though Ramsey proved his theorem in 1930 and clearly holds precedence in the matter, it was a subsequent paper by Erd\H{o}s and Szekeres \cite{ES35} which brought the matter to a wider audience. Amongst other things, Erd\H{o}s and Szekeres were the first to give a reasonable estimate on Ramsey numbers.\footnote{Ramsey's original paper mentions the bound $r(t) \leq t!$, but he does not pursue the matter further. It is an amusing exercise to find a natural proof that gives exactly this bound.} To describe their advance, we define the {\it off-diagonal Ramsey number} $r(H_1, H_2)$ as the smallest natural number $N$ such that any red/blue-colouring of the edges of $K_N$ contains either a red copy of $H_1$ or a blue copy of $H_2$. If we write $r(s, t)$ for $r(K_s, K_t)$, then what Erd\H{o}s and Szekeres proved is the bound
\[r(s, t) \leq \binom{s + t - 2}{s-1}.\]
For $s = t$, this yields $r(t) = O(\frac{4^t}{\sqrt{t}})$, while if $s$ is fixed, it gives $r(s, t) \leq t^{s-1}$. Over the years, much effort has been expended on improving these bounds or showing that they are close to tight, with only partial success. However, these problems have been remarkably influential in combinatorics, playing a key role in the development of random graphs and the probabilistic method, as well as the theory of quasirandomness (see \cite{AS}). We will highlight some of these connections in Section~\ref{sec:completegraphs} when we discuss the current state of the art on estimating $r(s, t)$.
If we move away from complete graphs, a number of interesting phenomena start to appear. For example, a famous result of Chv\'atal, R\"odl, Szemer\'edi and Trotter \cite{CRST83} says that if $H$ is a graph with $n$ vertices and maximum degree $\Delta$, then the Ramsey number $r(H)$ is bounded by $c(\Delta) n$ for some constant $c(\Delta)$ depending only on $\Delta$. That is, the Ramsey number of bounded-degree graphs grows linearly in the number of vertices. This and related developments will be discussed in Section~\ref{sec:sparse}, while other aspects of Ramsey numbers for general $H$ will be explored in Sections~\ref{sec:edges}, \ref{sec:goodness} and \ref{sec:mult}.
In full generality, Ramsey's theorem applies not only to graphs but also to $k$-uniform hypergraphs. Formally, a {\it $k$-uniform hypergraph} is a pair $H = (V, E)$, where $V$ is a collection of vertices and $E$ is a collection of subsets of $V$, each of order $k$. We write $K_N^{(k)}$ for the complete $k$-uniform hypergraph on $N$ vertices, that is, $V$ has order $N$ and $E$ contains all subsets of $V$ of order $k$.
The full statement of Ramsey's theorem, which also allows for more than two colours, now says that for any natural number $q \geq 2$ and any $k$-uniform hypergraphs $H_1, \dots, H_q$ there exists a natural number $N$ such that any $q$-colouring of the edges of $K_N^{(k)}$ contains a copy of $H_i$ in the $i$th colour for some $i$. The smallest such $N$ is known as the {\it Ramsey number} of $H_1, \dots, H_q$ and is denoted $r_k(H_1, \dots, H_q)$. If $H_i = K_{t_i}^{(k)}$ for each $i$, we write $r_k(t_1, \dots, t_q)$. Moreover, if $H_1 = \dots = H_q = H$, we simply write $r_k(H; q)$, which we refer to as the {\it $q$-colour Ramsey number} of $H$. If $H = K_t^{(k)}$, we write $r_k(t; q)$. If either $k$ or $q$ is equal to two, it is omitted.
Even for complete $3$-uniform hypergraphs, the growth rate of the Ramsey number is not well understood. Indeed, it is only known that
\[2^{c' t^2} \leq r_3(t) \leq 2^{2^{c t}}.\]
Determining the correct asymptotic for this function is of particular importance, since it is known that an accurate estimate for $r_3(t)$ would imply an accurate estimate on $r_k(t)$ for all $k \geq 4$. This and related topics will be discussed in depth in Section~\ref{sec:hypergraphs}, though we will make reference to hypergraph analogues of graph Ramsey problems throughout the survey. As we will see, these questions often throw up new and interesting behaviour which is strikingly different from the graph case.
While our focus in Section~\ref{sec:classical} will be on the classical Ramsey function, we will move on to discussing a number of variants in Section~\ref{sec:variants}. These variants include well-established topics such as induced Ramsey numbers and size Ramsey numbers, as well as a number of more recent themes such as ordered Ramsey numbers. We will not try to give a summary of these variants here, instead referring the reader to the individual sections, each of which is self-contained.
We should note that this paper is not intended to serve as an exhaustive survey of the subject. Instead, we have focused on those areas which are most closely related to our own interests. For the most part, this has meant that we have treated problems of an asymptotic nature rather than being concerned with the computation of exact Ramsey numbers.\footnote{For this and more, we refer the reader to the excellent dynamic survey of Radziszowski \cite{R14}.} Even with this caveat, it has still been necessary to gloss over a number of interesting topics. We apologise in advance for any particularly glaring omissions.
We will maintain a number of conventions throughout the paper. For the sake of clarity of presentation, we will sometimes omit floor and ceiling signs when they are not crucial. Unless specified otherwise, we use $\log$ to denote the logarithm taken to the base two. We will use standard asymptotic notation with a subscript indicating that the implied constant may depend on that subscript. All other notation will be explained in the relevant sections.
\section{The classical problem} \label{sec:classical}
\subsection{Complete graphs} \label{sec:completegraphs}
As already mentioned in the introduction, the classical bound on Ramsey numbers for complete graphs is the Erd\H{o}s--Szekeres bound
\[r(s, t) \leq \binom{s + t - 2}{s-1}.\]
In particular, for $s = t$, this gives $r(t) = O(\frac{4^t}{\sqrt{t}})$. The proof of the Erd\H{o}s--Szekeres bound relies on the simple inequality
\[r(s, t) \leq r(s, t-1) + r(s-1, t).\]
To prove this inequality, consider a red/blue-colouring of the edges of $K_{r(s,t)-1}$ containing no red copy of $K_s$ and no blue copy of $K_t$. The critical observation is that the red degree of every vertex, that is, the number of neighbours in red, is at most $r(s-1, t) - 1$. Indeed, if the red neighbourhood of any vertex $v$ contained $r(s-1,t)$ vertices, it would contain either a blue $K_t$, which would contradict our choice of colouring, or a red $K_{s-1}$, which together with $v$ would form a red $K_s$, again a contradiction. Similarly, the blue degree of every vertex is at most $r(s, t-1) - 1$. Since the union of any particular vertex with its red and blue neighbourhoods is the entire vertex set, we see that
\[r(s, t) - 1 \leq 1 + (r(s-1, t) - 1) + (r(s, t-1) - 1).\]
The required inequality follows.
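The recursion, together with the base cases $r(2, t) = t$ and $r(s, 2) = s$, reproduces the Erd\H{o}s--Szekeres binomial bound exactly, which is easy to verify numerically (an illustrative Python check, not part of the proof):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ramsey_upper(s, t):
    """Upper bound on r(s, t) obtained from r(s,t) <= r(s,t-1) + r(s-1,t),
    with base cases r(2, t) = t and r(s, 2) = s."""
    if s == 2:
        return t
    if t == 2:
        return s
    return ramsey_upper(s, t - 1) + ramsey_upper(s - 1, t)
```

By Pascal's rule and the base cases, `ramsey_upper(s, t)` equals $\binom{s+t-2}{s-1}$ for all $s, t \geq 2$.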
The key observation here, that in any graph containing neither a red $K_s$ nor a blue $K_t$ the red degree of any vertex is less than $r(s-1, t)$ and the blue degree is less than $r(s, t-1)$, may be generalised. Indeed, an argument almost exactly analogous to that above shows that in any graph containing neither a red $K_s$ nor a blue $K_t$, any red edge must be contained in fewer than $r(s-2, t)$ red triangles and any blue edge must be contained in fewer than $r(s, t-2)$ blue triangles. Indeed, if a red edge $uv$ were contained in at least $r(s-2, t)$ red triangles, then the set $W$ of vertices $w$ joined to both $u$ and $v$ in red would have order at least $r(s-2, t)$. If this set contained a blue $K_t$, we would have a contradiction, so the set must contain a red $K_{s-2}$. But the union of this clique with $u$ and $v$ forms a $K_s$, again a contradiction. Together with Goodman's formula~\cite{G59} for the number of monochromatic triangles in a two-colouring of $K_N$, this observation may be used to show that
\[r(t, t) \leq 4 r(t, t-2) + 2.\]
Using the idea behind this inequality, Thomason \cite{T88} was able to improve the upper bound for diagonal Ramsey numbers to $r(t) = O(\frac{4^t}{t})$, improving an earlier result of R\"odl \cite{GR}, who was the first to show that $r(t) = o(\frac{4^t}{\sqrt{t}})$.
As the observant reader may already have noted, the argument of the previous paragraph is itself a special case of the following observation.
\begin{observation} \label{obs:key}
In any graph containing neither a red $K_s$ nor a blue $K_t$, any red copy of $K_p$ must be contained in fewer than $r(s-p, t)$ red copies of $K_{p+1}$ and any blue copy of $K_p$ must be contained in fewer than $r(s, t-p)$ blue copies of $K_{p+1}$.
\end{observation}
By using this additional information, Conlon~\cite{C09} was able to give the following superpolynomial improvement on the Erd\H{o}s--Szekeres bound.
\begin{theorem} \label{thm:daviddiag}
There exists a positive constant $c$ such that
\[r(t) \leq t^{-c \log t/\log \log t} 4^t.\]
\end{theorem}
In broad outline, the proof of Theorem~\ref{thm:daviddiag} proceeds by using the $p = 1$ and $p = 2$ cases of Observation~\ref{obs:key} to show that any red/blue-colouring of the edges of a complete graph with at least $t^{-c \log t/\log \log t} 4^t$ vertices which contains no monochromatic $K_t$ is quasirandom. Through a delicate counting argument, this is then shown to contradict Observation~\ref{obs:key} for $p$ roughly $\log t/\log \log t$.
The first significant lower bound for the diagonal Ramsey number $r(t)$ was proved by Erd\H{o}s \cite{E47} in 1947. This was one of the first applications of the probabilistic method and most introductions to this beautiful subject begin with his simple argument. Though we run the risk of being repetitious, we will also include this argument.
Colour the edges of the complete graph $K_N$ randomly. That is, we colour each edge red with probability $1/2$ and blue with probability $1/2$. Since the probability that a given copy of $K_t$ has all edges red is $2^{-\binom{t}{2}}$, the expected number of red copies of $K_t$ in this graph is $2^{-\binom{t}{2}} \binom{N}{t}$. Similarly, the expected number of blue copies of $K_t$ is $2^{-\binom{t}{2}} \binom{N}{t}$. Therefore, the expected number of monochromatic copies of $K_t$ is
\[2^{1-\binom{t}{2}} \binom{N}{t} \leq 2^{1 - t(t-1)/2} \left(\frac{eN}{t}\right)^t.\]
For $N = (1 - o(1)) \frac{t}{\sqrt{2} e} \sqrt{2}^t$, we see that this expectation is less than one. Therefore, there must be some colouring of $K_N$ for which there are no monochromatic copies of $K_t$. This bound,
\[r(t) \geq (1 - o(1)) \frac{t}{\sqrt{2} e} \sqrt{2}^t,\]
has been astonishingly resilient to improvement. Since 1947, there has only been one noteworthy improvement. This was achieved by Spencer~\cite{S75}, who used the Lov\'asz local lemma to show that
\[r(t) \geq (1 - o(1)) \frac{\sqrt{2} t}{e} \sqrt{2}^t.\]
That is, he improved Erd\H{o}s' bound by a factor of two! Any further improvement to this bound, no matter how tiny, would be of significant interest.
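The first-moment computation above is easy to carry out explicitly for small $t$; the following illustrative Python routine returns the largest $N$ for which the expected number of monochromatic copies of $K_t$ in a random two-colouring of $K_N$ is below one, so that $r(t) > N$:

```python
from math import comb

def erdos_lower_bound(t):
    """Largest N with 2^{1 - C(t,2)} * C(N, t) < 1, so r(t) > N.
    (The expectation is increasing in N; valid starting point for t >= 3.)"""
    N = t
    while comb(N + 1, t) * 2 ** (1 - comb(t, 2)) < 1:
        N += 1
    return N
```

For instance, for $t = 4$ the argument only certifies $r(4) > 6$, far below the true value $r(4) = 18$, which illustrates how lossy the union bound is for small $t$ even though it gives the right exponential order.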
\begin{problem}
Does there exist a positive constant $\epsilon$ such that
\[r(t) \geq (1 + \epsilon) \frac{\sqrt{2} t}{e} \sqrt{2}^t\]
for all sufficiently large $t$?
\end{problem}
For off-diagonal Ramsey numbers, where $s$ is fixed and $t$ tends to infinity, the Erd\H{o}s--Szekeres bound shows that $r(s, t) \leq t^{s-1}$. In 1980, this bound was improved by Ajtai, Koml\'os and Szemer\'edi~\cite{AKS80}, who proved that for any $s$ there exists a constant $c_s$ such that
\[r(s,t) \leq c_s \frac{t^{s-1}}{(\log t)^{s-2}}.\]
When $s = 3$, this follows from the statement that any triangle-free graph on $N$ vertices with average degree $d$ contains an independent set of order $\Omega(\frac{N}{d} \log d)$. Indeed, in a triangle-free graph, the neighbourhood of every vertex forms an independent set, so if the graph contains no independent set of order $t$, then $d < t$. But then the graph must contain an independent set of order $\Omega(\frac{N}{t} \log t)$ and, hence, for $c$ sufficiently large and $N \geq c t^2/\log t$, the graph contains an independent set of order $t$.
For $s = 3$, this result was shown to be sharp up to the constant by Kim \cite{K95}. That is, he showed that there exists a positive constant $c'$ such that
\[r(3, t) \geq c' \frac{t^2}{\log t}.\]
This improved on earlier work of Erd\H{o}s \cite{E61}, who used an intricate probabilistic argument to show that $r(3, t) \geq c' (t/\log t)^2$, a result which was subsequently reproved using the local lemma \cite{S77}.
Kim's proof of this bound was a landmark application of the so-called semi-random method. Recently, an alternative proof was found by Bohman \cite{B09} using the triangle-free process. This is a stochastic graph process where one starts with the empty graph on $N$ vertices and adds one edge at a time to create a graph. At each step, we randomly select an edge which is not in the graph and add it to the graph if and only if it does not complete a triangle. The process runs until every non-edge is contained in a triangle. By analysing the independence number of the resulting graph, Bohman was able to reprove Kim's bound. More recently, Bohman and Keevash~\cite{BK14} and, independently, Fiz Pontiveros, Griffiths and Morris~\cite{FGM14}
gave more precise estimates for the running time of the triangle-free process and as a consequence proved the following result.
\begin{theorem}
\[r(3, t) \geq \left(\frac{1}{4} - o(1)\right) \frac{t^2}{\log t}.\]
\end{theorem}
This is within an asymptotic factor of $4$ of the best upper bound, due to Shearer~\cite{Sh83}, who showed that
\[r(3,t) \leq (1 + o(1)) \frac{t^2}{\log t}.\]
This is already a very satisfactory state of affairs, though it would be of great interest to improve either bound further.
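The triangle-free process itself is straightforward to simulate: processing all potential edges in a uniformly random order and greedily adding those that close no triangle produces the same distribution as the step-by-step description above. A small illustrative Python simulation (not intended to reproduce the quantitative analysis):

```python
import random

def triangle_free_process(n, seed=0):
    """Run the triangle-free process on n vertices: consider the potential
    edges in a uniformly random order, adding each one unless it would close
    a triangle.  The result is triangle-free and edge-maximal."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    candidates = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(candidates)  # random order = drawing random non-edges
    for u, v in candidates:
        if not (adj[u] & adj[v]):      # uv closes no triangle
            adj[u].add(v)
            adj[v].add(u)
    return adj
```

Edge-maximality means every non-edge has a common neighbour, which is exactly the stopping condition of the process.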
For general $s$, the best lower bound is due to Bohman and Keevash \cite{BK10} and uses the analogous $K_s$-free process. Their analysis shows that for any $s$ there exists a positive constant $c'_s$ such that
\[r(s, t) \geq c'_s \frac{t^{\frac{s+1}{2}}}{(\log t)^{\frac{s+1}{2} - \frac{1}{s-2}}}.\]
Even for $s = 4$, there is a polynomial difference between the upper and lower bounds. Bringing these bounds closer together remains one of the most tantalising open problems in Ramsey theory.
Before concluding this section, we say a little about the multicolour generalisations of these problems. An easy extension of the Erd\H{o}s--Szekeres argument gives an upper bound for the multicolour diagonal Ramsey number of the form $r(t; q) \leq q^{q t}$. On the other hand, an elementary product argument shows that, for any positive integers $p$ and $d$, we have $r(t; pd) > (r(t; p)-1)^d$. In particular, taking $p=2$, we see that $r(t; q) > (r(t; 2)-1)^{q/2} > 2^{qt/4}$ for $q$ even and $t \geq 3$. To prove the bound, suppose that $\chi$ is a $p$-colouring of the edges of the complete graph on vertex set $[r(t; p)-1] = \{1, 2, \dots, r(t;p) -1\}$ with no monochromatic $K_t$ and consider the lexicographic $d^{\textrm{th}}$ power of $\chi$. This is a $pd$-colouring of the edges of the complete graph with vertex set $[r(t;p)-1]^d$ such that the colour of the edge between two distinct vertices $(u_1,\ldots,u_d)$ and $(v_1,\ldots,v_d)$ is $(i,\chi(u_i,v_i))$, where $i$ is the first coordinate for which $u_i \neq v_i$. It is easy to check that this colouring contains no monochromatic $K_t$. Since the set has $(r(t; p)-1)^d$ vertices, the result follows.
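The lexicographic power construction is simple enough to implement and check directly. The following illustrative Python snippet builds the $d^{\textrm{th}}$ power of a base $p$-colouring; squaring the classical two-colouring of $K_5$ by the $5$-cycle and its complement, for instance, gives a $4$-colouring of $K_{25}$ with no monochromatic triangle:

```python
from itertools import combinations, product

def lex_power_colouring(chi, m, d):
    """Lexicographic d-th power of a colouring chi of K_m: vertices are
    tuples in [m]^d; the colour of an edge is (i, chi(u_i, v_i)) for the
    first coordinate i at which the endpoints differ."""
    def colour(u, v):
        i = next(i for i in range(d) if u[i] != v[i])
        return (i, chi(u[i], v[i]))
    return list(product(range(m), repeat=d)), colour

# Base colouring: edges of the 5-cycle in one colour, its complement in the
# other -- the classical witness that r(3) > 5.
chi5 = lambda u, v: 0 if (u - v) % 5 in (1, 4) else 1
```

A monochromatic clique in colour $(i, c)$ forces its vertices to differ pairwise in coordinate $i$ and to span a monochromatic clique of colour $c$ in the base colouring, which is the heart of the argument in the text.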
The key question in the multicolour case is to determine the dependence on the number of colours. Even for $t = 3$, we only know that
\[2^{c' q} \leq r(3; q) \leq c q!,\]
where $c \leq e$ and $c' \geq 1$ are constants whose values have each been improved a little over time. It is a major open problem to improve these bounds by a more significant factor.
In the off-diagonal case, less seems to be known, but we would like to highlight one result. While it is easy to see that
\[r(\underbrace{K_3, \dots, K_3,}_{q-1} K_t) = O(t^q),\]
it was an open question for many years to even show that the ratio $r(K_3, K_3, K_t)/r(K_3, K_t)$ tends to infinity with $t$. Alon and R\"odl \cite{AR05} solved this problem in a strong form by showing that the bound quoted above is tight up to logarithmic factors for all $q$. Their elegant construction involves overlaying a collection of random shifts of a sufficiently pseudorandom triangle-free graph.
\subsection{Complete hypergraphs} \label{sec:hypergraphs}
Although there are already significant gaps between the lower and upper bounds for graph Ramsey numbers, our knowledge of hypergraph Ramsey numbers is even weaker. Recall that $r_k(s,t)$ is the minimum $N$ such that every red/blue-colouring of the $k$-tuples of an $N$-element set contains a red $K_s^{(k)}$ or a blue $K_t^{(k)}$. While a naive extension of the Erd\H{o}s--Szekeres argument gives extremely poor bounds for hypergraph Ramsey numbers when $k \geq 3$, a more careful induction, discovered by Erd\H{o}s and Rado \cite{ER52}, allows one to bound Ramsey numbers for $k$-uniform hypergraphs using estimates for the Ramsey number of $(k-1)$-uniform hypergraphs. Quantitatively, their result says the following.
\begin{theorem}\label{erdosrado}
$r_k(s, t) \leq 2^{{r_{k-1}(s-1, t-1) \choose k-1}} + k - 2$.
\end{theorem}
Together with the standard exponential upper bound on $r(t)$, this shows that $r_3(t) \leq 2^{2^{c t}}$ for some constant $c$. On the other hand, by considering a random two-colouring of the edges of $K_N^{(k)}$, Erd\H{o}s, Hajnal and Rado \cite{EHR65} showed that there is a positive constant $c'$ such that $r_3(t) \geq 2^{c' t^2}$. However, they conjectured that the upper bound is closer to the truth and Erd\H{o}s later offered a \$500 reward for a proof.
\begin{conjecture}
There exists a positive constant $c'$ such that $$r_3(t) \geq 2^{2^{c' t}}.$$
\end{conjecture}
Fifty years after the work of Erd\H{o}s, Hajnal and Rado, the bounds for $r_3(t)$ still differ by an exponential. Similarly, for $k \geq 4$, there is a difference of one exponential between the known upper and lower bounds for $r_k(t)$, our best bounds being
$$t_{k-1}(c' t^2) \leq r_k(t) \leq t_k(c t),$$
where the tower function $t_k(x)$ is defined by $t_1(x)=x$ and $t_{i+1}(x)=2^{t_i(x)}$. The upper bound here is a straightforward consequence of Theorem~\ref{erdosrado}, while the lower bound follows from an ingenious construction of Erd\H{o}s and Hajnal known as the stepping-up lemma (see, e.g., Chapter 4.7 in \cite{GRS90}). This allows one to construct lower bound colourings for uniformity $k+1$ from colourings for uniformity $k$, effectively gaining an extra exponential each time it is applied. Unfortunately, the smallest $k$ for which it works is $k=3$. However, if we could prove that $r_3(t)$ is double exponential in $t$, this would automatically close the gap between the upper and lower bounds
for $r_k(t)$ for all uniformities $k$.
For more than two colours, the problem becomes easier and Erd\H{o}s and Hajnal (see \cite{GRS90}) were able to construct a $4$-colouring of the triples of a set of double-exponential size which does not contain a monochromatic clique of order $t$. By a standard extension of the Erd\H{o}s--Rado upper bound to more than two colours, this result is sharp.
\begin{theorem}
There exists a positive constant $c'$ such that
$$r_3(t;4) \geq 2^{2^{c' t}}.$$
\end{theorem}
We will now sketch this construction, since it is a good illustration of how the stepping-up lemma works. Let $m = 2^{(t-1)/2}$ and suppose we are given a red/blue-colouring $\chi$ of the edges of $K_m$ with no monochromatic clique of order $t-1$ (in Section~\ref{sec:completegraphs}, we showed that such a colouring exists). Let $N = 2^m$ and consider the set of all binary strings of length $m$, where each string corresponds to the binary representation of an integer between $0$ and $N-1$. For any two strings $x$ and $y$, let $\delta(x,y)$ be the largest index in which they differ. Note that if $x<y<z$ (as numbers), then we have that $\delta(x,y) \not = \delta(y,z)$ and $\delta(x,z)$ is the maximum of $\delta(x,y)$ and $\delta(y,z)$. More generally, if $x_1<\cdots< x_t$, then $\delta(x_1,x_t)=\max_i \delta(x_i,x_{i+1})$. Given vertices $x<y<z$ with $\delta_1=\delta(x,y)$ and $\delta_2=\delta(y,z)$, we let the colour of $(x, y, z)$ be
\begin{itemize}
\item
$A$ if $\delta_1 < \delta_2$ and $\chi(\delta_1, \delta_2) =$ red;
\item
$B$ if $\delta_1 < \delta_2$ and $\chi(\delta_1, \delta_2) =$ blue;
\item
$C$ if $\delta_1 > \delta_2$ and $\chi(\delta_1, \delta_2) =$ red;
\item
$D$ if $\delta_1 > \delta_2$ and $\chi(\delta_1, \delta_2) =$ blue.
\end{itemize}
Suppose now that $x_1<\cdots< x_t$ is a monochromatic set in colour $A$ (the other cases are similar) and let $\delta_i=\delta(x_i,x_{i+1})$. We claim that $\delta_1,\ldots, \delta_{t-1}$ form a red clique in the original colouring of $K_m$, which is a contradiction. Indeed, since $(x_i, x_{i+1}, x_{i+2})$ has colour $A$, we must have that $\delta_i < \delta_{i+1}$ for all $i$. Therefore, $\delta_1< \dots <\delta_{t-1}$ and $\delta(x_{i+1},x_{j+1})=\delta(x_j,x_{j+1})=\delta_j$ for all $i < j$. Since the colour of the triple $(x_i,x_{i+1},x_{j+1})$ is determined by the colour of $(\delta_i,\delta_j)$, this now tells us that $\chi(\delta_i,\delta_j)$ is red for all $i < j$, as required.
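The two properties of $\delta$ that drive the stepping-up argument are easy to confirm computationally. The following short check (our illustration) verifies them by brute force over all triples of $6$-bit strings.

```python
def delta(x, y, m):
    """Largest index (bit position) at which the m-bit binary
    representations of x and y differ; x and y must be distinct."""
    return max(i for i in range(m) if (x >> i) & 1 != (y >> i) & 1)

m = 6
N = 2 ** m
for x in range(N):
    for y in range(x + 1, N):
        for z in range(y + 1, N):
            d1, d2 = delta(x, y, m), delta(y, z, m)
            # The two facts used above: consecutive deltas differ, and
            # delta(x, z) is the maximum of delta(x, y) and delta(y, z).
            assert d1 != d2
            assert delta(x, z, m) == max(d1, d2)
print(f"delta properties hold for all triples of {m}-bit strings")
```
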
For the intermediate case of three colours, Erd\H{o}s and Hajnal \cite{EH89} made a small improvement on the lower bound of $2^{c' t^2}$, showing that $r_3(t;3) \geq 2^{c' t^2 \log^2 t}$. Extending the stepping-up approach described above, the authors \cite{CFS10} improved this bound as follows, giving a strong indication that $r_3(t; 3)$ is indeed double exponential.
\begin{theorem} \label{threecolour}
There exists a positive constant $c'$ such that
$$r_3(t;3) \geq 2^{t^{c' \log t}}.$$
\end{theorem}
Though Erd\H{o}s \cite{CG98, E90} believed that $r_3(t)$ is closer to $2^{2^{c' t}}$, he and Hajnal \cite{EH89} discovered the following interesting fact which they thought might indicate the opposite. They proved that there are positive constants $c$ and $\epsilon$ such that every two-colouring of the triples of an $N$-element set contains a subset $S$ of order $s \geq c(\log N)^{1/2}$ such that at least $(1/2+\epsilon){s \choose 3}$ triples of $S$ have the same colour. That is, the density of each colour deviates from $1/2$ by at least some fixed positive constant.
In the graph case, a random colouring of the edges of $K_N$ has the property that every subset of order $\omega(\log N)$ has roughly the same number of edges in both colours. That is, the Ramsey problem and the discrepancy problem have similar quantitative behaviour. Because of this, Erd\H{o}s~\cite{E94} remarked that he would begin to doubt that $r_3(t)$ is double exponential in $t$ if one could prove that any two-colouring of the triples of an $N$-set contains some set of order $s=c(\epsilon)(\log N)^{\delta}$ for which at least $(1-\epsilon){s \choose 3}$ triples have the same colour, where $\delta>0$ is an absolute constant and $\epsilon>0$ is arbitrary. Erd\H{o}s and Hajnal proposed \cite{EH89} that such a statement may even be true with $\delta = 1/2$, which would be tight up to the constant factor $c$. The following result, due to the authors \cite{CFS11}, shows that this is indeed the case.
\begin{theorem}\label{discrepancy}
For each $\epsilon > 0$, there is $c=c(\epsilon)>0$ such that
every two-colouring of the triples of an $N$-element set contains a
subset $S$ of order $s=c\sqrt{\log N}$ such that at least $(1-\epsilon){s \choose 3}$ triples of $S$ have the same colour.
\end{theorem}
Unlike Erd\H{o}s, we do not feel that this result suggests that the growth of $r_3(t)$ is smaller than double exponential. Indeed, this theorem also holds for any fixed number of colours $q$ but, for $q \geq 4$, the hypergraph Ramsey number does grow as a double exponential. That is, the $q$-colour analogue of Theorem~\ref{discrepancy} shows that the largest almost monochromatic subset in a $q$-colouring of the triples of an $N$-element set is much larger than the largest monochromatic subset. This is in striking contrast to graphs, where we have already remarked that the two quantities have the same order of magnitude.
It would be very interesting to extend Theorem \ref{discrepancy} to higher uniformities. In \cite{CFS10}, the authors proved that for all $k, q$ and $\epsilon>0$ there is $\delta=\delta(k,q,\epsilon)>0$ such that every $q$-colouring of the $k$-tuples of an $N$-element set contains a subset of order $s=(\log N)^{\delta}$ which contains at least $(1-\epsilon){s \choose k}$ $k$-tuples of the same colour. Unfortunately, $\delta$ here depends on $\epsilon$. On the other hand, this result could hold with $\delta =1/(k-1)$ (which is the case for $k=3$).
\begin{problem}
Is it true that for any $k \geq 4$ and $\epsilon > 0$ there exists $c = c(k, \epsilon) > 0$ such that every two-colouring of the $k$-tuples of an $N$-element set contains a subset $S$ of order $s=c(\log N)^{1/(k-1)}$ such that at least $(1-\epsilon){s \choose k}$ $k$-tuples of $S$ have the same colour?
\end{problem}
Another wide open problem is that of estimating off-diagonal Ramsey numbers for hypergraphs. Progress on this question was slow and for several decades the best known bound was that obtained by Erd\H{o}s and Rado \cite{ER52}. Combining their estimate from Theorem \ref{erdosrado} with the best upper bound on $r(s-1,t-1)$ shows that for fixed $s$, $$r_3(s,t) \leq 2^{{r(s-1,t-1) \choose 2}}+1 \leq 2^{c t^{2s-4}/\log^{2s-6} t}.$$ Recently, the authors \cite{CFS10} discovered an interesting connection between the problem of bounding $r_3(s,t)$ and a new game-theoretic parameter. To describe this parameter, we start with the classical approach of Erd\H{o}s and Rado and then indicate how it can be improved.
Let $p=r(s-1,t-1), N=2^{{p \choose 2}}+1$ and consider a red/blue-colouring $c$ of all triples on the vertex set $[N] = \{1, 2, \dots, N\}$. We will show how to find vertices $v_1,\ldots, v_p, v_{p+1}$ such that, for each $i<j$, all triples $(v_i,v_j,v_k)$ with $k>j$ have the same colour, which we denote by $\chi(i,j)$. This will solve the problem, since, by the definition of $p$, the colouring $\chi$ of $v_1,\ldots, v_p$ contains either a red $K_{s-1}$ or a blue $K_{t-1}$, which together with $v_{p+1}$ would give a monochromatic set of triples of the correct order in the original colouring. We will pick the vertices $v_i$ in rounds. Suppose that we already have vertices $v_1,\ldots, v_m$ with the required property as well as a set of vertices $S_m$ such that for every $v_i, v_j$ and every $w \in S_m$ the colour of the triple $(v_i,v_j,w)$ is given by $\chi(i,j)$ and so does not depend on $w$. Pick $v_{m+1} \in S_m$ arbitrarily. For all other $w$ in $S_m$, consider the colour vector $(c_1,\dots,c_m)$ such that $c_i=c(v_i,v_{m+1}, w)$, which are the only new triples we need worry about. Let $S_{m+1}$ be the largest subset of $S_m$ such that every vertex in this subset has the same colour vector $(c_1,\dots,c_m)$. Clearly, this set has order at least $2^{-m}(|S_m|-1)$. Notice that $v_1,\ldots, v_{m+1}$ and $S_{m+1}$ have the desired properties. We may therefore continue the algorithm, noting that we have lost a factor of $2^m$ in the size of the remaining set of vertices, i.e., a factor of $2$ for every edge coloured by $\chi$.
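The refinement procedure just described can be sketched in a few lines of Python (our own illustration, with parameters small enough to run; the function name is ours).

```python
import random
from itertools import combinations

def erdos_rado_refine(c, N, p):
    """Greedily pick v_1, ..., v_{p+1} from range(N) so that, for i < j, the
    colour of every triple (v_i, v_j, v_k) with k > j is a value chi[(i, j)]
    depending only on (i, j).  Guaranteed to succeed when N >= 2^C(p,2) + 1."""
    vs, S, chi = [], list(range(N)), {}
    while len(vs) <= p and S:
        v = S.pop(0)
        vs.append(v)
        m = len(vs) - 1
        # Split the remaining vertices by their colour vector against v,
        # and keep the largest class.
        buckets = {}
        for w in S:
            key = tuple(c(vs[i], v, w) for i in range(m))
            buckets.setdefault(key, []).append(w)
        if not buckets:
            break
        key, S = max(buckets.items(), key=lambda kv: len(kv[1]))
        for i in range(m):
            chi[(i, m)] = key[i]
    return vs, chi

# Sanity check on a random 2-colouring of triples with p = 3, N = 2^3 + 1 = 9.
rng = random.Random(0)
cache = {}
def c(a, b, x):
    t = tuple(sorted((a, b, x)))
    if t not in cache:
        cache[t] = rng.randint(0, 1)
    return cache[t]

p = 3
vs, chi = erdos_rado_refine(c, 2 ** (p * (p - 1) // 2) + 1, p)
assert len(vs) == p + 1
for i, j, k in combinations(range(len(vs)), 3):
    assert c(vs[i], vs[j], vs[k]) == chi[(i, j)]
print("picked vertices", vs, "with pair-determined triple colours")
```
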
To improve this approach, we note that the colouring $\chi$ does not need to colour every pair of vertices. This idea is captured nicely by the notion of vertex on-line Ramsey number. Consider the following game, played by two players, Builder and Painter: at step $m+1$ a new vertex $v_{m+1}$ is revealed; then, for every existing vertex $v_j$, $j = 1, \dots, m$, Builder decides, in order, whether to draw the edge $v_j v_{m+1}$; if he does draw such an edge, Painter must immediately colour it either red or blue. The {\it vertex on-line Ramsey number} $\tilde{r}_v(k,l)$ is then defined as the minimum number of edges that Builder has to draw in order to force Painter to create a red $K_k$ or a blue $K_l$. Using an approach similar to that described in the previous paragraph, one can bound the Ramsey number $r_3(s,t)$ roughly by an exponential in $\tilde{r}_v(s-1,t-1)$. This observation, together with an estimate for $\tilde{r}_v(s-1,t-1)$ and some additional ideas, allowed the authors to improve the Erd\H{o}s--Rado estimate for off-diagonal hypergraph Ramsey numbers as follows.
\begin{theorem} \label{upperbound}
For every natural number $s \geq 4$, there exists a positive constant $c$ such that
$$r_3(s,t) \leq 2^{c t^{s-2} \log t}.$$
\end{theorem}
\noindent
A similar improvement for off-diagonal Ramsey numbers of higher uniformity follows from combining this result with Theorem~\ref{erdosrado}.
How accurate is this estimate? For the first non-trivial case, when $s=4$, the problem was first considered by Erd\H{o}s and Hajnal \cite{EH72} in 1972. Using the following clever construction, they showed that $r_3(4,t)$ is exponential in $t$. Consider a random tournament with vertex set $[N]$. This is a complete graph on $N$ vertices whose edges are oriented uniformly at random. Colour a triple in $[N]$ red if it forms a cyclic triangle and blue otherwise. Since it is well known and easy to show that every tournament on four vertices contains at most two cyclic triangles and a random tournament on $N$ vertices with high probability does not contain a transitive subtournament of order $c \log N$, the resulting colouring has neither a red subset of order $4$ nor a blue subset of order $c \log N$. In the same paper \cite{EH72}, Erd\H{o}s and Hajnal conjectured that $\frac{\log r_3(4,t)}{t} \to \infty$. This was recently confirmed in \cite{CFS10}, where the authors obtained a more general result which in particular implies that $r_3(4,t) \geq 2^{c' t \log t}$. This should be compared with the upper bound $r_3(4,t) \leq 2^{ct^2 \log t}$ obtained above.
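The fact that a four-vertex tournament contains at most two cyclic triangles, used in the construction above, can be checked exhaustively. The sketch below (our illustration) enumerates all $2^6$ orientations of $K_4$.

```python
from itertools import combinations

def cyclic_triangles(beats, verts):
    """Count cyclic triangles in a tournament; a triangle is cyclic iff each
    of its vertices beats exactly one of the other two."""
    count = 0
    for triple in combinations(verts, 3):
        outdeg = [sum(beats[x][y] for y in triple if y != x) for x in triple]
        if outdeg == [1, 1, 1]:
            count += 1
    return count

pairs = list(combinations(range(4), 2))
worst = 0
for mask in range(1 << len(pairs)):           # all 64 orientations of K_4
    beats = [[False] * 4 for _ in range(4)]
    for bit, (u, v) in enumerate(pairs):
        if mask >> bit & 1:
            beats[u][v] = True
        else:
            beats[v][u] = True
    worst = max(worst, cyclic_triangles(beats, range(4)))
print("maximum number of cyclic triangles in a 4-vertex tournament:", worst)
assert worst == 2
```
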
\subsection{Sparse graphs} \label{sec:sparse}
After the complete graph, the next most classical topic in graph Ramsey theory concerns the Ramsey numbers of sparse graphs, i.e., graphs with certain constraints on the degrees of the vertices. Burr and Erd\H{o}s~\cite{BE75} initiated the study of these Ramsey numbers in 1975 and this topic has since played a central role in graph Ramsey theory, leading to the development of many important techniques with broader applicability.
In their foundational paper, Burr and Erd\H{o}s~\cite{BE75} conjectured that for every positive integer $\Delta$ there is a constant $c(\Delta)$ such that every graph $H$ with $n$ vertices and maximum degree $\Delta$ satisfies $r(H) \leq c(\Delta)n$. This conjecture was proved by Chv\'atal, R\"odl, Szemer\'edi and Trotter \cite{CRST83} as an early application of Szemer\'edi's regularity lemma \cite{Sz76}. We will now sketch their proof, first reviewing the statement of the regularity lemma.
Roughly speaking, the regularity lemma says that the vertex set of any graph may be partitioned into a small number of parts such that the bipartite subgraph between almost every pair of parts is random-like. More formally, we say that a pair of disjoint vertex subsets $(A, B)$ in a graph $G$ is {\it $\epsilon$-regular} if, for every $A' \subseteq A$ and $B' \subseteq B$ with $|A'| \geq \epsilon |A|$ and $|B'| \geq \epsilon|B|$, the density $d(A', B')$ of edges between $A'$ and $B'$ satisfies $|d(A', B') - d(A, B)| \leq \epsilon$. That is, the density between any two large subsets of $A$ and $B$ is close to the density between $A$ and $B$. The regularity lemma then says that for every $\epsilon > 0$ there exists $M = M(\epsilon)$ such that the vertex set of any graph $G$ may be partitioned into $m \leq M$ parts $V_1, \dots, V_m$ such that $||V_i| - |V_j|| \leq 1$ for all $1 \leq i, j \leq m$ and all but $\epsilon \binom{m}{2}$ pairs $(V_i, V_j)$ are $\epsilon$-regular.
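For intuition, the definition of an $\epsilon$-regular pair translates directly into code. The brute-force check below (our illustration, feasible only for very small sets) accepts a complete bipartite pair and rejects a ``half graph'', whose large subsets have wildly varying density.

```python
import math
from itertools import combinations

def density(adj, A, B):
    return sum(adj[a][b] for a in A for b in B) / (len(A) * len(B))

def is_eps_regular(adj, A, B, eps):
    """Check eps-regularity of the pair (A, B) by trying every pair of
    subsets A' and B' with |A'| >= eps|A| and |B'| >= eps|B|."""
    d = density(adj, A, B)
    ka_min = max(1, math.ceil(eps * len(A)))
    kb_min = max(1, math.ceil(eps * len(B)))
    for ka in range(ka_min, len(A) + 1):
        for kb in range(kb_min, len(B) + 1):
            for Ap in combinations(A, ka):
                for Bp in combinations(B, kb):
                    if abs(density(adj, Ap, Bp) - d) > eps:
                        return False
    return True

n = 4
A = B = list(range(n))
complete = [[True] * n for _ in range(n)]              # density 1 everywhere
half = [[b <= a for b in range(n)] for a in range(n)]  # "half graph"
assert is_eps_regular(complete, A, B, 0.25)
assert not is_eps_regular(half, A, B, 0.25)
print("complete pair is 0.25-regular; the half-graph pair is not")
```
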
Suppose now that $N = c(\Delta) n$ and the edges of $K_N$ have been two-coloured. To begin, we apply the regularity lemma with approximation parameter $\epsilon=4^{-\Delta}$ (since the colours are complementary, we may apply the regularity lemma to either the red or the blue subgraph, obtaining a regular partition for both). This gives a partition of the vertex set into $m \leq M$ parts of roughly equal size, where $M$ depends only on $\Delta$, such that all but $\epsilon \binom{m}{2}$ pairs of parts are $\epsilon$-regular. By applying Tur\'an's theorem, we may find $4^{\Delta}$ parts such that every pair of parts is $\epsilon$-regular. Since $r(\Delta+1) \leq 4^{\Delta}$, an application of Ramsey's theorem then implies that there are $\Delta+1$ parts $V_1, \dots, V_{\Delta+1}$ such that every pair is $\epsilon$-regular and the graph between each pair has density at least $1/2$ in one particular colour, say red. As $\chi(H) \leq \Delta+1$, we can partition the vertex set of $H$ into independent sets $U_1,\ldots,U_{\Delta+1}$. The regularity between the sets $V_1, \dots, V_{\Delta + 1}$ now allows us to greedily construct a red copy of $H$, embedding one vertex at a time and mapping $U_i$ into $V_i$ for each $i$. Throughout the embedding process, we must ensure that for any vertex $u$ of $U_i$ which is not yet embedded the set of potential vertices in $V_i$ into which one may embed $u$ is large (at step $t$, we guarantee that it has order at least $4^{-d(t,u)}|V_i|-t$, where $d(t,u) \leq \Delta$ is the number of neighbours of $u$ among the first $t$ embedded vertices). Though an elegant application of the regularity lemma, this method gives a poor bound on $c(\Delta)$, namely, a tower of $2$s with height exponential in $\Delta$.
Since this theorem was first proved, the problem of determining the correct order of magnitude for $c(\Delta)$ as a function of $\Delta$ has received considerable attention from various researchers. The first progress was made by Eaton~\cite{E98}, who showed that $c(\Delta) \leq 2^{2^{c\Delta}}$ for some fixed $c$, the key observation being that the proof above does not need the full strength of the regularity lemma. Instead, one only needs to find $4^{\Delta}$ large vertex subsets such that the graph between each pair is $\epsilon$-regular. This may be achieved using a weak regularity lemma due to Duke, Lefmann and R\"odl~\cite{DLR95}.
A novel approach of Graham, R\"odl and Ruci\'nski \cite{GRR00} was the first to give a linear upper bound on Ramsey numbers of bounded-degree graphs without using any form of the regularity lemma. Their proof also gave good quantitative control, showing that one may take $c(\Delta) \leq 2^{c\Delta \log^2 \Delta}$. As in the regularity proof, we try to greedily construct a red copy of $H$ one vertex at a time, at each step ensuring that the set of potential vertices into which one might embed any remaining vertex is large. If this process fails, we will find two large vertex subsets such that the red graph between them has very low density. Put differently, this means that the blue graph between these vertex sets has very high density. We now iterate this procedure within each of the two subsets, trying to embed greedily in red and, if this fails, finding two large vertex subsets with high blue density between them. After $\log 8\Delta$ iterations, we will either have found the required red copy of $H$ or we will have $8\Delta$ subsets of equal size with high blue density between all pairs of sets. If the constants are chosen appropriately, the union of these sets will have blue density at least $1-\frac{1}{4\Delta}$ and at least $4n$ vertices. One can then greedily embed a blue copy of $H$ one vertex at a time.
Recently, the authors \cite{CFS12} improved this bound to $c(\Delta) \leq 2^{c\Delta \log \Delta}$.
\begin{theorem} \label{thm:CRSTBound}
There exists a constant $c$ such that any graph $H$ on $n$ vertices with maximum degree $\Delta$ satisfies
\[r(H) \leq 2^{c \Delta \log \Delta} n.\]
\end{theorem}
\noindent
In the approach of Graham, R\"odl and Ruci\'nski, the two colours play asymmetrical roles. Either we find a set where the red graph has some reasonable density between any two large vertex subsets or a set which is almost complete in blue. In either case, a greedy embedding gives the required monochromatic copy of $H$. The approach we take in \cite{CFS12} is more symmetrical. The basic idea is that once we find a pair of vertex subsets $(V_1,V_2)$ such that the graph between them is almost complete in blue, we split $H$ into two parts $U_1$ and $U_2$, each of which induces a subgraph of maximum degree at most $\Delta/2$, and try to embed blue copies of $H[U_i]$ into $V_i$ for $i=1, 2$, using the high blue density between $V_1$ and $V_2$ to ensure that this gives a blue embedding of $H$. The gain comes from the fact that when we iterate the maximum degree of the graph we wish to embed shrinks. Unfortunately, while this gives some of the intuition behind the proof, the details are rather more involved.
Graham, R\"odl and Ruci\'nski~\cite{GRR01} observed that for bipartite graphs $H$ on $n$ vertices with maximum degree $\Delta$ their technique could be used to prove a bound of the form $r(H) \leq 2^{c\Delta \log \Delta}n$. Indeed, if greedily embedding a red copy of $H$ fails, then there will be two large vertex subsets $V_1$ and $V_2$ such that the graph between them is almost complete in blue. A blue copy of $H$ can then be greedily embedded between these sets. In the other direction, they showed that there is a positive constant $c'$ such that for each $\Delta$ and $n$ sufficiently large there is a bipartite graph $H$ on $n$ vertices with maximum degree $\Delta$ for which $r(H) \geq 2^{c'\Delta}n$. Conlon~\cite{C092} and, independently, Fox and Sudakov~\cite{FS09} showed that this bound is essentially tight, that is, there is a constant $c$ such that $r(H) \leq 2^{c\Delta}n$ for every bipartite graph $H$ on $n$ vertices with maximum degree $\Delta$. Both proofs are quite similar, each relying on an application of dependent random choice and a hypergraph embedding lemma.
Dependent random choice is a powerful probabilistic technique which has recently led to a number of advances in extremal graph theory, additive combinatorics, Ramsey theory and combinatorial geometry. Early variants of this technique were developed by Gowers~\cite{G98}, Kostochka and R\"odl~\cite{KR01} and Sudakov~\cite{Su03}. In many applications, including that under discussion, the technique is used to prove the useful fact that every dense graph contains a large subset $U$ in which almost every set of $d$ vertices has many common neighbours. To prove this fact, we let $R$ be a random set of vertices from our graph and take $U$ to be the set of all common neighbours of $R$. Intuitively, it is clear that if some subset of $U$ of order $d$ has only a few common neighbours, then it is unlikely that all the members of $R$ could have been chosen from this set of neighbours. It is therefore unlikely that $U$ contains many subsets of this type. For more information about dependent random choice and its applications, we refer the interested reader to the recent survey~\cite{FS11}.
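The basic step of dependent random choice is only a few lines long. The sketch below (our own illustration, with made-up parameters) picks a random set $R$ in a dense random graph and takes $U$ to be its common neighbourhood; one then observes that $U$ is large and that typical pairs in $U$ have many common neighbours.

```python
import random
from itertools import combinations

def dependent_random_choice(adj, n, r, seed=0):
    """Pick a uniform random r-set R and return U, the set of common
    neighbours of R.  In a dense graph U is large in expectation, and only
    few small subsets of U can have small common neighbourhoods."""
    rng = random.Random(seed)
    R = rng.sample(range(n), r)
    return [v for v in range(n) if all(adj[v][u] for u in R)]

# Demo on the random graph G(n, 0.8).
rng = random.Random(1)
n = 200
adj = [[False] * n for _ in range(n)]
for u, v in combinations(range(n), 2):
    if rng.random() < 0.8:
        adj[u][v] = adj[v][u] = True

U = dependent_random_choice(adj, n, r=2)   # E|U| is about n * 0.8^2 = 128
pairs = list(combinations(U, 2))
good = sum(1 for S in pairs
           if sum(all(adj[w][x] for x in S) for w in range(n)) >= n // 4)
print(f"|U| = {len(U)}; pairs in U with >= {n // 4} common neighbours: "
      f"{good} of {len(pairs)}")
```
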
Using the Lov\'asz local lemma, the authors~\cite{CFS14} recently improved on the hypergraph embedding lemmas used in their earlier proofs to obtain a bound of the form $r(H) \leq c2^{\Delta}n$ for every bipartite graph $H$ on $n$ vertices with maximum degree $\Delta$. Like the earlier results, this follows from a more general density result which shows that the denser of the two colour classes will contain the required monochromatic copy of $H$.
By repeated application of the dependent random choice technique and an appropriate adaptation of the embedding technique, Fox and Sudakov \cite{FS09} also proved that $r(H) \leq 2^{4\chi\Delta}n$ for all graphs $H$ on $n$ vertices with chromatic number $\chi$ and maximum degree $\Delta$. However, the dependency on $\chi$ is unlikely to be necessary here.
\begin{conjecture}\label{conjecturemaxdeg}
There is a constant $c$ such that every graph $H$ on $n$ vertices with maximum degree $\Delta$ satisfies $r(H) \leq 2^{c\Delta}n$.
\end{conjecture}
One particular family of bipartite graphs that has received significant attention in Ramsey theory is the family of hypercubes. The {\it hypercube} $Q_n$ is the $n$-regular graph on vertex set $\{0,1\}^n$ where two vertices are connected by an edge if and only if they differ in exactly one coordinate. Burr and Erd\H{o}s~\cite{BE75} conjectured that $r(Q_n)$ is linear in $|Q_n|$.
\begin{conjecture}
$$r(Q_n) = O(2^n).$$
\end{conjecture}
After several improvements over the trivial bound $r(Q_n) \leq r(|Q_n|) \leq 4^{|Q_n|} = 2^{2^{n+1}}$ by Beck~\cite{B83}, Graham, R\"odl and Ruci\'nski~\cite{GRR01}, Shi~\cite{S01,S07} and Fox and Sudakov~\cite{FS09}, the authors~\cite{CFS14} obtained the best known upper bound of $r(Q_n) = O(2^{2n})$, which is quadratic in the number of vertices. This follows immediately from the general upper bound on Ramsey numbers of bipartite graphs with given maximum degree stated earlier.
Another natural notion of sparseness which has been studied extensively in the literature is that of degeneracy. A graph is said to be {\it $d$-degenerate} if every subgraph has a vertex of degree at most $d$. Equivalently, a graph is $d$-degenerate if there is an ordering of the vertices such that each vertex has at most $d$ neighbours that precede it in the ordering. The {\it degeneracy} of a graph is the smallest $d$ such that the graph is $d$-degenerate. Burr and Erd\H{o}s~\cite{BE75} conjectured that every graph with bounded degeneracy has linear Ramsey number.
\begin{conjecture}
For every natural number $d$, there is a constant $c(d)$ such that every $d$-degenerate graph $H$ on $n$ vertices satisfies $r(H) \leq c(d)n$.
\end{conjecture}
This conjecture is one of the most important open problems in graph Ramsey theory. The first significant progress on the conjecture was made by Kostochka and Sudakov~\cite{KS03}, who proved an almost linear upper bound. That is, for fixed $d$, they showed that every $d$-degenerate graph $H$ on $n$ vertices satisfies $r(H) = n^{1+o(1)}$. This result was later refined by Fox and Sudakov~\cite{FS092}, who showed that every $d$-degenerate graph $H$ on $n$ vertices satisfies $r(H) \leq e^{c(d)\sqrt{\log n}}n$.
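Degeneracy, unlike many graph parameters, is easy to compute exactly: repeatedly delete a vertex of minimum degree and record the largest degree seen at the moment of deletion. A short sketch (our illustration):

```python
import heapq

def degeneracy(n, edges):
    """Degeneracy via min-degree peeling with a lazy heap: the answer is the
    largest degree a vertex has at the moment it is removed."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = [len(adj[v]) for v in range(n)]
    heap = [(deg[v], v) for v in range(n)]
    heapq.heapify(heap)
    removed, d = [False] * n, 0
    for _ in range(n):
        dv, v = heapq.heappop(heap)
        while removed[v] or dv != deg[v]:      # skip stale heap entries
            dv, v = heapq.heappop(heap)
        removed[v] = True
        d = max(d, dv)
        for u in adj[v]:
            if not removed[u]:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return d

# A path is 1-degenerate, a cycle is 2-degenerate, and K_5 is 4-degenerate.
path = [(i, i + 1) for i in range(4)]
cycle = path + [(4, 0)]
k5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
assert degeneracy(5, path) == 1
assert degeneracy(5, cycle) == 2
assert degeneracy(5, k5) == 4
print("degeneracy: path = 1, cycle = 2, K_5 = 4")
```
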
Partial progress of a different sort was made by Chen and Schelp~\cite{CS93}, who considered a notion of sparseness which is intermediate between having bounded degree and having bounded degeneracy. We say that a graph is {\it $p$-arrangeable} if there is an ordering $v_1, v_2, \dots,v_n$ of its vertices such that for each vertex $v_i$, its neighbours to the right of $v_i$ have together at most $p$ neighbours to the left of $v_i$ (including $v_i$). The {\it arrangeability} of a graph is the smallest $p$ such that the graph is $p$-arrangeable. Extending the result of Chv\'atal et al.~\cite{CRST83}, Chen and Schelp~\cite{CS93} proved that for every $p$ there is a constant $c(p)$ such that every $p$-arrangeable graph on $n$ vertices has Ramsey number at most $c(p)n$. Graphs with bounded arrangeability include planar graphs and graphs embeddable on a fixed surface. More generally, R\"odl and Thomas~\cite{RT97} proved that graphs which do not contain a subdivision of a fixed graph have bounded arrangeability and hence have linear Ramsey number. Another application was given by Fox and Sudakov~\cite{FS092}, who proved that for fixed $d$ the Erd\H{o}s--R\'enyi random graph $G(n,d/n)$ almost surely has arrangeability on the order of $d^2$ and hence almost surely has linear Ramsey number.
In general, the Ramsey number of a graph appears to be intimately connected to its degeneracy. Indeed, if $d(H)$ is the degeneracy of $H$, a random colouring easily implies that $r(H) \geq 2^{d(H)/2}$. Since it is also clear that $r(H) \geq n$ for any $n$-vertex graph, we see that $\log r(H) = \Omega(d(H) + \log n)$. We conjecture that this bound is tight up to the constant. It is even plausible that $r(H) \leq 2^{O(d)} n$ for every $d$-degenerate graph $H$ on $n$ vertices. Since the degeneracy of a graph is easily computable, this would give a very satisfying approximation for the Ramsey number of a general graph.
\begin{conjecture} \label{conjectureapproxrams}
For every $n$-vertex graph $H$, $$\log r(H)=\Theta\left(d(H)+\log n\right).$$
\end{conjecture}
\noindent
For graphs of bounded chromatic number, Conjecture~\ref{conjectureapproxrams} follows from a bound on Ramsey numbers due to Fox and Sudakov (Theorem 2.1 in~\cite{FS092}). Moreover, another result from the same paper (Theorem 3.1 in~\cite{FS092}) shows that Conjecture~\ref{conjectureapproxrams} always holds up to a factor of $\log^2 d(H)$.
In graph Ramsey theory, it is natural to expect that there should be no significant qualitative difference between the bounds for two colours and the bounds for any fixed number of colours. However, there are many well-known problems where this intuition has yet to be verified, the classic example being the bounds for hypergraph Ramsey numbers. Another important example is furnished by the results of this section. Indeed, the proof technique of Graham, R\"odl and Ruci\'nski can be extended to work for more than two colours, but only gives the estimate $r(H;q) \leq 2^{\Delta^{q-1+o(1)}}n$ for the Ramsey number of graphs $H$ with $n$ vertices and maximum degree $\Delta$. While dependent random choice does better, giving a bound of the form $r(H;q) \leq 2^{O_q(\Delta^2)}n$, we believe that for a fixed number of colours, the exponent of $\Delta$ should still be $1$. In particular, we conjecture that the following bound holds.
\begin{conjecture}
For every graph $H$ on $n$ vertices with maximum degree $\Delta$, the $3$-colour Ramsey number of $H$ satisfies $$r(H,H,H) \leq 2^{\Delta^{1+o(1)}}n,$$ where the $o(1)$ is a function of $\Delta$ which tends to $0$ as $\Delta$ tends to infinity.
\end{conjecture}
With the development of the hypergraph regularity method~\cite{G07, NRS06, RS04}, the result that bounded-degree graphs have linear Ramsey numbers was extended to $3$-uniform hypergraphs by Cooley, Fountoulakis, K\"uhn and Osthus \cite{CFKO07} and Nagle, Olsen, R\"odl and Schacht \cite{NORS07} and to $k$-uniform hypergraphs by Cooley et al. \cite{CFKO072}. That is, for each $k$ and $\Delta$ there is $c(\Delta,k)$ such that every $k$-uniform hypergraph $H$ on $n$ vertices with maximum degree $\Delta$ satisfies $r(H) \leq c(\Delta,k)n$. However, because they use the hypergraph regularity lemma, their proof only gives an enormous Ackermann-type upper bound on $c(\Delta,k)$. In~\cite{CFS09}, the authors gave another shorter proof of this theorem which gives the right type of behaviour for $c(\Delta,k)$. The proof relies on an appropriate generalisation of the dependent random choice technique to hypergraphs. As in Section~\ref{sec:hypergraphs}, we write $t_1(x) = x$ and $t_{i+1}(x) = 2^{t_i(x)}$.
\begin{theorem} \label{thm:CRSTHyper}
For any natural numbers $k \geq 3$ and $q \geq 2$, there exists a constant $c = c(k, q)$ such that the $q$-colour Ramsey number of any $k$-uniform hypergraph $H$ on $n$ vertices with maximum degree $\Delta$ satisfies
\[r_3(H;q) \leq 2^{2^{c \Delta \log \Delta}} n \mbox{ and, for $k \geq 4$, } r_k(H;q) \leq t_k(c \Delta) n.\]
\end{theorem}
We say that a hypergraph is {\it $d$-degenerate} if every subgraph has a vertex of degree at most $d$. Equivalently, a hypergraph is $d$-degenerate if there is an ordering of the vertices $v_1, v_2, \dots,v_n$ such that each vertex $v_i$ is the final vertex in at most $d$ edges in this ordering. Kostochka and R\"odl~\cite{KR06} showed that the hypergraph analogue of the Burr--Erd\H{o}s conjecture is false for uniformity $k \geq 4$. In particular, they constructed a $4$-uniform hypergraph on $n$ vertices which is $1$-degenerate but has Ramsey number at least $2^{\Omega(n^{1/3})}$.
\subsection{Graphs with a given number of edges} \label{sec:edges}
In 1973, Erd\H{o}s and Graham~\cite{EG75} conjectured that among all connected graphs with $m = \binom{n}{2}$ edges, the complete graph has the largest Ramsey number. As this question seems unapproachable, Erd\H{o}s~\cite{E84} asked whether one could at least show that the Ramsey number of any graph with $m$ edges is not substantially larger than that of the complete graph with the same size. Since the number of vertices in a complete graph with $m$ edges is a constant multiple of $\sqrt{m}$, he conjectured that there exists a constant $c$ such that $r(H) \leq 2^{c \sqrt{m}}$ for any graph $H$ with $m$ edges and no isolated vertices.
The first progress on this conjecture was made by Alon, Krivelevich and Sudakov~\cite{AKS03}, who showed that there exists a constant $c$ such that $r(H) \leq 2^{c \sqrt{m} \log m}$ for any graph $H$ with $m$ edges and no isolated vertices. They also proved the conjecture in the special case where $H$ is bipartite. Another proof of the same bound, though starting from a different angle, was later given by Conlon~\cite{C13}. This approach, which focused on estimating the Ramsey number of graphs with a given density, allowed one to show that graphs on $n$ vertices with $o(n^2)$ edges have Ramsey number $2^{o(n)}$. Soon after this work, Erd\H{o}s' conjecture was completely resolved by Sudakov~\cite{Su11}, so that it may now be stated as a theorem.
\begin{theorem} \label{thm:medge}
There exists a constant $c$ such that any graph $H$ with $m$ edges and no isolated vertices satisfies
\[r(H) \leq 2^{c \sqrt{m}}.\]
\end{theorem}
The proof of this theorem relies upon several ingredients, including the machinery of Graham, R\"odl and Ruci\'nski \cite{GRR00} mentioned in the previous section and a result of Erd\H{o}s and Szemer\'edi~\cite{ES72} which says that if a graph has low density then it contains a larger clique or independent set than would be guaranteed by Ramsey's theorem alone.\footnote{The Erd\H{o}s--Szemer\'edi theorem is the starting point for another interesting topic which we have not had space to discuss, namely, the problem of determining what properties a graph with no clique or independent set of order $c \log n$ must satisfy. The Erd\H{o}s--Szemer\'edi theorem shows that any such graph must have density bounded away from both $0$ and $1$ and there are numerous further papers (see, for example, \cite{ABKS09, BS07,FS08} and their references) showing that these graphs must exhibit random-like behaviour.} However, these techniques are very specific to two colours, so the following problem remains wide open.
\begin{problem} \label{prob:medgeqcolour}
Show that for any $q \geq 3$ there exists $c_q$ such that $r(H; q) \leq 2^{c_q \sqrt{m}}$ for any graph $H$ with $m$ edges and no isolated vertices.
\end{problem}
If no vertex in the graph $H$ has unusually high degree, it is often possible to improve on Theorem~\ref{thm:medge}. For example, the following result~\cite{C13, CFLS143} implies that if a graph with $n$ vertices and $m$ edges has degeneracy at most $10m/n$, say, then the Ramsey number is at most an exponential in $\frac{m}{n} \log^2(\frac{n^2}{m})$. For $m = o(n^2)$, this is significantly smaller than $\sqrt{m}$.
\begin{theorem} \label{label:densedegenerate}
There exists a constant $c$ such that any graph $H$ on $n$ vertices with degeneracy at most $d$ satisfies
\[r(H) \leq 2^{c d \log^2(2n/d)}.\]
\end{theorem}
The analogous question for hypergraphs was studied by the authors in~\cite{CFS09}. Though the same rationale that led Erd\H{o}s to conjecture Theorem~\ref{thm:medge} naturally leads one to conjecture that $r_3(H) \leq 2^{2^{c m^{1/3}}}$ for all $3$-uniform hypergraphs $H$ with $m$ edges and no isolated vertices, it turns out that there are connected $3$-uniform hypergraphs $H$ with $m$ edges for which $r_3(H; 4) \geq 2^{2^{c' \sqrt{m}}}$. This is also close to being sharp, since $r_3(H; q) \leq 2^{2^{c_q \sqrt{m} \log m}}$ for any $3$-uniform hypergraph $H$ with $m$ edges and no isolated vertices and any $q \geq 2$. For higher uniformities, $k \geq 4$, one can do slightly better. Writing $t_1(x) = x$ and $t_{i+1}(x) = 2^{t_i(x)}$ as in Section~\ref{sec:hypergraphs}, the authors showed that $r_k(H; q) \leq t_k(c_{k,q} \sqrt{m})$ for any $k$-uniform hypergraph $H$ with $m$ edges and no isolated vertices and any $q \geq 2$. It would be interesting to improve the bound in the $3$-uniform case to bring it in line with higher uniformities.
\begin{problem} \label{prob:medgehyper}
Show that for any $q \geq 2$ there exists $c_q$ such that $r_3(H; q) \leq 2^{2^{c_q \sqrt{m}}}$ for any $3$-uniform hypergraph $H$ with $m$ edges and no isolated vertices.
\end{problem}
\noindent
This would likely follow if the bound for the Ramsey number of $3$-uniform hypergraphs with $n$ vertices and maximum degree $\Delta$ given in Theorem~\ref{thm:CRSTHyper} could be improved to $2^{2^{c \Delta}} n$.
\subsection{Ramsey goodness} \label{sec:goodness}
If one tries to prove a lower bound for the off-diagonal Ramsey number $r(G, H)$, one simple construction, usually attributed to Chv\'atal and Harary \cite{CH72}, is to take $\chi(H) - 1$ red cliques, each of order $|G| - 1$, and to colour all edges between these sets in blue. If $G$ is connected, this colouring clearly contains no red copy of $G$ and no blue copy of $H$ and so $r(G, H) \geq (|G| - 1)(\chi(H) - 1) + 1$. If we write $\sigma(H)$ for the order of the smallest colour class in any $\chi(H)$-colouring of the vertices of $H$, we see, provided $|G| \geq \sigma(H)$, that we may add a further red clique of order $\sigma(H) - 1$ to our construction. This additional observation, due to Burr \cite{B81}, allows us to improve our lower bound to
\[r(G, H) \geq (|G| - 1)(\chi(H) - 1) + \sigma(H),\]
provided $|G| \geq \sigma(H)$. Following Burr and Erd\H{o}s \cite{B81, BE83}, we will say that a graph $G$ is {\it $H$-good} if this inequality is an equality, that is, if $r(G, H) = (|G| - 1)(\chi(H) - 1) + \sigma(H)$. Given a family of graphs $\mathcal{G}$, we say that $\mathcal{G}$ is {\it $H$-good} if equality holds for all sufficiently large graphs $G \in \mathcal{G}$. In the particular case where $H = K_s$, we say that a graph or family of graphs is {\it $s$-good}.
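For concreteness, when $H = K_s$ we have $\chi(K_s) = s$ and $\sigma(K_s) = 1$, so the lower bound specialises, for any connected graph $G$, to
\[r(G, K_s) \geq (s - 1)(|G| - 1) + 1,\]
and $G$ is $s$-good precisely when this holds with equality.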
The classical result on Ramsey goodness, which predates the definition, is the theorem of Chv\'atal \cite{C77} showing that all trees are $s$-good for every $s$. However, the family of trees is not $H$-good for every graph $H$. For example \cite{BEFRS89}, there is a constant $c < \frac{1}{2}$ such that $r(K_{1,t}, K_{2,2}) \geq t + \sqrt{t} - t^c$ for $t$ sufficiently large, whereas $(|K_{1,t}|-1)(\chi(K_{2,2}) - 1) + \sigma(K_{2,2}) = t + 2$.
In an effort to determine what properties contribute to being good, Burr and Erd\H{o}s \cite{B87, BE83} conjectured that if $\Delta$ is fixed then the family of graphs with maximum degree at most $\Delta$ is $s$-good for every $s$. However, this conjecture was disproved by Brandt \cite{B96}, who showed that if a graph is a good expander then it cannot be $3$-good. In particular, his result implies that for $\Delta \geq \Delta_0$ almost every $\Delta$-regular graph on a sufficiently large number of vertices is not $3$-good.
On the other hand, graphs with poor expansion properties are often good. The first such result, due to Burr and Erd\H{o}s \cite{BE83}, states that for any fixed $\ell$ the family of connected graphs with bandwidth at most $\ell$ is $s$-good for any $s$, where the {\it bandwidth} of a graph $G$ is the smallest number $\ell$ for which there exists an ordering $v_1, v_2, \dots, v_n$ of the vertices of $G$ such that every edge $v_i v_j$ satisfies $|i - j| \leq \ell$. This result was recently extended by Allen, Brightwell and Skokan \cite{ABS12}, who showed that the set of connected graphs with bandwidth at most $\ell$ is $H$-good for every $H$. Their result even allows the bandwidth $\ell$ to grow at a reasonable rate with the order of the graph $G$. If $G$ is known to have bounded maximum degree, their results are particularly strong, their main theorem in this case being the following.
\begin{theorem}
For any $\Delta$ and any fixed graph $H$, there exists $c > 0$ such that if $G$ is a connected graph on $n$ vertices with maximum degree $\Delta$ and bandwidth at most $c n$ then $G$ is $H$-good.
\end{theorem}
Another result of this type, proved by Nikiforov and Rousseau \cite{NR09}, shows that graphs with small separators are $s$-good. Recall that the degeneracy $d(G)$ of a graph $G$ is the smallest natural number $d$ such that every induced subgraph of $G$ has a vertex of degree at most $d$. Furthermore, we say that a graph $G$ has a {\it $(t, \eta)$-separator} if there exists a vertex subset $T \subseteq V(G)$ such that $|T| \leq t$ and every connected component of $V(G) \setminus T$ has order at most $\eta |V(G)|$. The result of Nikiforov and Rousseau is now as follows.
\begin{theorem} \label{NikRou}
For any $s \geq 3$, $d \geq 1$ and $0 < \gamma < 1$, there exists $\eta > 0$ such that the class $\mathcal{G}$ of connected $d$-degenerate graphs $G$ with a $(|V(G)|^{1-\gamma}, \eta)$-separator is $s$-good.
\end{theorem}
Nikiforov and Rousseau used this result to resolve a number of outstanding questions of Burr and Erd\H{o}s \cite{BE83} regarding Ramsey goodness. For example, they showed that the $1$-subdivision of $K_n$, the graph formed by adding an extra vertex to each edge of $K_n$, is $s$-good for $n$ sufficiently large. Moreover, using this result, it was shown in \cite{CFLS13} that the family of connected planar graphs is $s$-good for every $s$. This is a special case of a more general result. We say that a graph $H$ is a {\it minor} of $G$ if $H$ can be obtained from a subgraph of $G$ by contracting edges. By an {\it $H$-minor} of $G$, we mean a minor of $G$ which is isomorphic to $H$. For a graph $H$, let $\mathcal{G}_H$ be the family of connected graphs which do not contain an $H$-minor. Since the family of planar graphs consists precisely of those graphs which do not contain $K_5$ or $K_{3,3}$ as a minor, our claim about planar graphs is an immediate corollary of the following result, whose proof combines Theorem~\ref{NikRou} with a result of Mader~\cite{M68} bounding the average degree of $H$-minor-free graphs and a separator theorem for $H$-minor-free graphs due to Alon, Seymour and Thomas~\cite{AST90}.
\begin{theorem} \label{forbidminor}
For every fixed graph $H$, the class $\mathcal{G}_H$ of connected graphs $G$ which do not contain an $H$-minor is $s$-good for every $s \geq 3$.
\end{theorem}
One of the original problems of Burr and Erd\H{o}s that was left open after the work of Nikiforov and Rousseau was to determine whether the family of hypercubes is $s$-good for every $s$. Recall that the hypercube $Q_n$ is the graph on vertex set $\{0,1\}^n$ where two vertices are connected by an edge if and only if they differ in exactly one coordinate. Since $Q_n$ has $2^n$ vertices, the problem asks whether $r(Q_n, K_s) = (s-1)(2^n - 1) + 1$ for $n$ sufficiently large. The first progress on this question was made by Conlon, Fox, Lee and Sudakov \cite{CFLS13}, who obtained an upper bound of the form $c_s 2^n$, the main tool in the proof being a novel technique for embedding hypercubes. Using a variant of this embedding technique and a number of additional ingredients, the original question was subsequently resolved by Fiz Pontiveros, Griffiths, Morris, Saxton and Skokan \cite{FGMSS13, FGMSS132}.
\begin{theorem}
The family of hypercubes is $s$-good for every $s \geq 3$.
\end{theorem}
\subsection{Ramsey multiplicity} \label{sec:mult}
For any fixed graph $H$, Ramsey's theorem tells us that when $N$ is sufficiently large, any two-colouring of the edges of $K_N$ contains a monochromatic copy of $H$. But how many monochromatic copies of $H$ will this two-colouring contain? To be more precise, we let $m_H(G)$ be the number of copies of one graph $H$ in another graph $G$ and define
\[m_H(N) =\min\{m_H(G) + m_H(\overline{G}): |G| = N\},\]
that is, $m_H(N)$ is the minimum number of monochromatic copies of $H$ that occur in any two-colouring of $K_N$. For the clique $K_t$, we simply write $m_t(N)$. We now define the {\it Ramsey multiplicity constant}\footnote{We note that sometimes the term Ramsey multiplicity is used for the quantity $m_H(r(H))$, that is, the minimum number of copies of $H$ that must appear once one copy of $H$ appears. For example, it is well known that every two-colouring of $K_6$ contains not just one but at least two monochromatic copies of $K_3$. In general, this quantity is rather intractable and we will not discuss it further.} to be
\[c_H = \lim_{N \rightarrow \infty} \frac{m_H(N)}{m_H(K_N)}.\]
That is, we consider the minimum proportion of copies of $H$ which are monochromatic, where the minimum is taken over all two-colourings of $K_N$, and then take the limit as $N$ tends to infinity. Since one may show that the fractions $m_H(N)/m_H(K_N)$ are increasing in $N$ and bounded above by $1$, this limit is well defined. For cliques, we simply write $c_t := c_{K_t} = \lim_{N \rightarrow \infty} m_t(N)/\binom{N}{t}$. We also write $c_{H, q}$ and $c_{t, q}$ for the analogous functions with $q$ rather than two colours.
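To see the claimed monotonicity, write $h = v(H)$ and fix a two-colouring of $K_{N+1}$ attaining $m_H(N+1)$. A given monochromatic copy of $H$ appears in exactly $N + 1 - h$ of the $N + 1$ vertex-deleted copies of $K_N$, so some vertex-deleted copy contains at most the average number of monochromatic copies, giving
\[m_H(N) \leq \frac{N + 1 - h}{N + 1}\, m_H(N + 1) = \frac{m_H(K_N)}{m_H(K_{N + 1})}\, m_H(N + 1),\]
where the equality uses $m_H(K_N)/m_H(K_{N+1}) = \binom{N}{h} \big/ \binom{N+1}{h} = \frac{N + 1 - h}{N + 1}$. Dividing by $m_H(K_N)$ yields $m_H(N)/m_H(K_N) \leq m_H(N+1)/m_H(K_{N+1})$, as required.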
The earliest result on Ramsey multiplicity is the famous result of Goodman~\cite{G59}, which says that $c_3 \geq \frac{1}{4}$. This result is sharp, as may be seen by considering a random two-colouring of the edges of $K_N$. Erd\H{o}s \cite{E62} conjectured that a similar phenomenon should hold for larger cliques, that is, that the Ramsey multiplicity should be asymptotically minimised by the graph $G_{N, 1/2}$. Quantitatively, this would imply that $c_t \geq 2^{1 - \binom{t}{2}}$. This conjecture was later generalised by Burr and Rosta \cite{BR80}, who conjectured that $c_H \geq 2^{1 - e(H)}$ for all graphs $H$. Following standard practice, we will call a graph {\it common} if it satisfies the Burr--Rosta conjecture.
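The random-colouring bound underlying both conjectures is a one-line computation: in a uniform random two-colouring of $K_N$, a fixed copy of $H$ is monochromatic with probability $2 \cdot 2^{-e(H)} = 2^{1 - e(H)}$, so, by linearity of expectation, some colouring has at most a $2^{1 - e(H)}$ fraction of its copies of $H$ monochromatic and hence
\[c_H \leq 2^{1 - e(H)},\]
which for the triangle gives the matching bound $c_3 \leq \frac{1}{4}$.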
The Burr--Rosta conjecture was disproved by Sidorenko \cite{S89}, who showed that a triangle with a pendant edge is not common. Soon after, Thomason~\cite{T89} disproved Erd\H{o}s' conjecture by showing that $K_4$ is not common. Indeed, he showed that $c_4 < \frac{1}{33}$, where Erd\H{o}s' conjecture would have implied that $c_4 \geq \frac{1}{32}$. More generally, Jagger, \v{S}\v{t}ov\'{i}\v{c}ek and Thomason~\cite{JST96} showed that any graph which contains $K_4$ is not common. They also asked whether the conjecture holds for the $5$-wheel, the graph formed by taking a cycle of length $5$ and adding a central vertex connected to each of the vertices in the cycle. Determining whether this graph satisfies the Burr--Rosta conjecture was of particular interest because it is the smallest graph of chromatic number $4$ which does not contain $K_4$. Using flag algebras \cite{R07}, this question was answered positively by Hatami, Hladk\'y, Kr\'al', Norine and Razborov \cite{HHKNR12}.
\begin{theorem}
The $5$-wheel is common.
\end{theorem}
Therefore, there exist $4$-chromatic common graphs. The question of whether there exist common graphs of every chromatic number was stated explicitly in \cite{HHKNR12}. For example, is it the case that the graphs arising in Mycielski's famous construction of triangle-free graphs with arbitrarily high chromatic number are common?
\begin{problem}
Do there exist common graphs of all chromatic numbers?
\end{problem}
For bipartite graphs (that is, graphs of chromatic number two), the question of whether the graph is common is closely related to a famous conjecture of Sidorenko \cite{S93} and Erd\H{o}s--Simonovits~\cite{ES84}. This conjecture states that if $H$ is a bipartite graph then the random graph with density $p$ has in expectation asymptotically the minimum number of copies of $H$ over all graphs of the same order and edge density. In particular, if this conjecture is true for a given bipartite graph $H$ then so is the Burr--Rosta conjecture. Since Sidorenko's conjecture is now known to hold for a number of large classes of graphs, we will not attempt an exhaustive summary here, instead referring the reader to some of the recent papers on the subject \cite{CFS102, KLL14, LS14}.
In general, the problem of estimating the constants $c_H$ seems to be difficult. For complete graphs, the upper bound $c_t \leq 2^{1 - \binom{t}{2}}$ has only ever been improved by small constant factors, while the best lower bound, due to Conlon \cite{C12}, is $c_t \geq C^{-(1 +o(1))t^2}$, where $C \approx 2.18$ is an explicitly defined constant. The argument that gives this bound may be seen as a multiplicity analogue of the usual Erd\H{o}s--Szekeres argument that bounds Ramsey numbers. We accordingly expect that it will be difficult to improve. For fixed $t$, the flag algebra method offers some hope. For example, it is now known \cite{N14, S14} that $c_4 > \frac{1}{35}$. A more striking recent success of this method, by Cummings, Kr\'al', Pfender, Sperfeld, Treglown and Young \cite{CKPSTY}, is an exact determination of $c_{3,3} = \frac{1}{25}$.
A strong quantitative counterexample to the Burr--Rosta conjecture was found by Fox \cite{F08}. Indeed, suppose that $H$ is connected and split the vertex set of $K_N$ into $\chi(H) - 1$ vertex sets, each of order $\frac{N}{\chi(H) - 1}$, colouring the edges between any two sets blue and those within each set red. Since there are only $\chi(H) - 1$ sets, there cannot be a blue copy of $H$. As every red copy of $H$ must lie completely within one of the $\chi(H) - 1$ vertex sets, a simple calculation then shows that $c_H \leq (\chi(H) - 1)^{1 - v(H)}$. Consider now the graph $H$ consisting of a clique with $t = \sqrt{m}$ vertices and an appended path with $m - \binom{t}{2} \geq \frac{m}{2}$ edges. Since $\chi(H) = \sqrt{m}$ and $v(H) \geq \frac{m}{2}$, we see that $c_H \leq m^{-(1 - o(1))m/4}$. Since $e(H) = m$, this gives a strong disproof of the conjecture that $c_H \geq 2^{1 - m}$. However, the following conjecture \cite{F08} still remains plausible.
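Spelling out the final calculation: for this $H$ we have $\chi(H) = t = \sqrt{m}$, since the clique dominates the chromatic number, and $v(H) \geq \frac{m}{2}$, since the path alone contributes at least $\frac{m}{2}$ vertices, so
\[c_H \leq (\chi(H) - 1)^{1 - v(H)} \leq \big(\sqrt{m} - 1\big)^{-(1 - o(1))m/2} = m^{-(1 - o(1))m/4},\]
using $(\sqrt{m})^{m/2} = m^{m/4}$.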
\begin{conjecture} \label{mult:almost}
For any $\epsilon > 0$, there exists $m_0$ such that if $H$ is a graph with at least $m_0$ edges, then
\[c_H \geq 2^{-e(H)^{1 + \epsilon}}.\]
\end{conjecture}
When $q \geq 3$, the Ramsey multiplicity constants $c_{H, q}$ behave very differently. To see this, consider a two-colouring, in red and blue, of the complete graph on $r(t) - 1$ vertices which contains no monochromatic copy of $K_t$. We now form a three-colouring of $K_N$ by blowing up each vertex in this two-colouring to have order $\frac{N}{r(t) - 1}$ and placing a green clique in each vertex set. This colouring contains no red or blue copies of $K_t$. Therefore, if $H$ is the graph defined above, that is, a clique with $t = \sqrt{m}$ vertices and an appended path with $m - \binom{t}{2} \geq \frac{m}{2}$ edges, it is easy to check that $c_{H,3} \leq (r(t) - 1)^{1 - v(H)} \leq 2^{-(1-o(1)) m^{3/2}/4}$, where we used that $r(t) \geq 2^{t/2}$. In particular, Conjecture~\ref{mult:almost} is false for more than two colours. We hope to discuss this topic further in a forthcoming paper \cite{CFS14}.
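In more detail, with $t = \sqrt{m}$ and $v(H) \geq \frac{m}{2}$ as before, the bound $r(t) \geq 2^{t/2}$ gives
\[c_{H,3} \leq (r(t) - 1)^{1 - v(H)} \leq \big(2^{t/2}\big)^{-(1 - o(1))m/2} = 2^{-(1 - o(1))\,t m/4} = 2^{-(1 - o(1))\,m^{3/2}/4},\]
which falls below the bound of Conjecture~\ref{mult:almost} for any $\epsilon < \frac{1}{2}$.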
\section{Variants} \label{sec:variants}
There are a huge number of interesting variants of the usual Ramsey function. In this section, we will consider only a few of these, focusing on those that we believe to be of the greatest importance.
\subsection{Induced Ramsey numbers} \label{sec:induced}
A graph $H$ is said to be an {\it induced subgraph} of $G$ if $V(H) \subset
V(G)$ and two vertices of $H$ are adjacent if and only if they are adjacent in
$G$. The {\it induced Ramsey number} $r_{\textrm{ind}} (H)$ is the smallest natural
number $N$ for which there is a graph $G$ on $N$ vertices such that every
two-colouring of the edges of $G$ contains an induced monochromatic copy of $H$.
The existence of these numbers was proved independently by Deuber \cite{De},
Erd\H{o}s, Hajnal and P\'osa \cite{ErHaPo} and R\"{o}dl \cite{R73}, though the bounds
these proofs give on $r_{\textrm{ind}} (H)$ are enormous. However, Erd\H{o}s \cite{E75}
conjectured the existence of a constant $c$ such that every graph $H$ with $n$ vertices
satisfies $r_{\textrm{ind}} (H) \leq 2^{c n}$. If true, this would clearly be best possible.
In a problem paper, Erd\H{o}s \cite{E84} stated that he and Hajnal had proved a
bound of the form $r_{\textrm{ind}} (H) \leq 2^{2^{n^{1 + o(1)}}}$. This remained the
state of the art for some years until Kohayakawa, Pr\"{o}mel and R\"{o}dl
\cite{KoPrRo} proved that there is a constant $c$ such that every graph $H$ on
$n$ vertices satisfies $r_{\textrm{ind}} (H) \leq 2^{c n \log^2 n}$. Using similar ideas to those used in the proof of Theorem~\ref{thm:CRSTBound}, the authors~\cite{CFS12} recently improved this bound, removing one of the logarithmic factors from the exponent.
\begin{theorem} \label{induced}
There exists a constant $c$ such that every graph $H$ with $n$ vertices
satisfies
\[r_{\textrm{ind}} (H) \leq 2^{c n \log n}.\]
\end{theorem}
The graph $G$ used by Kohayakawa, Pr\"{o}mel and R\"{o}dl in their proof is a random graph constructed with projective planes. This graph is specifically designed so as to contain many copies of the target graph $H$. Subsequently, Fox and Sudakov \cite{FS08} showed how to prove the same bounds as Kohayakawa, Pr\"{o}mel and R\"{o}dl using explicit pseudorandom graphs. The approach in \cite{CFS12} also uses pseudorandom graphs.
A graph is said to be pseudorandom if it imitates some of the properties of a random graph. One such property, introduced by Thomason \cite{T87, T872}, is that of having approximately the same density between any pair of large disjoint vertex sets. More formally, we say that a graph $G = (V, E)$ is {\it $(p, \lambda)$-jumbled} if, for all subsets $A, B$ of $V$, the number of edges $e(A,B)$ between $A$ and $B$ satisfies
\[|e(A, B) - p|A||B|| \leq \lambda \sqrt{|A||B|}.\]
The {\it binomial random graph} $G(N,p)$, where each edge in an $N$-vertex graph is chosen independently with probability $p$, is, with high probability, a $(p, \lambda)$-jumbled graph with $\lambda = O(\sqrt{p N})$. An example of an explicit $(\frac{1}{2}, \sqrt{N})$-jumbled graph is the Paley graph $P_N$. This is the graph with vertex set $\mathbb{Z}_N$, where $N$ is a prime congruent to 1 modulo 4, and two vertices $x$ and $y$ are adjacent if and only if $x - y$ is a quadratic residue. For further examples, we refer the reader to \cite{KS06}. We may now state the result that lies behind Theorem~\ref{induced}.
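As an illustrative numerical aside (ours, not from the sources above): for a $d$-regular graph on $N$ vertices whose nontrivial adjacency eigenvalues are at most $\lambda$ in absolute value, the expander mixing lemma gives $|e(A, B) - \frac{d}{N}|A||B|| \leq \lambda \sqrt{|A||B|}$, so jumbledness can be read off the spectrum. The Paley graph has nontrivial eigenvalues $\frac{-1 \pm \sqrt{N}}{2}$, as the following sketch (assuming only NumPy) checks for $N = 13$:

```python
import numpy as np

def paley_adjacency(p):
    """Adjacency matrix of the Paley graph on Z_p for a prime p with
    p % 4 == 1: x ~ y iff x - y is a nonzero quadratic residue mod p."""
    residues = {(x * x) % p for x in range(1, p)}
    A = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            if i != j and (i - j) % p in residues:
                A[i, j] = 1.0
    return A

p = 13
eig = np.sort(np.linalg.eigvalsh(paley_adjacency(p)))
top = eig[-1]                         # the degree, (p - 1) / 2
lam = max(abs(eig[0]), abs(eig[-2]))  # largest nontrivial |eigenvalue|
# lam equals (1 + sqrt(p)) / 2 <= sqrt(p), consistent with P_N being
# (1/2, sqrt(N))-jumbled via the expander mixing lemma.
```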
\begin{theorem} \label{maininduced}
There exists a constant $c$ such that, for any $n \in \mathbb{N}$ and any
$(\frac{1}{2}, \lambda)$-jumbled graph $G$ on $N$ vertices with $\lambda
\leq 2^{-c n \log n} N$, every graph on $n$ vertices occurs as an induced
monochromatic copy in all two-colourings of the edges of $G$. Moreover, all of these
induced monochromatic copies can be found in the same colour.
\end{theorem}
For graphs of bounded maximum degree, Trotter conjectured that the induced Ramsey number is at most polynomial in the number of vertices. That is, for each $\Delta$ there should be $d(\Delta)$ such that $r_{\textrm{ind}}(H) \leq n^{d(\Delta)}$ for any $n$-vertex graph $H$ with maximum degree $\Delta$. This was proved by \L uczak and R\"odl \cite{LR96}, who gave an enormous upper bound for $d(\Delta)$, namely, a tower of twos of height $O(\Delta^2)$. More recently, Fox and Sudakov \cite{FS08} proved the much more reasonable bound $d(\Delta) = O(\Delta \log \Delta)$. This was improved by Conlon, Fox and Zhao \cite{CFZ14} as follows.
\begin{theorem} \label{maxinduced}
For every natural number $\Delta$, there exists a constant $c$ such that $r_{\textrm{ind}}(H) \leq c n^{2 \Delta + 8}$ for every $n$-vertex graph $H$ of maximum degree $\Delta$.
\end{theorem}
Again, this is a special case of a much more general result. Like Theorem~\ref{maininduced}, it says that if a graph on $N$ vertices is $(p, \lambda)$-jumbled for $\lambda$ sufficiently small in terms of $p$ and $N$, then the graph has strong Ramsey properties.\footnote{We note that this is itself a simple corollary of the main result in \cite{CFZ14}, which gives a counting lemma for subgraphs of sparse pseudorandom graphs and thereby a mechanism for transferring combinatorial theorems such as Ramsey's theorem to the sparse context. For further details, we refer the interested reader to \cite{CFZ14}.}
\begin{theorem} \label{mainmaxinduced}
For every natural number $\Delta$, there exists a constant $c$ such that, for any $n \in \mathbb{N}$ and any
$(\frac{1}{n}, \lambda)$-jumbled graph $G$ on $N$ vertices with $\lambda
\leq c n^{-\Delta - \frac{9}{2}} N$, every graph on $n$ vertices with maximum degree $\Delta$ occurs as an induced
monochromatic copy in all two-colourings of the edges of $G$. Moreover, all of these
induced monochromatic copies can be found in the same colour.
\end{theorem}
In particular, this gives the stronger result that there are graphs $G$ on $c n^{2 \Delta + 8}$ vertices such that in every two-colouring of the edges of $G$ there is a colour which contains induced monochromatic copies of every graph on $n$ vertices with maximum degree $\Delta$. The exponent of $n$ in this result is best possible up to a multiplicative factor, since, even for the much weaker condition that $G$ contains an induced copy of all graphs on $n$ vertices with maximum degree $\Delta$, $G$ must contain $\Omega(n^{\Delta/2})$ vertices \cite{Bu09}.
Theorems~\ref{maxinduced} and \ref{mainmaxinduced} easily extend to more than two colours. This is not the case for Theorems~\ref{induced} and \ref{maininduced}, where the following problem remains open. As usual, $r_{\textrm{ind}}(H; q)$ denotes the $q$-colour analogue of the induced Ramsey number.
\begin{problem}
Show that if $H$ is a graph on $n$ vertices and $q \geq 3$ is a natural number, then $r_{\textrm{ind}}(H; q) \leq 2^{n^{1 + o(1)}}$.
\end{problem}
It also remains to decide whether Theorem~\ref{maxinduced} can be improved to show that the induced Ramsey number of every graph with $n$ vertices and maximum degree $\Delta$ is at most a polynomial in $n$ whose exponent is independent of $\Delta$.
\begin{problem}
Does there exist a constant $d$ such that $r_{\textrm{ind}}(H) \leq c(\Delta) n^d$ for all graphs with $n$ vertices and maximum degree $\Delta$?
\end{problem}
\subsection{Folkman numbers}
In the late sixties, Erd\H{o}s and Hajnal \cite{EH67} asked whether, for any positive integers $t \geq 3$ and $q \geq 2$, there exists a graph $G$ which is $K_{t+1}$-free but such that any $q$-colouring of the edges of $G$ contains a monochromatic copy of $K_t$. For two colours, this problem was solved in the affirmative by Folkman~\cite{F70}. However, his method did not generalise to more than two colours and it was several years before Ne\v set\v ril and R\"odl \cite{NR76} found another proof which worked for any number of colours.
Once we know that these graphs exist, it is natural to try and estimate their size. To do this, we define the {\it Folkman number} $f(t)$ to be the smallest natural number $N$ for which there exists a $K_{t+1}$-free graph $G$ on $N$ vertices such that every two-colouring of the edges of $G$ contains a monochromatic copy of $K_t$. The lower bound for $f(t)$ is essentially the same as for the usual Ramsey function, that is, $f(t) \geq 2^{c' t}$. On the other hand, the proofs mentioned above (and some subsequent ones~\cite{NR81, RR95}) use induction schemes which result in the required graphs $G$ having enormous numbers of vertices.
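One way to see the lower bound: any $K_{t+1}$-free graph $G$ witnessing $f(t)$ must in particular have $|G| \geq r(t)$, since a two-colouring of $K_{r(t)-1}$ with no monochromatic $K_t$ restricts to a colouring of any graph on fewer than $r(t)$ vertices with no monochromatic $K_t$. Hence
\[f(t) \geq r(t) \geq 2^{t/2}.\]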
Because of the difficulties involved in proving reasonable bounds for these numbers, a substantial amount of effort has gone into understanding the bounds for $f(3)$. In particular, Erd\H{o}s asked for a proof that $f(3)$ is smaller than $10^{10}$. This was subsequently given by Spencer \cite{S88}, building on work of Frankl and R\"odl \cite{FR86}, but has since been improved further \cite{DR08, L07}. The current best bound, due to Lange, Radziszowski and Xu \cite{LRX12}, stands at $f(3) \leq 786$.
The work of Frankl and R\"odl \cite{FR86} and Spencer \cite{S88} relied upon analysing the Ramsey properties of random graphs. Recall that the binomial random graph $G_{n,p}$ is a graph on $n$ vertices where each of the $\binom{n}{2}$ possible edges is chosen independently with probability $p$. Building on the work of Frankl and R\"odl, R\"odl and Ruci\'nski~\cite{RR93, RR95} determined the threshold for Ramsey's theorem to hold in a binomial random graph and used it to give another proof of Folkman's theorem. To state their theorem, let us say that a graph $G$ is {\it $(H, q)$-Ramsey} if any $q$-colouring of the edges of $G$ contains a monochromatic copy of $H$.
\begin{theorem} \label{thm:RR}
For any graph $H$ that is not a forest consisting of stars and paths of length $3$ and any positive integer~$q \geq 2$, there exist positive constants $c$ and $C$ such that
\[
\lim_{n \rightarrow \infty} \mathbb{P} [G_{n,p} \mbox{ is $(H,q)$-Ramsey}] =
\begin{cases}
0 & \text{if $p < c n^{-1/m_2(H)}$}, \\
1 & \text{if $p > C n^{-1/m_2(H)}$},
\end{cases}
\]
where
\[m_2(H) = \max\left\{\frac{e(H') - 1}{v(H') - 2}: H' \subseteq H \mbox{ and } v(H') \geq 3\right\}.\]
\end{theorem}
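For cliques, the maximum in the definition of $m_2$ is attained by $H' = K_t$ itself, so
\[m_2(K_t) = \frac{\binom{t}{2} - 1}{t - 2} = \frac{t + 1}{2},\]
and the theorem locates the threshold for $G_{n,p}$ to be $(K_t, q)$-Ramsey at $p = \Theta\big(n^{-2/(t+1)}\big)$.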
Very recently, it was noted \cite{CG142, RRS14} that some new methods for proving this theorem yield significantly stronger bounds for Folkman numbers. As we have already remarked, the connection between these two topics is not a new one. However, in recent years, a number of very general methods have been developed for proving combinatorial theorems in random sets~\cite{BMS14, CG14, FRS10, ST14, S12} and some of these methods return good quantitative estimates. In particular, the following result was proved by R\"odl, Ruci\'nski and Schacht \cite{RRS14}. The proof relies heavily on the hypergraph container method of Balogh, Morris and Samotij~\cite{BMS14} and Saxton and Thomason~\cite{ST14} and an observation of Nenadov and Steger~\cite{NS14} that allows one to apply this machinery to Ramsey problems.
\begin{theorem} \label{Folkman}
There exists a constant $c$ such that
\[f(t) \leq 2^{c t^4 \log t}.\]
\end{theorem}
\noindent
Their method also returns a comparable bound for the $q$-colour analogue $f(t; q)$. Given how close these bounds now lie to the lower bound, we are willing to conjecture that, like the usual Ramsey number, the Folkman number is at most exponential in $t$.
\begin{conjecture}
There exists a constant $c$ such that
\[f(t) \leq 2^{ct}.\]
\end{conjecture}
\subsection{The Erd\H{o}s--Hajnal conjecture}
There are several results and conjectures saying that graphs which do not contain a fixed induced subgraph are highly structured. The most famous conjecture of this type is due to Erd\H{o}s and Hajnal \cite{EH89} and asks whether any such graph must contain very large cliques or independent sets.\footnote{Although their 1989 paper \cite{EH89} is usually cited as the origin of this problem, the Erd\H{o}s--Hajnal conjecture already appeared in a paper from 1977~\cite{EH77}.}
\begin{conjecture}
For every graph $H$, there exists a positive constant $c(H)$ such that any graph on $n$ vertices which does not contain an induced copy of $H$ has a clique or an independent set of order at least $n^{c(H)}$.
\end{conjecture}
This is in stark contrast with general graphs, since the probabilistic argument that gives the standard lower bound on Ramsey numbers shows that almost all graphs on $n$ vertices contain no clique or independent set of order $2 \log n$. Therefore, the Erd\H{o}s--Hajnal conjecture may be seen as saying that the bound on Ramsey numbers can be improved from exponential to polynomial when one restricts to colourings that have a fixed forbidden subcolouring.
The Erd\H{o}s--Hajnal conjecture has been solved in some special cases. For example, the bounds for off-diagonal Ramsey numbers imply that it holds when $H$ is itself a clique or an independent set. Moreover, Alon, Pach and Solymosi~\cite{APS01} observed that if the conjecture is true for two graphs $H_1$ and $H_2$, then it also holds for the graph $H$ formed by blowing up a vertex of $H_1$ and replacing it with a copy of $H_2$. These results easily allow one to prove that the conjecture holds for all graphs on at most four vertices with the exception of $P_4$, the path with $3$ edges. However, this case follows from noting that any graph which contains no induced $P_4$ is perfect. The conjecture remains open for a number of graphs on five vertices, including the cycle $C_5$ and the path $P_5$. However, Chudnovsky and Safra~\cite{CS08} recently proved the conjecture for the graph on five vertices known as the bull, consisting of a triangle with two pendant edges. We refer the reader to the survey by Chudnovsky~\cite{Chu14} for further information on this and related results.
The best general bound, due to Erd\H{o}s and Hajnal \cite{EH89}, is as follows.
\begin{theorem} \label{thm:EHBound}
For every graph $H$, there exists a positive constant $c(H)$ such that any graph on $n$ vertices which does not contain an induced copy of $H$ has a clique or an independent set of order at least $e^{c(H)\sqrt{\log n}}$.
\end{theorem}
\noindent
Despite much attention, this bound has not been improved. However, an off-diagonal generalisation was proved by Fox and Sudakov~\cite{FS09} using dependent random choice. This says that for any graph $H$ there exists a positive constant $c(H)$ such that for every induced-$H$-free graph $G$ on $n$ vertices and any positive integers $n_1$ and $n_2$ satisfying $(\log n_1)(\log n_2) \leq c(H) \log n$, $G$ contains either a clique of order $n_1$ or an independent set of order $n_2$.
Another result of this type, due to Pr\"omel and R\"odl~\cite{PR99}, states that for each $C$ there is $c>0$ such that every graph on $n$ vertices contains every graph on at most $c \log n$ vertices as an induced subgraph or has a clique or independent set of order at least $C\log n$. That is, every graph contains all small graphs as induced subgraphs or has an unusually large clique or independent set. Fox and Sudakov \cite{FS08} proved a result which implies both the Erd\H{o}s--Hajnal result and the Pr\"omel--R\"odl result. It states that there are absolute constants $c,c'>0$ such that for all positive integers $n$ and $k$ every graph on $n$ vertices contains every graph on at most $k$ vertices as an induced subgraph or has a clique or independent set of order $c2^{c'\sqrt{\frac{\log n}{k}}}\log n$. When $k$ is constant, this gives the Erd\H{o}s--Hajnal bound and when $k$ is a small multiple of $\log n$, we obtain the Pr\"omel--R\"odl result.
It is also interesting to see what happens if one forbids not just one but many graphs as induced subgraphs. A family $\mathcal{F}$ of graphs is {\it hereditary} if it is closed under taking induced subgraphs. We say that it is {\it proper} if it does not contain all graphs. A family $\mathcal{F}$ of graphs has the {\it Erd\H{o}s--Hajnal property} if there is $c=c(\mathcal{F})>0$ such that every graph $G \in \mathcal{F}$ has a clique or an independent set of order $|G|^c$. The Erd\H{o}s--Hajnal conjecture is easily seen to be equivalent to the statement that every proper hereditary family of graphs has the Erd\H{o}s--Hajnal property.
A family $\mathcal{F}$ of graphs has the {\it strong Erd\H{o}s--Hajnal property} if there is $c'=c'(\mathcal{F})>0$ such that for every graph $G \in \mathcal{F}$ on at least two vertices, $G$ or its complement $\bar G$ contains a complete bipartite subgraph with parts of order $c'|G|$. A simple induction argument (see \cite{FP08}) shows that if a hereditary family of graphs has the strong Erd\H{o}s--Hajnal property, then it also has the Erd\H{o}s--Hajnal property. However, not every proper hereditary family of graphs has the strong Erd\H{o}s--Hajnal property. For example, it is easy to see that the family of triangle-free graphs does not have the strong Erd\H{o}s--Hajnal property. Even so, the strong Erd\H{o}s--Hajnal property has been a useful way to deduce the Erd\H{o}s--Hajnal property for some families of graphs. A good example is the recent result of Bousquet, Lagoutte and Thomass\'e~\cite{BLT} which states that for each positive integer $t$ the family of graphs that excludes both the path $P_t$ on $t$ vertices and its complement as induced subgraphs has the strong Erd\H{o}s--Hajnal property (using different techniques, Chudnovsky and Seymour~\cite{CS14+} had earlier proved that this family has the Erd\H{o}s--Hajnal property when $t = 6$). Bonamy, Bousquet and Thomass\'e~\cite{BBT} later extended the result of \cite{BLT}, proving that for each $t \geq 3$ the family of graphs that excludes all cycles on at least $t$ vertices and their complements as induced subgraphs has the strong Erd\H{o}s--Hajnal property.
This approach also applies quite well in combinatorial geometry, where a common problem is to show that arrangements of geometric objects have large crossing or disjoint patterns. This is usually proved by showing that the auxiliary {\it intersection graph}, with a vertex for each object and an edge between two vertices if the corresponding objects intersect, has a large clique or independent set. Larman, Matou\v sek, Pach and T\"or\H{o}csik~\cite{LMPT} proved that the family of intersection graphs of convex sets in the plane has the Erd\H{o}s--Hajnal property. This was later strengthened by Fox, Pach and T\'oth~\cite{FPT10}, who proved that this family has the strong Erd\H{o}s--Hajnal property. Alon, Pach, Pinchasi, Radoi\v ci\'c and Sharir \cite{APPRS05} proved that the family of semi-algebraic graphs of bounded description complexity has the strong Erd\H{o}s--Hajnal property. This implies the existence of large patterns in many graphs that arise naturally in discrete geometry.
String graphs are intersection graphs of curves in the plane. It is still an open problem to decide whether every family of $n$ curves in the plane contains a subfamily of size $n^{c}$ whose elements are either pairwise intersecting or pairwise disjoint, i.e., whether the family $\mathcal{S}$ of string graphs has the Erd\H{o}s--Hajnal property. The best known bound is $n^{c/\log\log n}$, due to Fox and Pach~\cite{FP14}. This follows by first proving that every string graph on $n \geq 2$ vertices contains a complete or empty bipartite subgraph with parts of order $\Omega(n/\log n)$. This latter result is tight up to the constant factor, so the family of string graphs does not have the strong Erd\H{o}s--Hajnal property. On the other hand, Fox, Pach and T\'oth~\cite{FPT10} proved that the family $\mathcal{S}_k$ of intersection graphs of curves where each pair of curves intersects at most $k$ times does have the strong Erd\H{o}s--Hajnal property.
We have already noted that the strong Erd\H{o}s--Hajnal property does not always hold for induced-$H$-free graphs. However, Erd\H{o}s, Hajnal and Pach \cite{EHP00} proved that a bipartite analogue of the Erd\H{o}s--Hajnal conjecture does hold. That is, for every graph $H$ there is a positive constant $c(H)$ such that every induced-$H$-free graph on $n \geq 2$ vertices contains a complete or empty bipartite graph with parts of order $n^{c(H)}$. Using dependent random choice, Fox and Sudakov \cite{FS09} proved a strengthening of this result, showing that every such graph contains a complete bipartite graph with parts of order $n^{c(H)}$ or an independent set of order $n^{c(H)}$.
In a slightly different direction, R\"odl \cite{R86} showed that any graph with a forbidden induced subgraph contains a linear-sized subset which is close to being complete or empty. That is, for every graph $H$ and every $\epsilon>0$, there is $\delta>0$ such that every induced-$H$-free graph on $n$ vertices contains an induced subgraph on at least $\delta n$ vertices with edge density at most $\epsilon$ or at least $1-\epsilon$. R\"odl's proof uses Szemer\'edi's regularity lemma and consequently gives a tower-type bound on $\delta^{-1}$. Fox and Sudakov~\cite{FS08} proved the much better bound $\delta \geq 2^{-c|H|(\log 1/\epsilon)^2}$, which easily implies Theorem~\ref{thm:EHBound} as a corollary. They also conjectured that a polynomial dependency holds, which would in turn imply the Erd\H{o}s--Hajnal conjecture.
\begin{conjecture}
For every graph $H$, there is a positive constant $c(H)$ such that for every $\epsilon>0$ every induced-$H$-free graph on $n$ vertices contains an induced subgraph on $\epsilon^{c(H)}n$ vertices with density at most $\epsilon$ or at least $1-\epsilon$.
\end{conjecture}
One of the key steps in proving Theorem~\ref{thm:EHBound} is to find, in an induced-$H$-free graph on $n$ vertices, two disjoint subsets of order at least $\epsilon^{c}n$ for some $c = c(H) > 0$ such that the edge density between them is at most $\epsilon$ or at least $1-\epsilon$. We wonder whether this can be improved so that one part is of linear size.
\begin{problem}
Is it true that for every graph $H$ there is $c=c(H)>0$ such that for every $\epsilon > 0$ every induced-$H$-free graph on $n$ vertices contains two disjoint subsets of orders $cn$ and $\epsilon^{c}n$ such that the edge density between them is at most $\epsilon$ or at least $1-\epsilon$?
\end{problem}
A positive answer to this question would improve the bound on the Erd\H{o}s--Hajnal conjecture to $e^{c\sqrt{\log n \log \log n}}$. However, we do not even know the answer when $H$ is a triangle. A positive answer in this case would imply the following conjecture.
\begin{conjecture}
There is a positive constant $c$ such that every triangle-free graph on $n \geq 2$ vertices contains disjoint subsets of orders $cn$ and $n^c$ with no edges between them.
\end{conjecture}
\noindent
Restated, this conjecture says that there exists a positive constant $c$ such that the Ramsey number of a triangle versus a complete bipartite graph with parts of orders $cn$ and $n^c$ is at most $n$.
There is also a multicolour generalisation of the Erd\H{o}s--Hajnal conjecture.
\begin{conjecture} For every $q$-edge-coloured complete graph $K$, there exists a positive constant $c(K)$ such that every $q$-edge-colouring of the complete graph on $n$ vertices which does not contain a copy of $K$ has an induced subgraph on $n^{c(K)}$ vertices which uses at most $q-1$ colours.
\end{conjecture}
The Erd\H{o}s--Hajnal conjecture clearly corresponds to the case $q = 2$, as we can take the edges of our graph as one colour and the non-edges as the other colour. For $q = 3$, Fox, Grinshpun and Pach~\cite{FGP} proved that every rainbow-triangle-free $3$-edge-colouring of the complete graph on $n$ vertices contains a two-coloured subset with at least $cn^{1/3} \log^2 n$ vertices. This bound is tight up to the constant factor and answers a question of Hajnal \cite{H08}, the construction that demonstrates tightness being the lexicographic product of three two-colourings of the complete graph on $n^{1/3}$ vertices, one for each pair of colours and each having no monochromatic clique of order $\log n$.
Alon, Pach and Solymosi~\cite{APS01} observed that the Erd\H{o}s--Hajnal conjecture is equivalent to the following variant for tournaments. For every tournament $T$, there is a positive constant $c(T)$ such that every tournament on $n$ vertices which does not contain $T$ as a subtournament has a transitive subtournament of order $n^{c(T)}$. Recently, Berger, Choromanski and Chudnovsky~\cite{BCC14} proved that this conjecture holds for every tournament $T$ on at most five vertices, as well as for an infinite family of tournaments that cannot be obtained through the tournament analogue of the substitution procedure of Alon, Pach and Solymosi.
Analogues of the Erd\H{o}s--Hajnal conjecture have also been studied for hypergraphs. The authors~\cite{CFS12b} proved that for $k \geq 4$ no analogue of the standard Erd\H{o}s--Hajnal conjecture can hold in $k$-uniform hypergraphs. That is, there are $k$-uniform hypergraphs $H$ and sequences of induced-$H$-free hypergraphs which do not contain cliques or independent sets of order appreciably larger than is guaranteed by Ramsey's theorem. The proof uses the fact that the stepping-up construction of Erd\H{o}s and Hajnal has forbidden induced subgraphs.
Nevertheless, one can still show that $3$-uniform hypergraphs with forbidden induced subgraphs contain some unusually large configurations. It is well known that every $3$-uniform hypergraph on $n$ vertices contains a complete or empty tripartite subgraph with parts of order $c(\log n)^{1/2}$ and a random $3$-uniform hypergraph shows that this bound is tight up to the constant factor. R\"odl and Schacht~\cite{RS12} proved that this bound can be improved by any constant factor for sufficiently large induced-$H$-free hypergraphs. This result was subsequently improved by the authors~\cite{CFS12b}, who showed that for every $3$-uniform hypergraph $H$ there exists a positive constant $\delta(H)$ such that, for $n$ sufficiently large, every induced-$H$-free $3$-uniform hypergraph on $n$ vertices contains a complete or empty tripartite subgraph with parts of order $(\log n)^{1/2 + \delta(H)}$. We believe that this bound can be improved further. If true, the following conjecture would be best possible.
\begin{conjecture}
For every $3$-uniform hypergraph $H$, any induced-$H$-free hypergraph on $n$ vertices contains a complete or empty tripartite subgraph with parts of order $(\log n)^{1-o(1)}$.
\end{conjecture}
\subsection{Size Ramsey numbers}
Given a graph $H$, the {\it size Ramsey number} $\hat{r}(H)$ is defined to be the smallest $m$ for which there exists a graph $G$ with $m$ edges such that $G$ is Ramsey with respect to $H$, that is, such that any two-colouring of the edges of $G$ contains a monochromatic copy of $H$. This concept was introduced by Erd\H{o}s, Faudree, Rousseau and Schelp \cite{EFRS78}. Since the complete graph on $r(H)$ vertices is Ramsey with respect to $H$, it is clear that $\hat{r}(H) \leq \binom{r(H)}{2}$. Moreover, as observed by Chv\'atal (see \cite{EFRS78}), this inequality is tight when $H$ is a complete graph. This follows easily from noting that any graph which is Ramsey with respect to $K_t$ must have chromatic number at least $r(t)$.
The most famous result in this area is the following rather surprising theorem of Beck \cite{B832}, which says that the size Ramsey number of a path is linear in the number of vertices. Here $P_n$ is the path with $n$ vertices.
\begin{theorem} \label{size:path}
There exists a constant $c$ such that $\hat{r}(P_n) \leq c n$.
\end{theorem}
This result, which answered a question of Erd\H{o}s, Faudree, Rousseau and Schelp \cite{EFRS78} (see also \cite{E812}), was later extended to trees of bounded maximum degree \cite{FP87} and to cycles \cite{HKL95}. For a more general result on the size Ramsey number of trees, we refer the reader to the recent work of Dellamonica \cite{D12}.
Beck \cite{B90} raised the question of whether this result could be generalised to graphs of bounded maximum degree. That is, he asked whether for any $\Delta$ there exists a constant $c$, depending only on $\Delta$, such that any graph on $n$ vertices with maximum degree $\Delta$ has size Ramsey number at most $c n$. This question was answered in the negative by R\"odl and Szemer\'edi \cite{RS00}, who proved that there are already graphs of maximum degree $3$ with superlinear size Ramsey number.
\begin{theorem} \label{size:lower}
There are positive constants $c$ and $\alpha$ and, for every $n$, a graph $H$ with $n$ vertices and maximum degree $3$ such that
\[\hat{r}(H) \geq c n (\log n)^{\alpha}.\]
\end{theorem}
On the other hand, a result of Kohayakawa, R\"odl, Schacht and Szemer\'edi \cite{KRSS11} shows that the size Ramsey number of graphs with bounded maximum degree is subquadratic.
\begin{theorem}
For every natural number $\Delta$, there exists a constant $c_\Delta$ such that any graph $H$ on $n$ vertices with maximum degree $\Delta$ satisfies
\[\hat{r}(H) \leq c_\Delta n^{2 - 1/\Delta} (\log n)^{1/\Delta}.\]
\end{theorem}
We are not sure where the truth lies, though it seems likely that Theorem~\ref{size:lower} can be improved by a polynomial factor. This was formally conjectured by R\"odl and Szemer\'edi \cite{RS00}.
\begin{conjecture}
For every natural number $\Delta \geq 3$, there exists a constant $\epsilon > 0$ such that for all sufficiently large $n$ there is a graph $H$ on $n$ vertices with maximum degree $\Delta$ for which $\hat{r}(H) \geq n^{1 + \epsilon}$.
\end{conjecture}
More generally, given a real-valued graph parameter $f$, we may define the {\it $f$-Ramsey number} $r_f(H)$ of $H$ to be the minimum value of $f(G)$, taken over all graphs $G$ which are Ramsey with respect to $H$. The usual Ramsey number is the case where $f(G) = v(G)$, while the size Ramsey number is the case where $f(G) = e(G)$. However, there have also been studies of other variants, such as the {\it chromatic Ramsey number} $r_\chi(H)$, where $f(G) = \chi(G)$, and the {\it degree Ramsey number} $r_\Delta(H)$, where $f(G) = \Delta(G)$. We will point out one result concerning the first parameter and a problem concerning the second.
The chromatic Ramsey number was introduced by Burr, Erd\H{o}s and Lov\'asz \cite{BEL76}, who observed that any graph $H$ with chromatic number $t$ has $r_\chi(H) \geq (t-1)^2 + 1$ and conjectured that there are graphs of chromatic number $t$ for which this bound is sharp. In their paper, they outlined a proof of this conjecture based on the still unproven Hedetniemi conjecture, which concerns the chromatic number of the tensor product of graphs. Recently, Zhu \cite{Z11} proved a fractional version of the Hedetniemi conjecture, which, by an observation of Paul and Tardif \cite{PT12}, was sufficient to establish the conjecture.
\begin{theorem}
For every natural number $t$, there exists a graph $H$ of chromatic number $t$ such that
\[r_\chi(H) = (t-1)^2 + 1.\]
\end{theorem}
The outstanding open problem concerning the degree Ramsey number is the following, which seems to have been first noted by Kinnersley, Milans and West \cite{KMW12}.
\begin{problem} \label{size:degree}
Is it true that for every $\Delta \geq 3$, there exists a natural number $\Delta'$ such that $r_\Delta(H) \leq \Delta'$ for every graph $H$ of maximum degree $\Delta$?
\end{problem}
\noindent
We suspect that the answer is no, but the problem appears to be difficult. For $\Delta = 2$, the answer is yes (see, for example, \cite{HMR14}).
An on-line variant of the size Ramsey number was introduced by Beck \cite{B93} and, independently, by Kurek and Ruci\'nski \cite{KR05}. It is best described as a game between two players, known as Builder and Painter. Builder draws a sequence of edges and, as each edge appears, Painter must colour it in either red or blue. Builder's goal is to force Painter to draw a monochromatic copy of some fixed graph $H$. The smallest number of turns needed by Builder to force Painter to draw a monochromatic copy of $H$ is known as the {\it on-line Ramsey number} of $H$ and denoted $\tilde{r}(H)$. As usual, we write $\tilde{r}(t)$ for $\tilde{r}(K_t)$.
The basic question in this area, attributed to R\"odl (see \cite{KR05}), is to show that $\lim_{t \rightarrow \infty} \tilde{r}(t)/\hat{r}(t) = 0$. Put differently, we would like to show that $\tilde{r}(t) = o(\binom{r(t)}{2})$. This conjecture remains open (and is probably difficult), but the following result, due to Conlon \cite{C093}, shows that the on-line Ramsey number $\tilde{r}(t)$ is exponentially smaller than the size Ramsey number $\hat{r}(t)$ for infinitely many values of $t$.
\begin{theorem}
There exists a constant $c > 1$ such that for infinitely many $t$,
\[\tilde{r}(t) \leq c^{-t} \binom{r(t)}{2}.\]
\end{theorem}
On-line analogues of $f$-Ramsey numbers were considered by Grytczuk, Ha\l uszczak and Kierstead \cite{GHK04}. The most impressive result in this direction, proved by Grytczuk, Ha\l uszczak, Kierstead and Konjevod over two papers \cite{GHK04, KK09}, says that Builder may force Painter to draw a monochromatic copy of any graph with chromatic number $t$ while only exposing a graph of chromatic number $t$ herself. We also note that the on-line analogue of Problem~\ref{size:degree} was studied in \cite{BGKMSW09} but again seems likely to have a negative answer for $\Delta \geq 3$ (though we refer the interested reader to~\cite{CFS14A} for a positive answer to the analogous question when maximum degree is replaced by degeneracy).
\subsection{Generalised Ramsey numbers}
In this section, we will consider two generalisations of the usual Ramsey function, both of which have been referred to in the literature as generalised Ramsey numbers.
\subsubsection{The Erd\H{o}s--Gy\'arf\'as function}
Let $p$ and $q$ be positive integers with $2 \leq q \leq \binom{p}{2}$. An edge colouring of the complete graph $K_n$ is said to be a {\it $(p, q)$-colouring} if every $K_p$ receives at least $q$ different colours. The function $f(n, p, q)$ is defined to be the minimum number of colours that are needed for $K_n$ to have a $(p,q)$-colouring. This function generalises the usual Ramsey function, as may be seen by noting that $f(n, p, 2)$ is the minimum number of colours needed to guarantee that no $K_p$ is monochromatic. In particular, if we invert the bounds $2^s \leq r(3; s) \leq e s!$, we get
\[c' \frac{\log n}{\log \log n} \leq f(n, 3, 2) \leq c \log n.\]
This function was first introduced by Erd\H{o}s and Shelah \cite{E75, E81} and studied in depth by Erd\H{o}s and Gy\'arf\'as \cite{EG97}, who proved a number of interesting results, demonstrating how the function falls off from being equal to $\binom{n}{2}$ when $q = \binom{p}{2}$ and $p \geq 4$ to being at most logarithmic when $q = 2$. They also determined ranges of $p$ and $q$ where the function $f(n, p, q)$ is linear in $n$, where it is quadratic in $n$ and where it is asymptotically equal to $\binom{n}{2}$. Many of these results were subsequently strengthened by S\'ark\"ozy and Selkow \cite{SS01, SS03}.
One simple observation of Erd\H{o}s and Gy\'arf\'as is that $f(n, p, p)$ is always polynomial in $n$. To see this, it is sufficient to show that a colouring with fewer than $n^{1/(p-2)} - 1$ colours contains a $K_p$ with at most $p - 1$ colours. For $p = 3$, this follows since one only needs that some vertex has at least two neighbours in the same colour. For $p = 4$, we have that any vertex will have at least $n^{1/2}$ neighbours in some fixed colour. But then there are fewer than $n^{1/2} - 1$ colours on this neighbourhood of order at least $n^{1/2}$, so the $p = 3$ case implies that it contains a triangle with at most two colours. The general case follows similarly.
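The $p = 3$ base case of this argument is a single pigeonhole step at one vertex. The following Python sketch (our own illustration; the function name and the example colouring are not from the literature) makes it concrete: with at most $n - 2$ colours, some vertex sees the same colour on two incident edges, and those two edges span a triangle with at most two colours.

```python
def two_coloured_triangle(n, colour):
    """p = 3 case of the observation: if the edges of K_n are coloured with
    at most n - 2 colours, some vertex v has two neighbours joined to it in
    the same colour, giving a triangle carrying at most two colours.
    `colour(x, y)` is assumed symmetric in its arguments."""
    v = 0
    seen = {}
    for u in range(1, n):
        c = colour(v, u)
        if c in seen:
            return (v, seen[c], u)  # edges v-seen[c] and v-u share a colour
        seen[c] = u
    return None  # only reachable if n - 1 or more colours appear at v

# Example colouring (an arbitrary choice for illustration): colour edge
# {x, y} by (x + y) mod (n - 2), so at most n - 2 colours are used.
n = 10
colour = lambda x, y: (x + y) % (n - 2)
v, a, b = two_coloured_triangle(n, colour)
# The triangle {v, a, b} carries at most two distinct colours.
assert colour(v, a) == colour(v, b)
```

The general case iterates this step: pass to a large monochromatic-majority neighbourhood and apply the statement one value of $p$ lower.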
Erd\H{o}s and Gy\'arf\'as \cite{EG97} asked whether this result is best possible, that is, whether $q = p$ is the smallest value of $q$ for which $f(n, p, q)$ is polynomial in $n$. For $p = 3$, this is certainly true, since we know that $f(n, 3, 2) \leq c \log n$. However, for general $p$, they were only able to show that $f(n, p, \lceil \log p \rceil)$ is subpolynomial. This left the question of determining whether $f(n, p, p - 1)$ is subpolynomial wide open, even for $p = 4$.
The first progress on this question was made by Mubayi \cite{M98}, who found a $(4,3)$-colouring of $K_n$ with only $e^{c \sqrt{\log n}}$ colours, thus showing that $f(n, 4,3) \leq e^{c \sqrt{\log n}}$. Later, Eichhorn and Mubayi \cite{EM00} showed that this colouring is also a $(5,4)$-colouring and, more generally, a $(p, 2 \lceil \log p \rceil - 2)$-colouring for all $p \geq 5$. It will be instructive to describe this colouring (or rather a slight variant).
Given $n$, let $t$ be the smallest integer such that $n \leq 2^{t^2}$ and set $m = 2^t$. We consider the vertex set $[n]$ as a subset of $[m]^t$. For two vertices $x = (x_1, \ldots,x_t)$ and $y = (y_1 ,\ldots, y_t)$, let
\[ c_M(x,y) = \Big( \{x_i, y_i\}, a_1, \ldots,a_t \Big), \]
where $i$ is the minimum index in which $x$ and $y$ differ and $a_j = 0$ or $1$ depending on whether $x_j = y_j$ or not. Since $2^{(t-1)^2} < n$, the total number of colours used is at most
\[ m^2 \cdot 2^t = 2^{3 t} < 2^{3 (1 + \sqrt{\log n})} \leq 2^{6 \sqrt{\log n}}. \]
Hence, $c_M$ uses at most $2^{6\sqrt{\log n}}$ colours to colour the edge set of the complete graph $K_n$. The proof that $c_M$ is a $(4,3)$-colouring is a straightforward case analysis which we leave as an exercise. We have already noted that it is also a $(5,4)$-colouring. However, as observed in~\cite{CFLS141}, it cannot be a $(p, p -1)$-colouring for all $p$.
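The colouring $c_M$ is simple enough to implement directly. The following Python sketch (our own illustration, not code from the cited papers; the embedding of $[n]$ into $[m]^t$ via base-$m$ digits is one arbitrary choice of embedding) computes the colours on $K_n$ and confirms that far fewer than $\binom{n}{2}$ of them appear:

```python
from itertools import combinations

def mubayi_colour(x, y):
    """Colour of the edge {x, y}, where x and y are distinct tuples in [m]^t:
    the unordered pair {x_i, y_i} at the first coordinate i where x and y
    differ, together with the full agreement pattern (a_1, ..., a_t)."""
    i = next(j for j in range(len(x)) if x[j] != y[j])
    pattern = tuple(0 if xj == yj else 1 for xj, yj in zip(x, y))
    return (frozenset((x[i], y[i])), pattern)

def colour_Kn(n):
    """Colour the edges of K_n via c_M and return the set of colours used."""
    t = next(t for t in range(1, 64) if n <= 2 ** (t * t))
    m = 2 ** t
    def embed(v):  # write v in base m with t digits, so [n] sits inside [m]^t
        digits = []
        for _ in range(t):
            digits.append(v % m)
            v //= m
        return tuple(digits)
    verts = [embed(v) for v in range(n)]
    return {mubayi_colour(x, y) for x, y in combinations(verts, 2)}

# At most m^2 * 2^t = 2^{3t} colours can occur, far fewer than the
# ~n^2/2 edges being coloured.
for n in (50, 200, 500):
    t = next(t for t in range(1, 64) if n <= 2 ** (t * t))
    assert len(colour_Kn(n)) <= 2 ** (3 * t)
```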
Nevertheless, in a recent paper, Conlon, Fox, Lee and Sudakov \cite{CFLS141} found a way to extend this construction and answer the question of Erd\H{o}s and Gy\'arf\'as for all $p$. Stated in a quantitative form (though one which we expect to be very far from best possible), this result is as follows.
\begin{theorem} \label{erdosgyar}
For any natural number $p \geq 4$, there exists a constant $c_p$ such that
\[f(n, p, p-1) \leq 2^{c_p (\log n)^{1 - 1/(p-2)}}.\]
\end{theorem}
Our quantitative understanding of these functions is poor, even for $f(n, 4, 3)$. Improving a result of Kostochka and Mubayi \cite{KM08}, Fox and Sudakov \cite{FS093} showed that $f(n, 4, 3) \geq c' \log n$ for some positive constant $c'$. Though this is a substantial improvement on the trivial bound $f(n, 4, 3) \geq f(n, 4, 2) \geq c' \log n/\log \log n$, it remains very far from the upper bound of $e^{c\sqrt{\log n}}$. We suspect that the upper bound may be closer to the truth. An answer to the following question would be a small step in the right direction.
\begin{problem}
Show that $f(n, 4, 3) = \omega(\log n)$.
\end{problem}
For $p \geq k + 1$ and $2 \leq q \leq \binom{p}{k}$, we define the natural hypergraph generalisation $f_k(n,p,q)$ as the minimum number of colours that are needed for $K_n^{(k)}$ to have a $(p, q)$-colouring, where here a $(p, q)$-colouring means that every $K_p^{(k)}$ receives at least $q$ distinct colours. As in the graph case, it is comparatively straightforward to show that $f_k(n, p, \binom{p-1}{k-1} + 1)$ is polynomial in $n$ for all $p \geq k + 1$. With Lee \cite{CFLS142}, we conjecture the following.
\begin{conjecture}
$f_k(n, p, \binom{p-1}{k-1})$ is subpolynomial for all $p \geq k + 1$.
\end{conjecture}
Theorem~\ref{erdosgyar} addresses the $k = 2$ case, while the cases where $k = 3$ and $p = 4$ and $5$ were addressed in \cite{CFLS142}. These cases already require additional ideas beyond those used to resolve the graph case. The case where $k = 3$ and $p = 4$ is of particular interest, because it is closely related to Shelah's famous primitive recursive bound for the Hales--Jewett theorem \cite{Sh89}.
Shelah's proof relied in a crucial way on a lemma now known as the Shelah cube lemma. The simplest case of this lemma concerns the {\it grid graph} $\Gamma_{m,n}$, the graph on vertex set $[m] \times [n]$ where two distinct vertices $(i,j)$ and $(i', j')$ are adjacent if and only if either $i=i'$ or $j=j'$. That is, $\Gamma_{m,n}$ is the Cartesian product $K_m \times K_n$. A {\it rectangle} in $\Gamma_{m,n}$ is a copy of $K_2 \times K_2$, that is, the induced subgraph on a vertex subset of the form $\{(i,j), (i',j), (i,j'),(i',j') \}$ for some integers $1 \le i < i' \le m$ and $1 \le j < j' \le n$. We will denote this rectangle by $(i,j,i',j')$. For an edge-coloured grid graph, an \emph{alternating rectangle} is a rectangle $(i,j,i',j')$ such that the edges $\{(i,j), (i',j)\}$ and $\{(i,j'), (i',j')\}$ receive the same colour and the edges $\{(i,j), (i,j')\}$ and $\{(i',j), (i',j')\}$ receive the same colour, that is, opposite sides of the rectangle receive the same colour. The basic case of Shelah's lemma, which we refer to as the grid Ramsey problem, asks for an estimate on $G(r)$, the smallest $n$ such that every $r$-colouring of the edges of $\Gamma_{n,n}$ contains an alternating rectangle.
It is easy to show that $G(r) \leq r^{\binom{r+1}{2}} + 1$. Indeed, let $n = r^{\binom{r+1}{2}} + 1$ and suppose that an $r$-colouring of $\Gamma_{r+1, n}$ is given. Since each column is a copy of $K_{r+1}$, there are at most $r^{\binom{r+1}{2}}$ ways to colour the edges of a fixed column with $r$ colours. Since $n > r^{\binom{r+1}{2}}$, the pigeonhole principle implies that there are two columns which are identically coloured. Let these columns be the $j$-th column and the $j'$-th column and consider the edges that connect these two columns. Since there are $r+1$ rows, the pigeonhole principle implies that there are $i$ and $i'$ such that the edges $\{(i, j), (i, j')\}$ and $\{(i', j), (i', j')\}$ have the same colour. Since the edges $\{(i,j), (i',j)\}$ and $\{(i, j'), (i',j')\}$ also have the same colour, the rectangle $(i,j, i', j')$ is alternating.
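The pigeonhole argument above is entirely effective and can be run verbatim for small $r$. A minimal Python sketch (our own; the colouring interface `colour(u, v)` is an assumption for illustration) follows the two pigeonhole steps and returns an alternating rectangle:

```python
from itertools import combinations
from math import comb
import random

def find_alternating_rectangle(r, colour):
    """Given any r-colouring `colour` of the edges of the grid graph with
    r + 1 rows and n = r^C(r+1,2) + 1 columns, return an alternating
    rectangle (i, j, i', j').  `colour((i, j), (i2, j2))` is assumed
    symmetric and defined on row and column edges."""
    rows = r + 1
    n = r ** comb(rows, 2) + 1
    # Pigeonhole 1: two columns coloured identically on their internal edges.
    seen = {}
    for col in range(n):
        key = tuple(colour((i, col), (i2, col))
                    for i, i2 in combinations(range(rows), 2))
        if key in seen:
            j, j2 = seen[key], col
            break
        seen[key] = col
    # Pigeonhole 2: among the r + 1 horizontal edges joining the two
    # identical columns, two share a colour, yielding the rectangle.
    by_colour = {}
    for i in range(rows):
        c = colour((i, j), (i, j2))
        if c in by_colour:
            return (by_colour[c], j, i, j2)
        by_colour[c] = i

# Sanity check on a random 2-colouring (r = 2: 3 rows, 2^3 + 1 = 9 columns).
random.seed(0)
cache = {}
def colour(u, v):
    key = frozenset((u, v))
    if key not in cache:
        cache[key] = random.randrange(2)
    return cache[key]

i, j, i2, j2 = find_alternating_rectangle(2, colour)
assert colour((i, j), (i2, j)) == colour((i, j2), (i2, j2))  # vertical sides
assert colour((i, j), (i, j2)) == colour((i2, j), (i2, j2))  # horizontal sides
```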
This argument is very asymmetrical and yet the resulting bound on $G(r)$ remains essentially the best known. The only improvement, due to Gy\'arf\'as \cite{Gy94}, is $G(r) \leq r^{\binom{r+1}{2}} - r^{\binom{r-1}{2} + 1} + 1$. Though it seems likely that $G(r)$ is significantly smaller than this, the following problem already appears to be difficult.
\begin{problem}
Show that $G(r) = o(r^{\binom{r+1}{2}})$.
\end{problem}
In the second edition of their book on Ramsey theory \cite{GRS90}, Graham, Rothschild and Spencer suggested that $G(r)$ may even be polynomial in $r$. This was recently disproved by Conlon, Fox, Lee and Sudakov \cite{CFLS142}, who showed the following.
\begin{theorem} \label{grid}
There exists a positive constant $c$ such that
\[ G(r) > 2^{c (\log r)^{5/2}/\sqrt{\log \log r}}. \]
\end{theorem}
To see how this relates back to estimating $f_3(n, 4, 3)$, we let $g(n)$ be the inverse function of $G(r)$, defined as the minimum integer $s$ for which there exists an $s$-colouring of the edges of $\Gamma_{n,n}$ with no alternating rectangle. Letting $K^{(3)}(n,n)$ be the $3$-uniform hypergraph with vertex set $A \cup B$, where $|A| = |B| = n$, and edge set consisting of all those triples which intersect both $A$ and $B$, we claim that $g(n)$ is within a factor of two of the minimum integer $r$ for which there exists an $r$-colouring of the edges of $K^{(3)}(n,n)$ such that any copy of $K_4^{(3)}$ has at least three colours on its edges.
To prove this claim, we define a bijection between the edges of $\Gamma_{n,n}$ and the edges of $K^{(3)}(n,n)$ such that the rectangles of $\Gamma_{n,n}$ are in one-to-one correspondence with the copies of $K_4^{(3)}$ in $K^{(3)}(n,n)$. For $i \in A$ and $j,j' \in B$, we map the edge $(i,j,j')$ of $K^{(3)}(n,n)$ to the edge $\{(i,j), (i, j')\}$ of $\Gamma_{n,n}$ and, for $i, i' \in A$ and $j \in B$, we map the edge $(i,i',j)$ of $K^{(3)}(n,n)$ to the edge $\{(i,j), (i', j)\}$ of $\Gamma_{n,n}$. Given a colouring of $K^{(3)}(n,n)$ where every $K_4^{(3)}$ receives at least three colours, this correspondence gives a colouring of $\Gamma_{n,n}$ where every rectangle receives at least three colours, showing that $g(n) \le r$. Similarly, given a colouring of $\Gamma_{n,n}$ with no alternating rectangles, we may double the number of colours to ensure that the set of colours used for row edges is disjoint from the set used for column edges. This gives a colouring where every $K_4^{(3)}$ receives at least three colours, so $r \le 2 g(n)$.
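The bijection is simple enough to verify mechanically for small $n$. The Python sketch below (our own, for illustration; labels for $A$ and $B$ are an arbitrary choice) checks that the map is a bijection onto the edge set of $\Gamma_{n,n}$ and that the four triples of a copy of $K_4^{(3)}$ map exactly to the four edges of the corresponding rectangle:

```python
from itertools import combinations

def grid_edge(triple, A, B):
    """Map an edge of K^(3)(n,n) (a triple meeting both sides A and B) to
    the corresponding edge of the grid graph on A x B."""
    a = sorted(v for v in triple if v in A)
    b = sorted(v for v in triple if v in B)
    if len(a) == 1:            # {i, j, j'} -> {(i, j), (i, j')}
        (i,), (j, j2) = a, b
        return frozenset({(i, j), (i, j2)})
    (i, i2), (j,) = a, b       # {i, i', j} -> {(i, j), (i', j)}
    return frozenset({(i, j), (i2, j)})

n = 4
A = range(n)                   # one side of K^(3)(n,n)
B = range(n, 2 * n)            # the other side, with disjoint labels
triples = [t for t in combinations(list(A) + list(B), 3)
           if any(v in A for v in t) and any(v in B for v in t)]

# The map is injective and hits every edge of the grid graph: there are
# 2 * n * C(n,2) edges in total, split evenly into row and column edges.
images = {grid_edge(t, set(A), set(B)) for t in triples}
assert len(images) == len(triples) == 2 * n * (n * (n - 1) // 2)

# A copy of K_4^(3) in K^(3)(n,n) spans two vertices on each side (three
# vertices on one side would give a triple missing the other side), and
# its four triples map to the four edges of the rectangle (i, j, i', j').
i, i2, j, j2 = 0, 1, n, n + 1
k4_triples = list(combinations([i, i2, j, j2], 3))
rect_edges = {frozenset({(i, j), (i, j2)}), frozenset({(i2, j), (i2, j2)}),
              frozenset({(i, j), (i2, j)}), frozenset({(i, j2), (i2, j2)})}
assert {grid_edge(t, set(A), set(B)) for t in k4_triples} == rect_edges
```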
Therefore, essentially the only difference between $g(n)$ and $f_3(2n, 4, 3)$ is that
the base hypergraph for $g(n)$ is $K^{(3)}(n,n)$ rather than $K_{2n}^{(3)}$. This observation allows
us to show that
\[g(n) \le f_3(2n, 4,3) \le 2 \lceil \log n \rceil^2 g(n).\]
In particular, this allows us to establish a subpolynomial upper bound for $f_3(n,4,3)$.
More generally, Shelah's work on the Hales--Jewett theorem requires an estimate for the function $f_{2d-1}(n, 2d, d+1)$. If the growth rate of these functions were bounded below by, say, $c'_d \log \log \log n$, then it might be possible to give a tower-type bound for Hales--Jewett numbers. However, we expect that this is not the case.
\begin{problem}
Show that for all $s$, there exist $d$ and $n_0$ such that
\[ f_{2d-1}\left(n, 2d, d+1\right) \leq \underbrace{\log \log \dots \log \log}_s n\]
for all $n \geq n_0$.
\end{problem}
We conclude this section with one further problem which arose in studying $f(n,p,q)$ and its generalisations. Mubayi's colouring $c_M$ was originally designed to have the property that the union of any two colour classes contains no $K_4$. However, in \cite{CFLS142}, it was shown to have the stronger property that the union of any two colour classes has chromatic number at most three. We suspect that this property can be generalised.
\begin{problem}
Let $p \ge 5$ be an integer. Does there exist an edge colouring of $K_n$ with $n^{o(1)}$ colours such that the union of every $p-1$ colour classes has chromatic number at most $p$?
\end{problem}
\noindent
For $p = 4$, Mubayi's colouring again has the desired property, though it is known that it cannot work for all $p$. However, it may be that the colourings used in the proof of Theorem~\ref{erdosgyar} suffice.
\subsubsection{The Erd\H{o}s--Rogers function}
Given an integer $s\geq 2$, a set of vertices $U$ in a graph $G$ is said to be {\em $s$-independent} if
$G[U]$ contains no copy of $K_s$. When $s=2$, this simply means that $U$ is an independent set in $G$.
We write $\alpha_s(G)$ for the order of the largest $s$-independent subset in a graph $G$.
The problem of estimating Ramsey numbers can be rephrased as a problem about determining the minimum independence number over all $K_t$-free graphs
with a given number of vertices. In 1962, Erd\H{o}s and Rogers \cite{ER} initiated the study of the more general question obtained by replacing the
notion of independence number with the $s$-independence number. Suppose $2 \leq s \leq t <n$ are integers. Erd\H{o}s and Rogers defined
$$f_{s,t}(n)=\min \alpha_s(G),$$
where the minimum is taken over all $K_t$-free graphs $G$ on $n$ vertices.
In particular, for $s=2$, we have $f_{2,t}(n)<\ell$ if and only if the Ramsey number $r(\ell,t)$ satisfies
$r(\ell,t)>n$.
The first lower bound for $f_{s,t}$ was given by Bollob\'as and Hind \cite{BH}, who proved that $f_{s,t}(n) \geq n^{1/(t-s+1)}$. Their proof is by induction on $t$. When $t = s$, the bound holds trivially, since the graph contains no $K_s$. Now suppose that $G$ is an $n$-vertex graph with no $K_t$ and let $v$ be a vertex of maximum degree. If $|N(v)| \geq n^{\frac{t-s}{t-s+1}}$, then we can apply induction to the subgraph of $G$ induced by this set, since this subgraph is clearly $K_{t-1}$-free. Otherwise, by Brooks' theorem, the independence number of $G$ is at least $n/|N(v)| \geq n^{1/(t-s+1)}$. The bound in this argument can be improved by a polylogarithmic factor using a result of Shearer \cite{Sh} on the independence number of $K_t$-free graphs. As was pointed out by Bollob\'as and Hind \cite{BH}, this proof usually finds an independent set rather than an $s$-independent set. Another approach, which better utilises the fact that we are looking for an $s$-independent set, was proposed by Sudakov \cite{Su1}.
To illustrate this approach, we show that $f_{3,5}(n) \geq c n^{2/5}$ for some constant $c>0$, improving on the bound of $n^{1/3}$ given above. Let $G$ be a $K_5$-free graph on $n$ vertices and assume that it does not contain a $3$-independent subset of order $n^{2/5}$. For every edge $(u,v)$ of $G$, the set of common neighbours $N(u,v)$ is triangle-free. Therefore, we may assume that it has order less than $n^{2/5}$. Moreover, for any vertex $v$, its set of neighbours $N(v)$ is $K_4$-free. But, by the Bollob\'as--Hind bound, $N(v)$ contains a triangle-free subset of order $|N(v)|^{1/2}$. Therefore, if there is a vertex $v$ of degree at least $n^{4/5}$, there will be a triangle-free subset of order $|N(v)|^{1/2} \geq n^{2/5}$. Hence, we may assume that all degrees in $G$ are less than $n^{4/5}$. This implies that every vertex in $G$ is contained in at most $n^{4/5} \cdot n^{2/5}=n^{6/5}$ triangles.
We now consider the auxiliary $3$-uniform hypergraph $H$ on the same vertex set as $G$ whose edges are the triangles in $G$. Crucially, an independent set in $H$ is a $3$-independent set in $G$. The number $m$ of edges in $H$ satisfies $m \leq n \cdot n^{6/5}=n^{11/5}$. Therefore, using a well-known bound on the independence number of $3$-uniform hypergraphs, we conclude that $\alpha_3(G)=\alpha(H) \geq c n^{3/2}/\sqrt{m} \geq c n^{2/5}$. This bound can be further improved by combining the above argument with a variant of dependent random choice. Using this approach, Sudakov \cite{Su2} showed that $f_{3,5}(n)$ is at least $n^{5/12}$ times a polylogarithmic factor. For $t > s + 1$, he also proved that $f_{s,t}(n) = \Omega(n^{a_t})$, where $a_t(s)$ is roughly $s/2t+O_s(t^{-2})$. More precisely, he showed the following.
\begin{theorem}
For any $s \geq 3$ and $t > s+1$, $f_{s,t}(n) = \Omega(n^{a_t})$, where
$$\frac{1}{a_t}=1+\frac{1}{s-1}\sum_{i=1}^{s-1} \frac{1}{a_{t-i}}, \quad a_{s+1}=\frac{3s-4}{5s-6} \quad\mbox{and}\quad a_3=\cdots=a_s=1.$$
\end{theorem}
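For example, when $s = 3$, this recursion can be unwound by hand: $a_4 = \frac{3 \cdot 3 - 4}{5 \cdot 3 - 6} = \frac{5}{9}$ and then
$$\frac{1}{a_5} = 1 + \frac{1}{2}\left(\frac{1}{a_4} + \frac{1}{a_3}\right) = 1 + \frac{1}{2}\left(\frac{9}{5} + 1\right) = \frac{12}{5},$$
recovering the exponent $5/12$ in the bound for $f_{3,5}(n)$ mentioned above.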
The study of upper bounds for $f_{s,t}(n)$ goes back to the original paper of Erd\H{o}s and Rogers \cite{ER}. They considered the case where $s$ and $t=s+1$ are fixed and
$n$ tends to infinity, proving that there exists a positive constant $\epsilon(s)$ such that $f_{s,s+1}(n) \leq n^{1-\epsilon(s)}$. That is, they found a $K_{s+1}$-free graph of order $n$ such that every induced subgraph of order $n^{1-\epsilon(s)}$ contains a copy of $K_s$. About thirty years later, Bollob\'as and Hind \cite{BH} improved the estimate for $\epsilon(s)$. This bound was then improved again by Krivelevich \cite{Kr95}, who showed that
$$f_{s,t}(n) \leq c n^{\frac{s}{t+1}}(\log n)^{\frac{1}{s-1}},$$
where $c$ is some constant depending only on $s$ and $t$. Note that this upper bound is roughly the square of the lower bound from \cite{Su2}. We also note that all of the constructions mentioned above rely on applications of the probabilistic method, but explicit constructions showing that $f_{s, s+1}(n) \leq n^{1 - \epsilon(s)}$ were obtained by Alon and Krivelevich \cite{AK}.
One of the most intriguing problems in this area concerned the case where $t=s+1$. For many years, the best bounds for this question were very far apart, the lower bound being
roughly $n^{1/2}$ and the upper bound being $n^{1-\epsilon(s)}$, with $\epsilon(s)$ tending to zero as $s$ tends to infinity. Both Krivelevich~\cite{Kr95} and Sudakov~\cite{Su2} asked whether
the upper bound is closer to the correct order of magnitude for $f_{s,s+1}(n)$. Quite surprisingly, this was recently disproved in a sequence of three papers.
First, Dudek and R\"odl \cite{DR} proved that $f_{s,s+1}(n) = O(n^{2/3})$. Then Wolfovitz \cite{W}, building on their work but adding further ideas, managed to show that the lower bound for $f_{3,4}(n)$ is correct up to logarithmic factors. Finally, Dudek, Retter and R\"odl~\cite{DRR}, extending the approach from~\cite{W}, proved that $f_{s,s+1}(n)=n^{1/2+o(1)}$. More explicitly, they proved the following.
\begin{theorem}
For every $s \geq 3$, there exists a constant $c_s$ such that
\[f_{s, s+1}(n) \leq c_s (\log n)^{4 s^2} \sqrt{n}.\]
\end{theorem}
\noindent
It would be interesting to close the gap between this and the best lower bound, observed by Dudek and Mubayi \cite{DM14}, which stands at
\[f_{s, s+1}(n) \geq c'_s \left(\frac{n \log n}{\log \log n}\right)^{1/2}.\]
We will now sketch the neat construction from \cite{DR} showing that $f_{3,4}(n) = O(n^{2/3})$. Let $p$ be a prime,
$n=p^3+p^2+p+1$ and let $L_1,\ldots, L_n$ be the lines of a generalised quadrangle. The reader not familiar with this concept may consult \cite{GR01}.
For our purposes, it will be sufficient to note that this is a collection of points and lines with the following two properties:
\begin{itemize}
\item
every line is a subset of $[n]$ of order $p+1$ and every vertex in $[n]$ lies on $p+1$ lines;
\item
any two vertices belong to at most one line and every three lines with non-empty pairwise intersection have one point in common (i.e., every triangle of lines is degenerate).
\end{itemize}
We construct a random graph $G$ on $[n]$ as follows. Partition the vertex set of every line $L_i$ into three parts $L_{i,j}, 1 \leq j\leq 3$, uniformly at random. Take a complete $3$-partite graph on these parts and let $G$ be the union of all such graphs for $1 \leq i \leq n$. Note that the second property above implies that the vertices of every triangle in $G$ belong to some line. This easily implies that $G$ is $K_4$-free. Consider now an arbitrary subset $X$ of $G$ of order $6p^2$ and let $x_i=|L_i \cap X|$. If $X$ contains no triangles, then, for every $i$, there is an index $j$ such that the set $L_{i,j} \cap X$ is empty. The probability that this happens for a fixed $i$ is at most $3(2/3)^{x_i}$. Therefore, since these events are independent for different lines, the probability that $X$ is triangle-free is at most $3^n (2/3)^{\sum x_i}$. Since every vertex lies on $p+1$ lines, we have that $\sum x_i=(p+1)|X|>5n$. Since the number of subsets $X$ is at most $2^n$ and $2^n 3^n (2/3)^{5n} \ll 1$, we conclude that with probability close to one every subset of $G$ of order at least $10n^{2/3} > 6 p^2$ contains a triangle.
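The final estimate is a one-line computation: since $\sum x_i > 5n$,
$$2^n\, 3^n \left(\frac{2}{3}\right)^{5n} = \left(6 \cdot \frac{32}{243}\right)^{n} = \left(\frac{64}{81}\right)^{n},$$
which tends to zero, so a union bound over all choices of $X$ gives the claim.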
There are many open problems remaining regarding the Erd\H{o}s--Rogers function. For example, it follows from the work of Sudakov \cite{Su2} and Dudek, Retter and R\"odl \cite{DRR} that for any $\epsilon > 0$ there exists $s_0$ such that if $s \geq s_0$, then
\[c' n^{1/2 - \epsilon} \leq f_{s, s+2}(n) \leq c n^{1/2}\]
for some positive constants $c'$ and $c$. It remains to decide if the upper bound can be improved for fixed values of $s$. The following question was posed by Dudek, Retter and R\"odl \cite{DRR}.
\begin{problem}
For any $s \geq 3$, is it true that $f_{s, s+2}(n) = o(\sqrt{n})$?
\end{problem}
The hypergraph generalisation of the Erd\H{o}s--Rogers function was first studied by Dudek and Mubayi \cite{DM14}. For $s \leq t$, let $f_{s,t}^{(k)}(n)$ be given by
\[f_{s,t}^{(k)}(n) = \min\{\max\{|W|: W \subseteq V(G) \mbox{ and $G[W]$ contains no $K_s^{(k)}$}\}\},\]
where the minimum is taken over all $K_t^{(k)}$-free $k$-uniform hypergraphs $G$ on $n$ vertices. Dudek and Mubayi proved the following.
\begin{theorem}
For any $s \geq 3$ and $t \geq s + 1$,
\[f_{s-1,t-1}(\lfloor \sqrt{\log n} \rfloor) \leq f_{s, t}^{(3)}(n) \leq c_s \log n.\]
\end{theorem}
\noindent
In particular, for $t = s+1$, this gives constants $c_1$ and $c_2$ depending only on $s$ such that
\[c_1 (\log n)^{1/4} \left(\frac{\log\log n}{\log\log\log n}\right)^{1/2} \leq f_{s, s+1}^{(3)}(n) \leq c_2 \log n.\]
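To see where the left-hand side comes from, substitute the Dudek--Mubayi lower bound $f_{s-1,s}(m) \geq c'_{s-1} (m \log m/\log \log m)^{1/2}$ (valid for $s \geq 4$; a bound of the same form for $f_{2,3}$, via Shearer's theorem, handles $s = 3$) with $m = \lfloor \sqrt{\log n} \rfloor$. Up to constant factors, $\log m = \frac{1}{2} \log \log n$ and $\log \log m \approx \log \log \log n$, so
$$f_{s-1,s}\left(\lfloor \sqrt{\log n} \rfloor\right) \geq c \left(\frac{(\log n)^{1/2} \log \log n}{\log \log \log n}\right)^{1/2} = c\, (\log n)^{1/4} \left(\frac{\log\log n}{\log\log\log n}\right)^{1/2}.$$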
The lower bound was subsequently improved by the authors \cite{CFS14A}, using ideas on hypergraph Ramsey numbers developed in \cite{CFS10}.
\begin{theorem}
For any natural number $s \geq 3$, there exists a positive constant $c$ such that
\[f_{s, s+1}^{(3)} (n) \geq c \left(\frac{\log n}{\log \log \log n}\right)^{1/3}.\]
\end{theorem}
This result easily extends to higher uniformities to give $f_{s, s+1}^{(k)} (n) \geq (\log_{(k-2)} n)^{1/3 - o(1)}$,
where $\log_{(0)} x = x$ and $\log_{(i+1)} x = \log (\log_{(i)} x)$. This improves an analogous result of Dudek and Mubayi~\cite{DM14} with a $1/4$ in the exponent but remains far from their upper bound $f_{s, s+1}^{(k)} (n) \leq c_{s,k} (\log n)^{1/(k-2)}$. It would be interesting to close the gap between the upper and lower bounds. In particular, we have the following problem.
\begin{problem}
Is it the case that
\[f^{(4)}_{s, s+1}(n) = (\log n)^{o(1)}?\]
\end{problem}
\subsection{Monochromatic cliques with additional structure}
There are a number of variants of the classical Ramsey question which ask for further structure on the monochromatic cliques being found. The classic example of such a theorem is the Paris--Harrington theorem \cite{PH77}, which says that for any $t$, $k$ and $q$, there exists an $N$ such that any $q$-colouring of the edges of the complete $k$-uniform hypergraph on the set $\{1, 2, \dots, N\}$ contains a monochromatic $K_s^{(k)}$ with vertices $a_1 < \dots < a_s$ for which $s \geq \max\{t, a_1\}$. That is, the clique is at least as large as its minimal element. This theorem, which follows easily from a compactness argument, is famous for being a natural statement which is not provable in Peano arithmetic (though we note that for graphs and two colours, the function is quite well behaved and grows as a double exponential in $t$~\cite{Mi85}). In this section, we will discuss two decidedly less pathological strengthenings of Ramsey's theorem.
\subsubsection{Weighted cliques}
In the early 1980s, Erd\H{o}s considered the following variant of Ramsey's theorem. For a finite set $S$ of integers greater
than one, define its weight $w(S)$ by
$$w(S)=\sum_{s \in S} \frac{1}{\log s},$$
where, as usual, $\log$ is assumed to be base $2$. For a red/blue-colouring $c$ of the edges of the complete graph on $[2,n]=\{2,\ldots,n\}$, let
$f(c)$ be the maximum weight $w(S)$ taken over all sets $S \subset [2,n]$ which form a monochromatic clique in the colouring $c$. For each integer $n \geq 2$, let $f(n)$ be the minimum of $f(c)$ over all red/blue-colourings $c$ of the edges of the complete graph on $\{2,\ldots,n\}$.
Erd\H{o}s \cite{E812} conjectured that $f(n)$ tends to infinity, choosing this particular weight function because the standard bound $r(t) \leq 2^{2t}$ only allows one to show that $f(n) \geq \frac{\log n}{2} \cdot \frac{1}{\log n}=\frac{1}{2}$. Erd\H{o}s' conjecture was verified by R\"odl \cite{R03}, who proved that there exist positive constants $c$ and $c'$ such that
\[c' \frac{\log \log \log \log n}{\log \log \log \log \log n} \leq f(n) \leq c \log \log \log n.\]
To prove R\"odl's upper bound, we cover the interval $[2,n]$ by $s =\lfloor \log \log n \rfloor + 1$ intervals, where the $i$th interval is $[2^{2^{i-1}},2^{2^i})$. Using the bound $r(t) \geq 2^{t/2}$, we can colour the edges of the complete graph on the $i$th interval so that the maximum monochromatic clique in this interval has order $2^{i+1}$. Since the log of any element in this interval is at least $2^{i-1}$, the maximum weight of any monochromatic clique is at most $4$. If we again use the lower bound on $r(t)$, we see that there is a red/blue-colouring of the edges of the complete graph on vertex set $\{1, 2, \dots, s\}$ whose largest monochromatic clique is of order $O(\log s)$. Colour the edges of the complete bipartite graph between the $i$th and $j$th interval by the colour of edge $(i,j)$ in this colouring. We get a red/blue-colouring of the edges of the complete graph on $[2,n]$ such that any monochromatic clique in this colouring has a non-empty intersection with at most $O(\log s)$ intervals. Since every interval can contribute at most $4$ to the weight of this clique, the total weight of any monochromatic clique is $O(\log s)=O(\log \log \log n)$.
Answering a further question of Erd\H{o}s, the authors \cite{CFS13} showed that this upper bound is tight up to a constant factor. The key idea behind the proof is to try to force the type of situation that arises in the upper bound construction. In practice, this means that we split our graph into intervals $I_1, \dots, I_s$ of the form $[2^{2^{i-1}},2^{2^i})$ and, for each $i = 1, \dots, s$, we find a subset $I'_i \subset I_i$ such that $I'_i$ is the union of a red and a blue clique and all edges between $I'_i$ and $I'_j$ are monochromatic for each $1 \leq i < j \leq s$. In broad outline, this was also the method used by R\"odl to prove his lower bound but our proof uses two additional ingredients, dependent random choice and a certain weighted version of Ramsey's theorem.
\begin{theorem}\label{erdosrodl}
For $n$ sufficiently large, every two-colouring of the edges of the complete graph on the interval $\{2,\ldots,n\}$ contains a monochromatic clique with vertex set $S$ such that $$\sum_{s \in S} \frac{1}{\log s} \geq 2^{-8} \log \log \log n.$$ Hence, $f(n)=\Theta(\log \log \log n)$.
\end{theorem}
It also makes sense to consider the function $f_q(n)$, defined now as the minimum over all $q$-colourings of the edges of the complete graph on $\{2, 3, \dots, n\}$ of the maximum weight of a monochromatic clique. However, as observed by R\"odl, the analogue of Erd\H{o}s' conjecture for three colours does not hold. To see this, we again cover the interval $[2,n]$ by $s = \lfloor \log \log n \rfloor + 1$ intervals of the form $[2^{2^{i-1}},2^{2^i})$. The edges inside these intervals are coloured red and blue as in the previous construction, while the edges between the intervals are coloured green. But then the maximum weight of any red or blue clique is at most $4$ and the maximum weight of any green clique is at most $\sum_{i\geq 1} 2^{-i+1} = 2$.
We may also ask whether there are other weight functions for which an analogue of R\"odl's result holds. If $w(i)$ is a weight function defined on all
positive integers $n \geq a$, we let $f(n,w)$ be the minimum over all red/blue-colourings of the edges of the complete graph on $[a, n]$ of the maximum weight of a monochromatic clique. In particular, if
$w_1(i)=1/\log i$ and $a = 2$, then $f(n,w_1)=f(n)$.
The next interesting case is when $w_2(i) = 1/\log i \log \log \log i$, since, for any function $u(i)$ which tends to infinity with $i$, Theorem \ref{erdosrodl}
implies that $f(n, u') \rightarrow \infty$, where $u'(i) = u(i)/\log i \log \log \log i$. To derive a lower bound for $f(n, w_2)$, we colour the interval $I_i = [2^{2^{i-1}}, 2^{2^i})$ so that the largest clique has order at most $2^{i+1}$. Then the contribution of the $i$th interval will be $O(1/\log i)$. If we now treat $I_i$ as though it were a vertex of weight $1/\log i$, we may blow up R\"odl's colouring and colour monochromatically between the $I_i$ so that the weight of any monochromatic clique is $O(\log \log \log s) =
O(\log \log \log \log \log n)$. This bound is also sharp \cite{CFS13}, that is, $f(n, w_2) = \Theta(\log \log \log \log \log n)$.
More generally, we have the following theorem, which determines the boundary below which $f(n, \cdot)$ converges. Here $\log_{(i)} (x)$ is again the iterated logarithm given by $\log_{(0)} x = x$ and $\log_{(i+1)} x = \log (\log_{(i)} x)$.
\begin{theorem}
Let $w_s(i)=1/\prod_{j=1}^s \log_{(2j-1)} i$. Then $f(n,w_s) = \Theta(\log_{(2s+1)} n)$. However, if
$w'_s(i)=w_s(i)/(\log_{(2s-1)} i)^{\epsilon}$ for any fixed $\epsilon > 0$, $f(n,w'_s)$ converges.
\end{theorem}
\subsubsection{Cliques of fixed order type}
Motivated by an application in model theory, V\"a\"an\"anen \cite{NeVa} asked whether, for any positive integers $t$ and $q$ and any permutation $\pi$ of $[t-1] = \{1, 2, \dots, t-1\}$, there is a positive integer $R$ such that every $q$-colouring of the edges of the complete graph on vertex set $[R]$ contains a monochromatic $K_t$ with vertices $a_1<\dots<a_t$ satisfying
$$a_{\pi(1)+1}-a_{\pi(1)}>a_{\pi(2)+1}-a_{\pi(2)}>\dots>a_{\pi(t-1)+1}-a_{\pi(t-1)}.$$
That is, we want the set of differences $\{a_{i+1} - a_i: 1 \leq i \leq t - 1\}$ to have a prescribed order. The least such positive integer $R$ is denoted by $R_{\pi}(t;q)$ and we let $R(t;q)=\max_{\pi} R_{\pi}(t;q)$, where the maximum is over all permutations $\pi$ of $[t-1]$.
V\"a\"an\"anen's question was answered positively by Alon \cite{NeVa} and, independently, by Erd\H{o}s, Hajnal and Pach \cite{EHP97}. Alon's proof uses the Gallai--Witt theorem and so gives a weak bound on $R(t;q)$, whereas the proof of Erd\H{o}s, Hajnal and Pach uses a compactness argument and gives no bound at all. Later, Alon, Shelah and Stacey all found proofs giving tower-type bounds for $R(t;q)$, but these were never published, since a double-exponential upper bound $R(t;q) \leq 2^{(q(t+1)^3)^{qt}}$ was then found by Shelah \cite{Sh97}.
A natural conjecture, made by Alon (see \cite{Sh97}), is that for any $q$ there exists a constant $c_q$ such that $R(t;q) \leq 2^{c_q t}$. For the trivial permutation, this was confirmed by Alon and Spencer. For a general permutation, the best known bound, due to the authors \cite{CFS13}, is as follows. Once again, dependent random choice plays a key role in the proof.
\begin{theorem} \label{shelahorder}
For any positive integers $t$ and $q$ and any permutation $\pi$ of $[t-1]$, every $q$-colouring of the edges of the complete graph on vertex set $[R]$ with $R=2^{t^{20q}}$ contains a monochromatic $K_t$ with vertices $a_1<\dots<a_t$ satisfying
$$a_{\pi(1)+1}-a_{\pi(1)}>a_{\pi(2)+1}-a_{\pi(2)}>\dots>a_{\pi(t-1)+1}-a_{\pi(t-1)}.$$
That is, $R(t;q) \leq 2^{t^{20q}}$.
\end{theorem}
There are several variants of V\"a\"an\"anen's question which have negative answers. For example, the natural hypergraph analogue fails. To see this, we colour an edge $(a_1,a_2, a_3)$ with $a_1 < a_2 < a_3$ red if $a_3-a_2 \geq a_2-a_1$ and blue otherwise. Hence, if the subgraph with vertices $a_1 < \dots < a_t$ is monochromatic, the sequence $a_2-a_1, \dots, a_t-a_{t-1}$ must be monotone increasing or decreasing, depending on whether the subgraph is coloured red or blue. In particular, no monochromatic copy can realise a prescribed order of differences corresponding to a permutation that is neither increasing nor decreasing.
\subsection{Ordered Ramsey numbers}
An {\it ordered graph} on $n$ vertices is a graph whose vertices have been labelled with the vertex set $[n] = \{1, 2, \dots, n\}$. We say that an ordered graph $G$ on vertex set $[N]$ contains another ordered graph $H$ on vertex set $[n]$ if there exists a map $\phi: [n] \rightarrow [N]$ such that $\phi(i) < \phi(j)$ for all $i < j$ and $(\phi(i), \phi(j))$ is an edge of $G$ whenever $(i, j)$ is an edge of $H$. Given an ordered graph $H$, we define the {\it ordered Ramsey number} $r_<(H)$ to be the smallest $N$ such that every two-colouring of the complete graph on vertex set $[N]$ contains a monochromatic ordered copy of $H$.
As a first observation, we note the elementary inequalities,
\[r(H) \leq r_<(H) \leq r(K_{v(H)}).\]
In particular, $r_<(K_t) = r(K_t)$. However, for sparse graphs, the ordered Ramsey number may differ substantially from the usual Ramsey number. This was first observed by Conlon, Fox, Lee and Sudakov \cite{CFLS143} and by Balko, Cibulka, Kr\'al and Kyn\v cl \cite{BCKK14}, who proved the following result.
\begin{theorem} \label{order:matchlower}
There exists a positive constant $c$ such that, for every even $n$, there exists an ordered matching $M$ on $n$ vertices for which
\[r_<(M) \geq n^{c \log n/\log \log n}.\]
\end{theorem}
\noindent
In \cite{CFLS143}, it was proved that this lower bound holds for almost all orderings of a matching. This differs considerably from the usual Ramsey number, where it is trivial to show that $r(M)$ is linear in the number of vertices. It is also close to best possible, since, for all matchings $M$, $r_<(M) \leq n^{\lceil \log n \rceil}$.
For general graphs, it was proved in \cite{CFLS143} that the ordered Ramsey number cannot be too much larger than the usual Ramsey number. Recall, from Section \ref{sec:sparse}, that a graph is $d$-degenerate if there is an ordering of the vertices, say $v_1, v_2, \dots, v_n$, such that every vertex $v_i$ has at most $d$ neighbours $v_j$ preceding it in the ordering, that is, such that $j < i$. We stress that in the following theorems the degenerate ordering need not agree with the given ordering.
\begin{theorem} \label{order:general}
There exists a constant $c$ such that for any ordered graph $H$ on $n$ vertices with degeneracy $d$,
\[r_<(H) \leq r(H)^{c \gamma(H)},\]
where $\gamma(H) = \min\{\log^2(2n/d), d \log(2n/d)\}$.
\end{theorem}
An important role in ordered Ramsey theory is played by the concept of interval chromatic number. The {\it interval chromatic number} $\chi_<(H)$ of an ordered graph $H$ is defined to be the minimum number of intervals into which the vertex set of $H$ may be partitioned so that each interval forms an independent set in the graph. This is similar to the usual chromatic number but with arbitrary vertex sets replaced by intervals. For an ordered graph $H$ with bounded degeneracy and bounded interval chromatic number, the ordered Ramsey number is at most polynomial in the number of vertices. This is the content of the following theorem from \cite{CFLS143} (we note that a weaker version was also proved in \cite{BCKK14}).
\begin{theorem} \label{order:interval}
There exists a constant $c$ such that any ordered graph $H$ on $n$ vertices with degeneracy at most $d$ and interval chromatic number at most $\chi$ satisfies
\[r_<(H) \leq n^{c d \log \chi}.\]
\end{theorem}
If $H$ is an ordered graph with vertices $\{1, 2, \dots, n\}$, we define the {\it bandwidth} of $H$ to be the smallest $\ell$ such that $|i - j| \leq \ell$ for all edges $i j \in E(H)$. Answering a question of Lee and the authors \cite{CFLS143}, Balko, Cibulka, Kr\'al and Kyn\v cl \cite{BCKK14} showed that the ordered Ramsey number of ordered graphs with bounded bandwidth is at most polynomial in the number of vertices.
\begin{theorem}
For any positive integer $\ell$, there exists a constant $c_\ell$ such that any ordered graph on $n$ vertices with bandwidth at most $\ell$ satisfies
\[r_<(H) \leq n^{c_\ell}.\]
\end{theorem}
\noindent
In \cite{BCKK14}, it is shown that for $n$ sufficiently large in terms of $\ell$ one may take $c_\ell = O(\ell)$. It is plausible that the correct value of $c_\ell$ is significantly smaller than this.
A large number of questions about ordered Ramsey numbers remain open. Here we will discuss just one such problem, referring the reader to \cite{CFLS143} for a more complete discussion. As usual, we define $r_<(G, H)$ to be the smallest $N$ such that any red/blue-colouring of the edges of the complete graph on $[N]$ contains a red ordered copy of $G$ or a blue ordered copy of $H$. Given an ordered matching $M$ on $n$ vertices, it is easy to see that
\[r_<(K_3, M) \leq r(3, n) = O\left(\frac{n^2}{\log n}\right).\]
In the other direction, it is known \cite{CFLS143} that there exists a positive constant $c$ such that, for all even $n$, there exists an ordered matching $M$ on $n$ vertices for which $r_<(K_3, M) \geq c(\frac{n}{\log n})^{4/3}$. It remains to determine which bound is closer to the truth. In particular, we have the following problem.
\begin{problem}
Does there exist an $\epsilon > 0$ such that for any ordered matching $M$ on $n$ vertices $r_<(K_3, M) = O(n^{2 - \epsilon})$?
\end{problem}
Finally, we note that for hypergraphs the difference between Ramsey numbers and their ordered counterparts is even more pronounced. If we write $P_n^{(k)}$ for the monotone $k$-uniform tight path on $\{1, 2, \dots, n\}$, where $\{i, i+1, \dots, i+ k -1\}$ is an edge for $1 \leq i \leq n - k + 1$, then results of Fox, Pach, Sudakov and Suk \cite{FPSS} and Moshkovitz and Shapira \cite{MS14} (see also~\cite{MSW15}) show that for $k \geq 3$ the ordered Ramsey number $r_<(P_n^{(k)})$ grows as a $(k-2)$-fold exponential in $n$. This is in stark contrast with the unordered problem, where $r(P_n^{(k)})$ is known to grow linearly in $n$ for all $k$.
\section{Concluding remarks}
Given the length of this survey, it is perhaps unnecessary to add any further remarks. However, we would like to highlight two further problems which we believe to be of signal importance but which did not fit neatly in any of the sections above.
The first problem we wish to mention, proposed by Erd\H{o}s, Fajtlowicz and Staton \cite{CG98, E92}, asks for an estimate on the order of the largest regular induced subgraph in a graph on $n$ vertices. Ramsey's theorem tells us that any graph on $n$ vertices contains a clique or an independent set of order at least $\frac{1}{2} \log n$. Since cliques and independent sets are both regular, this shows that there is always a regular induced subgraph of order at least $\frac{1}{2} \log n$. The infamous conjecture of Erd\H{o}s, Fajtlowicz and Staton, which we now state, asks whether this simple bound can be improved.
\begin{conjecture}
Any graph on $n$ vertices contains a regular induced subgraph with $\omega(\log n)$ vertices.
\end{conjecture}
By using an inhomogeneous random graph, Bollob\'as showed that for any $\epsilon > 0$ and $n$ sufficiently large depending on $\epsilon$ there are graphs on $n$ vertices for which the largest regular induced subgraph has order at most $n^{1/2 + \epsilon}$. This result was sharpened slightly by Alon, Krivelevich and Sudakov \cite{AKS08}, who showed that there is a constant $c$ and graphs on $n$ vertices with no regular induced subgraph of order at least $c n^{1/2} \log^{1/4} n$. Any polynomial improvement on this upper bound would be of considerable interest.
The second problem is that of constructing explicit Ramsey graphs. Erd\H{o}s' famous probabilistic lower bound argument, discussed at length in Section~\ref{sec:completegraphs}, shows that almost all colourings of the complete graph on $\sqrt{2}^t$ vertices do not contain a monochromatic copy of $K_t$. While this proves that the Ramsey number $r(t)$ is greater than $\sqrt{2}^t$, it does not give any constructive procedure for producing a colouring which exhibits this fact.
For many years, the best explicit example of a Ramsey graph was the following remarkable construction due to Frankl and Wilson \cite{FW81}. Let $p$ be a prime and let $r = p^2 - 1$. Let $G$ be the graph whose vertices are all subsets of order $r$ from the set $[m] = \{1, 2, \dots, m\}$ and where two vertices are adjacent if and only if their corresponding sets have intersection of size congruent to $-1$ (mod $p$). This is a graph with $\binom{m}{r}$ vertices and may be shown to contain no clique or independent set of order larger than $\binom{m}{p-1}$. Taking $m = p^3$ and $t = \binom{p^3}{p-1}$, this gives a graph on $t^{c \log t/\log \log t}$ vertices with no clique or independent set of order $t$.
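A back-of-envelope check of the final claim: with $m = p^3$, $r = p^2 - 1$ and $t = \binom{p^3}{p-1}$, we have $\log t = \Theta(p \log p)$ and $\log \log t = \Theta(\log p)$, while the number of vertices satisfies
$$\log \binom{m}{r} = \log \binom{p^3}{p^2-1} = \Theta(p^2 \log p) = \Theta\left(\frac{(\log t)^2}{\log \log t}\right),$$
which is precisely the form $t^{c \log t/\log \log t}$.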
Recently, Barak, Rao, Shaltiel and Wigderson \cite{BRSW12} found a construction which improves on the Frankl--Wilson bound, giving graphs on
\[2^{2^{(\log \log t)^{1 + \epsilon}}}\]
vertices with no clique or independent set of order $t$, where $\epsilon > 0$ is a fixed constant. Unfortunately, their construction does not have any simple description. Instead, it is constructive in the sense that given the labels of any two vertices in the graph, it is possible to decide whether they are connected in polynomial time. It would be very interesting to know whether the same bound, or any significant improvement over the Frankl--Wilson bound, could be achieved by graphs with a simpler description. It still seems that we are a long way from resolving Erd\H{o}s' problem~\cite{CG98} of constructing explicit graphs exhibiting $r(t) > (1 + \epsilon)^t$, but for those who do not believe that hard work is its own reward, Erd\H{o}s has offered the princely sum of \$100 as an enticement.
\vspace{3mm}
\noindent
{\bf Acknowledgements.} The authors would like to thank the anonymous referee for a number of useful comments.
\section{Introduction}
The usual way (though not Henkin's original proof \cite{Hen49,Hen50})
of proving the completeness theorem for second-order logic is to
deduce it from the completeness theorem for first-order multi-sorted
logic \cite{Fef74}. There is clearly a trivial translation from second-order
logic to first-order multi-sorted
logic, obtained by associating one sort to first-order objects and, for each $n
\in {\mathbb N}$, one sort to predicates of arity $n$.
Another way (due to Van Dalen \cite{VDal94}) is to deduce it from the completeness theorem for
first-order mono-sorted logic. Van Dalen's method is to associate a first-order variable $x$ to each second-order
variable $X$ of arity $n$, and to encode the atomic formula $X(x_1,\dots,x_n)$
by $\Ap_n(x,x_1,\dots,x_n)$, where $\Ap_n$ is a relation
symbol of arity $n+1$. This coding is then extended to all
formulas; we write it $F \mapsto F^*$. However, to allow the translation
between second-order proofs and first-order proofs, one adds some
axioms to discriminate between first- and second-order objects.
The critical point is the translation of quantifications:
\begin{itemize}
\item For first-order quantification we define
$(\forall x\, F)^* = \forall x (v(x) \rightarrow F^*)$ where $v$ is a new
predicate constant.
\item For second-order quantification of arity $n$ we define
$(\forall X^n\, F)^* = \forall x (V_n(x) \rightarrow F^*)$ where $V_n$ is a new
predicate constant.
\end{itemize}
Then we add axioms relating $v$, $V_n$ and $\Ap_n$ such as
$\forall x\forall y (\Ap_1(x,y) \rightarrow V_1(x) \et v(y))$.
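As a small worked example (assuming the existential quantifiers are handled by the dual clauses $(\exists x\, F)^* = \exists x\, (v(x) \et F^*)$ and $(\exists X^n\, F)^* = \exists x\, (V_n(x) \et F^*)$, which are not spelled out above), the second-order formula $\forall X^1\, \exists x_1\, X(x_1)$ is translated to
$$\forall x\, (V_1(x) \rightarrow \exists x_1\, (v(x_1) \et \Ap_1(x,x_1))).$$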
The problem is that this translation is not surjective. So it is
not immediate to prove that if $F^*$ is provable in first-order logic
then $F$ is provable in second-order logic, because the formulas
appearing in the proof of $F^*$ are not necessarily of the shape
$G^*$. It is not even clear that the proof in \cite{VDal94}, which is
only sketched, can be completed into a correct proof (at least the
authors do not know how to complete it). Maybe there is a solution
using the fact that subformulas of $F^*$ are nearly of the shape $G^*$,
and one could use this in a direct, but very tedious, proof by
induction on the proof of $F^*$ using the subformula property, which is a
strong result.
Our solution is to simplify Van Dalen's translation $F \mapsto F^*$
from second-order logic to first-order logic. The novelty of this paper is
to replace Van Dalen's axioms and extra predicate constants by a
coding $F \mapsto \rev{F}$ from first-order logic to second-order logic such
that $\rev{F^*}$ and $F$ are logically equivalent. To achieve this, we
consider that in first-order logic the same variable may have
different meanings (in the semantics) depending on its position in
atomic formulas. Thus, we can translate any first-order formula back
to a second-order formula.
Using this method we can also deduce a definition of Kripke models
\cite{Kri65} for second-order intuitionistic logic and easily get a
completeness theorem. These models are similar to Prawitz's
second-order Beth models \cite{Pra70,Bet56}.
This was not at all clear with Van Dalen's method (as we do not know how
to end his proof), especially if we need classical absurdity to use the extra
axioms. We also give some simple examples showing that despite a
complex definition, computation is possible in these models.
\medskip
{\bf Acknowledgement.} We wish to thank both referees for their
comments, which helped a lot to improve the paper, and Christelle
Favre for her assistance in checking certain proofs while she
was preparing her master's thesis.
\section{Coding}
\begin{definition}[second-order language]
Let $\Lang_2$, the language of second-order logic, be the following:
\begin{itemize}
\item The logical symbols $\faux$, $\fl$, $\et$, $\ou$, $\q$ and $\e$.
\item A countable set ${\cal V}$ of first-order variables:
$x_0, x_1, x_2, \dots$
\item A countable set $\Sigma$ of constant and function symbols (of
various arities): $a, b, f, g, h, \dots$
\item Using ${\cal V}$ and $\Sigma$ we construct the set ${\cal T}$ of
first-order terms: $t_1, t_2, \dots$
\item For each $n \in \mathbb{N}$, a countable set ${\cal V}_n$ of
second-order variables of arity $n$: $X_0^n, X_1^n, X_2^n, \dots$
\end{itemize}
\end{definition}
To simplify, we omit second-order constants (they can be replaced by
free variables).
\begin{definition}[first-order language]
Let $\Lang_1$, a particular language of first-order logic, be the following:
\begin{itemize}
\item The logical symbols $\faux$, $\fl$, $\et$, $\ou$, $\q$ and $\e$.
\item A countable set ${\cal V}$ of first-order variables:
$x_0, x_1, x_2, \dots$ (it is simpler to use the same set of first-order variables in $\Lang_1$ and $\Lang_2$).
\item A countable set $\Sigma$ of constant and function symbols (of
various arities): $a, b, f, g, h, \dots$ Here again we use the same
set as for $\Lang_2$.
\item For each $n \in \mathbb{N}$, a relation symbol $\Ap_n$ of arity $n+1$.
\end{itemize}
\end{definition}
\paragraph{Notations}
\begin{itemize}
\item We write $\Free(F)$ for the set of all free variables of a
formula $F$.
\item We write $F \equi G$ for $(F \fl G) \et (G \fl F)$.
\item We write $F[x:=t]$ for the first-order substitution of a term.
\item We write $F[X^n:=Y^n]$ for the second-order substitution of a variable.
\item We write $F[X^n:=\lambda x_1\dots x_n G]$ for the second-order
substitution of a formula.
\item We will use natural deduction \cite{Pra65,VDal94} both for second and first-order
logic, and we will write $\Gamma \vdash_k^n F$ with $k \in \{i , c \}$
(for intuitionistic or classical logic) and $n \in \{1 , 2 \}$ (for
first or second-order).
\end{itemize}
We have the following lemma:
\begin{lemma}\label{substitution}
If $\Gamma \vdash_k^n A$ then, for every substitution
$\sigma$, $\Gamma[\sigma] \vdash_k^n A[\sigma]$.
\end{lemma}
\begin{definition}[coding]
We choose for each $n \in \mathbb{N}$ a bijection $\phi_n$ from
${\cal V}_n$ to ${\cal V}$. The fact that it is a bijection for
each $n$ is the main point in our method.
Let $F$ be a second-order formula, we define a first-order formula $F^*$ by induction as
follows:
\begin{itemize}
\item $\faux^* = \faux$
\item $(X^n(t_1,\dots,t_n))^* = \Ap_n(\phi_n(X^n),t_1,\dots,t_n)$
\item $(A \Diamond B)^* = A^*\Diamond B^*$ where $\Diamond \in \{ \fl ,
\et, \ou \}$
\item $(Q x\, A)^* = Q y (A[x:=y])^*$ where $y \not \in \Free(A^*)$ and
$Q \in \{ \q, \e \}$
\item $(Q X^n\, A)^* = Q y (A[X^n:=Y^n])^*$ where $\phi_n(Y^n) = y$, $y
\not \in \Free(A^*)$ and $Q \in \{ \q, \e \}$
\end{itemize}
\end{definition}
\begin{remark} In the coding, the same free first-order variable may have
different meanings depending on its location in the translated formula
(this is not the case for bound variables).
\end{remark}
\begin{example} $(\q X (X(x) \fl X(y)))^* = \q z (\Ap_1(z,x) \fl
\Ap_1(z,y))$. This example illustrates why we need renaming: a clash
would occur if $\phi_1(X)$ were equal to $x$ or $y$ in $(X(x) \fl X(y))^*$.
\end{example}
\end{example}
\begin{remark} The mapping $F \mapsto F^*$ is not surjective: for
instance there is no antecedent for $\q x\,\Ap_1(x,x)$ or $\Ap_1(f(a),a)$.
\end{remark}
\begin{definition}[comprehension schemas]
The second-order comprehension schema $SC_2$ is the set of all closed formulas
$SC_2(G;x_1,\dots,x_n;\chi_1,\dots,\chi_m)$ where $\{x_1,\dots,x_n\}
\subset {\cal V}$ and $\Free(G) \subseteq \{x_1,\dots,x_n,\chi_1,\dots,\chi_m\}$ and
$$SC_2(G;x_1,\dots,x_n;\chi_1,\dots,\chi_m) = \q \chi_1\dots\q \chi_m \exists X^n \q x_1\dots\q x_n \left(G \equi
X^n(x_1,\dots,x_n)\right)$$
where $X^n \not \in \Free(G)$.
The first-order comprehension schema $SC_1$ is defined simply as
$SC_1 = SC_2^* = \{F^* ; F \in SC_2\}$.
\end{definition}
It is easy to show that each formula in $SC_2$ is provable in second-order logic.
\begin{remark}\label{remarkSC}
Let $F = X(x)$ where $\phi_1(X) = x$. We have:
\begin{itemize}
\item $SC_2(F;x;X) = \q X \e Y \q x (F \equi Y(x)) \in SC_2$.
\item $SC_2(F;x;X)^* = (\q X \e Y \q x (F \equi Y(x)))^* = \q z \e y \q x
(\Ap_1(z,x) \equi \Ap_1(y,x)) \in SC_1$.
\end{itemize}
It is easy to see that $(\q X \e Y \q x (F \equi Y(x)))^* =
\q z \e y \q x (F[X := Z]^* \equi \Ap_1(y,x))$ where $\phi_1(Z) = z
\not = x$.
In general we have the following result: for each second-order formula $G$
there is a variable substitution $\sigma$ such that
$$
\begin{array}{rcl}
SC_2(G;x_1,\dots,x_n;\chi_1,\dots,\chi_m)^* &=&
\left(\q \chi_1\dots\q \chi_m \exists X^n \q x_1\dots\q x_n \left(G \equi
X^n(x_1,\dots,x_n)\right)\right)^* \cr &=&
\q y_1\dots\q y_m \exists x \q x_1\dots\q x_n \left(G[\sigma]^* \equi
\Ap_n(x,x_1,\dots,x_n)\right). \cr
\end{array}
$$
\end{remark}
We can now show the following theorem (we will not use it):
\begin{theorem}\label{transproofun}
Let $\G$ be a second-order context and $A$ a second-order formula.
If $\G \vdash_{k}^{2} A$ then $\G^*,SC_1 \vdash_{k}^{1} A^*$ ($k
\in \{ i , c \}$).
\end{theorem}
\begin{proof}
By induction on the derivation of $\G \vdash_{k}^{2} A$, using $SC_1$,
remark \ref{remarkSC} and lemma \ref{substitution}
for the case of the second-order elimination of $\q$ and the
second-order introduction of $\e$. \qed
\end{proof}
\begin{definition}[reverse coding]
Let $F$ be a first-order formula, we define a second-order formula
$\rev{F}$ by induction as follows:
\begin{itemize}
\item $\rev{\faux} = \faux$
\item $\rev{\Ap_n(x,t_1,\dots,t_n)} = X^n(t_1,\dots,t_n)$ where $X^n= \phi^{-1}_n(x)$
\item $\rev{\Ap_n(t,t_1,\dots,t_n)} = \faux$ if $t$ is not a variable.
\item $\rev{(A \Diamond B)} = \rev{A} \Diamond \rev{B}$ where $\Diamond \in \{ \fl ,
\et, \ou \}$
\item $\rev{(Q x\,A)} = Q x Q X^{i_1} \dots Q X^{i_p} \rev{A}$ where
$Q \in \{ \q, \e \}$, $X^n= \phi^{-1}_n(x)$ for all $n \in \mathbb{N}$,
$i_1 < i_2 < \dots < i_p$ and $\{X^{i_1},\dots,X^{i_p}\} =
\{X^n ; n \in \mathbb{N}\} \cap \Free(\rev{A})$
\end{itemize}
\end{definition}
\begin{remark}
We don't need renaming in order to define $\rev{(Q x\,A)}$ since the $\phi_n$ are bijections.
\end{remark}
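To illustrate the reverse coding on a small instance (this example plays no role in the sequel): if $\phi_1(X^1) = x$, then
$$\rev{(\q x\,\Ap_1(x,y))} = \q x \q X^1\, X^1(y)$$
since $\rev{\Ap_1(x,y)} = X^1(y)$ and $X^1$ is the only variable of the form $\phi_n^{-1}(x)$ occurring free in it. Note that the quantifier on $x$ remains even though $x$ no longer occurs in the decoded formula.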
\begin{lemma}\label{idempotent}
If $A$ is a second order formula then $\vdash_{i}^{2} \rev{A^*} \equi A$.
\end{lemma}
\begin{proof}
By induction on the formula $A$. \qed
\end{proof}
\begin{remark}
The embarrassing case of decoding $\Ap_n(t,t_1,\dots,t_n)$ (where $t$
is not a variable) never arises here since we only decode encoded
formulas. We cannot say that $\rev{A^*} = A$, because in the case of
the quantifiers we can add or remove some quantifiers on variables
with no occurrence. For instance, if $X^0 \neq Y^0$, $\phi_0(X^0) =
x$ and $\phi_0(Y^0) =
y$ then $\rev{(\q X^0\,Y^0)^*} = \rev{(\q x\,\Ap_0(y))} = \q x\,Y^0$.
\end{remark}
\begin{corollary}\label{SCdeux}
$\vdash_{i}^{2} \rev{(SC_1)} \equi SC_2$ which means that each formula
in $\rev{(SC_1)}$ is equivalent to at least one formula in $SC_2$ and
vice versa.
\end{corollary}
\begin{proof}
Consequence of lemma \ref{idempotent}. \qed
\end{proof}
\begin{example}\label{exempleutile} The aim of this example is to give
an idea of the proof of lemma \ref{transproofdeux}.
Let $\G$ be a first-order context, $F = \Ap_1(x,y) \fl \Ap_2(x,y,y) \ou \Ap_1(y,x)$ and $t$ a term.
We have :
\begin{itemize}
\item $\rev{(\q x \,F)} = \q x \q X^1 \q X^2 (X^1(y) \fl X^2(y,y) \ou
Y^1(x))$ and $\rev{(\e x \,F)} = \e x \e X^1 \e X^2 (X^1(y) \fl X^2(y,y) \ou
Y^1(x))$ (where $\phi_1(Y^1) = y$).
\item If $t = z$, then $\rev{(F[x:=t])} = Z^1(y) \fl Z^2(y,y) \ou Y^1(z)$
(where $\phi_1(Z^1) = \phi_2(Z^2) = z$) and if $t$ is not a variable, then $\rev{(F[x:=t])} = \faux \fl \faux \ou Y^1(t)$.
\end{itemize}
We remark that :
\begin{itemize}
\item $\rev{(F[x:=z])} = Z^1(y) \fl Z^2(y,y) \ou
Y^1(z) = \rev{F}[X^1 := Z^1][X^2 := Z^2][x := z]$ if $z$ is a variable such that $\phi_1(Z^1) = \phi_2(Z^2) = z$.
\item $\rev{(F[x:=t])} = \faux
\fl \faux \ou Y^1(t) = \rev{F}[X^1 := \lambda x_1\,\faux][X^2 := \lambda x_1 x_2\,\faux][x := t]$ if $t$ is not a variable.
\end{itemize}
and then :
\begin{itemize}
\item If $\rev{\G} \vdash_{k}^{2} \rev{(\q x \,F)}$, then (by using
some $\q$-elimination rules) $\rev{\G} \vdash_{k}^{2} \rev{(F[x:=t])}$.
\item If $\rev{\G} \vdash_{k}^{2} \rev{(F[x:=t])}$, then (by using
some $\e$-introduction rules) $\rev{\G} \vdash_{k}^{2} \rev{(\e x \,F)}$.
\end{itemize}
\end{example}
\begin{lemma}\label{transproofdeux}
Let $\G$ be a first-order context and $A$ a first-order formula.
If $\G\vdash_{k}^{1} A$ then $\rev{\G} \vdash_{k}^{2} \rev{A}$ ($k
\in \{ i , c \}$).
\end{lemma}
\begin{proof} By induction on the derivation of $\G \vdash_k^1 A$.
The only difficult cases are the elimination of $\q$ and the
introduction of $\e$, which are treated in the same way as in
example \ref{exempleutile}. \qed
\end{proof}
\medskip
Now, we can prove the converse of theorem
\ref{transproofun}, which is the main tool to prove our completeness theorems:
\begin{theorem}\label{transprooftrois}
Let $\G$ be a second-order context and $A$ a second-order formula.
If $\G^*,SC_1 \vdash_{k}^{1} A^*$ then $\G \vdash_{k}^{2} A$ ($k
\in \{ i , c \}$).
\end{theorem}
\begin{proof} Assume $\G^*,SC_1 \vdash_{k}^{1} A^*$. By lemma
\ref{transproofdeux} we get $\rev{(\G^*)},\rev{(SC_1)} \vdash_{k}^{2}
\rev{(A^*)}$. By corollary \ref{SCdeux} and the fact that the formulas
in $SC_2$ are provable, the hypotheses $\rev{(SC_1)}$ can be
discharged, and by lemma \ref{idempotent} we may replace
$\rev{(\G^*)}$ and $\rev{(A^*)}$ by $\G$ and $A$.\qed
\section{Classical completeness}
Here is the usual definition of second-order models \cite{Man96,Pra67,VDal94}:
\begin{definition}[second-order classical model]
A second-order model for $\Lang_2$ is given by a tuple
${\cal M}_2 = ({\cal D} , \overline{\Sigma},
\{{\cal P}_n\}_{n \in \mathbb{N}})$ where
\begin{itemize}
\item ${\cal D}$ is a non-empty set.
\item $\overline{\Sigma}$ contains a function $\overline{f}$ from
${\cal D}^n$ to ${\cal D}$ for each function $f$ of arity $n$ in $\Sigma$.
\item ${\cal P}_n \subseteq {\cal P}({\cal D}^n)$ for each $n \in \mathbb{N}$.
The set ${\cal P}_n$ of subsets of ${\cal D}^n$ will be used as the range
for the second-order quantification of arity $n$.
For $n=0$, we assume that ${\cal P}_0 = {\cal P}({\cal D}^0) =
\{0,1\}$ because ${\cal P}({\cal D}^0) = {\cal P}(\emptyset) =
\{\emptyset, \{\emptyset\}\} = \{0,1\}$.
\end{itemize}
An ${\cal M}_2$-interpretation $\sigma$ is a function on $\Var
\cup \bigcup_{n \in \mathbb{N}} \Var_n$ such that $\sigma(x) \in {\cal D}$ for $x
\in \Var$ and $\sigma(X^n) \in {\cal P}_n$ for $X^n \in \Var_n$.
If $\sigma$ is an ${\cal M}_2$-interpretation, we define the
interpretation $\sigma(t)$ of a first-order term by induction with
$\sigma(f(t_1,\dots,t_n)) = \overline{f}(\sigma(t_1),\dots,\sigma(t_n))$.
Then if $\sigma$ is an ${\cal M}_2$-interpretation we
define ${\cal M}_2, \sigma \models A$ for a formula $A$ by
induction as follows:
\begin{itemize}
\item ${\cal M}_2, \sigma \models X^n(t_1,\dots,t_n)$ iff $(\sigma(t_1),\dots,\sigma(t_n)) \in \sigma(X^n)$
\item ${\cal M}_2, \sigma \models A \rightarrow B$ iff
${\cal M}_2, \sigma \models A$ implies ${\cal M}_2, \sigma \models B$
\item ${\cal M}_2, \sigma \models A \wedge B$ iff
${\cal M}_2, \sigma \models A$ and ${\cal M}_2, \sigma \models B$
\item ${\cal M}_2, \sigma \models A \vee B$ iff
${\cal M}_2, \sigma \models A$ or ${\cal M}_2, \sigma \models B$
\item ${\cal M}_2, \sigma \models \forall x\,A$ iff
for all $v \in {\cal D}$ we have ${\cal M}_2, \sigma[x := v] \models A$
\item ${\cal M}_2, \sigma \models \exists x\,A$ iff there exists
$v \in {\cal D}$ such that ${\cal M}_2, \sigma[x := v] \models A$
\item ${\cal M}_2, \sigma \models \forall X^n\,A$ iff
for all $\pi \in {\cal P}_n$ we have ${\cal M}_2, \sigma[X^n := \pi] \models A$
\item ${\cal M}_2, \sigma \models \exists X^n\,A$ iff there exists
$\pi \in {\cal P}_n$ such that ${\cal M}_2, \sigma[X^n := \pi] \models A$
\end{itemize}
We will write ${\cal M}_2 \models A$ if for all ${\cal
M}_2$-interpretations $\sigma$ we have ${\cal M}_2, \sigma
\models A$.
\end{definition}
\begin{definition}[first-order classical model]
A first-order model for $\Lang_1$ is given by a tuple
${\cal M}_1 = ({\cal D} , \overline{\Sigma}, \{\alpha_n\}_{n \in \mathbb{N}})$ where
\begin{itemize}
\item ${\cal D}$ is a non-empty set.
\item $\overline{\Sigma}$ contains a function $\overline{f}$ from
${\cal D}^n$ to ${\cal D}$ for each function $f$ of arity $n$ in $\Sigma$.
\item $\alpha_n \subseteq {\cal D}^{n+1}$ for each $n \in \mathbb{N}$.
The relation $\alpha_n$ will be the interpretation of $\Ap_n$.
\end{itemize}
An ${\cal M}_1$-interpretation $\sigma$ is a function from $\Var$ to
${\cal D}$.
For any first-order model ${\cal M}_1$, any first-order
formula $A$ and any ${\cal M}_1$-interpretation $\sigma$, we define
${\cal M}_1, \sigma \models A$ and ${\cal M}_1 \models A$
as above by induction on $A$
(we just have to remove the cases for second-order quantification).
\end{definition}
\begin{definition}[semantical translation]
Let ${\cal M}_1 = ({\cal D} , \overline{\Sigma}, \{\alpha_n\}_{n \in \mathbb{N}})$
be a first-order model. We define a second-order model
$\rev{{\cal M}_1} = ({\cal D} , \overline{\Sigma} , \{{\cal P}_n\}_{n \in \mathbb{N}})$
where ${\cal P}_0 = \{0,1\}$
and for $n > 0$, ${\cal P}_n = \{|a|_n; a \in {\cal D}\}$ where $|a|_n
= \{(a_1, \dots, a_n) \in {\cal D}^n;
(a,a_1, \dots, a_n) \in \alpha_n\}$.
Let $\sigma$ be an ${\cal M}_1$-interpretation, we define
an $\rev{{\cal M}_1}$-interpretation $\rev{\sigma}$ by
$\rev{\sigma}(x) = \sigma(x)$ if $x \in \Var$
and
$\rev{\sigma}(X^n) = |\sigma(\phi_n(X^n))|_n$.
\end{definition}
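To see how this translation behaves, one can unfold the definitions in the atomic case (for $n > 0$); this computation is the key step of lemma \ref{csemone} below:
$$
\begin{array}{rcl}
{\cal M}_1, \sigma \models (X^n(t_1,\dots,t_n))^* & \mbox{iff} & (\sigma(\phi_n(X^n)),\sigma(t_1),\dots,\sigma(t_n)) \in \alpha_n \cr
& \mbox{iff} & (\sigma(t_1),\dots,\sigma(t_n)) \in |\sigma(\phi_n(X^n))|_n = \rev{\sigma}(X^n) \cr
& \mbox{iff} & \rev{{\cal M}_1}, \rev{\sigma} \models X^n(t_1,\dots,t_n).
\end{array}
$$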
\begin{lemma}\label{csemone}
For any first-order model ${\cal M}_1$, any ${\cal
M}_1$-interpretation $\sigma$ and any second-order formula $A$,
${\cal M}_1, \sigma \models A^*$ if and only if
$\rev{{\cal M}_1}, \rev{\sigma} \models A$.
\end{lemma}
\begin{proof}
By induction on the formula $A$, this is an immediate consequence of
the definition of semantical translation. \qed
\end{proof}
\begin{corollary}\label{csemtwo}
For any first-order model ${\cal M}_1$,
${\cal M}_1 \models SC_1$ if and only if
$\rev{{\cal M}_1} \models SC_2$.
\end{corollary}
\begin{proof}
Immediate consequence of lemma \ref{csemone} using the fact that formulas in
$SC_1$ and $SC_2$ are closed. \qed
\end{proof}
\begin{theorem}[Completeness of second-order classical semantics]\label{ccomplet}
Let $A$ be a closed second-order formula. $\vdash_{c}^{2} A$ iff
for any second-order model ${\cal M}_2$ such that
${\cal M}_2 \models SC_2$ we have ${\cal M}_2 \models A$.
\end{theorem}
\begin{proof}
$\Longrightarrow$ Usual direct proof by induction on the proof of
$\vdash_{c}^{2} A$.
$\Longleftarrow$ Let ${\cal M}_1$ be a first-order model such that
${\cal M}_1 \models SC_1$.
Using corollary \ref{csemtwo} we have
$\rev{{\cal M}_1} \models SC_2$ and by hypothesis, we get
$\rev{{\cal M}_1} \models A$. Then using lemma \ref{csemone}
we have ${\cal M}_1 \models A^*$. As this is true for any
first-order model satisfying $SC_1$, the first-order completeness
theorem gives $SC_1 \vdash_{c}^{1} A^*$ and this leads to the desired result
$\vdash_{c}^{2} A$ using theorem \ref{transprooftrois}. \qed
\end{proof}
\section{Intuitionistic completeness}
Our method, when applied to the intuitionistic case, gives the
following definition of second-order models (similar to Prawitz's
adaptation of Beth's
models \cite{Pra70}). We mean that the definition arises mechanically
if we want to get lemma \ref{isemone} (which is the
analogue of lemma \ref{csemone} in the classical case).
\begin{definition}[second-order intuitionistic model]
A second-order Kripke model for $\Lang_2$ is given by a tuple
${\cal K}_2 = ({\cal K}, 0, \leq, \{{\cal D}_p\}_{p \in {\cal K}} ,
\{\overline{\Sigma}_p\}_{p \in {\cal K}},
\{\Pi_{n,p}\}_{n \in \mathbb{N}, p \in {\cal K}})$ where
\begin{itemize}
\item $({\cal K}, \leq, 0)$ is a partially ordered set with $0$ as bottom element.
\item ${\cal D}_p$ are non-empty sets such that
for all $p,q \in {\cal K}$, $p \leq q$ implies ${\cal D}_p \subseteq
{\cal D}_q$.
\item $\overline{\Sigma}_p$ contains a function $\overline{f}_p$ from
${\cal D}^n_p$ to ${\cal D}_p$ for each function $f$ of arity $n$ in
$\Sigma$. Moreover, for all $p,q \in {\cal K}$, $p \leq q$ implies
that for all $(a_1,\dots,a_n) \in {\cal D}^n_p \subseteq {\cal D}^n_q$
we have $\overline{f}_p(a_1,\dots,a_n) = \overline{f}_q(a_1,\dots,a_n)$.
\item $\Pi_{n,p}$ are non-empty sets of increasing functions $
(P_q)_{q \geq p}$ such that for all $q \geq p$, $P_q \in
{\cal P}({\cal D}^n_q)$ (increasing means that for all $q,q' \geq p$, $q \leq q'$
implies $P_q \subseteq P_{q'}$). Moreover, if $q \geq p$ and $\pi \in
\Pi_{n,p}$ then $\pi$ restricted to all $q' \geq q$ belongs to $\Pi_{n,q}$.
In particular, an element of
$\Pi_{0,p}$ is an increasing function with values in $\{0,1\}$, where $0 = \emptyset$ and
$1 = \{\emptyset\}$.
\end{itemize}
A ${\cal K}_2$-interpretation $\sigma$ at level $p$ is a function
$\sigma$ such that
$\sigma(x) \in {\cal D}_p$ for $x
\in \Var$ and $\sigma(X^n) \in \Pi_{n,p}$ for $X^n \in \Var_n$.
\end{definition}
\begin{remark} If $\sigma$ is a ${\cal K}_2$-interpretation
at level $p$ and $p \leq q$ then $\sigma$ can be considered as a ${\cal
K}_2$-interpretation at level $q$ by restricting all the values of
second-order variables to $q' \geq q$. Then we write ${\cal K}_2,
\sigma, q \realise A$ even if $\sigma$ is defined at a level $p \leq
q$. This is used mainly in the definition of the interpretation of
implication.
\end{remark}
\begin{definition}
If $\sigma$ is a ${\cal K}_2$-interpretation at level $p$, we
define the interpretation $\sigma(t)$ of a first-order term by induction with
$\sigma(f(t_1,\dots,t_n)) = \overline{f}_p(\sigma(t_1),\dots,\sigma(t_n))$.
Then if $\sigma$ is a ${\cal K}_2$-interpretation at level $p$ we
define ${\cal K}_2, \sigma, p \realise A$ for a formula $A$ by
induction as follows:
\begin{itemize}
\item ${\cal K}_2, \sigma, p \realise X^n(t_1,\dots,t_n)$ iff $(\sigma(t_1),\dots,\sigma(t_n)) \in \sigma(X^n)(p)$
\item ${\cal K}_2, \sigma, p \realise A \rightarrow B$ iff for all $q \geq p$ if
${\cal K}_2, \sigma, q \realise A$ then ${\cal K}_2, \sigma, q \realise B$
\item ${\cal K}_2, \sigma, p \realise A \wedge B$ iff
${\cal K}_2, \sigma, p \realise A$ and ${\cal K}_2, \sigma, p \realise B$
\item ${\cal K}_2, \sigma, p \realise A \vee B$ iff
${\cal K}_2, \sigma, p \realise A$ or ${\cal K}_2, \sigma, p \realise B$
\item ${\cal K}_2, \sigma, p \realise \forall x\,A$ iff for all $q \geq
p$, for all $v \in {\cal D}_q$ we have ${\cal K}_2, \sigma[x := v], q \realise A$
\item ${\cal K}_2, \sigma, p \realise \exists x\,A$ iff there exists
$v \in {\cal D}_p$ such that ${\cal K}_2, \sigma[x := v], p \realise A$
\item ${\cal K}_2, \sigma, p \realise \forall X^n\,A$ iff for all $q \geq
p$, for all $\pi \in \Pi_{n,q}$ we have ${\cal K}_2, \sigma[X^n := \pi], q \realise A$
\item ${\cal K}_2, \sigma, p \realise \exists X^n\,A$ iff there exists
$\pi \in \Pi_{n,p}$ such that ${\cal K}_2, \sigma[X^n := \pi], p \realise A$
\end{itemize}
We will write ${\cal K}_2 \realise A$ if for all ${\cal
K}_2$-interpretations $\sigma$ at level $0$ we have ${\cal K}_2, \sigma, 0 \realise A$.
\end{definition}
\begin{remark}
Interpretations are monotonic: the set of true
statements only increases when we go from world $p$ to world $q$ with
$p \leq q$.
\end{remark}
\medskip
We recall here Kripke's usual definition \cite{Kri65} of intuitionistic models:
\begin{definition}[first-order intuitionistic model]
A first-order Kripke model is given by a tuple
${\cal K}_1 = ({\cal K}, 0, \leq, \{{\cal D}_p\}_{p \in {\cal K}} ,
\{\overline{\Sigma}_p\}_{p \in {\cal K}},
\{\alpha_{n,p}\}_{n \in \mathbb{N}, p \in {\cal K}}, \realise)$ where
\begin{itemize}
\item $({\cal K}, \leq, 0)$ is a partially ordered set with $0$ as bottom element.
\item ${\cal D}_p$ are non-empty sets such that
for all $p,q \in {\cal K}$, $p \leq q$ implies ${\cal D}_p \subseteq
{\cal D}_q$.
\item $\overline{\Sigma}_p$ contains a function $\overline{f}_p$ from
${\cal D}^n_p$ to ${\cal D}_p$ for each function $f$ of arity $n$ in
$\Sigma$. Moreover, for all $p,q \in {\cal K}$, $p \leq q$ implies
that for all $(a_1,\dots,a_n) \in {\cal D}^n_p \subseteq {\cal D}^n_q$
we have $\overline{f}_p(a_1,\dots,a_n) = \overline{f}_q(a_1,\dots,a_n)$.
\item $\alpha_{n,p}$ are subsets of ${\cal D}_p^{n+1}$ such that
for all $p,q \in {\cal K}$, for all $n \in \mathbb{N}$,
$p \leq q$ implies $\alpha_{n,p}\subseteq
\alpha_{n,q}$.
\item $\realise$ is the relation defined by
$p \realise \Ap_n(a,a_1,\dots,a_n)$ if and only if
$p \in {\cal K}$ and
$(a,a_1,\dots,a_n) \in \alpha_{n,p}$.
\end{itemize}
A ${\cal K}_1$-interpretation $\sigma$ at level $p$ is a function from $\Var$
to ${\cal D}_p$.
For any first-order Kripke model ${\cal K}_1$, any first-order formula
$A$ and any ${\cal K}_1$-interpretation $\sigma$, we define ${\cal
K}_1, \sigma, p \realise A$ as above.
We will write ${\cal K}_1 \realise A$ iff for all ${\cal
K}_1$-interpretations $\sigma$ at level $0$ we have
${\cal K}_1, \sigma, 0 \realise A$.
\end{definition}
\begin{definition}[semantical translation]
Let
$${\cal K}_1 = ({\cal K}, 0, \leq, \{{\cal D}_p\}_{p \in {\cal K}} ,
\{\overline{\Sigma}_p\}_{p \in {\cal K}},
\{\alpha_{n,p}\}_{n \in \mathbb{N}, p \in {\cal K}}, \realise)$$
be a first-order Kripke model. We define a second-order Kripke model
$$\rev{{\cal K}_1} = ({\cal K}, 0, \leq, \{{\cal D}_p\}_{p \in {\cal K}},
\{\overline{\Sigma}_p\}_{p \in {\cal K}},
\{\Pi_{n,p}\}_{n \in \mathbb{N}, p \in {\cal K}})$$
where $\Pi_{n,p} = \{|a|_n; a \in {\cal D}_p\}$ with for all $q
\geq p$, $|a|_n(q) = \{(a_1, \dots, a_n) \in {\cal D}^n_q;
(a,a_1, \dots, a_n) \in \alpha_{n,q}\}$.
Let $\sigma$ be a ${\cal K}_1$-interpretation at level $p$, we define
a $\rev{{\cal K}_1}$-interpretation $\rev{\sigma}$ at level $p$ by
$\rev{\sigma}(x) = \sigma(x)$
and
$\rev{\sigma}(X^n) = |\sigma(\phi_n(X^n))|_n$.
\end{definition}
\begin{lemma}\label{isemone}
For any first-order Kripke model ${\cal K}_1$, any ${\cal
K}_1$-interpretation $\sigma$ at level $p$ and any second-order formula $A$,
${\cal K}_1,\sigma,p \realise A^*$ if and only if
$\rev{{\cal K}_1},\rev{\sigma},p \realise A$.
\end{lemma}
\begin{proof}
By induction on the formula $A$, this is an immediate consequence of
the definition of semantical translation. \qed
\end{proof}
\begin{corollary}\label{isemtwo}
For any first-order Kripke model ${\cal K}_1$,
${\cal K}_1 \realise SC_1$ if and only if
$\rev{{\cal K}_1} \realise SC_2$.
\end{corollary}
\begin{proof}
Immediate consequence of lemma \ref{isemone}. \qed
\end{proof}
\begin{theorem}[Completeness of second-order intuitionistic semantics]\label{icomplet}
Let $A$ be a closed second-order formula. $\vdash_{i}^{2} A$ iff
for every second-order Kripke model ${\cal K}_2$ such that
${\cal K}_2 \realise SC_2$ we have ${\cal K}_2 \realise A$.
\end{theorem}
\begin{proof}
$\Longrightarrow$ Usual direct proof by induction on the proof of
$\vdash_{i}^{2} A$.
$\Longleftarrow$ Identical to the proof of theorem \ref{ccomplet}
using lemma \ref{isemone} and corollary \ref{isemtwo} instead of lemma
\ref{csemone} and corollary \ref{csemtwo}.\qed
\end{proof}
\section{Examples of second-order propositional intuitionistic models}
In this section we will only consider propositional intuitionistic
logic. Then the definition of models can be simplified using the
following remark:
\begin{remark}
The interpretation of a propositional variable at level $p$ can be
seen as a bar: a bar being a set ${\cal B} \subseteq {\cal K}$ with
\begin{itemize}
\item for all $q \in \cal B$, $q \geq p$
\item for all $q, q' \in \cal B$ such that $q \neq q'$, we have neither $q \leq q'$ nor $q' \leq q$
\end{itemize}
In the case of finite models, there is a canonical isomorphism between
the set of bars and the set of increasing functions in $\{0, 1\}$, obtained by
associating to a bar $\cal B$ the function $\pi$ such that $\pi(q) =
1$ if and only if there exists $r \in \cal B$ such that $q \geq r$.
This usually helps to ``see'' the interpretation of a formula.
This is not the case for infinite models: if we consider $\mathbb{Q}^+$,
the set of rationals greater than $\sqrt{2}$ is not a bar.
\end{remark}
\begin{example}
We will now construct a counter-model for the universally quantified
Peirce's law $P = \forall X \forall Y (((X \rightarrow Y) \rightarrow
X) \rightarrow X)$. We take a model ${\cal K}_2$ with two points $0 \leq p$
and such that $\Pi_{0,0}$ contains $\pi_1$ and $\pi_2$ defined by
$\pi_1(0) = \pi_2(0) = \pi_2(p) = 0$ and $\pi_1(p) = 1$ (this means
that $\pi_2$ is the empty bar and $\pi_1$ is the bar $\{p\}$). It is clear
that ${\cal K}_2, \sigma[X:=\pi_1,Y:=\pi_2], 0 \not\realise ((X
\rightarrow Y) \rightarrow X) \rightarrow X$. So we have ${\cal K}_2
\not\realise P$. We can also remark that this model is not full: the
bar $\{0\}$ is missing.
\end{example}
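To make the verification explicit (a direct unfolding of the definitions, writing $\sigma'$ for $\sigma[X:=\pi_1,Y:=\pi_2]$ and omitting ${\cal K}_2$ and $\sigma'$ in the notation):
\begin{itemize}
\item $q \not\realise X \rightarrow Y$ for $q \in \{0,p\}$: in both cases the world $p \geq q$ satisfies $p \realise X$ (as $\pi_1(p) = 1$) but $p \not\realise Y$ (as $\pi_2(p) = 0$).
\item Hence $0 \realise (X \rightarrow Y) \rightarrow X$ holds vacuously, since no $q \geq 0$ realises $X \rightarrow Y$.
\item But $0 \not\realise X$, since $\pi_1(0) = 0$.
\end{itemize}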
A natural question arises: if one codes as usual conjunction,
disjunction and existential using implication and second order
universal quantification what semantics is induced by this coding?
If we keep the original conjunction, disjunction and existential, it
is obvious that the defined connectives are provably equivalent to the
original ones, and therefore have the same semantics.
However, if we remove conjunction, disjunction and existential from
the model we only have the following:
\begin{proposition}
The semantics induced by the second order coding of conjunction,
disjunction and existential is the standard Kripke's semantics if the
model is full (that is if $\Pi_{n,p}$ is the set of all increasing
functions with the desired properties).
\end{proposition}
\begin{proof}
\begin{description}
\item[$A \wedge B = \forall X ((A \rightarrow (B \rightarrow X)) \rightarrow X)$]: We must prove
that ${\cal K}_2, \sigma, p \realise A \wedge B$ if and only if
${\cal K}_2, \sigma, p \realise A$ and ${\cal K}_2, \sigma, p \realise
B$. The right to left implication is trivial. For the left to right,
we assume ${\cal K}_2, \sigma, p \realise A \wedge B$. We consider the
interpretation $\pi$ defined by $\pi(q) = 1$ if and only if
${\cal K}_2, \sigma, q \realise A$ and ${\cal K}_2, \sigma, q \realise
B$. Then it is immediate that
${\cal K}_2, \sigma[X := \pi], p \realise A \rightarrow (B \rightarrow X)$. So we have
${\cal K}_2, \sigma[X := \pi], p \realise X$ which means that
$\pi(p) = 1$ which is equivalent to ${\cal K}_2, \sigma, p \realise A$
and ${\cal K}_2, \sigma, p \realise
B$.
\item[$A \vee B = \forall X ((A \rightarrow X) \rightarrow (B \rightarrow X) \rightarrow X)$]: The proof is
similar using $\pi$ defined by
$\pi(q) = 1$ if and only if
${\cal K}_2, \sigma, q \realise A$ or ${\cal K}_2, \sigma, q \realise
B$.
\item[$\exists \chi\,A = \forall X (\forall \chi (A \rightarrow X) \rightarrow X)$]: The proof is
similar using $\pi$ defined by
$\pi(q) = 1$ if and only if there exists $\phi$
a possible interpretation for $\chi$ such that
${\cal K}_2, \sigma[\chi := \phi], q \realise A$.\qed
\end{description}
\end{proof}
\begin{remark}
If we compare this proof to the proofs in \cite{Kri90e,Par88} about data types
in AF2, we remark that second-order intuitionistic models are very
similar to realizability models. Moreover, in both cases, we are in general
unable to compute the semantics of a formula if the model is not full
(for realizability, not full means that the interpretation of second-order
quantification is an intersection over a strict subset of the
set of all sets of lambda-terms).
Moreover, the standard interpretation of the conjunction is:
${\cal K}, \sigma, p \realise A \wedge B$ if and only if ${\cal K},
\sigma, p \realise A$ and ${\cal K}, \sigma, p \realise B$. However,
if the model is not full and if the language does not contain the
conjunction, the function $\pi$ defined for $q \geq p$ by $\pi(q) = 1$
if and only if ${\cal K}, \sigma, q \realise A$ and ${\cal K},
\sigma, q \realise B$ does not always belong to $\Pi_{0,p}$. In this
case, the interpretation of the second-order definition of the
conjunction is strictly smaller than the natural interpretation.
It would be interesting to be able to construct such a non-standard
model, but this is very hard (due to the comprehension schemas). In
fact the authors do not know any practical way to construct such a
non-full model. In the framework of realizability, such a non-full model
would be very useful to prove that some terms are not typable with type
$A$ in Girard's system F while they belong to the interpretation of
$A$ in all full models (for instance Maurey's term for the $\inf$
function on natural numbers).
\end{remark}
\begin{small}
\baselineskip=0.17in
\nocite{*}
\section{Introduction}
A Delsarte surface $S$ is a surface of ${\mathbf{P}}^3$ defined by the vanishing of a polynomial $F$ consisting of four monomials. Let $A$ be the exponent matrix of $F$, then a Delsarte surface is the quotient of a Fermat surface if and only if $\det(A)\neq 0$. Shioda used this observation in \cite{ShiodaPic} to present an algorithm to determine the Lefschetz number of any smooth surface that is birationally equivalent to $S$.
Fix now two disjoint lines $\ell_1,\ell_2$ in ${\mathbf{P}}^3$. The projection with center $\ell_1$ onto $\ell_2$ yields a rational map $S\dashrightarrow {\mathbf{P}}^1$. Resolving the indeterminacies of this map yields a fibration $\tilde{S} \to {\mathbf{P}}^1$.
If the genus of the general fiber is one and this morphism has a section then Shioda's algorithm together with the Shioda-Tate formula allows one to determine the Mordell-Weil rank of the group of sections. Shioda applied this to the surface
\[ y^2+x^3+1+t^n\]
and showed in \cite{ShiodaAstr} that the maximal Mordell-Weil rank (by varying $n$) is 68.
If both lines $\ell_i$ are intersections of two coordinate hyperplanes then we call the obtained fibration a \emph{Delsarte fibration}. We will introduce the notion of a \emph{Delsarte base change}. Roughly said, this is a base change ${\mathbf{P}}^1\to{\mathbf{P}}^1$ completely ramified over $0$ and $\infty$. In particular, the pullback of a Delsarte fibration under a Delsarte base change is again a Delsarte fibration.
The first author determined in his PhD thesis \cite{HeijnePhD} the maximal Mordell-Weil rank under Delsarte base changes of any Delsarte fibration such that the general fiber has genus one. In this way he showed that Shioda's example has the highest possible rank among Delsarte fibration of genus one.
In \cite{FastExtremal} and \cite{FastLongTable} Fastenberg calculated the maximal Mordell-Weil rank under base changes $t\mapsto t^n$ for a special class of elliptic surfaces, i.e., elliptic curves over ${\mathbf{C}}(t)$ with nonconstant $j$-invariant such that a certain invariant $\gamma$ is smaller than $1$. It turned out that all the ranks that occur for Delsarte surfaces with nonconstant $j$-invariant also occur in Fastenberg's list. In \cite[Ch. 6]{HeijnePhD} it is shown that for every Delsarte fibration of genus one, there exist integers $m,n$ such that the Delsarte base change of degree $m$ of the Delsarte fibration is isomorphic to a base change of the form $t\mapsto t^n$ of one of the surfaces in Fastenberg's list.
In this paper we present a more conceptual proof for this phenomenon: First we study the configuration of singular fibers of a Delsarte fibration. We show that for any Delsarte fibration any two singular fibers over points $t\neq0,\infty$ are isomorphic. Then we show that after a base change of the form $t\mapsto t^n$ the Delsarte fibration is a base change of the form $t\mapsto t^m$ of a fibration with at most one singular fiber away from $0,\infty$.
If there is no singular fiber away from $0,\infty$, then the fibration becomes split after a base change of $t\mapsto t^m$. If there is at least one singular fiber then we show that there are three possibilities, namely the function field extension $K(S)/K({\mathbf{P}}^1)=K(x,y,t)/K(t)$ is given by $m_1+m_2+(1+t)m_3$, where the $m_i$ are monomials in $x$ and $y$, or this extension is given by $y^a=x^b+x^c+tx^d$, where $b,c,d$ are mutually distinct, or the singular fiber away from $0$ and $\infty$ has only nodes as singularities and is therefore semistable. See Proposition~\ref{propStandardForm}.
In the case of a genus one fibration we can use this classification to check almost immediately that any Delsarte fibration of genus one admits a base change of the form $t\mapsto t^n$ such that the pulled-back fibration is the pullback of a fibration with $\gamma<1$ or has a constant $j$-invariant. See Corollary~\ref{corGamma}. This procedure is carried out in Section~\ref{secDes}.
The techniques used in Fastenberg's papers rely on the fact that the fibration is not isotrivial, and it seems very hard to extend these techniques to isotrivial fibrations.
In Section~\ref{secIsoTrivial} we consider an example of a class of isotrivial Delsarte fibrations. Shioda's algorithm yields the Lefschetz number of any Delsarte surface with $\det(A)\neq 0$. Hence it is interesting to see how it works in the case where Fastenberg's method breaks down. Let $p$ be an odd prime number, and $a$ a positive integer. We consider the family of surfaces
\[ S:y^2=x^p+t^{2ap}+s^{2ap}\]
in ${\mathbf{P}}(2a,ap,1,1)$. Then $S$ is birational to a Delsarte surface. After blowing up $(1:1:0:0)$ we obtain a smooth surface $\tilde{S}$ together with a morphism $\tilde{S}\to {\mathbf{P}}^1$. The general fiber of this morphism is a hyperelliptic curve of genus $(p-1)/2$. We show that if $p>7$ then $\rho(\tilde{S})=2+6(p-1)$; in particular the Picard number is independent of $a$. Two of the generators of the N\'eron-Severi group of $\tilde{S}$ can be easily explained: the first one is the pullback of the hyperplane class on $S$, the second class is the exceptional divisor of the morphism $\tilde{S}\to S$. In Example~\ref{exaIso} we also give equations for some other classes.
If we take $p=3$ then we recover Shioda's original example. However, in Shioda's example $\rho(\tilde{S})$ is not completely independent of $a$: it depends on $\gcd(a,60)$. Similarly one can show that if $p=5$ or $p=7$ then $\rho(\tilde{S})$ depends on $a$. Our result shows that these cases are exceptions, i.e., for $p>7$ we have that $\rho(\tilde{S})$ is completely independent of $a$.
\section{Delsarte surfaces}\label{secDes}
In this section we work over an algebraically closed field $K$ of characteristic zero.
\begin{definition} \label{defBasic} A surface $S\subset {\mathbf{P}}^3$ is called a \emph{Delsarte surface} if $S$ is the zero-set of a polynomial of the form
\[F:=\sum_{i=0}^3 c_i \prod_{j=0}^3 X_j^{a_{i,j}},\]
with $c_i\in K^*$ and $a_{i,j}\in {\mathbf{Z}}_{\geq0}$. The $4\times 4$ matrix $A:=(a_{i,j})$ is called the \emph{exponent matrix} of $S$.
A \emph{Delsarte fibration of genus $g$} on a Delsarte surface $S$ consists of the choice of two disjoint lines $\ell_1,\ell_2$ such that both the $\ell_i$ are the intersection of two coordinate hyperplanes and the generic fiber of the projection $S\dashrightarrow \ell_2$ with center $\ell_1$ is an irreducible curve of geometric genus $g$.
A \emph{Delsarte birational map} is a birational map $\varphi:{\mathbf{P}}^3\dashrightarrow {\mathbf{P}}^3$ such that $\varphi(X_0:\dots:X_3)= (\prod X_j^{b_{0j}}:\dots: \prod X_j^{b_{3j}})$, i.e., $\varphi$ is a birational monomial map.
\end{definition}
\begin{remark} \label{rmkCoeff} Since $K$ is algebraically closed, we can multiply each of the four coordinates $X_i$ by a nonzero constant such that all four coefficients in $F$ coincide; hence without loss of generality we may assume that $c_i=1$.
After permuting the coordinates, if necessary, we may assume that $\ell_1$ equals $V(X_2,X_3)$ and $\ell_2$ equals $V(X_0,X_1)$. Then the projection map $S\dashrightarrow \ell_2$ is just the map $[X_0:X_1:X_2:X_3]\mapsto [X_2:X_3]$. Let $f(x,y,t):=F(x,y,t,1)\in K(t)[x,y]$.
Then the function field extension $K(S)/K(\ell_2)$ is isomorphic to the function field extension $K(x,y,t)/f$ over $K(t)$.
We call a Delsarte fibration with $\ell_1=V(X_2,X_3)$ and $\ell_2=V(X_0,X_1)$ the \emph{standard fibration} on $S$.
\end{remark}
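The data of Definition~\ref{defBasic} and Remark~\ref{rmkCoeff} can be organised in a short computation. The following sketch (using Python with sympy; the surface is a hypothetical illustration, not one of the examples of the text) builds the exponent matrix $A$ and the affine equation $f$ for the projective closure of $y^2+x^3+1+t=0$.

```python
from sympy import Matrix, degree, symbols

X0, X1, X2, X3 = symbols('X0:4')
x, y, t = symbols('x y t')

# Projective closure of the affine equation y^2 + x^3 + 1 + t = 0, with
# x = X0/X3, y = X1/X3, t = X2/X3 (a hypothetical illustration):
monomials = [X1**2*X3, X0**3, X3**3, X2*X3**2]
F = sum(monomials)

# Exponent matrix: row i lists the exponents of X0,...,X3 in the i-th monomial.
A = Matrix([[degree(m, v) for v in (X0, X1, X2, X3)] for m in monomials])
print(A.det())  # 18, so det(A) != 0

# Every row sum equals the common degree d = 3 of the monomials of F.
assert all(sum(A.row(i)) == 3 for i in range(4))

# The affine equation f(x, y, t) = F(x, y, t, 1) of Remark rmkCoeff:
f = F.subs({X0: x, X1: y, X2: t, X3: 1})
assert f == y**2 + x**3 + 1 + t
```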
\begin{definition}\label{defBaseChange}
Let $n$ be a nonzero integer. A \emph{Delsarte base change of degree $|n|$} of a Delsarte fibration $\varphi :S\dashrightarrow {\mathbf{P}}^1$ is a Delsarte surface $S_n$, together with a Delsarte fibration $\varphi_n: S_n\dashrightarrow {\mathbf{P}}^1$ and a Delsarte rational map $S_n\dashrightarrow S$ of degree $|n|$, such that there exists a commutative diagram
\[
\xymatrix{S_n\ar@{-->}[r] \ar@{-->}[d] &S\ar@{-->}[d] \\
{\mathbf{P}}^1 \ar[r]&{\mathbf{P}}^1}
\]
and $K(S_n)/K({\mathbf{P}}^1)$ is isomorphic to the function field extension $K(x,y,s)/(f(x,y,s^n))$ over $K(s)$.
\end{definition}
\begin{remark}
Note that $n$ is allowed to be negative. If $n$ is negative then a base change of degree $-n$ is the composition of the automorphism $t\mapsto 1/t$ of ${\mathbf{P}}^1$ with the usual degree $-n$ base change $t\mapsto t^{-n}$. In many cases we compose a base change with a Delsarte birational map which respects the standard fibration. In affine coordinates such a map is given by $(x,y,s)\mapsto (xs^a,ys^b,s^n)$ for some integers $a,b$.
\end{remark}
\begin{lemma}\label{lemRatFib} Let $S$ be a Delsarte surface. Suppose there is a nonzero vector ${\mathbf{v}}=(a,b,0,0)^T$ in ${\mathbf{Z}}^4$ such that $A{\mathbf{v}}\in \spa(1,1,1,1)^T$. Then the generic fiber of the standard fibration $\varphi:S\to {\mathbf{P}}^1$ is a rational curve.
\end{lemma}
\begin{proof}
After interchanging $x$ and $y$, if necessary, we may assume that $a$ is nonzero. Consider now $f_0:=f(x^a,x^by,t)$. The exponents of $x$ in the four monomials of $f_0$ are precisely the entries of $A{\mathbf{v}}$. Since $A{\mathbf{v}}=e(1,1,1,1)^T$ for some integer $e$ we have that $f_0=x^eg(y,t)$. This implies that the generic fiber of $\varphi$ is dominated by a finite union of rational curves. Since the generic fiber is irreducible it follows that the generic fiber of $\varphi$ is a rational curve.
\end{proof}
\begin{lemma} \label{lemProd} Let $S$ be a Delsarte surface. Suppose there is a nonzero vector ${\mathbf{v}}=(a,b,c,0)^T$ in ${\mathbf{Z}}^4$ such that $c\neq 0$ and $A{\mathbf{v}}\in \spa(1,1,1,1)^T$. Then there is a Delsarte base change of degree $|c|$ such that the pullback of the standard fibration on $S$ is birational to a product $C\times {\mathbf{P}}^1\to{\mathbf{P}}^1$.
\end{lemma}
\begin{proof}
Consider now $f_0:=f(xt^a,yt^b,t^c)$. The exponents of $t$ in the four monomials of $f_0$ are precisely the entries of $A{\mathbf{v}}$.
Since $A{\mathbf{v}}=e(1,1,1,1)^T$ for some integer $e$ we have that $f_0=t^eg(x,y)$. Let $S'$ be the projective closure of $g=0$ in ${\mathbf{P}}^3$. Then $S'$ is a cone over the plane curve $g=0$, in particular $S'$ is birational to $C\times {\mathbf{P}}^1$ and the standard fibration on $S'$ is birational to the projection $C\times {\mathbf{P}}^1\to {\mathbf{P}}^1$.
Now $S'$ is birational to the surface $S_c$, the projective closure of $f(x,y,t^c)=0$. Hence $S_c \to {\mathbf{P}}^1$ is birational to $C\times {\mathbf{P}}^1\to {\mathbf{P}}^1$.
\end{proof}
\begin{lemma}\label{lemDetA} Let $A$ be the exponent matrix of a Delsarte surface. There exists a nonzero vector ${\mathbf{v}}=(a,b,c,0)^T$ in ${\mathbf{Z}}^4$ such that $A{\mathbf{v}}\in \spa(1,1,1,1)^T$ if and only if $\det(A)=0$.
\end{lemma}
\begin{proof}
Since each row sum of $A$ equals $d$, the common degree of the monomials of $F$, it follows that $A(1,1,1,1)^T =d(1,1,1,1)^T$. Suppose first that $\det(A)\neq 0$. Then any ${\mathbf{v}}$ with $A{\mathbf{v}}\in\spa(1,1,1,1)^T$ lies in $A^{-1}\spa(1,1,1,1)^T=\spa(1,1,1,1)^T$, which does not contain a nonzero vector with vanishing fourth coordinate.
Suppose now that $\det(A)=0$. Denote with $A_i$ the $i$-th column of $A$. From the fact that each row sum of $A$ equals $d$ we get that $A_1+A_2+A_3+A_4=d(1,1,1,1)^T$.
Since $\det(A)=0$ there exists a nonzero vector $(a_1,a_2,a_3,a_4)$ such that $\sum a_iA_i=0$. From this we obtain
\[ (a_4-a_1)A_1+(a_4-a_2)A_2+(a_4-a_3)A_3=a_4(A_1+A_2+A_3+A_4)= a_4d(1,1,1,1)^T.\]
That is, ${\mathbf{v}}=(a_4-a_1,a_4-a_2,a_4-a_3,0)^T$ is a vector such that $A{\mathbf{v}}\in \spa (1,1,1,1)^T$. We need to show that ${\mathbf{v}}$ is nonzero. Suppose the contrary; then also $A{\mathbf{v}}=a_4d(1,1,1,1)^T$ is zero and therefore $a_4=0$. Substituting this in ${\mathbf{v}}$ yields ${\mathbf{v}}=(-a_1,-a_2,-a_3,0)=(0,0,0,0)$, so $(a_1,a_2,a_3,a_4)$ is zero, which contradicts our assumption that $(a_1,a_2,a_3,a_4)$ is nonzero.
\end{proof}
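The construction in the proof of Lemma~\ref{lemDetA} can be checked on a small example. In the sketch below (Python/sympy) the singular exponent matrix is made up for illustration and does not occur in the text.

```python
from sympy import Matrix

# A singular exponent matrix with constant row sum d = 4 (a made-up
# illustration); its last two columns coincide, so det(A) = 0.
A = Matrix([[4, 0, 0, 0],
            [0, 4, 0, 0],
            [2, 2, 0, 0],
            [0, 0, 2, 2]])
assert A.det() == 0

# A nonzero vector a with sum_i a_i * A_i = 0, where A_i is the i-th column:
a = Matrix([0, 0, 1, -1])
assert (A * a).is_zero_matrix

# The vector v = (a4 - a1, a4 - a2, a4 - a3, 0)^T from the proof:
a1, a2, a3, a4 = a
v = Matrix([a4 - a1, a4 - a2, a4 - a3, 0])
print(v.T)  # Matrix([[-1, -1, -2, 0]])

# A*v = a4*d*(1,1,1,1)^T lies in the span of (1,1,1,1)^T, as claimed:
assert A * v == a4 * 4 * Matrix([1, 1, 1, 1])
```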
\begin{remark} \label{rmkDet}
We now investigate the singular fibers of a Delsarte fibration, in particular the singular fibers over points $t=t_0$ with $t_0\neq 0,\infty$.
If $\det(A)=0$ then either the generic fiber has geometric genus 0 or after a Delsarte base change the fibration is split, i.e., the fibration is birational to a product. In the latter case all the fibers away from $0$ and $\infty$ are smooth. Hence from now on we restrict to the case where $\det(A)\neq 0$.
\end{remark}
\begin{lemma} \label{lemOneT} Let $S$ be a Delsarte surface with $\det(A)\neq0$, such that the generic fiber has positive geometric genus. Let $\varphi: S \to {\mathbf{P}}^1$ be the standard Delsarte fibration. Then there exists a Delsarte base change of $\varphi$ that is birational to the standard fibration on a Delsarte surface $S'$ with affine equation of the form $m_1+m_2+m_3+t^nm_4$, where each $m_i$ is a monomial in $x$ and $y$.
\end{lemma}
\begin{proof}
Let $\mathbf{e}_0=(1,1,1,1)^T$ and $\mathbf{e}_i$ be the $i$-th standard basis vector of ${\mathbf{Q}}^4$.
Let $V_i$ be the vector space spanned by $\mathbf{e}_0$ and $\mathbf{e}_i$. Since $A^{-1}\mathbf{e}_0=\frac{1}{d}\mathbf{e}_0$ it follows that $A^{-1}V_i$ is not contained in $\spa \{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$.
In particular, $\dim A^{-1}V_i\cap \spa \{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}=1$.
Let $\ell_i$ be the line $A^{-1}V_i\cap \spa \{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ and let ${\mathbf{v}}_i$ be a vector spanning $\ell_i$.
We can scale ${\mathbf{v}}_i$ such that $A{\mathbf{v}}_i=\mathbf{e}_i+t_i\mathbf{e}_0$ for some $t_i \in {\mathbf{Q}}$. Since $\mathbf{e}_0,\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3$ are linearly independent it follows that $\{\mathbf{e}_i+t_i\mathbf{e}_0\}_{i=1}^3$ are linearly independent and therefore ${\mathbf{v}}_1,{\mathbf{v}}_2,{\mathbf{v}}_3$ are linearly independent. Hence $\spa \{{\mathbf{v}}_1,{\mathbf{v}}_2,{\mathbf{v}}_3\}$ is three-dimensional and there is at least one ${\mathbf{v}}_i=(a_i,b_i,c_i,0)$ with $c_i\neq0$. Then the rational map defined by $(x,y,t)\mapsto (xt^{a_i},yt^{b_i},t^{c_i})$ is a composition of a Delsarte base change and a Delsarte rational map.
Now three of the four entries of $A{\mathbf{v}}_i$ coincide, say they equal $e$. The exponents of $t$ in the four monomials of $f_0:=f(xt^{a_i},yt^{b_i},t^{c_i})$ are the entries of $A{\mathbf{v}}_i$. In particular, in precisely three of the four monomials the exponent of $t$ equals the same constant $e$. Therefore $g:=f_0/t^e$ consists of four monomials of which precisely one contains a $t$. If the exponent of $t$ in this monomial is negative then we replace $t$ by $1/t$ in $g$. Then $g=0$ is an affine polynomial equation for the surface $S'$.
\end{proof}
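The substitution built in the proof of Lemma~\ref{lemOneT} can be illustrated as follows; both the equation and the exponents $(a,b,c)=(2,3,4)$ are an illustrative choice, not data from the text.

```python
from sympy import expand, symbols

x, y, t = symbols('x y t')

# An affine Delsarte equation in which t occurs in two monomials
# (an illustrative choice):
f = y**2 + x**3 + t**2 + x*t

# The substitution (x, y, t) -> (x*t^a, y*t^b, t^c) from the proof of
# Lemma lemOneT; for this f the choice (a, b, c) = (2, 3, 4) makes the
# t-exponents of three of the four monomials equal (all three equal 6).
f0 = f.subs({x: x*t**2, y: y*t**3, t: t**4}, simultaneous=True)
g = expand(f0 / t**6)

# g now has the required shape m1 + m2 + m3 + t^n*m4 with n = 2:
assert g == y**2 + x**3 + x + t**2
```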
Recall that we investigate the singular fibers of a Delsarte fibration, in particular the singular fibers over points $t=t_0$ with $t_0\neq 0,\infty$.
If we have a Delsarte fibration and take a Delsarte base change then the type of singular fiber over $t=0,\infty$ may change, since the base change map is ramified over these points. Over points with $t\neq 0,\infty$ the base change map is unramified and therefore the type of singular fibers remains the same.
Hence to describe the possible types of singular fibers over points with $t\neq0,\infty$ it suffices by Lemma~\ref{lemOneT} to study Delsarte surfaces such that only one monomial contains a $t$, i.e.,
we may restrict ourselves to Delsarte surfaces with affine equation $m_1+m_2+m_3+t^nm_4$. If $n=0$ then the fibration is split and there are no singular fibers. If $n\neq 0$ then the possible types of singular fibers are already determined by the case $n=1$, i.e., it suffices to consider Delsarte surfaces with affine equation $m_1+m_2+m_3+tm_4$.
\begin{definition}\label{defMinFib}
We call the standard fibration on a Delsarte surface a \emph{minimal Delsarte fibration} if the following conditions hold:
\begin{enumerate}
\item The affine equation for the standard fibration is of the form $m_1+m_2+m_3+tm_4$, where the $m_i$ are monomials in $x$ and $y$.
\item The exponent matrix $A$ of the corresponding surface $S\subset {\mathbf{P}}^3$ satisfies $\det(A)\neq 0$.
\end{enumerate}
\end{definition}
\begin{remark}\label{remMinFib} In the function field $K(S)$ we have the relation $t=(-m_1-m_2-m_3)/m_4$. In particular $K(S)=K(x,y)$ and therefore $S$ is a rational surface.
Consider now the defining polynomial for $S$, i.e., $M_1+M_2+M_3+X_2M_4$, where the $M_i$ are monomials in $X_0,X_1,X_3$, the degrees of $M_1, M_2$ and $M_3$ are the same, say $d$ and the degree of $M_4$ equals $d-1$.
The Delsarte fibration is induced by the map $(X_0:X_1:X_2:X_3)\mapsto (X_2:X_3)$.
If $S$ contains the line $\ell_1:X_2=X_3=0$ then this rational map can be extended to a morphism on all of $S$; otherwise we blow up the intersection of this line with $S$ and obtain a morphism $\tilde{S}\to{\mathbf{P}}^1$ such that each fiber is a plane curve of degree $d$.
There is a different way to obtain this family of plane curves. Define $N_i'$ as follows:
\[ N_i':=M_i(X_0,X_1,X_2,X_2) \mbox{ for } i=1,2,3 \mbox{ and } N_4'=X_2M_4(X_0,X_1,X_2,X_2)\]
Now the four $N_i'$ have a nontrivial greatest common divisor if and only if $X_3\mid M_i$ for $i=1,2,3$. The latter condition is equivalent to the condition that the line $\ell_1$ is contained in $S$. Moreover, if the greatest common divisor is nontrivial then it equals $X_2$.
Now set $N_i=N'_i$ if $\ell_1\not \subset S$ and set $N_i=N'_i/X_2$ if $\ell_1\subset S$.
Then $\lambda(N_1+N_2+N_3)+\mu N_4$ is a pencil of plane curves of degree $d$ or $d-1$ and the generic member of this pencil is precisely the generic fiber of the standard fibration on $S$.
We can consider the generic member of this family as a projective curve $C$ over $K(t)$ with defining polynomial $G:=N_1+N_2+N_3+tN_4\in K(t)[X_0,X_1,X_2]$. Let $A'$ be the exponent matrix of $C$ (considered as a curve in ${\mathbf{P}}^2_{K(t)}$). Set
\[ B:=\left(\begin{matrix} 1&0&0\\0&1&0\\0&0&1\\ 0&0&1\end{matrix}\right) \mbox{ if } \ell_1\not\subset S \mbox{ and } B:=\left(\begin{matrix} 1&0&\frac{-1}{d}\\0&1&\frac{-1}{d}\\0&0&\frac{d-1}{d}\\ 0&0&\frac{d-1}{d}\end{matrix}\right) \mbox{ otherwise.}\]
Then $A'=AB$. Since $A$ is invertible and $B$ has rank $3$ it follows that $\rank A'=3$. Moreover the first three rows of $A'$ are linearly independent, since the upper $3\times 3$-minor of $A'$ equals the upper $3\times 3$-minor of $A$ times the upper $3\times 3$ minor of $B$.
In particular, there is a vector $\mathbf{k}$, unique up to scalar multiplication, such that $\mathbf{k} A'=0$. Since the first three rows of $A'$ are linearly independent it follows that the fourth entry $k_4$ of $\mathbf{k}$ is nonzero. We can make the vector $\mathbf{k}$ unique, by requiring that $k_4>0$, $k_i\in {\mathbf{Z}}$ for $i=1,\dots, 4$ and $\gcd(k_1,k_2,k_3,k_4)=1$.
Moreover from $\rank A'=3$ and the fact that $(0,0,1,-1)B$ vanishes it follows that $\mathbf{k}\in \spa \{(0,0,1,-1)A^{-1}\}$.
Since none of the rows of $A'$ is zero there are at least two nonzero entries in $\mathbf{k}$. Suppose that there are precisely two nonzero entries, say $k_i$ and $k_4$. Then $-k_i$ times the $i$-th row of $A'$ equals $k_4$ times the fourth row of $A'$. Each row sum of $A'$ equals the degree of $C$, say $d$. From this it follows that $k_id=-k_4d$ and hence that $k_i=-1,k_4=1$. In particular, the $i$-th row and the fourth row coincide. After permuting $m_1,m_2,m_3$, if necessary, we may assume that the affine equation for the standard fibration is of the form $m_1+m_2+(1+t)m_3$.
Hence if the four monomials $m_1,m_2,m_3,m_4$ (in $x,y$) are distinct then at least three of the four entries of $\mathbf{k}$ are nonzero.
Let $A'_i$ be the $i$-th row of $A'$. Recall that each row sum of $A'$ equals $d$. Since $\sum k_iA'_i$ equals zero it follows that $0=\sum_i k_i\sum_j A'_{i,j}=\sum_i k_id$ and hence $\sum k_i=0$. Let $p$ be a prime number dividing one of the $k_i$. Since $\gcd(k_1,\dots,k_4)=1$ there is a $j$ such that $p\nmid k_j$. From $\sum k_i=0$ it follows that there is a $j'\neq j$ such that $p\nmid k_{j'}$. Hence for each prime number $p$ at least two of the entries of $\mathbf{k}$ are not divisible by $p$.
\end{remark}
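As a sketch of the construction of $\mathbf{k}$, consider the hypothetical minimal Delsarte fibration with affine equation $y^2+x^3+1+tx=0$ (not an example from the text). Here $N_1=X_1^2X_2$, $N_2=X_0^3$, $N_3=X_2^3$ and $N_4=X_0X_2^2$, and the matrix $A'$ below was computed by hand from these $N_i$.

```python
from math import gcd, lcm
from sympy import Matrix

# Exponent matrix A' of Remark remMinFib for the hypothetical example
# y^2 + x^3 + 1 + t*x = 0 (rows list the exponents of X0, X1, X2 in N_i):
Ap = Matrix([[0, 2, 1],
             [3, 0, 0],
             [0, 0, 3],
             [1, 0, 2]])
assert Ap.rank() == 3

# k spans the left kernel of A'; normalise it to be integral with k4 > 0
# and gcd equal to one.
k = Ap.T.nullspace()[0]
k = k / k[3]                          # k4 is nonzero by the remark
den = lcm(*[c.q for c in k])          # clear denominators
k = [int(ki * den) for ki in k]
print(k)  # [0, -1, -2, 3]

assert sum(k) == 0                            # since all row sums of A' agree
assert gcd(*[abs(ki) for ki in k]) == 1
assert sum(1 for ki in k if ki != 0) >= 3     # at most one entry vanishes
```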
\begin{proposition}\label{propDiscrim} Let $S\to {\mathbf{P}}^1$ be a minimal Delsarte fibration. Let $A'$, $\mathbf{k}$ and $N_i$ be as above. Suppose that the fiber over $t=t_0$ is singular for some $t_0\neq0,\infty$. Then
\[ t_0^{k_4}-\prod_{i: k_i\neq 0} k_i^{k_i}=0\]
or two of the $N_i$ coincide.
\end{proposition}
\begin{proof}
Let $G\in K(t)[X_0,X_1,X_2]$ be as above. Then $G$ defines a pencil of plane curves in ${\mathbf{P}}^2_K$. Assume that no two of the $N_i$ coincide. We aim at determining the singular members of the pencil defined by $G$.
Let $B_t$ be the matrix obtained from $A'$ by multiplying the fourth row by $t$. Let us consider the matrix $B_{t_0}$ for some $t_0\in K^*$.
Since the upper $3\times 3$ minor of $B_{t_0}$ equals the upper $3\times 3$ minor of $A'$, and this minor is nonzero, it follows that $\rank B_{t_0}=3$. Hence the kernel of right-multiplication by $B_{t_0}$ is one-dimensional and is generated by $(k_1,k_2,k_3,\frac{k_4}{t_0})$.
Consider now the closure of the image of the rational map $M:K^3\dashrightarrow K^4$ sending $(x,y,z)$ to $(N_1,N_2,N_3,N_4)$. Let $z_1,z_2,z_3,z_4$ be the coordinates on $K^4$. Then by the definition of the vector $(k_1,k_2,k_3,k_4)$ one has that $\prod N_i^{k_i}=1$ holds, i.e., on the image of $M$ one has
\[ \prod z_i^{k_i}=1\]
Since the greatest common divisor of the $k_i$ equals one, this defines an irreducible hypersurface $\overline{V}$ in $K^4$. Moreover, from the fact that $\rank A'$ equals 3 it follows that $M$ has finite fibers, hence the image of $M$ is three-dimensional and the closure of the image of $M$ is precisely the closure of $ \prod z_i^{k_i}=1$.
We now want to determine the values of $t_0$ for which the corresponding member of the pencil of plane curves is singular. Hence we want to find $(x_0:y_0:z_0)\in {\mathbf{P}}^2$ and $t_0\in K^*$ such that for $(t,X_0,X_1,X_2)=(t_0,x_0,y_0,z_0)$ the vector $(G_{X_0},G_{X_1},G_{X_2})$ is zero. In particular, the vector $(X_0G_{X_0},X_1G_{X_1},X_2G_{X_2})$ is zero. A direct calculation shows that the latter vector equals $(N_1,N_2,N_3,tN_4)A'$, which in turn equals $(N_1,N_2,N_3,N_4)B_t$.
Hence if $(x_0,y_0,z_0)$ is a singular point of a fiber over $t=t_0$ then $M(x_0,y_0,z_0)$ is contained in $\ker B_{t_0}\cap \overline{V}$.
We consider first the case where $M(x_0,y_0,z_0)$ is nonzero and $t_0\neq 0$. Then $ \prod_{i:k_i\neq 0} z_i^{k_i}=1$ and $(z_1,z_2,z_3,z_4)$ is a multiple of $(k_1,k_2,k_3,k_4/t_0)$. In particular,
\[ \frac{\prod_{i:k_i\neq 0} k_i^{k_i}}{t_0^{k_4}}=1\]
holds, which finishes the case where $M(x_0,y_0,z_0)$ is nonzero.
To finish we show that if $t_0\neq 0$ and $\ker B_{t_0}\cap \overline{V}$ consists only of $(0,0,0,0)$ then the fiber over $t_0$ is smooth. Since $\ker B_{t_0}\cap \overline{V}$ consists only of $(0,0,0,0)$ each singular point of the fiber satisfies $N_1=N_2=N_3=N_4=0$. In particular at least two of the $X_i$ are zero. Without loss of generality we may assume that the point $(0:0:1)$ is singular. Consider now $G(x,y,1)$ and write this as $m_1+m_2+m_3+tm_4$.
Since all the four $N_i$ are distinct we have that $m_1+m_2+m_3+tm_4=0$ is an equisingular deformation of $m_1+m_2+m_3+t_0m_4$ for $t$ in a small neighborhood of $t_0$. Hence we can resolve this singularity simultaneously for all $t$ in a neighborhood of $t_0$. Therefore all fibers in a neighborhood of $t_0$ are smooth and, in particular, the fiber over $t_0$ is smooth.
\end{proof}
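Proposition~\ref{propDiscrim} can be tested on the hypothetical example $y^2+x^3+1+tx=0$, for which the left kernel of $A'$ is spanned by $\mathbf{k}=(0,-1,-2,3)$. The fibers form the cubic pencil $y^2=-(x^3+tx+1)$, so the singular fibers can be located independently via the discriminant of the cubic; the sketch below (Python/sympy) compares both computations.

```python
from sympy import Rational, discriminant, symbols

x, t = symbols('x t')

# Hypothetical minimal Delsarte fibration y^2 + x^3 + 1 + t*x = 0,
# with k = (0, -1, -2, 3) computed by hand from A'.
k = [0, -1, -2, 3]

# Right-hand side of the equation in Proposition propDiscrim:
# t0^{k4} = prod_{i: k_i != 0} k_i^{k_i}.
rhs = Rational(1)
for ki in k:
    if ki != 0:
        rhs *= Rational(ki) ** ki
print(rhs)  # -27/4

# Independent check: the fiber over t is y^2 = -(x^3 + t*x + 1), singular
# exactly where the cubic has a multiple root, i.e. where its discriminant
# -4*t^3 - 27 vanishes.
disc = discriminant(x**3 + t*x + 1, x)
print(disc)  # -4*t**3 - 27

# Both conditions describe the same fibers: t0^3 = -27/4.
assert disc.subs(t**3, rhs) == 0
```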
\begin{lemma}\label{lemBaseAuto} Let $\varphi: S\to {\mathbf{P}}^1$ be a minimal Delsarte fibration. Then there is an automorphism $\sigma:S\to S$, mapping fibers of $\varphi$ to fibers, such that its action on the base curve is $t\mapsto \zeta_{k_4} t$.
\end{lemma}
\begin{proof}
Let $d$ be the smallest integer such that $D:=dA^{-1}$ has integral coefficients.
Let $T=\{\sum X_i^d=0\}\subset {\mathbf{P}}^3$ be the Fermat surface of degree $d$. Then there is a rational map $T\dashrightarrow S$ given by $(X_0:X_1:X_2:X_3)\mapsto ( \prod X_j^{d_{0j}}: \dots: \prod X_j^{d_{3j}})$. On $T$ there is a natural action of $({\mathbf{Z}}/d{\mathbf{Z}})^3$, given by $(X_0:X_1:X_2:X_3)\mapsto (\zeta_d^{a_1}X_0:\zeta_d^{a_2}X_1:\zeta_d^{a_3}X_2:X_3)$. On the affine chart $X_3\neq 0$ with coordinates $x,y,t$ this action is given by $(x,y,t)\mapsto (\zeta_d^{a_1}x,\zeta_d^{a_2}y,\zeta_d^{a_3}t)$.
The rational map $T\dashrightarrow S$ is given (in affine coordinates) by
\[(x,y,t)\mapsto \left(\frac{x^{d_{00}}y^{d_{01}}t^{d_{02}}}{x^{d_{30}}y^{d_{31}}t^{d_{32}}},\frac{x^{d_{10}}y^{d_{11}}t^{d_{12}}}{x^{d_{30}}y^{d_{31}}t^{d_{32}}},\frac{x^{d_{20}}y^{d_{21}}t^{d_{22}}}{x^{d_{30}}y^{d_{31}}t^{d_{32}}}\right).\]
The action of $({\mathbf{Z}}/d{\mathbf{Z}})^3$ descends to $S$ and respects the standard fibration. Let $t=X_2/X_3$ be a coordinate on the base of the standard fibration. Then $(a_1,a_2,a_3)\in ({\mathbf{Z}}/d{\mathbf{Z}})^3$ acts as $t \mapsto \zeta_d^e t$ with $e\equiv (d_{20}-d_{30})a_1+(d_{21}-d_{31})a_2+(d_{22}-d_{32})a_3\bmod d$.
Since $\mathbf{k}$ as defined in Remark \ref{remMinFib} is proportional to $(0,0,1,-1)A^{-1}$ it follows that $\mathbf{k}$ is proportional to $(d_{20}-d_{30},d_{21}-d_{31},d_{22}-d_{32},d_{23}-d_{33})$, i.e., there is an $m\in {\mathbf{Z}}$ such that $mk_i=d_{2i}-d_{3i}$. In particular, setting $d'=d/m$ it follows that $(a_1,a_2,a_3)$ acts as $t \mapsto \zeta_{d'}^e t$ with $e\equiv k_1a_1+k_2a_2+k_3a_3\bmod d'$.
Let $p$ be a prime number and suppose that $p^r$ divides $k_4$. Since $k_4$ is a divisor of $d$ and the greatest common divisor of the $k_i$ equals one, it follows that $p^r$ also divides $d'$. Moreover, at least one of $k_1,k_2,k_3$ is not divisible by $p$; without loss of generality we may assume that $k_1$ is invertible modulo $p$. From this it follows that we can choose $a_1,a_2,a_3$ in such a way that $a_1k_1+a_2k_2+a_3k_3\equiv 1 \bmod p^r$.
The corresponding automorphism $\sigma'_{p^r}$ of $S$ maps $t$ to $\zeta t$, where $\zeta$ is a primitive $N$-th root of unity for some $N$ divisible by $p^r$. Taking $\sigma_{p^r}:=(\sigma'_{p^r})^{N/p^r}$ we obtain an automorphism that multiplies $t$ by a primitive $p^r$-th root of unity.
Write now $k_4=\prod p_i^{r_i}$. Then $\sigma:=\prod_i \sigma_{p_i^{r_i}}$ multiplies $t$ by a primitive $k_4$-th root of unity.
\end{proof}
\begin{proposition}\label{propBaseChangeOneSingFib} Let $S\to {\mathbf{P}}^1$ be a Delsarte fibration with $\det(A)\neq 0$. Then there exists a Delsarte base change $S_n\to {\mathbf{P}}^1$ of $S\to {\mathbf{P}}^1$ which is isomorphic to the base change of a fibration $S_0\to {\mathbf{P}}^1$ of the same genus with at most one singular fiber outside $0,\infty$.
\end{proposition}
\begin{proof}
From Lemma~\ref{lemOneT} it follows that we may assume that the Delsarte fibration is a minimal Delsarte fibration, i.e., we have an affine equation for the generic fiber of the form $m_1+m_2+m_3+tm_4$, where the $m_i$ are monomials in $x$ and $y$.
On a minimal Delsarte fibration $\varphi :S\to {\mathbf{P}}^1$ there is an automorphism of order $k_4$ that acts on the $t$-coordinate as $t\mapsto \zeta_{k_4}t$. In particular, all the fixed points of this automorphism are in the fibers over 0 and $\infty$.
Consider next $\psi:S/\langle\sigma\rangle\to {\mathbf{P}}^1/\langle\sigma\rangle\cong {\mathbf{P}}^1$. By Proposition~\ref{propDiscrim} the singular fibers of $\varphi$ are possibly at $t=0, \infty$ and at $t^{k_4}=\prod k_i^{k_i}$, hence the singular fibers of $\psi$ are possibly at $t=0,\infty$ and $t=\prod k_i^{k_i}$; moreover $\varphi$ is the pullback of $\psi$ under the base change $t\mapsto t^{k_4}$.
\end{proof}
\begin{proposition}\label{propStandardForm} Let $\varphi: S\to {\mathbf{P}}^1$ be a minimal Delsarte fibration with affine equation $m_1+m_2+m_3+tm_4$ such that the general fiber has positive geometric genus. Then one of the following happens
\begin{itemize}
\item $m_4$ equals one of $m_1,m_2,m_3$. In this case the fibration is isotrivial.
\item $S$ is Delsarte-birational to a Delsarte surface with equation of the form $y^a=f(x,t)$.
\item every singular fiber over $t=t_0$ with $t_0\neq 0,\infty$ is semistable.
\end{itemize}
\end{proposition}
\begin{proof} Assume that all four $m_i$ are distinct. Let $N_i$ be as in Remark~\ref{remMinFib}.
Let $t_0\in K^*$ be such that the fiber over $t=t_0$ is singular. Let $P=(X_0:X_1:X_2)\in {\mathbf{P}}^2$ be a singular point of the fiber.
From the proof of Proposition~\ref{propDiscrim} it follows that at least one of the $N_i$ is nonzero and that $(N_1:N_2:N_3:N_4)=(k_1:k_2:k_3:\frac{k_4}{t_0})$ holds.
From Remark~\ref{remMinFib} it follows that at most one of the $k_i$ is zero.
Suppose first that one of the $k_i$, say $k_1$, is zero. This implies that $N_1$ vanishes and that the other $N_i$ are nonzero. Therefore one of the coordinates of $P$ has to be zero (in order to have $N_1=0$). If two of the coordinates of $P$ are zero then from $\det(A)\neq0$ it follows that there is some $i\neq 1$ such that $N_i=0$, which contradicts the fact that at most one $N_i$ vanishes. Hence without loss of generality we may assume $P=(\alpha:0:1)$ with $\alpha\neq 0$, $X_1\mid N_1$ and $X_1\nmid N_i$ for $i=2,3,4$. In particular, we have an affine equation for the fibration of the form $m_1+m_2+m_3+tm_4$, where $y$ divides $m_1$, and $m_2,m_3,m_4$ are powers of $x$.
Multiply the equation by a suitable power of $x$ such that $m_1$ is of the form $x^{ab}y^{b}$, and set $y_1=x^ay$. Then we obtain an equation of the form $y_1^b=f(x,t)$, where $f(x,t)$ is of the form $x^{c_1}+x^{c_2}+tx^{c_3}$ with $c_1,c_2,c_3$ mutually distinct. This yields the second case.
It remains to consider the case where all the $N_i$ are nonzero. Let $P\in S$ be a point where the fiber over $t=t_0$ is singular. Let $f$ be an affine equation for $S$. We prove below that the localization of $K[x,y,z,t]/(f_x,f_y,f_z,t-t_0)$ at $P$ is isomorphic to $K$. Hence the scheme defined by the Jacobian ideal of the fiber at $t=t_0$ has length one at the point $P$. Equivalently, the Milnor number of the singularity of the fiber over $t=t_0$ at the point $P$ equals one. In particular, the singularity of the fiber at $P$ is an ordinary double point.
Consider now the rational map $\tau:{\mathbf{P}}^2\setminus V(X_0X_1X_2) \to {\mathbf{P}}^3$ given by $(X_0:X_1:X_2)\mapsto (N_1:N_2:N_3:N_4)$. The map $\tau$ is unramified at all points $Q\in {\mathbf{P}}^2$ such that $\tau (Q)\not \in V(X_0X_1X_2X_3)$.
Since we assumed that all the $N_i$ are nonzero it follows that also all the $X_i$ are nonzero. Hence the length of $V(f_{X_0},f_{X_1},f_{X_2},t-t_0)$ at $P$ equals the length of $V(X_0f_{X_0},X_1f_{X_1},X_2f_{X_2},t-t_0)$ at $P$. From the proof of Proposition~\ref{propDiscrim} it follows that $V(X_0f_{X_0},X_1f_{X_1},X_2f_{X_2},t-t_0)$ is the scheme-theoretic intersection of $\ker B_{t_0}$ and $V(\prod z_i^{k_i}-1)$, and that this intersection is locally given by
$V(k_4z_1-t_0k_1z_4,\,k_4z_2-t_0k_2z_4,\,k_4z_3-t_0k_3z_4,\,\prod z_i^{k_i}-1)$, whence the length of the scheme equals one, and therefore the local Milnor number equals one, and the singularity is an ordinary double point.
\end{proof}
\begin{theorem}\label{thmSingFib} Suppose $S\to {\mathbf{P}}^1$ is a Delsarte fibration of genus $1$ with nonconstant $j$-invariant. Then every singular fiber at $t\neq 0,\infty$ is of type $I_\nu$.
\end{theorem}
\begin{proof}
Without loss of generality we may assume that the fibration is a minimal Delsarte fibration. In particular we have an affine equation for this fibration of the form described in Proposition~\ref{propStandardForm}.
In the first case the fibration is isotrivial and therefore the $j$-invariant is constant, hence we may exclude this case. If we are in the third case then each singular fiber at $t=t_0$ is semistable and, in particular, is of type $I_\nu$.
It remains to consider the second case. In this case we have an affine equation of the form $y^a=f(x,t)$. Suppose first that $a>2$ holds. Then the generic fiber has an automorphism of order $a$ with fixed points. This implies that the $j$-invariant of the generic fiber is either $0$ or $1728$. In particular, the $j$-invariant is constant and the fibration is isotrivial. Hence we may assume $a=2$. In this case we have an affine equation $y^2=f(x,t)$. Without loss of generality we may assume $x^2\nmid f$. Since the generic fiber has genus $1$ it follows that $\deg_x(f)\in \{3,4\}$. Since $S$ is a Delsarte surface it follows that $f$ consists of three monomials.
Suppose first that $\deg_x(f)=3$ and that at $t=t_0$ there is a singular fiber of type different from $I_\nu$. Then $f(x,t_0)$ has a triple root $\alpha$, i.e., $f(x,t_0)=c(x-\alpha)^3$ with $c\in K^*$. This implies that $f(x,t_0)$ consists of either one (if $\alpha=0$) or four (if $\alpha\neq0$) monomials in $x$. This contradicts the fact that $f(x,t_0)$ consists of three monomials for $t_0\neq 0$.
If $\deg_x(f)=4$ then we may assume (after permuting coordinates, if necessary) that $f=x^4+x^a+t$ or $f=x^4+tx^a+1$.
If the fiber type at $t=t_0$ is different from $I_\nu$ then $f(x,t_0)$ consists of three monomials and $y^2=f(x,t_0)$ has a singularity different from a node. In particular, $f(x,t_0)$ has a zero of order at least 3 and therefore $f(x,t_0)=(x-a)^4$ or $f(x,t_0)=(x-a)(x-b)^3$. In the first case $f(x,t_0)$ contains five monomials, contradicting the fact that it has three monomials. In the second case note that the constant coefficient of $f(x,t_0)$ is nonzero and hence $ab\neq 0$. Now either the coefficient of $x$ or of $x^3$ is zero. From this it follows that either $b=-3a$ or $a=-3b$ holds. Substituting this in $f(x,t_0)$ and using the fact that $f(x,t_0)$ has at most three monomials yields $b=0$, contradicting $ab\neq 0$.
\end{proof}
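The last step of this proof can be verified symbolically; the sketch below (Python/sympy) checks the case $b=-3a$, the case $a=-3b$ being symmetric.

```python
from sympy import Poly, expand, symbols

x, a = symbols('x a')

# Check of the last step of the proof: if f(x, t0) = (x - a)(x - b)^3 with
# b = -3a (the case in which the coefficient of x vanishes), then f(x, t0)
# has four monomials for a != 0, so it cannot consist of three monomials.
f = expand((x - a) * (x + 3*a) ** 3)
coeffs = Poly(f, x).all_coeffs()
print(coeffs)  # [1, 8*a, 18*a**2, 0, -27*a**4]
assert sum(1 for c in coeffs if c != 0) == 4
```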
\begin{corollary}\label{corSingFib} Let $\varphi: S\to {\mathbf{P}}^1$ be an elliptic Delsarte surface. Then there exists a cyclic base change of $\varphi$ ramified only at $0$ and $\infty$
that is isomorphic to a cyclic base change, ramified only at $0$ and $\infty$, of an elliptic surface with at most one singular fiber away from $0$ and $\infty$, and this fiber is of type $I_\nu$.
\end{corollary}
Let $\pi:E\to {\mathbf{P}}^1$ be an elliptic surface (with section). Define $\gamma(\pi)$ to be
\[ \gamma(\pi):=\sum_{t\neq0,\infty} \left( f_t-\frac{e_t}{6}\right)-\frac{n_0}{6}-\frac{n_\infty}{6},\]
where $f_t$ is the conductor of $\pi^{-1}(t)$, $e_t$ the Euler number of $\pi^{-1}(t)$, and $n_p$ is zero unless the fiber at $p$ is of type $I_n$ or $I_n^*$, in which case $n_p=n$.
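As a small sketch (not part of the text's development), the defining formula for $\gamma$ can be evaluated for a concrete fiber configuration; the standard values $f_t=2$ and $e_t=n+6$ for a fiber of type $I_n^*$ are assumed.

```python
from fractions import Fraction

# Sketch of the definition of gamma: fibers_away lists (f_t, e_t) for the
# singular fibers with t != 0, infinity; n0, ninf are as in the text.
def gamma(fibers_away, n0, ninf):
    return (sum(Fraction(6 * ft - et, 6) for ft, et in fibers_away)
            - Fraction(n0, 6) - Fraction(ninf, 6))

# Hypothetical illustration with the fiber configuration of the Example at
# the end of this section: a fiber of type IV at 0 (n0 = 0), I_1 at infinity
# (ninf = 1) and one fiber of type I_1^* elsewhere (f_t = 2, e_t = 7):
print(gamma([(2, 7)], n0=0, ninf=1))  # 2/3, so gamma < 1
```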
In \cite{FastExtremal} and \cite{FastLongTable} Fastenberg studies rational elliptic surfaces with $\gamma<1$. She determines the maximal Mordell-Weil rank of such elliptic surfaces under cyclic base changes of the form $t\mapsto t^n$.
We will now show that each Delsarte fibration of genus 1 with nonconstant $j$-invariant becomes, after a Delsarte base change, the base change of a rational elliptic surface with $\gamma<1$. In particular, the maximal Mordell-Weil ranks for Delsarte fibrations of genus 1 under cyclic base change (as presented in \cite[§3.4]{HeijnePhD} and \cite{HeijneMoC}) can also be obtained from \cite{FastLongTable}.
\begin{corollary}\label{corGamma} Let $\pi: S\to {\mathbf{P}}^1$ be a minimal Delsarte fibration of genus $1$ with nonconstant $j$-invariant. Then $S$ is the base change of a rational elliptic surface with $\gamma<1$.
\end{corollary}
\begin{proof}
From Theorem~\ref{thmSingFib} it follows that $\pi$ is the base change of an elliptic fibration $\pi':S'\to {\mathbf{P}}^1$ with at most one singular fiber away from $0$ and $\infty$, this fiber being of type $I_\nu$.
Since the $j$-invariant is nonconstant it follows that $\pi'$ has at least three singular fibers, hence there is precisely one singular fiber away from $0$ and $\infty$.
Since this fiber is of type $I_\nu$ it follows that $f_t=1$ for this fiber. Hence
\[ \gamma=1-\frac{e_t+n_0+n_\infty}{6}<1.\]
\end{proof}
\begin{remark} The converse of this result also holds: let $\pi:S\to {\mathbf{P}}^1$ be a rational elliptic surface with $\gamma<1$ and with only one singular fiber away from $0$ and $\infty$, this fiber being of type $I_\nu$. Then there exists a base change of the form $t\mapsto t^n$ such that the pullback of $\pi:S\to{\mathbf{P}}^1$ along this base change is birational to the standard fibration on a Delsarte surface. One can obtain this result by comparing the classification of elliptic Delsarte surfaces from \cite[Chapter 3]{HeijnePhD} with the tables in \cite{FastExtremal} and \cite{FastLongTable}.
\end{remark}
\begin{example}
According to \cite{FastLongTable} there is an elliptic surface with a $IV$-fiber at $t=0$, an $I_1$-fiber at $t=\infty$ and one further singular fiber of type $I_1^*$, such that the maximal rank under base changes of the form $t\mapsto t^n$ is $9$. Such a fibration has a nonconstant $j$-invariant. Corollary~\ref{corSingFib} now implies that this fibration is not a Delsarte fibration.
If we twist the $I_1^*$ fiber and one of the fibers at $t=0$ or $t=\infty$ then we get the fiber configurations $IV;I_1^*;I_1$ or $II^*;I_1;I_1$. The maximal rank under base changes of the form $t\mapsto t^n$ equals 9 in both cases. Now $y^2=x^3+x^2+t$ has singular fibers of type $I_1$ at $t=0$ and $t=-4/27$ and of type $II^*$ at $t=\infty$, and $y^2=x^3+tx+t^2$ has a $IV$-fiber at $t=0$, an $I_1$-fiber at $t=-4/27$ and an $I_1^*$-fiber at $t=\infty$. Hence both fibrations occur as Delsarte fibrations.
\end{example}
\begin{example}
Consider the elliptic Delsarte surface that corresponds to
\[Y^2=X^3+X^2+tX.\]
We can easily compute the discriminant and $j$-invariant of this fibration:
\[ \Delta=-64t^3+16t^2 \mbox{ and } j=256\frac{(3t-1)^3}{4t^3-t^2}.\]
From this we can see that there are three singular fibres. Over $t=0$ there is an $I_2$-fiber, over $t=\infty$ there is a $III^*$-fiber and over $t=1/4$ there is an $I_1$-fiber.
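These formulas follow from a standard Weierstrass computation: for $y^2=x^3+x^2+tx$ one has $b_2=4$, $b_4=2t$, $b_6=0$ and $b_8=-t^2$, so that

```latex
\[ \Delta = -b_2^2b_8 - 8b_4^3 = 16t^2 - 64t^3, \qquad
   c_4 = b_2^2 - 24b_4 = 16 - 48t, \]
\[ j = \frac{c_4^3}{\Delta}
     = \frac{4096\,(1-3t)^3}{16t^2(1-4t)}
     = 256\,\frac{(3t-1)^3}{4t^3-t^2}. \]
```

In particular the finite zeroes of $\Delta$ are $t=0$ (of order $2$) and $t=\tfrac{1}{4}$ (of order $1$).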
We then check that this corresponds to the second entry in the list of \cite{FastLongTable}.
\end{example}
\begin{remark}\label{rmkDifference} The approaches to determining the maximal Mordell-Weil ranks under cyclic base change in \cite{FastLongTable} and in \cite{HeijnePhD} are quite different. The former relies on studying the local system coming from the elliptic fibration, whereas the latter relies purely on Shioda's algorithm for Lefschetz numbers of Delsarte surfaces. This explains why Fastenberg can deal with several base changes where the ``minimal'' fibration has four singular fibers (which cannot be covered by Shioda's algorithm because of Proposition~\ref{propBaseChangeOneSingFib}) but cannot deal with fibrations with constant $j$-invariant, some of which Shioda's algorithm can handle.
\end{remark}
\section{Isotrivial fibrations}\label{secIsoTrivial}
Using Proposition~\ref{propStandardForm} one easily describes all possible isotrivial minimal Delsarte fibrations.
\begin{proposition} Suppose the standard fibration on $S$ is isotrivial and that the genus of the generic fiber is positive. Then there is a Delsarte base change and a Delsarte birational map such that the pullback of the standard fibration is of the form $m_1+m_2+(1+t^n)m_3$, $y^3+x^3+x^2+t^n$, or $y^a+x^2+x+t^n$.
\end{proposition}
\begin{proof} Suppose the affine equation for $S$ is of the third type of Proposition~\ref{propStandardForm}. Then $S$ admits a semistable fiber and in particular the fibration cannot be isotrivial.
If the affine equation for $S$ is of the first type of Proposition~\ref{propStandardForm}, then the generic fiber is (after an extension of the base field) isomorphic to $m_1+m_2+m_3$; in particular any two smooth fibers of the standard fibration are isomorphic and therefore this fibration is isotrivial. In this case $S$ is the pullback of $m_1+m_2+(1+t^n)m_3$.
Hence we may restrict ourselves to the case where we have an affine equation of the form $y^a=x^bf(x,t)$ where $f$ consists of three monomials, $f(0,t)$ is not zero and the exponent of $x$ in each of the three monomials in $f$ is different. Moreover, after a Delsarte birational map we may assume that $b<a$.
The surface $S$ is birational to a surface $y^a=x^bz^{c+\deg(f)}f(x/z,t)$ in ${\mathbf{P}}(1,w,1)$, with $0\leq c<a$ and $w=(b+c+\deg(f))/a\in {\mathbf{Z}}$.
The standard fibration on $S$ is isotrivial if and only if the moduli of the zero-set of $x^bz^{c+\deg(f)}f(x/z,t)$ in ${\mathbf{P}}^1_{(x:z)}$ is independent of $t$. We will now consider this problem.
We first treat the case where $d':=\deg_x(f)>2$ holds.
After swapping the role of $x$ and $z$, if necessary, we may assume that the coefficient of $xz^{d'-1}$ is zero.
We claim that after a map of the form $y=t^{c_1}y, x=t^{c_2}x,z=z,t=t^{c_3}$ we may assume that $f=x^{d'}+x^{c'}z^{d'-c'}+tz^{d'}$. To see this, take an affine equation for $S$ of the form
$y^a=x^b(a_1x^{d'}+a_2x^{c'}+a_3)$, where $a_i\in\{1,t\}$, and two of the $a_i$ equal $1$. If $a_1=t$ then we need to take an integer solution of $ac_1=bc_2+d'c_2+c_3=bc_2+c'c_2$, if $a_2=t$ then we need to take an integer solution of $ac_1=bc_2+d'c_2=bc_2+c'c_2+c_3$. In both cases we obtain an affine equation of the form $y^a=x^b(x^{d'}+x^{c'}+t^n)$. This fibration is isotrivial if and only if $y^a=x^b(x^{d'}+x^{c'}+t)$ is isotrivial, which proves the claim.
Hence from now on we assume that $f$ is of the form $x^{d'}+x^{c'}z^{d'-c'}+tz^{d'}$ with $d'>2$ and $c'>1$.
Let $s$ denote the number of distinct zeroes of $g(x,z):=x^bz^{c+\deg(f)}f(x/z,t)$ for a general $t$-value.
We say that the fiber at $t=t_0$ is bad if $x^bz^{c+\deg(f)}f(x/z,t_0)$ has at most $s-1$ distinct zeroes.
The main result from \cite{KT} yields that if the fiber at $t=t_0$ is bad then $g(x,z)$ has at most $3$ distinct zeroes. We are first going to classify all $g$ satisfying this condition. Then we will check case-by-case whether the moduli of the zeroes of $g$ are independent of $t$.
Consider the fiber over $t=0$. From $c'>1$ it follows that $x=0$ is a multiple zero of $f(x,0)$. Hence the fiber over $t=0$ is bad. If $c$ is positive then the criterion from \cite{KT} implies that $g(x,0)$ can have at most one further zero and hence $d'=c'+1$. If $c=0$ then $g$ can have at most two further zeroes and therefore $d'-c'\in \{1,2\}$.
Suppose first $d'=c'+1$.
Consider $f'(x,t):=\frac{\partial}{\partial x}f(x,t)$. Our assumption on $f$ implies that $f'(x,t)$ is a polynomial only in $x$. The fiber at $t=t_0$ is bad if and only if $f'(x,t_0)$ and $f(x,t_0)$ have a common zero. From $c'=d'-1$ it follows that $f'(x,t)$ has a unique zero different from $0$, say $x_0$, and $x_0$ is a simple zero of $f'(x,t)$. Now $f(x_0,t)$ is a linear polynomial in $t$. Hence there is a unique nonzero $t$-value $t_0$ over which there is a bad fiber. Since $x_0$ is a simple zero of $f'(x,t_0)$ it follows that $(x-x_0)^2$ divides $f(x,t_0)$ and that there are $d'-2$ further distinct zeroes, all different from $0$. Using that $g$ has at most $3$ zeroes it follows that if both $b$ and $c$ are nonzero then $d'-2=0$, if one of $b,c$ is zero then $d'-2\leq 1$ and if both $b$ and $c$ are zero then $d'-2\leq 2$. Using that we assumed that $d'$ is at least $3$ we obtain the following possibilities for $g$:
$x^b(x^3+x^2z+tz^3)$, $z^c(x^3+x^2z+tz^3)$, $x^3+x^2z+tz^3$ and $x^4+x^3z+tz^4$.
Suppose now $c=0$ and $d'=c'+2$. Then $f'$ is of the form $\beta(x^2+\alpha)x^{d'-3}$. In particular, there are two possible $x$-values for a bad point in a bad fiber. If they occur in the same fiber and $b=0$ then $d'\in \{4,5\}$, otherwise $d'\in \{3,4\}$. Since $2\leq c'=d'-2$ we may exclude $d'=3$ and we obtain that the two polynomials $x^4+x^2+t$ and $x^5+x^3+t$ are the only possibilities for $f$. We can exclude $x^5+x^3+t$, since it has bad fibers at $t^2=\frac{-108}{3125}$ and a necessary condition to have $d'=5$ is that there is at most one bad fiber with $t\neq0,\infty$.
If $b>0$ then $d'\leq 4$; in particular we have only $x^b(x^4+x^2+t)$ to check.
In only one of the above cases are the moduli independent of $t$, namely for $g=x^3+x^2z+tz^3$:
Note that the $j$-invariants of the elliptic curves $y^2=x^3+x^2+t$ and $y^2=tz^3+z+1$ are not constant, hence the moduli of the zeros of $x^b(x^3+x^2z+tz^3)$ and of $z^c(x^3+x^2z+tz^3)$ depend on $t$ (if $b>0$ resp. $c>0$ holds). Since $x^3+x^2z+tz^3$ has degree 3, the moduli of its zeroes are obviously constant.
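For the first of these curves the nonconstancy can be checked directly: $y^2=x^3+x^2+t$ has $b_2=4$, $b_4=0$, $b_6=4t$ and $b_8=4t$, so

```latex
\[ c_4 = b_2^2 - 24b_4 = 16, \qquad
   \Delta = -b_2^2b_8 - 27b_6^2 = -64t - 432t^2, \]
\[ j = \frac{c_4^3}{\Delta} = \frac{-256}{4t+27t^2}, \]
```

which is visibly nonconstant in $t$.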
The family of genus one curves $y^2=x^4+x^2+t$ has a semistable fiber at $t=\frac{1}{4}$ and the family of genus one curves $y^2=x^4+x^3z+tz^4$ has a semistable fiber at $t=\frac{27}{256}$. Hence the moduli of the zeroes of $x^b(x^4+x^2+t)$ for $b\geq 0$ and of $x^4+x^3z+tz^4$ depend on $t$.
Consider now the final case $d'=2$. Then $f=x^2+x+t$, and therefore automatically two of the three possibilities for $g$, namely $z^c(x^2+xz+tz^2)$ and $x^b(x^2+xz+tz^2)$ have constant moduli since they define 3 points in ${\mathbf{P}}^1$.
Now $y^{b+2}=z^b(x^2+xz+tz^2)$ and $y^{b+2}=x^b(x^2+xz+tz^2)$ are birationally equivalent up to a Delsarte base change, e.g., take $((x:y:z),t)\mapsto ((z:\frac{y}{t}:\frac{x}{t^{b+2}}),t^{b+2})$. Hence these two cases yield only one case up to isomorphism. We may assume that the affine equation equals $y^b+x^2+x+t^n$.
If $b=c=0$ holds, then the generic fiber is a cyclic cover of ${\mathbf{P}}^1$ ramified at two points, and in particular has genus 0. Hence we can exclude this case.
Finally, $x^bz^c(x^2+xz+tz^2)$ does not have constant moduli since the $j$-invariant of $y^2=x^3+x^2+tx$ is nonconstant.
\end{proof}
\begin{remark} In the case of $y^a+x^2+x+t$ we may complete the square. This yields a surface that is isomorphic to $y^a+x^2+1+t$; in particular the fibration is birationally equivalent to a fibration of the first kind. However, they are not Delsarte birational.
In \cite[Section 3.5.1]{HeijnePhD} it is shown that $y^3+x^3+x^2+t$ is birational to $y^2+x^3+t^3+1$; however, the given birational map is not a Delsarte birational map.
Hence both exceptional cases are fibrations that are birational to a fibration of the first type.
\end{remark}
From the previous discussion it follows that almost all minimal isotrivial Delsarte fibrations are of the form $m_1+m_2+(1+t)m_3$.
We will calculate the Picard numbers for one class of such fibrations and consider the behavior of the Picard number under Delsarte base changes, i.e., base changes of the form $t\mapsto t^a$.
\begin{example}\label{exaIso}
Let $p=2g+1$ be a prime number.
Consider the isotrivial fibration $y^2=x^{p}+t^{2ap}+s^{2ap}$ of genus-$g$ curves over ${\mathbf{P}}^1_{(s:t)}$. This equation defines a quasi-smooth surface $S$ of degree $2ap$ in ${\mathbf{P}}(2a,ap,1,1)$. The surface $S$ has one singular point, namely $(1:1:0:0)$. A single blow-up of this point suffices to obtain a smooth surface $\tilde{S}$. The Lefschetz number of $\tilde{S}$ can be computed by using Shioda's algorithm, which we do below. The exceptional divisor of $\tilde{S}\to S$ is a smooth rational curve. In particular, using the Mayer-Vietoris sequence one easily obtains that $h^2(\tilde{S})=h^2(S)+1$ and $\rho(\tilde{S})=\rho(S)+1$.
Since $S$ is quasi-smooth one has a pure weight 2 Hodge structure on $H^2(S)$. To determine the Hodge numbers of this Hodge structure we use a method of Griffiths and Steenbrink. Note first that $\dim H^2(S)_{\prim}=h^2(S)-1=h^2(\tilde{S})-2$.
Let $R$ be the Jacobian ring of $S$, i.e.,
\[ R={\mathbf{C}}[x,y,t,s]/\left(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial s},\frac{\partial f}{\partial t}\right)={\mathbf{C}}[x,y,s,t]/(x^{p-1},y,t^{2ap-1},s^{2ap-1}).\]
This is a graded ring with weights $(2a,ap,1,1)$. Let $d=2ap$ be the degree of $S$ and $w=ap+2a+2$ the sum of the weights.
From Griffiths-Steenbrink \cite{SteQua} it follows that
$H^{2-q,q}(S)_{\prim}$ is isomorphic to
\begin{eqnarray*} R_{qd-w}&=&\spa \{ x^it^js^k\mid 2ai+j+k=qd-w,\ 0\leq i<p-1,\ 0\leq j,k< 2ap-1\}\\ &=& \spa\{ yx^it^js^k \mid ap+2ai+j+k=qd,\ 0< i\leq p-1,\ 0<j,k\leq 2ap-1\}. \end{eqnarray*}
In other words the basis elements of $R_{qd-w}$ correspond one-to-one to vectors
\[\left\{ \left( \frac{1}{2},\frac{i}{p},\frac{j}{2ap},\frac{k}{2ap}\right) \in ({\mathbf{Q}}/{\mathbf{Z}})^4 \left| \begin{array}{c} i,j,k\in {\mathbf{Z}};0<i<p; 0<j,k<2ap; \\ \frac{1}{2}+\frac{i}{p}+\frac{j}{2ap}+\frac{k}{2ap}=q\end{array} \right. \right\}.\]
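Indeed, clearing denominators in the last condition recovers the weighted grading: the monomial $yx^it^js^k$ has weighted degree $ap+2ai+j+k$, and

```latex
\[ \frac{1}{2}+\frac{i}{p}+\frac{j}{2ap}+\frac{k}{2ap}=q
   \quad\Longleftrightarrow\quad
   ap+2ai+j+k = 2apq = qd. \]
```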
In \cite[Section 2.1]{HeijnePhD} a variant of Shioda's algorithm \cite{ShiodaPic} is presented. This algorithm calculates the Lefschetz number of a resolution of singularities of a Delsarte surface in ${\mathbf{P}}^3$. In our case we apply this algorithm to the surface $T\subset {\mathbf{P}}^3$ given by
\[-Y^2Z^{2ap-2}+X^pZ^{2ap-p}+W^{2ap}+Z^{2ap}=0.\]
Since the Lefschetz number is a birational invariant, the Lefschetz numbers of $\tilde{S}$ and $\tilde{T}$ coincide.
Following \cite{HeijnePhD} we need to take the exponent matrix
\[A=
\left(
\begin{array}{cccc}
0&2&0&2ap-2\\
p&0&0&(2a-1)p\\
0&0&2ap&0\\
0&0&0&2ap\\
\end{array}
\right)
\]
and then to determine the three vectors \[{\mathbf{v}}_1:=A^{-1}(1,0,0,-1)^T, {\mathbf{v}}_2:=A^{-1}(0,1,0,-1)^T\mbox{ and }{\mathbf{v}}_3:=A^{-1}(0,0,1,-1)^T.\]
In our case this yields the vectors \[{\mathbf{v}}_1=\left(0,\frac{1}{p},0,\frac{-1}{p}\right) , {\mathbf{v}}_2=\left(\frac{1}{2},0,0,\frac{-1}{2}\right)\mbox { and }{\mathbf{v}}_3=\left(0,0,\frac{1}{2ap},\frac{-1}{2ap}\right).\]
Consider now the set $L:=\{i{\mathbf{v}}_1+k{\mathbf{v}}_2+j{\mathbf{v}}_3\mid i,j,k\in{\mathbf{Z}}\}\subset ({\mathbf{Q}}/{\mathbf{Z}})^4$. These are precisely the vectors of the form
\[\left\{\left( \frac{k}{2},\frac{i}{p},\frac{j}{2ap},\frac{-apk-2ai-j}{2ap} \right)\in ({\mathbf{Q}}/{\mathbf{Z}})^4\left| i,j,k\in {\mathbf{Z}}\right.\right\} \]
Let $L_0\subset L$ be the set of vectors ${\mathbf{v}}\in L$ such that none of the entries of ${\mathbf{v}}$ equals $0$ modulo ${\mathbf{Z}}$, i.e.,
\[ \left\{\left( \frac{1}{2},\frac{i}{p},\frac{j}{2ap},\frac{-ap-2ai-j}{2ap} \right)\in ({\mathbf{Q}}/{\mathbf{Z}})^4\left| \begin{array}{l} i,j\in {\mathbf{Z}}, 0<i<p,0<j<2ap, \\ j \not \equiv -ap-2ai \bmod 2ap \end{array}\right.\right\} \]
Note that $\#L_0$ is precisely $\dim H^2(S)_{\prim}$.
For an element $\alpha\in {\mathbf{Q}}/{\mathbf{Z}}$ denote with $\fr{\alpha}$ the fractional part, i.e., the unique element $\beta\in {\mathbf{Q}}\cap [0,1)$ such that $\alpha-\beta\equiv 0 \bmod {\mathbf{Z}}$ and with $\ord_{+}(\alpha)$ the smallest integer $k>0$ such that $k\alpha\in {\mathbf{Z}}$.
Define the subset $\Lambda\subset L_0$ consisting of the elements $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)\in L_0$ such that there is a $t\in {\mathbf{Z}}$ for which $\ord_+(\alpha_kt)=\ord_+(\alpha_k)$ holds for $k=1,2,3,4$ and $\fr{t\alpha_1}+\fr{t\alpha_2}+\fr{t\alpha_3}+\fr{t\alpha_4}\neq 2$. The condition $\ord_+(\alpha_kt)=\ord_+(\alpha_k)$ for $k=1,2,3,4$ is equivalent to $t$ being invertible modulo $2a'p$, where $a'=a/\gcd(a,j)$.
Then the Lefschetz number $\lambda=h^2(\tilde{T})-\rho(\tilde{T})$ equals $\#\Lambda$.
Since $\lambda(\tilde{S})=\lambda(\tilde{T})$ and $h^2(\tilde{S})=2+\# L_0$ it follows that $\rho(\tilde{S})$ equals
\[2+\# \left\{ (\alpha_1,\alpha_2,\alpha_3,\alpha_4)\in L_0 \mid\begin{array}{l} \frd{t\alpha_1}+\frd{t\alpha_2}+\frd{t\alpha_3}+\frd{t\alpha_4}=2 \mbox{ for all } t\in {\mathbf{Z}} \\\mbox{ such that } \ord_{+}(t\alpha_k)=\ord_{+}(\alpha_k),\ k=1,2,3,4\end{array}\right \}.\]
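Explicitly, the count behind this formula is

```latex
\[ \rho(\tilde{S}) = h^2(\tilde{S})-\lambda(\tilde{S})
   = \left(2+\#L_0\right)-\#\Lambda
   = 2+\#\left(L_0\setminus\Lambda\right), \]
```

and by the definition of $\Lambda$ the set $L_0\setminus\Lambda$ is exactly the set displayed above.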
We now determine this set.
Consider a vector ${\mathbf{v}}$ from $L_0$, i.e., a vector
\[ \left(\frac{1}{2},\frac{i}{p},\frac{j}{2ap},\frac{ap-2ai-j}{2ap}\right) \]
with $i,j\in {\mathbf{Z}}$, $i\nequiv 0 \bmod p$, $j\nequiv 0 \bmod 2ap$, $ap-2ai-j \nequiv 0 \bmod 2ap$.
Take $t\in \{1,\dots, 2a'p-1\}$ such that $\gcd(t,2a'p)= 1$ and $t\equiv i^{-1} \bmod p$. Then ${\mathbf{v}}\in \Lambda$ if and only if $t{\mathbf{v}} \in \Lambda$. Hence to determine whether a vector is in $\Lambda$ it suffices to assume $i\equiv 1 \bmod p$.
Suppose now that $p>7$.
In Lemma~\ref{lemExcl} we show that ${\mathbf{v}}\not \in \Lambda$ if and only if the fractional part $\fr{\frac{j}{2ap}}$ is in the set $ \left\{ \frac{p-1}{2p}, \frac{1}{2}, \frac{p+2}{2p},\frac{2p-4}{2p}, \frac{2p-2}{2p}, \frac{2p-1}{2p}\right\}$. Each of the six values for $j$ yields $(p-1)$ elements in $L_0\setminus \Lambda$, hence $\rho(\tilde{S})=2+6(p-1)$.
One can easily find several divisors on $\tilde{S}$. We remarked in the introduction that the pullback of the hyperplane class and the exceptional divisor yield two independent classes in $NS(\tilde{S})$. We now give $2(p-1)$ further independent classes:
Let $\zeta$ be a $p$-th root of unity. Let $C_{1,i}$ be the curve $x=t^{2a}\zeta^i$, $y=s^{ap}$, and $C_{2,i}$ the curve $x=s^{2a}\zeta^i$, $y=t^{ap}$. Then $\sum_i C_{1,i}$ and $\sum_i C_{2,i}$ equal the hyperplane class in $\Pic(S)$. However each $C_{i,j}$ is nonzero in $\Pic(S)$: Let $\sigma$ be the automorphism of $S$ sending $t$ to $t$ times a $2ap$-th root of unity and leaving the other coordinates invariant. Then the characteristic polynomial of $\sigma$ on the image of $C_{1,i}$, $i=1,\dots,p-1$, is $(\lambda^p-1)/(\lambda-1)$. In particular the image of $\spa \{[C_{1,i}]\}$ is either $0$- or $(p-1)$-dimensional. (One can exclude the former possibility by checking intersection numbers.)
Similarly, using the automorphism sending $s$ to $s$ times a $2ap$-th root of unity, it follows that $\spa\{[C_{2,i}]\}$ has dimension $p-1$ and that $\spa\{ [C_{i,j}]\}$ is $2(p-1)$-dimensional.
\end{example}
\begin{lemma}\label{lemExcl} Suppose $p>7$.
Let
\[{\mathbf{v}}=\left( \frac{1}{2},\frac{1}{p},\frac{j}{2ap},\frac{-2a-ap-j}{2ap} \right)\in ({\mathbf{Q}}/{\mathbf{Z}})^4\]
such that $j\nequiv 0 \bmod 2ap, 2a+j+ap\nequiv 0 \bmod 2ap$.
Then ${\mathbf{v}}\not \in\Lambda$ if and only if
\[ \frac{j}{2ap} \in\left\{\frac{p-1}{2p}, \frac{1}{2},\frac{p+2}{2p}, \frac{2p-4}{2p},\frac{2p-2}{2p},\frac{2p-1}{2p}\right\}.\]
\end{lemma}
\begin{proof}
Without loss of generality we may assume that $\gcd(a,j)=1$.
We start by proving that if a prime $\ell\geq 5$ divides $a$ then ${\mathbf{v}} \in \Lambda$. For this it suffices to give a $t$, invertible modulo $2ap$, such that
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2ap}}+\frd{\frac{(-2a-ap-j)t}{2ap}}=1.\]
Since the left hand side is an integer for any choice of $t$, and each summand is smaller than one, it suffices to prove that
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2ap}} \leq 1.\]
Consider the value
\[t=1+ck\frac{2ap}{\ell},\]
with $c\equiv j^{-1}\bmod \ell$ and $k\in {\mathbf{Z}}$ such that $k\not\equiv (c\frac{2ap}{\ell})^{-1} \bmod \ell$ and $k$ in the interval
\[\left(-\frac{\ell j}{2ap},-\frac{\ell j}{2ap}+\frac{\ell(p-2)}{2p}\right).\]
Note that we have to assume $p>7$ and $\ell\geq 5$ to ensure the existence of such a $k$.
Then $\fr{\frac{t}{2}}=\frac{1}{2}$ and
\[ \frd{\frac{t}{p}} = \frd{\frac{1}{p}+ck \frac{2a}{\ell}} =\frac{1}{p}.\]
Moreover, we have that
\[\frd{\frac{tj}{2ap}}= \frd{\frac{(1+ck\frac{2ap}{\ell})j}{2ap}}=\frd{\frac{j}{2ap}+\frac{k}{\ell}}\leq \frac{(p-2)}{2p}.\]
From this it follows that
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2ap}}\leq 1\]
holds, which finishes this case.
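The last estimate is sharp; the three bounds add up to exactly

```latex
\[ \frac{1}{2}+\frac{1}{p}+\frac{p-2}{2p}
   = \frac{p+2+(p-2)}{2p} = 1. \]
```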
Suppose now that the only primes dividing $a$ are $2$ or $3$. If $p=11,13,17$ and $a=3$ then one can easily find by hand a $t$-value such that
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2ap}}+\frd{\frac{(-2a-ap-j)t}{2ap}}=1\]
holds. For all other combinations $(a,j,p)$ with $a>1$ we give a value for $t$ in Table~\ref{TabTVal} such that the above formula holds.
\begin{table}[hbtp]
\begin{tabular}{|c|c|c|c|}
\hline
$a$ & $\frac{j}{2ap} \in I $& & $t$ \\
\hline
$4\mid a$& $(0,\frac{p-2}{2p})$ &&$1$\\
$p>3$ & $ (\frac{1}{2},\frac{p-1}{p})$& & $1+ap$\\
& $ (0,\frac{p-4}{4p})\cup(\frac{3}{4},1)$& $j\equiv 1 \bmod 4$ & $1+\frac{ap}{2}$\\
& $ (\frac{1}{4},\frac{3p-4}{4p})$ & $j\equiv 1 \bmod 4$ & $1+\frac{3ap}{2}$\\
& $(0,\frac{p-4}{4p})\cup(\frac{3}{4},1)$ & $j\equiv 3 \bmod 4$ & $1+\frac{3ap}{2}$\\
& $(\frac{1}{4},\frac{3p-4}{4p})$ & $j\equiv 3 \bmod 4$ & $1+\frac{ap}{2}$\\
\hline
$2\mid a$, $4\nmid a'$ & $(0,\frac{p-2}{2p})$ &&$1$\\
$p>7$ & $ (\frac{1}{2},\frac{p-1}{p})$ && $1+ap$\\
& $(0,\frac18-\frac1p)\cup(\frac{3}{8},\frac{5}{8}-\frac{1}{p})\cup (\frac78,1)$ & $j\equiv 1 \bmod 4$ & $2+\frac{ap}{2}$\\
& $ (\frac{3}{8},\frac{5}{8}-\frac{1}{p})$&$j\equiv 3 \bmod 4$ & $2+\frac{3ap}{2}$\\%assumes 1/p<1/8; this fails for p=7.
\hline
$9\mid a$& $ (0,\frac{p-2}{2p})$ &&$1$\\
$p>5$ & $(\frac{1}{3},\frac{5}{6}-\frac{1}{p})$ & $j\equiv 2 \bmod 3$ & $1+\frac{2ap}{3}$\\
& $(0, \frac{1}{3}-\frac{1}{p})\cup (\frac{2}{3},1)$ & $j\equiv 2 \bmod 3$ & $1+\frac{4ap}{3}$\\
& $(\frac{1}{3},\frac{5}{6}-\frac{1}{p})$ & $j\equiv 1 \bmod 3$ & $1+\frac{4ap}{3}$\\
& $(0, \frac{1}{3}-\frac{1}{p})\cup (\frac{2}{3},1)$ & $j\equiv 1 \bmod 3$ &$1+\frac{2ap}{3}$\\
\hline
$a=3$ & $ (0,\frac{p-2}{2p})$ &&$1$\\
$p\equiv 1\bmod 3$ & $ (\frac{1}{3},\frac{5}{6}-\frac{1}{p})$ &$j\equiv 1 \bmod 3$ & $1+4p$ \\
$p>18$ & $ (\frac{8}{9},\frac{19}{18}-\frac{1}{p})$& $j\equiv 1 \bmod 3$ &$3+2p$ \\
& $ (\frac{7}{9},\frac{17}{18}-\frac{1}{p})$ &$j\equiv 1 \bmod 3$ &$3+4p$ \\
& $(\frac{2}{3},1)$ & $j\equiv 2 \bmod 3$ & $1+4p$ \\
& $ (\frac{4}{9},\frac{11}{18}-\frac{1}{p})$ & $j\equiv 2 \bmod 3$ &$3+2p$ \\
& $ (\frac{5}{9},\frac{13}{18}-\frac{1}{p})$ & $j\equiv 2 \bmod 3$ &$3+4p$ \\
\hline
$a=3$ & $ (0,\frac{p-2}{2p})$ &&$1$\\
$p\equiv 2\bmod 3$ & $(\frac{2}{3},1)$ & $j\equiv 1 \bmod 3$ & $1+2p$ \\
$p>18$ & $(\frac{5}{9},\frac{13}{18}-\frac{1}{p})$ & $j\equiv 1 \bmod 3$ &$3+2p$ \\
& $ (\frac{4}{9},\frac{11}{18}-\frac{1}{p})$ & $j\equiv 1 \bmod 3$ &$3+4p$ \\
& $ (\frac{1}{3},\frac{5}{6}-\frac{1}{p})$ & $j\equiv 2 \bmod 3$ &$1+2p$ \\
& $ (\frac{7}{9},\frac{17}{18}-\frac{1}{p})$ & $j\equiv 2 \bmod 3$ &$3+2p$ \\
& $ (\frac{8}{9},\frac{19}{18}-\frac{1}{p})$ & $j\equiv 2 \bmod 3$ &$3+4p$ \\
\hline
\end{tabular}
\caption{$t$-values for the case $a=2^{v_2}3^{v_3}$, $a\neq 1$}\label{TabTVal}
\end{table}
The only case left to consider is $a=1$. If $p\leq 30$ then one can easily find an appropriate $t$-value by hand. Hence we
may assume that $p>30$.
If we take $t=1$ then we see that ${\mathbf{v}}\in\Lambda$ whenever
\[\frac{j}{2ap}=\frac{j}{2p}\in \left(0,\frac{1}{2}-\frac{1}{p}\right).\]
We will consider what happens if $\frac{j}{2p}>\frac{p-2}{2p}$.
Suppose $t<p$ is an odd integer, and $k$ is an integer such that $k \leq \frac{tj}{2p}< k+1$. Then we have
\[ \frd{\frac{t}{2}}+\frd{\frac{t}{p }}+\frd{\frac{tj}{2p}} = \frac{1}{2}+\frac{t}{p}+\frac{tj}{2p}-k\]
The right hand side is at most $1$ if
\[ \frac{j}{2p} \leq \frac{1+2k}{2t}-\frac{1}{p}\]
Hence if
\[ \frac{j}{2p} \in \left( \frac{k}{t},\frac{1+2k}{2t}-\frac{1}{p} \right)\]
then ${\mathbf{v}}\in \Lambda$.
If we take $k=t-1$ then we get the interval
\[ I_t:=\left( 1-\frac{1}{t},1-\frac{1}{2t}-\frac{1}{p}\right)\]
and if we take $k=(t+1)/2$ then we get
\[ I'_t:=\left(\frac{1}{2}+\frac{1}{2t},\frac{1}{2}+\frac{1}{t}-\frac{1}{p}\right).\]
Note that $I_3=I'_3$.
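This equality is immediate from the definitions of the two intervals:

```latex
\[ I_3 = \left(1-\frac{1}{3},\, 1-\frac{1}{6}-\frac{1}{p}\right)
       = \left(\frac{2}{3},\, \frac{5}{6}-\frac{1}{p}\right)
       = \left(\frac{1}{2}+\frac{1}{6},\, \frac{1}{2}+\frac{1}{3}-\frac{1}{p}\right)
       = I'_3. \]
```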
We claim that if $p>30$ and $5\leq t\leq \frac{p-1}{2}-3$ then $I'_t\cap I'_{t-2}\neq \emptyset$ and $I_t\cap I_{t-2}\neq \emptyset$.
For this it suffices to check that
\[ \frac{1}{2}+\frac{1}{2(t-2)} < \frac{1}{2}+\frac{1}{t}-\frac{1}{p} \mbox{ and }1-\frac{1}{2(t-2)}-\frac{1}{p} >1-\frac{1}{t} \]
Both conditions are equivalent to
\begin{equation}\label{eqnBoundPnt} 2t^2-(p+4)t+4p<0\end{equation}
The smallest value to check is $t=5$; then the above formula yields $p>30$, which is indeed the case. For fixed $p$ the above bound is equivalent to $t\in (\frac{1}{4}p+1-\frac{1}{4} \sqrt{p^2-24p+16}, \frac{1}{4}p+1+\frac{1}{4} \sqrt{p^2-24p+16})$.
The previous argument already shows that the left boundary of this interval is smaller than $5$. Substituting $t=\frac{p-1}{2}-3$ in (\ref{eqnBoundPnt}) yields that for $p>77/3$ the boundary point on the right is bigger than $\frac{p-1}{2}-3$. In particular, if $p>30$ and $t$ is odd, then $I'_t\cap I'_{t-2}\neq \emptyset$ and $I_t\cap I_{t-2}\neq \emptyset$.
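The substitution mentioned here is the computation (with $t=\frac{p-1}{2}-3=\frac{p-7}{2}$)

```latex
\[ 2\left(\frac{p-7}{2}\right)^2-(p+4)\,\frac{p-7}{2}+4p
   = \frac{(p-7)\bigl((p-7)-(p+4)\bigr)+8p}{2}
   = \frac{77-3p}{2}, \]
```

which is negative precisely when $p>77/3$.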
Take now the union of $I'_t$ and $I_t$ for all odd $t$ between $3$ and $\frac{p-1}{2}-3$. This yields an interval $I=(\alpha,\beta)$ such that for all $\frac{j}{2p}\in I$ we have that ${\mathbf{v}} \in \Lambda$.
The maximal $t$-value is either $\frac{p-1}{2}-3$ or $\frac{p-1}{2}-4$ (depending on $p\bmod 4$). Hence we know only that the maximal $t$ is at least $\frac{p-1}{2}-4$. From this it follows that
\[I\supset \left( \frac{1}{2}+\frac{1}{p-9} , 1-\frac{1}{p}-\frac{1}{p-9}\right).\]
Note that $p-9>\frac{2}{3}p$ and hence $\frac{1}{p}+\frac{1}{p-9}\leq \frac{5}{2p}$. Hence the only possibilities for $\frac{j}{2p} \not \in I$ and $p-2<j <2p$ are
\[ \left\{\frac{p-1}{2p} ,\frac{p}{2p},\frac{p+1}{2p},\frac{p+2}{2p}, \frac{2p-4}{2p}, \frac{2p-3}{2p},\frac{2p-2}{2p},\frac{2p-1}{2p}\right\}.\]
If $\frac{j}{2p}\in \{\frac{p+1}{2p},\frac{2p-3}{2p}\}$ then we have that ${\mathbf{v}}$ is in $\Lambda$. This can be verified by taking $t=p-2$. Hence we have shown that for all but six values for $\frac{j}{2p}$ the corresponding vector is in $\Lambda$.
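For instance, for $\frac{j}{2p}=\frac{p+1}{2p}$ and $t=p-2$ (which is odd and invertible modulo $2p$) the four fractional parts are

```latex
\[ \frd{\frac{p-2}{2}}=\frac{1}{2},\quad
   \frd{\frac{p-2}{p}}=1-\frac{2}{p},\quad
   \frd{\frac{(p-2)(p+1)}{2p}}=1-\frac{1}{p},\quad
   \frd{\frac{-3(p-2)}{2p}}=\frac{1}{2}+\frac{3}{p}, \]
```

and their sum equals $3\neq 2$, so ${\mathbf{v}}\in\Lambda$; the case $\frac{j}{2p}=\frac{2p-3}{2p}$ is analogous.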
It remains to show that for the remaining values of $\frac{j}{2p}$ we have that ${\mathbf{v}} \not\in \Lambda$.
If $\frac{j}{2p}\in \{\frac{1}{2},\frac{2p-2}{2p}\}$ then two coordinates $\alpha,\beta$ of ${\mathbf{v}}$ equal $\frac{1}{2}$. Hence for any admissible $t$ we have
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2ap}}+\frd{\frac{(-2a-ap-j)t}{2ap}}> \frac{1}{2}+\frac{1}{2} =1\]
Since the left hand side is an integer, it is at least 2.
In the other four cases we have two entries $\alpha,\beta$ such that $\alpha=\beta+\frac{1}{2}$. Since $t$ is odd we then have $|\fr{t\alpha}-\fr{t\beta}|=\frac{1}{2}$ and therefore
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2p}}+\frd{\frac{(-2-p-j)t}{2p}}>\frd{\frac{1}{2}} +\frd{t\alpha}+\frd{t\beta}>1.\]
Summarizing, we have for all $t$ that are invertible modulo $2p$ that
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2p}}+\frd{\frac{(-2-p-j)t}{2p}}\geq 2\]
holds. Using the symmetry of the coordinates it follows that for all $t$ that are invertible modulo $2p$ we have
\[\frd{\frac{t}{2}}+\frd{\frac{t}{p}}+\frd{\frac{tj}{2p}}+\frd{\frac{(-2-p-j)t}{2p}}\leq 2\]
hence ${\mathbf{v}} \not \in \Lambda$, which finishes the proof.
\end{proof}
\begin{remark}
If we had taken $p$ to be composite, the result would be different.
This would restrict the number of possible $t$'s that can be used.
As a consequence the Picard number will probably be slightly higher.
The cases $p=7$, $p=5$ and $p=3$ can also be computed.
If $p=7$ and $3\mid a$ we get $\rho(\tilde{S})=2+14(p-1)$.
For $p=5$ and $6\mid a$ we get $\rho(\tilde{S})=2+18(p-1)$.
For $p=3$ and $60\mid a$ we get $\rho(\tilde{S})=2+30(p-1)$.
So also for small $p$ the result is higher.
\end{remark}
\bibliographystyle{plain}
\section{Introduction}
Since the discovery of topological insulators and topological
superconductors,
much effort has been devoted
to exploring new topological phases of matter.\cite{Hasan2010, Qi2011,
Tanaka-Sato-Nagaosa2012, AndoReview, Sato-Fujimoto2016,SatoAndo2016}
Whereas only fully gapped systems had been regarded as topological phases
in the early stage of the study, recent developments have clarified
that bulk gapless materials like Weyl semimetals also exhibit
non-trivial topological phenomena.
The existence of surface Fermi arcs and related anomalous transports are
typical topological phenomena in the latter case.
In the exploration of such topological materials, symmetry plays an
important role:
In the absence of symmetry, a fully gapped non-interacting system
may realize only an integer quantum Hall state in up to three
dimensions.\cite{Avron1983}
Indeed, for realization of topological insulators and topological
superconductors, time-reversal and charge conjugation
(i.e. particle-hole symmetry (PHS)) are essential.\cite{Kane-Mele2,
QiHughesRaghuZhang2009, Schnyder2008}
Furthermore,
systems often have other symmetries specific to their structures.
In particular, materials in condensed matter physics support crystalline
symmetries of space groups or magnetic space groups.
Such crystalline symmetries also stabilize distinct topological structures in
gapful materials as well as gapless ones.~\cite{Teo2008, Mong2010, Fu2010,
Hsieh2012, Fang2012, Mizushima2012, Slanger2013,
Ueno2013,
Chiu2013, Zhang-Kane-Mele2013, Morimoto2013, Benalcazar2013, Fulga2014,
Alexandradinata2014,
ShiozakiSato2014, ShiozakiSatoGomi2015, ShiozakiSatoGomi2016, FangFu2015,
Varjas-deJuan-Lu2015,
Watanabe2016,Young2012, Wang2012, Wang2013,Yang2014, Kobayashi2014,
ChiuSchnyder2014, Watanabe2016, Kobayashi2016, Agterberg2016,
Micklitz2016,Nomoto2016,Mathai-Thiang2017Global}
In this paper, we formulate such topological crystalline materials on
the basis of the $K$-theory.
The $K$-theory approach has successfully revealed
all possible topological phases protected by general symmetries of
time-reversal and charge conjugation.\cite{Horava2005, Kitaev2009, Ryu2010}
Depending on the presence or absence of the general symmetries, systems are
classified into Altland-Zirnbauer (AZ) ten fold symmetry
classes.~\cite{Schnyder2008, Altland1997}
All
possible topological numbers in the AZ classes are identified in any
dimension.\cite{Kitaev2009, Ryu2010, Teo2010, Stone2011, Abramovici2012}
One of the main purposes of the present paper is to generalize the $K$-theory
approach in the presence of crystalline symmetries.
Partial generalization of the $K$-theory approach has been attempted
previously:
Motivated by the discovery of the topological mirror insulator
SnTe,~\cite{Hsieh2012, Tanaka2012a, Dziawa2012, Xu2012}
mirror-reflection symmetric insulators and superconductors have
been classified topologically.\cite{Chiu2013,Morimoto2013}
Furthermore, a complete topological classification of crystalline
insulators/superconductors with
order-two space groups has been accomplished by means of the
$K$-theory.\cite{ShiozakiSato2014, ShiozakiSatoGomi2015, ShiozakiSatoGomi2016}
The order-two space groups include reflection, two-fold rotation, inversion
and their magnetic versions, and
many proposed topological crystalline insulators and superconductors
have been understood systematically in the latter classification.
The order-two space group classification also has revealed that
nonsymmorphic glide symmetry provides
novel $\mathbb{Z}_2$~\cite{FangFu2015, ShiozakiSatoGomi2015} and $\mathbb{Z}_4$ phases~\cite{ShiozakiSatoGomi2016} with
M\"{o}bius twisted surface states.
Material realization of such a glide protected topological phase
has been proposed theoretically~\cite{wang2016hourglass} and
confirmed experimentally.~\cite{ma2016experimental}
There is also a different proposal for material realization of
the M\"{o}bius twisted surface states in heavy fermion systems.~\cite{chang2016m}
Our present formulation is applicable to any bulk gapful topological crystalline
insulators/superconductors (TCIs/TCSCs) and their gapless boundary and
defect states, as well as bulk gapless topological crystalline materials.
On the basis of the twisted equivariant $K$-theory,~\cite{Freed2013,
Thiang2016} we illustrate how space groups and
magnetic space groups are
incorporated into topological classification in a unified manner:
Following the idea by Freed and Moore~\cite{Freed2013},
the space group action on Hamiltonians is introduced as a ``twist''
$(\tau,c)$ of that on the base space, and anti-unitary symmetries are
specified by a $\mathbb{Z}_2$-valued function $\phi$ for group elements.
Then,
the $K$-group ${}^{\phi} K_{\cal G}^{(\tau, c)-n}(X)$ on the base space
$X$ is introduced in terms of
Karoubi's formulation of the $K$-theory.\cite{Karoubi2008}
The $K$-group ${}^{\phi} K_{\cal G}^{(\tau, c)-n}(T^d)$ for the Brillouin
zone (BZ) torus $T^d$ provides
topological classification of $d$-dimensional crystalline insulators and
superconductors subject to symmetry ${\cal G}$.
Bearing in mind applications in condensed matter physics, we clarify
connections between the
$K$-theory and the traditional band theory. We also explain
practical methods
to compute $K$-groups.
In particular, we show the following:
\begin{itemize}
\item The crystal data of
Wyckoff positions
are
naturally taken into account in our formulation.
The $K$-group for space group ${\cal G}$ has elements corresponding to
Wyckoff positions for ${\cal G}$.
\item Crystal structures are not the only factor determining the
properties of materials: atomic orbital characters of band electrons also
strongly affect their properties.
For instance, if we change the physical degrees of freedom from
$s$-orbital electrons to $p$-orbital ones, the
topological nature of the material may change.
This remarkable aspect of crystalline materials is incorporated in our
formulation as
the $R(P)$-module structure of the $K$-group, where
$R(P)$ is the representation ring of a point group $P$.
An element $V \in R(P)$ acts on the $K$-group
as the tensor product for the symmetry operator, which induces the
change of the representations of
physical degrees of freedom.
\item TCIs and TCSCs support stable gapless boundary excitations
associated with bulk topological numbers if the boundary is
compatible with symmetry responsible for the topological numbers.
This so-called bulk-boundary correspondence is explained by using
dimension-raising maps, whose existence is ensured by the Gysin
exact sequence in the $K$-theory.
\item Defect gapless modes in TCIs and TCSCs are understood as boundary
gapless states in lower dimensional TCIs and TCSCs.
\item Bulk gapless topological crystalline materials are formulated in
terms of the $K$-theory. This formulation provides a novel systematic
method to explore gapless topological crystalline materials.
\item
We present the topological table for topological crystalline surface
states protected by wallpaper groups, in the absence of
time-reversal symmetry (TRS).
The additive structures of the relevant
$K$-groups were previously calculated in the
literature for the spinless case with and without chiral
symmetry~\cite{Yang1997, LuckStamm2000}
and for the spinful case without chiral symmetry.\cite{DongLiu2015}
We complete the topological classification
by determining their $R(P)$-module structures and considering the
spinful case with chiral symmetry.
\item The Mayer-Vietoris exact sequence and the Gysin exact sequence play
central roles in computing $K$-groups.
We illustrate the calculation of $K$-groups in various examples.
\end{itemize}
The organization of the paper is as follows.
In Sec.~\ref{Sec:SpaceGroup},
we explain how space group symmetries are incorporated in the
Hamiltonian formalism.
Nonsymmorphic space groups manifest themselves as unavoidable $U(1)$ phase
factors in the projective representations of point
groups.
Section \ref{Sec:K-theory} is devoted to introducing the twisted
equivariant $K$-theory.
Two alternative but equivalent constructions of $K$-groups are explained.
It is shown that
$K$-groups are not just additive groups,
but have module structures induced by the tensor product of
representations of point groups.
The treatment of anti-unitary symmetries in the twisted equivariant
$K$-theory is explained in Sec.~\ref{With TRS and/or PHS}.
Not only TRS and PHS, but also magnetic
space group symmetries are taken into account in a unified manner.
Using chiral symmetries, we also introduce the integer grading of the
$K$-groups.
In Sec.~\ref{sec:Topological crystalline insulators and superconductors},
we formulate TCIs and TCSCs on the basis of the twisted equivariant
$K$-theory.
Characteristic physical properties of TCIs and TCSCs are discussed here.
In Sec.~\ref{sec:Topological nodal semimetals and superconductors},
we propose a systematic method to classify bulk gapless topological
crystalline materials.
Weyl and Dirac semimetals and nodal superconductors are treated in a
unified manner.
As an application of the twisted equivariant $K$-theory,
in Sec.~\ref{Wallpaper_summary}, we summarize the topological classification of
crystalline insulators with wallpaper groups in the absence of TRS.
We illustrate computations of $K$-groups
in various examples in Sec.~\ref{sec:Example of K-theory classification}.
Finally, we conclude the paper in Sec.~\ref{sec:Conclusion}.
We explain some useful mathematical details of the twisted equivariant
$K$-theory in the Appendices.
\section{Hamiltonian and Space group}
\label{Sec:SpaceGroup}
\subsection{Periodic Bloch Hamiltonian}
\label{Sec:Pre}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=\linewidth, trim=0cm 2cm 0cm 0cm]{UnitCell.pdf}
\end{center}
\caption{A crystal structure [a]. The Bravais lattice [b]. The unit cell [c].}
\label{Fig:Crystal}
\end{figure}
In this paper, we consider one-particle Hamiltonians $\hat H$ with lattice translational symmetry.
Take a proper localized basis, say L\"{o}wdin orbitals $|{\bm R}, \alpha, i\rangle$,
where ${\bm R}$ is a vector of the Bravais lattice $\Pi \cong \mathbb{Z}^d$ for
a given crystal structure in $d$-space dimensions, $\alpha$ is a
label for the $\alpha$-th atom in
the unit cell, and $i$ represents internal degrees of freedom such as
orbital and spin (see Fig.\ref{Fig:Crystal}).
Then the system is well described by the tight-binding Hamiltonian
\begin{equation}
H= \sum_{\bm{R} , \bm{R}' \in \Pi} \psi_{\alpha i}^{\dag}(\bm{R})
H_{\alpha i,\beta j}(\bm{R}-\bm{R}')
\psi_{\beta j}(\bm{R}'),
\end{equation}
with
\begin{eqnarray}
H_{\alpha i,\beta j}(\bm{R}-\bm{R}')=\langle {\bm R}, \alpha, i|\hat
H|{\bm R}', \beta, j\rangle.
\end{eqnarray}
Because the topological phase of the one-particle Hamiltonian is
examined in the momentum space,
we perform the Fourier transformation of $|{\bm R}, \alpha, i\rangle$
by taking a Bloch basis.
The standard Bloch basis is given by
\begin{align}
&\ket{\bm{k},\alpha,i}' := \frac{1}{\sqrt{N}}\sum_{\bm{R} \in \Pi} \ket{\bm{R},\alpha,i} e^{i \bm{k} \cdot (\bm{R} + \bm{x}_{\alpha})},
\label{DefFourierTr1}
\end{align}
where $\bm{x}_{\alpha}$ is the localized position of the $\alpha$-th atom
measured from the center of the unit cell specified by $\bm{R}$, and $N$
is the number of unit cells in the crystal.
This basis, however, is somewhat inconvenient for
topological classification:
The basis $\ket{\bm{k},\alpha,i}'$ is not periodic in the
Brillouin zone (BZ) torus $T^d$,
obeying the twisted periodic boundary condition
\begin{eqnarray}
\ket{\bm{k} +\bm{G},\alpha,i}' = \ket{\bm{k},\alpha,i}' e^{i \bm{G} \cdot
\bm{x}_{\alpha}}
\end{eqnarray}
with $\bm{G}$ a reciprocal lattice vector, and neither is the resultant Bloch
Hamiltonian,
\begin{eqnarray}
H'_{\alpha i, \beta j}({\bm k})=\langle {\bm k}, \alpha, i|'\hat{H}|{\bm
k},\beta, j\rangle'.
\end{eqnarray}
The non-periodicity of the Hamiltonian gives
an undesirable complication in topological classification.
To avoid this problem, we take here an alternative Bloch basis
which makes the Hamiltonian $H(\bm{k})$
periodic,
\begin{align}
&\ket{\bm{k},\alpha,i} := \frac{1}{\sqrt{N}}
\sum_{\bm{R} \in \Pi} \ket{\bm{R},\alpha,i} e^{i \bm{k} \cdot \bm{R}}.
\label{DefFourierTr2}
\end{align}
Obviously, the Bloch basis (\ref{DefFourierTr2}) is periodic in the BZ
torus,
\begin{eqnarray}
|{\bm k}+{\bm G}, \alpha, i\rangle=|{\bm k}, \alpha, i\rangle,
\end{eqnarray}
and so is the Bloch Hamiltonian $H_{\alpha i, \beta j}(\bm{k})$,
\begin{equation}
H_{\alpha i, \beta j}({\bm k})=
\langle {\bm k}, \alpha, i|\hat H|{\bm
k}, \beta, j\rangle.
\end{equation}
We call this basis $(\ref{DefFourierTr2})$ the periodic Bloch basis.
Here we note that the periodic Bloch basis (\ref{DefFourierTr2}) loses
the information on the localized position $\bm{x}_{\alpha}$ of the
$\alpha$-th atom in the unit
cell, so it may
complicate the relations between the Berry
connections and observables.
Bearing this remark in mind,
we employ the periodic basis (\ref{DefFourierTr2}) throughout the
present paper.
For simplicity, we often omit the matrix indices $(\alpha, i)$ below,
and we simply denote the Bloch Hamiltonian $H_{\alpha i, \beta
j}({\bm k})$ as $H({\bm k})$.
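The two Bloch bases can be compared in a minimal numerical sketch. The model below is a hypothetical one-dimensional chain (lattice constant $1$) with two atoms per unit cell at $x_0=0$ and $x_1=1/2$ and nearest-neighbor hopping $t$; all names and parameters are illustrative, not taken from the text.

```python
import numpy as np

# Hypothetical 1D two-atom chain: atoms at x_0 = 0 and x_1 = 1/2 in a
# unit cell of length 1, nearest-neighbour hopping t, so G = 2*pi.
t = 1.0
x = np.array([0.0, 0.5])
G = 2 * np.pi

def bloch_periodic(k):
    """Bloch Hamiltonian in the periodic basis, Eq. (DefFourierTr2)."""
    h01 = t * (1 + np.exp(-1j * k))   # intracell + intercell hopping
    return np.array([[0, h01], [np.conj(h01), 0]])

def bloch_conventional(k):
    """Bloch Hamiltonian in the conventional basis, Eq. (DefFourierTr1)."""
    h01 = 2 * t * np.cos(k / 2)       # carries the e^{i k x_alpha} factors
    return np.array([[0, h01], [np.conj(h01), 0]])

k = 0.37  # arbitrary momentum
# Periodic basis: H(k + G) = H(k).
assert np.allclose(bloch_periodic(k + G), bloch_periodic(k))
# Conventional basis: only twisted periodicity,
# H'(k + G) = V H'(k) V^dagger with V = diag(e^{-i G x_alpha}).
V = np.diag(np.exp(-1j * G * x))
assert not np.allclose(bloch_conventional(k + G), bloch_conventional(k))
assert np.allclose(bloch_conventional(k + G),
                   V @ bloch_conventional(k) @ V.conj().T)
```

The conventional basis only satisfies the twisted periodicity with $V=\mathrm{diag}(e^{-iG x_\alpha})$, while the periodic basis removes the twist at the cost of hiding $\bm{x}_\alpha$.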
\subsection{Space group and unavoidable $U(1)$ factor}
\label{sec3:space}
The Bloch Hamiltonian $H({\bm k})$ has
space group symmetry $G$ for a given crystal structure.
An element of $G$ is denoted as $\{p|{\bm a}\}\in G$, under which
${\bm x}$ transforms as ${\bm x}\to p{\bm x}+{\bm a}$.
Here $p\in P$ is an element of the point group $P$.
In this notation, the lattice translation is denoted as $\{1|{\bm t}\}$
with a lattice vector ${\bm t}\in \Pi$. ($\Pi$ is the Bravais lattice.)
The multiplication in $G$ is given as
\begin{eqnarray}
\{p|{\bm a}\}\cdot \{p'|{\bm
a}'\}=\{pp'| p{\bm a}'+{\bm a}\},
\end{eqnarray}
and the inverse is
\begin{eqnarray}
\{p|{\bm
a}\}^{-1}=\{p^{-1}|-p^{-1}{\bm a}\}.
\end{eqnarray}
For each $p\in P$, one can choose a representative $\{p|{\bm a}_p\}\in
G$, so that
any element $\{p|{\bm
a}\}\in G$ can be written as a product of $\{p|{\bm a}_p\}$ and
a lattice translation $\{1|{\bm t}\}$.
Since the lattice translation trivially acts on the Bloch Hamiltonian,
it is enough to consider a set of representatives $\left\{\{p|{\bm
a}_p\}\in G: p\in P\right\}$ in the topological classification of the
Bloch Hamiltonian.
For $\{p|{\bm a}_p\}\in G$,
the Bloch Hamiltonian $H({\bm k})$ obeys
\begin{align}
U_{p}({\bm k}) H({\bm k})U_{p}({\bm k})^{-1}= H(p{\bm k}),
\label{eq:UHU}
\end{align}
with a unitary matrix $U_p({\bm k})$, which is periodic in the BZ,
$U_p({\bm k}+{\bm G})=U_p({\bm k})$.
The multiplication in $G$ implies
\begin{align}
U_p(p'{\bm k})U_{p'}({\bm k})=e^{i\tau_{p,p'}(pp'{\bm k})}U_{pp'}({\bm k}),
\label{eq:UU}
\end{align}
where $U_p({\bm k})$, $U_{p'}({\bm k})$ and $U_{pp'}({\bm k})$ are the
unitary matrices for $\{p|{\bm a}_p\}$, $\{p'|{\bm
a}_{p'}\}$ and $\{pp'|{\bm a}_{pp'}\}$, respectively.
The U(1) factor $e^{i\tau_{p,p'}({\bm k})}$ above arises
because $\{p|{\bm a}_p\}\cdot\{p'|{\bm a}_{p'}\}$ is not equal to
$\{pp'|{\bm a}_{pp'}\}$, in general.
Actually it holds that
\begin{eqnarray}
\{p|{\bm a}_p\}\cdot\{p'|{\bm a}_{p'}\}=\{1|{\bm
\nu}_{p,p'}\}\cdot \{pp'|{\bm a}_{pp'}\}
\end{eqnarray}
with a lattice vector ${\bm \nu}_{p,p'}\equiv p{\bm a}_{p'}+{\bm
a}_p-{\bm a}_{pp'}\in \Pi$.
Due to the Bloch factor $e^{i{\bm k} \cdot {\bm
R}}$ of $|{\bm k},\alpha, i\rangle$ in Eq.(\ref{DefFourierTr2}),
the lattice
translation $\{1|{\bm \nu}_{p,p'}\}$ gives the U(1) factor
\begin{eqnarray}
e^{i\tau_{p,p'}({\bm k})}=e^{-i{\bm k}\cdot{\bm \nu}_{p,p'}}.
\label{eq:twist_lattice}
\end{eqnarray}
Note that if ${\bm a}$ is a lattice vector ${\bm t}$ for every element
of $G$, then the U(1) factor in
Eq.(\ref{eq:twist_lattice}) can
be set to $1$ by choosing ${\bm a}_p=0$ for all $p\in P$.
Such a space group is called symmorphic.
On the other hand, if $G$ contains an element $\{p|{\bm a}\}$ with
a non-lattice vector ${\bm a}$, such as glide or screw, a non-trivial
U(1) factor is unavoidable.
The latter space group is called nonsymmorphic.
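The lattice vector $\bm{\nu}_{p,p'}$ and the resulting $U(1)$ factor can be made concrete in a small numerical sketch. We use a hypothetical two-dimensional glide group, $P=\{e,m\}$ with $m:(x,y)\mapsto(x,-y)$ and glide vector $\bm{a}_m=(1/2,0)$; the code and its names are illustrative.

```python
import numpy as np

# Glide symmetry in 2D (hypothetical example): point group P = {e, m}
# with m : (x, y) -> (x, -y), and glide vector a_m = (1/2, 0).
rot = {'e': np.eye(2), 'm': np.diag([1.0, -1.0])}
a = {'e': np.zeros(2), 'm': np.array([0.5, 0.0])}

def compose(p, q):
    # multiplication table of P = Z_2
    return 'e' if p == q else 'm'

def nu(p, q):
    """Lattice vector nu_{p,q} = p a_q + a_p - a_{pq}."""
    return rot[p] @ a[q] + a[p] - a[compose(p, q)]

# {m|a_m}^2 = {1|(1,0)}: a pure lattice translation.
assert np.allclose(nu('m', 'm'), [1.0, 0.0])
# Since nu is a lattice vector, e^{i tau_{m,m}(k)} = e^{-i k . nu} is
# well defined on the BZ torus, but it cannot be removed: at
# k = (pi, 0) the U(1) factor equals e^{-i pi} = -1.
k = np.array([np.pi, 0.0])
phase = np.exp(-1j * k @ nu('m', 'm'))
assert np.allclose(phase, -1.0)
```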
For spinful fermions, there exists a different source of
the U(1) factor $e^{i\tau_{p,p'}({\bm k})}$ in Eq.(\ref{eq:UU}).
This is because a rotation in spin space is given not by the
original O(3) rotation, but by its projective U(2)
counterpart.
In contrast to the U(1) factor in Eq.(\ref{eq:twist_lattice}), the
resultant U(1) factor is ${\bm k}$-independent.
As illustrated in Fig.\ref{fig:twist}, these non-trivial U(1) factors
in Eq.(\ref{eq:UU}) provide a twist in a vector (or Hilbert) space on which the
Bloch Hamiltonian is defined.
In the following, we denote the twist $\tau$ caused by a nonsymmorphic
space group $G$ as $\tau=\tau_G$, the one caused by the projective
representation of rotations as $\tau=\omega$, and, if both twists
coexist, $\tau=\tau_G+\omega$.
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=0.6\linewidth]{twist.pdf}
\end{center}
\caption{A $U(1)$ factor associated with group action on a vector bundle on which the Hamiltonian is defined.}
\label{fig:twist}
\end{figure}
\subsubsection{More on space group: group cohomology perspective}
A more general treatment of the twist is as follows.
(The reader may skip this subsection on a first reading.)
Mathematically, space groups and their projective representations are
characterized by inequivalent $U(1)$ phases $\{e^{i
\tau_{p,p'}(\bm{k})}\}$, which are
classified by the group cohomology.
The $U(1)$ phases $\{e^{i \tau_{p,p'}(\bm{k})}\}$
can be regarded as an obstruction to the group structure of the group action
on the (trivial) vector bundle on which the Hamiltonian $H(\bm{k})$ is defined.
See Fig.~\ref{fig:twist}.
To apply the group cohomology classification,
we introduce the Abelian group $C(T^d,U(1))$ of the $U(1)$-valued functions on the BZ torus $T^d$.
The Abelian structure of $C(T^d,U(1))$ is given by the usual product of $U(1)$ phases:
$e^{i \alpha_1(\bm{k})} \cdot e^{i \alpha_2(\bm{k})} = e^{i ( \alpha_1(\bm{k}) + \alpha_2(\bm{k}))}$.
The point group $P$ acts on $C(T^d,U(1))$ by $e^{i (p \cdot \alpha)(\bm{k})} = e^{i \alpha(p^{-1} \bm{k})}$,
where we have denoted the point group action on the BZ by $p \bm{k}$ for $p
\in P$.
We also introduce the group cochain complex $C^*(P,C(T^d,U(1)))$.
The $U(1)$ factor in (\ref{eq:UU}) is a two-cochain $\{ e^{i \tau_{p,p'}(\bm{k})} \}_{p,p' \in P} \in C^2(P,C(T^d,U(1)))$.
The associativity $(\hat U_{p_1} \hat U_{p_2}) \hat U_{p_3} = \hat U_{p_1} (\hat U_{p_2} \hat U_{p_3})$
implies the two-cocycle condition
\begin{align}
\delta \tau = 0 \quad \Leftrightarrow \quad
\tau_{p_2,p_3}(p_1^{-1}\bm{k})-\tau_{p_1p_2,p_3}(\bm{k})+\tau_{p_1,p_2p_3}(\bm{k})-\tau_{p_1,p_2}(\bm{k}) \equiv 0 \qquad {\rm mod } \ 2 \pi,
\end{align}
and the redefinition of the unitary matrices,
$U_p(\bm{k}) \mapsto e^{i \theta_p(\bm{k})} U_p(\bm{k})$,
induces the equivalence relation from the two-coboundary
\begin{align}
\tau \sim \tau+\delta \theta \quad \Leftrightarrow \quad
\tau_{p_1,p_2}(\bm{k})\sim \tau_{p_1,p_2}(\bm{k})+\theta_{p_2}(p_1^{-1}\bm{k})-\theta_{p_1p_2}(\bm{k})+\theta_{p_1}(\bm{k}) \qquad {\rm mod }\ 2 \pi.
\end{align}
(See Appendix \ref{Group_cohomology} for the definition of $\delta$ and
the group cohomology.)
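The nonsymmorphic phase $\tau_{p,p'}(\bm{k})=-\bm{k}\cdot\bm{\nu}_{p,p'}$ indeed satisfies the two-cocycle condition, as a short numerical check illustrates. The glide example below ($P=\{e,m\}$ with glide vector $(1/2,0)$) is hypothetical:

```python
import numpy as np
from itertools import product

# Numerically verify delta tau = 0 for tau_{p,p'}(k) = -k . nu_{p,p'}
# in a hypothetical 2D glide group: P = {e, m}, m : (x, y) -> (x, -y),
# glide vector a_m = (1/2, 0).
rot = {'e': np.eye(2), 'm': np.diag([1.0, -1.0])}
a = {'e': np.zeros(2), 'm': np.array([0.5, 0.0])}
mul = lambda p, q: 'e' if p == q else 'm'

def nu(p, q):
    """Lattice vector nu_{p,q} = p a_q + a_p - a_{pq}."""
    return rot[p] @ a[q] + a[p] - a[mul(p, q)]

def tau(p, q, k):
    return -k @ nu(p, q)

rng = np.random.default_rng(0)
k = rng.uniform(-np.pi, np.pi, size=2)
for p1, p2, p3 in product('em', repeat=3):
    # (delta tau)(p1,p2,p3) at momentum k; note p1^{-1} k = rot[p1].T @ k
    dtau = (tau(p2, p3, rot[p1].T @ k) - tau(mul(p1, p2), p3, k)
            + tau(p1, mul(p2, p3), k) - tau(p1, p2, k))
    assert np.isclose(np.exp(1j * dtau), 1.0)  # zero mod 2 pi
```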
Then, we can conclude that
\begin{itemize}
\item For a given Bravais lattice $\Pi$ and point group $P$,
the set of inequivalent $U(1)$ phase factors $\{ e^{i \tau_{p,p'}(\bm{k})} \}$
is given by the group cohomology $H^2(P,C(T^d,U(1)))$.
\end{itemize}
The group cohomology can be divided into two parts~\cite{Gomi2015}
\begin{align}
H^2(P,C(T^d,U(1))) &\cong H^2(P,H^1(T^d,\mathbb{Z})) \oplus H^2(P,U(1)), \\
[\tau] &= [\tau_G] + [\omega].
\end{align}
The latter part $H^2(P,U(1))$ represents
the classification of the projective representations of the
point group $P$.
Moreover, it holds that $H^1(T^d,\mathbb{Z}) \cong {\rm Hom} (T^d,U(1)) \cong \Pi$.
(Notice that the BZ torus $T^d$ is the Pontryagin dual
$\hat \Pi = {\rm Hom}(\Pi, U(1))$ of the Bravais lattice $\Pi$.)
Therefore, the former part coincides with the group cohomology $H^2(P,\Pi)$,
which is known to provide the classification of space groups
for a given Bravais lattice $\Pi$ and a point group $P$.\cite{Hiller1986}
The two-cocycle $\{\bm{\nu}_{p,p'} \in \Pi\}$ introduced in the
previous subsection represents an element of the group
cohomology $H^2(P,\Pi)$.
\subsubsection{Anti space group}
\label{sec:Anti space group}
In addition to ordinary space group operations,
one may consider a space group operation $U_p({\bm k})$
that changes the sign of the Bloch Hamiltonian,
\begin{eqnarray}
U_p({\bm k})H({\bm k})U_p({\bm k})^{-1}=-H(p{\bm k}).
\end{eqnarray}
Such an operation is called an antisymmetry.\footnote{
The antisymmetry is equivalent to the antiunitary PHS
in the many-body Hilbert space.}
The anti space group
symmetry also affects the topological nature of the system.
To treat ordinary symmetries and antisymmetries in a unified manner,
we introduce a function $c(p)=\pm 1$
that specifies the symmetry or antisymmetry relations,
\begin{eqnarray}
U_p({\bm k})H({\bm k})U_p({\bm k})^{-1}=c(p)H(p{\bm k}).
\end{eqnarray}
It is found that $c(p)$ is a homomorphism on $G$,
i.e. $c(pp')=c(p)c(p')$.
\subsection{Chiral symmetry}
For topological classification based on the $K$-theory, so-called chiral
symmetry plays a special role: As we shall show later, one can change the
dimension of the system while keeping the topological structure by
imposing or breaking chiral symmetry.
Chiral symmetry is defined as
\begin{eqnarray}
\{H({\bm k}), \Gamma\}=0, \quad \Gamma^2=1,
\end{eqnarray}
where $\Gamma$ is a unitary operator.
In the presence of space group symmetry,
\begin{eqnarray}
U_p({\bm k})H({\bm k}) U^{-1}_p({\bm k})=c(p)H(p{\bm k}),
\quad
U_p(p'{\bm k})U_{p'}({\bm k})=e^{i\tau_{p,p'}(pp'{\bm k})}U_{pp'}({\bm k}),
\end{eqnarray}
we introduce a compatible chiral symmetry as
\begin{align}
\{H({\bm k}), \Gamma\}=0,
\quad
U_p({\bm k})\Gamma U_p^{-1}(\bm{k})
=c(p)\Gamma, \quad
\Gamma^2=1.
\end{align}
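The spectral consequence of chiral symmetry can be checked in a minimal numerical sketch: any $H(\bm{k})$ anticommuting with a unitary $\Gamma$ has a spectrum symmetric about zero. The $2\times 2$ model below is hypothetical:

```python
import numpy as np

# Minimal sketch: chiral symmetry {H, Gamma} = 0 forces the spectrum of
# H(k) to be symmetric about zero energy.  Hypothetical 2x2 model:
# H(k) = cos(k) sigma_x + sin(k) sigma_y anticommutes with Gamma = sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
gamma = np.diag([1.0, -1.0]).astype(complex)

def H(k):
    return np.cos(k) * sx + np.sin(k) * sy  # any d_x, d_y would do

k = 0.81
assert np.allclose(gamma @ gamma, np.eye(2))           # Gamma^2 = 1
assert np.allclose(H(k) @ gamma + gamma @ H(k), 0)     # {H, Gamma} = 0
ev = np.linalg.eigvalsh(H(k))
assert np.allclose(ev, -ev[::-1])                      # spectrum in +/- pairs
```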
\section{Twisted equivariant $K$-theory}
\label{Sec:K-theory}
\subsection{Occupied states and $K$-group}
\label{Topological classification and the $K$-group}
Suppose that a Bloch Hamiltonian $H({\bm k})$ is gapped on a compact
momentum space $X$. We
consider the vector bundle $E$ that is
spanned by the occupied states on $X$:
In other words, $E$ is spanned by the states $|\phi({\bm k})\rangle$, ${\bm k}\in X$
in the form of
\begin{eqnarray}
|\phi({\bm k})\rangle = \sum_{{\cal E}_n({\bm k})<{\cal E}_{\rm
F}}c_n({\bm k})|u_n({\bm k})\rangle,
\end{eqnarray}
where $|u_n({\bm k})\rangle$ is an eigenstate of $H({\bm k})$,
\begin{eqnarray}
H({\bm k})|u_n({\bm k})\rangle={\cal E}_n({\bm k})|u_n({\bm k})\rangle,
\quad
\langle u_n({\bm k})|u_m({\bm k})\rangle=\delta_{n,m}.
\end{eqnarray}
Here ${\cal E}_{\rm F}$ is the Fermi energy, and
$c_n({\bm k})$ is an arbitrary complex function with the
normalization condition
$\sum_n|c_n({\bm k})|^2=1$.
We use the notation $[E]$ to represent the set of vector
bundles that are deformable to $E$.
Vector bundles $[E]$ and $[F]$ can be added as their direct sum,
$[E]+[F]:=[E\oplus F]$.
Namely $|\phi_i({\bm k})\rangle\in [E_i]$ ($i=1,2$) can be added as
\begin{eqnarray}
\left(
\begin{array}{c}
|\phi_1({\bm k})\rangle\\
|\phi_2({\bm k})\rangle
\end{array}
\right)\in [E_1]+[E_2].
\end{eqnarray}
The zero element $0$ in this summation can be introduced as
the vector bundle of rank zero.
Physically, such a rank-zero vector bundle is obtained when $H({\bm k})$ has
no occupied state satisfying ${\cal E}_n({\bm k})<{\cal E}_{\rm F}$.
To compare vector bundles $[E_1]$ and $[E_2]$,
we consider the pair $([E_1],[E_2])$, where the addition is given by
\begin{eqnarray}
([E_1],[E_2])+([E_1'], [E_2'])=([E_1]+[E_1'], [E_2]+[E_2']).
\label{eq3:addition}
\end{eqnarray}
Since the ``difference'' between $[E_1]$ and $[E_2]$ does not change
even when a common vector bundle $[F]$ is added to both $[E_1]$ and
$[E_2]$, the pair $([E_1], [E_2])$ can be identified with
$([E_1]+[F], [E_2]+[F])$.
This motivates us to introduce the following equivalence
relation $\sim$,
\begin{align}
( [E_1] , [E_2] ) \sim ( [E_1'] , [E_2'] )
&\overset{\mathrm{def}}{\Longleftrightarrow}
\mbox{${}^{\exists} [F], {}^{\exists}[G]$ such that
$([E_1], [E_2]) + ([F],[F]) = ([E_1'], [E_2']) +([G],[G])$}.
\label{eq3:equivalence}
\end{align}
The following properties hold for the equivalence classes.
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item The elements of the form $([E], [E])$ represent the zero for
the addition in Eq.(\ref{eq3:addition}).
\vspace{1ex}
$\because$
From Eq.(\ref{eq3:addition}), we have
$([E_1],[E_2])+([E],[E])=([E_1]+[E],[E_2]+[E])$, which
implies that $([E_1],[E_2])\sim ([E_1]+[E], [E_2]+[E])$. So in
the equivalence class, the same equation gives
$([E_1],[E_2])+([E],[E])=([E_1],[E_2])$, which leads to $([E],[E])=0$.
\item The additive inverse of $([E_1], [E_2])$ is $([E_2], [E_1])$.
\vspace{1ex}
$\because$
From (i), one can show that
$([E_1],[E_2])+([E_2],[E_1])=([E_1]+[E_2], [E_1]+[E_2])=0$, since
$E_1\oplus E_2$ is continuously deformed into $E_2\oplus E_1$, so
$[E_1]+[E_2]=[E_2]+[E_1]$.
\end{enumerate}
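Over a single point $X=\mathrm{pt}$, a vector bundle is determined by its rank alone, and the construction above reduces to the Grothendieck completion of the monoid $(\mathbb{N},+)$ into $K(\mathrm{pt})\cong\mathbb{Z}$. A minimal sketch (function names are illustrative):

```python
# Grothendieck construction over a point, minimal sketch: a vector
# bundle on X = pt is just its rank n in N, and the pair construction
# turns the monoid (N, +) into the group Z = K(pt).
def equivalent(a, b):
    """(m1, m2) ~ (n1, n2) iff there exist F, G with pairwise sums
    equal; for ranks this reduces to m1 + n2 == n1 + m2."""
    return a[0] + b[1] == b[0] + a[1]

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

# (i) diagonal pairs are the zero element
assert equivalent(add((3, 1), (5, 5)), (3, 1))
# (ii) the inverse of (m1, m2) is (m2, m1)
assert equivalent(add((3, 1), (1, 3)), (0, 0))
# formal difference: (3, 1) represents 3 - 1 = 2 in K(pt) = Z
assert equivalent((3, 1), (2, 0))
```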
The equivalence classes define an Abelian group, which is
known as the $K$-group or the $K$-theory $K(X)$.
The above properties (i) and (ii) also justify the `formal difference'
notation $[E_1] - [E_2] \in K(X)$ for the pair $([E_1], [E_2])$.
Accordingly, we often mean by $[E] \in K(X)$ the element $[E] - 0 \in
K(X)$ or equivalently $([E], 0) \in K(X)$.
The formal difference $[E_1]-[E_2]$ naturally measures the topological difference
between $E_1$ and $E_2$: Indeed, from (i), one finds that if $E_1$ and
$E_2$ are smoothly deformable to each other, then $[E_1]-[E_2]=0 \in K(X)$.
Therefore, we use it to define topological phase on $X$:
When $[E_1]-[E_2]=0$, we say that $E_1$ and $E_2$
belong to the same topological phase on $X$.
To the contrary, when $[E_1]-[E_2]\neq 0$, we say that $E_1$ and $E_2$
belong to different topological phases on $X$.
In this definition of topological phases, $[E]-0 \in K(X)$ gives a
topological number of $E$ through the calculation of $K(X)$,
since a state with no occupied state and the
corresponding vector bundle 0 should be
topologically trivial.
It should be noted here that $[E_1]-[E_2]$ (namely $([E_1], [E_2])$ in the
equivalence relation (\ref{eq3:equivalence})) can
be zero even when
$[E_1]\neq [E_2]$:
Actually, even if $E_1$ and $E_2$ are not smoothly deformable to each other,
$E_1\oplus E$ and $E_2\oplus E$ could be by choosing a proper vector
bundle $E$. If this happens, we have $[E_1]+[E]=[E_2]+[E]$, and thus the
above (i) and (ii) lead to
$([E_1], [E_2])=([E_1], [E_2])+([E],[E])=([E_1]+[E],[E_2]+[E])=0$.
Physically, this result means that a common occupied state can be added
without changing the topological difference between $E_1$ and $E_2$.
See Appendix \ref{app:An example of mismatch} for a simple example of a
mismatch between the $K$-theory and
the monoid of isomorphism classes of vector bundles.
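A classical illustration of such stable equivalence (stated here for concreteness; the appendix contains its own example) is the tangent bundle of the two-sphere,
\begin{eqnarray}
TS^2\oplus \nu \cong S^2\times \mathbb{R}^3,
\qquad
\nu \cong S^2 \times \mathbb{R},
\end{eqnarray}
where $\nu$ is the trivial normal bundle of $S^2\subset \mathbb{R}^3$.
By the hairy ball theorem, $TS^2$ admits no nonvanishing section and
hence is not isomorphic to the trivial rank-two bundle
$S^2\times\mathbb{R}^2$; nevertheless,
$[TS^2]-[S^2\times \mathbb{R}^2]=0$ in the real $K$-group of $S^2$,
because adding the common bundle $\nu$ trivializes both.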
In topological (crystalline) insulators and superconductors, the vector
bundles $[E]$ of occupied states are
subject to constraints from symmetries.
The original formulation of the $K$-theory presented here is not convenient
for taking the symmetry constraints into account.
In the next section, we introduce a different formulation of $K$-theory,
which is much more suitable for the application in topological
(crystalline) insulators and superconductors.
\subsection{Flattened Hamiltonian and Karoubi's formulation of $K$-theory}
\label{sec3:Kroubi}
Since $E_i$ $(i=1,2)$ is defined as a vector bundle that is spanned by
occupied states of $H_i({\bm k})$ ($i=1,2$), one may use the
triple $(E, H_1, H_2)$ with $E$ the vector bundle on which $H_i({\bm k})$ acts,
instead of the pair $([E_1], [E_2])$.
In the triple, we impose the additional constraint $H_i^2({\bm k})=1$.
Indeed, any gapped Hamiltonian can be made to satisfy this
constraint by a
smooth deformation without gap closing: Any Bloch Hamiltonian $H({\bm k})$
is diagonalized as
\begin{eqnarray}
H({\bm k})=U({\bm k})
\left(
\begin{array}{ccc}
{\cal E}_1({\bm k}) & &\\
& \ddots &\\
&& {\cal E}_n({\bm k})\\
\end{array}
\right)U^{\dagger}({\bm k}),
\end{eqnarray}
with a unitary matrix $U({\bm k})$, and if $H({\bm k})$ is gapped, then
there is a clear distinction between the empty levels ${\cal E}_{i\le p}({\bm
k})$ and the occupied ones ${\cal E}_{i\ge p+1}({\bm k})$,
\begin{eqnarray}
{\cal E}_{i\le p}({\bm k}) >{\cal E}_{\rm F}> {\cal E}_{i\ge p+1}({\bm k}).
\end{eqnarray}
Therefore, one may adiabatically deform these levels so that
${\cal E}_{i\le p}({\bm k})\rightarrow 1$, ${\cal E}_{i\ge p+1}({\bm k})\rightarrow -1$
without gap closing. After this deformation, one obtains
\begin{eqnarray}
H({\bm k})=U({\bm k})
\left(
\begin{array}{cc}
{\bm 1}_{p\times p} & \\
& -{\bm 1}_{(n-p)\times (n-p)}
\end{array}
\right)U^{\dagger}({\bm k}),
\end{eqnarray}
which satisfies $H^2({\bm k})=1$.
The flattened Hamiltonian retains the same topological property as the
original one, because the vector bundle spanned by the occupied states
remains the same.
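The spectral flattening can be carried out explicitly with an eigendecomposition; the following sketch uses a random Hermitian matrix as a stand-in for a gapped $H(\bm{k})$ at a fixed $\bm{k}$ (all parameters illustrative):

```python
import numpy as np

# Spectral flattening, a minimal sketch: given a gapped Hermitian H
# (here a random 4x4 matrix with Fermi energy E_F = 0), replace each
# eigenvalue by +1 (empty) or -1 (occupied).  The occupied eigenspace,
# hence the vector bundle E, is unchanged.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T                      # Hermitian; generically gapped at 0
E_F = 0.0

evals, U = np.linalg.eigh(H)
assert np.all(np.abs(evals - E_F) > 1e-8)        # gapped at E_F
H_flat = U @ np.diag(np.sign(evals - E_F)) @ U.conj().T

assert np.allclose(H_flat @ H_flat, np.eye(4))   # H_flat^2 = 1
# occupied projectors coincide: (1 - H_flat)/2 projects onto the
# eigenspace with energies below E_F
P_flat = (np.eye(4) - H_flat) / 2
occ = U[:, evals < E_F]
P_occ = occ @ occ.conj().T
assert np.allclose(P_flat, P_occ)
```

The projector $(1-H_{\rm flat})/2$ onto the occupied subspace is unchanged, which is why the flattened Hamiltonian defines the same vector bundle $E$.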
We also regard $H_i$ in the triple as the set of Hamiltonians that are
deformable to $H_i({\bm k})$ while keeping the flattened condition
$H_i^2({\bm k})=1$.
In a manner similar to Eq.(\ref{eq3:addition}),
the addition for the triples is given by
\begin{eqnarray}
(E, H_1, H_2)+(E', H_1', H_2')=(E \oplus E', H_1 \oplus H_1', H_2 \oplus H_2').
\label{eq3:addition2}
\end{eqnarray}
We can also impose
the equivalence relation $\sim$
\begin{align}
&(E, H_1, H_2) \sim (E', H'_1, H'_2)
\nonumber\\
&\overset{\mathrm{def}}{\Longleftrightarrow}
\mbox{${}^\exists (E'', H'', H'')$, ${}^\exists (E''', H''', H''')$
such that
$(E, H_1, H_2)+(E'', H'', H'')=(E', H_1', H_2')+(E''', H''', H''')$
}.
\label{eq3:equivalence2}
\end{align}
We denote the equivalence class for the triple $(E, H_1, H_2)$ as $[E,
H_1, H_2]$.
Then, corresponding to (i) and (ii), the following properties are
obtained:
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi}')}
\item The elements of the form $[E, H, H]$ represent zero in the addition.
\item The additive inverse of $[E, H_1, H_2]$ is
$[E, H_2, H_1]$, i.e.\ $-[E, H_1, H_2]=[E, H_2, H_1]$.
\end{enumerate}
The equivalence classes provide an alternative definition of
the $K$-group $K(X)$, which is known as Karoubi's
formulation of the $K$-theory.
(Karoubi calls the Hamiltonians $H_i$ ($i=1,2$) gradations.\cite{Karoubi2008})
In the presence of chiral symmetry $\Gamma$
\begin{eqnarray}
\{\Gamma, H({\bm k}) \}=0, \quad \Gamma^2=1,
\end{eqnarray}
we use the quadruple $(E, \Gamma, H_1, H_2)$ with $E$ the vector
bundle on which $\Gamma$ and $H_i({\bm k})$ act.
Here $H_i({\bm k})$ is flattened, and $H_i$ in the quadruple represents
a set of Hamiltonians that are deformable to $H_i({\bm k})$.
We can generalize
the notion of equivalence to that on the quadruples $(E, \Gamma, H_1,
H_2)$, and the equivalence classes constitute an Abelian group $K^{-1}(X)$.
\subsection{Space group and twisted equivariant $K$-theory}
Karoubi's formulation can be generalized to insulators subject to
space groups.
In a crystalline insulator, $H({\bm k})$ is subject to a
constraint from the (anti) space group $G$.
As mentioned in Sec.\ref{sec3:space}, the space group $G$ acts on
$H({\bm k})$ through the point group $P$ with twist $\tau=\tau_G, \omega,
\tau_G+\omega$.
The symmetries can be expressed as the following constraint on the Hamiltonian
\begin{eqnarray}
U_p({\bm k})H({\bm k}) U_p({\bm k})^{-1}=c(p)H(p{\bm k}),
\quad
U_p(p'{\bm k})U_{p'}({\bm k})=e^{i\tau_{p,p'}(pp'{\bm k})}U_{pp'}({\bm k}),
\label{eq3:Up}
\end{eqnarray}
where $p \in P$ is the point group part of an element $\{p|{\bm a}_p\}$ of
$G$, and $U_p({\bm k})$ is a unitary representation matrix of $p$.
The index $c(p)=\pm 1$ specifies
symmetry or antisymmetry. In a manner similar to Sec.\ref{sec3:Kroubi}, a
triple $(E, H_1, H_2)$ with flattened Hamiltonian $H_i$ ($i=1,2$) subject to the
constraint (\ref{eq3:Up}) defines a
twisted $K$-class $[E, H_1, H_2]\in K_P^{(\tau, c)-0}(X)$, in the
twisted equivariant $K$-theory.
It should be noted here that the direct sum $H({\bm
k})\oplus H'({\bm k})$ satisfies the same constraint (\ref{eq3:Up}) with
the same $c(p)$ and twist $e^{i\tau_{p,p'}({\bm k})}$ if we
consider the corresponding direct sum for $U_p({\bm k})$.
Furthermore, when there exists a compatible chiral symmetry $\Gamma$,
\begin{align}
&U_p({\bm k})H({\bm k}) U_p({\bm k})^{-1}=c(p)H(p{\bm k}),
\quad
U_p(p'{\bm k})U_{p'}({\bm k})=e^{i\tau_{p,p'}(pp'{\bm k})}U_{pp'}({\bm k}),
\nonumber\\
&U_p({\bm k})\Gamma U_p({\bm k})^{-1}=c(p)\Gamma,
\quad \{H({\bm k}), \Gamma\}=0,
\quad \Gamma^2=1,
\end{align}
a quadruple $(E, \Gamma, H_1, H_2)$ subject to this constraint
defines another twisted $K$-class $[E,\Gamma, H_1, H_2] \in
K_P^{(\tau,c)-1}(X)$.
\subsection{Module structure}
\label{sec:module}
We note that the twisted equivariant $K$-group is not simply an
additive group, but has a more complicated structure.
Indeed, we can multiply an element of the $K$-group by a representation
$R(P)$ of the point group $P$.
To see this, consider a unitary matrix $R(p)$ for an element $p\in P$ in
the representation $R(P)$.
Then, we can multiply $U_p({\bm k})$ by $R(P)$ by taking the tensor product of
$R(p)$ and $U_p({\bm
k})$, i.e.
\begin{eqnarray}
R(P)\cdot U_p({\bm k}):=R(p)\otimes U_p({\bm k}).
\end{eqnarray}
From the multiplication law in $R(P)$,
$
R(p)R(p')=R(pp'),
$
we find that the obtained unitary matrix has the same twist as $U_p({\bm k})$
\begin{eqnarray}
\left[R(P)\cdot U_p(p'{\bm k})\right]
\left[R(P)\cdot U_{p'}({\bm k})\right]
=e^{i\tau_{p,p'}(pp'{\bm k})} R(P)\cdot U_{pp'}({\bm k}),
\end{eqnarray}
which defines an action of the point group $P$ on the representation space of the tensor product.
Furthermore, the multiplication of the Hamiltonian $H$
by $R(P)$ can be defined as
\begin{eqnarray}
R(P)\cdot H
({\bm k}):={\bm 1} \otimes H({\bm k}),
\label{eq:RcdotH}
\end{eqnarray}
with the identity matrix ${\bm 1}$ in the representation space of $R(P)$.
Equation (\ref{eq:RcdotH}) gives a Hamiltonian with the space group symmetry $G$:
\begin{eqnarray}
\left[R(P)\cdot U_p({\bm k})\right]
\left[R(P)\cdot H({\bm k})\right]
\left[R(P)\cdot U_p({\bm k})\right]^{-1}
=\left[R(P)\cdot H(p{\bm k})\right],
\end{eqnarray}
where $\left[R(P)\cdot U_p({\bm k})\right]^{-1}=[R(p)^{-1} \otimes U_{p}({\bm k})^{-1}]$.
Correspondingly, for the vector space $E$ on which $H$ is
defined, $R(P)\cdot E$ is defined as the tensor product of the representation
space of $R(P)$ and $E$.
Using these definitions, we can eventually introduce the
multiplication of the triple $(E, H_1, H_2)$ by $R$ as
\begin{eqnarray}
R(P)\cdot (E, H_1, H_2):=(R(P)\cdot E, R(P)\cdot H_1, R(P)\cdot H_2),
\end{eqnarray}
which defines the multiplication of the element $[E, H_1, H_2]\in
K_P^{(\tau,c)-0}(X)$ by $R(P)$.
The multiplication by $R(P)$ is compatible with the Abelian group structure
of the $K$-group,
\begin{eqnarray}
R(P)\cdot(E, H_1, H_2)+ R(P)\cdot(E', H'_1, H'_2)
\nonumber\\
=R(P)\cdot (E \oplus E', H_1 \oplus H_1', H_2 \oplus H_2'),
\end{eqnarray}
and thus the $K$-group is an $R(P)$-module.
In a similar manner, we can show that $K_P^{(\tau,c)-1}(X)$ is also an
$R(P)$-module.
Remembering that $[E]$ is the space spanned by the occupied states
of $H$, one finds that $R(P)\cdot H$ naturally gives the
tensor product of the representation space of $R(P)$ and $[E]$, which we
denote as $R(P)\cdot [E]$.
Therefore, from the correspondence between $(E, H_1, H_2)$ and
$([E_1], [E_2])$, we can equivalently define the product of $R(P)$ and the element $([E_1], [E_2])$ in the $K$-group as
\begin{eqnarray}
R(P)\cdot ([E_1], [E_2]):=(R(P)\cdot[E_1], R(P)\cdot[E_2]).
\end{eqnarray}
This definition is also useful to identify the $R(P)$-module structure
of the $K$-group.
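The compatibility of the tensor product with the twist can be checked in a small numerical sketch. We use the $k$-independent twist $\omega$ of spinful two-fold rotations, a hypothetical example not tied to a particular lattice:

```python
import numpy as np

# R(P)-module action, a minimal sketch: for spinful fermions the
# two-fold rotations about x, y, z realize a projective representation
# of P = Z_2 x Z_2 through U_p = i sigma_p, with a k-independent twist
# omega (here omega_{x,y} = -1).  Tensoring each U_p with a
# one-dimensional point-group representation R(p) = +/-1 changes the
# orbital content but leaves the twist unchanged.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
U = {'e': np.eye(2), 'x': 1j * sx, 'y': 1j * sy, 'z': 1j * sz}
R = {'e': 1.0, 'x': 1.0, 'y': -1.0, 'z': -1.0}  # a 1D rep of Z_2 x Z_2

# twist of the bare operators: U_x U_y = omega U_z
omega = -1.0
assert np.allclose(U['x'] @ U['y'], omega * U['z'])

# the tensored operators R(p) (x) U_p obey the same factor system,
# since R(x) R(y) = R(z) is an honest (non-projective) representation
lhs = (R['x'] * U['x']) @ (R['y'] * U['y'])
rhs = omega * (R['z'] * U['z'])
assert np.allclose(lhs, rhs)
```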
\section{Coexistence of anti-unitary symmetries}
\label{With TRS and/or PHS}
So far, we have considered only unitary symmetries.
In this section, we describe how to take into account antiunitary
symmetries such as TRS, PHS,
and magnetic space groups. \cite{Freed2013}
Hamiltonians considered here include Bogoliubov-de Gennes Hamiltonians as
well as Bloch Hamiltonians.
We take a suitable basis in which the Hamiltonians are periodic in the BZ
torus, $H({\bm k}+{\bm G})=H({\bm k})$.
Suppose that the Hamiltonian $H({\bm k})$ is subject to a symmetry
group ${\cal G}$.
The symmetry group ${\cal G}$ may include any symmetry operations including
anti-unitary ones.
For $g\in {\cal G}$, we have
\begin{eqnarray}
U_g({\bm k}) H({\bm k}) U_g({\bm k})^{-1} = c(g) H(g {\bm k}),
\label{eq4:c}
\end{eqnarray}
where $g{\bm k}$ denotes the group action on the momentum space for
$g\in {\cal G}$.
Here $c(g)=\pm 1$ is a function on ${\cal G}$ which specifies symmetry ($c(g)=1$) or
anti-symmetry ($c(g)=-1$). It is a homomorphism on ${\cal G}$,
i.e. $c(gg')=c(g)c(g')$.
We also introduce a function $\phi(g)=\pm 1$ by
\begin{eqnarray}
U_g({\bm k}) i= \phi(g)i U_g({\bm k}),
\label{eq4:phi}
\end{eqnarray}
with the imaginary unit $i$, in order
to specify unitarity
($\phi(g)=1$) or anti-unitarity ($\phi(g)=-1$) of $U_g({\bm k})$.
Again, it is a homomorphism on ${\cal G}$, i.e. $\phi(gg')=\phi(g)\phi(g')$.
The multiplication in ${\cal G}$ implies that
\begin{eqnarray}
U_g(g'{\bm k})U_{g'}({\bm k})=
e^{i\tau_{g,g'}(gg'{\bm k})}U_{gg'}({\bm k}),
\label{eq4:tau}
\end{eqnarray}
with a U(1) factor $e^{i\tau_{g,g'}({\bm k})}$.
From the associativity
\begin{eqnarray}
(U_{g_1}(g_2g_3{\bm k})U_{g_2}(g_3{\bm k}))U_{g_3}({\bm k})
= U_{g_1}(g_2g_3{\bm k})(U_{g_2}(g_3{\bm k})U_{g_3}({\bm k})),
\quad g_1,g_2,g_3\in {\cal G},
\label{eq4:two-cocycle}
\end{eqnarray}
the U(1) factor obeys
\begin{align}
\delta \tau = 0 \quad \Leftrightarrow \quad
\phi(g_1)\tau_{g_2,g_3}(g_1^{-1}\bm{k})-\tau_{g_1g_2,g_3}(\bm{k})+\tau_{g_1,g_2g_3}(\bm{k})-\tau_{g_1,g_2}(\bm{k}) \equiv 0 \qquad {\rm mod } \ 2 \pi.
\label{eq4:one-coboundary}
\end{align}
The U(1) gauge ambiguity of $U_p({\bm k})$
\begin{eqnarray}
U_g({\bm k})\rightarrow e^{i\theta_g({\bm k})} U_g({\bm k})
\end{eqnarray}
also induces the equivalence relation
\begin{align}
\tau \sim \tau+\delta \theta \quad \Leftrightarrow \quad
\tau_{g_1,g_2}(\bm{k})\sim \tau_{g_1,g_2}(\bm{k})+\phi(g_1)\theta_{g_2}(g_1^{-1}\bm{k})-\theta_{g_1g_2}(\bm{k})+\theta_{g_1}(\bm{k}) \qquad {\rm mod }\ 2 \pi.
\end{align}
Equations (\ref{eq4:two-cocycle}) and (\ref{eq4:one-coboundary}) imply
that a set of inequivalent U(1)
phase factors $\{e^{i\tau_{g,g'}({\bm k})}\}_{g,g'\in {\cal G}}$ gives
an element of the group
cohomology $H^2({\cal G},C(T^d, U(1)_\phi))$. Here $C(T^d, U(1)_\phi)$ is the
set of U(1)-valued functions on the BZ torus $T^d$, where the Abelian group
structure is given by the usual product of U(1) phases,
$e^{i\alpha_1({\bm k})}\cdot e^{i\alpha_2({\bm k})}
=e^{i(\alpha_1({\bm k})+\alpha_2({\bm k}))}, e^{i\alpha_i({\bm k})}\in
C(T^d, U(1)_{\phi})$,
and the group ${\cal G}$ acts on $C(T^d, U(1)_\phi)$ by
$e^{i(g \cdot \alpha)({\bm k})}=e^{i \phi(g) \alpha(g^{-1}{\bm k})}$
from the left.
As explained in Appendix \ref{Group_cohomology},
Eq.(\ref{eq4:two-cocycle}) gives the two-cocycle condition, and
Eq.(\ref{eq4:one-coboundary}) is the equivalence relation from the
two-coboundary in the cohomology.
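The cocycle algebra above can be made concrete with a minimal example. The sketch below is our own toy example, not from the text: the order-two group ${\cal G}=\{e,g\}$ acting trivially on momentum, with $\phi\equiv 1$ (all unitary) and a ${\bm k}$-independent cocycle $\tau_{g,g}=\pi$, i.e. a projective representation with $U_g^2=-1$. It checks the two-cocycle condition $\delta\tau=0$ over all group triples:

```python
import numpy as np

# Toy example: G = {e, g} with g*g = e, phi = 1 (all unitary),
# and a k-independent cocycle tau with tau_{g,g} = pi (i.e. U_g^2 = -1).
G = ["e", "g"]
mul = {("e", "e"): "e", ("e", "g"): "g", ("g", "e"): "g", ("g", "g"): "e"}
tau = {(a, b): (np.pi if (a, b) == ("g", "g") else 0.0) for a in G for b in G}

# two-cocycle condition (with phi = 1):
# tau_{g2,g3} - tau_{g1 g2, g3} + tau_{g1, g2 g3} - tau_{g1, g2} = 0 mod 2 pi
violations = []
for g1 in G:
    for g2 in G:
        for g3 in G:
            d = tau[(g2, g3)] - tau[(mul[(g1, g2)], g3)] \
                + tau[(g1, mul[(g2, g3)])] - tau[(g1, g2)]
            d_mod = d % (2 * np.pi)
            violations.append(min(d_mod, 2 * np.pi - d_mod))
max_violation = max(violations)
print("largest deviation from the cocycle condition:", max_violation)
```

This cocycle is not a coboundary (no redefinition $\theta_g$ removes the sign $U_g^2=-1$), so it represents a nontrivial element of $H^2({\cal G}, U(1))$.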
The above three data $(c, \phi, \tau)$ in Eqs.(\ref{eq4:c}),
(\ref{eq4:phi}) and (\ref{eq4:tau})
specify the exact action of ${\cal G}$ on $H({\bm k})$ and the momentum space.
In a manner similar to Sec.\ref{sec3:Kroubi}, we can introduce a $K$-group by
using the Karoubi's formulation.
For flattened Hamiltonians $H_i({\bm k})$ ($i=1,2$) subject to the
symmetry group ${\cal G}$, we consider a triple
$
(E, H_1, H_2),
$
where $E$ is a vector bundle on a compact momentum space $X$, and the
Hamiltonians $H_i$ ($i=1,2$) act on the common vector bundle $E$.
The addition is defined by Eq.(\ref{eq3:addition2}), and the equivalence
relation is imposed by Eq.(\ref{eq3:equivalence2}).
As a result, we obtain the twisted equivariant $K$-group consisting of sets of the
equivalence classes $[E,H_1,H_2]$, which we denote by
${}^\phi K_{\cal G}^{(\tau,c)}(X)$.
We introduce the integer grading of the $K$-group,
${}^\phi K_{\cal G}^{(\tau,c)-n}(X)$, $(n=1,2,3,\dots)$
by imposing $n$ additional chiral symmetries which are compatible with
${\cal G}$,
\begin{align}
&\Gamma_i H({\bm k}) \Gamma_i^{-1} = - H({\bm k}), \qquad \{ \Gamma_i, \Gamma_j\} = 2 \delta_{ij}, \qquad i = 1 , \dots, n, \\
&U_g(\bm{k})\Gamma_i U^{-1}_g({\bm k}) = c(g)\Gamma_i,
\end{align}
together with Eq.(\ref{eq4:c}).
For $n\ge 2$, we also impose the subsector condition
$i\Gamma_{2i-1}\Gamma_{2i}=1$
$(i=1,\dots,[n/2])$:
By dressing antiunitary operators with chiral operators as shown in
Table \ref{tab:n_and_AUS}, the operator $i\Gamma_{2i-1}\Gamma_{2i}$
commutes with
all symmetry operators in ${\cal G}$ as well as with Hamiltonians in the
triple. Thus, we can consistently impose the above condition.
It is also found that for an odd $n$, there remains a chiral symmetry $\Gamma$
that is compatible with the subsector condition. See Table \ref{tab:n_and_AUS}.
In general, the twist $(\tau, c)$ for the dressed antiunitary operators
is different from the original one.
However, as summarized in Table~\ref{tab:n_and_AUS}, the twist in each
grading is uniquely determined by the
original twist, so
we use the same notation $(\tau, c)$ to denote the twist in each
grading.
It is also noted that the chiral operator $\Gamma$ for an odd $n$ obeys
the same symmetry constraints as the Hamiltonian: When $U_g({\bm k})$
acts on the Hamiltonian as symmetry (anti-symmetry), $U_g({\bm k})$
commutes (anticommutes) with $\Gamma$.
The graded twist $(\tau, c)$ has a modulo 8 periodicity (Bott
periodicity)
for the grading
integer $n$. For instance, the dressed antiunitary operator
$\Gamma_7\Gamma_5\Gamma_3\Gamma_1 U_g({\bm k})$ for $n=8$ has the same
$e^{i\tau_{g,g'}({\bm k})}$ and
$c(g)$ as $U_g({\bm k})$.
Therefore, the same modulo 8 periodicity appears in the
$K$-groups, ${}^\phi K_{\cal G}^{(\tau,c)-n-8}(X)
={}^\phi K_{\cal G}^{(\tau,c)-n}(X)$.
One can introduce ${}^\phi K_{\cal G}^{(\tau,c)+n}(X)$
so as to keep the modulo 8 periodicity.
Namely, ${}^\phi K_{\cal G}^{(\tau,c)+n}(X)\equiv {}^\phi K_{\cal
G}^{(\tau,c)-(8m-n)}(X)$ with $8m-n\ge 0$ ($m,n\in \mathbb{Z}$).\footnote{In the
absence of anti-unitary symmetry, the Bott periodicity becomes 2. Thus,
it holds that $K_G^{(\tau,c)+n}(X)=K_G^{(\tau,c)-n}(X)$. }
\begin{table}
\caption{Symmetry operators and twist for each grading.}
\label{tab:n_and_AUS}
\begin{tabular}{c|ccc|ccccc}
Grade & \multicolumn{3}{c|}{Symmetry operators}& \multicolumn{5}{c}{Twist $(\phi(g),\phi(g'))$}\\
\hline
$n$
&CS
& $\phi(g)=1$
& $\phi(g)=-1$
&$c$
& $(1,1)$
& $(1,-1)$
& $(-1,1)$
& $(-1,-1)$
\\
\hline
$n=0$
& 0
& \multirow{2}{*}{$U_g({\bm k})$}
& \multirow{2}{*}{$U_g({\bm k})$}
& \multirow{2}{*}{$c(g)$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
\\
$n=1$
& $\Gamma_1$
&&
\\
$n=2$
& 0
& \multirow{2}{*}{$U_g({\bm k})$}
&\multirow{2}{*}{$\Gamma_1U_g({\bm k})$}
& \multirow{2}{*}{$\phi(g)c(g)$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$c(g)e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$c(g)e^{i\tau_{g,g'}(\bm{k})}$}
\\
$n=3$
& $\Gamma_3$
&&
\\
$n=4$
& 0
& \multirow{2}{*}{$U_g({\bm k})$}
&\multirow{2}{*}{$\Gamma_3\Gamma_1U_g({\bm k})$}
& \multirow{2}{*}{$c(g)$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$-e^{i\tau_{g,g'}(\bm{k})}$}
\\
$n=5$
& $\Gamma_5$
&&
\\
$n=6$
& 0
& \multirow{2}{*}{$U_g({\bm k})$}
&\multirow{2}{*}{$\Gamma_5\Gamma_3\Gamma_1U_g({\bm k})$}
& \multirow{2}{*}{$c(g)\phi(g)$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$c(g)e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$e^{i\tau_{g,g'}(\bm{k})}$}
& \multirow{2}{*}{$-c(g)e^{i\tau_{g,g'}(\bm{k})}$}
\\
$n=7$
& $\Gamma_7$
& &
\end{tabular}
\end{table}
An important class of symmetries in this category consists of unitary space
groups combined with real AZ symmetries (TRS and/or PHS).
They can be treated in a unified way by considering
the symmetry group $\mathbb{Z}_2 \times G$ with integer grading.
Here $G$ is a unitary space group, and $\mathbb{Z}_2=\{1, -1\}$ is an order-two
cyclic group that commutes with
all elements of $G$, i.e. $(-1) \cdot g=g\cdot (-1)$, $g\in G$.
To include real AZ symmetries, we take the operators for $\mathbb{Z}_2$ as
$U_{-1}({\bm{k}})={\cal T}$ and $U_{1}({\bm k})=1$, where
${\cal T}$ is the time-reversal operator with ${\cal T}^2=1$.
We also define
$U_{(-1) \cdot g}({\bm k})$ as $U_{(-1) \cdot g}({\bm k})=U_g(-{\bm{k}}){\cal T}$.
The presence of such TRS is referred to as class AI in
the AZ symmetry classes.
The data $(\phi,c, \tau)$ are summarized as
\begin{eqnarray}
&&\phi(-1) = -1, \quad c(-1)=1, \quad {\cal T}^2=1,
\quad
\phi(g)=1, \quad c(g)= \pm 1,
\nonumber\\
&&U_{g}(g'{\bm k}) U_{g'}({\bm k})
=e^{i\tau_{g,g'}(gg'{\bm k})}U_{gg'}({\bm k}),
\quad
{\cal T}U_{g}({\bm k}) = e^{i\tau_{-1,g}(-g{\bm k})} U_{g}(-{\bm k}) {\cal T}.
\end{eqnarray}
Imposing the chiral symmetries $\Gamma_i$ $(i=1,\dots, n)$, one can shift AZ
classes.~\cite{ShiozakiSatoGomi2016}
The AZ class for the $n$-th grading $K$-group
${}^\phi K_{\mathbb{Z}_2\times G}^{(\tau,c)-n}(X)$
is summarized in Table~\ref{tab:n_and_AZ}.
\begin{table}
\caption{The relation between the integer grading $n \ ({\rm mod}\ 8)$,
AZ classes, and additional symmetries. }
\label{tab:n_and_AZ}
\begin{tabular}{cccccccccccccc}
$n$ & AZ class & TRS & PHS & $\tau_{T,g}$ & $\tau_{C,g}$ \\
\hline
$n=0$ & AI & $T={\cal T}$ & &
$T U_g({\bm k}) = e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) T$ & \\
$n=1$ & BDI & $T={\cal T}$ & $C=\Gamma_1 {\cal T}$ &
$T U_g({\bm k}) = e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) T$ & $C U_g(\bm{k}) = c(g) e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) C$ \\
$n=2$ & D & & $C = \Gamma_1 {\cal T}$ & &
$C U_g(\bm{k}) = c(g) e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) C$ \\
$n=3$ & DIII & $T=\Gamma_3\Gamma_1{\cal T}$ & $C=\Gamma_1 {\cal T}$ &
$T U_g({\bm k}) = e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) T$ & $C U_g(\bm{k}) = c(g) e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) C$ \\
$n=4$ & AII & $T=\Gamma_3\Gamma_1{\cal T}$ & &
$T U_g({\bm k}) = e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) T$ & \\
$n=5$ & CII & $T=\Gamma_3\Gamma_1{\cal T}$ & $C=\Gamma_5\Gamma_3\Gamma_1{\cal T}$ &
$T U_g({\bm k}) = e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) T$ & $C U_g(\bm{k}) = c(g) e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) C$ \\
$n=6$ & C & & $C=\Gamma_5\Gamma_3\Gamma_1{\cal T}$ & &
$C U_g(\bm{k}) = c(g) e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) C$ \\
$n=7$ & CI & $T=\Gamma_7\Gamma_5\Gamma_3\Gamma_1{\cal T}$ & $C=\Gamma_5\Gamma_3\Gamma_1{\cal T}$ &
$T U_g({\bm k}) = e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) T$ & $C U_g(\bm{k}) = c(g) e^{i\tau_{-1,g}(-g\bm{k})} U_g(-\bm{k}) C$ \\
\end{tabular}
\end{table}
\section{Topological crystalline insulators and superconductors}
\label{sec:Topological crystalline insulators and superconductors}
In this section, we consider insulators or superconductors that are
gapped in the whole BZ $T^d$.
Deforming Hamiltonians of the systems, one can
obtain flattened Hamiltonians in the whole BZ without gap closing.
Using the Karoubi's formulation, these flattened Hamiltonians
define $K$-groups on $T^d$.
Under the constraint of a symmetry group ${\cal G}$ with the data
$(c,\tau, \phi)$, the obtained $K$-group is the twisted equivariant
$K$-group ${}^{\phi}K_{\cal G}^{(\tau,c)-n}(T^d)$.
We formulate below TCIs and TCSCs in terms of the $K$-group ${}^{\phi}K_{\cal
G}^{(\tau,c)-n}(T^d)$.
\subsection{$K$-theory classification}
\label{sec:Stable classification of bulk insulators}
First, we define TCIs and TCSCs
on the basis of the $K$-theory:
For this purpose,
consider two different flattened Hamiltonians, $H_1$
and $H_2$, which are defined on the same vector bundle $E$ and are
subject to the same symmetry constraints for ${}^{\phi}K_{\cal
G}^{(\tau,c)-n}(T^d)$.
As shown in Sec.\ref{Sec:K-theory}, $[E, H_1, H_2]
\in {}^{\phi}K_{\cal G}^{(\tau,c)-n}(T^d)$ measures
a topological difference between $H_1$ and $H_2$,
so we can define that $H_1$ and $H_2$ are the same (different)
TCIs or TCSCs if
$[E, H_1, H_2]=0 \in {}^{\phi}K_{\cal G}^{(\tau,c)-n}(T^d)$ ($[E, H_1,
H_2]\neq 0\in {}^{\phi}K_{\cal G}^{(\tau,c)-n}(T^d)$).
Some remarks are in order.
\begin{enumerate}
\item We say that $H_1$ and $H_2$ are stably equivalent to each other when
       $[E, H_1, H_2]=0$. If $H_1$ and $H_2$ are continuously deformable
       to each other, they are stably equivalent, but the converse is not
       true: Indeed, as mentioned in Sec.\ref{Topological classification
and the $K$-group},
$[E, H_1, H_2]$=0 does not necessarily mean that $H_1$ and $H_2$
are smoothly deformable to each other. Even when they are
not deformable to each other, $H_1\oplus H'$ and $H_2\oplus H'$
       could be deformable by choosing a proper flattened Hamiltonian $H'$ on $E'$,
and if this happens, one finds $[E, H_1, H_2]=0$.
This means that even if $H_1$ and $H_2$ are not smoothly deformable to
each other, they could represent the same TCI
or TCSC. In this sense, the $K$-theory
approach presents a loose classification of TCIs and TCSCs.
\item When ${\cal G}$ does not include any anti-symmetry, the identity
operator $1$ on $E$ is regarded as a flattened Hamiltonian $H_0=1$
which satisfies all
the constraints from ${\cal G}$. Since $H_0=1$ does not have an
occupied state, the vector bundle spanned by its occupied state
is of rank zero (i.e.\ empty), and so $H_0=1$ obviously
describes a topologically trivial state.
Therefore, for this particular class of ${\cal G}$, one can use
the identity Hamiltonian as a reference, by which the topological
index of $H$ is defined as $[E, H, 1]$. When $[E,H,1]$ is
nonzero, one can say that $H$ is a TCI.
\item Each triple $[E, H_1, H_2]$ has its own symmetry operators
$U_g({\bm k})$ for $g\in{\cal G}$ defined on $E$.
For $H_1$ and $H_2$ in the same triple, the symmetry operators
commonly act on these Hamiltonians. On the other hand,
explicit forms of symmetry operators can be different for
different triples, as long as the symmetry operators have the same
data $(\phi, \tau,c)$.
\end{enumerate}
\subsection{Symmetry protected topologically distinct atomic insulators}
\label{sec:Dependence on unit cell and Wyckoff position}
\subsubsection{Wyckoff position}
In the presence of symmetry, short-range entangled states can be
topologically distinct due to symmetry constraints.
TCIs and TCSCs may illustrate
such symmetry protected topological phases in an extreme manner: Atomic
insulators can be topologically different from each other due to space
group symmetry.
An atomic insulator is an insulator where all electrons are tightly
bound to atoms, so its electronic properties are local and insensitive to
the boundary condition.
In particular, it does not support topological gapless boundary states.
Nevertheless, in the presence of crystalline space group symmetry, there
arises topological distinction between atomic insulators.
This is because crystalline symmetry restricts possible positions of
atoms in the unit cell.
Each space group (or magnetic space group) has a finite number of
different Wyckoff positions,
according to which atoms are placed in the unit cell, and
the different Wyckoff
positions remain different under any adiabatic deformation keeping
the space group symmetry.
This means that atomic insulators with different Wyckoff positions
should be topologically different.
For example, let us consider atomic insulators with the spatial reflection
symmetry $m$: $x\rightarrow -x$ in one dimension. Spatial reflection in one
dimension has three different Wyckoff positions: (a) $0$, (b) $1/2$, and (c) $\{x, -x\}$,
which are invariant under reflection up to the lattice
translation $x\rightarrow x+1$.
We illustrate below atomic insulators with Wyckoff positions (a) 0 and
(b) 1/2, respectively:
\begin{align}
&{\rm (a)} \qquad
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*{\bigcirc},
!{(2,0)}*{\bigcirc},
!{(-2,0)}*{\bigcirc},
!{(4,0)}*{\bigcirc},
!{(-4,0)}*{\bigcirc},
!{(0,-0.4)}*{0},
!{(-5.5,0)}-@{->}!{(5.5,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,0.6)}*{m},
!{(0.4,0.4)}-@{<->}!{(-0.4,0.4)},
!{(-1,-0.8)}-@{|-|}!{(1,-0.8)},
!{(0,-1)}*{\rm unit\ cell},
!{(6,0)}*{x}
}
\\
&{\rm (b)} \qquad
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}*{\bigcirc},
!{(3,0)}*{\bigcirc},
!{(-1,0)}*{\bigcirc},
!{(-3,0)}*{\bigcirc},
!{(-5,0)}*{\bigcirc},
!{(5,0)}*{\bigcirc},
!{(1,-0.4)}*{1/2},
!{(-1,-0.4)}*{-1/2},
!{(-5.5,0)}-@{->}!{(5.5,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,0.6)}*{m},
!{(0.4,0.4)}-@{<->}!{(-0.4,0.4)},
!{(-1,-0.8)}-@{|-|}!{(1,-0.8)},
!{(0,-1)}*{\rm unit\ cell},
!{(6,0)}*{x}
}
\end{align}
Here, ``$\bigcirc$'' represents an atom,
and the dashed line is the center of the reflection.
Although the difference between (a) and (b) is
just a difference in the choice of the unit cell, the crystal (a) cannot
be adiabatically deformed
into (b) while keeping the reflection symmetry.
Therefore, they are topologically distinguished from each other.
In the Karoubi's formulation of the $K$-theory,
the difference between Wyckoff positions
is manifest in the reflection operator.
Consider the one-dimensional reflection symmetric insulators (a) and (b)
again.
The reflection
operator $U^{\rm (a)}_{m}(k_x)$ for the atomic
insulator (a) does not coincide with the reflection operator $U^{\rm
(b)}_{m}(k_x)$ for (b), even when atoms in both crystals are identical:
In the crystal (b), after reflection, an additional lattice translation
is needed for an atom in the unit cell to go back to the original position.
As a result, an additional Bloch factor $e^{-ik_x}$ appears in
$U_{m}^{\rm (b)}(k_x)$ as $U_{m}^{\rm (b)}(k_x)=U_{m}^{\rm
(a)}(k_x)e^{-ik_x}$.
Here it should be noted that the twist in $U_{m}^{\rm (b)}(k_x)$ is
the same as
that in $U_{m}^{\rm (a)}(k_x)$ because
$U_{m}^{\rm (b)}(-k_x)U_{m}^{\rm (b)}(k_x)=U_{m}^{\rm (a)}(-k_x)U_{m}^{\rm (a)}(k_x)$.
Thus, both $U_{m}^{\rm (a)}(k_x)$ and $U_{m}^{\rm (b)}(k_x)$ are
allowed in the same twisted $K$-theory.
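This Bloch-factor argument is easy to check numerically. The sketch below is our own one-band illustration (not from the text), in which the reflection operators reduce to U(1) phases: $U^{\rm (a)}_m(k_x)=1$ and $U^{\rm (b)}_m(k_x)=e^{-ik_x}$ indeed carry the same twist $U_m(-k_x)U_m(k_x)$:

```python
import numpy as np

ks = np.linspace(-np.pi, np.pi, 201)    # symmetric grid, so ks[::-1] == -ks
U_a = np.ones_like(ks, dtype=complex)   # reflection operator, Wyckoff position (a)
U_b = np.exp(-1j * ks)                  # position (b): extra Bloch factor e^{-i k_x}

# the twist U_m(-k) U_m(k) is identical for the two choices
same_twist = np.allclose(U_b[::-1] * U_b, U_a[::-1] * U_a)
print("U_m^(a) and U_m^(b) carry the same twist:", same_twist)
```

The Bloch factor cancels against its reflected partner, $e^{ik_x}e^{-ik_x}=1$, which is exactly why both operators live in the same twisted $K$-theory.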
\subsubsection{Representation dependence and $R(P)$-module structure}
Let us consider a set of all unitary symmetry operations $g
\in {\cal G}$, which are characterized by $c(g)=\phi(g)=1$.
The set forms a subgroup of ${\cal G}$ because of the relations
$c(gg')=c(g)c(g')$ and $\phi(gg')=\phi(g)\phi(g')$.
This unitary symmetry subgroup is given by a space group $G$.
The space group $G$ also provides topologically nontrivial structures.
To see this, consider the symmetry constraint in Eq.(\ref{eq4:c}). From
Eq.(\ref{eq4:c}),
$H({\bm k})$ at ${\bm k}=0$ commutes with any unitary operator in
the above-mentioned space group $G$.
Since the space group $G$ reduces to the point group $P$ at ${\bm k}=0$,
the constraint implies that
any energy
eigenstate of $H({\bm k})$ at ${\bm k}=0$ should belong to a
representation of $P$.
In particular, occupied states of $H({\bm k})$ at ${\bm k}=0$ constitute
a set of representations of $P$.
It is evident that if occupied states of $H_1({\bm k})$ and those
of $H_2({\bm k})$ constitute different sets of representations of $P$
at ${\bm k}=0$, $H_1({\bm k})$ and $H_2({\bm k})$ are not deformable
to each other as long as they keep symmetry $P$ and gaps of the systems.
In this sense, the representation of $P$ provides topological
differences in insulators and superconductors.
The above arguments also work for atomic insulators.
For illustration, consider again reflection
symmetric atomic insulators in one dimension.
Below, we show atomic insulators (a1) and (a2) which share the same
Wyckoff position.
\begin{align}
{\rm (a1)} \qquad
&\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*{\bigcirc},
!{(2,0)}*{\bigcirc},
!{(-2,0)}*{\bigcirc},
!{(4,0)}*{\bigcirc},
!{(-4,0)}*{\bigcirc},
!{(0,-0.4)}*{\ket{s}},
!{(2,-0.4)}*{\ket{s}},
!{(-2,-0.4)}*{\ket{s}},
!{(4,-0.4)}*{\ket{s}},
!{(-4,-0.4)}*{\ket{s}},
!{(-5.5,0)}-@{->}!{(5.5,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,0.6)}*{m},
!{(0.4,0.4)}-@{<->}!{(-0.4,0.4)},
!{(6,0)}*{x}
}
\\
{\rm (a2)} \qquad
&\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*{\bigcirc},
!{(2,0)}*{\bigcirc},
!{(-2,0)}*{\bigcirc},
!{(4,0)}*{\bigcirc},
!{(-4,0)}*{\bigcirc},
!{(0,-0.4)}*{\ket{p}},
!{(2,-0.4)}*{\ket{p}},
!{(-2,-0.4)}*{\ket{p}},
!{(4,-0.4)}*{\ket{p}},
!{(-4,-0.4)}*{\ket{p}},
!{(-5.5,0)}-@{->}!{(5.5,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,0.6)}*{m},
!{(0.4,0.4)}-@{<->}!{(-0.4,0.4)},
!{(6,0)}*{x}
}
\end{align}
In the atomic insulator (a1), electrons in $s$-orbitals are tightly bound to
atoms, while in (a2), electrons in $p$-orbitals are bound to atoms.
Correspondingly, an occupied state in (a1) is even under reflection,
\begin{eqnarray}
U_{m}^{\rm (a1)}(k_x)|k_x\rangle_{\rm (a1)}=|-k_x\rangle_{\rm (a1)},
\end{eqnarray}
but that in (a2) is odd under reflection,
\begin{eqnarray}
U_{m}^{\rm (a2)}(k_x)|k_x\rangle_{\rm (a2)}=-|-k_x\rangle_{\rm (a2)}.
\end{eqnarray}
$U^{\rm (a1)}_{m}(k_x)$
and $U^{\rm (a2)}_{m}(k_x)$ have the same twist since we have
\begin{eqnarray}
U^{\rm (a1)}_{m}(-k_x)
U_{m}^{\rm (a1)}(k_x)|k_x\rangle_{\rm (a1)}
=U^{\rm (a1)}_{m}(-k_x)
|-k_x\rangle_{\rm (a1)}
=|k_x\rangle_{\rm (a1)},
\nonumber\\
U^{\rm (a2)}_{m}(-k_x)
U_{m}^{\rm (a2)}(k_x)|k_x\rangle_{\rm (a2)}
=-U^{\rm (a2)}_{m}(-k_x)
|-k_x\rangle_{\rm (a2)}
=|k_x\rangle_{\rm (a2)}.
\end{eqnarray}
Thus, these two insulators can be compared in the same twisted
$K$-theory.
Obviously, these two insulators are not topologically the same in the
presence of the reflection symmetry.
In the $K$-theory, the representation dependence is properly treated as
the $R(P)$-module structure in Sec.\ref{sec:module}.
In terms of the Karoubi's formulation,
the atomic insulators (a1) and (a2) are described as the triples with the
same form
\begin{eqnarray}
[E, -1, 1],
\end{eqnarray}
where $E$ is given by $|k_x\rangle_{(\rm a1)}$ and
$|k_x\rangle_{(\rm a2)}$, respectively.
Indeed, since $E$ is the space of occupied states for $H=-1$ and no
occupied state exists for $H=1$, the triple corresponds to
$([E],0)=[E]-0$, which is
naturally identified with $|k_x\rangle_{(a1)}$ and $|k_x\rangle_{(a2)}$,
respectively.
Since $|k_x\rangle_{(a1)}$ and $|k_x\rangle_{(a2)}$ belong to different
representations under the reflection, they correspond to different elements
of the $R(P)$-module in the $K$-theory.
\subsection{Dimensional hierarchy}
A remarkable feature of TCIs and TCSCs is that those in different
dimensions can be related to
each other. Such a hierarchy in spatial dimension has been useful
for a systematic classification of topological insulators and
superconductors:\cite{Qi2008,Kitaev2009, Ryu2010, Teo2010}
Furthermore, owing to this property, the topological classification of the class of
crystalline insulators and superconductors protected by order-two point
groups (order-two nonsymmorphic space groups) in any dimension
reduces to that in zero dimension
(one dimension),
which makes
it possible to complete the topological classification of those classes of
systems in any dimension.\cite{ShiozakiSato2014, ShiozakiSatoGomi2016}
In this section, we discuss dimensional hierarchy for
generic TCIs and TCSCs.
\subsubsection{Dimension-raising maps}
\label{sec5:DRM}
The dimensional hierarchy is given by dimension-raising maps
in the Karoubi's formulation:
Consider a triple $[E, H_1({\bm k}), H_0({\bm k})]\in {}^\phi K_{\cal
G}^{(\tau, c)-n}(X)$ for an even $n$, or a quadruple $[E, \Gamma, H_1({\bm
k}), H_0({\bm k})]\in {}^\phi K_{\cal G}^{(\tau, c)-n}(X)$
for an odd $n$, which describes a relative topological difference
of crystalline insulators or superconductors in $d$-dimensions.
We assume that $[E, H_1({\bm k}), H_0({\bm k})]\neq 0$ or
$[E, \Gamma, H_1({\bm k}), H_0({\bm k})]\neq 0$, which
implies that $H_1(\bm{k})$ has a ``nonzero topological charge'' relative to
$H_0(\bm{k})$ on $X$.
To construct dimension-raising maps, we consider a one-parameter
Hamiltonian $H_{10}({\bm k}, m)$,
where $m\in [-1,1]$ is a parameter
connecting $H_{10}({\bm k},-1)=H_0({\bm k})$ and $H_{10}({\bm k},1)=H_1({\bm
k})$, and $H_{10}({\bm k}, m)$ keeps the same symmetry constraint as
$H_1({\bm k})$ and $H_0(\bm{k})$.
For example, the following one-parameter Hamiltonian satisfies this
requirement,
\begin{eqnarray}
H_{10}({\bm k},m)=\left\{
\begin{array}{ll}
m H_0({\bm k}),& \mbox{for $m\in [-1,0]$}\\
m H_{1}({\bm k}), &\mbox{for $m\in (0,1]$}\\
\end{array}
\right. .
\label{eq5:interpolation}
\end{eqnarray}
Note that $H_{10}({\bm k}, m)$ should have a gap-closing topological phase
transition point
in the middle region of $m \in [-1,1]$,
since $H_1(\bm{k})$ and $H_0(\bm{k})$ have different topological charges.
See Fig.~\ref{Fig:HamiltonianMap} [a].
In $H_{10}({\bm k}, m)$ of Eq.(\ref{eq5:interpolation}),
the gap-closing point is given at $m=0$.
Depending on the absence (for an even $n$) or presence (for an odd $n$) of
chiral symmetry, we have a map from the Hamiltonians on $X$ to
a new Hamiltonian $H(\bm{k},\hat n)$ on $X\times S^d$,
which has the same topological charge as $H_1(\bm{k})$,
in the following manner.
\begin{figure}[!]
\begin{center}
\includegraphics[width=\linewidth, trim=0cm 1cm 0cm 0cm]{HamiltonianMap.pdf}
\end{center}
\caption{
[a] A parameter $m$ connecting two different topological phases.
[b] The dimensional raising map from $X$ to $X \times S^1$.
}
\label{Fig:HamiltonianMap}
\end{figure}
{\it $\gamma$ matrices}---
For preparation, we introduce the following $\gamma$ matrices,
\begin{align}
&\gamma_1^{(k)}=\sigma_y\otimes \underbrace{\sigma_z\otimes \cdots
\otimes \sigma_z}_{k-1},
&\gamma_2^{(k)}&=-\sigma_x\otimes \underbrace{\sigma_z\otimes \cdots
\otimes \sigma_z}_{k-1},
\nonumber\\
&\gamma_3^{(k)}=\sigma_0\otimes\sigma_y\otimes
\underbrace{\sigma_z\otimes \cdots
\otimes \sigma_z}_{k-2},
&\gamma_4^{(k)}&=\sigma_0\otimes(-\sigma_x)\otimes
\underbrace{\sigma_z\otimes \cdots \otimes \sigma_z}_{k-2},
\nonumber\\
&\quad\vdots & \vdots
\nonumber\\
&\gamma_{2k-1}^{(k)}=\underbrace{\sigma_0\otimes\cdots\otimes\sigma_0}_{k-1}
\otimes\sigma_y,
&
\gamma_{2k}^{(k)}&=\underbrace{\sigma_0\otimes\cdots\otimes\sigma_0}_{k-1}
\otimes(-\sigma_x),
\end{align}
and
$\gamma_{2k+1}^{(k)}=\sigma_z\otimes\cdots\otimes\sigma_z$,
which obey $\{\gamma^{(k)}_i,\gamma^{(k)}_j\}=2\delta_{i,j}$.
They also satisfy
\begin{eqnarray}
\gamma_i^{(k)}\otimes \gamma_{2l+1}^{(l)}=\gamma_i^{(k+l)},
\quad
\underbrace{\sigma_0\otimes \cdots\otimes \sigma_0}_{k}
\otimes \gamma_{j}^{(l)}=\gamma_{2k+j}^{(k+l)},
\quad
\gamma_{2k+1}^{(k)}\otimes \gamma_{2l+1}^{(l)}=\gamma_{2(k+l)+1}^{(k+l)},
\end{eqnarray}
for $i=1,\dots, 2k$ and $j=1,\dots, 2l$.
We also define $\gamma_1^{(0)}$ as $\gamma_1^{(0)}=1$.
The $\gamma$ matrices are useful to construct
dimension-raising maps.
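The Clifford algebra of the $\gamma$ matrices can be verified directly. The numpy sketch below (our own check, for $k=2$, i.e. $4\times 4$ matrices) builds $\gamma_i^{(k)}$ from the Kronecker pattern above and confirms $\{\gamma_i^{(k)},\gamma_j^{(k)}\}=2\delta_{ij}$, together with one instance of the composition rule $\gamma_i^{(k)}\otimes\gamma_{2l+1}^{(l)}=\gamma_i^{(k+l)}$:

```python
import numpy as np
from functools import reduce

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0 + 0j, -1.0])

def gamma(i, k):
    """gamma_i^{(k)}, i = 1..2k+1, from the Kronecker pattern in the text."""
    if i == 2 * k + 1:
        return reduce(np.kron, [sz] * k)       # gamma_{2k+1} = sz (x) ... (x) sz
    m = (i + 1) // 2                           # slot carrying sigma_y / -sigma_x
    core = sy if i % 2 == 1 else -sx
    return reduce(np.kron, [s0] * (m - 1) + [core] + [sz] * (k - m))

k = 2
clifford_ok = all(
    np.allclose(gamma(i, k) @ gamma(j, k) + gamma(j, k) @ gamma(i, k),
                2 * (i == j) * np.eye(2 ** k))
    for i in range(1, 2 * k + 2) for j in range(1, 2 * k + 2))

# one instance of the composition rule: gamma_1^{(1)} (x) gamma_3^{(1)} = gamma_1^{(2)}
comp_ok = np.allclose(np.kron(gamma(1, 1), gamma(3, 1)), gamma(1, 2))
print("Clifford algebra holds for k = 2:", clifford_ok and comp_ok)
```

The same function extends to any $k$; only the $2^k$-dimensional matrix size grows.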
{\it Maps from nonchiral class}---
For an even $n$, $H_{10}({\bm k}, m)$ does not have
chiral symmetry.
Here we construct the dimension-raising map that changes the base
space $X$ into $X\times S^{2r-1}$ or $X\times S^{2r}$ $(r=1,2,\dots)$
in this nonchiral case.
For this purpose, we first formally increase the rank of the Hamiltonian
\begin{eqnarray}
\mathbb{H}_{10}({\bm k}, m)=H_{10}({\bm k},m)\otimes \gamma_{2r+1}^{(r)},
\label{eq5:Hext}
\end{eqnarray}
and that of symmetry operators $\mathbb{U}_g(\bm{k})$,
\begin{eqnarray}
\mathbb{U}_g({\bm k})=\left\{
\begin{array}{ll}
U_g({\bm k})\otimes\underbrace{\sigma_0\otimes\cdots\otimes\sigma_0}_{r},
& \mbox{for $c(g)=1$},\\
U_g({\bm k})\otimes\gamma_{2r+1}^{(r)},
&\mbox{for $c(g)=-1$},
\end{array}
\right.
\label{eq5:Uext}
\end{eqnarray}
by using the $\gamma$ matrices.
$\mathbb{H}_{10}({\bm k},m)$ and $\mathbb{U}_g({\bm k})$
keep the same symmetry relations as $H_{10}({\bm k}, m)$ and $U_g({\bm
k})$,
\begin{eqnarray}
\mathbb{U}_g(\bm{k})
\mathbb{H}_{10}({\bm k}, m)
\mathbb{U}^{-1}_g(\bm{k})=c(g)\mathbb{H}_{10}(g\bm{k}),
\quad
\mathbb{U}_g(g'\bm{k})\mathbb{U}_{g'}(\bm{k})=e^{i\tau_{g,g'}(\bm{k})}\mathbb{U}_{gg'}(\bm{k}),
\quad
\mathbb{U}_g(\bm{k} )i =\phi(g) i\mathbb{U}_g(\bm{k} ),
\end{eqnarray}
but there appear additional chiral symmetries
\begin{eqnarray}
\{\mathbb{H}_{10}(\bm{k}, m), \mathbb{\Gamma}_i^{(\pm)}\}=0,
\quad
(i=1,\dots, r)
\end{eqnarray}
with
\begin{eqnarray}
\mathbb{U}_g(\bm{k})
\mathbb{\Gamma}_i^{(\pm)}
\mathbb{U}^{-1}_g(\bm{k})=c(g)\mathbb{\Gamma}^{(\pm)}_i,
\quad
\{\mathbb{\Gamma}_i^{(+)}, \mathbb{\Gamma}_j^{(+)}\}=2\delta_{i,j},
\quad
\{\mathbb{\Gamma}_i^{(-)}, \mathbb{\Gamma}_j^{(-)}\}=-2\delta_{i,j},
\quad
\{\mathbb{\Gamma}_i^{(+)}, \mathbb{\Gamma}_j^{(-)}\}=0,
\end{eqnarray}
where the chiral operators $\mathbb{\Gamma}_i^{(\pm)}$ $(i=1,\dots,r)$
are defined as
\begin{eqnarray}
\mathbb{\Gamma}_i^{(+)}=1\otimes \gamma_{2i}^{(r)},
\quad
\mathbb{\Gamma}_i^{(-)}=1\otimes i\gamma_{2i-1}^{(r)}.
\label{eq5:Gext}
\end{eqnarray}
Note that
$\mathbb{\Gamma}_i^{(+)}\mathbb{\Gamma}_i^{(-)}$
($i=1,\dots, r$)
commute with $\mathbb{H}_{10}({\bm k}, m)$, $\mathbb{U}_g(\bm{k})$, and
each other.
Since $\mathbb{H}_{10}({\bm k}, m)$ and $\mathbb{U}_g(\bm{k})$ reduce to
$H_{10}({\bm k}, m)$ and $U_g({\bm k})$ in the diagonal basis of
$\mathbb{\Gamma}_i^{(+)}\mathbb{\Gamma}_i^{(-)}=\pm 1$, $\mathbb{H}_{10}({\bm
k}, m)$ retains the same topological properties as $H_{10}({\bm k}, m)$.
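For $r=1$ the extended operators in Eqs.(\ref{eq5:Hext})--(\ref{eq5:Gext}) are small enough to check numerically. The sketch below is our own verification; the flattened $2\times 2$ Hamiltonian $H_{10}$ is an arbitrary choice. It confirms the algebra of $\mathbb{\Gamma}_1^{(\pm)}$ and that $\mathbb{\Gamma}_1^{(+)}\mathbb{\Gamma}_1^{(-)}$ commutes with $\mathbb{H}_{10}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

# r = 1: gamma_1 = sy, gamma_2 = -sx, gamma_3 = sz
Gp = np.kron(I2, -sx)        # Gamma_1^{(+)} = 1 (x) gamma_2
Gm = np.kron(I2, 1j * sy)    # Gamma_1^{(-)} = 1 (x) i gamma_1

H10 = np.cos(0.3) * sz + np.sin(0.3) * sx   # an arbitrary flattened 2x2 Hamiltonian
Hext = np.kron(H10, sz)                     # H_ext = H10 (x) gamma_3

checks = [
    np.allclose(Gp @ Gp, np.eye(4)),                  # {G+, G+} = 2
    np.allclose(Gm @ Gm, -np.eye(4)),                 # {G-, G-} = -2
    np.allclose(Gp @ Gm + Gm @ Gp, 0),                # {G+, G-} = 0
    np.allclose(Hext @ Gp + Gp @ Hext, 0),            # chiral: anticommutes with H_ext
    np.allclose(Hext @ Gm + Gm @ Hext, 0),
    np.allclose((Gp @ Gm) @ Hext, Hext @ (Gp @ Gm)),  # G+ G- commutes with H_ext
]
print("extended chiral algebra verified:", all(checks))
```

Here $\mathbb{\Gamma}_1^{(+)}\mathbb{\Gamma}_1^{(-)}=1\otimes\sigma_z$, whose diagonal blocks reproduce the original $H_{10}$, consistent with the statement in the text.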
The following equation defines the dimension-raising map from $H_{10}({\bm k}, m)$ on $X$
to the Hamiltonian $H(\bm{k},\hat n)$ on $X\times S^{2r-1}$,
\begin{align}
H(\bm{k},\hat n)
=\mathbb{H}_{10}(\bm{k},n_0)
+in_{1} \mathbb{\Gamma}_1^{(-)}
+\cdots
+in_{r} \mathbb{\Gamma}_r^{(-)}
+n_{r+1}\mathbb{\Gamma}_1^{(+)}
+\cdots
+n_{2r-1}\mathbb{\Gamma}_{r-1}^{(+)},
\label{eq:dimensional_raise_nonchiral}
\end{align}
where we introduced the spherical coordinate
$\hat n=(n_0,\bm{n})=(n_0,n_1,\dots,n_{2r-1})$ with $n_0^2+\bm{n}^2=1$.
The obtained Hamiltonian is fully gapped and can be flattened because
$H({\bm k},\hat n)^2=H_{10}({\bm k},n_0)^2+\bm{n}^2$ is
positive definite.
In particular, for $H_{10}({\bm k},m)$ in Eq.(\ref{eq5:interpolation}),
one can show directly that $H({\bm k},\hat n)^2=1$.
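This flattening property can be confirmed numerically in the simplest case $r=1$. The sketch below is our own toy model, not from the text; the flattened Hamiltonians $H_0$ and $H_1$ are arbitrary choices obeying $H_i^2=1$. The mapped Hamiltonian of Eq.(\ref{eq:dimensional_raise_nonchiral}), built from the interpolation Eq.(\ref{eq5:interpolation}), squares to the identity everywhere on the circle:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0 + 0j, -1.0])

H0 = -sz                                   # two flattened reference Hamiltonians
H1 = np.cos(0.7) * sz + np.sin(0.7) * sx   # (H0^2 = H1^2 = 1)

def H10(m):
    # the interpolation of Eq. (eq5:interpolation): m*H0 for m <= 0, m*H1 for m > 0
    return m * H0 if m <= 0 else m * H1

flat = True
for theta in np.linspace(0, np.pi, 101):
    n0, n1 = np.cos(theta), np.sin(theta)
    # r = 1 map: H(n) = H10(n0) (x) sz + i n1 * Gamma_1^{(-)},
    # with Gamma_1^{(-)} = 1 (x) i sy
    H = np.kron(H10(n0), sz) + 1j * n1 * np.kron(np.eye(2), 1j * sy)
    flat = flat and np.allclose(H @ H, np.eye(4))
print("mapped Hamiltonian squares to the identity:", flat)
```

The cross terms cancel because the two summands anticommute, leaving $H^2=(n_0^2+n_1^2){\bm 1}={\bm 1}$, the statement of the text for this interpolation.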
We can also extend the symmetry group ${\cal G}$ on $X$ into that on $X\times S^{2r-1}$:
The simplest extension is that $g\in {\cal G}$ acts on
$S^{2r-1}$ trivially. For anti-unitary operators, however,
the momentum and the coordinate behave in a different manner under the
trivial action.
While the momentum changes the sign under the
trivial action of anti-unitary operators, the coordinate does not.
Correspondingly, there exist two different trivial extensions:
For the momentum sphere $S^{2r-1}$, the trivial extension is given by
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r-1}^{(+)}\cdots\mathbb{\Gamma}_1^{(+)}\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$} \\
\end{array}
\right.,
\label{eq5:AUSM}
\end{eqnarray}
which yields
\begin{eqnarray}
U_g(\bm{k}, \hat{n})H(\bm{k},\hat{n})U_g^{-1}(\bm{k},
\hat{n})=c(g)[\phi(g)]^{r-1}H(\bm{k}, n_0,\phi(g){\bm n}),
\label{eq5:HM}
\end{eqnarray}
and for $S^{2r-1}$ in the coordinate space,
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r}^{(-)}\cdots\mathbb{\Gamma}_1^{(-)}\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$} \\
\end{array}
\right.,
\label{eq5:AUSS}
\end{eqnarray}
which leads to
\begin{eqnarray}
U_g(\bm{k}, \hat{n})H(\bm{k},\hat{n})U_g^{-1}(\bm{k},
\hat{n})=c(g)[\phi(g)]^{r}H(\bm{k}, n_0,{\bm n}).
\end{eqnarray}
Here note that ${\bm n}$ changes the sign under the action of
anti-unitary operators in the former extension.
(See also Sec.~\ref{sec:momentum_sphere}.)
The mapped Hamiltonian also has a chiral symmetry:
\begin{eqnarray}
\{H(\bm{k},\hat n), \Gamma\}=0,
\quad
\Gamma=\mathbb{\Gamma}^{(+)}_{r}.
\end{eqnarray}
From Eqs.(\ref{eq5:AUSM}) and (\ref{eq5:AUSS}), one can calculate directly how the twist
$(\tau, c)$ changes for the momentum sphere extension and the coordinate sphere extension,
respectively. In these cases, the change of the twist results in
the change of the grading.
The grading integer $n$ is increased (decreased) by
$2r-1$ for the momentum (coordinate) sphere case.
Figure \ref{Fig:HamiltonianMap}[b] illustrates the map in the $r=1$
case,
\begin{align}
H(\bm{k},\theta)&=\mathbb{H}_{10}(\bm{k}, \cos\theta)+i\sin\theta\mathbb{\Gamma}_1^{(-)}
\nonumber\\
&=H_{10}(\bm{k}, \cos\theta)\otimes \sigma_z -\sin \theta \otimes \sigma_y.
\end{align}
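As a numerical sanity check (a sketch, not part of the construction above), one can evaluate the $r=1$ map for the explicit interpolation $H_{10}(k_x,m)=(m-1+\cos k_x)\sigma_z-\sin k_x\,\sigma_y$ used in the class AIII example below; although $H_{10}$ itself closes its gap at $m=0$, $k_x=0$, the minimal eigenvalue of $H(k,\theta)^2$ stays at unity over the whole $(k,\theta)$ torus:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def h10(k, m):
    # interpolation Hamiltonian borrowed from the class AIII example below
    # (an assumption: any interpolation gapped at m = +/-1 behaves similarly)
    return (m - 1 + np.cos(k)) * sz - np.sin(k) * sy

def h_mapped(k, theta):
    # r = 1 map: H(k, theta) = H10(k, cos theta) (x) sigma_z - sin theta (x) sigma_y
    return np.kron(h10(k, np.cos(theta)), sz) - np.sin(theta) * np.kron(I2, sy)

ks = np.linspace(0, 2 * np.pi, 101)
ts = np.linspace(0, 2 * np.pi, 101)
min_gap2 = min(
    np.linalg.eigvalsh(h_mapped(k, t) @ h_mapped(k, t)).min()
    for k in ks for t in ts
)
print(min_gap2)  # -> 1.0 (up to rounding): the map never closes the gap
```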
When $\theta=0$ and $\theta=\pi$, the mapped Hamiltonian $H(\bm{k},
\theta)$ is essentially
the same as $H_1(\bm{k})$ and $H_0(\bm{k})$, respectively.
Then, while keeping the gap, $H_{1}(\bm{k})$ and
$H_0(\bm{k})$ are extended in the $\theta$-direction and
glued together.
In the above construction, the nonzero topological charge of $H_1({\bm
k})$, which is illustrated as a ``vortex'' in Fig.\
\ref{Fig:HamiltonianMap}[b],
becomes a ``monopole'' inside $X \times S^1$. Therefore,
$H(\bm{k},\theta)$ has the same topological charge
as $H_1(\bm{k})$.
The same argument works for any $r$.
Thus, the mapped Hamiltonian $H({\bm k}, \hat n)$ also has the same topological charge as
the original Hamiltonian $H_1(\bm{k})$.
For the dimension-raising map from $X$ to $X\times S^{2r}$,
we consider the following Hamiltonian
\begin{align}
H(\bm{k},\hat n)
=\mathbb{H}_{10}(\bm{k},n_0)+in_1\mathbb{\Gamma}_1^{(-)}
+\cdots+
in_r\mathbb{\Gamma}_r^{(-)}
+n_{r+1}\mathbb{\Gamma}_1^{(+)}
+\cdots
+n_{2r}\mathbb{\Gamma}_{r}^{(+)},
\label{eq:dimensional_raise_nonchiral2}
\end{align}
which is also gapped and has the
same topological charge as $H_1(\bm{k})$.
We also have the trivial extensions of ${\cal G}$,
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r}^{(+)}\cdots\mathbb{\Gamma}_1^{(+)}\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$} \\
\end{array}
\right.,
\label{eq5:AUSM2}
\end{eqnarray}
for the momentum sphere $S^{2r}$, and
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r}^{(-)}\cdots\mathbb{\Gamma}_1^{(-)}\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$} \\
\end{array}
\right.,
\label{eq5:AUSS2}
\end{eqnarray}
for the coordinate sphere $S^{2r}$.
The mapped Hamiltonian does not have chiral symmetry.
The above extension increases (decreases) the grading integer $n$ by
$2r$ for the momentum (coordinate) extension.
{\it Map from chiral class}---
For an odd $n$, $H_{10}(\bm{k},m)$ has chiral symmetry $\Gamma$, and
the dimension-raising map is constructed in a manner parallel to
the even-$n$ case, with a minor modification.
Using $\Gamma$, we first introduce $\mathbb{\Gamma}$ by
\begin{eqnarray}
\mathbb{\Gamma}=\Gamma\otimes \gamma_{2r+1}^{(r)},
\label{eq:introduce_new_chiral}
\end{eqnarray}
as well as $\mathbb{H}_{10}(\bm{k}, m)$, $\mathbb{U}_g(\bm{k})$, and
$\mathbb{\Gamma}_i^{(\pm)}$ $(i=1,\dots, r)$ defined in Eqs.(\ref{eq5:Hext}),
({\ref{eq5:Uext}}), and ({\ref{eq5:Gext}}), respectively.
Since $\Gamma$ obeys
$U_g(\bm{k})\Gamma U_g(\bm{k})^{-1}=c(g)\Gamma$,
we have
\begin{eqnarray}
\mathbb{U}_{g}(\bm{k})\mathbb{\Gamma}\mathbb{U}^{-1}_g(\bm{k})=c(g)\mathbb{\Gamma}.
\end{eqnarray}
For the dimension-raising map from $X$ to $X\times S^{2r+1}$
$(r=0,1,\dots)$, we
consider
\begin{align}
H(\bm{k}, \hat n)
=\mathbb{H}_{10}(\bm{k},n_0)
+in_1\mathbb{\Gamma}_1^{(-)}
+\cdots+
in_r\mathbb{\Gamma}_r^{(-)}
+n_{r+1}\mathbb{\Gamma}_1^{(+)}
+\cdots
+n_{2r}\mathbb{\Gamma}_{r}^{(+)}
+n_{2r+1}\mathbb{\Gamma},
\label{eq:dimensional_raise_chiral}
\end{align}
where the extension of ${\cal G}$ is given by
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r}^{(+)}\cdots\mathbb{\Gamma}_1^{(+)}
\mathbb{\Gamma}
\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$} \\
\end{array}
\right.,
\end{eqnarray}
for the momentum sphere $S^{2r+1}$,
and
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r}^{(-)}\cdots\mathbb{\Gamma}_1^{(-)}\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$} \\
\end{array}
\right.,
\end{eqnarray}
for the coordinate sphere $S^{2r+1}$.
The mapped Hamiltonian $H(\bm{k},\hat n)$ does not
have chiral symmetry.
On the other hand, for the map from $X$ to $X\times S^{2r}$
$(r=1,2,\dots)$, we have
\begin{align}
H(\bm{k}, \hat n)
=\mathbb{H}_{10}(\bm{k},n_0)
+in_1\mathbb{\Gamma}_1^{(-)}
+\cdots+
in_r\mathbb{\Gamma}_r^{(-)}
+n_{r+1}\mathbb{\Gamma}_1^{(+)}
+\cdots
+n_{2r-1}\mathbb{\Gamma}_{r-1}^{(+)}
+n_{2r}\mathbb{\Gamma},
\label{eq:dimensional_raise_chiral2}
\end{align}
where the extension of ${\cal G}$ is given by
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r-1}^{(+)}\cdots\mathbb{\Gamma}_1^{(+)}
\mathbb{\Gamma}
\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$}
\end{array}
\right.,
\end{eqnarray}
for the momentum sphere $S^{2r}$,
and
\begin{eqnarray}
U_g(\bm{k},\hat n)=\left\{
\begin{array}{ll}
\mathbb{U}_g(\bm{k}), & \mbox{for $\phi(g)=1$}\\
\mathbb{\Gamma}_{r}^{(-)}\cdots\mathbb{\Gamma}_1^{(-)}\mathbb{U}_g(\bm{k}),
& \mbox{for $\phi(g)=-1$}
\end{array}
\right.,
\end{eqnarray}
for the coordinate sphere $S^{2r}$.
The Hamiltonian $H(\bm{k},\hat n)$ has chiral symmetry,
\begin{eqnarray}
\{H(\bm{k},\hat n), \Gamma'\}=0,
\quad
\Gamma'=\mathbb{\Gamma}_{r}^{(+)}.
\end{eqnarray}
The maps in Eqs.(\ref{eq:dimensional_raise_chiral}) and
({\ref{eq:dimensional_raise_chiral2}}) increase (decrease) the grading
integer $n$ by $2r+1$ and $2r$, respectively, for the momentum
(coordinate) sphere extension.
For the same reason as in the even-$n$ case, the mapped Hamiltonians in
Eqs.(\ref{eq:dimensional_raise_chiral}) and ({\ref{eq:dimensional_raise_chiral2}}) keep the same topological charge as the
starting Hamiltonian $H_1({\bm k})$.
{\it Isomorphism}---
The dimension-raising maps keep the topological charge, while shifting
the grading of the Hamiltonian and the dimension of the base manifold.
In terms of the $K$-theory, these results are summarized as the isomorphism
\begin{eqnarray}
{}^\phi K^{\pi^*(\tau,c)-n}_{\cal G}(X \times S^D) \cong
\underbrace{{}^{\phi} K^{(\tau,c)-(n-D)}_{\cal G}(X)}_{S^D {\rm \mathchar`- dependent\ contribution}}
\oplus
\underbrace{{}^\phi K^{(\tau,c)-n}_{\cal G}(X)}_{S^D {\rm \mathchar`-
independent\ contribution}},
\label{Eq:DimShift_general}
\end{eqnarray}
for the momentum sphere $S^D$, and
\begin{eqnarray}
{}^\phi K^{\pi^*(\tau,c)-n}_{\cal G}(X \times S^D) \cong
\underbrace{{}^{\phi} K^{(\tau,c)-(n+D)}_{\cal G}(X)}_{S^D {\rm \mathchar`- dependent\ contribution}}
\oplus
\underbrace{{}^\phi K^{(\tau,c)-n}_{\cal G}(X)}_{S^D {\rm \mathchar`-
independent\ contribution}},
\label{Eq:DimShift_general2}
\end{eqnarray}
for the coordinate space sphere $S^D$.
Here ${\cal G}$ acts on $S^D$ trivially,
and $\pi^*$ is the pullback of the obvious projection $\pi$: $X\times
S^{D}\to X$.
Strictly speaking, the twist for $U_g({\bm k},\hat n)$ is
defined on $X\times S^{D}$, not on $X$, so
to make this clear, we denote the
twist of $U_g({\bm k},\hat n)$ as $\pi^* (\tau, c)$.
The mapped Hamiltonian $H(\bm{k},\hat n)$ gives an
element of ${}^\phi K^{\pi^*(\tau,c)-n}_{\cal G}(X \times S^D)$
corresponding to the first term of the right hand side in
Eq.(\ref{Eq:DimShift_general}) or (\ref{Eq:DimShift_general2}).
The second terms in Eqs.(\ref{Eq:DimShift_general}) and
(\ref{Eq:DimShift_general2})
are trivial contributions
from Hamiltonians independent of $S^D$.
The exact relation between a mapped Hamiltonian $H(\bm{k},\hat n)$
and an element
of the $K$-group is obtained as follows:
Starting from the zero element $[E, H_0,
H_0]=0$ or $[E, \Gamma, H_0, H_0]=0$
in ${}^\phi K_{\cal G}^{(\tau, c)-(n\mp D)}(X)$,
we first construct a topologically trivial Hamiltonian $H_0(\bm{k},\hat n)$ using the dimension-raising map.
Then the element of ${}^\phi K_{\cal G}^{\pi^*(\tau,
c)-n}(X\times S^D)$
is given by the triple
$[E, H, H_0]$ on $X\times S^D$ or the quadruple $[E, \Gamma, H, H_0]$
on $X\times S^D$.
In Appendix \ref{Sec:Gysin}, we outline the proof of the isomorphisms
by using the Gysin sequence.
As discussed below, the first terms in the isomorphisms
ensure the existence of gapless
boundary and defect states of TCIs and TCSCs.
\subsubsection{Momentum sphere $S^D$}
\label{sec:momentum_sphere}
In the previous section, we have introduced the momentum sphere $S^D$
parameterized by $\hat{n}=(n_0,{\bm n})$ with $n_0^2+{\bm n}^2=1$.
Here we explain its relation to the actual momentum space.
For the simplest case $S^1$, the momentum sphere can be naturally
identified with the one-dimensional BZ, where $\hat{n}$ is given in the
form of $(n_0, n_1)=(\cos k, \sin k)$ with momentum $k$.
Under the action of anti-unitary operators, $k$ goes to $-k$, so only
$n_1$ changes the sign. This behavior is consistent with Eq.(\ref{eq5:HM}).
Moreover, a general $S^D$ can be regarded as a compactified
$D$-dimensional momentum space.
Using the following map
\begin{eqnarray}
{\bm k}=\frac{\bm n}{1+n_0},
\end{eqnarray}
one can obtain the original decompactified $D$-dimensional momentum space.
Thus, the sign change of ${\bm k}$ is induced by the transformation
$(n_0,{\bm n})\rightarrow (n_0, -{\bm n})$.
This behavior is also consistent with Eq.(\ref{eq5:HM}).
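A few lines of numerics confirm that the inverse map, $n_0=(1-{\bm k}^2)/(1+{\bm k}^2)$ and ${\bm n}=2{\bm k}/(1+{\bm k}^2)$ (the explicit inverse formula is our assumption), lands on the sphere, inverts ${\bm k}={\bm n}/(1+n_0)$, and sends ${\bm n}\to-{\bm n}$ to ${\bm k}\to-{\bm k}$; a sketch for $D=3$:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    k = rng.normal(size=3)  # a point of the decompactified momentum space, D = 3
    k2 = k @ k
    # assumed inverse stereographic map onto the momentum sphere S^3
    n0 = (1 - k2) / (1 + k2)
    n = 2 * k / (1 + k2)
    assert abs(n0**2 + n @ n - 1) < 1e-12   # (n0, n) lies on the sphere
    assert np.allclose(n / (1 + n0), k)     # k = n / (1 + n0) is recovered
    assert np.allclose(-n / (1 + n0), -k)   # n -> -n induces k -> -k
print("ok")
```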
We also note that $O(D+1)$ rotations of $S^D$ that fix the north
($n_0=1$) and south ($n_0=-1$) poles induce $O(D)$ rotations around the
origin in the decompactified momentum space.
This property will be used in Sec.\ref{sec5:more}.
\subsubsection{Examples}
{\it $d=0$ class A $\to$ $d=1$ class AIII}---
Let us consider class A insulators in 0-space dimension.
The $K$-group is $K^0(pt) = \mathbb{Z}$, and the generator of $K^0(pt)$
is represented by the triple $[\mathbb{C},1,-1]$.
Then, the mapped Hamiltonian (\ref{eq:dimensional_raise_nonchiral}) reads
\begin{align}
H(k_x) = \cos k_x \sigma_z - \sin k_x \sigma_y, \quad
\Gamma = -\sigma_x,
\label{eq:model_1d_class_aiii}
\end{align}
which leads to the $K$-theory isomorphism
\begin{align}
K^{-1}(S^1) \cong K^{0}(pt) \oplus K^{-1}(pt) = K^{0}(pt)=\mathbb{Z}.
\end{align}
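A short numerical check (a sketch; the winding-number formula is the standard one for one-dimensional chiral classes, not taken from the text) confirms both the chiral symmetry and the unit topological charge of this generator:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h(k):
    # H(k_x) = cos k_x sigma_z - sin k_x sigma_y from the text
    return np.cos(k) * sz - np.sin(k) * sy

gamma = -sx
ks = np.linspace(0, 2 * np.pi, 401)
# chiral symmetry {H, Gamma} = 0 at every k
assert all(np.allclose(h(k) @ gamma + gamma @ h(k), 0) for k in ks)

# winding of the planar vector (h_y, h_z) = (-sin k, cos k) around the origin
angles = np.unwrap(np.arctan2(np.cos(ks), -np.sin(ks)))
w = (angles[-1] - angles[0]) / (2 * np.pi)
print(round(w))  # winding of magnitude 1, matching the generator of Z
```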
{\it $d=1$ class AIII $\to d=2$ class A}---
Let us consider the $K$-theory isomorphism
\begin{equation}
K^{0}(T^2)
\cong K^{1}(S^1) \oplus K^{0}(S^1) = K^{-1}(S^1) \oplus K^{0}(S^1) =
\mathbb{Z} \oplus \mathbb{Z}.
\end{equation}
The second term is a weak index.
The first term is given by the dimensional raising map.
From Eq.\ (\ref{eq:model_1d_class_aiii}),
a Hamiltonian $H_{10}(k_x,m)$ connecting the topological phase ($1 \in \mathbb{Z}$)
and the trivial phase ($0 \in \mathbb{Z}$) is given by
\begin{align}
H_{10}(k_x,m) = ( m -1 + \cos k_x ) \sigma_z -\sin k_x \sigma_y, \quad
\Gamma = -\sigma_x, \quad
m \in [-1,1].
\end{align}
Then, the mapped Hamiltonian (\ref{eq:dimensional_raise_chiral}) becomes
\begin{equation}
H(k_x,k_y) = ( -1 + \cos k_x + \cos k_y ) \sigma_z - \sin k_x \sigma_y - \sin k_y \sigma_x.
\end{equation}
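One can verify numerically that this mapped Hamiltonian carries a unit Chern number, e.g.\ with the standard Fukui--Hatsugai lattice discretization (a sketch; the discretization and sign conventions are ours, not taken from the text):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h(kx, ky):
    # the mapped two-dimensional class A Hamiltonian from the text
    return ((-1 + np.cos(kx) + np.cos(ky)) * sz
            - np.sin(kx) * sy - np.sin(ky) * sx)

def chern(h, n=40):
    """Fukui-Hatsugai lattice Chern number of the lower band."""
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h(kx, ky))[1][:, 0]  # lower-band state
    def link(a, b):
        z = np.vdot(a, b)
        return z / abs(z)
    flux = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            flux += np.angle(link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                             * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
    return round(flux / (2 * np.pi))

C = chern(h)
print(C)  # magnitude 1; the sign depends on orientation conventions
```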
\subsubsection{More on dimension-raising maps}
\label{sec5:more}
To construct the dimension-raising maps in Sec.\ref{sec5:DRM},
we have considered the trivial extension of symmetry ${\cal G}$ from $X$
to $X\times S^D$.
Here we present different dimension-raising maps by using a non-trivial
extension of ${\cal G}$.
For simplicity, we only present here maps from non-chiral systems, but the
generalization to the chiral case is straightforward.
As shown in Sec.\ref{sec5:DRM}, we have the following
set of equations before increasing the dimension of the base manifold,
\begin{eqnarray}
&&\mathbb{U}_g(\bm{k})
\mathbb{H}_{10}(\bm{k})
\mathbb{U}^{-1}_g(\bm{k})=c(g)\mathbb{H}_{10}(g\bm{k}),
\quad
\mathbb{U}_g(\bm{k})
\mathbb{\Gamma}_i^{(\pm)}
\mathbb{U}^{-1}_g(\bm{k})=c(g)\mathbb{\Gamma}_i^{(\pm)},
\quad
\mathbb{U}_g(g'\bm{k})\mathbb{U}_{g'}(\bm{k})=e^{i\tau_{g,g'}(\bm{k})}\mathbb{U}_{gg'}(\bm{k}),
\nonumber\\
&&\{\mathbb{\Gamma}_i^{(+)}, \mathbb{\Gamma}_j^{(+)}\}=2\delta_{ij},
\quad
\{\mathbb{\Gamma}_i^{(-)}, \mathbb{\Gamma}_j^{(-)}\}=-2\delta_{ij},
\quad
\{\mathbb{\Gamma}_i^{(+)}, \mathbb{\Gamma}_j^{(-)}\}=0.
\end{eqnarray}
For the nontrivial extension, we take into account $SO(D)$ generators
\begin{eqnarray}
\mathbb{M}^{(++)}_{ij}
=\frac{[\mathbb{\Gamma}_i^{(+)}, \mathbb{\Gamma}_j^{(+)}]}{2i},
\quad
\mathbb{M}^{(--)}_{ij}
=\frac{[\mathbb{\Gamma}_i^{(-)}, \mathbb{\Gamma}_j^{(-)}]}{2i},
\quad
\mathbb{M}^{(+-)}_{ij}
=\frac{[\mathbb{\Gamma}_i^{(+)}, \mathbb{\Gamma}_j^{(-)}]}{2i}.
\end{eqnarray}
By using them, a map from $g\in {\cal G}$ to $\mathbb{V}_g\in
Pin(D)$ (the double cover of $O(D)$) can
be expressed as
\begin{eqnarray}
\mathbb{V}_g=\left\{
\begin{array}{ll}
\exp\left[
i\sum_{ij\sigma\sigma'}\mathbb{M}_{ij}^{(\sigma
\sigma')}\theta_{\sigma\sigma'}^{ij}(g)
\right],
&
\mbox{for $p_V(g)=0$}
\\
\mathbb{\Gamma}_1^{(+)}
\exp\left[
i\sum_{ij\sigma\sigma'}\mathbb{M}_{ij}^{(\sigma
\sigma')}\theta_{\sigma\sigma'}^{ij}(g)
\right],
&
\mbox{for $p_V(g)=1$}
\end{array}
\right. ,
\end{eqnarray}
where $p_V(g)$ is the index distinguishing two
different forms of $\mathbb{V}_g$.
The index $p_V(g)$ satisfies
\begin{eqnarray}
p_V(gg')=p_V(g)+p_V(g')\quad (\mbox{mod 2}).
\end{eqnarray}
If the map keeps the group structure of ${\cal G}$ as
\begin{eqnarray}
\mathbb{V}_g \mathbb{V}_{g'}=e^{i\tau_V(g,g')}\mathbb{V}_{gg'},
\label{eq5:conditionVg}
\end{eqnarray}
where the twist $e^{i\tau_V(g,g')}=\pm 1\in \omega$ is allowed from the
projective nature of $Pin(D)$,
we can use $\mathbb{U}_g^V({\bm k})$ defined by
\begin{eqnarray}
\mathbb{U}^V_g(\bm{k})=\mathbb{V}_g\mathbb{U}_g(\bm{k}),
\end{eqnarray}
instead of
$\mathbb{U}_g({\bm k})$, to construct the symmetry operator on $X\times
S^D$ in Eqs.(\ref{eq5:AUSM}) and ({\ref{eq5:AUSS}}) (or Eqs.
(\ref{eq5:AUSM2}) and (\ref{eq5:AUSS2})).
The presence of $\mathbb{V}_g$ induces an $O(D+1)$ rotation of $S^D$ that fixes
the north pole ($n_0=1$) and the south pole ($n_0=-1$) of $S^D$.
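The induced rotation can be made explicit in a toy representation (our assumption): taking $\mathbb{\Gamma}_1^{(+)}=\sigma_x$ and $\mathbb{\Gamma}_2^{(+)}=\sigma_y$ gives $\mathbb{M}_{12}^{(++)}=\sigma_z$, and conjugation by $e^{i\mathbb{M}_{12}^{(++)}\theta/2}$ rotates the pair of gamma matrices, and hence the coefficients $(n_1,n_2)$ in the mapped Hamiltonian, by the angle $\theta$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# toy representation: Gamma_1^(+) = sigma_x, Gamma_2^(+) = sigma_y
M12 = (sx @ sy - sy @ sx) / 2j                 # M^(++)_{12} = sigma_z

theta = 0.7
# V = exp(i M12 theta / 2), written out with cos/sin since M12^2 = 1
V = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * sz

g1 = V @ sx @ V.conj().T                       # conjugated Gamma_1^(+)
g2 = V @ sy @ V.conj().T                       # conjugated Gamma_2^(+)
assert np.allclose(g1, np.cos(theta) * sx - np.sin(theta) * sy)
assert np.allclose(g2, np.sin(theta) * sx + np.cos(theta) * sy)
print("SO(2) rotation of (n1, n2) by theta confirmed")
```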
Since $\mathbb{U}_g^V(\bm{k})$ obeys
\begin{eqnarray}
\mathbb{U}^V_g(\bm{k})
\mathbb{H}_{10}(\bm{k})
(\mathbb{U}^{V}_g)^{-1}(\bm{k})=(-)^{p_V(g)}c(g)\mathbb{H}_{10}(g\bm{k}),
\quad
\mathbb{U}^V_g(g'\bm{k})\mathbb{U}^V_{g'}(\bm{k})
=[c(g)]^{p_V(g')}e^{i[\tau_{g,g'}(gg'\bm{k})+\tau_V(g,g')]}\mathbb{U}^V_{gg'}(\bm{k}),
\label{eq5:extra_twist}
\end{eqnarray}
the dimension-raising map above introduces an extra twist, in addition to
that given by the change of the grading integer.
The above dimension-raising map is summarized as the following
isomorphism in the $K$-theory,
\begin{align}
{}^\phi K^{\pi^*(\tau,c)-n}_{\cal G}(X \times S^D )
\cong {}^\phi K_{\cal G}^{(\tau,c)+(\tau_V,c_V)-(n\mp D)}(X) \oplus
{}^\phi K_{\cal G}^{(\tau,c)-n}(X),
\label{eq5:dimension_raising_rotation}
\end{align}
where ${\cal G}$ acts on $S^D$ through
${\cal G}\to O(D+1)$ with the north and south poles fixed, and $-$ $(+)$ in the
double sign corresponds to the momentum (coordinate) $S^D$.
Here $(\tau_V, c_V)$ denotes the extra twist due to
Eq.(\ref{eq5:extra_twist}).
\subsection{Building block}
As shown in the previous subsection, using the dimension-raising maps,
one can construct a sequence of mapped Hamiltonians on the manifolds
\begin{align}
X \to X \times S^{r_1} \to X \times S^{r_1} \times S^{r_2} \to X \times S^{r_1} \times S^{r_2} \times S^{r_3} \to \cdots.
\end{align}
For Hamiltonians of the form of any of these mapped Hamiltonians, the
topological classification reduces to that of the starting
lower-dimensional Hamiltonians on $X$.
Therefore, $X$ is regarded as a ``building block'' of the classification.
Some examples of building blocks with relevant symmetries are summarized
in Table~\ref{tab:building_block}.
\begin{table}
\caption{Building blocks.}
\label{tab:building_block}
\begin{tabular}{>{\centering\arraybackslash}m{10cm} | >{\centering\arraybackslash}m{3cm} | >{\centering\arraybackslash}m{3cm} }
Symmetry & Building block Brillouin zone & Related refs. \\
\hline
No symmetry & $\{ pt \}$ & \\
\hline
TRS and/or PHS & $\{ pt \}$ & \onlinecite{Schnyder2008, Kitaev2009, Ryu2010} \\
\hline
Onsite symmetry & $\{ pt \}$ & \\
\hline
\shortstack{Order-two point group symmetry \\
(reflection, $\pi$-rotation, inversion, reflection $\times$ reflection, $\dots$ )} & $\{pt\}$ & \onlinecite{Chiu2013, Morimoto2013, ShiozakiSato2014} \\
\hline
\shortstack{Order-two nonsymmorphic space group symmetry \\
(half-lattice translation, glide, two-fold screw, glide $\times$ reflection, $\dots$)} & $S^1$ & \onlinecite{ShiozakiSatoGomi2016} \\
\hline
General wallpaper group & $T^2$ & \onlinecite{Yang1997, LuckStamm2000, Alexandradinata2014, DongLiu2015, Kruthoff2016} \\
\hline
General space group & $T^3$ & \\
\end{tabular}
\end{table}
\subsection{Boundary gapless states}
\label{sec:Classification of boundary gapless states}
\begin{figure}[!]
\begin{center}
\includegraphics[width=0.5\linewidth, trim=0cm 0cm 0cm 0cm]{bulk_boundary.pdf}
\end{center}
\caption{Bulk-boundary correspondence.}
\label{fig:bulk_boundary}
\end{figure}
The isomorphism in Eq.(\ref{Eq:DimShift_general}) predicts
one of the most
important characteristics of TCIs and TCSCs, the existence of gapless boundary states:
Consider a crystalline insulator or superconductor in
$d$-dimensions with the boundary normal to the $x_{d}$-direction
as illustrated in Fig.~\ref{fig:bulk_boundary}.
Symmetry of the system compatible with the boundary should act trivially
on the $x_{d}$-direction, so it is identical to
that for ${}^\phi K_{\cal G}^{\pi^*(\tau, c)-n}(X\times S^1)$ in
Eq.(\ref{Eq:DimShift_general}), where $S^1$ is the momentum sphere
conjugate to $x_{d}$, $X$ is the surface BZ conjugate to $x_1,\dots,x_{d-1}$,
and the data of symmetry, $(\phi,\tau, c)$,
$n$, and ${\cal G}$, are properly chosen.
The $K$-group ${}^\phi K_{\cal G}^{\pi^*(\tau, c)-n}(X\times S^1)$
determines topological properties of the system with the boundary.
In particular, if the system has a non-zero topological number
corresponding to the first term ${}^\phi K_{\cal G}^{(\tau,c)-(n-1)}(X)$ of the right hand side in
Eq.(\ref{Eq:DimShift_general}),
the TCI or TCSC
hosts topologically protected gapless states on the boundary.
This is a manifestation of the bulk-boundary correspondence:
A non-trivial element of the first term implies the existence of
a topologically twisted structure of the bulk gapped system in the $k_{d}$-direction, which
manifests itself as gapless boundary states in the presence of
a boundary normal to the $x_{d}$-direction.
On the other hand, the second term of the right hand side in
Eq.(\ref{Eq:DimShift_general}) merely provides a ``weak topological
index'' that can be supported by $(d-1)$-dimensional gapped systems
trivially stacked along
the $x_{d}$-direction.
Since the stacked system is $k_{d}$-independent,
the second term does not provide any gapless state on the boundary normal to
the $x_{d}$-direction.
These important properties of TCIs and TCSCs are summarized as follows.
\begin{itemize}
\item[($\star$)]
Gapless states for crystalline insulators and superconductors in
$d$-dimensions are topologically classified by
the $K$-group ${}^\phi K_{\cal G}^{(\tau,c)-(n-1)}(X)$, where $X$ is the
$(d-1)$-dimensional surface BZ and symmetry of the system is given
by $(\phi,\tau,c)$, $n$, and ${\cal G}$.
Note that the grading of the $K$-group is shifted by $-1$ in comparison
with that of symmetry of the system: The grading of
$K$-group is $n-1$, while that of symmetry is $n$.
\end{itemize}
The dimension-raising maps (\ref{eq:dimensional_raise_nonchiral}) and
(\ref{eq:dimensional_raise_chiral}) present representative Hamiltonians
with non-zero topological numbers of the $K$-group ${}^\phi K_{\cal
G}^{(\tau,c)-n}(X)$, by which one can confirm the existence of gapless
states on the boundary.
The gapless states on the surface BZ $X$ have their own
effective Hamiltonians given by self-adjoint Fredholm operators acting on
an infinite-dimensional
Hilbert space.
These Fredholm operators also represent elements of the
$K$-group, which also classifies
all possible stable gapless states.
In the present paper, we do not describe the details of this formulation of
the $K$-theory
since it requires additional mathematical preparation.
For the outline, see Ref.~\onlinecite{Adem2016} for example.
It should be noted that
in contrast to the classification of bulk gapped insulators and
superconductors, where a pair of Hamiltonians $[E, H_1, H_2]$ is needed
in the $K$-theory, the alternative formulation
requires only
a {\it single} effective Hamiltonian for gapless states to represent an
element of the $K$-group.
\subsection{Defect gapless modes}
\subsubsection{Semiclassical Hamiltonian}
Here, we consider topological defects of
band insulators and superconductors.
Away from the topological defects, the systems are
gapped, and they are described by spatially modulated Bloch
and BdG Hamiltonians,~\cite{volovik2003universe,Teo2010}
\begin{eqnarray}
H(\bm{k},{\bm r}),
\end{eqnarray}
where the base space of the Hamiltonian is composed of
momentum $\bm{k}$, defined in the $d$-dimensional BZ
$T^d$, and real-space coordinates ${\bm r}$ of a $D$-dimensional sphere
${\widetilde S}^D$ surrounding a defect.
We treat $\bm{k}$ and ${\bm r}$ in the Hamiltonian as
classical variables, i.e., momentum operators $\bm{k}$ and coordinate
operators ${\bm r}$ commute with each other. This semiclassical
approach is justified if the characteristic length of the spatial
inhomogeneity is sufficiently longer than that of the quantum
coherence. A realistic Hamiltonian would not satisfy this
semiclassical condition, but if there is no bulk gapless mode,
then the Hamiltonian can be adiabatically deformed so as to
satisfy the condition. Because the adiabatic deformation does
not close the bulk energy gap, it retains the topological nature
of the system.
The defect defines a $(d-D-1)$-dimensional submanifold.
We assume that the defect keeps the lattice translation symmetry along
the submanifold.
Whereas the exact momentum space is $T^d$, we retain the torus
structure only in the directions of the defect submanifold, and thus
consider a simpler space $T^{d-D-1}\times S^1\times S^{D}$, where $S^D$ is
conjugate to $\widetilde S^D$, in the following:
This simplification keeps any symmetry compatible with the defect
configuration,
so it does not affect the classification of
symmetry protected topological defect gapless modes.
\subsubsection{Topological classification}
Consider a defect described by the
semiclassical Hamiltonian $H(\bm{k},
{\bm r})$ on $T^{d-D-1}\times S^1\times S^{D}\times \widetilde S^D$.
We impose symmetry ${\cal G}$ compatible with the defect
configuration on $H(\bm{k}, {\bm r})$, with the grading integer $n$.
The topological classification of the above system is given by
the $K$-group ${}^\phi K_{\cal G}^{\pi^*(\tau, c)-n}(T^{d-D-1}\times
S^1\times S^D\times \widetilde S^D)$.
Since $S^D$ and $\widetilde S^D$ are conjugate to each other, ${\cal G}$
acts on them in the same manner.
The compatibility with the defect
configuration implies that the action of ${\cal G}$ on $S^D$ and $\widetilde
S^D$ should be $O(D+1)$ rotations with a point fixed.
Thus one can apply the isomorphism in Eq.(\ref{eq5:dimension_raising_rotation}) to evaluate
${}^\phi K_{\cal G}^{\pi^*(\tau, c)-n}(T^{d-D-1}\times
S^1\times S^D\times \widetilde S^D)$:
\begin{align}
&{}^\phi K^{\pi^*(\tau,c)-n}_{\cal G}(T^{d-D-1} \times S^1 \times S^{D} \times
\widetilde S^{D})
\nonumber\\
&\cong {}^\phi K_{\cal G}^{(\tau,c)+(\tau_V, c_V)-(n+D)}(T^{d-D-1}\times S^1\times S^D)
\oplus
\underbrace{
{}^\phi K_{\cal G}^{(\tau,c)-n}(T^{d-D-1}\times S^1\times S^D)}_{\widetilde
S^D-{\rm independent}}
\nonumber\\
&\cong {}^\phi K_{\cal G}^{(\tau,c)-n}(T^{d-D-1}\times S^1)
\oplus
\underbrace{
{}^\phi K_{\cal G}^{(\tau,c)+(\tau_V, c_V)-(n+D)}(T^{d-D-1}\times S^1)}_{S^D-{\rm independent}}
\oplus
\underbrace{
{}^\phi K_{\cal G}^{(\tau,c)-n}(T^{d-D-1}\times S^1\times S^D)}_{\widetilde
S^D {\rm -independent}}.
\end{align}
Here no extra twist $(\tau_V, c_V)$ appears in the first term of the
right hand side:
The extra twist $(\tau_V, c_V)$ from
the $O(D+1)$ rotation on $\widetilde S^D$ is canceled by that on $S^D$.
The second and the third terms on the final line of the right hand side
are given by
Hamiltonians $H(\bm{k}, {\bm r})$ that are independent of either ${\bm k}$
or ${\bm r}$, so they merely provide a weak topological index and a bulk
topological number irrelevant to the defect, respectively.
Therefore, only the first term gives a strong topological index for the defect.
We note here that the first term coincides with the $K$-group for
TCIs and TCSCs in
$(d-D)$-dimensions, where the boundary can be identified with the
$(d-D-1)$-dimensional defect submanifold, as illustrated in
Fig.\ref{Fig:Defect}.
Thus, we obtain the following result.
\begin{itemize}
\item[($\star \star$)]
A defect can be considered as a boundary of a lower dimensional
TCI or TCSC.
Defect gapless modes are topologically classified as boundary gapless
states of the TCI or TCSC.
\end{itemize}
\begin{figure}[!]
\begin{center}
\includegraphics[width=0.7\linewidth, trim=0cm 0cm 0cm 0cm]{DefectZero.pdf}
\end{center}
\caption{[a] A $\delta$-dimensional topological defect in a $d$-dimensional insulator. The blue circle represents a sphere $S^{d-\delta-1}$ surrounding the topological defect.
[b] A boundary gapless state in $(\delta+1)$-dimensional topological insulators. }
\label{Fig:Defect}
\end{figure}
\section{Topological nodal semimetals and superconductors}
\label{sec:Topological nodal semimetals and superconductors}
\subsection{Formulation by $K$-theory}
Weyl and Dirac semimetals or nodal superconductors host bulk gapless
excitations as
band touching points and/or
lines in the BZ.
The gapless excitations have their own topological numbers which ensure
stability under small perturbations.
There have been a lot of efforts to classify such bulk gapless
topological phases.~\cite{
Kobayashi2014, ChiuSchnyder2014, Watanabe2016, MathaiThiang2016}
Whereas the bulk gapless phases resemble gapless boundary and defect
modes in TCIs and TCSCs, their theoretical
treatment is different from that of the latter:
While the topological structure of the latter can be examined by
a bulk Hamiltonian flattened in the entire BZ, that of the
former cannot be, since the information on the band touching structure
is obviously lost by the flattening.
Therefore, one needs a different approach to characterize
gapless topological phases in the $K$-theory formulation.
A simple way to characterize topological semimetals and nodal
superconductors is to consider subspaces of the
BZ, together with the entire one.\footnote{
We illustrate this viewpoint in terms of the $K$-theory, but
the same discussion is possible for isomorphism classes of vector bundles.}
Let $Y \subset T^d$ be a closed subspace in the BZ torus $T^d$.
The subspace $Y$ may not retain the full symmetry ${\cal
G}$ of the
system, so we denote by
${\cal G}_Y$ the subgroup of ${\cal G}$ keeping $Y$ invariant.
(Namely, for $g\in {\cal G}_Y$ and $\bm{k} \in Y$, it holds
that $g\bm{k} \in Y$.)
Then, the trivial inclusion $i_Y: Y \to T^d$ induces the following homomorphism
$i_Y^*$ from the $K$-group on $T^d$ to a $K$-group on $Y$,
\begin{align}
i_Y^*: {}^\phi K_{{\cal G}}^{(\tau,c)-n}(T^d) \to
{}^{i_Y^*\phi} K^{i_Y^*(\tau,c)-n}_{{\cal G}_Y}(Y).
\label{eq:restriction_sub_skeleton}
\end{align}
Actually, from a triple
$[E, H_1, H_2]\in {}^\phi K_{{\cal G}}^{(\tau,c)-n}(T^d)$
for an even $n$ or a quadruple
$[E, \Gamma, H_1, H_2]\in {}^\phi K_{{\cal G}}^{(\tau,c)-n}(T^d)$ for an
odd $n$, one obtains a unique element of ${}^{i_Y^*\phi} K_{{\cal
G}_Y}^{i^*_Y(\tau,c)-n}(Y)$, just by restricting the vector bundle $E$
and the Hamiltonians $H_i$ $(i=1,2)$
to the subspace $Y$, and by relaxing the symmetry constraint from
${\cal G}$ to ${\cal G}_Y$. Here we have represented the twist $(\tau, c)$
and $\phi$ for ${\cal G}_Y$ as $i_Y^*(\tau, c)$ and $i_Y^{*}{\phi}$,
respectively, since they are determined by those data of ${\cal G}$.
Noting that any fully gapped insulator or superconductor subject to the symmetry
${\cal G}$ with the grading integer $n$ is identified with an element of
${}^\phi K_{{\cal G}}^{(\tau,c)-n}(T^d)$,
we have the following statement:
\begin{itemize}
\item
If one restricts a fully gapped crystalline insulator or superconductor
to a subspace $Y$, the resultant system on $Y$ gives a
$K$-group element that lies inside
the image of the
homomorphism $i^*_Y$.
\end{itemize}
Now consider a system which is fully gapped on $Y$ but not
necessarily so on the whole BZ $T^d$.
The restriction on $Y$ also gives an element of ${}^{i_Y^*\phi}
K^{i_Y^*(\tau,c)-n}_{{\cal G}_Y}(Y)$.
Interestingly, the contraposition of the above statement leads to
the following non-trivial statement:
\begin{itemize}
\item
If the above $K$-group element on $Y$ lies outside the
image of the homomorphism $i^*_Y$, the original system should
support a gapless region outside $Y$.
\end{itemize}
Since the elements outside the image of $i^*_Y$ form the cokernel of
$i_Y^{*}$, the second statement can be rephrased as follows.
\begin{itemize}
\item
Non-zero elements of ${\rm coker }(i_Y^*)$=${}^{i_Y^*\phi} K_{{\cal
G}_Y}^{i^*_Y(\tau,c)-n}(Y)/{\rm Im}(i_Y^*)$ provide bulk topological
gapless phases.
In other words, the ${\rm coker }(i_Y^*)$ defines bulk topological gapless
phases in the $K$-theory formulation.
\end{itemize}
Not all
elements of ${}^{i_Y^*\phi} K_{{\cal G}_Y}^{i^*_Y(\tau,c)-n}(Y)$ can be
obtained from elements of ${}^{\phi} K_{{\cal G}}^{(\tau,c)-n}(T^d)$,
so the cokernel of $i_Y^*$ is non-trivial in general.
Below, we illustrate this viewpoint in some examples.
\begin{figure}[!]
\begin{center}
\includegraphics[width=\linewidth, trim=0cm 0cm 0cm 0cm]{subskeleton.pdf}
\end{center}
\caption{Subspaces in the BZ torus.
[a] Two planes $Y_1$ and $Y_2$ compose the subspace $Y$.
[b] The subspace is a single point $Y_0$.
[c] The 1-dimensional subspace $X_1$ in the BZ torus $T^2$ and a symmetric point $Y_0$.
[d-1] The real projective plane arising from the inversion symmetry acting on the sub sphere $S^2$,
[d-2] The Klein bottle from the inversion symmetry acting on the sub torus $T^2$.
}
\label{fig:subskeleton}
\end{figure}
\subsection{Examples}
\subsubsection{Weyl semimetals}
The first example is Weyl semimetals that support bulk band touching points
in the BZ.\cite{Nielsen-Ninomiya1983,Murakami2007, Wan2011, Burkov-Balents2011}
As originally discussed by Nielsen and
Ninomiya~\cite{NielsenNinomiya_homotopy}, the band touching points have
local monopole charges defined by the Chern number.
The Weyl semimetals are characterized as the cokernel of a
homomorphism between $K$-groups.
Let $Y_1^{(i)}$ $(i=1,2)$ be the planes $k_x = a_i$ in
Fig.~\ref{fig:subskeleton} [a], and consider
the disjoint union $Y_1=Y_1^{(1)}\sqcup Y_1^{(2)}$.
In the absence of any symmetry, the relevant $K$-group on $Y_1$ is
$K(Y_1)=K(Y_1^{(1)})\oplus
K(Y_1^{(2)})$.
Since the topological index of $K(Y_1^{(i)})$ is the Chern number $ch(a_i)$
on $Y_1^{(i)}$, an element of $K(Y_1)$ is given by $(ch(a_1), ch(a_2))$.
Now consider the trivial inclusion $i_{Y_1}: Y_1\to T^3$,
which induces the homomorphism
$i_{Y_1}^*$ from ${}^* K_{*}^*(T^3)$ to $K(Y_1)$,
where ${}^* K_{*}^*(T^3)$ can be any $K$-group for fully gapped
insulators in three dimensions.
For any fully gapped insulator in three dimensions,
the Chern number $ch(k_x)$
on the plane with a constant $k_x$ does not depend on $k_x$, so the
image of $i_{Y_1}^*$ satisfies $ch(a_1)=ch(a_2)$.
Therefore,
if the Chern numbers $ch(a_i)$ $(i=1,2)$ of the two planes $Y_1^{(i)}$ $(i=1,2)$
do not match, there should be a stable gapless point
in the region outside the subspace $Y_1$.
This means that
the cokernel of $i_{Y_1}^*$, which is given by $ch(a_1)-ch(a_2)$,
corresponds to gapless points.
This argument also works for any closed surface $Y$ deformable to a point
and its trivial inclusion $i_Y: Y\to T^3$.
The cokernel of the induced homomorphism $i_Y^{*}$ is nothing but the
Chern number on $Y$ in this case, which defines the monopole charge of
Weyl nodes.
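The monopole charge on such a closed surface $Y$ can also be evaluated numerically. The following sketch is not part of this paper; the model $H(\bm{k})=\bm{k}\cdot\bm{\sigma}$ and all function names are illustrative. It computes the Chern number of the lower band of a single Weyl node over an enclosing sphere with a gauge-invariant link-variable discretization:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band(k):
    """Normalized eigenvector of the lower band of H(k) = k . sigma."""
    H = k[0]*sx + k[1]*sy + k[2]*sz
    _, v = np.linalg.eigh(H)   # eigenvalues come out in ascending order
    return v[:, 0]

def chern_on_sphere(n=40, radius=1.0):
    """Berry flux / 2pi of the lower band through a sphere enclosing
    k = 0, using link variables on a (theta, phi) grid."""
    thetas = np.linspace(0, np.pi, n + 1)
    phis = np.linspace(0, 2*np.pi, n + 1)

    def u(i, j):
        t, p = thetas[i], phis[j % n]
        k = radius * np.array([np.sin(t)*np.cos(p),
                               np.sin(t)*np.sin(p),
                               np.cos(t)])
        return lower_band(k)

    flux = 0.0
    for i in range(n):
        for j in range(n):
            # gauge-invariant product of link variables around a plaquette
            u00, u10, u11, u01 = u(i, j), u(i+1, j), u(i+1, j+1), u(i, j+1)
            prod = (np.vdot(u00, u10) * np.vdot(u10, u11)
                    * np.vdot(u11, u01) * np.vdot(u01, u00))
            flux += np.angle(prod)
    return flux / (2*np.pi)

# monopole charge of a single Weyl node: +1 or -1, with the sign set by
# the node's chirality and the orientation convention for the sphere
print(round(chern_on_sphere()))
```

The result is $\pm 1$ for a single node; the gauge invariance of each plaquette product is what makes the lattice sum insensitive to the arbitrary phases returned by the eigensolver.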
\subsubsection{Nonsymmorphic gapless materials}
As the second example, consider the filling constraint from
nonsymmorphic space groups.
In general, a nonsymmorphic space group
gives rise to a constraint on the possible filling numbers of band
insulators, as classified by Watanabe et~al.~\cite{Watanabe2016}
For example,
let us consider the glide symmetry $(x,y) \mapsto (x+1/2,-y)$
in two dimensions.
The glide operator $G(k_x)$ has the $2\pi$-periodicity
$G(k_x+2\pi)=G(k_x)$ and it also obeys
$G^2(k_x)=e^{-ik_x}$
since two
consecutive glide operations amount to just a lattice translation, which
results in the Bloch factor $e^{-ik_x}$.
The latter equation implies that eigenvalues of $G(k_x)$ are $\pm
e^{-ik_x/2}$.
From these equations, it is found that every band forms a pair
on the glide symmetric line $k_y=0$:
For $k_y=0$, the Bloch Hamiltonian commutes with
$G(k_x)$, so any band is an eigenstate of $G(k_x)$.
Since each eigenvalue of $G(k_x)$ is not $2\pi$-periodic
in $k_x$, bands with opposite eigenvalues appear in pairs so that the
spectrum maintains the $2\pi$-periodicity.
In particular, any fully gapped glide symmetric insulator should have an even
number of occupied states.
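The eigenvalue pairing can be made concrete in a minimal two-sublattice model, where the glide exchanges the sublattices and attaches the Bloch factor to one of the two hoppings; this representation is an illustrative assumption, not taken from the text:

```python
import numpy as np

def G(kx):
    """Minimal 2-band glide operator: the two sublattices are exchanged,
    with the Bloch factor exp(-i kx) on one of the two hoppings."""
    return np.array([[0, np.exp(-1j*kx)], [1, 0]], dtype=complex)

kx = 0.7
# two consecutive glides = one lattice translation (Bloch factor)
assert np.allclose(G(kx) @ G(kx), np.exp(-1j*kx) * np.eye(2))
# the glide operator itself is 2pi-periodic
assert np.allclose(G(kx + 2*np.pi), G(kx))

# eigenvalues are +-exp(-i kx/2); following the "+" branch continuously
# from kx = 0 to kx = 2pi exchanges it with the "-" branch
branch = [np.exp(-1j*k/2) for k in np.linspace(0, 2*np.pi, 201)]
print(round(branch[0].real), round(branch[-1].real))  # 1 -1
```

Because the branch returns to the other eigenvalue after one reciprocal-lattice period, the two glide eigenvalue branches must both be occupied in any fully gapped glide symmetric insulator, reproducing the even-filling constraint.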
Let $Y_0= \{(a,0)\}$ be a point on the glide symmetric line $k_y=0$.
At the point $Y_0$, the glide symmetry reduces to
a simple $\mathbb{Z}_2$ symmetry, which defines ${\cal G}_{Y=Y_0}$ in Eq.~(\ref{eq:restriction_sub_skeleton}).
Since the $\mathbb{Z}_2$ symmetry has only one-dimensional representations,
the $K$-group on $Y_0$ differs from that obtained by
restricting the $K$-group of fully gapped two-dimensional glide
symmetric insulators to $Y_0$.
In particular, the former $K$-group allows an odd number of occupied
states at $Y_0$, while the latter does not as mentioned above.
In other words, the cokernel of $i_{Y_0}^*$ in the present case includes
states with an odd number of occupied states at $Y_0$.
This gives a criterion for glide symmetric gapless materials:
If the filling number of the occupied states at the point $Y_0$ is odd,
then there should be a gapless point at the glide plane $k_y=0$, as
illustrated in Fig.~\ref{fig:subskeleton} [b].
\subsubsection{A gapless phase protected by representation at symmetric point for wallpaper group p4g}
Sometimes a representation of occupied states at a high-symmetric point
enforces a gapless phase.
An example is a two-dimensional spinful system with the wallpaper group p4g.
We will discuss the detail in Sec.~\ref{A stable gapless phase protected
by representation at $X$ point: 2d class A}, and here we only highlight
the consequence.
The point group for p4g is
the $D_4$ group, which is generated by a $C_4$-rotation and a reflection.
In such a system, the
$K$-group is characterized by the one-dimensional subspace $X_1$ in
Fig.~\ref{fig:subskeleton} [c].
Let us focus on a high-symmetric point $Y_0 = (\pi,0)$.
Since the little group at $Y_0$ is $D_2 = \mathbb{Z}_2 \times \mathbb{Z}_2$,
a state at $Y_0$ obeys a linear representation of $D_2$.
The linear representation is given by a direct sum of
irreducible representations of $D_2$, i.e.
$A_1, A_2, B_1, B_2$ in the Mulliken notation.
As shown in Sec.~\ref{A stable gapless phase protected
by representation at $X$ point: 2d class A},
for fully gapped systems, the occupied states at $Y_0$ should be
a direct sum of $(A_1 \oplus A_2)$ and $(B_1 \oplus B_2)$ representations.
The contraposition of this result implies that,
if an occupied state at $Y_0$ obeys another
representation, say $(A_1 \oplus B_1)$,
the system should have a gapless point on the one-dimensional subspace $X_1$.
In this case, the other representations correspond to elements of
the cokernel obtained from the trivial inclusion
$i_{Y_0}: Y_0\to X_1$.
\subsubsection{A $\mathbb{Z}_2$ topological charge induced only by inversion symmetry}
The final example is a bulk three-dimensional $\mathbb{Z}_2$ gapless phase
protected by inversion
symmetry, which has not been discussed before.
The detailed discussion will be presented in
Sec.~\ref{sec:inversion_fermi_pt}.
As a subspace, we consider a sphere $Y_2=S^2$ whose center is
an inversion-symmetric point; see Fig.~\ref{fig:subskeleton} [d-1].
The inversion acts on $S^2$ as the antipodal map, so $S^2$ subject to
inversion is regarded as the quotient $S^2/\mathbb{Z}_2=RP^2$.
The $K$-group on $Y_2$ is $K(RP^2)=\mathbb{Z}_2 \oplus \mathbb{Z}$, where
the $\mathbb{Z}_2$ index $\nu$ (mod 2) is associated with the
torsion part of the first Chern class on $RP^2$.
The $\mathbb{Z}$ part is a trivial contribution counting the number of
occupied states and is irrelevant to the gapless phase, so we focus on
the $\mathbb{Z}_2$ part here.
When the system is fully gapped,
the $\mathbb{Z}_2$ invariant $\nu$ should be trivial,
since $S^2$ can shrink to a point while preserving inversion symmetry.
This means the following criterion for inversion symmetric
gapless phases:
if the $\mathbb{Z}_2$ invariant $\nu$ is nontrivial on an inversion
symmetric sub-sphere $S^2$,
then there should be a gapless region inside $S^2$.
In this case, the cokernel of the trivial inclusion $i_{Y_2}:
Y_2\to T^3$ is the $\mathbb{Z}_2$ part of $K(RP^2)$.
We present the model Hamiltonian of the gapless phase in
Sec.~\ref{sec:inversion_fermi_pt}.
A similar $\mathbb{Z}_2$ invariant can be defined also for
a torus with inversion symmetry (See Fig.~\ref{fig:subskeleton} [d-2]).
In Sec.~\ref{sec:inversion_fermi_pt},
we also show that the interplay between inversion symmetry and TRS
defines a $\mathbb{Z}_2$ invariant associated with the
Stiefel-Whitney classes on $RP^2$.
\section{The classification of topological insulators with wallpaper group symmetry}
\label{Wallpaper_summary}
In this section, we summarize the $K$-theories over the BZ torus $T^2$ in the presence of 17 wallpaper groups
with and without the chiral symmetry.
Our results do not include TRS or PHS, which we leave as a future problem.
We present these $K$-groups as $R(P)$-modules, where $P$ is the point group associated with each wallpaper group;
this presentation can be contrasted with previous works~\cite{Yang1997, LuckStamm2000, DongLiu2015, Kruthoff2016}.
The details of the calculations of the $K$-groups will appear in a forthcoming publication~\cite{GomiShiozakiSato_Wallpaper}.
In the next section, Sec.~\ref{sec:Example of K-theory classification},
we work out a few examples of wallpaper groups
in order to show how to compute the $K$-groups and
how to apply them to bulk insulators and surface states.
As explained in Sec.\ \ref{sec:Stable classification of bulk insulators},
the $K$-group $K^{\tau-n}_P(T^2)$ ($n=0,1$) on $T^2$ means the stable classification of 2d bulk insulators in class A ($n=0$) and class AIII $(n=1)$.
At the same time, as explained in Sec.~\ref{sec:Classification of boundary gapless states},
the $K$-group $K^{\tau-n}_P(T^2)$ expresses the classification of 2d surface gapless states in class A ($n=1$) and class AIII ($n=0$).
It is worth summarizing these relations to avoid confusion:
$$
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{>{\centering\arraybackslash}m{3cm} | >{\centering\arraybackslash}m{3cm} | >{\centering\arraybackslash}m{3cm} }
$K$-group & Stable classification of bulk insulators & Surface gapless states \\
\hline
$K^{\tau-0}_P(T^2)$ & class A & class AIII \\
$K^{\tau-1}_P(T^2)$ & class AIII & class A \\
\end{tabular}
\renewcommand{\arraystretch}{1.0}
$$
\begin{table*}[!]
\label{Tab:BravaisLattice}
\begin{center}
\caption{2d Bravais lattices, unit cells, point groups, and wallpaper groups.}
\renewcommand{\arraystretch}{2.0}
\begin{tabular}[t]{|c|c|c|c|}
\hline
Bravais lattice & Unit cell & Point group & Wallpaper group \\
\hline
\multirow{2}{*}{Oblique} & \multirow{2}{*}{
$$
\xygraph{
!{<0cm,0cm>;<0.6cm,0cm>:<0cm,0.6cm>::}
!{(0,0)}="a"
!{(2,0)}="b" ([]!{+(0,-0.2)} {a \hat x})
!{(0.6,1.5)}="c" ([]!{+(-1.1,0)} {b \hat x + c \hat y})
!{(2.6,1.5)}="d"
"a"-@{->}"b"
"a"-@{->}"c"
"b"-"d"
"c"-"d"
}
$$
}
& $C_1$ & p1 \\ \cline{3-4}
& & $C_2$ & p2 \\
\hline
\multirow{2}{*}{Rectangular} &
\multirow{2}{*}{
$$
\xygraph{
!{<0cm,0cm>;<0.6cm,0cm>:<0cm,0.6cm>::}
!{(0,0)}="a"
!{(2,0)}="b" ([]!{+(0,-0.2)} {a \hat x})
!{(0,1.5)}="c" ([]!{+(-0.4,0)} {b \hat y})
!{(2,1.5)}="d"
"a"-@{->}"b"
"a"-@{->}"c"
"b"-"d"
"c"-"d"
}
$$
}
& $D_1$ & pm, pg \\ \cline{3-4}
& & $D_2$ & pmm, pmg, pgg \\
\hline
\multirow{2}{*}{Rhombic} &
\multirow{2}{*}{$$
\xygraph{
!{<0cm,0cm>;<0.6cm,0cm>:<0cm,0.6cm>::}
!{(0,0)}="a"
!{(1.4,1.0)}="b" ([]!{+(0.6,-0.4)} {a \hat x + b \hat y})
!{(-1.4,1.0)}="c" ([]!{+(-1.2,-0.4)} {-a \hat x + b \hat y})
!{(0,2.0)}="d"
"a"-@{->}"b"
"a"-@{->}"c"
"b"-"d"
"c"-"d"
}
$$}
& $D_1$ & cm \\ \cline{3-4}
& & $D_2$ & cmm \\
\hline
\multirow{2}{*}{Square} &
\multirow{2}{*}{
$$
\xygraph{
!{<0cm,0cm>;<0.6cm,0cm>:<0cm,0.6cm>::}
!{(0,0)}="a"
!{(1.7,0)}="b" ([]!{+(0.5,0)} {a \hat x})
!{(0,1.7)}="c" ([]!{+(-0.4,0)} {a \hat y})
!{(1.7,1.7)}="d"
"a"-@{->}"b"
"a"-@{->}"c"
"b"-"d"
"c"-"d"
}
$$
}
& $C_4$ & p4 \\ \cline{3-4}
&& $D_4$ &p4m, p4g \\ \cline{3-4}
\hline
\multirow{5}{*}{Hexagonal} &
\multirow{5}{*}{$$
\xygraph{
!{<0cm,0cm>;<0.6cm,0cm>:<0cm,0.6cm>::}
!{(0,0)}="a"
!{(2.0,0)}="b" ([]!{+(0.2,-0.3)} {a \hat x})
!{(1.0,1.732)}="c" ([]!{+(-0.2,0.8)} { a \Big( \frac{1}{2} \hat x + \frac{\sqrt{3}}{2} \hat y \Big)})
!{(3.0,1.732)}="d"
"a"-@{->}"b"
"a"-@{->}"c"
"b"-"d"
"c"-"d"
}
$$}
& $C_3$ & p3 \\ \cline{3-4}
&& $C_6$ &p6 \\ \cline{3-4}
&& $D_3$ &p31m \\ \cline{3-4}
&& $D_3$ &p3m1 \\ \cline{3-4}
&& $D_6$ &p6m\\
\hline
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{center}
\end{table*}
There are five Bravais lattices in 2d crystals, which are
listed in Table~\ref{Tab:BravaisLattice} together with their point groups and wallpaper groups.
In addition to the 17 different wallpaper groups,
nontrivial projective representations of the point groups provide further symmetry classes.
Such contributions are measured by the group cohomology of the point group, as explained in Sec.~\ref{sec3:space}.
For the rotational point group $C_n$,
the group cohomology is trivial $H^2(\mathbb{Z}_n;U(1)) = 0$.
For the dihedral group $D_n$,
there is an even/odd effect:
$H^2(D_{2n};U(1)) = \mathbb{Z}_2$,
$H^2(D_{2n-1};U(1)) = 0$.
Eventually, there are 24 inequivalent symmetry classes.
Tabs.~\ref{Tab:ClassificationTable_A} and \ref{Tab:ClassificationTable_AIII}
summarize the $K$-groups for all wallpaper groups,
presented in the notation of $R(P)$-modules.
To connect our notation to crystallography,
we provide the character tables of the 2d point groups
in Tabs.~\ref{Tab:CharacterD_2},
\ref{Tab:CharacterD_3},
\ref{Tab:CharacterD_4},
and
\ref{Tab:CharacterD_6},
where our notation for irreps.\ is displayed
alongside the Mulliken notation.
The representation rings of 2d point groups
and the module structures of the nontrivial projective representations
are listed in Table~\ref{Tab:RepRing},
which are obtained by the tensor product representations
(see Sec.~\ref{sec:A little bit about representations of D4} for the case of $D_4$).
\begin{table*}[!]
\begin{center}
\caption{Character table of $D_2$.}
\begin{tabular}[t]{cc|cccc}
\hline
Irrep. & Mulliken & $1$ & $m_x$ & $m_y$ & $m_xm_y$ \\
\hline
$1$ & $A_1$ & $1$ & $1$ & $1$ & $1$ \\
$t_x$ & $B_2$ & $1$ & $-1$ & $1$ & $-1$ \\
$t_y$ & $B_1$ & $1$ & $1$ & $-1$ & $-1$ \\
$t_x t_y$ & $A_2$ & $1$ & $-1$ & $-1$ & $1$ \\
\hline
\end{tabular}
\label{Tab:CharacterD_2}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{Character table of $D_3$.}
\begin{tabular}[t]{cc|ccc}
\hline
irrep. & Mulliken & $1$ & $\{C_3,C_3^{-1}\}$ & $\{\sigma, \sigma C_3, \sigma C_3^2\}$ \\
\hline
$1$ & $A_1$ & $1$ & $1$ & $1$ \\
$A$ & $A_2$ & $1$ & $1$ & $-1$ \\
$E$ & $E$ & $2$ & $-1$ & $0$ \\
\hline
\end{tabular}
\label{Tab:CharacterD_3}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{Character table of $D_4$.}
\begin{tabular}[t]{cc|ccccc}
\hline
irrep. & Mulliken& $1$ & $\{C_4,C_4^{-1}\}$ & $C_2$ & $\{ \sigma, \sigma C_2 \}$ & $\{\sigma C_4, \sigma C_4^3\}$ \\
\hline
$1$ & $A_1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$A$ & $A_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$ \\
$B$ & $B_1$ & $1$ & $-1$ & $1$ & $1$ & $-1$ \\
$AB$ & $B_2$ & $1$ & $-1$ & $1$ & $-1$ & $1$ \\
$E $ & $E$ & $2$ & $0$ & $-2$ & $0$ & $0$ \\
\hline
\end{tabular}
\label{Tab:CharacterD_4}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{Character table of $D_6$.}
\begin{tabular}[t]{cc|ccccccccc}
\hline
irrep. & Mulliken & $1$ & $\{C_6,C_6^{-1}\}$ & $\{C_3,C_3^{-1}\}$ & $\{C_2\}$ & $\{\sigma, \sigma C_3 , \sigma C_3^2\}$ & $\{\sigma C_6, \sigma C_2, \sigma C_6^5 \}$ \\
\hline
$1$ & $A_1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$A$ & $A_2$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ \\
$B$ & $B_1$ & $1$ & $-1$ & $1$ & $-1$ & $1$ & $-1$ \\
$AB$ & $B_2$ & $1$ & $-1$ & $1$ & $-1$ & $-1$ & $1$ \\
$E$ & $E_1$ & $2$ & $1$ & $-1$ & $-2$ & $0$ & $0$ \\
$BE$ & $E_2$ & $2$ & $-1$ & $-1$ & $2$ & $0$ & $0$ \\
\hline
\end{tabular}
\label{Tab:CharacterD_6}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{The representation rings of the 2d point groups and the module structure of the nontrivial projective representations of $D_2,D_4$ and $D_6$.}
\begin{tabular}[t]{c|l|c}
\hline
Point group $P$ & Representation ring $R(P)$ & Abelian group \\
\hline
$C_n$ & $R(\mathbb{Z}_n) = \mathbb{Z}[t]/(1-t^n)$ & $\mathbb{Z}^n$ \\
$D_2$ & $R(D_2) = \mathbb{Z}[t_1,t_2]/(1-t_1^2,1-t_2^2)$ & $\mathbb{Z}^4$ \\
$D_3$ & $R(D_3) = \mathbb{Z}[A,E]/(1-A^2,E-AE,E^2-1-A-E)$ & $\mathbb{Z}^3$ \\
$D_4$ & $R(D_4) = \mathbb{Z}[A,B,E]/(1-A^2,1-B^2,E-AE,E-BE,E^2-1-A-B-AB)$ & $\mathbb{Z}^5$ \\
$D_6$ & $R(D_6) = \mathbb{Z}[A,B,E]/(1-A^2,1-B^2,E-AE,E^2-1-A-BE)$ & $\mathbb{Z}^6$ \\
\hline
\hline
Point group $P$ & $R(P)$-module of nontrivial projective representations & Abelian group \\
\hline
$D_2$ & $R^{\omega}(D_2) = (1+t_1+t_2+t_1t_2)$ & $\mathbb{Z}$ \\
$D_4$ & $R^{\omega}(D_4) = (1+A+E)$ & $\mathbb{Z}^2$ \\
$D_6$ & $R^{\omega}(D_6) = (1+A+E,E+BE)$ & $\mathbb{Z}^3$ \\
\hline
\end{tabular}
\label{Tab:RepRing}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{
The stable classification of 2d class A topological insulators with wallpaper groups/
the classification of 2d class AIII surface gapless states with wallpaper groups.
In the fifth column, the overbraces represent $K$-groups as Abelian groups.
The red characters mean that these direct summands are generated by
vector bundles with the first Chern number.
}
\begin{tabular}[t]{c|c|c|c|c}
\hline
Wallpaper group & spinless/spinful & Twist & $R(P)$ & $K^{\tau-0}_{P}(T^2)$ \rule{0pt}{12pt}\\
\hline
p1 & spinless/spinful & $0$ & $\mathbb{Z}$ & $\textcolor{red}{\mathbb{Z}} \oplus \mathbb{Z}$ \rule{0pt}{20pt}\\
p2 & spinless/spinful & $0$ & $R(\mathbb{Z}_2)$& $\textcolor{red}{\overbrace{R(\mathbb{Z}_2)}^{\mathbb{Z}^2}} \oplus \overbrace{R(\mathbb{Z}_2)}^{\mathbb{Z}^2} \oplus \overbrace{(1-t)}^{\mathbb{Z}} \oplus \overbrace{(1-t)}^{\mathbb{Z}}$ \\
p3 & spinless/spinful & $0$ & $R(\mathbb{Z}_3)$& $\textcolor{red}{\overbrace{R(\mathbb{Z}_3)}^{\mathbb{Z}^3}} \oplus \overbrace{R(\mathbb{Z}_3)}^{\mathbb{Z}^3} \oplus \overbrace{(1-t)}^{\mathbb{Z}^2} $ \\
p4 & spinless/spinful & $0$ & $R(\mathbb{Z}_4)$& $\textcolor{red}{\overbrace{R(\mathbb{Z}_4)}^{\mathbb{Z}^4}} \oplus \overbrace{R(\mathbb{Z}_4)}^{\mathbb{Z}^4} \oplus \overbrace{(1-t+t^2-t^3)}^{\mathbb{Z}}$ \\
p6 & spinless/spinful & $0$ & $R(\mathbb{Z}_6)$& $\textcolor{red}{\overbrace{(1-t+t^2)}^{\mathbb{Z}^4}} \oplus \overbrace{R(\mathbb{Z}_6)}^{\mathbb{Z}^6}$ \\
\hline
pm & spinless/spinful & $0$ & $R(\mathbb{Z}_2)$ & $\overbrace{R(\mathbb{Z}_2)}^{\mathbb{Z}^2}\oplus \overbrace{(1-t)}^{\mathbb{Z}}$ \\
cm & spinless/spinful & $0$ & $R(\mathbb{Z}_2)$ & $\overbrace{R(\mathbb{Z}_2)}^{\mathbb{Z}^2}$ \\
pmm & spinless & $0$ & $R(D_2)$ & $\overbrace{R(D_2)}^{\mathbb{Z}^4} \oplus \overbrace{(1-t_1)}^{\mathbb{Z}^2} \oplus \overbrace{(1-t_2)}^{\mathbb{Z}^2} \oplus \overbrace{\big( (1-t_1)(1-t_2) \big)}^{\mathbb{Z}}$ \\
pmm & spinful & $\omega$ & $R(D_2)$ & $\overbrace{R^{\omega}(D_2)}^{\mathbb{Z}}$ \\
cmm & spinless & $0$ & $R(D_2)$ & $\overbrace{R(D_2)}^{\mathbb{Z}^4} \oplus \overbrace{\big( (1-t_1)(1-t_2) \big)}^{\mathbb{Z}} \oplus \overbrace{\big( (1-t_1)(1-t_2) \big)}^{\mathbb{Z}}$ \\
cmm & spinful & $\omega$ & $R(D_2)$ & $\overbrace{R^{\omega}(D_2)}^{\mathbb{Z}} \oplus \overbrace{\big( (1-t_1)(1-t_2) \big)}^{\mathbb{Z}}$ \\
p31m & spinless/spinful & $0$ & $R(D_3)$ & $\overbrace{R(D_3)}^{\mathbb{Z}^3} \oplus \overbrace{(1+A-E)}^{\mathbb{Z}} \oplus \overbrace{(1+A-E)}^{\mathbb{Z}}$ \\
p3m1 & spinless/spinful & $0$ & $R(D_3)$ & $\overbrace{R(D_3)}^{\mathbb{Z}^3} \oplus \overbrace{R(D_3)/(E)}^{\mathbb{Z}} \oplus \overbrace{R(D_3)/(E)}^{\mathbb{Z}}$ \\
p4m & spinless & $0$ & $R(D_4)$ & $\overbrace{R(D_4)}^{\mathbb{Z}^5} \oplus \overbrace{(1+A-E)}^{\mathbb{Z}^2} \oplus \overbrace{(1+B-E)}^{\mathbb{Z}^2}$ \\
p4m & spinful & $\omega$ & $R(D_4)$ & $\overbrace{R^{\omega}(D_4)}^{\mathbb{Z}^2} \oplus \overbrace{\big( (1+A)(1-B) \big)}^{\mathbb{Z}}$ \\
p6m & spinless & $0$ & $R(D_6)$ & $\overbrace{R(D_6)}^{\mathbb{Z}^6} \oplus\overbrace{\big( (1+A)(1-B)(1-E) \big)}^{\mathbb{Z}} \oplus \overbrace{\big( (1+B)(1+A-E) \big)}^{\mathbb{Z}}$ \\
p6m & spinful & $\omega$ & $R(D_6)$ & $\overbrace{R^{\omega}(D_6)}^{\mathbb{Z}^3}\oplus \overbrace{\big( (1+B)(1+A-E) \big)}^{\mathbb{Z}}$ \\
\hline
pg & spinless/spinful & $\tau_{\rm pg}$ & $R(\mathbb{Z}_2)$ & $\overbrace{(1+t)}^{\mathbb{Z}}$ \\
pmg & spinless & $\tau_{\rm pmg}$ & $R(D_2)$ & $\overbrace{(1+t_1,1-t_2)}^{\mathbb{Z}^3}\oplus \overbrace{\big( (1-t_1)(1-t_2) \big)}^{\mathbb{Z}}$ \\
pmg & spinful & $\tau_{\rm pmg}+\omega$ & $R(D_2)$ & $\overbrace{(1+t_1,1-t_2)}^{\mathbb{Z}^3}\oplus \overbrace{\big( (1-t_1)(1-t_2) \big)}^{\mathbb{Z}}$ \\
pgg & spinless & $\tau_{\rm pgg}$ & $R(D_2)$ & $\overbrace{(1+t_1t_2)}^{\mathbb{Z}^2} \oplus \overbrace{((1-t_1)(1-t_2))}^{\mathbb{Z}}$ \\
pgg & spinful & $\tau_{\rm pgg}+\omega$ & $R(D_2)$ & $\overbrace{(1+t_1t_2)}^{\mathbb{Z}^2} \oplus \overbrace{((1-t_1)(1-t_2))}^{\mathbb{Z}}$ \\
p4g & spinless & $\tau_{\rm p4g}$ & $R(D_4)$ & $\overbrace{(1+A-E,1-B)}^{\mathbb{Z}^3} \oplus \overbrace{(1+A-E)}^{\mathbb{Z}^2} \oplus \overbrace{(1+A+B+AB+2E)}^{\mathbb{Z}}$ \\
p4g & spinful & $\tau_{\rm p4g}+\omega$ & $R(D_4)$ & $\overbrace{(1+A+E)}^{\mathbb{Z}^2} \oplus \overbrace{\big( (1+A)(1-B) \big)}^{\mathbb{Z}} \oplus \overbrace{(1+A+B+AB-2E)}^{\mathbb{Z}}$ \\
\hline
\end{tabular}
\label{Tab:ClassificationTable_A}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{
The stable classification of 2d class AIII topological insulators with wallpaper groups/
the classification of 2d class A surface gapless states with wallpaper groups.
In the fifth column, the overbraces mean $K$-groups as Abelian groups.
}
\begin{tabular}[t]{c|c|c|c|c}
\hline
Wallpaper group & spinless/spinful & Twist & $R(P)$ & $K^{\tau-1}_{P}(T^2)$ \rule{0pt}{12pt}\\
\hline
p1 & spinless/spinful & $0$ & $\mathbb{Z}$& $\mathbb{Z} \oplus \mathbb{Z}$ \rule{0pt}{20pt}\\
p2 & spinless/spinful & $0$ & $R(\mathbb{Z}_2)$& $0$ \rule{0pt}{20pt}\\
p3 & spinless/spinful & $0$ & $R(\mathbb{Z}_3)$& $0$ \rule{0pt}{20pt}\\
p4 & spinless/spinful & $0$ & $R(\mathbb{Z}_4)$& $0$ \rule{0pt}{20pt}\\
p6 & spinless/spinful & $0$ & $R(\mathbb{Z}_6)$& $0$ \rule{0pt}{20pt}\\
\hline
pm & spinless/spinful & $0$ & $R(\mathbb{Z}_2)$ & $\overbrace{R(\mathbb{Z}_2)}^{\mathbb{Z}^2} \oplus \overbrace{(1-t)}^{\mathbb{Z}}$ \\
cm & spinless/spinful & $0$ & $R(\mathbb{Z}_2)$ & $\overbrace{(1+t)}^{\mathbb{Z}} \oplus \overbrace{(1-t)}^{\mathbb{Z}}$ \\
pmm & spinless & $0$ & $R(D_2)$ & $0$ \rule{0pt}{20pt}\\
pmm & spinful & $\omega$ & $R(D_2)$&$\overbrace{(1-t_1 t_2)}^{\mathbb{Z}^2}\oplus \overbrace{\big( (1+t_1)(1-t_2) \big)}^{\mathbb{Z}} \oplus \overbrace{\big( (1-t_1)(1+t_2) \big)}^{\mathbb{Z}}$ \\
cmm & spinless & $0$ & $R(D_2)$ & $0$ \rule{0pt}{20pt}\\
cmm & spinful & $\omega$ & $R(D_2)$ & $\overbrace{(1-t_1 t_2)}^{\mathbb{Z}^2}$ \\
p31m & spinless/spinful & $0$ & $R(D_3)$ & $\overbrace{(1-A)}^{\mathbb{Z}}$ \\
p3m1 & spinless/spinful & $0$ & $R(D_3)$ & $\overbrace{(1-A)}^{\mathbb{Z}}$ \\
p4m & spinless & $0$ & $R(D_4)$ & $0$ \rule{0pt}{20pt}\\
p4m & spinful & $\omega$ & $R(D_4)$ & $\overbrace{(1-A)}^{\mathbb{Z}^2} \oplus \overbrace{\big( (1-A)(1+B) \big)}^{\mathbb{Z}}$ \\
p6m & spinless & $0$ & $R(D_6)$ & $0$ \rule{0pt}{20pt}\\
p6m & spinful & $\omega$ & $R(D_6)$ & $\overbrace{(1-A)}^{\mathbb{Z}^2}$ \\
\hline
pg & spinless/spinful & $\tau_{\rm pg}$ & $R(\mathbb{Z}_2)$ & $\overbrace{(1+t)}^{\mathbb{Z}} \oplus \overbrace{I}^{\mathbb{Z}_2}$ \\
pmg & spinless & $\tau_{\rm pmg}$ & $R(D_2)$ & $\overbrace{\big( (1-t_1)(1+t_2) \big)}^{\mathbb{Z}}$ \\
pmg & spinful & $\tau_{\rm pmg}+\omega$ & $R(D_2)$ & $\overbrace{\big( (1-t_1)(1+t_2) \big)}^{\mathbb{Z}}$ \\
pgg & spinless & $\tau_{\rm pgg}$ & $R(D_2)$ & $\overbrace{I}^{\mathbb{Z}_2}$ \\
pgg & spinful & $\tau_{\rm pgg}+\omega$ & $R(D_2)$ & $\overbrace{I}^{\mathbb{Z}_2}$ \\
p4g & spinless & $\tau_{\rm p4g}$ & $R(D_4)$ & $0$ \rule{0pt}{20pt}\\
p4g & spinful & $\tau_{\rm p4g}+\omega$ & $R(D_4)$ & $\overbrace{\big( (1-A)(1-B)\big)}^{\mathbb{Z}}$ \\
\hline
\end{tabular}
\label{Tab:ClassificationTable_AIII}
\end{center}
\end{table*}
\makeatletter
\renewcommand{\theequation}{%
\arabic{section}.\arabic{subsection}.\arabic{equation}}
\@addtoreset{equation}{subsection}
\makeatother
\section{Example of $K$-theory classification}
\label{sec:Example of K-theory classification}
In this section,
we illustrate the $K$-theory calculations in various examples.
Through concrete problems,
we introduce basics of the $K$-theory calculations such as
the module structure,
the Mayer-Vietoris sequence,
the exact sequence for the pair $(X,Y)$,
and the dimensional raising map.
We also explain the vector bundle representation and Hamiltonian representation of the $K$-groups.
\subsection{$K$-theory on point: representations of symmetry group}
We start with the $K$-theories $K^{\omega-n}_P(pt)$ of a point with symmetry group $P$.
Here, $\omega \in Z^2(P;\mathbb{R}/2 \pi\mathbb{Z})$ fixes the $U(1)$ phase factors associated with projective representations:
\begin{align}
U_p U_{p'} = e^{i \omega_{p,p'}} U_{p p'}.
\end{align}
For class A ($n=0$), the $K$-theory is nothing but the
Abelian group generated by the $\omega$-projective representations.
We denote it by $R^\omega(P)$:
\begin{align}
R^{\omega}(P) := K^{\omega-0}_P(pt).
\end{align}
The tensor product of $\omega$- and $\omega'$-projective representations
has the twist $\omega+\omega' \in Z^2(P;\mathbb{R}/2 \pi \mathbb{Z})$.
In particular, $R(P)$, the $K$-group generated by linear representations
(those with the trivial twist $\omega_{p,p'} \equiv 0$), becomes a ring.
For class AIII ($n=1$), the $K$-group is trivial
\begin{align}
K^{\omega-1}_P(pt) = 0
\end{align}
because of the chiral symmetry.
\subsubsection{Cyclic group $\mathbb{Z}_3$}
For example, consider the cyclic group $C_3 = \mathbb{Z}_3 = \{1,\sigma,\sigma^2\}$.
There are three 1-dimensional irreps.\ $\mathbb{C}_0, \mathbb{C}_1, \mathbb{C}_2$
characterized by eigenvalues of $U_{\sigma} = 1, \zeta, \zeta^2$ with $\zeta = e^{2 \pi i/3}$, respectively.
So we have
\begin{equation}
R(\mathbb{Z}_3) = K^0_{\mathbb{Z}_3}(pt) = \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z} \ \ {\rm as\ an\ Abelian\ group}.
\end{equation}
In the vector bundle representation,
an element $(n_0, n_1, n_2) \in R(\mathbb{Z}_3)$ is represented by the direct sum
\begin{equation}
[V] \in R(\mathbb{Z}_3), \qquad
V = [\mathbb{C}_0]^{\oplus n_0} \oplus [\mathbb{C}_1]^{\oplus n_1} \oplus [\mathbb{C}_2]^{\oplus n_2}.
\end{equation}
In Karoubi's representation, the same element is represented by a pair of Hamiltonians acting on $V$ as follows:
\begin{equation}
[V, H_0, H_1], \qquad
H_0= 1_{n_0 \times n_0} \oplus 1_{n_1 \times n_1} \oplus 1_{n_2 \times n_2}, \qquad
H_1 = -1_{n_0 \times n_0} \oplus -1_{n_1 \times n_1} \oplus -1_{n_2 \times n_2}.
\end{equation}
The tensor product $V \otimes V'$ induces the ring structure on $R(\mathbb{Z}_3)$.
The irrep.\ $\mathbb{C}_i$ $(i = 0,1,2)$ acts on the element $(n_0, n_1,n_2)$ as
\begin{equation}
\mathbb{C}_i \otimes ( [\mathbb{C}_0]^{\oplus n_0} \oplus [\mathbb{C}_1]^{\oplus n_1} \oplus [\mathbb{C}_2]^{\oplus n_2} ) =
[\mathbb{C}_{i}]^{\oplus n_0} \oplus [\mathbb{C}_{i+1}]^{\oplus n_1} \oplus [\mathbb{C}_{i+2}]^{\oplus n_2},
\end{equation}
where subscripts $i,i+1, i+2$ are defined modulo $3$.
In short, $R(\mathbb{Z}_3)$ is isomorphic to a quotient of the polynomial ring:
\begin{equation}
R(\mathbb{Z}_3) = \mathbb{Z}[t]/(1-t^3) = \{n_0 + n_1 t + n_2 t^2 | n_0,n_1,n_2 \in \mathbb{Z} \}.
\end{equation}
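The cyclic ring structure can be checked directly: multiplication in $\mathbb{Z}[t]/(1-t^3)$ adds exponents modulo 3, matching the tensor-product rule $\mathbb{C}_i \otimes \mathbb{C}_j = \mathbb{C}_{i+j \bmod 3}$. A minimal sketch (the helper `mul_RZ3` is an illustrative name, not from the text):

```python
def mul_RZ3(a, b):
    """Multiply representatives (n0 + n1 t + n2 t^2) in Z[t]/(1 - t^3):
    exponents add modulo 3."""
    c = [0, 0, 0]
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % 3] += ai * bj
    return tuple(c)

# multiplication by t cyclically shifts (n0, n1, n2), as in the
# tensor-product rule for the irreps C_0, C_1, C_2
t = (0, 1, 0)
assert mul_RZ3(t, (2, 5, 7)) == (7, 2, 5)
# t^3 = 1, the relation defining the quotient ring
assert mul_RZ3(mul_RZ3(t, t), t) == (1, 0, 0)
print("R(Z_3) relations verified")
```

The same tuple arithmetic extends to any $R(\mathbb{Z}_n)$ by replacing the modulus 3 with $n$.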
\subsubsection{Dihedral group $D_2$}
Consider the dihedral group $D_2 = \{1,m_x, m_y, m_x m_y\}$.
There are four 1-dimensional linear irreps.\ shown in Table~\ref{Tab:CharacterD_2}.
Tensor products of these irreps.\ lead to
the quotient of the polynomial ring:
\begin{equation}
R(D_2) = K^{0}_{D_2}(pt) = \mathbb{Z}[t_x,t_y]/(1-t_x^2, 1-t_y^2).
\end{equation}
Because of $H^2(D_2;\mathbb{R}/2 \pi \mathbb{Z}) = \mathbb{Z}_2$, there is a nontrivial twist $[\omega] \in H^2(D_2;\mathbb{R}/2 \pi \mathbb{Z})$.
An example of a nontrivial two-cocycle $\omega$ is given by
\begin{align}
e^{i \omega_{p,p'}} \qquad = \qquad
\begin{tabular}{c|cccc}
$p \backslash p'$ & 1 & $m_x$ & $m_y$ & $m_x m_y$ \\
\hline
1 & 1 & 1 & 1 & 1 \\
$m_x$ & 1 & 1 & i & -i \\
$m_y$ & 1 & -i & 1 & i \\
$m_x m_y$ & 1 & i & -i & 1 \\
\end{tabular}
\end{align}
There is one 2-dimensional $\omega$-projective irrep.,
which we denote by $W$; it is represented by the Pauli matrices as
\begin{align}
U_{1} = \begin{pmatrix}
1 & 0 \\
0 & 1 \\
\end{pmatrix}, \qquad
U_{m_x} = \begin{pmatrix}
0 & 1 \\
1 & 0 \\
\end{pmatrix}, \qquad
U_{m_y} = \begin{pmatrix}
0 & -i \\
i & 0 \\
\end{pmatrix}, \qquad
U_{m_x m_y}
= \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
\end{align}
The $K$-group is
\begin{align}
R^{\omega}(D_2) = K_{D_2}^{\omega-0}(pt) = \mathbb{Z}
\end{align}
as an Abelian group.
The tensor product $V \otimes W$ with a linear representation $V \in R(D_2)$
is just the multiplication by the rank of $V$, $V \otimes W \cong W^{\oplus {\rm dim}\, V}$, which leads to
the $R(D_2)$-module structure
\begin{align}
R^{\omega}(D_2) = (1+t_x+t_y+t_x t_y)
= \{(1+t_x+t_y+t_x t_y) f(t_x,t_y) | f(t_x,t_y) \in R(D_2) \}.
\end{align}
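The Pauli-matrix representation above indeed realizes the nontrivial two-cocycle table; a few products can be checked directly (a sketch with illustrative variable names):

```python
import numpy as np

# the projective representation W of D2 by Pauli matrices
U1 = np.eye(2, dtype=complex)
Umx = np.array([[0, 1], [1, 0]], dtype=complex)       # U_{m_x}
Umy = np.array([[0, -1j], [1j, 0]], dtype=complex)    # U_{m_y}
Umxmy = np.array([[1, 0], [0, -1]], dtype=complex)    # U_{m_x m_y}

# compare U_p U_p' = e^{i omega_{p,p'}} U_{pp'} with the cocycle table
assert np.allclose(Umx @ Umy, 1j * Umxmy)     # e^{i omega_{mx,my}} = i
assert np.allclose(Umy @ Umx, -1j * Umxmy)    # e^{i omega_{my,mx}} = -i
assert np.allclose(Umx @ Umxmy, -1j * Umy)    # e^{i omega_{mx,mxmy}} = -i
assert np.allclose(Umxmy @ Umx, 1j * Umy)     # e^{i omega_{mxmy,mx}} = i
assert np.allclose(Umx @ Umx, U1)             # e^{i omega_{mx,mx}} = 1
print("Pauli matrices realize the nontrivial two-cocycle of D2")
```

The mismatched phases $\pm i$ between $U_{m_x}U_{m_y}$ and $U_{m_y}U_{m_x}$ are exactly what obstructs lifting $W$ to a linear representation of $D_2$.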
\subsection{Onsite symmetry}
Let us consider the $K$-theory associated with the onsite unitary symmetry $G$
\begin{align}
&U_g H(\bm{k}) U_g^{-1} = H(\bm{k}), \qquad g \in G, \\
&U_g U_h = e^{i\omega_{g,h}} U_{g h}, \qquad \omega_{g,h} \in Z^2(G,\mathbb{R}/2 \pi \mathbb{Z}).
\end{align}
For class AIII ($n=1$), we assume that the
onsite symmetry commutes with the chiral symmetry:
\begin{align}
\Gamma H(\bm{k}) \Gamma^{-1} = - H(\bm{k}), \qquad
U_g \Gamma = \Gamma U_g.
\end{align}
In such cases, the Hamiltonian $H(\bm{k})$
is decomposed as a direct sum
\begin{align}
H(\bm{k}) = \bigoplus_{\rho} H_{\rho}(\bm{k})
\end{align}
of irreducible $\omega$-projective representations.
In each sector, the Hamiltonian behaves as a class A or AIII insulator.
The topological classification is recast as
\begin{align}
K_G^{\omega-n}(X)
\cong R^{\omega}(G) \otimes_{\mathbb{Z}} K^{n}(X).
\end{align}
For example,
we immediately obtain the topological
classification of 2d class A insulators with
onsite unitary $\mathbb{Z}_n$ symmetry:
\begin{align}
K_{\mathbb{Z}_n}^{0}(T^2)
\cong R(\mathbb{Z}_n) \otimes K(T^2)
= R(\mathbb{Z}_n) \otimes_{\mathbb{Z}} (\mathbb{Z} \oplus \mathbb{Z})
= R(\mathbb{Z}_n) \oplus R(\mathbb{Z}_n).
\end{align}
The first direct summand represents
atomic insulators with representations of $\mathbb{Z}_n$.
The second direct summand is generated by the
Chern insulators with irreducible representations of $\mathbb{Z}_n$.
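The block decomposition $H(\bm{k}) = \bigoplus_{\rho} H_{\rho}(\bm{k})$ underlying this classification can be illustrated with the isotypic projectors $P_j = \frac{1}{n}\sum_g \zeta^{-jg}\, U^g$ for an onsite $\mathbb{Z}_3$ symmetry; the orbital content below is an illustrative choice, not taken from the text:

```python
import numpy as np

n = 3
zeta = np.exp(2j * np.pi / n)
# onsite Z_3 symmetry on four orbitals carrying C0 + C1 + C1 + C2
U = np.diag([1, zeta, zeta, zeta**2])

# symmetrize a random Hermitian matrix so that [H, U] = 0
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H0 = A + A.conj().T
H = sum(np.linalg.matrix_power(U, g) @ H0 @ np.linalg.matrix_power(U, -g)
        for g in range(n)) / n
assert np.allclose(U @ H @ np.linalg.inv(U), H)

# isotypic projectors P_j = (1/n) sum_g zeta^{-j g} U^g
P = [sum(zeta**(-j * g) * np.linalg.matrix_power(U, g) for g in range(n)) / n
     for j in range(n)]
ranks = [int(round(np.trace(p).real)) for p in P]
print(ranks)  # multiplicities of C0, C1, C2 in this orbital content

# H is a direct sum of blocks H_rho, one per irrep sector
assert np.allclose(H, sum(p @ H @ p for p in P))
```

Within each projected block the Hamiltonian is an unconstrained class A insulator, which is the content of $K_G^{\omega-n}(X) \cong R^{\omega}(G) \otimes_{\mathbb{Z}} K^{n}(X)$.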
\subsection{Reflection symmetry}
Let us consider reflection symmetric 1d class A/AIII crystalline insulators.
The $\mathbb{Z}_2 = \{1,m\}$ group acts on the BZ circle $S^1$ as a reflection:
\begin{equation}
\tilde S^1 \ \ = \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*{\bullet}="a"
!{(2,0)}*{\bullet}="b"
!{(2.4,0.4)}="c"
!{(2.4,-0.4)}="d"
!{(2.6,0)}*{m}="e",
"a" -@/^1.0cm/ "b",
"a" -@/_1.0cm/ "b",
!{(-0.5,0)}-@{.}!{(2.5,0)}
"c" -@{<->} "d"
}
\label{Fig:TildeS1}
\end{equation}
We denote by $\tilde S^1$ the circle $S^1$ equipped with this reflection action.
There are two fixed points at $k_x = 0, \pi$.
There is no nontrivial twist: $H^2(\mathbb{Z}_2; C(\tilde S^1, U(1))) = 0$.
One can fix the $U(1)$ phases associated with the square of
$\mathbb{Z}_2$ action to 1:
\begin{align}
U_m(-k_x) U_m(k_x) = {\bf 1},
\end{align}
where ${\bf 1}$ is the identity matrix.
In Karoubi's representation,
the $K$-group $K^{n}_{\mathbb{Z}_2}(\tilde S^1)$ gives the
topological classification of Hamiltonians with the following symmetries:
\begin{align}
&{\rm Class\ A}\ (n=0):
&&U_m(k_x) H(k_x) U_m(k_x)^{-1} = H(-k_x), \\
&{\rm Class\ AIII}\ (n=1):
&&\left\{ \begin{array}{l}
\Gamma H(k_x) \Gamma^{-1} = -H(k_x), \\
U_m(k_x) H(k_x) U_m(k_x)^{-1} = H(-k_x), \\
\Gamma U_m(k_x) = U_m(k_x) \Gamma, \\
\end{array}\right.
\end{align}
\subsubsection{Calculation of $K$-group by the Mayer-Vietoris sequence}
One way to calculate the $K$-group $K^{-n}_{\mathbb{Z}_2}(\tilde S^1)$ is to use the Mayer-Vietoris sequence.~\cite{bott2013differential}
See Appendix \ref{app:Mayer-Vietoris sequence} for the details of the Mayer-Vietoris sequence.
We divide $\tilde S^1 = U \cup V$ into two subspaces
\begin{align}
U = \{e^{ik} \in \tilde S^1 | k \in [-\pi/2,\pi/2]\}, &&
V = \{e^{ik} \in \tilde S^1 | k \in [\pi/2,3\pi/2]\},
\end{align}
as shown below:
$$
U \sqcup V =\ \
\begin{xy}
(0,0)*+!R{V}="A"*{\bullet},
"A"+<1cm,0.8cm>="B"*{},
"A"+<1cm,-0.8cm>="C"*{},
"A"+<2.4cm,0cm>*+!L{U}="D"*{\bullet},
"D"+<-1cm,0.8cm>="E"*{},
"D"+<-1cm,-0.8cm>="F"*{},
\ar@/^/@{-} "A";"B",
\ar@/_/@{-} "A";"C",
\ar@/_/@{-} "D";"E",
\ar@/^/@{-} "D";"F"
\end{xy}
$$
Each of the arcs $U$ and $V$ can be deformed to a point while preserving the reflection symmetry:
\[
U \sqcup V \ \ \sim \ \ \{0\} \sqcup \{\pi\} \ \ = \ \
\begin{xy}
(0,0)*{\bullet}="A"*{},
"A"+<2cm,0cm>*{\bullet}="D"*{},
"A"+<0.4cm,0cm>*{\{\pi\}},
"D"+<-0.4cm,0cm>*{\{0\}},
"A"+<-1.0cm,0cm>*{\mathbb{Z}_2},
"D"+<1.0cm,0cm>*{\mathbb{Z}_2},
\ar @(lu,ld) "A";"A"
\ar @(ru,rd) "D";"D"
\end{xy}
\]
The intersection $U \cap V$ is homotopic to two points $\mathbb{Z}_2 \times pt$ that are exchanged by the $\mathbb{Z}_2$ action:
\[
U \cap V \ \ \sim \ \ \mathbb{Z}_2 \times pt \ \ = \ \ \{\frac{\pi}{2}, -\frac{\pi}{2}\} \ \ =
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0.6)}="a"*{\bullet}
!{(1,-0.6)}="b"*{\bullet}
!{(1,0.5)}="c"*{}
!{(1,-0.5)}="d"*{}
!{(1.4,0)}*{m}="e"
"c" -@{<->} "d"
}
\]
The Mayer-Vietoris sequence associated to the sequence of the inclusions
\begin{align}
(\tilde S^1 =) \ U \cup V \leftarrow U \sqcup V \leftarrow U \cap V
\end{align}
is the six term exact sequence of the $K$-theory
\begin{equation}
\begin{CD}
K_{\mathbb{Z}_2}^{1}(U \cap V) @<<< K_{\mathbb{Z}_2}^{1}(U) \oplus K_{\mathbb{Z}_2}^{1}(V) @<<< K_{\mathbb{Z}_2}^{1}(\tilde S^1) \\
@VVV @. @AAA \\
K_{\mathbb{Z}_2}^{0}(\tilde S^1) @>>> K_{\mathbb{Z}_2}^{0}(U) \oplus K_{\mathbb{Z}_2}^{0}(V) @>>> K_{\mathbb{Z}_2}^{0}(U \cap V).
\end{CD}
\label{Seq:S1Ref}
\end{equation}
In this sequence, we have
\begin{align}
K^{n}_{\mathbb{Z}_2}(U) \cong K^{n}_{\mathbb{Z}_2}(\{0\}) \cong
\left\{ \begin{array}{ll}
R(\mathbb{Z}_2) & (n=0) \\
0 & (n=1) \\
\end{array}\right., &&
K^{n}_{\mathbb{Z}_2}(V) \cong K^{n}_{\mathbb{Z}_2}(\{\pi\}) \cong
\left\{ \begin{array}{ll}
R(\mathbb{Z}_2) & (n=0) \\
0 & (n=1) \\
\end{array}\right.,
\end{align}
and
\begin{align}
K^{n}_{\mathbb{Z}_2}(U \cap V)
\cong K^n_{\mathbb{Z}_2}(\{\frac{\pi}{2}, - \frac{\pi}{2} \})
\cong K^{n}(\{\frac{\pi}{2}\})
\cong \left\{ \begin{array}{ll}
\mathbb{Z} & (n=0) \\
0 & (n=1) \\
\end{array}\right..
\end{align}
Thus, the sequence (\ref{Seq:S1Ref}) is recast into
\begin{equation}
\begin{CD}
0 @<<< 0 @<<< K_{\mathbb{Z}_2}^{1}(\tilde S^1) \\
@VVV @. @AAA \\
K_{\mathbb{Z}_2}^{0}(\tilde S^1) @>>> R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2) @>\Delta>> \mathbb{Z}.
\end{CD}
\label{Seq:S1Ref_2}
\end{equation}
Here, the homomorphism $\Delta$ is given by
\begin{align}
\Delta : R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2) \to \mathbb{Z}, && \Delta (f(t),g(t)) = f(1)-g(1),
\end{align}
under the presentation $R(\mathbb{Z}_2) = \mathbb{Z}[t]/(1-t^2)$.
We have
\begin{align}
K_{\mathbb{Z}_2}^{0}(\tilde S^1) \cong \mathrm{Ker} (\Delta), &&
K_{\mathbb{Z}_2}^{1}(\tilde S^1) \cong \mathrm{Coker} (\Delta).
\end{align}
$\mathrm{Ker}(\Delta)$ is spanned by $\{(1,1),(t,t),(0,1-t)\} \subset R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2)$,
so we have $\mathrm{Ker}(\Delta) \cong \mathbb{Z}^3$ as an Abelian group.
The basis elements $(1,1)$ and $(t,t)$ generate the free $R(\mathbb{Z}_2)$-module $R(\mathbb{Z}_2)$, and $(0,1-t)$ generates the ideal $(1-t) = \{(1-t) f(t) | f(t) \in R(\mathbb{Z}_2) \}$ in $R(\mathbb{Z}_2)$.
As a result, we get the following $R(\mathbb{Z}_2)$-modules as $K$-groups
\begin{align}
{\rm Class\ A}: K_{\mathbb{Z}_2}^{0}(\tilde S^1) \cong \overbrace{R(\mathbb{Z}_2)}^{\mathbb{Z}^2} \oplus \overbrace{(1-t)}^{\mathbb{Z}}, &&
{\rm Class\ AIII}: K_{\mathbb{Z}_2}^{1}(\tilde S^1) \cong 0.
\label{Eq:KGroupS1}
\end{align}
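The kernel computation behind (\ref{Eq:KGroupS1}) can be cross-checked with elementary integer linear algebra; encoding an element $a + bt \in R(\mathbb{Z}_2)$ as the integer pair $(a,b)$ is our own bookkeeping convention, not notation from the text.

```python
from fractions import Fraction

# Encode f = a + b*t in R(Z_2) = Z[t]/(1 - t^2) as the pair (a, b).
def ev1(f):          # evaluate f at t = 1
    return f[0] + f[1]

def mul_t(f):        # t * (a + b*t) = b + a*t, since t^2 = 1
    return (f[1], f[0])

def delta(f, g):     # Delta(f, g) = f(1) - g(1)
    return ev1(f) - ev1(g)

# The three claimed generators of Ker(Delta):
gens = [((1, 0), (1, 0)),    # (1, 1)
        ((0, 1), (0, 1)),    # (t, t)
        ((0, 0), (1, -1))]   # (0, 1 - t)
in_kernel = all(delta(f, g) == 0 for f, g in gens)

# Linear independence over Z: the 3x4 coordinate matrix has full row rank.
def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        p = next((i for i in range(r, len(m)) if m[i][c]), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

ker_rank = rank([[f[0], f[1], g[0], g[1]] for f, g in gens])

# Module structure of the ideal: t * (0, 1 - t) = -(0, 1 - t)
t_acts_as_minus_one = mul_t((1, -1)) == (-1, 1)
```

The check confirms that the three generators lie in $\mathrm{Ker}(\Delta)$, are independent over $\mathbb{Z}$, and that $t$ acts by $-1$ on the ideal $(1-t)$.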
\subsubsection{Characterization of $K$-group by fixed points}
The injection in (\ref{Seq:S1Ref_2}),
\begin{align}
\begin{CD}
0 @>>> K_{\mathbb{Z}_2}^{0}(\tilde S^1) @>>> \underbrace{R(\mathbb{Z}_2)}_{k_x=0} \oplus \underbrace{R(\mathbb{Z}_2)}_{k_x=\pi},
\end{CD}
\end{align}
means that the $K$-group $K_{\mathbb{Z}_2}^{0}(\tilde S^1)$
can be characterized by the representations at the two fixed points.
In general,
representations of the little group at fixed points provide topological invariants
which enable us to distinguish different elements in a $K$-group.
Let $\{ e_1, e_2, e_3 \}$ be a basis of the $K$-group $K_{\mathbb{Z}_2}^{0}(\tilde S^1)$
characterized by the following fixed point representations:
$$
\begin{tabular}[t]{|c|c|c|}
\hline
Basis & $\underbrace{R(\mathbb{Z}_2)}_{k_x=0}$ & $\underbrace{R(\mathbb{Z}_2)}_{k_x=\pi}$ \\
\hline
$e_1$ & $1$ & $1$ \\
$e_2$ & $t$ & $t$ \\
$e_3$ & $1$ & $t$ \\
\hline
\end{tabular}
$$
Because of the $R(\mathbb{Z}_2)$-module relations $e_2 = t \cdot e_1$ and $t \cdot (e_1-e_3) = -(e_1-e_3)$,
the two basis elements $e_1, e_2$ generate the free module $R(\mathbb{Z}_2)$, and $e_1-e_3$ generates the ideal $(1-t)$.
\subsubsection{Vector bundle representation}
We give $\mathbb{Z}_2$ equivariant vector bundle representations for the basis $\{ e_1,e_2,e_3 \}$.
We will construct $\mathbb{Z}_2$ equivariant vector bundles $\{ E_1, E_2, E_3 \}$ with the following fixed point data:
$$
\begin{tabular}[t]{|c|c|c|}
\hline
Vector bundle & $E|_{k_x=0}$ & $E|_{k_x=\pi}$ \\
\hline
$E_1$ & $\mathbb{C}_0$ & $\mathbb{C}_0$ \\
$E_2$ & $\mathbb{C}_1$ & $\mathbb{C}_1$ \\
$E_3$ & $\mathbb{C}_0$ & $\mathbb{C}_1$ \\
\hline
\end{tabular}
$$
Here $\mathbb{C}_0$ and $\mathbb{C}_1$ are representations with $U_{m} = 1, -1$, respectively.
$e_1$ is represented by a $\mathbb{Z}_2$ equivariant complex vector bundle $E_1$ of rank 1 with $\mathbb{Z}_2$ action $\rho_{m} : E_1 \to E_1$ as
\begin{align}
e_1 = \left[ \left( E_1 = S^1 \times \mathbb{C}, \ \ \rho_{m}(k_x,v) = (-k_x,v) \right) \right] .
\label{Eq:1DTCIVectE1}
\end{align}
By using the Bloch states, $E_1$ is equivalently described by a Bloch state $\ket{k_x}_1$ which satisfies the reflection symmetry as
\begin{align}
e_1 = \left[ \left( \ket{k_x}_1, \ \ \hat U_{m} \ket{k_x}_1 = \ket{-k_x}_1 \right) \right] .
\end{align}
(Recall that the (local) Bloch states $\Phi(\bm{k}) = \{ \ket{\bm{k},n} \}_{n=1, \dots N}$ correspond to
(local) sections of the frame bundle $F(E)$ associated with a vector bundle $E$.)
The Bloch state $\ket{k_x}_1$ is translated to the real space basis $\ket{R_x}_1 = \sum_{k_x \in S^1} \ket{k_x}_1 e^{-i k_x R_x}$ with the
reflection symmetry
\begin{align}
e_1 = \left[ \left(\ket{R_x}_1, \ \ \hat U_{m} \ket{R_x}_1 = \ket{-R_x}_1 \right) \right] .
\end{align}
The basis state $\ket{R_x}_1$ corresponds to $s$-orbitals
localized at the center of unit cells
\begin{align}
e_1 = \left[ \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*{\bigcirc},
!{(2,0)}*{\bigcirc},
!{(-2,0)}*{\bigcirc},
!{(4,0)}*{\bigcirc},
!{(-4,0)}*{\bigcirc},
!{(0,-0.4)}*{\ket{s}},
!{(2,-0.4)}*{\ket{s}},
!{(-2,-0.4)}*{\ket{s}},
!{(4,-0.4)}*{\ket{s}},
!{(-4,-0.4)}*{\ket{s}},
!{(-5,0)}-@{->}!{(5,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,0.6)}*{m},
!{(0.4,0.4)}-@{<->}!{(-0.4,0.4)},
!{(-1,-0.8)}-@{|-|}!{(1,-0.8)},
!{(0,-1)}*{\rm unit\ cell},
}
\ \ \right]
\label{Fig:1DTCIe1}
\end{align}
where the reflection axis is placed at the center of the unit cell.
The base $e_2 = t \cdot e_1$ is represented by the $\mathbb{Z}_2$ equivariant vector bundle $E_2 = \mathbb{C}_1 \otimes E_1$ as follows
\begin{align}
e_2 = \left[ \left( E_2 = S^1 \times \mathbb{C}, \ \ \rho_{m}(k_x,v) = (-k_x,-v) \right) \right].
\label{Eq:1DTCIVectE2}
\end{align}
The Bloch state and localized orbital representations read
\begin{align}
e_2 = \left[ \left( \ket{k_x}_2, \ \ \hat U_{m} \ket{k_x}_2 = - \ket{-k_x}_2\right) \right],
\end{align}
\begin{align}
e_2 = \left[ \left(\ket{R_x}_2, \ \ \hat U_{m} \ket{R_x}_2 = - \ket{-R_x}_2\right) \right].
\end{align}
$\ket{R_x}_2$ corresponds to $p$-orbitals localized at the center of unit cells:
\begin{align}
e_2 = \left[ \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*{\bigcirc},
!{(2,0)}*{\bigcirc},
!{(-2,0)}*{\bigcirc},
!{(4,0)}*{\bigcirc},
!{(-4,0)}*{\bigcirc},
!{(0,-0.4)}*{\ket{p}},
!{(2,-0.4)}*{\ket{p}},
!{(-2,-0.4)}*{\ket{p}},
!{(4,-0.4)}*{\ket{p}},
!{(-4,-0.4)}*{\ket{p}},
!{(-5,0)}-@{->}!{(5,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,0.6)}*{m},
!{(0.4,0.4)}-@{<->}!{(-0.4,0.4)},
!{(-1,-0.8)}-@{|-|}!{(1,-0.8)},
!{(0,-1)}*{\rm unit\ cell},
}
\ \ \right]
\end{align}
The last basis element $e_3$ is represented by the following $\mathbb{Z}_2$ equivariant vector bundle $E_3$
\begin{align}
e_3 = \left[ \left( E_3 = S^1 \times \mathbb{C}, \ \ \rho_{m}(k_x,v) = (-k_x, e^{-i k_x} v) \right) \right].
\end{align}
If one uses the Bloch state $\ket{k_x}_3$, then
\begin{align}
e_3 = \left[ \left( \ket{k_x}_3, \ \ \hat U_{m} \ket{k_x}_3 = e^{-i k_x} \ket{-k_x}_3\right) \right].
\end{align}
If one instead uses the localized orbital $\ket{R_x}_3$, then
\begin{align}
e_3 = \left[ \left(\ket{R_x}_3, \ \ \hat U_{m} \ket{R_x}_3 = \ket{-R_x-1}_3\right) \right],
\label{Eq:1DTCIe3_RealSpace}
\end{align}
where $\ket{R_x}_3$ corresponds to the localized $s$-orbitals at the boundary of unit cells:
\begin{align}
e_3 = \left[ \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}*{\bigcirc},
!{(3,0)}*{\bigcirc},
!{(-1,0)}*{\bigcirc},
!{(-3,0)}*{\bigcirc},
!{(1,-0.4)}*{\ket{s}},
!{(3,-0.4)}*{\ket{s}},
!{(-1,-0.4)}*{\ket{s}},
!{(-3,-0.4)}*{\ket{s}},
!{(-5,0)}-@{->}!{(5,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,0.6)}*{m},
!{(0.4,0.4)}-@{<->}!{(-0.4,0.4)},
!{(-1,-0.8)}-@{|-|}!{(1,-0.8)},
!{(0,-1)}*{\rm unit\ cell},
}
\ \ \right]
\label{Fig:1DTCIe3}
\end{align}
Here, we assumed that the $s$-orbital belonging to the unit cell $R_x$ is localized at $R_x + \frac{1}{2}$.
An alternative choice, for example, $R_x -\frac{1}{2}$,
leads to the $\mathbb{Z}_2$ equivariant vector bundle
\begin{align}
\left[ \left( E_3' = S^1 \times \mathbb{C}, \ \ \rho_{m}(k_x,v) = (-k_x, e^{i k_x} v) \right) \right].
\end{align}
$E_3'$ is isomorphic to $E_3$ as a $\mathbb{Z}_2$ equivariant vector bundle;
thus, $E_3$ and $E_3'$ give the same $K$-class $e_3$.
As explained in Sec.~\ref{sec:Dependence on unit cell and Wyckoff position},
even if the localized $s$-orbitals described by (\ref{Fig:1DTCIe1}) and (\ref{Fig:1DTCIe3}) are physically the same,
the corresponding $K$-classes are different.
The $K$-classes depend on the choice of the unit cell center.
\subsubsection{Karoubi's triple representation}
Here we give an alternative representation of $K$-groups, namely Karoubi's triple representation.
First, from the vector bundle representation,
we can get Karoubi's triple representations $e_i = [(E_i, 1,-1)] \ (i=1,2,3)$ for the basis elements of the $K$-group
with the following fixed point data:
$$
\begin{tabular}[t]{|c|c|c|}
\hline
Triple & $k_x=0$ & $k_x=\pi$ \\
\hline
$(E_1,1,-1)$ & $(\mathbb{C}_0,1,-1)$ & $(\mathbb{C}_0,1,-1)$ \\
$(E_2,1,-1)$ & $(\mathbb{C}_1,1,-1)$ & $(\mathbb{C}_1,1,-1)$ \\
$(E_3,1,-1)$ & $(\mathbb{C}_0,1,-1)$ & $(\mathbb{C}_1,1,-1)$ \\
\hline
\end{tabular}
$$
A benefit of using Karoubi's triples is
that we can construct representatives of $e_3$ as a Hamiltonian acting on
the vector bundles $E_1$ and $E_2$.
In the Bloch basis, $E_1 \oplus E_2$ is written as
\begin{align}
E_1 \oplus E_2 =
\left( \Phi_{E_1 \oplus E_2}(k_x) = \left( \ket{k_x}_1 , \ket{k_x}_2 \right), \qquad
\hat U_{m} \Phi_{E_1 \oplus E_2}(k_x) = \Phi_{E_1 \oplus E_2}(-k_x) U_{\sigma}(k_x), \ \ U_{\sigma}(k_x) = \sigma_z \right) ,
\end{align}
where $\sigma_z = \begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix}$ is the $z$ component of the Pauli matrices $\sigma_i (i=x,y,z)$.
A Hamiltonian on $E_1 \oplus E_2$ should satisfy the reflection symmetry
\begin{align}
\sigma_z H(k_x) \sigma_z = H(-k_x).
\end{align}
We can show that the following triple represents the base $e_3$:
\begin{align}
e_3 = \left[ (E_1 \oplus E_2 , H_0 = \cos (k_x) \sigma_z + \sin (k_x) \sigma_y, H_1 = -\sigma_0 ) \right] .
\end{align}
Indeed,
the empty and occupied states $\ket{\phi_{\pm}(k_x)}$ of the Hamiltonian $H_0$, defined by
$H_0 \ket{\phi_{\pm}(k_x)} = \pm \ket{\phi_{\pm}(k_x)}$, are given by the following Bloch states
\begin{align}
&\ket{\phi_+(k_x)} = \frac{1}{2}(1+e^{-i k_x}) \ket{k_x}_1 + \frac{1}{2}(1-e^{-i k_x}) \ket{k_x}_2, \\
&\ket{\phi_-(k_x)} = \frac{1}{2}(1-e^{-i k_x}) \ket{k_x}_1 + \frac{1}{2}(1+e^{-i k_x}) \ket{k_x}_2
\end{align}
with the reflection symmetry
\begin{align}
\hat U_{m} \ket{\phi_{+}(k_x)} = e^{-i k_x} \ket{\phi_{+}(-k_x)}, &&
\hat U_{m} \ket{\phi_{-}(k_x)} = - e^{-i k_x} \ket{\phi_{-}(-k_x)},
\end{align}
which means that the empty state $\ket{\phi_+(k_x)}$ spans the $\mathbb{Z}_2$ equivariant bundle $E_3$,
and $\ket{\phi_-(k_x)}$ spans $E_4 = \mathbb{C}_1 \otimes E_3$, i.e.\ $E_1 \oplus E_2$ is isomorphic to $E_3 \oplus E_4$.
Then, by using the stable equivalence, we have
\begin{equation}\begin{split}
(E_1 \oplus E_2 , H_0 = \cos (k_x) \sigma_z + \sin (k_x) \sigma_y, H_1 = -\sigma_0 )
&\sim (E_3 \oplus E_4 , H_0 = 1 \oplus (-1), H_1 = (-1) \oplus (-1)) \\
&\sim (E_3, H_0 = 1, H_1 = -1).
\end{split}\label{Eq:EqivalenceE1E2E3}\end{equation}
Note that if we construct the Wannier orbital $\ket{W_+(R_x)}$ from the energy eigenstate $\ket{\phi_+(k_x)}$
by $\ket{W_+(R_x)} := \sum_{k_x \in S^1} \ket{\phi_+(k_x)} e^{-i k_x R_x}$,
we recover the real space orbital picture (\ref{Eq:1DTCIe3_RealSpace}).
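As a sanity check, the eigenstate equations for $H_0$ and the reflection transformation of $\ket{\phi_\pm(k_x)}$ can be verified numerically; the explicit $2 \times 2$ matrices below are our own transcription of $H_0$ and $U_\sigma = \sigma_z$ in the basis $(\ket{k_x}_1, \ket{k_x}_2)$.

```python
import cmath

def phi(k, s):   # components of |phi_±(k)> (s = +1 or -1) in (|k>_1, |k>_2)
    return [(1 + s * cmath.exp(-1j * k)) / 2,
            (1 - s * cmath.exp(-1j * k)) / 2]

def H0(k):       # cos(k) sigma_z + sin(k) sigma_y
    c, sn = cmath.cos(k), cmath.sin(k)
    return [[c, -1j * sn], [1j * sn, -c]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

ks = [0.3, 1.1, -0.9, 2.7]
# H_0 |phi_±(k)> = ± |phi_±(k)>
eigen_ok = all(
    abs(matvec(H0(k), phi(k, s))[i] - s * phi(k, s)[i]) < 1e-12
    for k in ks for s in (1, -1) for i in (0, 1))

# Reflection: U_m Phi(k) = Phi(-k) sigma_z, hence on component vectors
# sigma_z v_±(k) = ± e^{-ik} v_±(-k), i.e. U_m|phi_±(k)> = ± e^{-ik}|phi_±(-k)>.
sym_ok = all(
    abs([1, -1][i] * phi(k, s)[i]
        - s * cmath.exp(-1j * k) * phi(-k, s)[i]) < 1e-12
    for k in ks for s in (1, -1) for i in (0, 1))
```

Both checks pass at generic momenta, confirming that $\ket{\phi_\pm}$ carry the $U_m$-eigenphases $\pm e^{-ik_x}$ characteristic of $E_3$ and $E_4$.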
\subsubsection{Real space picture of the isomorphism $E_1 \oplus E_2 \cong E_3 \oplus E_4$}
The above equivalence relation (\ref{Eq:EqivalenceE1E2E3}) is based on the isomorphism
$E_1 \oplus E_2 \cong E_3 \oplus E_4$.
This can be understood through a continuous deformation of the real space orbitals.
The $\mathbb{Z}_2$ equivariant vector bundle $E_1 \oplus E_2$ is
represented by the real space orbitals in which $s$- and $p$-orbitals are placed at the center of the unit cell:
\begin{align}
E_1 \oplus E_2 = \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0.2)}*{\bigcirc},
!{(2,0.2)}*{\bigcirc},
!{(-2,0.2)}*{\bigcirc},
!{(4,0.2)}*{\bigcirc},
!{(-4,0.2)}*{\bigcirc},
!{(0,0.6)}*{\ket{s}},
!{(2,0.6)}*{\ket{s}},
!{(-2,0.6)}*{\ket{s}},
!{(4,0.6)}*{\ket{s}},
!{(-4,0.6)}*{\ket{s}},
!{(0,-0.2)}*{\bigcirc},
!{(2,-0.2)}*{\bigcirc},
!{(-2,-0.2)}*{\bigcirc},
!{(4,-0.2)}*{\bigcirc},
!{(-4,-0.2)}*{\bigcirc},
!{(0,-0.6)}*{\ket{p}},
!{(2,-0.6)}*{\ket{p}},
!{(-2,-0.6)}*{\ket{p}},
!{(4,-0.6)}*{\ket{p}},
!{(-4,-0.6)}*{\ket{p}},
!{(-5,0)}-@{->}!{(5,0)},
!{(0,-1)}-@{.}!{(0,1.1)},
!{(0,1.2)}*{m},
!{(0.4,1)}-@{<->}!{(-0.4,1)},
!{(-1,-1)}-@{|-|}!{(1,-1)},
!{(0,-1.2)}*{\rm unit\ cell},
}
\end{align}
To deform the orbital positions,
we first mix the $s$ and $p$ orbitals as $\ket{s\pm p} : = \frac{\ket{s} \pm \ket{p}}{\sqrt{2}}$.
Then, we can continuously translate the localized orbital $\ket{s+p}$ to the right and $\ket{s-p}$ to the left while preserving the reflection symmetry, as shown below:
\begin{align}
E_1 \oplus E_2 \cong \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0.4,0.2)}*{\bigcirc},
!{(2.4,0.2)}*{\bigcirc},
!{(-1.6,0.2)}*{\bigcirc},
!{(4.4,0.2)}*{\bigcirc},
!{(-3.6,0.2)}*{\bigcirc},
!{(0.4,0.6)}*{\ket{s+p}},
!{(2.4,0.6)}*{\ket{s+p}},
!{(-1.6,0.6)}*{\ket{s+p}},
!{(4.4,0.6)}*{\ket{s+p}},
!{(-3.6,0.6)}*{\ket{s+p}},
!{(-0.4,-0.2)}*{\bigcirc},
!{(1.6,-0.2)}*{\bigcirc},
!{(-2.4,-0.2)}*{\bigcirc},
!{(3.6,-0.2)}*{\bigcirc},
!{(-4.4,-0.2)}*{\bigcirc},
!{(-0.4,-0.6)}*{\ket{s-p}},
!{(1.6,-0.6)}*{\ket{s-p}},
!{(-2.4,-0.6)}*{\ket{s-p}},
!{(3.6,-0.6)}*{\ket{s-p}},
!{(-4.4,-0.6)}*{\ket{s-p}},
!{(-5,0)}-@{->}!{(5,0)},
!{(0,-1)}-@{.}!{(0,1.1)},
!{(0,1.2)}*{m},
!{(0.4,1)}-@{<->}!{(-0.4,1)},
!{(0,0.2)}-@{->}!{(0.3,0.2)},
!{(0,-0.2)}-@{->}!{(-0.3,-0.2)},
!{(-1,-1)}-@{|-|}!{(1,-1)},
!{(0,-1.2)}*{\rm unit\ cell},
}
\end{align}
Note that $\hat U_{m}\ket{s \pm p} = \ket{s \mp p}$.
After the half translation, and the inverse transformation $(\ket{s+p}, \ket{s-p}) \mapsto (\ket{s},\ket{p})$,
we get the $\mathbb{Z}_2$ equivariant vector bundle $E_3 \oplus E_4$ :
\begin{align}
E_1 \oplus E_2 \cong \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0.2)}*{\bigcirc},
!{(3,0.2)}*{\bigcirc},
!{(-1,0.2)}*{\bigcirc},
!{(-3,0.2)}*{\bigcirc},
!{(1,0.6)}*{\ket{s}},
!{(3,0.6)}*{\ket{s}},
!{(-1,0.6)}*{\ket{s}},
!{(-3,0.6)}*{\ket{s}},
!{(-1,-0.2)}*{\bigcirc},
!{(1,-0.2)}*{\bigcirc},
!{(-3,-0.2)}*{\bigcirc},
!{(3,-0.2)}*{\bigcirc},
!{(-1,-0.6)}*{\ket{p}},
!{(1,-0.6)}*{\ket{p}},
!{(-3,-0.6)}*{\ket{p}},
!{(3,-0.6)}*{\ket{p}},
!{(-4,0)}-@{->}!{(4,0)},
!{(0,-1)}-@{.}!{(0,1.1)},
!{(0,1.2)}*{m},
!{(0.4,1)}-@{<->}!{(-0.4,1)},
!{(-1,-1)}-@{|-|}!{(1,-1)},
!{(0,-1.2)}*{\rm unit\ cell},
}
\ \ = E_3 \oplus E_4.
\end{align}
The isomorphism $E_1 \oplus E_2 \cong E_3 \oplus E_4$ is written as the $k_x$-dependent unitary transformation
in the Bloch basis
\begin{equation}\begin{split}
\Phi_{E_1 \oplus E_2}(k_x) = (\ket{k_x}_1, \ket{k_x}_2)
&\mapsto \Phi_{E_3 \oplus E_4}(k_x)
= \Phi_{E_1 \oplus E_2}(k_x) \ V(k_x), \ \ \Phi_{E_3 \oplus E_4}(k_x) = (\ket{\phi_+(k_x)}, \ket{\phi_-(k_x)}),
\end{split}\end{equation}
where $V(k_x)$ is built from $W \cdot T_{1/2}(k_x) \cdot W^{-1}$ with
$W = \begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
\end{pmatrix}$ the change of the basis as $( \ket{s}, \ket{p} ) \mapsto (\ket{s+p}, \ket{s-p})$ and $T_{1/2}(k_x) =
\begin{pmatrix}
e^{-i k_x/2} & 0 \\
0 & e^{i k_x/2} \\
\end{pmatrix}$ the half lattice translation for $\ket{s+p}, \ket{s-p}$.
This unitary transformation connects Hamiltonians on $E_1 \oplus E_2$ and $E_3 \oplus E_4$.
For example, the Hamiltonian $\hat H$ represented on the basis $\Phi_{E_3 \oplus E_4}$ as
$\hat H \Phi_{E_3 \oplus E_4}(k_x) = \Phi_{E_3 \oplus E_4}(k_x) \ \sigma_z$ is represented on the basis $\Phi_{E_1 \oplus E_2}(k_x)$ as
\begin{align}
\hat H \Phi_{E_1 \oplus E_2}(k_x) = \Phi_{E_1 \oplus E_2}(k_x) H_{E_1 \oplus E_2}(k_x), &&
H_{E_1 \oplus E_2}(k_x) = V(k_x) \sigma_z V(k_x)^{\dag} = \cos k_x \sigma_z + \sin k_x \sigma_y.
\end{align}
This is nothing but the equivalence relation (\ref{Eq:EqivalenceE1E2E3}).
\subsection{Half lattice translation symmetry}
\subsubsection{Preliminaries}
The simplest nonsymmorphic symmetry is the half lattice translation symmetry in 1d.
The symmetry group is $\mathbb{Z}_2 = \{1, \sigma\}$, and the nontrivial $\mathbb{Z}_2$ action is the half lattice translation $\sigma : x \mapsto x+\frac{1}{2}$.
The twist $\tau_{p,p'}(k_x) \in Z^2(\mathbb{Z}_2;C(S^1;\mathbb{R}/2 \pi \mathbb{Z}))$ is fixed as
\begin{equation}
e^{i \tau_{p,p'}(k_x)} \qquad = \qquad
\begin{tabular}{c|cc}
$p \backslash p'$ & $1$ & $\sigma$ \\
\hline
$1$ & $1$ & $1$ \\
$\sigma$ & $1$ & $ e^{-i k_x}$ \\
\end{tabular}
\label{Tab:1DTwist}
\end{equation}
On the Hamiltonian, the half translational symmetry is written as
\begin{align}
U_{\sigma}(k_x) H(k_x) U^{-1}_{\sigma}(k_x) = H(k_x), &&
[ U_{\sigma}(k_x) ]^2 = e^{- i k_x}.
\end{align}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.5\linewidth, trim=0cm 0cm 0cm 0cm]{1DTwist.pdf}
\end{center}
\caption{Structure of the energy eigenstates with the half lattice translation symmetry. }
\label{Fig:1DTwist}
\end{figure}
A characteristic property of the half lattice translation is
the crossing of pairs of eigenstates of $U_{\sigma}(k_x)$.
Because $[ U_{\sigma}(k_x) ]^2 = e^{- i k_x}$,
the eigenvalues of $U_{\sigma}(k_x)$ cannot be globally defined on the BZ $S^1$;
locally on $S^1$, the eigenvalues are $u(k_x) = \pm e^{-i k_x/2}$.
Globally, the two eigenstates with eigenvalues $u(k_x) = \pm e^{-i k_x/2}$ are connected, since
continuing an eigenvalue along $k_x \mapsto k_x + 2 \pi$ interchanges the two branches
\begin{equation}
(e^{-ik_x/2},-e^{-ik_x/2}) \underset{k_x \mapsto k_x + 2 \pi}{\mapsto} (-e^{-ik_x/2},e^{-ik_x/2}).
\label{Eq:EigenvalueTwist}
\end{equation}
See Fig. \ref{Fig:1DTwist}.
In particular, the pair of eigenstates with $u = \pm e^{-i k_x/2}$ must cross somewhere.
From the interchange of eigenvalues (\ref{Eq:EigenvalueTwist}),
we expect that, when we use the Mayer-Vietoris sequence, the gluing of the two lines at
$k_x = \pi/2$ and $k_x = -\pi/2$ involves a relative twisting of the eigenstates of $U_{\sigma}(k_x)$.
If we take the gluing condition at $k_x = \pi/2$ in a proper way, then the one at $k_x = - \pi/2$ is twisted, as shown in (\ref{Eq:1DTwist_MV}) below.
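The branch exchange (\ref{Eq:EigenvalueTwist}) can be checked on a minimal $2 \times 2$ realization of the algebra $[U_{\sigma}(k_x)]^2 = e^{-ik_x}$; the concrete matrix below is our own illustrative choice, not taken from the text.

```python
import cmath
import math

def U(k):        # a minimal 2x2 unitary realizing [U(k)]^2 = e^{-ik}
    return [[0, cmath.exp(-1j * k)], [1, 0]]

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

k = 0.4
U2 = matmul(U(k), U(k))
ph = cmath.exp(-1j * k)
# U(k)^2 = e^{-ik} * identity
algebra_ok = all(abs(U2[i][j] - (ph if i == j else 0)) < 1e-12
                 for i in (0, 1) for j in (0, 1))

def branch(k):   # one locally defined eigenvalue branch, u(k) = e^{-ik/2}
    return cmath.exp(-1j * k / 2)

# branch(k)^2 = e^{-ik}, so it is indeed an eigenvalue of U(k)
is_eigenvalue = abs(branch(k) ** 2 - ph) < 1e-12

# following the branch once around the BZ lands on the other eigenvalue
swapped = abs(branch(k + 2 * math.pi) + branch(k)) < 1e-12
```

The last assertion makes the interchange $(e^{-ik/2}, -e^{-ik/2}) \mapsto (-e^{-ik/2}, e^{-ik/2})$ concrete: the continued branch differs from the original by a sign.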
\subsubsection{Topological classification}
We want to calculate the twisted equivariant $K$-theory $K^{\tau+ n}_{\mathbb{Z}_2}(S^1)$,
where $\mathbb{Z}_2$ trivially acts on $S^1$ as $\sigma : k_x \mapsto k_x$,
and the twist $\tau$ is given by (\ref{Tab:1DTwist}).
To apply the Mayer-Vietoris sequence to $S^1 = U \cup V$,
we divide $S^1$ into two intervals
\begin{align}
U = \{e^{ik_x} \in S^1 | k_x \in [-\pi/2,\pi/2]\}, &&
V = \{e^{ik_x} \in S^1 | k_x \in [\pi/2,3\pi/2]\}.
\label{Eq:1DTwistS1ToUV}
\end{align}
The intersection is
\begin{equation}
U \cap V = \{ \frac{\pi}{2} \} \sqcup \{ -\frac{\pi}{2} \}.
\end{equation}
The sequence of the inclusions
\begin{equation}
\begin{CD}
\xy*\cir<1cm>{}\endxy
@<<<
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(-0.3,0)}*{\cir<1cm>{l^r}}
!{(0.3,0)}*{\cir<1cm>{r^l}}
}@<<<
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0.7)}*{\bullet}
!{(0.5,0.7)}*{\{\frac{\pi}{2} \}}
!{(0,-0.7)}*{\bullet}
!{(0.6,-0.7)}*{\{-\frac{\pi}{2} \}}
}\\
S^1 = U \cup V @<<< U \sqcup V @<<< U \cap V \\
\end{CD}
\end{equation}
induces the six term exact sequence of the twisted equivariant $K$-theory
\begin{equation}
\begin{CD}
K_{\mathbb{Z}_2}^{\tau|_{U \cap V}+1}(U \cap V) @<<< K_{\mathbb{Z}_2}^{\tau|_U+1}(U) \oplus K_{\mathbb{Z}_2}^{\tau|_V+1}(V) @<<< K_{\mathbb{Z}_2}^{\tau+1}(S^1) \\
@VVV @. @AAA \\
K_{\mathbb{Z}_2}^{\tau+0}(S^1) @>>> K_{\mathbb{Z}_2}^{\tau|_U+0}(U) \oplus K_{\mathbb{Z}_2}^{\tau|_V+0}(V) @>\Delta>> K_{\mathbb{Z}_2}^{\tau|_{U \cap V}+0}(U \cap V).
\end{CD}
\label{Seq:1DTwist}
\end{equation}
Here, the twists on $U, V, U \cap V$ are given by the restrictions of the twist $\tau_{\sigma,\sigma}(k_x) = e^{- ik_x}$ to them, and these twists are trivial.
In fact, the twists $\tau|_U, \tau|_V, \tau|_{U \cap V}$ are exact
\begin{align}
\label{Eq:1DTwistTrivU}
&\tau|_U = \delta \beta^U, && \beta^U_{1}(k_x) = 1, \ \ \beta^U_{\sigma}(k_x) = e^{-i k_x/2}, && k_x \in \left[-\frac{\pi}{2},\frac{\pi}{2} \right], \\
&\tau|_V = \delta \beta^V, && \beta^V_{1}(k_x) = 1, \ \ \beta^V_{\sigma}(k_x) = e^{-i k_x/2}, && k_x \in \left[\frac{\pi}{2},\frac{3\pi}{2} \right], \\
\label{Eq:1DTwistTrivUV}
&\tau|_{U \cap V} = \delta \beta^{U \cap V}, && \beta^{U \cap V}_1( \pm \frac{\pi}{2} ) = 1, \ \ \beta^{U \cap V}_{\sigma}( \pm \frac{\pi}{2} ) = e^{ \mp i \pi/4 }. &&
\end{align}
Note that $\beta^U_{\sigma}(k_x)$ and $\beta^V_{\sigma}(k_x)$ correspond to local eigenvalues of $U_{\sigma}(k_x)$.
In these trivializations, two eigenstates are connected at $\{ i \}$ and twisted at $\{ -i \}$.
By using these trivializations, we have
\begin{align}
&K_{\mathbb{Z}_2}^{\tau|_U+n}(U)
\overset{\beta^U}{\cong} K_{\mathbb{Z}_2}^{n}(U)
\cong K_{\mathbb{Z}_2}^{n}(pt)
\cong \left\{ \begin{array}{ll}
R(\mathbb{Z}_2) & (n=0) \\
0 & (n=1) \\
\end{array} \right., \\
&K_{\mathbb{Z}_2}^{\tau|_V+n}(V)
\overset{\beta^V}{\cong} K_{\mathbb{Z}_2}^{n}(V)
\cong K_{\mathbb{Z}_2}^{n}(pt)
\cong \left\{ \begin{array}{ll}
R(\mathbb{Z}_2) & (n=0) \\
0 & (n=1) \\
\end{array} \right., \\
&K_{\mathbb{Z}_2}^{\tau|_{U \cap V}+n}(U \cap V)
\overset{\beta^{U \cap V}}{\cong} K_{\mathbb{Z}_2}^{n}( U \cap V)
\cong K_{\mathbb{Z}_2}^{n}( \{i \} \sqcup \{-i\})
\cong \left\{ \begin{array}{ll}
R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2) & (n=0) \\
0 & (n=1) \\
\end{array} \right..
\end{align}
Then, one may expect that the homomorphism $\Delta : K_{\mathbb{Z}_2}^{\tau|_U+0}(U) \oplus K_{\mathbb{Z}_2}^{\tau|_V+0}(V)\to K_{\mathbb{Z}_2}^{\tau|_{U \cap V}+0}(U \cap V)$
is given by
\begin{align}
j_U^* - j_V^* : K^{n}_{\mathbb{Z}_2}(pt) \oplus K^{n}_{\mathbb{Z}_2}(pt) \to K^{n}_{\mathbb{Z}_2}(\{ \frac{\pi}{2} \} \sqcup \{ -\frac{\pi}{2}\}) , &&
(f(t), g(t)) \mapsto (\underbrace{f(t) - g(t)}_{\{\pi/2\}}, \underbrace{f(t) - g(t)}_{\{-\pi/2\}}) \qquad
({\rm wrong!}).
\end{align}
This is wrong because it does not respect the global structure of the twist.
The correct one is
\begin{align}
\Delta = \alpha^U j_U^* - \alpha^V j_V^* : K^{n}_{\mathbb{Z}_2}(pt) \oplus K^{n}_{\mathbb{Z}_2}(pt) \to K^{n}_{\mathbb{Z}_2}(\{ \frac{\pi}{2} \} \sqcup \{ -\frac{\pi}{2}\})
\label{Eq:1DTwistDelta'}
\end{align}
with $\alpha^U , \alpha^V : K^{n}_{\mathbb{Z}_2}(U \cap V) \to K^{n}_{\mathbb{Z}_2}(U \cap V)$ defined by
\begin{align}
&\alpha^U := \beta^{U \cap V} (\beta^U)^{-1}, && \alpha^U_1(\pm \frac{\pi}{2}) = 1, \ \ \alpha^U_{\sigma}(\pm \frac{\pi}{2}) = 1, \\
&\alpha^V := \beta^{U \cap V} (\beta^V)^{-1}, && \alpha^V_1(\pm \frac{\pi}{2}) = 1, \ \ \alpha^V_{\sigma}(\pm \frac{\pi}{2}) = \pm 1.
\end{align}
Here $\alpha^V_{\sigma}= -1$ corresponds to the change of the eigenvalues as $(1,-1) \mapsto (-1) \cdot (1,-1) = (-1,1)$, which is equivalent to
the action of $t \in R(\mathbb{Z}_2)$. Thus we have
\begin{align}
\Delta : (f(t), g(t)) \mapsto (\underbrace{f(t) - g(t)}_{\{\pi/2\}}, \underbrace{f(t) - t g(t)}_{\{-\pi/2\}}).
\label{Eq:1DTwist_MV}
\end{align}
From the Mayer-Vietoris sequence (\ref{Seq:1DTwist}), we have
\begin{align}
K^{\tau+0}_{\mathbb{Z}_2}(S^1) \cong \mathrm{Ker} (\Delta), &&
K^{\tau+1}_{\mathbb{Z}_2}(S^1) \cong \mathrm{Coker} (\Delta).
\label{Eq:1DTwistKGroup}
\end{align}
From (\ref{Eq:1DTwist_MV}), we find $\mathrm{Ker}(\Delta) \cong \mathbb{Z}$ as an Abelian group,
with generator characterized by $(1+t,1+t) \in R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2)$.
Thus we have
\begin{align}
K^{\tau+0}_{\mathbb{Z}_2}(S^1) \cong \overbrace{(1+t)}^{\mathbb{Z}} \ \ ({\rm class\ A}).
\label{Eq:1DTwistKGroup_n=0}
\end{align}
Here, $(1+t)$ is the $R(\mathbb{Z}_2)$-ideal $(1+t) = \{(1+t) f(t) | f(t) \in R(\mathbb{Z}_2)\}$.
Since $\mathrm{Im}(\Delta) \subset R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2)$ is spanned by $\{(1,1), (t,t), (1,t)\}$, we have
\begin{align}
\mathrm{Coker} (\Delta)
= (R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2)) / \mathrm{Im}(\Delta)
= \mathbb{Z}
\end{align}
as an Abelian group.
The generator of $\mathrm{Coker} (\Delta) = \mathbb{Z}$ is represented by $[(1,0)]$ with $(1,0) \in R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2)$, in which the $R(\mathbb{Z}_2)$ action is given by
$t \cdot (1,0) = (t,0) \sim (1,0)$, leading to $\mathrm{Coker} (\Delta) \cong (1+t)$ as an $R(\mathbb{Z}_2)$-module.
Thus, we have
\begin{align}
K^{\tau+1}_{\mathbb{Z}_2}(S^1) \cong \overbrace{(1+t)}^{\mathbb{Z}} \ \ ({\rm class\ AIII}).
\label{Eq:1DTwistKGroup_n=1}
\end{align}
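The kernel and cokernel computations for this $\Delta$ can be cross-checked with integer linear algebra; flattening $(f,g) = (a+bt, c+dt)$ into the tuple $(a,b,c,d)$ is our own convention.

```python
from fractions import Fraction

def mul_t(f):                 # t * (a + b*t) = b + a*t, using t^2 = 1
    return (f[1], f[0])

def delta(f, g):              # Delta(f, g) = (f - g, f - t*g), flattened
    tg = mul_t(g)
    return (f[0] - g[0], f[1] - g[1], f[0] - tg[0], f[1] - tg[1])

# (1 + t, 1 + t) lies in the kernel:
ker_gen_ok = delta((1, 1), (1, 1)) == (0, 0, 0, 0)

# Images of the Z-basis {(1,0), (t,0), (0,1), (0,t)} of R(Z_2)^2:
images = [delta((1, 0), (0, 0)), delta((0, 1), (0, 0)),
          delta((0, 0), (1, 0)), delta((0, 0), (0, 1))]

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        p = next((i for i in range(r, len(m)) if m[i][c]), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

rk = rank(images)             # rank 3 => Ker and Coker both have rank 1

# In the cokernel, (t, 0) ~ (1, 0): their difference (t - 1, 0) is Delta(t, 1).
relation_ok = delta((0, 1), (1, 0)) == (-1, 1, 0, 0)
```

Rank $3$ on a rank-$4$ lattice gives $\mathrm{Ker}(\Delta) \cong \mathbb{Z}$ and a rank-$1$ cokernel, and the relation $t \cdot (1,0) \sim (1,0)$ reproduces the $R(\mathbb{Z}_2)$-module structure $(1+t)$ of $\mathrm{Coker}(\Delta)$.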
\subsubsection{Vector bundle representation for $K^{\tau+0}_{\mathbb{Z}_2}(S^1)$}
Here we give the vector bundle representation and the corresponding real space orbital picture.
The generator of the $K$-group $e \in K^{\tau+0}_{\mathbb{Z}_2}(S^1) = (1+t)$ is represented by the
following $\mathbb{Z}_2$ twisted equivariant bundle $E$,
\begin{align}
e= \left[ \left( E = S^1 \times \mathbb{C}^2, \ \ \rho_{\sigma}(k_x,v) = (k_x, U_{\sigma}(k_x) v), \ \ U_{\sigma}(k_x) = \begin{pmatrix}
0 & e^{-i k_x} \\
1 & 0 \\
\end{pmatrix} \right) \right] .
\label{Eq:1DTwistVectE}
\end{align}
By using the Bloch states, $e$ is written as
\begin{align}
e = \left[ \left( \Phi(k_x) = ( \ket{k_x,A}, \ket{k_x,B} ), \ \ \hat U_{\sigma} \Phi(k_x) = \Phi(k_x) U_{\sigma}(k_x) \right) \right] .
\label{Eq:1DTwistBlochE}
\end{align}
By using the real space basis $\ket{R_x,\alpha} =\sum_{k_x \in S^1} \ket{k_x, \alpha} e^{-i k_x R_x}, \ (\alpha = A,B)$, we can write $e$ as
\begin{align}
e = \left[ \left( \Phi(R_x) = ( \ket{R_x,A}, \ket{R_x,B} ), \ \ \hat U_{\sigma} \Phi(R_x) = ( \ket{R_x,B}, \ket{R_x+1,A}) \right) \right] .
\end{align}
Thus, $e$ describes two atoms $\ket{R_x,A}$ and $\ket{R_x,B}$ exchanged under the half translation $\hat U_{\sigma}$,
which is depicted as:
\begin{align}
e= \left[ \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0.5,0)}*{\bigcirc},
!{(-0.5,0)}*{\bigcirc},
!{(1.5,0)}*{\bigcirc},
!{(-1.5,0)}*{\bigcirc},
!{(2.5,0)}*{\bigcirc},
!{(-2.5,0)}*{\bigcirc},
!{(3.5,0)}*{\bigcirc},
!{(-3.5,0)}*{\bigcirc},
!{(4.5,0)}*{\bigcirc},
!{(-4.5,0)}*{\bigcirc},
!{(0.5,-0.4)}*{B},
!{(-0.5,-0.4)}*{A},
!{(1.5,-0.4)}*{A},
!{(-1.5,-0.4)}*{B},
!{(2.5,-0.4)}*{B},
!{(-2.5,-0.4)}*{A},
!{(3.5,-0.4)}*{A},
!{(-3.5,-0.4)}*{B},
!{(4.5,-0.4)}*{B},
!{(-4.5,-0.4)}*{A},
!{(-5,0)}-@{->}!{(5,0)},
!{(0,0.8)}*{\sigma},
!{(-0.4,0.4)}-@/^0.2cm/@{->}!{(0.4,0.4)},
!{(1,0.8)}*{\sigma},
!{(0.6,0.4)}-@/^0.2cm/@{->}!{(1.4,0.4)},
!{(-1,-0.8)}-@{|-|}!{(1,-0.8)},
!{(0,-1)}*{\rm unit\ cell},
}
\ \ \right].
\label{Fig:1DTCI_NS_e1}
\end{align}
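The real space exchange action can be recovered numerically from the Bloch-space action $\hat U_{\sigma} \ket{k_x,A} = \ket{k_x,B}$, $\hat U_{\sigma} \ket{k_x,B} = e^{-ik_x}\ket{k_x,A}$ on a finite lattice; the finite size $N$ and the dictionary encoding of states are our own discretization.

```python
import cmath
import math

N = 8                                  # lattice sites / k points
ks = [2 * math.pi * n / N for n in range(N)]

def real_state(R, alpha):              # |R,alpha> = sum_k |k,alpha> e^{-ikR}
    # amplitudes indexed by (k index, alpha) with alpha in {0: A, 1: B}
    return {(n, a): (cmath.exp(-1j * ks[n] * R) if a == alpha else 0)
            for n in range(N) for a in (0, 1)}

def U_half(state):                     # U|k,A> = |k,B>,  U|k,B> = e^{-ik}|k,A>
    out = {}
    for n in range(N):
        out[(n, 1)] = state[(n, 0)]
        out[(n, 0)] = cmath.exp(-1j * ks[n]) * state[(n, 1)]
    return out

def close(s1, s2):
    return all(abs(s1[key] - s2[key]) < 1e-9 for key in s1)

# U|R,A> = |R,B>  and  U|R,B> = |R+1,A>
exchange_ok = (close(U_half(real_state(2, 0)), real_state(2, 1)) and
               close(U_half(real_state(2, 1)), real_state(3, 0)))
```

The Fourier transform thus reproduces exactly the exchange pattern shown in (\ref{Fig:1DTCI_NS_e1}).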
\subsubsection{Vector bundle representation for $K^{\tau+1}_{\mathbb{Z}_2}(S^1)$}
Here we give a representation of the generator $1_q \in K^{\tau+1}_{\mathbb{Z}_2}(S^1) = (1+t)$ by an automorphism $q : E \to E$,
where $E$ is the $\mathbb{Z}_2$ twisted equivariant bundle introduced in $(\ref{Eq:1DTwistVectE})$.
Because $E = S^1 \times \mathbb{C}^2$ is trivial as a complex vector bundle of rank $2$,
$q : E \to E$ amounts to a function $q : S^1 \to U(2)$ valued in $2 \times 2$ unitary matrices,
and $q(k_x)$ commutes with the half lattice translation symmetry
\begin{align}
U_{\sigma}(k_x) q(k_x) U^{-1}_{\sigma}(k_x) = q(k_x).
\end{align}
We can define the topological invariant $W$ characterizing $q(k_x)$ as
\begin{align}
W := \frac{1}{2 \pi i } \oint_{S^1} \mathrm{tr} [q^{\dag} d q].
\end{align}
The generator model $q(k_x)$ is characterized by $W = 1$.
The simplest model is given by
\begin{align}
q(k_x) =
\begin{pmatrix}
0 & 1 \\
e^{i k_x} & 0
\end{pmatrix}.
\end{align}
Thus, we have a representation of the generator $1_q \in K^{\tau+1}_{\mathbb{Z}_2}(S^1)$ as
\begin{align}
1_q= \left[ \left( q : E \to E, \ \ q(k_x) = \begin{pmatrix}
0 & 1 \\
e^{i k_x} & 0
\end{pmatrix}
\right) \right] .
\end{align}
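The invariant $W$ of this generator model can be evaluated numerically; for a unitary $q(k_x)$ the integrand $\mathrm{tr}[q^\dag dq]$ reduces to the winding of $\det q(k_x)$, and the discretization below is our own.

```python
import cmath
import math

def q(k):                     # generator model from the text, expecting W = 1
    return [[0, 1], [cmath.exp(1j * k), 0]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# W = (1 / 2 pi i) \oint tr[q^dag dq] = winding number of det q(k) around 0
N = 1000
total = 0.0
for n in range(N):
    k0 = 2 * math.pi * n / N
    k1 = 2 * math.pi * (n + 1) / N
    total += cmath.phase(det2(q(k1)) / det2(q(k0)))  # accumulated phase
W = round(total / (2 * math.pi))
```

Here $\det q(k_x) = -e^{ik_x}$ winds once around the origin, so the numerical accumulation of phase gives $W = 1$, as claimed for the generator.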
By using the Bloch state representation for $E$ in (\ref{Eq:1DTwistBlochE}),
$q : E \to E$ is written in the second quantized form
\begin{align}
1_q= \left[ \ \ \hat q = \sum_{k_x \in S^1} \left( \psi^{\dag}_{f, A}(k_x), \psi^{\dag}_{f, B}(k_x) \right) q(k_x) \begin{pmatrix}
\psi_{i,A}(k_x) \\
\psi_{i,B}(k_x) \\
\end{pmatrix} \ \ \right] ,
\end{align}
where $\{ i,f \}$ are auxiliary indices which distinguish between initial and final states.
In the real space basis, $1_q$ can be written as the following hopping model
\begin{align}
1_q= \left[ \ \ \hat q = \sum_{R_x \in \mathbb{Z}} \left( \psi^{\dag}_{f, A}(R_x) \psi_{i,B}(R_x) + \psi^{\dag}_{f, B}(R_x) \psi_{i,A}(R_x+1) \right) \ \ \right] ,
\end{align}
which is depicted as
\begin{align}
1_q= \left[ \ \
\xygraph{
!{<0cm,-0.5cm>;<1cm,-0.5cm>:<0cm,0.5cm>::}
!{(-5.5,0)}*{E_i},
!{(0.5,0)}*{\bigcirc},
!{(-0.5,0)}*{\bigcirc},
!{(1.5,0)}*{\bigcirc},
!{(-1.5,0)}*{\bigcirc},
!{(2.5,0)}*{\bigcirc},
!{(-2.5,0)}*{\bigcirc},
!{(3.5,0)}*{\bigcirc},
!{(-3.5,0)}*{\bigcirc},
!{(4.5,0)}*{\bigcirc},
!{(-4.5,0)}*{\bigcirc},
!{(0.5,-0.4)}*{B},
!{(-0.5,-0.4)}*{A},
!{(1.5,-0.4)}*{A},
!{(-1.5,-0.4)}*{B},
!{(2.5,-0.4)}*{B},
!{(-2.5,-0.4)}*{A},
!{(3.5,-0.4)}*{A},
!{(-3.5,-0.4)}*{B},
!{(4.5,-0.4)}*{B},
!{(-4.5,-0.4)}*{A},
!{(-5,0)}-@{->}!{(5,0)},
!{(-5.5,1)}*{E_f},
!{(0.5,1)}*{\bigcirc},
!{(-0.5,1)}*{\bigcirc},
!{(1.5,1)}*{\bigcirc},
!{(-1.5,1)}*{\bigcirc},
!{(2.5,1)}*{\bigcirc},
!{(-2.5,1)}*{\bigcirc},
!{(3.5,1)}*{\bigcirc},
!{(-3.5,1)}*{\bigcirc},
!{(4.5,1)}*{\bigcirc},
!{(-4.5,1)}*{\bigcirc},
!{(0.5,1.4)}*{B},
!{(-0.5,1.4)}*{A},
!{(1.5,1.4)}*{A},
!{(-1.5,1.4)}*{B},
!{(2.5,1.4)}*{B},
!{(-2.5,1.4)}*{A},
!{(3.5,1.4)}*{A},
!{(-3.5,1.4)}*{B},
!{(4.5,1.4)}*{B},
!{(-4.5,1.4)}*{A},
!{(-5,1)}-@{->}!{(5,1)},
!{(0.4,0.1)}-@{->}!{(-0.4,0.9)},
!{(1.4,0.1)}-@{->}!{(0.6,0.9)},
!{(2.4,0.1)}-@{->}!{(1.6,0.9)},
!{(3.4,0.1)}-@{->}!{(2.6,0.9)},
!{(4.4,0.1)}-@{->}!{(3.6,0.9)},
!{(-0.6,0.1)}-@{->}!{(-1.4,0.9)},
!{(-1.6,0.1)}-@{->}!{(-2.4,0.9)},
!{(-2.6,0.1)}-@{->}!{(-3.4,0.9)},
!{(-3.6,0.1)}-@{->}!{(-4.4,0.9)},
!{(-0.7,0.5)}*{1},
!{(0.3,0.5)}*{1},
}
\ \ \right].
\label{Fig:1DTCI_NS_e1_AIII}
\end{align}
\subsubsection{Hamiltonian representation for $K^{\tau+1}_{\mathbb{Z}_2}(S^1)$}
We give the Hamiltonian representation for $K^{\tau+1}_{\mathbb{Z}_2}(S^1)$.
If an automorphism representation $q : E \to E$ is obtained,
the Hamiltonian $H_q$ with the chiral symmetry $\Gamma H_q + H_q \Gamma = 0$ is given by
\begin{align}
H_q =
\begin{pmatrix}
0 & q^{\dag} \\
q & 0
\end{pmatrix}
\ \ {\rm with\ \ }
\Gamma =
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
\end{align}
In the second quantized form, this means $\hat H_q = \hat q + \hat q^{\dag}$.
\subsection{Glide symmetry}
Let us consider the glide symmetry, a nonsymmorphic wallpaper group symmetry generated by
$\sigma : (x,y) \mapsto (x+\frac{1}{2},-y)$.
The point group is $\mathbb{Z}_2 = \{1,\sigma\}$ which acts on the BZ torus as
\begin{align}
\sigma : (k_x,k_y) \mapsto (k_x,-k_y).
\end{align}
The twist $(\tau_{\sf pg})_{p,p'}(k_x,k_y) \in Z^2(\mathbb{Z}_2;C(T^2,\mathbb{R}/2 \pi \mathbb{Z}))$ of the glide symmetry is given by
\begin{equation}
e^{i (\tau_{\sf pg})_{p,p'}(k_x,k_y)} \qquad = \qquad
\begin{tabular}{c|cc}
$p \backslash p'$ & $1$ & $\sigma$ \\
\hline
$1$ & $1$ & $1$ \\
$\sigma$ & $1$ & $ e^{-i k_x}$ \\
\end{tabular}
\label{Tab:2DTwist}
\end{equation}
Hamiltonians with the glide symmetry are written as
\begin{align}
U_{\sigma}(k_x,k_y) H(k_x,k_y) U^{-1}_{\sigma}(k_x,k_y) = H(k_x,-k_y), &&
U_{\sigma}(k_x,-k_y) U_{\sigma}(k_x,k_y) = e^{- i k_x}.
\end{align}
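A minimal matrix model obeying this glide algebra can be checked directly; the $k_y$-independent $2 \times 2$ choice below is our own illustration, not a model from the text.

```python
import cmath

def U(kx, ky):       # illustrative glide operator (chosen k_y independent)
    return [[0, cmath.exp(-1j * kx)], [1, 0]]

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

# U(kx, -ky) U(kx, ky) = e^{-i kx} * identity at generic momenta
algebra_ok = True
for kx, ky in [(0.3, 1.2), (-1.0, 0.5), (2.2, -2.0)]:
    P = matmul(U(kx, -ky), U(kx, ky))
    ph = cmath.exp(-1j * kx)
    algebra_ok = algebra_ok and all(
        abs(P[i][j] - (ph if i == j else 0)) < 1e-12
        for i in (0, 1) for j in (0, 1))
```

Squaring the glide along the $k_y$-reflected momentum thus reproduces the full lattice translation phase $e^{-ik_x}$, mirroring the 1d half-translation case.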
\subsubsection{Topological classification}
To apply the Mayer-Vietoris sequence to the BZ torus $T^2$,
we divide $T^2$ into two cylinders $U$ and $V$ so that
\begin{align}
U = \left\{(e^{i k_x},e^{i k_y}) \in T^2 | - \frac{\pi}{2} \leq k_x \leq \frac{\pi}{2} \right\}, &&
V = \left\{(e^{i k_x},e^{i k_y}) \in T^2 | \frac{\pi}{2} \leq k_x \leq \frac{3 \pi}{2} \right\}.
\end{align}
The intersection consists of two circles
\begin{align}
U \cap V = \{\pi/2\} \times \tilde S^1 \sqcup \{-\pi/2\} \times \tilde S^1.
\end{align}
$U$ and $V$ are $\mathbb{Z}_2$ equivariantly homotopic to $\tilde S^1$:
\begin{align}
U \sim \{0\} \times \tilde S^1, &&
V \sim \{\pi\} \times \tilde S^1.
\end{align}
Here, we denote the $\mathbb{Z}_2$-space $S^1$ with the reflection symmetry by $\tilde S^1$ as introduced previously in (\ref{Fig:TildeS1}).
In the same way as (\ref{Eq:1DTwistTrivU}) - (\ref{Eq:1DTwistTrivUV}),
the twist on $U, V, U \cap V$ can be trivialized as
\begin{align}
&(\tau_{\sf pg})|_U = \delta \beta^U, && \beta^U_{1}(k_x,k_y) = 1, \ \ \beta^U_{\sigma}(k_x,k_y) = e^{-i k_x/2}, && k_x \in \left[-\frac{\pi}{2},\frac{\pi}{2} \right], \\
&(\tau_{\sf pg})|_V = \delta \beta^V, && \beta^V_{1}(k_x,k_y) = 1, \ \ \beta^V_{\sigma}(k_x,k_y) = e^{-i k_x/2}, && k_x \in \left[\frac{\pi}{2},\frac{3\pi}{2} \right], \\
&(\tau_{\sf pg})|_{U \cap V} = \delta \beta^{U \cap V}, && \beta^{U \cap V}_1( \pm \frac{\pi}{2} ,k_y) = 1, \ \ \beta^{U \cap V}_{\sigma}( \pm \frac{\pi}{2},k_y ) = e^{ \mp i \pi/4 }. &&
\end{align}
By using these trivializations and the $K$-group of $\tilde S^1$ (\ref{Eq:KGroupS1}), we have
\begin{align}
&K_{\mathbb{Z}_2}^{(\tau_{\sf pg})|_U+n}(U)
\overset{\beta^U}{\cong} K_{\mathbb{Z}_2}^{n}(U)
\cong K_{\mathbb{Z}_2}^{n}(\{0\} \times \tilde S^1)
\cong \left\{ \begin{array}{ll}
R(\mathbb{Z}_2) \oplus (1-t) & (n=0) \\
0 & (n=1) \\
\end{array} \right., \\
&K_{\mathbb{Z}_2}^{(\tau_{\sf pg})|_V+n}(V)
\overset{\beta^V}{\cong} K_{\mathbb{Z}_2}^{n}(V)
\cong K_{\mathbb{Z}_2}^{n}(\{\pi\} \times \tilde S^1)
\cong \left\{ \begin{array}{ll}
R(\mathbb{Z}_2) \oplus (1-t) & (n=0) \\
0 & (n=1) \\
\end{array} \right.,
\end{align}
\begin{equation}\begin{split}
K_{\mathbb{Z}_2}^{(\tau_{\sf pg})|_{U \cap V}+n}(U \cap V)
&\overset{\beta^{U \cap V}}{\cong} K_{\mathbb{Z}_2}^{n}( U \cap V)
\cong K_{\mathbb{Z}_2}^{n}( \{\pi/2 \} \times \tilde S^1 \sqcup \{-\pi/2\} \times \tilde S^1) \\
&\cong \left\{ \begin{array}{ll}
( R(\mathbb{Z}_2) \oplus (1-t) ) \oplus ( R(\mathbb{Z}_2) \oplus (1-t) ) & (n=0) \\
0 & (n=1) \\
\end{array} \right..
\end{split}\end{equation}
Thus, the Mayer-Vietoris sequence reads
\begin{equation}
\begin{CD}
0 @<<< 0 @<<< K_{\mathbb{Z}_2}^{\tau_{\sf pg}+1}(T^2) \\
@VVV @. @AAA \\
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+0}(T^2) @>>> K_{\mathbb{Z}_2}^{(\tau_{\sf pg})|_U+0}(U) \oplus K_{\mathbb{Z}_2}^{(\tau_{\sf pg})|_V+0}(V) @>\Delta>> K_{\mathbb{Z}_2}^{(\tau_{\sf pg})|_{U \cap V}+0}(U \cap V).
\end{CD}
\end{equation}
Then, in the same way as (\ref{Eq:1DTwistDelta'})--(\ref{Eq:1DTwistKGroup}),
the $K$-group $K_{\mathbb{Z}_2}^{\tau_{\sf pg}+n}(T^2)$ is given by
\begin{align}
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+0}(T^2) \cong \mathrm{Ker}(\Delta), &&
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+1}(T^2) \cong \mathrm{Coker}(\Delta),
\end{align}
with
\begin{align}
\Delta:
\underbrace{R(\mathbb{Z}_2) \oplus (1-t)}_{\{0\} \times \tilde S^1} \oplus \underbrace{R(\mathbb{Z}_2) \oplus (1-t)}_{\{\pi\} \times \tilde S^1}
&\to
\underbrace{R(\mathbb{Z}_2) \oplus (1-t)}_{\{\pi/2\} \times \tilde S^1} \oplus \underbrace{R(\mathbb{Z}_2) \oplus (1-t)}_{\{-\pi/2\} \times \tilde S^1}, &&
(x,y) \mapsto (x-y, x-t y),
\end{align}
where $\Delta = \alpha_U j_U^* - \alpha_V j_V^*$ with $\alpha_U := \beta^{U \cap V}(\beta^U)^{-1}$ and $\alpha_V := \beta^{U \cap V}(\beta^V)^{-1}$.
Note that $x,y \in R(\mathbb{Z}_2) \oplus (1-t)$ are glued with the twist by $t \in R(\mathbb{Z}_2)$ on the circle $\{-\pi/2\} \times \tilde S^1$.
On the direct summands $R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2)$ and $(1-t) \oplus (1-t)$,
the homomorphism $\Delta$ takes the following forms
\begin{align}
&\Delta|_{R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2)} : R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2) \to R(\mathbb{Z}_2) \oplus R(\mathbb{Z}_2), && \big( f(t), g(t) \big) \mapsto \big( f(t) - g(t), f(t) - t g(t) \big), \\
&\Delta|_{(1-t) \oplus (1-t)}: (1-t)\oplus (1-t) \to (1-t) \oplus (1-t), && \big( n(1-t),m(1-t) \big) \mapsto \big( (n - m)(1-t) , (n + m)(1-t) \big).
\end{align}
Note that $t (1-t) = - (1-t)$.
As a result, we get
\begin{align}
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+0}(T^2) \cong \overbrace{(1+t)}^{\mathbb{Z}}\ \ \ \ \ ({\rm class\ A}), &&
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+1}(T^2) \cong \overbrace{(1+t)}^{\mathbb{Z}} \oplus \overbrace{I}^{\mathbb{Z}_2} \ \ \ \ \ ({\rm class\ AIII}).
\label{Eq:2DPgKGroup}
\end{align}
The overbraces indicate the underlying Abelian groups.
A generator of $I = \mathbb{Z}_2$ is represented by $a = ((1-t),0) \in (1-t) \oplus (1-t)$.
The $R(\mathbb{Z}_2)$ action on $I$ is trivial because $t \cdot ((1-t),0) = (-(1-t),0) \sim ((1-t),0)$.
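The kernel and cokernel of $\Delta$ can be cross-checked by integer Smith reduction, writing $\Delta$ as a $6\times 6$ integer matrix in the $\mathbb{Z}$-basis $\{1,t\}$ of each $R(\mathbb{Z}_2)$ summand and the generator $(1-t)$ of each $(1-t)$ summand (an illustrative script; the basis ordering is our choice):

```python
import numpy as np

def elementary_divisors(rows):
    """Naive integer Smith reduction; returns the nonzero elementary divisors."""
    A = [list(r) for r in rows]
    m, n = len(A), len(A[0])
    divs, t = [], 0
    while t < min(m, n):
        nz = [(abs(A[i][j]), i, j) for i in range(t, m)
              for j in range(t, n) if A[i][j]]
        if not nz:
            break
        _, pi, pj = min(nz)                        # smallest pivot to (t, t)
        A[t], A[pi] = A[pi], A[t]
        for r in A:
            r[t], r[pj] = r[pj], r[t]
        p, dirty = A[t][t], False
        for i in range(t + 1, m):                  # clear column t
            if A[i][t]:
                q = A[i][t] // p
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                dirty |= A[i][t] != 0
        for j in range(t + 1, n):                  # clear row t
            if A[t][j]:
                q = A[t][j] // p
                for i in range(m):
                    A[i][j] -= q * A[i][t]
                dirty |= A[t][j] != 0
        if not dirty:
            divs.append(abs(p))
            t += 1
    return divs

# Delta, columns = images of the source basis vectors (f, g in the two R(Z_2)
# summands; n, m the coefficients of the two (1-t) summands)
M = np.array([
    # f=1 f=t  g=1  g=t   n   m
    [1,   0,  -1,   0,    0,  0],   # coefficient of 1 in x - y
    [0,   1,   0,  -1,    0,  0],   # coefficient of t in x - y
    [1,   0,   0,  -1,    0,  0],   # coefficient of 1 in x - t y
    [0,   1,  -1,   0,    0,  0],   # coefficient of t in x - t y
    [0,   0,   0,   0,    1, -1],   # (1-t) part of x - y
    [0,   0,   0,   0,    1,  1],   # (1-t) part of x - t y
])
divs = elementary_divisors(M.tolist())
rank = len(divs)
assert rank == 5                                  # so Ker(Delta) has rank 6 - 5 = 1
assert sorted(d for d in divs if d > 1) == [2]    # Coker(Delta) = Z + Z_2
assert np.all(M @ np.array([1, 1, 1, 1, 0, 0]) == 0)  # kernel generator (1+t, 1+t)
```

The rank-1 kernel generated by $(1+t,1+t)$ and the cokernel $\mathbb{Z}\oplus\mathbb{Z}_2$ reproduce (\ref{Eq:2DPgKGroup}).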
\subsubsection{Alternative derivation: Gysin sequence}
In the previous subsubsection, we computed the $K$-group on the $2$-dimensional torus directly.
There is an alternative derivation of (\ref{Eq:2DPgKGroup}) using the Gysin sequence, as discussed in Ref.~\onlinecite{ShiozakiSatoGomi2015}.
Here, we briefly describe this method.
Let $\pi:S^1 \times \tilde S^1 \to S^1$ be the projection onto the $k_x$-direction.
The twist $\tau_{\sf pg}$ defined in (\ref{Tab:2DTwist}) arises only from the $k_x$-direction,
which means that the twist $\tau_{\sf pg}$ of the glide symmetry is realized as the pullback
$\tau_{\sf pg}=\pi^* \tau$
of the twist of the half-lattice translation defined in (\ref{Tab:1DTwist}).
Applying the Gysin sequence associated with the reflection
(explained in Appendix \ref{Sec:Gysin}; the relevant isomorphism is (\ref{eq:app_Gysin_reflection})) to
$T^2 = S^1 \times \tilde S^1$,
we have the isomorphism of $R(\mathbb{Z}_2)$-modules
\begin{align}
K^{\tau_{\sf pg}+n}_{\mathbb{Z}_2}(S^1 \times \tilde S^1)
\cong K^{(\tau,0)+n}_{\mathbb{Z}_2}(S^1) \oplus K^{(\tau,w)+n+1}_{\mathbb{Z}_2}(S^1).
\label{Eq:2DPgGysin}
\end{align}
The first direct summand represents just a ``weak'' index, i.e., the contribution from the $k_y$-independent Hamiltonians,
which is already given in (\ref{Eq:1DTwistKGroup_n=0}) and (\ref{Eq:1DTwistKGroup_n=1}) as
\begin{equation}
K^{(\tau,0)+n}_{\mathbb{Z}_2}(S^1) = \left\{ \begin{array}{ll}
(1+t) & (n=0) \\
(1+t) & (n=1) \\
\end{array} \right..
\label{Eq:1DTwistKGroup_pg}
\end{equation}
The second direct summand $K^{(\tau,w)+n+1}_{\mathbb{Z}_2}(S^1)$ is the contribution specific to 2d,
so the problem is recast into the 1d problem $K^{(\tau,w)+n+1}_{\mathbb{Z}_2}(S^1)$.
In the exponent of the $K$-group $K^{(\tau,w)+n}_{\mathbb{Z}_2}(S^1)$,
$c=w$ denotes the ``antisymmetry class'' $c(\sigma)=-1$ introduced in Sec.~\ref{sec:Anti space group},
which is defined for Hamiltonians by
\begin{align}
&{\rm class\ A} \ \ (n=0) :
\left\{ \begin{array}{l}
U_{\sigma}(k_x) H(k_x) U^{-1}_{\sigma}(k_x) = - H(k_x), \\
{[U_{\sigma}(k_x)]}^2 = e^{-i k_x}, \\
\end{array} \right. \\
&{\rm class\ AIII} \ \ (n=1) :
\left\{ \begin{array}{l}
\Gamma H(k_x) \Gamma^{-1} = - H(k_x), \\
U_{\sigma}(k_x) H(k_x) U^{-1}_{\sigma}(k_x) = - H(k_x), \\
{[U_{\sigma}(k_x)]}^2 = e^{-i k_x}, \\
\Gamma U_{\sigma}(k_x) = - U_{\sigma}(k_x) \Gamma.
\end{array} \right.
\end{align}
By the same decomposition $S^1 = U \cup V$ as in (\ref{Eq:1DTwistS1ToUV}) and
the same trivialization of the twist $\tau$ on $U, V, U \cap V$ as in (\ref{Eq:1DTwistTrivU})--(\ref{Eq:1DTwistTrivUV}),
we can show the following
\begin{align}
&K_{\mathbb{Z}_2}^{(\tau|_U,w)+n}(U)
\overset{\beta^U}{\cong} K_{\mathbb{Z}_2}^{(0,w)+n}(U)
\cong K_{\mathbb{Z}_2}^{(0,w)+n}(pt)
\cong \left\{ \begin{array}{ll}
0 & (n=0) \\
(1-t) & (n=1) \\
\end{array} \right., \\
&K_{\mathbb{Z}_2}^{(\tau|_V,w)+n}(V)
\overset{\beta^V}{\cong} K_{\mathbb{Z}_2}^{(0,w)+n}(V)
\cong K_{\mathbb{Z}_2}^{(0,w)+n}(pt)
\cong \left\{ \begin{array}{ll}
0 & (n=0) \\
(1-t) & (n=1) \\
\end{array} \right., \\
&K_{\mathbb{Z}_2}^{(\tau|_{U \cap V},w)+n}(U \cap V)
\overset{\beta^{U \cap V}}{\cong} K_{\mathbb{Z}_2}^{(0,w)+n}( U \cap V)
\cong K_{\mathbb{Z}_2}^{(0,w)+n}( \{\pi/2 \} \sqcup \{-\pi/2\})
\cong \left\{ \begin{array}{ll}
0 & (n=0) \\
(1-t) \oplus (1-t) & (n=1) \\
\end{array} \right..
\end{align}
Here, the $K$-group of the point $K^{(0,w)+n}_{\mathbb{Z}_2}(pt)$ is given as follows.
For $n=0$, the symmetry restricted to the point is the same as the chiral symmetry $U_{\sigma}(pt) H(pt) U_{\sigma}^{-1}(pt) = - H(pt)$,
leading to $K^{(0,w)+0}_{\mathbb{Z}_2}(pt) = 0$.
For $n=1$, owing to the two chiral symmetries $U_{\sigma}(pt)$ and $\Gamma$, which anti-commute with each other,
a symmetry-preserving Hamiltonian takes the form $H(pt) = \widetilde H(pt) \otimes (i U_{\sigma}(pt) \Gamma)$
with no symmetry imposed on $\widetilde H(pt)$.
(Here we assume $\Gamma^2 = U^2_{\sigma}(pt) = 1$.)
Thus, the symmetry class is the same as class A and we find $K^{(0,w)+1}_{\mathbb{Z}_2}(pt) = \mathbb{Z}$ as an Abelian group.
The $R(\mathbb{Z}_2)$-module structure is given by Karoubi's quadruplet representation.
A generator of $K^{(0,w)+1}_{\mathbb{Z}_2}(pt) = \mathbb{Z}$ is represented by
\begin{align}
e = \left[ \left(\mathbb{C}_0 \oplus \mathbb{C}_1, \Gamma = \sigma_x, H_0(pt) = \sigma_y, H_1(pt) = - \sigma_y \right) \right]
\end{align}
where $\sigma_i$ $(i=x,y,z)$ are the Pauli matrices, and
$\mathbb{C}_0$ and $\mathbb{C}_1$ are 1-dimensional irreps with eigenvalues $U_{\sigma}(pt) = 1$ and $-1$, respectively.
The $t \in R(\mathbb{Z}_2)$ action is
\begin{equation}\begin{split}
t \cdot e
&= \left[ \left(\mathbb{C}_1 \oplus \mathbb{C}_0, \Gamma = \sigma_x, H_0(pt) = \sigma_y, H_1(pt) = - \sigma_y \right) \right] \\
&= \left[ \left(\mathbb{C}_0 \oplus \mathbb{C}_1, \Gamma = \sigma_x, H_0(pt) = -\sigma_y, H_1(pt) = \sigma_y \right) \right]
= -e,
\end{split}\end{equation}
which leads to $K^{(0,w)+1}_{\mathbb{Z}_2}(pt) \cong (1-t)$.
The Mayer-Vietoris sequence for $S^1 = U \cup V$ is given by
\begin{equation}
\begin{CD}
K_{\mathbb{Z}_2}^{(\tau|_{U \cap V},w)+1}(U \cap V) @<\Delta'<< K_{\mathbb{Z}_2}^{(\tau|_U,w)+1}(U) \oplus K_{\mathbb{Z}_2}^{(\tau|_V,w)+1}(V) @<<< K_{\mathbb{Z}_2}^{(\tau,w)+1}(S^1) \\
@VVV @. @AAA \\
K_{\mathbb{Z}_2}^{(\tau,w)+0}(S^1) @>>> 0 @>>> 0.
\end{CD}
\end{equation}
We have
\begin{align}
K_{\mathbb{Z}_2}^{(\tau,w)+1}(S^1)
\cong \mathrm{Ker}(\Delta') , &&
K_{\mathbb{Z}_2}^{(\tau,w)+0}(S^1)
\cong \mathrm{Coker}(\Delta') ,
\end{align}
where $\Delta' = \alpha_U j^*_U - \alpha_V j_V^* : K_{\mathbb{Z}_2}^{(0,w)+1}(U) \oplus K_{\mathbb{Z}_2}^{(0,w)+1}(V) \to K_{\mathbb{Z}_2}^{(0,w)+1}(U \cap V)$ is
\begin{align}
\Delta' : (1-t) \oplus (1-t) \to (1-t) \oplus (1-t), &&
\left( n(1-t), m(1-t) \right) \mapsto \left( (n-m)(1-t), (n+m)(1-t) \right).
\end{align}
As a result, we get
\begin{align}
K_{\mathbb{Z}_2}^{(\tau,w)+1}(S^1) = 0, &&
K_{\mathbb{Z}_2}^{(\tau,w)+0}(S^1) \cong \overbrace{I}^{\mathbb{Z}_2},
\label{Eq:1DTwist_w_KGroup}
\end{align}
where $R(\mathbb{Z}_2)$ trivially acts on $I = \mathbb{Z}_2$.
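Concretely, $\Delta'$ is represented by an integer matrix of determinant $\pm 2$, so it is injective with an index-2 image (a one-line check):

```python
import numpy as np

# Delta' on (1-t) + (1-t) in the Z-basis {(1-t, 0), (0, 1-t)}
D = np.array([[1, -1],
              [1, 1]])          # (n, m) -> (n - m, n + m)
d = round(abs(np.linalg.det(D)))
assert d == 2                   # injective with image of index 2:
                                # Ker(Delta') = 0 and Coker(Delta') = Z_2
```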
Combining (\ref{Eq:2DPgGysin}) with (\ref{Eq:1DTwistKGroup_pg}) and (\ref{Eq:1DTwist_w_KGroup}),
we recover the $K$-group (\ref{Eq:2DPgKGroup}) for 2d TCIs with the glide symmetry
\begin{align}
&K^{\tau_{\sf pg}+0}_{\mathbb{Z}_2}(S^1 \times \tilde S^1)
\cong K^{(\tau,0)+0}_{\mathbb{Z}_2}(S^1) \oplus K_{\mathbb{Z}_2}^{(\tau,w)+1}(S^1)
\cong \overbrace{(1+t)}^{\mathbb{Z}}, \label{Eq:GysinGlide2D_A} \\
&K^{\tau_{\sf pg}+1}_{\mathbb{Z}_2}(S^1 \times \tilde S^1)
\cong K^{(\tau,0)+1}_{\mathbb{Z}_2}(S^1) \oplus K_{\mathbb{Z}_2}^{(\tau,w)+0}(S^1)
\cong \overbrace{(1+t)}^{\mathbb{Z}} \oplus \overbrace{I}^{\mathbb{Z}_2}. \label{Eq:2DAIIIGlideGysin}
\end{align}
\subsubsection{Model and topological invariant}
Model vector bundles/Hamiltonians representing $K^{\tau_{\sf pg}+n}_{\mathbb{Z}_2}(T^2)$ are as follows.
Eqs.\ (\ref{Eq:GysinGlide2D_A}) and (\ref{Eq:2DAIIIGlideGysin})
imply that the free parts $(1+t)$ of $K$-groups $K^{\tau_{\sf pg}+n}_{\mathbb{Z}_2}(T^2)$, $(n=0,1)$, arise from 1d models
which were already introduced in (\ref{Fig:1DTCI_NS_e1}) and (\ref{Fig:1DTCI_NS_e1_AIII}).
The generating Hamiltonian of the $\mathbb{Z}_2$ part $I$ in $K^{\tau_{\sf pg}+1}_{\mathbb{Z}_2}(T^2)$
is given by the dimensional raising map from the $K$-group $K^{(\tau,w)+0}_{\mathbb{Z}_2}(S^1)$.
As shown in Ref.~\onlinecite{ShiozakiSatoGomi2015},
the Karoubi triple for the generator of $K^{(\tau,w)+0}_{\mathbb{Z}_2}(S^1)$ is given as
\begin{align}
\Big[ E = S^1 \times \mathbb{C}^2, U(k_x) = \begin{pmatrix}
0 & e^{-i k_x} \\
1 & 0 \\
\end{pmatrix}_{\mu}, H_1 =
\begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix}_{\mu}, H_{0} =
\begin{pmatrix}
-1 & 0 \\
0 & 1 \\
\end{pmatrix}_{\mu} \Big],
\end{align}
where $E$ is the $\mathbb{Z}_2$ twisted equivariant bundle defined in (\ref{Eq:1DTwistVectE})
and the subscript $\mu$ represents the two localized positions inside the unit cell.
Then, the dimensional raising map (\ref{eq:dimensional_raise_nonchiral}) leads us to
the Hamiltonian in class AIII with glide symmetry
\begin{align}
\widetilde H(k_x,k_y)
= \cos k_y \mu_z \otimes \sigma_z + \sin k_y \mu_0 \otimes \sigma_x, \qquad
\widetilde \Gamma = \mu_0 \otimes \sigma_y, \qquad
\widetilde U(k_x)= U(k_x) \otimes \sigma_y.
\label{eq:mode_2d_aiii_glide_z2}
\end{align}
Here, we introduced the Pauli matrices $\mu_{a} (a=0,x,y,z)$ for the $\mu$ space.
Notice that $\widetilde U(k_x)$ acts on the Hamiltonian $\widetilde H(k_x,k_y)$ as a glide symmetry
which commutes with the chiral symmetry
\begin{align}
\widetilde U(k_x) \widetilde H(k_x,k_y) \widetilde U(k_x)^{-1}
= \widetilde H(k_x,-k_y), \qquad
\widetilde U(k_x)^2 = e^{-i k_x}, \qquad
[\widetilde \Gamma, \widetilde U(k_x)]=0.
\end{align}
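All of these relations can be verified numerically for the model (\ref{eq:mode_2d_aiii_glide_z2}) with explicit Pauli matrices (an illustrative check):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def H(kx, ky):
    # cos(ky) mu_z x sigma_z + sin(ky) mu_0 x sigma_x (kx-independent)
    return np.cos(ky) * np.kron(sz, sz) + np.sin(ky) * np.kron(s0, sx)

def U(kx):
    u = np.array([[0, np.exp(-1j * kx)], [1, 0]])  # half-translation twist U(kx)
    return np.kron(u, sy)                          # tilde U = U(kx) x sigma_y

Gamma = np.kron(s0, sy)                            # tilde Gamma = mu_0 x sigma_y

kx, ky = 0.61, -1.13                               # arbitrary momenta
Ut = U(kx)
assert np.allclose(Ut @ H(kx, ky) @ Ut.conj().T, H(kx, -ky))  # glide symmetry
assert np.allclose(Ut @ Ut, np.exp(-1j * kx) * np.eye(4))     # tilde U^2 = e^{-i kx}
assert np.allclose(Gamma @ H(kx, ky), -H(kx, ky) @ Gamma)     # chiral symmetry
assert np.allclose(Gamma @ Ut, Ut @ Gamma)                    # [tilde Gamma, tilde U] = 0
```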
The $\mathbb{Z}_2$ invariant is defined as follows;
it is the 2d analog of the $\mathbb{Z}_2$ invariant of 3d class A insulators with glide symmetry.~\cite{FangFu2015, ShiozakiSatoGomi2015}
Due to the chiral symmetry,
the flattened Hamiltonian takes the form ${\rm sign}[H(k_x,k_y)] = \begin{pmatrix}
0 & q(k_x,k_y) \\
q^{\dag}(k_x,k_y) & 0 \\
\end{pmatrix}$, where $q(k_x,k_y)$ is a unitary matrix in the basis in which $\Gamma = \begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}$.
On the glide-invariant lines $k_y = \Gamma_y$ with $\Gamma_y = 0$ or $\pi$,
the Hamiltonian decomposes into the glide sectors $U(k_x) = \pm e^{-i k_x/2}$.
Let $q_{\pm}(k_x,\Gamma_y)$ be the Hamiltonian of the glide sector with $U(k_x) = \pm e^{-i k_x/2}$.
Since the two glide sectors are glued at the boundary,
these Hamiltonians are connected at the BZ boundary, $q_{\pm}(\pi,\Gamma_y) = q_{\mp}(-\pi,\Gamma_y)$.
We define the $\mathbb{Z}_2$ invariant $\nu \in \{0,1/2\}$ by
\begin{equation}\begin{split}
\nu
:= &
\frac{1}{2 \pi i} \Big[
\ln \det q_+(-\pi,0) + \frac{1}{2} \int_{-\pi}^{\pi} dk_x\, \partial_{k_x} \ln \det q_+(k_x,0)
\Big] \\
&-\frac{1}{2 \pi i} \Big[
\ln \det q_+(-\pi,\pi) + \frac{1}{2} \int_{-\pi}^{\pi} dk_x\, \partial_{k_x} \ln \det q_+(k_x,\pi)
\Big] \\
&+ \frac{1}{2} \cdot \frac{1}{2 \pi i} \int_0^{\pi} dk_y\, \partial_{k_y} \ln \det q(-\pi,k_y) \qquad ({\rm mod}\ 1).
\end{split}\end{equation}
Using Stokes' theorem, it is easy to show that $2 \nu = 0\ ({\rm mod}\ 1)$, i.e.\
$\nu$ is quantized to $\mathbb{Z}_2$ values.
One can check that the Hamiltonian (\ref{eq:mode_2d_aiii_glide_z2}) has $\nu = 1/2$.
\subsubsection{3d TCIs with glide symmetry}
Applying the dimensional isomorphism (\ref{Eq:DimShift_general})
to the 2d $K$-groups (\ref{Eq:2DPgKGroup}) leads to the
topological classification of 3d class A and AIII insulators with glide symmetry
\begin{align}
&{\rm 3d\ class\ A\ bulk}: &&
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+0}(T^3) \cong \underbrace{\overbrace{(1+t)}^{\mathbb{Z}}}_{k_x} \oplus \underbrace{\overbrace{(1+t)}^{\mathbb{Z}}}_{(k_x,k_z)} \oplus \underbrace{\overbrace{I}^{\mathbb{Z}_2}}_{(k_x,k_y,k_z)},
\label{Eq:3DPgKGroup_A} \\
&{\rm 3d\ class\ AIII\ bulk}: &&
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+1}(T^3) \cong \underbrace{\overbrace{(1+t)}^{\mathbb{Z}}}_{k_x} \oplus \underbrace{\overbrace{I}^{\mathbb{Z}_2}}_{(k_x,k_y)} \oplus \underbrace{\overbrace{(1+t)}^{\mathbb{Z}}}_{(k_x,k_z)}.
\label{Eq:3DPgKGroup_AIII}
\end{align}
Here, the underbraces indicate the minimum dimensions
required for realizing generators.
For example, $(k_x,k_z)$ means that a generator model is
adiabatically connected to
a stacking model along the $x$ and $z$-directions.
It is clear that the so-called ``strong index'' appears only in the last $\mathbb{Z}_2$ group in (\ref{Eq:3DPgKGroup_A}).
This $\mathbb{Z}_2$ phase in 3d class A insulators with glide symmetry
was already described in Refs.~\onlinecite{FangFu2015, ShiozakiSatoGomi2015}, so we do not repeat it here.
\subsubsection{2d surface states with glide symmetry}
As explained in Sec.~\ref{sec:Classification of boundary gapless states},
the topological classification of
boundary gapless states is given by the
$K$-group with the shift of the integer grading $-n \mapsto -(n-1)$.
Hence, the results (\ref{Eq:2DPgKGroup}) imply the
classification of surface states:
\begin{align}
&{\rm 2d\ class\ A\ surface\ gapless\ states}: &&
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+1}(T^2) \cong \underbrace{\overbrace{(1+t)}^{\mathbb{Z}}}_{k_x} \oplus \underbrace{\overbrace{I}^{\mathbb{Z}_2}}_{(k_x,k_y)}
\label{Eq:2DPgKGroup_A_surface} \\
&{\rm 2d\ class\ AIII\ surface\ gapless\ states}: &&
K_{\mathbb{Z}_2}^{\tau_{\sf pg}+0}(T^2) \cong \underbrace{\overbrace{(1+t)}^{\mathbb{Z}}}_{k_x}.
\label{Eq:2DPgKGroup_AIII_surface}
\end{align}
The meaning of the underbraces is similar to that in
(\ref{Eq:3DPgKGroup_A}) and (\ref{Eq:3DPgKGroup_AIII}),
indicating the momentum dependence of the spectrum.
Comparing (\ref{Eq:2DPgKGroup_A_surface}) and (\ref{Eq:2DPgKGroup_AIII_surface}) with
(\ref{Eq:3DPgKGroup_A}) and (\ref{Eq:3DPgKGroup_AIII}), respectively,
one can see that the bulk-boundary correspondence holds.
\subsection{$C_4$ rotation symmetry}
In this section, we present a $K$-theory computation
for the TCIs with $C_4$ symmetry in two dimensions for classes A and AIII.
The BZ is a square.
The point group $C_4 = \mathbb{Z}_4 = \{1,c_4,c_2=c_4^2,c_4^3\}$ acts on $T^2$ by $ c_4: (k_x,k_y) \mapsto (-k_y,k_x).$
There are two fixed points: $\Gamma = (0,0)$ and $M = (\pi,\pi)$.
$X = (\pi,0)$ is a fixed point of the subgroup $C_2 = \mathbb{Z}_2 = \{1,c_2\} \subset \mathbb{Z}_4$.
The representation rings of the $\mathbb{Z}_4$ and $\mathbb{Z}_2$ groups are given by
\begin{align}
R(\mathbb{Z}_4) = \mathbb{Z}[t]/(1-t^4), \qquad
R(\mathbb{Z}_2) = \mathbb{Z}[s]/(1-s^2).
\end{align}
$R(\mathbb{Z}_4)$ acts on $R(\mathbb{Z}_2)$ by the restriction of representations of $\mathbb{Z}_4$: $t |_{\mathbb{Z}_2} = s$,
which means $R(\mathbb{Z}_2)$ is $(1+t^2) = \{f(t) (1+t^2) | f(t) \in R(\mathbb{Z}_4) \}$ as an $R(\mathbb{Z}_4)$-module.
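This isomorphism amounts to the fact that $t$ swaps the $\mathbb{Z}$-basis $\{1+t^2,\ t+t^3\}$ of the ideal, just as $s$ swaps the basis $\{1,s\}$ of $R(\mathbb{Z}_2)$ (a minimal check with representations encoded as coefficient vectors):

```python
import numpy as np

def tmul(p):  # multiplication by t in R(Z_4) = Z[t]/(1 - t^4), coefficient vectors
    return np.roll(p, 1)

# Z-basis of the ideal (1 + t^2) in R(Z_4): {1 + t^2, t + t^3}
b0 = np.array([1, 0, 1, 0])
b1 = np.array([0, 1, 0, 1])
assert np.array_equal(tmul(b0), b1)
assert np.array_equal(tmul(b1), b0)
# t exchanges the two basis vectors, exactly as s exchanges {1, s} in R(Z_2):
# the ideal (1 + t^2) is isomorphic to R(Z_2) as an R(Z_4)-module
```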
\subsubsection{Topological classification}
To compute the $K$-group $K^{n}_{\mathbb{Z}_4}(T^2)$,
we introduce a subspace $Y \subset T^2$ as follows:
$$
Y \ \ = \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,1)}="a",
!{(-1,1)}*+{\bullet}="b" ,
!{(-1,0)}*+{\bullet}="e",
!{(-1,-1)}*+{\bullet}="c"([]!{+(0,-0.4)} {(0,0)}),
!{(0,-1)}*+{\bullet}="f" ,
!{(1,-1)}*+{\bullet}="d" ,
"a"-@{.}"b",
"d"-@{.}"a",
"b"-"e",
"e"-"c",
"c"-"f",
"f"-"d",
}
\ \ = \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*+{\bullet}="a"([]!{+(0,-1.0)} {(0,0)}),
!{(1.5,0)}*+{\bullet}="b"([]!{+(0.5,0)} {(\pi,0)}),
!{(-1.5,0)}*+{\bullet}="c"([]!{+(-0.5,0)} {(0,\pi)}),
!{(0,-0.7)}-@{->}!{(0,-0.3)},
"a" -@/^1cm/ "b",
"a" -@/_1cm/ "b",
"a" -@/^1cm/ "c",
"a" -@/_1cm/ "c",
}
$$
Let us compute the $K$-group on $Y$.
We can decompose $Y = U \cup V$ into two parts which are $\mathbb{Z}_4$-equivariantly homotopic to points
\begin{equation}
U \sim (\mathbb{Z}_4/\mathbb{Z}_4) \times pt = \{ \Gamma \}, \qquad
V \sim (\mathbb{Z}_4/\mathbb{Z}_2) \times pt = \{ X, c_4 X \}.
\end{equation}
The intersection is
\begin{equation}
U \cap V \sim \mathbb{Z}_4 \times pt.
\end{equation}
The Mayer-Vietoris sequence for $Y = U \cup V$ is
\begin{equation}\begin{split}
\begin{CD}
0 @<<< 0 @<<< K_{\mathbb{Z}_4}^{1}(Y) \\
@VVV @. @AAA \\
K_{\mathbb{Z}_4}^{0}(Y) @>>> \underbrace{R(\mathbb{Z}_4)}_{\Gamma} \oplus \underbrace{R(\mathbb{Z}_2)}_{X} @>\Delta>> \mathbb{Z},
\end{CD}
\end{split}\end{equation}
where $\Delta$ is given by
\begin{equation}
\Delta : (f(t), g(s) ) \mapsto f(1)-g(1).
\end{equation}
A basis of $\mathrm{Ker} (\Delta)$ can be chosen as
\begin{equation}
\{ \underbrace{(1,1), (t,s), (t^2,1), (t^3,s)}_{R(\mathbb{Z}_4)}, \underbrace{(0,1-s)}_{(1-t+t^2-t^3)} \} \subset R(\mathbb{Z}_4) \oplus R(\mathbb{Z}_2).
\end{equation}
The first four basis elements span a submodule isomorphic to $R(\mathbb{Z}_4)$,
and the last basis element generates the $R(\mathbb{Z}_4)$-module $(1-t+t^2-t^3)= \{f(t)(1-t+t^2-t^3)\,|\,f(t) \in R(\mathbb{Z}_4) \}$.
We have
\begin{align}
K_{\mathbb{Z}_4}^{0}(Y) \cong \mathrm{Ker} ( \Delta ) \cong \overbrace{R(\mathbb{Z}_4)}^{\mathbb{Z}^4} \oplus \overbrace{(1-t+t^2-t^3)}^{\mathbb{Z}}, \qquad
K_{\mathbb{Z}_4}^{1}(Y) \cong 0.
\end{align}
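In the $\mathbb{Z}$-basis $\{1,t,t^2,t^3\}$ of $R(\mathbb{Z}_4)$ at $\Gamma$ and $\{1,s\}$ of $R(\mathbb{Z}_2)$ at $X$, one can verify numerically that the five vectors above form a $\mathbb{Z}$-basis of $\mathrm{Ker}(\Delta)$ (an illustrative check):

```python
import numpy as np
from math import gcd
from functools import reduce
from itertools import combinations

basis = np.array([
    [1, 0, 0, 0, 1, 0],   # (1, 1)
    [0, 1, 0, 0, 0, 1],   # (t, s)
    [0, 0, 1, 0, 1, 0],   # (t^2, 1)
    [0, 0, 0, 1, 0, 1],   # (t^3, s)
    [0, 0, 0, 0, 1, -1],  # (0, 1 - s)
])
delta = np.array([1, 1, 1, 1, -1, -1])   # Delta(f, g) = f(1) - g(1)

assert np.all(basis @ delta == 0)        # every vector lies in Ker(Delta)
assert np.linalg.matrix_rank(basis) == 5 # and Ker(Delta) has rank 6 - 1 = 5
# the five vectors span a primitive sublattice (gcd of maximal minors is 1),
# hence they form a Z-basis of Ker(Delta)
minors = [round(abs(np.linalg.det(basis[:, c].astype(float))))
          for c in combinations(range(6), 5)]
assert reduce(gcd, minors) == 1
```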
Next, we ``fill in'' the BZ torus $T^2$ with wave functions from $Y$.
To this end, we use the exact sequence for the pair $(T^2, Y)$:
\begin{equation}\begin{split}
\begin{CD}
K^1_{\mathbb{Z}_4}(Y) @<<< K^1_{\mathbb{Z}_4}(T^2) @<<< K^1_{\mathbb{Z}_4}(T^2,Y) \\
@VVV @. @AAA \\
K^0_{\mathbb{Z}_4}(T^2,Y) @>>> K^0_{\mathbb{Z}_4}(T^2) @>>> K^0_{\mathbb{Z}_4}(Y).
\end{CD}
\end{split}\label{Seq:Pair_p4_T^2Y}\end{equation}
The $K$-group of the pair $(T^2,Y)$ is given as follows:
The quotient $T^2/Y$ can be identified with the sphere $D(\mathbb{C}_1)/S(\mathbb{C}_1)$
obtained by shrinking the boundary circle $S(\mathbb{C}_1)$ of the disc $D(\mathbb{C}_1)$.
Here, $\mathbb{C}_1$ is the 1-dimensional complex representation of $\mathbb{Z}_4$,
say, the generator $c_4 \in \mathbb{Z}_4$ acts on $\mathbb{C}$ by $c_4 \cdot z = i z$,
and $\mathbb{Z}_4$ naturally acts on $D(\mathbb{C}_1)$, $S(\mathbb{C}_1)$ and $D(\mathbb{C}_1)/S(\mathbb{C}_1)$.
Then, the Thom isomorphism for the $\mathbb{Z}_4$-equivariant complex vector bundle $\mathbb{C}_1 \to pt$ states (see Appendix \ref{app:Thom})
\begin{align}
K^n_{\mathbb{Z}_4}(T^2,Y)
\cong \widetilde K^n_{\mathbb{Z}_4}(T^2/Y)
\cong \widetilde K^n_{\mathbb{Z}_4}(D(\mathbb{C}_1)/S(\mathbb{C}_1))
\cong K^n_{\mathbb{Z}_4}(D(\mathbb{C}_1),S(\mathbb{C}_1))
\cong K^n_{\mathbb{Z}_4}(pt).
\end{align}
Then, the sequence (\ref{Seq:Pair_p4_T^2Y}) is recast into
\begin{align}
\begin{CD}
0 @<<< K^1_{\mathbb{Z}_4}(T^2) @<<< 0 \\
@VVV @. @AAA \\
R(\mathbb{Z}_4) @>>> K^0_{\mathbb{Z}_4}(T^2) @>i^*>> R(\mathbb{Z}_4) \oplus (1-t+t^2-t^3) \\
\end{CD}
\end{align}
Since the contribution $K^0_{\mathbb{Z}_4}(\{\Gamma\}) = R(\mathbb{Z}_4) \subset K^0_{\mathbb{Z}_4}(T^2) = K^0_{\mathbb{Z}_4}(\{\Gamma\}) \oplus \widetilde{K}^0_{\mathbb{Z}_4}(T^2)$ from the fixed point $\Gamma$ is identically mapped by $i^*$,
we get the exact sequence for the reduced $K$-theory
\begin{align}
0 \to R(\mathbb{Z}_4) \to \widetilde{K}^0_{\mathbb{Z}_4}(T^2) \to (1-t+t^2-t^3) \to 0.
\end{align}
One can show that the extension of $(1-t+t^2-t^3)$ by $R(\mathbb{Z}_4)$ is unique.
(See Appendix \ref{app:ext} for details.)
We thus get the reduced $K$-group
\begin{align}
\widetilde{K}^0_{\mathbb{Z}_4}(T^2) \cong R(\mathbb{Z}_4) \oplus (1-t+t^2-t^3)
\label{eq:K0_p4_T2}
\end{align}
and the $K$-group
\begin{align}
K^0_{\mathbb{Z}_4}(T^2) \cong \overbrace{R(\mathbb{Z}_4)}^{\mathbb{Z}^4} \oplus \overbrace{R(\mathbb{Z}_4)}^{\mathbb{Z}^4} \oplus \overbrace{(1-t+t^2-t^3)}^{\mathbb{Z}}, \qquad
K^1_{\mathbb{Z}_4}(T^2) = 0.
\label{Eq:P4_KGroup_Modu}
\end{align}
\subsubsection{Models of $K^0_{\mathbb{Z}_4}(T^2)$}
In this subsection, we give generating models of the $K$-group $K^0_{\mathbb{Z}_4}(T^2)$, the 2d TCIs with $C_4$ symmetry.
Through the ``lens'' of topological invariants, one can reconstruct the $R(\mathbb{Z}_4)$-module structure (\ref{Eq:P4_KGroup_Modu}).
As mentioned, the BZ is a square.
$\Gamma=(0,0)$ and $M = (\pi,\pi)$ are the fixed points of the $C_4$ group,
and $X = (\pi,0)$ is fixed under the subgroup $C_2 = \mathbb{Z}_2$:
\begin{align}
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0)}*+{\bullet} ([]!{+(-0.2,-0.2)} {\Gamma}),
!{(1,0)}*+{\bullet} ([]!{+(+0.2,-0.2)} {X}),
!{(1,1)}*+{\bullet} ([]!{+(+0.2,+0.2)} {M}),
!{(0,1)},
!{(1,1)}="a" ,
!{(-1,1)}="b",
!{(-1,-1)}="c",
!{(1,-1)}="d",
"a"-"b",
"b"-"c",
"c"-"d",
"d"-"a",
}
\end{align}
In general, parts of the $K$-group of class A can be represented by
vector bundles realized as atomic insulators,
obtained by placing a representation of the site-symmetry group at a Wyckoff position inside the unit cell.
There are two Wyckoff positions (a) and (b) whose filling number is one:
\begin{align}
\left( E_a = T^2 \times \mathbb{C}, \ \ \rho_{c_4}(\bm{k},v) = (c_4 \bm{k}, v) \right)
\ \ \leftrightarrow \ \ \xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(0,0)}*+{\bigcirc},([]!{+(0,-0.3)}{\ket{s}}),
!{(0.5,0.5)}-!{(-0.5,0.5)},
!{(-0.5,0.5)}-!{(-0.5,-0.5)},
!{(-0.5,-0.5)}-!{(0.5,-0.5)},
!{(0.5,-0.5)}-!{(0.5,0.5)},
}
\end{align}
\begin{align}
\left( E_b = T^2 \times \mathbb{C}, \ \ \rho_{c_4}(\bm{k},v) = (c_4 \bm{k}, e^{-i k_y} v) \right)
\ \ \leftrightarrow \ \ \xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(0.5,0.5)}*+{\bigcirc}([]!{+(0.3,0)}{\ket{s}}),
!{(0.5,0.5)}-!{(-0.5,0.5)},
!{(-0.5,0.5)}-!{(-0.5,-0.5)},
!{(-0.5,-0.5)}-!{(0.5,-0.5)},
!{(0.5,-0.5)}-!{(0.5,0.5)},
}
\end{align}
In the above, the corresponding $\mathbb{Z}_4$-equivariant line bundles are denoted by
$E_a$ and $E_b$.
The solid squares in the figures represent the unit cells.
The $C_4$ action on $E_b$ is determined by
the $C_4$ action on the real space basis $\hat U_{c_4} \ket{(R_x,R_y),s} = \ket{(-R_y-1, R_x),s}$.
Here we put the $s$-orbitals at the Wyckoff positions.
Other representations of $\mathbb{Z}_4$ are obtained by tensor products of elements of $R(\mathbb{Z}_4)$.
We have another generator $E_c$ of rank 2 that is realized by
putting $s$-orbitals at the centers of edges of the square:
\begin{align}
\left( E_c = T^2 \times \mathbb{C}^2, \ \ \rho_{c_4}(\bm{k},v) = (c_4 \bm{k}, \begin{pmatrix}
0 & e^{-i k_y} \\
1 & 0 \\
\end{pmatrix} v) \right)
\ \ \leftrightarrow \ \ \xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(0.5,0)}*+{\bigcirc}([]!{+(0.3,0)}{\ket{s}}),
!{(0,0.5)}*+{\bigcirc}([]!{+(0,0.3)}{\ket{s}}),
!{(0.5,0.5)}-!{(-0.5,0.5)},
!{(-0.5,0.5)}-!{(-0.5,-0.5)},
!{(-0.5,-0.5)}-!{(0.5,-0.5)},
!{(0.5,-0.5)}-!{(0.5,0.5)},
}
\end{align}
All other atomic vector bundles
can be obtained from $E_a$, $E_b$, and $E_c$ by direct sums and
tensor products with representations of $\mathbb{Z}_4$.
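The twists of $E_b$ and $E_c$ satisfy the $\mathbb{Z}_4$-equivariance cocycle condition $\rho_{c_4}(c_4^3\bm{k})\,\rho_{c_4}(c_4^2\bm{k})\,\rho_{c_4}(c_4\bm{k})\,\rho_{c_4}(\bm{k})=1$ required by $c_4^4 = 1$, which can be checked numerically (an illustrative script):

```python
import numpy as np

c4 = lambda k: np.array([-k[1], k[0]])   # c4 . (kx, ky) = (-ky, kx)

def rho_Eb(k):  # E_b: scalar C4 twist e^{-i k_y}
    return np.array([[np.exp(-1j * k[1])]])

def rho_Ec(k):  # E_c: rank-2 twist permuting the two edge-center orbitals
    return np.array([[0, np.exp(-1j * k[1])], [1, 0]])

k = np.array([0.37, -0.91])  # arbitrary momentum
for rho in (rho_Eb, rho_Ec):
    # equivariance cocycle: rho(c4^3 k) rho(c4^2 k) rho(c4 k) rho(k) = 1
    prod = rho(c4(c4(c4(k)))) @ rho(c4(c4(k))) @ rho(c4(k)) @ rho(k)
    assert np.allclose(prod, np.eye(prod.shape[0]))
```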
The $K$-group $K^0_{\mathbb{Z}_4}(T^2)$ also includes
line bundles with nonzero Chern number.
To construct such a line bundle,
we gap out a trivial atomic insulator by
introducing one-body hopping terms.
Let $E$ be the atomic $\mathbb{Z}_4$-equivariant bundle
consisting of $s$ and $p_{x+iy}$ orbitals localized at
the center of the unit cell:
\begin{align}
\left( E = T^2 \times \mathbb{C}^2, \ \ \rho_{c_4}(\bm{k},v) = \left(c_4 \bm{k}, \begin{pmatrix}
1 & 0 \\
0 & i \\
\end{pmatrix} v \right) \right)
\ \ \leftrightarrow \ \ \xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(0,0)}*+{\bigcirc},([]!{+(0.15,-0.3)}{\ket{s} \oplus \ket{p_{x+iy}}}),
!{(0.5,0.5)}-!{(-0.5,0.5)},
!{(-0.5,0.5)}-!{(-0.5,-0.5)},
!{(-0.5,-0.5)}-!{(0.5,-0.5)},
!{(0.5,-0.5)}-!{(0.5,0.5)},
}
\end{align}
We define four $\mathbb{Z}_4$-equivariant line bundles as the
occupied states of the following $C_4$-symmetric Hamiltonians on the bundle $E$,
\begin{align}
&F_{\Gamma, \pm} : \qquad H(\bm{k}) = \sin k_x \sigma_x + \sin k_y \sigma_y \pm (m - \cos k_x - \cos k_y) \sigma_z, \qquad 0 < m < 2, \\
&F_{M, \pm} : \qquad H(\bm{k}) = \sin k_x \sigma_x + \sin k_y \sigma_y \pm (m - \cos k_x - \cos k_y) \sigma_z, \qquad -2 < m < 0,
\end{align}
where $\sigma_i$ $(i = x,y,z)$ are the Pauli matrices, and
the subscript $\Gamma/M$ represents the location of the band inversion.
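The Chern numbers quoted for these models in the table below can be cross-checked with the Fukui-Hatsugai-Suzuki lattice formula. Since the overall sign of $ch_1$ depends on orientation conventions, the sketch below (an illustrative check) asserts only the relative signs:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def lower_band(kx, ky, m, sign):
    H = np.sin(kx) * sx + np.sin(ky) * sy \
        + sign * (m - np.cos(kx) - np.cos(ky)) * sz
    return np.linalg.eigh(H)[1][:, 0]          # occupied (lower) eigenvector

def chern(m, sign, N=36):
    # Fukui-Hatsugai-Suzuki lattice Chern number of the occupied band
    ks = 2 * np.pi * np.arange(N) / N
    u = np.array([[lower_band(kx, ky, m, sign) for ky in ks] for kx in ks])
    F = 0.0
    for i in range(N):
        for j in range(N):
            a, b = u[i, j], u[(i + 1) % N, j]
            c, d = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            F += np.angle(np.vdot(a, b) * np.vdot(b, c)
                          * np.vdot(c, d) * np.vdot(d, a))
    return round(F / (2 * np.pi))

m_G, m_M = 1.0, -1.0          # band inversion at Gamma / at M
c = chern(m_G, +1)            # F_{Gamma,+}
assert abs(c) == 1
# relative signs as in the table: ch(F_{Gamma,-}) = ch(F_{M,+}) = -ch(F_{Gamma,+})
assert chern(m_G, -1) == -c and chern(m_M, +1) == -c and chern(m_M, -1) == c
```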
There are four topological invariants:
the Chern number $ch_1$ and the representations at $\Gamma$, $M$, and $X$.
These topological invariants have $R(\mathbb{Z}_4)$-module structures.
The above models have the following data of topological invariants:
$$
\begin{tabular}[t]{l|c|c|c|c}
\hline
& $ch_1([E])$ & $[E|_{\Gamma}]$ & $[E|_{M}]$ & $[E|_{X}]$ \\
\hline
Bundle & $(1+t+t^2+t^3)$ & $R(\mathbb{Z}_4)$ & $R(\mathbb{Z}_4)$ & $R(\mathbb{Z}_2 = \{1,c_2\})$ \\
\hline
$E_a$ & $0$ & $1$ & $1$ & $1$ \\
$E_b$ & $0$ & $1$ & $t^2$ & $s$ \\
$E_c$ & $0$ & $1+t^2$ & $t+t^3$ & $1+s$ \\
$F_{\Gamma,+}$ & $1$ & $1$ & $t$ & $s$ \\
$F_{\Gamma,-}$ & $-1$ & $t$ & $1$ & $1$ \\
$F_{M,+}$ & $-1$ & $1$ & $t$ & $1$ \\
$F_{M,-}$ & $1$ & $t$ & $1$ & $s$ \\
\hline
\end{tabular}
$$
From this table, we can read off three generators $e_1, e_2, e_3$ of the $K$-group $K^0_{\mathbb{Z}_4}(T^2)$:
\begin{align}
\begin{tabular}{c|c|c|c|c|c}
\hline
& & $ch_1([E])$ & $[E|_{\Gamma}]$ & $[E|_{M}]$ & $[E|_{X}]$ \\
\hline
$R(\mathbb{Z}_4)$-module structure & generator & $(1+t+t^2+t^3)$ & $R(\mathbb{Z}_4)$ & $R(\mathbb{Z}_4)$ & $R(\mathbb{Z}_2 = \{1,c_2\})$ \\
\hline
$R(\mathbb{Z}_4)$
& $e_1 = [E_a]$ & $0$ & $1$ & $1$ & $1$ \\
& $t e_1$ & $0$ & $t$ & $t$ & $s$ \\
& $t^2 e_1$ & $0$ & $t^2$ & $t^2$ & $1$ \\
& $t^3 e_1$ & $0$ & $t^3$ & $t^3$ & $s$ \\
\hline
$R(\mathbb{Z}_4)$
&$e_2 = [F_{M,+}]-[E_a]$ & $-1$ & $0$ & $t-1$ & $0$ \\
&$t e_2$ & $-1$ & $0$ & $t^2-t$ & $0$ \\
&$t^2 e_2$ & $-1$ & $0$ & $t^3-t^2$ & $0$ \\
&$t^3 e_2$ & $-1$ & $0$ & $1-t^3$ & $0$ \\
\hline
$(1-t+t^2-t^3)$ & $e_3 = [E_c]-(1+t^2) \cdot [E_a]$ & $0$ & $0$ & $-1+t-t^2+t^3$ & $s-1$ \\
\hline
\end{tabular}
\label{tab:p4_top_inv_K}
\end{align}
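The rows of this table can be reproduced mechanically by encoding representations as coefficient vectors in $\mathbb{Z}[t]/(1-t^4)$ and $\mathbb{Z}[s]/(1-s^2)$ (an illustrative script; the basis ordering is our choice):

```python
import numpy as np

def tmul(p):   # multiply by t in R(Z_4) = Z[t]/(1 - t^4)
    return np.roll(p, 1)

def smul(q):   # multiply by s in R(Z_2) = Z[s]/(1 - s^2)
    return np.roll(q, 1)

def t_act(x):  # t acting on a quadruple (ch_1, [E|_Gamma], [E|_M], [E|_X])
    return (x[0], tmul(x[1]), tmul(x[2]), smul(x[3]))

def add(x, y):
    return (x[0] + y[0], x[1] + y[1], x[2] + y[2], x[3] + y[3])

def sub(x, y):
    return (x[0] - y[0], x[1] - y[1], x[2] - y[2], x[3] - y[3])

def eq(x, y):
    return x[0] == y[0] and all(np.array_equal(a, b) for a, b in zip(x[1:], y[1:]))

V = lambda ch, G, M, X: (ch, np.array(G), np.array(M), np.array(X))
Ea  = V(0,  [1, 0, 0, 0], [1, 0, 0, 0], [1, 0])    # table row E_a
Ec  = V(0,  [1, 0, 1, 0], [0, 1, 0, 1], [1, 1])    # table row E_c
FMp = V(-1, [1, 0, 0, 0], [0, 1, 0, 0], [1, 0])    # table row F_{M,+}
FMm = V(1,  [0, 1, 0, 0], [1, 0, 0, 0], [0, 1])    # table row F_{M,-}

e1 = Ea
e2 = sub(FMp, Ea)                          # [F_{M,+}] - [E_a]
e3 = sub(Ec, add(Ea, t_act(t_act(Ea))))    # [E_c] - (1 + t^2)[E_a]

assert eq(e2, V(-1, [0]*4, [-1, 1, 0, 0], [0, 0]))          # (-1, 0, t-1, 0)
assert eq(t_act(t_act(t_act(e2))),
          V(-1, [0]*4, [1, 0, 0, -1], [0, 0]))              # (-1, 0, 1-t^3, 0)
assert eq(e3, V(0, [0]*4, [-1, 1, -1, 1], [-1, 1]))         # (0, 0, -1+t-t^2+t^3, s-1)
assert eq(t_act(e3), V(0, [0]*4, [1, -1, 1, -1], [1, -1]))  # t e_3 = -e_3
assert eq(FMm, sub(t_act(e1), e2))                          # [F_{M,-}] = t e_1 - e_2
```

The last two assertions confirm that $e_3$ generates the $R(\mathbb{Z}_4)$-module $(1-t+t^2-t^3)$ and reproduce one of the decompositions below.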
An arbitrary formal difference $[E_1]-[E_2]$ of two $\mathbb{Z}_4$-equivariant bundles can be written as an $R(\mathbb{Z}_4)$-linear combination of
these generators.
For example,
\begin{align}
&[E_b] = e_1 + (t-t^2) e_2 + e_3, \qquad
[E_c] = (1+t^2) e_1 + e_3, \qquad
[F_{\Gamma,+}] = e_1-t^2 e_2 + e_3, \\
&[F_{\Gamma,-}] = t e_1 + t^2 e_2 - e_3, \qquad
[F_{M,-}] = t e_1 - e_2.
\end{align}
The $R(\mathbb{Z}_4)$-module structure is consistent with
the algebraic derivation of the $K$-group (\ref{Eq:P4_KGroup_Modu}).
\subsubsection{Constraint on topological invariants}
There is a constraint on the data of topological invariants
\begin{align}
(ch_1([E]),[E|_{\Gamma}], [E|_{M}], [E|_X]) \in
(1+t+t^2+t^3) \oplus
\underbrace{R(\mathbb{Z}_4)}_{K^0_{\mathbb{Z}_4}(\{\Gamma \})} \oplus
\underbrace{R(\mathbb{Z}_4)}_{K^0_{\mathbb{Z}_4}(\{M \})} \oplus
\underbrace{R(\mathbb{Z}_2)}_{K^0_{\mathbb{Z}_4}(\{X\})}
\label{Eq:TopInvP4}
\end{align}
that arises from the fully gapped condition on the whole BZ torus.
Let us denote the r.h.s.\ of (\ref{Eq:TopInvP4}) by ${\rm Top}^0_{\mathbb{Z}_4}(T^2)$.
The constraint can be considered as the condition that
the topological invariant lies in the image of an injective homomorphism
from the $K$-group $K^{0}_{\mathbb{Z}_4}(T^2)$ to the set of topological invariants
\begin{align}
f_{\rm top}: K^{0}_{\mathbb{Z}_4}(T^2) \to {\rm Top}^0_{\mathbb{Z}_4}(T^2).
\end{align}
This homomorphism $f_{\rm top}$ is not surjective in general,
hence the condition
\begin{align}
x \equiv 0 \quad {\rm mod } \ \ \mbox{Im } (f_{\rm top}), \qquad x \in {\rm Top}^0_{\mathbb{Z}_4}(T^2)
\end{align}
makes sense.
From the data (\ref{tab:p4_top_inv_K}), $\mbox{Im }(f_{\rm top})$ is spanned by
\begin{align}
\left\{ \begin{array}{l}
(0,1,1,1) \\
(0,t,t,s) \\
(0,t^2,t^2,1) \\
(0,t^3,t^3,s) \\
(-1,0,t-1,0) \\
(-1,0,t^2-t,0) \\
(-1,0,t^3-t^2,0) \\
(-1,0,1-t^3,0) \\
(0,0,-1+t-t^2+t^3,s-1) \\
\end{array} \right.
\sim
\left\{ \begin{array}{l}
(0,1,1,1) \\
(0,t,t,s) \\
(0,t^2,t^2,1) \\
(0,t^3,t^3,s) \\
(-1,0,t-1,0) \\
(-2,0,t^2-1,0) \\
(-3,0,t^3-1,0) \\
(2,0,0,s-1) \\
(4,0,0,0) \\
\end{array} \right.
\end{align}
as an Abelian group.
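The structure of the quotient ${\rm Top}^0_{\mathbb{Z}_4}(T^2)/\mbox{Im }(f_{\rm top})$ can be computed by integer Smith reduction of this spanning set, with coordinates ordered as $(ch_1;\Gamma;M;X)$ and representations encoded as coefficient vectors (an illustrative script). One finds free rank 2 and a single $\mathbb{Z}_4$ torsion factor, matching the two $\mathbb{Z}$-valued conditions and the one mod-4 condition derived below:

```python
def elementary_divisors(rows):
    """Naive integer Smith reduction; returns the nonzero elementary divisors."""
    A = [list(r) for r in rows]
    m, n = len(A), len(A[0])
    divs, t = [], 0
    while t < min(m, n):
        nz = [(abs(A[i][j]), i, j) for i in range(t, m)
              for j in range(t, n) if A[i][j]]
        if not nz:
            break
        _, pi, pj = min(nz)
        A[t], A[pi] = A[pi], A[t]
        for r in A:
            r[t], r[pj] = r[pj], r[t]
        p, dirty = A[t][t], False
        for i in range(t + 1, m):
            if A[i][t]:
                q = A[i][t] // p
                A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                dirty |= A[i][t] != 0
        for j in range(t + 1, n):
            if A[t][j]:
                q = A[t][j] // p
                for i in range(m):
                    A[i][j] -= q * A[i][t]
                dirty |= A[t][j] != 0
        if not dirty:
            divs.append(abs(p))
            t += 1
    return divs

# generators of Im(f_top): (ch_1; Gamma: 1,t,t^2,t^3; M: 1,t,t^2,t^3; X: 1,s)
gens = [
    [0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0],     # e1
    [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],     # t e1
    [0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0],     # t^2 e1
    [0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1],     # t^3 e1
    [-1, 0, 0, 0, 0, -1, 1, 0, 0, 0, 0],   # e2
    [-1, 0, 0, 0, 0, 0, -1, 1, 0, 0, 0],   # t e2
    [-1, 0, 0, 0, 0, 0, 0, -1, 1, 0, 0],   # t^2 e2
    [-1, 0, 0, 0, 0, 1, 0, 0, -1, 0, 0],   # t^3 e2
    [0, 0, 0, 0, 0, -1, 1, -1, 1, -1, 1],  # e3
]
divs = elementary_divisors(gens)
assert len(divs) == 9                            # the nine generators are independent
assert sorted(d for d in divs if d > 1) == [4]   # Top/Im(f_top) = Z^2 + Z_4
```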
Let us denote a general element of ${\rm Top}^0_{\mathbb{Z}_4}(T^2)$ by $(ch_1, \Gamma(t), M(t), X(s))$.
Solving the equation $(ch_1, \Gamma(t), M(t), X(s)) = 0 \ {\rm mod } \ \mbox{Im }(f_{\rm top})$ leads us to the constraints
\begin{align}
&{\rm Constraint\ 1}: \quad \Gamma(1) = M(1) = X(1), \\
&{\rm Constraint\ 2}: \quad ch_1 = - \Gamma'(1) - M'(1) + 2 X'(1) \ \ {\rm mod } \ 4,
\end{align}
where $\Gamma'(1)$ is the derivative $\Gamma'(1) := \frac{d}{d t}\Gamma(t) |_{t \to 1}$, and similarly for $M'(1)$ and $X'(1)$.
The first constraint means that
the number of occupied states should be uniform over the whole BZ torus;
breaking it implies the existence of a Fermi surface.
The latter constraint serves as a criterion
for a nontrivial Chern number.~\cite{Fang2012}
\subsection{Wallpaper group p4g with projective representation of $D_4$}
In this section, we calculate the
$K$-group of $T^2$ with the wallpaper group p4g and a
nontrivial projective representation of its point group $D_4$,
which corresponds to a half-odd-integer spin degree of freedom at
each site.
\subsubsection{Space group p4g}
The space group p4g is generated by the following two elements
\begin{align}
\{ c_4 | \hat y/2 \} : (x,y) \to (-y,x+1/2), &&
\{ \sigma | \hat x/2 \} : (x,y) \to (x+1/2,-y),
\end{align}
and the primitive lattice translations.
This corresponds to the choice of non-primitive
lattice translations $\bm{a}_{c_4} = (0,1/2)$ and $\bm{a}_{\sigma} = (1/2,0)$.
We define other non-primitive translations by
\begin{align}
&\bm{a}_{c_2} := \bm{a}_{c_4} + c_4 \bm{a}_{c_4}, \qquad
\bm{a}_{c_4^3} := \bm{a}_{c_4} + c_4 \bm{a}_{c_2}, \label{eq:ap_p4g_c_4} \\
&\bm{a}_{\sigma c_4} := \bm{a}_{\sigma} +\sigma \bm{a}_{c_4}, \qquad
\bm{a}_{\sigma c_2} := \bm{a}_{\sigma} +\sigma \bm{a}_{c_2}, \qquad
\bm{a}_{\sigma c_4^3} := \bm{a}_{\sigma} +\sigma \bm{a}_{c_4^3}, \label{eq:ap_p4g_sigma}
\end{align}
which are summarized as:
$$
\begin{tabular}[t]{c|c|c|c|c|c|c|c|c}
\hline
$p \in D_4$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$\bm{a}_p$ & $(0,0)$ & $(0,1/2)$ & $(1/2,1/2)$ & $(1/2,0)$ & $(1/2,0)$ & $(1/2,1/2)$ & $(0,1/2)$ & $(0,0)$ \\
\hline
\end{tabular}
$$
Under this choice, the two-cocycle $\bm{\nu}_{p_1,p_2} = \bm{a}_{p_1}+p_1 \bm{a}_{p_2} - \bm{a}_{p_1p_2} \in \Pi$ is given by the following table.
$$
\begin{tabular}[t]{c|ccccccccc}
\hline
$\bm{\nu}_{p_1,p_2}, p_1 \backslash p_2$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$1$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ \\
$c_4$ & $(0,0)$ & $(-1,0)$ & $(-1,1)$ & $(0,1)$ & $(0,1)$ & $(-1,1)$ & $(-1,0)$ & $(0,0)$ \\
$c_2$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ \\
$c_4^3$ & $(0,0)$ & $(1,0)$ & $(1,-1)$ & $(0,-1)$ & $(0,-1)$ & $(1,-1)$ & $(1,0)$ & $(0,0)$ \\
$\sigma$ & $(0,0)$ & $(0,-1)$ & $(1,-1)$ & $(1,0)$ & $(1,0)$ & $(1,-1)$ & $(0,-1)$ & $(0,0)$ \\
$\sigma c_4$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ \\
$\sigma c_2$ & $(0,0)$ & $(0,1)$ & $(-1,1)$ & $(-1,0)$ & $(-1,0)$ & $(-1,1)$ & $(0,1)$ & $(0,0)$ \\
$\sigma c_4^3$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ & $(0,0)$ \\
\hline
\end{tabular}
$$
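The table above can be checked mechanically from the non-primitive translations. The following Python sketch (an illustrative check, not part of the original derivation) encodes the $D_4$ elements as $2\times 2$ integer matrices and evaluates $\bm{\nu}_{p_1,p_2} = \bm{a}_{p_1}+p_1 \bm{a}_{p_2} - \bm{a}_{p_1p_2}$; the key {\tt "s"} abbreviates $\sigma$.

```python
from fractions import Fraction as F

def mul(a, b):
    """Product of 2x2 integer matrices."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def act(m, v):
    """Apply a 2x2 matrix to a 2-vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1])

# D_4 elements as 2x2 integer matrices; "s" abbreviates sigma.
g = {"1": ((1, 0), (0, 1)),
     "c4": ((0, -1), (1, 0)),      # (x, y) -> (-y, x)
     "s": ((1, 0), (0, -1))}       # (x, y) -> (x, -y)
g["c2"] = mul(g["c4"], g["c4"])
g["c4^3"] = mul(g["c4"], g["c2"])
for p in ("c4", "c2", "c4^3"):
    g["s " + p] = mul(g["s"], g[p])
name = {m: n for n, m in g.items()}

# Non-primitive translations a_p of p4g, as in the table above.
h = F(1, 2)
a = {"1": (F(0), F(0)), "c4": (F(0), h), "c2": (h, h), "c4^3": (h, F(0)),
     "s": (h, F(0)), "s c4": (h, h), "s c2": (F(0), h), "s c4^3": (F(0), F(0))}

def nu(p1, p2):
    """Two-cocycle nu_{p1,p2} = a_{p1} + p1 . a_{p2} - a_{p1 p2}."""
    p12 = name[mul(g[p1], g[p2])]
    r = act(g[p1], a[p2])
    return (a[p1][0] + r[0] - a[p12][0], a[p1][1] + r[1] - a[p12][1])
```

Every $\bm{\nu}_{p_1,p_2}$ evaluates to an integer vector and reproduces the corresponding table entry, e.g.\ $\bm{\nu}_{c_4,c_4} = (-1,0)$ and $\bm{\nu}_{\sigma,c_4} = (0,-1)$.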
Next, we move on to the momentum space.
The point group $D_4$ acts on the square BZ torus by
$c_4 \cdot (k_x,k_y) = (-k_y,k_x)$ and $\sigma \cdot (k_x,k_y) = (k_x,-k_y)$.
All the $D_4$ actions are summarized in the following figure:
\begin{align}
\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(0,0)}*+{\bullet}="O" ([]!{+(-0.1,-0.2)} {\Gamma}) ,
!{(1,1)}="a" ,
!{(-1,1)}="b",
!{(-1,-1)}="c",
!{(1,-1)}="d",
!{(0,1)}="e",
"a"-"b",
"b"-"c",
"c"-"d",
"d"-"a",
!{(1,0)}*+{\bullet} ([]!{+(-0.1,-0.2)} {X}) ,
!{(1,1)}*+{\bullet} ([]!{+(-0.1,-0.2)} {M}) ,
!{(-1,0)},
!{(-1,1)},
!{(-1.3,0)}-@{.}!{(1.3,0)},
!{(0,-1.3)}-@{.}!{(0,1.3)},
!{(-1.2,-1.2)}-@{.}!{(1.2,1.2)},
!{(-1.2,1.2)}-@{.}!{(1.2,-1.2)},
!{(1.2,-0.2)}-@{<->}!{(1.2,0.2)},
!{(-0.2,1.2)}-@{<->}!{(0.2,1.2)},
!{(1.0,1.3)}-@{<->}!{(1.3,1.0)},
!{(1.0,-1.3)}-@{<->}!{(1.3,-1.0)},
!{(1.5,0)}*+{\sigma},
!{(0,1.5)}*+{\sigma c_2},
!{(1.4,1.4)}*+{\sigma c_4^3},
!{(1.4,-1.4)}*+{\sigma c_4},
!{(0.4,0)}-@/_0.2cm/@{->}!{(0,0.4)},
!{(0.6,0.2)}*+{c_4},
}
\end{align}
$\Gamma$ and $M$ are fixed points of $D_4$,
and $X$ is fixed by the subgroup
$D_2^{(v)} = \{1,c_2,\sigma,\sigma c_2\}$.
The choices (\ref{eq:ap_p4g_c_4}) and (\ref{eq:ap_p4g_sigma})
correspond to
\begin{align}
&U_{1}(\bm{k}):= 1, && U_{c_2}(\bm{k}) := U_{c_4}(c_4 \bm{k}) U_{c_4}(\bm{k}), && U_{c_4^3}(\bm{k}) := U_{c_4}(c_2 \bm{k}) U_{c_2}(\bm{k}), \label{eq:u_p4g_1} \\
&U_{\sigma c_4}(\bm{k}):=U_{\sigma}(c_4 \bm{k}) U_{c_4}(\bm{k}), && U_{\sigma c_2}(\bm{k}) := U_{\sigma}(c_2 \bm{k}) U_{c_2}(\bm{k}), && U_{\sigma c_4^3}(\bm{k}) := U_{\sigma}(c_4^3 \bm{k}) U_{c_4^3}(\bm{k}) \label{eq:u_p4g_2}
\end{align}
for fixed $U_{c_4}(\bm{k})$ and $U_{\sigma}(\bm{k})$.
The two-cocycle $(\tau_{\sf p4g})_{p,p'}(\bm{k}) = - \bm{k} \cdot \bm{\nu}_{p,p'}$
on the momentum space is summarized as:
$$
e^{i(\tau_{\sf p4g})_{p,p'}(pp'\bm{k})}
\quad = \quad
\begin{tabular}{c|ccccccccc}
\hline
$p \backslash p'$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$c_4$ & $1$ & $1$ & $1$ & $1$ & $e^{- ik_x}$ & $e^{i k_y}$ & $e^{i k_x}$ & $e^{-i k_y}$ \\
$c_2$ & $1$ & $1$ & $1$ & $1$ & $e^{- i(k_x+k_y)}$ & $e^{-i (k_x-k_y)}$ & $e^{i (k_x+k_y)}$ & $e^{i (k_x+k_y)}$ \\
$c_4^3$ & $1$ & $1$ & $1$ & $1$ & $e^{- ik_y}$ & $e^{-i k_x}$ & $e^{i k_y}$ & $e^{i k_x}$ \\
$\sigma$ & $1$ & $1$ & $1$ & $1$ & $e^{- ik_x}$ & $e^{i k_y}$ & $e^{i k_x}$ & $e^{-i k_y}$ \\
$\sigma c_4$ & $1$ & $1$ & $1$ & $1$ & $e^{- i(k_x+k_y)}$ & $e^{-i (k_x-k_y)}$ & $e^{i (k_x+k_y)}$ & $e^{i (k_x+k_y)}$ \\
$\sigma c_2$ & $1$ & $1$ & $1$ & $1$ & $e^{- ik_y}$ & $e^{-i k_x}$ & $e^{i k_y}$ & $e^{i k_x}$ \\
$\sigma c_4^3$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
\hline
\end{tabular}
$$
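As an illustration of how these entries arise, take $(p,p') = (c_4,\sigma)$. From the position-space table, $\bm{\nu}_{c_4,\sigma} = (0,1)$, and $c_4 \sigma = \sigma c_4^3$ acts on momenta as $\sigma c_4^3 \cdot (k_x,k_y) = (k_y,k_x)$, so that
\begin{align}
e^{i(\tau_{\sf p4g})_{c_4,\sigma}(c_4 \sigma \bm{k})} = e^{-i (k_y,k_x) \cdot (0,1)} = e^{-i k_x},
\end{align}
in agreement with the table.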
The two-cocycle at symmetric points can be read off as follows.
At $\Gamma$ point, the two-cocycle is trivial
\begin{equation}
(\tau_{\sf p4g}|_{\Gamma})_{p,p'} = 1, \qquad p,p' \in D_4.
\end{equation}
The restriction to the $M$ point is summarized as:
$$
e^{i(\tau_{\sf p4g}|_M)_{p,p'}}
\quad =\quad
\begin{tabular}{c|ccccccccc}
\hline
$p \backslash p'$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$c_4$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\
$c_2$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$c_4^3$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\
$\sigma$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\
$\sigma c_4$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$\sigma c_2$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\
$\sigma c_4^3$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
\hline
\end{tabular}
$$
This can be trivialized by
the one-cochain $\beta \in C^1(D_4,U(1))$ defined by the following table.
$$
\begin{tabular}[t]{c|ccccccccc}
\hline
$p \in D_4$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$\beta_p$ & $1$ & $i$ & $-1$ & $-i$ & $-i$ & $1$ & $i$ & $-1$ \\
\hline
\end{tabular}
$$
We can see $(\tau_{\sf p4g}|_{M} + \delta \beta)_{p,p'} = 0$ for all $p,p' \in D_4$.
(Note that $(\delta \beta)_{p,p'} = \beta_p \beta_{p'} \beta_{pp'}^{-1}$.)
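For example, for $(p,p') = (c_4,\sigma)$ we have $c_4 \sigma = \sigma c_4^3$ and
\begin{align}
(\delta \beta)_{c_4,\sigma} = \beta_{c_4} \beta_{\sigma} \beta_{\sigma c_4^3}^{-1} = i \cdot (-i) \cdot (-1)^{-1} = -1,
\end{align}
which cancels the factor $e^{i(\tau_{\sf p4g}|_M)_{c_4,\sigma}} = -1$ in the table.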
On the other hand, the
restriction to the $X$ point is summarized in the table:
$$
e^{i(\tau_{\sf p4g}|_X)_{p,p'}}
\quad =\quad
\begin{tabular}{c|ccccccccc}
\hline
$p \backslash p'$ & $1$ & $c_2$ & $\sigma$ & $\sigma c_2$ \\
\hline
$1$ & $1$ & $1$ & $1$ & $1$ \\
$c_2$ & $1$ & $1$ & $-1$ & $-1$\\
$\sigma$ & $1$ & $1$ & $-1$ & $-1$ \\
$\sigma c_2$ & $1$ & $1$ & $1$ & $1$ \\
\hline
\end{tabular}
$$
This two-cocycle $\tau_{\sf p4g}|_X$ cannot be trivialized by any one-cochain;
it represents the nontrivial element of the
group cohomology $H^2(D_2,U(1)) = \mathbb{Z}_2$.
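One way to see this is as follows. The table gives $U_{c_2} U_{\sigma} = - U_{\sigma c_2}$ but $U_{\sigma} U_{c_2} = + U_{\sigma c_2}$, i.e.\ $U_{c_2}$ and $U_{\sigma}$ anticommute. The relative sign
\begin{align}
e^{i(\tau_{\sf p4g}|_X)_{c_2,\sigma} - i(\tau_{\sf p4g}|_X)_{\sigma,c_2}} = -1
\end{align}
is invariant under any redefinition $U_p \mapsto \beta_p U_p$, so no one-cochain can remove it.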
\subsubsection{Projective representation of $D_4$}
In this section, we
consider spin half-integer systems with nonsymmorphic p4g symmetry.
In addition to the twist from the non-primitive lattice translations $\{\bm{a}_p\}_{p \in D_4}$,
the point group $D_4$ obeys a projective representation whose
factor system represents the nontrivial element of $H^2(D_4,U(1)) = \mathbb{Z}_2$.
A simple way to fix the two-cocycle $\omega \in Z^2(D_4,\mathbb{R}/2 \pi \mathbb{Z})$ is
to consider an explicit form of a projective representation of $D_4$.
Let us consider the following projective representation of $D_4$,
\begin{align}
U_{c_4} = e^{-i \frac{\pi}{4} \sigma_z}, \qquad
U_{\sigma} = e^{-i \frac{\pi}{2} \sigma_y} = -i \sigma_y,
\end{align}
where $\sigma_{\mu} (\mu = x,y,z)$ are the Pauli matrices.
Under the same choice of representation matrices as (\ref{eq:u_p4g_1}) and (\ref{eq:u_p4g_2}),
the two-cocycle $\omega \in Z^2(D_4,\mathbb{R}/2 \pi \mathbb{Z})$ is fixed as in the following table.
\begin{align}
e^{i\omega_{p,p'}}
\quad =\quad
\begin{tabular}{c|ccccccccc}
\hline
$p \backslash p'$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$c_4$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $1$ & $1$ & $1$ \\
$c_2$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ & $1$ & $1$ \\
$c_4^3$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ & $-1$ & $-1$ & $1$ \\
$\sigma$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\
$\sigma c_4$ & $1$ & $1$ & $1$ & $-1$ & $1$ & $-1$ & $-1$ & $-1$ \\
$\sigma c_2$ & $1$ & $1$ & $-1$ & $-1$ & $1$ & $1$ & $-1$ & $-1$ \\
$\sigma c_4^3$ & $1$ & $-1$ & $-1$ & $-1$ & $1$ & $1$ & $1$ & $-1$ \\
\hline
\end{tabular}
\label{tab:d4_two_cocycle}
\end{align}
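This table can be verified directly from the matrices above. The following Python sketch (an illustrative check, not part of the original text) builds the eight representation matrices by the same products as in (\ref{eq:u_p4g_1}) and (\ref{eq:u_p4g_2}) and extracts the scalar $e^{i\omega_{p,p'}}$ from $U_p U_{p'} = e^{i\omega_{p,p'}} U_{pp'}$; the key {\tt "s"} abbreviates $\sigma$.

```python
import cmath

zeta = cmath.exp(-1j * cmath.pi / 4)

def mm(a, b):
    """Product of 2x2 complex matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def imul(a, b):
    """Product of 2x2 integer matrices (abstract D_4 multiplication)."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Abstract D_4 via its defining 2x2 integer matrices; "s" abbreviates sigma.
g = {"1": ((1, 0), (0, 1)), "c4": ((0, -1), (1, 0)), "s": ((1, 0), (0, -1))}
g["c2"] = imul(g["c4"], g["c4"])
g["c4^3"] = imul(g["c4"], g["c2"])
for p in ("c4", "c2", "c4^3"):
    g["s " + p] = imul(g["s"], g[p])
name = {m: n for n, m in g.items()}

# Projective representation: U_{c4} = exp(-i pi sigma_z/4), U_s = -i sigma_y,
# with the remaining matrices built by the corresponding products.
U = {"1": [[1, 0], [0, 1]],
     "c4": [[zeta, 0], [0, 1 / zeta]],
     "s": [[0, -1], [1, 0]]}
U["c2"] = mm(U["c4"], U["c4"])
U["c4^3"] = mm(U["c4"], U["c2"])
for p in ("c4", "c2", "c4^3"):
    U["s " + p] = mm(U["s"], U[p])

def omega(p1, p2):
    """Scalar in U_{p1} U_{p2} = omega * U_{p1 p2}."""
    prod = mm(U[p1], U[p2])
    tgt = U[name[imul(g[p1], g[p2])]]
    i, j = next((i, j) for i in range(2) for j in range(2)
                if abs(tgt[i][j]) > 1e-9)
    return prod[i][j] / tgt[i][j]
```

All 64 factors come out as $\pm 1$ and reproduce the table, e.g.\ $e^{i\omega_{c_4,c_4^3}} = e^{i\omega_{\sigma,\sigma}} = -1$.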
Then, the total two-cocycle $\tau$ for the spin half integer degrees of freedom
with p4g symmetry is given by $\tau = \tau_{\sf p4g} + \omega$.
The spin half integer $p4g$ symmetry is summarized in terms of Hamiltonians by
\begin{equation}
\left\{\begin{array}{l}
U_p(\bm{k}) H(\bm{k}) U_p(\bm{k})^{-1} = H(p \bm{k}), \\
U_{p_1}(p_2 \bm{k}) U_{p_2}(\bm{k}) = e^{i(\tau_{\sf p4g})_{p_1,p_2}(p_1p_2\bm{k})} \cdot e^{i\omega_{p_1,p_2}} U_{p_1p_2}(\bm{k}).
\end{array}\right.
\end{equation}
The two-cocycle $\tau = \tau_{\sf p4g} + \omega \in Z^2 \big( D_4, C(T^2,\mathbb{R}/2 \pi \mathbb{Z}) \big)$
is summarized in the following table.
\begin{align}
e^{i(\tau_{\sf p4g})_{p,p'}(pp'\bm{k})+i\omega_{p,p'}}
\quad =\quad
\begin{tabular}{c|ccccccccc}
\hline
$p \backslash p'$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$c_4$ & $1$ & $1$ & $1$ & $-1$ & $-e^{- ik_x}$ & $e^{i k_y}$ & $e^{i k_x}$ & $e^{-i k_y}$ \\
$c_2$ & $1$ & $1$ & $-1$ & $-1$ & $-e^{- i(k_x+k_y)}$ & $-e^{-i (k_x-k_y)}$ & $e^{i (k_x+k_y)}$ & $e^{i (k_x+k_y)}$ \\
$c_4^3$ & $1$ & $-1$ & $-1$ & $-1$ & $-e^{- ik_y}$ & $-e^{-i k_x}$ & $-e^{i k_y}$ & $e^{i k_x}$ \\
$\sigma$ & $1$ & $1$ & $1$ & $1$ & $-e^{- ik_x}$ & $-e^{i k_y}$ & $-e^{i k_x}$ & $-e^{-i k_y}$ \\
$\sigma c_4$ & $1$ & $1$ & $1$ & $-1$ & $e^{- i(k_x+k_y)}$ & $-e^{-i (k_x-k_y)}$ & $-e^{i (k_x+k_y)}$ & $-e^{i (k_x+k_y)}$ \\
$\sigma c_2$ & $1$ & $1$ & $-1$ & $-1$ & $e^{- ik_y}$ & $e^{-i k_x}$ & $-e^{i k_y}$ & $-e^{i k_x}$ \\
$\sigma c_4^3$ & $1$ & $-1$ & $-1$ & $-1$ & $1$ & $1$ & $1$ & $-1$ \\
\hline
\end{tabular}
\label{twist_spinful_p4g}
\end{align}
The fixed points $\Gamma$ and $M$ obey
nontrivial projective representations of $D_4$ with
two-cocycles $\tau_{\sf p4g}|_{\Gamma} + \omega$ and $\tau_{\sf p4g}|_{M} + \omega$, respectively.
The $X$ point obeys a trivial projective representation of $D^{(v)}_2 = \{1,c_2, \sigma,\sigma c_2\}$
with two-cocycle $\tau_{\sf p4g}|_X + \omega$.
\subsubsection{A little bit about representations of $D_4$}
\label{sec:A little bit about representations of D4}
To compute the $K$-group $K^{\tau_{\sf p4g}+\omega+n}_{D_4}(T^2)$,
we need to know the representations at high-symmetric points and their
restrictions to subgroups of $D_4$ realized at low-symmetric lines in BZ.
The dihedral group $D_4$ has
four 1-dimensional linear irreps.\ $\{1, A, B, AB\}$, one 2-dimensional linear irrep.\ $\{E\}$,
and two $2$-dimensional nontrivial projective irreps.\ $\{W,BW\}$.
It is useful to introduce the character of a representation, defined as
the trace of the representation matrices.
The character table of the linear representations of the dihedral group $D_4$ is as follows:
\begin{align}
\begin{tabular}[t]{cc|ccccc}
\hline
irrep. & Mulliken& $\{1\}$ & $\{c_4,c_4^3\}$ & $\{c_2\}$ & $\{\sigma,\sigma c_2\}$ & $\{\sigma c_4,\sigma c_4^3\}$ \\
\hline
$1$ & $A_1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
$A$ & $A_2$ & $1$ & $1$ & $1$ & $-1$ & $-1$ \\
$B$ & $B_1$ & $1$ & $-1$ & $1$ & $1$ & $-1$ \\
$AB$ & $B_2$ & $1$ & $-1$ & $1$ & $-1$ & $1$ \\
$E $ & $E$ & $2$ & $0$ & $-2$ & $0$ & $0$ \\
\hline
\end{tabular}
\end{align}
For projective representations,
we need to specify a two-cocycle $\omega_{p,p'} \in Z^2(D_4,\mathbb{R}/2 \pi \mathbb{Z})$
which appears in
\begin{align}
U(p) U(p') = e^{i \omega_{p,p'}} U(p p'), \qquad p,p' \in D_4.
\end{align}
Once we fix a two-cocycle $\omega$,
projective representations with two-cocycle $\omega$,
dubbed {\it $\omega$-projective representations},
make sense.
Note that fixing a two-cocycle
is needed even when its
group cohomology class is trivial, $[\omega] = 0 \in H^2(D_4,U(1))$.
In the same way,
the $\omega$-projective character is defined as the trace of
the representation matrices
\begin{align}
\chi(p) := {\rm tr\,} (U(p)).
\end{align}
Clearly, $\chi(p)$ is invariant under the unitary transformation $U(p) \mapsto V U(p) V^{\dag}$.
Different choices of two-cocycles with the same cohomology class may change the projective character.
For example, the following table shows the
projective characters at the symmetric points $\Gamma, M$, and $X$:
\begin{align}
\begin{tabular}[t]{c|c|c|cccccccc}
\hline
Symmetric point & two-cocycle & irrep. & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
$\Gamma$ & $\omega$
& $W$ & $2$ & $\sqrt{2}$ & $0$ & $-\sqrt{2}$ & $0$ & $0$ & $0$ & $0$ \\
& (defined in (\ref{tab:d4_two_cocycle})) & $BW$ & $2$ & $-\sqrt{2}$ & $0$ & $\sqrt{2}$ & $0$ & $0$ & $0$ & $0$ \\
\hline
$M$ & $\tau_{\sf p4g}|_{M} + \omega$
& $W$ & $2$ & $-\sqrt{2} i$ & $0$ & $\sqrt{2} i$ & $0$ & $0$ & $0$ & $0$ \\
& & $BW$ & $2$ & $\sqrt{2} i$ & $0$ & $- \sqrt{2} i$ & $0$ & $0$ & $0$ & $0$ \\
\hline
$X$ & $\tau_{\sf p4g}|_{X} + \omega$
& $1$ & $1$ & & $-i$ & & $1$ & & $-i$ & \\
& & $t_{\sigma c_2}$ & $1$ & & $i$ & & $1$ & & $i$ & \\
& & $t_{\sigma}$ & $1$ & & $i$ & & $-1$ & & $-i$ & \\
& & $t_{\sigma c_2} t_{\sigma}$ & $1$ & & $-i$ & & $-1$ & & $i$ & \\
\hline
\end{tabular}
\label{tab:proj_character_d4}
\end{align}
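For example, the $\Gamma$ entries follow from the explicit matrices above:
\begin{align}
\chi_W(c_4) = {\rm tr\,} \big( e^{-i \frac{\pi}{4} \sigma_z} \big) = \zeta + \zeta^{-1} = 2 \cos(\pi/4) = \sqrt{2}, \qquad \zeta = e^{-\pi i/4}.
\end{align}
Consistently with the $M$ row, the trivialization by $\beta$ (with the convention $U_{c_4} \mapsto \beta_{c_4}^{-1} U_{c_4}$) multiplies this value by $\beta_{c_4}^{-1} = -i$, giving $-\sqrt{2} i$.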
Examples of representations of $D_4$ are shown in Table~\ref{Tab:rep_of_D4}.
\begin{table*}[!]
\begin{center}
\caption{Examples of projective representations of $D_4$,
where $\zeta = e^{- \pi i/4}$.
}
\begin{tabular}[t]{c|c|ccccccccc}
\hline
two-cocycle & $U_{\rho}(p)$ & $1$ & $c_4$ & $c_2$ & $c_4^3$ & $\sigma$ & $\sigma c_4$ & $\sigma c_2$ & $\sigma c_4^3$ \\
\hline
& $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\
& $A$ & $1$ & $1$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ \\
triv. & $B$ & $1$ & $-1$ & $1$ & $-1$ & $1$ & $-1$ & $1$ & $-1$ \\
(linear reps.) & $AB$ & $1$ & $-1$ & $1$ & $-1$ & $-1$ & $1$ & $-1$ & $1$ \\
& $E$ & $\begin{pmatrix}
1 & 0 \\
0 & 1 \\
\end{pmatrix}$ & $\begin{pmatrix}
0 & -1 \\
1 & 0 \\
\end{pmatrix}$ & $\begin{pmatrix}
-1 & 0 \\
0 & -1 \\
\end{pmatrix}$ & $\begin{pmatrix}
0 & 1 \\
-1 & 0 \\
\end{pmatrix}$ & $\begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix}$ & $\begin{pmatrix}
0 & -1 \\
-1 & 0 \\
\end{pmatrix}$ & $\begin{pmatrix}
-1 & 0 \\
0 & 1 \\
\end{pmatrix}$ & $\begin{pmatrix}
0 & 1 \\
1 & 0 \\
\end{pmatrix}$ \\
\hline
$\omega$ & $W$ &
$\begin{pmatrix}
1 & 0 \\
0 & 1 \\
\end{pmatrix}$
&$\begin{pmatrix}
\zeta & 0 \\
0 & \zeta^{-1} \\
\end{pmatrix}$
& $\begin{pmatrix}
-i & 0 \\
0 & i \\
\end{pmatrix}$
& $\begin{pmatrix}
-i \zeta & 0 \\
0 & i \zeta^{-1} \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & -1 \\
1 & 0 \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & -\zeta^{-1} \\
\zeta & 0 \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & -i \\
-i & 0 \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & -i \zeta^{-1} \\
-i \zeta & 0 \\
\end{pmatrix}$ \\
& $BW$ &
$\begin{pmatrix}
1 & 0 \\
0 & 1 \\
\end{pmatrix}$
&$\begin{pmatrix}
-\zeta & 0 \\
0 & -\zeta^{-1} \\
\end{pmatrix}$
& $\begin{pmatrix}
-i & 0 \\
0 & i \\
\end{pmatrix}$
& $\begin{pmatrix}
i \zeta & 0 \\
0 & -i \zeta^{-1} \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & -1 \\
1 & 0 \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & \zeta^{-1} \\
-\zeta & 0 \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & -i \\
-i & 0 \\
\end{pmatrix}$
& $\begin{pmatrix}
0 & i \zeta^{-1} \\
i \zeta & 0 \\
\end{pmatrix}$ \\
\hline
\end{tabular}
\label{Tab:rep_of_D4}
\end{center}
\end{table*}
\begin{table*}[!]
\begin{center}
\caption{The table of tensor product representations of $D_4$. }
\begin{tabular}[t]{c|ccccc|cc}
\hline
$\rho_1 \otimes \rho_2, \rho_1 \backslash \rho_2$ & $1$ & $A$ & $B$ & $AB$ & $E$ & $W$ & $BW$ \\
\hline
$1$ & $1$ & $A$ & $B$ & $AB$ & $E$ & $W$ & $BW$ \\
$A$ & $A$ & $1$ & $AB$ & $B$ & $E$ & $W$ & $BW$ \\
$B$ & $B$ & $AB$ & $1$ & $A$ & $E$ & $BW$ & $W$ \\
$AB$ & $AB$ & $B$ & $A$ & $1$ & $E$ & $BW$ & $W$ \\
$E $ & $E $ & $E $ & $E $ & $E $ & $1+A+B+AB$ & $W+BW$ & $W+BW$ \\
\hline
\end{tabular}
\label{Tab:ProductR(D_4)}
\end{center}
\end{table*}
The tensor product of two linear representations is defined by
\begin{align}
U_{\rho_1 \otimes \rho_2}(p) := U_{\rho_1}(p) U_{\rho_2}(p),
\label{eq:tensor_prod_rep_d4}
\end{align}
which induces the ring structure on $R(D_4)$, the Abelian group generated by
linear representations of $D_4$.
If $\rho_2$ is a $\omega$-projective representation,
eq.(\ref{eq:tensor_prod_rep_d4}) defines
the $R(D_4)$-module structure of
$R^{\omega}(D_4)$, the Abelian group generated by
$\omega$-projective representations.
Table~\ref{Tab:ProductR(D_4)} summarizes the tensor product representations.
As the notation suggests, $AB$ and $BW$ mean $A \otimes B$ and $B \otimes W$, respectively.
From Table~\ref{Tab:ProductR(D_4)},
the representation ring of $D_4$ reads
\begin{equation}
R(D_4) \cong \mathbb{Z}[A,B,E]/(1-A^2,1-B^2,E-AE,E-BE,E^2-1-A-B-AB).
\end{equation}
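The relations can be checked at the level of characters; e.g.\ for $E^2 = 1+A+B+AB$, the squared character $\chi_E^2$ takes the values $(4,0,4,0,0)$ on the five conjugacy classes, which coincides with $\chi_1 + \chi_A + \chi_B + \chi_{AB}$.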
We can read off the $R(D_4)$-module structure
of the $\omega$-projective representations $R^{\omega}(D_4)$
as
\begin{align}
R^{\omega}(D_4) \cong (1+A+E).
\end{align}
The restriction of $D_4$ to a
subgroup $H$ induces
the restriction of the two-cocycle
\begin{align}
\omega \mapsto \omega|_H \in Z^2(H,U(1))
\end{align}
and the restriction of $\omega$-projective representations of $D_4$ to
$(\omega|_H)$-projective representations,
\begin{align}
\rho \mapsto \rho|_H \in R^{\omega|_H}(H).
\end{align}
We summarize the restriction of irreps.\ of $D_4$ in Table~\ref{Tab:RestrictionD_4}.
\begin{table*}[!]
\begin{center}
\caption{
Subgroups of $D_4 = \{1,c_4,c_2,c_4^3, \sigma,\sigma c_4,\sigma c_2,\sigma c_4^3\}$
and restrictions of representations of $D_4$ to subgroups.
In the restrictions of the two
$\omega$-projective irreps.,
we trivialize the two-cocycle $\omega$ by the
redefinition $U_{c_4} \mapsto \zeta^{-1} U_{c_4}$.
}
\begin{tabular}{c|l|l|ccccc|cc}
\hline
subgroup $H$ & elements & $R(H)$ & $1|_H$ & $A|_H$ & $B|_H$ & $AB|_H$ & $E|_H$ & $W|_H$ & $BW|_H$ \\
\hline
$C_4$ & $\{1,c_4,c_2,c_4^3\}$ & $\mathbb{Z}[t]/(1-t^4)$ & $1$ & $1$ & $t^2$ & $t^2$ & $t+t^3$ & $1+t$ & $t^2+t^3$ \\
$D_2^{(v)}$ & $\{1,c_2,\sigma ,\sigma c_2\}$ & $\mathbb{Z}[t_1,t_2]/(t_1^2,t_2^2)$ & $1$ & $t_1t_2$ & $1$ & $t_1t_2$ & $t_1+t_2$ & $W$ & $W$ \\
$D_2^{(d)}$ & $\{1,c_2,\sigma c_4,\sigma c_4^3\}$ & $\mathbb{Z}[t_1,t_2]/(t_1^2,t_2^2)$ & $1$ & $t_1t_2$ & $t_1t_2$ & $1$ & $t_1+t_2$ & $W$ & $W$ \\
$\mathbb{Z}_2$ & $\{1,c_2\}$ & $\mathbb{Z}[s]/(1-s^2)$ & $1$ & $1$ & $1$ & $1$ & $2 s$ & $1+s$ & $1+s$ \\
$\mathbb{Z}_2^{(v)}$ & $\{1,\sigma \} \sim \{1,\sigma c_2\}$ & $\mathbb{Z}[s]/(1-s^2)$ & $1$ & $s$ & $1$ & $s$ & $1+s$ & $1+s$ & $1+s$ \\
$\mathbb{Z}_2^{(d)}$ & $\{1,\sigma c_4\} \sim \{1,\sigma c_4^3\}$ & $\mathbb{Z}[s]/(1-s^2)$ & $1$ & $s$ & $s$ & $1$ & $1+s$ & $1+s$ & $1+s$ \\
\hline
\end{tabular}
\label{Tab:RestrictionD_4}
\end{center}
\end{table*}
\subsubsection{$K$-group of 1-dimensional subspace $X_1$}
To compute the $K$-group, we introduce $D_4$-invariant subspaces
$X_1, Y_1$ and $Z$:
$$
X_1 \ \ = \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}*+{\bullet}="a",
!{(1,1)}*+{\bullet}="b" ,
!{(0,1)}*+{\bullet}="c" ,
!{(-1,1)}*+{\bullet}="d" ,
!{(-1,0)}*+{\bullet}="e",
!{(-1,-1)}*+{\bullet}="f",
!{(0,-1)}*+{\bullet}="g" ,
!{(1,-1)}*+{\bullet}="h" ,
!{(0,0)}*+{\bullet}="i" ,
"a"-@{.}"b",
"b"-@{.}"c",
"c"-@{.}"d",
"d"-"e",
"e"-"f",
"f"-"g",
"g"-"h",
"h"-@{.}"a",
"a"-"i",
"c"-"i",
"e"-"i",
"g"-"i",
"b"-"i",
"d"-"i",
"f"-"i",
"h"-"i",
}
\qquad \qquad
Y_1 \ \ = \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}*+{\bullet}="a",
!{(1,1)}="b" ,
!{(0,1)}*+{\bullet}="c" ,
!{(-1,1)}="d" ,
!{(-1,0)}*+{\bullet}="e",
!{(-1,-1)}="f",
!{(0,-1)}*+{\bullet}="g" ,
!{(1,-1)}="h" ,
!{(0,0)}*+{\bullet}="i" ([]!{+(0.6,-0.6)} {\Gamma}),
!{(0.3,-0.3)}-@{->}!{(0.1,-0.1)},
"a"-@{.}"b",
"b"-@{.}"c",
"c"-@{.}"d",
"d"-@{.}"e",
"e"-@{.}"f",
"f"-@{.}"g",
"g"-@{.}"h",
"h"-@{.}"a",
"a"-"i",
"c"-"i",
"e"-"i",
"g"-"i",
} \qquad \qquad
Z \ \ = \ \ \xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,1)}*+{\bullet}="a",
!{(-1,1)}*+{\bullet}="b" ,
!{(-1,-1)}*+{\bullet}="c",
!{(1,-1)}*+{\bullet}="d" ,
!{(0,0)}*+{\bullet}="e",
"a"-@{.}"b",
"b"-@{.}"c",
"c"-@{.}"d",
"d"-@{.}"a",
"a"-"e",
"b"-"e",
"c"-"e",
"d"-"e",
}$$
In the computation below,
we focus on the fundamental region of the BZ torus
bounded by the $\Gamma$, $M$ and $X$ points.
We mark points on the edges of the fundamental region as $I_1, I_2$ and $I_3$,
as in the following figure:
$$
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}="a"([]!{+(0.2,-0.2)} {X}),
!{(1,1)}="b"([]!{+(0.3,0.3)} {M}),
!{(0,1)}="c" ,
!{(-1,1)}="d" ,
!{(-1,0)}="e",
!{(-1,-1)}="f",
!{(0,-1)}="g" ,
!{(1,-1)}="h" ,
!{(0,0)}="i"([]!{+(-0.2,-0.2)} {\Gamma}),
!{(0.5,0)}*+{\bullet}([]!{+(0,-0.3)} {I_1}),
!{(0.5,0.5)}*+{\bullet}([]!{+(-0.3,0.3)} {I_2}),
!{(1,0.5)}*+{\bullet}([]!{+(0.3,0)} {I_3}),
!{(1,0)}*+{\bullet}([]!{+(0.2,-0.2)} {X}),
!{(1,1)}*+{\bullet}([]!{+(0.3,0.3)} {M}),
!{(0,0)}*+{\bullet}([]!{+(-0.2,-0.2)} {\Gamma}),
"a"-"b",
"a"-"i",
"b"-"i",
"b"-@{.}"d",
"d"-@{.}"f",
"f"-@{.}"h",
"h"-@{.}"b",
"a"-@{.}"e",
"c"-@{.}"g",
"b"-@{.}"f",
"d"-@{.}"h",
}
$$
First, we compute the $K$-group of $Y_1$
by use of the Mayer-Vietoris sequence.
$Y_1$ is divided into two parts $Y_1 = U \cup V$,
where $U$ and $V$ have the following $D_4$-equivariant homotopy equivalences
\begin{equation}
U \sim \{\Gamma \} = (D_4/D_4) \times pt , \qquad
V \sim \{ X, c_4 \cdot X \} \sim (D_4/D_2^{(v)}) \times pt ,
\end{equation}
where $D_2^{(v)} = \{1 , \sigma , \sigma c_2, c_2\}$ is a subgroup of $D_4$.
The intersection has the $D_4$-equivariant homotopy equivalence
\begin{equation}
U \cap V
\sim \{I_1, c_4 I_1, c_2 I_1, c_4^3 I_1 \}
\sim (D_4/\mathbb{Z}_2^{(y)}) \times pt = \big\{ \{ 1, \sigma \}, \{c_2, \sigma c_2\}, \{c_4, \sigma c_4^3\}, \{c_4^3, \sigma c_4 \} \big\}.
\end{equation}
Here we chose $\mathbb{Z}_2^{(y)} = \{1,\sigma\}$ as a $\mathbb{Z}_2$ subgroup.
With this choice, the intersection $U \cap V$ is labeled by the $D_4$-space $(D_4/\mathbb{Z}_2^{(y)}) \times pt$ as follows:
$$
U \cap V \quad \sim \quad
\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(1,0)}*+{\circ}="a",
!{(1,1)}="b" ,
!{(0,1)}*+{\circ}="c" ,
!{(-1,1)}="d" ,
!{(-1,0)}*+{\circ}="e",
!{(-1,-1)}="f",
!{(0,-1)}*+{\circ}="g" ,
!{(1,-1)}="h" ,
!{(0,0)}*+{\circ}="i",
"a"-@{.}"b",
"b"-@{.}"c",
"c"-@{.}"d",
"d"-@{.}"e",
"e"-@{.}"f",
"f"-@{.}"g",
"g"-@{.}"h",
"h"-@{.}"a",
"a"-"i",
"c"-"i",
"e"-"i",
"g"-"i",
!{(0.5,-0.15)}*+{\{1,\sigma\}},
!{(-0.6,-0.15)}*+{\ \{c_2, \sigma c_2\}},
!{(0.05,0.7)}*+{\{c_4,\sigma c_4^3\}},
!{(0.03,-0.7)}*+{\{c_4^3,\sigma c_4\}},
}
$$
The $D_4$ group naturally acts on the set $D_4/\mathbb{Z}_2^{(y)}$.
An alternative choice is $\mathbb{Z}_2^{(x)} = \{1, \sigma c_2\}$.
The final expression for the $K$-group $K_{D_4}^{\tau_{\sf p4g}+ \omega + n}(Y_1)$
does not depend on the choice between $\mathbb{Z}_2^{(x)}$ and $\mathbb{Z}_2^{(y)}$.
The six-term Mayer-Vietoris sequence associated with the decomposition
$Y_1 = U \cup V$ is given by
\begin{equation}\begin{split}
\begin{CD}
0 @<<< 0 @<<< K_{D_4}^{\tau_{\sf p4g}+\omega+1}(Y_1) \\
@VVV @. @AAA \\
K_{D_4}^{\tau_{\sf p4g}+\omega+0}(Y_1) @>>> \underbrace{R^{\omega}(D_4)}_{\Gamma} \oplus \underbrace{R^{\tau_{\sf p4g}|_X+\omega}(D_2^{(v)})}_{X} @>\Delta_0>> \underbrace{R^{\tau_{\sf p4g}|_{I_1} + \omega}(\mathbb{Z}_2^{(y)})}_{I_1}. \\
\end{CD}
\end{split}\end{equation}
The homomorphism $\Delta_0$ of $R(D_4)$-modules is given by
\begin{equation}
\Delta_0 : (\rho, g(t_{\sigma c_2},t_{\sigma})) \mapsto
\rho|_{\mathbb{Z}_2^{(y)}} \cdot (1+t_{\sigma}) - g(1,t_{\sigma}).
\end{equation}
$\mathrm{Ker} (\Delta_0)$ is spanned by the following basis
\begin{equation}
\{ \underbrace{(W,t_{\sigma c_2}+t_{\sigma}), (BW,t_{\sigma c_2}+t_{\sigma})}_{(1+A+E)}, \underbrace{(0,1-t_{\sigma c_2}), (0,t_{\sigma}-t_{\sigma c_2} t_{\sigma})}_{(1+B-E)} \}.
\end{equation}
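For instance, the last two elements lie in the kernel because their second slots vanish under the substitution $t_{\sigma c_2} \to 1$: $\Delta_0(0, 1-t_{\sigma c_2}) = -(1-1) = 0$ and $\Delta_0(0, t_{\sigma}-t_{\sigma c_2} t_{\sigma}) = -(t_{\sigma}-t_{\sigma}) = 0$.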
We have
\begin{equation}
K_{D_4}^{\tau_{\sf p4g}+\omega+0}(Y_1) \cong \mathrm{Ker}(\Delta_0) \cong \overbrace{(1+A+E)}^{\mathbb{Z}^2} \oplus \overbrace{(1+B-E)}^{\mathbb{Z}^2}, \qquad
K_{D_4}^{\tau_{\sf p4g}+\omega+1}(Y_1) = 0,
\end{equation}
where $(1+A+E)$ and $(1+B-E)$ are $R(D_4)$-ideals defined by
\begin{align}
&(1+A+E) = \{(1+A+E) f(A,B,E) | f(A,B,E) \in R(D_4) \}, \\
&(1+B-E) = \{(1+B-E) f(A,B,E) | f(A,B,E) \in R(D_4) \}.
\end{align}
Next, we compute the $K$-group of the subspace $X_1$,
which we decompose into $X_1 = U \cup V$ as follows:
$$
X_1 = U \cup V \ \ = \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}*+{\bullet}="a",
!{(1,1)}="b" ,
!{(0,1)}*+{\bullet}="c" ,
!{(-1,1)}="d" ,
!{(-1,0)}*+{\bullet}="e",
!{(-1,-1)}="f",
!{(0,-1)}*+{\bullet}="g" ,
!{(1,-1)}="h" ,
!{(0,0)}*+{\bullet}="i" ,
"b"-@{.}"d",
"d"-@{.}"f",
"f"-@{.}"h",
"h"-@{.}"b",
"a"-"i",
"c"-"i",
"e"-"i",
"g"-"i",
"i"-!{(0.5,0.5)},
"i"-!{(-0.5,0.5)},
"i"-!{(-0.5,-0.5)},
"i"-!{(0.5,-0.5)},
"e"-!{(-1,0.6)},
"e"-!{(-1,-0.6)},
"g"-!{(-0.6,-1)},
"g"-!{(0.6,-1)},
}\ \
\cup \ \
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}="a",
!{(1,1)}*+{\bullet}="b" ,
!{(0,1)}="c" ,
!{(-1,1)}*+{\bullet}="d" ,
!{(-1,0)}="e",
!{(-1,-1)}*+{\bullet}="f",
!{(0,-1)}="g" ,
!{(1,-1)}*+{\bullet}="h" ,
!{(0,0)}="i" ,
"h"-@{.}"b",
"b"-@{.}"d",
"d"-@{.}"f",
"f"-@{.}"h",
"b"-!{(0.5,0.5)},
"d"-!{(-0.5,0.5)},
"d"-!{(-1,0.4)},
"f"-!{(-0.5,-0.5)},
"f"-!{(-1,-0.4)},
"f"-!{(-0.4,-1)},
"h"-!{(0.5,-0.5)},
"h"-!{(0.4,-1)},
}$$
$U$ and $V$ are $D_4$-equivariantly homotopy equivalent to $Y_1$ and the point $(\pi,\pi) \sim D_4/D_4$, respectively.
The intersection $U \cap V$ is $D_4$-equivariantly homotopy equivalent to the disjoint union of two $D_4$-spaces:
\begin{equation}
U \cap V \sim (D_4/\mathbb{Z}_2^{(d)}) \sqcup (D_4/\mathbb{Z}_2^{(x)})
\quad \sim \quad
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}="a",
!{(1,1)}*+{\circ}="b" ,
!{(0,1)}="c" ,
!{(-1,1)}*+{\circ}="d" ,
!{(-1,0)}="e",
!{(-1,-1)}*+{\circ}="f",
!{(0,-1)}="g" ,
!{(1,-1)}*+{\circ}="h" ,
!{(0,0)}*+{\circ}="i" ,
"h"-@{.}"b",
"b"-@{.}"d",
"d"-@{.}"f",
"f"-@{.}"h",
"i"-"b",
"i"-"d",
"i"-"f",
"i"-"h",
}
\quad \sqcup \quad
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(1,0)}="a",
!{(1,1)}*+{\circ}="b" ,
!{(0,1)}="c" ,
!{(-1,1)}*+{\circ}="d" ,
!{(-1,0)}*+{\circ}="e",
!{(-1,-1)}*+{\circ}="f",
!{(0,-1)}*+{\circ}="g" ,
!{(1,-1)}*+{\circ}="h" ,
!{(0,0)}="i" ,
"h"-@{.}"b",
"b"-@{.}"d",
"d"-@{.}"f",
"f"-@{.}"h",
"d"-"e",
"e"-"f",
"f"-"g",
"g"-"h",
}
\end{equation}
The Mayer-Vietoris sequence of $X_1 = U \cup V$ is given by
\begin{equation}\begin{split}
\begin{CD}
0 @<<< 0 @<<< K_{D_4}^{\tau_{\sf p4g}+\omega+1}(X_1) \\
@VVV @. @AAA \\
K_{D_4}^{\tau_{\sf p4g}+\omega+0}(X_1) @>>> K^{\tau_{\sf p4g}+\omega+0}_{D_4}(Y_1) \oplus \underbrace{R^{\omega}(D_4)}_{M} @>\Delta_0>> \underbrace{R(\mathbb{Z}_2^{(d)})}_{I_2} \oplus \underbrace{R(\mathbb{Z}_2^{(x)})}_{I_3}. \\
\end{CD}
\end{split}\end{equation}
From Table~\ref{Tab:RestrictionD_4}, the restrictions of elements in the $K$-group $K^{\tau_{\sf p4g}+\omega+0}_{D_4}(Y_1)$ and $R^{\omega}(D_4)$ to the
intersection are given by
\begin{equation}
j_U^* : K^{\tau_{\sf p4g}+\omega+0}_{D_4}(Y_1) \to R(\mathbb{Z}_2^{(d)}) \oplus R(\mathbb{Z}_2^{(x)}), \qquad
\left\{ \begin{array}{l}
(W, t_{\sigma c_2}+t_{\sigma}) \mapsto (1+t_{\sigma c_4^3},1+t_{\sigma c_2}), \\
(BW,t_{\sigma c_2}+t_{\sigma}) \mapsto (1+t_{\sigma c_4^3},1+t_{\sigma c_2}), \\
(0,1-t_{\sigma c_2}) \mapsto (0,1-t_{\sigma c_2}), \\
(0,t_{\sigma}-t_{\sigma c_2} t_{\sigma}) \mapsto (0,1-t_{\sigma c_2}), \\
\end{array} \right. \ \
\end{equation}
\begin{equation}
j_V^* : R^{\omega}(D_4) \to R(\mathbb{Z}_2^{(d)}) \oplus R(\mathbb{Z}_2^{(x)}), \qquad
\left\{ \begin{array}{l}
W \mapsto (1+t_{\sigma c_4^3},1+t_{\sigma c_2}), \\
BW \mapsto (1+t_{\sigma c_4^3},1+t_{\sigma c_2}). \\
\end{array} \right.
\end{equation}
Then, the kernel of $\Delta_0 = j_U^* - j_V^*$ is spanned by the following basis,
written in terms of representations at the symmetric points $(\Gamma,X,M)$:
\begin{multline}
\{ \underbrace{(W,t_{\sigma c_2}+t_{\sigma},W), (BW,t_{\sigma c_2}+t_{\sigma},BW)}_{(1+A+E)},
\underbrace{(0,1-t_{\sigma c_2}-t_{\sigma}+t_{\sigma c_2} t_{\sigma},0)}_{(1+A+B+AB-2E)},
\underbrace{(0,0,W-BW)}_{(1+A-B-AB)} \} \\
\subset
R^{\omega}(D_4) \oplus R(D_2^{(v)}) \oplus R^{\omega}(D_4).
\label{eq:local_data_X_1}
\end{multline}
${\rm Im}(\Delta_0)$ is spanned by
\begin{equation}
\{(1+t_{\sigma c_4^3},1+t_{\sigma c_2}), (0,1-t_{\sigma c_2})\} \subset R(\mathbb{Z}_2^{(d)}) \oplus R(\mathbb{Z}_2^{(x)}).
\end{equation}
Notice that a basis of $R(\mathbb{Z}_2^{(d)}) \oplus R(\mathbb{Z}_2^{(x)})$ can be chosen as
\begin{equation}
\{(1,0),(0,1),(1+t_{\sigma c_4^3},1+t_{\sigma c_2}),(0,1-t_{\sigma c_2})\}.
\end{equation}
Hence, $\mathrm{Coker}(\Delta_0)$ is generated by
two elements $\{[1,1],[0,1]\}$.
The $R(D_4)$-actions on these generators,
\begin{align}
\left\{ \begin{array}{l}
A\cdot [1,1] = [(t_{\sigma c_4^3},t_{\sigma c_2})(1,1)] = [t_{\sigma c_4^3},t_{\sigma c_2}] = -[1,1], \\
B\cdot [1,1] = [(t_{\sigma c_4^3},1)(1,1)] = [t_{\sigma c_4^3},1] = -[1,1], \\
E\cdot [1,1] = [(1+t_{\sigma c_4^3},1+t_{\sigma c_2})(1,1)] = [1+t_{\sigma c_4^3},1+t_{\sigma c_2}] = 0, \\
\end{array}\right. \\
\left\{ \begin{array}{l}
A\cdot [0,1] = [(t_{\sigma c_4^3},t_{\sigma c_2})(0,1)] = [0,t_{\sigma c_2}] = [0,1], \\
B\cdot [0,1] = [(t_{\sigma c_4^3},1)(0,1)] = [0,1], \\
E\cdot [0,1] = [(1+t_{\sigma c_4^3},1+t_{\sigma c_2})(0,1)] = [0,1+t_{\sigma c_2}]= 2 [0,1], \\
\end{array}\right.
\end{align}
imply the $R(D_4)$-module structures
$\mathbb{Z}[1,1] \cong (1-A-B+AB)$ and
$\mathbb{Z}[0,1] \cong (1+A+B+AB+2E)$.
We consequently get the $K$-group of $X_1$ as follows
\begin{align}
&K^{\tau_{\sf p4g}+\omega+0}_{D_4}(X_1) \cong
\overbrace{(1+A+E)}^{\mathbb{Z}^2} \oplus
\overbrace{(1+A+B+AB-2E)}^{\mathbb{Z}} \oplus
\overbrace{(1+A-B-AB)}^{\mathbb{Z}}, \\
&K^{\tau_{\sf p4g}+\omega+1}_{D_4}(X_1) \cong
\overbrace{(1-A-B+AB)}^{\mathbb{Z}} \oplus
\overbrace{(1+A+B+AB+2E)}^{\mathbb{Z}}.
\end{align}
\subsubsection{$K$-group of $Y_1 \vee Z$}
In the same way,
the $K$-group of the subspace $Z$ is given by the
Mayer-Vietoris sequence
\begin{equation}\begin{split}
\begin{CD}
0 @<<< 0 @<<< K_{D_4}^{\tau_{\sf p4g}+\omega+1}(Z) \\
@VVV @. @AAA \\
K_{D_4}^{\tau_{\sf p4g}+\omega+0}(Z) @>>> \underbrace{R^{\omega}(D_4)}_{\Gamma} \oplus \underbrace{R^{\tau_{\sf p4g}|_M + \omega}(D_4)}_{M} @>\Delta_0>> \underbrace{R^{\tau_{\sf p4g}|_{I_2}+\omega}(\mathbb{Z}_2^{(d)})}_{I_2}. \\
\end{CD}
\end{split}\end{equation}
Here, $\Delta_0$ is given by
\begin{align}
\Delta_0 : (f(B),g(B)) \mapsto (f(1)-g(1)) (1+t_{\sigma c_4^3}).
\end{align}
The kernel of $\Delta_0$ is spanned by
\begin{align}
&{\rm Ker}(\Delta_0) : \qquad \{ \underbrace{(W,W), (BW,BW)}_{(1+A+E)}, \underbrace{(0,W-BW)}_{(1+A-B-AB)} \} \subset R^{\omega}(D_4) \oplus R^{\omega}(D_4).
\end{align}
The generator of the cokernel of $\Delta_0$ is
represented by $[1] \in R(\mathbb{Z}_2^{(d)})$, and the $R(D_4)$-module structure
is summarized as $A \cdot [1] = -[1]$, $B \cdot [1] = - [1]$, and $E \cdot [1] = 0$,
which implies ${\rm Coker}(\Delta_0) \cong (1-A-B+AB)$.
We have
\begin{align}
&K^{\tau_{\sf p4g}+\omega+0}_{D_4}(Z) \cong
\overbrace{(1+A+E)}^{\mathbb{Z}^2} \oplus
\overbrace{(1+A-B-AB)}^{\mathbb{Z}}, \\
&K^{\tau_{\sf p4g}+\omega+1}_{D_4}(Z) \cong
\overbrace{(1-A-B+AB)}^{\mathbb{Z}}.
\end{align}
Gluing the $K$-groups of $Y_1$ and $Z$
at the single fixed point $\Gamma = (0,0)$ of the $D_4$ action,
we have the $K$-group of $Y_1 \vee Z$,
where $Y_1 \vee Z$ is defined as the disjoint union $Y_1 \sqcup Z$ with the $\Gamma$ point of $Y_1$ and that of $Z$ identified,
\begin{align}
&K^{\tau_{\sf p4g}+\omega+0}_{D_4}(Y_1 \vee Z) \cong
\overbrace{(1+A+E)}^{\mathbb{Z}^2} \oplus
\overbrace{(1+A-B-AB)}^{\mathbb{Z}} \oplus
\overbrace{(1+B-E)}^{\mathbb{Z}^2}, \\
&K^{\tau_{\sf p4g}+\omega+1}_{D_4}(Y_1 \vee Z) \cong
\overbrace{(1-A-B+AB)}^{\mathbb{Z}}.
\end{align}
\subsubsection{$K$-group over $T^2$}
Next, we ``extend'' the wave function over the subspaces $X_1$ and $Y_1 \vee Z$ to that over the BZ torus $T^2$.
In other words, we assume that a finite energy gap
persists over the whole BZ torus $T^2$, which gives rise to
a kind of global consistency condition on the wave functions with
p4g symmetry.
Mathematically, this global constraint can be expressed by the
exact sequence for the pair $(T^2,X_1)$
\begin{equation}\begin{split}
\begin{CD}
K^{\tau_{\sf p4g}+\omega+1}_{D_4}(X_1) @<<< K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2) @<<< K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2,X_1) \\
@VVV @. @AAA \\
K^{\tau_{\sf p4g}+\omega+0}_{D_4}(T^2,X_1) @>>> K^{\tau_{\sf p4g}+\omega+0}_{D_4}(T^2) @>>> K^{\tau_{\sf p4g}+\omega+0}_{D_4}(X_1), \\
\end{CD}
\end{split}\end{equation}
which is the exact sequence of $R(D_4)$-modules
\begin{equation}\begin{split}
\begin{CD}
(1-A-B+AB) \oplus (1+A+B+AB+2E) @<<< K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2) @<<< 0 \\
@VV\delta V @. @AAA \\
(1+A+B+AB+2E) @>>> K^{\tau_{\sf p4g}+\omega+0}_{D_4}(T^2) @>>> K^{\tau_{\sf p4g}+\omega+0}_{D_4}(X_1). \\
\end{CD}
\end{split}\end{equation}
Here, we used
\begin{equation}
K^{\tau_{\sf p4g}+\omega+n}_{D_4}(T^2,X_1) \cong \widetilde K^n_{D_4}( D_4 \times e^2) \cong \widetilde K^n(S^2)
\cong \left\{ \begin{array}{ll}
(1+A+B+AB+2E) & (n=0), \\
0 & (n=1). \\
\end{array}\right.
\end{equation}
Any $R(D_4)$-homomorphism $f : (1-A-B+AB) \to (1+A+B+AB+2E)$ is trivial, because $f(1) = A \cdot f(1) = f( A \cdot 1) = f(-1) = - f(1) = 0$. Therefore $\delta$ is either: (i) trivial; (ii) non-trivial and surjective; or (iii) non-trivial and non-surjective. To determine which is the case, we employ the exact sequence for the pair $(T^2, Y_1 \vee Z)$:
\begin{align}
\begin{CD}
K^{\tau_{\sf p4g}+\omega+1}_{D_4}(Y_1 \vee Z) @<<< K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2) @<<< K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2,Y_1 \vee Z) @= \underbrace{(1 - A + B - AB)}_{\mathbb{Z}} \\
@VVV @. @AAA \\
0 @>>> K^{\tau_{\sf p4g}+\omega+0}_{D_4}(T^2) @>>> K^{\tau_{\sf p4g}+\omega+0}_{D_4}(Y_1 \vee Z) @= \mathbb{Z}^5 \\
\end{CD}
\label{eq:sequence_Y_1+Z}
\end{align}
Here, we used the excision axiom and the Thom isomorphism to get
\begin{equation}
K^{\tau_{\sf p4g}+\omega+n}_{D_4}(T^2,Y_1 \vee Z) \cong K^n_{\mathbb{Z}_2^{(v)}}(e^2, \partial e^2) \cong \widetilde{K}^n_{\mathbb{Z}_2^{(v)}}(S^2)
=
\left\{ \begin{array}{ll}
0 & (n=0), \\
(1 - A + B - AB) & (n=1), \\
\end{array}\right.
\label{eq:D_4_spinful_T2_Y1_Z}
\end{equation}
where the $\mathbb{Z}_2^{(v)}$-action on the sphere
is the reflection $S^2 \ni (n_0, n_1, n_2) \mapsto (n_0, n_1, -n_2)$.
In the exact sequence (\ref{eq:sequence_Y_1+Z}), the Abelian group
$K^{\tau_{\sf p4g}+\omega+0}_{D_4}(Y_1 \vee Z)$ is free. Hence $K^{\tau_{\sf p4g}+\omega+0}_{D_4}(T^2)$ must be torsion free, and case (iii) is excluded. Now, let us assume that (i) is the case. Then, the exact sequence for the pair $(T^2, X_1)$ implies $K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2) \cong K_{D_4}^{\tau_{\sf p4g}+\omega+1}(X_1)$. Substituting this into the exact sequence (\ref{eq:sequence_Y_1+Z}) for $(T^2, Y_1 \vee Z)$, we find that $K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2, Y_1 \vee Z)$ surjects onto $(1 + A + B + AB + 2E)$, because any $R(D_4)$-homomorphism $(1 + A + B + AB + 2E) \to (1 - A - B + AB)$ is trivial. However, this is impossible in view of (\ref{eq:D_4_spinful_T2_Y1_Z}). As a result, we conclude that (ii) is the case, and we eventually reach the conclusion
\begin{align}
&K^{\tau_{\sf p4g}+\omega+0}_{D_4}(T^2)
\cong K^{\tau_{\sf p4g}+\omega+0}_{D_4}(X_1)
\cong \overbrace{(1+A+E)}^{\mathbb{Z}^2} \oplus
\overbrace{(1+A+B+AB-2E)}^{\mathbb{Z}} \oplus
\overbrace{(1+A-B-AB)}^{\mathbb{Z}},
\label{eq:K_p4g_0_T2} \\
&K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2)
\cong K^{\tau_{\sf p4g}+\omega+1}_{D_4}(Z)
\cong \overbrace{(1-A-B+AB)}^{\mathbb{Z}}.
\label{eq:K_p4g_1_T2}
\end{align}
\subsubsection{Models of $K$-group $K^{\tau_{\sf p4g}+\omega+0}_{D_4}(T^2)$}
In this subsection,
we will reconstruct the $R(D_4)$-module structure (\ref{eq:K_p4g_0_T2})
from models with small filling number.
The minimum number of points in a Wyckoff orbit inside a unit cell is two,
which is realized by the two Wyckoff positions labeled by (a) and (b):
\begin{align}
& ({\rm a}): \left\{ \begin{array}{l}
\bm{x}_A = (-1/4,1/4) \\
\bm{x}_B=(1/4,-1/4)
\end{array}\right. && \mapsto &&
\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(-0.3,0.3)}*+{\textcircled{\scriptsize A}}="A" ,
!{(0.3,-0.3)}*+{\textcircled{\scriptsize B}}="B" ,
!{(-0.6,0.6)}-!{(0.6,0.6)},
!{(-0.6,-0.6)}-!{(0.6,-0.6)},
!{(0.6,0.6)}-!{(0.6,-0.6)},
!{(-0.6,0.6)}-!{(-0.6,-0.6)},
!{(-0.8,0)}-@{.}!{(0.8,0)},
!{(1,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,1.0)},
} \\
& ({\rm b}): \left\{ \begin{array}{l}
\bm{x}_A = (-1/4,-1/4) \\
\bm{x}_B=(1/4,1/4)
\end{array}\right. && \mapsto &&
\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(-0.3,-0.3)}*+{\textcircled{\scriptsize A}}="A" ,
!{(0.3,0.3)}*+{\textcircled{\scriptsize B}}="B" ,
!{(-0.6,0.6)}-!{(0.6,0.6)},
!{(-0.6,-0.6)}-!{(0.6,-0.6)},
!{(0.6,0.6)}-!{(0.6,-0.6)},
!{(-0.6,0.6)}-!{(-0.6,-0.6)},
!{(-0.8,0)}-@{.}!{(0.8,0)},
!{(1,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,1.0)},
}
\end{align}
In the figures on the right, the solid lines represent the unit cells,
and $\bm{x}_{A}$ and $\bm{x}_B$ are the positions of the localized orbitals measured from the center of the unit cell.
In the Wyckoff position (a), each of $A$ and $B$ is invariant under the subgroup $C_4 = \{1, c_4, c_2, c_4^3\}$ modulo lattice translations;
hence, local orbitals at $A$ and $B$ obey a representation of $C_4$,
which implies that the minimum filling number of atomic insulators obtained by putting
degrees of freedom at the Wyckoff position (a) is two.
On the other hand, in the Wyckoff position (b), each of the $A$ and $B$ positions is
invariant under the subgroup $D^{(d)}_2 = \{1, c_2, \sigma c_4, \sigma c_4^3\}$ modulo lattice translations;
thus, the local orbitals at $A$ and $B$ obey a nontrivial projective representation of $D_2^{(d)}$
if the spin is half-integer.
This means that the minimum filling number for the Wyckoff position (b) is four.
The generating models are given as follows.
First, we consider the Wyckoff position (a).
Put an $s$-orbital with the spin-up (spin-down) polarized state of a spin-$1/2$ degree of freedom at A (B).
The $D_4$ group acts on these local states by
$U_{c_4} \ket{s,\uparrow/\downarrow} = e^{\mp \pi i/4} \ket{s,\uparrow/\downarrow}$ and
$U_{\sigma} \ket{s,\uparrow/\downarrow} = \pm \ket{s,\downarrow/\uparrow}$.
By taking the space group transformation into account,
the corresponding $D_4$-equivariant vector bundle $E_1$ is given by
\begin{align}
&\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(-0.3,0.3)}*+{\ket{s,\uparrow}},
!{(0.3,-0.3)}*+{\ket{s,\downarrow}},
!{(-0.6,0.6)}-@{.}!{(0.6,0.6)},
!{(-0.6,-0.6)}-@{.}!{(0.6,-0.6)},
!{(0.6,0.6)}-@{.}!{(0.6,-0.6)},
!{(-0.6,0.6)}-@{.}!{(-0.6,-0.6)},
!{(-0.8,0)}-@{.}!{(0.8,0)},
!{(1,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,1.0)},
} \quad = \quad
\left(
E_1= T^2 \times \mathbb{C}^2, \ \
U_{c_4}(\bm{k}) = \begin{pmatrix}
\zeta & 0 \\
0 & \zeta^{-1} e^{-i k_x} \\
\end{pmatrix}, \ \
U_{\sigma}(\bm{k}) = \begin{pmatrix}
0 & -e^{-i k_x} \\
1 & 0 \\
\end{pmatrix}
\right),
\label{eq:p4g_E1}
\end{align}
where the matrices act on the A and B sublattice space.
The orbital part can be replaced by the other
1-dimensional representations $d_{xy}, p_{x+iy}, p_{x-iy}$ of $C_4$,
and the spin part can be exchanged.
In addition to the atomic ground state $E_1$,
we have the following three independent atomic ground states:
\begin{align}
&\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(-0.3,0.3)}*+{\ket{s,\downarrow}},
!{(0.3,-0.3)}*+{\ket{s,\uparrow}},
!{(-0.6,0.6)}-@{.}!{(0.6,0.6)},
!{(-0.6,-0.6)}-@{.}!{(0.6,-0.6)},
!{(0.6,0.6)}-@{.}!{(0.6,-0.6)},
!{(-0.6,0.6)}-@{.}!{(-0.6,-0.6)},
!{(-0.8,0)}-@{.}!{(0.8,0)},
!{(1,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,1.0)},
} \quad = \quad
\left(
E_2= T^2 \times \mathbb{C}^2, \ \
U_{c_4}(\bm{k}) = \begin{pmatrix}
\zeta^{-1} & 0 \\
0 & \zeta e^{-i k_x} \\
\end{pmatrix}, \ \
U_{\sigma}(\bm{k}) = \begin{pmatrix}
0 & e^{-i k_x} \\
-1 & 0 \\
\end{pmatrix}
\right),
\end{align}
\begin{align}
&\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(-0.3,0.3)}*+{\ket{d_{xy},\uparrow}},
!{(0.3,-0.3)}*+{\ket{d_{xy},\downarrow}},
!{(-0.6,0.6)}-@{.}!{(0.6,0.6)},
!{(-0.6,-0.6)}-@{.}!{(0.6,-0.6)},
!{(0.6,0.6)}-@{.}!{(0.6,-0.6)},
!{(-0.6,0.6)}-@{.}!{(-0.6,-0.6)},
!{(-0.8,0)}-@{.}!{(0.8,0)},
!{(1,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,1.0)},
} \quad = \quad
\left(
AB \cdot E_1 = T^2 \times \mathbb{C}^2, \ \
U_{c_4}(\bm{k}) = \begin{pmatrix}
-\zeta & 0 \\
0 & - \zeta^{-1} e^{-i k_x} \\
\end{pmatrix}, \ \
U_{\sigma}(\bm{k}) = \begin{pmatrix}
0 & e^{-i k_x} \\
-1 & 0 \\
\end{pmatrix}
\right), \\
&\xygraph{
!{<0cm,0cm>;<1.5cm,0cm>:<0cm,1.5cm>::}
!{(-0.3,0.3)}*+{\ket{d_{xy},\downarrow}},
!{(0.3,-0.3)}*+{\ket{d_{xy},\uparrow}},
!{(-0.6,0.6)}-@{.}!{(0.6,0.6)},
!{(-0.6,-0.6)}-@{.}!{(0.6,-0.6)},
!{(0.6,0.6)}-@{.}!{(0.6,-0.6)},
!{(-0.6,0.6)}-@{.}!{(-0.6,-0.6)},
!{(-0.8,0)}-@{.}!{(0.8,0)},
!{(1,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,1.0)},
} \quad = \quad
\left(
AB \cdot E_2 = T^2 \times \mathbb{C}^2, \ \
U_{c_4}(\bm{k}) = \begin{pmatrix}
- \zeta^{-1} & 0 \\
0 & - \zeta e^{-i k_x} \\
\end{pmatrix}, \ \
U_{\sigma}(\bm{k}) = \begin{pmatrix}
0 & -e^{-i k_x} \\
1 & 0 \\
\end{pmatrix}
\right).
\end{align}
From the table of projective characters (\ref{tab:proj_character_d4}),
one can read off the representations at the symmetric points of the above
atomic ground states, which are summarized in the following table:
\begin{align}
\begin{tabular}{c|ccc}
$E$ & $E|_{\Gamma}$ & $E|_X$ & $E|_M$ \\
\hline
$E_1$ & $W$ & $1+t_{\sigma c_2} t_{\sigma}$ & $W$ \\
$AB \cdot E_1$ & $BW$ & $t_{\sigma c_2} + t_{\sigma} $ & $BW$ \\
$E_2$ & $W$ & $t_{\sigma c_2} + t_{\sigma} $ & $BW$ \\
$AB \cdot E_2$ & $BW$ & $1+t_{\sigma c_2} t_{\sigma} $ & $W$ \\
\end{tabular}
\label{tab:p4g_atomic_(a)}
\end{align}
Comparing these data with the $K$-group (\ref{eq:K_p4g_0_T2}),
one can recognize that the above table (\ref{tab:p4g_atomic_(a)})
lacks a generator with the data
$(W,t_{\sigma c_2} + t_{\sigma},W)$,
$(W,1+t_{\sigma c_2} t_{\sigma},BW)$,
$(BW,1+t_{\sigma c_2} t_{\sigma},BW)$, or
$(BW,t_{\sigma c_2}+t_{\sigma},W)$.
As a formal difference of two vector bundles,
this deficit can be filled with
the atomic ground state obtained from the
Wyckoff position (b).
Let $E_3$ be the atomic ground state
defined by putting an $s$-orbital with spin 1/2 degrees of freedom at the two
positions $A$ and $B$ of the Wyckoff label (b):
\begin{align}
&\xygraph{
!{<0cm,0cm>;<2cm,0cm>:<0cm,2cm>::}
!{(-0.3,-0.3)}*+{\ket{s}\otimes \mathbb{C}^2_{\rm spin}} ,
!{(0.4,0.3)}*+{\ket{s} \otimes \mathbb{C}^2_{\rm spin}},
!{(-0.6,0.6)}-@{.}!{(0.6,0.6)},
!{(-0.6,-0.6)}-@{.}!{(0.6,-0.6)},
!{(0.6,0.6)}-@{.}!{(0.6,-0.6)},
!{(-0.6,0.6)}-@{.}!{(-0.6,-0.6)},
!{(-0.8,0)}-@{.}!{(0.8,0)},
!{(1,0)},
!{(0,-0.8)}-@{.}!{(0,0.8)},
!{(0,1.0)}
} \quad = \quad
\left(
E_3= T^2 \times \mathbb{C}^4, \ \
U_{c_4}(\bm{k}) = \begin{pmatrix}
0 & e^{- \frac{\pi}{4} i \sigma_z} e^{-i k_x} \\
e^{-\frac{\pi}{4} i \sigma_z} & 0 \\
\end{pmatrix}, \ \
U_{\sigma}(\bm{k}) = \begin{pmatrix}
0 & -i \sigma_y e^{-i k_x} \\
-i \sigma_y & 0 \\
\end{pmatrix}
\right).
\end{align}
All the projective characters of $E_3$ at symmetric points are zero,
which leads to the following data of projective representations of $E_3$:
\begin{align}
\begin{tabular}{c|ccc}
$E$ & $E|_{\Gamma}$ & $E|_X$ & $E|_M$ \\
\hline
$E_3$ & $W+BW$ & $1+t_{\sigma c_2} + t_{\sigma} + t_{\sigma c_2} t_{\sigma}$ & $W+BW$ \\
\end{tabular}
\label{tab:p4g_atomic_(b)}
\end{align}
Then, the formal difference $[E_3] - [AB \cdot E_1]$
provides the remaining generator of the $K$-group (\ref{eq:K_p4g_0_T2}).
Interestingly, the vector bundle with the data $(BW,1+t_{\sigma c_2}t_{\sigma},BW)$
can be realized as a band insulator.
Let us consider a Hamiltonian $\hat H$ on the atomic insulator $E_3$,
\begin{equation}
\hat H
:= \psi^{\dag}_{B}(\bm{R}) t_1 \psi_A(\bm{R}) + \psi^{\dag}_{A}(\bm{R}+\hat x) t_2 \psi_A(\bm{R}) + h.c.
+ ({\rm space\ group\ symmetrization}),
\label{eq:p4g_model_on_Wyckoff_b}
\end{equation}
where $t_1$ and $t_2$ are nearest-neighbor and next-nearest-neighbor hopping terms, respectively.
The space group transformations are defined by
\begin{align}
&\hat U_{c_4} \psi^{\dag}_{A}(\bm{R}) \hat U_{c_4}^{-1} = \psi^{\dag}_{B}(c_4 \bm{R}) e^{-\frac{\pi i}{4} \sigma_z}, &&
\hat U_{c_4} \psi^{\dag}_{B}(\bm{R}) \hat U_{c_4}^{-1} = \psi^{\dag}_{A}(c_4 \bm{R}+\hat y) e^{-\frac{\pi i}{4} \sigma_z}, \\
&\hat U_{\sigma} \psi^{\dag}_{A}(\bm{R}) \hat U_{\sigma}^{-1} = \psi^{\dag}_{B}(\sigma \bm{R}) (-i \sigma_y), &&
\hat U_{\sigma} \psi^{\dag}_{B}(\bm{R}) \hat U_{\sigma}^{-1} = \psi^{\dag}_{A}(\sigma \bm{R}+\hat x) (-i \sigma_y),
\end{align}
which leads to constraints
\begin{align}
&t_1 = \alpha + \beta \frac{\sigma_x - \sigma_y}{\sqrt{2}}, \qquad \alpha, \beta \in \mathbb{C},
\label{eq:p4g_E_3_parameter_1} \\
&t_2 = a + b \sigma_z + i c \sigma_x + i d \sigma_y, \qquad a,b,c,d \in \mathbb{R}.
\label{eq:p4g_E_3_parameter_2}
\end{align}
Let us consider the following Hamiltonian
\begin{equation}
\hat H_4
:= \psi^{\dag}_{B}(\bm{R}) \frac{1+i}{4} \psi_A(\bm{R}) + \psi^{\dag}_{A}(\bm{R}+\hat x) \frac{\sigma_z}{4} \psi_A(\bm{R}) + h.c.
+ ({\rm space\ group\ symmetrization}).
\end{equation}
The one-particle Hamiltonian $H_4(\bm{k})$ in the momentum space is written as
\begin{equation}
H_4(\bm{k})
= \begin{pmatrix}
\frac{1}{2} (\cos k_x - \cos k_y) \sigma_z & \frac{1-i}{4}(1+e^{-i (k_x+k_y)}) + \frac{1+i}{4}(e^{-ik_x} + e^{-i k_y}) \\
\frac{1+i}{4}(1+e^{i (k_x+k_y)}) + \frac{1-i}{4}(e^{ik_x} + e^{i k_y}) & -\frac{1}{2} (\cos k_x - \cos k_y) \sigma_z \\
\end{pmatrix}.
\end{equation}
This model conserves the $z$-component of the spin and is fully gapped with
the dispersion
\begin{equation}
\varepsilon(\bm{k}) = \pm \sqrt{\frac{6+\cos(2 k_x) + \cos (2 k_y)}{8}}.
\end{equation}
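This dispersion can be verified numerically; the following short script (our own check, not part of the model definition; NumPy is assumed) builds $H_4(\bm{k})$ from the matrix above and compares its eigenvalues with $\varepsilon(\bm{k})$ at random momenta:

```python
import numpy as np

s0 = np.eye(2)                    # identity in spin space
sz = np.diag([1.0, -1.0])         # sigma_z

def H4(kx, ky):
    """4x4 one-particle Hamiltonian H_4(k) in the (A, B) x spin basis."""
    dz = 0.5 * (np.cos(kx) - np.cos(ky))
    f = ((1 - 1j) / 4) * (1 + np.exp(-1j * (kx + ky))) \
        + ((1 + 1j) / 4) * (np.exp(-1j * kx) + np.exp(-1j * ky))
    return np.block([[dz * sz, f * s0],
                     [np.conj(f) * s0, -dz * sz]])

rng = np.random.default_rng(0)
for kx, ky in rng.uniform(-np.pi, np.pi, size=(50, 2)):
    eps = np.sqrt((6 + np.cos(2 * kx) + np.cos(2 * ky)) / 8)
    # the spectrum is the doubly degenerate pair +/- eps(k)
    assert np.allclose(np.linalg.eigvalsh(H4(kx, ky)),
                       [-eps, -eps, eps, eps], atol=1e-12)
```

The same function also reproduces the matrices quoted below at the symmetric points $\Gamma$, $M$, and $X$.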
At the symmetric points, $H_4(\bm{k})$ takes the following forms
\begin{align}
H_4(\Gamma) = \begin{pmatrix}
0 & \sigma_0 \\
\sigma_0 & 0 \\
\end{pmatrix},&&
H_4(M) = \begin{pmatrix}
0 & -i \sigma_0 \\
i \sigma_0 & 0 \\
\end{pmatrix},&&
H_4(X) = \begin{pmatrix}
-\sigma_z & 0 \\
0 & \sigma_z \\
\end{pmatrix}.
\end{align}
Then, the occupied basis $\Psi_{P} (P=\Gamma, M, X)$ at symmetric points reads
\begin{align}
&\Psi_{\Gamma} = \{\frac{\ket{A,\uparrow}-\ket{B,\uparrow}}{\sqrt{2}},\frac{\ket{A,\downarrow}-\ket{B,\downarrow}}{\sqrt{2}}\}, &&
\Psi_{M} = \{\frac{\ket{A,\uparrow}-i\ket{B,\uparrow}}{\sqrt{2}},\frac{\ket{A,\downarrow}-i\ket{B,\downarrow}}{\sqrt{2}}\}, \\
&\Psi_{X} = \{\ket{A,\uparrow},\ket{B,\downarrow}\}.
\end{align}
Let $E_4$ be the occupied state bundle of the Hamiltonian $H_4(\bm{k})$.
The representation matrices at the symmetric points $P=\Gamma,M,X$ on $E_4$ are given by
\begin{align}
U_{c_4}(\Gamma) = \begin{pmatrix}
-\zeta & 0 \\
0 & -\zeta^{-1}\\
\end{pmatrix}, &&
U_{c_4}(M) = \begin{pmatrix}
i \zeta & 0 \\
0 & i \zeta^{-1} \\
\end{pmatrix}, &&
U_{c_2}(X)=
\begin{pmatrix}
-i & 0\\
0 & -i \\
\end{pmatrix}, &&
U_{\sigma}(X)=
\begin{pmatrix}
0 & 1\\
1 & 0 \\
\end{pmatrix},
\end{align}
which implies that the occupied state bundle has
the data $E_4 := (BW,1 + t_{\sigma c_2} t_{\sigma},BW)$.
In the same way, the unoccupied states of $\hat H_4$
have the data $(W,t_{\sigma c_2} + t_{\sigma},W)$.
We conjecture:
\begin{itemize}
\item The vector bundles with the data $(W,t_{\sigma c_2} + t_{\sigma},W)$,
$(W,1+t_{\sigma c_2} t_{\sigma},BW)$,
$(BW,1+t_{\sigma c_2} t_{\sigma},BW)$, and
$(BW,t_{\sigma c_2}+t_{\sigma},W)$ cannot be realized as atomic insulators.
\end{itemize}
If this is true, the band insulator $E_4$ we constructed is a topologically nontrivial ground state
in the sense that it admits no atomic orbital representation,
which is similar to filling-enforced topological insulators protected by space group symmetry.~\cite{PoWatanabe2016filling}
Our model $E_4$ is not filling enforced, since the atomic ground states obtained from the
Wyckoff position (a) have the same filling number as $E_4$.
\subsubsection{Models of $K$-group $K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2)$: 2d class AIII insulator}
Now we consider a generating model of the $K$-group $K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2)$,
which is represented by a class AIII insulator with p4g symmetry in spin half-integer systems.
From (\ref{eq:K_p4g_1_T2}),
the topological invariant detecting
the $K$-group $K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2) \cong \mathbb{Z}$
can be understood on the subspace $Z$.
Under the two-cocycle (\ref{twist_spinful_p4g}),
the reflection $U_{\sigma c_4^3}(\bm{k})$ satisfies $U_{\sigma c_4^3}(k_y,k_x) U_{\sigma c_4^3}(k_x,k_y) = -1$.
We can define the mirror winding number $w_{\sigma c_4^3}$ on the
invariant line of $\sigma c_4^3$-reflection as
\begin{align}
w_{\sigma c_4^3} :=
\frac{1}{i} \cdot \frac{1}{4 \pi i} \oint_{-\pi}^{\pi} d k \ {\rm tr\,} \Big[ U_{\sigma c_4^3}(k,k) \Gamma H(k,k)^{-1} \partial_k H(k,k) \Big] \in 2 \mathbb{Z},
\label{eq:mirror_winding_p4g_aiii}
\end{align}
where $\Gamma$ is the chiral operator.
That the mirror winding number $w_{\sigma c_4^3}$
is an even integer is ensured by the
vanishing of the total winding number along the same line.
We give an example of a nontrivial model.
We define a Hamiltonian $H(\bm{k})$ on the atomic vector bundle $E_1 \otimes \mathbb{C}^2$
where $E_1$ is introduced in (\ref{eq:p4g_E1}) and $\mathbb{C}^2$ represents
internal degrees of freedom on which the point group $D_4$ acts trivially.
Let $\hat H$ be the following model with
nearest-neighbor and next-nearest-neighbor hopping,
\begin{align}
\hat H
&:=
\psi^{\dag}_{B s \downarrow}(\bm{R}) e^{-\pi i/4} \sigma_x \psi_{A s \uparrow}(\bm{R})
+ t \psi^{\dag}_{A s \uparrow}(\bm{R}+\hat x) \sigma_y \psi_{A s \uparrow}(\bm{R})
+ m \psi^{\dag}_{A s \uparrow}(\bm{R}) \sigma_y \psi_{A s \uparrow}(\bm{R}) \\
& \qquad + ({\rm space\ group\ symmetrization}) \\
&= \sum_{\bm{k}} (\psi^{\dag}_{A s \uparrow}(\bm{k}), \psi^{\dag}_{B s \downarrow}(\bm{k})) H(\bm{k})
\begin{pmatrix}
\psi_{A s \uparrow}(\bm{k}) \\
\psi_{B s \downarrow}(\bm{k})
\end{pmatrix}, \\
H(\bm{k}) &= \begin{pmatrix}
(m+2t \cos k_x + 2t \cos k_y) \sigma_y & e^{\pi i/4} (1 - i e^{i k_y} - e^{-i k_x+i k_y}+i e^{-i k_x}) \sigma_x \\
e^{-\pi i/4} (1 + i e^{-i k_y} - e^{i k_x-i k_y}-i e^{i k_x}) \sigma_x & (m+2t \cos k_x + 2t \cos k_y) \sigma_y \\
\end{pmatrix},
\label{eq:p4g_aiii_bulk_model}
\end{align}
where $\sigma_{\mu} (\mu=x,y,z)$ are the Pauli matrices for the
internal degrees of freedom.
The space group transformations are defined by
\begin{align}
&\hat U_{c_4} \psi^{\dag}_{A s \uparrow}(\bm{R}) \hat U_{c_4}^{-1} = \psi^{\dag}_{A s \uparrow}(c_4 \bm{R}) e^{-\pi i/4}, \qquad
\hat U_{c_4} \psi^{\dag}_{B s \downarrow}(\bm{R}) \hat U_{c_4}^{-1} = \psi^{\dag}_{B s \downarrow}(c_4 \bm{R}+\hat y) e^{\pi i/4}, \\
&\hat U_{\sigma} \psi^{\dag}_{A s \uparrow}(\bm{R}) \hat U_{\sigma}^{-1} = \psi^{\dag}_{B s \downarrow}(\sigma \bm{R}), \qquad
\hat U_{\sigma} \psi^{\dag}_{B s \downarrow}(\bm{R}) \hat U_{\sigma}^{-1} = \psi^{\dag}_{A s \uparrow}(\sigma \bm{R}+\hat x) (-1).
\end{align}
The chiral operator is $\Gamma = \sigma_z$.
The one-particle Hamiltonian $H(\bm{k})$ has the mirror winding number
\begin{align}
w_{\sigma c_4^3} = \left\{\begin{array}{ll}
2 & (t<-\frac{|m|}{4}) \\
0 & (-\frac{|m|}{4}<t<\frac{|m|}{4}) \\
-2 & (\frac{|m|}{4}<t) \\
\end{array}\right..
\end{align}
The module structure (\ref{eq:K_p4g_1_T2}) of
the $K$-group can be understood
from the mirror winding number (\ref{eq:mirror_winding_p4g_aiii}).
From the character table \ref{Tab:rep_of_D4},
the operator $U_{\sigma c_4^3}(\bm{k})$
changes under the actions of the $A$ and $B$ irreps as
$U_{\sigma c_4^3}(\bm{k}) \mapsto - U_{\sigma c_4^3}(\bm{k})$,
which implies that the mirror winding number $w_{\sigma c_4^3}$
is an invariant of the $R(D_4)$-module $(1-A-B+AB)$.
\subsubsection{Models of $K$-group $K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2)$: 2d class A surface state}
The $K$-group $K^{\tau_{\sf p4g}+\omega+1}_{D_4}(T^2)$ with grading $n=1$
classifies gapless states on the 2d BZ torus $T^2$ with p4g symmetry
in spin half-integer systems.
The corresponding 3d model Hamiltonian and topological
invariants immediately follow from (\ref{eq:mirror_winding_p4g_aiii})
and (\ref{eq:p4g_aiii_bulk_model}).
The mirror Chern number is defined on the $\sigma c_4^3$-invariant plane~\cite{DongLiu2015}
\begin{align}
ch_{\sigma c_4^3}
:= \frac{1}{i} \cdot \frac{i}{2 \pi} \oint_{-\pi}^{\pi} d k \oint_{-\pi}^{\pi} d k_z \ {\rm tr\,} \Big[ U_{\sigma c_4^3}(k,k,k_z) {\cal F}_{k k_z}(k,k,k_z) \Big] \in 2 \mathbb{Z},
\end{align}
where ${\cal F}_{k k_z}$ is the Berry curvature on the $\sigma c_4^3$-invariant plane.
From the dimensional raising map,
the 2d class AIII Hamiltonian (\ref{eq:p4g_aiii_bulk_model}) becomes
\begin{align}
\widetilde H(k_x,k_y,k_z)
=
\begin{pmatrix}
(m+2t \cos k_x + 2t \cos k_y + 2t \cos k_z) \sigma_y + \sin k_z \sigma_z & e^{\pi i/4} (1 - i e^{-i k_y} - e^{-i k_x+i k_y}+i e^{-i k_x}) \sigma_x \\
e^{-\pi i/4} (1 + i e^{i k_y} - e^{i k_x-i k_y}-i e^{i k_x}) \sigma_x & (m+2t \cos k_x + 2t \cos k_y + 2t \cos k_z) \sigma_y + \sin k_z \sigma_z \\
\end{pmatrix}.
\end{align}
\subsubsection{A stable gapless phase protected by representation at $X$ point: 2d class A}
\label{A stable gapless phase protected by representation at $X$ point: 2d class A}
The $K$-group (\ref{eq:K_p4g_0_T2}) is
characterized by the representations at
the symmetric points $\Gamma$, $X$ and $M$.
From the local data of the $K$-group (\ref{eq:local_data_X_1}) on $X_1$,
one can find that not all representations at the $X$ point are allowed.
Only two representations
\begin{align}
t_{\sigma c_2} + t_{\sigma}, 1 + t_{\sigma c_2} t_{\sigma} \in R^{\tau_{\sf p4g}|_X+\omega}(D_2^{(v)})
\label{eq:p4g_condition_x_a}
\end{align}
survive on the subspace $X_1$.
The evenness of the rank is due to the nonsymmorphic property of the
wallpaper group $p4g$.
In addition to a simple condition on the filling number,
(\ref{eq:p4g_condition_x_a}) implies the following additional
condition:
\begin{itemize}
\item If a band spectrum is isolated from other bands on the subspace $X_1$,
then the representation at the $X$ point should be a direct sum of $t_{\sigma c_2} + t_{\sigma}, 1 + t_{\sigma c_2} t_{\sigma} \in R^{\tau_{\sf p4g}|_X+\omega}(D_2^{(v)})$.
\end{itemize}
The contrapositive of this condition provides a criterion for stable gapless phases:
\begin{itemize}
\item If the representation of a valence band at the $X$ point is not a direct sum of
$t_{\sigma c_2} + t_{\sigma}, 1 + t_{\sigma c_2} t_{\sigma} \in R^{\tau_{\sf p4g}|_X+\omega}(D_2^{(v)})$,
then there should be a gapless point on the subspace $X_1$,
unless the valence band at the $X$ point touches the conduction band.
\end{itemize}
\begin{figure}[!]
\begin{center}
\includegraphics[width=\linewidth, trim=0cm 0cm 0cm 0cm]{p4g_gapless.pdf}
\end{center}
\caption{Band crossings protected by the representation at the $X$ point.
[a] subspace $X_1$.
[b] The energy spectrum on the subspace $X_1$ of Hamiltonian (\ref{eq:hamiltonian_p4g_gapless}) which corresponds to
the parameters $(\alpha,\beta,a,b,c,d) = (0,1,0,0,0,0)$ of eqs.\ (\ref{eq:p4g_E_3_parameter_1}) and (\ref{eq:p4g_E_3_parameter_2}).
[c] Another parameter choice $(\alpha,\beta,a,b,c,d) = (1+i,1,0.2,0,0,0)$.}
\label{fig:p4g_gapless}
\end{figure}
We give a simple model of the form (\ref{eq:p4g_model_on_Wyckoff_b}).
Let us consider the following Hamiltonian on the atomic insulator $E_3$:
\begin{align}
\hat H_5
&:= \psi^{\dag}_{B}(\bm{R}) \frac{\sigma_x-\sigma_y}{2} \psi_A(\bm{R}) + h.c.
+ ({\rm space\ group\ symmetrization})
\label{eq:hamiltonian_p4g_gapless} \\
&= \sum_{\bm{k}} (\psi^{\dag}_{A}(\bm{k}), \psi^{\dag}_{B}(\bm{k})) H_5(\bm{k})
\begin{pmatrix}
\psi_{A}(\bm{k}) \\
\psi_{B}(\bm{k})
\end{pmatrix}, \\
H_5(\bm{k}) &= \begin{pmatrix}
0 & \frac{\sigma_x-\sigma_y}{2}(1 - e^{-i k_x-i k_y}) + \frac{\sigma_x+\sigma_y}{2} (e^{-i k_y}-e^{-i k_x}) \\
\frac{\sigma_x-\sigma_y}{2}(1 - e^{i k_x+i k_y}) + \frac{\sigma_x+\sigma_y}{2} (e^{i k_y}-e^{i k_x}) & 0 \\
\end{pmatrix}.
\end{align}
At the $X$ point
the Hamiltonian becomes $H_5(X) = \begin{pmatrix}
0 & \sigma_x \\
\sigma_x & 0 \\
\end{pmatrix}$ and the occupied states at $X$ belong to the
representation $t_{\sigma c_2} + t_{\sigma c_2} t_{\sigma} \in R^{\tau_{\sf p4g}|_X + \omega}(D_2^{(v)})$.
Then, the above criterion implies that
there should be a topologically stable gapless point on the subspace $X_1$
as long as the mass gap at the $X$ point is preserved.
Fig.~\ref{fig:p4g_gapless} [b] shows the energy spectrum of (\ref{eq:hamiltonian_p4g_gapless}).
Fig.~\ref{fig:p4g_gapless} [c] shows the perturbed energy spectrum from (\ref{eq:hamiltonian_p4g_gapless}).
The band crossing on the subspace $X_1$ is protected by the representation of the $X$ point.
\subsection{Weyl semimetals and nodal superconductors protected by inversion symmetry}
\label{sec:inversion_fermi_pt}
In this section,
we introduce a $\mathbb{Z}_2$ invariant, defined from inversion symmetry, that protects Weyl semimetals and nodal superconductors
and that has not been discussed in the literature.
\subsubsection{$\mathbb{Z}_2$ invariant from unoriented surface}
\label{Z2 invariant from unoriented surface}
We start with a $\mathbb{Z}_2$ invariant arising from an unoriented BZ manifold.
Let $X$ be a 2d unoriented manifold.
Complex vector bundles $E$ on $X$
can be classified by
their rank and first Chern class $c_1(E) \in H^2(X;\mathbb{Z})$.
If $X$ is nonorientable, $H^2(X;\mathbb{Z})$ may have a torsion part.
For example, the real projective plane $RP^2$ has $H^2(RP^2;\mathbb{Z}) = \mathbb{Z}_2$,
which implies that we have a ``$\mathbb{Z}_2$ topological insulator'' on $RP^2$.
The torsion part of the first Chern class can be detected as follows.~\cite{Freed1986}
Let ${\cal A}(\bm{k})$ be the Berry connection of occupied states on $RP^2$.
Let $\ell$ be a noncontractible loop on $RP^2$.
Then, $RP^2$ can be considered as a disc $D$ bounded by the loop $\ell$ and its copy.
See Fig.~\ref{fig:inversion} [a].
Then, the $\mathbb{Z}_2$ invariant $c_1 \in \{0,1/2\}$ is defined by
\begin{align}
c_1 := \frac{i}{2\pi}\ln {\rm hol}_{\ell}({\cal A})+\frac{1}{2} \frac{i}{2\pi}\int_{D} {\rm tr\,} {\cal F} \ (\mathrm{mod}\ 1),
\label{eq:c1_RP2}
\end{align}
where ${\rm hol}_{\ell}({\cal A}) \in U(1)$ is the Berry phase ($U(1)$ holonomy) along the loop $\ell$,
and ${\cal F}$ is the Berry curvature.
The invariant $c_1$ is quantized to $0$ or $1/2$ because of Stokes' theorem,
\begin{align}
2 c_1 = \frac{i}{2\pi}\ln {\rm hol}_{\partial D} ({\cal A})+ \frac{i}{2\pi}\int_{D} {\rm tr\,} {\cal F} = 0 \ ({\rm mod } \ 1).
\end{align}
A nontrivial model Hamiltonian will be presented in Sec.~\ref{sec:z2_inversion}.
It is worth recalling the definition of the Berry phase in cases where
the Berry connection ${\cal A}$ on the loop $\ell$ needs multiple patches.
In such cases, the Berry phase is defined by the parallel transports on the patches
and the transition functions between them.
Let $\{ U_i \}_{i=1, \dots, N}$ be a cover of the loop $\ell$.
We divide $\ell$ into $N$ segments $\ell_i \subset U_i$.
Let $p_i$ be the junction points of the segments, namely, $\partial \ell_i = p_{i+1} - p_i$ (with $p_{N+1} = p_1$).
Then the $U(1)$ holonomy is defined by
\begin{align}
{\rm hol}_{\ell}({\cal A})
= e^{-\int_{\ell_1} {\rm tr\,} {\cal A}_1 } \cdot \det g_{1,2}(p_2) \cdot
e^{-\int_{\ell_2} {\rm tr\,} {\cal A}_2 } \cdot \det g_{2,3}(p_3) \cdots
e^{-\int_{\ell_N} {\rm tr\,} {\cal A}_N } \cdot \det g_{N,1}(p_1),
\end{align}
where ${\cal A}_i$ is the Berry connection on $U_i$
and $g_{i,j}$ is the transition function on $U_i \cap U_j$.
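As a concrete illustration (a self-contained numerical sketch of ours; the toy state $\psi(\phi) = (\cos\alpha, \sin\alpha\, e^{i\phi})$, the two-patch layout, and the transition-function convention are choices made only for this example), one can check that the patched holonomy agrees with a single-gauge Wilson loop:

```python
import numpy as np

alpha = 0.7   # fixed cone angle; exact Berry phase is exp(-2*pi*i*sin(alpha)**2)
N = 2000      # discretization steps per transport

def frame1(phi):
    # gauge used on patch U_1
    return np.array([np.cos(alpha), np.sin(alpha) * np.exp(1j * phi)])

def frame2(phi):
    # a different gauge on patch U_2: frame2 = frame1 * g12, with g12(phi) = exp(i*phi)
    return frame1(phi) * np.exp(1j * phi)

def transport(frame, a, b):
    # discrete parallel transport exp(-int_a^b tr A) in the given gauge
    phis = np.linspace(a, b, N + 1)
    w = 1.0 + 0j
    for j in range(N):
        w *= np.vdot(frame(phis[j + 1]), frame(phis[j]))
    return w / abs(w)

# single-patch reference: Wilson loop around the full circle in gauge frame1
hol_ref = transport(frame1, 0.0, 2 * np.pi)

# two-patch evaluation: ell_1 = [0, pi] in gauge 1, ell_2 = [pi, 2*pi] in gauge 2,
# with transition factors at the junctions p_2 = pi and p_1 = 0 (~ 2*pi)
g12 = lambda phi: np.exp(1j * phi)
hol_patched = (transport(frame1, 0.0, np.pi) * np.conj(g12(np.pi))
               * transport(frame2, np.pi, 2 * np.pi) * g12(2 * np.pi))

assert abs(hol_patched - hol_ref) < 1e-4
assert abs(hol_ref - np.exp(-2j * np.pi * np.sin(alpha) ** 2)) < 1e-3
```

Both evaluations converge to the exact Berry phase $e^{-2\pi i \sin^2\alpha}$, independently of the gauge used on each patch.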
A similar construction is possible for the Klein bottle and also
the torsion part of higher Chern classes $c_d(E)$, $d > 1$.~\cite{Freed1986}
\begin{figure}[!]
\begin{center}
\includegraphics[width=0.5\linewidth, trim=0cm 2cm 0cm 0cm]{inversion.pdf}
\end{center}
\caption{
[a] The real projective plane and a noncontractible loop $\ell$.
[b] The line $\ell$ connecting the inversion symmetric pair of points $P$ and $-P$.
The surface $D$ bounds $\ell$ and its inversion symmetric pair $-\ell$.
}
\label{fig:inversion}
\end{figure}
\subsubsection{$\mathbb{Z}_2$ invariant from the inversion symmetry}
\label{sec:z2_inversion}
Now we discuss an application of
the $\mathbb{Z}_2$ invariant (\ref{eq:c1_RP2}) to Weyl semimetals and nodal superconductors.
Let us consider an inversion symmetric 3d Hamiltonian
\begin{align}
U(\bm{k}) H(\bm{k}) U(\bm{k})^{-1} = H(-\bm{k}), \qquad
U(-\bm{k}) U(\bm{k}) = 1.
\end{align}
The existence of the $\mathbb{Z}_2$ invariant is understood as follows.
We pick a closed surface $\Sigma$ on which
the inversion symmetry freely acts.
We effectively have a Hamiltonian
on the quotient $\Sigma/\mathbb{Z}_2$, which is a nonorientable manifold.
This implies that there is a $\mathbb{Z}_2$ invariant similar to (\ref{eq:c1_RP2}).
Let us define the $\mathbb{Z}_2$ invariant.
We pick a pair of inversion symmetric points $P$ and $-P$.
Let $\ell$ be an oriented line from $P$ to $-P$.
In the presence of the inversion symmetry,
even if the line $\ell$ is not closed,
one can still associate a well-defined Berry phase with the line $\ell$.
The Bloch states at $P$ and $-P$ are related by a unitary matrix $V(P)$ as
\begin{align}
U(-P) \Phi(-P) = \Phi(P) V(P),
\label{eq:def_v(p)_inversion}
\end{align}
where $\Phi(\bm{k})$ is the frame of occupied states
$\Phi(\bm{k}) = \big (\ket{\phi_1(\bm{k})}, \dots, \ket{\phi_m(\bm{k})} \big)$.
We define the Berry phase associated with the line $\ell$ by
\begin{align}
{\rm hol}_{\ell}({\cal A})
:= e^{-\int_{P,\ell}^{-P} {\rm tr\,} {\cal A}} \cdot \det [V(P)] \in U(1).
\end{align}
(Here we have assumed that $\ell$ is covered by a single patch.)
The phase ${\rm hol}_{\ell}({\cal A})$ is gauge invariant since
the gauge dependences of the parallel transport and the unitary matrix $V(P)$ are canceled.
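Explicitly, under a gauge transformation $\Phi(\bm{k}) \mapsto \Phi(\bm{k}) g(\bm{k})$ with $g(\bm{k}) \in U(m)$, one has ${\rm tr\,}{\cal A} \mapsto {\rm tr\,}{\cal A} + d \ln \det g$, while (\ref{eq:def_v(p)_inversion}) gives $V(P) \mapsto g(P)^{-1} V(P) g(-P)$. Hence
\begin{align}
e^{-\int_{P,\ell}^{-P} {\rm tr\,} {\cal A}} \mapsto e^{-\int_{P,\ell}^{-P} {\rm tr\,} {\cal A}} \cdot \frac{\det g(P)}{\det g(-P)}, \qquad
\det [V(P)] \mapsto \det [V(P)] \cdot \frac{\det g(-P)}{\det g(P)},
\end{align}
and the two determinant factors cancel in the product ${\rm hol}_{\ell}({\cal A})$.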
It should be noted that
there is an ambiguity in ${\rm hol}_{\ell}({\cal A})$ arising from $U(\bm{k})$.
The change of sign $U(\bm{k}) \mapsto - U(\bm{k})$ induces the $\pi$ phase shift
${\rm hol}_{\ell}({\cal A}) \mapsto -{\rm hol}_{\ell}({\cal A})$.
This ambiguity cannot be eliminated; however, the $\mathbb{Z}_2$ distinction is well defined
once $U(\bm{k})$ is fixed.
In the same way as (\ref{eq:c1_RP2}),
we can define the $\mathbb{Z}_2$ invariant.
The line $\ell$ and its inversion symmetric line $-\ell$
together form a closed loop $\ell \cup (-\ell)$ in the BZ.
We choose a surface $D$ whose boundary is $\ell \cup (-\ell)$.
See Fig.~\ref{fig:inversion} [b].
Then, the same formula as (\ref{eq:c1_RP2}) defines the $\mathbb{Z}_2$ invariant $c_1 \in \{0,1/2\}$.
Notice that the $\mathbb{Z}_2$ invariant $c_1$ depends on both the line $\ell$ and the surface $D$.
Now we give a nontrivial model Hamiltonian.
Let
\begin{align}
\ket{\bm{k}} :=
\frac{1}{|\bm{k}|}
\begin{pmatrix}
k_x+i k_y \\
k_z
\end{pmatrix}, \qquad
\bm{k} \neq \bm{0},
\label{eq:inversion_model_line_bundle}
\end{align}
be a single occupied state with two orbitals near $\bm{k}=\bm{0}$.
The associated $2 \times 2$ Hamiltonian is given by
\begin{align}
H(\bm{k})
= |\bm{k}|^2 ({\bf 1}_{2 \times 2}- 2 \ket{\bm{k}} \bra{\bm{k}} )
= \begin{pmatrix}
-k_x^2-k_y^2+k_z^2 & -2k_z(k_x-ik_y) \\
-2k_z(k_x+ik_y)& k_x^2+k_y^2-k_z^2
\end{pmatrix}.
\label{eq:model_inversion}
\end{align}
For example, the BdG Hamiltonian of $(d_{zx}+i d_{zy})$-wave superconductors takes this form.
The point $\bm{k} = \bm{0}$ is the gapless point of this Hamiltonian.
This model has the inversion symmetry $H(-\bm{k}) = H(\bm{k})$.
Let us compute the $\mathbb{Z}_2$ invariant associated with
the north hemisphere of a $|\bm{k}| = {\rm const.}$\ sphere as shown in
Fig.~\ref{fig:inversion} [b].
Under the choice $U(\bm{k}) = {\bf 1}_{2 \times 2}$,
the inversion property $\ket{-\bm{k}} = - \ket{\bm{k}}$ means that
the unitary $V(\bm{k})$ in (\ref{eq:def_v(p)_inversion}) is $V(\bm{k}) = -1$.
Introduce the spherical coordinate
$\bm{k} = |\bm{k}| (\sin \theta \cos \phi, \sin \theta \sin \phi, \cos \theta)$.
The Berry connection and the curvature of $\ket{\bm{k}}$ are given by
${\cal A} = \frac{i}{2} (1-\cos 2 \theta) d \phi$ and
${\cal F} = i \sin 2 \theta d \theta \wedge d \phi$, respectively.
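These expressions follow from $\ket{\bm{k}} = (\sin\theta\, e^{i\phi}, \cos\theta)^T$ with the convention ${\cal A} = \bra{\bm{k}} d \ket{\bm{k}}$; a finite-difference check (an illustrative sketch, with arbitrary sample angles) is:

```python
import numpy as np

def ket(th, ph):
    # occupied state on the unit sphere, |k> = (sin θ e^{iφ}, cos θ)^T
    return np.array([np.sin(th) * np.exp(1j * ph), np.cos(th)])

def A_phi(th, ph, h=1e-6):
    # Berry connection component A_φ = <k| ∂_φ |k> by central differences
    dv = (ket(th, ph + h) - ket(th, ph - h)) / (2 * h)
    return np.vdot(ket(th, ph), dv)

th, ph = 0.7, 1.3
# A = (i/2)(1 - cos 2θ) dφ
assert abs(A_phi(th, ph) - 0.5j * (1 - np.cos(2 * th))) < 1e-8
# F = dA: F_{θφ} = ∂_θ A_φ (A_θ vanishes), expected i sin 2θ
h = 1e-4
F = (A_phi(th + h, ph) - A_phi(th - h, ph)) / (2 * h)
assert abs(F - 1j * np.sin(2 * th)) < 1e-4
```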
It is easy to show that the $\mathbb{Z}_2$ invariant (\ref{eq:c1_RP2})
becomes $c_1 = 1/2\ ({\rm mod }\ 1)$.
On the other hand, the trivial nonsingular Hamiltonian $H(\bm{k}) = {\rm diag}(1,-1)$
shows $c_1 = 0 \ ({\rm mod } \ 1)$.
Thus, $c_1 = 1/2 \ ({\rm mod } \ 1)$ protects the gapless point of the Hamiltonian (\ref{eq:model_inversion}).
Notice that the singular point of the Hamiltonian (\ref{eq:model_inversion}) has no Chern number,
so the singularity of (\ref{eq:model_inversion}) can be stabilized only after the inversion symmetry is enforced.
Let us consider further implications of the $\mathbb{Z}_2$ nontriviality.
For concreteness,
we use the notation of the
BdG Hamiltonian of $(d_{zx}+id_{zy})$-wave superconductors with
a conserved spin $s_z$.
However, the following discussion applies to any inversion symmetric system.
Let us consider a Hamiltonian
\begin{align}
H_{d}(\bm{k})
&= \begin{pmatrix}
\frac{k_x^2 + k_y^2}{2m}-\frac{k_z^2}{2m'} -\mu & \Delta k_z(k_x+i k_y) \\
\Delta k_z(k_x-i k_y) & -\frac{k_x^2 + k_y^2}{2m}+\frac{k_z^2}{2m'} + \mu
\end{pmatrix}, \qquad m,m' > 0.
\label{eq:model_inversion_dwave}
\end{align}
Depending on the sign of the ``chemical potential'' $\mu$,
the singular points of the Hamiltonian (\ref{eq:model_inversion_dwave})
form a ring ($\mu >0$),
a single point ($\mu = 0$),
or a pair of points with Chern numbers ($\mu < 0$),
as shown in Fig.~\ref{fig:inversion_dwave} [a].
An important point is that
both the ring-like and the point-like singularities
carry the same $\mathbb{Z}_2$ invariant $c_1=1/2$,
provided that the inversion symmetric sphere
surrounds these singular regions.
An inversion symmetric version of the Nielsen-Ninomiya theorem also holds.
Let us consider a lattice analog of (\ref{eq:model_inversion_dwave}) along the $z$-direction
\begin{align}
H_{d,{\rm lattice}}(k_x,k_y,k_z)
&= \begin{pmatrix}
\frac{k_x^2 + k_y^2}{2m} - t \cos k_z -\mu & \Delta k_z(k_x+i k_y) \\
\Delta k_z(k_x-i k_y) & -\frac{k_x^2 + k_y^2}{2m} + t \cos k_z + \mu
\end{pmatrix}, \qquad m,t > 0.
\label{eq:model_inversion_dwave_lattice}
\end{align}
For the parameter region $-t < \mu < t$,
the ``Fermi surface'' of the diagonal part of (\ref{eq:model_inversion_dwave_lattice})
forms a spheroid
as shown in Fig.~\ref{fig:inversion_dwave} [b].
There is a ring singularity with $\mathbb{Z}_2$ charge $c_1=1/2$ on the $k_z=0$ plane.
Moreover, near the $(0,0,\pi)$ point,
there are two point-like singularities which together carry the
$\mathbb{Z}_2$ charge $c_1=1/2$ as a pair.
The Nielsen-Ninomiya theorem here states that
on the closed BZ torus
a single $\mathbb{Z}_2$ charge $c_1=1/2$ is forbidden.
As in this example, if there is a ring node
near an inversion symmetric point $(0,0,0)$,
there must be another node with $\mathbb{Z}_2$ charge $c_1=1/2$.
\begin{figure}[!]
\begin{center}
\includegraphics[width=\linewidth, trim=0cm 1cm 0cm 0cm]{inversion_dwave.pdf}
\end{center}
\caption{
[a] The red curves and points represent singular gapless points.
The blue curves represent the ``Fermi surfaces'' of the diagonal part of the Hamiltonian (\ref{eq:model_inversion_dwave}).
[b] The pair of two $\mathbb{Z}_2$ charges. The green x marks are inversion symmetric points.
}
\label{fig:inversion_dwave}
\end{figure}
\subsubsection{Generalization to higher dimensions}
It is easy to generalize the discussion so far to higher space dimensions with inversion symmetry.
Let us consider $d$-dimensional systems with inversion symmetry
$U(\bm{k}) H(\bm{k}) U(\bm{k})^{-1}= H(-\bm{k}), \bm{k} = (k_1, \dots, k_d)$.
We focus on an inversion symmetric $(d-1)$-dimensional sphere $S^{d-1}$.
The $K$-theory on the sphere $S^{d-1}$ is given by~\cite{Adams1962}
\begin{align}
K_{\mathbb{Z}_2}(S^{d-1})
\cong K(S^{d-1}/\mathbb{Z}_2)
= K(RP^{d-1})
= \mathbb{Z}_{p} \oplus \mathbb{Z}, \qquad
p = \left\{\begin{array}{ll}
2^{(d-1)/2} & (d = {\rm odd}) \\
2^{(d-2)/2} & (d = {\rm even}) \\
\end{array}\right. .
\end{align}
Here, $\mathbb{Z}_2$ acts on $S^{d-1}$ as the antipodal map.
The free part $\mathbb{Z}$ of the $K$-group $K_{\mathbb{Z}_2}(S^{d-1})$
is generated by the trivial line bundle $[1]$ on $RP^{d-1}$.
The torsion part $\mathbb{Z}_p$ is generated by
the formal difference $[\xi'] - [1]$,
where $\xi'$ is the complexification $\xi'=\xi\otimes\mathbb{C}$ of
the tautological real line bundle $\xi$ over $RP^{d-1}$.~\cite{Adams1962}
$\mathbb{Z}_p$ implies that $(\xi')^{\oplus p}$ is stably isomorphic to
the trivial bundle $1^{\oplus p}$.
The $\mathbb{Z}_2$-equivariant line bundle on $S^{d-1}$ corresponding to $\xi'$
is given by a form similar to (\ref{eq:inversion_model_line_bundle}),
\begin{align}
\ket{\bm{n}}
=\left\{\begin{array}{ll}
(n_1 + i n_2, n_3 + i n_4, \dots, n_{d})^T & (d = {\rm odd}) \\
(n_1 + i n_2, n_3 + i n_4, \dots, n_{d-1} + i n_{d})^T & (d = {\rm even}) \\
\end{array}\right.,
\end{align}
where $\bm{n} = (n_1, \dots, n_d)$ with $|\bm{n}|=1$ is the coordinate on $S^{d-1}$.
For $d \leq 6$ (which corresponds to $\mathbb{Z}_2$ or $\mathbb{Z}_4$ classifications),
elements in the $K$-group can be distinguished by the Chern classes.
Recall that the total Chern class $c(E) = 1 + \sum_{j>0} c_j(E)$ of a given complex bundle $E$ over a space $M$
takes values in the cohomology ring $H^*(M;\mathbb{Z})$.
The Whitney sum induces the cup product $c(E \oplus F) = c(E) c(F)$ in $H^*(M;\mathbb{Z})$.
The cohomology of $RP^{d-1}$ is given by
\begin{equation}\begin{split}
H^{j}(RP^{d-1};\mathbb{Z})=
\left\{ \begin{array}{ll}
\mathbb{Z} & (\mbox{$j = 0$; and $j = d - 1$ for even $d$}) \\
\mathbb{Z}_2 & (\mbox{even $j$ with $0< j < d-1$; and $j = d - 1$ for odd $d$}) \\
0 & (\mathrm{otherwise}) \\
\end{array} \right. .
\end{split}\end{equation}
The nonzero elements of $H^{2j}(RP^{d-1};\mathbb{Z})=\mathbb{Z}_2, (0 < j \le [d/2])$ are
given by the cup products $t^j\in{H}^{2j}(RP^{d-1};\mathbb{Z})$ of the
first Chern class $t=c_1(\xi')$ of the tautological line bundle.
The generator $[\xi']$ of the torsion part of the $K$-group has
the Chern class $c(\xi') = 1+t$.
For example, the torsion part of $K(RP^4) = \mathbb{Z}_4 \oplus \mathbb{Z}$ is generated by
$[\xi'] = (1,1) \in K(RP^4)$.
In this case,
the Chern class can distinguish all elements of $\mathbb{Z}_4$,
since
$c(\xi' \oplus \xi') = 1+t^2, c(\xi' \oplus \xi' \oplus \xi') = 1+t+t^2$, and
$c(\xi' \oplus \xi' \oplus \xi' \oplus \xi') = 1$.
That is, the first and second Chern classes detect all the $\mathbb{Z}_4$ phases.
On the other hand,
the torsion part of $K(RP^6) = \mathbb{Z}_8 \oplus \mathbb{Z}$ cannot be
detected by the Chern classes.
This is because the $4 \in \mathbb{Z}_8$ phase
is trivial in the Chern class $c( (\xi')^{\oplus 4} ) = (1+t)^4 = 1 \in H^*(RP^6;\mathbb{Z})$.
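The underlying arithmetic is multiplication in a truncated polynomial ring: the torsion classes $t^j$ have order two, so total Chern classes of sums of $\xi'$ can be computed in $\mathbb{Z}_2[t]$ truncated at the top nonvanishing power. A small sketch (the helper below is ours, not from the text):

```python
def chern_class_powers(n, jmax):
    """Total Chern class of (xi')^{+n}: (1 + t)^n in Z_2[t], truncated at t^jmax."""
    c = [1] + [0] * jmax                      # coefficients of 1, t, t^2, ...
    for _ in range(n):
        new = c[:]
        for j in range(1, jmax + 1):
            new[j] = (c[j] + c[j - 1]) % 2    # multiply by (1 + t), coefficients mod 2
        c = new
    return c

# RP^4 (d = 5): t and t^2 are nonzero, t^3 = 0, so jmax = 2
assert chern_class_powers(1, 2) == [1, 1, 0]  # c(xi') = 1 + t
assert chern_class_powers(2, 2) == [1, 0, 1]  # 1 + t^2
assert chern_class_powers(3, 2) == [1, 1, 1]  # 1 + t + t^2
assert chern_class_powers(4, 2) == [1, 0, 0]  # trivial: Chern classes see all of Z_4

# RP^6 (d = 7): jmax = 3; the 4 in Z_8 phase is invisible to Chern classes
assert chern_class_powers(4, 3) == [1, 0, 0, 0]
```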
\begin{table*}[]
\begin{center}
\caption{
Topological charges of Fermi points in inversion symmetric systems.
$d$ is the space dimension.
In class AI and AII, the inversion symmetry commutes with the TRS.
$\widetilde{K}(RP^{d-1})$, $\widetilde{KO}(RP^{d-1})$, and $\widetilde{KSp}(RP^{d-1})$
represent the reduced complex, real, and quaternionic $K$-theories, respectively.
}
\begin{tabular}[t]{cccccccccc}
\hline \hline
AZ class & $K$-group & $d=1$ & $d=2$ & $d=3$ & $d=4$ & $d=5$ & $d=6$ & $d=7$ & $d=8$ \\
\hline
A & $\widetilde{K}(RP^{d-1})$ & $0$ & $0$ & $\mathbb{Z}_2$ & $\mathbb{Z}_2$ & $\mathbb{Z}_4$ & $\mathbb{Z}_4$ &$\mathbb{Z}_8$ & $\mathbb{Z}_8$ \\
AI & $\widetilde{KO}(RP^{d-1})$ & $0$ & $\mathbb{Z}_2$ & $\mathbb{Z}_4$ & $\mathbb{Z}_4$ & $\mathbb{Z}_8$ & $\mathbb{Z}_8$ &$\mathbb{Z}_8$ & $\mathbb{Z}_8$ \\
AII & $\widetilde{KSp}(RP^{d-1})$ & $0$ & $0$ & $0$ & $0$ & $\mathbb{Z}_2$ & $\mathbb{Z}_4$ &$\mathbb{Z}_8$ & $\mathbb{Z}_8$ \\
\hline \hline
\end{tabular}
\label{TabISFP}
\end{center}
\end{table*}
\subsubsection{Time-reversal symmetry with inversion symmetry: Stiefel-Whitney class}
The interplay of TRS and inversion symmetry
gives rise to Fermi points with a nontrivial topological charge
and some topological charges can be captured by Stiefel-Whitney (SW) classes.
Let us consider the class AI TRS with inversion symmetry which {\it commutes} with the TRS
\begin{align}
&T H(\bm{k}) T^{-1} = H(-\bm{k}), \qquad T^2 = 1, \\
&U(\bm{k}) H(\bm{k}) U(\bm{k})^{-1} = H(-\bm{k}), \qquad U(-\bm{k}) U(\bm{k}) = 1, \\
&T U(\bm{k}) = U(-\bm{k}) T,
\end{align}
where $T$ is anti-unitary.
Here we focus on class AI, which is the TRS for integer-spin systems.
For the class AII TRS with $T^2 = -1$, there is no torsion part in low space dimensions;
hence, we only list the $K$-groups in Table~\ref{TabISFP}.
The combined symmetry $T U(\bm{k})$ acts on the BZ without changing the momentum as
\begin{align}
T U(\bm{k}) H(\bm{k}) (T U(\bm{k}))^{-1} = H(\bm{k}), \qquad (TU(\bm{k}))^2 = 1,
\end{align}
so $T U(\bm{k})$ induces the real structure on the occupied states.
Since the inversion symmetry $U(\bm{k})$ commutes with the combined symmetry $T U(\bm{k})$,
the $K$-theory of a sphere $S^{d-1}$ surrounding the symmetric point $\bm{k}=\bm{0}$ is
recast into that of the quotient $S^{d-1}/\mathbb{Z}_2 = RP^{d-1}$.
The real $K$-theory $KO(RP^{d-1})$ of the real projective space is known:~\cite{Adams1962}
\begin{equation}\begin{split}
{}^{\phi} K^{\tau+0}_{\mathbb{Z}_2}(S^{d-1}) = KO(RP^{d-1})= \mathbb{Z}_{2^g} \oplus \mathbb{Z},
\end{split}\end{equation}
where $g$ is the number of integers $s$ such that $0<s\leq d-1$ and $s\equiv 0,1,2$, or $4$ mod 8.
Here, the twisting $\tau$ represents the commutation relation between $T$ and $U(\bm{k})$.
See Table~\ref{tab:n_and_AZ} for some examples.
The torsion part of $KO(RP^{d-1})$ is additively generated by the formal difference
$([\xi]-[1])$ where $\xi$ is the tautological real line bundle over $RP^{d-1}$.
A generating $\mathbb{Z}_2$-equivariant real line bundle over $S^{d-1}$ corresponding to $\xi$ is given as follows.
Let $\ket{\bm{k}}$ be a line bundle with TRS and inversion
\begin{align}
\ket{\bm{k}} = \frac{1}{|\bm{k}|} (k_1, k_2, \dots, k_d)^T, \qquad
\ket{\bm{k}} = \ket{\bm{k}}^*, \qquad
\ket{-\bm{k}} = -\ket{\bm{k}}.
\end{align}
Notice that $\bm{k} = \bm{0}$ is singular.
The restriction of the line bundle $\ket{\bm{k}}$ to a sphere $|\bm{k}| = {\rm const.}$ leads
to the generator of the torsion part.
A Hamiltonian of which the occupied state is $\ket{\bm{k}}$ is given by
\begin{align}
H(\bm{k})
= |\bm{k}|^2 ({\bf 1}_{d \times d} - 2 \ket{\bm{k}} \bra{\bm{k}}).
\label{eq:model_ai_inversion}
\end{align}
In the same way as the complex $K$-theory of $RP^{d-1}$,
$\mathbb{Z}_2$ and $\mathbb{Z}_4$ classifications of the $K$-groups $KO(RP^{d-1})$
can be characterized by the SW classes.
A real bundle $E$ over a manifold $M$ defines the total
SW class $w(E) = 1 + \sum_{j>0} w_j(E) \in H^*(M;\mathbb{Z}_2)$.
The real projective space $RP^{d-1}$ has the following cohomology with $\mathbb{Z}_2$ coefficients
\begin{equation}\begin{split}
H^{j}(RP^{d-1};\mathbb{Z}_2)=
\left\{ \begin{array}{ll}
\mathbb{Z}_2 & (0\leq j \leq d-1) \\
0 & (\mathrm{otherwise}) \\
\end{array} \right..
\end{split}\end{equation}
As the cohomology ring, $H^*(RP^{d-1};\mathbb{Z}_2)$ is isomorphic to the truncated polynomial ring $\mathbb{Z}_2[t]/(t^{d})$.
The tautological real line bundle $\xi$ over $RP^{d-1}$ has the total SW class $w(\xi) = 1 + t$.
From the structure of the SW classes $w(E \oplus F) = w(E) w(F)$,
one can show that the $\mathbb{Z}_2$ and $\mathbb{Z}_4$ subgroups in the torsion part of the $K$-group
$KO(RP^{d-1})$ can be characterized by the SW classes.
Let us construct the $\mathbb{Z}_2$-equivariant first SW class on $S^{d-1}$.
A similar invariant defined by TRS and $C_4$-rotation symmetry is discussed in Ref.~\onlinecite{alexandradinata2016berry}.
Choose a point $P$ and its inversion symmetric pair $-P$ in the BZ.
Let $\ell$ be an oriented path from $P$ to $-P$.
Let $\Phi(\bm{k}), (\bm{k} \in \ell)$ be a frame of occupied states which is smoothly defined on the line $\ell$.
We fix the gauge freedom of $\Phi(\bm{k})$ so that
the combined symmetry $TU(\bm{k})$ is represented by a $\bm{k}$-independent unitary matrix $W$ as
$T U(\bm{k}) \Phi(\bm{k}) = \Phi(\bm{k}) W$ on the line $\ell$.
Because of the inversion symmetry,
$\Phi(P)$ and $\Phi(-P)$ are related as
$U(-P) \Phi(-P) = \Phi(P) V(P)$
with $V(P)$ a unitary matrix.
From the assumption $T U(\bm{k}) = U(-\bm{k}) T$, one can show that
$W V(P)^* = V(P) W$, which leads to the
$\mathbb{Z}_2$ quantization of the determinant $\det[V(P)] = \pm 1$.
This determinant $\det[V(P)]$ is the $\mathbb{Z}_2$-equivariant version of the
first SW class.
Notice that the change of sign $U(\bm{k}) \mapsto -U(\bm{k})$
induces $V(P) \mapsto - V(P)$;
thus, the $\mathbb{Z}_2$ invariant $\det[V(P)]$ is well-defined only relative to the trivial occupied state.
On the other hand, unfortunately, there is no simple expression for the second SW class $w_2(E)$
of a given occupied-state bundle $E$ with $T$ and $U(\bm{k})$ symmetries.
Here, we give two examples in low dimensions.
In 2-spatial dimensions, the model Hamiltonian (\ref{eq:model_ai_inversion}) reads
\begin{align}
H_{2d}(k_x,k_y)
= \begin{pmatrix}
-k_x^2+k_y^2 & -2 k_x k_y \\
-2 k_x k_y & k_x^2 - k_y^2 \\
\end{pmatrix}.
\label{eq:model_ai_inversion_d=2}
\end{align}
Such a Hamiltonian is realized in
two-dimensional $d$-wave superconductors and $d$-density waves.
The TRS and inversion symmetry are given as $T = K$ and $U(\bm{k}) = {\bf 1}_{2 \times 2}$,
where $K$ denotes complex conjugation.
The occupied state is $\ket{\bm{k}} = (k_x, k_y)^T/ |\bm{k}|, (\bm{k} \neq \bm{0})$.
This occupied state satisfies the gauge fixing condition
$TU(\bm{k}) \ket{\bm{k}} = \ket{\bm{k}}$, that is, $W = 1$.
Because of $U(\bm{k}) \ket{\bm{k}} = - \ket{-\bm{k}}$,
the $\mathbb{Z}_2$ invariant is $\det[V(\bm{k})] = -1$.
Thus, the singular point $\bm{k} = \bm{0}$ of the Hamiltonian
(\ref{eq:model_ai_inversion_d=2}) is stable
unless $T$ or $U(\bm{k})$ symmetry is broken.
In 3-spatial dimensions, the model Hamiltonian (\ref{eq:model_ai_inversion}) reads
\begin{align}
H_{3d}(k_x,k_y,k_z)
= \begin{pmatrix}
-k_x^2+k_y^2+k_z^2 & -2 k_x k_y & -2 k_x k_z \\
-2 k_x k_y & k_x^2 - k_y^2+k_z^2 & -2 k_y k_z \\
-2 k_x k_z & -2 k_y k_z & k_x^2 + k_y^2-k_z^2 \\
\end{pmatrix}.
\label{eq:model_ai_inversion_d=3}
\end{align}
The occupied states of this Hamiltonian
have the $\mathbb{Z}_4$ charge of the
$KO$-theory $KO(RP^2) = \mathbb{Z}_4 \oplus \mathbb{Z}$.
Indeed, just as in two dimensions,
the occupied state $\ket{\bm{k}} = (k_x,k_y,k_z)^T/|\bm{k}|$
has the $\mathbb{Z}_2$ charge $\det[V(\bm{k})] = -1$.
From the property of $w$,
the first SW class of the direct sum $\ket{\bm{k}} \oplus \ket{\bm{k}}$ is trivial,
but the second SW class is non-trivial.
The first and second SW classes of the direct sum $\ket{\bm{k}} \oplus \ket{\bm{k}} \oplus \ket{\bm{k}}$ are both non-trivial.
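The same truncated mod-2 polynomial arithmetic confirms these SW-class statements over $RP^2$; an illustrative sketch (the helper name is ours):

```python
def sw_class(n, jmax):
    """Total SW class of xi^{+n} over RP^jmax: (1 + t)^n in Z_2[t], truncated at t^jmax."""
    w = [1] + [0] * jmax
    for _ in range(n):
        # multiply by (1 + t) with coefficients mod 2
        w = [(w[j] + (w[j - 1] if j else 0)) % 2 for j in range(jmax + 1)]
    return w

# RP^2 (three spatial dimensions), KO(RP^2) = Z_4 + Z:
assert sw_class(1, 2) == [1, 1, 0]   # one copy of |k>: w_1 = t
assert sw_class(2, 2) == [1, 0, 1]   # two copies: w_1 = 0 but w_2 = t^2 nonzero
assert sw_class(3, 2) == [1, 1, 1]   # three copies: w_1 and w_2 both nonzero
assert sw_class(4, 2) == [1, 0, 0]   # four copies: trivial SW class
```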
\section{Conclusion}
\label{sec:Conclusion}
In this paper, we formulate topological crystalline materials on the
basis of the twisted equivariant $K$-theory.
We illustrate how space and magnetic space groups are
incorporated into topological classification of both gapful and gapless
crystalline materials in a unified manner.
The twisted equivariant $K$-theory ${}^{\phi} K_{\cal G}^{(\tau, c)-n}(T^d)$
on the BZ torus $T^d$ provides the stable classification of bulk TCIs and
TCSCs and of their boundary and defect gapless states.
$K$-theories are not just additive groups, but are equipped with the module
structures for point groups so that the
classification naturally includes the
information on crystals such as point group representations and Wyckoff
positions.
Using isomorphisms between $K$-theories, we also discuss
bulk-boundary and bulk-defect correspondences in the presence of
crystalline symmetry.
In Sec.~\ref{sec:Topological nodal semimetals and superconductors},
we propose a systematic method to classify bulk gapless topological
crystalline materials in terms of $K$-theory.
We show that the cokernel of the map $i^*_Y$ between
$K$-theories, which is induced by the
inclusion $i_Y$ of a subspace $Y$ into the BZ torus $T^d$, defines
bulk gapless topological materials.
In Sec.~\ref{Wallpaper_summary}, we present the topological
table for wallpaper groups in the absence of TRS
and PHS.
In particular, the module structures for point groups are identified
in the wallpaper classification; this information is important for
understanding crystalline materials.
Furthermore, we illustrate computations of $K$-groups
for various systems in Sec.~\ref{sec:Example of K-theory classification}.
More computations of
$K$-groups are necessary to fully explore topological crystalline materials.
Even for relatively simple wallpaper groups, the full computation is
missing in the presence of
TRS and/or PHS, although part of the
computations has been done by the present authors.\cite{ShiozakiSatoGomi2016}
In three dimensions, most $K$-groups with (magnetic) space groups
are not yet known.
Our present formulation provides a precise and systematic framework to step
into the unexplored field of topological crystalline materials.
{\it Note Added.}---
While this manuscript was being prepared, we became aware of a recent independent work by Kruthoff {\it et al.},~\cite{Kruthoff2016} which discussed the topological classification of bulk insulators and stable nodal structures in the presence of space groups, mainly focusing on class A spinless systems.
They also gave the classification of class A spinless topological crystalline insulators in two dimensions with wallpaper groups, which is consistent with our results and with Refs.~\onlinecite{Yang1997, LuckStamm2000}.
\acknowledgements
K.S.\ thanks useful discussions with Aris Alexandradinata and Takahiro Morimoto.
K.S.\ is supported by JSPS Postdoctoral Fellowship for Research Abroad.
M.S.\ is supported by the ``Topological Materials Science''
(No.~JP15H05855) KAKENHI on Innovative Areas from JSPS of Japan.
K.G.\ is supported by JSPS KAKENHI Grant Number JP15K04871.
\section{Introduction}
Resonant tunneling (RT) through semiconductor diodes has recently
attracted considerable attention because of its applications in
ultra-high-speed electronic devices. Their most remarkable feature
is that their $I$-$V$ characteristics show
negative differential resistance (NDR). Most of these devices are based
on double-barrier structures, where RT occurs via quasi-bound states
within the well region in the same energy band (conduction or
valence band). By contrast, other kinds of RT devices exhibit
interband transitions, as, for instance, in InAs-GaSb or InAs-AlSb-GaSb
structures. \cite{Wang} In the latter case, Kane's parameter is not
negligible compared with the bandgaps and, consequently, more elaborate
band structures are required to fully account for RT.
Recently, Sardela
{\em et al.} \cite{Sardela}
have observed NDR at room temperature in B-$\delta$-doped
diodes grown by Si-molecular beam epitaxy. A schematic
cross section of the studied device is shown in Fig.~\ref{fig1}.
Experimental NDR peaks are seen to
appear at about $\pm 0.2\,$V. However, it is not
clear whether RT occurs via intraband or interband mechanisms: Indeed,
the
authors argued that preliminary calculations based on the static
solution of Poisson's equation show that the whole structure only
supports a very shallow quasi-bound state and, consequently,
conduction-band modulation cannot explain the observed NDR. Hence, they
claimed that the valence-band plays a major role in RT of these diodes
and therefore two-band Hamiltonians \cite{Bastard} are necessary to
understand the obtained results.
In this paper we will show that scalar Hamiltonians for the
conduction-band envelope functions work well to explain the results of
Sardela {\em et al.} \cite{Sardela} Instead of using self-consistent
approximations to compute the one-electron potential due to
$\delta$-doping, we will concern ourselves with the nonlinear
Thomas-Fermi (TF) formulation as introduced by Ioratti.\cite{Ioratti}
This approach reproduces accurately the various subbands densities in
periodically $\delta$-doped GaAs layers at moderate and high doping
\cite{Egues} and it has proved itself useful in describing
electronic structure in the presence of applied external fields.
\cite{SST,PRB} We thus obtain
results showing the existence of several quasi-bound
states with large lifetimes localized between the two $\delta$-layers
that successfully explain the $I-V$ characteristics previously
obtained by Sardela {\em et al.}
\section{Model}
We consider an electron of effective mass $m^*$ in the $\delta$-doped
diode (see Fig.~\ref{fig1}) in the presence of an electric field applied
along the growth direction. Nonparabolicity effects are neglected so
that $m^*$ will be taken to be independent of the electron energy. The
electron wave function and the total energy of the electron are given by
$\Psi({\bf r},{\bf k}_\perp)=\exp(i{\bf k}_\perp\cdot{\bf r}_\perp)
\phi(x)$ and $E({\bf k}_\perp)=E+(\hbar^2{\bf k}_\perp^2/2m^*)$,
where $\phi(x)$ is the envelope function. Note that we have taken
isotropic bands at the $X$ minima, and, in fact, the band structure of Si is
strongly anisotropic ($m^*_l/m^*_t \sim 5$). Hence, we should consider
an {\em average} value $m^*$ that would be obtained, for example, in a
measurement of mobility.\cite{Jaros} In the effective-mass approximation
the envelope function satisfies the following Schr\"odinger equation
\begin{equation}
\left[-\,{\hbar^2\over 2m^*}\,{d^2\phantom{x}\over dx^2}+
V_{TF}(x)-eFx\right]\,\phi(x)=E\,\phi(x).
\label{eq1}
\end{equation}
Here $F$ is the applied electric field and $V_{TF}(x)$ is the solution
of the nonlinear TF equation (see Refs.~\onlinecite{SST} and
\onlinecite{PRB} for details). As usual in scattering problems, the
envelope function at the bottom electrode is a superposition of incident
and reflected traveling waves, whereas at the top one there is only a
transmitted wave. Hence standard numerical techniques \cite{SST} can be
used to obtain the transmission coefficient $\tau(E,V)$ for a given
incident energy $E$ and a given applied voltage $V=FL$, $L$ being the
length of the whole structure. Due to the high doping of the
electrodes, the electric field is assumed to be nonzero only within the
structure. In addition, these high screening effects imply that
$V_{TF}(x)$ also vanishes at the electrodes.
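While the calculation in the text uses the numerical techniques of Ref.~\onlinecite{SST}, the structure of such a computation can be sketched with a standard piecewise-constant transfer-matrix scheme. The code below is an illustrative sketch, not the method actually used: it takes units with $\hbar^2/2m^* = 1$ and a simple rectangular barrier rather than the TF profile, and propagates $(\phi, \phi')$ across the discretized potential.

```python
import numpy as np

def transmission(E, V, dx, VL=0.0, VR=0.0):
    """tau(E) for a piecewise-constant potential V (interior samples of width dx).
    VL, VR are the asymptotic electrode levels; local wavevector k = sqrt(E - V)."""
    kL = np.sqrt(complex(E - VL))
    kR = np.sqrt(complex(E - VR))
    M = np.eye(2, dtype=complex)        # propagates (phi, phi') across the structure
    for Vj in V:
        k = np.sqrt(complex(E - Vj))
        if abs(k) < 1e-12:              # classical turning point: linear solution
            slab = np.array([[1.0, dx], [0.0, 1.0]], dtype=complex)
        else:
            c, s = np.cos(k * dx), np.sin(k * dx)
            slab = np.array([[c, s / k], [-k * s, c]])
        M = slab @ M
    # match to exp(+i kL x) + r exp(-i kL x) on the left, t exp(+i kR x) on the right
    a = 1j * kR * M[0, 0] - M[1, 0]
    b = 1j * kL * (M[1, 1] - 1j * kR * M[0, 1])
    r = (b - a) / (a + b)
    t = M[0, 0] * (1 + r) + 1j * kL * M[0, 1] * (1 - r)
    return float((kR / kL).real * abs(t) ** 2)

# sanity checks: free propagation is perfectly transmitting, a barrier is not
assert abs(transmission(1.0, np.zeros(50), 0.02) - 1.0) < 1e-10
tau = transmission(1.0, np.full(100, 2.0), 0.01)
assert 0.0 < tau < 1.0
```

Sweeping $E$ through such a routine with the TF double-peak profile would reproduce the resonance structure discussed below.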
\section{Results}
We have set an effective mass $m^*=0.33$ (in units of the free-electron mass) and a dielectric constant
$\epsilon=11.7$ in Si.\cite{Jaros} In the absence of external fields,
the TF potential presents two peaks corresponding to the
$\delta$-layers, as shown in Fig.~\ref{fig2}. It is worth mentioning
that the maximum value of the conduction-band modulation is about $20\%$
of the bandgap in Si. This value is of the same order as that obtained
self-consistently in Sb-$\delta$-doped Si grown by molecular beam epitaxy
with a similar areal concentration of impurities.\cite{Radamson} Assuming
a similar profile for the valence-band, we are led to the conclusion
that interband RT should not play a significant role, at least at
moderate electric fields. Hence, the scalar Hamiltonian (\ref{eq1})
suffices to describe the electronic structure of the diode. In
addition, since we are close to the conduction-band edge, parabolic
subbands hold, as we assumed previously. To elucidate whether the
conduction-band modulation can support narrow quasi-bound states, we
have numerically evaluated the transmission coefficient at zero bias
$\tau(E,0)$ as a function of the incident energy and results are plotted
in Fig.~\ref{fig3}. From this figure it becomes clear that four
resonances (quasi-bound states) appear below the top of the potential.
The levels are nearly equally spaced in this case: This is easy to
understand if we notice that the potential profile between the two
$\delta$-layers is almost parabolic. Interestingly, we thus have
demonstrated that $\delta$-doped diodes could be used to achieve equally
spaced peaks in the collector characteristics, in analogous way to
parabolic wells formerly proposed by Capasso and Kiehl.\cite{Capasso}
The tunneling current density at a given temperature $T$ for the diode
sketched in Fig.~\ref{fig1} can be calculated within the
stationary-state model from
\begin{mathletters}
\label{eq2}
\begin{equation}
j(V)={m^*ek_BT\over 2\pi^2\hbar^3}\,\int_0^\infty\> \tau(E,V)N(E,V)\,dE,
\label{eq2a}
\end{equation}
where $N(E,V)$ accounts for the occupation of states to both sides of
the device, according to the Fermi distribution function, and it is
given by
\begin{equation}
N(E,V)=\ln\left(\frac{1+\exp[(E_F-E)/k_BT]}{1+\exp[(E_F-E-eV)/k_BT]}
\right).
\label{eq2b}
\end{equation}
\end{mathletters}
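The integral in eqs.~(\ref{eq2}) can be evaluated by simple quadrature; the sketch below (energies in eV, Fermi level and transmission function supplied by the caller, overall prefactor omitted, and the function names ours) illustrates the structure of the calculation:

```python
import numpy as np

kB = 8.617333262e-5                    # Boltzmann constant, eV/K

def supply(E, V, EF, T):
    # occupation factor N(E, V) of eq. (2b), written overflow-safely; energies in eV
    kT = kB * T
    return np.logaddexp(0.0, (EF - E) / kT) - np.logaddexp(0.0, (EF - E - V) / kT)

def current_integral(tau, V, EF=0.0, T=300.0, Emax=1.0, n=20001):
    # energy integral of eq. (2a) by the trapezoidal rule; prefactor omitted
    E = np.linspace(0.0, Emax, n)
    f = tau(E, V) * supply(E, V, EF, T)
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * (E[1] - E[0]))

# far below the Fermi sea the occupation factor saturates at N -> eV / k_B T
kT = kB * 77.0
assert abs(supply(-0.5, 0.1, 0.0, 77.0) - 0.1 / kT) < 1e-6
# with tau = 1 the integral is positive and increases with bias
j1 = current_integral(lambda E, V: np.ones_like(E), 0.1)
j2 = current_integral(lambda E, V: np.ones_like(E), 0.2)
assert 0.0 < j1 < j2
```

Feeding a resonant $\tau(E,V)$ from the transfer-matrix calculation into `current_integral` produces the NDR peaks discussed next.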
We show in Fig.~\ref{fig4} the computed $j-V$ characteristics at
$T=77\,$K and at room temperature. The curves have been obtained taking
the Fermi energy at the conduction-band edge away from the
$\delta$-layers. Other values of the Fermi energy simply modify the
scale of $j$, keeping its shape unchanged. The main NDR feature,
also observable at room temperature, appears at about $0.22\,$V,
which is very close to experimental results of Sardela {\em et al.}
\cite{Sardela}
Note that our computation predicts two separated peaks in the $j-V$
characteristics at around $0.22\,$V, while experiments show only a single,
broader peak. Inelastic scattering mechanisms, not included in our
analysis, are known to
cause a broadening of the intrinsic level width, and the
amount of inelastic \cite{Booker} broadening makes the two
separated resonances
merge into a single one, in agreement with the experimental results. Moreover,
there exist smaller peaks at about $0.10\,$V and $0.15\,$V, arising from
the lower resonances of the well. In experiments \cite{Sardela} such
small peaks are almost unnoticeable, probably because we are
overestimating their height due to the absence of inelastic effects in our
model, which, of course, tend to disrupt coherent tunneling. The
theoretical peak-to-valley ratio of the main NDR feature is $2.8$ at
$77\,$K, larger than the experimental value $1.4$ at the same
temperature. The reason why the
theoretical calculation predicts larger
peak-to-valley ratios is that sequential tunneling is not being
considered.
\section{Summary}
We have proposed a simple model to explain RT through
$\delta$-doped diodes grown by Si-molecular beam epitaxy. The
conduction-band modulation is obtained by means of the TF approach and
the corresponding electronic states are described by a scalar
Hamiltonian. Our results show that NDR effects are due to
conventional resonant tunneling process in vertical transport, whereas
interband tunneling does not give rise to significant contributions.
The obtained results are in excellent agreement
with recent measurements by Sardela {\em et
al.}\cite{Sardela} This success suggests that our approach, being very
simple and computationally inexpensive, may be very useful in dealing
with semiconductor nanostructures.
\acknowledgments
F.\ D-A.\ acknowledges support from UCM through project PR161/93-4811.
A.\ S.\ acknowledges partial support from C.I.C.\ y T.\ (Spain) through
project PB92-0248 and by the European Union Human Capital and Mobility
Programme through contract ERBCHRXCT930413.
\section{Introduction}
A wide variety of fascinating topics have been studied in nonlinear magnetism \cite{b.1,b.2,b.3} and other branches of nonlinear physics. These include solitons \cite{b.4,b.5,b.6,b.7}, period doubling \cite{b.8}, nonlinear combinations of frequencies \cite{b.9,b.10}, strange attractors, limit cycles, chaos \cite{b.11}, and routes to chaos through bifurcation processes \cite{b.12,b.13}. The field has recently been invigorated by the use of nanostructures, which have allowed very large microwave fields to be applied, with amplitudes in excess of several hundred Oe \cite{b.14,b.15}, and have allowed interesting nonlinear conversion between modes \cite{b.16,b.17,b.18}. Almost all of these topics deal with what is happening in the system after all transients have disappeared \cite{b.19,b.20}.
\\ \indent In this paper, in contrast, we theoretically examine the \textit{transient} precessional behavior appropriate for a magnetic nanoparticle. We find a surprising result that, in the nonlinear limit, the transient lifetime can be significantly extended by adding a strong oscillating driving field which is at a \textit{different} frequency than the natural frequency. This increase of the lifetime depends sensitively (and nonmonotonically) on the strength of the driving field. Near a critical driving field the lifetime can be extended by factors of over 10,000. We note that this behavior occurs in a nonlinear regime, but chaos is not required for this stabilization of the transient.
\\%Equations of Motion
\section{Transient magnetization dynamics: analysis and characterization}\subsection{Equations of motion}With a static magnetic field applied to the nanoparticle, there is a natural precessional frequency for the magnetization. If the magnetic moment has an initial position, away from equilibrium, the system will exhibit transient behavior in that the magnetization will precess and at the same time decay toward its equilibrium direction as a result of magnetic damping. The theoretical calculations start with the equations of motion for a macrospin, appropriate, for example, for a small magnetic particle with a diameter smaller than the exchange length so that the entire system is in a single domain. For instance, the exchange length in iron is about 2 nm. The equation of motion for the magnetization~\cite{b.21}, $\vectr M$, is given by
\begin{equation}
\label{e.1}
\frac{\upd \vectr M}{\upd t}=\frac{\gamma}{1+\alpha^2}\left(\vectr M\times\vectr H - \alpha\frac{\vectr M \times (\vectr M \times \vectr H)}{M}\right)~.
\end{equation}
Here $\gamma$ is the gyromagnetic ratio, $\alpha$ is a dimensionless damping parameter, and the magnetic field, $\vectr H$, is a sum of a static field in the $\hat{\vectr z}$ direction, demagnetizing fields, and a driving field in the $\hat{\vectr x}$ direction, oscillating with a frequency $\omega$, namely
\begin{equation}
\label{e.2}
\vectr H = H_0~\hat{\vectr z} - \vectr N \cdot \vectr M + h_d \cos{\omega t}~\hat{\vectr x}~.
\end{equation}
We use a demagnetizing tensor appropriate to a sphere or a cube given by
\begin{equation}
\label{e.3}
\vectr N_{\alpha\beta} = \frac{4\pi}{3} \boldsymbol{\delta}_{\alpha\beta}~
\end{equation}
where $\boldsymbol{\delta}_{\alpha\beta}$ is the second rank Kronecker delta tensor.
\\%Limiting cases
\subsection{Limiting cases}
In the absence of damping or a driving field the natural frequency of precessional motion about the $z$-axis is
\begin{equation}
\label{e.4}
\omega_0 = \gamma H_0~.
\end{equation}
As we will see, the transient precession will occur at this frequency in the linear limit, but will be shifted down slightly in the nonlinear case. Applying a driving field at this frequency can speed up magnetization reversal in a nanoparticle \cite{b.22}.
In the absence of a static field and damping, the motion of the magnetization has an analytical solution. We start the system with $\vectr{M}$ in the $y$-$z$ plane, and with the driving field along the $x$-axis. Define
\begin{equation}
\label{e.5}
\xi \equiv \left(\frac{\gamma h_d}{\omega}\right)\sin{\omega t}~.
\end{equation}
Then,
\begin{equation}
\label{e.6}
M_y(t) = M_{y0} \cos \xi + M_{z0} \sin \xi
\end{equation}
\begin{equation}
\label{e.7}
M_z(t) = M_{z0}\cos \xi - M_{y0} \sin \xi~.
\end{equation}
where $M_{y0}$ and $M_{z0}$ are the initial values for the components of the magnetization. Note that the motion in this limit, although nonlinear, is not chaotic.
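The closed form (\ref{e.5})-(\ref{e.7}) can be checked against a direct fourth-order Runge-Kutta integration of eq.~(\ref{e.1}) with $\alpha = 0$ and $H_0 = 0$; note that for the isotropic tensor (\ref{e.3}) the demagnetizing field is parallel to $\vectr M$ and drops out of both cross products. The sketch below uses dimensionless illustrative parameter values:

```python
import numpy as np

gamma, hd, w = 1.0, 2.0, 3.0      # illustrative dimensionless values
My0, Mz0 = 0.6, 0.8               # start in the y-z plane, |M| = 1

def rhs(t, M):
    # eq. (1) with alpha = 0 and H = h_d cos(wt) x_hat only
    H = np.array([hd * np.cos(w * t), 0.0, 0.0])
    return gamma * np.cross(M, H)

M, t, dt = np.array([0.0, My0, Mz0]), 0.0, 1e-4
for _ in range(20000):            # integrate to t = 2
    k1 = rhs(t, M)
    k2 = rhs(t + dt / 2, M + dt / 2 * k1)
    k3 = rhs(t + dt / 2, M + dt / 2 * k2)
    k4 = rhs(t + dt, M + dt * k3)
    M = M + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

xi = (gamma * hd / w) * np.sin(w * t)                             # eq. (5)
assert abs(M[1] - (My0 * np.cos(xi) + Mz0 * np.sin(xi))) < 1e-8   # eq. (6)
assert abs(M[2] - (Mz0 * np.cos(xi) - My0 * np.sin(xi))) < 1e-8   # eq. (7)
```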
\\%Time Evolution
\subsection{Time evolution}
We now examine the time evolution of the system for different driving field amplitudes. The system is driven at a frequency of $\omega = 1$ GHz from an initial position given by $M_{y0}=M_{z0}=M_s/\sqrt 2$. Because the system starts in a nonequilibrium configuration, there is a transient response before the system settles into equilibrium precession at a frequency matching the driving field frequency. The transient involves the time decay of precession at the natural frequency $\omega_0 = 0.29$ GHz, obtained from eq.~(\ref{e.4}).
We have used a variety of numerical integration schemes to solve the differential equation for the motion of the magnetization, eq.~(\ref{e.1}). These include 2nd and 4th order Runge-Kutta and LSODA. All the techniques yielded equivalent results. Unless otherwise stated, the parameters for our calculation are $H_0=100$ Oe, $M_s = 1.67$ $\times10^3$ G (saturation magnetization appropriate for Fe), $\gamma = 1.83\times10^{7}$ rad/(s$\,$Oe), and $\alpha = 0.01$.
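A minimal sketch of such an integrator for the full equation of motion, eqs.~(\ref{e.1})--(\ref{e.3}), is given below; it checks two structural properties of the LLG dynamics, namely that $|\vectr M|$ is conserved and that, without a drive, damping relaxes $\vectr M$ toward the static field (the unit convention for $\gamma$ is our assumption):

```python
import numpy as np

# Full LLG right-hand side, eqs. (1)-(3).  For a sphere the demagnetizing
# term -(4*pi/3) M is parallel to M and drops out of the cross products.
GAMMA, ALPHA = 1.83e7, 0.01        # rad/(s*Oe) (assumed units), dimensionless
H0, M_S = 100.0, 1.67e3            # Oe, G

def llg(t, m, h_d=0.0, omega=2*np.pi*1e9):
    h = np.array([h_d*np.cos(omega*t), 0.0, H0]) - (4*np.pi/3)*m
    mxh = np.cross(m, h)
    return GAMMA/(1 + ALPHA**2)*(mxh - ALPHA*np.cross(m, mxh)/M_S)

m = np.array([0.0, M_S/np.sqrt(2), M_S/np.sqrt(2)])
t, dt = 0.0, 1e-12
mz0 = m[2]
for _ in range(10000):             # simple fixed-step RK4 over 10 ns, no drive
    k1 = llg(t, m); k2 = llg(t + dt/2, m + dt/2*k1)
    k3 = llg(t + dt/2, m + dt/2*k2); k4 = llg(t + dt, m + dt*k3)
    m, t = m + dt/6*(k1 + 2*k2 + 2*k3 + k4), t + dt

print(np.linalg.norm(m)/M_S)       # stays 1: the LLG form conserves |M|
print(m[2] > mz0)                  # damping pushes M toward the +z field axis
```

Both cross-product terms in the LLG equation are orthogonal to $\vectr M$, so the magnitude is conserved exactly and any drift measures the integrator error.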
In fig.~\ref{f.1}(a), where there is no driving field, we see a typical time evolution of damped oscillatory behavior. The decay time is about 70 ns. Figure~\ref{f.1}(b) shows the behavior of $M_y(t)$ with a moderate driving field of 100 Oe. In this limit, it should first be noted that the transient (natural) frequency is reduced very slightly from $\omega_0$. Second, one can also see that the system comes into equilibrium with the driving field frequency. Since the driving frequency is substantially higher than $\omega_0$, the system's evolution toward equilibrium is exhibited by the transition from the widely spaced time-trace (natural-frequency behavior) toward the dense, closely packed region of the time-trace (driven high-frequency behavior). The time it takes for the natural frequency to decay is once again about 70 ns.\\
Finally in fig.~\ref{f.1}(c), where a very strong driving field is applied, we see an interesting behavior where both the frequency of the transient (near $\omega_0$) and of the driving field ($\omega$) are seen.
\\%Figure 1
\begin{figure}
\centerline{\includegraphics[width=0.48\textwidth]{figure1.eps}}
\caption{Time evolution of the normalized $M_y$ value for three different driving field amplitudes. The natural frequency of the transient is 0.29 GHz and the frequency of the driving field is always 1 GHz. Fourier transforms have been applied in 100 ns intervals. For (a) and (b), the decay time of the transient is approximately 70 ns. For (c), the transient has shifted slightly in frequency and has a much longer lifetime as the amplitude of the driving field is increased.}
\label{f.1}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figure2.eps}}
\caption{Fourier transform of $M_y(t)$ for different driving field amplitudes during different approximate time intervals. In panels (a) and (b) we see the low frequency peak decays with time. In panel (c) where the driving amplitude is large, the transient peak is essentially stabilized and does not decay in the time shown here.}
\label{f.2}
\end{figure}
\subsection{Fourier Analysis}
To quantify the transient time behavior of the magnetization, we do a series of Fourier transforms on $M_y(t)$, each transform encompassing a different time block (approximately 0--100 ns, 100--200 ns, etc). These transforms will identify the major frequencies present in a given time block and the amplitude of the excitations at the different frequencies.
The results are shown in fig.~\ref{f.2}. In fig.~\ref{f.2}(a), we see the Fourier transform as a function of frequency when $h_d = 0$ Oe, for different time blocks. There is a single peak at the natural frequency, which diminishes in height as time is increased. Figure~\ref{f.2}(b) shows the results when $h_d=100$ Oe. Now there are two peaks, one at the driving frequency and one very slightly below the natural frequency. Again the peak near the natural frequency is reduced as time is increased, indicating the typical decay in a damped linear system. The situation is quite different in fig.~\ref{f.2}(c) where the driving field is given by $h_d=410$ Oe. First, the frequency of the transient is shifted down further by the nonlinear interaction. More importantly, the height of the peak representing the transient now remains virtually constant over the time span investigated, indicating a substantially increased lifetime.
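The procedure can be illustrated on a synthetic signal with the same qualitative content: a transient at 0.29 GHz decaying with the 70 ns time constant quoted above, plus a steady driven term at 1 GHz (the relative amplitude is an assumption):

```python
import numpy as np

# Synthetic stand-in for M_y(t): a decaying transient at the natural frequency
# plus a steady driven response (amplitudes assumed for illustration).
f0, fd, tau = 0.29e9, 1.0e9, 70e-9
dt, n = 0.05e-9, 2000                      # one 100 ns block sampled at 20 GS/s

def block_spectrum(t0):
    t = t0 + np.arange(n)*dt
    y = np.exp(-t/tau)*np.sin(2*np.pi*f0*t) + 0.5*np.sin(2*np.pi*fd*t)
    return np.fft.rfftfreq(n, dt), np.abs(np.fft.rfft(y))/n

freq, amp_early = block_spectrum(0.0)      # ~0-100 ns block
_,    amp_late  = block_spectrum(300e-9)   # ~300-400 ns block

i0  = np.argmin(np.abs(freq - f0))         # bin nearest the natural frequency
id_ = np.argmin(np.abs(freq - fd))         # bin at the driving frequency
print(amp_early[i0] > 5*amp_late[i0])      # transient peak decays between blocks
print(amp_late[id_] > amp_late[i0])        # driven peak persists
```

With a 100 ns window the frequency resolution is 10 MHz, so both peaks fall on (or very near) exact bins and their block-to-block evolution can be read off directly, as in fig.~\ref{f.2}.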
\\%Figure 3
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figure3.eps}}
\caption{The decay time as a function of the amplitude of the driving field ($h_d$). In the upper panel the decay time \textbf{increases} as the driving field amplitude is increased to 410 Oe. In the lower panel the decay time \textbf{decreases} as the driving field amplitude is increased above 410 Oe.}
\label{f.3}
\end{figure}
\subsection{Amplitude decay of Fourier Transform}
The decay in time for different driving field amplitudes is illustrated in fig.~\ref{f.3}, where we plot the amplitude of the natural frequency mode as a function of time for different values of the driving field. Figure~\ref{f.3}(a) shows results for driving fields up to 410 Oe. We see a remarkable and tunable increase in the duration of the transient, by about a factor of 100 for the range of values shown. When the driving fields are increased above the value of 410 Oe, fig.~\ref{f.3}(b), the transient lifetime is now reduced, returning to values close to that found in the linear case when $h_d$ is small.
\\ \indent The features illustrated here are quite robust. For example, the general behavior of transient lifetime is essentially unaffected by all the following:
\\1) The initial position of the magnetization;
\\2) The initial phase of the driving field, i.e. the value of $\phi$ if the driving field is proportional to $\cos{\omega t+\phi}$;
\\3) The value of the damping constant, which can even be zero;
\\4) Small changes in the structure, which cause a deviation from the spherical or cubic symmetry in the demagnetization tensor. (A sphere or cube has no nonlinear terms involving the demagnetization tensor. These terms do appear, however, for small changes in the structure, but they are not critical to the effect seen here.)
In fact, the lifetime of the transient can be extended substantially by small variations in the amplitude of the driving field. In our example above, a driving field of 410 Oe gives a decay time of 1,300 ns. Driving fields of 414 Oe and 414.7 Oe produce decay times of 23,600 and 1,480,000 ns respectively. We see a dramatic increase in lifetime as the driving field approaches the critical value.
\\%Change in equilibrium
\subsection{A change in equilibrium direction}
We now explore the reason for the extended lifetime. It is associated with a transition in the equilibrium direction induced by the driving field as seen in fig.~\ref{f.4}. For low values of the driving field, the magnetization precesses around the static field, the $+\hat{\vectr z}$ direction. As the driving field is increased, the equilibrium direction eventually shifts to values having a component along the $-\hat{\vectr{z}}$ direction. The dramatic effect that the change in orientation has on the amplitude of the transient can be seen in fig.~\ref{f.5}. Here we plot the FFT amplitude of $M_y(t)$ for the time interval of 2000--2100 ns. Away from the transition the amplitude is essentially zero. But there is a significant effect over a range of driving field amplitudes, from about $h_d = 380$ Oe to $h_d = 450$ Oe.
The situation is similar in many respects to a driven pendulum. In that case, there is only one energy minimum, when the pendulum is at the bottom position. However, with a large driving field, the pendulum motion is nonlinear and it can oscillate about a new equilibrium position where the pendulum is at the uppermost position \cite{b.23,b.24}. The physical mechanism for the stabilization of the ``inverted pendulum" has had an extensive discussion in the literature \cite{b.25}.
\\%Figure 4
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figure4.eps}}
\caption{Plot of the average value of the polar angle, $\theta$, as a function of the driving field amplitude. The average is found after all transients have died down. Near the critical value of $h_d = 410$ Oe there is a transition where the component of the magnetization changes from pointing along the $+z$ axis to pointing along the $-z$ axis.}
\label{f.4}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figure5.eps}}
\caption{Fourier Transform of $M_y(t)$ as a function of the driving field, $h_d$. The transform is applied to a fixed interval of 2000--2100 ns. The peak amplitude of the Fourier Transform is centered around the critical driving field, $h_d \approx 410$ Oe, which is associated with a change in equilibrium direction.}
\label{f.5}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figure6.eps}}
\caption{The transverse components of $\vectr M$ as a function of the driving field. The solid lines indicate the predictions of linear theory, and the large dots show the results of a full nonlinear numerical calculation. The condition $\vectr M_y \approx \vectr M_s$ gives a reasonable approximation for the driving field at which the equilibrium direction changes.}
\label{f.6}
\end{figure}
\subsection{Finding the critical field}
We can develop an approximate analytic expression for the critical field necessary for this phase transition. Because the transition involves a change from precession in the upper half space ($+\hat{\vectr z}$) to the lower half space ($-\hat{\vectr z}$), one might expect that the transition occurs near a critical amplitude: the point at which the average value of $M_z$ becomes zero, or equivalently where the magnitude of $M_y$ approaches $M_s$. We can estimate this using the linearized equations of motion for $M_x$ and $M_y$. One finds by solving eqs.~(\ref{e.1}) and (\ref{e.2})
\begin{equation}
\label{e.8}
M_x = \left(\frac{\gamma^2 H_0 M_s}{\gamma^2 H_0^2-\omega^2}\right)h_d
\end{equation}
\begin{equation}
\label{e.9}
M_y = i\left(\frac{\gamma M_s \omega}{\gamma^2 H_0^2 - \omega^2}\right)h_d
\end{equation}
We neglect the $\gamma^2 H_0^2$ term in the denominators because it is small compared to $\omega^2$ at the driving frequency used here. Setting the magnitude of $M_y$ equal to $M_s$ we obtain the condition for the critical driving field
\begin{equation}
\label{e.10}
h_c = \frac{\omega}{\gamma} .
\end{equation}
With our parameters, we find the expected transition field to be about 340 Oe, somewhat lower than the 410 Oe we find numerically.
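This estimate can be reproduced from numbers already quoted in the text, using only the ratio of the driving to the natural frequency:

```python
# Evaluate eq. (10) using only quantities quoted in the text.  Since
# h_c = omega/gamma and omega_0 = gamma*H_0, the critical field is
# h_c = (omega/omega_0) * H_0, independent of the 2*pi convention.
omega, omega_0, H_0 = 1.0, 0.29, 100.0   # GHz, GHz, Oe
h_c = (omega/omega_0)*H_0
print(round(h_c))                        # ~345 Oe, i.e. "about 340 Oe"
```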
It is somewhat surprising that this estimate works as well as it does. One reason is that the numerical calculations (shown in fig.~\ref{f.6} by dots) demonstrate that eqs.~(\ref{e.8}) and (\ref{e.9}) are reasonable predictors of the maximum amplitudes of $M_x$ and $M_y$ (shown in fig.~\ref{f.6} by lines) for values of $h_d$ as large as 275 Oe. The analytical form can explain some general trends we find in the numerical data. For example, we find that doubling the driving frequency requires a doubled critical field amplitude to reach the longest lifetimes, consistent with eq.~(\ref{e.10}). We note that $M_x$ is smaller than $M_y$ because the driving field is in the $\hat{\vectr x}$ direction, so the critical condition $M_y \approx M_s$ from the linear prediction is the correct one to use, as it is reached at a lower driving field. If the driving frequency were smaller than the resonance frequency, the value of $M_x$ would generally be larger than $M_y$, as can be seen from eqs.~(\ref{e.8}) and (\ref{e.9}).
The idea that the transient lifetime is tunable due to a change in equilibrium direction explains why this lifetime is insensitive to so many parameters. We drive the system off-resonance, so damping is not important. Furthermore, the transition occurs near a large amplitude precession, but the amplitude (in the linear approximation) is independent of initial phase and initial position. We note that the rapid transition from one equilibrium state to another seems to occur only for the case where the driving frequency is larger than the natural frequency.
\\%Conclusion
\section{Conclusion}
The general idea developed here is that a transient oscillation at a natural frequency may be stabilized by a driving field at a different frequency, particularly near a change in equilibrium position. This phenomenon could be detected using the micro-SQUID experiment of ref.~\cite{b.22}. The concept of extending the lifetime of a transient can occur in the chaotic regime \cite{b.26,b.27,b.28}, but here we have developed the result in a \textit{nonchaotic} region. The slow decay of the transient is surprising. In part it seems to be associated with the fact that the entire structure takes a long time to settle into an equilibrium position near the sharp transition. However, our numerical calculations show that the transient decay time is much longer than the equilibration time.
It is interesting to note that similar results are not found in other nonlinear physical systems such as the nonlinear driven pendulum in one dimension, the spherical pendulum, or nonlinear spring systems (with nonlinear forces that are either quadratic or cubic in the displacement). Although these cases do exhibit a transition from one equilibrium state to another, the lifetime of the natural frequency mode does not, in fact, change. Thus our results naturally lead us to the intriguing question: what makes the nonlinear magnetization equations unique relative to many other nonlinear systems?
\acknowledgments
This work was supported by UCCS Letters Arts and Sciences Faculty/Student Research Grant.
\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\makeatother
\newcommand*{\problem}{%
\needspace{2\baselineskip}%
\section[]{{\color{lightgray}\hrule height 2pt}\hfill}
\subsection[]{{\color{lightgray}\hrule height 1.5pt}\hfill}
}
\begin{document}
\vspace{5mm}
\begin{center}
\vglue .10in
{\Large\bf { Starobinsky inflation from new-minimal supergravity }
}
{\ }
\\[0.3cm]
{\large Fotis Farakos}
\\[0.2cm]
\vspace{.3cm}
{\normalsize {\it Institute for Theoretical Physics, Masaryk University, \\611 37 Brno, Czech Republic}}
\vspace{.3cm}
{\normalsize E-mail: fotisf@mail.muni.cz }
\end{center}
\noindent
{\it Proceedings for DISCRETE 2014
\\
2-6 December 2014
\\
King's College London, Strand Campus }
\vspace{.4cm}
{\small \noindent \textbf{Abstract} \\[0.3cm]
In the new-minimal supergravity formulation we present the embedding of the $R+R^2$ Starobinsky model of inflation.
Starting from the superspace action we perform the projection to component fields
and identify the Starobinsky model in the bosonic sector.
Since there exist no other scalar fields,
this is by construction a single field model.
This higher curvature supergravity also gives rise to a propagating massive vector.
Finally we comment on the issues of higher order corrections and initial conditions.
\vskip 0.5cm
\def\arabic{footnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\def\left[{\left[}
\def\right]{\right]}
\def\left\{{\left\{}
\def\right\}{\right\}}
\def\partial{\partial}
\def\Sigma{\Sigma}
\def\nonumber{\nonumber}
\section{Introduction and discussion}
The Planck collaboration's constraints on inflation
\cite{Ade:2013uln,Ade:2015lrj} have restricted the inflationary models
to those which are characterized by a plateau potential for the inflaton field.
More specifically, if the perturbations during inflation \cite{Lyth:1998xn} originate from the same field that drives inflation,
these restrictions can be quantified by the following constraints on the spectral index: $n_s = 0.9655 \pm 0.0062$
and the tensor-to-scalar ratio: $r < 0.12$.
A model which lies in the heart of the data is the pure gravitational Starobinsky model \cite{Starobinsky:1980te}
\begin{eqnarray}
e^{-1} {\cal L}= \frac{M_P^2}{2} R + \frac{M_P^2}{12 m^2} R^2 ,
\end{eqnarray}
which in the Einstein frame describes a real scalar field minimally coupled to gravitation,
with a scalar potential given by
\begin{eqnarray}
V_{R^2} = \frac34 m^2 M_P^2 \left( 1 - e^{- \sqrt{\frac23} \phi / M_P} \right)^2 .
\end{eqnarray}
This model gives $n_s -1 \simeq -2/N $ and $r \simeq 12/N^2$.
The Planck data constrain $m \simeq 1.3 \times 10^{-5} M_P$.
Discussions on the generic properties of models with plateau
potentials can be found in \cite{Kehagias:2013mya,Biagetti:2015tja}.
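For completeness, these expressions follow from the standard slow-roll parameters of $V_{R^2}$. To leading order in $e^{-\sqrt{\frac23} \phi/M_P}$ one finds
\begin{eqnarray}
\epsilon = \frac{M_P^2}{2} \left( \frac{V'_{R^2}}{V_{R^2}} \right)^2 \simeq \frac43 \, e^{-2\sqrt{\frac23} \phi/M_P}
\ , \quad
N \simeq \frac34 \, e^{\sqrt{\frac23} \phi/M_P}
\ , \quad
\eta = M_P^2 \, \frac{V''_{R^2}}{V_{R^2}} \simeq - \frac{1}{N} \, ,
\end{eqnarray}
so that $r = 16 \epsilon \simeq 12/N^2$ and $n_s - 1 = 2\eta - 6\epsilon \simeq -2/N$.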
Supergravity \cite{Ferrara:1988qxa,Wess:1992cp}, as the low energy limit of string theory,
is essentially the appropriate framework to study high energy gravitational phenomena like inflation.
The minimal 4D N=1 supergravity multiplet contains as physical fields the graviton,
with 6 bosonic off-shell degrees of freedom,
and the gravitino,
with 12 off-shell fermionic degrees of freedom.
The remaining 6 off-shell bosonic degrees of freedom are auxiliary and can be distributed as follows
\begin{itemize}
\item Old-minimal supergravity auxiliary fields sector \cite{Ferrara:1978em,Stelle:1978ye}:
a complex scalar $M$ (2 DOF) and a real vector $b_m$ (4 DOF).
\item New-minimal supergravity auxiliary fields sector \cite{Sohnius:1981tp}:
a gauge vector for the R-symmetry $A_m$ (3 DOF) and a gauge two-form $B_{mn}$ (3 DOF).
\end{itemize}
The existence of different minimal supergravities can be understood as arising from different solutions to the superspace
Bianchi identities, or as different choice of appropriate Wess-Zumino gauge for the gravitational multiplet,
or also as originating from the different compensating multiplets that break the
underlying superconformal theory to super-Poincar\'e.
The underlying dualities among the compensating multiplets
survive the gauge fixing and lead to equivalent couplings to matter \cite{Ferrara:1983dh},
but break down as soon as higher curvature terms are introduced.
Here we present the embedding of the
Starobinsky model of inflation in new-minimal supergravity \cite{Farakos:2013cqa}.
This higher curvature supergravity is on-shell equivalent to standard supergravity coupled to
a massive vector multiplet \cite{Cecotti:1987qe,Farakos:2013cqa,Ferrara:2013rsa,Ferrara:2014cca}.
The Starobinsky model of inflation is nevertheless accompanied by a series of open issues.
The first concerns the existence of possible higher order curvature corrections.
For example the $R^4$ terms \cite{Farakos:2013cqa}.
As we will see,
these terms spoil the plateau of the scalar potential and, if they are relatively large, Starobinsky inflation does not take place.
Therefore, one requires a hierarchy to hold during inflation
\begin{eqnarray}
\label{hir}
\frac{M_P^2}{m^2} R_{inf}^2 \gg M_P^2 R_{inf} \ \ , \ \ \frac{M_P^2}{m^2} R_{inf}^2 \gg \xi R_{inf}^4 \, ,
\end{eqnarray}
where $\xi$ is an appropriate parameter for the $R^4$ terms.
This hierarchy has no apparent justification, and a concrete answer as to why the $\xi R^4$ terms are expected to be small (and therefore pose no threat) is not known.
Proposals of why the hierarchy (\ref{hir}) is expected to hold
in a supergravity framework can be found in \cite{LopesCardoso:1992wv,Ferrara:2013kca}.
A second open issue, which is again related to the scale $m \sim 10^{-5} M_P$ is the initial conditions problem \cite{Goldwirth:1991rj,Ijjas:2013vea}.
If our universe exited the quantum gravity regime with an
energy density $\sim M_P^4$ \cite{Linde:2005ht,Linde:2007fr},
then due to the characteristic upper bound of the potential energy of the Starobinsky model $\sim 10^{-10} M_P^4$,
the total energy density has to be dominated by the kinetic contribution.
This leads to a need for an initial homogeneous patch of radius of the order
\begin{eqnarray}
r_{init} \sim 10^3 \, l_P,
\end{eqnarray}
which means rather special initial conditions.
A proposal of how this is ameliorated in a pure $R+R^2$ setup (supergravity or not) has been given in \cite{Dalianis:2015fpa}, which we also review here.
For a review on inflationary cosmology after the release of the Planck collaboration's results,
and for an approach on the initial conditions problem see also \cite{Linde:2014nna}.
Note that the embedding of the Starobinsky model has been also studied
in the old-minimal supergravity \cite{Ferrara:1978rk,Cecotti:1987sa,Kallosh:2013lkr,Farakos:2013cqa,Ferrara:2013wka,Dalianis:2014aya,Terada:2014uia},
in no-scale supergravity \cite{Ellis:2013xoa,Ellis:2013nxa,Ellis:2014gxa,Lahanas:2015jwa},
in the linearized non-minimal (20/20) supergravity \cite{Farakos:2015hfa},
and also in a generic supergravity setup via gravitino condensates \cite{Alexandre:2013nqa,Alexandre:2014lla}.
\section{$R+R^2$ new-minimal supergravity}
The new-minimal supergravity \cite{Sohnius:1981tp} is the supersymmetric theory of the gravitational multiplet
\begin{equation}
e^a_m\, ,~~\psi_m^\alpha \, , ~~A_m\, , ~~ B_{mn}\ .
\end{equation}
The first two fields are the vierbein and its superpartner the gravitino,
a spin-$\frac{3}{2}$ Rarita-Schwinger field.
The last two fields are auxiliaries.
The real auxiliary vector $A_m$ gauges the $U(1)_\text{R}$ chiral symmetry.
The auxiliary $B_{mn}$ is a real two-form appearing only through its dual field strength $H_m$,
which satisfies $\hat D^a H_a =0$ for the supercovariant derivative $\hat D^a$.
We will use superspace techniques to guarantee that our component form
Lagrangians are supersymmetric.
The interested reader may consult for example \cite{Ferrara:1988qxa} where a treatment of
the new-minimal superspace is given.
The new-minimal supergravity free Lagrangian is given by
\begin{equation}
\label{sugra}
{\cal L}_0= - 2 M^2_P \int d^4 \theta E V_{\text{R}} .
\end{equation}
Here $V_{\text{R}}$ is the gauge multiplet of the R-symmetry,
which (in the appropriate WZ gauge) contains the auxiliary fields in its vector component
\begin{eqnarray}
- \frac{1}{2} [\nabla_\alpha , \bar \nabla_{\dot \alpha} ] V_{\text{R}} | = A^-_{\alpha \dot \alpha}
= A_{\alpha \dot \alpha} - 3 H_{\alpha \dot \alpha} ,
\end{eqnarray}
and the Ricci scalar in its highest component
\begin{eqnarray}
\frac{1}{8} \nabla^\alpha \bar \nabla^2 \nabla_\alpha V_{\text{R}} |= - \frac{1}{2} \left( R + 6 H^a H_a \right) .
\end{eqnarray}
The symbol $E$ stands for the super-determinant of new-minimal supergravity.
The bosonic sector of Lagrangian (\ref{sugra}) is
\begin{equation}
\label{freesugra}
{\cal L}_0=
M^2_P \, e \, \left( \frac{1}{2}
R + 2A_a H^a-3H_a H^a\right) .
\end{equation}
It is well known from linearized supergravity \cite{Cecotti:1987qe} that the $R^2$ term is accommodated inside
\begin{eqnarray}
\label{R2}
{\cal L}_{R^2} = \frac{\alpha}{4} \int d^2 \theta \, {\cal E} W^2(V_{\text{R}}) + c.c.
\end{eqnarray}
with the standard definition of the field strength
\begin{eqnarray}
W_{\alpha}(V_{\text{R}}) = -\frac{1}{4} \bar \nabla^2 \nabla_\alpha V_{\text{R}},
\end{eqnarray}
and ${\cal E}$ the chiral density.
In component form, the bosonic sector of (\ref{R2}) reads
\begin{eqnarray}
e^{-1} {\cal L}_{R^2} = \frac{\alpha}{8} \left(R+6H^2 \right)^2 - \frac{\alpha}{4} F^2(A^-) .
\end{eqnarray}
The Starobinsky model of inflation in new-minimal supergravity (now we set $M_P=1$) is \cite{Farakos:2013cqa}
\begin{eqnarray}
\nonumber
{\cal L} = -2 \int d^4 \theta \, E V_{\text{R}} + \frac{\alpha}{4} \int d^2 \theta \, {\cal E} W^2(V_{\text{R}}) + c.c.
\end{eqnarray}
with bosonic sector
\begin{eqnarray}
\label{star}
e^{-1} {\cal L}= \frac{1}{2} R+ 2A_a H^a-3H_a H^a
+ \frac{\alpha}{8} \left(R+6H^2\right)^2 - \frac{\alpha}{4} F^2(A^-) .
\end{eqnarray}
Indeed, theory (\ref{star}) describes $R + R^2$, but the curvature terms are mixed with the auxiliary field $H^a$.
To find the theory in the Einstein frame we proceed to integrate out the auxiliary fields.
The Lagrangian (\ref{star}) is classically equivalent to
\begin{eqnarray}
e^{-1} {\cal L}= \frac{1}{2} R+ 2A_a H^a - 3H_a H^a - 2 H^m \partial_m X
+ \frac{\alpha}{8} \Psi \left(R+6H^2\right) - \frac{\alpha}{32} \Psi^2 - \frac{\alpha}{4} F^2(A^-) ,
\end{eqnarray}
where now $H_m$ is an unconstrained real vector.
Indeed, integrating out the real scalar $X$ imposes the appropriate constraint on $H_m$, while integrating out $\Psi$ reproduces (\ref{star}).
Now we redefine the auxiliary $A_m$ as
\begin{eqnarray}
{\cal V}_m = A_m -3 H_m - \partial_m X,
\end{eqnarray}
and we find
\begin{eqnarray}
e^{-1} {\cal L}= \frac{1}{2}\left(1 + \frac{\alpha}{4} \Psi \right) R - \frac{\alpha}{4} F^2({\cal V})
+ 2{\cal V}_a H^a + 3 \left(1 + \frac{\alpha}{4} \Psi \right) H^2 - \frac{\alpha}{32} \Psi^2 .
\end{eqnarray}
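Explicitly, the $H_m$ equation of motion used in the next step reads
\begin{eqnarray}
H_a = - \frac{{\cal V}_a}{3 \left( 1 + \frac{\alpha}{4} \Psi \right)} \, ,
\end{eqnarray}
and substituting it back generates the ${\cal V}^2$ term of the Lagrangian that follows.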
The auxiliary $H_m$ has become quadratic and after it is integrated out we have
\begin{eqnarray}
e^{-1} {\cal L}= \frac{1}{2}\left(1 + \frac{\alpha}{4} \Psi \right) R - \frac{\alpha}{4} F^2({\cal V})
- \frac{\alpha}{32} \Psi^2 - \frac{1}{3} \frac{{\cal V}^2}{\left(1 + \frac{\alpha}{4} \Psi \right)} .
\end{eqnarray}
Note that the original $A_m$ has not only become propagating, but has also acquired a mass.
After rescaling the theory to go to the Einstein frame by
a conformal transformation
\begin{eqnarray}
e_m^{\ a} \rightarrow \frac{1}{ \sqrt{1 + \frac{\alpha}{4} \Psi }} \, e_m^{\ a} ,
\end{eqnarray}
we have (for $\Psi \rightarrow \Psi / \alpha$ and ${\cal V} \rightarrow {\cal V} / \sqrt \alpha$)
\begin{eqnarray}
e^{-1} {\cal L}= \frac{1}{2} R - \frac{1}{4} F^2({\cal V}) -\frac{3}{64 (1 + \frac{1}{4} \Psi )^2} \partial \Psi \partial \Psi
- \frac{1}{32 \alpha} \Psi^2 \frac{1}{ (1 + \frac{1}{4} \Psi )^2}
- \frac{1}{3 \alpha} \frac{{\cal V}^2}{\left(1 + \frac{1}{4} \Psi \right)^2} .
\end{eqnarray}
Finally, for
\begin{eqnarray}
\phi = \sqrt{\frac32} \, \text{ln} \left(1 + \frac{1}{4} \Psi \right),
\end{eqnarray}
we have (for $\frac1\alpha = 9 g^2$)
\begin{eqnarray}
\label{tot}
e^{-1} {\cal L}= \frac{1}{2} R - \frac{1}{4} F^2({\cal V}) - \frac12 \partial \phi \, \partial \phi
- \frac{9 g^2}{2} \left( 1 - e^{- \sqrt{\frac23} \phi} \right)^2
- 3 g^2 e^{- 2 \sqrt{\frac23} \phi} \, {\cal V}^2.
\end{eqnarray}
This is the dual form of the Starobinsky model.
Here we have reproduced it in a new-minimal supergravity framework \cite{Farakos:2013cqa}.
Notice that there is only one real propagating scalar field, and a massive vector.
From (\ref{tot}) one can calculate the slow-roll parameters and verify that the model is in perfect agreement
with the Planck data \cite{Ade:2013uln}.
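As a quick numerical illustration (a sketch in $M_P=1$ units; the overall scale $9g^2/2$ cancels in the slow-roll parameters, and $N=55$ e-folds is an assumed value):

```python
import numpy as np

# Slow-roll parameters of the plateau potential V ~ (1 - exp(-sqrt(2/3) phi))^2
# from the Lagrangian above; the overall normalization cancels out.
a = np.sqrt(2.0/3.0)
V   = lambda p: (1 - np.exp(-a*p))**2
dV  = lambda p: 2*a*np.exp(-a*p)*(1 - np.exp(-a*p))
d2V = lambda p: 2*a*a*np.exp(-a*p)*(2*np.exp(-a*p) - 1)

N = 55.0                          # assumed number of e-folds
phi = np.log(4.0*N/3.0)/a         # leading-order inversion of N(phi)
eps = 0.5*(dV(phi)/V(phi))**2
eta = d2V(phi)/V(phi)
ns, r = 1 - 6*eps + 2*eta, 16*eps
print(ns, r)   # n_s ~ 0.96, r ~ 0.004: well inside the Planck bounds
```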
\section{Open issues in the Starobinsky model}
In this part we review some known open issues of the Starobinsky model.
\subsection{Higher order corrections}
On top of the $R+R^2$ theory one could ask what is the impact of the higher order corrections.
We will consider here the $R^4$ terms.
The superspace Lagrangian for $R^4$ has the form \cite{Farakos:2013cqa}
\begin{eqnarray}
{\cal L}_{R^4} = 16 \xi \int d^4 \theta E \, W^2(V_{\text{R}}) \bar W^2(V_{\text{R}}) .
\end{eqnarray}
The full Lagrangian including the $R^4$ terms reads
\begin{eqnarray}
{\cal L} = -2 \int d^4 \theta \, E V_{\text{R}}
+ \left\{ \frac{\alpha}{4} \int d^2 \theta \, {\cal E} W^2(V_{\text{R}}) + c.c. \right\}
+ 16 \xi \int d^4 \theta E \, W^2(V_{\text{R}}) \bar W^2(V_{\text{R}}) .
\end{eqnarray}
During inflation only the curvature terms contribute; therefore we can work with the Lagrangian
\begin{eqnarray}
\label{r4}
e^{-1} {\cal L} = \frac12 R + \frac{\alpha}{8} R^2 + \xi R^4 .
\end{eqnarray}
The bosonic terms that we have ignored in writing (\ref{r4}) would only contribute
to the vector sector in the dual description (see \cite{Farakos:2013cqa,Ferrara:2013rsa,Ferrara:2013kca}).
We can then rewrite the theory with the use of a Lagrange multiplier $Z$ as
\begin{eqnarray}
e^{-1} {\cal L} = (\frac12 + Z) R + \frac{\alpha}{8} Y^2 + \xi Y^4 - Z Y.
\end{eqnarray}
Indeed, by integrating out $Z$ we find $Y=R$ and we get (\ref{r4}).
Now we proceed in the other direction and we integrate out $Y$.
The equation of motion for $Y$ gives
\begin{eqnarray}
Y^3 + \frac{\alpha}{16 \xi} Y - \frac{Z}{4 \xi} = 0,
\end{eqnarray}
which can be solved as
\begin{eqnarray}
Y(Z) = \frac13 \left( \frac{27}{8 \xi} Z + \frac12 \sqrt{\left( \frac{27}{4 \xi} Z \right)^2
+ 4 \left( \frac{3 \alpha}{16 \xi} \right)^3 } \right)^{\frac13}
- \frac{\alpha}{16 \xi} \left( \frac{27}{8 \xi} Z + \frac12 \sqrt{\left( \frac{27}{4 \xi} Z \right)^2
+ 4 \left( \frac{3 \alpha}{16 \xi} \right)^3 } \right)^{-\frac13} .
\end{eqnarray}
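As a check, the sketch below verifies numerically that this root indeed solves the cubic; the values of $\alpha$, $\xi$ and the sampled $Z$ are arbitrary illustrative choices:

```python
import numpy as np

# Verify that the closed-form root Y(Z) solves Y^3 + (alpha/16 xi) Y - Z/(4 xi) = 0.
alpha, xi = 12.0, 0.25   # arbitrary illustrative parameters

def Y_of_Z(Z):
    C = 27*Z/(8*xi) + 0.5*np.sqrt((27*Z/(4*xi))**2 + 4*(3*alpha/(16*xi))**3)
    return C**(1.0/3.0)/3.0 - (alpha/(16*xi))*C**(-1.0/3.0)

for Z in [0.5, 1.0, 4.0]:
    Y = Y_of_Z(Z)
    print(Z, Y**3 + alpha/(16*xi)*Y - Z/(4*xi))   # residuals at machine precision
```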
After integrating out $Y$, rescaling the metric and redefining $Z$ we find
\begin{eqnarray}
e^{-1} {\cal L} = \frac12 R - \frac12 \partial \phi \partial \phi - V(\phi),
\end{eqnarray}
with scalar potential
\begin{eqnarray}
\label{R4}
V(\phi) = \frac{3 \xi Y^4(Z) + \frac{\alpha}{8} Y^2(Z) }{4 (Z+\frac12)^2} \Big{|}_{Z = \frac12 e^{\sqrt{\frac23} \phi} -\frac12} .
\end{eqnarray}
The plot of the scalar potential (\ref{R4}) can be seen in Figure 1.
It is easy to see that for small $\xi$ values inflation is not ruined,
but larger $\xi$ values may pose a threat \cite{Farakos:2013cqa,Ferrara:2013kca}.
\begin{figure}[htp] \centering{
\includegraphics[scale=1.2]{plot1.pdf}}
\caption{The potential for the Starobinsky model in the dual description, including $R^4$ terms parameterized by $\xi$.
Here we have set $\alpha \sim 0.4 \times 10^{10} M_P^2$ as constrained by the Planck data.
One can see the characteristic plateau of the Starobinsky model (for $\xi=0$) at $V_{inf} \sim 1.3 \times 10^{-10} M_P^4.$ }
\end{figure}
\subsection{Initial conditions problem}
The common lore is that inflation started when our universe exited the quantum gravity regime with
energy densities \cite{Linde:2005ht,Linde:2007fr}
\begin{eqnarray}
\label{equi}
\frac12 (\partial \phi)^2 \lesssim V(\phi) \sim \frac12 \rho_{tot} \sim \frac12 M_P^4 .
\end{eqnarray}
In this case the potential energy dominates the total energy density and the accelerated expansion starts
even for a fundamentally small initial patch of Planck length radius $l_P$.
The essential ingredient for inflation in this setup is the existence of a nearly constant event horizon distance,
also of size $\sim l_P$. The importance of the existence of the event horizon is that it protects the initial smooth patch
from the outside inhomogeneities with nonzero gradients.
If there was no event horizon, these inhomogeneities would infest the initial smooth patch
and spoil inflation \cite{Goldwirth:1991rj}.
For the Starobinsky inflation we have
\begin{eqnarray}
V_{inf} = \frac34 m^2 M_P^2 \sim 10^{-10} M_P^4 \ll M_P^4.
\end{eqnarray}
For the total energy density when our universe exits the quantum gravity regime to be $\sim M_P^4$,
one has to assume
\begin{eqnarray}
V(\phi) \ll \frac12 \dot \phi^2 \sim \rho_{tot}.
\end{eqnarray}
In other words, a kination regime dominated by the kinetic energy, $V(\phi)\ll \frac12 \dot{\phi}^2 \sim \rho_{tot}$, preceded the inflationary phase. In such a regime the scale factor grows like $t^{1/3}$ until the plateau potential dominates, yielding an event horizon of size
\begin{eqnarray}
d_\text{event}(t \sim t_{P}) \sim 10^3 H^{-1}_P ,
\end{eqnarray}
where $H^{-1}_P \equiv \sqrt{3}\, l_P$.
Hence, one has to expel the density inhomogeneities at least $10^3$ Hubble scales further if the Universe has emerged from the Planck densities.
The corresponding initially homogeneous volume is at least $10^9$ times bigger than $H_P^{-3}$ which means that, initially, one billion causally disconnected regions were much similar without any dynamical reason.
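The $10^3$ factor can be reproduced by a short order-of-magnitude computation in Planck units, keeping only the dominant de Sitter contribution to the event horizon, redshifted back through the kination era (a rough sketch, not the full integral):

```python
# Order-of-magnitude reconstruction of d_event ~ 10^3 H_P^{-1} (Planck units).
rho_P, V_inf = 1.0, 1e-10          # exit density and plateau height, in M_P^4

growth = (rho_P/V_inf)**(1.0/6.0)  # kination: rho ~ a^{-6}, scale-factor growth
H_inf_inv = (3.0/V_inf)**0.5       # inflationary Hubble radius, in l_P units
d_event = H_inf_inv/growth         # de Sitter horizon as seen at t_P
print(d_event)                     # a few 10^3 Planck lengths
```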
As we have seen the embedding of the Starobinsky model in the new-minimal supergravity framework
has given rise to an additional propagating massive vector field.
Pursuing a minimal setup it has been proposed in \cite{Dalianis:2015fpa}
that the existence of this vector field can help to ameliorate the initial conditions problem.
Indeed,
one can accomplish an equipartition of the energy density
\begin{eqnarray}
\frac12 \rho_{kin} \sim \frac12 \rho_{pot} \sim \frac12 M_P^4 ,
\end{eqnarray}
by invoking non-vanishing values not only for the scalar field but also for the components of the vector field ${\cal V}_m$.
\begin{figure}[t]
\centering
\includegraphics [scale=.9, angle=0]{a_newF.pdf}
\caption{\small{The evolution of the scale factors.
The anisotropic expansion of the Universe is manifest.
After the onset of inflation $t_\text{INF}$ the scale factors evolve similarly and the anisotropy gets diluted. }}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics [scale=.9, angle=0]{event_hor_newF.pdf}
\caption{\small{The evolution of the event horizons.
The anisotropic expansion of the Universe implies that the event horizon distance is respectively anisotropic.
The event horizon distance is in $H^{-1}(t_P)=\sqrt{3} \, l_P$ units.}}
\end{figure}
One can choose the gauge
\begin{eqnarray}
{\cal V}_0 &=&0 ,
\end{eqnarray}
and take the $z$-spatial axis parallel to the direction of the vector
\begin{eqnarray}
{\cal V}_i &=& {\cal A}_z(t) \delta_{i}^{z} .
\end{eqnarray}
By giving the vector a non-vanishing value, a spatial direction is singled out, which we have identified with the $z$-axis.
This implies that the metric will be described by two scale factors
\begin{eqnarray} \label{bianchi}
ds^2 = - dt^2 + a^2(t) [dx^2 + dy^2 ] + c^2(t) dz^2 ,
\end{eqnarray}
hence, an anisotropy is created.
A numerical solution of the system of equations and the evolution of the two scale factors, for $\rho_{kin, init}=0.5 V_{init}$,
can be seen in Figure 2 and Figure 3. Accordingly, the event horizon distances differ between the $z$-direction and the $x$--$y$ plane.
The initial condition problem is indeed relaxed, but still a large homogeneous initial patch is required.
For a complete analysis the reader is referred to \cite{Dalianis:2015fpa},
where also a discussion on the old-minimal supergravity embedding can be found.
It is worth mentioning that the initial conditions for Starobinsky inflation,
in a pure gravitational setup, need the minimum amount of tuning in the case of an open universe.
Indeed, there the volume of the required initial homogeneous patch is more than $10^6$ times smaller than
the volume needed for a closed or flat universe. For a complete discussion see \cite{Dalianis:2015fpa}.
\section*{Acknowledgments}
It is a great pleasure to thank I. Dalianis, A. Kehagias, A. Riotto and R. von Unge for our collaborations and discussions.
This work was supported by the Grant Agency of the Czech Republic under the grant P201/12/G028.
\section{Introduction}
\label{intro}
\IEEEPARstart{B}{oolean} logic and its axiomatization is fundamental to the whole field of computer science.
Traditionally, Boolean logic is axiomatized in terms of conjunction (AND), disjunction (OR) and complementation (INV) operators.
Virtually all of today's digital computation is performed
by using these operators with their associated laws. Recently, it
was shown that more efficient logic
computation is possible by using a majority operator in place of conjunction and disjunction operators \cite{Amaru, Amaru2, AmaruTCAD, IWLS}.
Moreover, the properties of majority operators, such as stability, have been proved to be the best fit for solving important problems in computer science \cite{stablest,maj1,Sasao, SynATPG}.
Regarding emerging technologies,
majority operators are the natural
logic primitives for several beyond-CMOS candidates \cite{fan2014_spin_mem, augustin_maj, datta_asl, imre_qca, snider99_qca, navi_maj5, qcadesigner, gao13_cmtl, hao14_dwm, li_dna, perkowski_maj_rev, coolswd, QCATOC, QCATOC2, ZhangTCAD}.
In order to exploit the unique opportunity offered by majority in computer applications, a sound and complete set of manipulation rules is required. Most of the recent studies on majority logic based computation
consider ternary majority (MAJ-3) operators because the axiomatization in this context is well understood.
To unlock the real expressive power of majority logic, it is of interest to
extend such axiomatization to $n$-ary ($n$ odd) majority operators (MAJ-$n$).
We introduce in this paper a sound and complete axiomatization of MAJ-$n$ logic.
Our axiomatization is the natural extension of existing majority logic systems with a fixed number of inputs. Based on the majority axioms
introduced in this work, computing systems
can exploit the expressive power of majority logic at its best.
The remainder of this paper is organized as follows.
Section \ref{back} gives background and notations useful for
the rest of this paper.
Section \ref{axiom} introduces our sound and complete axiomatization for MAJ-$n$ logic. Section \ref{disc} discusses relevant applications of our majority logic system
in logic optimization, Boolean satisfiability, repetition codes
and emerging technologies.
Section \ref{concl} concludes the paper.
\vspace{0.15in}
\section{Background and Notations}
\label{back}
We provide hereafter terms and notions
useful in the rest of the paper. We start by
introducing basic notation and symbols for logic operators and we continue by presenting special properties of Boolean functions.
We define a compact vector notation for Boolean variables and discuss Boolean algebras with a particular emphasis on MAJ-3/INV Boolean algebra.
\vspace{0.1in}
\subsection{Notations}
In the binary Boolean domain, the symbol $\mathbb{B}$ indicates the set of binary values $\{0,1\}$;
the symbols $\land$ and $\lor$ represent the conjunction (AND) and disjunction (OR) operators; the symbol $\neg$ represents the
complementation (INV) operator; and \texttt{0}/\texttt{1} represent the false/true logic values.
Alternative symbols for $\land$, $\lor$ and $\neg$ are $\cdotp$, $+$, and $'$, respectively.
\vspace{0.1in}
\subsection{Self-Dual Function}
A logic function $f(x,y,..,z)$ is said to be {\em self-dual} if $f(x,y,..,z)=\neg f(\neg x,\neg y,..,\neg z)$ \cite{Sasao}. By complementation, an equivalent {\em self-dual} formulation is
$\neg f(x,y,..,z)=f(\neg x,\neg y,..,\neg z)$.
\vspace{0.1in}
\subsection{Majority Function}
An $n$-input ($n$ being odd) majority function $M_n$ evaluates to true when at least
$\lceil n/2 \rceil$ of its inputs are true \cite{Sasao}. For example, the
three-input majority function $M_3(x,y,z)$ can be expressed in terms of $\land,\lor$ as $(x\land y)\lor(x \land z)\lor(y \land z)$. Also
$(x\lor y)\land(x \lor z)\land(y \lor z)$ is a valid representation for $M_3(x,y,z)$. The majority function is {\em self-dual} \cite{Sasao}.
Note that an $M_n$ operator filled with $\lfloor n/2 \rfloor$ \texttt{0}/\texttt{1} constants collapses into an AND/OR operator \cite{Sasao}.
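The threshold definition of $M_n$, its self-duality, and its collapse into AND/OR operators can be illustrated in executable form (a small sketch; the helper name \texttt{maj} is ours):

```python
from itertools import product

def maj(*bits):
    """n-input majority (n odd): 1 iff at least ceil(n/2) inputs are 1."""
    assert len(bits) % 2 == 1
    return int(sum(bits) > len(bits) // 2)

# Self-duality: not M_n(x) == M_n(not x) for every assignment (here n = 5).
for x in product((0, 1), repeat=5):
    assert 1 - maj(*x) == maj(*(1 - b for b in x))

# Filling floor(n/2) inputs with constants collapses M_n into AND / OR (here n = 5):
for a, b, c in product((0, 1), repeat=3):
    assert maj(a, b, c, 0, 0) == (a & b & c)   # two 0s: M_5 -> 3-input AND
    assert maj(a, b, c, 1, 1) == (a | b | c)   # two 1s: M_5 -> 3-input OR
print("self-duality and AND/OR collapse verified")
```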
\vspace{0.1in}
\subsection{Vector Notation for Boolean Variables}
For the sake of compactness, we denote a container (vector) of $n-m+1$ Boolean variables
by $x_m^n$, where the notation starts from index $m$ and ends at index $n$.
When the actual length of the vector is not important, a
simpler notation for $x_m^n$
is boldface {\boldmath{$x$}}.
The element at index $i$ in vector $x_m^n$ is denoted by
$x_i$. The complementation of a vector $x_m^n$ is denoted by $\neg x_m^n$ which means $\neg x_i$ $\forall i \in [m,m+1,..,n-1,n]$.
With this notation, the aforementioned self-dual property becomes $\neg f(x_m^n)=f(\neg x_m^n)$.
For the sake of clarity, we give an example about the vector notation. Let $(a,b,c,d,e)$ be 5 Boolean variables to be represented in vector notation.
Here, the start/end indices are $m=1$~/~$n=5$, respectively, and the vector itself is $x_1^5$.
The elements of $x_1^5$ are $x_1=a$, $x_2=b$, $x_3=c$, $x_4=d$ and $x_5=e$.
\vspace{0.1in}
\subsection{Boolean Algebra}
\label{boolalg}
The standard binary Boolean algebra (originally axiomatized by Huntington \cite{Huntington}) is a non-empty set $(\mathbb{B},\land,\lor,\neg,0,1)$
subject to {\em
identity, commutativity, distributivity, associativity, and complement} axioms over
$\land,\lor$ and $\neg$\cite{Sasao,Brown}. For the sake of completeness, we report these basic axioms in Eq.~\ref{traditional}.
\vspace{0.1in}
\begin{equation}
\label{traditional}
{\text{\large \boldmath{$\Delta$}}}\left\{\begin{array}{l l}
\text{\bf Identity : \boldmath{$\Delta.I$}} \\x\lor 0=x\\ x\land 1=x\\
\text{\bf Commutativity : \boldmath{$\Delta.C$}} \\ x\land y=y\land x\\ x\lor y=y\lor x \\
\text{\bf Distributivity : \boldmath{$\Delta.D$}} \\ x \lor (y\land z) = (x\lor y)\land(x \lor z)\\ x \land (y \lor z)=(x \land y)\lor(x \land z) \\
\text{\bf Associativity : \boldmath{$\Delta.A$}} \\ x \land (y\land z) = (x\land y)\land z\\ x \lor (y \lor z)=(x \lor y)\lor z \\
\text{\bf Complement : \boldmath{$\Delta.Co$}} \\ x \lor \neg x =1\\ x\land \neg x =0 \\
\end{array} \right.
\end{equation}
\vspace{0.2in}
This axiomatization for Boolean algebra is sound and complete \cite{Jonsson,Brown}. Informally, it means that logic arguments, or formulas, proved by the axioms in $\Delta$ are valid (soundness) and
all true logic arguments are provable (completeness). More precisely, it means that, in the induced logic system, all theorems are tautologies (soundness)
and all tautologies are theorems (completeness). We refer the reader to \cite{Jonsson} for a more formal discussion on mathematical logic.
In computer logic applications, only sound axiomatizations are of interest \cite{Brown}. Complete and sound axiomatizations are desirable \cite{Brown}.
Other Boolean algebras exist, with different operators and axiomatizations, such as Robbins algebra,
Frege's algebra, Nicod's algebra, the MAJ-3/INV algebra, etc. \cite{Jonsson}. In the immediate following, we give details on the MAJ-3/INV Boolean algebra.
\vspace{0.1in}
\subsection{MAJ-3/INV Boolean Algebra}
\label{MIGalg}
The MAJ-3/INV Boolean algebra introduced in \cite{Amaru} is
defined over the set $(\mathbb{B},M_3,\neg,0,1)$, where $M_3$ is the ternary majority operator
and $\neg$ is the unary complementation operator.
The following set of five primitive transformation rules, referred to as $\Omega_3$, is an
{\em axiomatic system} for $(\mathbb{B},M_3,\neg,0,1)$. All variables
belong to $\mathbb{B}$.
\vspace{0.1in}
\begin{equation}
\label{omega}
{\text{\large \boldmath{$\Omega_3$}}}\left\{\begin{array}{l l}
\text{\bf Commutativity : \boldmath{$\Omega_3.C$}} \\M_3(x,y,z)=M_3(y,x,z)=M_3(z,y,x)\\
\text{\bf Majority : \boldmath{$\Omega_3.M$}} \\ \left\{\begin{array}{l l}
\text{if($x=y$): }M_3(x,y,z)=x=y\\
\text{if($x=\neg y$): }M_3(x,y,z)=z\\
\end{array} \right. \\
\text{\bf Associativity : \boldmath{$\Omega_3.A$}} \\M_3(x,u,M_3(y,u,z))=M_3(z,u,M_3(y,u,x))\\
\text{\bf Distributivity : \boldmath{$\Omega_3.D$}} \\M_3(x,y,M_3(u,v,z))=\\M_3(M_3(x,y,u),M_3(x,y,v),z)\\
\text{\bf Inverter Propagation : \boldmath{$\Omega_3.I$}} \\\neg M_3(x,y,z)=M_3(\neg x,\neg y,\neg z)\\
\end{array} \right.
\end{equation}
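Since each axiom in $\Omega_3$ ranges over only finitely many Boolean variables, its soundness can be confirmed by exhaustive enumeration. A minimal sketch (the helper name \texttt{maj3} is ours):

```python
from itertools import product

def maj3(x, y, z):
    """Ternary majority: 1 iff at least two inputs are 1."""
    return int(x + y + z >= 2)

for x, y, z, u, v in product((0, 1), repeat=5):
    # Omega_3.C : commutativity
    assert maj3(x, y, z) == maj3(y, x, z) == maj3(z, y, x)
    # Omega_3.M : majority
    assert maj3(x, x, z) == x and maj3(x, 1 - x, z) == z
    # Omega_3.A : associativity
    assert maj3(x, u, maj3(y, u, z)) == maj3(z, u, maj3(y, u, x))
    # Omega_3.D : distributivity
    assert maj3(x, y, maj3(u, v, z)) == maj3(maj3(x, y, u), maj3(x, y, v), z)
    # Omega_3.I : inverter propagation
    assert 1 - maj3(x, y, z) == maj3(1 - x, 1 - y, 1 - z)
print("all Omega_3 axioms hold on all assignments")
```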
\vspace{0.15in}
It has been shown that this axiomatization is sound and complete with respect to $(\mathbb{B},M_3,\neg,0,1)$ \cite{Amaru}.
The MAJ-3/INV Boolean algebra finds
application
in circuit optimization and has already shown some promising results \cite{Amaru}.
Note that early attempts at majority logic
were already reported in the 1960's \cite{akers_maj_manip,cohn_maj_axiom,lindaman_maj_network,Miller,Tohma,Miyata}
but
they mostly focused on three-input majority operators.
Also, derived
logic manipulation methods
failed to gain momentum due to their inherent complexity.
While traditional Boolean algebras can be naturally extended from 2 to $n$ variables, it is currently unclear how such a majority axiomatization extends to
an arbitrary odd number of variables $n$. In the following, we address this question by proposing a natural axiomatization of MAJ-$n$/INV logic.
\section{Axiomatization of MAJ-$n$ Logic}
\label{axiom}
In this section, we present the generic axiomatization
of MAJ-$n$ logic. We first extend the set of five axioms presented in \cite{Amaru} to $n$-variables, with $n$ being an odd integer.
Then, we show their validity in the Boolean domain. Finally,
we demonstrate their completeness by inclusion of
other complete Boolean axiomatizations.
\subsection{Generic MAJ-$n$/INV Axioms}
The five axioms for MAJ-3/INV logic in \cite{Amaru}
deal with {\em commutativity}, {\em majority},
{\em associativity},
{\em distributivity}, and
{\em inverter propagation} laws. The following
set of equations extends their domain to an arbitrary odd number
$n$ of variables.
Note that all axioms hold for $n\ge 3$.
\vspace{0.1in}
\begin{equation}
\label{omegan}
{\text{\large \boldmath{$\Omega_n$}}}\left\{\begin{array}{l l}
\text{\bf Commutativity : \boldmath{$\Omega_n.C$}}\\M_n(x_1^{i-1},x_i,x_{i+1}^{j-1},x_j,x_{j+1}^{n})=\\M_n(x_1^{i-1},x_j,x_{i+1}^{j-1},x_i,x_{j+1}^{n})\\
\text{\bf Majority : \boldmath{$\Omega_n.M$}} \\
\text{If($\lceil \frac{n}{2} \rceil$ elements of $x_1^n$ are equal to $y$): }\\\hspace{0.1in} M_n(x_1^n)=y\\
\text{If($x_i\neq x_j$): }\\\hspace{0.1in} M_n(x_1^n)=M_{n-2}(y_1^{n-2})\\\text{\hspace{0.1in} where $y_1^{n-2} = x_1^n$ removing $\{x_i,x_j\}$}\\
\text{\bf Associativity : \boldmath{$\Omega_n.A$}} \\M_n(z_1^{n-2}, y, M_n(z_1^{n-2}, x,w)) =\vspace{0.1in}\\
M_n(z_1^{n-2}, x, M_n(z_1^{n-2}, y,w))\\
\text{\bf Distributivity : \boldmath{$\Omega_n.D$}} \\M_n(x_1^{n-1},M_n(y_1^{n}))=\vspace{0.1in}\\M_n(M_n(x_1^{n-1},y_1),M_n(x_1^{n-1},y_2),...,\\\hspace{0.1in}M_n(x_1^{n-1},y_{\lceil \frac{n}{2} \rceil}),y_{\lceil \frac{n}{2} \rceil+1},...,y_n)=\vspace{0.1in}\\
M_n(M_n(x_1^{n-1},y_1),M_n(x_1^{n-1},y_2),...,\\\hspace{0.1in} M_n(x_1^{n-1},y_{\lceil \frac{n}{2} \rceil+1}),y_{\lceil \frac{n}{2} \rceil+2},...,y_n)= \vspace{0.1in}\\
M_n(M_n(x_1^{n-1},y_1),M_n(x_1^{n-1},y_2),...,\\\hspace{0.1in} M_n(x_1^{n-1},y_{n-1}),y_n)\\
\text{\bf Inverter Propagation : \boldmath{$\Omega_n.I$}} \\\neg M_n(x_1^n)=M_n(\neg x_1^n)\\
\end{array} \right.
\end{equation}
\vspace{0.1in}
Commutativity means that changing the order of the variables in
$M_n$ does not change the result.
Majority defines a logic decision threshold (over $n \ge 3$ variables) and a
hierarchical
reduction of majority operators with complementary variables. Note that $M_3(x,y,\neg y)=x$ serves as the boundary
condition.
Associativity says that swapping pairs of variables
between cascaded $M_n$ sharing $n-2$
variables does not change the result. In this context, it is
important to recall
that $n-2$ is an odd number if $n$ is an odd number.
Distributivity delimits the re-arrangement freedom of variables
over cascaded $M_n$ operators.
Inverter propagation moves complementation freely
from the outputs to the inputs of an $M_n$ operator, and
{\em vice versa}.
For the sake of clarity, we give an example for each axiom over a finite $n$-arity.
Commutativity with $n=5$: \\$M_5(a,b,c,d,e)=M_5(b,a,c,d,e)=M_5(a,b,c,e,d)$.
Majority with $n=7$: \\$M_7(a,b,c,d,e,g,g')=M_5(a,b,c,d,e)$.
Associativity with $n=5$: \\$M_5(a,b,c,d,M_5(a,b,c,g,h))=M_5(a,b,c,g,M_5(a,b,c,d,h))$.
Distributivity with $n=7$: \\$M_7(a,b,c,d,e,g,M_7(x,y,z,w,k,t,v))=M_7(M_7(a,b,c,d,e,g,x),M_7(a,b,c,d,e,g,y),\\ \text{\hspace{0.1in}}M_7(a,b,c,d,e,g,z),M_7(a,b,c,d,e,g,w),k,t,v)$.
Inverter propagation with $n=9$: \\$\neg M_9(a,b,c,d,e,g,h,x,y)=M_9(\neg a,\neg b,\neg c,\neg d,\neg e,\neg g,\neg h,\neg x,\neg y)$.
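Each of the finite-arity instances above can be checked by brute force over all Boolean assignments. The following sketch (with a generic helper \texttt{maj}, our naming) verifies the majority and associativity examples:

```python
from itertools import product

def maj(*bits):
    """n-input majority, n odd."""
    return int(sum(bits) > len(bits) // 2)

# Majority example (n = 7): the complementary pair g, g' cancels out.
for a, b, c, d, e, g in product((0, 1), repeat=6):
    assert maj(a, b, c, d, e, g, 1 - g) == maj(a, b, c, d, e)

# Associativity example (n = 5): swapping d and g across nested M_5 sharing a, b, c.
for a, b, c, d, g, h in product((0, 1), repeat=6):
    assert maj(a, b, c, d, maj(a, b, c, g, h)) == maj(a, b, c, g, maj(a, b, c, d, h))
print("Omega_n example identities verified on all assignments")
```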
\subsection{Soundness}
To demonstrate the validity of these laws, and thus the validity
of the MAJ-$n$ axiomatization, we need to show that each equation in $\Omega_n$ is sound with respect to the original domain, i.e., $(\mathbb{B},M_n,\neg,0,1)$
\footnote{By $M_n$, we intend
any $M_i$ with $i \le n$. Indeed,
any $M_i$ operator with $i\le n$ can be emulated by a fully-fed $M_n$ operator with pairs of regular/complemented variables, e.g., $M_5(a,b,c,d,\neg d)=M_3(a,b,c).$}.
The following theorem addresses this requirement.
\vspace{0.05in}
\begin{theorem}
\label{sound}
Each axiom in $\Omega_n$ is sound (valid) w.r.t. $(\mathbb{B},M_n,\neg,0,1)$.
\end{theorem}
\vspace{0.05in}
\begin{proof}
{\bf Commutativity \boldmath{$\Omega_n.C$}} Since majority is defined on reaching a threshold
$\lceil n/2 \rceil$ of true inputs, it is independent of the order of its inputs. This means
that changing the order of
operands in $M_n$ does not change the output value.
Thus, this axiom is valid in $(\mathbb{B},M_n,\neg,0,1)$.
{\bf Majority \boldmath{$\Omega_n.M$}} Majority
first defines
the output behavior of $M_n$ in the Boolean domain. Being a
definition, it does not need particular proof for soundness.
Consider then the second part of the majority axiom.
The recursive inclusion of $M_{n-2}$ derives from the mutual
cancellation of complementary variables. In a binary majority voting system of $n$ electors, two electors voting
to opposite values annihilate themselves. The final decision
is then just depending on the votes from the remaining $n-2$
electors. Therefore, this axiom is valid in $(\mathbb{B},M_n,\neg,0,1)$.
{\bf Associativity \boldmath{$\Omega_n.A$}} We split this proof in three parts that cover the whole Boolean space. Thus, it is sufficient to prove the validity of the
associativity axiom for each of these parts. {\bf (1) the vector $z_1^{n-2}$ contains at least one logic 1 and one logic 0.} In this case, it is possible to apply $\Omega_n.M$ and reduce
$M_n$ to $M_{n-2}$. If we remain in case (1), we can keep applying $\Omega_n.M$. At some point, we will end up in case (2) or (3). {\bf (2) the vector $z_1^{n-2}$ contains all logic 1.}
For $n>3$, the final voting decision is 1 for both equations, so the equality holds. In case $n=3$
and the vector $z_1^{n-2}$ contains all logic 1, the majority operator collapses into a disjunction operator.
For example, $M_3(1,a,M_3(1,c,d))=\lor_2(a,\lor_2(c,d))$.
Here, the validity of the associativity axiom follows then
from traditional disjunction associativity. {\bf (3)
the vector $z_1^{n-2}$ contains all logic 0.} For $n>3$, the final voting decision is 0 for both equations, so the equality holds. In case $n=3$
and the vector $z_1^{n-2}$ contains all logic 0, the majority operator collapses into a conjunction operator.
For example, $M_3(0,a,M_3(0,c,d))=\land_2(a,\land_2(c,d))$. Here, the validity of the associativity axiom follows then
from traditional conjunction associativity.
{\bf Distributivity \boldmath{$\Omega_n.D$}} We split this proof in three parts that cover the whole Boolean space. Thus, it is sufficient to prove the validity of the
distributivity axiom for each of these parts.
Note that the distributivity axiom deals with a majority operator $M_n$ where one inner variable is actually another independent majority operator $M_n$.
Distributivity rearranges the computation in $M_n$ moving up the variables at the bottom level and down the variables at the top level.
In this part of the proof we show that such rearrangement does not change the
functionality of $M_n$, i.e., the final voting decision in $\Omega_n.D$.
Recall that $n$ is an odd integer greater than $1$ so $n-1$ must be an even integer.
{\bf (1) half of $x_1^{n-1}$ values are logic 0 and the remaining half are logic 1.} In this case, the final voting decision in
axiom $\Omega_n.D$ only depends on $y_1^n$. Indeed, all elements in
$x_1^{n-1}$ annihilate due to axiom $\Omega_n.M$. In the two
identities of $\Omega_n.D$, we see that when $x_1^{n-1}$ annihilate
the equations simplify to $M_n(y_1^n)$, according to the predicted behavior.
{\bf (2) at least $\lceil n/2 \rceil$ of $x_1^{n-1}$ values are logic 0.}
Owing to $\Omega_n.M$, the final voting decision in this case
is logic 0. This is because more than half of the variables are logic 0
matching the prefixed voting threshold. In the two identities of $\Omega_n.D$, we see that more than half of the inner $M_n$ evaluate to logic 0 by direct application
of $\Omega_n.M$. In the subsequent phase, also the outer $M_n$ evaluates
to logic 0, as more than half of the variables are logic 0, according to the predicted behavior.
{\bf (3) at least $\lceil n/2 \rceil$ of $x_1^{n-1}$ values are logic 1.} This case
is symmetric to the previous one.
{\bf Inverter Propagation \boldmath{$\Omega_n.I$}} Inverter propagation moves complementation from output to inputs, and {\em vice versa}.
This axiom is a special case of the self-duality property previously presented. It holds for all majority operators in
$(\mathbb{B},M_n,\neg,0,1)$.
\end{proof}
\vspace{0.05in}
The soundness of $\Omega_n$ in $(\mathbb{B},M_n,\neg,0,1)$ guarantees that
by repeatedly applying $\Omega_n$ axioms to a Boolean formula we do not
corrupt its original functionality.
interest in logic manipulation systems where functional correctness is
an absolute requirement.
\subsection{Completeness}
While soundness speaks of the correctness of a logic system,
completeness speaks of its manipulation capabilities.
For an axiomatization to be complete, all possible
manipulations of a Boolean formula must be attainable by a sequence, possibly long,
of primitive axioms.
We study the completeness of $\Omega_n$ axiomatization by comparison
to other complete axiomatizations of Boolean logic. The following theorem
shows our main result.
\vspace{0.05in}
\begin{theorem}
\label{complete}
The set of five axioms in $\Omega_n$ is complete w.r.t. $(\mathbb{B},M_n,\neg,0,1)$.
\end{theorem}
\vspace{0.05in}
\begin{proof}
We first consider $\Omega_3$ and we show that it is complete w.r.t. $(\mathbb{B},M_3,\neg,0,1)$.
We need to prove that every valid argument, i.e.,
$(\mathbb{B},M_3,\neg,0,1)$-formula, has a proof in the system $\Omega_3$. By contradiction, suppose that a true $(\mathbb{B},M_3,\neg,0,1)$-formula, say $\alpha$, cannot
be proven true using $\Omega_3$ rules. Such $(\mathbb{B},M_3,\neg,0,1)$-formula $\alpha$ can always be reduced into a $(\mathbb{B},\land,\lor,\neg,0,1)$-formula. Indeed,
recall that $M_3(x,y,z)=(x\lor y)\land(x \lor z)\land(y \lor z)$.
Using $\Delta$, all $(\mathbb{B},\land,\lor,\neg,0,1)$-formulas can be proven, including $\alpha$. However, every $(\mathbb{B},\land,\lor,\neg,0,1)$-formula is also contained by $(\mathbb{B},M_3,\neg,0,1)$,
where $\land$ and $\lor$ are emulated by majority operators. Moreover, rules in $\Omega_3$ with one input fixed to $0$ or $1$ behave as $\Delta$ rules (Eq. \ref{traditional}).
For example, $\Omega_3.A$ with variable $u$
fixed to logic 1 (0) behaves as $\Delta.A$ for disjunction
(conjunction).
The other axioms follow analogously.
This means that
$\Omega_3$ is also capable of proving the reduced $(\mathbb{B},M_3,\neg,0,1)$-formula $\alpha$, contradicting our assumption.
Thus $\Omega_3$ is complete w.r.t. $(\mathbb{B},M_3,\neg,0,1)$.
We consider now $\Omega_n$. First note that $(\mathbb{B},M_n,\neg,0,1)$ naturally includes
$(\mathbb{B},M_3,\neg,0,1)$. Similarly, $\Omega_n$ axioms inherently
extend the ones in $\Omega_3$. Thus, the completeness property is inherited
provided that $\Omega_n$ axioms are sound.
However, $\Omega_n$ soundness is already proven in Theorem \ref{sound}.
Thus, the $\Omega_n$ axiomatization is also complete.
\end{proof}
\vspace{0.05in}
Being sound and complete, the axiomatization $\Omega_n$ defines a consistent framework to operate on
Boolean logic via $n$-ary majority operators and inverters.
In the following section, we discuss some promising
applications in computer science of such majority logic system.
\section{Discussion}
\label{disc}
In this section, we discuss relevant applications of the $\Omega_n$
axiomatization. We first present the potential of logic optimization
performed via MAJ-$n$ operators and inverters. Then, we show
how Boolean satisfiability can be described in terms of majority
operators
and solved using $\Omega_n$.
Successively, we demonstrate the manipulation of
repetition codes via $\Omega_n$ under a majority logic decoding
scheme.
Finally, we discuss the application of majority logic
to several emerging technologies, such as
quantum-dot cellular automata, spin-wave devices,
threshold logic and others.
\subsection{Logic Optimization}
\label{optimization}
Logic optimization is the process of manipulating a logic data
structure, such as a logic circuit,
in order to minimize some target metric \cite{DeMicheli}.
Usual optimization targets are size (number of nodes/elements),
depth (maximum number of levels) and interconnections (number of edges/nets). More elaborate
targets use a combination of size/depth/interconnections metrics, such as nodes$\times$interconnections and others.
Theoretical results from computer science show that
majority logic
circuits are much more compact than traditional ones based on conjunction and
disjunction operators \cite{maj1}.
For example, majority logic circuits of depth 2 and 3 possess the expressive power to represent arithmetic functions, such as powering, multiplication, division, addition etc., in polynomial size \cite{maj1}. On the other hand, the traditional AND/OR-based counterparts are exponentially sized \cite{maj1}.
Given the existence of very compact majority logic circuits, we need an efficient set of manipulation laws to reach those circuits automatically.
In this context, the axiomatic system previously introduced is the natural set of tools addressing this need. For example, consider a logic circuit (or Boolean function)
$f=M_5(M_3(a,b,c),M_3(a,b,d),M_3(a,b,e),M_3(a,b,g),h)$.
In circuit optimization, a common problem is to minimize the number of elements while keeping short some input-output paths.
Suppose we want to minimize the number
of majority operators while keeping the path $h$ to $f$ as short as possible, i.e., one majority operator.
The original circuit cost is 5 majority operators.
To manipulate this formula, we first equalize
the $n$-arity of the majority operators
using axiom $\Omega_n.M$, i.e., by adding a fake annihilated variable $x$, as:
$f=M_5(M_5(a,b,c,x,\neg x),M_5(a,b,d,x,\neg x),\\M_5(a,b,e,x,\neg x),M_5(a,b,g,x,\neg x),h)$
At this point, we can apply $\Omega_n.D$ and save one
majority operator as:
$f=M_5(M_5(a,b,c,x,\neg x),M_5(a,b,d,x,\neg x),\\M_5(a,b,e,x,\neg x),g,h)$.
Finally, we can reduce the majority $n$-arity to its minimum
via $\Omega_n.M$ as:
$f=M_5(M_3(a,b,c),M_3(a,b,d),M_3(a,b,e),g,h)$.
The resulting circuit cost is 4 majority operators.
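The equivalence of the original 5-operator circuit and the optimized 4-operator circuit can be confirmed by exhaustive simulation (a small sketch; the function names are ours):

```python
from itertools import product

def maj(*bits):
    """n-input majority, n odd."""
    return int(sum(bits) > len(bits) // 2)

def f_original(a, b, c, d, e, g, h):
    # 5 majority operators
    return maj(maj(a, b, c), maj(a, b, d), maj(a, b, e), maj(a, b, g), h)

def f_optimized(a, b, c, d, e, g, h):
    # 4 majority operators, path h -> f still one operator long
    return maj(maj(a, b, c), maj(a, b, d), maj(a, b, e), g, h)

for assignment in product((0, 1), repeat=7):
    assert f_original(*assignment) == f_optimized(*assignment)
print("4-operator circuit is equivalent to the original 5-operator circuit")
```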
\subsubsection{Optimization Script}
As emerged from the previous optimization example,
an intuitive heuristic to optimize
majority logic circuits
consists of majority inflation rules (from $\Omega_n$)
followed by majority reduction rules (from $\Omega_n$).
Alg.~\ref{topcontrol} depicts a simple
optimization script and a brief description follows.
\begin{algorithm}[!ht]
\textbf{INPUT:} Majority Logic Network. \hspace{0.8in}\\ \textbf{OUTPUT:} Optimized Majority Logic Network.
\caption{Majority Logic Optimization Heuristic}
\begin{algorithmic}
\STATE{Majority Operator Increase n-arity($\Omega_n.M$);{\color{blue}\\// increase n-arity of the majority operator}}
\STATE{Majority Operator Simplification($\Omega_n.A, \Omega_n.D, \Omega_n.M$);{\color{blue}\\// deleting redundant majority operators}}
\STATE{Majority Operator Reduce n-arity($\Omega_n.M$);{\color{blue}\\// decrease n-arity of the majority operator}}
\end{algorithmic}
\label{topcontrol}
\end{algorithm}
First, the $n$-arity of all majority operators in the logic circuit is temporarily increased by using $\Omega_n.M$ rule from right to left, for example $M_3(a,b,c)=M_5(a,b,c,\neg c, c)$. This operation
unlocks new simplification opportunities.
Then, redundant majority operators are identified
and deleted through $\Omega_n.A, \Omega_n.D, \Omega_n.M$ rules.
Finally, the $n$-arity of all majority operators in the
logic circuit
is decreased to the minimum via $\Omega_n.M$ rule
from left to right.
This approach naturally targets depth and size reductions
in the majority logic network. However,
it can be extended to target
more elaborated metrics, such as $\sum_{i=1}^{M} fanin(node_{i})$ or $M\times N_{inv}$, where $M$ is the total number of nodes and $N_{inv}$ is the number of inverters. The best metric
depends on the considered technology for final
implementation.
\subsubsection{Full-Adder Case Study}
In order to prove the efficacy of the
majority optimization heuristic in Alg.~\ref{topcontrol},
we consider as case study the full-adder logic circuit.
The full-adder logic circuit
is fundamental to most arithmetic circuits.
Consequently, the effective optimization of full-adders is of
paramount importance.
A full-adder represents a three-input and
two-output Boolean function:
$sum= a\oplus b \oplus c_{in}$ and
$c_{out}=M_3(a,b,c_{in})$.
Using just majority operators with $n$-arity equal to three,
the best full-adder implementation counts 3 majority
nodes, excluding inverters, as depicted in Fig. \ref{fam3}.
\begin{figure}[!ht]
\centering%
\includegraphics[width=0.5\columnwidth]{fa_maj3}
\caption{Majority logic circuit for the full-adder with
operator $n$-arity equal to 3.
Complementation is represented by bubbles on the edges.}
\label{fam3}
\end{figure}
However, a more compact majority logic
network is possible by exploiting higher $n$-arity degrees
and manipulating such
majority logic circuit via $\Omega_n$.
In particular, the critical operation is $sum$ because
$c_{out}$ is naturally represented by a single
$M_3$ operator. So, for $sum$ our
optimization heuristic first expands the top
majority operator from an $n$-arity of three
$sum=M_3(a,\neg M_3(a,b,c_{in}),M_3(\neg a,b,c_{in}))$
to an $n$-arity
of 5 as
$sum=M_5(a,\neg M_3(a,b,c_{in}),\neg M_3(a,b,c_{in}),\\ M_3(a,b,c_{in}),M_3(\neg a,b,c_{in}))$.
After that, derived simplification rules from $\Omega_n$,
called relevance rules in \cite{Amaru}, reduce the number
of majority operators to 2 as
$sum=M_5(a,\neg M_3(a,b,c_{in}),\neg M_3(a,b,c_{in}), b,c_{in})$.
In its graph representation, depicted by Fig.~\ref{fam5},
this representation of {\em sum} consists of just two majority operators, as the internal $M_3(a,b,c_{in})$ is shared.
\begin{figure}[!ht]
\centering%
\includegraphics[width=0.45\columnwidth]{fa_maj5}
\caption{Majority logic circuit for the full-adder with
unbounded
operator $n$-arity.
Complementation is represented by bubbles on the edges.}
\label{fam5}
\end{figure}
Moreover, $M_3(a,b,c_{in})$ is also generating the
$c_{out}$ function which can be further shared.
This means that the optimized logic circuit in Fig.~\ref{fam5}, counting just two majority
operators, is a minimal implementation for the full-adder
in terms
of majority logic.
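As a sanity check, both majority forms of $sum$ can be verified against $a\oplus b\oplus c_{in}$ by enumerating all eight input assignments (an illustrative sketch; \texttt{maj} is our helper name):

```python
from itertools import product

def maj(*bits):
    """n-input majority, n odd."""
    return int(sum(bits) > len(bits) // 2)

for a, b, cin in product((0, 1), repeat=3):
    m = maj(a, b, cin)                          # shared node, also equal to c_out
    sum_m3 = maj(a, 1 - m, maj(1 - a, b, cin))  # 3-operator, M_3-only form
    sum_m5 = maj(a, 1 - m, 1 - m, b, cin)       # 2-operator form, m shared with c_out
    assert sum_m3 == sum_m5 == a ^ b ^ cin      # sum = a XOR b XOR cin
    assert m == int(a + b + cin >= 2)           # c_out = M_3(a, b, cin)
print("both majority realizations of the full-adder are correct")
```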
\begin{figure}[!ht]
\centering%
\includegraphics[width=0.67\columnwidth]{fa_and}
\caption{AND-inverter logic circuit for the full-adder
optimized via ABC academic tool.
Complementation is represented by bubbles on the edges.}
\label{fan}
\end{figure}
To provide a reference, an optimized AND-inverter graph
representation for the full-adder is depicted by Fig.~\ref{fan}.
It counts 8 nodes and has been optimized using
the state-of-the-art academic ABC optimizer
\cite{ABC} which manipulates
AND-inverter graphs. We can see that the majority
logic circuit produced by our optimization
heuristic is much more compact thanks to the majority
logic expressiveness and
to the properties of our axiomatic system, $\Omega_n$.
The minimality of the majority logic circuit in Fig. \ref{fam5} is
formally proved in the following theorem.
\vspace{0.05in}
\begin{theorem}
\label{faopt}
The majority logic circuit in Fig.~\ref{fam5} for the full-adder
has the minimum number of majority operators.
\end{theorem}
\vspace{0.05in}
\begin{proof}
The full-adder consists of two distinct functions. Being distinct, they require at least two separate majority operators fed
with different signals. The majority logic circuit in
Fig.~\ref{fam5}
actually consists of two majority operators
thus being minimal.
\end{proof}
\vspace{0.05in}
On top of having the minimum number of operators, the
majority network in Fig.~\ref{fam5} has lower
$\sum_{i=1}^{M} fanin(node_{i})$ metric (equal to 8) as compared to the majority network in Fig.~\ref{fam3} (equal to 9). The number of inverters is 2 in both cases.
We see that the axiomatic system $\Omega_n$
can be used to optimize majority logic circuits and
produces excellent results.
As the $\Omega_n$ rules
are simple enough to be programmed on a computer,
MAJ-$n$ logic optimization can be automated and applied to large systems.
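As a minimal illustration of this programmability (our own sketch, not the paper's tool), the annihilation behind axiom $\Omega_n.M$ can be validated exhaustively: a complementary pair inside a majority contributes exactly one logic \texttt{1} to the vote, so it can be removed while lowering the operator arity by two.

```python
from itertools import product

def maj(*bits):
    """n-input majority (n odd): returns 1 when more than half the inputs are 1."""
    return int(sum(bits) > len(bits) // 2)

# A complementary pair (x, not x) always contributes exactly one 1, so
# M5(x, not x, p, q, r) reduces to M3(p, q, r) -- a rewrite step that a
# MAJ-n optimizer can apply mechanically.
for x, p, q, r in product((0, 1), repeat=4):
    assert maj(x, 1 - x, p, q, r) == maj(p, q, r)
```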
\subsection{Boolean Satisfiability}
Boolean {\em satisfiability} (SAT) is the first known NP-complete
problem \cite{Garey}. Traditionally, SAT is
formulated in {\em Conjunctive Normal Form} (CNF)
\cite{Biere}. Recently, majority logic has been
considered as an alternative to CNF to speed-up SAT \cite{IWLS}.
In \cite{IWLS}, a {\em Majority Normal Form} (MNF)
has been introduced, which is
a majority of majorities,
where majorities are fed with literals, {\texttt 0} or {\texttt 1}.
The MNF-SAT problem is NP-complete in
its most general definition \cite{IWLS}.
However, there are interesting restrictions of MNF whose satisfiability can
instead be decided in polynomial time.
For example, when there are no mixed
logic constants appearing in the MNF,
the MNF-SAT problem can be solved in polynomial time.
This result is valid not just for MNF but for
majority logic circuits in general \cite{IWLS}.
In order to solve the general problem of majority logic
satisfiability, and thus of MNF-SAT, a set
of manipulation rules is needed. Indeed,
the core of most modern SAT solving tools
makes extensive use of Boolean logic axioms.
When dealing with majority logic, our proposed
axiomatic system $\Omega_n$ is the natural tool to operate
on MNF forms, or alike, and prove their satisfiability.
\vspace{0.05in}
For the sake of clarity, we give an example
of majority SAT solving via $\Omega_n$ laws.
We consider not just an MNF, which is a two level
logic representation form, but a general formula in
$(\mathbb{B},M_n,\neg,0,1)$.
Our example is the {\em unSAT} function
$f=M_5(M_3(a, b, c),M_5(M_5(a,b,c,0,0),\neg b, c,0,0),\neg a, \neg b, 0)$.
In order to check the satisfiability of $f$, a majority SAT solver first tries to enforce at least 3 of the 5 inputs of the top $M_5$ to be logic \texttt{1} \cite{IWLS}.
Otherwise, a conflict
in the input assignment appears. If all possible input
assignments lead to a conflict the function is declared
unsatisfiable \cite{IWLS}.
Let us first focus on the element
$M_5(M_5(a,b,c,0,0),\neg b, c,0,0)$. Here, even before
looking for possible assignments, our
axiom $\Omega_n.A$ re-arranges
the variables as $M_5(M_5(\neg b,b,c,0,0),a, c,0,0)$.
In this formula, our axiom
$\Omega_n.M$ directly annihilates $b$ and $\neg b$
leading to $M_5(M_3(c,0,0),a, c,0,0)$. Furthermore,
$\Omega_n.M$ still applies twice corresponding to
$M_5(0,a, c,0,0)$ and then $0$. We can substitute
this into the original formula,
obtaining
$f=M_5(M_3(a, b, c),0,\neg a, \neg b, 0)$,
which simplifies the SAT problem. Now,
we need both $\neg a$ and $\neg b$ to be {\texttt 1}
in order to avoid an immediate conflict.
This means
$a=0$ and $b=0$. However, this assignment
always evaluates the term
$M_3(a,b,c)$ to {\texttt 0}, generating a conflict for all
input patterns. Thus, the original formula is declared
unsatisfiable.
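The same conclusion can be reached by brute force; the following sketch (an illustration, not a SAT solver) evaluates $f$ over all $2^3$ input patterns:

```python
from itertools import product

def maj(*bits):
    """n-input majority (n odd): returns 1 when more than half the inputs are 1."""
    return int(sum(bits) > len(bits) // 2)

def f(a, b, c):
    """f = M5(M3(a,b,c), M5(M5(a,b,c,0,0), b', c, 0, 0), a', b', 0)."""
    inner = maj(maj(a, b, c, 0, 0), 1 - b, c, 0, 0)
    return maj(maj(a, b, c), inner, 1 - a, 1 - b, 0)

# No assignment satisfies f, confirming the Omega_n derivation above.
assert all(f(a, b, c) == 0 for a, b, c in product((0, 1), repeat=3))
```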
As we can see, our majority logic axiomatic system
$\Omega_n$
is the ground for proving the
satisfiability of formulas in $(\mathbb{B},M_n,\neg,0,1)$.
Without $\Omega_n$, SAT tools would need to
decompose all majority operators in AND/ORs
because with conjunctions and disjunctions the classic set of
Boolean manipulation rules apply. However,
such decomposition would nullify the
competitive advantage enabled by the
majority logic
expressiveness. In this scenario, our $\Omega_n$
rules
fill the gap for manipulating majority operators natively.
\subsection{Decoding of Repetition Codes}
Repetition codes are basic error-correcting codes. The main
rationale in using repetition codes is to transmit a message several times over a noisy channel hoping that the channel corrupts only a minority of
the bits~\cite{Massey}. In this scenario, decoding the received message
via majority logic is the natural way to correct transmission errors.
Consider safety-critical communication systems. It
is common to have hierarchical levels of coding
to decrease the chance of an error, and thus of a resulting
system malfunction. When applied over several levels,
majority logic decoding is nothing but a majority
logic circuit. The maximum number of cascaded
majority operators
determines the decoding performance.
We want to maximize the decoding performance
while keeping the error probability low. In this scenario,
we can use our axiomatic system $\Omega_n$
to explore different tradeoffs in depth/size manipulation
of the corresponding majority decoding scheme.
For the sake of clarity, we give an example
of the optimization for majority logic decoding via $\Omega_n$.
Consider a
safety-critical communication system sending the same binary message
$a$ over 5 different channels $C_1$, $C_2$,
$C_3$, $C_4$ and $C_5$. Each channel is affected
by different levels of noise requiring just
1 repetition for $C_1$, $C_2$, $C_3$, and $C_4$
but 5 repetitions for $C_5$.
Suppose also the communication over channel 5 is much
slower than in the other channels.
The final decoded
message is the majority of the decoded messages of the individual channels. If we name $x_i$ the decoded message $a$
for $i$-th channel and $y$ the final decoded message,
the system can be represented in majority logic
as $y=M_5(x_1,x_2,x_3,x_4,x_5)$.
Note that for $x_1$, $x_2$, $x_3$, $x_4$ the decoded message is actually identical to the received message because only 1 repetition is sent over the channels.
The element
$x_5$ is the only one needing further majority decoding, namely
$x_5=M_5(z_1,z_2,z_3,z_4,z_5)$ where $z_i$ are the received $a$ messages over channel $C_5$. The final
system is then expressible as $y=M_5(x_1,x_2,x_3,x_4,M_5(z_1,z_2,z_3,z_4,z_5))$.
To decode the final message $y$,
the critical element for
performance is $M_5(z_1,z_2,z_3,z_4,z_5)$,
with $z_5$ being the latest arriving message to
be processed. In this context,
we can use $\Omega_n.D$ axiom to redistribute
the decoding operations and obtain an improvement
in performance, which is not a trivial process. The idea is to push to the top
majority level the $z_i$ variables with the highest
possible index $i$. For this purpose,
axiom $\Omega_n.D$ transforms
$y=M_5(x_1,x_2,x_3,x_4,M_5(z_1,z_2,z_3,z_4,z_5))$
into
$y=M_5(M_5(x_1,x_2,x_3,x_4,z_1),
M_5(x_1,x_2,x_3,x_4,z_2),\\
M_5(x_1,x_2,x_3,x_4,z_3),
z_4,z_5)$.
In this latter model of majority decoding,
most of the computation is performed in advance
before the late messages $z_4$ and $z_5$
arrive. This means that, when the late $z_5$ arrives,
there is need for just one level of majority computation
and not two as in the initial model.
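The correctness of this redistribution can be checked exhaustively; the sketch below (our own verification) compares the two decoder forms over all $2^9$ input patterns:

```python
from itertools import product

def maj(*bits):
    """n-input majority (n odd): returns 1 when more than half the inputs are 1."""
    return int(sum(bits) > len(bits) // 2)

# The Omega_n.D redistribution used for the decoder: pushing the slow
# channel's late repetitions z4, z5 to the top level preserves y.
for bits in product((0, 1), repeat=9):
    x1, x2, x3, x4, z1, z2, z3, z4, z5 = bits
    y_orig = maj(x1, x2, x3, x4, maj(z1, z2, z3, z4, z5))
    y_dist = maj(maj(x1, x2, x3, x4, z1),
                 maj(x1, x2, x3, x4, z2),
                 maj(x1, x2, x3, x4, z3),
                 z4, z5)
    assert y_orig == y_dist
```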
\subsection{Emerging Technologies}
Majority gates with more than $3$ inputs have been simulated and implemented for a variety of non-CMOS technologies. A further generalization of the majority gate is the threshold logic gate \cite{maj1}, which computes a weighted sum of multiple inputs; once the sum exceeds a pre-determined threshold, the output is true. As such, a threshold logic gate can be configured to function as a majority logic gate. In the following, we describe a few published works that present majority or threshold gates with more than $3$ inputs.
Majority logic gates were experimentally demonstrated with {\em Quantum-dot Cellular Automata} (QCA) in~\cite{imre_qca} and~\cite{snider99_qca}. To facilitate QCA circuit design, a tool named QCADesigner was developed~\cite{qcadesigner}. Simulations of the $M_5$ gate using QCADesigner are presented in several papers, including~\cite{navi_maj5}.
Fig.~\ref{maj5impl} depicts two possible
QCA implementations for a $M_5$ gate.
\begin{figure}[!ht]
\centering%
\includegraphics[width=1.05\columnwidth]{maj5}
\caption{Two different implementations of a
$M_5$ gate in QCA technology \cite{navi_maj5}.}
\label{maj5impl}
\end{figure}
Applications of large majority gates towards efficient adder construction were also discussed. For example, an $M_7$ gate has also been
proposed.
Fig.~\ref{maj7impl} depicts a possible
QCA implementation for an $M_7$ gate.
\begin{figure}[!ht]
\centering%
\includegraphics[width=0.9\columnwidth]{maj7}
\caption{Physical implementation of a
$M_7$ gate in QCA technology \cite{navi_maj5}.}
\label{maj7impl}
\end{figure}
Note that an $M_5$ gate, an $M_3$ gate
and an inverter gate are sufficient to build a
full-adder, as highlighted
by the
theoretical
case study in Section \ref{optimization}.
In this
scenario, the proposed $\Omega_n$ axiomatic
system is key to unveil such efficient
circuit implementations
in QCA nanotechnology, where majority gates
are the logic primitives for computation.
Very recently,
a majority logic circuit based on domain-wall nanowires has been proposed in~\cite{hao14_dwm}. The circuit is used for computing binary additions efficiently and can be shown to scale to majority gates with an arbitrary number of inputs.
All-spin logic gates were originally proposed in~\cite{datta_asl}. Majority logic gates using all-spin logic were proposed in~\cite{augustin_maj}. There, the layout of an $M_3$ gate using all-spin logic is shown and it is noted that majority gates with a larger number of inputs can also be implemented. Indeed,
a high fan-in majority gate is realizable by
a simple
superposition of spin-waves with the same amplitude but
different phases \cite{coolswd}.
Fig.~\ref{cswd} depicts a sketch of a
high fan-in majority gate in spin-wave technology.
\begin{figure}[!ht]
\centering%
\includegraphics[width=1.0\columnwidth]{majswd}
\caption{
Block diagram and schematic representation of a
high fan-in majority gate in spin-wave technology \cite{coolswd}.}
\label{cswd}
\end{figure}
In~\cite{fan2014_spin_mem}, a {\em Spin-Memristor Threshold Logic} (SMTL) gate using a memristive crossbar array is proposed. There, an array of SMTL gates is designed and simulated with experimentally validated device model characteristics. By varying the threshold input count, different possible mappings are demonstrated with good performance improvements over CMOS FPGA structures.
A programmable CMOS/memristor threshold logic is proposed in~\cite{gao13_cmtl}. A $4$-input threshold logic gate is experimentally demonstrated using Ag/a-Si/Pt memristive devices. They also propose a threshold logic network similar to~\cite{fan2014_spin_mem} with programmable fan-in.
It is to be noted that none of the aforementioned implementations employed an automated synthesis flow to exploit majority gates with more than $3$ inputs. Thus, the potential for compact realization of diverse applications, even if feasible with these technologies, has hardly been explored due to the lack of an efficient synthesis flow. Our proposed sound and complete axiomatization aims at filling this gap.
\vspace{0.02in}
Note that the aforementioned examples are just a few of the possible applications of $n$-ary majority logic and of its sound and complete axiomatization. More opportunities exist in other fields of computer
science
but their discussion is out of the scope of this
paper.
\section{Conclusions}
\label{concl}
In this paper, we proposed a sound and complete axiomatization of majority logic. Stemming from previous work on MAJ-3/INV logic, we extended fundamental axioms to
arbitrary $n$-ary majority operators. Based on this general set of axioms, computer applications
can
now fully exploit the expressive power of majority logic. We discussed the potential impact in the fields of logic optimization, Boolean satisfiability, repetition codes
and emerging technologies.
From a general standpoint, the possibility of manipulating logic in terms of majority operators paves the way for
more efficient computer applications where the core reasoning tasks are performed in the Boolean domain.
In particular, possible directions for future work include
the
development of (i) a complete majority satisfiability solver
and (ii) a majority synthesis tool targeting nanotechnologies.
\section*{Acknowledgements}
The authors would like to thank Prof. Maciej Ciesielski for valuable discussions.
This research was supported by ERC-2009-AdG-246810.
\section{Introduction}
The characterisation of spatial distributions in terms of fractal
concepts~\cite{Mandelbrot79, Feder88} is becoming increasingly
important. In particular, many distributions in nature are found to
have the characteristics of a multi-fractal~\cite{Hentschel83,
Halsey86, Paladin87}: among many examples are galaxy
clustering~\cite{Borgani93, Martinez91}, strange
attractors~\cite{Procaccia88a}, fluid turbulence~\cite{Sreenivasan91},
percolation~\cite{Isichenko92}, the shapes of
neurons~\cite{Jelinek01, Jelinek04},
and plant distributions~\cite{Emmerson95} and shapes~\cite{Jones96}.
In application, methods for estimating fractal dimensions are often
unreliable. One source of error lies in largely unknown biases
introduced by the finite size of data sets, addressed by
Grassberger~\cite{Grassberger88b}, and in the associated finite range
of length-scales inherent in gathered data. In situations where
thousands or tens of thousands of data points are known such biases may
be minor; however, in some interesting problems, for example in the
spatial clustering of underwater plants~\cite{Emmerson95}, only of the
order of 100 data points are known and confidence in the fractal
characterisation may be misplaced. We need to know more about factors
that cause errors in dimension estimates.
Section~\ref{ss2} discusses the sensitivity of the multiplicative
multi-fractal process to regions of very low probability (measure).
Since such regions only rarely contribute a data point, an experimental
sample cannot discern them but such regions do affect the generalised
dimensions. Hence I argue that the determination from experimental
data of generalised dimensions,~$D_q$, for non-positive~$q$ is
meaningless; for $0<q<1$ computations are very sensitive to the sample;
and thus the most robust fractal dimension is the information
dimension~$D_1$. The argument is supported in Section~\ref{ss3} by a
maximum likelihood method~\cite{Roberts95b} of estimating the
multi-fractal properties of a data set. The method shows the enormous
sensitivity of~$D_q$ for negative~$q$. In contrast the information
dimension is reliably estimated.
\section{Poor conditioning of generalised dimensions of negative order}
\label{ss2}
For example, consider the Hausdorff dimension,~$D_0$, of multifractals
generated by two different ternary multiplicative processes.
\begin{itemize}
\item Consider first the process shown in Figure~\ref{ftern}(a)
where an interval is divided into three thirds and the ``mass'' of
the original interval is assigned as follows: a fraction $f_1>0$
to the left third; a fraction $f_2=1-f_1>0$ to the right third;
and none to the middle third. Repeat this subdivision
recursively. This generates a multiplicative multifractal whose
Hausdorff dimension of $D_0=\log_32=0.6309$ is precisely the same
as the Cantor set because there is no ``mass'' in the middle
thirds.
\item Conversely, and perversely, consider the process shown in
Figure~\ref{ftern}(b) where for some small~$\epsilon$ the ``mass''
is assigned as follows: a fraction $f_1>0$ is assigned to the left third;
a fraction $f_2>0$ is assigned to the rightmost third; and a small
fraction $\epsilon>0$ is assigned to the middle third (such that
$f_1+f_2+\epsilon=1$). Repeat recursively. This generates a
multiplicative multi-fractal whose Hausdorff dimension is $D_0=1$
because there is ``mass'' everywhere along the whole interval!
Although the vast bulk of the ``mass'' can be covered by~$2^n$
intervals of length~$3^{-n}$, we definitely do need~$3^n$
intervals in order to ensure coverage of the thinly spread
``mass'' that fills most of the original interval.
\end{itemize}
The importance of this for the analysis of an experimental data set of
$N$~sampled points is that one cannot distinguish, from the data,
between these two multi-fractal generating processes for
$\epsilon=\ord{1/N}$. Thus one cannot estimate the Hausdorff
dimension~$D_0$ with any accuracy since either answer, $0.6309$~or~$1$
could be correct.
\begin{figure}[tbp]
\centerline{{\tt \setlength{\unitlength}{0.075em}
\begin{picture}(402,196)
\thinlines \put(242,18){$\epsilon f_2$}
\put(170,18){$\epsilon f_1$}
\put(206,18){$\epsilon^2$}
\put(310,18){$f_2\epsilon$}
\put(87,18){$f_1\epsilon$}
\put(206,46){$\epsilon$}
\put(336,10){\line(-1,0){37}}
\put(151,10){\line(1,0){111}}
\put(77,10){\line(1,0){37}}
\put(151,40){\line(1,0){111}}
\put(40,10){\begin{picture}(333,62)
\thicklines \put(311,8){$f_2^2$}
\put(234,8){$f_2f_1$}
\put(88,8){$f_1f_2$}
\put(12,8){$f_1^2$}
\put(278,38){$f_2$}
\put(56,38){$f_1$}
\put(333,0){\line(-1,0){37}}
\put(222,0){\line(1,0){37}}
\put(111,0){\line(-1,0){37}}
\put(0,0){\line(1,0){37}}
\put(222,30){\line(1,0){111}}
\put(0,30){\line(1,0){111}}
\put(0,60){\line(1,0){333}}
\end{picture}}
\put(40,120){\begin{picture}(333,62)
\thicklines \put(311,8){$f_2^2$}
\put(234,8){$f_2f_1$}
\put(88,8){$f_1f_2$}
\put(12,8){$f_1^2$}
\put(278,38){$f_2$}
\put(56,38){$f_1$}
\put(333,0){\line(-1,0){37}}
\put(222,0){\line(1,0){37}}
\put(111,0){\line(-1,0){37}}
\put(0,0){\line(1,0){37}}
\put(222,30){\line(1,0){111}}
\put(0,30){\line(1,0){111}}
\put(0,60){\line(1,0){333}}
\end{picture}}
\thicklines \put(10,67){(b)}
\put(10,177){(a)}
\end{picture}}}
\caption{schematic diagram of the first few stages in the
multiplicative multi-fractal process to illustrate the sensitivity
of the Hausdorff dimension~$D_0$ with respect to low density
regions,~(b), as a perturbation of the same process with zero
density regions,~(a).}
\protect\label{ftern}
\end{figure}
Similar reasoning applies to generalised dimensions with negative~$q$.
Elementary arguments give that the generalised
dimensions~\cite{Hentschel83} of the multi-fractal generated by the
second process above are
\begin{equation}
D_q=\left\{
\begin{array}{ll}
\frac{-1}{q-1}\log_3\left[f_1^q+f_2^q+\epsilon^q\right] & \mbox{if
}q\neq 1\,, \\
-\left[f_1\log_3f_1+f_2\log_3f_2+\epsilon\log_3\epsilon\right] &
\mbox{if }q=1\,.
\end{array}\right.
\end{equation}
It is readily appreciated that for negative order~$q$ and
small~$\epsilon$, the term~$\epsilon^q$ inside the logarithm
dominates the evaluation of the generalised dimension~$D_q$. Hence, all
generalised dimensions for negative~$q$ are also extremely sensitive to
small~$\epsilon$. In a data set obtained from experiments, one cannot
expect to distinguish between zero~$\epsilon$ and small non-zero
$\epsilon=\ord{1/N}$, and yet the generalised exponents and
multi-fractal spectrum are markedly different. See Figure~\ref{fthe}
which plots the generalised dimensions for $f_1\approx1/4$,
$f_2\approx3/4$ and various small~$\epsilon$.
\begin{figure}[tbp]
\centerline{\includegraphics[width=0.95\textwidth]{gendim}}
\caption{multi-fractal generalised dimensions~$D_q$ for the
ternary multi-fractal process with $f_1=(1-\epsilon)/4$,
$f_2=(1-\epsilon)3/4$ and $\epsilon=0$ (solid), $0.01$ (dashed)
and $0.05$ (dotted). This figure shows that $D_q$~for negative
order~$q$ is extraordinarily sensitive to small influences: the
curve of smaller~$\epsilon$ is the most changed.}
\protect\label{fthe}
\end{figure}
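The sensitivity illustrated in Figure~\ref{fthe} is easy to reproduce numerically; the following sketch evaluates $D_q$ for the ternary process directly from the expression given above:

```python
import math

def ternary_dq(q, f1, f2, eps):
    """Generalised dimension D_q of the ternary multiplicative process
    with weights f1, f2, eps (f1 + f2 + eps = 1); base-3 logarithms.
    Zero weights are excluded from the sums."""
    terms = [w for w in (f1, f2, eps) if w > 0]
    if q == 1:
        return -sum(w * math.log(w, 3) for w in terms)
    return -math.log(sum(w**q for w in terms), 3) / (q - 1)

# eps = 0 gives the Cantor-set Hausdorff dimension log_3(2) = 0.6309...,
# while any eps > 0 jumps D_0 to 1: the support fills the whole interval.
print(ternary_dq(0, 0.25, 0.75, 0.0))       # ~ 0.6309
print(ternary_dq(0, 0.2375, 0.7125, 0.05))  # 1.0
```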
We can be more precise about the sensitivity to low density regions
by computing the derivative of~$D_q$ with respect to~$\epsilon$. For
definiteness, suppose $f_1=\phi_1(1-\epsilon)$ and
$f_2=\phi_2(1-\epsilon)$. Then
\begin{equation}
\frac{\partial D_q}{\partial \epsilon}=\frac{-q}{q-1}
\frac{\epsilon^{q-1}-\left(\phi_1^q+\phi_2^q\right)(1-\epsilon)^{q-1}}%
{\log3\,\left[\epsilon^q+\left(\phi_1^q+\phi_2^q\right)(1-\epsilon)^q\right]}
\,.
\label{ede}
\end{equation}
For small, but non-zero, $\epsilon\to 0$ this asymptotes to
\begin{equation}
\frac{\partial D_q}{\partial\epsilon}\sim\frac{1}{\log3}\left\{
\begin{array}{ll}
\frac{q}{q-1} & \mbox{if }1<q\,, \\
\frac{q}{(1-q)(\phi_1^q+\phi_2^q)}\epsilon^{q-1} & \mbox{if
}0<q<1\,, \\
\frac{q}{1-q}\epsilon^{-1} & \mbox{if }\phantom{0<}q<0\,.
\end{array}
\right.
\label{easy}
\end{equation}
This derivative is unbounded as $\epsilon\to0$ for $q<1$, and so any
computation of~$D_q$ is only robust if $q\geq 1$.
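The closed form~(\ref{ede}) and its asymptotics~(\ref{easy}) can be cross-checked numerically against a finite-difference derivative; a short sketch:

```python
import math

PHI1, PHI2 = 0.25, 0.75  # phi_1, phi_2 as in the text

def dq(q, eps):
    """D_q for f1 = phi1*(1-eps), f2 = phi2*(1-eps), q != 1 (base-3 logs)."""
    s = eps**q + (PHI1**q + PHI2**q) * (1 - eps)**q
    return -math.log(s, 3) / (q - 1)

def ddq_deps(q, eps):
    """Analytic derivative of D_q with respect to eps, Eq. (ede)."""
    s = eps**q + (PHI1**q + PHI2**q) * (1 - eps)**q
    num = eps**(q - 1) - (PHI1**q + PHI2**q) * (1 - eps)**(q - 1)
    return -q / (q - 1) * num / (math.log(3) * s)

# Central finite difference agrees with the closed form.
q, eps, h = 2.0, 0.01, 1e-6
fd = (dq(q, eps + h) - dq(q, eps - h)) / (2 * h)
assert abs(fd - ddq_deps(q, eps)) < 1e-4
```

For $q>1$ and $\epsilon\to0$ the computed derivative approaches the bounded limit $q/((q-1)\log 3)$, while for $q<1$ it grows without bound, as stated above.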
The reason for this aberrant behaviour is clear. With a finite number
of data points, it is impossible to tell the difference between truly
empty space and space which is visited so rarely that no data point
happens to fall within it. That is, one cannot tell the difference
between empty space and space that should be filled in with very low
probability. These differences dramatically affect the generalised
dimensions~$D_q$ for $q<1$. Thus for any experimental data set:
\begin{itemize}
\item estimating~$D_q$ for $q\leq0$ is nonsense (including the Hausdorff
dimension);
\item estimates of~$D_q$ for small positive~$q$ are sensitive; and
\item I only recommend the reporting of dimensions~$D_q$
for $q\geq1$ as being robust.
\end{itemize}
Out of all the generalised dimensions for order $q\geq 1$, $D_1$~is
most representative of the fractal as a whole. For large order~$q$,
the computation of~$D_q$ is determined only by the very ``densest''
regions of the multi-fractal and so is not representative of the whole
fractal. In the above multiplicative process,
\begin{displaymath}
D_q\sim-\log_3\mbox{max}(f_1,f_2,\epsilon)
\quad\mbox{as}\quad q\to\infty\,,
\end{displaymath}
showing that the large~$q$ behaviour is dictated by the one parameter
of the process that determines the character of the very densest
clusters in the fractal. The very dense clusters occur rarely in the
fractal; they have low fractal dimension as seen in the low~$f$ value
typically associated with low values of~$\alpha$ in the multi-fractal
spectrum. Because of this rareness, the computation from experimental
data of~$D_q$ for large positive order~$q$ is unreliable. Then,
conversely, the information dimension weights the data most uniformly,
and so ``knows'' most about the fractal, without being overly
sensitive to the possible occurrence of regions of very low
probability. The information dimension seems most informative.
\section{Fractal dimensions unbiased by finite size of data sets}
\label{ss3}
Cronin \& Roberts~\cite{Roberts95b} proposed a novel method to
eliminate biases, caused by finite sized data sets, in determining the
multi-fractal properties of a given data set. Jelinek et
al.~\cite{Jelinek01, Jelinek04} used this method to explore the shape
of neuron cells. The method compares characteristics of the inter-point
distances in the data set with those of artificially generated
multi-fractals. By maximising the likelihood that the characteristics
are the same we model the multi-fractal nature of the data by the
parameters of the artificial multi-fractal. By searching among
artificial multi-fractals with precisely the same number of sample
points as in the data, we anticipate that biases due to the finite
sample size will be statistically the same in the data and in the
artificial multi-fractals; hence predictions based upon the fitted
multi-fractal parameters should be unbiased by the finite sample size.
The method also appears to give a reliable indication of the error in
the estimates---a very desirable feature as also noted by Judd \&
Mees~\cite{Judd91}. Most importantly for this paper, I generate finite
size data sets with specific parameters for the following specific
multiplicative multi-fractal process. Given parameters
$\rho\in[0,0.5]$ and $\phi\in[0,0.5]$ a binary multiplicative
multi-fractal is generated by the recursive procedure of dividing each
interval into two halves, then assigning a fraction~$\phi$ of the
points in the interval to a random sub-interval of length~$\rho$ in the
left half, and the complementary fraction $\phi'=1-\phi$ to a random
sub-interval of length~$\rho$ in the right half. Such a process has
generalised dimension
\begin{equation}
D_q=\frac{\log\left(\phi^q+{\phi'}^q\right)}{(q-1)\log\rho}\,,
\label{emdq}
\end{equation}
and a multi-fractal spectrum $f(\alpha)$~\cite[\S4]{Halsey86} given
parameterically in terms of $0<\xi<1$ and $\xi'=1-\xi$ as
\begin{equation}
f = \frac{\xi\log{\xi}+\xi'\log{\xi'}}
{\log{\rho}}
\,, \qquad
\alpha = \frac{\xi\log{\phi}+\xi'\log{\phi'}}
{\log{\rho}} \,.
\label{emfs}
\end{equation}
Here I chose $\rho=1/3$ and $\phi=1/4$ and sample the process with
$N=100$; such a multi-fractal forms a finite data set whose
parameters we need to estimate from the sample.
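Equation~(\ref{emdq}) and its $q\to1$ limit (the information dimension) are straightforward to evaluate; e.g., for the parameters used here, $\rho=1/3$ and $\phi=1/4$:

```python
import math

def dq(q, rho=1/3, phi=1/4):
    """Generalised dimensions of the binary multiplicative process,
    Eq. (emdq); the q -> 1 case is the analytic (L'Hopital) limit."""
    phi2 = 1 - phi
    if q == 1:
        return (phi * math.log(phi) + phi2 * math.log(phi2)) / math.log(rho)
    return math.log(phi**q + phi2**q) / ((q - 1) * math.log(rho))

print(dq(0))  # capacity dimension log 2 / log 3, ~ 0.6309
print(dq(1))  # information dimension, ~ 0.512
```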
As explained in~\cite{Roberts95b}, we analyse such a sample by probing
it with \emph{exactly} the same multiplicative multi-fractal process,
and seek the best fit parameters. Here the resulting estimate of the
original parameters is then in error \emph{only} due to the finite size
of the sample of the original multi-fractal process. Because we fit
the data with a process which we know includes the one that generated
the data (a luxury rare in practice), there is no other
error. Thus the spread in errors that we see is characteristic of only
the errors induced by a finite sized sample, nothing else. In
particular, observe that the deductions of the preceding section are
indeed appropriate.
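For concreteness, one possible generator for such an $N$-point sample follows; this is a sketch consistent with the process described above, and the original study's generator may differ in details such as the binomial splitting of point counts.

```python
import random

def sample_multifractal(n, lo=0.0, hi=1.0, rho=1/3, phi=1/4, depth=12):
    """Draw n points from the binary multiplicative multi-fractal: at each
    level a point goes with probability phi into a randomly placed
    sub-interval of length rho*(hi-lo) in the left half, and with
    probability 1-phi into one in the right half."""
    if n == 0:
        return []
    if depth == 0:
        return [random.uniform(lo, hi) for _ in range(n)]
    width = hi - lo
    n_left = sum(random.random() < phi for _ in range(n))
    left_lo = lo + random.uniform(0, width / 2 - rho * width)
    right_lo = lo + width / 2 + random.uniform(0, width / 2 - rho * width)
    return (sample_multifractal(n_left, left_lo, left_lo + rho * width,
                                rho, phi, depth - 1) +
            sample_multifractal(n - n_left, right_lo, right_lo + rho * width,
                                rho, phi, depth - 1))

points = sample_multifractal(100)
assert len(points) == 100 and all(0.0 <= x <= 1.0 for x in points)
```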
\begin{figure}
\centerline{\includegraphics[width=0.95\textwidth]{n100means}}
\caption{predicted multi-fractal parameters $(\rho,\phi)$, indicated
by~$\circ$'s, from the maximum likelihood match to an ensemble of 16
different realisations, each of $N=100$ data points, of a binary
multiplicative multi-fractal with parameters $\rho=1/3$ and
$\phi=1/4$, indicated by~$+$. The mean location of the
predictions is indicated by a~$\times$.}
\protect\label{n100means}
\end{figure}
I repeat the sampling of the multi-fractal followed by a maximum
likelihood estimate of the parameters 16~times. Figure~\ref{n100means}
plots the estimates of the parameters. Observe that the whole sampling
and estimation process appears unbiased in that the mean of the
predictions is reasonably close to the correct values of the
parameters.
\begin{figure}
\centerline{\includegraphics[width=0.95\textwidth]{n100dqs}}
\caption{ensemble of multi-fractal generalised dimensions~$D_q$,
dotted, for each of the predictions plotted in
Figure~\protect\ref{n100means} made from samples of $N=100$ data
points. For comparison the generalised dimensions for the actual
fractal is plotted as the solid line. Observe the good estimation
near the information dimension, but the large errors for negative
order~$q$.}
\protect\label{n100dqs}
\end{figure}
Ultimately, experimenters want to examine multi-fractal properties of
the data. Here these will be determined from the parameters
$(\rho,\phi)$ of the best fit multi-fractal substituted into analytic
expressions such as (\ref{emdq})~and~(\ref{emfs}). For each of the
16~realisations and their best-fit estimates, I plot the corresponding
predicted generalised dimensions~$D_q$ in Figure~\ref{n100dqs}. (The
corresponding graphs of the multi-fractal spectra~$f(\alpha)$ are
plotted in Figure~7 of~\cite{Roberts95b} along with the true
$f(\alpha)$~curve.) Observe that the predicted dimensions for
positive~$q$ (low~$\alpha$) are quite good for all realisations,
especially near the information dimension,~$D_1$. However, predicted
dimensions for negative~$q$ (high~$\alpha$) are very poor; this is
also the case for the Hausdorff dimension~$D_0$ (the maximum of the
$f(\alpha)$~curve). The negative~$q$ predictions are poor despite the
fitting process ``knowing'' that there are no very low probability
regions in this artificial process. In general applications one
cannot know this and I expect the negative~$q$ (large~$\alpha$)
predictions to be significantly worse. These numerical results
convincingly support the arguments of the preceding section that we
should use the information dimension, not the Hausdorff.
\bibliographystyle{plain}
\section{Introduction}
Internal shocks \citep{Rees:1994ca} are invoked to explain the
variability of blazars \citep[see
e.g.,][]{Spada:2001p815,Mimica:2005sp} and the light curves of the
prompt phase of gamma-ray bursts (GRBs)
\citep{Sari:1995oq,Sari:1997p766,Daigne:1998wq}. A possible problem in
this model is the question whether this mechanism is efficient enough
to explain the relation between the observed energies both in the
prompt GRB phase and in the afterglow (see e.g.,
\citealt{Kobayashi:1997p657} (KPS97), \citealt{Beloborodov:2000p632},
\citealt{Kobayashi:2000p599}, \citealt{Fan:2006p1375}). To assess the
efficiency of the internal shock model, most of the previous works
focus on the comparison between the observed light curves and the
model predictions employing a simple inelastic collision of two point
masses (KPS97; \citealp{Lazzati:1999p1360}; \citealt{Nakar:2002p1323};
\citealt{Tanihata:2003p1291}; \citealt{Zhang:2004p1381}). Less
attention has been paid to the hydrodynamic effects during the shell
collision \citep[but
see][]{Kobayashi:2000p599,Kino:2004p811,Mimica:2004fy,Mimica:2005sp,Bosnjak:2009p1427}.
The ejecta in GRBs and blazars may be rather magnetized, particularly
if they originate as a Poynting-flux-dominated flow
\citep[e.g.,][]{Usov:1992hp}. Forming shocks in highly magnetized media
is challenging \citep{Rees:1974yq,Kennel:1984kx}. Therefore, to
account for the observed phenomenology it is necessary to address how
efficient the process of internal collisions in arbitrarily magnetized
flows is. This question has been partly considered by a few recent
works \citep[e.g.,][]{Fan:2004p1007,Mimica:2007db}.
The basis for studying the efficiency of internal collisions is the
determination of the dynamic efficiency of a single binary collision,
i.e., the efficiency of converting the kinetic energy of the colliding
fluid into thermal and/or magnetic energy. Note that the radiative
efficiency (i.e., the efficiency of converting the kinetic energy of
the flow into radiation) is expected to be somewhat smaller. According
to, e.g., \citet{Panaitescu:1999p1022} and \citet{Kumar:1999p1084}, it
can be as low as $f_r\sim 0.1$. As we shall show in this paper,
binary collisions in relativistic, magnetized flows can be an efficient
enough way to dissipate a major fraction of the bulk kinetic energy
of a relativistic ejecta. Therefore, whether the model of
internal shocks is efficient enough to explain the observations
(particularly, the distribution of energies between the prompt GRB
phase and the afterglow phase) will depend on the efficiency of the
particular radiation mechanism that produces the observed emission
(i.e., on the factor $f_r$).
We model internal shocks as shells of plasma with the same energy flux
and a non-zero relative velocity. The contact surface, where the
interaction between the shells takes place, can break up either into
two oppositely moving shocks (in the frame where the contact surface
is at rest), or into a reverse shock and a forward rarefaction. The
determination of whether one or the other possibility occurs is
computed by estimating the invariant relative velocity between the
fastest and the slowest shell, i.e., by solving the Riemann problem
posed by the piecewise uniform states given by the physical quantities
on the two interacting shells (\secref{sec:riemann}). In
\secref{sec:edissipation} we define precisely the notion of dynamic
efficiency, both for shocks and rarefactions. We perform a parametric
study of the binary shell collision dynamic efficiency in
\secref{sec:parametric}. The discussion and conclusions are listed in
\secref{sec:discussion}. Finally, we have extended our analysis
of the dynamic efficiency of internal shocks in magnetized,
relativistic plasma to consider more realistic equations of state in
the Appendix.
\section{Relativistic magnetohydrodynamic Riemann problem}
\label{sec:riemann}
We model the interaction between parts of the outflow with varying
properties by considering Riemann problems, i.e. relativistic
magnetohydrodynamic initial-value problems with two constant states
separated by a discontinuity in planar symmetry. We note that we
could use a more sophisticated approach consisting of performing
numerical relativistic magnetohydrodynamic (RMHD) simulations of the
interaction of parts of the outflow with different
velocities. However, such an approach demands huge computational
resources (even performing one dimensional simulations using the
same code as in \citealp{Mimica:2009qa}), and we are interested in
sampling very finely a large parameter space with our models. Apart
from this numerical reason, it is in order to point out that, by the
internal shock phase, the lateral expansion of the flow is very
small, since the flow is probably cold and ultrarelativistic. Thus,
a description of the interactions assuming planar symmetry suffices
to compute the dynamic efficiency of such interactions (rather than
a more complex spherically symmetric approach).
In the following we use subscripts $L$
and $R$ to denote properties of the (faster) left and (slower) right
state, respectively. To avoid repeated writing of a factor $4\pi$ and
the speed of light $c$, we normalize the rest-mass density $\rho$ to
$\rho_R$, the energy density to $\rho_R c^2$ and the magnetic field
strength to $c\sqrt{4\pi \rho_R}$.
\subsection{Initial states of the Riemann problem}
\label{sec:initial}
For the initial thermal pressure of both states we assume that it is a
small fraction of the density, $p_L = \chi \rho_L$ and $p_R =
\chi$. We assume magnetic fields perpendicular to the direction of the
flow propagation. The remaining parameters determining the RMHD
Riemann problem are: the density contrast $\rho_L$, the Lorentz factor
of the right state $\Gamma_R$, the relative Lorentz factor difference
$\Delta g := (\Gamma_L - \Gamma_R)/\Gamma_R$, and the magnetizations
of left and right states, $\sigma_L := B_L^2/(\Gamma_R^2(1+\Delta
g)^2\rho_L)$ and $\sigma_R := B_R^2/\Gamma_R^2$, where $B_L$ and $B_R$
are the lab frame magnetic field strengths of left and right states,
respectively. Furthermore, we define the total (thermal + magnetic)
pressure
\begin{equation}\label{eq:pstar}
p^* := p + \dsfrac{B^2}{2\Gamma^2} = p + \dsfrac{\sigma\rho}{2}\, ,
\end{equation}
the total specific enthalpy
\begin{equation}\label{eq:hstar}
h^*:= 1 + \epsilon + p / \rho + \sigma \, ,
\end{equation}
and
\begin{equation}\label{eq:estar}
e^*:= \rho (1 + \epsilon) + \dsfrac{\sigma\rho}{2}\, ,
\end{equation}
where $\epsilon$ denotes the specific internal energy, which depends on the equation of state used (see \secref{sec:outflow}).
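The three definitions above can be collected in a small helper. The following is a minimal sketch in the paper's units ($\rho_R = c = 1$, factor $4\pi$ absorbed); the function names are ours, not from any published code:

```python
def total_pressure(p, rho, sigma):
    """Total (thermal + magnetic) pressure: p* = p + sigma*rho/2 (eq. pstar)."""
    return p + 0.5 * sigma * rho

def total_specific_enthalpy(p, rho, eps, sigma):
    """Total specific enthalpy: h* = 1 + eps + p/rho + sigma (eq. hstar)."""
    return 1.0 + eps + p / rho + sigma

def total_energy_density(rho, eps, sigma):
    """Total energy density: e* = rho*(1 + eps) + sigma*rho/2 (eq. estar)."""
    return rho * (1.0 + eps) + 0.5 * sigma * rho
```

For an unmagnetized fluid ($\sigma = 0$) these reduce to the purely hydrodynamic quantities.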
The general solution of a RMHD Riemann problem was found by
\citet{Giacomazzo:2006p2513}, and recently used in RMHD numerical
codes by e.g., \citet{vanderHolst:2008p2553}. However, here we deal
with a degenerate RMHD configuration, whose solution was first
found by \citet{Romero:2005zr}. The typical structure of the flow
after the break up of the initial discontinuity consists of two
initial states, and two intermediate states separated by a contact
discontinuity (CD). The total pressure and velocity are the same on
both sides of the CD. The quantity $\sigma/\rho$ is uniform
everywhere, except across the CD, where it can have a jump. We denote
the total pressure of the intermediate states by $p_S^*$, and the rest-mass
densities to the left and right of the CD by $\rho_{S,L}$ and $\rho_{S,
R}$. In the context of internal shocks, if the flow is
ultrarelativistic in the direction of propagation, the velocity
components perpendicular to the flow propagation should be
negligibly small and, hence, they are set to zero in our
model\footnote{If such velocities were significant, appreciable
changes in the Riemann structure may result as pointed out in \citet{Aloy:2006p2560} or \citet{Aloy:2008kx}.}.
\subsection{Conditions for the existence of a two-shock solution}
\label{sec:twoshock}
One of the key steps in solving a Riemann problem is to determine
under which conditions internal shocks can form. States ahead and
behind the shock front are related by the Lichnerowicz adiabat
\citep{Romero:2005zr}
\begin{equation}\label{eq:Lichnerowicz}
(h_b^*)^2 - (h_a^*)^2 - \left(\dsfrac{h_b^*}{\rho_b}-\dsfrac{h_a^*}{\rho_a}\right)(p_b^* - p_a^*) = 0\, .
\end{equation}
Following \citet{Rezzolla:2001ys}, we study the relative velocity
between the states ahead (a) and behind (b) the shock front (all
velocities are measured in the rest frame of the shock, and all
thermodynamic properties are measured in the fluid rest frame),
\begin{equation}\label{eq:v_ab}
v_{ab} := \dsfrac{v_a - v_b}{1 - v_a v_b} =
\sqrt{\dsfrac{(p_b^* - p_a^*)(e_b^* - e_a^*)}{(e_a^* + p_b^*)(e_b^* + p_a^*)}}\, .
\end{equation}
In our case states ahead of the shock are the initial (L, R)
states, while states behind the shock are the intermediate
states. Since $v_{ab}$ is Lorentz-invariant, we can measure the
velocity ahead of the left-propagating ({\it reverse}) shock
(RS) in the frame in which the CD is at rest,
\begin{equation}\label{eq:v_l}
v_l = \sqrt{\dsfrac{(p_S^* - p_L^*)(e_{S,L}^*(p_S^*) - e_L^*)}{(e_L^* + p_S^*)(e_{S,L}^*(p_S^*) + p_L^*)}}\, .
\end{equation}
Likewise, the velocity ahead of the right-going ({\it forward})
shock (FS) measured in the CD frame is
\begin{equation}\label{eq:v_r}
v_r = -\sqrt{\dsfrac{(p_S^* - p_R^*)(e_{S,R}^*(p_S^*) - e_R^*)}{(e_R^* + p_S^*)(e_{S,R}^*(p_S^*) + p_R^*)}}\, ,
\end{equation}
where $e_{S,L}^*$ and $e_{S,R}^*$ are the energy densities of the
states to the left and to the right of the CD, respectively. The
rest-mass densities $\rho_{S,L}$ and $\rho_{S,R}$ can be obtained from
\eref{eq:Lichnerowicz} and \eref{eq:hstar}.
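Once the starred quantities on both sides of a shock are known, equations \eref{eq:v_ab}--\eref{eq:v_r} are purely algebraic. A sketch (function names ours):

```python
from math import sqrt

def shock_relative_velocity(p_a, e_a, p_b, e_b):
    """Relative velocity between the states ahead (a) and behind (b) of a
    shock, eq. (v_ab), in terms of total pressures p* and energy densities e*."""
    return sqrt((p_b - p_a) * (e_b - e_a) / ((e_a + p_b) * (e_b + p_a)))

def compose_velocities(v_l, v_r):
    """Relativistic relative velocity v_lr = (v_l - v_r)/(1 - v_l*v_r)."""
    return (v_l - v_r) / (1.0 - v_l * v_r)
```

In the two-shock test of the next subsection, $v_l$ enters with positive and $v_r$ with negative sign, so `compose_velocities(v_l, v_r)` is the invariant $v_{lr}$ used there.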
Since both FS and RS only exist if $p_S^* > p_R^*$ and $p_S^* >
p_L^*$, respectively, with decreasing $p_S^*$ either the FS will
disappear first (for $p_S^* = p_R^* > p_L^*$, giving $v_r=0$) or the
RS will disappear first (for $p_S^* = p_L^* > p_R^*$, giving $v_l =
0$). Using equations \eref{eq:v_l} and \eref{eq:v_r} and the
invariance of the relative velocity between the left and right states,
$v_{lr} := (v_l - v_r)/(1 - v_lv_r)$, we can determine the minimum
relative velocity for which a two-shock solution is possible
\begin{equation}\label{eq:V_LR2S}
(v_{lr})_{2S} = \left\{
\begin{array}{rl}
\sqrt{\dsfrac{(p_L^* - p_R^*)(e_{S,R}^*(p_L^*) - e_R^*)}{(e_{S,R}^*(p_L^*) + p_R^*)(e_R^* + p_L^*)}} & \mathrm{if}\ p_L^*=p_S^*>p_R^*\\[4mm]
\sqrt{\dsfrac{(p_R^* - p_L^*)(e_{S,L}^*(p_R^*) - e_L^*)}{(e_{S,L}^*(p_R^*) + p_L^*)(e_L^* + p_R^*)}} & \mathrm{if}\ p_L^*<p_R^*=p_S^*
\end{array}
\right.
\end{equation}
Generally, the quantity $(v_{lr})_{2S}$ can only be determined
numerically. If $v_{lr} < (v_{lr})_{2S}$, a single shock and a
rarefaction emerge from the initial discontinuity. It is even possible
that two rarefactions form instead of two shocks (see
\citealp{Rezzolla:2001ys}).
\section{Energy dissipation efficiency of internal shocks}
\label{sec:edissipation}
Internal shocks in relativistic outflows are invoked as moderately
efficient means of conversion of the kinetic energy of the flow into
radiation. In this section we present our model for inhomogeneous,
ultrarelativistic outflows and provide an operative definition for the
efficiency of conversion of the initial energy of the outflow into
thermal and magnetic energy produced by internal shocks. We assume
that a fraction of this thermal and magnetic energy will be radiated
away.
\subsection{Outflow model}
\label{sec:outflow}
To study internal shocks we idealize interactions of parts of the
outflow moving with different velocities as collisions of homogeneous
shells. In our model the faster (left) shell catches up with the
slower (right) one yielding, in some cases, a pair of shocks
propagating in opposite directions (as seen from the CD frame). In
order to cover a wide range of possible flow Lorentz factors and shell
magnetizations, we assume that, initially, the energy flux in the
lab frame is uniform and the same in both shells\footnote{In
\citet{Mimica:2009qa} we use a similar model to compare afterglow
ejecta shells with different levels of magnetization, with the slight
difference that in that study, instead of having the same energy flux,
all the ejecta shells were assumed to contain the same total energy.}. The energy
flux for a shell with rest-mass density $\rho$, ratio of thermal
pressure to density $\chi$, magnetization $\sigma$ and Lorentz factor
$\Gamma$ is \citep[e.g.,][]{Leismann:2005rz}
\begin{equation}\label{eq:F_tau}
F_{\tau}:= \rho \left[ \Gamma^2 (1 + \epsilon + \chi + \sigma) - \Gamma\right] \sqrt{1 - \Gamma^{-2}}\, .
\end{equation}
Using the notation introduced in \secref{sec:initial} and assuming the equality of $F_{\tau}$ in both shells we find that the density contrast $\rho_L$ between left and right shells is
\begin{equation}\label{eq:rho_L}
\rho_L = \dsfrac{(1+\Delta g)^{-2}\left[1 + \epsilon + \chi + \sigma_R - \Gamma_R^{-1}\right]\sqrt{1 - \Gamma_R^{-2}}}
{\left[1 + \epsilon + \chi + \sigma_L - \Gamma_R^{-1}(1+\Delta g)^{-1}\right]\sqrt{1 - \Gamma_R^{-2}(1+\Delta g)^{-2}}}\, .
\end{equation}
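Eq.~\eref{eq:rho_L} can be evaluated directly. Below is a sketch that assumes the polytropic EoS of this section, so that $\epsilon = \chi/(\hat{\gamma}-1)$ is the same in both shells (the function name and default arguments are ours):

```python
from math import sqrt

def density_contrast(Gamma_R, dg, sigma_L, sigma_R, chi=1e-4, gamma_hat=4.0 / 3.0):
    """Density contrast rho_L (in units of rho_R) implied by equal lab-frame
    energy fluxes in both shells, eq. (rho_L)."""
    eps = chi / (gamma_hat - 1.0)          # polytropic EoS with p = chi * rho
    Gamma_L = Gamma_R * (1.0 + dg)         # Lorentz factor of the faster shell
    num = (1.0 + eps + chi + sigma_R - 1.0 / Gamma_R) * sqrt(1.0 - Gamma_R ** -2)
    den = (1.0 + eps + chi + sigma_L - 1.0 / Gamma_L) * sqrt(1.0 - Gamma_L ** -2)
    return num / den / (1.0 + dg) ** 2
```

For $\Delta g = 0$ and equal magnetizations the shells are identical and the contrast is 1; for $\Delta g > 0$ the faster shell must be less dense to carry the same energy flux.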
Considering $\sigma_L$, $\sigma_R$, $\Gamma_R$ and $\Delta g$ as
parameters, we can use \eref{eq:rho_L} to compute the rest of the
variables needed to set-up the Riemann problem. We then compute the
break up of the initial discontinuity between both shells using the
exact Riemann solver developed by \citet{Romero:2005zr}.
In the following we use a polytropic equation of state with an
adiabatic index $\hat{\gamma} = 4/3$:
\begin{equation}
\epsilon := \dsfrac{p}{(\hat{\gamma} - 1)\rho}
\end{equation}
As we show in the Appendix, the Riemann solver of
\citet{Romero:2005zr} has been suitably modified to include a more
realistic equation of state. However, we find no qualitative
differences between the results using a polytropic EoS (with either
$\hat{\gamma}=4/3$ or $\hat{\gamma}=5/3$) and $\hat{\gamma}$-variable EoS. Furthermore, the
quantitative differences are very small in terms of dynamic
efficiency.
\subsection{Efficiency of energy dissipation by a shock}
To model the dynamic efficiency of energy dissipation we follow
the approach described in \citet{Mimica:2007db}, suitably modified to
account for the fact that, in the present work, situations can occur
where either the FS or the RS does not exist (see
\secref{sec:twoshock}). By using the exact solver we determine the
existence of shocks and (in case one or two shocks exist) obtain the
hydrodynamic state of the shocked fluid. We use subscripts $S, L$ and
$S, R$ to denote shocked portions of left and right shells,
respectively. In the following we treat the efficiency of each shock
separately.
\subsubsection{Reverse shock}
To compute the efficiency we need to compare the energy content of the
initial (unshocked) faster shell with that of the shocked shell at the
moment when RS has crossed the initial shell. Assuming an initial
shell width $\Delta x$, we define total initial kinetic, thermal and
magnetic energy (see also equations (A.1) - (A.3) of
\citealt{Mimica:2007db})
\begin{equation}\label{eq:energies}
\begin{array}{rcl}
E_K(\Gamma, \rho, \Delta x) & := & \Gamma (\Gamma - 1) \rho \Delta x\\[4mm]
E_T (\Gamma, \rho, p, \Delta x)& := & [(\rho \epsilon + p) \Gamma^2 - p] \Delta x\\[4mm]
E_M (\Gamma, \rho, \sigma, \Delta x) & := & \left(\Gamma^2 - \dsfrac{1}{2}\right)\rho \sigma \Delta x
\end{array}\, .
\end{equation}
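Per unit cross-section and in the paper's units, eq.~\eref{eq:energies} translates directly into code. This is a sketch (names ours); for the thermal energy we use the polytropic $\epsilon = p/((\hat{\gamma}-1)\rho)$:

```python
def kinetic_energy(Gamma, rho, dx):
    """E_K, eq. (energies): bulk kinetic energy of a shell of width dx."""
    return Gamma * (Gamma - 1.0) * rho * dx

def thermal_energy(Gamma, rho, p, dx, gamma_hat=4.0 / 3.0):
    """E_T, eq. (energies), with the polytropic eps = p/((gamma_hat-1)*rho)."""
    eps = p / ((gamma_hat - 1.0) * rho)
    return ((rho * eps + p) * Gamma ** 2 - p) * dx

def magnetic_energy(Gamma, rho, sigma, dx):
    """E_M, eq. (energies)."""
    return (Gamma ** 2 - 0.5) * rho * sigma * dx
```

Note that a shell at rest ($\Gamma = 1$) has zero kinetic energy, while its thermal energy reduces to $\rho\epsilon\,\Delta x$ as expected.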
When the RS crosses the whole initial shell, the length of the
compressed shell (i.e., the fluid between the RS and the CD) is
$\zeta_L \Delta x$, where
\[
\zeta_L := \dsfrac{v_{CD} - v_{S, L}}{v_L - v_{S, L}} < 1
\]
and $v_{CD}$ and $v_{S, L}$ are velocities (in the lab frame) of the
contact discontinuity and the RS, both obtained from the solver.
Without loss of generality, we can normalize the initial shell width
so that $\Delta x=1$. Then we define the dynamic \emph{thermal}
efficiency
\begin{equation}\label{eq:thermalL}
\varepsilon_{T, L} := \dsfrac{E_T(\Gamma_{S, L}, \rho_{S, L}, p_{S, L}, \zeta_L) -
E_T(\Gamma_R (1 + \Delta g), \rho_L, \chi \rho_L, 1)}{E_0}\, ,
\end{equation}
and the dynamic \emph{magnetic} efficiency
\begin{equation}\label{eq:magneticL}
\varepsilon_{M, L}:= \dsfrac{E_M(\Gamma_{S, L}, \rho_{S, L}, \sigma_{S, L}, \zeta_L) - E_M(\Gamma_R (1 + \Delta g), \rho_L, \sigma_L, 1)}{E_0}\, ,
\end{equation}
where $E_0$ is the total initial energy of both shells
\begin{equation}\label{eq:E_0}
\begin{array}{rl}
E_0 &:= E_K(\Gamma_R (1 + \Delta g), \rho_L, 1) + E_T(\Gamma_R (1 + \Delta g), \rho_L, \chi \rho_L, 1)\\[4mm]
&+ E_M(\Gamma_R (1 + \Delta g), \rho_L, \sigma_L, 1) + E_K(\Gamma_R, 1, 1) \\[4mm]
&+ E_T(\Gamma_R, 1, \chi, 1) + E_M(\Gamma_R, 1, \sigma_R, 1)\, .
\end{array}\,
\end{equation}
Equations \eref{eq:thermalL} and \eref{eq:magneticL} express the
fraction of the initial energy that the RS has converted into thermal
and magnetic energy, respectively.
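The normalization $E_0$ of eq.~\eref{eq:E_0} follows from the same energy functions. A self-contained sketch for unit shell widths and a polytropic EoS (the function name and its inner helpers are ours):

```python
def total_initial_energy(Gamma_R, dg, rho_L, chi, sigma_L, sigma_R,
                         gamma_hat=4.0 / 3.0):
    """Total initial energy E_0 of both shells, eq. (E_0), for Delta x = 1."""
    def e_k(G, rho):                       # kinetic energy, eq. (energies)
        return G * (G - 1.0) * rho

    def e_t(G, rho, p):                    # thermal energy, polytropic eps
        eps = p / ((gamma_hat - 1.0) * rho)
        return (rho * eps + p) * G ** 2 - p

    def e_m(G, rho, sigma):                # magnetic energy
        return (G ** 2 - 0.5) * rho * sigma

    Gamma_L = Gamma_R * (1.0 + dg)
    return (e_k(Gamma_L, rho_L) + e_t(Gamma_L, rho_L, chi * rho_L)
            + e_m(Gamma_L, rho_L, sigma_L)
            + e_k(Gamma_R, 1.0) + e_t(Gamma_R, 1.0, chi)
            + e_m(Gamma_R, 1.0, sigma_R))
```

In the cold, unmagnetized limit ($\chi \to 0$, $\sigma \to 0$), $E_0$ reduces to the sum of the two bulk kinetic energies, the normalization used in earlier internal shock studies.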
\subsubsection{Forward shock}
In complete analogy, we define the thermal and magnetic
efficiencies for the forward shock,
\begin{equation}\label{eq:thermalR}
\varepsilon_{T, R} := \dsfrac{E_T(\Gamma_{S, R}, \rho_{S, R}, p_{S, R}, \zeta_R) - E_T(\Gamma_R, 1, \chi, 1)}{E_0}\, ,
\end{equation}
\begin{equation}\label{eq:magneticR}
\varepsilon_{M, R}:= \dsfrac{E_M(\Gamma_{S, R}, \rho_{S, R}, \sigma_{S,
R}, \zeta_R) - E_M(\Gamma_R, 1, \sigma_R, 1)}{E_0}\, ,
\end{equation}
where
\[
\zeta_R := \dsfrac{v_{S, R} - v_{CD}}{v_{S, R} - v_R} < 1\, .
\]
and $v_{S, R}$ is the velocity of the FS in the lab frame. Here we set
$\varepsilon_{T, R}=\varepsilon_{M, R}=0$ if the forward shock is absent.
Combining equations \eref{eq:thermalL}, \eref{eq:magneticL},
\eref{eq:thermalR} and \eref{eq:magneticR} we define the
dynamic thermal and magnetic efficiency of internal shocks
\begin{equation}\label{eq:thermal}
\varepsilon_T := \varepsilon_{T, L} + \varepsilon_{T, R}
\end{equation}
\begin{equation}\label{eq:magnetic}
\varepsilon_M:= \varepsilon_{M, L} + \varepsilon_{M, R}\, .
\end{equation}
We point out that these definitions of efficiency generalize the ones
typically used when cold, unmagnetized shell collisions are
considered. In that case, initially one only has bulk kinetic energy
in the shells (i.e., $E_0=E_K(\Gamma_L, \rho_L, 1) + E_K(\Gamma_R, 1,
1)$). In the case of collisions of arbitrarily magnetized shells with
arbitrary initial thermal content, $E_0$ can be substantially larger
than the initial kinetic energy in the shells.
\subsection{Efficiency of energy dissipation by a rarefaction}
In a rarefaction there is a net conversion of magnetic and/or thermal
energy into kinetic energy, thus the net dynamic efficiency produced
by a rarefaction, defined as in e.g., Eq.~\eref{eq:magneticL}, should
be negative. The consequence of this is that, when we obtain a
shock-contact-rarefaction or rarefaction-contact-shock structure as a
solution to the Riemann problem, it may happen that the total (left
plus right) thermal or magnetic efficiency
(Eqs.~\eref{eq:thermal}-\eref{eq:magnetic}) is negative. However,
this situation does not correctly model the fact that, in cases where
a shock exists only in one of the shells, it is still able to radiate
away part of the thermal or magnetic energy behind it, even though
there is a rarefaction propagating through the other shell. We
also point out that rarefactions happen in our case, where we model
initially cold flows, as a result of a net conversion of magnetic
into kinetic energy. This kinetic energy can be further recycled by
the flow, and dissipated in the course of ulterior binary
collisions. Therefore, directly summing the (positive) dynamic
efficiency of conversion of kinetic-to-thermal/magnetic energy in a
shock to the (negative) dynamic efficiency of conversion in a
rarefaction is inadequate. The total dynamic efficiency in a case
where only one shock forms will be determined only by the efficiency
in the shocked shell. Thus, we set $\varepsilon_{T, L} =
\varepsilon_{M, L}=0$ ($\varepsilon_{T, R} = \varepsilon_{M, R}=0$) if
the reverse (forward) shock is absent. We note that this contrasts
with previous works on internal collisions of unmagnetized shells
\citep[e.g.,][]{Kino:2004p811}, and may yield higher values of the net
dynamic efficiency.
\section{Parametric study of the dynamic dissipation efficiency}
\label{sec:parametric}
Next we study the dynamic dissipation efficiency in the process of
collision of cold, magnetized shells. The shells are
assumed to be cold because, in the standard fireball model
\citep[e.g.,][]{Piran:2005p2632}, almost all the internal energy of the
ejecta has been converted to kinetic energy {\it before} internal shocks start to
show up. Thus, a regime where the parameter $\chi$ is large does not
properly model an efficiently accelerated ejecta by non-magnetic
processes. On the other hand, if the ejecta were accelerated by
magnetic fields (as in Poynting-dominated flow models; e.g., \citealt{Usov:1992hp}), then the flow is cold all the way from the beginning to the
internal shock phase, and $\chi$ should also be small in such a
case.
For all the models in this paper, in order to reduce the dimensions
of the parameter space, we fix $\chi = 10^{-4}$ uniform everywhere,
to model initially cold shells and, unless otherwise specified, set
$\Delta g = 1$ as a reference value. We choose $\chi$
sufficiently small so that it does not influence the solution of the
Riemann problem. In the first two subsections we consider blazar
and GRB regimes. Then, we study the flow structure for three
representative Riemann problems, and end the section with a discussion
of the impact of varying $\Delta g$ on the efficiency.
\subsection{Blazar regime}
In the blazar regime we set $\Gamma_R = 10$ as a typical value of the
Lorentz factor of the outflowing material. We continuously vary
$\sigma_L$ and $\sigma_R$ and show contours of total efficiency
($\varepsilon_T$ + $\varepsilon_M$) in \figref{fig:blazar}.
\begin{figure}
\includegraphics[width=8.5cm]{fig1.eps}
\caption{Contours: total dynamic efficiency $\varepsilon_T +
\varepsilon_M$ (eqs.~\eref{eq:thermal} and \eref{eq:magnetic}) in the
blazar regime ($\Gamma_R = 10$, $\Delta g = 1$) for different
combinations of $(\sigma_L, \sigma_R)$. Contours indicate the
efficiency in percent and their levels are $1$, $2$, $3$, $4$, $5$,
$6$, $7$, $8$, $9$, $10$, $11$, $12$ and $13$. In the region of the
parameter space above the dashed line there is no forward shock,
while the reverse shock is always present for the considered
parametrization. Filled contours: magnetic efficiency $\varepsilon_M$
in percent.}
\label{fig:blazar}
\end{figure}
The maximum efficiency is attained for moderately magnetized slower
shells ($\sigma_R\approx 0.2$) and highly magnetized left shells
($\sigma_L \approx 1$). The broad region to the right of the
efficiency maximum is independent of $\sigma_L$ because in a collision
with such a highly magnetized fast shell almost all the energy is
dissipated by the FS. In the region above the dashed line of
\figref{fig:blazar} the FS is absent and, thus, since only the RS
dissipates the initial energy, the efficiency slightly drops. However,
the transition between the regime where the two shocks operate and
that where only the RS exists is smooth, because the efficiency at and
below the separatrix of the two regimes is dominated by the
contribution of the RS.
As expected, when either $\sigma_L$ or $\sigma_R$ approach low values,
the dynamic efficiency ceases to depend on them. This can be seen in
the center of the left side of \figref{fig:blazar} where, for
$\sigma_L\ll 1$, the dynamic efficiency only depends on
$\sigma_R$. The converse is true in the center of the lower side of
the figure, where $\sigma_R\ll 1$.
\subsection{GRB regime}
\begin{figure}
\includegraphics[width=8.5cm]{fig2.eps}
\caption{Same as \figref{fig:blazar}, but in the GRB regime ($\Gamma_R = 100$, $\Delta g = 1$).}
\label{fig:grb}
\end{figure}
The results in the GRB regime ($\Gamma_R = 100$, $\Delta g = 1$) are
shown in \figref{fig:grb}. The general shape of the contours is
similar to \figref{fig:blazar}, which is expected since both in the
blazar and in the GRB regime the flow is ultrarelativistic and most of
the quantities which depend on the \emph{relative} velocity between
the faster and the slower shell depend only weakly on $\Gamma_R$,
$\Delta g$ being the crucial parameter \citep{Daigne:1998wq}. For
example, the dashed curve which delimits regions with and without a
forward shock does not depend on $\Gamma_R$ but only on $\Delta g$.
The maximum of the dynamic efficiency in the GRB regime is localized
at roughly the same spot as in the blazar regime. However, compared to
the former case the region of maximum efficiency (i.e., where
$\varepsilon_T+\varepsilon_M \gtrsim 0.13$) is smaller.
\subsection{Flow structure}
\label{sec:flowstr}
In this subsection we study the flow structure for three
representative models in the GRB regime. Their location in the
parameter space is marked by letters $A$, $B$ and $C$ in
\figref{fig:grb}. Model $A$ corresponds to a prototype of interaction
between non-magnetized shells $(\sigma_L = \sigma_R = 10^{-6})$. Model
$B$ is picked up to illustrate the flow structure at the maximum
efficiency $(\sigma_L = 0.8,\ \sigma_R = 0.2)$. Finally, model $C$
corresponds to the case when the FS is absent $(\sigma_L = 1,\
\sigma_R = 10^2)$. We show the rest mass density profile of these
models in \figref{fig:structure}.
\begin{figure}
\includegraphics[width=8.5cm]{fig3.eps}
\caption{Rest-mass density profile of models $A$, $B$ and $C$ (see
legends) whose parameters are given in
Sect.~\ref{sec:flowstr}. Profiles have been shifted so that the CD
for all models coincides. In models $A$ and $B$ the FS and the RS
are clearly visible, while in the model $C$ a rarefaction wave is
visible as a small ``step'' to the right of the CD.}
\label{fig:structure}
\end{figure}
Model $A$ (thick full line on \figref{fig:structure}) shows two strong
shocks which dissipate kinetic into thermal energy. In contrast, model
$B$ (dashed line) has much weaker shocks due to non-negligible
magnetization in both shells. Finally, model $C$ (thin full line) does
not have a forward shock due to the very high magnetization of the slower
shell.
All three models have a substantial dynamic efficiency, but there is a
qualitative difference among them. In model $A$ internal shocks
dissipate kinetic to thermal energy only (thermal efficiency). In
model $B$ shocks mainly compress the magnetic field (magnetic
efficiency) and dissipate only a minor fraction of the initial kinetic
and magnetic energy to thermal energy. Finally, in model $C$ only
the reverse shock is active, compressing the faster shell magnetic
field.
\subsection{Dependence on $\Delta g$ and on $\Delta s$}
\label{sec:deltag-deltas}
\begin{figure}
\includegraphics[width=8.5cm]{fig4.eps}
\caption{The gray scale indicates the value of the maximum total
dynamic efficiency (in percent) as a function of the parameter
pair $(\Delta g, \Delta s)$. The values of the rest of the
parameters are fixed to $\Gamma_R=100$, and $\chi=10^{-4}$.
Contours: magnetization of the slowest shell: $\sigma_R=0.1$, $0.5$,
1, 5 and 10.}
\label{fig:deltag-deltas}
\end{figure}
The choice of a relatively small value of $\Delta g$ in the previous
section is motivated by the results of numerical simulations of
relativistic outflows
\citep[e.g][]{Aloy:2000ad,Aloy:2005zp,Mizuta:2006p1126,Zhang:2003p1193,Zhang:2004p1195,
Morsony:2007p1208,Lazzati:2009p1210,Mizuta:2009p1213} where $\Delta
g < 2$ between adjacent parts of the flow that may catch up (but see,
e.g., \citealt{Kino:2008p1217}, who find $\Delta g\sim 1-19$
appropriate to model Mrk 421). These adjacent flow regions can be
identified with the pairs of shells whose binary collisions we are
considering here.
However, it has been confirmed by several independent works (KPS97;
\citealt{Beloborodov:2000p632}; \citealt{Kobayashi:2000p599};
\citealt{Kino:2004p811}, etc.) that, in order to achieve a high
efficiency (more than a few percent) in internal collisions of
unmagnetized shells, the ratio between the maximum ($\Gamma_{\rm
max}$) and the minimum ($\Gamma_{\rm min}$) Lorentz factor of the
distribution of initial shells should be $\Gamma_{\rm max}/\Gamma_{\rm
min}>10$.
In view of these results, we have also made an extensive analysis of
the dependence of the dynamic efficiency on the variation of $\Delta
g$. Since we are also interested in evaluating the influence of the
magnetic fields on the results, we define a new variable
\begin{equation}\label{eq:magnetics}
\Delta s= \frac{1+\sigma_L}{1+\sigma_R}\, ,
\end{equation}
and plot (Fig.~\ref{fig:deltag-deltas}) the value of the maximum
efficiency reached for every combination $(\Delta g, \Delta s)$ and
fixed values of the rest of the parameters to $\Gamma_R=100$, and
$\chi=10^{-4}$. To be more precise, for fixed $\Delta g$ and $\Delta
s$ we need to look for the maximum of the efficiency of all models
whose $\sigma_L$ and $\sigma_R$ satisfy Eq.~\eref{eq:magnetics}. It
is evident from Fig.~\ref{fig:deltag-deltas} that the maximum total
dynamic efficiency grows (non-monotonically) with increasing $\Delta
g$, in agreement with the above mentioned works (where unmagnetized
collisions have been considered). Indeed, a large value $\Delta g
\gtrsim 10$ yields dynamic efficiency values $\sim 40\%$ if both
shells are moderately magnetized ($\sigma_R\sim\sigma_L \lesssim
0.1$). Nevertheless, the amount of increase of efficiency with $\Delta
g$ depends strongly on $\Delta s$. For $|\Delta s| \gtrsim 1$,
corresponding to cases where the slower shell is highly magnetized
($\sigma_R\gtrsim 4$), the maximum dynamic efficiency is almost
independent of $\Delta g$; while for $|\Delta s| \lesssim 1$, the
maximum dynamic efficiency displays a strong, non-monotonic dependence
on $\Delta g$.
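As written, eq.~\eref{eq:magnetics} gives $\Delta s$ directly from the two magnetizations; inverting it fixes $\sigma_R$ along a line of constant $\Delta s$, which is how the maximization over models at fixed $(\Delta g, \Delta s)$ can be organized. A sketch (names ours):

```python
def delta_s(sigma_L, sigma_R):
    """Delta s = (1 + sigma_L)/(1 + sigma_R), eq. (magnetics)."""
    return (1.0 + sigma_L) / (1.0 + sigma_R)

def sigma_R_at_fixed_delta_s(ds, sigma_L):
    """Invert eq. (magnetics) for sigma_R along a contour of constant Delta s."""
    return (1.0 + sigma_L) / ds - 1.0
```

Unmagnetized shells give $\Delta s = 1$, and $\Delta s > 1$ ($< 1$) corresponds to the faster (slower) shell being the more magnetized of the two.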
It is remarkable that values $5 \lesssim \Delta s \lesssim 100$ yield
dynamic efficiencies in excess of $\sim 20\%$, regardless of the
relative Lorentz factor between the two shells. In this region of the
parameter space the maximal dynamic efficiency happens when both
shells are magnetized ($\sigma_R>10$, $\sigma_L>50$), and the total
dynamic magnetic efficiency dominates the total dynamic efficiency.
\section{Discussion}
\label{sec:discussion}
We have focused in this paper on the estimation of the dynamic
efficiency of conversion of kinetic-to-thermal/magnetic energy in
collisions (internal shocks) of magnetized shells in relativistic
outflows. A fundamental difference between the internal collisions in
magnetized and unmagnetized outflows is the fact that in the former
case not only shocks but also rarefactions can form. Thus, one would
naturally expect a reduced dynamic efficiency in the magnetized
case. However, we have shown that such dynamic efficiency may reach
values $\sim 10\%-40\%$, in a wide range of the parameter space
typical for relativistic outflows of astrophysical interest (blazars
and GRBs). Thus, the dynamic efficiency of moderately magnetized shell
interactions is larger than in the corresponding unmagnetized
case. This is because when the shells are moderately magnetized, most
of the initial shell kinetic energy is converted to magnetic energy,
rather than to thermal energy.
The difference in efficiency between flows with moderate Lorentz
factors ($\Gamma_R=10$) and ultrarelativistic ones ($\Gamma_R=100$) is
very small, because in the ultrarelativistic kinematic limit (i.e.,
$\Gamma \gg 1$), once the energy flux and the magnetizations of both
shells are fixed, the key parameter governing the dynamic
efficiency is $\Delta g$ rather than $\Gamma_R$. From numerical
simulations one expects that any efficiently accelerated outflow will
not display huge variations in the velocity between adjacent regions
of flow. Therefore, values of $\Delta g\simeq 1$ seem reasonable and
$\Delta g =1$ has been taken as a typical value for both blazars and
GRB jets. A fixed value of $\Delta g=1$ brings maximum efficiency when
the magnetizations of the colliding shells are
$(\sigma_L,\sigma_R)\simeq(1,0.2)$. Larger dynamic efficiency values
$\sim 40\%$ are reached by magnetized internal shocks if $\Delta
g\gtrsim 10$ and $|\Delta s|\lesssim 1$, corresponding to cases
where the magnetization of both shells is moderate ($\sigma_R\simeq
\sigma_L\lesssim 0.1$).
Consistent with our previous work \citep{Mimica:2007db}, in the limit
of low magnetization of both shells, the kinetic energy is mostly
converted into thermal energy, and the increased magnetic energy in
the shocked plasma is only a minor contribution to the total dynamic
efficiency, i.e., $\varepsilon_M \ll \varepsilon_T$. Here we find that
as the magnetization of the shells grows, the roles of $\varepsilon_T$
and $\varepsilon_M$ are exchanged, so that $\varepsilon_T <
\varepsilon_M$ (at the maximum dynamic efficiency $\varepsilon_T
\simeq 0.1 \varepsilon_M$). If the magnetization of both shells is
large, the dynamic efficiency decreases again because producing shocks
in highly magnetized media is very difficult. All these
conclusions are independent of the EoS used to model the plasma,
i.e., they are both qualitatively and quantitatively basically the
same regardless of whether a polytropic EoS with a fixed adiabatic
index is taken (either $\hat{\gamma}=4/3$ or $\hat{\gamma}=5/3$) or a more general,
analytic approximation to the exact relativistic EoS (the \emph{TM}
EoS; see Appendix) is considered.
The comparison of our results with previous analytic or semi-analytic
works (e.g., KPS97; \citealt{Beloborodov:2000p632};
\citealt{Kobayashi:2001p618}; \citealt{Kobayashi:2002p1216}) is not
straightforward. Generally, these works compute the efficiency of the
collision of shells without computing their (magneto-)hydrodynamic
evolution and, on the other hand, these works include not only a
single collision, but the multiple interactions of a number of dense
shells. The bottom line in these previous works is that internal
collisions of unmagnetized shells can be extremely efficient; the
efficiency exceeding $40\%$, or even $\sim 100\%$
\citep{Beloborodov:2000p632} if the spread of the Lorentz factor
(i.e., the ratio between the Lorentz factor of the faster,
$\Gamma_{\rm max}$, and of the slower $\Gamma_{\rm min}$ shell in the
sample) is large ($\Gamma_{\rm max} /\Gamma_{\rm min}=10^3$; e.g.,
KPS97, \citealt{Kobayashi:2002p1216}). For a more moderate spread of
the Lorentz factor $\Gamma_{\rm max}/\Gamma_{\rm min} =10$, the
efficiency is $\sim 20\%$. We note that these high efficiencies are
reached because a large number of binary collisions is included in the
model (not only a single one as in our case). Thus, the kinetic energy
which is not dissipated in the first generation of collisions (between
the initially set up shells), can be further converted into internal
energy as subsequent generations of collisions take place. In
contrast, we find that moderate magnetizations of both shells
($\sigma\lesssim 0.1$) and $\Delta g\gtrsim 10$ (which would roughly
correspond to $\Gamma_{\rm max}/\Gamma_{\rm min} =9$) are enough for a
single binary collision to reach a total dynamic efficiency of $\sim
40\%$.
We point out that the energy radiated in the collision of magnetized
shells is only a fraction, $f_r\simeq 0.1$
\citep[e.g.,][]{Panaitescu:1999p1022,Kumar:1999p1084} to $f_r\simeq 1$
\citep[e.g.,][]{Beloborodov:2000p632} of the energy dynamically
converted into thermal or magnetic energy. Thus, the radiative
efficiency of the process, measured as the fraction of the total
initial energy converted into radiation, will be $1/f_r$ times smaller
than the computed dynamic efficiency. Even considering this factor,
single binary collisions between moderately magnetized shells may
yield efficiencies $\sim 0.4f_r$, which can obviously rise if many
binary collisions take place in the flow reprocessing the remaining
kinetic energy of the first generation of interacting shells (in the
same statistical way as discussed by
\citealt{Kobayashi:2000p599}). Therefore, in the light of our
results, binary collisions in relativistic magnetized flows are
efficient enough, from the dynamical point of view, to be a valid
mechanism to dissipate the bulk kinetic energy of relativistic
ejecta. Hence, the main restriction on the radiative efficiency
comes from the radiation mechanism setting the limiting factor
$f_r$.
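The scaling discussed above can be summarized in a single relation (a schematic restatement of the argument with illustrative numbers; here $\eta_{\rm dyn}$ and $\eta_{\rm rad}$ denote the dynamic and radiative efficiencies):

```latex
\[
\eta_{\rm rad} = f_r\, \eta_{\rm dyn}\,; \qquad
\eta_{\rm dyn} \simeq 0.4,\ f_r \simeq 0.1
\;\Longrightarrow\; \eta_{\rm rad} \simeq 0.04 .
\]
```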
We stress that we are not assuming any particular radiation mechanism
in this study (and thus we do not fix a value of $f_r$). Therefore,
we compute the dynamic efficiency including not only the increased
thermal energy in the flow, but also the extra magnetic energy
resulting from the magnetic field compression. This is justified
because, although standard shock acceleration mechanisms are
inefficient in very magnetized shocks
\citep[e.g.,][]{Sironi:2009p414}, other mechanisms might extract the
energy from the whole volume of a very magnetized fluid
\citep[e.g.,][]{Thompson:1994p492,Giannios:2007by}.
The estimates of the dynamic efficiency in binary collisions of
magnetized shells will be refined in future work by accounting for
the full numerical MHD evolution of these building blocks of the internal
shock models. A step further would be to compute the radiative
efficiency using the method devised in \cite{Mimica:2009p1445}.
\section*{Acknowledgments}
MAA is a Ram\'on y Cajal Fellow of the Spanish Ministry of Education
and Science. We acknowledge the support from the Spanish Ministry of
Education and Science through grants AYA2007-67626-C03-01 and
CSD2007-00050. We thank Jos\'e-Mar\'{\i}a Mart\'{\i} and
Jos\'e-Mar\'{\i}a Ib\'a\~{n}ez for their support and critical
discussions. The authors thankfully acknowledge the computer
resources, technical expertise and assistance provided by the
Barcelona Supercomputing Center - Centro Nacional de
Supercomputaci\'on.
\section{Introduction}
When quantizing gravity, one faces a tough problem, because time disappears from the equations.
If gravity is coupled to matter, then the changes of matter configurations are supposed to have
the role of time in quantum gravity (see, e.g., a review by Anderson\ci{Anderson}). Typically,
matter is described by scalar, spinor or electromagnetic fields\ci{Kiefer}. A different approach was
explored by Rovelli\ci{Rovelli1,Rovelli2} who considered as a model a single particle coupled to general
relativity. In addition to the particle's coordinates $X^\mu (\tau)$, he also considered a clock
variable, attached to the particle. In Ref.\ci{PavsicKleinWheeler} a model without the clock variable
was investigated, and it was found that the particle's coordinates $X^0$ (as well as $X^i$, $i=1,2,3$)
survive quantization, with $X^0$ having the role of time in quantum gravity in the presence of the particle.
The model was also extended to a system of particles\ci{PavsicKleinWheeler}
and further elaborated\ci{PavsicKleinWheeler1}.
Recently that model was reconsidered by Struyve\ci{Struyve1,Struyve2}. He put the action into such a form that
the matter and gravity parts had the same time parameter $\tau$. This required inserting an extra
${\dot X}^0$ into the gravity part of the action, and thus changing the canonical momenta and
the constraint. Instead of the usual mass shell constraint $p^\mu p_\mu - m^2 = 0$, he obtained a new,
more complicated constraint that contained the Ricci scalar $R$. With this new constraint, it turned
out that upon quantization the time parameter $\tau$ disappeared from the equations. But Struyve
also observed that by a suitable canonical transformation at the classical level and a unitary
transformation at the quantum level one can arrive at the equations obtained
in Ref.\ci{PavsicKleinWheeler,PavsicKleinWheeler1}.
In the present work we intend to clarify this important subject. Firstly, we observe that both the particle and
gravity part of the action can be cast into the form in which they both have the same evolution
parameter, namely, $t\equiv x^0$, while retaining the particle worldline parameter $\tau$ and the
mass shell constraint $p^\mu p_\mu - m^2 =0$. Rewriting the total action by employing the
ADM (1+3) split and varying it with respect to the lapse and shift (considered as Lagrange multipliers),
one obtains the constraints. As a Hamiltonian we take a superposition of those constraints and
find that it leads to the correct equations of motion (the geodesic equation and the
Einstein equations) by employing the ordinary Poisson brackets. By this we verify the correctness of the Hamiltonian so constructed.
To further explore the meaning of the quantities such as the particle's momenta $p_\mu$ and the Hamiltonian,
we perform the total variation of the action that includes a change $\delta x^\mu$ and $\delta \tau$ of the boundary. So we obtain the generator $H$ of the transformations $\delta t$, the generators $p_\mu$ of
the transformations $\delta X^\mu$ (which are changes of particle's position in spacetime), and
the generator of the transformation $\delta \tau$ (which is proportional to the mass shell constraint).
Such a fundamental analysis, covariant at each step, convinces us that all the
momenta $p_\mu$, $ \mu = 0,1,2,3$, have their place within the formalism. At the classical level, the presence of
the particle enables the identification (definition) of spacetime points. At the quantum level those
particle variables $X^\mu$, including $X^0$, remain in the equations; the particle's coordinate $X^0$
has the role of time. Because of the presence of the mass shell constraint, the Hamiltonian obtained by
Rovelli, namely\footnote{
We omit here the clock variable that is included in Rovelli's equation.}
\be
H = \int \dd^3 \bx \,N^\mu {\cal H}_\mu^{\rm ADM} - N^i p_i - N \sqrt{m^2 + \bp^2} ,
\lbl{1.1}
\ee
can be written as $H= \int \dd^3 \bx \, N^\mu {\cal H}_\mu^{\rm ADM} - p_0$. Upon quantization,
$p_0 \rightarrow {\hat p}_0 = - i \frac{\p}{\p X^0}$. We see that the presence of the particle
``saves'' the concept of spacetime, so that time, namely, $X^0$, is present both in the classical and
quantum equations.
Otherwise it would be difficult to retain in the quantum theory the concept of local Lorentz
transformations of Eq.\,(\ref{1.1}), and understand how different local inertial observers
compare the observed values of $p_i$ without bringing $p_0$ into the description.
In Sec.\,2 we first point out that the Einstein equations imply the relation
$\frac{1}{8 \pi G} \int \dd^3 \bx \sqrt{-g} {G_0}^0 = - p_0$. Then we show that the analogous equation
comes out in the ADM formalism. In Sec.\,3 we discuss quantization of that model. At the end we also
touch the problem of the ordering ambiguities and point out how they could be avoided.
\section{Gravity coupled to particle}
The action for a particle coupled to the gravitational field is\footnote{
Such action makes sense if $X^\mu (\tau)$ are not meant to be the coordinates of an exactly
point particle, but coordinates of the center of mass of an extended object. Here
we thus describe not a point particle, but an extended particle (object) coupled to
gravity, and include in the description only a restricted set of the object's variables,
namely its center of mass coordinates.}
\be
I[X^\mu,g_{\mu \nu}] = m \int \dd \tau \left (g_{\mu \nu} {\dot X}^\mu {\dot X}^\nu \right )^{1/2} + \frac{1}{16 \pi G} \int \dd^4 x \, \sqrt{-g}\, R .
\lbl{2.1}
\ee
The variation of this action with respect to the metric $g_{\mu \nu}$ gives
\be
- \frac{1}{8 \pi G}\, G^{\mu \nu}
= \int \dd \tau \,\delta^4 (x-X(\tau))\frac{m {\dot X}^\mu {\dot X}^\nu} {(g_{\alpha \beta} {\dot X}^\alpha {\dot X}^\beta)^{1/2} \sqrt{-g}} = T^{\mu \nu} ,
\lbl{2.2}
\ee
which are the Einstein equations in the presence of the stress-energy tensor $T^{\mu \nu}$ of the
particle.
From Eq.\,(\ref{2.2}) we obtain
\be
- \frac{1}{8 \pi G}\, \int G^{\mu \nu} \sqrt{-g} \,\dd \Sigma_\nu
= \int T^{\mu \nu} \sqrt{-g} \,\dd \Sigma_\nu = p^\mu ,
\lbl{2.3}
\ee
where $p^\mu$ is the particle's momentum. Writing the hypersurface element as
$\dd \Sigma_\nu = n_\nu \dd \Sigma$ and taking coordinates such that $n_\nu = (1,0,0,0)$ and
$\dd \Sigma = \dd^3 \bx$, we have
\be
- \frac{1}{8 \pi G}\, \int G^{\mu 0} \sqrt{-g}\, \dd^3 \bx
= \int T^{\mu 0} \sqrt{-g}\, \dd^3 \bx = \frac{m {\dot X}^\mu}{\sqrt{{\dot X}^\alpha {\dot X}_\alpha}} .
\lbl{2.4}
\ee
Here we have used $\int \dd \tau f(\tau) \delta (x^0 - X^0 (\tau))$ $= \frac{f(\tau)}{{\dot X}^0}|_{\tau_c}$, where $\tau_c$ is the solution of the equation $x^0 = X^0 (\tau)$.
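The delta-function identity quoted here follows from the usual change of variable $u = X^0(\tau)$ (a brief sketch, assuming ${\dot X}^0 > 0$ and a unique solution $\tau_c$):

```latex
\be
\int \dd \tau \, f(\tau)\, \delta \big( x^0 - X^0 (\tau) \big)
= \int \frac{\dd u}{{\dot X}^0}\, f\big(\tau(u)\big)\, \delta (x^0 - u)
= \frac{f(\tau)}{{\dot X}^0}\bigg|_{\tau_c} .
\ee
```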
Because of the Bianchi identity, ${G^{\mu \nu}}_{;\nu}=0$ (implying ${T^{\mu \nu}}_{;\nu}=0$),
not all equations (\ref{2.2}) are independent. The equations $\frac{1}{8 \pi G} G^{0\nu} + T^{0 \nu}=0$
are constraints on initial data, and so are the equations $\frac{1}{8 \pi G} G_{0\nu} + T_{0 \nu}=0$.
We thus have four constraints
\be
\phi_\nu = \frac{1}{8 \pi G} G_{0\nu} + T_{0 \nu}=0 .
\lbl{2.5}
\ee
Similarly, not all components of the metric $g_{\mu \nu}$ are independent. The components $g_{0 \nu}$
can be chosen to be an artifact of a choice of coordinates and to have the role of Lagrange multipliers.
The same holds for $g^{0 \nu}$. The variation of the action (\ref{2.1}) with respect to
$g^{0 \nu}$ (having the role of Lagrange multipliers) gives the constraints (\ref{2.5}).
A linear superposition of the constraints (\ref{2.5}) is the Hamiltonian:
\be
H = \int \alpha^\nu \phi_\nu \, \sqrt{-g} \,\dd^3 \bx
= \int \left ( \frac{1}{8 \pi G} G_{0\nu} + T_{0 \nu}\right ) g^{0 \nu} \sqrt{-g}\, \dd^3 \bx = 0,
\lbl{2.6}
\ee
where $\alpha^\nu = g^{0 \nu}$ are arbitrary functions of $x^\mu$. We thus have
\be
\int \frac{1}{8 \pi G} G_{0\nu} g^{0 \nu} \sqrt{-g} \,\dd^3 \bx = - p_0 ,
\lbl{2.7}
\ee
where
\be
p_0 = \int {T_0}^\nu \sqrt{-g} \,\dd \Sigma_\nu = \int {T_0}^0 \sqrt{-g} \,\dd^3 \bx
= \frac{\p L}{\p {\dot X}^0}
= \frac{m \, g_{0 \mu} {\dot X}^\mu }{\sqrt{g_{\alpha \beta} {\dot X}^\alpha \, {\dot X}^\beta}}.
\lbl{2.7a}
\ee
The phase space form of the action (\ref{2.1}) is
$$
I[X^\mu,p_\mu,\pi^{ij},q_{ij},\alpha,N,N^i] = \int \dd \tau \left [ p_\mu {\dot X}^\mu
- \frac{\alpha}{2} (g^{\mu \nu} p_\mu p_\nu - m^2) \right ] \delta^4 (x-X(\tau)) \,\dd^4 x $$
\be
\hs {2cm} + \int \dd^4 x \left (\pi^{ij} \sq_{ij}
- N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM} \right ),
\lbl{2.8}
\ee
where $\pi^{ij}$, $q_{ij}$, $i,j = 1,2,3$, are the ADM phase space variables\ci{ADM1,ADM2},
and ${\cal H}^{\rm ADM}$,
${\cal H}_i^{\rm ADM}$ the ADM expressions for the gravitation part of the constraints. Here
${\dot X}^\mu \equiv \dd X^\mu/\dd \tau$ and $\sq_{ij} \equiv \dd q_{ij}/\dd t$. Later we will also
use $\sX^i \equiv \dd X^i/\dd t$.
Because of the inserted $\delta$-function, the matter and the gravitational part of the action both have
the same time parameter $x^0 \equiv t$.
Performing the integration over $\tau$, we obtain
\be
I= \int \dd t \, \dd^3 \bx \left [ \frac{\delta^3 (\bx-\bX (t))}{{\dot X}^0}
\left ( p_\mu {\dot X}^\mu - \frac{\alpha}{2}
\left ( g^{\mu \nu} p_\mu p_\nu - m^2 \right ) \right ) \bigg|_{\tau_c}
+ \pi^{ij} \sq_{ij} - N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM} \right ] ,
\lbl{2.9}
\ee
where $\tau_c$ is the solution of the equation $x^0 - X^0 (\tau) = 0$. Expressing the metric
according to the ADM split\ci{ADM1,ADM2},
\be
g_{\mu \nu} = \begin{pmatrix} N^2-N^i N_i & -N_i \\
-N_j & - q_{ij}
\end{pmatrix}~,~~~~
g^{\mu \nu} = \begin{pmatrix} {1}/{N^2} & -{N^i}/{N^2} \\
-{N^j}/{N^2} & {N^i N^j}/{N^2}-q^{ij}
\end{pmatrix}
\lbl{2.9a}
\ee
where $N=\sqrt{1/g^{00}}$ and $N_i = - g_{0 i}$, $i=1,2,3$, are the lapse and
shift functions, we have
$$
I= \int \dd t \, \dd^3 \bx \left [ \delta^3 (\bx-\bX (t))
\left ( p_0 + p_i \sX^i - \frac{\alpha}{2{\dot X}^0}
\left ( \frac{1}{N^2}(p_0 - N^i p_i)^2-q^{ij}p_i p_j - m^2 \right ) \right )\bigg|_{\tau_c} \right .$$
\be
\hs{2cm}+ \left .\pi^{ij} \sq_{ij} - N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM} \right ] .
\lbl{2.10}
\ee
Here we identify $\alpha/{\dot X}^0$ with a new Lagrange multiplier according to
$\alpha/{\dot X}^0 = \lambda$, because ${\dot X}^0$ is arbitrary and no longer has the formal
role of a velocity that it had in the original action (\ref{2.8}).
The variation of this action with respect to the 3-metric $q_{ij}$ gives the $(ij)$-components
of the Einstein equation in the ADM split. The variation with respect to other variables gives:
\bear
\delta p_0 &:&1 = \frac{\alpha}{{\dot X}^0}\frac{1}{N^2}(p_0 - N^i p_i) =
\frac{\alpha}{{\dot X}^0} p^0 ~~ \Rightarrow~~ p^0 = \frac{{\dot X}^0}{\alpha} ,
\lbl{2.11}\\
\delta p_i &:&\sX^i = \frac{\alpha}{{\dot X}^0} \left ( - \frac{N^i}{N^2}(p_0 - N^j p_j) - q^{ij}p_j \right )
= \frac{\alpha}{{\dot X}^0}\, p^i~~
\Rightarrow~~ p^i = \frac{\sX^i {\dot X}^0}{\alpha} = \sX^i p^0 ,\lbl{2.12}\\
\delta \alpha &:& \frac{1}{N^2}(p_0 - N^i p_i)^2-q^{ij}p_i p_j - m^2 = 0, \lbl{2.15}
\ear
\be
\Rightarrow~~ p_0 - N^i p_i = \pm N \sqrt{q^{ij}p_i p_j + m^2}
\lbl{2.15a}
\ee
\bear
\delta N &:& {\cal H}^{\rm ADM} = \frac{1}{N}(p_0 - N^i p_i) \delta^3 (\bx - \bX)
= \sqrt{q^{ij}p_i p_j + m^2} \delta^3 (\bx - \bX) \lbl{2.13}\\
\delta N^i &:& {\cal H}_i^{\rm ADM} = \frac{\alpha}{{\dot X}^0} p_i \frac{1}{N^2}(p_0 - N^j p_j) \delta^3 (\bx - \bX)
= \frac{\alpha}{{\dot X}^0} p_i p^0 \delta^3 (\bx - \bX) = p_i \delta^3 (\bx - \bX) \lbl{2.14}
\ear
Here we simplified the notation so that now ${\dot X}^0 = {\dot X}^0|_{\tau_c}$.
The canonical momenta $p_\mu = \p L_{\rm m}^{(\tau)}/(\p {\dot X}^\mu)$, calculated from the action
(\ref{2.8}), whose matter part contains the parameter $\tau$, are the same as the canonical momenta
$p_i = \p L_{\rm m}^{(t)}/\p \sX^i$ and the quantity $p_0$ obtained from the action (\ref{2.10}) in which
$\tau$ was integrated out and the time parameter was $t$. This can be seen from the relations (\ref{2.11})--(\ref{2.15a}).
Equations (\ref{2.15}), (\ref{2.13}) and (\ref{2.14}) imply the following
constraints\ci{PavsicKleinWheeler,PavsicKleinWheeler1}:
\bear
&&\chi = \frac{1}{N^2}(p_0 - N^i p_i)^2-q^{ij}p_i p_j - m^2 = 0, \lbl{2.19}\\
&&\phi = {\cal H}^{\rm ADM} - \frac{1}{N} (p_0 - N^i p_i) \delta^3 (\bx - \bX) = 0 ,
\lbl{2.17}\\
&&\phi_i = {\cal H}_i^{\rm ADM} - p_i \delta^3 (\bx - \bX) = 0 .
\lbl{2.18}
\ear
The Hamiltonian is a superposition of those constraints:
\be
H = \int \dd^3 \bx \, \left ( \lambda \chi \delta^3 (\bx - \bX) + N \phi + N^i \phi_i \right )= 0 .
\lbl{2.20}
\ee
Using (\ref{2.19})--(\ref{2.18}), we obtain
$$
H = \int \dd^3 \bx \, \left [ \lambda \left (\frac{1}{N^2}(p_0 - N^i p_i)^2-q^{ij}p_i p_j - m^2
\right ) \delta^3 (\bx - \bX) \right . \nonumber $$
\be
+ N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM}
- p_0 \delta^3 (\bx - \bX) \biggr] = 0 .
\lbl{2.21}
\ee
The terms with $N^i p_i$ have canceled out in the latter expression. The same Hamiltonian is also
obtained from the action (\ref{2.10}) according to the expression
\be
H= \int \dd^3 \bx \, \left ( p_i \sX^i \delta^3 (\bx - \bX) + \pi^{ij} \sq_{ij} - {\cal L} \right ) .
\lbl{2.20a}
\ee
From Eq.\,(\ref{2.21}), after using (\ref{2.19}) we obtain\ci{PavsicKleinWheeler1}
\be
\int \dd^3 \bx \, \left ( N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM} \right ) = p_0 .
\lbl{2.21a}
\ee
This corresponds to Eq.\,(\ref{2.7}); it is its ADM-split analog.
The equations of motion obtained from the Hamiltonian (\ref{2.21}) are:
\bear
&&{\mr p}_i = \lbrace p_i,H \rbrace = - \frac{\p H}{\p X^i}~,
~~~~\sX^i = \lbrace X^i,H \rbrace = \frac{\p H}{\p p_i} , \lbl{2.22}\\
&&{\mathring{\pi}}^{ij} = \lbrace \pi^{ij},H \rbrace = - \frac{\delta H}{\delta q_{ij}}~,
~~~\sq_{ij} = \lbrace q_{ij},H \rbrace = \frac{\delta H}{\delta \pi^{ij}} , \lbl{2.23}
\ear
where the usual Poisson brackets relations have been used. The quantity $p_0$ is given by
Eq.\,(\ref{2.15a}), which comes from the constraint (\ref{2.19}).
Besides Eqs.\,(\ref{2.22}), there is also the equation obtained by varying $p_0$, namely,
\be
\frac{\p H}{\p p_0} = - \frac{\p L}{\p p_0}= 0 .
\lbl{2.25a}
\ee
Indeed, the same equations (\ref{2.22}), (\ref{2.23}) also follow directly from the phase
space action (\ref{2.10}) according to the Euler-Lagrange equations
\bear
&&\frac{\dd}{\dd t}\frac{\p L}{\p \sX^i} - \frac{\p L}{\p X^i} = 0~,
~~~\frac{\dd}{\dd t}\frac{\p L}{\p \mr{p}_i} - \frac{\p L}{\p p_i} = 0~,
~~~ - \frac{\p L}{\p p_0} = 0 , \lbl{2.25b}\\
&&\frac{\dd}{\dd t}\frac{\delta L}{\delta \sq_{ij}} - \frac{\delta L}{\delta q_{ij}} = 0~,
~~~\frac{\dd}{\dd t}\frac{\delta L}{\delta \mr{\pi}^{ij}} - \frac{\delta L}{\delta \pi^{ij}} = 0 .
\lbl{2.25c}
\ear
Equations (\ref{2.22}) together with (\ref{2.25a}) are equivalent to the geodesic equation.
The same constraints (\ref{2.19})--(\ref{2.18}) also follow directly from the
action (\ref{2.8}) which contains the ``time'' parameter $\tau$ and the velocities ${\dot X}^\mu$.
Then all quantities $p_\mu = (p_0,p_i)$ have the role of canonical momenta, derivable according
to $p_\mu = \frac{\p L}{\p {\dot X}^\mu}$ from such a $\tau$-dependent Lagrangian.
The Hamiltonian defined in terms of a superposition of those constraints is again given by Eq.\,(\ref{2.20}), in which the parameter $\lambda$ is replaced by another parameter, namely $\alpha$.
The equations of motion for $X^\mu$, $p_\mu$ are
\be
{\dot p_\mu} = \lbrace p_\mu,H \rbrace~,~~~~{\dot X}^\mu = \lbrace X^\mu,H \rbrace .
\lbl{2.26}
\ee
Explicitly this gives
\bear
&&{\dot p}_\mu = - \frac{\p H}{\p X^\mu}
= - \frac{\alpha}{2} \p_\mu g^{\alpha \beta} p_\alpha p_\beta
= \frac{\alpha}{2} \p_\mu g_{\alpha \beta}\, p^\alpha p^\beta , \lbl{2.27}\\
&&{\dot X}^\mu = \frac{\p H}{\p p_\mu} = \alpha p^\mu . \lbl{2.28}
\ear
From the latter equations, after using $\alpha=\sqrt{{\dot X}^\mu {\dot X}_\mu}/m$, we
obtain
\be
\frac{1}{\sqrt{{\dot X}^2}} \frac{\dd}{\dd \tau} \left ( \frac{{\dot X}_\mu}{\sqrt{{\dot X}^2}} \right )
- \frac{1}{2} \p_\mu g_{\alpha \beta} \frac{{\dot X}^\alpha {\dot X}^\beta}{{\dot X}^2} = 0 ,
\lbl{2.29}
\ee
or equivalently,
\be
\frac{1}{\sqrt{{\dot X}^2}} \frac{\dd}{\dd \tau} \left ( \frac{{\dot X}^\mu}{\sqrt{{\dot X}^2}} \right )
+ \Gamma_{\alpha \beta}^{\,\mu} \frac{{\dot X}^\alpha {\dot X}^\beta}{{\dot X}^2} = 0 ,
\lbl{2.30}
\ee
which is the geodesic equation.
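The intermediate step between Eqs.\,(\ref{2.27})--(\ref{2.28}) and Eq.\,(\ref{2.29}) can be made explicit (a short sketch; from Eq.\,(\ref{2.28}) and $\alpha = \sqrt{{\dot X}^2}/m$ we have $p_\mu = m {\dot X}_\mu/\sqrt{{\dot X}^2}$):

```latex
\be
\frac{\dd}{\dd \tau} \left ( \frac{m {\dot X}_\mu}{\sqrt{{\dot X}^2}} \right )
= \frac{\alpha}{2}\, \p_\mu g_{\alpha \beta}\, p^\alpha p^\beta
= \frac{m}{2}\, \p_\mu g_{\alpha \beta}\, \frac{{\dot X}^\alpha {\dot X}^\beta}{\sqrt{{\dot X}^2}} \,.
\ee
```

Dividing both sides by $m \sqrt{{\dot X}^2}$ reproduces Eq.\,(\ref{2.29}).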
We see that regardless of whether we start (i) from the original phase space action (\ref{2.8}), or
(ii) from the action (\ref{2.10}), we obtain the same Hamiltonian (\ref{2.21}). In both cases
the Hamiltonian is a superposition of the constraints (\ref{2.19})--(\ref{2.18}), obtained by varying
the action with respect to the Lagrange multipliers $\alpha$, $N$ and $N^i$. In the second case, the
Hamiltonian can also be obtained by using the expression (\ref{2.20a}).
In general, the total variation of an action $I = \int \dd^4 x\, {\cal L}(\phi^a,\p_\mu \phi^a)$
that includes a change $\delta x^\mu$ of the boundary, is (see, e.g.\ci{BarutBook})
\be
{\bar \delta} I = \int_R \dd^4 x \, \delta {\cal L} + \int_{R'-R} \dd^4 x \, {\cal L} =
\int_R \dd^4 x \,\delta {\cal L} + \int_B \dd \Sigma_\mu \, {\cal L} \delta x^\mu .
\lbl{2.31}
\ee
Assuming that the equations of motion are satisfied, we have
\be
{\bar \delta}I = \int_B \dd \Sigma_\mu \left ( \frac{\p {\cal L}}{\p \p_\mu \phi^a} \delta \phi^a
+ {\cal L} \delta x^\mu \right ) = \int \dd^4 x \,\p_\mu
\left ( \frac{\p {\cal L}}{\p \p_\mu \phi^a} \delta \phi^a + {\cal L} \delta x^\mu \right ) .
\lbl{2.32}
\ee
Here $\delta \phi^a = \phi'^a (x)-\phi^a (x)$. Introducing the total variation
${\bar \delta} \phi^a =$ $\phi^a (x')-\phi^a (x)=$ $\delta \phi^a + \p_\mu \phi^a \delta x^\mu$, we
obtain
\be
{\bar \delta}I = \int_B \dd \Sigma_\mu \left [ \frac{\p {\cal L}}{\p \p_\mu \phi^a} {\bar \delta}\phi^a
+ \left ( {\cal L} {\delta_\nu}^\mu - \frac{\p {\cal L}}{\p \p_\mu \phi^a} \p_\nu \phi^a \right )
\delta x^\nu \right ] .
\lbl{2.33}
\ee
Let us consider the action (\ref{2.10}), identify $\phi^a = (X^i,q_{ij})$, and take coordinates in which
the surface element is $\dd \Sigma_\mu = (\dd \Sigma_0,0,0,0)$, $\dd \Sigma_0 = \dd^3 \bx$.
Then Eq.\,(\ref{2.33}) gives
\be
{\bar \delta} I = \int_{t_1}^{t_2} \dd^3 \bx \left [ \frac{\p {\cal L}}{\p \sX^i} {\bar \delta} X^i
+ \frac{\p {\cal L}}{\p \sq_{ij}} {\bar \delta} q_{ij} +
\left ( {\cal L} - \frac{\p {\cal L}}{\p \sX^i} \sX^i - \frac{\p {\cal L}}{\p \sq_{ij}}
\sq_{ij} \right ) \delta t \right ] .
\lbl{2.34}
\ee
The quantities ${\p {\cal L}}/{\p \sX^i} = p_i$ and ${\p {\cal L}}/{\p \sq_{ij}} = \pi^{ij}$
are, respectively, the generators of the infinitesimal translations $X^i \rightarrow X^i +{\bar \delta} X^i$
and $q_{ij} \rightarrow q_{ij} + {\bar \delta} q_{ij}$. The expression in front of $\delta t$ is
just the negative of the Hamiltonian (\ref{2.20a}).
If we consider the original phase space action (\ref{2.8}) and perform the change $\tau \rightarrow
\tau + \delta \tau$, then we obtain
\be
{\bar \delta}' I = \int \dd \tau \, \frac{\dd}{\dd \tau} \left ( \frac{\p L}{\p {\dot X}^\mu} \delta X^\mu
+ L \delta \tau \right ) = \int \dd \tau \, \frac{\dd}{\dd \tau} \left ( p_\mu {\bar \delta} X^\mu +
(L-p_\mu {\dot X}^\mu ) \delta \tau \right ) .
\lbl{2.35}
\ee
Here $p_\mu$ are the generators of the infinitesimal transformations
$X^\mu \rightarrow X^\mu + {\bar \delta} X^\mu$, where ${\bar \delta} X^\mu = X'^\mu (\tau') - X^\mu (\tau) =\delta X^\mu + {\dot X}^\mu \delta \tau$. The quantity in front of $\delta \tau$ is the
generator of infinitesimal transformations $\tau \rightarrow \tau + \delta \tau$; it is equal to
$\frac{\alpha}{2} (g^{\mu \nu} p_\mu p_\nu - m^2 )$. This is the Hamiltonian for the relativistic
particle and it gives the correct equations of motion (namely, that of a geodesic).
We see that by considering the total variation of the action (\ref{2.8}) that includes a change of
$\tau$, we find not only that $p_i$ are the generators of the infinitesimal ``translations''
of $X^\mu$, $i=1,2,3$, but also that $p_0$ is the generator of the translations of $X^0$.
As $p_i$ do not vanish, also $p_0$ does not vanish; $p_\mu = (p_0,p_i)$ are the canonical momenta,
conjugated to the particle's position variables $X^\mu (\tau) = (X^0 (\tau),X^i (\tau))$. Those
variables are distinct objects from the spacetime coordinates $x^\mu = (x^0,x^i)$, $x^0 \equiv t$.
Because the particle is embedded in spacetime, in the action (\ref{2.8}) there occurs
$\delta^4 (x-X(\tau))$, which says precisely that the particle is described by
a worldline $x^\mu = X^\mu (\tau)$.
Let us now follow the approach by Rovelli\ci{Rovelli1,Rovelli2} and see what we obtain if instead of the
phase space action (\ref{2.8}) we use the action (\ref{2.1}), express the metric according
to Eq.\,(\ref{2.9a}), and fix the parameter so that $\tau= x^0 \equiv t$. The action (\ref{2.1})
then reads\footnote{In the approach considered by Rovelli, also a term due to a clock variable on
the particle's world line was included.}
\be
I = m \int \dd t\, \sqrt{N^2 - ({\mr \bX} + \bN)^2} +
\int \dd t\, \dd^3 \bx \left ( \pi^{ij} \sq_{ij} - N {\cal H}^{\rm ADM} - N^i {\cal H}_i^{\rm ADM}
\right ) ,
\lbl{2.36}
\ee
where $({\mr \bX} + \bN)^2 = q_{ij} ({\mr \bX}^i + N^i) ({\mr \bX}^j + N^j)$.
We will also write $\bp^2 = q_{ij} p^i p^j$. The particle momentum is
\be
\bp = - \frac{m ({\mr \bX} + \bN) }{ \sqrt{N^2 - ({\mr \bX} + \bN)^2}} ,
\lbl{2.37}
\ee
from which we have
\be
N^2 - ({\mr \bX} + \bN)^2 = \frac{N^2 m^2}{m^2 + \bp^2}~,~~~{\rm and}~~~
{\mr \bX} + \bN = - \frac{\bp N}{\sqrt{m^2+\bp^2}} .
\lbl{3.37a}
\ee
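The first of the relations (\ref{3.37a}) can be checked by squaring Eq.\,(\ref{2.37}) and solving for $({\mr \bX} + \bN)^2$ (a brief verification):

```latex
\be
\bp^2 = \frac{m^2 ({\mr \bX} + \bN)^2}{N^2 - ({\mr \bX} + \bN)^2}
~~\Rightarrow~~
({\mr \bX} + \bN)^2 = \frac{N^2 \bp^2}{m^2 + \bp^2}
~~\Rightarrow~~
N^2 - ({\mr \bX} + \bN)^2 = \frac{N^2 m^2}{m^2 + \bp^2} .
\ee
```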
The Hamiltonian is given by
\bear
&&H= p_i \sX^i - L_m + \int \left ( \pi^{ij} \sq_{ij} - {\cal L}_G \right ) \,\dd^3 \bx , \nonumber \\
&& \hs{5mm} = - \bN \bp - N \sqrt{m^2 + \bp^2}
+ \int \dd^3 \bx \, \left ( N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM} \right ) .
\lbl{2.38}
\ear
If we vary the action (\ref{2.36}) with respect to $N$ and $N^i$, we obtain the following
constraints\ci{Rovelli1,Rovelli2}:
\bear
&&{\cal H}^{\rm ADM} = \sqrt{m^2+\bp^2} \delta^3 (\bx - \bX ) , \lbl{2.39} \\
&&{\cal H}_i^{\rm ADM} = p_i \delta^3 (\bx - \bX ) . \lbl{2.40}
\ear
The Hamiltonian (\ref{2.38}) is a superposition of those constraints with the coefficients
$N$, $N^i$, and is therefore equal to zero (in the weak sense, i.e., on the constraint surface
in the phase space).
But if we consider the form (\ref{2.1}) of the action, before fixing $\tau$, then we obtain
the momentum $p_0 = {\p L}/{\p {\dot X}^0}$, besides the momenta $p_i = {\p L}/{\p {\dot X}^i}$.
Together all those momenta $p_\mu = (p_0,p_i)$ are constrained according to
\be
g^{\mu \nu} p_\mu p_\nu - m^2 = \frac{1}{N^2} (p_0 - N^i p_i)^2 - q^{ij} p_i p_j - m^2 = 0 .
\lbl{2.41}
\ee
Solving the latter equation for $p_0$, we obtain Eq.\,(\ref{2.15a}), i.e.,
$p_0 = N^i p_i + N \sqrt{q^{ij} p_i p_j + m^2}$. Using the latter relation in Eq.\,(\ref{2.38}),
we obtain
\be
H_G \equiv \int \dd^3 \bx \, \left ( N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM} \right )
= p_0 .
\lbl{2.42}
\ee
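Explicitly (a one-line check), inserting $p_0 = N^i p_i + N \sqrt{q^{ij} p_i p_j + m^2}$ into Eq.\,(\ref{2.38}) and using $\bN \bp = N^i p_i$ gives

```latex
\be
H = - N^i p_i - N \sqrt{m^2 + \bp^2} + H_G = - p_0 + H_G ,
\ee
```

so that $H = 0$ is equivalent to $H_G = p_0$.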
This means that the gravitational Hamiltonian is equal to the particle momentum $p_0$ which,
as we have seen before, is the generator of the transformation
$X^0 \rightarrow X^0 + {\bar \delta} X^0$.
Altogether, $p_\mu = (p_0,p_i)$ generate the transformations
$X^\mu \rightarrow X^\mu + {\bar \delta} X^\mu$, i.e., they shift the particle's position in spacetime.
Recall that Eq.\,(\ref{2.42}) is consistent with Eq.\,(\ref{2.7}) that we obtained directly
from Einstein's equations.
The $H$ of Eq.\,(\ref{2.38}) has a different role. It is the generator of the transformation
$t \rightarrow t+ \delta t$ which in the passive interpretation is just a change of a coordinate,
a reparametrization. And because the action is invariant under reparametrizations of $x^\mu$, the
corresponding generators, defined in Eq.\,(\ref{2.33}), vanish, and so does the $H$
in Eq.\,(\ref{2.38}).
According to Einstein's well-known hole argument, spacetime points cannot be identified.
A way to identify them is to fill spacetime with a reference fluid. If instead of a fluid we
have particles, then spacetime points are identified on the worldlines of those particles.
In the simplified model of a single particle, spacetime points are identified along the
worldline of that particle. From Eq.\,(\ref{2.35}) we
read that ${\bar \delta}X^\mu = ({\bar \delta}X^0,{\bar \delta}X^i)$ is a different sort
of transformation than $\delta x^\mu = (\delta x^0,\delta x^i)$, $\delta x^0 \equiv \delta t$.
An alternative approach was considered by Struyve\ci{Struyve1}. He started from the action
(\ref{2.1}) and cast it into such form that both terms, the particle's and the gravitational,
had the same ``time'' parameter $\tau$. For that purpose the integral
$\int \dd t\, \dd^3 \bx \, \sqrt{-g}\, R$ was transformed into
$\int \dd \tau \,{\dot X}^0 \, \dd^3 \bx\,\sqrt{-{\tl g}} \,{\tl R}$, where ${\tl R}$ and ${\tl g}$ were suitably adjusted versions of $R$ and $g$ that took into account the relation $x^0 = X^0 (\tau)$. Because of the occurrence of
${\dot X}^0$ in the gravity part of the action, the zeroth component of the canonical momentum was not
$p_0 = m {\dot X}_0/\sqrt{{\dot X}^2}$, but $p_0 = m {\dot X}_0/\sqrt{{\dot X}^2}+$ $\int \dd^3 \bx\,\sqrt{-{\tl g}}\, {\tl R}$.
Because of the extra term in $p_0$, the constraint no longer had the simple form (\ref{2.41}).
Consequently, instead of Eq.\,(\ref{2.42}) in which $p_0 \neq 0$, a different equation
was obtained, namely, $H_G =p_0$,
where $p_0$ turned out to be zero (i.e., it vanished weakly).
It was then concluded that, because of the constraint $p_0 \approx 0$, a particle cannot
give rise to time in the quantum version of the theory and that the notorious problem of time still
existed. Struyve also observed that there exists a canonical transformation that relates the
approach based on the action (\ref{2.1}) to the approach based on his modified action, and that upon
quantization, those two approaches are related by the corresponding unitary transformations.
\section{Quantization}
We have arrived, following different paths, at the equation (\ref{2.42}), i.e.,
\be
H= H_G - p_0 = 0,
\lbl{3.1}
\ee
where $p_0$ is related to $p_i$ according to Eq.\,(\ref{2.15a}), which comes from the mass shell
constraint (\ref{2.19}), and where
$H_G = \int \dd^3 \bx \, \left ( N {\cal H}^{\rm ADM} + N^i {\cal H}_i^{\rm ADM} \right )$ contains the canonical momenta conjugated to the 3-metric $q_{ij}$.
Upon quantization, the particle coordinates $X^\mu$ and momenta $p_\mu$ become operators satisfying
\be
[{\hat X}^\mu,{\hat X}^\nu] = 0~,~~~[{\hat p}_\mu,{\hat p}_\nu]=0~,~~~
[{\hat X}^\mu,{\hat p}_\nu]=i {\delta^\mu}_\nu .
\lbl{3.2}
\ee
Similarly, the gravity variables become operators satisfying
\be
[{\hat q}_{ij},{\hat q}_{mn}] = 0~,~~~[{\hat \pi}^{ij},{\hat\pi}^{mn}]=0~,~~~
[{\hat q}_{ij},{\hat \pi}^{mn}]=i {\delta_{ij}}^{mn} .
\lbl{3.3}
\ee
In the Schr\"odinger representation in which $X^\mu$ and $q_{ij}$ are diagonal, Eqs.\,(\ref{3.2})
and (\ref{3.3}) are satisfied by ${\hat X}^\mu = X^\mu$, ${\hat q}_{ij} = q_{ij}$, ${\hat p}_\mu
= - i \p/\p X^{\mu}$ and ${\hat \pi}^{ij} = -i \delta/\delta q_{ij}$.
The constraints (\ref{2.19})--(\ref{2.18}) become operator equations acting on the state vector,
which can be represented as a function of $X^\mu$ and a functional of $q_{ij} (\bx)$, namely
$\Psi [X^\mu,q_{ij} (\bx)]$.
In order to quantize Eq.\,(\ref{3.1}), we take the gauge $N=1$, $N^i =0$, and so we obtain
\be
\int \dd^3 \bx \left ( \frac{-1}{\kappa} G_{ijk \ell} \pi^{ij} \pi^{k \ell} + \kappa
\sqrt{q} R^{(3)} \right ) = p_0 ,
\lbl{3.3a}
\ee
where $\kappa = 1/(16 \pi G)$ and $G_{ijk \ell}$ is the Wheeler-DeWitt metric.
The latter equation can be written in the following compact form:
\be
\frac{1}{\kappa} G^{a(\bx)b(\bx')} \pi_{a(\bx)} \pi_{b(\bx')} + V[q^{a(\bx)}] = - p_0 ,
\lbl{3.3b}
\ee
where $\pi_{a(\bx)} \equiv \pi^{ij} (\bx)$, $ G^{a(\bx)b(\bx')} = G_{ijk \ell} (\bx) \delta^3 (\bx-\bx')$,
$q^{a(\bx)} \equiv q_{ij} (\bx)$, and $V[q^{a(\bx)}] = - \kappa \sqrt{q} R^{(3)}$.
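In this condensed (DeWitt) notation, repeated indices such as $a(\bx)$ imply integration over the corresponding spatial argument; explicitly, the kinetic term stands for

```latex
\be
G^{a(\bx)b(\bx')} \pi_{a(\bx)} \pi_{b(\bx')}
\equiv \int \dd^3 \bx \, \dd^3 \bx' \, G_{ijk \ell}(\bx)\, \delta^3 (\bx - \bx')\, \pi^{ij}(\bx)\, \pi^{k \ell}(\bx')
= \int \dd^3 \bx \, G_{ijk \ell}\, \pi^{ij} \pi^{k \ell} .
\ee
```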
Upon quantization, $\pi_{a(\bx)}$ become the operators ${\hat \pi}_{a(\bx)} = -i \delta / \delta q^{a(\bx)}
\equiv - i \p_{a(\bx)}$, and $p_0$ becomes ${\hat p}_0 = -i \p/\p T$, where $T \equiv X^0$.
The notorious ordering ambiguity can be avoided\ci{PavsicOrder} if we proceed as follows. First, let us
illustrate the procedure for the constraint (\ref{2.19}). It can be written in the following form:
\be
\gam^\mu p_\mu \gam^\nu p_\nu - m^2 = 0.
\lbl{3.4}
\ee
Here $\gam^\mu$ are the generators of the Clifford algebra, satisfying
\be
\gam^\mu \cdot \gam^\nu \equiv \frac{1}{2} (\gam^\mu \gam^\nu + \gam^\nu \gam^\mu ) = g^{\mu \nu} .
\lbl{3.5}
\ee
They have the role of the basis vectors\ci{Hesteness1,Hesteness2} (see also\ci{PavsicBook,PavsicKaluza,PavsicKaluzaLong}) in a curved spacetime $M$. The quantum version of Eq.\,(\ref{3.4})
is
\be
\left (\gam^\mu {\hat p}_\mu \gam^\nu {\hat p}_\nu - m^2 \right ) \Psi = 0,
\lbl{3.6}
\ee
where $\Psi$ is a scalar wave function.
Using ${\hat p}_\mu = -i \p_\mu$ and\footnote{Here the symbol $\p_\mu$ denotes a generic derivative that can act on any Clifford algebra-valued field. For instance,
if acting on a scalar field it behaves as the partial derivative and if acting on basis vectors $\gam^\nu$
it determines the connection. More details and a justification why the same symbol $\p_\mu$ can be used in both
cases is provided in Ref.\ci{PavsicKaluzaLong}.}
$\p_\mu \gam^\nu =\Gam_{\mu \rho}^{~\nu} \gam^\rho$,
where $\Gam_{\mu \rho}^{~\nu}$ is the connection in $M$, equation (\ref{3.6}) becomes
\be
\left (- {\Gam_{\mu \rho}}^{\,\nu} \gam^\mu \gam^\rho \p_\nu - \gam^\mu \gam^\nu \p_\mu \p_\nu - m^2 \right )
\Psi = 0 ,
\lbl{3.7}
\ee
i.e.,
\be
\left ( \gam^\mu \gam^\nu \p_\mu \p_\nu + \Gam_{\mu \rho}^{\,\,\nu} g^{\mu \rho} \p_\nu + m^2 \right )
\Psi = (\DD_\mu \DD^\mu + m^2 ) \Psi= 0 .
\lbl{3.8}
\ee
Here $\DD_\mu$ is the covariant derivative of the tensor calculus; acting on vector components, it gives
$\DD_\mu a^\nu = \p_\mu a^\nu + {\Gam_{\mu \rho}}^{\,\nu} a^\rho$.
There is no ordering ambiguity in Eq.\,(\ref{3.6}), because $\gam^\mu {\hat p}_\mu \gam^\nu {\hat p}_\nu$ is the
product of two vector momentum operators\ci{PavsicOrder} ${\hat p} = \gam^\mu {\hat p}_\mu$ and is
invariant under general coordinate transformations. Using a different product, for instance,
${\hat p}_\mu \gam^\mu \gam^\nu {\hat p}_\nu$
would make no sense, because such an object is not invariant and not a product of two vector operators.
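As a flat-spacetime sanity check of Eqs.\,(\ref{3.4})--(\ref{3.5}), the following minimal sketch (assuming the standard Dirac representation of the $\gam^\mu$ and signature $(+,-,-,-)$; the numerical momentum components are arbitrary illustrative values, not tied to this paper) verifies numerically that $\gam^\mu p_\mu \gam^\nu p_\nu = p_\mu p^\mu$ times the identity:

```python
import numpy as np

# Flat-spacetime check of Eq. (3.4): with the Clifford relation (3.5),
# gamma^mu p_mu gamma^nu p_nu reduces to p_mu p^mu times the identity.
# Dirac representation, signature (+,-,-,-); p_mu values are arbitrary.
I2, O2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
gammas = [np.block([[I2, O2], [O2, -I2]])]                  # gamma^0
gammas += [np.block([[O2, s], [-s, O2]]) for s in sigma]    # gamma^1..3
g = np.diag([1.0, -1.0, -1.0, -1.0])                        # g^{mu nu}

p = np.array([2.0, 0.3, -0.4, 0.1])        # covariant components p_mu
slash_p = sum(gammas[mu] * p[mu] for mu in range(4))
p_sq = p @ g @ p                           # p_mu p^mu

# (gamma^mu p_mu)(gamma^nu p_nu) equals p^2 times the 4x4 identity
assert np.allclose(slash_p @ slash_p, p_sq * np.eye(4))
```

Only the symmetric part of $\gam^\mu \gam^\nu$ survives the contraction with $p_\mu p_\nu$, which is why the anticommutator (\ref{3.5}) alone fixes the result.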
In Ref.\ci{PavsicStumbling} it was shown what happens if $\Psi$ in Eq.\,(\ref{3.6}) is a spinor field, expanded
in terms of the spinor basis $\xi_\alpha$, according to $\Psi = \psi^\alpha \xi_\alpha$. Then, instead of
(\ref{3.8}) one obtains
\be
({\cal D}^\mu {\cal D}_\mu +m^2) \psi^\delta + \frac{1}{2} {[\gam^\mu,\gam^\nu]^\delta}_\alpha {R_{\mu \nu}^{~~\alpha}}_{\beta} \psi^\beta = 0,
\lbl{3.8a}
\ee
where ${\cal D}_\mu$ contains also the spin connection, determined by the relation $\p_\mu \xi_\alpha =
{{\Gam_\mu}^{\,\beta}}_\alpha \xi_\beta$. Analogously we can find the corresponding equation for a vector field.
In a similar way, the ordering ambiguity in Eq.\,(\ref{3.3b}) can also be avoided by introducing the superspace
analog of $\gam^\mu$ and rewriting Eq.\,(\ref{3.3b}) in the form
\be
\frac{1}{\kappa} G^{a(\bx)} \pi_{a(\bx)} G^{b(\bx')} \pi_{b(\bx')} + V[q] = - p_0 ,
\lbl{3.9}
\ee
where $ G^{a(\bx)}$ are the generators of the Clifford algebra in the superspace ${\cal S}$, satisfying
\be
G^{a(\bx)} \cdot G^{b(\bx')}
\equiv\frac{1}{2} \left ( G^{a(\bx)} G^{b(\bx')} + G^{b(\bx')} G^{a(\bx)} \right ) = G^{a(\bx)b(\bx')} .
\lbl{3.10a}
\ee
The quantum version of Eq.\,(\ref{3.9}) is
\be
\left ( \frac{1}{\kappa} G^{a(\bx)} {\hat \pi}_{a(\bx)} G^{b(\bx')} {\hat \pi}_{b(\bx')} + V[q] \right ) \Psi
= i \frac{\p \Psi}{\p T},
\lbl{3.10b}
\ee
where $\Psi = \Psi [T,X^i,q^{a(\bx)}]$. In the latter equation ${\hat \pi}_{a(\bx)} = - i \p_{a(\bx)}$ acts
on $G^{b(\bx')}$. As in the finite-dimensional case, the derivative
$\p_{a(\bx)}$ acting on $G^{b(\bx')}$ gives the connection according to
\be
\p_{a({\bx})} G^{b(\bx')} = \Gamma_{a(\bx) c(\bx'')}^{~b(\bx')} G^{c(\bx'')}.
\lbl{3.10c}
\ee
Equation (\ref{3.10b}) then becomes
\be
\left ( \frac{1}{\kappa} G^{a(\bx) b(\bx')} \p_{a(\bx)} \p_{b(\bx')}
+ \Gamma_{a(\bx) c(\bx'')}^{~b(\bx')} G^{a(\bx) c(\bx'')} \p_{b(\bx')} + V[q] \right ) \Psi
= \left ( \DD_{a(\bx)} \DD^{a(\bx)} + V [q] \right ) \Psi = i \frac{\p \Psi}{\p T} .
\lbl{3.11}
\ee
The connection is given by
\be
\Gamma_{a(\bx) b(\bx')}^{~c(\bx'')} = \frac{1}{2} G^{c(\bx'') d(\bx''')}
\left ( G_{a(\bx) d(\bx'''),b(\bx')} + G_{d(\bx''') b(\bx''),a(\bx)}
- G_{a(\bx)b (\bx'),d(\bx''')} \right ) ,
\lbl{3.12}
\ee
where the comma denotes the functional derivative. Using the established techniques for the
superspace calculations (see, e.g.,\ci{Superspace}), the connection (\ref{3.12}) can be calculated for the
Wheeler-DeWitt metric by using
\be
\p_{c(\bx'')} G^{a(\bx) b(\bx')} = \frac{\delta}{\delta q_{mn}} G_{ijk\ell}\, \delta (\bx-\bx')~,
~~~{\rm and} ~~~\frac{\delta}{\delta q_{mn}} g_{ij} (\bx') = \delta_{(i}^{(m} \delta_{j)}^{n)} .
\lbl{3.13}
\ee
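The connection (\ref{3.12}) has the same index structure as the ordinary Christoffel symbols. As a finite-dimensional sanity check of that structure (with functional derivatives replaced by partial derivatives, and the round 2-sphere metric used as an illustrative stand-in for the superspace metric), the formula can be evaluated numerically:

```python
import math

# Finite-dimensional analog of Eq. (3.12):
#   Gamma^c_{ab} = (1/2) g^{cd} (g_{ad,b} + g_{db,a} - g_{ab,d}),
# evaluated by central differences for ds^2 = dtheta^2 + sin^2(theta) dphi^2.
def metric(x):
    theta, phi = x
    return [[1.0, 0.0], [0.0, math.sin(theta)**2]]

def christoffel(x, h=1e-6):
    g = metric(x)
    det = g[0][0]*g[1][1] - g[0][1]*g[1][0]
    ginv = [[g[1][1]/det, -g[0][1]/det], [-g[1][0]/det, g[0][0]/det]]
    def dg(a, b, d):                  # g_{ab,d} by central difference
        xp, xm = list(x), list(x)
        xp[d] += h; xm[d] -= h
        return (metric(xp)[a][b] - metric(xm)[a][b]) / (2*h)
    return [[[0.5 * sum(ginv[c][d] * (dg(a, d, b) + dg(d, b, a) - dg(a, b, d))
                        for d in range(2))
              for b in range(2)] for a in range(2)] for c in range(2)]

x0 = (0.7, 1.3)
Gam = christoffel(x0)
theta = x0[0]
# Known nonzero components for the sphere:
# Gamma^theta_{phi phi} = -sin(theta)cos(theta),  Gamma^phi_{theta phi} = cot(theta)
assert abs(Gam[0][1][1] + math.sin(theta)*math.cos(theta)) < 1e-6
assert abs(Gam[1][0][1] - math.cos(theta)/math.sin(theta)) < 1e-6
```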
Because the ordering issues regarding the Wheeler-DeWitt equation are not the main topic of this
paper, they will be discussed in more detail elsewhere.
\section{Conclusion}
We have considered the relativistic particle coupled to gravity and analysed the constraints satisfied
by such a system. The constraints follow directly from the action (\ref{2.1}) by varying it with respect
to the non-dynamical components of the metric $g^{\mu\nu}$, namely, $g^{0\mu}$, which gives
the $(0 \mu)$-components of Einstein's equations: $\phi_\mu = \frac{1}{8 \pi G} G_{0 \mu} + T_{0 \mu} = 0$.
The Hamiltonian is a linear superposition of those constraints,
$H = \int \alpha^\mu \phi_\mu \sqrt{-g} \dd^3 \bx = H_g + H_m = 0$, where
$H_m = \int T_{0 \mu} g^{0 \mu} \sqrt{-g} \dd^3 \bx = p_0$ is the particle momentum. If we perform
the ADM split of the action (\ref{2.1}) or its phase space form (\ref{2.8}), then the non-dynamical
variables are the lapse and shift functions, $N$ and $N^i$. The constraints $\phi$, $\phi_i$ come
from varying the action with respect to $N$ and $N^i$. The time variable in the matter and the gravity
parts of the action is the same, namely $t\equiv x^0$. In addition, the matter part $I_m$ contains
the worldline parameter $\tau$ and the term $\delta^4 (x-X(\tau))$, which indicates that the worldline
is embedded in spacetime and thus satisfies the parametric equation $x^\mu = X^\mu (\tau)$. Because
$I_m$ is invariant under reparametrizations of $\tau$, the particle momenta $p_\mu = \p L/\p {\dot X}^\mu$
satisfy the mass shell constraint, $\chi = p_\mu p^\mu - m^2 = 0$. The Hamiltonian, which is a superposition
of the constraints $\chi$, $\phi$, and $\phi_i$, gives the correct equations of motion for all the dynamical
variables by using the ordinary Poisson brackets. The matter part of the Hamiltonian is $p_0$, as
it should be, because---as we have seen--- it also comes directly from the Einstein equations.
Upon quantization, $p_0$ becomes the operator ${\hat p}_0 = i \p/\p T$, where $T \equiv X^0$ denotes
the time coordinate of the particle. Altogether, $X^\mu$, $\mu= 0,1,2,3$, denote the position of the
particle in spacetime. The Hamilton constraint $H=H_g + H_m =0$, i.e., $H_g = -p_0$, becomes the
Schr\"odinger-like equation $H_g \Psi = i \p \Psi/\p T$, where $\Psi = \Psi[T,X^i,q_{ij}]$. The time
and hence spacetime in this approach does not disappear upon quantization.
Spacetime in the quantized theory disappears if no matter is present.
According to Einstein's hole
argument, spacetime points cannot be identified. They can be identified in the presence of a reference fluid.
If there is no reference fluid and instead there are particles, then spacetime points can be identified
on the worldlines of the particles. In this paper we have considered a simplified model with only one particle
present, and found that its time coordinate $T$ remains in the quantized theory.
In the usual approaches in which matter is given by some fields, such as a scalar or spinor field, spacetime
points cannot be directly identified as in the case of particles and one faces the notorious problem of
time, namely, how those fields could give rise to time.
There is a lot of discussion of how to resolve it, but so far no generally accepted resolution has been
found. In the model in which gravity is coupled to particles, the problem of time does not exist.
\section{Introduction}
The Cabibbo-Kobayashi-Maskawa (CKM) quark flavor-mixing
matrix~\cite{CKM} provides an elegant explanation of the origin of
\ensuremath{C\!P}\xspace\ violation within the Standard Model.
\ensuremath{C\!P}\xspace\ violation manifests itself as a non-zero area of the unitarity triangle~\cite{Jarlskog}.
While it is sufficient to measure one of the angles to demonstrate the
existence of \ensuremath{C\!P}\xspace\ violation,
the unitarity triangle needs to be over-constrained by experimental
measurements
in order to demonstrate that the CKM mechanism is the correct
explanation of this phenomenon. Precision measurements of the sides
and angles of the unitarity triangle are the focus of the physics
program at the
$B$ Factories. While several theoretically clean measurements
of the angle $\beta$ exist~\cite{sin2b}, constraining the other two
angles $\alpha$ and $\gamma$ is significantly more challenging.
A theoretically clean measurement of $\sin(2\beta+\gamma)$ can
be obtained from the study
of the time evolution for $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)-} \pi^+$~\cite{chconj}
and $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)-} \rho^+$
decays, which are available in large samples at the \ensuremath{B}\xspace\ factories, and for the corresponding
CKM-suppressed modes $\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)+}\pi^-$ and $\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace}
D^{(*)+}\rho^-$~\cite{sin2bg}.
Measurements of
\ensuremath{C\!P}\xspace\ asymmetries in $\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)\mp}\pi^\pm$ and
$\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{\mp}\rho^\pm$ decays have recently been
published~\cite{ref:s2bgDPi,ref:s2bgDRho}.
The interpretation of \ensuremath{C\!P}\xspace\ asymmetries in
$\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)\mp}\pi^\pm$ decays as a measurement of \stwobg\
requires knowledge of the ratios of the decay amplitudes,
\begin{equation}
r(D^{(*)}\pi)=\left|\frac{A(\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)+}\pi^-)}
{A(\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace}D^{(*)-}\pi^+)}\right|\ .
\label{eq:rDpi}
\end{equation}
However, direct measurements of the doubly Cabibbo suppressed
branching fractions
${\ensuremath{\cal B}\xspace}(\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)+}\pi^-)$ and ${\ensuremath{\cal B}\xspace}(\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)+}\rho^-)$ are not
possible with the currently available data samples due to the presence
of the copious background from $\ensuremath{\Bbar^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)+}\pi^-, D^{(*)+}\rho^-$.
On the other hand, assuming SU(3) flavor symmetry,
$r({D^{(*)}\pi})$ can be related
to the branching fraction of the decay \btodsospi~\cite{sin2bg}:
\begin{equation}
r(D^{(*)}\pi) =
\tan\theta_c\,
\frac{f_{D^{(*)}}}{f_{D^{(*)}_s}}\sqrt{\frac{{\ensuremath{\cal B}\xspace}(\btodsospi)}{{\ensuremath{\cal B}\xspace}(\btodospi)}}
\ ,
\label{eq:rDPi}
\end{equation}
where $\theta_c$ is the Cabibbo angle, and $f_{D^{(*)}}/f_{D^{(*)}_s}$
is the ratio of $D^{(*)}$ and $D^{(*)}_s$ meson decay
constants~\cite{fdsdTheory,fdsdRef,fdsdExp}. Other
SU(3)-breaking effects are believed to affect $r({D^{(*)}\pi})$ by
(10-15)\%~\cite{ref:Baak}.
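Numerically, Eq.~(\ref{eq:rDPi}) is a simple product. The sketch below evaluates it with placeholder round numbers; the Cabibbo-angle tangent, decay-constant ratio, and branching fractions used here are illustrative inputs only, not the measured values of this analysis:

```python
import math

# Illustrative evaluation of Eq. (rDPi). All inputs are placeholder round
# numbers, NOT measured values; substitute current world averages.
tan_theta_c = 0.23          # tangent of the Cabibbo angle (approximate)
f_ratio = 1.0 / 1.1         # f_{D^(*)} / f_{D_s^(*)}  (placeholder)
br_dspi = 2.0e-5            # B(B0 -> Ds+ pi-)          (placeholder)
br_dpi = 2.7e-3             # B(B0 -> D- pi+)           (placeholder)

r = tan_theta_c * f_ratio * math.sqrt(br_dspi / br_dpi)
print(f"r(D pi) ~ {r:.4f}")
```

For inputs of this magnitude, $r(D^{(*)}\pi)$ comes out at the percent level, which is why the time-dependent asymmetries in $\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)\mp}\pi^\pm$ are small.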
The dominant Feynman diagrams for the decays
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)-}\pi^+(\rho^+)$,
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)+}\pi^-(\rho^-)$,
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{(*)+}\pi^-(\rho^-)$,
and \btodsoskokstar\ are shown in Fig.~\ref{fig:diag}.
Since \btodsospi\ has four
different quark flavors in the final state, a single amplitude
contributes to the decay (Fig.~\ref{fig:diag}c).
On the other hand, two diagrams contribute to
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)-}\pi^+$ and $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)+}\pi^-$:
tree amplitudes (Fig.~\ref{fig:diag}a,b) and color-suppressed direct
$W$-exchange amplitudes (Fig.~\ref{fig:diag}d,e). The latter are
assumed to be negligibly small in Eq.~(\ref{eq:rDPi}). The decays \btodsosk\
(Fig.~\ref{fig:diag}f) probe the size of the $W$-exchange
amplitudes relative to the dominant processes $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace
D^{(*)-}\pi^+$.
\begin{figure}[h]
\begin{center}
\epsfig{file=diagA.eps,width=1.5in}
\epsfig{file=diagD.eps,width=1.5in} \\
\epsfig{file=diagB.eps,width=1.5in}
\epsfig{file=diagE.eps,width=1.5in} \\
\epsfig{file=diagC.eps,width=1.5in}
\epsfig{file=diagF.eps,width=1.5in}
\end{center}
\vspace{-0.5cm}
\caption{Dominant Feynman diagrams for (a) CKM-favored decays
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)-}\pi^+(\rho^+)$, (b) doubly CKM-suppressed decays
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)+}\pi^-(\rho^-)$, and (c) the SU(3) flavor symmetry
related decays $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{(*)+}\pi^-(\rho^-)$;
(d) the color-suppressed $W$-exchange contributions
to $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)-}\pi^+(\rho^+)$, (e) $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)+}\pi^-(\rho^-)$, and
(f) decay \btodsoskokstar.}
\label{fig:diag}
\end{figure}
The rate of \btodsoskokstar\
decays could be enhanced by final state rescattering~\cite{Wexch}, in
addition to the $W$-exchange amplitude. Such long-distance effects
could also affect the vector meson polarization in \btodsskstar\
decays.
The angular distribution in vector-vector decays
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{*} V$ ($V=\rho,\, K^{*}$) is given by
\begin{eqnarray}
\frac{d^2\Gamma}{d\cos\theta_{D_s^*}\,d\cos\theta_V} &\propto&
\left[ (1 - f_L) (1+\cos^2\theta_{D_s^*})\sin^2\theta_V \right.
\nonumber\\
&+& \left. 4 f_L \sin^2\theta_{D_s^*}\cos^2\theta_V\right],
\label{eq:helicityshort}
\end{eqnarray}
\noindent
where $\theta_{D_s^*}$ and $\theta_V$ are
the helicity angles of \ensuremath{D^{*+}_s}\xspace\ and the vector meson $V$,
respectively,
$f_L=|A_0|^2/\bigl(\sum_\lambda|A_\lambda|^2\bigr)$ is the
longitudinal polarization fraction, and
$A_{\lambda=-1,0,+1}$ are the helicity amplitudes. These distributions
are integrated over the angle between the decay planes of \ensuremath{D^{*+}_s}\xspace\ and
$V$.
For amplitudes dominated by the short-range (electroweak)
currents, $f_L$ is
predicted to be near unity~\cite{ref:VV}, with corrections of order
$\mathcal{O}(m_V^2/m_B^2)$, where $m_V$ is the mass of the vector
meson produced by the weak current, and $m_B$ is the mass of the $B$
meson.
Thus, the measurement of $f_L$ can constrain the size of the
long-distance contributions in \btodsskstar\ decays~\cite{Wexch}.
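A useful property of Eq.~(\ref{eq:helicityshort}) is that its integral over both helicity cosines is independent of $f_L$, so $f_L$ only reshapes the distribution without changing the normalization. A short numerical check (midpoint integration on a grid; a sketch, not analysis code):

```python
import numpy as np

# Normalization check of Eq. (helicityshort): integrating the bracket over
# cos(theta_{Ds*}) and cos(theta_V), each in [-1, 1], gives 32/9 for any f_L.
def angular_pdf(c1, c2, f_L):
    # c1 = cos(theta_{Ds*}), c2 = cos(theta_V)
    return ((1 - f_L) * (1 + c1**2) * (1 - c2**2)
            + 4 * f_L * (1 - c1**2) * c2**2)

n = 2000
dx = 2.0 / n
c = -1.0 + dx * (np.arange(n) + 0.5)      # midpoint grid on [-1, 1]
C1, C2 = np.meshgrid(c, c)
for f_L in (0.0, 0.5, 0.9, 1.0):
    integral = angular_pdf(C1, C2, f_L).sum() * dx * dx
    assert abs(integral - 32.0 / 9.0) < 1e-4
```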
The branching fractions ${\ensuremath{\cal B}\xspace}(\btodsospi)$ and ${\ensuremath{\cal B}\xspace}(\btodsosk)$ have been
measured previously
by the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ Collaboration~\cite{priorBaBar}.
In this paper we present the first evidence for the decays
\btodssrho\ and \btodsoskstar, and a limit on the rate of \btodsrho.
We also update the
measurements of the branching fractions ${\ensuremath{\cal B}\xspace}(\btodsospi)$ and
${\ensuremath{\cal B}\xspace}(\btodsosk)$ with improved precision, using a 65\%
larger dataset.
\section{Data Sample and the Detector}
We use a sample of \lumi\ \Y4S\ decays into \BB\
pairs
collected with the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector
at the PEP-II\ asymmetric-energy \ensuremath{e^+e^-}\xspace\ collider~\cite{pep}.
A detailed description of the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector is available
elsewhere~\cite{detector}. The components of the detector crucial to
this analysis are summarized below.
Charged particle tracking is provided by a five-layer silicon
vertex tracker (SVT) and a 40-layer drift chamber (DCH).
For charged-particle identification, ionization energy loss ($dE/dx$) in
the DCH and SVT, and Cherenkov radiation detected in a ring-imaging
device (DIRC) are used.
Photons and neutral pions are identified and measured using
an electromagnetic calorimeter (EMC), which comprises 6580 thallium-doped CsI
crystals. These systems are mounted inside a 1.5-Tesla solenoidal
superconducting magnet.
We use the GEANT4~\cite{geant4} software to simulate interactions of particles
traversing the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector, taking into account the varying
detector conditions and beam backgrounds.
\section{Event Selection and Analysis}
\label{sec:selection}
The selection of events of interest proceeds in two steps. First, we
preselect events with at least three reconstructed charged-particle tracks
and a total measured energy greater than $4.5$ GeV, as determined
using all charged particles and neutral particles with energy above 30
MeV.
In order to reject $e^{+}e^{-}\ensuremath{\rightarrow}\xspace q\bar{q} (q=u,d,s,c)$ continuum
background, the ratio of the second to zeroth order Fox-Wolfram
moments~\cite{fox} must be less than $0.5$.
Candidates for \ensuremath{D^+_s}\xspace\ mesons
are reconstructed in the $\ensuremath{D^+_s}\xspace\ensuremath{\rightarrow}\xspace\phi \pi^+$, $\KS K^+$ and
$\ensuremath{\Kbar^{*0}}\xspace\ensuremath{K^+}\xspace$ final states, with $\phi{\ensuremath{\rightarrow}\xspace} K^+K^-$, $\KS {\ensuremath{\rightarrow}\xspace} \ensuremath{\pi^+}\xspace \ensuremath{\pi^-}\xspace$, and
$\ensuremath{\Kbar^{*0}}\xspace{\ensuremath{\rightarrow}\xspace} K^-\pi^+$.
The $\KS$ candidates are reconstructed from two
oppositely-charged tracks, and their momentum is required to make an
angle $|\theta_\mathrm{flight}|<11^{\circ}$
with the line connecting their
vertex and the $\ensuremath{e^+e^-}\xspace$ interaction point. All other tracks are
required to originate from the $\ensuremath{e^+e^-}\xspace$ interaction region, loosely defined by
$|d_0|<1.5$~cm and $|z_0|<10$~cm, where $d_0$ and $z_0$ are the
distances of closest approach to the primary $\ensuremath{e^+e^-}\xspace$ vertex in the
directions perpendicular and parallel to the beams, respectively.
In order to reject background from $\ensuremath{D^+}\xspace{\ensuremath{\rightarrow}\xspace}\KS\ensuremath{\pi^+}\xspace$ or $\ensuremath{\Kbar^{*0}}\xspace\ensuremath{\pi^+}\xspace$,
the $\ensuremath{K^+}\xspace$ candidate in the reconstruction of $\ensuremath{D^+_s}\xspace{\ensuremath{\rightarrow}\xspace}\KS\ensuremath{K^+}\xspace$ or
$\ensuremath{\Kbar^{*0}}\xspace\ensuremath{K^+}\xspace$ is
required to satisfy positive kaon identification criteria, which have an
efficiency of 85\% and a 5\% pion misidentification probability. The same
selection is used to identify kaon daughters of the
$B^0$ and $K^{*+}$ mesons in decays \btodsoskokstar. The selection is
based on the ratios of likelihoods for kaon, pion, and proton
identification in the SVT, DCH, and DIRC. The detector likelihoods are
calibrated over a wide range of momenta using particles identified
kinematically in clean decay chains, such as $D^{*+}\ensuremath{\rightarrow}\xspace D^0\pi^+$,
$D^0\ensuremath{\rightarrow}\xspace K^-\pi^+$, and $\Lambda\ensuremath{\rightarrow}\xspace p\pi^-$.
In all other
cases, kaons are
not positively identified, but instead candidates passing
a likelihood-based pion
selection are rejected.
The selection efficiency of this ``pion veto'' is 95\% for the
kaons and 20\% for the pions.
Pion daughters of $B^0$ and $\rho^-$ mesons in the decays \btodsospi\
and \btodsosrho\ are required to be positively identified.
Decay products of $\phi$, $\ensuremath{\Kbar^{*0}}\xspace$, $\ensuremath{D^+_s}\xspace$, and $\ensuremath{B^0}\xspace$ candidates are
constrained to originate from a single vertex.
We reconstruct $\rho^+\ensuremath{\rightarrow}\xspace\pi^+\pi^0$ candidates by combining a
well-identified charged pion with a $\pi^0\ensuremath{\rightarrow}\xspace\gamma\gamma$
candidate. The $K^{*+}$ candidates are reconstructed through the
decays $K^{*+}\ensuremath{\rightarrow}\xspace K^+\pi^0$ and $K^{*+}\ensuremath{\rightarrow}\xspace K^{0}_S\pi^+$.
The neutral pion candidates are reconstructed from a pair of photons
each with a minimum energy of $30$~MeV. The invariant mass of
the photon pair is
required to be within $\pm 25$~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace of the nominal
value~\cite{PDG2006}. The selected candidates are
constrained to the nominal $\pi^0$ mass before forming the
$\rho^+$ or $K^{*+}$ candidates.
We require that
the invariant mass of the two
pions forming the $\rho^-$ candidate be within $\pm 320$~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace\ of the nominal
value~\cite{PDG2006}, and the invariant mass of the $K^+\pi^0$ and
$K^{0}_S\pi^+$ pairs be within $\pm 75$~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace\ of the nominal
$K^{*+}$ mass~\cite{PDG2006}. $K^{0}_S\pi^+$ pairs are
constrained to a common geometric vertex.
We reconstruct \ensuremath{D^{*+}_s}\xspace\ candidates in the mode $\ensuremath{D^{*+}_s}\xspace{\ensuremath{\rightarrow}\xspace}\ensuremath{D^+_s}\xspace\gamma$ by
combining \ensuremath{D^+_s}\xspace\ and photon candidates.
Photon candidates are required to be consistent with an electromagnetic
shower in the EMC, and to have an energy greater than $100$~\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace in the
laboratory frame.
When forming a \ensuremath{D^{*+}_s}\xspace, the \ensuremath{D^+_s}\xspace\ candidate is required to have an invariant mass
within 10~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace\ of the nominal value~\cite{PDG2006}. For \btodssrho\
and \btodsskstar\ modes, we apply a ``$\pi^0$ veto'' by rejecting
photons that in combination with
any other photon in the event form an invariant mass that
falls within $125< m_{\gamma\gamma}<145$~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace.
The efficiency of the initial preselection discussed above varies
between $14\%$ (\btodssrho, \dskstark) and $48\%$ (\btodspi,
\dsphipi). After the preselection, we identify signal $B$ decay
candidates using a likelihood ratio
$R_L =
\mathcal{L}_\mathrm{sig}/(\mathcal{L}_\mathrm{sig}+\mathcal{L}_\mathrm{bkg})$,
where
$\mathcal{L}_\mathrm{sig}=\prod_{j}\mathcal{P}_{\mathrm{sig}}(x_k)$
is the multivariate likelihood for
the signal hypothesis and
$\mathcal{L}_\mathrm{bkg}=\prod_{i}\mathcal{P}_{\mathrm{bkg}}(x_k)$
is the likelihood for
the background hypothesis. Here $x_k$ represents one of the discriminating
variables described below, which are computed for each event.
The likelihoods for the signal and background hypotheses are
computed as a product of the probability density functions (PDFs)
$\mathcal{P}_{\mathrm{sig}}(x_k)$ and
$\mathcal{P}_{\mathrm{bkg}}(x_k)$, respectively, for
the following selection variables: invariant masses of the $\phi$,
$\ensuremath{\Kbar^{*0}}\xspace$, $\rho^+$, $K^{*+}$, and $\KS$ candidates; $\chi^2$
confidence level of the vertex
fit for the $\ensuremath{B^0}\xspace$ and $\ensuremath{D^+_s}\xspace$ mesons; the helicity angles of the $\phi$,
$\ensuremath{\Kbar^{*0}}\xspace$, $\rho^+$, $K^{*+}$, and $\ensuremath{D^{*+}_s}\xspace$ meson decays; the
mass difference
$\Delta m(\ensuremath{D^{*+}_s}\xspace) = m(\ensuremath{D^{*+}_s}\xspace)-m(\ensuremath{D^+_s}\xspace)$; the polar angle $\theta_B$ of the
$B$ candidate
momentum vector with respect to the beam axis in the
$\ensuremath{e^+e^-}\xspace$ center-of-mass (c.m.) frame; the angle $\theta_{T}$
between the thrust axis of the $B$ candidate and the thrust
axis of all other particles in the event in the c.m. frame;
the event topology variable $\mathcal{F}$, and the kinematic variable
\De, described below.
Correlations among these variables are small.
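The construction of $R_L$ from per-variable PDFs can be sketched as follows. Toy Gaussian shapes stand in for the parameterized signal and background PDFs, and the numbers are purely illustrative, not the shapes used in this analysis:

```python
import math

# Toy sketch of the likelihood-ratio discriminant R_L (not the analysis code):
# per-variable signal and background PDFs are multiplied, and
#   R_L = L_sig / (L_sig + L_bkg).
def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * math.sqrt(2 * math.pi))

# (mu_sig, sigma_sig, mu_bkg, sigma_bkg) per discriminating variable (illustrative)
pdfs = [(0.0, 1.0, 2.0, 1.5), (1.0, 0.5, 0.0, 1.0)]

def likelihood_ratio(xs):
    l_sig = math.prod(gauss(x, ms, ss) for x, (ms, ss, _, _) in zip(xs, pdfs))
    l_bkg = math.prod(gauss(x, mb, sb) for x, (_, _, mb, sb) in zip(xs, pdfs))
    return l_sig / (l_sig + l_bkg)

# Signal-like events cluster near R_L = 1, background-like near R_L = 0
assert likelihood_ratio([0.0, 1.0]) > 0.9
assert likelihood_ratio([2.5, -0.5]) < 0.1
```

Neglecting the (small) correlations among the variables is what allows the likelihoods to be written as products of one-dimensional PDFs.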
The helicity angle
$\theta_H$ is defined as the angle between one of the decay products of
a vector meson and the flight direction of its parent particle in the meson's
rest frame. Polarization of the vector mesons
in the signal decays causes the cosines of their helicity angles to be
distributed as
$\cos^2\theta_H$ ($\phi$, $\ensuremath{\Kbar^{*0}}\xspace$, $\rho^+$, and $K^{*+}$) or
$1-\cos^2\theta_H$ (\ensuremath{D^{*+}_s}\xspace), while the random background combinations tend to
produce a more uniform distribution in $\cos\theta_H$, with a peak in
the forward direction (which corresponds to a low-energy $\pi^0$) for
$\rho^+$ and $K^{*+}$ candidates.
We do not include the helicity angles for \ensuremath{D^{*+}_s}\xspace, $\rho^+$, and
$K^{*+}$ mesons in the likelihood ratio $R_L$ for the vector-vector
\btodssrho\ and \btodsskstar\ modes, since the polarizations of the
vector mesons in these decays are not known {\em a priori}. Instead,
the helicity angles are used in the multi-dimensional likelihood fit
to determine the polarizations, as discussed below.
The variables $\cos\theta_B$, $\cos\theta_T$, and $\mathcal{F}$
discriminate between spherically-symmetric \BB\ events and jet-like
continuum background using event topology.
The polar angle $\theta_B$ is distributed as $\sin^2\theta_B$
for real $B$ decays, while being nearly flat in $\cos\theta_B$ for
the continuum.
\BB\ pairs form a nearly uniform $|\cos\theta_T|$
distribution, while the $|\cos\theta_T|$ distribution for
continuum events peaks at 1.
A linear (Fisher)
discriminant $\mathcal{F}$ is
derived from the values of sphericity and thrust for the event,
and the two Legendre moments $L_0$ and
$L_2$ of the energy flow
around the $B$-candidate thrust axis~\cite{ref:legendre}.
The ratio $R_L$ has a maximum at $R_L=1$ for
signal events, and at $R_L=0$ for background originating
from continuum events. It also discriminates
well against $B$ decays without a real $\ensuremath{D^+_s}\xspace$ meson in the final
state.
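A generic Fisher discriminant of this kind projects the event-shape variables onto the single axis that maximizes signal/background separation. A minimal sketch with toy Gaussian features standing in for sphericity, thrust, $L_0$, and $L_2$ (none of the numbers are from the analysis, and this is not the trained discriminant used here):

```python
import numpy as np

# Toy linear (Fisher) discriminant: w = S_w^{-1} (mu_sig - mu_bkg),
# with S_w the pooled within-class covariance. Four toy Gaussian features
# stand in for sphericity, thrust, L0, and L2.
rng = np.random.default_rng(1)
n = 5000
sig = rng.normal(loc=[0.3, 0.9, 1.0, 0.2], scale=0.1, size=(n, 4))  # B-like
bkg = rng.normal(loc=[0.1, 0.7, 1.4, 0.6], scale=0.1, size=(n, 4))  # jet-like

sw = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)
w = np.linalg.solve(sw, sig.mean(axis=0) - bkg.mean(axis=0))

f_sig = sig @ w
f_bkg = bkg @ w
# The projected distributions should be well separated
sep = abs(f_sig.mean() - f_bkg.mean()) / np.sqrt(f_sig.var() + f_bkg.var())
assert sep > 3.0
```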
The Monte Carlo (MC) simulated distributions of the $R_L$ variable for
signal and
background events, in \btodspi\ decays, are shown in
Fig.~\ref{fig:likeCut}.
\begin{figure}
\begin{center}
\epsfig{file=likeli.eps,width=3.5in}
\end{center}
\vspace{-0.5cm}
\caption{Distribution of the likelihood ratio $R_L$ for the mode
\btodspi, \dsphipi. Shown are (a) the simulated signal events, and
(b) the sum of the simulated background samples from the $B^0$
and $B^+$ decays, and $\ensuremath{e^+e^-}\xspace\ensuremath{\rightarrow}\xspace q\bar{q}$ events.
}
\label{fig:likeCut}
\end{figure}
Finally, two other variables \mbox{$m_{\rm ES}$}\xspace\ and $\De$ take advantage of the unique
kinematic properties
of the $\ensuremath{e^+e^-}\xspace\ensuremath{\rightarrow}\xspace\Upsilon(4S)\ensuremath{\rightarrow}\xspace \BB$ decays. The beam energy spread
is significantly smaller than the energy resolution of the
reconstructed $B$ mesons, and at the same time larger than the
momentum resolution. The momentum of the signal
candidates is included in the beam-energy-substituted mass
$\mbox{$m_{\rm ES}$}\xspace = \sqrt{ (s/2 +
\mathbf{p}_{i}\cdot\mathbf{p}_{B})^{2}/E_{i}^{2}-
\mathbf{p}^{2}_{B}}$,
where $\sqrt{s}$ is the total c.m.
energy, $(E_{i},\mathbf{p}_{i})$ is the four-momentum of the initial
\ensuremath{e^+e^-}\xspace\ system, and $\mathbf{p}_{B}$ is the \ensuremath{B^0}\xspace\ candidate momentum,
both measured in the laboratory frame. The second variable is
$\De = E^{*}_{B} - \sqrt{s}/2$, where $E^{*}_{B}$ is the \ensuremath{B^0}\xspace\ candidate
energy in the c.m. frame.
For signal events, the \mbox{$m_{\rm ES}$}\xspace\ distribution is nearly Gaussian and centered
at the $B$ meson mass with a resolution of about ($2.5$-$2.8$)~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace,
and the \De\ distribution has a
maximum near zero with a resolution of (17-25)~\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace. We include \De\
in the definition of the likelihood ratio $R_L$; \mbox{$m_{\rm ES}$}\xspace\ is
used as a discriminating variable in the maximum likelihood fit
described below.
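The definition of \mbox{$m_{\rm ES}$}\xspace\ can be checked in the simple limit where all quantities are evaluated in the $\ensuremath{e^+e^-}\xspace$ c.m. frame: there $\mathbf{p}_i=0$, $E_i=\sqrt{s}$, and the formula reduces to $\sqrt{s/4-\mathbf{p}_B^2}$. A small sketch with a toy $B$-candidate momentum (GeV units; illustrative numbers only):

```python
import math

# Sanity check of the m_ES definition:
#   m_ES = sqrt( (s/2 + p_i . p_B)^2 / E_i^2 - |p_B|^2 ).
# In the c.m. frame (p_i = 0, E_i = sqrt(s)) this is sqrt(s/4 - |p_B|^2).
def m_es(sqrt_s, p_init, e_init, p_b):
    s = sqrt_s**2
    dot = sum(a * b for a, b in zip(p_init, p_b))
    pb2 = sum(c * c for c in p_b)
    return math.sqrt((s / 2 + dot)**2 / e_init**2 - pb2)

sqrt_s = 10.58                    # Upsilon(4S) c.m. energy
p_b = (0.1, 0.2, 0.25)            # toy B-candidate momentum in the c.m. frame
val = m_es(sqrt_s, (0.0, 0.0, 0.0), sqrt_s, p_b)
# For a correctly reconstructed candidate this lands near the B mass
assert abs(val - math.sqrt(sqrt_s**2 / 4 - sum(c * c for c in p_b))) < 1e-12
```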
We parameterize the signal and background PDFs using large samples
of simulated events.
We select \btodsospi\ and \btodsosk\ candidates that satisfy
$R_L>0.85$, and accept \btodsosrho\ and \btodsoskstar\ candidates with
$R_L>0.96$. We measure the relative efficiencies $\varepsilon_{R_L}$ of
the $R_L$ selections in copious
data samples of decays $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^-\pi^+,\, D^-\rho^+$ ($D^-\ensuremath{\rightarrow}\xspace
K^+\pi^-\pi^-,\, \KS\pi^-$)
and $B^+\ensuremath{\rightarrow}\xspace\kern 0.2em\overline{\kern -0.2em D}{}\xspace^{*0}\pi^+,\,\kern 0.2em\overline{\kern -0.2em D}{}\xspace^{*0}\rho^+$ ($\kern 0.2em\overline{\kern -0.2em D}{}\xspace^{*0}\ensuremath{\rightarrow}\xspace
\kern 0.2em\overline{\kern -0.2em D}{}\xspace^0\gamma,\, D^0\ensuremath{\rightarrow}\xspace K^-\pi^+$)
in which the kinematics is similar to those of our signal events, and find
that they are consistent with Monte Carlo estimates of
$\varepsilon_{R_L}\approx 70\% (40\%)$ for \btodsospi\ and \btodsosk\
(\btodsosrho\ and \btodsoskstar) modes. The fraction of continuum
background events passing the selection varies between $2\%$ and $15\%$,
depending on the mode.
Less than 30\% of the selected events in the \btodsspi, \btodssk,
\btodsosrho, and \btodsoskstar\
channels ($<2\%$ in \btodspi\ and \btodsk)
contain two or more
candidates that satisfy the criteria listed above. In
such events we select a single $B^0$ candidate based on
an event $\chi^2$ formed from \De, \mDs\ and (where appropriate) $\Delta
m(\ensuremath{D^{*+}_s}\xspace)$, $m_\rho$, $m_{K^*}$, $m_{\pi^0}$, and $m_{\KS}$, and their
uncertainties.
\section{Extraction of Signal Yields}
After the $R_L$ requirement is applied, we define the region of
interest using the
beam-energy-substituted mass \mbox{$m_{\rm ES}$}\xspace\ and the mass of the \ensuremath{D^+_s}\xspace\ candidate \mDs.
We require
$5.2<\mbox{$m_{\rm ES}$}\xspace<5.3$~\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace and
$|\mDs-\mDs_\mathrm{PDG}|<50$~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace\ for \btodspi, \btodsrho, and
\btodskokstar\ modes, where
$\mDs_\mathrm{PDG}$ is the world average $D_s$ mass~\cite{PDG2006}.
The invariant
mass \mDs\ has a resolution of
(5-6)~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace, depending on the $\ensuremath{D^+_s}\xspace$ decay mode.
The selection is significantly broader than the region populated by
the signal events, and allows us to constrain backgrounds in the signal
region.
For \btodsspi, \btodssrho, and \btodsskokstar, we require
$|\mDs-\mDs_\mathrm{PDG}|<10$~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace.
Five classes of background events contribute to the fit region.
First is the {\em combinatorial background\/}, in which a true or fake
$D_s^{(*)}$ candidate is combined with a randomly-selected light
meson.
Second, $B$ meson decays such as $\ensuremath{\Bbar^0}\xspace{\ensuremath{\rightarrow}\xspace}D^{(*)+}\ensuremath{\pi^-}\xspace$ or
$\ensuremath{\Bbar^0}\xspace{\ensuremath{\rightarrow}\xspace}D^{(*)+}\rho^-$ with
$\ensuremath{D^+}\xspace{\ensuremath{\rightarrow}\xspace}\KS\ensuremath{\pi^+}\xspace$ or $\ensuremath{\Kbar^{*0}}\xspace\ensuremath{\pi^+}\xspace$ can
constitute a background for the \btodsospi\ and \btodsosrho\ modes if
the pion in the $D$ decay is misidentified as a kaon ({\it{reflection
background}}). The reflection background has nearly the same \mbox{$m_{\rm ES}$}\xspace\ distribution
as the signal but different distributions in \De\ and \mDs.
The corresponding backgrounds for the
\btodskokstar\ mode ($\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace}\ensuremath{D^-}\xspace K^{(*)+}$) are negligible.
Third, rare {\em charmless\/} $B$ decays into the same final state,
such as $\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} \kern 0.2em\overline{\kern -0.2em K}{}\xspace^{(*)0}\ensuremath{K^+}\xspace h$ (where $h=\pi,\,\rho,\,K$, or
$K^*$),
have the same \mbox{$m_{\rm ES}$}\xspace\ and \De\ distributions as the $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s h$
signal, but are nearly flat in \mDs. The charmless background is
significant in \btodspi, \btodsrho, and \btodskokstar\ decays, but is
effectively rejected by the $\Delta m(\ensuremath{D^{*+}_s}\xspace)$
variable for the modes with \ensuremath{D^{*+}_s}\xspace.
Finally, two classes of background events have nearly the same distribution
as the signal events in both \mDs\ and \mbox{$m_{\rm ES}$}\xspace. For \btodsoskstar\
modes we take into account the potential contributions from
the {\em non-resonant\/} decays $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{(*)-}K^0\pi^+$ (which have
recently been observed~\cite{ref:Vitaly}), and
color-suppressed $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{(*)-}K^+\pi^0$ (unobserved so
far). Analogous non-resonant modes $B^0\ensuremath{\rightarrow}\xspace D_s^{(*)+}\pi^-\pi^0$
require the additional popping of a color-matched $q\bar{q}$ pair. They are
expected to be small compared to \btodsosrho~\cite{ref:Vitaly} and
are ignored. The fifth class is the
{\em crossfeed background\/} from misidentification of
$\kern 0.18em\overline{\kern -0.18em B}{}\xspace^0\ensuremath{\rightarrow}\xspace D_s^{(*)-}\pi^+$ events
as \btodsosk\ signal, and vice versa.
For each mode of interest, we perform an unbinned extended
maximum-likelihood (ML) fit to separate the signal events from the
backgrounds and extract the signal branching fractions.
For \btodspi, \btodsrho, \btodsk, and \btodskstar, we perform a
two-dimensional fit to the \mbox{$m_{\rm ES}$}\xspace\ and \mDs\ distributions.
For \btodsspi\ and \btodssk\ decays, we fit the
one-dimensional \mbox{$m_{\rm ES}$}\xspace\ distribution. In vector-vector modes
\btodssrho and \btodsskstar, we constrain both the branching fractions of
the signal modes and the polarization of the vector mesons by
performing a three-dimensional fit to the distributions of \mbox{$m_{\rm ES}$}\xspace\ and
the two helicity angles of the $\ensuremath{D^{*+}_s}\xspace$ and $\rho^-$ ($K^{*+}$) mesons.
For each $B$ decay, we simultaneously fit distributions in the three
\ensuremath{D^+_s}\xspace\ decay modes,
constraining the signal branching fractions to a common value.
The likelihood function contains the contributions
of the signal and the five background components discussed above.
The function to be maximized is
\begin{equation}
{\cal L} = \exp\left(-\sum_{k,m}^{} n_{km}\right)\,
\prod_{i=1}^{N_{\rm cand}}
\left(\sum_{j}~n_{jm}\,
{\cal P}_{jm}(\vec{\zeta}_{i})\right)
\label{eq:likel}
\end{equation}
where $n_{jm}$ is
the number of events for each event type $j$
(signal and all background modes) in each $D_s$ decay mode $m$,
and ${\cal P}_{jm}(\vec{\zeta}_{i})$ is the
probability density function of the variables
$\vec{\zeta}_{i}=(\mbox{$m_{\rm ES}$}\xspace, \mDs, \cos\theta_{D_s^*}, \cos\theta_V)$
for the $i$th event. The likelihood product is computed over all
$N_\mathrm{cand}$ candidates in the region of interest. We
parameterize the event yields as
\begin{equation}
n_{jm} = N_{\BB} \mathcal{B}_j\mathcal{B}_m^{Ds}\varepsilon_m\, ,
\label{eq:yieldDef}
\end{equation}
where $m$ stands for $\dsphipi$, $\dskstark$, or $\dsksk$,
$N_{\BB}=\lumi$, $\mathcal{B}_j$ is the $B$ decay branching fraction,
$\mathcal{B}_m^{Ds}$ is the branching fraction of the
$m$-th $\ds$ mode, and $\varepsilon_m$ is the reconstruction efficiency.
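As a minimal numerical illustration of the yield parameterization in Eq.~(\ref{eq:yieldDef}); all input values below are hypothetical placeholders, not the measured inputs of this analysis:

```python
# Sketch of n_jm = N_BB * B_j * B_m^{Ds} * eps_m (Eq. for the yields).
# All numbers are hypothetical placeholders, not the analysis inputs.

def expected_yield(n_bb, br_b, br_ds, efficiency):
    """Expected event count for one B mode j and one D_s mode m."""
    return n_bb * br_b * br_ds * efficiency

# Hypothetical example: 4e8 BB pairs, B-decay branching fraction 2.5e-5,
# D_s branching fraction 4.5e-2, reconstruction efficiency 10%.
print(round(expected_yield(4.0e8, 2.5e-5, 4.5e-2, 0.10), 6))  # 45.0
```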
The branching fractions of the channels contributing to the reflection
background are fixed in the
fit to the current world average values~\cite{PDG2006} and the
branching fractions of the crossfeed backgrounds are determined by
iterating the fits over each $B$ decay mode.
The branching fractions of the non-resonant backgrounds are fixed to
the values recently measured by \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}~\cite{ref:Vitaly}. In the case of
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{(*)-}K^+\pi^0$, which can contribute to \btodsoskstar\
($K^{*+}\ensuremath{\rightarrow}\xspace K^+\pi^0$), we estimate the branching fraction by
\begin{eqnarray}
{\ensuremath{\cal B}\xspace}(\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{(*)-}K^+\pi^0) \approx & \nonumber\\
{\ensuremath{\cal B}\xspace}(B^+\ensuremath{\rightarrow}\xspace D_s^{(*)-}K^+\pi^+) &
\frac{{\ensuremath{\cal B}\xspace}(B^0\ensuremath{\rightarrow}\xspace\ensuremath{\Dbar^0}\xspace\pi^0)}{{\ensuremath{\cal B}\xspace}(B^+\ensuremath{\rightarrow}\xspace\ensuremath{\Dbar^0}\xspace\pi^+)}\, .
\label{eq:DsKpi0}
\end{eqnarray}
This scaling assumes that the dominant mechanism for producing
both $D_s^{(*)-}K^+\pi^0$ and $D_s^{(*)-}K^+\pi^+$ final states is a
sub-threshold production of a charmed $D^{**0}$ meson, which
subsequently decays into $D_s^{(*)-}K^+$, as indicated by the
invariant mass spectrum of $D_s^{(*)-}K^+$~\cite{ref:Vitaly}. Since
the decay $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{**0}\pi^0$ is color-suppressed compared to
$B^+\ensuremath{\rightarrow}\xspace D^{**0}\pi^+$, we estimate the color
suppression factor from the $B^0\ensuremath{\rightarrow}\xspace\ensuremath{\Dbar^0}\xspace\pi^0$ decays. Direct
production of the color-suppressed $D_s^{(*)-}K^+\pi^0$ final state
(without the intermediate $D^{**0}$) results in a smaller branching
fraction estimate. We assign a 100\% systematic uncertainty to
${\ensuremath{\cal B}\xspace}(\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D_s^{(*)-}K^+\pi^0)$.
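A back-of-the-envelope evaluation of the scaling in Eq.~(\ref{eq:DsKpi0}) can be sketched as follows; the input branching fractions are illustrative placeholders, not the world averages used in the fit:

```python
# Sketch of the color-suppression scaling in Eq. (DsKpi0).
# The inputs below are hypothetical placeholders.

def scaled_bf(bf_dskpi_charged, bf_d0pi0, bf_d0pi_plus):
    """B(B0 -> Ds(*) K+ pi0) estimated from the charged B+ mode,
    scaled by the ratio B(B0 -> D0bar pi0) / B(B+ -> D0bar pi+)."""
    return bf_dskpi_charged * (bf_d0pi0 / bf_d0pi_plus)

# Hypothetical inputs: charged mode 1.0e-4, suppression ratio 2.6e-4 / 4.8e-3.
est = scaled_bf(1.0e-4, 2.6e-4, 4.8e-3)
print(f"{est:.2e}")  # 5.42e-06
```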
The expected yields of the dominant $B$-decay
backgrounds are listed in Table~\ref{tab:bkg}.
The PDFs and efficiencies for the signal, reflection, and crossfeed
backgrounds are determined independently for each \ensuremath{D^+_s}\xspace\ decay mode
using Monte Carlo samples.
The signal contribution is modeled as a Gaussian (\btodspi\ and \btodsk)
or a ``Crystal Ball'' function~\cite{ref:CB} in \mbox{$m_{\rm ES}$}\xspace\ and a double
Gaussian in \mDs.
The combinatorial background is
described in \mbox{$m_{\rm ES}$}\xspace\ by a threshold function~\cite{argus},
$dN/dx\propto x\sqrt{1-2x^{2}/s}\exp\left[-\xi\left(1-2x^{2}/s\right)\right]$,
characterized by the shape parameter $\xi$.
This shape
parameter, common to all \ensuremath{D^+_s}\xspace\ modes, is allowed to vary in the fit.
In \mDs, the combinatorial background is well described by a
combination of a first-order polynomial (fake \ensuremath{D^+_s}\xspace\ candidates) and a
Gaussian with (5-6)~\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace\ resolution (true \ensuremath{D^+_s}\xspace\
candidates). The charmless background is parameterized by the signal
Gaussian shape in \mbox{$m_{\rm ES}$}\xspace\ and a first-order polynomial in \mDs.
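The threshold shape quoted above can be sketched numerically as follows; the values of $s$ and $\xi$ are illustrative, not fitted:

```python
import math

# Sketch of the threshold shape quoted in the text (up to normalization):
# dN/dx proportional to x * sqrt(1 - 2x^2/s) * exp[-xi * (1 - 2x^2/s)].
# The values of s and xi used below are illustrative only.

def argus_shape(x, s, xi):
    t = 1.0 - 2.0 * x * x / s
    if t <= 0.0:
        return 0.0  # the shape vanishes at and beyond the kinematic endpoint
    return x * math.sqrt(t) * math.exp(-xi * t)

# The shape goes to zero at the endpoint x = sqrt(s/2):
endpoint = math.sqrt(2.0 / 2.0)  # with s = 2.0
print(argus_shape(endpoint, 2.0, 20.0))  # 0.0
```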
Ideally, the distribution of the helicity angles in the vector-vector
decays is given by Eq.~(\ref{eq:helicityshort}).
The helicity angle $\theta_{D_s^*}$ is defined as the angle between
the direction of
the photon in $D_s^*\ensuremath{\rightarrow}\xspace D_s\gamma$ and the direction
of the $B$ in the rest frame of the
$D_s^{*}$ candidate. The helicity angle $\theta_V$ is similarly
defined by the direction of the charged daughter particle in the decays
$\rho^+\ensuremath{\rightarrow}\xspace\pi^+\pi^0$, $K^{*+}\ensuremath{\rightarrow}\xspace K^+\pi^0$, and
$K^{*+}\ensuremath{\rightarrow}\xspace \KS\pi^{+}$.
Since the momenta of the decay products in
the laboratory frame depend on the helicity angles, acceptance and
efficiency effects modify the ideal angular distribution. We determine
the PDFs of the signal events using
the Monte Carlo simulation, and measure the angular distribution of
the combinatorial background in the data region $\mbox{$m_{\rm ES}$}\xspace<5.27$~\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace.
For \btodspi, \btodsrho, and \btodsk, the fit determines
14 free parameters: the shape parameter of the combinatorial background
$\xi$ (1 parameter for all \ensuremath{D^+_s}\xspace\ modes), the slopes of the combinatorial
and charmless backgrounds in \mDs\ (3 parameters), the fractions of
true \ensuremath{D^+_s}\xspace\ candidates in
combinatorial background (3), the numbers of combinatorial background
events (3), the numbers of charmless events (3), and the branching fraction
of the signal mode (1).
In the \btodskstar\ mode (6 individual sub-modes, spanning 3 \ensuremath{D^+_s}\xspace\ channels and
2 $K^{*+}$ channels), 26 free parameters are determined.
For the \btodsspi\ and \btodssk\ decays, 5 free
parameters are determined by the fit: $\xi$ (1 parameter for all \ensuremath{D^+_s}\xspace\
modes), the number of combinatorial background events (3), and the
branching fraction of the signal mode (1).
For \btodssrho\ and \btodsskstar\ fits, we add one more free parameter
to the fit: the longitudinal polarization fraction $f_L$ (see
\eqref{eq:helicityshort}). The total number of free parameters is 6 in
\btodssrho\ and 9 in \btodsskstar.
The results of the fits are shown in
Figs.~\ref{fig:fit1}-\ref{fig:fit3} and summarized in Table~\ref{tab:fit}.
\section{Systematic Uncertainties}
For the branching fractions, the systematic errors are dominated by
the 13\% relative uncertainty for
${\ensuremath{\cal B}\xspace}(\ensuremath{D^+_s}\xspace\rightarrow\phi\ensuremath{\pi^+}\xspace)$~\cite{PDG2006}.
The uncertainty in
the branching fraction ratio
${\ensuremath{\cal B}\xspace}(\ensuremath{D^+_s}\xspace{\ensuremath{\rightarrow}\xspace}\ensuremath{\Kbar^{*0}}\xspace\ensuremath{K^+}\xspace)/{\ensuremath{\cal B}\xspace}(\ensuremath{D^+_s}\xspace{\ensuremath{\rightarrow}\xspace}\phi\ensuremath{\pi^+}\xspace)$
contributes (2-4)\%, depending on the decay channel.
For ${\ensuremath{\cal B}\xspace}(\ensuremath{D^+_s}\xspace{\ensuremath{\rightarrow}\xspace}\KS\ensuremath{K^+}\xspace)$, we use the most recent
measurement from the CLEO Collaboration~\cite{ref:KsKCLEO}, which
differs from the previously
reported central value~\cite{PDG2006} by about 50\%. We
estimate uncertainties due to modeling of the resonance ($K^{*0}$,
$\phi$, $\rho$, and $K^{*+}$) lineshapes by measuring the
effect of the lineshape variation on signal selection efficiency.
The uncertainties in the signal selection efficiency are determined by
the accuracy with which the detector effects are modeled in the Monte
Carlo simulations.
Tracking, particle identification (PID), photon, $\pi^0$ and
$\KS$ reconstruction efficiencies are measured across a wide range
of particle momenta in dedicated data control
samples. The tracking efficiency and
resolution are adequately reproduced by the simulations. The simulated
distributions are corrected for the efficiency and resolution of the
$\pi^0$ reconstruction. The efficiency of the $R_L$ cut is also
measured in the data
control samples, as discussed in Section~\ref{sec:selection}.
The uncertainties due to the knowledge of the signal and background
PDFs in the ML fit are estimated by measuring the variation of the
fitted values of the branching fractions when PDF parameters are
varied within their uncertainties.
The correlations between parameters
are taken into account.
The uncertainties in the signal PDF parameters for the key
discriminants \De, \mbox{$m_{\rm ES}$}\xspace, \mDs, $\Delta m(\ensuremath{D^{*+}_s}\xspace)$, and
$\cos\theta_{\ensuremath{D^{*+}_s}\xspace}$ are determined by comparing data and Monte
Carlo simulations for the samples of decays
$\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^-\pi^+,\, D^-\rho^+$ ($D^-\ensuremath{\rightarrow}\xspace K^+\pi^-\pi^-,\, \KS\pi^-$)
and $B^+\ensuremath{\rightarrow}\xspace\kern 0.2em\overline{\kern -0.2em D}{}\xspace^{*0}\pi^+,\,\kern 0.2em\overline{\kern -0.2em D}{}\xspace^{*0}\rho^+$ ($\kern 0.2em\overline{\kern -0.2em D}{}\xspace^{*0}\ensuremath{\rightarrow}\xspace
\kern 0.2em\overline{\kern -0.2em D}{}\xspace^0\gamma,\, D^0\ensuremath{\rightarrow}\xspace K^-\pi^+$). The uncertainties in the signal
PDFs for $\cos\theta_{\rho,K*}$ and the PDFs for the peaking
backgrounds are determined by Monte Carlo simulations. These
distributions depend on
the modeling of the charged track and $\pi^0$ reconstruction,
discussed above.
The helicity angle PDFs for the
continuum background are determined in the data sideband
$\mbox{$m_{\rm ES}$}\xspace<5.27$~\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace, and their uncertainties are statistical in
nature.
Uncertainties due to reflection and crossfeed
backgrounds include the uncertainties in the branching fractions of the
relevant modes, and also account for the contributions of the
sub-dominant modes that are not explicitly included in the ML
fit. These contributions dominate the systematic uncertainty for the
\btodsrho\ mode, which has a small absolute branching fraction.
As ML estimators may be biased in small samples,
we measure the bias using a large ensemble of simulated
experiments. In each of these experiments,
generated according to the sample composition
observed in data,
the signal and $B$-decay
background events are fully simulated, and the combinatorial
background events are generated from their PDFs.
The bias is found to be negligible for the 1- and
2-dimensional ML fits (\btodsospi, \btodsosk, \btodskstar\ modes).
On the other hand, we find that in the vector-vector modes
(\btodssrho\ and \btodsskstar\ decays), the 3-dimensional ML fits
underestimate the true values of the signal branching fraction and
the fraction of the longitudinal polarization.
We correct for the biases of
$\Delta{\ensuremath{\cal B}\xspace}=(-0.37\pm 0.03)\times10^{-5}$ and
$\Delta f_L=(-5.3\pm 0.6)\%$ (\btodssrho) and
$\Delta{\ensuremath{\cal B}\xspace}=(-0.14\pm 0.04)\times10^{-5}$ and
$\Delta f_L=(-5.5\pm 0.8)\%$ (\btodsskstar).
We assign a conservative uncertainty of 50\% of
the bias to this correction.
For the longitudinal polarization fractions $f_L$ in the vector-vector
modes, the systematic errors are dominated by the uncertainties
in the shapes of the signal and background PDFs and the fit bias.
The systematic uncertainties for each mode are summarized in
Tables~\ref{tab:systematics1}-\ref{tab:systematics3}.
\section{Results}
We estimate the significance of a non-zero signal yield by computing
$\mathcal{S} = \sqrt{-2\log(\mathcal{L}_0/\mathcal{L}_{\max})}$,
where
$\mathcal{L}_{\max}$ is the maximum likelihood value, and
$\mathcal{L}_0$ is the likelihood for a fit in which the signal
contribution is set to zero.
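The significance estimate above can be sketched as follows; the likelihood values are illustrative:

```python
import math

# Sketch of the significance estimate S = sqrt(-2 * ln(L0 / Lmax)),
# where Lmax is the maximum likelihood and L0 the likelihood with the
# signal contribution set to zero. The values below are illustrative.

def significance(l0, lmax):
    return math.sqrt(-2.0 * math.log(l0 / lmax))

# If the log-likelihood drops by 8 units when the signal is removed,
# the significance is sqrt(2 * 8) = 4 standard deviations:
lmax = 1.0
l0 = math.exp(-8.0)
print(round(significance(l0, lmax), 6))  # 4.0
```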
Including systematic uncertainties and assuming Gaussian-distributed
errors, we obtain signal
observation significances of
$3.9$ (\btodssrho), $4.6$ (\btodskstar), and $3.1$ (\btodsskstar) standard
deviations, providing the first evidence for these decays.
Using a large ensemble of simulated experiments, we verify that
$\mathcal{S}$ measures the probability for the background events to
fluctuate to the observed number of signal events. For each such
experiment, we generate a set of pure background events according to
the PDFs and sample composition observed in our dataset. We then fit
each simulated experiment and measure the signal and background yields
and, for the vector-vector modes, the polarization fraction $f_L$. By
counting the fraction of such pseudo-experiments in which the signal
yields are at least as large as the yield observed in the real dataset,
we confirm that $\mathcal{S}^2$ follows closely the $\chi^2$
distribution with one degree of freedom.
The branching fraction and polarization results are collected in
Table~\ref{tab:fit}.
Since we do not
observe a significant event yield in \btodsrho, we set a 90\%
confidence-level Bayesian upper limit assuming a constant prior for
${\ensuremath{\cal B}\xspace}(\btodsrho)>0$.
\section{Conclusions}
We report the following improved measurements of the branching
fractions for the rare
decays \btodsospi\ and \btodsosk, and the first measurements of the
branching fractions for the decays \btodsosrho\ and \btodsoskstar, as
well as the measurements of
the longitudinal polarization fractions $f_L$ in vector-vector final
states \btodssrho\ and \btodsskstar:
\begin{eqnarray*}
{\ensuremath{\cal B}\xspace}(\btodspi) &=& [2.5\pm 0.4\pm 0.2]\times 10^{-5}\\
{\ensuremath{\cal B}\xspace}(\btodsspi) &=& [2.6^{+0.5}_{-0.4}\pm 0.3]\times 10^{-5}\\
{\ensuremath{\cal B}\xspace}(\btodsrho) &=& [1.1^{+0.9}_{-0.8}\pm 0.3]\times 10^{-5}\\
{\ensuremath{\cal B}\xspace}(\btodsrho) &<& 2.4\times 10^{-5}\ (90\% \mathrm{C.L.})\\
{\ensuremath{\cal B}\xspace}(\btodssrho) &=& [4.4^{+1.3}_{-1.2}\pm 0.8]\times 10^{-5}\\
f_L(\btodssrho) &=& 0.86^{+0.26}_{-0.28}\pm 0.15\\
{\ensuremath{\cal B}\xspace}(\btodsk) &=& [2.9\pm 0.4\pm 0.3]\times 10^{-5}\\
{\ensuremath{\cal B}\xspace}(\btodssk) &=& [2.4\pm 0.4\pm 0.2]\times 10^{-5}\\
{\ensuremath{\cal B}\xspace}(\btodskstar) &=& [3.6^{+1.0}_{-0.9}\pm 0.4]\times 10^{-5}\\
{\ensuremath{\cal B}\xspace}(\btodsskstar) &=& [3.0^{+1.4}_{-1.2}\pm 0.3]\times 10^{-5}\\
f_L(\btodsskstar) &=& 0.96^{+0.38}_{-0.31}\pm 0.08\ , \\
\end{eqnarray*}
where the first quoted uncertainty is statistical, and the second is
systematic.
The branching fractions for \btodsoskokstar\ are small
compared to the dominant decays $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace D^{(*)-}\pi^+$ and $\ensuremath{B^0}\xspace\ensuremath{\rightarrow}\xspace
D^{(*)-}\rho^+$, implying
insignificant contributions from the color-suppressed
$W$-exchange diagrams. The ratios ${\ensuremath{\cal B}\xspace}(\btodsk)/{\ensuremath{\cal B}\xspace}(\btodssk)$ and
${\ensuremath{\cal B}\xspace}(\btodskstar)/{\ensuremath{\cal B}\xspace}(\btodsskstar)$ are consistent with the
expectation of unity~\cite{Wexch}. The predictions for the
branching fractions of \btodsospi\ and \btodsosrho\ decays are based on
the factorization assumption~\cite{ref:factor} and depend on the estimates
of the hadronic form factors. The observed pattern
${\ensuremath{\cal B}\xspace}(\btodsrho)<{\ensuremath{\cal B}\xspace}(\btodspi)\approx{\ensuremath{\cal B}\xspace}(\btodsspi)<{\ensuremath{\cal B}\xspace}(\btodssrho)$
appears to be most consistent with the form factors computed in
\cite{ref:FF_BSW}.
The polarizations of the vector mesons in
\btodssrho\ and \btodsskstar\ are consistent with
expectations~\cite{Wexch,ref:VV}.
Assuming the SU(3) relation, \eqref{eq:rDPi}, and the recent value of
$f_{D^{(*)}_s}/f_{D^{(*)}}$ from an unquenched Lattice QCD
calculation~\cite{fdsdRef},
we determine the ratios
of the CKM-suppressed to CKM-favored decay amplitudes in
decays $\ensuremath{B}\xspace^0\ensuremath{\rightarrow}\xspace D^{(*)\pm}\pi^{\mp}$ and $\ensuremath{B}\xspace^0\ensuremath{\rightarrow}\xspace
D^{(*)\pm}\rho^{\mp}$:
\begin{eqnarray*}
r(D\pi) &=& [1.75\pm0.14\,\ensuremath{\mathrm{(stat)}}\xspace\pm0.09\,\ensuremath{\mathrm{(syst)}}\xspace\pm0.10\,(\mathrm{th})]\%\\
r(D^{*}\pi) &=& [1.81^{+0.17}_{-0.14}\,\ensuremath{\mathrm{(stat)}}\xspace\pm0.12\,\ensuremath{\mathrm{(syst)}}\xspace\pm0.10\,(\mathrm{th})]\%\\
r(D\rho) &=& [0.71^{+0.29}_{-0.26}\,\ensuremath{\mathrm{(stat)}}\xspace\pm0.11\,\ensuremath{\mathrm{(syst)}}\xspace\pm0.04\,(\mathrm{th})]\%\\
r(D^{*}\rho) &=& [1.50^{+0.22}_{-0.21}\,\ensuremath{\mathrm{(stat)}}\xspace\pm0.16\,\ensuremath{\mathrm{(syst)}}\xspace\pm0.08\,(\mathrm{th})]\%\\
\end{eqnarray*}
where the first error is statistical, the second includes experimental
systematics, and the last accounts for the uncertainty in the
theoretical value of $f_{D^{(*)}_s}/f_{D^{(*)}}$~\cite{fdsdRef}.
These amplitude ratios are below 2\%,
which implies small $C\! P$ asymmetries in
$\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)\mp}\pi^\pm$ and $\ensuremath{B^0}\xspace{\ensuremath{\rightarrow}\xspace} D^{(*)\mp}\rho^\pm$
decays, making it difficult to measure $\sin(2\beta+\gamma)$ precisely
in these
decays. The results presented here
supersede our previously published measurements~\cite{priorBaBar}.
\section{Acknowledgments}
\input acknowledgements.tex
\section{Introduction}
CNNs learn to construct a hierarchy of representations from training images. These representations are not directly comprehensible to humans, and it is challenging to gain a good understanding of what a trained network has learned without appropriate visualisation techniques. Several efforts have been devoted to visualising deep neural network inference in recent years. These techniques complement each other to show different aspects of CNN inference.
An early approach to network introspection is based on Activation Maximization.
\emph{Activation Maximization} aims to find out what a neuron has learned by finding the image that maximally activates it. The pattern in that image can then be thought of as an approximation to the learned representation.
Activation Maximization collects sample images from the training set that have the highest activations for the neuron. By comparing them for similarities a common pattern can be found. This na\"ive approach is easy to implement but has several drawbacks. First, there is no quantitative measure of similarity and human observation is ambiguous and error-prone. Images may be interpreted in several ways.
Second, the whole training set has to be fed into the model to compute activation of neurons for every image, which is very time-consuming. Hence, it is often preferred to synthesize an image that can maximally activate a given neuron.
The aim is to maximize the activation of a neuron, thus this can be treated as an optimisation problem \cite{erhan2009visualizing}. Let ${a_i}\left( {x;\theta } \right)$ be the activation of neuron $i$. It is a function of input image $x$ and model parameters $\theta$. We want to find a specific input $x^*$ that maximizes $a_i$ with
\begin{align}
{x^*} = \argmax_{\left\| x \right\| = p} {a_i}\left( {x;\theta } \right).
\end{align}
This optimisation problem can be solved by gradient ascent. To start with, $x$ is an image filled with random pixels. In each iteration, the activation $a_i$ is computed. Its derivative with respect to $x$ is then used to update the pixel intensity of $x$. This process is repeated until convergence, and the resulting image should have high activation for neuron $i$.
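The gradient ascent loop described above can be sketched on a toy linear ``neuron''; a real CNN neuron would replace the activation and gradient below with a forward and a backward pass of the network:

```python
import numpy as np

# Toy sketch of activation maximization by projected gradient ascent.
# The "neuron" is a linear unit a(x) = w . x, so the optimum on the
# sphere ||x|| = p is known in closed form: x* = p * w / ||w||.

rng = np.random.default_rng(0)
w = rng.normal(size=8)

def activation(x):
    return float(w @ x)

def grad(x):
    return w  # d(w . x)/dx

p = 1.0
x = rng.normal(size=8)
x *= p / np.linalg.norm(x)          # start on the sphere ||x|| = p
for _ in range(500):
    x = x + 0.1 * grad(x)           # gradient ascent step on a_i(x)
    x *= p / np.linalg.norm(x)      # project back onto ||x|| = p

best = p * w / np.linalg.norm(w)    # known optimum for the linear neuron
print(np.isclose(activation(x), activation(best)))  # True
```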
Applying this technique to neurons in hidden layers can produce some recognisable features learned by the network. However, if the aim is to visualise neurons in the output layer, this procedure fails to generate images interpretable to humans \cite{nguyen2015deep}. An improvement was made by Simonyan et al. \cite{simonyan2013deep}. They added L2 regularization to the gradient ascent formulation and proposed
\begin{align}
{x^*} = \argmax_{x} \left({a_i}\left( {x;\theta } \right) - \lambda \left\| x \right\|_2^2\right).
\label{eq:Simonyan}
\end{align}
In Eq.~\ref{eq:Simonyan}, $\lambda$ is a regularization parameter. It controls the degree to which $x$ is penalized. With this small change in the objective function, gradient ascent generates more interpretable images for neurons in the output layer.
Based on this work, Yosinski et al. \cite{yosinski2015understanding} further generalized the gradient ascent update rule. Instead of sticking to L2 regularization, they introduced a regularization operator $r_{\theta}$ and experimented with different choices for it:
\begin{align}
x \leftarrow {r_\theta }\left( {x + \eta \frac{{\partial {a_i}}}{{\partial x}}} \right).
\end{align}
Here, $r_{\theta}$ may refer to L2 regularization, Gaussian blur or pixel clipping. In practice, a combination of these regularizers has been shown to produce the most natural images. Figure \ref{actmax} shows the visualisation of output layer neurons in an 8-layer CNN using this approach.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\columnwidth]{images/actmax.jpg}
\caption{visualisation of output layer features \cite{yosinski2015understanding}.}
\label{actmax}
\end{figure}
\noindent
The images in Figure~\ref{actmax} can capture class-specific features to some extent, but they are too abstract and noisy for human observers to infer useful information. A different approach to visualisation synthesis was adopted by Nguyen et al. \cite{nguyen2016synthesizing}. Instead of performing gradient ascent directly on the input space of the model, they use a generator network $G$ for generating visualisations and optimise on the input space of $G$ by
\begin{align}
{x^*} = \argmax_{x}\left( {{a_i}\left( {G\left( x \right)} \right) - \lambda \left\| x \right\|_2^2} \right).
\end{align}
Since $G$ was explicitly trained to generate natural images, it acts as a prior to ensure the interpretability of visualizations. Example images are shown in Figure \ref{synthesize}.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{images/synthesize.jpg}
\caption{synthesized images that maximally activate given output neurons \cite{nguyen2016synthesizing}.}
\label{synthesize}
\end{figure}
\noindent
Activation maximization is helpful for human observers to understand the general representation learned by a deep neural network. It cannot explain the network's response to an individual image. \emph{Representation Inversion} has been developed as a class of techniques tailored for this purpose.
It works by projecting the output or hidden activations of a network back to input space and visualising the result.
A classic Representation Inversion algorithm is deconvolution. As its name suggests, a deconvolutional network (DCNN) performs the inverse operations of a CNN \cite{zeiler2011adaptive}. In correspondence with a CNN, a DCNN has three components: unpooling, rectification and filtering.
To visualise a given neuron, a DCNN is attached to the layers of a CNN and all other neuron activations are set to zero. The feature map is then fed as input to the DCNN, and the inverses of all operations in the CNN are performed in reverse order until input space is reached.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/deconvolution.png}
\caption{image patches and the corresponding reconstructed patterns \cite{zeiler2014visualizing}.}
\label{deconvolution}
\end{figure}
\noindent
Figure \ref{deconvolution} shows the visualisation of high-level features of a trained CNN. We can observe that patterns reconstructed via deconvolution have similar shapes for images in the same class despite their individual differences. This might not be desirable if we want to visualise more image-specific features.
Simonyan et al. proposed to provide class saliency maps by computing the partial derivative of the class score with respect to each input pixel \cite{simonyan2013deep}. This can be thought of as a Representation Inversion method because gradient information is propagated back in this case.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\columnwidth]{images/sensitivity.png}
\caption{an image and its corresponding saliency map \cite{simonyan2013deep}.}
\label{sensitivity}
\end{figure}
Figure \ref{sensitivity} demonstrates an example of a class saliency map. The object's contour is roughly preserved in the visualisation. This method is fast to compute, but it only illustrates the sensitivity of a model's prediction to individual pixels. It cannot be used to accurately measure each pixel's relevance since it does not consider higher-order interactions of pixels.
Bach et al. proposed the principle of conservation for pixel relevance distribution: the sum of the relevances $R_i$ over all pixels $i$ should be roughly equal to the model's output $f(x)$ \cite{bach2015pixel}. Following this principle, a distribution rule called layer-wise relevance propagation is given by
\begin{align}
R_i^l = \sum\limits_j {\frac{{{z_{ij}}}}{{\sum\limits_{i'} {{z_{i'j}}} }}R_j^{l + 1}},
\end{align}
where $R_i^l$ is the relevance of neuron $i$ in layer $l$ and $z_{ij}$ equals $x_i^lw_{ij}^{l + 1}$.
This rule assigns relevance of upper-layer neurons to a lower-layer neuron proportionally to their connecting weights. Since there is a non-linear activation function between layers, an approximation rule (deep Taylor decomposition) is used \cite{montavon2017explaining}, which is given by
\begin{align}
{R_i} = {\frac{{\partial f}}{{\partial {{\tilde x}_i}}} \cdot \left( {{x_i} - {{\tilde x}_i}} \right)},
\end{align}
where $\tilde x$ is chosen such that $f\left(\tilde x\right) = 0$.
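The relevance propagation rule above can be sketched for a single linear layer; the weights and relevances below are arbitrary illustrative values, and the stabilizer `eps` is an implementation detail not present in the formula:

```python
import numpy as np

# Sketch of the layer-wise relevance propagation rule for one linear
# layer: R_i = sum_j (z_ij / sum_i' z_i'j) * R_j, with z_ij = x_i * w_ij.
# A small stabilizer eps (an implementation detail) avoids dividing by
# a vanishing denominator.

def lrp_linear(x, w, r_upper, eps=1e-9):
    z = x[:, None] * w                                   # z_ij = x_i * w_ij
    denom = z.sum(axis=0)                                # sum_i' z_i'j, per upper neuron j
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)
    return (z / denom) @ r_upper                         # R_i = sum_j (z_ij / denom_j) R_j

x = np.array([1.0, 2.0, 0.5])
w = np.array([[0.2, -0.1],
              [0.4,  0.3],
              [0.1,  0.5]])
r_upper = np.array([1.0, 2.0])
r_lower = lrp_linear(x, w, r_upper)

# The conservation principle: relevance is (approximately) preserved.
print(np.isclose(r_lower.sum(), r_upper.sum()))  # True
```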
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\columnwidth]{images/taylor.png}
\caption{an image and its corresponding deep Taylor decomposition \cite{montavon2017explaining}.}
\label{taylor}
\end{figure}
\noindent
Selvaraju et al. used another approach to distribute feature relevance: \emph{Gradient-weighted Class Activation Mapping} (Grad-CAM) \cite{selvaraju2016grad} is a generalization of \emph{Class Activation Mapping} (CAM) \cite{zhou2016learning}. It is based on the observations that deeper layers of a CNN usually encode higher-level visual representations, and spatial locations are lost in fully connected layers. So, the last convolutional layer preserves both semantic and spatial information.
Grad-CAM first computes the partial derivatives of the class score with respect to each feature map $A^k$ of the last convolutional layer and takes the average to get the feature map importance $a_k$:
\begin{align}
{a_k} = \frac{1}{Z}\sum\limits_i {\sum\limits_j {\frac{{\partial y}}{{\partial A_{ij}^k}}} },
\end{align}
where $Z$ is the feature map size. Then the class activation map is computed by
\begin{align}
{L_{CAM}} = {\mathop{ReLU}}\left( {\sum\limits_k {{a_k}{A^k}} } \right) \odot \frac{{\partial f}}{{\partial x}}.
\end{align}
Since $L_{CAM}$ is of the same size as the last feature map, it needs to be up-sampled to input size for visualisation, as shown in Figure \ref{gradcam}.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/gradcam.png}
\caption{an image and its corresponding Grad-CAM for different classes \cite{selvaraju2016grad}.}
\label{gradcam}
\end{figure}
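The two Grad-CAM steps can be sketched with synthetic arrays standing in for a real network's feature maps and gradients; only the ReLU-weighted sum over feature maps is shown (the element-wise product with the input gradient is omitted here):

```python
import numpy as np

# Sketch of the Grad-CAM map from given feature maps A^k and gradients
# dy/dA^k of the class score. The arrays below are synthetic stand-ins
# for a real network's last-convolutional-layer activations.

def grad_cam(feature_maps, grads):
    """feature_maps, grads: arrays of shape (K, H, W)."""
    z = feature_maps.shape[1] * feature_maps.shape[2]
    a = grads.sum(axis=(1, 2)) / z               # a_k = (1/Z) sum_ij dy/dA_ij^k
    cam = np.tensordot(a, feature_maps, axes=1)  # sum_k a_k * A^k, shape (H, W)
    return np.maximum(cam, 0.0)                  # ReLU keeps positive evidence only

feature_maps = np.ones((2, 4, 4))
grads = np.stack([np.full((4, 4), 1.0), np.full((4, 4), -3.0)])
cam = grad_cam(feature_maps, grads)
# Negative net evidence is clipped to zero by the ReLU:
print(cam.shape, float(cam.max()))  # (4, 4) 0.0
```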
An alternative way to visualise the decision processes of black-box predictors is through occluding parts of an input image and observing the changes in the predictor's performance. If the prediction changes significantly after occluding a patch, pixels in that patch are assigned higher importance. Zeiler et al. used grey patches for occlusion and produced heatmaps to show evidence for and against a classification decision \cite{zeiler2014visualizing}.
Zintgraf et al. argued in their work that Zeiler's approach is inaccurate because grey patches would feed new information into the model \cite{zintgraf2017visualizing}. Instead, they applied \emph{Prediction Difference Analysis} (PDA) and measured a pixel's importance by the change in a classifier's output after marginalizing out that pixel. The marginalized classifier output is derived from Bayes’ rule:
\begin{align}
P\left( {c|{x_{\backslash i}}} \right) = \sum\limits_{{x_i}} {P\left( {{x_i}|{x_{\backslash i}}} \right)P\left( {c|{x_i},{x_{\backslash i}}} \right)},
\label{eq:PDA}
\end{align}
where $c$ is predicted class of the input image, $x_i$ is the $i$-th pixel and $x_{\backslash i}$ denotes all pixels of the image except $x_i$.
A few changes have been made to adapt this technique to deep neural networks. First, instead of a single pixel, a small patch of pixels of size $k \times k$ is marginalized out each time to produce larger fluctuations in model prediction. Second, for images of large size $n \times n$, sampling a window of pixels with a multivariate Gaussian distribution conditioned on all the remaining pixels is clearly infeasible. So, a patch surrounding the window with size $l \times l$ is chosen so that sampling is conditioned only on that outer patch. The algorithm is demonstrated in pseudo-code in Algorithm~\ref{pda_code}.
\begin{algorithm}
\caption{Prediction Difference Analysis \cite{zintgraf2017visualizing}}
\label{pda_code}
\begin{algorithmic}[1]
\State $WE =$ zeros($n*n$), $counts =$ zeros($n*n$)
\For {every patch $x_w$ of size $k \times k$ in $x$}
\State $x'$ = copy($x$)
\State $sum_w = 0$
\State define patch ${\hat x_w}$ of size $l \times l$ that contains $x_w$
\For{$s=1$ \textbf{to} $S$}
\State $x'_w \leftarrow$ patch sampled from $P\left( {x_w} \mid {\hat x_w}\backslash {x_w} \right)$, substituted into $x'$
\State $sum_w$ += $P\left( c \mid x' \right)$
\EndFor
\State $P\left( {\left. c \right|x\backslash {x_w}} \right) = sum_w / S$
\State $WE[$coordinates of $x_w]$ += ${\log _2}\left( {odds\left( {c|x} \right)} \right) - {\log _2}\left( {odds\left( {c|x\backslash {x_w}} \right)} \right)$
\State $counts[$coordinates of $x_w]$ += $1$
\EndFor
\State \textbf{return} $WE / counts$
\end{algorithmic}
\end{algorithm}
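A toy version of Algorithm~\ref{pda_code} on a 1-D ``image'' with $1\times1$ windows might look as follows; the classifier and the sampling distribution are deliberately trivial stand-ins (a logistic function of the pixel sum, and uniform sampling instead of a conditional Gaussian):

```python
import numpy as np

# Toy sketch of the PDA loop on a 1-D "image" with single-pixel windows.
# p(c|x) is a logistic function of the pixel sum, and marginalization
# samples replacement values uniformly -- both are illustrative stand-ins.

rng = np.random.default_rng(1)

def p_class(x):
    return 1.0 / (1.0 + np.exp(-x.sum()))

def log2_odds(p):
    return np.log2(p / (1.0 - p))

def prediction_difference(x, num_samples=200):
    we = np.zeros_like(x)
    for i in range(len(x)):
        total = 0.0
        for _ in range(num_samples):
            x_prime = x.copy()
            x_prime[i] = rng.uniform(-1.0, 1.0)  # sampled replacement value
            total += p_class(x_prime)
        p_marginal = total / num_samples         # approximates P(c | x \ x_i)
        we[i] = log2_odds(p_class(x)) - log2_odds(p_marginal)
    return we

x = np.array([2.0, 0.0, -0.5])
we = prediction_difference(x)
# The large positive pixel carries the most evidence for the prediction.
print(int(np.argmax(we)))  # 0
```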
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\columnwidth]{images/pda.png}
\caption{an image and its corresponding prediction difference analysis \cite{zintgraf2017visualizing}.}
\label{pda}
\end{figure}
\noindent
Figure \ref{pda} shows an example of PDA. In the visualisation, red regions show evidence supporting a model's prediction, while blue regions show evidence against a model's prediction.
This technique has several limitations. First, it is very slow because of the tens of thousands of conditional Gaussian samplings and deep network forward passes it requires. It could take more than an hour to visualise a single image for some large models. Moreover, the authors approximated Eq.~\ref{eq:PDA} by taking only ten samples for computational reasons, which is a source of error. Lastly, a Gaussian distribution conditioned on a local patch does not take global context into account, making the conditional probability approximation less accurate, because local patches have been shown to have different semantics under different contexts \cite{rabinovich2007objects}. Although easy to compute, the Gaussian distribution itself is not a good model for natural images \cite{ruderman1994statistics}.
In this paper we propose an alternative formulation of PDA that makes this method \textbf{(1)} efficient enough to be applicable in practice, \textbf{(2)} more accurate, thus providing better interpretability, and \textbf{(3)} linked to other gradient-based visualisation techniques. Our approach is up to $10\times$ faster than the original formulation and provides a comprehensive mathematical framework for such approaches.
\section{Method}
In the following we will first informally analyse the complexity of PDA. This helps us understand intuitively where the major performance bottlenecks are. Then, an alternative formulation of the algorithm will be designed to avoid these bottlenecks while producing visualisations of similar quality.
We shall also discuss the circumstances under which this formulation gives accurate results, and in turn show that they hold for most modern CNN architectures. In Section~\ref{sect:eval}, we will run benchmark experiments on various classifiers to quantitatively compare the runtime of the original version of PDA and the efficient version proposed by us.
\textbf{Complexity of PDA:} Algorithm~\ref{pda_code} mainly consists of an outer loop and an inner loop. The outer loop iterates over every patch in the input image, and the inner loop takes $S$ samples for each given patch and performs an equal number of forward passes of the classifier to get the average prediction.
Suppose the input image is of size $n \times n$, a patch is of size $k \times k$, and the outer patch it is conditioned on is of size $l \times l$. Then there are $\left( {n - k + 1} \right) \times \left( {n - k + 1} \right)$ patches in the image in total, and hence $S{\left( {n - k + 1} \right)^2}$ samples to be taken. If we let the classifier operate on small batches of size $m$, it runs $\frac{{S{{\left( {n - k + 1} \right)}^2}}}{m}$ forward passes.
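These counts are straightforward to compute; the following sketch reproduces the arithmetic above for the settings used later in this section (the helper name \texttt{pda\_workload} is ours, not part of any library):

```python
# Number of samples and forward-pass batches needed by the original PDA,
# for an n x n image, k x k patches, S samples per patch and batch size m.
def pda_workload(n, k, S, m):
    patches = (n - k + 1) ** 2   # one patch per valid top-left corner
    samples = S * patches        # S classifier evaluations per patch
    batches = -(-samples // m)   # ceiling division: number of mini-batches
    return patches, samples, batches

patches, samples, batches = pda_workload(n=224, k=10, S=10, m=160)
print(patches, samples, batches)  # → 46225 462250 2890
```

Note that this counts only the $(n-k+1)^2$ valid patch positions; counting one patch per pixel, as done in the timing estimates below, gives the slightly larger figure $n^2 S$.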
Hence, the performance of PDA depends on sampling time and forward pass time.
\textbf{Sampling Time:} One common way to draw a sample $y$ from a multivariate Gaussian distribution is through Cholesky decomposition, which takes the form $\Sigma = L{L^T}$. If $\Sigma$ is positive definite, this decomposition is unique. Suppose the multivariate Gaussian distribution, which we want to sample from, has mean $\mu$ and covariance matrix $\Sigma$. We could first decompose $\Sigma$ to get $L$. Then, we draw a sample $x$ from a standard multivariate Gaussian distribution with mean $0$ and identity covariance matrix $I$, and transform it by $y = Lx + \mu$.
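The sampling procedure just described can be sketched in a few lines of NumPy; the mean, covariance and sample count below are illustrative only:

```python
import numpy as np

def sample_mvn(mu, Sigma, size, rng):
    """Draw `size` samples from N(mu, Sigma) via Cholesky decomposition."""
    L = np.linalg.cholesky(Sigma)             # Sigma = L L^T (positive definite)
    z = rng.standard_normal((size, len(mu)))  # z ~ N(0, I)
    return z @ L.T + mu                       # y = L z + mu  ~  N(mu, Sigma)

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
y = sample_mvn(mu, Sigma, size=100_000, rng=rng)
print(y.mean(axis=0))  # close to mu
```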
Finding the exact time complexity of multivariate Gaussian sampling requires first analysing the complexity of Cholesky decomposition and of standard Gaussian sampling. However, since the practical performance of these basic matrix operations depends strongly on the low-level BLAS implementation and the CPU instruction set, we will measure the sampling time empirically instead of focusing on theory.
Let the image size be $224 \times 224$, the patch size $10 \times 10$, and the number of samples per patch $10$, which are the settings adopted by Zintgraf et al. \cite{zintgraf2017visualizing} in their implementation of PDA. Hence there are $224 \times 224 \times 10 = 501760$ samples of $100$ dimensions to be drawn. On an Intel Core i5 CPU with NumPy\footnote{the fundamental scientific computing library for Python: \url{www.numpy.org}}, we measured that the sampling process takes roughly $5$ minutes on average, which is a considerable cost.
\textbf{Forward Pass Time:} Since the inference speed differs significantly for different CNN architectures, we will again run experiments to empirically measure the time spent on forward pass. We will use a batch size of $160$ on a Tesla K80 GPU. Caffe\footnote{a deep learning framework specialized in vision applications: \url{caffe.berkeleyvision.org}} \cite{jia2014caffe} with CuDNN\footnote{a library of primitives for deep learning with CUDA support: \url{developer.nvidia.com/cudnn}} \cite{chetlur2014cudnn} support is used in the following experiments. All models used here are pretrained and available from the Caffe Model Zoo.
Among the various CNN architectures, we choose to measure the inference speed of AlexNet, VGG-16 and GoogLeNet, for two reasons. First, these architectures were winners of past ILSVRC classification tasks, and they remain influential since many other models are fine-tuned from them. Second, it allows direct comparison with PDA, since these architectures were also used by Zintgraf et al. in their experiments.
\begin{table}[htb]
\begin{center}
\begin{tabular}{| c | c | c |}
\hline
Architecture & Input Size & Forward Pass Time (ms) \\ \hline
AlexNet & $227 \times 227$ & 342 \\ \hline
VGG-16 & $224 \times 224$ & 3301 \\ \hline
GoogLeNet & $224 \times 224$ & 634 \\ \hline
\end{tabular}
\caption{Inference time for three popular CNN architectures.}
\label{benchmark}
\end{center}
\end{table}
\noindent
Table~\ref{benchmark} summarizes the single-batch forward pass times for AlexNet, VGG-16 and GoogLeNet. VGG-16 is especially computationally expensive in inference: it takes more than $3$ seconds to compute predictions for a batch of $160$ input images. Since $501760$ samples fit into exactly $3136$ batches, it would take a VGG-16 model approximately $172$ minutes to do all the inference work needed to visualise PDA for a single image, which is too long to be useful in real scenarios.
Note that there are other operations which we have not discussed, such as fitting conditional Gaussian distributions for each patch. However, these distribution parameters can be computed in advance and loading them would only incur minimal computational overhead.
\subsection{Alternative Formulation of PDA}
From the empirical results above, it is obvious that we cannot achieve a significant speedup without reducing the number of samples to be taken and the number of forward passes to be run.
We observe that in PDA the class probability after marginalizing out a small window of pixels $P\left( {\left. c \right|x\backslash {x_w}} \right)$ is approximated by
\begin{align}
\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)P\left( {c|{x_w},x\backslash {x_w}} \right)},
\end{align}
which is the arithmetic mean of ${P\left( {c|{x_w},x\backslash {x_w}} \right)}$. If we substitute this with the geometric mean, we will get $\prod\limits_{{x_w}} {P{{\left( {c|{x_w},x\backslash {x_w}} \right)}^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}} \over x} }_w}\backslash {x_w}} \right)}}} $. Since the geometric mean of a probability distribution is not necessarily itself a probability distribution, we need to first normalize it:
\begin{align}
\begin{split}
P\left( {\left. c \right|x\backslash {x_w}} \right) \approx \frac{{\prod\limits_{{x_w}} {P{{\left( {c|{x_w},x\backslash {x_w}} \right)}^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)}}} }}{{\sum\limits_c {\prod\limits_{{x_w}} {P{{\left( {c|{x_w},x\backslash {x_w}} \right)}^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)}}} } }}.
\end{split}
\end{align}
For a CNN, the last layer is often a softmax layer:
\begin{align}
\begin{split}
P\left( {\left. c \right|x} \right) &= softmax {\left( {{z^l}} \right)_c}\\
&= \frac{{\exp \left( {z_c^l} \right)}}{{\sum\limits_j {\exp \left( {z_j^l} \right)} }},
\end{split}
\label{eq:oldpda}
\end{align}
where $z^l$ is the last layer before softmax and $z_j^l$ is its $j$-th neuron. So,
\begin{align}
\begin{split}
&P\left( {\left. c \right|x\backslash {x_w}} \right) \approx \\
&\approx \frac{{\prod\limits_{{x_w}} {softmax \left( {{z^l}} \right)_c^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)}} }}{{\sum\limits_j {\prod\limits_{{x_w}} {softmax \left( {{z^l}} \right)_j^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)}} } }}\\
&= \frac{1}{{\sum\limits_j {\prod\limits_{{x_w}} {{{\left( {\frac{{softmax {{\left( {{z^l}} \right)}_j}}}{{softmax {{\left( {{z^l}} \right)}_c}}}} \right)}^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}} \over x} }_w}\backslash {x_w}} \right)}}} } }}\\
&= \frac{1}{{\sum\limits_j {\prod\limits_{{x_w}} {{{\left( {\frac{{\exp \left( {z_j^l} \right)}}{{\exp \left( {z_c^l} \right)}}} \right)}^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}} \over x} }_w}\backslash {x_w}} \right)}}} } }}\\
&= \frac{1}{{\sum\limits_j {\prod\limits_{{x_w}} {\exp {{\left( {z_j^l - z_c^l} \right)}^{P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)}}} } }}\\
&= \frac{1}{{\sum\limits_j {\exp \left( {\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)\left( {z_j^l - z_c^l} \right)} } \right)} }}\\
&= \frac{1}{{\sum\limits_j {\frac{{\exp \left( {\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)z_j^l} } \right)}}{{\exp \left( {\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)z_c^l} } \right)}}} }}\\
\end{split}
\label{eq:extpda1}
\end{align}
\begin{align}
\begin{split}
&= \frac{{\exp \left( {\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)z_c^l} } \right)}}{{\sum\limits_j {\exp \left( {\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)z_j^l} } \right)} }}\\
&= softmax {\left( {\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right){z^l}} } \right)_c}
\end{split}
\label{eq:extpda}
\end{align}
Comparing Eq.~\ref{eq:oldpda} to Eq.~\ref{eq:extpda}, we can immediately observe that $P\left( {\left. c \right|x\backslash {x_w}} \right)$ is approximately the softmax of the conditional expectation of $z^l$, ${\rm E}\left[ {{z^l}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}} \over x} }_w}\backslash {x_w}} \right]$.
What we have just shown is that the conditional probability ${P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}} \over x} }_w}\backslash {x_w}} \right)}$ can be pushed into a convolutional network's decision function if we approximate the arithmetic mean by the corresponding normalized geometric mean. This observation leads to
\begin{align}
\begin{split}
&P\left( {\left. c \right|x\backslash {x_w}} \right)\approx\\
&\approx \sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right)softmax {{\left( {{z^l}\left( {{x_w},x\backslash {x_w}} \right)} \right)}_c}} \approx\\
&\approx softmax {\left( {\sum\limits_{{x_w}} {P\left( {{x_w}|{{\mathord{\buildrel{\lower3pt\hbox{$\scriptscriptstyle\frown$}}
\over x} }_w}\backslash {x_w}} \right){z^l}\left( {{x_w},x\backslash {x_w}} \right)} } \right)_c}.
\end{split}
\end{align}
The arithmetic mean is now taken at layer $l$, the last layer before the softmax. This reduces the number of softmax evaluations per patch from $S$ to $1$, which is not yet satisfactory: we still have to run $S$ forward passes for each patch, except that each forward pass now stops at layer $l$.
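The quality of this swap is easy to probe numerically: when the per-sample spread of the logits is small, the softmax of the mean logit is close to the mean of the softmaxes. A small self-contained check (the base logits and noise level below are made up for illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
base = np.array([2.0, 0.5, -1.0])                     # hypothetical logits z^l
logits = base + 0.1 * rng.standard_normal((1000, 3))  # small per-sample spread

arith_mean = softmax(logits).mean(axis=0)  # E[softmax(z^l)]
pushed_in = softmax(logits.mean(axis=0))   # softmax(E[z^l])
print(np.abs(arith_mean - pushed_in).max())  # small when the spread is small
```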
As explained previously, a CNN is composed of many layers of different types, and each layer can be thought of as a simple function in a composition of functions. So, to gain further speed-up, we need to investigate whether the arithmetic mean can be taken at lower layers. Specifically, we are looking for component functions $f$ of a convolutional network that satisfy either
\begin{align}
\rm E\left[ {f\left( x \right)} \right] = f\left( {{\rm E}\left[ x \right]} \right)
\label{eg:E}
\end{align}
or
\begin{align}
GM\left( {f\left( x \right)} \right) = f\left( {{\rm E}\left[ x \right]} \right),
\label{eg:GM}
\end{align}
where $GM$ denotes (normalized) geometric mean.
If $f$ satisfies Eq.~\ref{eg:E}, the arithmetic mean can be propagated to a lower layer exactly. If $f$ satisfies Eq.~\ref{eg:GM}, we can use the geometric mean for $f$ as an approximation, in the same way as for the softmax. In the following, we will investigate the linear, piece-wise linear and non-linear components of a CNN respectively, and show that they possess the required properties.
\textbf{Linear Components:} Linear transformations constitute an important part of many classifiers because they allow more efficient optimisation and inference. The most common linear transformations in a CNN include convolutional layers, fully connected layers and batch normalization layers.
By linearity of expectation, Eq.~\ref{eg:E} trivially holds for all linear transformations, which means the expectation can be propagated down through these layers without losing accuracy.
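As a sanity check, for an affine map $x \mapsto Wx + b$ the sample mean commutes with the layer exactly, not just in expectation; a minimal sketch with random weights standing in for a fully connected layer:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 8))   # stand-in for a fully connected layer
b = rng.standard_normal(4)
x = rng.standard_normal((10_000, 8))

lhs = (x @ W.T + b).mean(axis=0)  # mean of W x + b over the samples
rhs = W @ x.mean(axis=0) + b      # W (mean of x) + b
print(np.abs(lhs - rhs).max())    # zero up to floating-point error
```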
\textbf{Piece-wise Linear Components:} Piece-wise linear functions are often used in CNNs as activation functions to solve the vanishing gradient problem. Examples of this kind include rectified linear units and maxout units \cite{goodfellow2013maxout}. Also, the max pooling layer following a convolutional layer is piece-wise linear.
We first consider the case of the ReLU function. Assume the input $x$ is Gaussian distributed with mean $\mu$ and variance ${\sigma ^2}$. This assumption may not hold in reality, but it gives us some intuition about ReLU's properties. Baldi et al. \cite{baldi2014dropout} prove that if $\mu = 0$,
\begin{align}
\left| {{\rm E}\left[ {{\mathop{\rm ReLU}\nolimits} \left( x \right)} \right] - {\mathop{\rm ReLU}\nolimits} \left( {{\rm E}\left[ x \right]} \right)} \right| = \frac{\sigma }{{\sqrt {2\pi } }}.
\label{eq:relu1}
\end{align}
Moreover, if $\frac{{\left| \mu \right|}}{\sigma }$ is large,
\begin{align}
\left| {{\rm E}\left[ {{\mathop{\rm ReLU}\nolimits} \left( x \right)} \right] - {\mathop{\rm ReLU}\nolimits} \left( {{\rm E}\left[ x \right]} \right)} \right| \approx 0.
\label{eq:relu2}
\end{align}
Eq.~\ref{eq:relu1} and Eq.~\ref{eq:relu2} together indicate that the approximation error is small when the variance of the input is small. We now show that the same conclusion applies to more general cases.
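Eq.~\ref{eq:relu1} can be verified by Monte Carlo simulation; the sketch below estimates ${\rm E}\left[ {\mathop{\rm ReLU}\nolimits} \left( x \right) \right]$ for zero-mean Gaussian input (so that ${\mathop{\rm ReLU}\nolimits}\left( {\rm E}\left[ x \right] \right) = 0$) and compares it against $\sigma / \sqrt{2\pi}$; the value of $\sigma$ is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
x = rng.normal(0.0, sigma, size=1_000_000)  # x ~ N(0, sigma^2)

mc_gap = np.maximum(x, 0.0).mean()      # Monte Carlo E[ReLU(x)] - ReLU(E[x]) = E[ReLU(x)]
predicted = sigma / np.sqrt(2 * np.pi)  # the closed-form gap sigma / sqrt(2 pi)
print(mc_gap, predicted)                # the two agree to Monte Carlo accuracy
```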
Specifically, \emph{maxout units} are capable of representing a broad class of piece-wise linear functions: a single maxout unit can approximate arbitrary convex functions. Figure~\ref{maxout} demonstrates how a maxout unit learns to behave like the ReLU, the absolute value function and a quadratic function. Moreover, when applied to a convolutional layer, computing the maxout activation is equivalent to performing max pooling across channels as well as spatial locations, so it covers the case of pooling layers as well.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\columnwidth]{images/maxout.png}
\caption{A maxout unit learns the ReLU and the absolute value function, and approximates a quadratic function \cite{goodfellow2013maxout}.}
\label{maxout}
\end{figure}
\noindent
Let the input $x$ be a $d$-dimensional vector. A maxout unit $h(x)$ is defined by
\begin{align}
h\left( x \right) = \max \left( {{w_1}^Tx + {b_1},...,{w_k}^Tx + {b_k}} \right)
\end{align}
where $w_i$ and $b_i$ are parameters.
Assume the elements of $x$ are independent and Gaussian distributed. Since a linear combination of independent Gaussian random variables is itself Gaussian, we can write
\begin{align}
h\left( x \right) = \max \left( {{{\tilde x}_1},...,{{\tilde x}_k}} \right),
\end{align}
where ${{\tilde x}_i} = {w_i}^Tx + {b_i}$ and ${{\tilde x}_i} \sim N\left( {{{\tilde x}_i}|{\mu _i},\sigma _i^2} \right)$.
Since the maximum of a set of convex functions is convex, $h(x)$ is a convex function. Then by Jensen's inequality,
\begin{align}
{\rm E}\left[ {h\left( x \right)} \right] &= {\rm E}\left[ {\max \left( {{{\tilde x}_1},...,{{\tilde x}_k}} \right)} \right]\\
& \ge \max \left( {{\rm E}\left[ {{{\tilde x}_1}} \right],...,{\rm E}\left[ {{{\tilde x}_k}} \right]} \right)\\
& = \max \left( {{\mu _1},...,{\mu _k}} \right).
\end{align}
On the other hand, again by Jensen's inequality
\begin{align}
\begin{split}
\exp \left( {t{\rm E}\left[ {h\left( x \right)} \right]} \right) &\le {\rm E}\left[ {\exp \left( {th\left( x \right)} \right)} \right]\\
& = {\rm E}\left[ {\exp \left( {t\max \left( {{{\tilde x}_1},...,{{\tilde x}_k}} \right)} \right)} \right]\\
& = {\rm E}\left[ {\max \left( {\exp \left( {t{{\tilde x}_1}} \right),...,\exp \left( {t{{\tilde x}_k}} \right)} \right)} \right]\\
& \le {\rm E}\left[ {\sum\limits_i {\exp \left( {t{{\tilde x}_i}} \right)} } \right]\\
& = \sum\limits_i {{\rm E}\left[ {\exp \left( {t{{\tilde x}_i}} \right)} \right]}
\end{split}
\end{align}
Since $\tilde x_i$ is Gaussian distributed, by definition of the moment generating function of Gaussian distributions,
\begin{align}
{\rm E}\left[ {\exp \left( {t{{\tilde x}_i}} \right)} \right] = \exp \left( {{\mu _i}t + \frac{1}{2}{t^2}\sigma _i^2} \right).
\end{align}
Hence,
\begin{align}
\begin{split}
\exp \left( {t{\rm E}\left[ {h\left( x \right)} \right]} \right) &\le \sum\limits_i {{\rm E}\left[ {\exp \left( {t{{\tilde x}_i}} \right)} \right]} \\
& = \sum\limits_i {\exp \left( {{\mu _i}t + \frac{1}{2}{t^2}\sigma _i^2} \right)}
\end{split}
\end{align}
Taking the logarithm on both sides and dividing by $t$ (for $t > 0$), we get
\begin{align}
{\rm E}\left[ {h\left( x \right)} \right] \le
\frac{1}{t}\log \sum\limits_i {\exp \left( {{\mu _i}t + \frac{1}{2}{t^2}\sigma _i^2} \right)}
\end{align}
We still need an upper bound for the log-sum-exp function, which can be derived as follows:
\begin{align}
\begin{split}
\exp \left( {\log \sum\limits_i {\exp \left( {{x_i}} \right)} } \right) &= \sum\limits_i {\exp \left( {{x_i}} \right)} \\
& \le k\max \left( {\exp \left( {{x_1}} \right),...,\exp \left( {{x_k}} \right)} \right)\\
& = k\exp \left( {\max \left( {{x_1},...,{x_k}} \right)} \right)\\
\log \sum\limits_i {\exp \left( {{x_i}} \right)} &\le \max \left( {{x_1},...,{x_k}} \right) + \log k
\end{split}
\end{align}
For simplicity we set $t = 1$ (the bound holds for every $t > 0$, so $t$ could be optimised further). We then have the following bounds for ${\rm E}\left[ {h\left( x \right)} \right]$:
\begin{align}
\begin{split}
&\max \left( {{\mu _1},...,{\mu _k}} \right) \le \\
&{\rm E}\left[ {h\left( x \right)} \right] \le \log k + \max \left( {{\mu _1} + \frac{1}{2}\sigma _1^2,...,{\mu _k} + \frac{1}{2}\sigma _k^2} \right).
\end{split}
\end{align}
So the approximation error is
\begin{align}
\begin{split}
&\left| {{\rm E}\left[ {h\left( x \right)} \right] - h\left( {{\rm E}\left[ x \right]} \right)} \right| \le \\
&\le \log k + \max \left( {{\mu _1} + \frac{1}{2}\sigma _1^2,...,{\mu _k} + \frac{1}{2}\sigma _k^2} \right) - \max \left( {{\mu _1},...,{\mu _k}} \right) \le \\
&\le \log k + \frac{1}{2}\max \left( {\sigma _1^2,...,\sigma _k^2} \right)
\end{split}
\label{eq:aerror}
\end{align}
Eq.~\ref{eq:aerror} shows that the approximation quality for maxout units depends on the maximum of input variance.
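The bracket on ${\rm E}\left[ {h\left( x \right)} \right]$ can also be checked by simulation; below, hypothetical means and standard deviations for the $k$ Gaussian pre-activations are chosen arbitrarily, and a Monte Carlo estimate of ${\rm E}\left[ {\max ({\tilde x}_1,...,{\tilde x}_k)} \right]$ is compared against both bounds (with $t = 1$):

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([0.5, -1.0, 1.2])  # hypothetical means of w_i^T x + b_i
sig = np.array([0.3, 0.8, 0.5])  # and their standard deviations
k = len(mu)

samples = rng.normal(mu, sig, size=(1_000_000, k))
e_max = samples.max(axis=1).mean()  # Monte Carlo E[max_i x~_i]

lower = mu.max()                               # max_i mu_i
upper = np.log(k) + (mu + 0.5 * sig**2).max()  # log k + max_i (mu_i + sigma_i^2 / 2)
print(lower, e_max, upper)                     # lower <= e_max <= upper
```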
We have shown that the error incurred by pushing the expectation inside a piece-wise linear function is small, provided the variance of the input is small. To find out whether this holds for a convolutional network, we design the following experiment.
We first choose an arbitrary image from the ILSVRC dataset. In order to simulate PDA, we fill the patch of size $10 \times 10$ at the image's top-left corner with samples drawn from a conditional Gaussian distribution. A total of $160$ samples are used to make a mini-batch. After feeding the batch to a CNN, we collect the outputs of two fully connected layers, which are also the inputs to the following ReLU layers. The distributions of their means and standard deviations are plotted in Figures \ref{alexnet_act} and \ref{vgg16_act}.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/alexnet.png}
\caption{Distributions of mean and standard deviation for fully-connected layers in AlexNet.}
\label{alexnet_act}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/vgg16.png}
\caption{Distributions of mean and standard deviation for fully-connected layers in VGG16.}
\label{vgg16_act}
\end{figure}
\noindent
In Figures \ref{alexnet_act} and \ref{vgg16_act} we can observe that most neurons have small standard deviations, with means concentrated around $0$, and that many neurons have a large ratio of $\left| \mu \right|$ to $\sigma$. These are exactly the conditions under which Eq.~\ref{eq:relu1} and Eq.~\ref{eq:relu2} guarantee a small approximation error.
\textbf{Nonlinear Components: }
The most commonly used non-linear function in neural networks is the sigmoid. It was the default activation function before ReLU was introduced, and it is still used to output probabilities in binary classification tasks. To show that Eq.~\ref{eg:GM} holds for sigmoid functions, it suffices to observe that the sigmoid is a special case of softmax:
\begin{align}
\sigma \left( x \right) &= \frac{1}{{1 + \exp \left( { - x} \right)}}
= \frac{{\exp \left( 0 \right)}}{{\exp \left( 0 \right) + \exp \left( { - x} \right)}}
\end{align}
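This identity is easy to confirm numerically; the check below verifies that $\sigma \left( x \right)$ matches the first component of a two-class softmax over the logits $\left[ 0, -x \right]$:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# sigma(x) = exp(0) / (exp(0) + exp(-x)) is the first component of the
# softmax over the two logits [0, -x]:
for x in [-3.0, -0.5, 0.0, 1.7, 4.2]:
    assert np.isclose(sigmoid(x), softmax(np.array([0.0, -x]))[0])
print("ok")
```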
The observations in this section lead to a new, efficient algorithm for prediction difference analysis. Its pseudo-code is shown in Algorithm~\ref{efficient_pda}.
\begin{algorithm}[htb]
\caption{Efficient Prediction Difference Analysis}
\label{efficient_pda}
\begin{algorithmic}[1]
\State $WE =$ zeros($n*n$), $counts =$ zeros($n*n$)
\For {every patch $x_w$ of size $k \times k$ in $x$}
\State $x'$ = copy($x$)
\State define patch ${\hat x_w}$ of size $l \times l$ that contains $x_w$
\State set the patch ${x_w}$ in $x'$ to the conditional mean of ${x_w}$ given ${\hat x_w}\backslash {x_w}$
\State $P\left( {\left. c \right|x\backslash {x_w}} \right) = P\left( {c|x'} \right)$
\State $WE[$coordinates of $x_w]$ += ${\log _2}\left( {odds\left( {c|x} \right)} \right) - {\log _2}\left( {odds\left( {c|x\backslash {x_w}} \right)} \right)$
\State $counts[$coordinates of $x_w]$ += $1$
\EndFor
\State \textbf{return} $WE / counts$
\end{algorithmic}
\end{algorithm}
\noindent
Algorithm~\ref{efficient_pda} has $S$ times fewer forward passes to run than Algorithm~\ref{pda_code}. Thus it is expected to be at least $S$ times faster.
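A minimal NumPy sketch of this loop is given below; \texttt{conditional\_mean} and \texttt{log\_odds} are placeholders for the fitted conditional Gaussian means and the classifier's log-odds output, which the reader must supply:

```python
import numpy as np

def efficient_pda(x, k, conditional_mean, log_odds):
    """Weight-of-evidence map with one forward pass per patch (Algorithm 2 sketch).

    x                -- (n, n) input image
    k                -- patch side length
    conditional_mean -- placeholder callable (x, r, c) -> (k, k) array: mean of
                        the k x k patch at (r, c) given its surrounding pixels
    log_odds         -- placeholder callable: image -> log2 odds of the class
    """
    n = x.shape[0]
    WE = np.zeros((n, n))
    counts = np.zeros((n, n))
    base = log_odds(x)                    # log2 odds(c | x)
    for r in range(n - k + 1):
        for c in range(n - k + 1):
            x2 = x.copy()
            x2[r:r+k, c:c+k] = conditional_mean(x, r, c)  # impute the patch
            WE[r:r+k, c:c+k] += base - log_odds(x2)       # evidence of the patch
            counts[r:r+k, c:c+k] += 1
    return WE / counts
```

With a toy "classifier" that sums the pixels and a conditional mean of zero, every pixel of a constant image receives the same evidence, which makes the routine easy to unit-test.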
\section{Evaluation \& Results}
\label{sect:eval}
\textbf{Quantitative Experiments:}
We have shown that, by modelling with the normalized geometric mean, we can propagate the expectation from the output space down to the input space in a layerwise manner. However, it remains a question whether the normalized geometric mean is a good approximation of the arithmetic mean. In this section, we conduct both theoretical analysis and quantitative experiments to evaluate the relationship between the two means and how it affects approximation quality.
The inequality of arithmetic and geometric means implies that the geometric mean of non-negative numbers is less than or equal to the corresponding arithmetic mean; that is, the geometric mean consistently underestimates the arithmetic mean. However, this property does not hold for the normalized geometric mean. For simplicity we consider the binary classification case.
We have shown earlier that the normalized geometric mean for sigmoid function is
\begin{align}
\begin{split}
NGM\left( {\sigma \left( x \right)} \right) &= \frac{{\prod\limits_{{x_i}} {\sigma {{\left( {{x_i}} \right)}^{P\left( {{x_i}} \right)}}} }}{{\prod\limits_{{x_i}} {\sigma {{\left( {{x_i}} \right)}^{P\left( {{x_i}} \right)}}} + \prod\limits_{{x_i}} {{{\left( {1 - \sigma \left( {{x_i}} \right)} \right)}^{P\left( {{x_i}} \right)}}} }} \\
&= \sigma \left( {\sum\limits_{{x_i}} {P\left( {{x_i}} \right){x_i}} } \right)
\end{split}
\end{align}
and the arithmetic mean is
\begin{align}
AM\left( {\sigma \left( x \right)} \right) = \sum\limits_{{x_i}} {P\left( {{x_i}} \right)} \sigma \left( {{x_i}} \right)
\end{align}
Since the sigmoid function is S-shaped, it is convex on $( - \infty ,0]$ and concave on $[0, + \infty)$. By Jensen's inequality, the normalized geometric mean of the sigmoid is less than or equal to the arithmetic mean for $x_i \le 0$, and greater than or equal to the arithmetic mean for $x_i \ge 0$.
Moreover, with first order Taylor series expansion,
\begin{align}
\begin{split}
\log GM\left( {\sigma \left( x \right)} \right) &= \log \prod\limits_{{x_i}} {\sigma {{\left( {{x_i}} \right)}^{P\left( {{x_i}} \right)}}} \\
& = \sum\limits_{{x_i}} {P\left( {{x_i}} \right)} \log \sigma \left( {{x_i}} \right)\\
&\approx \sum\limits_{{x_i}} {P\left( {{x_i}} \right)} \left( {\sigma \left( {{x_i}} \right) - 1} \right)\\
& = AM\left( {\sigma \left( x \right)} \right) - 1\\
GM\left( {\sigma \left( x \right)} \right) &\approx \exp \left( {AM\left( {\sigma \left( x \right)} \right) - 1} \right)\\
& \approx AM\left( {\sigma \left( x \right)} \right)
\end{split}
\end{align}
So the geometric mean is a first-order approximation of the arithmetic mean for sigmoid functions. Based on this, we can show that the normalized geometric mean is also a first-order approximation of the arithmetic mean:
\begin{align}
\begin{split}
NGM\left( {\sigma \left( x \right)} \right) &= \frac{{\prod\limits_{{x_i}} {\sigma {{\left( {{x_i}} \right)}^{P\left( {{x_i}} \right)}}} }}{{\prod\limits_{{x_i}} {\sigma {{\left( {{x_i}} \right)}^{P\left( {{x_i}} \right)}}} + \prod\limits_{{x_i}} {{{\left( {1 - \sigma \left( {{x_i}} \right)} \right)}^{P\left( {{x_i}} \right)}}} }}\\
& \approx \frac{{AM\left( {\sigma \left( x \right)} \right)}}{{AM\left( {\sigma \left( x \right)} \right) + \left( {1 - AM\left( {\sigma \left( x \right)} \right)} \right)}}\\
&= {AM\left( {\sigma \left( x \right)} \right)}
\end{split}
\end{align}
It is easy to see that the same argument generalizes to softmax functions. Since the normalized geometric mean does not always underestimate the arithmetic mean, it is reasonable to use it as an approximation in our case.
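A quick numerical check of this first-order agreement, with arbitrary Gaussian pre-sigmoid activations and uniform weights $P\left( {x_i} \right)$ (both chosen purely for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(5)
x = rng.normal(0.5, 0.4, size=10_000)  # hypothetical pre-sigmoid activations
p = np.full_like(x, 1.0 / len(x))      # uniform weights P(x_i)

am = (p * sigmoid(x)).sum()   # arithmetic mean of sigmoid outputs
ngm = sigmoid((p * x).sum())  # normalized geometric mean (closed form above)
print(am, ngm)                # the two means agree closely
```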
In order to visualise the empirical approximation quality, we randomly select a set of $200$ images and a fixed patch location. The arithmetic mean of output probability is approximated by drawing $500$ sample patches at that location, and the normalized geometric mean of output probability is obtained by using conditional mean for that location. The distribution of approximation error is plotted in Figure~\ref{appro_error}.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\columnwidth]{images/appro_error.png}
\caption{Distribution of the approximation error for 200 images.}
\label{appro_error}
\end{figure}
\noindent
Figure~\ref{appro_error} shows the differences between arithmetic mean and normalized geometric mean for the output probabilities of AlexNet. We can see that for more than half of the input images the approximation error is very close to zero. In addition to that, as the approximation error increases, fewer images fall into the corresponding intervals. This suggests that our alternative formulation gives a good approximation.
\textbf{Computational Speed:}
We run experiments to record the time taken by both algorithms to visualise a single image. Both use a $10 \times 10$ window size and an $18 \times 18$ outer patch size, and the original algorithm takes $10$ samples per window location.
The classifiers used are AlexNet, VGG16 and GoogLeNet. The benchmarks are performed using Caffe with CuDNN on a Tesla K80 GPU, with a batch size of $160$ in the forward pass.
Note that all of the above details influence the overall runtime; however, we are interested in the relative speed-up of our new formulation.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/runtime.pdf}
\caption{Time required to visualise the decision evidence for a single image by PDA \cite{zintgraf2017visualizing} and our approach.}
\label{benchmarking}
\end{figure}
\noindent
Figure \ref{benchmarking} summarizes the benchmarking results for our method vs. \cite{zintgraf2017visualizing}. We can see that our proposed modification results in a $10\times$ speed-up, which is a significant improvement.
Furthermore, a batch size of $160$ samples is the maximum that allows VGG16 to fit in GPU memory. For smaller models such as AlexNet, we can finish a visualisation within minutes by using a much larger batch size.
\textbf{Qualitative Experiments:}
We train a six-layer convolutional model consisting of two convolutional layers, two max-pooling layers and two fully connected layers; the detailed architecture is described in \cite{deep_mnist}. This model achieves an accuracy of 99.2\% on the MNIST test set.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/mnist.png}
\caption{Visualisations for MNIST dataset. The first column shows the input images, which are deliberately chosen to contain digits from zero to nine. The second and fourth columns show the heatmaps generated by \cite{zintgraf2017visualizing} and our approach respectively. The red regions are input pixels supporting the classification decision, while the blue regions are pixels against the decision. The colour intensity is proportional to pixel importance. The third and last columns show the input images overlaid with their heatmaps.}
\label{mnist_result}
\end{figure}
In Figure \ref{mnist_result} both algorithms use a window size of $4 \times 4$, which we verified to produce the smoothest and most interpretable results. Also, marginal sampling is used instead of conditional sampling; that is, we use $P\left( {x_w} \right)$ to approximate $P\left( {{x_w}|x\backslash {x_w}} \right)$. This is justified for images from the MNIST dataset, because their pixels have relatively weak interdependencies.
\textbf{ILSVRC experiment: }
The classifier used for experiments on the ILSVRC 2012 dataset is VGG16, and the parameters of conditional Gaussian distributions are estimated from the validation set of ILSVRC 2012, which contains 50000 images.
VGG16 is composed of 16 weight layers and 4 max-pooling layers. Since the propagation of expectation we used in deriving our method incurs an approximation error for each layer, we can expect the accumulated error for VGG16 to be larger than the 6-layer model in previous section. In this case, we would like to figure out whether our method could still explain the classifier's decision well.
We use a window size of $10 \times 10$ and outer patch size of $18 \times 18$ for both algorithms. For original prediction difference analysis, we still draw 10 samples for each window location. Since repeated experiments are computationally expensive and our purpose is not to find the optimal set of parameters, we simply adopt the settings from \cite{zintgraf2017visualizing}.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/ilsvrc_1.png}
\caption{Visualisations for the ILSVRC dataset; the class labels are ``Siberian husky'', ``bee eater'', ``maillot'', ``washer'' and ``digital watch'', from top to bottom.}
\label{ilsvrc_result1}
\end{figure}
Figure \ref{ilsvrc_result1} shows the visualisation results for five correctly classified images from ILSVRC dataset.
The column layout is the same as in Figure \ref{mnist_result}.
In the first row of Figure \ref{ilsvrc_result1}, the input image is a Siberian husky. We can observe that both algorithms treat the husky's ears and nose as strong positive evidence, while treating its forehead as strong negative evidence. However, the results from \cite{zintgraf2017visualizing} are much noisier; the evidence it captures spreads over the whole image, making the heatmap look chaotic. On the other hand, although our method also marks regions outside the object as evidence, their intensities are too low to be confused with the main region. This helps observers omit unnecessary details and focus on the most important evidence.
The same properties appear in the other examples. For the bee eater, both algorithms consider the yellow feathers beneath the bird's beak as positive evidence and its tail as negative evidence, but the original algorithm also highlights the region outside the bird's head. For the maillot, the original algorithm attributes the highest importance to a large region of the wall behind the woman, which is unlikely to have helped the classifier make its decision.
Furthermore, even inside the objects, the two algorithms can make different judgements. For example, the method of \cite{zintgraf2017visualizing} decides that the husky's chest region votes against its class, but the same region is not considered important by our approach. The two algorithms also make contrary evidence assignments for the feathers on the bee eater's back.
The above differences in the feature heatmaps could come from two sources: the approximation error in our approach, or the sampling noise in the original prediction difference analysis. To differentiate these two kinds of error, we design the following experiment. Intuitively, if we increased the number of samples taken at each window location for the original prediction difference analysis, the result would be more accurate and closer to its true value; however, this would make the algorithm even more computationally expensive. We therefore take the opposite approach to demonstrate the error caused by sampling: we replace the conditional mean in our method with the empirical mean of 10 samples. The results are shown in Figure~\ref{ilsvrc_result2}.
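The effect of this substitution can be illustrated with a small simulation. The sketch below is a toy example (standard normal draws, not the actual image statistics): even averaged over 10 samples, the empirical mean still scatters around the true mean with standard error $\sigma/\sqrt{10}$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n_samples, n_trials = 1.0, 10, 2000

# Each trial mimics one window location: the exact conditional mean
# (here 0) is replaced by the average of n_samples random draws.
empirical_means = rng.normal(0.0, sigma, size=(n_trials, n_samples)).mean(axis=1)

# Spread of the 10-sample estimate around the true value: it follows
# the standard error sigma / sqrt(n_samples), about 0.32 here.
spread = empirical_means.std()
```

This residual scatter is what shows up as noise in the sampled heatmaps.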
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/ilsvrc_2.png}
\caption{The same as Figure \ref{ilsvrc_result1}, except that the visualisations produced by our approach are replaced by their sampled approximations.}
\label{ilsvrc_result2}
\end{figure}
Figure \ref{ilsvrc_result2} shows the feature heatmaps generated by \cite{zintgraf2017visualizing} and by our sampled variant for the same set of input images as Figure \ref{ilsvrc_result1}.
First, we can immediately notice that, for the maillot example, the regions highlighted by the sampled approximation are nearly the same as those highlighted by the original prediction difference analysis. In particular, both algorithms now agree that the wall is positive evidence, while the bed sheet near the hands is negative evidence. The effect is less obvious for the other examples, whose feature heatmaps look almost identical after switching to the empirical mean, but subtle changes can still be discovered on closer inspection.
For example, the region above the bee eater's head is now taken as positive evidence by the sampled variant of our approach, and the husky's chest is now taken as negative evidence. These subtleties are difficult to detect only because their intensities are very low.
Based on these observations, we propose the following hypothesis: between \cite{zintgraf2017visualizing} and our approach, the difference in heatmap shape is mainly caused by the former's sampling behaviour, while the difference in heatmap intensity is mainly caused by the latter's approximation error.
To further verify the first part of this hypothesis, we randomly pick an image $x$ and fix a window location $x_w$. We then measure the absolute difference in the $P\left( {c|x\backslash {x_w}} \right)$ output by the two algorithms as the number of samples taken at $x_w$ varies. The result is plotted in Figure \ref{pred_diff}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\columnwidth]{images/pred_diff.png}
\caption{Absolute difference in $P\left( {c|x\backslash {x_w}} \right)$ as a function of the number of samples drawn at a particular window location $x_w$.}
\label{pred_diff}
\end{figure}
We can see from Figure \ref{pred_diff} that the prediction difference fluctuates strongly when only a few samples are drawn. Since it is infeasible to draw hundreds of samples in practice, \cite{zintgraf2017visualizing} is necessarily noisier than our approach.
\textbf{Invariance Experiment:}
In the previous section, we observed that our approach generates clearer visualisations. However, we cannot affirm that the important regions it captures are truly discriminative, since there is no general quantitative evaluation of visualisation results. Instead, we show that our method can capture invariant features.
The intuition is that if a CNN generalises well outside its training data, it must have learned a set of invariant features for each object class. If we can show that our method captures similar evidence for different images belonging to the same class, then the evidence is likely to have strong discriminative power.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/new_husky.png}
\caption{Visualisation result for another husky image.}
\label{new_husky}
\end{figure}
Figure \ref{new_husky} shows the heatmap generated by our approach for another husky image that is correctly classified by VGG16. The strongest evidence speaking for its class is clearly the husky's eye. Looking back at Figure \ref{ilsvrc_result1}, the positive evidence found for the first husky image was its ears and nose, so the important features found for these two images do not seem to agree with each other.
Before explaining the reason, we run further experiments on feature invariance. This time, we process the first husky image with operations including rotation, flipping and cropping, and observe the changes in the evidence analysis.
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\columnwidth]{images/huskies.png}
\caption{Visualisation results on augmented husky images.}
\label{huskies}
\end{figure}
In Figure \ref{huskies}, the first row is the original husky image. Row 2 and row 5 are generated by rotation. Row 3 is generated by flipping, and row 4 is generated by cropping.
We can observe that all heatmaps except the last one mark the husky's ears and nose as strong positive evidence. This indicates that the presence of the ears and nose can indeed help VGG16 recognise a husky. In the last case, when one ear is occluded, VGG16 marks the eye as positive evidence instead. This behaviour agrees with Figure \ref{new_husky}, in which one ear is occluded as well.
We can even make a guess about VGG16's procedure for recognising huskies: it first searches for the pair of ears and the nose whenever possible; if one ear is missing, the other is no longer considered important and VGG16 searches for the eyes instead.
\textbf{Further Simplification: }
If we ignore the log-odds transformation, what our approach essentially does for each window $x_w$ is measure the difference between $P\left( {c|x} \right)$ and $P\left( {c|x'} \right)$, where $x'$ is obtained by replacing $x_w$ with its conditional mean.
Let $f(x)$ be the decision function that outputs $P\left( {c|x} \right)$. The first-order Taylor expansion of $f$ at $x_0$ is
\begin{align}
f\left( x \right) \approx f\left( {{x_0}} \right) + f'{\left( {{x_0}} \right)^T}\left( {x - {x_0}} \right)
\end{align}
If we let $x_0$ be the original image, we can further approximate our method with the following equation.
\begin{align}
P\left( {c|x} \right) - P\left( {c|x'} \right) \approx f'{\left( x \right)^T}\left( {x - x'} \right)
\end{align}
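As a quick numerical sanity check of this first-order approximation, consider a toy quadratic decision function (an illustrative choice, not the VGG16 output):

```python
import numpy as np

def taylor_pred_diff(grad_f, x, x_prime):
    """First-order estimate of f(x) - f(x') using the gradient at x:
    f(x) - f(x') ~= grad_f(x)^T (x - x')."""
    return grad_f(x) @ (x - x_prime)

# Toy decision function f(x) = x^T x, so grad f(x) = 2x.
f = lambda x: x @ x
grad_f = lambda x: 2 * x

x = np.array([1.0, 2.0])          # "original image"
x_prime = np.array([1.0, 1.9])    # window pixel replaced by its mean

approx = taylor_pred_diff(grad_f, x, x_prime)  # 0.4
exact = f(x) - f(x_prime)                      # 0.39
```

As expected, the linearised estimate is close to the exact prediction difference when the perturbation $x - x'$ is small.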
Hence, we can even further simplify Algorithm~\ref{efficient_pda}. Pseudo-code for the simplified algorithm is presented in Algorithm~\ref{simple_pda}.
\begin{algorithm}[htb]
\caption{Further simplification of our approach}
\label{simple_pda}
\begin{algorithmic}[1]
\State $WE =$ zeros($n*n$), $counts =$ zeros($n*n$)
\State $grad = \frac{\partial }{{\partial x}}P\left( {c|x} \right)$
\For {every patch $x_w$ of size $k \times k$ in $x$}
\State $x'$ = copy($x$)
\State define patch ${\hat x_w}$ of size $l \times l$ that contains $x_w$
\State $x'[$coordinates of $x_w]$ = conditional mean of ${x_w}$ given ${\hat x_w}\backslash {x_w}$
\State $WE[$coordinates of $x_w]$ += $grad^T\left( {x - x'} \right)$
\State $counts[$coordinates of $x_w]$ += $1$
\EndFor
\State \textbf{return} $WE / counts$
\end{algorithmic}
\end{algorithm}
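A minimal Python sketch of this simplified algorithm could look as follows. The \texttt{cond\_mean} helper, which returns the conditional mean of a window given its surrounding patch, is a hypothetical stand-in for the Gaussian conditioning step; the gradient array is assumed to come from a single backward pass.

```python
import numpy as np

def simplified_pda(x, grad, cond_mean, k=10, l=18):
    """Sketch of the simplified algorithm: one gradient, one sweep over
    windows.  `grad` is dP(c|x)/dx (same shape as x); `cond_mean(x, r0,
    c0, k, l)` returns the conditional mean of the k x k window at
    (r0, c0) given its surrounding l x l patch (hypothetical helper)."""
    H, W = x.shape
    WE = np.zeros((H, W))
    counts = np.zeros((H, W))
    for r0 in range(H - k + 1):
        for c0 in range(W - k + 1):
            win = (slice(r0, r0 + k), slice(c0, c0 + k))
            diff = x[win] - cond_mean(x, r0, c0, k, l)
            WE[win] += np.sum(grad[win] * diff)  # grad^T (x - x')
            counts[win] += 1
    return WE / counts
```

Since the gradient is computed once, each window only costs one element-wise product and sum, which is where the speed-up over repeated forward passes comes from.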
The simplified algorithm now only needs to perform one forward pass and one backward pass to visualise an image.
To illustrate its visualisation quality, we compare it with our method on ILSVRC dataset. The results are shown below.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{images/ilsvrc_3.png}
\caption{Visualisation results by our approach and its simplified version on the same set of input images as Figure \ref{ilsvrc_result1} and Figure \ref{ilsvrc_result2}.}
\label{ilsvrc_result3}
\end{figure}
In Figure \ref{ilsvrc_result3}, the middle two columns are feature heatmaps generated by our approach, and the last two columns are feature heatmaps generated by its simplified version.
We can see that the simplified version is capable of capturing some important input features, but the results are too noisy and lack interpretability. Although it runs significantly faster, the decrease in visualisation quality reduces its usefulness.
However, the formulation of the simplification itself can help us reason about other visualisation techniques. In Table~\ref{alg_compare} we compare our method with the class saliency map and deep Taylor decomposition in terms of how they evaluate feature importance.
\begin{table}[htb]
\begin{center}
\begin{tabular}{ll}
\toprule
Visualisation Technique & Importance of Pixel $x_i$ \\
\midrule
Class Saliency Map~\cite{simonyan2013deep} & ${R_i} = { {\left| {\frac{{\partial f}}{{\partial x_i}}} \right|} }$ \\
\midrule
Deep Taylor Decomposition~\cite{montavon2017explaining} & ${R_i} = \frac{{\partial f}}{{\partial {{\tilde x}_i}}} \cdot \left( {{x_i} - {{\tilde x}_i}} \right)$ \\
\midrule
Our approach & ${R_i} = \frac{{\partial f}}{{\partial {x_i}}} \cdot \left( {{x_i} - {x_i'}} \right)$ \\
\bottomrule
\end{tabular}
\caption{Three visualisation techniques and their evaluation of pixel importance.}
\label{alg_compare}
\end{center}
\end{table}
In Table~\ref{alg_compare}, $f$ denotes the decision function of the classifier to be visualised, $x$ is the input image, $\tilde x$ is chosen such that $f(\tilde x) = 0$, and $x_i'$ is the conditional mean of $x_i$ given its neighbouring pixels.
We can observe that all three algorithms use partial derivatives of the decision function with respect to the input pixels to evaluate each pixel's importance. What differentiates them is the weighting rule for pixel contributions. The class saliency map~\cite{simonyan2013deep} treats every pixel equally, so it only uses gradient information to decide pixel importance. Deep Taylor decomposition~\cite{montavon2017explaining}, on the other hand, first finds a root point of the decision function and assigns larger weights to pixels far away from this root point, because those pixels carry more information about the class identity. Finally, our approach models the interdependence between local pixels and assigns larger weights to pixels that are hard to predict from their context.
The weighting rules determine the properties of the visualisations produced. Equal weighting is built on the assumption that each pixel contributes independently to the classifier's decision. This is not true for a CNN, which learns a hierarchy of representations from complex pixel interactions; therefore visualisations generated by the class saliency map tend to be noisy. Both deep Taylor decomposition and our approach weigh pixels proportionally to their distance from a base image; the former's base image encodes information about the decision function, while the latter's encodes information about the distribution of input pixels. Since they explain the classifier's decision from different perspectives, their results can be combined in practice to obtain a better interpretation.
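To make the comparison concrete, the following toy sketch evaluates the three weighting rules of Table~\ref{alg_compare} on a linear decision function $f(x) = w^\top x$. The root point and conditional means below are illustrative choices, not values from the experiments.

```python
import numpy as np

# Toy linear decision function f(x) = w^T x, so df/dx_i = w_i everywhere.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 3.0])          # input "pixels"
x_tilde = np.zeros(3)                  # root point: f(x_tilde) = 0
x_prime = np.array([1.0, 1.5, 0.0])    # hypothetical conditional means

saliency = np.abs(w)             # class saliency map: gradient only
deep_taylor = w * (x - x_tilde)  # weighted by distance to root point
ours = w * (x - x_prime)         # weighted by distance to cond. mean
```

Note how the first pixel, being perfectly predictable from context ($x_0 = x_0'$), receives zero importance under our rule even though its gradient is non-zero.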
\section{Conclusion}
In this work we have investigated a framework for predictor evidence analysis. Our work is based on an alternative formulation of prediction difference analysis~\cite{zintgraf2017visualizing}, derived by taking advantage of the hierarchical structure and the special properties of the component functions of CNNs. We have run various experiments on different datasets and classifiers to compare it with other methods that aim to explain classification decisions.
Our proposed method runs at least $10\times$ faster than \cite{zintgraf2017visualizing}. This acceleration comes from reducing the number of forward passes as well as avoiding sampling from high dimensional Gaussian distributions.
Our method generates more interpretable visualisations, as a side effect of the new formulation: instead of using sampling to approximate the expectation in output space, the expectation is now taken in input space and no samples need to be drawn. Consequently, our evidence visualisations are less noisy and the captured evidence focuses more on the objects.
Our approach can capture invariant class-specific features. Applied to a set of images belonging to the same class, it makes it possible to discover the ``decision rules'' the classifier uses to recognise that class.
Finally, we showed that our method can be interpreted as a gradient weighting rule. This interpretation links it with other gradient-based visualisation techniques, which fall into the same mathematical framework.
{\small
\bibliographystyle{ieee}
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/Toy_Model.png}
\caption{Toy model built by subjects interacting with 2 tools, 49 components and the instructions booklet. Better seen on screen.}
\label{fig:toy_model}
\end{figure}
Being able to analyze human behavior from egocentric observations has many potential applications related to the recent development of wearable devices \cite{Epson_moverio, Holo2, Vuzix} which range from improving the personal safety of workers in a factory~\cite{DeepVisionShield_Colombo19} to providing assistance to the visitors of a museum~\cite{vedi2019, RagusaPRL, cucchiara2014visions}.
In particular, with the rapid growth of interest in wearable devices in industrial scenarios, recognizing human-object interactions can be useful to prevent safety hazards, implement energy saving policies and issue notifications about actions that may be missed in a production pipeline \cite{miss_actions_shapiro}.
In recent years, progress has been made in many research areas related to human behavior understanding, such as action recognition \cite{feichtenhofer2018slowfast, TwoStream_convolutional_action_Zisserman_14, Two-Stream_Zisserman, Zhou2018TemporalRR, kazakos2019TBN, TSM_2019}, object detection \cite{girshick2014rich, girshick2015fast, ren2015faster, yolov3} and human-object interaction detection \cite{Gkioxari2018DetectingAR, Gupta2015VisualSR, Hands_in_contact_Shan20, Nagarajan2020EGOTOPOEA}. These advances have been possible thanks to the availability of large-scale datasets \cite{Imagenet, lin2014COCO, Damen2018EPICKITCHENS, Li2018_EGTEA-GAZE+, Gupta2015VisualSR, HICO_Chao} which have been curated and often associated with dedicated challenges. In the egocentric vision domain, in particular, previous investigations have considered the contexts of kitchens \cite{Damen2018EPICKITCHENS, Li2018_EGTEA-GAZE+, Torre2009CMU-MMAC}, as well as daily living activities at home and in offices \cite{Ramanan_12_ADL, thu-read_17, You-Do_Damen_14, Ortis2017OrganizingEV}. While these contexts provide interesting test-beds to study user behavior in general, egocentric human-object interactions have not been previously studied in industrial environments such as factories, building sites and mechanical workshops. This is mainly due to the fact that data acquisition in industrial domains is difficult because of privacy issues and the need to protect industrial secrets.
In this paper, we present MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings. To overcome the limitations related to data collection in industry, we resort to an industrial-like scenario in which subjects are asked to build a toy model of a motorbike using different components and tools (see Figure~\ref{fig:toy_model}). Similarly to an industrial scenario, the subjects interact with tools such as a screwdriver and a wrench, as well as with tiny objects such as screws and bolts while executing a task involving sequential actions (e.g., take wrench, tighten bolt, put wrench). Despite the fact that this scenario is a simplification of what can be found in real industrial settings, it is still fairly complex, as our experiments show.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{images/concept.png}
\caption{Examples of Human-Object Interactions in third person vision (first row) and first person vision (second row)\protect\footnotemark.}
\label{fig:concept}
\end{figure}
MECCANO was collected by 20 different participants in two countries (Italy and United Kingdom). We densely annotated the acquired videos with temporal labels to indicate the start and end times of each human-object interaction, and with bounding boxes around the active objects involved in the interactions. We hope that the proposed dataset will encourage research in this challenging domain. The dataset is publicly released at the following link: \url{https://iplab.dmi.unict.it/MECCANO}.
We show that the proposed dataset can be used to study four fundamental tasks related to the understanding of human-object interactions: 1)~Action Recognition, 2)~Active Object Detection, 3)~Active Object Recognition and 4)~Egocentric Human-Object Interaction Detection. While past works have already investigated the tasks of action recognition \cite{Damen2018EPICKITCHENS, Li2018_EGTEA-GAZE+, Ma_2016_CVPR, Sudhakaran_2019_CVPR}, active object detection \cite{Ramanan_12_ADL}, and active object recognition~\cite{Damen2018EPICKITCHENS} in the context of egocentric vision, Human-Object Interaction (HOI) detection has been generally studied in the context of third person vision \cite{Gupta2015VisualSR, Gkioxari2018DetectingAR, Qi2018LearningHI, Chao2018LearningTD, RPN_Zhou, PPDM_liao2019, Wang_InteractionPoints_2020_CVPR}. Since we believe that modelling actions both semantically and spatially is fundamental for egocentric vision applications, we instantiate the Human-Object Interaction detection task in the context of the proposed dataset.
HOI detection consists in detecting the occurrence of human-object interactions, localizing both the humans taking part in the action and the interacted objects. HOI detection also aims to understand the relationships between humans and objects, which is usually described with a verb.
Possible examples of HOIs are \textit{``talk on the cell phone''} or \textit{``hold a frisbee''}. HOI detection models mostly consider a single object involved in the interaction \cite{Gupta2015VisualSR, HOI_Gupta_09, Gkioxari2018DetectingAR, HOI_Fei_Fei, Chao2018LearningTD}. Hence, an interaction is generally defined as a triplet in the form \textit{$<$human, verb, object$>$}, where the human is the subject of the action specified by a verb and an object. Sample images related to human-object interactions in a third-person scenario are shown in Figure~\ref{fig:concept}-top.
We define Egocentric Human-Object Interaction (EHOI) detection as the task of producing $<$verb, objects$>$ pairs describing the interaction observed from the egocentric point of view. Note that in EHOI, the human interacting with the objects is always the camera wearer, while one or more objects can be involved simultaneously in the interaction. The goal of EHOI detection is to infer the verb and noun classes, and to localize each active object involved in the interaction. Figure~\ref{fig:concept}-bottom reports some examples of Egocentric Human-Object Interactions.
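As an illustration, an EHOI annotation can be represented by a simple data structure holding the verb and the active objects with their bounding boxes. The field names and values below are our own illustrative choice, not the actual annotation schema released with the dataset.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ActiveObject:
    noun: str                          # e.g. "wrench"
    box: Tuple[int, int, int, int]     # (x1, y1, x2, y2) in pixels

@dataclass
class EHOI:
    """An egocentric human-object interaction: the subject is always
    the camera wearer, so only the <verb, objects> pair is stored."""
    verb: str                          # e.g. "tighten"
    objects: List[ActiveObject]        # one or more active objects

ehoi = EHOI(verb="tighten",
            objects=[ActiveObject("wrench", (100, 200, 180, 260)),
                     ActiveObject("bolt", (150, 240, 170, 255))])
```

Allowing a list of objects, rather than a single one, is what distinguishes the EHOI pair from the classic $<$human, verb, object$>$ triplet.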
We perform experiments with baseline approaches to tackle the four considered tasks. Results suggest that the proposed dataset is a challenging benchmark for understanding egocentric human-object interactions in industrial-like settings.
In sum, the contributions of this work are as follows: 1) we present MECCANO, a new challenging egocentric dataset to study human-object interactions in an industrial-like domain; 2) we instantiate the HOI definition in the context of egocentric vision (EHOI); 3) we show that the current state-of-the-art approaches achieve limited performance, which suggests that the proposed dataset is an interesting benchmark for studying egocentric human-object interactions in industrial-like domains.
\begin{table*}[]
\resizebox{\textwidth}{!}{%
\setlength\tabcolsep{2pt}
\begin{tabular}{lccclcccccccc}
\multicolumn{1}{c}{\textbf{Dataset}} & \textbf{Settings} & \textbf{EGO?} & \textbf{Video?} & \multicolumn{1}{c}{\textbf{Tasks}} & \textbf{Year} & \textbf{Frames} & \textbf{Sequences} & \textbf{AVG. video duration} & \textbf{Action classes} & \textbf{Object classes} & \textbf{Object BBs} & \textbf{Participants} \\ \hline
MECCANO & Industrial-like & \checkmark & \checkmark & EHOI, AR, AOD, AOR & 2020 & 299,376 & 20 & 20.79 min & 61 & 20 & 64,349 & 20 \\ \hline
EPIC-KITCHENS \cite{Damen2018EPICKITCHENS} & Kitchens & \checkmark & \checkmark & AR, AOR & 2018 & 11,5M & 432 & 7.64 min & 125 & 352 & 454,255 & 32 \\
EGTEA Gaze+ \cite{Li2018_EGTEA-GAZE+} & Kitchens & \checkmark & \checkmark & AR & 2018 & 2,4M & 86 & 0.05 min & 106 & 0 & 0 & 32 \\
THU-READ \cite{thu-read_17} & Daily activities & \checkmark & \checkmark & AR & 2017 & 343,626 & 1920 & 7.44 min & 40 & 0 & 0 & 8 \\
ADL \cite{Ramanan_12_ADL} & Daily activities & \checkmark & \checkmark & AR, AOR & 2012 & 1,0M & 20 & 30.0 min & 32 & 42 & 137,780 & 20 \\
CMU \cite{Torre2009CMU-MMAC} & Kitchens & \checkmark & \checkmark & AR & 2009 & 200,000 & 16 & 15.0 min & 31 & 0 & 0 & 16 \\ \hline
Something-Something \cite{Something_Something_Goyal} & General & X & \checkmark & AR, HOI & 2017 & 5,2 M & 108,499 & 0.07 min & 174 & N/A & 318,572 & N/A \\
Kinetics \cite{Kinetics_Carreira2019ASN} & General & X & \checkmark & AR & 2017 & N/A & 455,000 & 0.17 min & 700 & 0 & 0 & N/A \\
ActivityNet \cite{caba2015activitynet} & Daily activities & X & \checkmark & AR & 2015 & 91,6 M & 19,994 & 2.55 min & 200 & N/A & N/A & N/A \\ \hline
HOI-A \cite{PPDM_liao2019} & General & X & X & HOI, AOR & 2020 & 38,668 & N/A & N/A & 10 & 11 & 60,438 & N/A \\
HICO-DET \cite{HICO_Chao} & General & X & X & HOI, AOR & 2018 & 47,776 & N/A & N/A & 117 & 80 & 256,672 & N/A \\
V-COCO \cite{Gupta2015VisualSR} & General & X & X & HOI, OD & 2015 & 10,346 & N/A & N/A & 26 & 80 & N/A & N/A \\ \hline
\end{tabular}%
}
\caption{Comparative overview of relevant datasets. HOI: HOI Detection. EHOI: EHOI Detection. AR: Action Recognition. AOD: Active Object Detection. AOR: Active Object Recognition. OD: Object Detection.}
\label{tab:datasets}
\end{table*}
\footnotetext{Images in the first row were taken from the COCO dataset \cite{lin2014COCO} while those in the second row belong to the MECCANO dataset.}
\section{Related Work}
\label{sec:related_work}
\textbf{Datasets for Human Behavior Understanding \hspace{1mm}} Previous works have proposed datasets to tackle the task of Human-Object Interaction (HOI) detection.
Among the most notable datasets, we can mention V-COCO~\cite{Gupta2015VisualSR}, which adds $26$ verb labels to the $80$ objects of COCO~\cite{lin2014COCO}, HICO-Det \cite{HICO_Chao}, labeled with $117$ verbs and $80$ objects, and HOI-A~\cite{PPDM_liao2019}, which focuses on $10$ verbs and $11$ objects related to actions that are dangerous while driving. Other works have proposed datasets for action recognition from video. Among these, ActivityNet \cite{caba2015activitynet} is a large-scale dataset of videos depicting $203$ activities relevant to how humans spend their time in their daily lives, Kinetics~\cite{Kinetics_2017, Kinetics_Carreira2019ASN} is a dataset containing $700$ human action classes, and Something-Something~\cite{Something_Something_Goyal} includes low-level concepts representing simple everyday aspects of the world.
Previous works also proposed datasets of egocentric videos. Among these datasets, EPIC-Kitchens~\cite{Damen2018EPICKITCHENS, Damen2020RESCALING, Damen2020Collection} focuses on unscripted activities in kitchens, EGTEA Gaze+~\cite{Li2018_EGTEA-GAZE+} contains videos paired with gaze information collected from participants cooking different recipes in a kitchen, CMU~\cite{Torre2009CMU-MMAC} is a multi-modal dataset of egocentric videos including RGB, audio and motion capture information, ADL~\cite{Ramanan_12_ADL} contains egocentric videos of subjects performing daily activities, THU-READ~\cite{thu-read_17} contains RGB-D videos of subjects performing daily-life actions in different scenarios.
Table~\ref{tab:datasets} compares the aforementioned datasets with respect to the proposed dataset. MECCANO is the first dataset of egocentric videos collected in an industrial-like domain and annotated to perform EHOI Detection.
It is worth noting that previous egocentric datasets have considered scenarios related to kitchens, offices, and daily-life activities and that they have generally tackled the action recognition task rather than EHOI detection.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{images/Meccano_Dataset.png}
\caption{Examples of data acquired by the 20 different participants in two countries (Italy, United Kingdom).}
\label{fig:dataset}
\end{figure*}
\textbf{Action Recognition \hspace{1mm}} Action recognition from video has been thoroughly investigated especially in the third person vision domain. Classic works~\cite{Learning_actions_movies_Laptev, Human_detection_Flow_Schmid_06} relied on motion-based features such as optical flow and space-time features.
Early deep learning works fused RGB and optical flow processing with two-stream networks~\cite{TwoStream_convolutional_action_Zisserman_14, Two-Stream_Zisserman, temporal_segNet}. 3D ConvNets are commonly used to encode both the spatial and the temporal dimension~\cite{Conv_spatio-temporal_Taylor, Learning_spatio-temporal_Paluri, Carreira2017QuoVA}, while long-term filtering and pooling approaches represent actions over their full temporal extent \cite{Long-term_action_Schmid, Two-Stream_Zisserman, temporal_segNet, Zhou2018TemporalRR, TSM_2019}.
Other works factor convolutions into separate 2D spatial and 1D temporal filters~\cite{Spatiotemporal_residual_action, closer_spatiotemp_action, rethinking_spatiotemporal, Learning_spatiotemporal_pseudo}.
Among recent works, SlowFast networks~\cite{feichtenhofer2018slowfast} avoid pre-computed optical flow by encoding motion in a ``fast'' pathway operating at a high frame rate, while a ``slow'' pathway operating at a low frame rate captures semantics.
We assess the performance of state-of-the-art action recognition methods on the proposed dataset, considering SlowFast networks~\cite{feichtenhofer2018slowfast}, I3D~\cite{Carreira2017QuoVA} and 2D CNNs as baselines.
\textbf{HOI Detection \hspace{1mm}} Previous works have investigated HOI detection mainly from a third person vision point of view.
The authors of~\cite{Gupta2015VisualSR} proposed a method to detect people performing actions, which is able to localize the objects involved in the interactions in still images.
The authors of~\cite{Gkioxari2018DetectingAR} proposed a human-centric approach based on a three-branch architecture (InteractNet) instantiated according to the definition of HOI in terms of a $<$human, verb, object$>$ triplet.
Some works~\cite{Qi2018LearningHI, Chao2018LearningTD, RPN_Zhou} explored HOI detection using graph convolutional neural networks after detecting humans and objects in the scene.
Recent works~\cite{PPDM_liao2019, Wang_InteractionPoints_2020_CVPR} represented the relationship between both humans and objects as the intermediate point which connects the center of the human and object bounding boxes.
The aforementioned works addressed the problem of HOI detection in the third person vision domain. In this work, we look at the task of HOI detection from an egocentric perspective considering the proposed MECCANO dataset.
\textbf{EHOI Detection \hspace{1mm}}
EHOI detection is understudied due to the limited availability of egocentric datasets labelled for this task.
While some previous datasets such as EPIC-KITCHENS~\cite{Damen2018EPICKITCHENS,Damen2020Collection} and ADL~\cite{Ramanan_12_ADL} have been labeled with bounding box annotations, they have not been explicitly labeled for the EHOI detection task with relationships between labeled objects and actions, which has prevented the development of EHOI detection approaches.
Some related studies have modeled the relations between entities for interaction recognition as object affordances~\cite{Hotspots_Grauman19, Nagarajan2020EGOTOPOEA, affordance_Fang18}.
Other works tackled tasks related to EHOI detection proposing hand-centric methods \cite{Cai2016UnderstandingHM, Lending_Hand_Bambach_15, Hands_in_contact_Shan20}.
Although these related works have considered human-object interactions from an egocentric point of view, the EHOI detection task has not yet been defined or studied systematically in past works.
With this paper we aim at providing a definition of the task, a suitable benchmark dataset, as well as an initial evaluation of baseline approaches.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{images/participants.pdf}
\caption{Statistics of the 20 participants.}
\label{fig:participants}
\end{figure}
\section{MECCANO Dataset}
\label{sec:dataset}
\subsection{Data Collection}
The MECCANO dataset has been acquired in an industrial-like scenario in which subjects built a toy model of a motorbike (see Figure~\ref{fig:toy_model}). The motorbike is composed of 49 components with different shapes and sizes belonging to 19 classes. In our settings, the components \textit{A054} and \textit{A051} of Figure~\ref{fig:toy_model} have been grouped under the ``screw'' class, whereas \textit{A053}, \textit{A057} and \textit{A077} have been grouped under the ``washers'' class. As a result, we have 16 component classes\footnote{See supplementary material for more details.}. Note that multiple instances of each class are necessary to build the model. In addition, 2 tools, a \textit{screwdriver} and a \textit{wrench}, are available to facilitate the assembly of the toy model. The subjects can use the instruction booklet to understand how to build the toy model following the sequential instructions.
For the data collection, the $49$ components related to the considered $16$ classes, the $2$ tools and the instruction booklet have been placed on a table to simulate an industrial-like environment. Objects of the same component class have been grouped and placed in a heap, and heaps have been placed randomly (see Figure~\ref{fig:dataset}). Other objects not related to the toy model were present in the scene (i.e., a cluttered background). We have considered two types of table: a light-colored table and a dark one. The dataset has been acquired by 20 different subjects in 2 countries (Italy and United Kingdom) between May 2019 and January 2020. Participants were of $8$ different nationalities with ages between $18$ and $55$. Figure~\ref{fig:participants} reports some statistics about the participants. We asked participants to sit and build the model of the motorbike. No other particular instruction was given to the participants, who were free to use all the objects placed on the table as well as the instruction booklet. Some examples of the captured data are reported in Figure~\ref{fig:dataset}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{images/stats.pdf}
\caption{Long-tail distribution of verbs classes.}
\label{fig:stats}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/Temporal_annotations.png}
\caption{Example of two overlapping temporal annotations along with the associated verbs.}
\label{fig:temporal}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{images/bbox_annotations.pdf}
\caption{Example of bounding box annotations for \textit{active} objects (first row) and occluded \textit{active} objects (second row).}
\label{fig:bbox}
\end{figure*}
Data was captured using an Intel RealSense SR300 device mounted on the head of each participant with a headset. The headset was adjusted to control the point of view of the camera with respect to the different heights and postures of the participants, in order to have the hands located approximately in the middle of the acquired scene. Videos were recorded at a resolution of 1280x720 pixels and with a framerate of 12\textit{fps}. Each video corresponds to a complete assembly of the toy model starting from the 49 pieces placed on the table. The average duration of the captured videos is 21.14\textit{min}, with the longest one being 35.45\textit{min} and the shortest one being~9.23\textit{min}.
\subsection{Data Annotation}
We annotated the MECCANO dataset in two stages. In the first stage, we temporally annotated the occurrences of all human-object interactions indicating their start and end times, as well as a verb describing the interaction. In the second stage, we annotated the \textit{active objects} with bounding boxes for each temporal segment.
\textbf{Stage 1: Temporal Annotations \hspace{1mm}}
We considered 12 different verbs which describe the actions performed by the participants: \textit{take, put, check, browse, plug, pull, align, screw, unscrew, tighten, loosen} and \textit{fit}. As shown in Figure~\ref{fig:stats}, the distribution of verb classes of the labeled samples in our dataset follows a long-tail distribution, which suggests that the taxonomy captures the complexity of the considered scenario.
Since a participant can perform multiple actions simultaneously, we allowed the annotated segments to overlap (see Figure~\ref{fig:temporal}). In particular, in the MECCANO dataset there are 1401 segments (15.82\%) which overlap with at least one other segment. We consider the start time of a segment as the timestamp in which the hand touches an object, changing its state from \textit{passive} to \textit{active}. The only exception is for the verb \textit{check}, in which case the user does not need to touch an object to perform an interaction. In this case, we annotated the start time when it is obvious from the video sequence that the user is looking at the object (see Figure~\ref{fig:temporal}). With this procedure, we annotated $8857$ video segments.
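To make the overlap statistic concrete, the following sketch counts the segments that overlap at least one other segment, assuming each segment is represented as a (start, end) pair in seconds (an illustrative helper, not part of the dataset toolkit):

```python
def overlapping_segments(segments):
    """Return the indices of segments that overlap at least one other.

    `segments` is a list of (start, end) pairs in seconds; two segments
    overlap when their open time intervals intersect.  (Hypothetical
    helper illustrating the statistic reported above.)
    """
    overlapping = set()
    for i, (s1, e1) in enumerate(segments):
        for j, (s2, e2) in enumerate(segments):
            # Intervals [s1, e1] and [s2, e2] intersect iff each starts
            # before the other ends.
            if i != j and s1 < e2 and s2 < e1:
                overlapping.add(i)
    return sorted(overlapping)

# Example: the second and third segments overlap each other.
segs = [(0.0, 1.0), (2.0, 4.0), (3.5, 5.0)]
print(overlapping_segments(segs))  # [1, 2]
```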
\textbf{Stage 2: Active Object Bounding Box Annotations \hspace{1mm}}
We considered $20$ object classes which include the $16$ classes categorizing the $49$ components, the two tools (\textit{screwdriver} and \textit{wrench}), the instructions booklet and a \textit{partial\_model} class. The latter object class represents assembled components of the toy model which are not yet complete (e.g., a \textit{screw} and a \textit{bolt} fixed on a \textit{bar} which have not yet been assembled with the rest of the model\footnote{See the supplementary material for examples of partial model.}).
For each temporal segment, we annotated the \textit{active} objects in frames sampled every $0.2$ seconds. Each active object annotation consists of a \textit{(class, x, y, w, h)} tuple, where \textit{class} represents the class of the object and \textit{(x, y, w, h)} defines a bounding box around the object.
We annotated multiple objects when they were \textit{active} simultaneously (see Figure~\ref{fig:bbox} - first row). Moreover, if an active object is occluded, even just in a few frames, we annotated it with a \textit{(class, x, y)} tuple, specifying the class of the object and its estimated 2D position. An example of occluded active object annotation is reported in the second row of Figure~\ref{fig:bbox}.
With this procedure, we labeled a total of 64349 frames.
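A minimal sketch of how the two annotation formats described above could be parsed, assuming annotations are stored as plain tuples (the function name and output dictionary are illustrative, not the released annotation format):

```python
def parse_annotation(tup):
    """Split an active-object annotation into class label and geometry.

    Full annotations are (class, x, y, w, h) tuples; occluded objects
    are annotated as (class, x, y) with only an estimated 2D position.
    (Hypothetical helper illustrating the two annotation formats.)
    """
    if len(tup) == 5:
        cls, x, y, w, h = tup
        return {"class": cls, "box": (x, y, w, h), "occluded": False}
    if len(tup) == 3:
        cls, x, y = tup
        return {"class": cls, "point": (x, y), "occluded": True}
    raise ValueError("expected a 3- or 5-element annotation tuple")

print(parse_annotation(("screw", 100, 120, 40, 30))["occluded"])  # False
print(parse_annotation(("bolt", 220, 95))["occluded"])            # True
```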
\begin{table*}[]
\centering
\resizebox{0.9\textwidth}{!}{%
\begin{tabular}{l|ccccccc}
\multicolumn{1}{c|}{\textbf{Split}} & \textbf{\#Videos} & \textbf{Duration (min)} & \textbf{\%} & \textbf{\#EHOIs Segments} & \textbf{Bounding Boxes} & \textbf{Country (U.K/Italy)} & \textbf{Table (Light/Dark)} \\ \hline
Train & 11 & 236.47 & 55\% & 5057 & 37386 & 6/5 & 6/5 \\
Val & 2 & 46.57 & 10\% & 977 & 6983 & 1/1 & 1/1 \\
Test & 7 & 134.93 & 35\% & 2824 & 19980 & 4/3 & 4/3 \\ \hline
\end{tabular}%
}
\caption{Statistics of the three splits: Train, Validation and Test.}
\label{tab:splits}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/actions_stats.png}
\caption{Distribution of action instances in the MECCANO dataset.}
\label{fig:action_stats}
\end{figure*}
\textbf{Action Annotations \hspace{1mm}}
Starting from the temporal annotations, we defined $61$ action classes\footnote{See the supplementary material for details on action class selection.}. Each action is composed of a verb and one or more objects, for example \textit{``align screwdriver to screw''}, in which the verb is \textit{align} and the objects are \textit{screwdriver} and \textit{screw}. Depending on the verb and objects involved in the interaction, each temporal segment has been associated with one of the $61$ considered action classes.
Figure~\ref{fig:action_stats} shows the list of the $61$ action classes, which follow a long-tail distribution.
\textbf{EHOI Annotations \hspace{1mm}}
Let $O = \{o_1, o_2, ..., o_n\}$ and $V = \{v_1, v_2, ..., v_m\}$ be the sets of objects and verbs respectively. We define an Egocentric Human-Object Interaction $e$ as:
\begin{equation} \label{eq:1}
e = (v_h, \{o_1, o_2, ..., o_i\})
\end{equation}
where \begin{math}v_h \in V\end{math} is the verb characterizing the interaction and \begin{math}\{o_1, o_2, ..., o_i\} \subseteq O \end{math} represents the set of active objects involved in the interaction.
Given the previous definition, we considered all the observed combinations of verbs and objects to represent EHOIs performed by the participants during the acquisition (see examples in Figure~\ref{fig:concept}-bottom).
Each EHOI annotation is hence composed of a verb annotation and the \textit{active} object bounding boxes. The MECCANO dataset is the first dataset of egocentric videos explicitly annotated for the EHOI detection task.
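The definition above can be sketched as a small data structure pairing a verb with the set of active object classes (an illustrative representation, not the released annotation format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EHOI:
    """An EHOI pairs a verb with the set of active object classes
    involved (illustrative representation of the definition above)."""
    verb: str
    objects: frozenset  # classes of the active objects

# Example: "align screwdriver to screw" involves two active objects.
ehoi = EHOI(verb="align", objects=frozenset({"screwdriver", "screw"}))
print(ehoi.verb)          # align
print(len(ehoi.objects))  # 2
```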
\begin{table*}[]
\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{ll|l|l|l|l}
\textbf{} & \multicolumn{1}{c|}{\textbf{Top-1 Accuracy}} & \multicolumn{1}{c|}{\textbf{Top-5 Accuracy}} & \multicolumn{1}{c|}{\textbf{Avg Class Precision}} & \multicolumn{1}{c|}{\textbf{Avg Class Recall}} & \multicolumn{1}{c}{\textbf{Avg Class $F_1$-score}} \\ \hline
\multicolumn{1}{l|}{C2D \cite{fan2020pyslowfast}} & \multicolumn{1}{c|}{41.92} & \multicolumn{1}{c|}{71.95} & \multicolumn{1}{c|}{37.6} & \multicolumn{1}{c|}{38.76} & \multicolumn{1}{c}{36.49} \\
\multicolumn{1}{l|}{I3D \cite{Carreira2017QuoVA}} & \multicolumn{1}{c|}{42.51} & \multicolumn{1}{c|}{72.35} & \multicolumn{1}{c|}{40.04} & \multicolumn{1}{c|}{40.42} & \multicolumn{1}{c}{38.88} \\
\multicolumn{1}{l|}{SlowFast \cite{feichtenhofer2018slowfast}} & \multicolumn{1}{c|}{\textbf{42.85}} & \multicolumn{1}{c|}{\textbf{72.47}} & \multicolumn{1}{c|}{\textbf{42.11}} & \multicolumn{1}{c|}{\textbf{41.48}} & \multicolumn{1}{c}{\textbf{41.05}} \\ \hline
\end{tabular}%
}
\caption{Baseline results for the action recognition task.}
\label{tab:action}
\end{table*}
\section{Benchmarks and Baseline Results}
\label{sec:benchmarks}
The MECCANO dataset is suitable to study a variety of tasks, considering the challenging industrial-like scenario in which it was acquired. In this paper, we consider four tasks for which we provide baseline results: 1) \textit{Action Recognition}, 2) \textit{Active Object Detection}, 3) \textit{Active Object Recognition} and 4) \textit{Egocentric Human-Object Interaction (EHOI) Detection}. While some of these tasks have been considered in previous works, none of them has been studied in industrial scenarios from the egocentric perspective. Moreover, it is worth noting that the EHOI Detection task has never been treated in previous works.
We split the dataset into three subsets (\textit{Training, Validation} and \textit{Test}) designed to balance the different types of table (light, dark) and the countries in which the videos have been acquired (IT, U.K.). Table~\ref{tab:splits} reports some statistics about the three splits, such as the number of videos, the total duration (in minutes), the number of temporally annotated EHOIs and the number of bounding box annotations.
\subsection {Action Recognition}
\textbf{Task:} Action Recognition consists of determining the action performed by the camera wearer from an egocentric video segment. Specifically, let \begin{math}C_a = \{c_1, c_2, ..., c_n\} \end{math} be the set of action classes and let \begin{math}A_i = [t_{s_i}, t_{e_i}]\end{math} be a video segment, where $t_{s_i}$ and $t_{e_i}$ are the start and the end times of the action respectively. The aim is to assign the correct action class $c_i \in C_a$ to the segment $A_i$.\\
\textbf{Evaluation Measures:} We evaluate action recognition using Top-1 and Top-5 accuracy computed on the whole test set. As class-aware measures, we report class-mean precision, recall and $F_1$-score. \\
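The reported measures can be sketched as follows, assuming per-sample class scores and ground-truth labels (a minimal reference implementation, not the evaluation code used in the benchmark):

```python
def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-
    scoring classes. `scores` holds one list of per-class scores per
    sample (illustrative implementation)."""
    hits = 0
    for row, y in zip(scores, labels):
        topk = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        hits += int(y in topk)
    return hits / len(labels)

def macro_f1(preds, labels, num_classes):
    """Class-mean F1: compute precision, recall and F1 per class, then
    average over classes (illustrative implementation)."""
    f1s = []
    for c in range(num_classes):
        tp = sum(1 for p, y in zip(preds, labels) if p == c and y == c)
        fp = sum(1 for p, y in zip(preds, labels) if p == c and y != c)
        fn = sum(1 for p, y in zip(preds, labels) if p != c and y == c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / num_classes

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
print(topk_accuracy(scores, [1, 1], 1))  # 0.5
print(topk_accuracy(scores, [1, 1], 2))  # 1.0
```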
\textbf{Baselines:} We considered 2D CNNs as implemented in the PySlowFast library \cite{fan2020pyslowfast} (C2D), I3D \cite{Carreira2017QuoVA} and SlowFast \cite{feichtenhofer2018slowfast} networks, which are state-of-the-art methods for action recognition. In particular, for all baselines we used the PySlowFast implementation based on a ResNet-50 \cite{He2015_ResNet} backbone pre-trained on Kinetics \cite{Kinetics_2017}. See supplementary material for implementation details. \\
\textbf{Results:} Table~\ref{tab:action} reports the results obtained by the baselines for the action recognition task. All baselines obtained similar performance in terms of Top-1 and Top-5 accuracy with SlowFast networks achieving slightly better performance.
Interestingly, performance gaps are larger when we consider precision, recall and $F_1$ scores, which is particularly relevant given the long-tailed distribution of actions in the proposed dataset (see Figure~\ref{fig:action_stats}). Note that, in our benchmark, SlowFast obtained the best results with a Top-1 accuracy of 42.85 and an $F_1$-score of 41.05.
See supplementary material for qualitative results.
In general, the results suggest that action recognition with the MECCANO dataset is challenging and offers a new scenario to compare action recognition algorithms.
\begin{table}[]
\centering
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{l|c}
\multicolumn{1}{c|}{\textbf{Method}} & \multicolumn{1}{l}{\textbf{AP (IoU $\geq$ 0.5)}} \\ \hline
Hand Object Detector \cite{Hands_in_contact_Shan20} & 11.17\% \\
Hand Object Detector \cite{Hands_in_contact_Shan20} (Avg dist.) & 11.10\% \\
Hand Object Detector \cite{Hands_in_contact_Shan20} (All dist.) & 11.34\% \\
Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training & 20.18\% \\
Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training (Avg dist.) & 33.33\% \\
Hand Object Detector \cite{Hands_in_contact_Shan20} + Objs re-training (All dist.) & \textbf{38.14\%} \\ \hline
\end{tabular}%
}
\caption{Baseline results for the \textit{active} object detection task.}
\label{tab:active_det}
\end{table}
\subsection {Active Object Detection}
\textbf{Task:} The aim of the Active Object Detection task is to detect all the \textit{active} objects involved in EHOIs.
Let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\} \end{math} be the set of \textit{active} objects in the image. The goal is to detect with a bounding box each \textit{active} object $o_i \in O_{act}$. \\
\textbf{Evaluation Measures:} As evaluation measure, we use Average Precision~(AP), which is used in standard object detection benchmarks. We set the IoU threshold equal to~$0.5$ in our experiments. \\
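The IoU criterion underlying the AP measure can be sketched as follows for \textit{(x, y, w, h)} boxes (a standard computation, shown here only for reference):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x, y, w, h) boxes, as used for
    the AP threshold (standard computation; illustrative sketch)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width and height of the intersection rectangle (clamped at 0).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(iou((0, 0, 10, 10), (5, 0, 10, 10)))  # 50/150 = 0.333...
```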
\textbf{Baseline:} We considered the Hand-Object Detector proposed in \cite{Hands_in_contact_Shan20}. The model has been designed to detect hands and objects when they are in contact. This architecture is based on Faster-RCNN \cite{ren2015faster} and predicts a box around the visible human hands, as well as boxes around the objects the hands are in contact with, and a link between them. We used the Hand-Object Detector \cite{Hands_in_contact_Shan20} pretrained on EPIC-Kitchens \cite{Damen2018EPICKITCHENS}, EGTEA \cite{Li2018_EGTEA-GAZE+} and CharadesEGO \cite{Sigurdsson2018Charades} as provided by the authors \cite{Hands_in_contact_Shan20}. The model has been trained to recognize hands and to detect the \textit{active} objects regardless of their class. Hence, it should generalize to other domains.
With default parameters, the Hand-Object Detector can find at most two \textit{active} objects in contact with hands. Since our dataset tends to contain more \textit{active} objects in a single EHOI (up to 7), we consider two variants of this model by changing the threshold on the distance between hands and detected objects. In the first variant, the threshold is set to the average distance between hands and \textit{active} objects on the MECCANO dataset. We named this variant ``\textit{Avg distance}''. In the second variant, we removed the thresholding operation and considered all detected objects as \textit{active} objects. We named this variant ``\textit{All objects}''.
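The two distance-based variants can be sketched as a simple filtering step on the detected objects, assuming hand and object positions are given as pixel coordinates (a hypothetical helper, not the modified detector code):

```python
import math

def select_active_objects(hand_center, detections, threshold=None):
    """Keep only detections whose box center lies within `threshold`
    pixels of the hand center; with `threshold=None` every detection is
    kept, mirroring the "All objects" variant described above.
    `detections` is a list of ((x, y, w, h), score) pairs.
    (Hypothetical helper, not the released code.)"""
    if threshold is None:
        return list(detections)
    hx, hy = hand_center
    kept = []
    for (x, y, w, h), score in detections:
        cx, cy = x + w / 2, y + h / 2
        if math.hypot(cx - hx, cy - hy) <= threshold:
            kept.append(((x, y, w, h), score))
    return kept

dets = [((0, 0, 10, 10), 0.9), ((100, 100, 10, 10), 0.8)]
print(len(select_active_objects((0, 0), dets)))                # 2
print(len(select_active_objects((0, 0), dets, threshold=20)))  # 1
```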
We further adapted the Hand-Object Detector \cite{Hands_in_contact_Shan20} by re-training the Faster-RCNN component to detect all \textit{active} objects of the MECCANO dataset. See supplementary material for implementation details.\\
\textbf{Results:} Table~\ref{tab:active_det} shows the results obtained by the \textit{active} object detection task baselines. The results highlight that the Hand-Object Detector \cite{Hands_in_contact_Shan20} is not able to generalize to a domain different from the one on which it was trained. All three variants of the Hand-Object Detector using the original object detector obtained an AP approximately equal to 11\% (first three rows of Table~\ref{tab:active_det}). Re-training the object detector on the MECCANO dataset allowed us to improve performance by significant margins. In particular, using the standard distance threshold value, we obtained an AP of 20.18\%. If we consider the average distance as the threshold to discriminate \textit{active} and \textit{passive} objects, we obtain an AP of 33.33\%. Removing the distance threshold (last row of Table~\ref{tab:active_det}) outperforms all the previous results, with an AP equal to 38.14\%. This suggests that adapting the general object detector to the challenging domain of the proposed dataset is key to performance. Indeed, training the object detector to detect only \textit{active} objects in the scene already allows us to obtain reasonable results, while there is still room for improvement.
\subsection {Active Object Recognition}
\textbf{Task:} The task consists of detecting and recognizing the \textit{active} objects involved in EHOIs considering the $20$ object classes of the MECCANO dataset.
Formally, let \begin{math} O_{act} = \{o_1, o_2, ..., o_n\}\end{math} be the set of \textit{active} objects in the image and let \begin{math} C_{o} = \{c_1, c_2, ..., c_m\} \end{math} be the set of object classes. The task consists of detecting objects $o_i \in O_{act}$ and assigning them the correct class label $c \in C_{o}$. \\
\textbf{Evaluation Measures:} We use mAP \cite{PascalVOC_Zisserman_15} with threshold on IoU equal to $0.5$ for the evaluations.\\
\textbf{Baseline:} As a baseline, we used a standard Faster-RCNN \cite{ren2015faster} object detector. For each image the object detector predicts \textit{(x, y, w, h, class)} tuples which represent the object bounding boxes and the associated classes. See supplementary material for implementation details. \\
\textbf{Results:} Table~\ref{tab:active_rec} reports the results obtained with the baseline in the \textit{Active} Object Recognition task.
We report the AP values for each class considering all the videos belonging to the test set of the MECCANO dataset. The last column shows the average of the AP values for each class and the last row reports the mAP value for the test set. The mAP was computed as the average of the mAP values obtained in each test video. AP values in the last column show that large objects are easier to recognize (e.g., \textit{instruction booklet: 46.18\%; screwdriver: 60.50\%; tire: 58.91\%; rim: 50.35\%}). Performance suggests that the proposed dataset is challenging due to the presence of small objects.
We leave the investigation of more specific approaches to active object detection to future studies.
\begin{table}[]
\centering
\resizebox{0.8\columnwidth}{!}{%
\begin{tabular}{clc}
\multicolumn{1}{l|}{\textbf{ID}} & \multicolumn{1}{c|}{\textbf{Class}} & \textbf{AP (per class)} \\ \hline
\multicolumn{1}{c|}{0} & \multicolumn{1}{l|}{instruction booklet} & 46.18\% \\
\multicolumn{1}{c|}{1} & \multicolumn{1}{l|}{gray\_angled\_perforated\_bar} & 09.79\% \\
\multicolumn{1}{c|}{2} & \multicolumn{1}{l|}{partial\_model} & 36.40\% \\
\multicolumn{1}{c|}{3} & \multicolumn{1}{l|}{white\_angled\_perforated\_bar} & 30.48\% \\
\multicolumn{1}{c|}{4} & \multicolumn{1}{l|}{wrench} & 10.77\% \\
\multicolumn{1}{c|}{5} & \multicolumn{1}{l|}{screwdriver} & 60.50\% \\
\multicolumn{1}{c|}{6} & \multicolumn{1}{l|}{gray\_perforated\_bar} & 30.83\% \\
\multicolumn{1}{c|}{7} & \multicolumn{1}{l|}{wheels\_axle} & 10.86\% \\
\multicolumn{1}{c|}{8} & \multicolumn{1}{l|}{red\_angled\_perforated\_bar} & 07.57\% \\
\multicolumn{1}{c|}{9} & \multicolumn{1}{l|}{red\_perforated\_bar} & 22.74\% \\
\multicolumn{1}{c|}{10} & \multicolumn{1}{l|}{rod} & 15.98\% \\
\multicolumn{1}{c|}{11} & \multicolumn{1}{l|}{handlebar} & 32.67\% \\
\multicolumn{1}{c|}{12} & \multicolumn{1}{l|}{screw} & 38.96\% \\
\multicolumn{1}{c|}{13} & \multicolumn{1}{l|}{tire} & 58.91\% \\
\multicolumn{1}{c|}{14} & \multicolumn{1}{l|}{rim} & 50.35\% \\
\multicolumn{1}{c|}{15} & \multicolumn{1}{l|}{washer} & 30.92\% \\
\multicolumn{1}{c|}{16} & \multicolumn{1}{l|}{red\_perforated\_junction\_bar} & 19.80\% \\
\multicolumn{1}{c|}{17} & \multicolumn{1}{l|}{red\_4\_perforated\_junction\_bar} & 40.82\% \\
\multicolumn{1}{c|}{18} & \multicolumn{1}{l|}{bolt} & 23.44\% \\
\multicolumn{1}{c|}{19} & \multicolumn{1}{l|}{roller} & 16.02\% \\ \hline
\multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ \cline{2-3}
\multicolumn{1}{l}{} & \multicolumn{1}{c|}{\textbf{mAP}} & 30.39\%
\end{tabular}%
}
\caption{Baseline results for the \textit{active} object recognition task.}
\label{tab:active_rec}
\end{table}
\subsection {EHOI Detection}
\textbf{Task:} The goal is to determine egocentric human-object interactions (EHOI) in each image. Given the definition of EHOIs as $<$verb, objects$>$ pairs (see Equation~\ref{eq:1}), methods should detect and recognize all the \textit{active} objects in the scene, as well as the verb describing the action performed by the human. \\
\textbf{Evaluation Measures:}
Following \cite{Gupta2015VisualSR, Gkioxari2018DetectingAR}, we use the \textit{``role AP''} as an evaluation measure.
Formally, a detected EHOI is considered as a true positive if 1) the predicted object bounding box has an IoU of 0.5 or higher with respect to a ground truth annotation and 2) the predicted verb matches the ground truth.
Note that only the \textit{active} object bounding box location (not the correct class) is considered in this measure.
Moreover, we used different IoU thresholds (i.e., 0.5, 0.3 and 0.1) to compute the \textit{``role AP''}.\\
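The true-positive criterion of the ``role AP'' can be sketched as follows, assuming predictions and ground truth are (box, verb) pairs (illustrative only; the actual evaluation accumulates these decisions into an AP curve):

```python
def _iou(a, b):
    """Standard IoU between two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def is_role_true_positive(pred, gt, iou_thresh=0.5):
    """`pred` and `gt` are (box, verb) pairs with (x, y, w, h) boxes.
    TP iff box IoU >= threshold AND the verbs match; the object class
    is deliberately ignored (sketch of the criterion above)."""
    (pred_box, pred_verb), (gt_box, gt_verb) = pred, gt
    return _iou(pred_box, gt_box) >= iou_thresh and pred_verb == gt_verb

# Matching verb and well-overlapping box -> true positive.
print(is_role_true_positive(((0, 0, 10, 10), "take"),
                            ((0, 0, 10, 12), "take")))  # True
```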
\textbf{Baseline:} We adopted three baselines for the EHOI detection task.
The first one is based on InteractNet \cite{Gkioxari2018DetectingAR}, which is composed of three branches: 1) the ``human-branch'' to detect the humans in the scene, 2) the ``object-branch'' to detect the objects and 3) the ``interaction-branch'' which predicts the verb of the interaction focusing on the appearance of humans and objects. The second one is an extension of InteractNet which also uses context features derived from the whole input frame to help the ``interaction-branch'' in verb prediction. The last baseline is based on the combination of a SlowFast network \cite{fan2020pyslowfast} trained to predict the verb of the EHOI considering the spatial and temporal dimensions, and Faster-RCNN \cite{ren2015faster} which detects and recognizes all \textit{active} objects in the frame.
See supplementary material for implementation details.\\
\textbf{Results:} Table~\ref{tab:EHOI_det} reports the results obtained by the baselines on the test set for the EHOI detection task. The InteractNet method obtains low performance on this task, with a mAP role of 4.92\%. Its extension with context features slightly improves the performance, with a mAP role of 8.45\%, whereas the SlowFast network with Faster-RCNN achieved the best results with a mAP equal to 25.93\%.
The results highlight that current state-of-the-art approaches developed for the analysis of still images in third person scenarios are unable to detect EHOIs in the proposed dataset, which is likely due to the presence of multiple tiny objects involved simultaneously in the EHOI and to the actions performed.
On the contrary, adding the ability to process video clips with SlowFast allows for significant performance boosts.
Figure~\ref{fig:EHOI_qual} shows qualitative results obtained with the SlowFast+Faster-RCNN baseline. Note that in the second example the method correctly predicted all the objects involved simultaneously in the EHOI.
Despite promising performance of the suggested baseline, the proposed EHOI detection task needs more investigation due to the challenging nature of the considered industrial-like domain.
\begin{table}[]
\centering
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{l|lll}
& \multicolumn{3}{c|}{\textbf{mAP role}} \\ \hline
\textbf{Model} & \textbf{IoU $\geq$ 0.5} & \textbf{IoU $\geq$ 0.3} & \textbf{IoU $\geq$ 0.1} \\ \hline
InteractNet \cite{Gkioxari2018DetectingAR} & \multicolumn{1}{c}{04.92\%} & \multicolumn{1}{c}{05.30\%} & \multicolumn{1}{c}{05.72\%} \\
InteractNet \cite{Gkioxari2018DetectingAR} + Context & \multicolumn{1}{c}{08.45\%} & \multicolumn{1}{c}{09.01\%} & \multicolumn{1}{c}{09.45\%} \\
SlowFast \cite{feichtenhofer2018slowfast} + Faster-RCNN \cite{ren2015faster} & \multicolumn{1}{c}{\textbf{25.93\%}} & \multicolumn{1}{c}{\textbf{28.04\%}} & \multicolumn{1}{c}{\textbf{29.65\%}} \\\hline
\end{tabular}%
}
\caption{Baseline results for the EHOI detection task.}
\label{tab:EHOI_det}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{images/EHOI.png}
\caption{Qualitative results for the EHOI detection task.}
\label{fig:EHOI_qual}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We proposed MECCANO, the first dataset to study egocentric human-object interactions (EHOIs) in an industrial-like scenario.
We publicly release the dataset with both temporal (action segments) and spatial (active object bounding boxes) annotations considering a taxonomy of $12$ verbs, $20$ nouns and $61$ unique actions.
In addition, we defined the Egocentric Human-Object Interaction (EHOI) detection task and performed baseline experiments to show the potential of the proposed dataset on four challenging tasks: action recognition, \textit{active} object detection, \textit{active} object recognition and EHOI detection.
Future works will explore approaches for improved performance on this challenging data.
\section*{Acknowledgments}
This research has been supported by MIUR PON R\&I 2014-2020 - Dottorati innovativi con caratterizzazione industriale, by MIUR AIM - Attrazione e Mobilità Internazionale Linea 1 - AIM1893589 - CUP: E64118002540007, and by MISE - PON I\&C 2014-2020 - Progetto ENIGMA - Prog n. F/190050/02/X44 – CUP: B61B19000520008.
\section*{\uppercase{Supplementary Material}}
\label{sec:supp_material}
This document is intended for the convenience of the reader and reports additional information about the proposed dataset, the annotation stage, as well as implementation details related to the performed experiments. This supplementary material is related to the following submission:
\begin{itemize}
\item F. Ragusa, A. Furnari, S. Livatino, G. M. Farinella. The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain, submitted to IEEE Winter Conference on Applications of Computer Vision (WACV), 2021.
\end{itemize}
The remainder of this document is organized as follows. Section~\ref{ref:dataset} reports additional details about data collection and annotation. Section~\ref{ref:implementation} provides implementation details of the compared methods. Section~\ref{ref:qualitative} reports additional qualitative results.
\section{Additional details on the MECCANO Dataset}
\label{ref:dataset}
\subsection{Component classes and grouping}
The toy motorbike used for our data collection is composed of 49 components belonging to 19 classes (Figure~\ref{fig:toy_model}), plus two tools.
In our setting, we have grouped two types of components which are similar in their appearance and have similar roles in the assembly process. Figure~\ref{fig:groups} illustrates the two groups. Specifically, we grouped A054 and A051 under the ``screw'' class. These two types of components only differ in their lengths. We also grouped A053, A057 and A077 under the ``washer'' class. Note that these components only differ in the radius of their holes and in their thickness.
As a result, we have 20 object classes in total: 16 classes are related to the 49 motorbike components, whereas the others are associated with the two tools, the instruction booklet and a partial model class, which indicates a set of components assembled together to form a part of the model (see Figure~\ref{fig:partial}).
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{images/Components.png}
\caption{Grouped pieces belonging to \textit{screw} and \textit{washer} classes.}
\label{fig:groups}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{images/partial_model.png}
\caption{Examples of objects belonging to the partial model class.}
\label{fig:partial}
\end{figure*}
\subsection{Data Annotation}
\textbf{Verb Classes and Temporal Annotations \hspace{1mm}}
We considered $12$ verb classes which describe all the observed actions performed by the participants during the acquisitions. Figure~\ref{fig:verbs} reports the percentage of the temporally annotated instances belonging to the $12$ verb classes. The considered verb classes are: \textit{take, put, check, browse, plug, pull, align, screw, unscrew, tighten, loosen} and \textit{fit}.
We used the ELAN Annotation tool~\cite{ELAN} to annotate a temporal segment around each instance of an action. Each segment has been associated with the verb which best described the contained action.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/verbs.png}
\caption{Fractions of instances of each verb in the MECCANO dataset.}
\label{fig:verbs}
\end{figure}
\textbf{Active Object Bounding Box Annotations \hspace{1mm}}
For each annotated video segment, we sampled frames every $0.2$ seconds.
Each of these frames has been annotated to mark the presence of all \textit{active} objects with bounding boxes and related component class label.
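The sampling step can be sketched as follows; at the recording framerate of 12\textit{fps}, a $0.2$-second step lands every 2.4 frames, here rounded to the nearest frame index (an illustrative sketch, not the script used for the dataset):

```python
def sampled_frames(start, end, fps=12, step=0.2):
    """Frame indices sampled every `step` seconds inside a [start, end]
    segment recorded at `fps` frames per second (illustrative helper;
    fractional frame positions are rounded to the nearest frame)."""
    frames = []
    t = start
    while t <= end + 1e-9:  # small epsilon guards float accumulation
        frames.append(round(t * fps))
        t += step
    return frames

# A one-second segment starting at t=1.0 s yields six sampled frames.
print(sampled_frames(1.0, 2.0))  # [12, 14, 17, 19, 22, 24]
```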
To this aim, we used VGG Image Annotator (VIA) \cite{dutta2019vgg} with a customized project which allowed annotators to select component classes from a dedicated panel showing the thumbnails of the $20$ object classes, in order to facilitate and speed up the selection of the correct object class. Figure~\ref{fig:VIA} reports an example of the customized VIA interface. Moreover, to support annotators and reduce ambiguities, we prepared a document containing a set of fundamental rules for the annotation of \textit{active} objects, where we reported the main definitions (e.g., active object, occluded active object, partial\_model) along with visual examples. Figure~\ref{fig:rules} reports an example of such instructions.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/VIA.png}
\caption{Customized VIA project to support the labeling of active objects. Annotators were presented with a panel which allowed them to identify object classes through their thumbnails.}
\label{fig:VIA}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{images/rules_labelers.png}
\caption{\textit{Active} object definition given to the labelers for the \textit{active} object bounding box annotation stage.}
\label{fig:rules}
\end{figure}
\textbf{Action Annotation \hspace{1mm}}
In the MECCANO dataset, an action can be seen as a combination of a verb and a set of nouns (e.g., ``take wrench''). We analyzed the combinations of our $12$ verb classes and $20$ object classes to find a compact, yet descriptive set of action classes.
The action class selection process has been performed in two stages.
In the first stage, we obtained the distributions of the number of active objects generally occurring with each of the $12$ verbs. The distributions are shown in Figure~\ref{fig:actions}. For example, the dataset contains $120$ instances of ``browse'' (second row - first column), which systematically involves a single object.
Similarly, most of the instances of ``take'' appear with $1$ object, while a few instances involve $2-3$ objects.
In the second stage, we selected a subset of actions from all combinations of verbs and nouns. Figure~\ref{fig:actions1} reports all the action classes obtained from the 12 verbs classes of the MECCANO dataset as discussed in the following.
Let \begin{math} O = \{o_1, o_2, ..., o_n\} \end{math} and \begin{math} V = \{v_1, v_2, ..., v_m\} \end{math} be the set of the objects and verb classes respectively.
For each verb $v \in V$, we considered all the object classes $o \in O$ involved in one or more temporal segments labeled with verb $v$. We considered the following rules:
\begin{itemize}
\item \textbf{Take and put}: We observed that all the objects $o \in O$ occurring with $v=take$ are taken by participants while they build the motorbike. Hence, we first defined 20 action classes as $(v, o)$ pairs (one for each of the available objects).
Since subjects can take more than one object at a time, we added an additional ``take objects'' action class when two or more objects are taken simultaneously.
The same behavior has been observed for the verb $ v = put$. Hence, we similarly defined 21 action classes related to this verb.
\item \textbf{Check and browse}: We observed that verbs $v = check$ and $v = browse$ always involve only the object $o = instruction$ $booklet$. Hence, we defined the two action classes \textit{check instruction booklet} and \textit{browse instruction booklet}.
\item \textbf{Fit}: When the verb is $v = fit$, there are systematically two objects involved simultaneously (i.e., $o = rim$ and $o = tire$). Hence, we defined the action class \textit{fit rim and tire}.
\item \textbf{Loosen}: We observed that participants tend to loosen bolts always with the hands. We hence defined the action class \textit{loosen bolt with hands}.
\item \textbf{Align}: We observed that participants tend to align the screwdriver tool with the screw before starting to screw, as well as the wrench tool with the bolt before tightening it. Participants also tended to align objects to be assembled to each other.
From these observations, we defined three action classes related to the verb $v = align$: \textit{align screwdriver to screw}, \textit{align wrench to bolt} and \textit{align objects}.
\item \textbf{Plug}: We found three main uses of verb $v=plug$ related to the objects $o = screw$, $o = rod$ and $o = handlebar$. Hence, we defined three action classes: \textit{plug screw}, \textit{plug rod} and \textit{plug handlebar}.
\item \textbf{Pull}: Similar observations apply to verb $v = pull$. Hence we defined three action classes involving ``pull'': \textit{pull screw}, \textit{pull rod} and \textit{pull partial model}.
\item \textbf{Screw and unscrew}: The main object involved in actions characterized by the verbs $v = screw$ and $v = unscrew$ is $o = screw$. Additionally, the screw or unscrew action can be performed with a screwdriver or with hands. Hence, we defined four action classes \textit{screw screw with screwdriver, screw screw with hands, unscrew screw with screwdriver} and \textit{unscrew screw with hands}.
\item \textbf{Tighten}: A similar observation holds for the verb $v = tighten$, the object $o = bolt$ and the tool $o = wrench$. We hence defined the following two action classes: \textit{tighten bolt with wrench} and \textit{tighten bolt with hands}.
\end{itemize}
In total, we obtained 61 action classes composing the MECCANO dataset.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/actions.pdf}
\caption{Number of objects and occurrences of \textit{active} objects related to each verb.}
\label{fig:actions}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{images/actions_1.png}
\caption{$61$ action classes definition from the $12$ verb classes and the analysis performed observing the participant behavior.}
\label{fig:actions1}
\end{figure*}
\section{Baseline Implementation Details}
\label{ref:implementation}
\subsection {Action Recognition}
The goal of action recognition is to classify each action segment into one of the $61$ action classes of the MECCANO dataset.
The SlowFast, C2D and I3D baselines considered in this paper all require fixed-length clips at training time. Hence, we uniformly downsample or upsample each video segment temporally before passing it to the input layer of the network. The average number of frames in a video clip of the MECCANO dataset is $26.19$. For the SlowFast network, we set $\alpha=4$ and $\beta=\frac{1}{8}$.
We set the batch size to $12$ for C2D and I3D, and to $20$ for SlowFast.
We trained the C2D, I3D and SlowFast networks on 2 NVIDIA V100 GPUs for $80$, $70$ and $40$ epochs with learning rates of $0.01$, $0.1$ and $0.0001$, respectively.
These settings allowed all baselines to converge.
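The uniform temporal resampling described above can be sketched as follows (a minimal illustration; the target clip length and the index-rounding scheme are our assumptions, not details specified by the baselines):

```python
import numpy as np

def uniform_resample_indices(num_frames: int, target_len: int) -> np.ndarray:
    """Select `target_len` frame indices spread uniformly over a clip of
    `num_frames` frames: long clips are downsampled, short clips are
    upsampled by repeating frames."""
    return np.linspace(0, num_frames - 1, num=target_len).round().astype(int)
```

For instance, a 26-frame MECCANO segment would be mapped to a fixed-length input clip by indexing its frames with these positions.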
\subsection {Active Object Detection}
We trained Faster-RCNN on the training and validation sets using the provided \textit{active} object labels. We set the learning rate to 0.005 and trained Faster-RCNN with a ResNet-101 backbone and Feature Pyramid Network for 100K iterations on 2 NVIDIA V100 GPUs. We used the Detectron2 implementation \cite{wu2019detectron2}. The model is trained to recognize objects along with their classes. However, for the active object detection task, we ignore output class names and only consider a single ``active object'' class.
\subsection {Active Object Recognition}
We used the same model adopted for the Active Object Detection task, also retaining the predicted object classes at test time.
\begin{table*}[]
\centering
\resizebox{0.9\textwidth}{!}{%
\begin{tabular}{clcccccccc}
\multicolumn{1}{l|}{\textbf{ID}} & \multicolumn{1}{c|}{\textbf{Class\textbackslash{}Video}} & \textbf{0008} & \textbf{0009} & \textbf{0010} & \textbf{0011} & \textbf{0012} & \textbf{0019} & \textbf{0020} & \textbf{AP (per class)} \\ \hline
\multicolumn{1}{c|}{0} & \multicolumn{1}{l|}{instruction booklet} & 62.00\% & 38.78\% & 42.97\% & 63.75\% & 29.84\% & 38.25\% & 47.65\% & 46.18\% \\
\multicolumn{1}{c|}{1} & \multicolumn{1}{l|}{gray\_angled\_perforated\_bar} & 9.55\% & 18.81\% & 14.72\% & 2.17\% & 16.42\% & 0\% & 6.89\% & 9.79\% \\
\multicolumn{1}{c|}{2} & \multicolumn{1}{l|}{partial\_model} & 35.68\% & 31.74\% & 35.82\% & 42.55\% & 32.16\% & 33.02\% & 43.80\% & 36.40\% \\
\multicolumn{1}{c|}{3} & \multicolumn{1}{l|}{white\_angled\_perforated\_bar} & 43.70\% & 39.86\% & 9.90\% & 45.32\% & 24.94\% & 16.35\% & 33.31\% & 30.48\% \\
\multicolumn{1}{c|}{4} & \multicolumn{1}{l|}{wrench} & // & // & // & 11.11\% & // & 10.43\% & // & 10.77\% \\
\multicolumn{1}{c|}{5} & \multicolumn{1}{l|}{screwdriver} & 61.82\% & 57.68\% & 68.57\% & 54.21\% & 57.14\% & 62.68\% & 61.37\% & 60.50\% \\
\multicolumn{1}{c|}{6} & \multicolumn{1}{l|}{gray\_perforated\_bar} & 19.36\% & 40.26\% & 30.89\% & 53.06\% & 29.68\% & 26.82\% & 15.76\% & 30.83\% \\
\multicolumn{1}{c|}{7} & \multicolumn{1}{l|}{wheels\_axle} & 11.37\% & 18.34\% & 04.63\% & 1.79\% & 31.61\% & 03.91\% & 04.35\% & 10.86\% \\
\multicolumn{1}{c|}{8} & \multicolumn{1}{l|}{red\_angled\_perforated\_bar} & 18.65\% & 01.57\% & 4.81\% & 00.09\% & 12.27\% & 05.98\% & 09.64\% & 07.57\% \\
\multicolumn{1}{c|}{9} & \multicolumn{1}{l|}{red\_perforated\_bar} & 23.35\% & 26.69\% & 34.72\% & 24.58\% & 20.70\% & 11.21\% & 17.91\% & 22.74\% \\
\multicolumn{1}{c|}{10} & \multicolumn{1}{l|}{rod} & 14.90\% & 07.40\% & 22.41\% & 19.73\% & 15.57\% & 17.84\% & 14.04\% & 15.98\% \\
\multicolumn{1}{c|}{11} & \multicolumn{1}{l|}{handlebar} & 44.39\% & 36.31\% & 28.79\% & 26.92\% & 12.50\% & 27.27\% & 52.48\% & 32.67\% \\
\multicolumn{1}{c|}{12} & \multicolumn{1}{l|}{screw} & 48.64\% & 42.87\% & 40.00\% & 16.96\% & 44.99\% & 43.88\% & 35.35\% & 38.96\% \\
\multicolumn{1}{c|}{13} & \multicolumn{1}{l|}{tire} & 45.93\% & 71.68\% & 63.09\% & 89.01\% & 37.83\% & 39.69\% & 65.15\% & 58.91\% \\
\multicolumn{1}{c|}{14} & \multicolumn{1}{l|}{rim} & 45.10\% & 35.71\% & 42.57\% & 59.26\% & 22.28\% & 90.00\% & 57.54\% & 50.35\% \\
\multicolumn{1}{c|}{15} & \multicolumn{1}{l|}{washer} & 31.52\% & 39.39\% & 19.00\% & 19.57\% & 53.43\% & 44.45\% & 09.06\% & 30.92\% \\
\multicolumn{1}{c|}{16} & \multicolumn{1}{l|}{red\_perforated\_junction\_bar} & 19.28\% & 13.51\% & 07.55\% & 30.74\% & 28.63\% & 22.02\% & 16.89\% & 19.80\% \\
\multicolumn{1}{c|}{17} & \multicolumn{1}{l|}{red\_4\_perforated\_junction\_bar} & 24.20\% & 43.50\% & 39.11\% & 85.71\% & 44.23\% & 28.37\% & 20.62\% & 40.82\% \\
\multicolumn{1}{c|}{18} & \multicolumn{1}{l|}{bolt} & 33.14\% & 33.61\% & 11.29\% & 17.16\% & 28.46\% & 21.31\% & 19.12\% & 23.44\% \\
\multicolumn{1}{c|}{19} & \multicolumn{1}{l|}{roller} & 09.93\% & 40.50\% & 28.15\% & 5.76\% & 0.23\% & 18.20\% & 09.36\% & 16.02\% \\ \hline
\multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \cline{2-10}
\multicolumn{1}{l}{} & \multicolumn{1}{c|}{\textbf{mAP (per video)}} & 31.71\% & 33.59\% & 28.89\% & 33.47\% & 28.57\% & 28.08\% & 28.44\% & \textbf{30.39\%}
\end{tabular}%
}
\caption{Baseline results for the \textit{active} object recognition task. For each class, we report the AP values obtained on each test video; the last column reports the per-class AP averaged over the test videos, and the last row reports the mAP of each test video.}
\label{tab:active_rec}
\end{table*}
\subsection {EHOI Detection}
For the ``SlowFast + Faster-RCNN'' baseline, we trained the SlowFast network to recognize the $12$ verb classes of the MECCANO dataset using the same settings considered for the action recognition task. We trained the network for $40$ epochs and obtained a verb recognition Top-1 accuracy of $58.04\%$ on the Test set.
For the object detector component, we used the same model trained for the active object recognition task.
For the ``human-branch'' of the ``InteractNet'' model, we used the Hand-Object Detector~\cite{Hands_in_contact_Shan20} to detect hands in the scene. The object detector trained for active object recognition has been used for the ``object-branch''.
The MLPs used to predict the verb class from the appearance of hands and active objects are composed of an input linear layer (1024-d for the hand MLP and 784-d for the object one), a ReLU activation function and an output linear layer (12-d for both MLPs). We fused by late fusion the output probability distributions of verbs obtained from the two MLPs (hands and objects) to predict the final verb of the EHOI. We jointly trained the MLPs for $50K$ iterations on an NVIDIA V100 GPU, using a batch size of $28$ and a learning rate of $0.0001$.
In ``InteractNet + Context'', we added a third MLP which predicts the verb class based on context features. The context MLP has the same architecture as the other MLPs (hands and objects), except for the input linear layer, which is 640-d. In this case, we jointly trained the three MLPs (hands, objects and context) for $50K$ iterations on a TitanX GPU with a batch size of $18$ and a learning rate of $0.0001$.
The outputs of the three MLPs are then fused by late fusion. \\
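The late-fusion step can be sketched as below (a minimal sketch: we assume the fusion operator is an average of the per-branch verb distributions, which the text does not pin down):

```python
import numpy as np

def late_fuse(branch_probs):
    """Fuse the verb probability distributions predicted by the separate
    branches (hands, objects, and optionally context) into a single
    distribution by averaging, then renormalize."""
    fused = np.mean(np.stack(branch_probs, axis=0), axis=0)
    return fused / fused.sum()
```

The predicted verb of the EHOI is then the argmax of the fused distribution.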
\section{Additional Results}
\label{ref:qualitative}
Figure~\ref{fig:actions_rec} shows some qualitative results of the SlowFast baseline. Note that, in the second and third examples, the method correctly predicts only the verb or only the object.
Table~\ref{tab:active_rec} reports the results obtained with the baseline in the \textit{Active} Object Recognition task.
We report the AP values for each class considering all the videos belonging to the test set of the MECCANO dataset. The last column shows the average of the AP values for each class and the last row reports the mAP values for each test video.
Figure~\ref{fig:active_objects_rec} reports some qualitative results for this task. In particular, the first row shows correct \textit{active} object predictions, while the second row shows two examples of wrong predictions. In the wrong predictions, the right \textit{active} object is recognized, but other \textit{passive} objects are wrongly detected and recognized as \textit{active} (e.g., the instruction booklet in the bottom-left example or the red bars in the bottom-right example of Figure~\ref{fig:active_objects_rec}).
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/action_rec.png}
\caption{Qualitative results for the action recognition task. Correct predictions are in green while wrong predictions are in red.}
\label{fig:actions_rec}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{images/active_objects.png}
\caption{Qualitative results for the \textit{active} object recognition task.}
\label{fig:active_objects_rec}
\end{figure*}
\balance
{\small
\bibliographystyle{ieee_fullname}
}
\section{INTRODUCTION}
Markov random fields (MRFs, a.k.a.~Markov networks, undirected graphical models) are a compact representation of the joint distribution among multiple variables, with each variable being a node and an edge between two nodes indicating conditional dependence between the two corresponding variables. Sparse discrete MRF learning is proposed in the seminal work of \cite{lee2006efficient}. By considering an $l_1$-regularized MLE problem, many components of the parameterization are driven to zero, yielding a sparse solution to structure learning. However, in general, solving an $l_1$-regularized MLE problem exactly for a discrete MRF is infamously difficult due to the NP-hard inference problem posed by exact gradient evaluation \citep{koller2009probabilistic}. We hence inevitably have to trade accuracy for efficiency and scalability via \emph{inexact} learning techniques \citep{liu2013bayesian, liu2014learning, liu2016multiple,geng2018temporal}.
In this paper, we consider stochastic proximal gradient (SPG; \citealt{honorio2012convergence, atchade2014stochastic, miasojedow2016sparse}), a stochastic learning framework for $l_1$-regularized discrete MRFs. SPG hinges on a stochastic oracle for gradient approximation of the log-likelihood function (inexact inference). However, both the theoretical guarantees and the practical performance of existing algorithms are unsatisfactory.
The stochastic oracle behind SPG is Gibbs sampling \citep{levin2009markov}, which is an effective approach to draw samples from an intractable probability distribution. With enough samples, the intractable distribution can be approximated effectively by the empirical distribution, and hence many quantities (e.g., the gradient of the log-likelihood function) related to the intractable distribution can be estimated efficiently. Since SPG uses Gibbs sampling for gradient approximation, it can be viewed as an inexact proximal gradient method \citep{schmidt2011convergence}, whose success depends on whether the gradient approximation error can be effectively controlled. While previous works \citep{honorio2012convergence, atchade2014stochastic, miasojedow2016sparse} have shown that the quality of the gradient approximation can be improved \emph{in the long run} with increasingly demanding computational resources, such long term guarantees might not translate to satisfactory performance in practice (see Section~\ref{sec:exp}). Therefore, it is desirable to estimate and control the gradient approximation error of SPG meticulously in each iteration so that a more refined approximation to the exact gradient will be rewarded with a higher gain of efficiency and accuracy in practice.
Careful analysis and control of the quality of the gradient approximation of SPG call for the cross-fertilization of theoretical and empirical insights from stochastic approximate inference \citep{bengio2009justifying,fischer2011bounding}, inexact proximal methods \citep{schmidt2011convergence}, and statistical sampling \citep{mitliagkas2017improving}. Our contributions are hence both theoretical and empirical. Theoretically, we provide novel \emph{verifiable} bounds (Section~\ref{sec:bound}) to inspect and control the gradient approximation error induced by Gibbs sampling. Also, we provide a proof sketch for the main results in Section~\ref{sec:proof-sketch}. Empirically, we propose the \emph{tighten asymptotically} (TAY) learning strategy (Section~\ref{sec:applications}) based on the verifiable bounds to boost the performance of SPG.
\section{BACKGROUND}
We first introduce $l_1$-regularized discrete MRFs in Section~\ref{sec:mrf}. We then briefly review SPG as a combination of proximal gradient for sparse statistical learning and Gibbs sampling for addressing the intractable exact gradient evaluation problem.
\subsection{$l_1$-Regularized Discrete MRF}
\label{sec:mrf}
For the derivation, we focus on the binary pairwise case; in Section~\ref{sec:applications}, we illustrate that our framework can be generalized to other models.
Let $\mathbf{X} = \begin{bmatrix}
X_1,X_2,\cdots,X_p \end{bmatrix}^\top \in \curly{0,1}^p$
be a $p\times 1$ binary random vector. We use an uppercase letter such as $X$ to denote a random variable and the corresponding lowercase letter to denote a particular \emph{assignment} of the random variable, i.e., $X = x$. We also use boldface letters to represent vectors and matrices and regular letters to represent scalars. We define the function $\bm{\psi}:\curly{0,1}^p \rightarrow \curly{0,1}^m, \; \mathbf{x} \rightarrow \bm{\psi}(\mathbf{x})$ to represent the \emph{sufficient statistics} (a.k.a.~\emph{features}) whose values depend on the assignment $\mathbf{x}$ and compose an $m\times 1$ vector $\bm{\psi}(\mathbf{x})$, with its $j^{th}$ component denoted as $\psi_j (\mathbf{x})$. We use $\mathbb{X}$ to represent a dataset with $n$ independent and identically distributed (i.i.d.) samples.
With the notation introduced above, the $l_1$-regularized discrete MRF problem can be formulated as the following convex optimization problem:
\begin{equation}
\begin{gathered}
\label{eq:l1mrf}
\hat{\bm{\theta}} = \arg\min_{\bm{\theta}\in\bm{\Theta}} -\frac{1}{n}\sum_{\mathbf{x}\in \mathbb{X}} \bm{\theta}^\top \bm{\psi}(\mathbf{x}) + A(\bm{\theta}) + \lambda \@ifstar{\oldnorm}{\oldnorm*}{\bm{\theta}}_1,
\end{gathered}
\end{equation}
\begin{figure}[H]
\begin{minipage}[t]{.475\textwidth}
\begin{algorithm}[H]
\caption{Gibbs Sampling (Gibbs-1)}
\label{alg:Gibbs-1}
\begin{algorithmic}[1]
\Require initial samples $\mathbb{S}_0$ and $\bm{\theta}$.
\Ensure $\mathbb{S}$.
\Function{Gibbs-1}{$\mathbb{S}_0$, $\bm{\theta}$}
\State $\mathbb{S} \leftarrow \mathbb{S}_0$, and decide $p$ from $\mathbb{S}_0$.
\For{$i \in \left\{ 1, \cdots, p\right \}$}
\For{$\mathbf{x} \in \mathbb{S}$}
\State Compute $\text{P}_{\bm{\theta}}(X_i\given \mathbf{x}_{-i})$ according to (\ref{eq:cond}).
\State Update $x_i$ by $\text{P}_{\bm{\theta}}(X_i\given \mathbf{x}_{-i})$.
\EndFor
\EndFor
\State \Return $\mathbb{S}$.
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\hfill
\hfill
\begin{minipage}[t]{.475\textwidth}
\begin{algorithm}[H]
\caption{Gradient Approximation (GRAD)}
\label{alg:grad}
\begin{algorithmic}[1]
\Require $\bm{\theta}$, $\mathbb{E}_{\mathbb{X}}\bm{\psi}(\mathbf{x})$, and $q$.
\Ensure $\bm{\Delta} f(\bm{\theta})$.
\Function{Grad}{$\bm{\theta}$, $\mathbb{E}_{\mathbb{X}}\bm{\psi}(\mathbf{x})$, $q$}
\State Initialize $\mathbb{S}$ with $q$ samples.
\While{true}
\State $\mathbb{S} \leftarrow$ \Call{Gibbs-1}{ $\mathbb{S}$, $\bm{\theta}$}.
\If{stopping criteria met}\label{step:stop}
\State Compute $\mathbb{E}_{\mathbb{S}}\bm{\psi}(\mathbf{x})$ according to (\ref{eq:replacement}).
\State $\bm{\Delta} f(\bm{\theta}) \leftarrow \mathbb{E}_{\mathbb{S}}\bm{\psi}(\mathbf{x})-\mathbb{E}_{\mathbb{X}}\bm{\psi}(\mathbf{x})$.
\State\textbf{break}.
\EndIf
\EndWhile
\State \Return $\bm{\Delta} f(\bm{\theta})$.
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}%
\hfill
\hfill
\begin{minipage}[t]{.475\textwidth}
\begin{algorithm}[H]
\caption{Stochastic Proximal Gradient (SPG)}
\label{alg:pxgb}
\begin{algorithmic}[1]
\Require $\mathbb{X}$, $\lambda$, and $q$.
\Ensure $\tilde{\bm{\theta}}$.
\Function{SPG}{$\mathbb{X}$, $\lambda$, $q$}
\State Compute $\mathbb{E}_{\mathbb{X}}\bm{\psi}(\mathbf{x})$ according to (\ref{eq:der-moment}).
\State Initialize $\bm{\theta}^{(0)}$ randomly and $k \leftarrow 0$.
\State Choose step length $\alpha$.
\While{true}
\State\label{step:pcd}$\bm{\Delta} f(\bm{\theta}^{(k)})\leftarrow$ \Call{Grad}{$\bm{\theta}^{(k)}$, $\mathbb{E}_{\mathbb{X}}\bm{\psi}(\mathbf{x})$, $q$}.
\State $\bm{\theta}^{(k+1)} \leftarrow \bm{\mathcal{S}}_{\alpha\lambda}\left(\bm{\theta}^{(k)} - \alpha \bm{\Delta} f(\bm{\theta}^{(k)})\right).$
\If{Stopping criteria met}\label{step:converge}
\State $\tilde{\bm{\theta}} =\bm{\theta}^{(k+1)}$, \Return $\tilde{\bm{\theta}}$.
\EndIf
\State $k \leftarrow k+1$
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{figure}
with
\begin{equation*}
\begin{gathered}
A(\bm{\theta}) = \log \hspace{-4mm} \sum_{\mathbf{x}\in \curly{0,1}^p} \hspace{-3mm} \exp(\bm{\theta}^\top \bm{\psi(\mathbf{x})}),
\end{gathered}
\end{equation*}
where $\bm{\Theta}\subseteq \mathbb{R}^m$ is the parameter space of $\bm{\theta}$'s, $\lambda \ge 0$, and $A(\bm{\theta})$ is the \emph{log partition function}. We denote the differentiable part of (\ref{eq:l1mrf}) as
\begin{equation}
\label{eq:f}
f(\bm{\theta}) = -\frac{1}{n}\sum_{\mathbf{x}\in \mathbb{X}} \bm{\theta}^\top \bm{\psi}(\mathbf{x}) + A(\bm{\theta}).
\end{equation}
Solving (\ref{eq:l1mrf}) requires evaluating the gradient of $f(\bm{\theta})$, which is given by:
\begin{equation}
\begin{gathered}
\label{eq:grad}
\bm{\nabla} f(\bm{\theta}) = \mathbb{E}_{\bm{\theta}} \bm{\psi}(\mathbf{x})-\mathbb{E}_{\mathbb{X}} \bm{\psi}(\mathbf{x}),
\end{gathered}
\end{equation}
with
\begin{equation}
\begin{gathered}
\label{eq:der-moment}
\mathbb{E}_{\bm{\theta}} \bm{\psi}(\mathbf{x}) = \hspace{-4mm} \sum_{\mathbf{x}\in\curly{0,1}^p} \hspace{-3mm} \mathrm{P}_{\bm{\theta}}(\mathbf{x})\bm{\psi}(\mathbf{x}), \quad \mathbb{E}_{\mathbb{X}} \bm{\psi}(\mathbf{x}) = \frac{1}{n}\sum_{\mathbf{x} \in \mathbb{X}} \bm{\psi}(\mathbf{x}).
\end{gathered}
\end{equation}
$\mathbb{E}_{\bm{\theta}} \bm{\psi}(\mathbf{x})$ represents the expectation of the sufficient statistics under $\mathrm{P}_{\bm{\theta}}(\mathbf{x})= \frac{\exp (\bm{\theta}^\top\bm{\psi}(\mathbf{x}))}{\exp(A(\bm{\theta}))}$, which is a discrete MRF probability distribution parameterized by $\bm{\theta}$. $\mathbb{E}_{\mathbb{X}} \bm{\psi}(\mathbf{x})$ represents the expectation of the sufficient statistics under the empirical distribution. Computing $\mathbb{E}_{\mathbb{X}} \bm{\psi}(\mathbf{x})$ is straightforward, but computing $\mathbb{E}_{\bm{\theta}} \bm{\psi}(\mathbf{x})$ exactly is intractable due to the log partition function $A(\bm{\theta})$, which involves a sum over all $2^p$ assignments. As a result, various approximations have been made \citep{wainwright2007high, hofling2009estimation, viallon2014empirical}.
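To make the intractability concrete, $\mathbb{E}_{\bm{\theta}} \bm{\psi}(\mathbf{x})$ can be computed exactly by brute-force enumeration only for very small $p$ (a hypothetical illustration; \texttt{psi} stands for any sufficient-statistics function):

```python
import itertools
import numpy as np

def exact_model_moments(theta, psi, p):
    """Compute E_theta[psi(x)] exactly by summing over all 2^p binary
    assignments -- the exponential cost that motivates approximation."""
    xs = [np.array(b, dtype=float) for b in itertools.product([0, 1], repeat=p)]
    weights = np.array([np.exp(theta @ psi(x)) for x in xs])  # unnormalized P_theta
    probs = weights / weights.sum()                           # divide by exp(A(theta))
    return sum(w * psi(x) for w, x in zip(probs, xs))
```

Already at $p = 30$ this loop would visit over a billion assignments, which is why sampling-based approximations are used instead.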
\subsection{Stochastic Proximal Gradient}
\label{sec:SPG}
To efficiently solve (\ref{eq:l1mrf}), many efforts have been made to combine Gibbs sampling \citep{levin2009markov} and proximal gradient descent \citep{parikh2014proximal} into SPG, a method that adopts the proximal gradient framework to update iterates but uses Gibbs sampling as a stochastic oracle to approximate the gradient whenever gradient information is needed \citep{honorio2012convergence,atchade2014stochastic,miasojedow2016sparse}.
Specifically, Gibbs sampling with $q$ chains running for $\tau$ steps (Gibbs-$\tau$) can generate $q$ samples from $\mathrm{P}_{\bm{\theta}}(\mathbf{x})$. Gibbs-$\tau$ is achieved by applying Gibbs-$1$ iteratively $\tau$ times. Gibbs-$1$ is summarized in Algorithm~\ref{alg:Gibbs-1}, where
\begin{equation}
\label{eq:cond}
\text{P}_{\bm{\theta}}(X_i \given \mathbf{x}_{-i}) = \text{P}_{\bm{\theta}}(x_i \given x_{1}, \cdots, x_{i-1}, x_{i+1}, \cdots, x_{p})
\end{equation}
represents the conditional distribution of $X_i$ given the assignment of the remaining variables $\mathbf{x}_{-i}$ under the parameterization $\bm{\theta}$. Denoting the set of these $q$ (potentially repetitive) samples as $\mathbb{S}$, we can approximate $\mathbb{E}_{\bm{\theta}} \bm{\psi}(\mathbf{x})$ by the easily computable
\begin{equation}
\begin{gathered}
\label{eq:replacement}
\mathbb{E}_{\mathbb{S}} \bm{\psi}(\mathbf{x})=\frac{1}{q} \sum_{\mathbf{x} \in \mathbb{S}} \bm{\psi}(\mathbf{x})
\end{gathered}
\end{equation}
and thus reach the approximated gradient $\bm{\Delta} f(\bm{\theta}) = \mathbb{E}_{\mathbb{S}}\bm{\psi}(\mathbf{x})-\mathbb{E}_{\mathbb{X}}\bm{\psi}(\mathbf{x})$ with the gradient approximation error:
\begin{equation*}
\bm{\delta}(\bm{\theta}) = \bm{\Delta} f(\bm{\theta})-\bm{\nabla} f(\bm{\theta}).
\end{equation*}
By replacing $\bm{\nabla} f(\bm{\theta})$ with $\bm{\Delta} f(\bm{\theta})$ in proximal gradient, the update rule for SPG can be derived as $\bm{\theta}^{(k+1)} = \bm{\mathcal{S}}_{\alpha\lambda}\left(\bm{\theta}^{(k)}-\alpha \bm{\Delta} f(\bm{\theta}^{(k)})\right)$,
where $\alpha>0$ is the step length and $\bm{\mathcal{S}}_{\lambda}(\bm{a})$ is the soft-thresholding operator whose value is also an $m \times 1$ vector, with its $i^{th}$ component defined as $\mathcal{S}_{\lambda}(\bm{a})_i = \mathrm{sgn}(a_i)\max(0, \@ifstar{\oldabs}{\oldabs*}{a_i} - \lambda)$ and $\mathrm{sgn}(a_i)$ is the sign function.
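The update rule can be sketched as follows (a minimal illustration of the soft-thresholding step; \texttt{approx\_grad} stands for the Gibbs-based estimate $\bm{\Delta} f(\bm{\theta})$):

```python
import numpy as np

def soft_threshold(a, lam):
    """Componentwise S_lambda(a)_i = sgn(a_i) * max(0, |a_i| - lambda)."""
    return np.sign(a) * np.maximum(0.0, np.abs(a) - lam)

def spg_step(theta, approx_grad, alpha, lam):
    """One SPG update: theta <- S_{alpha*lambda}(theta - alpha * Delta f(theta))."""
    return soft_threshold(theta - alpha * approx_grad, alpha * lam)
```

Components whose magnitude falls below $\alpha\lambda$ after the gradient step are zeroed out, which is what produces the sparse parameterization.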
By defining
\begin{align}
\label{eq:G}
\begin{split}
\bm{G}_\alpha( \bm{\theta}^{(k)}) := &\frac{1}{\alpha} \left(\bm{\theta}^{(k)} - \bm{\theta}^{(k+1)}\right)
\\= &\frac{1}{\alpha} \left(\bm{\theta}^{(k)} - S_{\alpha\lambda} \left(\bm{\theta}^{(k)}-\alpha \bm{\Delta} f(\bm{\theta}^{(k)})\right)\right),
\end{split}
\end{align}
we can rewrite the previous update rule in a form analogous to the update rule of a standard gradient descent, resulting in the update rule of a \emph{generalized gradient descent} algorithm:
\begin{equation}
\label{eq:ggd}
\bm{\theta}^{(k+1)} = \bm{\theta}^{(k)} - \alpha \bm{G}_\alpha(\bm{\theta}^{(k)}).
\end{equation}
SPG is summarized in Algorithm~\ref{alg:pxgb}. Its gradient evaluation procedure based on Algorithm~\ref{alg:Gibbs-1} is given in Algorithm~\ref{alg:grad}.
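The final step of GRAD reduces to a sample average (a sketch; \texttt{samples} is the multiset $\mathbb{S}$ produced by the Gibbs chains and \texttt{data\_moments} is the precomputed $\mathbb{E}_{\mathbb{X}}\bm{\psi}(\mathbf{x})$):

```python
import numpy as np

def approx_gradient(samples, data_moments, psi):
    """Delta f(theta) = E_S[psi(x)] - E_X[psi(x)]: empirical moments of
    the Gibbs samples minus the moments of the training data."""
    model_moments = np.mean([psi(x) for x in samples], axis=0)
    return model_moments - data_moments
```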
\section{MOTIVATION}
\label{sec:convergence-rate-err}
Both the practical performance and the theoretical guarantees of SPG are still far from satisfactory. Empirically, there are no convincing schemes for selecting $\tau$ and $q$, which hinders the efficiency and accuracy of SPG. Theoretically, to the best of our knowledge, existing non-asymptotic convergence rate guarantees can only be achieved for SPG with an averaging scheme \citep{schmidt2011convergence, honorio2012convergence, atchade2014stochastic} (see also Section~\ref{sec:convergence-rate}), instead of ordinary SPG. In contrast, in the exact proximal gradient descent method, the objective function value is non-increasing and converges to the optimal value under some mild assumptions \citep{parikh2014proximal}. In Section~\ref{sec:decreasing-objective}, we identify that the absence of a non-asymptotic convergence rate guarantee for SPG primarily comes from the existence of the gradient approximation error $\bm{\delta}(\bm{\theta})$. In Section~\ref{sec:convergence-rate}, we further validate that the objective function value achieved by SPG is also highly dependent on $\bm{\delta}(\bm{\theta})$. These issues call for inspecting and controlling $\bm{\delta}(\bm{\theta})$ in each iteration.
\subsection{Setup and Assumptions}
For the ease of presentation, we rewrite the objective function in (\ref{eq:l1mrf}) as $g(\bm{\theta}) = f(\bm{\theta})+h(\bm{\theta})$,
where $h(\bm{\theta}) = \lambda \Norm{\bm{\theta}}_1$, and $f(\bm{\theta})$ is given in (\ref{eq:f}). Since $\bm{\nabla} f(\bm{\theta})$ is Lipschitz continuous \citep{honorio2012lipschitz}, we denote its Lipschitz constant as $L$. Following \cite{schmidt2011convergence}, we also assume that $\alpha \le 1/L$.
\subsection{Decreasing Objective}
\label{sec:decreasing-objective}
It is well-known that exact proximal gradient enjoys a $O\left(\frac{1}{k}\right)$ convergence rate \citep{parikh2014proximal}. One premise for this convergence result is that the objective function value decreases in each iteration. However, satisfying the decreasing condition is much more intricate in the context of SPG. Theorem~\ref{thm:obj-decrease} clearly points out that $\bm{\delta}(\bm{\theta})$ is one main factor determining whether the objective function decreases in SPG.
\begin{theorem}\normalfont
\label{thm:obj-decrease}
Let $\bm{\theta}^{(k)}$ be the iterate of SPG after the $k^{th} $ iteration. Let $\bm{\theta}^{(k+1)}$ be defined as in (\ref{eq:ggd}). With $\alpha \le 1/L$, we have
\begin{align*}
\begin{split}
g(\bm{\theta}^{(k+1)}) - g(\bm{\theta}^{(k)}) \le
\alpha \bm{\delta} (\bm{\theta}^{(k)})^\top \bm{G}_\alpha &(\bm{\theta}^{(k)})
\\- &\frac{\alpha}{2}\@ifstar{\oldnorm}{\oldnorm*}{\bm{G}_\alpha(\bm{\theta}^{(k)})}_2^2.
\end{split}
\end{align*}
Furthermore, a sufficient condition for $g(\bm{\theta}^{(k+1)})< g(\bm{\theta}^{(k)})$ is
\begin{equation*}
\@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2 < \frac{1}{2}\@ifstar{\oldnorm}{\oldnorm*}{\bm{G}_\alpha(\bm{\theta}^{(k)})}_2.
\end{equation*}
\end{theorem}
According to Theorem~\ref{thm:obj-decrease}, if the magnitude of the noise, quantified by $\@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2$, is reasonably small, the objective function value decreases in each iteration. Under this condition, we can further establish theoretical support for the convergence rate of the objective function value in Section~\ref{sec:convergence-rate}.
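In code, the sufficient condition reads as follows (a sketch: the exact error $\bm{\delta}(\bm{\theta}^{(k)})$ is not observable in practice, so \texttt{delta\_bound} would come from a verifiable upper bound on $\@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2$ such as the ones derived in Section~\ref{sec:bound}):

```python
import numpy as np

def descent_guaranteed(delta_bound, theta, theta_next, alpha):
    """Check the sufficient descent condition of Theorem 1:
    ||delta(theta)||_2 < 0.5 * ||G_alpha(theta)||_2, where
    G_alpha(theta) = (theta - theta_next) / alpha as in (7)."""
    G = (theta - theta_next) / alpha
    return delta_bound < 0.5 * np.linalg.norm(G)
```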
\subsection{Convergence Rate}
\label{sec:convergence-rate}
Assuming that $\bm{\delta}(\bm{\theta})$ is small enough in each iteration to generate a decreasing objective value sequence, we can derive Theorem~\ref{thm:convergence-rate} following Proposition 1 in \cite{schmidt2011convergence}:
\begin{theorem}\normalfont
\label{thm:convergence-rate}
Let $\mathcal{K}=(\bm{\theta}^{(0)},\bm{\theta}^{(1)},\bm{\theta}^{(2)},\cdots,\bm{\theta}^{(\kappa)})$ be the iterates generated by Algorithm~\ref{alg:pxgb}. Then if $g(\bm{\theta}^{(k+1)}) \le g(\bm{\theta}^{(k)})$ with $k \in \{1, 2, \cdots, \kappa-1 \}$, we have
\begin{align}
\label{eq:con-avg}
\begin{split}
g (\bm{\theta}^{(\kappa)}) - g(\hat{\bm{\theta}}) \le &\\
\frac{L}{2\kappa}& \left(\@ifstar{\oldnorm}{\oldnorm*}{\bm{\theta}^{(0)}-\hat{\bm{\theta}}}_2 + \frac{2}{L}\sum_{k=1}^\kappa \@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2 \right)^2.
\end{split}
\end{align}
\end{theorem}
Recall that $\hat{\bm{\theta}}$ is an optimal solution to the sparse MLE problem defined in (\ref{eq:l1mrf}). From (\ref{eq:con-avg}), it is obvious that if the gradient approximation error is reasonably small, then during the early iterations of SPG, $\@ifstar{\oldnorm}{\oldnorm*}{\bm{\theta}^{(0)}-\hat{\bm{\theta}}}_2$ dominates $\frac{2}{L}\sum_{k=1}^\kappa \@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2$. Therefore, in the beginning, the convergence rate is $O(1/\kappa)$. However, as the iterations proceed, $\frac{2}{L}\sum_{k=1}^\kappa \@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2$ accumulates, and hence in practice SPG can only maintain a convergence rate of $O(1/\kappa)$ up to some noise level that is closely related to $\bm{\delta}(\bm{\theta}^{(k)})$. Therefore, $\bm{\delta}(\bm{\theta}^{(k)})$ plays an important role in the performance of SPG.
Notice that Theorem~\ref{thm:convergence-rate} offers a convergence analysis of the objective function value at the last iterate, $g(\bm{\theta}^{(\kappa)})$. This result is different from the existing non-asymptotic analyses of $g(\sum_{k=1}^{\kappa} \bm{\theta}^{(k)}/{\kappa})$, the objective function evaluated on the average of all the visited solutions \citep{schmidt2011convergence, honorio2012convergence, atchade2014stochastic}. Theorem~\ref{thm:convergence-rate} is more practical than the previous analyses, since $\sum_{k=1}^{\kappa} \bm{\theta}^{(k)}/{\kappa}$ is a dense parameterization not applicable to structure learning.
According to the analysis above, we need to control $\bm{\delta}(\bm{\theta}^{(k)})$ in each iteration to achieve a decreasing and $O\left( \frac{1}{k} \right)$-converging objective function value sequence. Therefore, we focus on checkable bounds for gradient approximation error in Section~\ref{sec:bound}.
\section{MAIN RESULTS}
\label{sec:bound}
In this section, we derive an asymptotic and a non-asymptotic bound to control the gradient approximation error $\bm{\delta}(\bm{\theta}^{(k)})$ in each iteration. For this purpose, we consider an arbitrary $\bm{\theta}$, and perform gradient approximation via Gibbs-$\tau$ using Algorithm~\ref{alg:grad}, given an initial value for the Gibbs sampling algorithm, $\tilde{\mathbf{x}}_0$. By bounding $\bm{\delta}(\bm{\theta})$, we can apply the same technique to address $\bm{\delta}(\bm{\theta}^{(k)})$.
We first provide a bound for the magnitude of the conditional expectation of $\bm{\delta}(\bm{\theta})$, $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$, in Section~\ref{sec:asy-bound}. Based on this result, we further draw a non-asymptotic bound for the magnitude of the gradient approximation error, $\Norm{\bm{\delta}(\bm{\theta})}_2$, in Section~\ref{sec:non-asy-bound}. Both results are \emph{verifiable} in each iteration.
For the derivation of the conclusions, we will focus on binary pairwise Markov networks (BPMNs). Given $\mathbf{x}\in\curly{0,1}^p$ and $\bm{\theta}$, a binary pairwise Markov network \citep{hofling2009estimation, geng2017efficient} is defined as:
\begin{equation}
\label{eq:bpmn}
\mathrm{P}_{\bm{\theta}} (\mathbf{x}) = \frac{1}{Z(\bm{\theta})} \exp \left(\sum_{i=1}^p\sum_{j\ge i}^p\theta_{ij}x_ix_j\right),
\end{equation}
where $Z(\bm{\theta})=\exp(A(\bm{\theta}))$ is the partition function. $\theta_{ij}$ is a component of $\bm{\theta}$ that represents the strength of conditional dependence between $X_i$ and $X_j$.
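For intuition, the single-site conditionals of this model have a closed form, $\mathrm{P}_{\bm{\theta}}(X_i = 1 \mid \mathbf{x}_{-i}) = \sigma(\theta_{ii} + \sum_{j\ne i}\xi_{ij}x_j)$ with $\xi_{ij} = \theta_{\min\curly{i,j},\max\curly{i,j}}$, which is what Gibbs-$\tau$ iterates. A minimal Python sketch (the function name \texttt{gibbs\_tau} and the full-sweep update schedule are illustrative assumptions, not specified in the paper):

```python
import numpy as np

def gibbs_tau(theta, x0, tau, rng):
    """Run Gibbs sampling for tau full sweeps on a binary pairwise
    Markov network with P(x) proportional to exp(sum_{i<=j} theta[i,j]*x_i*x_j).
    theta is p x p upper-triangular; x0 is the initial state in {0,1}^p."""
    p = theta.shape[0]
    # Symmetrize the off-diagonal so xi[i, j] = theta[min(i,j), max(i,j)];
    # the diagonal of xi is zero, so x_i never enters its own logit.
    xi = np.triu(theta, 1) + np.triu(theta, 1).T
    x = x0.copy()
    for _ in range(tau):
        for i in range(p):  # one sweep resamples every coordinate once
            logit = theta[i, i] + xi[i] @ x
            prob1 = 1.0 / (1.0 + np.exp(-logit))
            x[i] = 1 if rng.random() < prob1 else 0
    return x
```

Running $q$ independent chains of this sampler yields the samples used by Algorithm~\ref{alg:grad} for gradient approximation.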
\subsection{An Asymptotic Bound}
\label{sec:asy-bound}
We first consider the magnitude of the conditional expectation of $\bm{\delta}(\bm{\theta})$ with respect to $\tilde{\mathbf{x}}_\tau$, $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$. To this end, we define $\mathbf{U}$, a $p\times p$ \emph{computable} matrix that depends on $\bm{\theta}$ and the type of MRF in question. $U_{ij}$, the component in the $i^{th}$ row and the $j^{th}$ column of $\mathbf{U}$, is defined as follows:
\begin{align}
\label{eq:U}
U_{ij} = \frac{\lvert \exp\left(-\xi_{ij}\right) - 1 \rvert b^*}{\left(1+b^*\exp\left(-\xi_{ij}\right)\right)(1+b^*)},
\end{align}
where
\begin{gather*}
b^* = \max\curly{r,\min\curly{s,\exp\left(\frac{\xi_{ij}}{2}\right)}},\\
s = \exp \left( - \xi_{ii} -\sum_{k\ne i,k\ne j}\xi_{ik} \max\curly{-\mathrm{sgn}(\xi_{ik}),0} \right),\\
r = \exp \left(- \xi_{ii} - \sum_{k\ne i,k\ne j} \xi_{ik} \max\curly{\mathrm{sgn}(\xi_{ik}),0} \right),
\end{gather*}
and $\mathrm{sgn}(\cdot)$ is the sign function, with $\xi_{ij} = \theta_{\min\curly{i,j},\max\curly{i,j}}$.
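The construction of $\mathbf{U}$ from $\bm{\theta}$ is mechanical. A Python sketch, assuming the convention $\xi_{ii}=\theta_{ii}$ and a zero diagonal for $\mathbf{U}$ (the helper name \texttt{influence\_bound} is ours, not from the paper):

```python
import numpy as np

def influence_bound(theta):
    """Compute U, the p x p upper bound on the Dobrushin influence
    matrix of a BPMN, following (eq:U). Illustrative helper."""
    p = theta.shape[0]
    xi = np.triu(theta) + np.triu(theta, 1).T  # xi[i, j] = theta[min, max]
    U = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            if i == j:
                continue
            others = [k for k in range(p) if k != i and k != j]
            neg = sum(xi[i, k] for k in others if xi[i, k] < 0)
            pos = sum(xi[i, k] for k in others if xi[i, k] > 0)
            s = np.exp(-xi[i, i] - neg)       # -sum of negative couplings
            r = np.exp(-xi[i, i] - pos)       # -sum of positive couplings
            b = max(r, min(s, np.exp(xi[i, j] / 2.0)))   # b* clamps exp(xi/2)
            e = np.exp(-xi[i, j])
            U[i, j] = abs(e - 1.0) * b / ((1.0 + b * e) * (1.0 + b))
    return U
```

Since $r \le s$ always holds, $b^*$ is simply $\exp(\xi_{ij}/2)$ clamped to the interval $[r, s]$.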
We then define $\mathbf{B}_i$ as a $p\times p$ identity matrix except that its $i^{th}$ row is replaced by the $i^{th}$ row of $\mathbf{U}$, with $i\in\curly{1,2,\cdots,p}$. We further define
\begin{equation*}
\mathbf{B} = \mathbf{B}_p \mathbf{B}_{p-1} \mathbf{B}_{p-2} \cdots \mathbf{B}_i \cdots\mathbf{B}_1
\end{equation*}
and the grand sum $\mathscr{G}(\mathbf{B}) = \sum_{i=1}^p\sum_{j=1}^p B_{ij}$, where $B_{ij}$ is the entry in the $i^{th}$ row and the $j^{th}$ column of $\mathbf{B}$. With the definitions above, $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$ can be upper bounded by Theorem~\ref{thm:exp-err-bound}.
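Assembling $\mathbf{B}$ and the grand sum of $\mathbf{B}^\tau$ from $\mathbf{U}$ takes a few lines of linear algebra. A Python sketch (the helper name \texttt{grand\_sum\_bound} is an illustrative assumption):

```python
import numpy as np

def grand_sum_bound(U, tau):
    """Build B = B_p B_{p-1} ... B_1, where each B_i is the identity
    with row i replaced by row i of U, then return the grand sum of
    B^tau, i.e. the scalar G(B^tau) = sum_{i,j} (B^tau)_{ij}."""
    p = U.shape[0]
    B = np.eye(p)
    for i in range(p):          # left-multiply B_1 first, B_p last
        Bi = np.eye(p)
        Bi[i] = U[i]
        B = Bi @ B
    return np.linalg.matrix_power(B, tau).sum()
```

The asymptotic bound of Theorem~\ref{thm:exp-err-bound} below is then simply \texttt{2 * np.sqrt(m) * grand\_sum\_bound(U, tau)}.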
\begin{theorem}\label{thm:exp-err-bound}\normalfont
Let $\tilde{\mathbf{x}}_\tau$ be the sample generated after running Gibbs sampling for $\tau$ steps (Gibbs-$\tau$) under the parameterization $\bm{\theta}$ initialized by $\tilde{\mathbf{x}}_0\in\curly{0,1}^p$; then with $m$ denoting the size of sufficient statistics, the following inequality holds:
\begin{equation}
\label{eq:exp-err-bound}
\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2 \le 2\sqrt{m} \mathscr{G}(\mathbf{B}^\tau),
\end{equation}
where $\mathbf{B}^\tau$ represents the $\tau^{th}$ power of $\mathbf{B}$.
\end{theorem}
In Theorem~\ref{thm:exp-err-bound}, the bound provided is not only observable in each iteration, but also efficient to compute, offering a convenient method to inspect the quality of the gradient approximation. When the spectral norm of $\mathbf{U}$ is less than $1$, the right-hand side of (\ref{eq:exp-err-bound}) converges to $0$ as $\tau$ grows. Thus, by increasing $\tau$, we can decrease $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau}[\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$ to an arbitrarily small value.
Theorem~\ref{thm:exp-err-bound} is derived by bounding the influence of one variable on another variable in $\mathbf{X}$ (i.e., the Dobrushin influence defined in Definition~\ref{def:d-influence-matrix}) with $\mathbf{U}$. Furthermore, $\mathbf{U}$ defined in (\ref{eq:U}) is a sharp bound of the Dobrushin influence whenever $b^* \neq \exp\left(\frac{\xi_{ij}}{2}\right)$, explaining why (\ref{eq:exp-err-bound}) using the definition of $\mathbf{U}$ is tight enough for practical applications.
\subsection{A Non-Asymptotic Bound}
\label{sec:non-asy-bound}
In order to provide a non-asymptotic guarantee for the quality of the gradient approximation, we need to concentrate $\Norm{\bm{\delta}(\bm{\theta})}_2$ around $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta}) \mid \tilde{\mathbf{x}}_0]}_2$. Let $q$, defined in Section~\ref{sec:SPG}, be given. Then, $q$ trials of Gibbs sampling are run, resulting in $q$ samples, $\{\tilde{\mathbf{x}}_\tau^{(1)}, \tilde{\mathbf{x}}_\tau^{(2)}, \cdots, \tilde{\mathbf{x}}_\tau^{(q)}\}$. That is to say, for each sufficient statistic, $\psi_j(\bm{\theta})$, with $j \in \{1, 2, \cdots, m\}$, we have $q$ samples, $\curly{\psi_j^{(1)}(\bm{\theta}) , \psi_j^{(2)}(\bm{\theta}) ,\cdots, \psi_j^{(q)}(\bm{\theta})}$. Defining the sample variance of the corresponding sufficient statistics as $V_{\psi_j}$, Theorem~\ref{thm:sample-err-bound} provides a non-asymptotic bound for $\Norm{\bm{\delta}(\bm{\theta})}_2$:
\begin{theorem}\normalfont
\label{thm:sample-err-bound}
Let $\bm{\theta}$, $q$, and an arbitrary $\tilde{\mathbf{x}}_0\in\curly{0,1}^p$ be given. Let $m$ represent the dimension of $\bm{\theta}$ and $\Norm{\bm{\delta}(\bm{\theta})}_2$ represent the magnitude of the gradient approximation error by running $q$ trials of Gibbs-$\tau$ initialized by $\tilde{\mathbf{x}}_0$. Compute $\mathbf{B}$ according to Section~\ref{sec:asy-bound} and choose $\epsilon_j>0$. Then, with probability at least $1 - 2 \sum_{j = 1}^m \beta_j$, where $\beta_j > 0$, $j\in\curly{1,2,\cdots,m}$,
\begin{equation}
\begin{gathered}
\label{eq:sample-err-bound}
\Norm{ \bm{\delta}(\bm{\theta})}_2 \le 2\sqrt{m}\left(\mathscr{G}(\mathbf{B}^\tau)+\sqrt{\frac{\sum_{ j = 1}^m \epsilon_j^2}{4m}}\right),
\end{gathered}
\end{equation}
with $\beta_j$ satisfying
\begin{equation}
\begin{gathered}
\epsilon_j =2 \left(\sqrt{\frac{V_{\psi_j}\ln 2/\beta_j } {2q}} + \frac{7\ln2/\beta_j}{3(q-1)}\right).
\end{gathered}
\end{equation}
\end{theorem}
Notice that the bound in Theorem~\ref{thm:sample-err-bound} is easily \emph{checkable}, i.e., given $\tau$, $q$, $V_{\psi_j}$'s, and $\bm{\theta}$, we can determine a bound for $\Norm{ \bm{\delta}(\bm{\theta})}_2$ that holds with high probability. Furthermore, Theorem~\ref{thm:sample-err-bound} provides the sample complexity needed for gradient estimation. Specifically, given small enough $\beta_j$'s, if we let
\begin{equation*}
\mathscr{G}(\mathbf{B}^\tau) = \sqrt{\sum_{ j = 1}^m \epsilon_j^2/4m},
\end{equation*}
we can show that
\begin{equation*}
2\sqrt{m}\left(\mathscr{G}(\mathbf{B}^\tau)+\sqrt{\sum_{ j = 1}^m \epsilon_j^2/4m}\right)= O \left (\frac{1}{q}\right ).
\end{equation*}
That is to say, by assuming that $\mathscr{G}(\mathbf{B}^\tau)$ and $\sqrt{\sum_{ j = 1}^m \epsilon_j^2/4m}$ share the same scale, the upper bound of the gradient approximation error converges to 0 as $q$ increases. Moreover, we include the sample variances, $V_{\psi_j}$'s, in (\ref{eq:sample-err-bound}), because the information they provide leads to an improved data-dependent bound.
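Evaluating the non-asymptotic bound is a direct computation once $\mathscr{G}(\mathbf{B}^\tau)$, the $V_{\psi_j}$'s, $q$, and the $\beta_j$'s are fixed. A Python sketch, reading ``$\ln 2/\beta_j$'' as $\ln(2/\beta_j)$ (the helper name \texttt{nonasymptotic\_bound} is ours):

```python
import numpy as np

def nonasymptotic_bound(grand_sum, V, q, beta):
    """Evaluate (eq:sample-err-bound): 2*sqrt(m)*(G(B^tau) +
    sqrt(sum_j eps_j^2 / (4m))), with eps_j from the empirical-
    Bernstein-style expression in Theorem thm:sample-err-bound."""
    V = np.asarray(V, dtype=float)
    beta = np.asarray(beta, dtype=float)
    m = V.size
    log_term = np.log(2.0 / beta)
    eps = 2.0 * (np.sqrt(V * log_term / (2.0 * q))
                 + 7.0 * log_term / (3.0 * (q - 1)))
    return 2.0 * np.sqrt(m) * (grand_sum + np.sqrt((eps ** 2).sum() / (4.0 * m)))
```

The bound tightens as $q$ grows, but never drops below the asymptotic part $2\sqrt{m}\,\mathscr{G}(\mathbf{B}^\tau)$.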
\section{PROOF SKETCH OF MAIN RESULTS}
\label{sec:proof-sketch}
As mentioned in Section~\ref{sec:non-asy-bound}, the non-asymptotic result in Theorem~\ref{thm:sample-err-bound} is derived from the asymptotic bound in Theorem~\ref{thm:exp-err-bound} via concentration inequalities. We therefore only highlight the proof of Theorem~\ref{thm:exp-err-bound} in this section, and defer the other technical results to the Supplements. Specifically, the proof of Theorem~\ref{thm:exp-err-bound} is divided into two parts: bounding $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$ by the total variation distance (Section~\ref{sec:bound-expectation-tv}) and bounding the total variation distance (Section~\ref{sec:bound-TV}).
\subsection{Bounding \texorpdfstring{$\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$}{TEXT} by the Total Variation Distance}
\label{sec:bound-expectation-tv}
To quantify $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$, we first introduce the concept of total variation distance \citep{levin2009markov} that measures the distance between two distributions over $\curly{0,1}^p$.
\begin{definition}\normalfont
Let $u(\mathbf{x})$ and $v(\mathbf{x})$ be two probability distributions of $\mathbf{x}\in\curly{0,1}^p$. Then the total variation distance between $u(\mathbf{x})$ and $v(\mathbf{x})$ is given as:
\begin{equation*}
\Norm{u(\mathbf{x})-v(\mathbf{x})}_{\text{TV}} = \frac{1}{2} \hspace{-2mm} \sum_{\mathbf{x}\in\curly{0,1}^p} \hspace{-2mm} \lvert u(\mathbf{x})-v(\mathbf{x}) \rvert.
\end{equation*}
\end{definition}
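For small $p$, this distance can be computed exactly by enumerating the state space. A Python sketch (representing each distribution as a dict from binary tuples to probabilities is our illustrative choice):

```python
def tv_distance(u, v):
    """Total variation distance between two distributions over {0,1}^p,
    given as dicts mapping binary tuples to probabilities."""
    states = set(u) | set(v)
    return 0.5 * sum(abs(u.get(x, 0.0) - v.get(x, 0.0)) for x in states)
```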
With the definition above, $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$ can be upper bounded by the total variation distance between two distributions ($\mathrm{P}_\tau(\mathbf{x}\mid\tilde{\mathbf{x}}_0)$ and $\mathrm{P}_{\bm{\theta}}(\mathbf{x})$) using the following lemma:
\begin{lemma}\label{lem:exp-err}\normalfont
Let $\tilde{\mathbf{x}}_\tau$ be the sample generated after running Gibbs sampling for $\tau$ steps (Gibbs-$\tau$) under the parameterization $\bm{\theta}$ initialized by $\tilde{\mathbf{x}}_0\in\curly{0,1}^p$; then the following is true:
\begin{equation*}
\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2 \le 2\sqrt{m}\Norm{\mathrm{P}_\tau(\mathbf{x}\mid\tilde{\mathbf{x}}_0) -\mathrm{P}_{\bm{\theta}}(\mathbf{x})}_{\text{TV}}.
\end{equation*}
\end{lemma}
With Lemma~\ref{lem:exp-err}, bounding $\Norm{\mathbb{E}_{\tilde{\mathbf{x}}_\tau} [\bm{\delta}(\bm{\theta})\mid \tilde{\mathbf{x}}_0]}_2$ reduces to bounding the total variation distance $\Norm{\mathrm{P}_\tau(\mathbf{x}\mid\tilde{\mathbf{x}}_0) -\mathrm{P}_{\bm{\theta}}(\mathbf{x})}_{\text{TV}}$. Recent advances in the quality control of Gibbs samplers offer \emph{verifiable} upper bounds for $\Norm{\mathrm{P}_\tau(\mathbf{x}\mid\tilde{\mathbf{x}}_0) -\mathrm{P}_{\bm{\theta}}(\mathbf{x})}_{\text{TV}}$ for the learning of a variety of MRFs \citep{mitliagkas2017improving}. However, they cannot be applied to BPMNs because of the positivity constraint on parameters. We describe these next.
\subsection{Bounding \texorpdfstring{$\Norm{\mathrm{P}_\tau(\mathbf{x}\mid\tilde{\mathbf{x}}_0) -\mathrm{P}_{\bm{\theta}}(\mathbf{x})}_{\text{TV}}$}{TEXT}}
\label{sec:bound-TV}
Now we generalize the analysis in \cite{mitliagkas2017improving} to BPMNs without constraints on the sign of parameters by introducing the definition of the Dobrushin influence matrix and a technical lemma.
\begin{definition}[Dobrushin influence matrix]\normalfont
\label{def:d-influence-matrix}
The Dobrushin influence matrix of $\mathrm{P}_{\bm{\theta}}(\mathbf{x})$ is a $p\times p$ matrix $\mathbf{C}$ with its component in the $i^{th}$ row and the $j^{th}$ column, $C_{ij}$, representing the influence of $X_j$ on $X_i$ given as:
\begin{equation*}
C_{ij} = \max_{(\mathbf{X},\mathbf{Y}) \in N_j} \Norm{\mathrm{P}_{\bm{\theta}}(X_i\mid\mathbf{X}_{-i})-\mathrm{P}_{\bm{\theta}}(Y_i\mid\mathbf{Y}_{-i})}_{\text{TV}},
\end{equation*}
where $(\mathbf{X},\mathbf{Y}) \in N_j$ represents $X_l=Y_l$ for all $l\ne j$.
\end{definition}
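For a small BPMN, the Dobrushin influence matrix can be computed by brute force: since each conditional of $X_i$ is Bernoulli, the TV distance between two conditionals is just the absolute difference of their success probabilities. A Python sketch, feasible only for small $p$ because it enumerates all configurations (the helper name \texttt{dobrushin\_influence} is ours):

```python
import numpy as np
from itertools import product

def dobrushin_influence(theta):
    """Brute-force Dobrushin influence matrix C of a BPMN:
    C[i, j] = max over pairs of configurations differing only at site j
    of |P(X_i=1 | x_{-i}) - P(X_i=1 | y_{-i})|."""
    p = theta.shape[0]
    xi = np.triu(theta) + np.triu(theta, 1).T

    def cond1(i, x):  # P(X_i = 1 | x_{-i}) under the BPMN
        logit = xi[i, i] + sum(xi[i, k] * x[k] for k in range(p) if k != i)
        return 1.0 / (1.0 + np.exp(-logit))

    C = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            if i == j:
                continue
            best = 0.0
            for rest in product([0, 1], repeat=p):
                x, y = list(rest), list(rest)
                x[j], y[j] = 0, 1          # flip only coordinate j
                best = max(best, abs(cond1(i, x) - cond1(i, y)))
            C[i, j] = best
    return C
```

On small examples, the entries of this exact $\mathbf{C}$ can be checked against the computable bound $\mathbf{U}$ of Section~\ref{sec:asy-bound}.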
\begin{lemma}\normalfont
\label{lem:u-bpmn}
Let $\mathrm{P}_{\bm{\theta}} (\mathbf{x})$ represent a binary pairwise Markov network defined in (\ref{eq:bpmn}) that is parameterized by $\bm{\theta}$. An upper bound of the Dobrushin influence matrix is given by $\mathbf{U}$ defined in Section~\ref{sec:asy-bound}.
\end{lemma}
It should be noticed that, similar to Theorem 12 in \cite{mitliagkas2017improving}, Lemma~\ref{lem:u-bpmn} provides an exact calculation except when $b^* =\exp\left(\frac{\xi_{ij}}{2}\right)$.
Therefore, we can consider the $\mathbf{U}$ defined in Section~\ref{sec:asy-bound} as an upper bound for the Dobrushin influence matrix of a BPMN and thus apply $\mathbf{U}$ to Theorem 9 in \cite{mitliagkas2017improving}. Then, we have
\begin{equation*}
\Norm{\mathrm{P}_\tau(\mathbf{x}\mid\tilde{\mathbf{x}}_0) -\mathrm{P}_{\bm{\theta}}(\mathbf{x})}_{\text{TV}} \le \mathscr{G}(\mathbf{B}^\tau),
\end{equation*}
where $\mathbf{B}^\tau$ represents the $\tau^{th}$ power of $\mathbf{B}$. Theorem~\ref{thm:exp-err-bound} then follows by combining this with Lemma~\ref{lem:exp-err}.
\section{STRUCTURE LEARNING}
\label{sec:applications}
With the two bounds introduced in Section~\ref{sec:bound}, we can easily examine and control the quality of the gradient approximation in each iteration by choosing $\tau$. In detail, we introduce a criterion for the selection of $\tau$ in each iteration. When the proposed criterion is satisfied, the objective function is guaranteed to decrease asymptotically. That is to say, the difference between $g(\bm{\theta}^{(k+1)})$ and $g(\hat{\bm{\theta}})$ is asymptotically \emph{tightened}, compared with the difference between $g\left(\bm{\theta}^{(k)}\right)$ and $g(\hat{\bm{\theta}})$. Therefore, we refer to the proposed criterion as \ref{eq:criterion-aggressive}. Furthermore, using \ref{eq:criterion-aggressive}, we provide an improved SPG method denoted TAY for short.
Specifically, starting from $\tau = 1$, TAY stops increasing $\tau$ when the following bound is satisfied:
\begin{equation}
\label{eq:criterion-aggressive}
2\sqrt{m} \mathscr{G}(\mathbf{B}^\tau) < \frac{1}{2}\@ifstar{\oldnorm}{\oldnorm*}{\bm{G}_\alpha(\bm{\theta}^{(k)})}_2. \tag{\textsc{TAY-Criterion}}
\end{equation}
We can also derive a non-asymptotic counterpart of \ref{eq:criterion-aggressive} by combining the results of Theorem~\ref{thm:obj-decrease} and Theorem~\ref{thm:sample-err-bound}:
\begin{equation}
\begin{gathered}
\label{eq:criterion-conservative}
0<2\sqrt{m}\left( \mathscr{G}(\mathbf{B}^\tau)+\sqrt{\frac{\sum_{ j = 1}^m \epsilon_j^2}{4m}}\right) \le \frac{1}{2}\@ifstar{\oldnorm}{\oldnorm*}{\bm{G}_\alpha(\bm{\theta}^{(k)})}_2, \\ \epsilon_j =2 \left(\sqrt{\frac{2V_{\psi_j}\ln 2/\beta_j } {4q}} + \frac{7\ln2/\beta_j}{3(q-1)}\right),
\end{gathered}
\end{equation}
where the $V_{\psi_j}$'s and $\beta_j$'s are defined in Theorem~\ref{thm:sample-err-bound}. (\ref{eq:criterion-conservative}) provides the required sample complexity, $q$, for TAY in each iteration. However, the selection of $q$ according to (\ref{eq:criterion-conservative}) is conservative, because it includes the worst-case scenario where the gradient approximation errors in any two iterations cannot offset each other.
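In practice, checking \ref{eq:criterion-aggressive} amounts to raising $\tau$ until the grand-sum bound falls below half the norm of the gradient mapping. A Python sketch of this selection loop (the helper name \texttt{select\_tau} and the safety cap \texttt{tau\_max} are illustrative assumptions):

```python
import numpy as np

def select_tau(B, m, grad_map_norm, tau_max=1000):
    """Return the smallest tau satisfying the TAY-Criterion
    2*sqrt(m)*G(B^tau) < 0.5*||G_alpha(theta)||_2, reusing B^tau
    incrementally; None if tau_max is reached first."""
    Btau = B.copy()
    for tau in range(1, tau_max + 1):
        if 2.0 * np.sqrt(m) * Btau.sum() < 0.5 * grad_map_norm:
            return tau
        Btau = Btau @ B      # advance to the next power of B
    return None
```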
In Section~\ref{sec:tay-criterion} and \ref{sec:tay}, we theoretically analyze the performance guarantees of \ref{eq:criterion-aggressive} and the convergence of TAY, respectively.
\subsection{ Guarantees of \ref{eq:criterion-aggressive}}
\label{sec:tay-criterion}
The theorem below provides the performance guarantee for \ref{eq:criterion-aggressive} in each iteration.
\begin{theorem}\normalfont
\label{thm:criterion-decrease}
Let $\bm{\theta}^{(k)}$ and $\tilde{\mathbf{x}}_0$ be given. Let $q$ and $\mathbf{B}$, defined in Theorem~\ref{thm:sample-err-bound}, be given. For $\bm{\theta}^{(k+1)}$ generated in Algorithm~\ref{alg:pxgb} using \ref{eq:criterion-aggressive}, the following is true:
\begin{align*}
\lim_{q \to \infty}\mathrm{P}\left(g(\bm{\theta}^{(k+1)}) < g(\bm{\theta}^{(k)}) \given
\quad\quad\quad\quad\quad\quad\quad \right.\\
\left. \quad2\sqrt{m} \mathscr{G}(\mathbf{B}^\tau) < \frac{1}{2}\@ifstar{\oldnorm}{\oldnorm*}{\bm{G}_\alpha(\bm{\theta}^{(k)})}_2 \right) = 1.
\end{align*}
\end{theorem}
Theorem~\ref{thm:criterion-decrease} states that the objective function value decreases with large $q$. Specifically, \ref{eq:criterion-aggressive} assumes that the upper bound of the conditional expectation of $\Norm{\bm{\delta}(\bm{\theta})}_2$ is small enough to satisfy the sufficient condition proven in Theorem~\ref{thm:obj-decrease}. When the number of samples $q$ is large enough, $\Norm{\bm{\delta}(\bm{\theta})}_2$ itself is very likely to meet the condition, and hence the objective function is also likely to decrease when \ref{eq:criterion-aggressive} is satisfied.
\subsection{ Convergence of TAY}
\label{sec:tay}
Finally, based on Theorem~\ref{thm:convergence-rate} and Theorem~\ref{thm:criterion-decrease}, we derive the following theorem on the convergence of TAY.
\begin{theorem}\normalfont
\label{thm:convergence-rate-tay}
Let $\mathcal{K}=(\bm{\theta}^{(0)},\bm{\theta}^{(1)},\bm{\theta}^{(2)},\cdots,\bm{\theta}^{(\kappa)})$ be the iterates generated by TAY. Then, with $k \in \{1, 2, \cdots, \kappa-1 \}$, the following is true:
\begin{minipage}{\linewidth}
\begin{align*}
\lim_{q \to \infty}&\mathrm{P}\left[g (\bm{\theta}^{(\kappa)}) - g(\hat{\bm{\theta}}) \le \vphantom{\frac{L}{2\kappa} \left(\@ifstar{\oldnorm}{\oldnorm*}{\bm{\theta}^{(0)}-\hat{\bm{\theta}}}_2 + \frac{2}{L}\sum_{k=1}^\kappa \@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2 \right)^2} \right.\\
&\left.\frac{L}{2\kappa} \left(\@ifstar{\oldnorm}{\oldnorm*}{\bm{\theta}^{(0)}-\hat{\bm{\theta}}}_2 + \frac{2}{L}\sum_{k=1}^\kappa \@ifstar{\oldnorm}{\oldnorm*}{\bm{\delta}(\bm{\theta}^{(k)})}_2 \right)^2
\right] = 1,
\end{align*}
\end{minipage}
where $\hat{\bm{\theta}}$ is defined in (\ref{eq:l1mrf}).
\end{theorem}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[scale=0.20]{./support/structureLearningAUC1025.pdf}
\caption{AUC v.s. Time \\10 nodes}\label{fig:time1}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[scale=0.20]{./support/structureLearningAUCStep1025.pdf}
\caption{AUC v.s. Iterations\\ 10 nodes}\label{fig:iterations1}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[scale=0.20]{./support/structureLearningTau1025.pdf}
\caption{$\tau$ v.s. Iterations \\ 10 nodes}
\end{subfigure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[scale=0.20]{./support/structureLearningAUC20.pdf}
\caption{AUC v.s. Time \\20 nodes}\label{fig:time2}
\end{subfigure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[scale=0.20]{./support/structureLearningAUCStep20.pdf}
\caption{AUC v.s. Iterations\\ 20 nodes}\label{fig:iterations2}
\end{subfigure}
\centering
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[scale=0.20]{./support/structureLearningTau20.pdf}
\caption{$\tau$ v.s. Iterations \\ 20 nodes}
\end{subfigure}
\vspace{-0.3cm}
\caption{Area under curve (AUC) and the steps of Gibbs sampling ($\tau$) in each iteration for structure learning of 10-node and 20-node networks.}
\label{fig:structure-learning}
\end{figure*}
\subsection{Generalizations}
As we demonstrate in Section~\ref{sec:bound} and Section~\ref{sec:proof-sketch}, the derivation of our main results relies on bounding the Dobrushin influence with $\mathbf{U}$, and we show a procedure to construct $\mathbf{U}$ in the context of BPMNs. Moreover, \cite{mitliagkas2017improving} and \cite{liu2014projecting} provide upper bounds $\mathbf{U}$ for other types of discrete pairwise MRFs. Therefore, combined with their results, our framework can also be applied to other discrete pairwise Markov networks. Focusing on pairwise MRFs entails no loss of generality, since any discrete MRF can be transformed into a pairwise one \citep{wainwright2008graphical, ravikumar2010high}.
\section{EXPERIMENTS}
\label{sec:exp}
We demonstrate that the structure learning of discrete MRFs benefits substantially from the application of TAY with synthetic data and that the bound provided on the gradient estimation error by Theorem~\ref{thm:exp-err-bound} is tighter than existing bounds. To illustrate that TAY is readily available for practical problems, we also run TAY using a real world dataset.
Owing to space limitations, we only report experiments under one representative set of configurations. Exhaustive results using different experimental configurations are presented in the Supplements.
\subsection{Structure Learning}
\label{sec:structure-learning}
In order to demonstrate the utility of TAY for effectively learning the structures of BPMNs, we simulate two BPMNs (one with 10 nodes and the other one with 20 nodes):
\begin{prettyitem}{*}
\item We set the number of features to $p = 10$ ($p = 20$). Components of $\bm{\theta}$ in the ground truth model are randomly chosen to be nonzero with an edge generation probability of $0.3$. The nonzero components are drawn uniformly from $[-2,-1]\cup[1,2]$.
\item 1000 (2000 for 20 nodes) samples are generated by Gibbs sampling with 1000 burn-in steps.
\item The results are averaged over 10 trials.
\end{prettyitem}
The sizes of the BPMNs generated in this paper are comparable to those in \citep{honorio2012convergence, atchade2014stochastic, miasojedow2016sparse}.
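The ground-truth parameter generation described above can be sketched in a few lines of Python (the helper name \texttt{simulate\_theta} is ours):

```python
import numpy as np

def simulate_theta(p, edge_prob=0.3, rng=None):
    """Sample a ground-truth upper-triangular parameter matrix as in the
    experiments: each edge appears with probability edge_prob, and
    nonzero entries are drawn uniformly from [-2, -1] U [1, 2]."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            if rng.random() < edge_prob:
                sign = rng.choice([-1.0, 1.0])
                theta[i, j] = sign * rng.uniform(1.0, 2.0)
    return theta
```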
\begin{figure}[t]
\centering
\includegraphics[scale=0.25]{./support/tightness.pdf}
\caption{The gradient approximation error, the existing bound and the bound (\ref{eq:exp-err-bound}) in the structure learning of a 10-node network.}\label{fig:tightness}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.5]{./support/senatorStructureRe.pdf}
\vspace{0.3cm}
\caption{The result of TAY on the senator voting data: red vertices denote Republicans, blue vertices Democrats, and green vertices Independents.
The figure is rendered by \texttt{Gephi} \citep{bastian2009gephi}.}\label{fig:senate}
\end{figure*}
Then, using the generated samples, we consider SPG and TAY. According to the analysis in Section~\ref{sec:bound}, the quality of the gradient approximation is closely related to the number of Gibbs sampling steps $\tau$. However, for SPG, there are no convincing schemes for selecting $\tau$. We select $\tau = 30$ ($\tau = 60$ for 20 nodes) to ensure that the gradient approximation error is small enough. Furthermore, we also evaluate the performance of the algorithm using an increasing $\tau$ ( $\tau = k$ in the $k^{th}$ iteration), suggested by \cite{atchade2014stochastic} (SPG-Inc).
For a fair comparison, we use the same step length $\alpha = 0.4$ and regularization parameter $\lambda = 0.025$ ($\lambda = 0.017$ for 20 nodes) for the different methods. We do not tune the step length individually for each method, since \cite{atchade2014stochastic} has shown that various learning rate selection schemes have minimal impact on performance in the context of SPG. The number of chains used in Gibbs sampling, $q$, is not a tunable parameter either, since it reflects the allocation of computational resources: for each method, a larger number of samples makes the method slower but more accurate. Furthermore, if the $q$'s differed across methods, it would be difficult to distinguish the effect of $\tau$ from that of $q$. Therefore, we set $q$ to $2000$ for 10-node networks and $5000$ for 20-node networks. The performance of the different methods is compared using the area under the curve (AUC) of the receiver operating characteristic (ROC) for structure learning in Figure~\ref{fig:structure-learning}. The Gibbs sampling steps in each method are also compared in Figure~\ref{fig:structure-learning}.
Notice that we plot AUCs against both time (Figure~\ref{fig:time1} and Figure~\ref{fig:time2}) and iterations (Figure~\ref{fig:iterations1} and Figure~\ref{fig:iterations2}). The two kinds of plots provide different information about the performances of different methods: the former ones focus on overall complexity and the latter illustrate iteration complexity. We run each method until it converges. Using much less time, TAY achieves a similar AUC to SPG with $\tau = 30$ and $\tau = 60$.
Moreover, SPG with $\tau = 1$ reaches the lowest AUC, since the quality of the gradient approximation cannot be guaranteed with such a small $\tau$. Therefore, the experimental results indicate that, in each iteration, TAY adaptively chooses a $\tau$ achieving reasonable accuracy as well as efficiency for structure learning. For a more thorough comparison, we also contrast the performance of TAY with a non-SPG-based method, the pseudo-likelihood method \citep{hofling2009estimation, geng2017efficient}, in the Supplements; the two methods achieve comparable AUCs.
\subsection{Tightness of the Proposed Bound}
According to the empirical results above, TAY needs a $\tau$ only on the order of ten, suggesting that the bound in Theorem~\ref{thm:exp-err-bound} is tight enough for practical applications. To illustrate this more clearly, we compare (\ref{eq:exp-err-bound}) with another bound on the expectation of the gradient approximation error derived by \cite{fischer2015training}.
Specifically, we calculate the gradient approximation error, the bound (\ref{eq:exp-err-bound}), and \cite{fischer2015training}'s bound, in each iteration of learning a 10-node network. The results are reported in Figure~\ref{fig:tightness}. Notice that the bound in \cite{fischer2015training} gets extraordinarily loose with more iterations. Considering this, we may need to run Gibbs chains for thousands of steps if we use this bound. In contrast, bound (\ref{eq:exp-err-bound}) is close to, and even slightly less than, the real error. This is reflective of the fact that the proposed bound is on the expectation instead of the error itself. As a result, (\ref{eq:exp-err-bound}) is much tighter and thus more applicable.
\subsection{Real World Data}
In our final experiment, we run TAY on the Senate voting data from the second session of the $109^{th}$ Congress \citep{USSenate38:online}. The dataset has 279 samples and 100 variables. Each sample records the votes cast by the 100 senators on a particular bill, where $0$ represents nay and $1$ represents yea. Missing data are imputed as $0$'s. The task of interest is to learn a BPMN model that identifies clusters representing the dependency between the voting inclination of each senator and the party with which the senator is affiliated.
We use TAY with $\alpha = 0.4$; 5000 Markov chains are used for Gibbs sampling. Since our task is exploratory analysis, $\lambda=0.1$ is selected in order to deliver an interpretable result. The proposed algorithm is run for 100 iterations. The resultant BPMN is shown in Figure~\ref{fig:senate}, where each node represents the voting record of a senator and the edges represent positive dependencies between the pairs of senators connected. The nodes in red represent Republicans and the nodes in blue represent Democrats. The clustering effects of voting consistency within a party are captured, coinciding with conventional wisdom. More interestingly, Jay Rockefeller, as a Democrat, has many connections with Republicans. This is consistent with the fact that his family has been a ``traditionally Republican dynasty'' \citep{wiki:JayRocke32}.
\section{CONCLUSION}
We consider SPG for $l_1$-regularized discrete MRF estimation. Furthermore, we conduct a careful analysis of the gradient approximation error of SPG and provide upper bounds to quantify its magnitude. With the aforementioned analysis, we introduce a learning strategy called TAY and show that it can improve the accuracy and efficiency of SPG.
\small \textbf{Acknowledgement}: Sinong Geng, Zhaobin Kuang, and David Page would like to gratefully acknowledge the NIH BD2K Initiative grant U54 AI117924 and the NIGMS grant 2RO1 GM097618. Stephen Wright would like to gratefully acknowledge NSF Awards IIS-1447449, 1628384, 1634597, and 1740707; AFOSR Award FA9550-13-1-0138; Subcontracts 3F-30222 and 8F-30039 from Argonne National Laboratory; and DARPA Award N660011824020.
\nocite{liu2013genetic,
liu2014new,
liu2016multiple}
\clearpage
\subsubsection*{References}
\renewcommand\refname{\vskip -1cm}
\section{Introduction}
Since the advent of the Ryu-Takayanagi formula\cite{RT,RT2,HRT} it has become clear that the entanglement structure of certain quantum states of a $d$-dimensional boundary Conformal Field Theory (CFT) is encoded into classical geometric structures of a $d+1$ dimensional bulk.
The role of the classical geometric structures is played by extremal surfaces in the bulk, homologous to boundary regions whose associated reduced density matrices encapsulate the entanglement information.
It turned out that in order to obtain a deeper understanding of this encoding it is rewarding to consider the space of these extremal surfaces. This space as a geometric entity is playing the role of an intermediary between the spaces of the bulk and the boundary\cite{Czech1,Czech1b}.
For example for the $d=2$ case one is considering the $AdS_3/CFT_2$ correspondence where the extremal surfaces are geodesics of the asymptotically $AdS_3$ geometry. This space of geodesics is the kinematic space\cite{Czech1,Czech1b} $\mathbb K$.
An alternative characterization of $\mathbb K$ as the moduli space of boundary causal diamonds has also appeared\cite{Myers}.
The advantage of the introduction of these spaces rests in the possibility of a clear understanding of how patterns of entanglement manifest themselves in patterns of geometry.
For example in the $d=2$ case the conditional mutual informations of entangled intersecting domains\cite{NC,CS} of the boundary appear in kinematic space as areas of certain regions with respect to an area form called the Crofton form\cite{Czech1}.
In particular it is known that, taking the static slice of $AdS_3$, entanglement quantities of the CFT vacuum are represented by area labels of causal diamonds in a 1+1 dimensional de Sitter ($dS_2$) space\cite{Czech1} playing the role of kinematic space.
However, much more can be revealed.
In our previous paper\cite{LB} in the simplest case of pure $AdS_3$ dual to the vacuum state of a $CFT_2$ we elaborated on this representation.
We have shown, that area labels of $dS_2$ causal diamonds are encoding entanglement information via Zamolodchikov $Y$-systems\cite{Zamo} well-known from studies of integrable systems\cite{FrenkelSzenes,Ravanini,Gliozzi}.
In arriving at this result another space has also appeared\cite{Levay}: the space of horocycles, related to the gauge degree of freedom\cite{Czech2} of choosing a cutoff for regularizing the divergent entanglement entropies.
Horocycles are geometric objects regularizing the diverging length of geodesics. Employing them resulted in the observation that the lambda lengths of Penner\cite{Penner,Pennerbook}, encapsulating this regularization in a geometric manner, are directly related to entanglement entropies via the Ryu-Takayanagi formula\cite{RT}.
An extra bonus of this observation was the realization\cite{Levay} that lambda lengths also provide a link to cluster algebras\cite{ClusterFZ,Fomin,Williams}, algebraic structures that are under intense scrutiny in the mathematics literature.
Cluster algebras are defined recursively via certain transformations called flips. The results of Ref.\cite{LB} show that these flips are related to mutations between patterns of entanglement associated to a partition of the boundary into $N$ subsystems.
For example, in the static pure $AdS_3/CFT_2$ scenario one can choose the quantum state associated to the boundary as the $CFT_2$ vacuum. Then one fixes a partition of the boundary into $N$ subsystems. For this fixed partition there are many geodesic triangulations of the bulk. Flips operate between such triangulations by exchanging the two possible diagonals of geodesic quadrangles. Using successive flips in the space of triangulations one can define an $(N-3)$-dimensional polytope, the associahedron ${\mathcal A}_{N-3}$, encapsulating entanglement information in a geometric manner. Moreover, one can show\cite{LB} that such flips define recursively a cluster algebra of type $A_{N-3}$. Hence one arrives at the result\cite{LB} that after partitioning the static slice of the boundary into $N$ subsystems the entanglement properties of the $CFT_2$ vacuum are encoded into the structure of an $A_{N-3}$ cluster algebra.
Based on this result one can conjecture that a similar association of entanglement patterns of other $CFT_2$ states and cluster algebras might exist.
In this paper we show that this is really the case.
We show that the thermal state of the $CFT_2$ which is dual to the static BTZ black hole\cite{BTZ} in the high temperature limit
provides a nontrivial example of that kind.
We show that after partitioning the static BTZ slice into $N$ subsystems the underlying cluster algebra encoding entanglement patterns is of type $C_{N-1}$. Moreover, it turns out that in this new case the polytope encapsulating such patterns in a geometric manner for a fixed $N$ is the cyclohedron ${\mathcal C}_{N-1}$.
Displaying these patterns of entanglement in kinematic space reveals that their algebraic structure is connected to a Zamolodchikov $Y$-system of $C_{N-1}$ type. The boundary condition for such a $Y$-system features the entropy of the BTZ black hole, showing up in area labels of boundary triangles representing conditional entropies.
We hope that our results will pave the way for further elaborations exploring the mathematical properties of quantum states
where such cluster algebraic connection shows up.
The organization of this paper is as follows.
In Section II. the basic properties of the BTZ black hole in the high temperature limit are reviewed.
Section III. is devoted to a short reminder on lambda lengths and their connection to geodesics in the BTZ context.
In Section IV. we summarize the basic quantities of quantum information theoretic meaning relevant for our elaborations. Armed with the background material of these sections, in Section V. we explore the properties of geodesic triangulations of the bulk displaying the geometric structure of a BTZ black hole. Then we show that for a partition of the boundary featuring $N$ subsystems the associated lambda lengths generate a $C_{N-1}$ cluster algebra.
We observe that the exchange graph of this algebra is the cyclohedron ${\mathcal C}_{N-1}$. We illustrate its structure
in the BTZ picture, displaying different types of flips of BTZ geodesics.
In Section VI. we explore how these algebraic structures are represented in the BTZ kinematic space. We find that the set of areas of causal diamonds in the BTZ kinematic space can be expressed in terms of cross ratios, and that they can be organized into a $Y$-system characterizing the BTZ scenario in the high temperature limit.
The $Y$-system we find is a Zamolodchikov system of type $C_{N-1}$ with a boundary condition featuring the entropy of the BTZ black hole.
Our conclusions and some comments are left for Section VII.
\section{BTZ black hole}
Three dimensional anti de Sitter space $AdS_3$ is defined as the set of points of the flat $\mathbb{R}^{2,2}$
space satisfying the constraint
\begin{equation}\label{eq:constraint}
-U^2-V^2+X^2+Y^2=-R^2
\end{equation}
where $R$ is the AdS radius. The induced metric of this space is
\begin{equation}\label{eq:metric_ads}
ds^2=-dU^2-dV^2+dX^2+dY^2
\end{equation}
It is well-known \cite{Brill,Ingemar1, Skenderis} that multiboundary wormhole solutions of Einstein's equations with negative cosmological constant can be obtained from $AdS_3$ by factorizing this space by the action of a suitable discrete subgroup of its isometry group.
In this paper we are focusing on the simplest solution of that kind, namely the BTZ black hole\cite{BTZ}. It is characterized by two parameters, namely the mass and angular momentum of the black hole. In the special case when the angular momentum is set to zero the solution is given in terms of Schwarzschild coordinates
\begin{equation}\label{eq:coords}
\left(\begin{array}{c}
U \\
V \\
X \\
Y
\end{array}\right)=\left(\begin{array}{c}
\frac{R}{r_+}r \cosh \left(\frac{r_+}{R}\varphi\right) \\
\sqrt{\left(\frac{R}{r_+}r\right)^{2}-R^{2}} \sinh \left(\frac{r_+}{R}\tau\right) \\
\frac{R}{r_+}r \sinh \left(\frac{r_+}{R}\varphi \right)\\
\sqrt{\left(\frac{R}{r_+}r\right)^{2}-R^{2}} \cosh \left(\frac{r_+}{R}\tau\right)
\end{array}\right)
\end{equation}
subject to certain constraints coming from this group action.
Namely, the corresponding action\cite{Brill,Ingemar1, Skenderis,Carlip} boils down to a $\varphi\sim\varphi+2\pi$ identification of the hyperbolic angle, so that $-\pi<\varphi<\pi$.
Using \eqref{eq:metric_ads} in terms of these coordinates the BTZ metric becomes
\begin{equation}\label{eq:metric_hyp}
d s^{2}=-\left(r^{2}-r_{+}^{2}\right) d \tau^{2}+\frac{R^{2}}{r^{2}-r_{+}^{2}} d r^{2}+r^{2} d \varphi^{2}
\end{equation}
where the BTZ black hole solution has an event horizon at $r=r_+$. The boundary of the geometry is given by $r\rightarrow\infty$ and we denote it by $\partial BTZ$.
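As a simple consistency check, the coordinates \eqref{eq:coords} can be verified numerically to satisfy the defining constraint \eqref{eq:constraint}. The following minimal sketch does this for randomly chosen exterior points (the values of $R$ and $r_+$ are illustrative):

```python
import math
import random

def embed(r, tau, phi, R=1.0, rp=0.7):
    # BTZ embedding of Eq. (coords); rp stands for the horizon radius r_+
    a = (R / rp) * r
    b = math.sqrt(a**2 - R**2)          # real in the exterior region r > r_+
    U = a * math.cosh(rp * phi / R)
    V = b * math.sinh(rp * tau / R)
    X = a * math.sinh(rp * phi / R)
    Y = b * math.cosh(rp * tau / R)
    return U, V, X, Y

random.seed(0)
R, rp = 1.0, 0.7
for _ in range(100):
    r = rp + random.random()            # stay outside the horizon
    tau, phi = random.uniform(-2, 2), random.uniform(-2, 2)
    U, V, X, Y = embed(r, tau, phi, R, rp)
    # the embedded point lies on the quadric -U^2 - V^2 + X^2 + Y^2 = -R^2
    assert abs(-U**2 - V**2 + X**2 + Y**2 + R**2) < 1e-9
```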
We can also introduce new scaled coordinates
\begin{equation}
\label{eq:kell}
\left(\begin{array}{c}
r \\
\tau \\
\varphi
\end{array}\right)=\left(\begin{array}{c}
r_+\cosh{\rho} \\
\frac{R}{r_+}\Theta \\
\frac{R}{r_+}t
\end{array}\right)
\end{equation}
With these variables the metric takes the following form:
\begin{equation}\label{eq:btz2}
d s^{2}=R^2\left(\cosh^2\rho dt^2+d\rho^2-\sinh^2\rho d\Theta^2\right)
\end{equation}
Now the metric is independent of the size of the horizon $r_+$, but the ranges of $\Theta$ and $t$ depend on it.
In these considerations the AdS length scale and horizon radius are related to the mass of the black hole via $M=r_+^2/R^2$ (in units where $8G=1$). From now on we deal with the $r_+\gg R$ macroscopic limit of the static slice of the BTZ black hole, that is, with mass $M\gg1$. This means that the range of the scaled angle variable is formally $-\infty<t<\infty$, while the periodic identification still holds.
Recall that at high temperature the gravity dual of the $CFT_2$ is the Euclidean version of our BTZ black hole\cite{RT}. Then we have the correspondence $\beta/L=R/r_+\ll1$, where $L=2\pi r_0$ is the system size, with $r_0$ the cutoff taken to be large, and $\beta=1/T$ is the inverse temperature. Hence our macroscopic limit also corresponds to the high temperature limit (HTL).
In the static case we choose $\tau=0$ (i.e. $V=0$ and $\Theta=0$). From \eqref{eq:btz2} the metric of the constant time slice is
\begin{equation}\label{eq:static_btz}
d s^{2}=R^2\left(\cosh^2\rho dt^2+d\rho^2\right)
\end{equation}
With an alternative set of coordinates we can map this static slice into the Poincar\'e disk $\mathbb{D}$
\begin{equation}\label{eq:D}
z=\frac{X+i Y}{R+U}=x+i y=\vert z\vert e^{i\vartheta} \in \mathbb{D}
\end{equation}
with $\vert z\vert<1$.
In these coordinates the \eqref{eq:metric_ads} metric takes the following form
\begin{equation}\label{eq:metricD}
ds^2=\frac{4R^2}{(1-x^2-y^2)^2}(dx^2+dy^2)
\end{equation}
The $\partial\mathbb{D}$ boundary of the geometry is obtained by taking the $\vert z\vert ^2=x^2+y^2\rightarrow 1$ limit, yielding the complex unit circle.
For $\tau=0$ and $-\infty\leq\varphi\leq +\infty$ we obtain the so-called BTZ black string. According to \eqref{eq:coords} and \eqref{eq:D}, in this case the resulting space is just the upper half of the Poincar\'e disk. If we also make the $\varphi\sim\varphi+2\pi$ identification characterizing our BTZ black hole, we factorize the disk by a corresponding discrete subgroup. Formally this means that we cut out a segment of the upper semi-disk bounded by two geodesics (perpendicular to the horizon) and glue this segment together along these two geodesics. However, in the macroscopic limit these two geodesics shrink to the $\vartheta=0$ and $\vartheta=\pi$ points, so the covered segment is the semi-disk itself\cite{Zukowski}, whose $\vartheta=0$ and $\vartheta=\pi$ points are identified (see \hyperref[fig:BTZ_disk]{FIG. 1.}). The $y=0$ diameter is the horizon of the black hole. In this case the conversion between the hyperbolic angle $t$ and the disk angle $\vartheta$ is given by the formula
\begin{equation}\label{eq:angles}
e^t=\cot{\frac{\vartheta}{2}}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{BTZdisk.png}
\caption{The Poincar\'e disk representation of the static, macroscopic BTZ black hole. The space covers the upper half of the disk. The red dashed lines correspond to different values of the radial coordinate $\rho$. The $\rho=0$ diameter represents the horizon of the black hole. The green dashed lines are $t=\text{const.}$ lines. The $x=\pm1$ points are identified.}
\label{fig:BTZ_disk}
\end{figure}
\section{Geodesics and the lambda length}\label{sec:3}
Now we examine the minimal geodesics between boundary points of the static BTZ black hole in the macroscopic limit. First let us mark two points $a$ and $b$ on the boundary of the BTZ black string\cite{Zukowski} with hyperbolic angles $-\infty<\varphi_a<\varphi_b<\infty$. These two points serve as endpoints of a geodesic.
If we make the identification $\varphi\sim\varphi+2\pi$ to get the static BTZ black hole geometry,
depending on the mutual positions of $a$ and $b$
we get different types of geodesics. From now on we restrict to geodesics whose endpoints satisfy $\varphi_b-\varphi_a\leq2\pi$, i.e. geodesics that do not intersect themselves in the bulk. We call these minimal geodesics. If $\varphi_b-\varphi_a=2\pi$ (so that $a\sim b$), the geodesic winds around the horizon and its endpoints coincide. We refer to this type of geodesic as a loop. Finally, if $\varphi_b-\varphi_a<2\pi$, then there are two different arcs between the two marked points: one bypasses the hole in one direction and the other in the opposite direction, see \hyperref[fig:geod_representations]{FIG. 2a.}
How can one represent BTZ black hole geodesics on the Poincar\'e disk? First let us describe in general the minimal arcs of the disk geometry. It is well-known that they are given by the following equation\cite{Voros}
\begin{equation}
\left(x-\frac{B_{1}}{M}\right)^{2}+\left(y-\frac{B_{2}}{M}\right)^{2}=\frac{1}{M^{2}}
\end{equation}
where $B_1,B_2$ and $M$ are conserved quantities of the geodesic motion. Hence we see that the geodesics are circular arcs whose endpoints are on the boundary. Let us denote the midpoint coordinate and half of the opening angle of the boundary interval lying between the two endpoints by $\theta$ and $\alpha$, respectively.
Hence we have
\begin{equation}
\label{eq:alfateta}
\theta=(\vartheta_b+\vartheta_a)/2,\qquad
\alpha=(\vartheta_b-\vartheta_a)/2.
\end{equation}
Then we can express the parameters in the geodesic equation by the $\theta$ and $\alpha$ variables as\cite{Czech1}
\begin{equation}\label{eq:conserved}
B_{1}=\frac{\cos \theta}{\sin \alpha} \quad B_{2}=\frac{\sin \theta}{\sin \alpha} \quad M=\frac{\cos \alpha}{\sin \alpha}
\end{equation}
The points terminating the geodesics are of the form $e^{iu},e^{iv}\in\partial\mathbb{D}$. Hence another useful parametrization for geodesics is given by the $(u,v)$ pairs, where $u$ stands for the complex argument of the starting point and $v$ for that of the ending point. $u$ and $v$ can be expressed in terms of $\theta$ and $\alpha$ as
\begin{equation}\label{eq:uv}
u=\theta-\alpha \quad v=\theta+\alpha
\end{equation}
The Poincar\'e model, a geodesic and its parameters are shown in \hyperref[fig:disk_geodesic]{FIG. 3.}
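As a quick numerical check of \eqref{eq:conserved} and \eqref{eq:uv}, one can verify that the endpoints $e^{iu}$ and $e^{iv}$ lie on the circle with center $(B_1/M,B_2/M)$ and radius $1/M$, and that this circle intersects the unit circle orthogonally, as required for a geodesic. A minimal sketch (the chosen values of $\theta$ and $\alpha$ are arbitrary):

```python
import math

def circle_params(theta, alpha):
    # conserved quantities of Eq. (conserved)
    B1 = math.cos(theta) / math.sin(alpha)
    B2 = math.sin(theta) / math.sin(alpha)
    M = math.cos(alpha) / math.sin(alpha)
    return B1, B2, M

theta, alpha = 0.9, 0.4
B1, B2, M = circle_params(theta, alpha)
cx, cy, rad = B1 / M, B2 / M, 1.0 / M
for ang in (theta - alpha, theta + alpha):       # the endpoints u and v of Eq. (uv)
    x, y = math.cos(ang), math.sin(ang)
    assert abs((x - cx)**2 + (y - cy)**2 - rad**2) < 1e-12
# orthogonality to the boundary circle: |center|^2 = 1 + radius^2
assert abs(cx**2 + cy**2 - 1.0 - rad**2) < 1e-12
```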
\begin{figure*}[t]
\hspace{0.5cm}
\subfloat[]{
\includegraphics[width=0.5\columnwidth]{BTZtriang.png}}
\hfill
\subfloat[]{
\includegraphics[width=0.65\columnwidth]{Disk_triang.png}}
\hfill
\subfloat[]{
\includegraphics[width=0.55\columnwidth]{disk_triang_2N.png}}
\hspace{0.5cm}
\caption{Geodesics of the BTZ geometry in the AdS representation (a), and in the disk representation (b). Both the red and green geodesics connect the points $a$ and $b$, but they get around the horizon on different sides. The blue curve is a loop geodesic, starting and ending in $a$. On the Poincar\'e disk, this can be represented by a diameter between the points $a$ and $\bar{a}\sim a$.
(c) Lambda lengths of a doubly marked BTZ black hole in the disk representation.
}
\label{fig:geod_representations}
\end{figure*}
Now let us examine some special cases of black hole geodesics in the disk representation. As we mentioned before, if we transform the static slice of the BTZ black hole to the Poincar\'e disk, in the macroscopic limit it will cover half of the unit disk, where the $\vartheta=0$ and $\vartheta=\pi$ points are identified. If we mark a point at $\vartheta=0$, then the loop geodesic that starts and ends at this point corresponds to the $y=0$ diameter of the disk because of the $\vartheta=0\sim\vartheta=\pi$ identification. We can then fix two different points on the boundary and choose one of them to be the $\vartheta_a=0$ point, and the other one to be arbitrary with coordinate $0<\vartheta_b<\pi$. Again because of the identification, the point $a$ with $\vartheta_a=0$ and the point $\bar{a}$ with $\vartheta_{\bar{a}}=\pi$ represent the same point on the BTZ boundary. Then, as in the black hole case, there are two geodesics between the points $a\sim \bar{a}$ and $b$. One of them goes from $\vartheta_a=0$ to $\vartheta_b$ and the other goes from $\vartheta_b$ to $\vartheta_{\bar{a}}=\pi$.
Let us consider a disk geodesic with endpoints $a$ and $b$ with a half opening angle $\alpha$. One can show that its regularized length with respect to the metric \eqref{eq:metricD} is given by\cite{RT,Czech1}
\begin{equation}\label{eq:disk_length}
\ell(ab)=2R\log\left(e^{\rho_0}\sin\alpha\right)
\end{equation}
where we used \eqref{eq:alfateta} and regulated the length by restricting the space to the region $x^2+y^2\leq\tanh^2(\rho_0/2)$, assuming that $e^{\rho_0}\gg1$.
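Formula \eqref{eq:disk_length} can also be checked numerically: intersect the geodesic circle of \eqref{eq:conserved} with the cutoff circle $\vert z\vert=\tanh(\rho_0/2)$ and compute the hyperbolic distance between the two intersection points. A sketch for $R=1$ (the values of $\rho_0$, $\alpha$ and $\theta$ are illustrative; the agreement holds up to corrections that vanish as $\rho_0\to\infty$):

```python
import math
import cmath

def hyp_dist(z1, z2):
    # hyperbolic distance on the unit disk for ds = 2|dz|/(1 - |z|^2), i.e. R = 1
    return 2 * math.atanh(abs((z1 - z2) / (1 - z1.conjugate() * z2)))

rho0, theta, alpha = 14.0, 0.3, 0.6
rc = math.tanh(rho0 / 2)                        # cutoff circle |z| = tanh(rho0/2)
center = complex(math.cos(theta), math.sin(theta)) / math.cos(alpha)
rad = math.tan(alpha)                           # geodesic circle, cf. Eq. (conserved)
# intersection angles of |z| = rc with |z - center| = rad (law of cosines)
d = abs(center)
g = math.acos((d * d + rc * rc - rad * rad) / (2 * d * rc))
base = math.atan2(center.imag, center.real)
z1 = rc * cmath.exp(1j * (base + g))
z2 = rc * cmath.exp(1j * (base - g))
ell = hyp_dist(z1, z2)                          # regulated geodesic length
assert abs(ell - 2 * math.log(math.exp(rho0) * math.sin(alpha))) < 1e-3
```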
Let us now introduce the following quantity
\begin{equation}\label{eq:lambda_D}
\lambda(ab)=e^{\ell(ab)/2R}=e^{\rho_0}\sin\alpha
\end{equation}
For $R=1$ this quantity corresponds to the so-called lambda length introduced by Penner\cite{Penner,Pennerbook,Levay}. Using the notion of the lambda length, in\cite{Levay} it was shown that it is rewarding to regularize the length of a geodesic by introducing horocycles. A horocycle associated to a boundary point is a circle in the bulk touching the boundary merely at this boundary point. Then the regularized length of a geodesic is that finite part of it which lies entirely outside (or inside) both of the circles associated to the endpoints of the geodesic. It is known that choosing a horocycle associated to a boundary point can be interpreted as a choice of gauge\cite{Czech2,Levay}. The regularization of \eqref{eq:lambda_D} usually showing up in the literature
is a uniform one, corresponding to a special choice of horocycles with their infinitesimally small radii being equal and related to the large value of $\rho_0$.
One can use the notion of a lambda length to identify special classes of geodesics. If we mark a point at $\vartheta=0$, then the loop geodesic that starts and ends at this BTZ boundary point has lambda length
\begin{equation}\label{eq:lambda1}
\lambda(a\bar{a})=e^{\rho_0}\sin\frac{\pi}{2}
\end{equation}
because this corresponds to the $y=0$ diameter with half opening angle $\alpha=\pi/2$. Notice that this is the longest possible minimal geodesic.
Next we fix two different points on the BTZ boundary. We choose one of them to be the $\vartheta_a=0\sim\vartheta_{\bar{a}}=\pi$ point, and the other to be $0<\vartheta_b<\pi$. Then using
\eqref{eq:alfateta} the geodesic going from $\vartheta_a=0$ to $\vartheta_b$ has a lambda length
\begin{equation}\label{eq:lambda2}
\lambda(ab)=e^{\rho_0}\sin\frac{\vartheta_b}{2}
\end{equation}
and the other that is going from $\vartheta_b$ to $\vartheta_{\bar{a}}=\pi$ is
\begin{equation}\label{eq:lambda3}
\lambda(b\bar{a})=e^{\rho_0}\sin\left(\frac{\pi-\vartheta_b}{2}\right)=e^{\rho_0}\cos\frac{\vartheta_b}{2}
\end{equation}
Notice that the metric in \eqref{eq:metricD} is invariant under the transformation $z\rightarrow ze^{i\gamma}$, which is a rotation of the disk by an angle $\gamma$. Let us assume that we are only interested in the lambda length of a given geodesic with half opening angle $\alpha$. Then we can represent it by any disk geodesic which has the same $\alpha$ and regularization, and is rotationally equivalent to the original arc. This confirms that the lambda length is independent of the midpoint angle $\theta$.
This gives us the opportunity to generalize the expressions above. To the points $a,b\in\partial BTZ$ (such that $-\infty<t_a<t_b<\infty$) we associate two centrally symmetric pairs of points on $\partial\mathbb{D}$ \cite{Fomin3}. Denote these by $a,b,\bar{a},\bar{b}\in\partial\mathbb{D}$. If $a,b\in\partial BTZ$ have hyperbolic angles $t_a<t_b$, then we can calculate the complex arguments $0\leq\vartheta_a<\vartheta_b<\pi$ of $a,b\in\partial\mathbb{D}$ by \eqref{eq:angles}, and then for $\bar{a},\bar{b}\in\partial\mathbb{D}$ we have $\vartheta_{\bar{a}}=\vartheta_a+\pi<\vartheta_{\bar{b}}=\vartheta_b+\pi$.
At this point it is convenient to introduce a notation for the different boundary representations. We denote BTZ boundary intervals by square brackets, and $\partial\mathbb{D}$ intervals by round brackets. For example, let $a$ and $b$ be two distinct points on $\partial BTZ$ (such that $t_a<t_b$) determining two intervals $[ab],[ba]\subset\partial BTZ$. Let $a,\bar{a},b,\bar{b}$ be their centrally symmetric representatives on $\partial\mathbb{D}$, with the intervals between them being $(ab),(\bar{a}\bar{b}),(b\bar{a}),(\bar{b}a),(a\bar{a}),(b\bar{b}),(\bar{a}a),(\bar{b}b)\subset \partial\mathbb{D}$. Now we can assign to each centrally symmetric pair of intervals between the four points on $\partial\mathbb{D}$ exactly one physical interval between the two points on $\partial BTZ$, namely:
\begin{subequations}
\begin{align}
\partial\mathbb{D}\supset(ab),(\bar{a}\bar{b})&\sim[ab]\subset\partial BTZ \label{eq:intervals1}\\
\partial\mathbb{D}\supset(b\bar{a}),(a\bar{b})&\sim[ba]\subset\partial BTZ \label{eq:intervals2}\\
\partial\mathbb{D}\supset(a\bar{a}),(\bar{a}a)&\sim[\partial BTZ] \label{eq:intervals3}\\
\partial\mathbb{D}\supset(b\bar{b}),(\bar{b}b)&\sim[\partial BTZ] \label{eq:intervals4}
\end{align}
\end{subequations}
where $[\partial BTZ]$ denotes the whole BTZ boundary.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Disk.png}
\caption{The Poincar\'e disk is the unit disk in the complex plane endowed with the metric \eqref{eq:metricD}. The red arc is a geodesic. These curves can be parametrized by $(\theta,\alpha)$ pairs: $\theta\in[0,2\pi]$ is the midpoint angle, and $\alpha\in[0,\pi]$ is half the opening angle of the geodesic. Another useful parametrization is given by the $(u,v)$ pair defined by Eq. \eqref{eq:uv}.}
\label{fig:disk_geodesic}
\end{figure}
Notice that the lambda length of the geodesic between points $a,b \in\partial\mathbb{D}$ is the same as the lambda length between $\bar{a},\bar{b} \in\partial\mathbb{D}$, because they have the same opening angle (they differ only in a rotation around the origin). They also represent the same geodesic in the BTZ bulk. So the lambda length associated to an $[ab]$ geodesic in the BTZ bulk can be written in the following ways: $\lambda[ab]=\lambda(ab)=\lambda(\bar{a}\bar{b})$. Similarly $\lambda[ba]=\lambda(b\bar{a})=\lambda(\bar{b}a)$ and for loop geodesics $\lambda[\partial BTZ]=\lambda(a\bar{a})=\lambda(\bar{a}a)=\lambda(b\bar{b})=\lambda(\bar{b}b)$. The explicit expressions for these quantities can be given by equations \eqref{eq:lambda1}, \eqref{eq:lambda2} and \eqref{eq:lambda3}. If $\alpha=(\vartheta_b-\vartheta_a)/2$, then
\begin{subequations}
\begin{align}
\lambda[ab]=\lambda(ab)=\lambda(\bar{a}\bar{b})&=e^{\rho_0}\sin\alpha\\
\lambda[ba]=\lambda(b\bar{a})=\lambda(\bar{b}a)&=e^{\rho_0}\cos\alpha\\
\lambda[\partial BTZ]=\lambda(a\bar{a})=\lambda(\bar{a}a)&=e^{\rho_0}\sin\frac{\pi}{2}\\
\lambda[\partial BTZ]=\lambda(b\bar{b})=\lambda(\bar{b}b)&=e^{\rho_0}\sin\frac{\pi}{2}
\end{align}
\end{subequations}
These lambda lengths fully describe quantitatively the geodesic structure of a doubly marked BTZ black hole in the HTL (see \hyperref[fig:geod_representations]{FIG. 2c.}).
We close this section with one last remark, namely that the usual expression for the geodesic length calculated with the line element \eqref{eq:static_btz} is of the form
\begin{equation}
\label{eq:btzlength}
\ell[ab]=2R\log \left(\frac{2r_0}{r_+} \sinh \frac{t_b-t_a}{2}\right)
\end{equation}
where $r_0$ is some regularization factor taken to be large. By virtue of the relation $r_0=r_+\cosh\rho_0$ coming from \eqref{eq:kell}, for $\rho_0$ large one has $\frac{2r_0}{r_+}\simeq e^{\rho_0}$.
Then this formula can be connected to \eqref{eq:disk_length} via \eqref{eq:angles}, because one can rewrite the hyperbolic factor in the argument of the logarithm in the following way
\begin{equation}
\label{eq:kapocs}
\begin{aligned}
\sinh\frac{t_b-t_a}{2}&=\frac{1}{2}e^{-(t_b+t_a)/2}\left(e^{t_b}-e^{t_a}\right)=\\
&=\frac{\cot{(\vartheta_b/2)}-\cot{(\vartheta_a/2)}}{2\sqrt{\cot{(\vartheta_b/2)}\cot{(\vartheta_a/2)}}}=\\
&=\frac{1}{\sqrt{\sin{\vartheta_b}\sin{\vartheta_a}}}\sin{\frac{\vartheta_b-\vartheta_a}{2}}
\end{aligned}
\end{equation}
or alternatively
\begin{equation}
\label{eq:kapocs2}
\sin{\frac{\vartheta_b-\vartheta_a}{2}}=
\frac{1}{\sqrt{\cosh{t_b}\cosh{t_a}}}
\sinh\frac{t_b-t_a}{2}
\end{equation}
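The identities \eqref{eq:kapocs} and \eqref{eq:kapocs2} are easy to confirm numerically using the conversion \eqref{eq:angles} between the two boundary angles. A minimal sketch (the chosen angles are arbitrary; note that $t$ decreases with growing $\vartheta$, hence the absolute value):

```python
import math

def t_of(theta):
    # hyperbolic boundary angle from Eq. (angles): e^t = cot(theta/2)
    return math.log(1.0 / math.tan(theta / 2))

th_a, th_b = 0.3, 2.1                  # 0 < theta_a < theta_b < pi
t_a, t_b = t_of(th_a), t_of(th_b)
lhs = math.sinh(abs(t_b - t_a) / 2)
# Eq. (kapocs)
rhs = math.sin((th_b - th_a) / 2) / math.sqrt(math.sin(th_a) * math.sin(th_b))
assert abs(lhs - rhs) < 1e-12
# Eq. (kapocs2), using cosh t = 1/sin(theta)
assert abs(math.sin((th_b - th_a) / 2)
           - lhs / math.sqrt(math.cosh(t_a) * math.cosh(t_b))) < 1e-12
```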
Now one can compare \eqref{eq:btzlength} and \eqref{eq:disk_length}. The difference between the two formulas comes from the fact that one can use uniform regularization either in the BTZ representation or in the disk representation, but not in both. Indeed, \eqref{eq:kapocs} and \eqref{eq:kapocs2} show that when going from one representation to the other the regularization becomes either $t$ or $\vartheta$ dependent. It can be shown that this corresponds to regularizations via horocycles with different diameters at the points $a$ and $b$.
Notice, however, that there exist gauge invariant (regularization independent) combinations of lambda lengths, like cross ratios, where these subtleties are immaterial.
These combinations give rise to physical quantities of great importance.
Next we turn to such quantities.
\section{Entropies and conditional informations}
According to the Ryu-Takayanagi proposal the entanglement entropy of a given domain $A$ is proportional to the area of the minimal surface that has the same boundary as $A$. In the static case this means that
\begin{equation}\label{eq:entropy_ell}
S(A)=\frac{\ell(A)}{4G}
\end{equation}
where $A$ is a boundary interval and $\ell(A)$ is the regularized length of the geodesic homologous to $A$.
With the help of lambda lengths defined in \eqref{eq:lambda_D} we can reformulate this expression
as follows
\begin{equation}\label{eq:entropy_lambda}
S(A)=\frac{c}{3}\log\lambda(A)
\end{equation}
where $c$ is the central charge of the boundary theory, given by the Brown-Henneaux relation \cite{BH} $c=\frac{3R}{2G}$.
In the BTZ black hole case studied here there is an important caveat. The expressions \eqref{eq:entropy_ell} and
\eqref{eq:entropy_lambda} are only valid in the high temperature limit, i.e. when $M\gg 1$. Indeed, for a finite black hole mass there exists a critical size for the region past which there is a new family of disconnected geodesics that have smaller length than the connected homologous ones\cite{HT,EP}.
In this case the prescription \eqref{eq:entropy_lambda} is ambiguous. However, in the high temperature limit (HTL) the critical size of the region (the entanglement plateaux scale\cite{EP}), measured by half the opening angle of that region, is $\pi$, hence we need not worry about this subtlety. Moreover, in this case the Ryu-Takayanagi geodesics reach all the way to the horizon (the smallest entanglement shadow occurs in the HTL\cite{Zukowski}).
Hence in the following we restrict attention to the macroscopic BTZ black hole showing up in this HTL.
Now assume that we choose two intersecting regions labeled by $E,F$ on the BTZ boundary. These two regions give rise to the following CFT subsystems, denoted by $A=E\setminus F$, $B=E\cap F$, $C=F\setminus E$ and $D=E\cup F$. According to strong subadditivity, for the corresponding von Neumann entropies one has\cite{NC}
\begin{equation}\label{eq:strong_sub}
S(AB)+S(BC)-S(B)-S(D)\geq0
\end{equation}
If we introduce the conditional entropy
\begin{equation}
S(A|B)\equiv S(AB)-S(B)
\end{equation}
the mutual information
\begin{equation}
I(A,B)\equiv S(A)-S(A|B)
\end{equation}
and the conditional mutual information
\begin{equation}
\begin{aligned}
I(A,C|B)&\equiv S(A|B)-S(A|BC)=\\
&=I(A,BC)-I(A,B)
\end{aligned}
\end{equation}
Then we can rewrite \eqref{eq:strong_sub} as
\begin{equation}
I(A,C|B)=S(A|B)-S(A|BC)\geq 0
\end{equation}
Hence strong subadditivity indicates that conditioning on a larger subsystem can only reduce the uncertainty about a system.
Due to the Ryu-Takayanagi conjecture, we can also express these entropic quantities in terms of lambda lengths. The conditional entropy is
\begin{equation}
S(A|B)=\frac{c}{3}\log\frac{\lambda(E)}{\lambda(B)}
\end{equation}
the mutual information is
\begin{equation}
I(A,B)=\frac{c}{3}\log\frac{\lambda(A)\lambda(B)}{\lambda(E)}
\end{equation}
and the conditional mutual information is
\begin{equation}\label{eq:cond1}
I(A,C|B)=\frac{c}{3}\log\frac{\lambda(E)\lambda(F)}{\lambda(B)\lambda(D)}
\end{equation}
Now let us turn to the Poincar\'e disk representation. Label the BTZ boundary points by $a,b,c,d\in\partial BTZ$. Then the corresponding regions are $A=[ab]$, $B=[bc]$ and $C=[cd]$. Each point on $\partial BTZ$ gives rise to a centrally symmetric pair of points on $\partial\mathbb{D}$. Let us denote them by $a,\bar{a},b,\bar{b},c,\bar{c},d,\bar{d}\in\partial\mathbb{D}$. Next, as in \eqref{eq:intervals1}, \eqref{eq:intervals2}, \eqref{eq:intervals3} and \eqref{eq:intervals4},
we associate disk boundary regions to the BTZ boundary ones.
Since for every $\partial\mathbb{D}$ region there is a homologous geodesic in $\mathbb{D}$, in the disk representation one can explicitly write down the expressions for the lambda lengths and von Neumann entropies. In the HTL the exact result for the entropy of subsystem $A$ is
\begin{equation}\label{eq:entropy_disk}
\begin{aligned}
S[A]&=S(A)=\\
&=\frac{c}{3}\log e^{\rho_0}\sin\left(\alpha\right)=\\
&=\frac{c}{3}\log e^{\rho_0}\sin\left(\frac{\vartheta_b-\vartheta_a}{2}\right)
\end{aligned}
\end{equation}
where $\vartheta_a,\vartheta_b$ are the complex arguments corresponding to the endpoints of the region $(ab)\subset\partial\mathbb{D}$. We can say that $S(A)=S(ab)=S(\bar{a}\bar{b})$, because both regions represent the same subsystem on the BTZ boundary. Finally, we record that in this doubly marked disk representation the entanglement entropy of the entire boundary is $S[\partial BTZ]=S(a\bar{a})=S(b\bar{b})$.
We can similarly calculate the expressions for the other entropic quantities.
For example, $S(A|B)$ is given by
\begin{equation}
S[A|B]=S(A|B)=\frac{c}{3}\log\frac{\sin\left(\frac{\vartheta_c-\vartheta_a}{2}\right)}{\sin\left(\frac{\vartheta_c-\vartheta_b}{2}\right)}
\end{equation}
An explicit formula, to be used later, for the mutual information is given by
\begin{equation}
I[A,B]=I(A,B)=\frac{c}{3}\log e^{\rho_0}\frac{\sin\left(\frac{\vartheta_b-\vartheta_a}{2}\right)\sin\left(\frac{\vartheta_c-\vartheta_b}{2}\right)}{\sin\left(\frac{\vartheta_c-\vartheta_a}{2}\right)}
\end{equation}
and for the conditional mutual information of \eqref{eq:cond1}
\begin{equation}\label{eq:mutual}
\begin{aligned}
I[A,C|B]&=I(A,C|B)=\\
&=\frac{c}{3}\log\frac{\sin\left(\frac{\vartheta_c-\vartheta_a}{2}\right)\sin\left(\frac{\vartheta_d-\vartheta_b}{2}\right)}
{\sin\left(\frac{\vartheta_c-\vartheta_b}{2}\right)\sin\left(\frac{\vartheta_d-\vartheta_a}{2}\right)}
\end{aligned}
\end{equation}
Notice that the conditional mutual information is finite and independent of the choice of regularization, while the entanglement entropy and mutual information are divergent and depend on the regularization.
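This regularization independence of \eqref{eq:mutual} is easy to verify numerically: the cutoff $\rho_0$ cancels between the four entropies entering $I(A,C|B)$, while strong subadditivity keeps the result non-negative. A minimal sketch with arbitrary endpoint angles and the central charge set to $c=1$ for simplicity:

```python
import math

def S(th_i, th_j, rho0, c=1.0):
    # entropy of a disk interval with endpoint angles th_i < th_j, Eq. (entropy_disk)
    return (c / 3) * math.log(math.exp(rho0) * math.sin((th_j - th_i) / 2))

th = dict(a=0.2, b=0.9, c=1.7, d=2.6)  # endpoints of A = (ab), B = (bc), C = (cd)

def I_cond(rho0):
    # I(A,C|B) = S(AB) + S(BC) - S(B) - S(ABC)
    return (S(th['a'], th['c'], rho0) + S(th['b'], th['d'], rho0)
            - S(th['b'], th['c'], rho0) - S(th['a'], th['d'], rho0))

assert abs(I_cond(5.0) - I_cond(9.0)) < 1e-12   # cutoff independence
assert I_cond(5.0) >= 0.0                       # strong subadditivity
```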
Having introduced our basic quantities of quantum information in the disk representation, let us elucidate the meaning of some of them in the BTZ representation.
First of all in the usual BTZ representation we have the following formula for the entanglement entropy\cite{Calabrese}
\begin{equation}
\label{eq:usual}
S[A]=\frac{c}{3}\log\left(\frac{\beta}{\pi \epsilon}\sinh\left(\frac{\pi\ell}{\beta}\right)\right)
\end{equation}
where $\ell =\Delta\varphi r_0$, $L=2\pi r_0$ and $\sqrt{M}=\frac{r_+}{R}=\frac{L}{\beta}$.
One can also write
\begin{equation}
\label{eq:reszformula}
\frac{\pi\ell}{\beta}=\frac{1}{2}\sqrt{M}\Delta\varphi=\frac{1}{2}\Delta t
\end{equation}
where we have used \eqref{eq:kell} and
$\Delta t=t_a-t_b$.
Using now $\frac{2r_0}{r_+}\simeq\frac{\beta}{\pi\epsilon}$ and formula
\eqref{eq:kapocs} one can see that expressions \eqref{eq:usual} and \eqref{eq:entropy_disk} can be converted to each other by applying either $t$ or $\vartheta$ dependent cutoffs.
Of course, when calculating conditional mutual informations such cutoffs are immaterial, since these quantities are gauge (regularization) invariant.
Indeed, an alternative formula for \eqref{eq:mutual} is then given by
\begin{equation}\label{eq:mutual2}
I(A,C|B)=\frac{c}{3}\log\frac{\sinh\left(\sqrt{M}\frac{\varphi_c-\varphi_a}{2}\right)\sinh\left(\sqrt{M}\frac{\varphi_d-\varphi_b}{2}\right)}
{\sinh\left(\sqrt{M}\frac{\varphi_c-\varphi_b}{2}\right)\sinh\left(\sqrt{M}\frac{\varphi_d-\varphi_a}{2}\right)}
\end{equation}
Let us now consider in the BTZ picture the special case when in the disk picture we have $c=\overline{a}$ and $d=\overline{b}$.
In this case $\varphi_c-\varphi_a=\varphi_d-\varphi_b=2\pi$, and $\Delta\varphi\equiv\varphi_c-\varphi_b=\varphi_d-\varphi_a$ (see FIG 3c).
Then, in this special case to be used later, we have
\begin{equation}
\label{eq:later}
\begin{aligned}
I(ab,\overline{a}\overline{b}|b\overline{a})&=I[A,C|B]=\\
&=\frac{2c}{3}\left[\log\left(\sinh\left(\pi\sqrt{M}\right)\right)-\log\left(\sinh\left(\frac{\Delta\varphi}{2}\sqrt{M}\right)\right)\right]
\\&=
\frac{c}{3}(2\pi-\Delta\varphi)\sqrt{M}+\dots\\& =2\frac{(2\pi-\Delta\varphi)r_+}{4G}+\dots
\end{aligned}
\end{equation}
where the dots refer to terms vanishing in the high temperature limit, $\sqrt{M}=\frac{r_+}{R}\gg 1$, and $c=\frac{3R}{2G}$ due to the Brown-Henneaux relation.
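The size of the neglected terms can be checked numerically. The sketch below (an illustration of ours, with the arbitrary choice $c=1$) compares the exact expression of \eqref{eq:later} with its leading high temperature form; since $\log\sinh x=x-\log 2+\mathcal{O}(e^{-2x})$ and the two $-\log 2$ terms cancel between the two logarithms, the error decays exponentially in $\sqrt{M}$.

```python
from math import log, sinh, sqrt, pi

c = 1.0          # illustrative central charge
dphi = 2.0       # boundary interval Delta phi

def I_exact(dphi, M):
    """(2c/3)[log sinh(pi sqrt(M)) - log sinh(dphi sqrt(M)/2)]"""
    s = sqrt(M)
    return (2 * c / 3) * (log(sinh(pi * s)) - log(sinh(dphi * s / 2)))

def I_htl(dphi, M):
    """leading high temperature term (c/3)(2 pi - dphi) sqrt(M)"""
    return (c / 3) * (2 * pi - dphi) * sqrt(M)

# the deviation shrinks exponentially as sqrt(M) grows
err_small = abs(I_exact(dphi, 9.0) - I_htl(dphi, 9.0))
err_large = abs(I_exact(dphi, 100.0) - I_htl(dphi, 100.0))
assert err_large < err_small < 1e-2
assert err_large < 1e-7
```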
\section{The cluster algebraic structure of BTZ triangulations}
\begin{figure*}[t]
\hspace{1cm}
\subfloat[]{
\includegraphics[width=0.41\columnwidth]{flip3_jo.png}}
\hfill
\subfloat[]{
\includegraphics[width=0.4\columnwidth]{flip2_jo.png}}
\hfill
\subfloat[]{
\includegraphics[width=0.42\columnwidth]{flip1_jo.png}}
\hspace{1cm}
\caption{Triangulations of a BTZ black hole with $N=4$ marked points on the boundary (top figures), and their $2N=8$-gon disk representations (bottom figures). Solid black and red curves are the edges and diagonals of the polygons. Each centrally symmetric pair of non-diametric diagonals (or one simple diameter) in the disk corresponds to a diagonal of the BTZ quadrangle. In each picture we illustrated a flip from the solid red curves to the dashed red ones. In (a) there is a flip in an ordinary quadrangle. The exchange relations for this type of flip are given by \eqref{eq:flip1} and \eqref{eq:flip1_BTZ} with labels $i=a$, $j=b$, $k=c$ and $l=d$. In (b) a diagonal of a folded quadrangle is flipped. The exchange relations are \eqref{eq:flip4} and \eqref{eq:flip2_BTZ} with labels $i=a$, $j=b$ and $k=d$. Finally (c) shows a loop flip. In this case the exchange relations are given by \eqref{eq:flip6} and \eqref{eq:flip3_BTZ} with labels $i=a$ and $j=b$.}
\label{fig:flips}
\end{figure*}
Now we are in a position to present the main result of this paper.
We show that geodesic triangulations of the BTZ black hole in the HTL, with $N$ marked points on the boundary, exhibit a particular structure. This structure manifests itself in an exchange pattern of type $C_{N-1}$, well known from the literature on cluster algebras. Since via the Ryu-Takayanagi formula regularized geodesic lengths are directly related to entanglement entropies, this $C_{N-1}$ structure also provides an algebraic characterization of the entanglement patterns of the boundary thermal state dual to the BTZ geometry.
Before presenting a detailed elaboration of this result, let us recall some basic definitions\cite{Williams}. Fix $N$ points on the $r\rightarrow\infty$ boundary. A triangulation of the bulk is a maximal set of distinct, pairwise non-intersecting geodesics; we assume that these geodesics do not intersect themselves in the bulk. They split the bulk geometry into domains, which we call triangles. Now assume that we delete an arc from a given triangulation and add another one instead, such that we again obtain a triangulation. We call this transformation a flip. Let $n$ be the number of compatible arcs in a triangulation. Then the exchange graph is the $n$-regular graph whose vertices are the triangulations and whose edges connect triangulations related by a flip.
Now we turn to the static, macroscopic BTZ black hole. Its outer horizon region is not simply connected. Recall from \hyperref[sec:3]{Section III.} that, as a result of this, in the BTZ geometry we find different types of geodesics. There are geodesic arcs that wind around the hole and have coinciding boundary endpoints. Such arcs are called loops. On the other hand, between two different marked boundary points there can be two ordinary arcs: one located on one side of the hole and the other on the opposite side.
A triangulation of the BTZ black hole with $N$ marked points consists of $N-2$ ordinary geodesics and a loop. If we triangulate the bulk, it is built from the following domains. First of all there are ordinary triangles whose edges are sides or diagonals of the polygon. Then we also have folded triangles with one of their edges being the loop (so they have only two vertices). Finally we have a non-simply connected domain containing the hole, with the domain boundary formed by the loop and the horizon. In this case there are two types of flips. First, we can flip a diagonal in a quadrangle; this quadrangle can also be folded if one of its sides is the loop. Second, we can also flip the loop in a digon to get another loop. The three examples of such flips for $N=4$ are illustrated in \hyperref[fig:flips]{FIG. 4.}
We can construct the exchange graph of the triangulations if we notice that there is a one-to-one correspondence between the triangulations of an $N$-gon with a hole in its center, and the centrally-symmetric triangulations of an ordinary $2N$-gon. Let us denote the $N$ marked points on the upper half of $\partial \mathbb{D}$ by $a,b,\dotsc$. We can construct the $2N$-gon by adjoining an antipodal copy of each vertex of the $N$-gon, denoted by $\bar{a},\bar{b},\dotsc$. Then every ordinary arc of the $N$-gon corresponds to a centrally symmetric pair of diagonals $(i\bar{j})$ and $(\bar{i}j)$ ($i$ and $j$ label different vertices), and the loop corresponds to a diameter $(i\bar{i})$ of the $2N$-gon. Then an ordinary flip of the $N$-gon is a centrally-symmetric flip of two diagonals, and the exchange of the loop is the flip of the diameter in the $2N$-gon picture. With this construction the resulting exchange graph is a $\mathcal{C}_{N-1}$ cyclohedron \cite{BT,HL,Devadoss}. For $N=4$ see \hyperref[fig:cyclo]{FIG. 5}.
Then one can conclude that the exchange pattern of the static, macroscopic BTZ black hole is a cyclohedron.
\begin{figure*}[t]
\hspace{1cm}
\subfloat[]{
\includegraphics[width=0.7\columnwidth]{cycloBTZ.png}}
\hfill
\subfloat[]{
\includegraphics[width=0.7\columnwidth]{cycloDisk.png}}
\hspace{1cm}
\caption{The cyclohedron $\mathcal{C}_3$ is the exchange graph of a $C_{3}$ cluster algebra. In (a) the vertices are the triangulations of a quadrangle with a hole inside. This illustrates the triangulations of the BTZ black hole with $N=4$ marked points on the boundary. In (b) the vertices are the centrally symmetric octagon equivalents of the triangulations in (a). This corresponds to the $2N=8$-gon disk representation of the BTZ quadrangle. Each pair of non-diametric diagonals of a given color represents a single diagonal of the same color in the quadrangle picture.}
\label{fig:cyclo}
\end{figure*}
There is a natural correspondence between triangulations of bordered surfaces and cluster algebras\cite{Williams}. In the case of the entanglement patterns of the vacuum state dual to pure $AdS_3$ we already know that the associahedron is not only a special case of an exchange graph of such triangulations, but is generated by a seed pattern of an $A_{N-3}$ cluster algebra\cite{LB}. Hence one can pose the question: is there also a cluster algebra behind the thermal state dual to the BTZ black hole, encoding entanglement patterns via our cyclohedron?
It is known from the theory of cluster algebras that the cyclohedron is the seed pattern for $C_n$ type cluster algebras. Hence one can conjecture that a $C_n$ type cluster algebra governs the algebraic properties of the geodesics of the static, HTL BTZ black hole. Moreover, since this bulk black hole is dual to a boundary thermal state, one can argue via the Ryu-Takayanagi correspondence that the same algebraic structure also governs the entanglement patterns of this quantum state.
In order to elaborate on this conjecture first we mark $N=4$ points (label them by $a,b,c,d$) on the boundary of
the Poincar\'e disk. We can think of them forming a quadrilateral with sides $(ab)$,$(bc)$,$(cd)$ and $(ad)$. Then the following relation holds between the corresponding geodesic lambda lengths \eqref{eq:lambda_D}:
\begin{equation}\label{eq:Ptolemy}
\lambda(ac)\lambda(bd)=\lambda(ab)\lambda(cd)+\lambda(ad)\lambda(bc)
\end{equation}
This is the Ptolemy relation\cite{Penner}, which holds for an arbitrary geodesic quadrangle on the whole Poincar\'e disk.
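A quick numerical confirmation may be instructive. The sketch below (our own illustration) assumes a symmetric choice of horocycles for which the lambda length reduces to the Euclidean chord length $2\sin\frac{\vartheta_j-\vartheta_i}{2}$; relation \eqref{eq:Ptolemy} then reduces to Ptolemy's classical theorem for cyclic quadrilaterals.

```python
from math import sin

def lam(t1, t2):
    """Chord-length model of the lambda length (symmetric horocycles)."""
    return 2.0 * abs(sin((t2 - t1) / 2.0))

# cyclically ordered boundary angles of the quadrilateral a, b, c, d
ta, tb, tc, td = 0.2, 1.4, 2.5, 4.0

# Ptolemy: product of the diagonals = sum of products of opposite sides
lhs = lam(ta, tc) * lam(tb, td)
rhs = lam(ta, tb) * lam(tc, td) + lam(ta, td) * lam(tb, tc)
assert abs(lhs - rhs) < 1e-12
```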
Now we cut the disk in half and make the $\vartheta=0\sim\vartheta=\pi$ identification to get the representation of our BTZ black hole. If we mark $N$ points on the BTZ boundary, then in this disk representation each point will be located on the upper semicircle of the disk.
We already know that to each vertex $a,b,c,\dotsc\in\partial BTZ$ (in cyclic order) of a BTZ $N$-gon there corresponds a centrally symmetric pair of vertices in the corresponding $2N$-gon picture. Denote this new set of points by $a,b,c,\dotsc,\bar{a},\bar{b},\bar{c},\dotsc\in\partial\mathbb{D}$. As in \eqref{eq:intervals1}, \eqref{eq:intervals2}, \eqref{eq:intervals3} and \eqref{eq:intervals4}, to every ordinary geodesic $[ij]\subset\partial BTZ$ there corresponds a centrally symmetric pair of geodesics $(ij),(\bar{i}\bar{j})\subset\partial\mathbb{D}$ (or $(i\bar{j}),(j\bar{i})\subset\partial\mathbb{D}$), and to every loop-like geodesic $[ii]\subset\partial BTZ$ there corresponds a diameter $(i\bar{i})\subset\partial\mathbb{D}$ of the $2N$-gon.
Every flip in a triangulation of the $N$-gon corresponds to a centrally symmetric (or a diametrical) flip of the $2N$-gon. Choose four different points $i,j,k,l\in\partial\mathbb{D}$ (in cyclic order) from $a,b,c,\dots\in\partial\mathbb{D}$. With the Ptolemy relation \eqref{eq:Ptolemy} and the lambda lengths of the orbits we can give six exchange relations for the different types of flips
\begin{subequations}
\begin{align}
&\lambda(i k) \lambda(j l)=\lambda(i j) \lambda(k l)+\lambda(i l) \lambda(j k) \label{eq:flip1}\\
&\lambda(k \bar{i}) \lambda(j l)=\lambda(j \bar{i}) \lambda(k l)+\lambda(l \bar{i}) \lambda(j k)\label{eq:flip2}\\
&\lambda(k \bar{i}) \lambda(l \bar{j})=\lambda(\bar{i} \bar{j}) \lambda(k l)+\lambda(l \bar{i}) \lambda(k \bar{j})\label{eq:flip3}\\
&\lambda(i k) \lambda(j \bar{i})=\lambda(i j) \lambda(k \bar{i})+\lambda(i \bar{i}) \lambda(j k)\label{eq:flip4}\\
&\lambda(j \bar{i}) \lambda(k \bar{j})=\lambda(\bar{i} \bar{j}) \lambda(j k)+\lambda(j \bar{j}) \lambda(k\bar{i})\label{eq:flip5}\\
&\lambda(i \bar{i}) \lambda(j \bar{j})=\lambda(i j)\lambda(\bar{i} \bar{j})+\lambda(j \bar{i})\lambda(\bar{j} i)\label{eq:flip6}
\end{align}
\end{subequations}
Now one can compare this set of relations with the ($r=2$) set of equations (12.10)-(12.15) of Ref.\cite{Fomin3}.
After taking into account \eqref{eq:intervals1} and \eqref{eq:intervals2} one can realize that these
are the exchange relations of a $C_{N-1}$ cluster algebra\cite{Fomin3}. This means that for the triangulations of the macroscopic BTZ black hole with $N$ marked points on its boundary, the geodesic lambda lengths generate a $C_{N-1}$ cluster algebra.
Using \eqref{eq:intervals2}, \eqref{eq:intervals3} and \eqref{eq:intervals4}, we can express the lambda lengths in terms of the BTZ boundary intervals as well. One can see that the first three relations correspond to the same type of flip (flip in an ordinary quadrangle). Similarly \eqref{eq:flip4} and \eqref{eq:flip5} are the algebraic relations for flips in folded quadrangles. Finally the last one corresponds to loop flips. Hence in terms of BTZ intervals we obtain only the following three different exchange relations
\begin{subequations}
\begin{align}
&\lambda[i k] \lambda[j l]=\lambda[i j] \lambda[k l]+\lambda[i l] \lambda[j k] \label{eq:flip1_BTZ}\\
&\lambda[i k] \lambda[j i]=\lambda[i j] \lambda[k i]+\lambda[\partial BTZ] \lambda[j k]\label{eq:flip2_BTZ}\\
&\lambda[\partial BTZ]^2=\lambda[i j]^2+\lambda[j i]^2\label{eq:flip3_BTZ}
\end{align}
\end{subequations}
where now $i,j,k,l\in\partial BTZ$ label $\partial BTZ$ points in cyclic order.
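In the chord-length model (a symmetric horocycle choice used purely for illustration), the central symmetry $\vartheta_{\bar{i}}=\vartheta_i+\pi$ turns \eqref{eq:flip6} into a Pythagorean identity, since then $\lambda(ij)$ and $\lambda(j\bar{i})$ are proportional to the sine and cosine of the same half-angle; this is the content of the loop-flip relation \eqref{eq:flip3_BTZ}. A minimal numerical check of ours:

```python
from math import sin, pi

def lam(t1, t2):
    """Chord-length model of the lambda length (symmetric horocycles)."""
    return 2.0 * abs(sin((t2 - t1) / 2.0))

ti, tj = 0.4, 1.7            # marked points on the upper semicircle
tib, tjb = ti + pi, tj + pi  # antipodal copies (central symmetry)

# disk relation (flip6): lam(i,ibar) lam(j,jbar) = lam(i,j) lam(ibar,jbar) + lam(j,ibar) lam(jbar,i)
lhs = lam(ti, tib) * lam(tj, tjb)
rhs = lam(ti, tj) * lam(tib, tjb) + lam(tj, tib) * lam(tjb, ti)
assert abs(lhs - rhs) < 1e-12

# BTZ form (flip3_BTZ): lam[dBTZ]^2 = lam[ij]^2 + lam[ji]^2
assert abs(lam(ti, tib) ** 2 - (lam(ti, tj) ** 2 + lam(tj, tib) ** 2)) < 1e-12
```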
We can rewrite these relations in a more general form by encoding the geometry of a triangulation in an incidence matrix. In order to do this we choose an arbitrary BTZ triangulation and put a mark at the midpoint of its $[ij]\subset\partial BTZ$ diagonals and edges. We label the diagonals of the triangulation by $1,\dots,N-1$, and its edges by $N,\dots,2N-1$. In the disk $2N$-gon representation we label the corresponding pair $(ij)$ and $(\bar{i}\bar{j})$ (or $(i\bar{j})$ and $(j\bar{i})$) with the same number as $[ij]$. Now within each $\mathbb{D}$ triangle of the centrally symmetric $2N$-gon triangulation connect the markers of its sides. In this way we obtain new inscribed triangles. Orient these new triangles clockwise. The result is a directed graph whose vertices represent diagonals and edges of a given centrally symmetric disk triangulation (see \hyperref[fig:quiver]{FIG. 6.}).
Now we construct a $(2N-1)\times(2N-1)$ matrix $B$ with matrix elements $b_{ij}$ in the following way. Assume that $i$ and $j$ label different edges in the disk triangulation. Then $b_{ij}=k>0$ if there are $k$ arrows in the graph going from a chosen vertex labeled by $i$ to $k$ different vertices labeled by $j$, and $b_{ij}=-k<0$ if there are $k$ arrows going from $k$ different vertices labeled by $j$ to a chosen vertex labeled by $i$. For example one can check that in FIG. 6. we have $b_{15}=1$, $b_{32}=-2$ and $b_{37}=2$. We see that
by keeping track of the mutual positions of the edges and diagonals
the matrix $B$ fully characterizes a BTZ triangulation.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{quiver.png}
\caption{Graph representation of a BTZ quadrangle triangulation. On the left hand side the triangulation is in the BTZ picture. The diagonals are labeled by $1,\dots,N-1$ while the edges by $N,\dots,2N-1$. On the right hand side one can find the corresponding $2N$-gon representation in $\mathbb{D}$. Here the edges and diagonals are labeled according to \eqref{eq:intervals1},\eqref{eq:intervals2},\eqref{eq:intervals3} and \eqref{eq:intervals4}. These give rise to a directed graph, based on inscribed triangles, whose vertices correspond to geodesics. The edges of the graph (red arrows) belong to clockwise oriented inscribed triangles.}
\label{fig:quiver}
\end{figure}
Assume that we are flipping the $j$'th ($j\in\{1,\dots,N-1\}$) diagonal. We denote its lambda length before the flip by $\lambda[j]$, and after the flip by $\lambda'[j]$. Then
for an arbitrary triangulation
we can rewrite equations \eqref{eq:flip1_BTZ},\eqref{eq:flip2_BTZ} and \eqref{eq:flip3_BTZ} in the compact form
\begin{equation}
\lambda[j]\lambda'[j]=\prod_{1\leq i\leq 2N-1 \atop b_{ij}>0}\lambda[i]^{b_{ij}}+\prod_{1\leq i\leq 2N-1 \atop b_{ij}<0}\lambda[i]^{-b_{ij}}
\end{equation}
which is a special case of the defining relations of a cluster algebra\cite{Fomin1,Fomin3,Williams}.
With the Ryu-Takayanagi conjecture we can give the exchange relation for von Neumann entropies as well. Using \eqref{eq:entropy_lambda} one can show that the following holds:
\begin{equation}\label{eq:entropy_recursion}
S[j]+S'[j]=\frac{1}{2}\sum_{i=1}^{2N-1}|b_{ij}|S[i]+\frac{c}{3}\log\left[2\cosh\left(\frac{3}{2c}\sum_{i=1}^{2N-1}b_{ij}S[i]\right)\right]
\end{equation}
Notice that this formula is also true for the vacuum case, where we have shown\cite{LB} that the lambda lengths determine an $A_{N-3}$ cluster algebra. The differences are only in the $B$ matrices constructed for the corresponding triangulations and in the number of independent entanglement entropies for a given number of CFT subsystems. Hence we obtained the result that in the case of the vacuum (dual to pure $AdS_3$) and the thermal state (dual to the BTZ black hole in the HTL), the CFT entanglement structures are encoded in the $B$ matrices of cluster algebras. In both cases the recursion relation \eqref{eq:entropy_recursion} gives us an effective way to determine all of the $\mathcal{O}(N^2)$ entanglement entropies knowing only $\mathcal{O}(N)$ of such quantities.
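The algebraic step behind \eqref{eq:entropy_recursion} is elementary: one writes $S=\frac{c}{3}\log\lambda$ in the flip relation and uses the identity $\log(e^x+e^y)=\frac{x+y}{2}+\log\left(2\cosh\frac{x-y}{2}\right)$. The sketch below (our own illustration, with made-up entropy values and a hypothetical column of signs $b_{ij}$) verifies this step numerically:

```python
from math import log, exp, cosh

c = 1.0  # illustrative central charge

# entropies of the edges entering the flip, and a hypothetical column
# of the B matrix for the flipped diagonal j (made-up values)
S = [0.7, 1.3, 0.4, 2.1]
b = [1, 1, -1, -1]

A = sum(bi * Si for bi, Si in zip(b, S) if bi > 0)   # exponent of P_+
B = sum(-bi * Si for bi, Si in zip(b, S) if bi < 0)  # exponent of P_-

# S[j] + S'[j] from lambda[j] lambda'[j] = P_+ + P_-
lhs = (c / 3) * log(exp(3 * A / c) + exp(3 * B / c))

# right-hand side of the entropy recursion relation
rhs = (0.5 * sum(abs(bi) * Si for bi, Si in zip(b, S))
       + (c / 3) * log(2 * cosh((3 / (2 * c)) * sum(bi * Si for bi, Si in zip(b, S)))))

assert abs(lhs - rhs) < 1e-12
```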
\section{Kinematic space and $Y$-systems}
In this section we examine how our cluster algebraic structures manifest themselves in the space of directed geodesics, the so called kinematic space\cite{Czech1}. In Section III. we parametrized the geodesics on the Poincar\'e disk by $(B_1,B_2,M)$, used as coordinates in kinematic space. According to \eqref{eq:conserved}, the parameters characterizing the geodesics satisfy $B_1^2+B_2^2-M^2=1$. So we can think of the kinematic space as\cite{Czech1} a two dimensional de Sitter space $dS_2$ embedded in $\mathbb{R}^{2,1}$, endowed with the inner product
\begin{equation}
ds^2_{\mathbb{K}}=dB_1^2+dB_2^2-dM^2
\end{equation}
A more useful way to deal with the kinematic space is to use the $(\theta, \alpha)$ or the $(u,v)$ pairs from \eqref{eq:alfateta} and \eqref{eq:uv} as generalized coordinates. Applying the transformations of \eqref{eq:conserved} the induced metric is
\begin{equation}\label{eq:metric_kinematic}
d s_{\mathbb{K}}^{2}=\frac{d \theta^{2}-d \alpha^{2}}{\sin ^{2} \alpha}=\frac{dudv}{\sin ^{2} \frac{v-u}{2}}
\end{equation}
We can think of $(\theta,\alpha)$ as spacelike and timelike and $(u,v)$ as lightlike coordinates. The coordinate pairs $(\theta,\alpha)$ and $(\theta+\pi,\pi-\alpha)$ represent the same geodesic on $\mathbb{D}$ with different orientations. This means that the kinematic space of the whole Poincar\'e disk can be represented by the coordinate chart $(\theta,\alpha)\in[0,2\pi]\times[0,\pi]$, where $\theta\sim\theta+2\pi$ and a particular geodesic is represented by two points on the chart. On the other hand points of the Poincar\'e disk are represented by curves on the kinematic space. These are called point curves\cite{Czech1}. In the case of boundary points these are light-like straight lines (see \hyperref[fig:disk_kinematic]{FIG. 7.}).
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{Disk_kinematic.png}
\caption{Kinematic space for the Poincar\'e disk. The colored point curves correspond to the vertices labeled by $(0,1,2,3,4,5)$ of a geodesic hexagon. The bounded rectangular domains labeled by the $(j,k)$ pairs are defined by \eqref{eq:jk}.}
\label{fig:disk_kinematic}
\end{figure}
If we are dealing with geodesic polygons on $\mathbb{D}$, we have $N$ boundary points, giving rise to $N$ point curves on the kinematic space. They form a grid on the chart with rectangular domains. Let us choose two arbitrary points $a,b\in\partial\mathbb{D}$ with $0\leq a<b\leq N-1$ such that
\begin{align}\label{eq:jk}
a\equiv\frac{j-k}{2},&&b\equiv\frac{j+k}{2}&&\mod N
\end{align}
where $j=0,\dots,2N-1$, $k=0,\dots, N-2$ and $j+k\equiv0 \mod2$. This gives us a $(j,k)$ coordinate set for the kinematic space tiles (see \hyperref[fig:disk_kinematic]{FIG. 7.}). The area of these tiles can be calculated in the $(u,v)$ representation
\begin{equation}\label{eq:area}
\begin{aligned}
T_{j,k}&\equiv\int_{\vartheta_b}^{\vartheta_{b+1}} \int_{\vartheta_{a-1}}^{\vartheta_a} \frac{du\wedge dv}{4 \sin ^{2} \frac{v-u}{2}}=\\
&=\log\frac{\sin \left(\frac{\vartheta_b-\vartheta_{a-1}
}{2}\right) \sin \left(\frac{\vartheta_{b+1}-\vartheta_a}{2}\right)}{\sin \left(\frac{\vartheta_b-\vartheta_a}{2}\right) \sin \left(\frac{\vartheta_{b+1}-\vartheta_{a-1}
}{2}\right)}
\end{aligned}
\end{equation}
This cross ratio can also be expressed in terms of lambda lengths
\begin{equation}
T_{j,k}=\log\frac{\lambda(a-1b)\lambda(ab+1)}{\lambda(ab)\lambda(a-1b+1)}
\end{equation}
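As a consistency check, the closed form \eqref{eq:area} can be compared against a brute-force numerical integration of the kinematic measure over a tile. The sketch below is our own illustration with arbitrarily chosen angles:

```python
from math import log, sin

# consecutive boundary angles theta_{a-1} < theta_a < theta_b < theta_{b+1}
t_am1, t_a, t_b, t_bp1 = 0.1, 0.8, 2.0, 2.9

# midpoint-rule approximation of the tile-area integral
n = 400
du = (t_a - t_am1) / n
dv = (t_bp1 - t_b) / n
T_num = 0.0
for i in range(n):
    u = t_am1 + (i + 0.5) * du
    for j in range(n):
        v = t_b + (j + 0.5) * dv
        T_num += du * dv / (4.0 * sin((v - u) / 2.0) ** 2)

# closed cross-ratio form of the tile area
T_closed = log(sin((t_b - t_am1) / 2) * sin((t_bp1 - t_a) / 2)
               / (sin((t_b - t_a) / 2) * sin((t_bp1 - t_am1) / 2)))

assert abs(T_num - T_closed) < 1e-4
```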
Notice that the areas of the $k=0$ and $k=N-2$ domains are divergent.
The area form associated to the metric of \eqref{eq:metric_kinematic} is related to the Crofton form on kinematic space\cite{Czech1}
\begin{equation}
\omega=\frac{\partial^2 S(u,v)}{\partial u\partial v}du\wedge dv =\frac{c}{12}\frac{du\wedge dv}{ \sin ^{2} \frac{v-u}{2}}
\end{equation}
Using this relation and comparing equations \eqref{eq:mutual} and \eqref{eq:area} for the vacuum state dual to pure $AdS_3$, one can relate every inner tile to a conditional mutual information\cite{Czech1,LB}, namely
\begin{equation}
\begin{aligned}
I_{j,k}&=I(a-1a,bb+1|ab)=\\
&=S(a-1b)+S(ab+1)-S(ab)-S(a-1b+1)=\\
&=\frac{c}{3}T_{j,k}
\end{aligned}
\end{equation}
The $k=0$ and $k=N-2$ tiles with divergent areas can be associated with mutual informations of the form $I(A,B)=S(A)+S(B)-S(AB)$, which in terms of the point curves reads $I(a-1a,bb+1)=S(a-1a)+S(bb+1)-S(a-1b+1)$, where $a=b$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{y-system.png}
\caption{Four neighbouring tiles of the kinematic space in case of a geodesic $N$-gon. The tiles are labeled by $(j,k)$, $(j+1,k-1)$, $(j+1,k+1)$ and $(j,k+2)$. These regions are bounded by the point curves corresponding to vertices $a-1$, $a$, $a+1$, $b$, $b+1$ and $b+2$ of the $N$-gon.}
\label{fig:ysystem}
\end{figure}
Now consider four neighbouring tiles labeled by $(j,k)$, $(j+1,k-1)$, $(j+1,k+1)$ and $(j,k+2)$ (see \hyperref[fig:ysystem]{FIG. 8.}). These domains are bounded by the point curves corresponding to the $a-1$, $a$, $a+1$, $b$, $b+1$ and $b+2$ vertices. The areas of the four tiles are:
\begin{subequations}
\begin{align}
T_{j,k}&=\log\frac{\lambda(a-1b)\lambda(ab+1)}{\lambda(a-1b+1)\lambda(ab)},\\
T_{j+1,k-1}&=\log\frac{\lambda(ab)\lambda(a+1b+1)}{\lambda(ab+1)\lambda(a+1b)},\\
T_{j+1,k+1}&=\log\frac{\lambda(a-1b+1)\lambda(ab+2)}{\lambda(a-1b+2)\lambda(ab+1)},\\
T_{j,k+2}&=\log\frac{\lambda(ab+1)\lambda(a+1b+2)}{\lambda(a+1b+1)\lambda(ab+2)}
\end{align}
\end{subequations}
Let us introduce the following quantity for each domain:
\begin{equation}\label{eq:Y}
Y_{j,k}=\frac{1}{e^{T_{j,k}}-1}=\frac{1}{e^{\frac{3}{c}I_{j,k}}-1}
\end{equation}
Using the definition of the lambda length and trigonometric identities, one can show the following relation
\begin{equation}\label{eq:recursion2}
Y_{j,k}Y_{j+2,k}=(1+Y_{j+1,k-1})(1+Y_{j+1,k+1})
\end{equation}
or, by changing the label $j\rightarrow j-1$ (so that now $j+k\equiv1 \mod2$):
\begin{equation}\label{eq:recursion}
Y_{j-1,k}Y_{j+1,k}=(1+Y_{j,k-1})(1+Y_{j,k+1})
\end{equation}
with the boundary conditions $Y_{j,0}=Y_{j,N-2}=0$.
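The relation \eqref{eq:recursion} and its boundary conditions can be verified numerically in the chord-length model of the lambda lengths (a symmetric horocycle choice, used here purely as an illustration; the boundary angles are arbitrary). The tile areas are computed from the lambda-length cross ratio as above:

```python
from math import sin, log, exp

N = 8
# cyclically ordered boundary angles of a geodesic N-gon (arbitrary)
theta = [0.1, 0.9, 1.6, 2.3, 3.1, 4.0, 4.9, 5.7]

def lam(i, j):
    """Chord-length model of the lambda length between vertices i and j."""
    return 2.0 * abs(sin((theta[i % N] - theta[j % N]) / 2.0))

def Y(j, k):
    """Y_{j,k}; tiles carry labels with j+k even."""
    if k <= 0 or k >= N - 2:
        return 0.0  # boundary conditions Y_{j,0} = Y_{j,N-2} = 0
    a, b = (j - k) // 2, (j + k) // 2
    T = log(lam(a - 1, b) * lam(a, b + 1) / (lam(a, b) * lam(a - 1, b + 1)))
    return 1.0 / (exp(T) - 1.0)

# Y_{j-1,k} Y_{j+1,k} = (1 + Y_{j,k-1})(1 + Y_{j,k+1}); the recursion
# is centered on (j,k) with j+k odd
for k in range(1, N - 2):
    for j in range(2 * N):
        if (j + k) % 2 == 1:
            lhs = Y(j - 1, k) * Y(j + 1, k)
            rhs = (1 + Y(j, k - 1)) * (1 + Y(j, k + 1))
            assert abs(lhs - rhs) < 1e-9 * rhs
```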
As we have shown in our previous paper\cite{LB}, in the case of the vacuum/pure $AdS_3$ duality these relations define a Zamolodchikov $Y$-system\cite{Zamo,FrenkelSzenes}.
\begin{figure*}[t]
\hspace{0.2cm}
\subfloat[]{
\includegraphics[width=0.9\columnwidth]{kinematic.png}}
\hfill
\subfloat[]{
\includegraphics[width=0.95\columnwidth]{BTZ_kinematic.png}}
\hspace{0.3cm}
\caption{(a) The kinematic space of a Poincar\'e disk hexagon with vertices labeled by $a,b,c,\bar{a},\bar{b},\bar{c}$ such that $a,b,c$ are centrally symmetric to $\bar{a},\bar{b},\bar{c}$. In this special case the kinematic space is built up from four identical domains, bordered by dashed grey lines. In the bottom left segment we labeled the tiles by the $(j,k)$ pairs defined by \eqref{eq:jk}. In the other three segments we labeled the tiles accordingly. One can look at this space as the kinematic space of a $2N$-gon equivalent to a BTZ black hole $N$-gon. (b) A fundamental domain that fully represents the BTZ black hole $N$-gon. Notice that the uppermost triangles now have finite areas.}
\label{fig:kinematic_BTZ}
\end{figure*}
Based on this result a natural question arises: what is the form of the $Y$-system for the thermal state/BTZ black hole duality?
In order to answer this question we turn to the kinematic space representation of BTZ geodesic polygons. Again we deal with the triangulations of the BTZ $N$-gon using a $2N$-gon on the Poincar\'e disk. This means that the kinematic space of the Poincar\'e disk (from now on denoted by $\mathbb{K}_{\mathbb{D}}$) with $2N$ point curves fully represents a BTZ $N$-gon triangulation. In this picture we have centrally symmetric pairs of vertices and geodesics on $\mathbb{D}$, and the real BTZ black hole space is covered by just half of the disk. As a result, one can identify four identical fundamental domains on $\mathbb{K}_{\mathbb{D}}$, any one of which completely describes the BTZ $N$-gon. Let us denote an arbitrary copy of these four domains by $\mathbb{K}_{BTZ}$. We can say that $\mathbb{K}_{BTZ}$ is the $[0,\pi]\times[0,\pi/2]$ quarter of the $[0,2\pi]\times[0,\pi]$ chart. For $\mathbb{K}_{\mathbb{D}}$ we have $\theta\sim\theta+2\pi$, but along $\theta$ it is made up of two identical copies, while for $\mathbb{K}_{BTZ}$ we have $\theta\sim\theta+\pi$. The kinematic space of the disk representation is shown in \hyperref[fig:kinematic_BTZ]{FIG. 9. (a)}. The $\theta=\pi$ and $\alpha=\pi/2$ lines cut the kinematic space into four identical domains, and one of them fully represents the BTZ black hole. This domain is shown in \hyperref[fig:kinematic_BTZ]{FIG. 9. (b)}.
We can do the $(j,k)$ labeling for $\mathbb{K}_{\mathbb{D}}$ as before. But now there are four identical fundamental domains, so each tile with a given area is included in the kinematic space four times. Therefore we can label the tiles of one fundamental domain $\mathbb{K}_{BTZ}$ by the previous rules, and copy the labeling to the other three domains accordingly to label $\mathbb{K}_{\mathbb{D}}$. We can choose the range of coordinates to be $j=0,1,\dots,2N+1$ and $k=0,1,\dots,N-1$, with $j+k\equiv0\mod\,2$, to cover all the different tiles in $\mathbb{K}_{BTZ}$. The labeling for the disk representation is shown in \hyperref[fig:kinematic_BTZ]{FIG. 9. (a)} and for the BTZ representation in \hyperref[fig:kinematic_BTZ]{FIG. 9. (b)}.
Let us see what entanglement quantities are encoded in these tiles. The areas of the $k=0$ domains are proportional to the divergent mutual informations, namely:
\begin{equation}
\begin{aligned}
\frac{c}{3}T_{j,0}&=I[a-1a,bb+1]=\\
&=S[a-1,a]+S[b,b+1]-S[a-1,b+1]
\end{aligned}
\end{equation}
where $a=b$. Notice that we are using the square bracket notation since we are working in $\mathbb{K}_{BTZ}$. For $0<k\leq N-2$ we get conditional mutual informations of the form
\begin{equation}
\begin{aligned}
\frac{c}{3}T_{j,k}&=I_{j,k}=\\
&=I[a-1a,bb+1|ab]=\\
&=S[a-1b]+S[ab+1]-S[ab]-S[a-1b+1]
\end{aligned}
\end{equation}
Let us denote the areas of the $k=N-1$ tiles (see e.g. the topmost triangles in \hyperref[fig:kinematic_BTZ]{FIG. 9. (b)}) by $T_{j,N-1}$. Now we go back to $\mathbb{K}_{\mathbb{D}}$. In this picture the $k=N-1$ tiles (see e.g. the squares in the middle strip of \hyperref[fig:kinematic_BTZ]{FIG. 9. (a)}) have areas $2\cdot T_{j,N-1}$. These squares are bounded by the point curves $a,\bar{a},b,\bar{b}$, where
\begin{align}\label{eq:jk_BTZ}
a\equiv\frac{j-k}{2},&&b\equiv\frac{j+k}{2},&&\bar{b}\equiv a-1,&&\bar{a} \equiv b+1&&\mod 2N
\end{align}
($a$ and $b$ can denote barred point curves as well). So the areas can be expressed by entanglement entropies in the following way
\begin{equation}
2\cdot\frac{c}{3}T_{j,N-1}=
I(ab,\overline{a}\overline{b}|b\overline{a})=
S(a\bar{a})+S(b\bar{b})-S(a\bar{b})-S(b\bar{a})
\end{equation}
where $I(ab,\overline{a}\overline{b}|b\overline{a})$ is the conditional mutual information we have calculated in \eqref{eq:later}.
Notice that $S(a\bar{a})$ and $S(b\bar{b})$ both give the von Neumann entropy $S[\partial BTZ]$ of the whole BTZ boundary (which is nonzero since the thermal state is a mixed state), and that $S(a\bar{b})=S(b\bar{a})=S[ba]$. Now one can see that
\begin{equation}
\begin{aligned}
S(a\bar{a})-S(a\bar{b})&=S(b\bar{b})-S(b\bar{a})=\\
&=S[\partial BTZ]-S[ab]=\\
&=S[ba|ab]
\end{aligned}
\end{equation}
where $ba$ and $ab$ now represent BTZ boundary intervals and $S[ba|ab]=S[\partial BTZ]-S[ab]$ is a conditional entropy. Interestingly, in this special case it is just half of the conditional mutual information $I(ab,\overline{a}\overline{b}|b\overline{a})$ that we calculated in \eqref{eq:later}. If we return to the BTZ kinematic space one can also write
\begin{equation}
\frac{c}{3}T_{j,N-1}=S[ba|ab]=\frac{1}{2}I(ab,\overline{a}\overline{b}|b\overline{a})
\end{equation}
Summarizing what we have so far
\begin{equation}
T_{j,k}=\frac{3}{c}\cdot\left\{\begin{array}{ll}
I[a-1a,bb+1], & \text { if } k=0 \\
I[a-1a,bb+1|ab], & \text { if } 0< k <N-1\\
S[ba|ab],& \text { if } k=N-1
\end{array}\right.
\end{equation}
where
\begin{align}
a\equiv\frac{j-k}{2},&&b\equiv\frac{j+k}{2}&&\mod N
\end{align}
We now derive a $Y$-system for the high-temperature BTZ case. Similarly to \eqref{eq:Y}, we introduce the quantities
\begin{equation}
Y_{j,k}=\left\{\begin{array}{ll}
\frac{1}{e^{T_{j,k}}-1}, & \text { if } 0\leq k <N-1 \\
\frac{1}{e^{2T_{j,k}}-1}, & \text { if } k=N-1
\end{array}\right.
\end{equation}
where $T_{j,k}$ are the areas of the tiles in $\mathbb{K}_{BTZ}$. In the $\mathbb{K}_\mathbb{D}$ representation one can give two types of relations between different tiles: one for the inner tiles of a fundamental domain,
\begin{equation}
Y_{j-1,k}Y_{j+1,k}=(1+Y_{j,k-1})(1+Y_{j,k+1})
\end{equation}
where $k\leq N-2$, and one for the topmost tiles, shared by two fundamental domains:
\begin{equation}
Y_{j-1,N-1}Y_{j+1,N-1}=(1+Y_{j,N-2})^2
\end{equation}
These relations determine a Zamolodchikov $Y$-system of $C_{N-1}$ type
\begin{equation}
Y_{j-1,k} Y_{j+1,k}=\prod_{i \neq k}\left(Y_{j,i}+1\right)^{-a_{ki}}
\end{equation}
where the boundary conditions are now
\begin{align}
Y_{j,0}=0,&&Y_{j,N-1}=\frac{1}{e^{2T_{j,N-1}}-1}=\frac{1}{e^{2S[ba|ab]}-1}
\end{align}
Here $a_{ki}$ denotes the entries of the Cartan matrix of the $C_{N-1}$ Dynkin diagram. In general the solutions are periodic in the variable $j$ with period $4N$, which is inherited from the $\theta\sim\theta+2\pi$ periodicity of kinematic space. However, since the $2N$-gon is now centrally symmetric, the period in our case is $2N$.
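As an illustrative consistency check (our own sketch, not part of the derivation), one can construct the $C_{N-1}$ Cartan matrix numerically and verify that the exponents $-a_{ki}$ in the product formula reproduce the two relations above: exponent $1$ on each neighbour for an inner node, and exponent $2$ on the single neighbour of the last node. The function name and the 0-indexed node convention are ours:

```python
import numpy as np

def cartan_C(n):
    """Cartan matrix of type C_n (n >= 2): tridiagonal with 2 on the
    diagonal and -1 off-diagonal, except a[n-1, n-2] = -2, reflecting
    the long simple root of C_n."""
    a = 2 * np.eye(n, dtype=int)
    for i in range(n - 1):
        a[i, i + 1] = -1
        a[i + 1, i] = -1
    a[n - 1, n - 2] = -2  # long-root entry distinguishing C_n from A_n
    return a
```

For any inner node $k$ the exponents $-a_{k,k\pm1}=1$ give $(1+Y_{j,k-1})(1+Y_{j,k+1})$, while for the last node $-a_{N-1,N-2}=2$ gives $(1+Y_{j,N-2})^2$, matching the two tile relations.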
Notice that by virtue of \eqref{eq:later} the boundary conditions for $Y_{j,N-1}$ feature the Bekenstein-Hawking entropy of the BTZ black hole
\begin{equation}
\label{eq:BTZblack}
S_{BH}=\frac{2\pi r_+}{4G}
\end{equation}
where $2\pi r_+$ is the horizon area of the BTZ black hole.
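For concreteness, the Bekenstein-Hawking formula above is simple enough to encode directly. This is a trivial sketch in natural units ($\hbar=k_B=1$); the function name is our own:

```python
import math

def bekenstein_hawking_entropy(r_plus, G):
    """S_BH = horizon area / (4G). For the BTZ black hole the horizon
    'area' is the circumference 2*pi*r_+ of the horizon circle."""
    return 2 * math.pi * r_plus / (4 * G)
```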
\section{Conclusions}
In this paper, within the framework of the $AdS_3/CFT_2$ correspondence, we investigated how entangled quantum states of the boundary are encoded into the classical geometric structure of the bulk. In our previous work\cite{LB} we showed that the entanglement patterns of the CFT vacuum are encoded into the geometry of pure $AdS_3$ by the algebraic structure of a cluster algebra. For a partitioning of the boundary into $N$ regions this algebra turned out to be of $A_{N-3}$ type.
After this observation the natural question arises: are there other interesting cases where we again find this particular type of encoding via cluster algebras?
In this work we have shown that the answer is yes. We have shown that the entanglement patterns of a thermal state of the boundary are encoded into the high-temperature limit of the static BTZ geometry via another type of cluster algebra. For a similar partitioning of the boundary into $N$ regions this algebra is of type $C_{N-1}$.
One can study this encoding phenomenon in the bulk or in kinematic space.
In the bulk case the cluster algebraic structure manifests itself in algebraic relations between the regularized (lambda) lengths of geodesics. In the kinematic space description, on the other hand, this structure is captured by relations between the areas of causal diamonds with respect to the Crofton form. For the examples studied so far, the kinematic space version of this encoding gives rise to Zamolodchikov $Y$-systems of type $A_{N-3}$ (vacuum) and $C_{N-1}$ (thermal state).
We also observed that in the $C_{N-1}$ case the boundary conditions for the $Y$-system, through the explicit form of the $Y_{j,N-1}$ quantities, display the Bekenstein-Hawking entropy of the BTZ black hole.
Interestingly, in the language of cluster algebras\cite{Williams}, in the bulk representation the encoding manifests itself via the cluster dynamics of flips, while in the kinematic space representation it is captured by the so-called coefficient dynamics of flips. In physical terms, cluster dynamics is based on mutations between possible partitions of the boundary, captured by regularized entanglement entropies. Coefficient dynamics, on the other hand, is based on similar mutations encapsulating changes in the regularization-independent conditional mutual informations.
The advantage of studying this encoding phenomenon with the help of algebraic structures is particularly transparent in kinematic space.
Here one can investigate the dynamics of cross ratios, which are gauge-invariant quantities, i.e., independent of the regularization prescription. Moreover, cross ratios also have a physical interpretation as conditional mutual informations subject to strong subadditivity. This constraint gives rise to further interesting connections with the topic of positive geometry, an important ingredient of recent studies on scattering amplitudes\cite{Nima,Assoc}.
Our investigations also revealed an interesting connection between quantum entanglement on the boundary and cluster polytopes. These cluster polytopes\cite{Nima,Nima1} play a very important role in the rapidly evolving research field of scattering amplitudes. Such studies culminated in the appearance of the amplituhedron, a polytopal object geometrizing the factorization properties of scattering amplitudes\cite{Ampli}. In this new context we have found that for an $N$-fold partitioning of the boundary the associahedron ${\mathcal A}_{N-3}$ geometrizes the entanglement information of the vacuum, while the cyclohedron ${\mathcal C}_{N-1}$ does the same for the thermal state.
Since these objects are encoding holographic entanglement information in a polytopal manner, they can be regarded as some sort of holographic entanglement polytopes\cite{Assoc}.
However, this term should be handled with care, so as not to be confused with the existing topic of entanglement polytopes in quantum information\cite{Borland,Klyachko,Saw} and in the holographic context\cite{Stoica,Hub1,Hub2,Hub3}.
In any case the associahedron ${\mathcal A}_{N-3}$, for example, can be visualized as a polytope living in an $(N-3)$-dimensional Euclidean space. This space is spanned by the {\it regularized} entanglement entropies associated to the diagonals of the quadrangles arising from a particular triangulation of the bulk. The associahedron is then cut out from this space by the positivity constraints dictated by strong subadditivity\cite{Assoc}.
Clearly this polytopal type of encoding of holographic entanglement information should be further investigated.
Finally we note that cluster algebras originally appeared implicitly in the Teichm\"uller theory of Riemann surfaces. In this context one should bear in mind that a cluster algebra can be associated to any bordered surface with marked points\cite{Williams}. This construction specializes to our type $A_{N-3}$ case dual to the $CFT_2$ vacuum: there the surface is just a disk with $N$ marked points. Since in the $AdS_3/CFT_2$ context multiboundary wormhole solutions naturally show up as ones featuring such surfaces\cite{Skenderis}, one expects that the examples investigated in our paper are just the simplest instances of a generic construction.
This conjectured encoding\cite{Levay} of quantum states in a holographic manner via cluster algebras and their associated cluster polytopes is certainly an interesting possibility worth exploring in the future.
\bigskip
\section{Acknowledgement}
This work was supported by the National Research Development and Innovation Office of Hungary within the Quantum Technology National Excellence Program (Project No. 2017-1.2.1-NKP-2017-0001).
Supported by the ÚNKP-20-1 New National Excellence Program of the Ministry for Innovation and Technology from the source of National Research, Development and Innovation Fund.